
Power BI Dashboards, Reports, and Datasets with the SQL Chefs

Watch this week's video on YouTube

This week I had the opportunity to learn about dashboards, reports, and datasets in Power BI using the best kind of teaching tool: fresh, delicious, chocolate chip cookies.

Be our guest for the web's premier Power BI food show as Eugene Meidinger (@SQLGene) explains how these concepts relate to each other and then grab a second helping with his blog post that will leave you feeling full.

Shortchanged with International Money in SQL Server

Watch this week's video on YouTube

Imagine you have to perform some salary analysis for your employer International Mega Corp.

The data you have to work with looks something like this:

DROP TABLE IF EXISTS ##InternationalMegaCorpSalaries;
GO
CREATE TABLE ##InternationalMegaCorpSalaries
(
    EmployeeId int IDENTITY,
    Salary nvarchar(10),
    Country nvarchar(20)
);
INSERT INTO ##InternationalMegaCorpSalaries VALUES ('1,000.00','United States');
INSERT INTO ##InternationalMegaCorpSalaries VALUES ('8 789,37','Sweden');
INSERT INTO ##InternationalMegaCorpSalaries VALUES ('6.274,26','Turkey');
INSERT INTO ##InternationalMegaCorpSalaries VALUES ('1000.00','United States');


Why are the salaries stored as nvarchar and formatted with commas, spaces, and periods?

Great question!  Someone wanted to make sure these amounts would look good in the UI, so storing the pre-formatted values in the database seemed like the way to go...

Pretty for the UI, but not so great for doing analysis on.

Thanks to Zanoni Labuschagne, one of the subscribers to my YouTube channel, for recommending this topic!

CONVERT!

I'm a firm believer that money values should always be stored in the decimal datatype.  I can't think of a time when I wouldn't care about the precision and accuracy of money.

So let's try converting our salaries to decimal(10,2):

SELECT 
    EmployeeId,
    TRY_CONVERT(decimal(10,2),Salary) AS Salary,
    Country 
FROM ##InternationalMegaCorpSalaries


Well, that didn't work.  Maybe converting to floats will work as a quick fix?

SELECT 
    EmployeeId,
    TRY_CONVERT(float,Salary) AS Salary,
    Country 
FROM ##InternationalMegaCorpSalaries


Nope, that didn't work either.

Side note: I'm OK with that though - I don't think float should ever be used for storing money.  If you want to see a quick example of why float math is problematic, take a look at this (and for more detail read about it here):

DECLARE 
    @Num1 float = .15,
    @Num2 float = .15,
    @Num3 float = .1,
    @Num4 float = .2
-- Returns 0: the two float sums are not equal, even though both "should" be .3
SELECT IIF(@Num1+@Num2 = @Num3+@Num4,1,0)

Ok, so those didn't work.  What if we try converting to the money datatype - that should be able to handle these money formats, right?

SELECT 
    EmployeeId,
    TRY_CONVERT(money,Salary) AS Salary,
    Country 
FROM ##InternationalMegaCorpSalaries


50% correct...!  Our employees in Turkey are being seriously underpaid by that conversion though.  I'm kind of glad not to have to rely on this solution since the money datatype has its own fair share of problems as well.
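
If you're curious what those problems look like, here's a quick illustration (a minimal sketch, not part of the original demo): money rounds intermediate results to four decimal places, so chained arithmetic can silently lose precision compared to decimal:

DECLARE 
    @MoneyNumerator money = 100,
    @MoneyDenominator money = 339,
    @MoneyMultiplier money = 10000;
DECLARE 
    @DecimalNumerator decimal(19,4) = 100,
    @DecimalDenominator decimal(19,4) = 339,
    @DecimalMultiplier decimal(19,4) = 10000;
/* The money version rounds the intermediate division to 4 decimal places,
   so the two results don't match */
SELECT 
    (@MoneyNumerator / @MoneyDenominator) * @MoneyMultiplier AS MoneyResult,
    (@DecimalNumerator / @DecimalDenominator) * @DecimalMultiplier AS DecimalResult;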

Time to get Cultured

SQL Server's TRY_PARSE function might be able to help us, but first we need to create a relationship between each country's money formatting and its culture code:

ALTER TABLE ##InternationalMegaCorpSalaries
ADD CultureCode AS CASE Country 
                        WHEN 'United States' THEN 'en-US'
                        WHEN 'Sweden' THEN 'sv-SE'
                        WHEN 'Turkey' THEN 'tr-TR' END

And finally our SELECT query:

SELECT 
    EmployeeId,
    TRY_PARSE(Salary AS DECIMAL(10,2) USING CultureCode) AS Salary,
    Country 
FROM ##InternationalMegaCorpSalaries


Success!  Our salary values are now perfectly converted into the decimal datatype without the need for any ugly REPLACE(), SUBSTRING(), or other string parsing functions.

While this carefully curated demo correctly converted all of our values, it's important to always test that the culture you choose correctly parses your string-formatted numbers.  For example, Wikipedia leads me to believe that the Danes write their numbers like "6 338,70", but SQL Server's culture definition doesn't convert this correctly:

SELECT TRY_PARSE('6 338,70' AS DECIMAL(10,2) USING 'da-dk')


In those instances, you may need to substitute another culture code to get the correct conversion to occur.
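
For example, since the Swedish format we used above also separates groups with spaces and uses a decimal comma, swapping in that culture is one way to handle the Danish value (a sketch - always verify against your own data):

SELECT
    TRY_PARSE('6 338,70' AS decimal(10,2) USING 'da-DK') AS UsingDanishCulture,  -- NULL, as shown above
    TRY_PARSE('6 338,70' AS decimal(10,2) USING 'sv-SE') AS UsingSwedishCulture; -- should parse as 6338.70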

Extracting JSON Values Longer Than 4000 Characters

Watch this week's video on YouTube

A while back I built an automated process that parses JSON strings into a relational format.

Up until recently this process had been working great: my output table had all of the data I was expecting, neatly parsed into the correct rows and columns.

Last week, however, I noticed an error in the output table.  One row that was supposed to have a nicely parsed JSON value for a particular column had an ugly NULL instead.

Truncated?

First I checked my source JSON string - it had the "FiveThousandAs" property I was looking for:

DECLARE @json nvarchar(max) = N'{
    "Id" : 1,
    "FiveThousandAs" : "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
    "FourAs" : "aaaa"
}';

So the source data was fine.

I checked the table column I was inserting into as well and confirmed it was defined as nvarchar(max), so no problem there.

The last thing I checked was the query I was using:

SELECT JSON_VALUE(@json, '$.FiveThousandAs')

If I run that on its own, I reproduce the NULL I was seeing inserted into my table:


JSON_VALUE is limiting

After a little bit more research, I discovered that the return type for JSON_VALUE is limited to 4000 characters.   Since JSON_VALUE is in lax mode by default, if the output has more than 4000 characters, it fails silently.

To force an error in future code I could use SELECT JSON_VALUE(@json, 'strict $.FiveThousandAs') so at least I would be notified immediately of a problem with my query/data (via failure).
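
Running both modes side by side makes the difference easy to see (using the @json variable declared above):

/* Lax mode (the default) quietly returns NULL when the value is longer than 4000 characters */
SELECT JSON_VALUE(@json, '$.FiveThousandAs');

/* Strict mode raises an error instead of failing silently */
SELECT JSON_VALUE(@json, 'strict $.FiveThousandAs');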

Although strict mode will notify me of issues sooner, it still doesn't help me extract all of the data from my JSON property.

(Side note: I couldn't define my nvarchar(max) column as NOT NULL because for some rows the value could be NULL, but in the future I might consider adding additional database validation with a check constraint).

OPENJSON

The solution to reading the entire 5000 character value from my JSON property is to use OPENJSON:

SELECT * 
FROM OPENJSON(@json) 
WITH (FiveThousandAs nvarchar(max) '$.FiveThousandAs')

My insert query needed to be slightly refactored, but now I'm able to return values of any length (as long as they're under 2GB).
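
The refactored insert ended up looking something like this (a rough sketch - the destination table name and the extra Id column here are made up, since the original process isn't shown):

INSERT INTO dbo.ParsedJsonOutput (Id, FiveThousandAs)
SELECT
    j.Id,
    j.FiveThousandAs
FROM OPENJSON(@json)
    WITH (
        Id int '$.Id',
        FiveThousandAs nvarchar(max) '$.FiveThousandAs'
    ) AS j;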

In hindsight, I should have used OPENJSON() from the start: not only is it capable of parsing the full length values from JSON strings, but it performs significantly faster than any of the other SQL Server JSON functions.

As a best practice, I think I'm going to use OPENJSON by default for any JSON queries to avoid problems like this in the future.

Faking Temporal Tables with Triggers

This post is a response to this month's T-SQL Tuesday #106 prompt by Steve Jones.  T-SQL Tuesday is a way for the SQL Server community to share ideas about different database and professional topics every month.

This month's topic asks to share our experiences with triggers in SQL Server.


Watch this week's video on YouTube

Triggers are something that I rarely use.  I don't shy away from them because of some horrible experience I've had, but rather because I rarely have a good reason to use them.

The one exception is when I need a poor man's temporal table.

Temporal Table <3

When temporal tables were added in SQL Server 2016 I was quick to embrace them.

A lot of the data problems I work on benefit from being able to view what data looked like at a certain point back in time, so the easy setup and queryability of temporal tables was something that I immediately loved.
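
As a rough illustration of that queryability, a point-in-time query against a system-versioned table is a one-liner (a sketch assuming a temporal version of the dbo.Birds table used later in this post):

SELECT *
FROM dbo.Birds
    FOR SYSTEM_TIME AS OF '2018-09-07 16:30:11';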

No System Versioning For You

Sometimes I can't use temporal tables though, like when I'm forced to work on an older version of SQL Server.

Now, this isn't a huge issue; I can still write queries on those servers to achieve the same result as I would get with temporal tables.

But temporal tables have made me spoiled.  They are easy to use and I like having SQL Server manage my data for me automatically.

Fake Temporal Tables With Triggers

I don't want to have to manage my own operational versus historical data and write complicated queries for "point-in-time" analysis, so I decided to fake temporal table functionality using triggers.

Creating the base table and history table are pretty similar to that of a temporal table, just without all of the fancy PERIOD and GENERATED ALWAYS syntax:

CREATE TABLE dbo.Birds  
(   
 Id INT IDENTITY PRIMARY KEY,
 BirdName varchar(50),
 SightingCount int,
 SysStartTime datetime2 DEFAULT SYSUTCDATETIME(),
 SysEndTime datetime2 DEFAULT '9999-12-31 23:59:59.9999999'  
);
GO
CREATE TABLE dbo.BirdsHistory
(   
 Id int,
 BirdName varchar(50),
 SightingCount int,
 SysStartTime datetime2,
 SysEndTime datetime2  
) WITH (DATA_COMPRESSION = PAGE);
GO
CREATE CLUSTERED INDEX CL_Id ON dbo.BirdsHistory (Id);
GO
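
For comparison, the real system-versioned version of this table on SQL Server 2016+ would look roughly like this (just a reference sketch - it's not part of the trigger solution):

CREATE TABLE dbo.Birds
(
 Id INT IDENTITY PRIMARY KEY,
 BirdName varchar(50),
 SightingCount int,
 SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
 SysEndTime datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
 PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.BirdsHistory));
GO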

The single UPDATE, DELETE trigger is really where the magic happens though.  Every time a row is updated or deleted, the trigger inserts the previous version of the row into our history table with the correct datetimes:

CREATE TRIGGER TemporalFaking ON dbo.Birds
AFTER UPDATE, DELETE
AS
BEGIN
SET NOCOUNT ON;

DECLARE @CurrentDateTime datetime2 = SYSUTCDATETIME();

/* Update start times for newly updated data */
UPDATE b
SET
       SysStartTime = @CurrentDateTime
FROM
    dbo.Birds b
    INNER JOIN inserted i
        ON b.Id = i.Id

/* Grab the SysStartTime from dbo.Birds
   Insert into dbo.BirdsHistory */
INSERT INTO dbo.BirdsHistory
SELECT d.Id, d.BirdName, d.SightingCount,d.SysStartTime,ISNULL(b.SysStartTime,@CurrentDateTime)
FROM
       dbo.Birds b
       RIGHT JOIN deleted d
              ON b.Id = d.Id
END
GO

The important aspect to this trigger is that we always join our dbo.Birds table to our inserted and deleted tables based on the primary key, which is the Id column in this case.

If you try to insert/update/delete data from the dbo.Birds table, the dbo.BirdsHistory table will be updated exactly like a regular temporal table would:

/* inserts */
INSERT INTO dbo.Birds (BirdName, SightingCount) VALUES ('Blue Jay',1);
GO
INSERT INTO dbo.Birds (BirdName, SightingCount) VALUES ('Cardinal',1);
GO
BEGIN TRANSACTION
INSERT INTO dbo.Birds (BirdName, SightingCount) VALUES ('Canada Goose',1)
INSERT INTO dbo.Birds (BirdName, SightingCount) VALUES ('Nuthatch',1)
COMMIT
GO
BEGIN TRANSACTION
INSERT INTO dbo.Birds (BirdName, SightingCount) VALUES ('Dodo',1)
INSERT INTO dbo.Birds (BirdName, SightingCount) VALUES ('Ivory Billed Woodpecker',1)
ROLLBACK
GO

/* updates */
UPDATE dbo.Birds SET SightingCount = SightingCount+1 WHERE id = 1;
GO
UPDATE dbo.Birds SET SightingCount = SightingCount+1 WHERE id in (2,3);
GO
BEGIN TRANSACTION
UPDATE dbo.Birds SET SightingCount = SightingCount+1 WHERE id =4;
GO
ROLLBACK

/* deletes */

DELETE FROM dbo.Birds WHERE id = 1;
GO
DELETE FROM dbo.Birds WHERE id in (2,3);
GO
BEGIN TRANSACTION
DELETE FROM dbo.Birds WHERE id = 4;
GO
ROLLBACK

If you run each of those batches one at a time and check both tables, you'll see how the dbo.BirdsHistory table keeps track of all of our data changes.


Now, seeing what our dbo.Birds data looked like at a certain point in time isn't quite as easy as with a system-versioned table in SQL Server 2016, but it's not bad:

DECLARE @SYSTEM_TIME datetime2 = '2018-09-07 16:30:11';
SELECT * 
FROM
    (
    SELECT * FROM dbo.Birds
    UNION ALL
    SELECT * FROM dbo.BirdsHistory
    ) FakeTemporal
WHERE 
    @SYSTEM_TIME >= SysStartTime 
    AND @SYSTEM_TIME < SysEndTime;

Real Performance

One reason many people loathe triggers is their potential for bad performance (particularly when many triggers get chained together).

I wanted to see how this trigger solution compares to an actual temporal table.  While searching for good ways to test this difference, I found that Randolph West has done some testing on trigger-based temporal tables.  Although our solutions are different, I like their performance testing methodology: view the transaction log records for real temporal tables and compare them to those of the trigger-based temporal tables.

I'll let you read the details of the comparison test in their blog post, but I'll summarize the results of my test here: the trigger-based version performs almost the same as a real system-versioned temporal table.

Because of how I handle updating the SysStartTime column in my dbo.Birds table, I get one more transaction than a true temporal table:


You could make the trigger solution work identically to the true temporal table (as Randolph does) if you are willing to make application code changes to populate the SysStartTime column on insert into dbo.Birds.

Conclusion

For my purposes, the trigger-based temporal table solution has a happy ending.  It works for the functionality that I need it for and prevents me from having to manage a history table through some other process.

If you decide to use this in your own pre-2016 instances, just be sure to test the functionality you need; while it works great for the purposes that I use temporal tables for, your results may vary if you need additional functionality (preventing truncates on the history table, defining a retention period for the history, etc... are all features not implemented in the examples above).

Does The Order Of Index Columns Matter?

Watch this week's video on YouTube

When beginning to learn SQL, at some point you learn that indexes can be created to help improve the performance of queries.

Creating your first few indexes can be intimidating though, particularly when trying to understand what order to put your key columns in.

Today we'll look at how row store indexes work to understand whether index column order matters.

Heap: Stack of Pages

Imagine a stack of loose leaf pages.  This collection of pages is our table.

Each page has information about a bird on it - the bird's name, picture, description, habitat, migration patterns, visual markings, etc...  You can think of each of these pages as a row of data.

The problem with this stack of pages is that there is no enforced order: it's a heap.  Without any enforced order, searching for individual birds is time consuming; in order to find a particular bird, for example a blue jay, you would have to go through the stack of pages one at a time until you find the blue jay page.

The scanning doesn't stop there though.  Even though we found a blue jay page, there's no way for us to guarantee that there are no other blue jay pages in the stack.  This means we have to continue flipping through every page until we finish searching through the whole heap of pages.

Having to do this process every single time we need to retrieve data from our bird table is painful.  To make our job easier, we can define and enforce an order on the data by defining a clustered index.

Clustered Index: Bound Pages

To make searching through our pages easier, we sort all of the pages by bird name and glue on a binding.  This book binding now keeps all of our pages in alphabetical order by bird name.

The SQL version of a book binding is a clustered index.  The clustered index is not an additional object to our data - it is that same exact table data, but now with an enforced sort order.

Having all of our data in sorted order by bird name makes certain queries really fast and efficient - instead of having to scan through every page to find the blue jay entry, we can now quickly flip to the "B" section, then the "BL" section, then the "BLU" section, etc... until we find BLUE JAY.  This is done quickly and efficiently because we know where to find blue jays in the book because the bird names are stored in alphabetical order.

Even better, after we find the blue jay page, we flip to the next page and see a page for cardinal.  Since we know all of the entries are stored alphabetically, we know that once we get to the next bird we have found all of our blue jay pages and don't need to continue flipping through the rest of the book.

While the clustered index allows us to find birds by name quickly, it's not perfect; since the clustered index is the table, it contains every property (column) of each bird, which is a lot of data!

Having to constantly reference this large clustered index for every query can be cumbersome.  For most of our queries, we could get by with a condensed version of our bird book that contains only the most essential information.

Nonclustered Index: Cut and Copy

Let's say we want a lighter-weight version of our book that contains the most relevant information (bird name, color, description).

We can photocopy the entire book and then cut out and keep only the pieces of information that are relevant while discarding the rest.  If we paste all of those relevant pieces of information into a new book, still sorted by bird name, we now have a second copy of our data.  This is our nonclustered index.

This nonclustered index contains all of the same birds as my clustered index, just with fewer columns.  This means I can fit multiple birds onto a page, requiring me to flip through fewer pages to find the bird I need.

If we ever need to look up additional information about a particular bird that's not in our nonclustered index, we can always go back to our giant clustered index and retrieve any information we need.
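
In SQL Server terms, the book analogy so far might look something like this (a hypothetical sketch - the table and index names below aren't defined anywhere in this post):

CREATE TABLE dbo.BirdGuide
(
    BirdName varchar(50),
    Color varchar(20),
    Description varchar(1000),
    Habitat varchar(1000),
    MigrationPattern varchar(1000)
);
GO
/* The "book binding": the table data itself, sorted by bird name */
CREATE CLUSTERED INDEX CL_BirdGuide_BirdName ON dbo.BirdGuide (BirdName);
GO
/* The lighter-weight copy: just name, color, and description, still sorted by name */
CREATE NONCLUSTERED INDEX IX_BirdGuide_BirdName ON dbo.BirdGuide (BirdName) INCLUDE (Color, Description);
GO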

With the lighter-weight nonclustered index in-hand, we go out to the woods to start identifying some birds.

Upon spotting an unfamiliar bird in our binoculars, we can flip open the nonclustered index to identify the bird.

The only problem is, since we don't know this bird's name, our nonclustered index by bird name is of no help.  We end up having to flip through each page one at a time trying to identify the bird instead of flipping directly to the correct page.

For these types of inquiries, where we want to identify a bird but don't know the bird's name, a different index would be beneficial...

Nonclustered Index 2: Color Bugaloo

Instead of having a nonclustered index sorted by bird name, what we really need is a way to filter down to the list of potential birds quickly.

One way we can do this is to create another copy of our book, still containing just bird names, colors, and descriptions, but this time ordering the pages by color first, then bird name.

When trying to identify an unknown bird, we can first limit the number of pages to search through by filtering on the bird's color.  In our case, color is a highly selective trait, since it filters down our list of potential birds to only a small subset of the whole book.  In our blue jay example, this means we would find the small subset of pages that contain blue birds, and then just check each one of those pages individually until we find the blue jay.

Order Matters

Indexes aren't magic; their high-performance capabilities come from the fact that they store data in a predetermined order.  If your query can utilize data stored in that order, great!

However, if your query wants to filter on color first but your index is sorted on bird name, then you'll be out of luck.  When it comes to determining which column should be the first key in your index, you should choose whichever one will be most selective (whichever one filters you down to the smallest subset of results) for your particular query.
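
Continuing the hypothetical sketch from earlier, the color-first index and a query that benefits from it might look like this:

/* Keyed on Color first, then BirdName, so we can quickly narrow down by color */
CREATE NONCLUSTERED INDEX IX_BirdGuide_Color_BirdName
    ON dbo.BirdGuide (Color, BirdName) INCLUDE (Description);
GO
/* This filter can seek straight to the blue birds instead of scanning every row */
SELECT BirdName, Color, Description
FROM dbo.BirdGuide
WHERE Color = 'Blue';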

There's a lot more optimizing that can be done with indexes, but correctly choosing the order of columns for your index key is an essential first step.

Want to learn even more about index column order? Be sure to check out this post on cardinality.