Testing with Temporary Stored Procedures

Watch this week's video on YouTube

A while back I learned that it's possible to create temporary stored procedures in SQL Server.

I never put that knowledge into practice, however, because I struggled to think of a good use case where a temporary stored procedure would be preferable to a permanent one.

Not long ago I encountered a scenario where using a temporary stored procedure was the perfect solution to my problem.
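
If you haven't seen one before, a temporary stored procedure is created just like a regular one, except the name starts with # (or ## for a global version). It lives in tempdb and is dropped automatically when the session that created it ends. Here's a minimal sketch of the syntax (the procedure name is made up, not part of the scenario below):

CREATE PROCEDURE #HelloTemp
AS
BEGIN
  /* A throwaway procedure scoped to this session only */
  SELECT 'Hello from a temporary stored procedure' AS Greeting;
END;
GO

EXEC #HelloTemp;

/* Optional - SQL Server drops it automatically when the session closes */
DROP PROCEDURE #HelloTemp;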

Building New Tables

Recently, I had to build a new version of a legacy table because the legacy table's upstream data source was going to be retired.

The new table would contain all the same data as the legacy table, but populated from the new data source. It would also include additional rows and columns. After building the final table, the plan was to create a view to replace the functionality of the legacy table.

I had to spend quite a bit of time writing a fairly elaborate query to make the data from the new data source match what was appearing in the legacy table (remember my gaps and islands post from a few weeks back? Imagine that on steroids).

Before building out the new ETL and tables though, I wanted to be able to test that the new query was producing the correct results. This was challenging because some of the data sources were on other servers. I was in a catch-22: I couldn't test the data because I hadn't built the ETL yet, but I didn't want to build the ETL until I had tested the results.

Less Than Ideal Options

To make matters worse, I was only able to test my query using data in the production environment.

I thought of putting my new query into a stored procedure to make the logic easier to test. But with the production elevate process being lengthy and restricted, I couldn't easily put my parameterized query into a permanent stored procedure to test with.

At this point I had a few options:

  • Don't test anything and just build out the tables and ETL in production. Cross my fingers and hope it all works, fixing any issues after the fact.
  • Create a permanent stored procedure with the query and elevate it to production. Hope that I don't have to make changes and go through the slow elevate process again.
  • Run the query over and over again with different parameters.

I didn't like the first two options because of the amount of time I would lose trying to elevate new tables or procedures into production.

The third option wasn't ideal either because while it would allow me to iterate quickly, documenting all of my tests would involve a massive file that would not be easy to navigate or change.

What I needed was a way to run a query through many different parameters in a concise manner without making any permanent production changes.

Temporary Stored Procedures for Regression Testing

Since I had access to create temporary stored procedures in production, I was able to create a temporary procedure containing my complex query:

CREATE PROCEDURE #ComplexBusinessLogic
  @parm1 int
AS
BEGIN
  /* This isn't the actual query.  The real one was ugly and hundreds of lines long.*/
  SELECT CASE @parm1
    WHEN 1 THEN 'A'
    WHEN 2 THEN 'B'
    WHEN 3 THEN 'C'
    END AS ResultValue
END;

As I mention in the comment of the stored procedure, this query was huge. If I wanted to execute it multiple times with different parameter values to test with, I'd have a gigantic file that would be difficult to navigate and easy to make errors in.

Instead, the temporary stored procedure made it easy to document my tests and execute the query as many times as needed:

/* Test for when parameter is 1 */
EXEC #ComplexBusinessLogic @parm1 = 1;

/* Test for scenario 2 */
EXEC #ComplexBusinessLogic @parm1 = 2;

/* The rarely occurring but very important test scenario 3 */
EXEC #ComplexBusinessLogic @parm1 = 3;

I was then able to clearly define all of my tests and run them against the production data without creating any permanent production objects. This was great because I did find errors in my logic, but I was able to fix them and rerun my tests over and over again until everything ran without issues.
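
If I had wanted to take this a step further, the same temporary procedure could have fed a self-checking test script. The following is only a sketch against the simplified procedure above (the real tests compared far more columns), but it shows the idea of capturing each execution's output and validating it against an expected value:

CREATE TABLE #ActualResults (ResultValue char(1));

/* Scenario 1: expect 'A' */
INSERT INTO #ActualResults (ResultValue)
EXEC #ComplexBusinessLogic @parm1 = 1;

IF EXISTS (SELECT 1 FROM #ActualResults WHERE ResultValue = 'A')
  PRINT 'Scenario 1 passed';
ELSE
  PRINT 'Scenario 1 FAILED';

/* Reset before the next scenario */
TRUNCATE TABLE #ActualResults;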

Testing in Production

Ideally I wouldn't have to use this solution. It would have been much better to have data to test with in a non-production environment. Using a temporary stored procedure to test in production is a hack to get around environment restrictions.

However, what is ideal and what is real-world don't always align. Sometimes a hack helps meet deadlines when better options aren't available. In this instance, temporary stored procedures helped make testing a breeze.

Joining on NULLs

Watch this week's video on YouTube

It's important to be aware of columns that allow NULL values since SQL Server may handle NULLs differently than you might expect.

Today I want to look at what things to consider when joining on columns containing NULL values.

Natural, Composite, NULLable keys

Let's pretend we have an Account table containing the accounts of various users and an AccountType table describing the different types of accounts:

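Here is a rough recreation of those tables based on the queries that follow (the exact column types and sample rows are my own guesses) so you can run the examples yourself:

CREATE TABLE dbo.Account
(
    UserId int,
    YearOpened int,
    AccountType varchar(20) NULL
);

CREATE TABLE dbo.AccountType
(
    YearOpened int,
    AccountType varchar(20) NULL,
    Description varchar(100)
);

/* Sample rows, invented for illustration, including the NULL account types */
INSERT INTO dbo.Account VALUES (1, 2019, 'Checking'), (2, 2019, NULL);
INSERT INTO dbo.AccountType VALUES
    (2019, 'Checking', 'Standard checking account'),
    (2019, NULL, 'Default account type');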

These tables have the unfortunate design characteristics of:

  1. They use a natural, composite key of YearOpened and AccountType
  2. NULL is the valid default for AccountType

Not that either of the above attributes is outright bad; we just need to handle them appropriately. For example, if we want to bring back a description of each user's account, we might write a query with an inner join like this:

SELECT
    a.UserId,
    at.YearOpened,
    at.AccountType,
    at.Description
FROM
    dbo.Account a
    INNER JOIN dbo.AccountType at
        ON a.YearOpened = at.YearOpened
        AND a.AccountType = at.AccountType

Only to discover that the rows with NULL account types are missing from the results.


Joining on NULLs

Since a NULL is never equal to anything else, not even another NULL, joining on NULL values in SQL Server doesn't work the way you might expect, and we need to be creative to achieve the results we want.

One option is to make our AccountType column NOT NULL and set some other default value. Another option is to create a new column that will act as a surrogate key to join on instead.
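
For completeness, here is roughly what those source-level fixes could look like (the sentinel value, constraint name, and new column name are all made up for illustration):

/* Option 1: disallow NULLs and use a sentinel default value instead
   (the Account table would need the same treatment) */
UPDATE dbo.AccountType SET AccountType = 'None' WHERE AccountType IS NULL;
ALTER TABLE dbo.AccountType ALTER COLUMN AccountType varchar(20) NOT NULL;
ALTER TABLE dbo.AccountType ADD CONSTRAINT DF_AccountType_AccountType
    DEFAULT 'None' FOR AccountType;

/* Option 2: add a surrogate key column and join on that instead
   (Account would need a matching column populated from it) */
ALTER TABLE dbo.AccountType ADD AccountTypeId int IDENTITY(1,1);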

Both of the above options would fix the problem at the source, but what if we can only make changes to our queries?

One common approach is to convert the NULLs to some other non-NULL value using a function like COALESCE or ISNULL:

SELECT
    a.UserId,
    at.YearOpened,
    at.AccountType,
    at.Description
FROM
    dbo.Account a
    INNER JOIN dbo.AccountType at
        ON a.YearOpened = at.YearOpened
        AND ISNULL(a.AccountType,'`') = ISNULL(at.AccountType,'`')


While this returns the results we want, there are two major issues with this approach:

  1. In the above example we converted NULLs to the ` character. If we had a valid ` character in our data, we would get logically incorrect joins.
  2. Our query can no longer perform index seeks.

The first issue isn't a huge deal if you can guarantee the character you are replacing NULLs with will never appear in the column of data.

The second issue is more important since ISNULL prevents your query from being SARGable and will cause poor performance on large tables of data.


Those Compute Scalar operators are forcing SQL Server to Scan the indexes and compute a value for every row.

A More Efficient Solution

If using a function like ISNULL hurts the performance of our queries, what can we do instead?

SELECT
    a.UserId,
    at.YearOpened,
    at.AccountType,
    at.Description
FROM
    dbo.Account a
    INNER JOIN dbo.AccountType at
        ON a.YearOpened = at.YearOpened
        AND (a.AccountType = at.AccountType OR (a.AccountType IS NULL AND at.AccountType IS NULL))


This produces the same exact results while allowing SQL Server to Seek when possible and avoid costly row-by-row computations.

There are no seeks here since I don't have any additional filters, but the lack of Compute Scalar operators should be enough to prove the point.

While there are a few more variations that can achieve the same results using different execution plans (joining the non-NULL rows and unioning that with a query that selects only the NULL rows, using a computed column to convert the NULLs to non-NULL values, etc.), the key to good performance is choosing a solution that will not force SQL Server to compute values for every single row.
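
For reference, here is roughly what the first of those variations looks like: join the non-NULL rows normally, then append the rows where both sides are NULL (my sketch, not from the original post):

SELECT
    a.UserId,
    at.YearOpened,
    at.AccountType,
    at.Description
FROM
    dbo.Account a
    INNER JOIN dbo.AccountType at
        ON a.YearOpened = at.YearOpened
        AND a.AccountType = at.AccountType

UNION ALL

SELECT
    a.UserId,
    at.YearOpened,
    at.AccountType,
    at.Description
FROM
    dbo.Account a
    INNER JOIN dbo.AccountType at
        ON a.YearOpened = at.YearOpened
WHERE
    a.AccountType IS NULL
    AND at.AccountType IS NULL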

4 Reasons To Avoid VARCHAR(8000)

Watch this week's video on YouTube

When first learning SQL Server, I had the brilliant idea of defining every column on my table as VARCHAR(8000). After all, the VARCHAR datatype only consumes as much storage space as the string itself (plus two bytes), so even if I defined the max length of a column as 8000, I wouldn't be wasting storage space if my data was smaller than that.

My development life would be easier too since I wouldn't have to think about the kind of data I was storing in my columns; I could define everything as VARCHAR(8000) once and never have to go back to make any changes. Brilliant!

While I was correct about not wasting storage space, it turns out the idea of making every column VARCHAR(8000) is a terrible idea.

What follows is a (partial) list of reasons to avoid defining every column in your table as VARCHAR(8000).

1. Documentation

When columns are defined accurately with specific data types and lengths, they help provide understanding about the data they contain.

For example, imagine you see the following column in a table:

ZipCode VARCHAR(10)

If your database is full of U.S. data, this column definition provides enough information to make some pretty good assumptions:

  • With a length of 10, we can expect some of the data to be in the 5-digit zip code + 4-digit extended zip code format (e.g. 12345-6789)
  • Choosing VARCHAR instead of CHAR means there are probably some entries that contain 5 digit zip codes and some with 5+4 digit zip codes (and a dash)

We can't be 100% confident about our assumptions until we verify them by profiling the data itself, but a lot of times having decent guesses is good enough.

2. Validation

Your application (hopefully) has some good client and server side validation built in. Those validations don't always work though, so it never hurts to have some extra validation on the database side of things as well.

Imagine the zip code field is directly next to the "Delivery Instructions" box on our app's "Shipping Details" page. It's not a stretch to think that a user will someday accidentally type in the special delivery instructions ("Please leave the package behind the house next to the giant pineapple bush") into the zip code field instead of the instructions field.

If your ZipCode field is defined as VARCHAR(8000), that set of delivery instructions will make its way into the database in the completely wrong column.

Have fun cleaning up that dirty data. If instead the field was correctly defined as ZipCode VARCHAR(10), the insert would fail and you would prevent that erroneous data from entering your database.
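
A quick way to see the difference is a contrived sketch like this (not from the post's app; the table names are made up):

CREATE TABLE #NarrowZip (ZipCode varchar(10));
CREATE TABLE #WideZip   (ZipCode varchar(8000));

/* The overly-wide column happily accepts the misplaced delivery instructions */
INSERT INTO #WideZip VALUES ('Please leave the package behind the house...');

/* The correctly-sized column rejects them with a
   "String or binary data would be truncated" error */
INSERT INTO #NarrowZip VALUES ('Please leave the package behind the house...');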

3. Indexing

Ever try to index a column (or set of columns) that total more than 1700 bytes (or 900 bytes in older versions)?


SQL Server will let you create indexes containing keys that may potentially be greater than 1700 bytes, but as soon as you try to insert large data into those indexes you will be greeted with a wall of errors.


Of course there are some tricks you can use to index those wide key combinations, but you don't want to use those unless you absolutely have to.
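
If you want to reproduce this yourself, a sketch like the following should do it (assuming SQL Server 2016 or later, where the nonclustered index key limit is 1700 bytes):

CREATE TABLE #WideKeys
(
    Col1 varchar(8000),
    Col2 varchar(8000)
);

/* Creating the index succeeds, but only with a warning about the 1700 byte limit */
CREATE NONCLUSTERED INDEX IX_WideKeys ON #WideKeys (Col1, Col2);

/* This insert fails because the combined index key would be 2000 bytes */
INSERT INTO #WideKeys VALUES (REPLICATE('a', 1000), REPLICATE('b', 1000));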

4. Inaccurate Estimates

Imagine we have 1000 rows of data in our table with a VARCHAR(8000) column:

CREATE TABLE #Varchar8000Test
(
    Id int identity PRIMARY KEY,
    BigColumn varchar(8000)
);

INSERT INTO #Varchar8000Test VALUES ('a');
GO 1000

You'll notice that each of those rows' BigColumn fields contains only 3 bytes of data (1 for "a" + 2 for varchar overhead). That's about 3 KB total for the whole table of data (plus whatever the int column takes up).

You would therefore think that SQL Server would know and use this information when executing a query:

SELECT * FROM #Varchar8000Test ORDER BY BigColumn
OPTION(MAXDOP 1)

However, it doesn't: SQL Server estimates that each row is ~4 KB! This is because SQL Server doesn't sample the data to estimate the average row size for VARCHAR columns; it simply assumes half of the defined length.

This carries over to other parts of the query plan too, like memory grants. Fitting this whole table into memory should only take about 7 KB (3 KB for our BigColumn data), but SQL Server reserves significantly more than that. In this example an excess of 6 MB isn't a huge deal, but if you are working with a table with many more rows, those excessive memory grants can use up hundreds of megabytes or even gigabytes of memory unnecessarily.
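
If you're curious how much of a difference the definition alone makes, repeating the test with a right-sized column should show a much smaller estimated row size and memory grant (a sketch; exact numbers will depend on your version and settings):

CREATE TABLE #Varchar10Test
(
    Id int identity PRIMARY KEY,
    SmallColumn varchar(10)
);

INSERT INTO #Varchar10Test VALUES ('a');
GO 1000

/* Same query shape as before - the estimated row size is now based on
   half of 10 bytes instead of half of 8000 bytes */
SELECT * FROM #Varchar10Test ORDER BY SmallColumn
OPTION(MAXDOP 1);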

...and more!

This list is only a start: there are plenty of additional reasons why defining every column on your table as VARCHAR(8000) is a bad idea. My goal with this post wasn't to create a comprehensive list, but rather to provide enough compelling reasons to have dissuaded my novice self a few years ago from even considering this as an option.

COUNT, DISTINCT, and NULLs

Watch this week's video on YouTube

One thing I see fairly often (and am occasionally guilty of myself) is using COUNT(DISTINCT) and DISTINCT interchangeably to get an idea of the number of unique values in a column.

While they will sometimes give you the same results, depending on the data the two methods may not always be interchangeable.

Let's start off with some test data. Important to note are the duplicate values, including the NULLs:

DROP TABLE IF EXISTS ##TestData;
CREATE TABLE ##TestData (Id int identity, Col1 char(1) NULL); 
INSERT INTO ##TestData VALUES ('A'); 
INSERT INTO ##TestData VALUES ('A'); 
INSERT INTO ##TestData VALUES ('B'); 
INSERT INTO ##TestData VALUES ('B');
INSERT INTO ##TestData VALUES (NULL); 
INSERT INTO ##TestData VALUES (NULL); 
CREATE CLUSTERED INDEX CL_Id ON ##TestData (Col1); 

If you want to know how many unique values are in Col1, you might write something like this:

SELECT COUNT(DISTINCT Col1) 
FROM ##TestData 


Two distinct values - great! Except...weren't there some NULLs in there? If we want to see the actual values instead of just seeing the count:

SELECT DISTINCT Col1 
FROM ##TestData 


Interesting: when doing a plain DISTINCT we see there are three unique values, but in our previous query when we wrote COUNT(DISTINCT Col1) a count of two was returned.

And while the SQL Server documentation specifies that DISTINCT will include nulls while COUNT(DISTINCT) will not, this is not something that many people find intuitive.

Viewing and COUNTing the NULLs

Sometimes we might have to do the opposite of what the default functionality does when using DISTINCT and COUNT functions.

For example, viewing the unique values in a column and not including the nulls is pretty straightforward:

SELECT DISTINCT 
    Col1 
FROM 
    ##TestData 
WHERE  
    Col1 IS NOT NULL 


Getting the opposite effect of returning a COUNT that includes the NULL values is a little more complicated. One thing we can try to do is COUNT all of our DISTINCT non-null values and then combine it with a COUNT DISTINCT for our NULL values:

SELECT COUNT(DISTINCT Col1) + COUNT(DISTINCT CASE WHEN Col1 IS NULL THEN 1 END)
FROM ##TestData;


While this logic is easy to interpret, it forces us to read our column of data twice, once for each COUNT - not very efficient on larger sets of data.


Another thing we can try is to put in a placeholder value (that doesn't exist elsewhere in the column's data) so that COUNT will include it in its calculation:

SELECT 
    /* ~~~ will never exist in our data */
    COUNT(DISTINCT ISNULL(Col1,'~~~')) 
FROM 
    ##TestData 



The ISNULL here functions the same as the CASE statement in our first attempt, without having to read the table twice. However, that Compute Scalar occurring to the left of our Clustered Index Scan will start to become painful as our data size increases since SQL Server will need to check each and every row and convert any NULLs it finds. Not to mention after computing all of those ~~~ values, SQL Server needs to re-sort the data to be able to find the DISTINCT values.

That leads us to a final attempt: using a DISTINCT in a derived table (to return our NULL) and then taking a count of that:

SELECT COUNT(*)  
FROM (SELECT DISTINCT Col1 FROM ##TestData) v 



This last option eliminates the Compute Scalar and extra sort. While there might be even better options out there for accomplishing the same task, at this point I'm pretty happy with how this will perform.

What's the Point?

SQL Server's documentation says that COUNT(*) returns items in a group while COUNT(Col1) returns non-NULL values in the group.

/* returns items in the group.  Includes nulls */
SELECT COUNT(*) 
FROM ##TestData;

/* returns non null values in group */
SELECT COUNT(Col1) 
FROM ##TestData;


Because of this, COUNT, DISTINCT, and NULLs have a special relationship with each other that isn't always as intuitive as many people think.

Whenever using COUNT or DISTINCT, make sure to test with NULLs to verify that SQL Server handles them the way you expect.
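
To recap everything in one place, here's a single query (my own summary, using the ##TestData table from above) showing how each technique treats the two NULL rows:

SELECT
    COUNT(*)             AS AllRows,         /* 6 - NULLs included */
    COUNT(Col1)          AS NonNullValues,   /* 4 - NULLs excluded */
    COUNT(DISTINCT Col1) AS DistinctNonNull, /* 2 - 'A' and 'B' */
    (SELECT COUNT(*)
     FROM (SELECT DISTINCT Col1 FROM ##TestData) v) AS DistinctWithNull /* 3 */
FROM ##TestData;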

The 5 Scariest Moments for a SQL Server Developer

Watch this week's video on YouTube

While families and friends are scaring each other this Halloween week with stories of ghosts and ghouls, I thought it'd be way scarier to talk about TRUNCATE TABLE statements and source control.

Accidental Data Deletion

You've spent all morning loading millions of records into your tables.  Some of that data was painstakingly loaded by hand from flat files, and some lookup values you even authored manually because "you'll only have to do this once."

You are jumping around between SSMS windows, running this query here and that query there, pressing F5 like there's no tomorrow. And then right before you press F5 to finish all of your work...

...you realize that you ran the whole tab's worth of code instead of the one statement you meant to highlight.  "Commands completed successfully" flashes in your Message window, and in horror you discover you accidentally ran a truncate table statement!

Ok so this is pretty bad.

You know backups are being taken but…can the DBA restore the most recent data you spent all morning working on?  You tell her about your misfortune ("...yeah one of my coworkers accidentally deleted the data…") and hold your breath.

After a few minutes she gets back to you saying everything is back up and working.  Saved by good backup practices...this time!

Problems With Code You Inherited

You receive an instant message from your DBA.

DBA: "Hey, how are you doing?"

You: "Good, how are you?"

DBA: "Look, the instance XYZ has been at 100% CPU utilization for the past hour.  I see you have a query that has been running that whole time…and it's using a scalar function."

You: "…"

You: "Sorry.  I am just running this job for Bill since he left a few weeks ago.  Obviously there are some problems with the way he was writing some of these queries and I haven't had a chance to look them over yet."

DBA: "That's ok.  I know you would never write a scalar function that totally destroys the server.  Anyway, I was able to easily rewrite the query into a table-valued function that should run a lot more efficiently.  Is it ok if I kill your original query and run this new one instead?"

You: "Yeah sure, that'd be great.  Thank you!"

Missing Index Details

You've been having a good day: no outages, no annoying customer requests, and the only thing keeping you from going home is to finish tuning the query you are working on.

You execute your query after making all necessary changes and…nothing.  The executing query icon just keeps spinning, and spinning, and spinning…

You go grab some water, and by that time the query has finished.  You switch over to the Execution plan tab and…you see SQL Server's Missing Index hint recommending that you create an index.

Now don't be confused: this text looks green and friendly.  You might be tempted to follow the recommendation and add the missing index it recommends - but don't!

That siren song of easy query tuning will steer you straight into the rocks known as unnecessary index maintenance. 

Just before you execute Microsoft's missing index details script, you realize that if you add some include columns and reorder a couple of the key columns, you'll have an index that will be applicable to all of your table's queries.

You avoid getting shipwrecked and create an index that is useful for more than just this one query - phew!
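
For instance, instead of creating the exact index SQL Server suggested, a wider consolidated index might cover this query and the table's other common queries too (a hypothetical example - every name here is made up):

/* The kind of narrow index the missing index hint tends to suggest:
   CREATE NONCLUSTERED INDEX IX_Suggested ON dbo.Orders (CustomerId) INCLUDE (OrderDate);

   A consolidated alternative that also serves other queries on the table: */
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (TotalAmount, Status);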

Email Subjects That Read: "Does anyone know why none of my queries are running?"

Last night's data load failed.  "No big deal," you assure yourself. "I'll just run it now - no one will know the difference."

You kick-off your ETL queries and decide to go make some oatmeal.  At your office's kitchen you run into a coworker you haven't seen in a while, and you get to talking about work, life, that weekend's football game that was lost in overtime…that's been happening a lot this season, huh?

On the way back to your desk, one of your customers stops you to ask some questions, your manager needs to discuss project planning with you, a new employee is introduced, etc…

By this point the office is busy with activity.  You finally make it back to your desk MUCH later than you originally intended.  What were you doing again?

You can't remember so you decide to check your email.  You notice a chain of emails asking "Does anyone know what's going on with server ABC?  Things are running so SLOW!  I can't run even the simplest queries!"

OH CRAP you think.  You look at your load process queries and they are still running with no end in sight.  Do you kill the queries and wait for the hour-long rollback to complete or…

...just then your queries finish.  A minute later you receive another email saying "seems like everything is working fine now." 

Another bullet dodged.

Where Did My Code Go?

Working on this team project has been a fun experience for you - responsibilities have been shared and you and your coworkers have been delivering on deadlines ahead of schedule.  You've never been able to write queries and make progress to the end product as quickly as this.

You sit down to start working and remember that you had to debug a query from yesterday afternoon.  You go and open the stored procedure in dev and…it's not the same query anymore.

Sure, parts of it are the same, but there are some extra parameters and some of the logic has changed.  What the heck…?

"Did anyone mess with USP_GetProducts since yesterday?" you ask your alley of teammates.

"Uhh…I updated it this morning to make it work for my business logic requirement," said intern Sam.  "Is something wrong?"

"Well, it's totally different from the way I had it when I left yesterday evening.  I bet we got out of sync and the merge conflict wasn't resolved correctly" you say.

"Oh yeah, I got some message about merging.  I just clicked the 'keep my version' button," says Sam.

"No big deal," you say with confidence.  "I can pull my version out of source control.  Come on over and I can show you how we can merge both of our queries together."

Source control saves the day.

Happy Halloween.