
Prefer watching on YouTube

In an ideal world, you fully test how your SQL Server will handle an upgrade to the latest version.  You’re able to catch and mitigate any performance surprises early on.

In the real world, you might not be able to perform a good test before upgrading.  You only learn about poor performance once you upgrade in production.

Does this scenario sound familiar?  It’s what I’ve been thinking about recently as I have to deal with query performance issues after upgrading to SQL Server 2016.

Coming from SQL Server 2012, I know the new cardinality estimator added in 2014 is a major source of the problems I’m experiencing.  While the new cardinality estimator improves the performance of some queries, it has also made some of my queries take hours to run instead of seconds.

Long-term, the solution is to revisit these queries, their stats, and the execution plans being generated for them to see what can be rewritten for better performance.

But ain’t nobody got time for that (at least not when facing performance-crippling production issues).

Short-term, put a band-aid on to stop the bleeding.

I could change the database’s compatibility level to revert to SQL Server 2012 behavior (before the new cardinality estimator was introduced), but then I’d miss out on all of the new SQL Server 2016 improvements just because a few queries are misbehaving.

I could enable trace flag 9481 to force my queries to use the old cardinality estimator; however, as a developer I probably don’t have access to play with trace flags (and for good reason).

Starting with 2016 SP1, what I can do is use the legacy cardinality estimator query hint:
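(A sketch; the table and columns are made up, and the OPTION clause is the part that matters:)

    SELECT SalesPersonId, SUM(TotalDue) AS TotalSales
    FROM dbo.Orders  -- hypothetical table
    GROUP BY SalesPersonId
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));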

This hint is great because it doesn’t require developers to have any special permissions.  It also allows SQL Server to use the old cardinality estimator for poorly performing queries only – the rest of the server gets to benefit from the improvements brought on by the new cardinality estimator.

With time, I can revisit the queries that are using this hint to try to improve their performance, but in the meantime it’s a great way to fix regressed query performance caused by the new cardinality estimator.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

Protecting against SQL Injection Part 2


Watch this content on YouTube.

Last week we talked about building dynamic SQL queries and how doing so might leave you open to SQL injection attacks.

In that post we examined how using sp_executesql to parameterize our dynamic SQL statements protects us.

Today, we are going to look at where sp_executesql falls short.

The problem with sp_executesql

Let’s modify last week’s final query to make our table name dynamic:
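(A sketch of that attempt; the procedure and column names are illustrative:)

    CREATE PROCEDURE dbo.GetFullName
        @ParmUserName VARCHAR(100),
        @ParmTableName SYSNAME
    AS
    BEGIN
        DECLARE @FullQuery NVARCHAR(1000);

        -- Try to parameterize the table name just like the username
        SET @FullQuery = N'SELECT FullName FROM @ParmTableName WHERE UserName = @ParmUserName';

        EXEC sp_executesql @FullQuery,
            N'@ParmUserName VARCHAR(100), @ParmTableName SYSNAME',
            @ParmUserName, @ParmTableName;
    END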

(there’s an entirely different discussion to be had about whether you should allow table and column names to be dynamic, but we’ll assume we need a dynamic table name for the rest of this post)

If we pass in a table name parameter value and execute this query, we’ll be greeted with this error:
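(The message reads along these lines — SQL Server decides @ParmTableName must be a table variable:)

    Must declare the table variable "@ParmTableName".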

Yeah, sp_executesql doesn’t like parameterizing table names.

So how do we make the table name dynamic and safe?

Unfortunately we have to fall back on SQL’s EXEC command.

However, like we discussed last week, we need to be vigilant about what kind of user input we allow to be built as part of our query.

Assuming our app layer is already sanitizing as much of the user input as possible, here are some precautions we can take on the SQL side of the equation:

QUOTENAME()

If we wrap our user input parameters with QUOTENAME(), we’ll achieve safety:
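(Continuing the sketch from above:)

    SET @FullQuery = N'SELECT FullName FROM dbo.' + QUOTENAME(@ParmTableName)
                   + N' WHERE UserName = ' + QUOTENAME(@ParmUserName, '''');

    -- EXEC instead of sp_executesql, since identifiers can't be parameterized
    EXEC (@FullQuery);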

This results in:
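(For the malicious input from last week, the generated query would look like this; the doubled quote keeps the entire value inside the string literal:)

    SELECT FullName FROM dbo.[RegisteredUser] WHERE UserName = 'TFly37'' or 1=1--'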

Although QUOTENAME() works well by adding delimiters (brackets by default) and escaping embedded quotes, its downside is that it only handles inputs up to 128 characters long (it returns NULL for anything longer). If you are expecting parameter values longer than that, you will have to use something like REPLACE(@TableName,'''','''''') instead to escape single quotes (however, rolling your own logic like this is really hard to do securely and not recommended).

EXECUTE AS

The account running any dynamic SQL queries should be locked down so that it won’t be able to perform any operations you don’t want it to do.

Taking this idea one step further, you can create another account with very limited permissions, and add EXECUTE AS to your stored procedure to run that code under the very limited account.
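(A sketch; “VeryLimitedUser” is a hypothetical low-permission account:)

    ALTER PROCEDURE dbo.GetFullName
        @ParmUserName VARCHAR(100),
        @ParmTableName SYSNAME
    WITH EXECUTE AS 'VeryLimitedUser'  -- hypothetical account that can only read what it needs
    AS
    BEGIN
        DECLARE @FullQuery NVARCHAR(1000);
        SET @FullQuery = N'SELECT FullName FROM dbo.' + QUOTENAME(@ParmTableName)
                       + N' WHERE UserName = ' + QUOTENAME(@ParmUserName, '''');
        EXEC (@FullQuery);
    END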

This won’t prevent injection, but it will limit what the malicious user is able to do.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!


Looking for a script to find possible SQL injection vulnerabilities on your server? Scroll to the bottom of this post.

Watch this content on YouTube.


OWASP names SQL injection as the #1 vulnerability for web applications. The infamy of this attack has even made its way into the popular XKCD comic.

What is SQL injection?

A SQL query is vulnerable to SQL injection if a user can run a query other than the one that was originally intended.

Sometimes SQL injection is easier to understand with an example. Let’s use this table of registered users:
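(A sketch of the table; every row except TFly37 is made up for the demo:)

    CREATE TABLE dbo.RegisteredUser
    (
        Id INT IDENTITY(1,1) PRIMARY KEY,
        UserName VARCHAR(100),
        FullName VARCHAR(100)
    );

    INSERT INTO dbo.RegisteredUser (UserName, FullName)
    VALUES ('TFly37', 'Tim Flynn'),    -- hypothetical demo data
           ('SRose85', 'Susan Rose'),
           ('DFarr20', 'Dinah Farr');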

And then let’s create a simple stored procedure that will query that table:
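(A minimal version of that procedure:)

    CREATE PROCEDURE dbo.GetFullName
        @ParmUserName VARCHAR(100)
    AS
    BEGIN
        DECLARE @FullQuery NVARCHAR(1000);

        -- The vulnerability: user input concatenated straight into the query string
        SET @FullQuery = N'SELECT FullName FROM dbo.RegisteredUser WHERE UserName = '''
                       + @ParmUserName + N'''';

        EXEC (@FullQuery);
    END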

The important thing to note in the query above is that we are generating a dynamic SQL statement; that is, we are building the SQL query string, and then we are executing it.

Imagine this stored procedure is running in order to display a “Welcome <Full Name>!” message in our app — a website visitor types in their @ParmUserName and we execute the stored procedure to return their full name.

Here’s our code that calls the stored procedure:
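    EXEC dbo.GetFullName @ParmUserName = 'TFly37';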

And result:
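(Using our made-up rows:)

    FullName
    ---------
    Tim Flynn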

Cool. No problems so far.

However, what if our user decides to pass in the following value as their username?
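(The doubled quote below is only T-SQL string escaping; the raw value is TFly37' or 1=1--)

    EXEC dbo.GetFullName @ParmUserName = 'TFly37'' or 1=1--';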

This funny looking parameter value returns this:
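(With our made-up rows, every user comes back:)

    FullName
    ----------
    Tim Flynn
    Susan Rose
    Dinah Farr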

AHHHHHH!!!!

This user just hacked our website and viewed all of the users in our table.

In this specific case only our users’ full names were breached, but in other instances it’s just as easy for more sensitive data like passwords, social security numbers, and bank account numbers to be revealed as well (if you are looking for some fun, search for “SQL injection” on the Have I been pwned? list of Pwned websites to see all of the companies that aren’t doing a good job protecting your privacy).

So how did that example of SQL injection actually work?

Since our stored procedure executes a dynamically generated query, let’s look at what that generated query actually looks like for both of the parameters that were passed in:
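    -- For the normal input 'TFly37':
    SELECT FullName FROM dbo.RegisteredUser WHERE UserName = 'TFly37'

    -- For the malicious input (the trailing quote gets commented out by --):
    SELECT FullName FROM dbo.RegisteredUser WHERE UserName = 'TFly37' or 1=1--'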

Even though TFly37' or 1=1-- doesn’t look like an intelligible input parameter, when it’s concatenated into our query it makes sense.

Our malicious user is basically writing their own SQL query, one that will return all of the names of our users instead of just their own. In many instances, the crafty hacker can go a step further and write additional injection code to reveal the contents of the entire user table (or other tables in the database)!

How do I prevent SQL injection?

Simple: don’t concatenate unsanitized user input data to your SQL queries.

In this example, this is easy to do: simply rewrite the stored procedure to not use dynamic SQL:
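    ALTER PROCEDURE dbo.GetFullName
        @ParmUserName VARCHAR(100)
    AS
    BEGIN
        -- No string building at all: the parameter is only ever treated as data
        SELECT FullName
        FROM dbo.RegisteredUser
        WHERE UserName = @ParmUserName;
    END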

When you don’t have dynamic SQL, you can’t be affected by SQL injection attempts.

Avoiding dynamic SQL isn’t always possible though, so what can we do in those cases? Use sp_executesql:
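    ALTER PROCEDURE dbo.GetFullName
        @ParmUserName VARCHAR(100)
    AS
    BEGIN
        DECLARE @FullQuery NVARCHAR(1000);

        -- The placeholder stays in the string; the value is passed separately
        SET @FullQuery = N'SELECT FullName FROM dbo.RegisteredUser WHERE UserName = @ParmUserName';

        EXEC sp_executesql @FullQuery, N'@ParmUserName VARCHAR(100)', @ParmUserName;
    END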

sp_executesql allows for parameterization of your dynamic SQL, removing the possibility of unwanted code injection.

Are any of my current SQL queries vulnerable to SQL injection attacks?

So SQL injection is really bad and you don’t want to become like Sony or Yahoo. How do we check to see if we have any vulnerable queries on our server?

I wrote the query below to help you start searching. It is not perfect, but it does act as a good starting point for auditing potentially injectable queries:
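(The version below is a simplified reconstruction of that kind of audit query: it flags modules that appear to build dynamic SQL by concatenation. Expect false positives and negatives.)

    SELECT
        OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
        OBJECT_NAME(m.object_id) AS ObjectName
    FROM sys.sql_modules m
    WHERE m.definition LIKE '%EXEC%(%+%'       -- EXEC over a concatenated string
       OR m.definition LIKE '%EXECUTE%(%+%'
       OR (m.definition LIKE '%sp_executesql%' AND m.definition LIKE '%+%');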

I’ve found this script useful for myself, however if you find any issues with it please let me know, thanks!

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

SQL in 60 Seconds #4


Prefer video? Watch this content on my YouTube channel!

How many times have you had to transform some column value and ended up stacking several nested SQL REPLACE() functions like this?
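(Something like this, with hypothetical column and table names:)

    SELECT
        REPLACE(
            REPLACE(
                REPLACE(
                    REPLACE(PhoneNumber, '(', '')
                , ')', '')
            , '-', '')
        , ' ', '') AS CleanPhoneNumber
    FROM dbo.Customers;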

Ugly right? And that’s after careful formatting to try and make it look readable. I could have just left it as:
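    SELECT REPLACE(REPLACE(REPLACE(REPLACE(PhoneNumber,'(',''),')',''),'-',''),' ','') AS CleanPhoneNumber FROM dbo.Customers;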

Here we only have 4 nested REPLACE functions. My shameful record is 29. I’m not proud of it, but sometimes it’s the only way to get things done.

Not only are these nested REPLACE() functions difficult to write, but they are difficult to read too.

Instead of suffering through all of that ugly nesting, what you can do instead is use CROSS APPLY:
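(Same logic as the nested version above:)

    SELECT
        c.PhoneNumber,
        r4.CleanValue AS CleanPhoneNumber
    FROM dbo.Customers c
    CROSS APPLY (SELECT REPLACE(c.PhoneNumber, '(', '') AS CleanValue) r1
    CROSS APPLY (SELECT REPLACE(r1.CleanValue, ')', '') AS CleanValue) r2
    CROSS APPLY (SELECT REPLACE(r2.CleanValue, '-', '') AS CleanValue) r3
    CROSS APPLY (SELECT REPLACE(r3.CleanValue, ' ', '') AS CleanValue) r4;

Each CROSS APPLY performs one REPLACE step and hands its result to the next, so the steps read top to bottom instead of inside out.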

Technically the CROSS APPLY solution uses more characters. But it is infinitely more readable.

And the server? The server doesn’t care about the additional characters — it still compiles it down to the same 1s and 0s:

So next time you have to nest several REPLACE() functions (or any other string functions), do yourself a favor and make it more readable by using CROSS APPLY instead. Your future self will thank you.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

Unexpected SQL Server Performance Killers #1

In this series I explore scenarios that hurt SQL Server performance and show you how to avoid them. Pulled from my collection of “things I didn’t know I was doing wrong for years.”


Have you ever encountered a query that runs slowly, even though you’ve created indexes for it?

There are a few different reasons why this may happen. The one I see most frequently happens in the following scenario.

I’ll have an espresso please

Let’s say I have a table dbo.CoffeeInventory of coffee beans and prices that I pull from my favorite green coffee bean supplier each week. It looks something like this:
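(A few illustrative rows; the full data set is in the Gist below:)

    Name                    Price
    ----------------------  -----
    Costa Rica Organic      5.50
    Ethiopia Sidamo         7.50
    Brazil Cerrado          4.99
    Colombia Supremo        6.25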

If you want to follow along, you can get this data set from this GitHub Gist.

I want to be able to efficiently query this table and filter on price, so next I create an index like so:
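(The index name is mine:)

    CREATE CLUSTERED INDEX CL_Price ON dbo.CoffeeInventory (Price);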

Now, I can write my query to find out what coffee prices are below my willingness to pay:
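    SELECT Name, Price
    FROM dbo.CoffeeInventory
    WHERE Price < 6.75;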

You would expect this query to be blazing fast and use a clustered index seek, right?

WRONG!

What the heck?

Why is SQL scanning the table when I added a clustered index on the column that I am filtering in my predicate? That’s not how it’s supposed to work!

Well dear reader, if we look a little bit closer at the table scan operation, we’ll notice a little something called CONVERT_IMPLICIT:
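(Reconstructed from the plan’s predicate property, not a verbatim screenshot:)

    CONVERT_IMPLICIT(numeric(3,2), [dbo].[CoffeeInventory].[Price], 0) < (6.75)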

CONVERT_IMPLICIT: ruiner of fast queries

What is CONVERT_IMPLICIT doing? Well, as the name implies, it’s implicitly converting data as it executes the query (as opposed to me having specified an explicit CAST() or CONVERT() function in my query).

The reason it needs to do this is because I defined my Price column as a VARCHAR(5):
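(A simplified version of the table definition:)

    CREATE TABLE dbo.CoffeeInventory
    (
        Name VARCHAR(100),
        Price VARCHAR(5)  -- numeric data stored as a string
    );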

Who put numeric data into a string datatype? Someone who hasn’t had their coffee yet today.

In my query, however, I’m doing a comparison against a number: WHERE Price < 6.75. SQL Server doesn’t know how to compare a string to a number directly, so it has to convert the VARCHAR string to a NUMERIC(3,2).

This is painful.

Why? Because SQL is performing that implicit conversion to the numeric datatype for every single row in my table. Hence, it can’t seek using the index because it ends up having to scan the whole table to convert every record to a number first.

And this doesn’t only happen with number and string conversions. Microsoft has posted an entire chart detailing what types of data type comparisons will force an implicit conversion:

https://docs.microsoft.com/en-us/sql/t-sql/data-types/data-type-conversion-database-engine

That’s a lot of orange circles/implicit conversions!

How can I query my coffee faster?

Well in this scenario, we have two options.

  1. Fix the datatype of our table to align with the data actually being stored in it (data stewards love this).
  2. Avoid making SQL Server convert every row in the column.

Number 1 above is self-explanatory, and the better option if you can do it. However, if you aren’t able to modify the column type, you are better off writing your query like this:
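    SELECT Name, Price
    FROM dbo.CoffeeInventory
    WHERE Price < '6.75';  -- string vs. string: no per-row conversion of the column

(One assumption to be aware of: this relies on the Price values being consistently formatted, since string comparisons are evaluated character by character.)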

Since we do a comparison of equivalent datatypes, SQL Server doesn’t need to do any conversions and our index gets used. Woo-hoo!

What about the rest of my server?

Remember that chart above? There are a lot of different data comparisons that can force a painful column-side implicit conversion by SQL Server.

Fortunately, Jonathan Kehayias has written a great query that helps you find column-side implicit conversions by querying the plan cache. Running his query is a great way to identify most of the implicit conversions happening in your queries so you can go back and fix them — and then rejoice in your improved query performance!

 

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

T-SQL Tuesday #92: Lessons Learned the Hard Way

This post is a response to this month’s T-SQL Tuesday prompt. T-SQL Tuesday was created by Adam Machanic and is a way for SQL users to share ideas about interesting topics. This month’s topic is Lessons Learned the Hard Way.


“Is this your query that’s killing the server?”

It was my first week on the job and I was learning to query one of our major databases.

Up until that point, my SQL experience was limited to working on a *tiny* e-commerce database. Query performance was never something I had to deal with because any query I wrote, no matter how poorly written, would always execute quickly.

The new database I was working on, though, had tables with a billion+ rows. I should have been more conscious about how I was writing my joins and filtering my records, but I wasn’t. I wrote my query and executed it in SQL Server Management Studio.

About 20 minutes into my query’s execution, I received an email from my new DBA, and it looked something like this:

Uhh, there might be a problem here

“Is this your query that’s killing the server?”

Oops.

I don’t think my mouse ever moved to the stop execution button as quickly as it did that moment.

I was incredibly embarrassed to have brought our production server to a crawl. I was also incredibly embarrassed to have had my first interaction with my new DBA be about a query that created major problems for him.

Although there was no long-term damage from my server-crushing query, it was a scenario that I definitely didn’t want to relive in the future.

Next time: don’t do that again

Obviously, this was an experience where I learned that maybe I shouldn’t write queries against unfamiliar data in production.

  • I should have been practicing on a dev database.
  • I should have looked at table meta data and made sure I understood relationships between tables better.
  • I should have done some more preliminary querying with more restrictive filters to be able to catch performance problems earlier on with smaller result sets.
  • I should have examined what indexes were available and made sure I was attempting to use them.
  • I should have used a NOLOCK hint if I absolutely had to test on the production data so that, at the very least, I wouldn’t block the high-transaction ETLs from modifying data in that database.

All of those “should haves” quickly became my checklist for what to do before running any query in an unfamiliar environment.

I’ve still written plenty of ugly and inefficient queries since then, however none of them ever caused me to bring the SQL server to a halt like I did in my first week. That was one lesson that I learned the hard way.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

SQL in 60 Seconds #1

A few keystrokes and BAM! A mostly formatted query

SQL in 60 Seconds is a series where I share SQL tips and tricks that you can learn and start using in less than a minute.

Have you ever copied and pasted a query into SQL Server Management Studio and been annoyed that the list of column names in the SELECT statement was all on one line?

There are 30 columns here. Ugh.

You can make the query easier to read by putting each column name onto its own line.

Simply open the Find and Replace window (CTRL + H) and type in ,(:Wh)* for the Find value and ,\n\t for the Replace value (in some versions of SSMS you may have better luck using ,(:Wh|\t| )* in the Find field). Make sure “Use Regular Expressions” is checked and press Replace All:
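For example, on a hypothetical one-line query, Replace All turns this:

    SELECT Id, FirstName, LastName, Email, Phone FROM dbo.Customers

into this:

    SELECT Id,
        FirstName,
        LastName,
        Email,
        Phone FROM dbo.Customers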


The magic you just used is a Regular Expression, and Microsoft has its own flavor used in SSMS and Visual Studio. Basically, we found text that

  • began with a comma (,)
  • followed by any whitespace (:Wh) (line break, tab, space, etc…)
  • (in newer versions of SSMS we add |\t| to also explicitly match a tab or a space)
  • and replaced it with a comma (,), a new line (\n), and a tab (\t).

Sure, this trick isn’t going to give you the same output as if you used a proper SQL formatter, but this technique is free and built straight into SSMS.

 

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

It’s 4:30 pm on Friday and Mr. Manager comes along to tell you that he needs you to run some important ad-hoc analysis for him.

Previously this meant having to stay late at the office, writing cumbersome queries to extract business information from transactional data.

Lucky for you, you’ve recently started using Temporal Tables in SQL Server, ensuring that you’ll be able to answer your boss’s questions and still make it to happy hour for $3 margaritas.

Sound like a plan? Keep reading below!

The Data

For these demos, we’ll be using my imaginary car rental business data. It consists of our temporal table dbo.CarInventory and our history table dbo.CarInventoryHistory:
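(Here’s roughly how that pair is defined; the column names are assumptions I’ll reuse in the examples that follow:)

    CREATE TABLE dbo.CarInventory
    (
        CarId INT IDENTITY(1,1) PRIMARY KEY,
        Make VARCHAR(40),
        Model VARCHAR(40),
        Color VARCHAR(10),
        Mileage INT,
        InLot BIT,  -- 1 = on our lot, 0 = rented out
        SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
        PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CarInventoryHistory));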

I’ve upgraded my business — we now have FOUR Chevy Malibus available for you to rent

Business Problem #1 — “Get me current inventory!”

To get our current inventory of rental cars, all we have to do is query the temporal table:
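    SELECT * FROM dbo.CarInventory;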

That’s it.

I know this query seems lame — it’s just a SELECT FROM statement. There are no FOR SYSTEM_TIME clauses, no WHERE statements, and no other interesting T-SQL features.

But that’s the point! Have you ever had to get the “current” rows out of a table that is keeping track of all transactions? I’m sure it involved some GROUP BY statements, some window functions, and more than a few cups of coffee.

Temporal tables automatically manage your transaction history, providing the most current records in one table (dbo.CarInventory) and all of the historical transactions in another (dbo.CarInventoryHistory). No need for complicated queries.

Business Problem #2 — “How many miles on average do our customers drive each of our cars?”

In this example, we use FOR SYSTEM_TIME ALL and a plain old GROUP BY to get the data we need:
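(A sketch against the assumed schema:)

    SELECT CarId, AVG(Mileage) AS AverageMileage
    FROM dbo.CarInventory FOR SYSTEM_TIME ALL
    GROUP BY CarId;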

Some cars get driven a lot more. Causation or correlation?

FOR SYSTEM_TIME ALL returns all rows from both the temporal and history table. It’s equivalent to:
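    SELECT * FROM dbo.CarInventory
    UNION ALL
    SELECT * FROM dbo.CarInventoryHistory;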

Once again, there isn’t anything too fancy going on here — but that’s the point. With temporal tables, your data is organized to make analysis easier.

Business Problem #3 — “How many cars do we rent out week over week?”

Here at Wagner Car Rentals we want to figure out how often our cars are being rented and see how those numbers change from week to week.

In this query, we are using FOR SYSTEM_TIME FROM/TO on our temporal table to specify what data we want in our “CurrentWeek” and “PreviousWeek” subqueries.
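(A sketch; the dates are made up, and I’m assuming InLot = 0 marks a rented-out car:)

    SELECT
        CurrentWeek.RentalCount  AS CurrentWeekRentals,
        PreviousWeek.RentalCount AS PreviousWeekRentals
    FROM
    (
        SELECT COUNT(DISTINCT CarId) AS RentalCount
        FROM dbo.CarInventory FOR SYSTEM_TIME FROM '2017-05-15' TO '2017-05-22'
        WHERE InLot = 0
    ) CurrentWeek
    CROSS JOIN
    (
        SELECT COUNT(DISTINCT CarId) AS RentalCount
        FROM dbo.CarInventory FOR SYSTEM_TIME FROM '2017-05-08' TO '2017-05-15'
        WHERE InLot = 0
    ) PreviousWeek;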

FROM/TO returns any records that were active during the specified range (BETWEEN/AND does the same thing, but its upper bound datetime2 value is inclusive instead of exclusive).

Business Problem #4 — “What color cars are rented most often?”

We’re thinking of expanding our fleet of rental vehicles and want to purchase cars in the most popular colors so we can keep customers happy (and get more of their business!). How can we tell which color cars get rented most often?

Here we use CONTAINED IN because we want to get precise counts of how many cars were rented and returned in a specific date range (if a car wasn’t returned — stolen, wrecked and totaled, etc… — we don’t want to purchase more of those colors in the future).
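(Dates made up; same InLot assumption as above:)

    SELECT Color, COUNT(*) AS TimesRented
    FROM dbo.CarInventory FOR SYSTEM_TIME CONTAINED IN ('2017-05-13', '2017-05-20')
    WHERE InLot = 0  -- row versions where the car was out being rented
    GROUP BY Color
    ORDER BY TimesRented DESC;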

Business Problem #5 — “Jerry broke it. FIX IT!”

The computer systems that we use at Wagner Car Rentals are a little…dated.

Instead of scanning a bar code to return a car back into our system, our employees need to manually type in the car details. The problem here is that some employees (like Jerry) can’t type, and often make typos:
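(A hypothetical Jerry special:)

    UPDATE dbo.CarInventory
    SET Make = 'Chvy', Model = 'Malbiu', InLot = 1
    WHERE CarId = 4;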

Having inconsistent data makes our reporting much more difficult, but fortunately since we have our temporal table tracking row-level history, we can easily correct Jerry’s typos by pulling the correct values from a previous record:
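(A sketch; CarId 4 stands in for Jerry’s mangled row:)

    UPDATE ci
    SET Make  = hist.Make,
        Model = hist.Model
    FROM dbo.CarInventory ci
    CROSS APPLY
    (
        SELECT TOP 1 Make, Model
        FROM dbo.CarInventoryHistory h
        WHERE h.CarId = ci.CarId
        ORDER BY h.SysEndTime DESC  -- the most recent (correctly typed) version
    ) hist
    WHERE ci.CarId = 4;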

Typos fixed!

Although we could have fixed this issue without using a temporal table, it shows how having all of the row-level transaction history makes it possible to repair incorrect data in more difficult scenarios. For even hairier situations, you can even roll back your temporal table data.

Conclusion

Temporal tables are easy to set up and make writing analytical queries a cinch.

Hopefully writing queries against temporal tables will prevent you from having to stay late in the office the next time your manager asks you to run some ad-hoc analysis.

 

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!


You can watch this blog post on YouTube too!

So you’ve started using temporal tables because they make your point-in-time analysis queries super easy.

Your manager is happy because you’re getting historical data to him quickly. Your DBA is happy because she doesn’t have to clean up any performance-killing triggers that replicate a temporal table’s functionality. Everything with temporal tables has made your life better.

Except that time when you accidentally inserted some bad data into your temporal table.

Whoops

The good news is that all of your data is still intact — it’s been copied over to the historical table. Phew!

Now all you need to do is roll back this inadvertent row insertion and make your tables look just like they did before you started breaking them.

This should be easy right?

Well, not exactly — there’s no automatic way to roll back the data in a temporal table. However, that doesn’t mean we can’t write some clever queries to accomplish the same thing.

Let’s make some data

Don’t mind the details of this next query too much. It uses some non-standard techniques to fake the data into a temporal/historical table with “realistic” timestamps:
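(The gist of those techniques, rather than the exact script: while versioning is off and the period is dropped, the period columns accept explicit values:)

    ALTER TABLE dbo.CarInventory SET (SYSTEM_VERSIONING = OFF);
    ALTER TABLE dbo.CarInventory DROP PERIOD FOR SYSTEM_TIME;

    -- ...INSERT rows into dbo.CarInventory and dbo.CarInventoryHistory here,
    --    supplying hand-picked SysStartTime/SysEndTime values...

    ALTER TABLE dbo.CarInventory ADD PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime);
    ALTER TABLE dbo.CarInventory SET (SYSTEM_VERSIONING = ON
        (HISTORY_TABLE = dbo.CarInventoryHistory));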

If you look at the results of our temporal table (top) and historical table (bottom), they should look something like this:

This data represents my totally real, very very not-fake rental car business.

You see those two rows in the top temporal table? Those are the ones I just added accidentally. I actually had a bug in my code *ahem* and all of the data inserted after 2017-05-18 is erroneous.

The bug has been fixed, but we want to clean up the incorrect entries and roll back the data in our temporal tables to how it looked on 2017-05-18. Basically, we want the following two rows to appear in our “current” temporal table and the historical table to be cleaned up of any rows inserted after 2017-05-18:

Fortunately, we can query our temporal table using FOR SYSTEM_TIME AS OF to get the two rows highlighted above pretty easily. Let’s do that and insert into a temp table called ##Rollback:
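(The column list is assumed from the demo schema:)

    SELECT
        CarId, Make, Model, Color, Mileage, InLot,
        SysStartTime,
        CAST('9999-12-31 23:59:59.9999999' AS DATETIME2) AS SysEndTime
    INTO ##Rollback
    FROM dbo.CarInventory
    FOR SYSTEM_TIME AS OF '2017-05-18';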

You’ll notice we also updated the SysEndTime — that’s because a temporal table always has its AS ROW END column set to the max datetime value.

Looking at ##Rollback, we have the data we want to insert into our temporal table:

This is the data we want!

Now, it’d be nice if we could just insert the data from ##Rollback straight into our temporal table, but that insert would get tracked by the temporal table!

So instead, we need to turn off system versioning, allow identity inserts, delete our existing data, and insert from ##Rollback. Basically:
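    ALTER TABLE dbo.CarInventory SET (SYSTEM_VERSIONING = OFF);

    SET IDENTITY_INSERT dbo.CarInventory ON;

    DELETE FROM dbo.CarInventory;

    -- SysStartTime/SysEndTime still populate automatically since the period is defined
    INSERT INTO dbo.CarInventory (CarId, Make, Model, Color, Mileage, InLot)
    SELECT CarId, Make, Model, Color, Mileage, InLot
    FROM ##Rollback;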

While system versioning is off, we can also clean up the historical table by deleting all records after 2017-05-18, joining the ##Rollback temp table on SysStartTime:
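    DELETE h
    FROM dbo.CarInventoryHistory h
    INNER JOIN ##Rollback r
        ON r.CarId = h.CarId
    WHERE h.SysStartTime > r.SysStartTime;  -- anything newer than the rollback point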

We have rolled back our data successfully!

Only One Tiny Problem

Did you notice that the last SysEndTime values in our historical table don’t match up with the SysStartTime values in our temporal table?

This is a data integrity issue for our temporal table — our datetimes should always be continuous.

Fortunately this is easily fixed with one more UPDATE statement:
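(Stitch each car’s most recent history row to its new current row:)

    UPDATE h
    SET SysEndTime = ci.SysStartTime
    FROM dbo.CarInventoryHistory h
    INNER JOIN dbo.CarInventory ci
        ON ci.CarId = h.CarId
    WHERE h.SysEndTime = (
        SELECT MAX(h2.SysEndTime)
        FROM dbo.CarInventoryHistory h2
        WHERE h2.CarId = h.CarId
    );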

Our correctly rolled back temporal table

Finally, remember to turn system versioning back on and to turn off our identity inserts to restore the original functionality of our temporal tables:
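    SET IDENTITY_INSERT dbo.CarInventory OFF;

    ALTER TABLE dbo.CarInventory SET (SYSTEM_VERSIONING = ON
        (HISTORY_TABLE = dbo.CarInventoryHistory, DATA_CONSISTENCY_CHECK = ON));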

Congratulations, you’ve rolled back your temporal table data!

 

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!


You can watch this blog post on YouTube too!

Have you ever needed to look at what data in a table used to look like?

If you have, it probably took a knuckle-cracking filled session of writing group-by statements, nested sub-queries, and window functions to write your time-travelling query.

Sorry for your lost day of productivity — I’ve been there too.

Fortunately for us, SQL Server 2016 introduces a new feature to make our point-in-time analysis queries easy to write: temporal tables.

Temporal Tables? Are Those The Same As Temporary Tables?

Don’t let the similar sounding name fool you: “temporal” <> “temporary”.

Temporal tables consist of two parts:

  1. The temporal table — this is the table that contains the current values of your data.
  2. The historical table — this table holds all of the previous values that at some point existed in your temporal table.

You might have created a similar setup yourself in previous versions of SQL using triggers. However, using a temporal table is different from this because:

  1. You don’t need to write any triggers/stored procedures! All of the history tracking is done automatically by SQL Server.
  2. Retrieving the data uses a simple WHERE clause — no complex querying required.

I want to make my life easier by using temporal tables! Take my money and show me how!

I’m flattered by your offer, but since we are good friends I’ll let you in on these secrets for free.

First let’s create a temporal table. I’m thinking about starting up a car rental business, so let’s model it after that:
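(A sketch; I’m guessing at the exact columns, but the temporal pieces are the ones called out below:)

    CREATE TABLE dbo.CarInventory
    (
        CarId INT IDENTITY(1,1) PRIMARY KEY,
        Make VARCHAR(40),
        Model VARCHAR(40),
        Color VARCHAR(10),
        Mileage INT,
        InLot BIT,
        SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
        PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CarInventoryHistory));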

The key things to note with our new table above are that

  1. It contains a PRIMARY KEY.
  2. It contains two datetime2 fields, marked with GENERATED ALWAYS AS ROW START/END.
  3. It contains the PERIOD FOR SYSTEM_TIME statement.
  4. It contains the SYSTEM_VERSIONING = ON property with the (optional) historical table name (dbo.CarInventoryHistory).

If we query our newly created tables, you’ll notice our column layouts are identical:
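    SELECT * FROM dbo.CarInventory;
    SELECT * FROM dbo.CarInventoryHistory;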

Let’s fill it with the choice car of car rental agencies all across the U.S. — the Chevy Malibu:
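(Values are illustrative:)

    INSERT INTO dbo.CarInventory (Make, Model, Color, Mileage, InLot)
    VALUES ('Chevy', 'Malibu', 'Black',  0, 1),
           ('Chevy', 'Malibu', 'Silver', 0, 1);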

Although we got some unassuming car models, at least we can express our individuality with two different paint colors!

In all of the remaining screenshots, the top result is our temporal table dbo.CarInventory and the bottom result is our historical table dbo.CarInventoryHistory.

You’ll notice that since we’ve only inserted one row for each of our cars, there’s no row history yet and therefore our historical table is empty.

Let’s change that by getting some customers and renting out our cars!
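(Assuming InLot = 0 means a car is out being rented:)

    UPDATE dbo.CarInventory SET InLot = 0 WHERE CarId IN (1, 2);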

Now we see our temporal table at work: we updated the rows in dbo.CarInventory and our historical table was automatically updated with our original values as well as timestamps for how long those rows existed in our table.

After a while, our customers return their rental cars:
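    UPDATE dbo.CarInventory SET InLot = 1, Mileage = 73  WHERE CarId = 1;
    UPDATE dbo.CarInventory SET InLot = 1, Mileage = 488 WHERE CarId = 2;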

It’s totally possible for someone to have driven 73 or 488 miles in a Chevy Malibu in under 4 minutes…ever hear the phrase “drive it like a rental”?

Our temporal table shows the current state of our rental cars: the customers have returned the cars to our lot and each car has accumulated some mileage.

Our historical table meanwhile got a copy of the rows from our temporal table right before our last UPDATE statement. It’s automatically keeping track of all of this history for us!

Continuing on, business is going well at the car rental agency. We get another customer to rent our silver Malibu:
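    UPDATE dbo.CarInventory SET InLot = 0 WHERE CarId = 2;  -- assuming CarId 2 is the silver Malibu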

Unfortunately, our second customer gets into a crash and destroys our car:
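    DELETE FROM dbo.CarInventory WHERE CarId = 2;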

The customer walked away from the crash unscathed; the same cannot be said for our profits.

With the deletion of our silver Malibu, our test data is complete.

Now that we have all of this great historically tracked data, how can we query it?

If we want to reminisce about better times when both cars were damage free and we were making money, we can write a query using SYSTEM_TIME AS OF to show us what our table looked like at that point in the past:
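(The timestamp is illustrative:)

    SELECT *
    FROM dbo.CarInventory
    FOR SYSTEM_TIME AS OF '2017-05-18 23:49:50';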

The good old days.

And if we want to do some more detailed analysis, like what rows have been deleted, we can query both temporal and historical tables normally as well:
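(One way to find rows that no longer exist in the current table:)

    SELECT h.*
    FROM dbo.CarInventoryHistory h
    LEFT JOIN dbo.CarInventory ci
        ON ci.CarId = h.CarId
    WHERE ci.CarId IS NULL;  -- no matching current row = the car was deleted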

C̶o̶l̶l̶i̶s̶i̶o̶n̶ Conclusion

Even with my car rental business not working out, at least we were able to see how SQL Server’s temporal tables helped us keep track of our car inventory data.

I hope you got as excited as I did the first time I saw temporal tables in action, especially when it comes to querying with FOR SYSTEM_TIME AS OF. Long gone are the days of needing complicated queries to rebuild data for a certain point in time.

 

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!