Building Dynamic Table-Driven Queries

Published on: 2018-07-10

This post is a response to this month’s T-SQL Tuesday #104 prompt, which I’m hosting! T-SQL Tuesday is a way for SQL Server bloggers to share ideas about different database and professional topics every month.

This month’s topic asks: what code would you hate to live without?


When given the choice between working on new projects versus maintaining old ones, I’m always more excited to work on something new.

That means that when I build something that is going to be used for years to come, I try to build it so that it will require as little maintenance as possible in the future.

One technique I use for minimizing maintenance is making my queries dynamic.  Dynamic queries, while not right for every situation, do one thing really well: they allow you to modify functionality without needing a complete rewrite when your data changes.  The way I look at it, it’s much easier to add rules and logic to rows in a table than to modify a table’s columns or structure.

To show you what I mean, let’s say I want to write a query selecting data from model.sys.database_permissions:
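
(The original query isn’t reproduced in this version of the post; below is a sketch of what the hardcoded version might look like, with every column of sys.database_permissions typed out by hand.)

```sql
-- Hardcoded query: every column name is spelled out manually
SELECT
    class,
    class_desc,
    major_id,
    minor_id,
    grantee_principal_id,
    grantor_principal_id,
    [type],
    permission_name,
    [state],
    state_desc
FROM
    model.sys.database_permissions;
```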

Writing the query as above is pretty simple, but it isn’t flexible in case the table structure changes in the future or if we want to programmatically write some conditions.

Instead of hardcoding the query as above, here is a general pattern I use for writing dynamic table-driven queries.  SQL Server has the handy views sys.all_views and sys.all_columns that show information about what columns are stored in each table/view:
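
(The metadata query isn’t shown in this version of the post; a quick sketch of how you might peek at those views for our table looks like this.)

```sql
-- What columns does sys.database_permissions expose?
SELECT
    v.[name] AS view_name,
    c.[name] AS column_name
FROM
    sys.all_views v
    INNER JOIN sys.all_columns c
        ON v.[object_id] = c.[object_id]
WHERE
    v.[name] = 'database_permissions'
ORDER BY
    c.column_id;
```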

Using these two views, I can use this dynamic SQL pattern to build the same exact query as above:
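
(My exact demo code isn’t included here; the sketch below shows the bare bones of the pattern: concatenate the column list from the metadata views, then execute the assembled string.)

```sql
DECLARE @Columns varchar(max);
DECLARE @Query   varchar(max);

-- Build the column list from metadata instead of hardcoding it
SELECT
    @Columns = COALESCE(@Columns + ', ', '') + QUOTENAME(c.[name])
FROM
    sys.all_views v
    INNER JOIN sys.all_columns c
        ON v.[object_id] = c.[object_id]
WHERE
    v.[name] = 'database_permissions';

-- Assemble the final SELECT statement and execute it
SET @Query = 'SELECT ' + @Columns + ' FROM model.sys.database_permissions;';

EXEC (@Query);
```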

The way building a dynamic statement like this works is that I build my SELECT statement as a string based on the values stored in the sys.all_columns view.  If a column is ever added to this view, my dynamic code will handle it (I wouldn’t expect this view to change much in future versions of SQL Server, but in other real-world tables I can regularly expect changing data).

Yes, writing certain queries dynamically like this means more up-front work.  It also means some queries won’t run to their full potential (not necessarily reusing plans, not tuning every individual query, needing to be thoughtful about SQL injection attacks, etc…).  There are A LOT of downsides to building queries dynamically like this.

But dynamically built queries make my systems flexible and drastically reduce the amount of work I have to do down the road.  In the next few weeks I hope to go into this type of dynamically built, table-driven process in more detail (so you should see the pattern in the example above get reused soon!).

Thanks for reading. You might also enjoy following me on Twitter.


How to Search and Destroy Non-SARGable Queries on Your Server

Published on: 2017-08-22

Unexpected SQL Server Performance Killers #3

Photo by Cibi Chakravarthi on Unsplash

In this series I explore scenarios that hurt SQL Server performance and show you how to avoid them. Pulled from my collection of “things I didn’t know I was doing wrong for years.”

Watch this post on YouTube.


Looking for a script to find non-SARGable queries on your server? Scroll to the bottom of this post.

What is a “SARGable” query?

Just because you add an index to your table doesn’t mean you get immediate performance improvement. A query running against that table needs to be written in such a way that it actually takes advantage of that index.

SARGable (“Search ARGument-able”) queries, therefore, are queries that are capable of utilizing indexes.

Examples please!

Okay, let’s see some examples of SARGable and non-SARGable queries using my favorite beverage data.

There are non-clustered indexes on the Name and CreateDate columns.
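
(The setup script isn’t included here; the demo table probably looks roughly like the sketch below. The table name matches the dbo.CoffeeInventory table from my other posts, and the exact column list is an assumption.)

```sql
-- Hypothetical shape of the demo data; only Name and CreateDate matter here
CREATE TABLE dbo.CoffeeInventory
(
    Id         int IDENTITY(1,1) PRIMARY KEY,
    Name       varchar(100),
    CreateDate datetime2
);

CREATE NONCLUSTERED INDEX IX_CoffeeInventory_Name
    ON dbo.CoffeeInventory (Name);

CREATE NONCLUSTERED INDEX IX_CoffeeInventory_CreateDate
    ON dbo.CoffeeInventory (CreateDate);
```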

First, let’s look at a non-SARGable query:
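
(The original query isn’t shown in this version of the post; a query along these lines reproduces the problem described below, using the assumed table from the sketch above.)

```sql
-- Non-SARGable: the CONVERT runs against the column side of the comparison
SELECT Name, CreateDate
FROM dbo.CoffeeInventory
WHERE CONVERT(CHAR(10), CreateDate, 120) = '2017-08-19';
```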

Although this query correctly filters our rows to a specific date, it does so with this lousy execution plan:

SQL Server has to perform an Index Scan, or in other words has to check every single page of this index, to find our ‘2017-08-19’ date value.

SQL Server does this because it can’t immediately look at the value in the index and see if it is equal to the ‘2017-08-19’ date we supplied; we told it first to convert every value in our column/index to a CHAR(10) date string so that it can be compared as a string.

Since SQL Server has to first convert every single date in our column/index to a CHAR(10) string, it ends up having to read every single page of our index to do so.

The better option here would be to leave the column/index value as a datetime2 datatype and instead convert the right-hand side of the comparison to a datetime2:
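
(A sketch of that rewrite, assuming the CreateDate values are stored without a time portion so an equality comparison still makes sense.)

```sql
-- SARGable: the column is left untouched; the literal is converted instead
SELECT Name, CreateDate
FROM dbo.CoffeeInventory
WHERE CreateDate = CONVERT(datetime2, '2017-08-19');
```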

Alternatively, SQL Server is smart enough to do this conversion implicitly for us if we just leave our ‘2017-08-19’ date as a string:
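
(A sketch of that simpler version, using the same assumed table.)

```sql
-- SQL Server implicitly converts the string literal to datetime2 for us
SELECT Name, CreateDate
FROM dbo.CoffeeInventory
WHERE CreateDate = '2017-08-19';
```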

In this scenario SQL gives us an Index Seek because it doesn’t have to modify any values in the column/index in order to be able to compare it to the datetime2 value that ‘2017-08-19’ got converted to.

This means SQL only has to read what it needs to output to the results. Much more efficient.

One more example

Based on the last example we can assume that any function, explicit or implicit, that is running on the column side of an operator will result in a query that cannot make use of index seeks, making it non-SARGable.

That means that instead of doing something non-SARGable like this:
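
(The original example isn’t shown here; a hypothetical year filter with a function wrapped around the column illustrates the same problem.)

```sql
-- Non-SARGable: YEAR() has to be evaluated for every row in the index
SELECT Name, CreateDate
FROM dbo.CoffeeInventory
WHERE YEAR(CreateDate) = 2017;
```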

We want to make it SARGable by doing this instead:
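
(The equivalent SARGable sketch compares the untouched column against a date range instead.)

```sql
-- SARGable: the column stands alone on the left side of the comparison
SELECT Name, CreateDate
FROM dbo.CoffeeInventory
WHERE CreateDate >= '2017-01-01'
  AND CreateDate <  '2018-01-01';
```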

In short, keep in mind whether SQL Server will have to modify the data in a column/index in order to compare it — if it does, your query probably isn’t SARGable and you are going to end up scanning instead of seeking.

OK, non-SARGable queries are bad…how do I check if I have any on my server?

The script below looks at cached query plans and searches them for any table or index scans. Next, it looks for scalar operators, and if it finds any it means we have ourselves a non-SARGable query. The fix is then to rewrite the query to be SARGable or add a missing index.
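
(The full script isn’t reproduced in this version of the post. The rough sketch below shows the general idea: search cached plans for scan operators, then check whether a predicate in the plan also applies a scalar function. Treat it as a starting point rather than a drop-in replacement.)

```sql
-- Rough sketch: find cached plans that scan a table or index AND apply a
-- scalar function (Convert or Intrinsic) somewhere inside a predicate
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (50)
    st.[text]     AS query_text,
    qp.query_plan,
    cp.usecounts
FROM
    sys.dm_exec_cached_plans cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE
    -- the plan contains a table or index scan...
    qp.query_plan.exist('//RelOp[@PhysicalOp = "Index Scan"
                                 or @PhysicalOp = "Clustered Index Scan"
                                 or @PhysicalOp = "Table Scan"]') = 1
    -- ...and a predicate somewhere in the plan applies a scalar function
    AND (qp.query_plan.exist('//Predicate//ScalarOperator//Convert') = 1
         OR qp.query_plan.exist('//Predicate//ScalarOperator//Intrinsic') = 1)
ORDER BY
    cp.usecounts DESC;
```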

 

I’ve found this script useful for myself, but if you find any issues with it please let me know, thanks!

Thanks for reading. You might also enjoy following me on Twitter.


Why Parameter Sniffing Isn’t Always A Bad Thing (But Usually Is)

Published on: 2017-08-08

Unexpected SQL Server Performance Killers #2

Photo by Jakob Owens on Unsplash

In this series I explore scenarios that hurt SQL Server performance and show you how to avoid them. Pulled from my collection of “things I didn’t know I was doing wrong for years.”

Prefer video? Watch this post on YouTube.

Last week we discussed how implicit conversions could be one reason why your meticulously designed indexes aren’t getting used.

Today let’s look at another reason: parameter sniffing.

Here’s the key: Parameter sniffing isn’t always a bad thing.

Most of the time it’s good: it means SQL Server is caching and reusing query plans to make your queries run faster.

Parameter sniffing only becomes a problem when the cached plan isn’t anywhere close to being the optimal plan for given input parameters.

So what’s parameter sniffing?

Let’s start with our table dbo.CoffeeInventory which you can grab from Github.

The key things to know about this table are that:

  1. We have a nonclustered index on our Name column.
  2. The data is not distributed evenly (we’ll see this in a minute)

Now, let’s write a stored procedure that will return a filtered list of coffees in our table, based on the country. Since there is no specific Country column, we’ll write it so it filters on the Name column:
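
(The procedure itself isn’t reproduced in this version of the post; below is a sketch of what it might look like. The parameter name @ParmCountry comes straight from the post, while the procedure name and column list are assumptions.)

```sql
-- Hypothetical procedure name; returns coffees whose Name starts with the country
CREATE OR ALTER PROCEDURE dbo.GetCoffeeInventoryByCountry
    @ParmCountry varchar(30)
AS
BEGIN
    SELECT Name, CreateDate
    FROM dbo.CoffeeInventory
    WHERE Name LIKE @ParmCountry + '%';
END;
GO
```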

Let’s take a look at parameter sniffing in action, then we’ll take a look at why it happens and how to solve it.
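
(Something like the following back-to-back executions, using the hypothetical procedure name from the sketch above.)

```sql
EXEC dbo.GetCoffeeInventoryByCountry @ParmCountry = 'Costa Rica';
EXEC dbo.GetCoffeeInventoryByCountry @ParmCountry = 'Ethiopia';
```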

Running the above statement gives us identical execution plans using table scans:

In this case we explicitly specified the parameter @ParmCountry. Sometimes SQL will parameterize simple queries on its own.

That’s weird. We have two query executions, they are using the same plan, and neither plan is using our nonclustered index on Name!

Let’s step back and try again. First, clear the query plan cache for this stored procedure:
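
(On a test box the bluntest way to do this is to clear the whole plan cache; on a real server you’d want to target just this procedure’s plan_handle instead.)

```sql
-- Wipes the entire plan cache (fine on a dev instance, risky in production)
DBCC FREEPROCCACHE;
```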

Next, execute the same stored procedure with the same parameter values, but this time with the ‘Ethiopia’ parameter value first. Look at the execution plan:
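
(Again using the hypothetical procedure name from earlier.)

```sql
EXEC dbo.GetCoffeeInventoryByCountry @ParmCountry = 'Ethiopia';
EXEC dbo.GetCoffeeInventoryByCountry @ParmCountry = 'Costa Rica';
```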

Now our nonclustered index on Name is being utilized. Both executions are still receiving the same plan as each other, but it’s a different plan than before.

We didn’t change anything with our stored procedure code, only the order that we executed the query with different parameters.

What the heck is going on here!?

This is an example of parameter sniffing. The first time a stored procedure (or query) is run on SQL Server, SQL will generate an execution plan for it and store that plan in the query plan cache:

All subsequent executions of that same query will go to the query cache to reuse that same initial query plan — this saves SQL Server time from having to regenerate a new query plan.

Note: A query with different values passed as parameters still counts as the “same query” in the eyes of SQL Server.

In the case of the examples above, the first time the query was executed was with the parameter for “Costa Rica”. Remember when I said this dataset was heavily skewed? Let’s look at some counts:
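
(The counts query isn’t shown here; a simple, assumed way to compare two of the countries is below.)

```sql
-- How skewed is the data? Compare row counts for two of the country prefixes
SELECT 'Costa Rica' AS Country, COUNT(*) AS TotalRows
FROM dbo.CoffeeInventory
WHERE Name LIKE 'Costa Rica%'
UNION ALL
SELECT 'Ethiopia', COUNT(*)
FROM dbo.CoffeeInventory
WHERE Name LIKE 'Ethiopia%';
```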

“Costa Rica” has more than 10,000 rows in this table, while all other country names are in the single digits.

This means that when we executed our stored procedure for the first time, SQL Server generated an execution plan that used a table scan because it thought this would be the most efficient way to retrieve 10,003 of the 10,052 rows.

This table scan query plan is only optimal for Costa Rica. Passing any other country name into the stored procedure would return only a handful of records, making it more efficient for SQL Server to use our nonclustered index.

However, since the Costa Rica plan was the first one to run, and therefore is the one that got added to the query plan cache, all other executions ended up using the same table scan execution plan.

After clearing our cached execution plan using DBCC FREEPROCCACHE, we executed our stored procedure again but with ‘Ethiopia’ as our parameter. SQL Server determined that a plan with an index seek is optimal to retrieve only 6 of the 10,052 rows in the table. It then cached that Index Seek plan, which is why the second time around the ‘Costa Rica’ parameter received the execution plan with Index Seek.

Ok, so how do I prevent parameter sniffing?

This question should really be rephrased as “how do I prevent SQL Server from using a sub-optimal plan from the query plan cache?”

Let’s take a look at some of the techniques.

1. Use WITH RECOMPILE or OPTION (RECOMPILE)

We can simply add these query hints to either our EXEC statement:
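
(With the hypothetical procedure name from earlier, the hint goes after the parameter list.)

```sql
EXEC dbo.GetCoffeeInventoryByCountry @ParmCountry = 'Ethiopia' WITH RECOMPILE;
```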

or to our stored procedure itself:
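
(In the procedure sketch, OPTION (RECOMPILE) gets tacked onto the query itself.)

```sql
CREATE OR ALTER PROCEDURE dbo.GetCoffeeInventoryByCountry
    @ParmCountry varchar(30)
AS
BEGIN
    SELECT Name, CreateDate
    FROM dbo.CoffeeInventory
    WHERE Name LIKE @ParmCountry + '%'
    OPTION (RECOMPILE); -- a fresh plan is compiled on every execution
END;
GO
```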

What the RECOMPILE hint does is force SQL Server to generate a new execution plan every time these queries run.

Using RECOMPILE eliminates our parameter sniffing problem because SQL Server will regenerate the query plan every single time we execute the query.

The disadvantage here is that we lose all benefit from having SQL Server save CPU cycles by caching execution plans.

If your parameter sniffed query is run frequently, RECOMPILE is probably a bad idea because you will incur a lot of overhead regenerating the query plan over and over.

If your parameter sniffed query doesn’t run often, or if the query doesn’t run often enough to stay in the query plan cache anyway, then RECOMPILE is a good solution.

2. Use the OPTIMIZE FOR query hint

Another option we have is to add one of the following hints to our query. Either one would get added to the same location as OPTION (RECOMPILE) did in the above stored procedure:
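
(The first variant ignores the sniffed parameter value entirely.)

```sql
OPTION (OPTIMIZE FOR UNKNOWN)
```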

or
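
(The second variant optimizes for a specific parameter value; ‘Ethiopia’ here is just an illustrative choice.)

```sql
OPTION (OPTIMIZE FOR (@ParmCountry = 'Ethiopia'))
```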

OPTIMIZE FOR UNKNOWN will use a query plan that’s generated from the average distribution stats for that column/index. Oftentimes it results in a mediocre or bad execution plan, so I don’t like using it.

OPTIMIZE FOR <value> creates a plan using whatever parameter value you specify. This is great if you know that, most of the time, your queries will be retrieving data that the plan for that value handles well.

In our examples above, if we know the value ‘Costa Rica’ is rarely queried, we might optimize for index seeks. Most queries will then run the optimal cached query plan and we’ll only take a hit when ‘Costa Rica’ is queried.

3. IF/ELSE

This solution allows for ultimate flexibility. Basically, you create different stored procedures that are optimized for different values. Those stored procedures have their plans cached, and then an IF/ELSE statement determines which procedure to run for a passed in parameter:
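
(A sketch of what that might look like, with made-up procedure names. Each inner procedure is pinned to a plan appropriate for its slice of the data; plain copies of the procedure would also each get their own cached plan.)

```sql
-- Hypothetical helper optimized for the huge 'Costa Rica' slice of the data
CREATE OR ALTER PROCEDURE dbo.GetCoffeeInventoryByCountry_ManyRows
    @ParmCountry varchar(30)
AS
    SELECT Name, CreateDate
    FROM dbo.CoffeeInventory
    WHERE Name LIKE @ParmCountry + '%'
    OPTION (OPTIMIZE FOR (@ParmCountry = 'Costa Rica'));
GO

-- Hypothetical helper optimized for countries that return only a few rows
CREATE OR ALTER PROCEDURE dbo.GetCoffeeInventoryByCountry_FewRows
    @ParmCountry varchar(30)
AS
    SELECT Name, CreateDate
    FROM dbo.CoffeeInventory
    WHERE Name LIKE @ParmCountry + '%'
    OPTION (OPTIMIZE FOR (@ParmCountry = 'Ethiopia'));
GO

-- The wrapper decides which helper (and therefore which cached plan) gets used
CREATE OR ALTER PROCEDURE dbo.GetCoffeeInventoryByCountry
    @ParmCountry varchar(30)
AS
BEGIN
    IF @ParmCountry = 'Costa Rica'
        EXEC dbo.GetCoffeeInventoryByCountry_ManyRows @ParmCountry = @ParmCountry;
    ELSE
        EXEC dbo.GetCoffeeInventoryByCountry_FewRows @ParmCountry = @ParmCountry;
END;
GO
```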

This option is more work (How do you determine what the IF condition should be? What happens when more data is added to the table over time and the distribution of data changes?) but will give you the best performance if you want your plans to be cached and be optimal for the data getting passed in.

Conclusion

  1. Parameter sniffing is only bad when your data values are unevenly distributed and cached query plans are not optimal for all values.
  2. SQL Server caches the query plan that is generated from the first run of a query/stored procedure with whatever parameter values were used during that first run.
  3. Using the RECOMPILE hint is a good solution when your queries aren’t run often or aren’t staying in the query plan cache most of the time anyway.
  4. The OPTIMIZE FOR hint is good to use when you can specify a value that will generate a query plan that is efficient for most parameter values and are OK with taking a hit for a sub-optimal plan on infrequently queried values.
  5. Using complex logic (like IF/ELSE) will give you ultimate flexibility and performance, but will also be the worst for long term maintenance.

 

Thanks for reading. You might also enjoy following me on Twitter.
