Introduction to SQL Server Execution Plans

Published on: 2019-07-23

Watch this week’s episode on YouTube!

I nearly always use execution plans as the starting point for SQL query performance troubleshooting. In this multi-part series, I plan to review the basics of execution plans and how you can use them to improve the performance of your own queries.

Execution Plans

SQL is a declarative language: instead of programming the details of how data should be retrieved, we describe what data we want and SQL Server figures out how it should return it to us.

For most queries that we want to performance tune, the SQL Server Query Optimizer considers multiple options for returning the data. It estimates the costs of these various approaches and lands on one that it thinks will return the data in an efficient manner. All of these options for retrieving the data are what are known as query execution plans.

Most of the time, this process works well and the Query Optimizer ends up choosing a query plan that returns the requested data quickly. This however does not mean that SQL Server chooses the “most efficient” query plan. Besides debating what “most efficient” means in different scenarios, SQL Server doesn’t want to spend hours calculating every possible way to return your data when it only takes a fraction of that time to find a plan that is “good enough”.

Problems arise when the Query Optimizer selects a query plan that isn’t very good at all. This can happen due to a number of issues that we’ll look at in future parts of this series.

Today we’ll focus on learning how we can obtain execution plans for our queries.

Viewing Execution Plans

To see an execution plan for your query, you can run SET SHOWPLAN_ALL ON. This will provide a text-based tree representation of the plan:

SET SHOWPLAN_ALL ON
GO
--Query
GO
SET SHOWPLAN_ALL OFF
GO
Text-based tree query plan

This may look familiar to you if you are used to looking at plans in other relational database environments that use similar tree-based plan representations. In SSMS we have a more visual option available to us though. If we click the “Display Estimated Execution Plan” button or run SET SHOWPLAN_XML ON and execute our query, we will get our graphical execution plan:

the Display Estimated Execution Plan button
Graphical estimated execution plan
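
For reference, the SET SHOWPLAN_XML version follows the same pattern as the SHOWPLAN_ALL example above:

SET SHOWPLAN_XML ON
GO
--Query
GO
SET SHOWPLAN_XML OFF
GO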

This graphical representation is my personal preference (and what we will be mostly focusing on in this series) but it’s important to know that as hinted by the last command, you can also right click on this graphical execution plan and choose “Show Execution Plan XML” to see the XML that is driving all of the visuals:

Show execution plan XML
XML execution plan

While most people don’t find the XML as easy to digest as the graphical execution plan, it’s worth knowing it is there: sometimes you will have to dive into the XML to find properties that don’t get displayed in the graphical version.

Actual Plans

So far, every plan we’ve looked at is what’s known as an “estimated” execution plan. The “estimated” name means it only contains estimates of how many rows will be processed based on internal metadata that SQL Server has available. You can view the “actual” execution plan by selecting the appropriate icon in SSMS:

Actual execution plan button
Actual execution plan

There is often confusion caused by the naming of “estimated” vs “actual” execution plans. The difference is that the estimated plan is calculated before executing the SQL statement, so it only has estimated metadata available to display, whereas the actual execution plan is that same estimated execution plan overlaid with runtime information such as how many rows were processed, how much memory was used, etc…
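
If you prefer retrieving the actual plan with T-SQL instead of the SSMS button, running SET STATISTICS XML ON will return the actual plan XML alongside your query’s results:

SET STATISTICS XML ON
GO
--Query
GO
SET STATISTICS XML OFF
GO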

Live Query Statistics

Live Query Statistics give you the best of both estimated and actual execution plans. With Live Query Statistics enabled, SQL Server provides the estimated execution plan but overlays live runtime statistics on top of it as the query executes in real-time.

Live query statistics button
Live query statistics running

Live Query Statistics are nice because they allow you to often see “where” in the execution plan a query is experiencing a performance bottleneck. This is particularly helpful if you are new to analyzing execution plans and haven’t yet learned all of the common signs and operators that might indicate poor performance. It is also helpful when your query is performing so poorly that you are unable to retrieve an actual plan for it (since it executes for what seems like forever).
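
The same live, per-operator numbers can also be watched from another session by querying sys.dm_exec_query_profiles (a sketch; the session running the query needs live statistics/profiling enabled for rows to appear):

SELECT
    session_id,
    node_id,
    physical_operator_name,
    row_count,          --rows produced so far
    estimate_row_count  --the optimizer's estimate
FROM
    sys.dm_exec_query_profiles
ORDER BY
    session_id,
    node_id;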

Historical Plans

Calculating query plans isn’t free, so SQL Server caches query plans for reuse. These cached plans can be viewed by querying sys.dm_exec_cached_plans and cross applying the sys.dm_exec_query_plan function:

SELECT 
    *
FROM 
    sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st;
Cached query plans

It’s worth noting that this set of DMVs only shows plans that are still in the cache, and those plans do not contain actual runtime statistics (among other limitations).

If you have Query Store enabled on your database, you can also access the query plans stored in the Query Store DMVs (or via the Query Store GUI):

SELECT 
    CAST(p.query_plan AS XML) AS query_plan_xml, 
    *
FROM 
    sys.query_store_query AS q
    INNER JOIN sys.query_store_plan AS p
        ON q.query_id = p.query_id;

Conclusion

Regardless of how and where you decide to retrieve your execution plan from, all of the above techniques will help provide insight into how SQL Server is obtaining the data that you specified in your query.


12 Ways To Rewrite SQL Queries for Better Performance

Published on: 2019-05-28

Watch this week’s video on YouTube. Thanks to you, we just crossed the 2k subscriber mark!

Over the past several weeks I’ve been exploring ways to rewrite queries to improve execution performance.

I learned a lot of these techniques over time from trial and error, attending presentations, reading blog posts, speaking to other DBAs and developers, etc… but never knew of a good resource that summarized these techniques in one place.

This post will be a quick round-up of everything I’ve covered so far, as well as 8 additional techniques that I use occasionally but that don’t necessarily require a full detailed post to explain.

Why Rewrite Queries?

I often find myself working in environments where modifying indexes or changing server settings is out of the question when performance tuning. I usually run into these scenarios when dealing with:

  • Vendor databases
  • “Fragile” systems
  • Not enough disk space
  • Limited tooling/ad hoc analysis
  • Features limited by security software

While solving the root cause of a performance problem is always preferable, sometimes the only way I’m able to fix problems in these environments is by rewriting the queries.

I decided to write this summary post because it is a resource I would have loved to have when starting out. Sometimes it can be easy to get “writer’s block” when trying to think of ways to rewrite a SQL query, so hopefully this list of techniques can provide ideas and get your creative juices flowing.

So, without further ado, here is a list of 12 techniques in no particular order that you can use to rewrite your queries to change their performance.

12 Ways to Refactor a Query to Change Performance

1. Window functions vs GROUP BY

Sometimes window functions rely a little too much on tempdb and blocking operators to accomplish what you ask of them. While using them is always my first choice because of their simple syntax, if they perform poorly you can usually rewrite them as an old-fashioned GROUP BY to achieve better performance.
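
As a quick sketch (using a hypothetical dbo.Sales table), a windowed total like the first query below can be rewritten with a GROUP BY derived table joined back to the detail rows:

--Window function version
SELECT
    CustomerId,
    Amount,
    SUM(Amount) OVER (PARTITION BY CustomerId) AS CustomerTotal
FROM
    dbo.Sales;

--GROUP BY rewrite: aggregate once, then join back to the detail rows
SELECT
    s.CustomerId,
    s.Amount,
    t.CustomerTotal
FROM
    dbo.Sales AS s
    INNER JOIN
    (
        SELECT CustomerId, SUM(Amount) AS CustomerTotal
        FROM dbo.Sales
        GROUP BY CustomerId
    ) AS t
        ON s.CustomerId = t.CustomerId;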

2. Correlated subqueries vs derived tables

Many people like using correlated subqueries because the logic is often easy to understand, however switching to derived table queries often produces better performance due to their set-based nature.
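
For example (a sketch against the StackOverflow Users and Badges tables used elsewhere in this series), the correlated version on top can become the derived table version below:

--Correlated subquery: conceptually runs once per outer row
SELECT
    u.Id,
    (SELECT MAX(b.Date) FROM dbo.Badges AS b WHERE b.UserId = u.Id) AS LastBadgeDate
FROM
    dbo.Users AS u;

--Derived table: one set-based aggregation joined to the outer table
SELECT
    u.Id,
    b.LastBadgeDate
FROM
    dbo.Users AS u
    LEFT JOIN
    (
        SELECT UserId, MAX(Date) AS LastBadgeDate
        FROM dbo.Badges
        GROUP BY UserId
    ) AS b
        ON u.Id = b.UserId;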

3. IN vs UNION ALL

When filtering rows of data on multiple values in tables with skewed distributions and non-covering indexes, writing your logic into multiple statements joined with UNION ALLs can sometimes generate more efficient execution plans than just using IN or ORs.
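
Using the dbo.Badges table as a sketch, each UNION ALL branch gets its own chance at an efficient plan instead of one plan having to cover every value:

--IN version: one plan must handle all three values
SELECT UserId, Name, Date
FROM dbo.Badges
WHERE Name IN ('Popular Question', 'Notable Question', 'Stellar Question');

--UNION ALL version: each branch can be optimized independently
SELECT UserId, Name, Date FROM dbo.Badges WHERE Name = 'Popular Question'
UNION ALL
SELECT UserId, Name, Date FROM dbo.Badges WHERE Name = 'Notable Question'
UNION ALL
SELECT UserId, Name, Date FROM dbo.Badges WHERE Name = 'Stellar Question';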

4. Temporary Staging Tables

Sometimes the query optimizer struggles to generate an efficient execution plan for complex queries. Breaking a complex query into multiple steps that utilize temporary staging tables can provide SQL Server with more information about your data. They also cause you to write simpler queries which can cause the optimizer to generate more efficient execution plans as well as allow it to reuse result sets more easily.

5. Forcing Table Join Orders

Sometimes outdated statistics and other insufficient information can cause the SQL Server query optimizer to join tables in a less than ideal sequence. Adam Machanic has a fantastic presentation on forcing table join order with blocking operators without having to resort to join hints.
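
One flavor of this idea (a rough sketch, not Adam’s exact technique) is to use TOP with an ORDER BY inside a derived table; the resulting sort acts as a blocking operator that encourages SQL Server to fully process that input first:

SELECT
    b.UserId,
    u.DisplayName
FROM
    (
        --TOP + ORDER BY introduces a blocking sort, nudging the
        --optimizer to process this input before joining
        SELECT TOP (2147483647) UserId, Name, Date
        FROM dbo.Badges
        ORDER BY UserId
    ) AS b
    INNER JOIN dbo.Users AS u
        ON b.UserId = u.Id;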

6. DISTINCT with few unique values

Using the DISTINCT operator is not always the fastest way to return the unique values in a dataset. In particular, Paul White uses recursive CTEs to return distinct values on large datasets with relatively few unique values. This is a great example of solving a problem using a very creative solution.
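
A sketch of the recursive CTE pattern (it assumes an index on the column so each step is a quick seek for the next value):

WITH DistinctNames AS
(
    --Anchor: the lowest value
    SELECT MIN(Name) AS Name
    FROM dbo.Badges
    UNION ALL
    --Recurse: seek the next value greater than the previous one
    SELECT (SELECT MIN(b.Name) FROM dbo.Badges AS b WHERE b.Name > d.Name)
    FROM DistinctNames AS d
    WHERE d.Name IS NOT NULL
)
SELECT Name
FROM DistinctNames
WHERE Name IS NOT NULL
OPTION (MAXRECURSION 0);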

7. Eliminate UDFs

UDFs often cause poor query performance due to forcing serial plans and causing inaccurate estimates. One way to possibly improve the performance of queries that call UDFs is to try and inline the UDF logic directly into the main query. With SQL Server 2019 this will be something that happens automatically in a lot of cases, but as Brent Ozar points out you might occasionally have to manually inline a UDF’s functionality to get the best performance.
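
For instance, given a hypothetical scalar UDF dbo.GetDisplayName that looks up a user’s display name, the per-row call on top can be replaced with the join below:

--Before: scalar UDF executed once per row (hypothetical function)
SELECT
    dbo.GetDisplayName(b.UserId) AS DisplayName
FROM
    dbo.Badges AS b;

--After: the UDF's lookup logic inlined as a join
SELECT
    u.DisplayName
FROM
    dbo.Badges AS b
    LEFT JOIN dbo.Users AS u
        ON b.UserId = u.Id;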

8. Create UDFs

Sometimes a poorly configured server will parallelize queries too frequently and cause worse performance than the equivalent serial plans. In those cases, putting the troublesome query logic into a scalar or multi-statement table-valued function might improve performance since they will force that part of the plan to run serially. Definitely not a best practice, but it is one way to force serial plans when you can’t change the cost threshold for parallelism.
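
A minimal sketch of the scalar UDF variant (hypothetical function; note that SQL Server 2019’s scalar UDF inlining may undo this trick):

--A do-nothing scalar UDF; merely referencing it keeps the plan serial
--(prior to SQL Server 2019 scalar UDF inlining)
CREATE OR ALTER FUNCTION dbo.ForceSerial(@value int)
RETURNS int
AS
BEGIN
    RETURN @value;
END;
GO

SELECT
    UserId,
    Name,
    dbo.ForceSerial(1) AS SerialMarker
FROM
    dbo.Badges;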

9. Data Compression

Not only does data compression save space, but on certain workloads it can actually improve performance. Since compressed data can be stored in fewer pages, disk read speeds improve, but perhaps more importantly the compressed data allows more to be stored in SQL Server’s buffer pool, increasing the potential for SQL Server to reuse data already in memory.
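
Estimating and enabling page compression is straightforward:

--Estimate how much space PAGE compression would save first
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'Badges',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

--Then rebuild the table with compression enabled
ALTER TABLE dbo.Badges
REBUILD WITH (DATA_COMPRESSION = PAGE);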

10. Indexed Views

When you can’t add new indexes to existing tables, you might be able to get away with creating a view on those tables and indexing the view instead. This works great for vendor databases where you can’t touch any of the existing objects.
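
A minimal sketch: the view must be schema-bound, and the first index on it must be unique and clustered (COUNT_BIG(*) is required when grouping):

CREATE VIEW dbo.vBadgeCountsByUser
WITH SCHEMABINDING
AS
SELECT
    UserId,
    COUNT_BIG(*) AS BadgeCount
FROM
    dbo.Badges
GROUP BY
    UserId;
GO

--Materialize the view by indexing it
CREATE UNIQUE CLUSTERED INDEX CIX_vBadgeCountsByUser
    ON dbo.vBadgeCountsByUser (UserId);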

11. Switch cardinality estimators

The newer cardinality estimator introduced in SQL Server 2014 improves the performance of many queries. However, in some specific cases it can make queries perform more slowly. In those cases, a simple query hint is all you need to force SQL Server to change back to the legacy cardinality estimator.
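
On SQL Server 2016 SP1 and later the hint looks like this (older builds can use OPTION (QUERYTRACEON 9481) instead):

SELECT
    UserId,
    COUNT(*) AS BadgeCount
FROM
    dbo.Badges
GROUP BY
    UserId
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));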

12. Copy the data

If you can’t get better performance by rewriting a query, you can always copy the data you need to a new table in a location where you CAN create indexes and do whatever other helpful transformations you need to do ahead of time.
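
A sketch, assuming a hypothetical destination table in a database you are allowed to modify:

--Copy just the data you need somewhere you control...
SELECT
    UserId,
    Name,
    Date
INTO
    dbo.BadgesReporting --hypothetical table you are free to index
FROM
    dbo.Badges
WHERE
    Date >= '2010-01-01';

--...then add whatever indexes help your queries
CREATE CLUSTERED INDEX CIX_BadgesReporting
    ON dbo.BadgesReporting (Date, Name);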

…And more

By no means is this list exhaustive. There are so many ways to rewrite queries, and not all of them will work all the time.

The key is to think about what the query optimizer knows about your data and why it’s choosing the plan it is. Once you understand what it’s doing, you can start getting creative with various query rewrites that address that issue.


Temporary Staging Tables

Published on: 2019-05-07

Watch this week’s episode on YouTube.

SQL Server Spool operators are a mixed bag. On one hand, they can negatively impact performance when writing data to disk in tempdb. On the other hand, they allow filtered and transformed result sets to be temporarily staged, making it easier for that data to be reused again during that query execution.

The problem with the latter scenario is that SQL Server doesn’t always decide to use a spool; often it’s happy to re-read (and re-process) the same data repeatedly. When this happens, one option you have is to explicitly create your own temporary staging table that will help SQL Server cache data it needs to reuse.

This post is a continuation of my series to document ways of refactoring queries for improved performance. I’ll be using the StackOverflow 2014 data dump for these examples if you want to play along at home.

No spools

Let’s start by looking at the following query:

WITH January2010Badges AS ( 
	SELECT 
		UserId,
		Name,
		Date
	FROM 
		dbo.Badges 
	WHERE 
		Date >= '2010-01-01' 
		AND Date <= '2010-02-01' 
), Next10PopularQuestions AS ( 
	SELECT TOP 10 * FROM (SELECT UserId, Name, Date FROM January2010Badges WHERE Name = 'Popular Question' ORDER BY Date OFFSET 10 ROWS) t
), Next10NotableQuestions AS ( 
	SELECT TOP 10 * FROM (SELECT UserId, Name, Date FROM January2010Badges WHERE Name = 'Notable Question' ORDER BY Date OFFSET 10 ROWS) t
), Next10StellarQuestions AS ( 
	SELECT TOP 10 * FROM (SELECT UserId, Name, Date FROM January2010Badges WHERE Name = 'Stellar Question' ORDER BY Date OFFSET 10 ROWS) t
)
SELECT UserId, Name FROM Next10PopularQuestions 
UNION ALL 
SELECT UserId, Name FROM Next10NotableQuestions
UNION ALL 
SELECT UserId, Name FROM Next10StellarQuestions 

Note: This is not necessarily the most efficient way to write this query, but it makes for a good demo.

This query is returning offset results for different badges from one month of data in the dbo.Badges table. While the query is using a CTE to make the logic easy to understand (i.e. filter the data to just January 2010 results and then calculate our offsets based on those results), SQL Server isn’t actually saving the results of our January2010Badges expression in tempdb to get reused. If we view the execution plan, we’ll see it reading from our dbo.Badges clustered index three times:

Three clustered index scans
Table 'Badges'. Scan count 27, logical reads 151137, ...

That means every time SQL Server needs to run our offset logic in each “Next10…” expression, it needs to rescan the entire clustered index to first filter on the Date column and then the Name column. This results in about 150,000 logical reads.

Divide and Conquer

One potential solution would be to add a nonclustered index that would allow SQL Server to avoid scanning the entire clustered index three times. But since this series is about improving performance without adding permanent indexes (since sometimes you are stuck in scenarios where you can’t easily add or modify an index), we’ll look at mimicking a spool operation ourselves.

We’ll use a temporary table to stage our filtered January 2010 results so SQL Server doesn’t have to scan the clustered index each time it needs to perform logic on that subset of data. For years I’ve referred to this technique as “temporary staging tables” or “faking spools”, but at a recent SQL Saturday Jeff Moden told me he refers to it as “Divide and Conquer”. I think that’s a great name, so I’ll use it going forward. Thanks Jeff!

First let’s divide our query so that we insert our January 2010 data into its own temporary table:

DROP TABLE IF EXISTS #January2010Badges;
CREATE TABLE #January2010Badges
(
	UserId int,
	Name nvarchar(40),
	Date datetime,
	CONSTRAINT PK_NameDateUserId PRIMARY KEY CLUSTERED (Name,Date,UserId)
);

INSERT INTO #January2010Badges
SELECT
	UserId,
	Name,
	Date
FROM 
	dbo.Badges
WHERE 
	Date >= '2010-01-01' 
	AND Date <= '2010-02-01'; 

You’ll notice I added a clustered primary key which will index the data in an order that will make filtering easier.

Next, we conquer by changing the rest of our query to read from our newly created temp table:

WITH Next10PopularQuestions AS ( 
	SELECT TOP 10 * FROM (SELECT UserId, Name, Date FROM #January2010Badges WHERE Name = 'Popular Question' ORDER BY Date OFFSET 10 ROWS) t
), Next10NotableQuestions AS ( 
	SELECT TOP 10 * FROM (SELECT UserId, Name, Date FROM #January2010Badges WHERE Name = 'Notable Question' ORDER BY Date OFFSET 10 ROWS) t
), Next10StellarQuestions AS ( 
	SELECT TOP 10 * FROM (SELECT UserId, Name, Date FROM #January2010Badges WHERE Name = 'Stellar Question' ORDER BY Date OFFSET 10 ROWS) t
)
SELECT UserId, Name FROM Next10PopularQuestions 
UNION ALL 
SELECT UserId, Name FROM Next10NotableQuestions 
UNION ALL 
SELECT UserId, Name FROM Next10StellarQuestions 

Running this all together, we get the following plans and logical read counts:

Clustered index seeks
Table 'Badges'. Scan count 9, logical reads 50379, ...

(42317 rows affected)

(20 rows affected)
Table '#January2010Badges______________________________00000000003B'. Scan count 3, logical reads 12, ...

In this version of the query, SQL Server scans the clustered index a single time and saves that data to a temporary table. In the subsequent SELECTs, it seeks from this much smaller temporary table instead of going back to the clustered index, reducing the total number of reads to 50379 + 12 = 50391: about a third of what the original query was doing.

Temporary Staged Data

At the end of the day, you can hope that SQL Server creates a spool to temporarily stage your data, or you can be explicit about it and do it yourself. Either option is going to increase usage on your tempdb database, but at least by defining the temporary table yourself you can customize and index it to achieve maximum reuse and performance for your queries.

It’s important to note that this is not a technique you want to abuse: writing and reading too much data from tempdb can cause contention problems that can make you worse off than having allowed SQL Server to scan your clustered index three times. However, when implemented sparingly and for good reasons, this technique can greatly improve the performance of certain queries.
