Correlated Subqueries vs Derived Tables

Published on: 2019-04-23

Watch this week’s episode on YouTube.

Correlated subqueries provide an intuitive syntax for writing queries that return related data. However, they often perform poorly because the subquery has to execute once for every value it is correlated on.

The good news is that many correlated subqueries can be rewritten to use a derived table for improved performance.

This post is a continuation of my series to document ways of refactoring queries for improved performance. I’ll be using the StackOverflow 2014 data dump for these examples if you want to play along at home.

When was each user’s first badge awarded?

StackOverflow awards users badges for things like asking good questions, hitting certain vote thresholds, and more.

I want to write a query that figures out the date each user received their first badge.

Using a correlated subquery, I might write my query as follows:

SET STATISTICS IO, TIME ON;

SELECT DISTINCT
    UserId,
    FirstBadgeDate = (SELECT MIN(Date) FROM dbo.Badges i WHERE o.UserId = i.UserId)
FROM
    dbo.Badges o

The syntax of the correlated subquery here makes it clear that for each UserId we want to return the MIN(Date) associated with that UserId from the badges table.

Looking at the execution plan and time and IO statistics (abbreviated for clarity) we see:

Execution plan for correlated subquery
(1318413 rows affected)
Table 'Worktable'. Scan count 0, logical reads 0, ...
Table 'Workfile'. Scan count 0, logical reads 0, ...
Table 'Badges'. Scan count 2, logical reads 43862, ...

(1 row affected)

 SQL Server Execution Times:
   CPU time = 3625 ms,  elapsed time = 8347 ms.

So, what’s going on here? We read ~8 million rows of data from our index on the dbo.Badges table and then calculate the MIN(Date) for each UserId. This is the “correlated” part of our query, which then gets joined back to the dbo.Badges table using a Hash Match join operator.

Our join doesn’t eliminate any rows, so the ~8 million rows continue flowing through until near the very end, where another Hash Match operator dedupes the rows for the DISTINCT part of the query, reducing the final result to ~1 million rows.

Eliminating the Correlated Subquery

What would things look like if we rewrote this correlated subquery as a derived table in the FROM clause?

SELECT DISTINCT
    o.UserId,
    FirstBadgeDate
FROM
    dbo.Badges o
    INNER JOIN 
        (SELECT 
            UserId, 
            MIN(Date) as FirstBadgeDate 
        FROM 
            dbo.Badges GROUP BY UserId
        ) i
    ON o.UserId = i.UserId

Execution plan with derived table
(1318413 rows affected)
Table 'Workfile'. Scan count 0, logical reads 0, ...
Table 'Worktable'. Scan count 0, logical reads 0, ...
Table 'Badges'. Scan count 2, logical reads 43862, ...

(1 row affected)

 SQL Server Execution Times:
   CPU time = 2516 ms,  elapsed time = 5350 ms.

If we look at the IO statistics, it’s interesting to note that there is no difference in reads between these two queries.

Looking at the CPU time statistics however, this derived table query consistently comes in about 33% faster than the correlated subquery example. Why is that?

Looking at the execution plan reveals some details: in this plan, you can see we read in from the dbo.Badges index and go straight into a Hash Match operator. The top stream is deduping our data on UserId, taking it from ~8 million rows to ~1 million rows. The bottom stream does the same deduping while also calculating the MIN(Date) for each UserId grouping.

When both of those streams join together, the final hash match operator is only joining ~1 million rows with ~1 million rows (as opposed to the first query that was joining ~8 million rows with ~1 million rows).

This last join is the reason for the performance improvement: because this execution plan reduces the number of rows sooner, the final join ends up having to do less work. Additionally, the records were already distinct going into the join, saving us from an extra deduping step.

Further Reducing Redundancy

You may have noticed that both of these queries are a little redundant: they both read from the dbo.Badges table twice when once would do. The best option for improving performance is to rewrite the query as:

SELECT 
    UserId, 
    MIN(Date) as FirstBadgeDate 
FROM 
    dbo.Badges 
GROUP BY 
    UserId
The best execution plan

While this is the most efficient query of the three, most real-world queries and scenarios aren’t this easy to simplify.

When your queries have more joins, WHERE clauses, and other complexity, knowing how to refactor a correlated subquery into a derived table is critical to potentially improving performance.
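
For illustration, here's a hedged sketch showing that the same refactor holds when a WHERE clause is involved (the date filter below is made up for the example and isn't part of the queries above) – the key is that the filter has to be repeated inside the derived table:

-- Hypothetical filter: first badge per user, counting only badges awarded on or after 2010-01-01
SELECT DISTINCT
    UserId,
    FirstBadgeDate = (SELECT MIN(i.Date)
                      FROM dbo.Badges i
                      WHERE i.UserId = o.UserId
                        AND i.Date >= '2010-01-01')
FROM
    dbo.Badges o
WHERE
    o.Date >= '2010-01-01';

-- The same query refactored: the same filter is applied both to the outer table and inside the derived table
SELECT DISTINCT
    o.UserId,
    i.FirstBadgeDate
FROM
    dbo.Badges o
    INNER JOIN
        (SELECT
            UserId,
            MIN(Date) AS FirstBadgeDate
        FROM
            dbo.Badges
        WHERE
            Date >= '2010-01-01'
        GROUP BY
            UserId
        ) i
        ON o.UserId = i.UserId
WHERE
    o.Date >= '2010-01-01';

Forget the filter inside the derived table and the two versions stop returning the same results, so double check equivalence whenever you refactor.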

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

Window Functions vs GROUP BYs

Published on: 2019-04-16

Watch this week’s episode on YouTube.

There are many options available for improving the performance of a query: indexes, statistics, configuration settings, etc…

However, not all environments allow you to use those features (e.g. vendor databases), leaving query rewriting as the only option.

This is the first post in a series to document common ways to refactor queries without otherwise altering the database. The goal of these posts will be to provide examples of performance pitfalls in queries and how to rewrite those queries to generate different query plans that (hopefully) improve performance.

I’ll be using the StackOverflow 2014 data dump for these examples if you want to play along at home.

Who was first to earn each badge?

StackOverflow awards users badges for things like asking good questions, hitting certain vote thresholds, and more.

I want to write a query that figures out who is the first person awarded each badge. In cases where there is a tie for the first person to receive that badge, I want to return the user with the lowest UserId.

Window functions make this type of question easy to write a query for:

SELECT DISTINCT
    Name,
    FIRST_VALUE(UserId) OVER (PARTITION BY Name ORDER BY Date,UserId) AS UserId
FROM
    dbo.Badges b
ORDER BY
    Name,UserId

If you’ve used FIRST_VALUE before, this query should be easy to interpret: for each badge Name, return the first UserId sorted by Date (earliest date to receive the badge) and UserId (pick the lowest UserId when there are ties on Date).

This query was easy to write and is simple to understand. However, the performance is not great: it takes 46 seconds to finish returning results on my machine.

Window function execution plan

Note: I assumed this table started off with the following index:

CREATE NONCLUSTERED INDEX IX_Badges__Name_Date_UserId ON [dbo].[Badges] (Name,Date,UserId);

Why so slow?

If we SET STATISTICS IO ON we’ll notice that SQL Server reads 46767 pages from a nonclustered index. Since we aren’t filtering our data, there’s not much we can do to make that faster.
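
For reference, that setting can be turned on for your session before running the query:

-- Show page read counts in the Messages tab for subsequent queries in this session
SET STATISTICS IO ON;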

Reading right to left, next up we see two Segment operators. These don’t add much overhead: since our data is already sorted on our segments/groups, it’s trivial for SQL Server to identify when the sorted rows change values.

Next up is the Window Spool operator which “Expands each row into the set of rows that represent the window associated with it.” While it looks innocent by having a low relative cost, this operator is writing 8 million rows/reading 16 million rows (because of how Window Spool works) from tempdb. Ouch.

After that, the Stream Aggregate and Compute Scalar operators check whether the first value in each window returned from the Window Spool is null and then return the first non-null value. These operations are also relatively painless since the data flowing through is already sorted.

The Hash Match operator then dedupes the data for our DISTINCT, and finally we sort the remaining ~2k rows for our output.

So while our query looks simple, the fact that our whole table of data is getting written to and read from tempdb before being deduped and sorted is a real performance killer.

Removing tempdb usage the old-fashioned way

When I say “the old-fashioned way”, I mean rewriting our window function to use more traditional aggregate functions and a GROUP BY:

SELECT
    b.Name,
    MIN(b.UserId) AS UserId
FROM
    dbo.Badges b
    INNER JOIN
    (
    SELECT
        Name,
        MIN(Date) AS Date
    FROM
        dbo.Badges
    GROUP BY
        Name
    ) m
        ON b.Name = m.Name
        AND b.Date = m.Date
GROUP BY
    b.Name
ORDER BY
    Name,UserId

I think by most people’s standards, this query is not as easy to read. While not overly complex, it does take up a lot more screen space and is complicated by multiple GROUP BYs and a derived table.

And while the query may look ugly on the outside, it’s what lies below the surface that really matters:

GROUP BY execution plan

What a beautifully simple execution plan. And it finishes executing almost instantly.

Let’s break down what’s going on. First, we start with similar Index Scan and Segment operators as the previous query so no real difference there.

At this point you may have noticed that while the written query uses two GROUP BYs and two MIN functions that are then joined together, the execution plan doesn’t contain two Index Scans, two sets of aggregations, or a join.

SQL Server can use an optimization with the Top operator that allows it to take the sorted data and return only the Name and UserId rows for the top Name and Date values within each group (essentially matching the MIN logic). This is a great example of how the optimizer can take a declarative SQL query and decide for itself how to efficiently return the data it needs.

At this point, the Top operator filters our 8 million rows down to around 30k rows. 30k rows get deduped a lot faster with our Stream Aggregate operator, and since the data is already sorted we don’t need an extra Sort operator.
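
To make that Top-per-group logic concrete, here's a rough sketch of a logically equivalent query written with an explicit TOP (1) per badge Name – this is only an illustration of what the optimizer is effectively doing, not a query taken from the plan above:

-- Illustration only: explicitly pick the first UserId per badge Name, ordered by Date then UserId
SELECT
    n.Name,
    t.UserId
FROM
    (SELECT DISTINCT Name FROM dbo.Badges) n
    CROSS APPLY
        (SELECT TOP (1)
            b.UserId
        FROM
            dbo.Badges b
        WHERE
            b.Name = n.Name
        ORDER BY
            b.Date, b.UserId
        ) t
ORDER BY
    n.Name, t.UserId;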

Overall, this second query runs so much better than the original because SQL Server doesn’t have to go to tempdb for any operations – all the data is pre-sorted in the index and can flow through.

So I shouldn’t use Window Functions?

Not necessarily – it comes down to trade-offs.

I almost always start with a window function because of how easy they are to write and read. Plus, I think they’re fun to write.

However, if the window function is having to read/write a lot of data to tempdb and it’s affecting the overall performance of your query, a rewrite may be necessary.

In that case, I’d much rather take the more verbose syntax to get a 2000x performance boost.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

Searching Complex JSON Data

Published on: 2019-02-26

Watch this week’s video on YouTube

Computed column indexes make querying JSON data fast and efficient, especially when the schema of the JSON data is the same throughout a table.

It’s also possible to break out a well-known complex JSON structure into multiple SQL Server tables.

However, what happens if you have different JSON structures being stored in each row of your database and you want to write efficient search queries against all of the rows of your complex JSON strings?

Complex JSON

Let’s start out by creating a staging table that contains various fragments of JSON stored in a nvarchar column:

DROP TABLE IF EXISTS dbo.ImportedJson;
GO
CREATE TABLE dbo.ImportedJson
(
    Id int IDENTITY,
    JsonValue nvarchar(max)
);
GO

INSERT INTO dbo.ImportedJson (JsonValue) VALUES (N'{ 
    "Property1" : "Value1", 
    "Property2" : [1,2,3]
}');

INSERT INTO dbo.ImportedJson (JsonValue) VALUES (N'{ 
    "Property1" : "Value2", 
    "Property3" : [1,2,3], 
    "Property4" : ["A","B","C",null], 
    "Property5" : { 
                    "SubProp1": "A", 
                    "SubProp2": { 
                                    "SubSubProp1":"B", 
                                    "SubSubProp2": 1.2,
                                    "SubSubProp3" : true
                                } 
                    }, 
    "Property6" : [{"ArrayProp":"A"},{"ArrayProp":"B"}], 
    "Property7" : 123, 
    "Property8" : null 
}');

INSERT INTO dbo.ImportedJson (JsonValue) VALUES (N'{ 
    "Property8" : "Not null", 
    "Property9" : [4,5,6]
}');


SELECT * FROM dbo.ImportedJSON;

And the results: 

Search Queries

If I want to search these values I have a few options.

First, I could write something like:

SELECT * FROM dbo.ImportedJSON WHERE JsonValue LIKE '%"Property4" : [[]"A%';

But that technique is fiddly (note the [[] needed to escape the literal bracket in the LIKE pattern), it's difficult to use on data that I’m not familiar with, and it will run slowly because it won’t be able to seek to the data in any indexes.

A second option is to create something like a full text index, but unlike full text indexes on XML columns, I will have to fight with all of the quotes and colons and curly braces since there is no support for JSON. Yuck.

Option 3: Search Table

Option 3 is my favorite: normalize the data into key and value columns that are easy to search:

WITH JSONRoot AS ( 
    SELECT 
        Id as RowId,
        CAST(hierarchyid::GetRoot().ToString() + CAST(ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS NVARCHAR(4000)) + '/' AS NVARCHAR(4000)) as [HierarchyId], 
        [key],
        [value],
        CAST([type] AS INT) AS [type] 
    FROM 
        dbo.ImportedJson
        CROSS APPLY OPENJSON(JsonValue,'$') 
    UNION ALL 
    SELECT 
        RowId,
        CAST(JSONRoot.[HierarchyId] + CAST(ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS NVARCHAR(4000)) + '/' AS NVARCHAR(4000)), 
        CASE WHEN JSONRoot.[type] = 4 THEN JSONRoot.[key]+'['+t.[key]+']' ELSE t.[key] END,
        t.[value],
        CAST(t.[type] AS INT) 
    FROM 
        JSONRoot 
        CROSS APPLY OPENJSON(JSONRoot.[value],'$') t 
    WHERE 
        JSONRoot.[type] > 3 /* Only parse complex data types */
)
SELECT 
    RowId,
    CAST([HierarchyId] AS HierarchyId) AS [HierarchyId],
    [key],
    [value],
    [type]
FROM 
    JSONRoot 
ORDER BY 
    RowId,
    [HierarchyId]
GO

Results:

This query parses each property of the original JSON input so that each key-value pair gets put on its own row. Complex JSON objects are broken out into multiple rows, and a HierarchyId is included to maintain parent-child relationships if needed.

Having all of this complex JSON parsed out into a key value table now opens up possibilities of what we can do with it.
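
For example, assuming the output of the query above gets saved into the dbo.ParsedJSON table mentioned in the next section (e.g. with a SELECT ... INTO), searches become simple, index-friendly predicates:

-- Assumes dbo.ParsedJSON holds the RowId, [HierarchyId], [key], [value], [type] output from above

-- Which staged JSON rows contain the value 'A' anywhere in their structure?
SELECT DISTINCT RowId
FROM dbo.ParsedJSON
WHERE [value] = 'A';

-- What value does each row have for the top-level "Property1" key?
SELECT RowId, [value]
FROM dbo.ParsedJSON
WHERE [key] = 'Property1';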

Process and Indexing

The above query isn’t going to run itself. You’ll either need to schedule it or incorporate it into an ETL process to parse out your staged JSON data on a regular basis (kind of like how full text indexing works asynchronously).

Alternatively, if you need this data in real-time, you can write the logic into a trigger that fires on inserts into your staging table, as sketched below. As with all triggers though, I wouldn’t recommend this if your staging table is getting rows added at a high rate.
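
As a rough sketch of that trigger approach (simplified to parse only the top-level keys to keep it short – the full recursive CTE above could be dropped in instead, and the dbo.ParsedJSON table and its columns are assumed to already exist):

-- Hypothetical sketch: parse newly staged JSON rows into the search table as they arrive
CREATE OR ALTER TRIGGER dbo.TR_ImportedJson_ParseJson
ON dbo.ImportedJson
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- [HierarchyId] is omitted here (assumed nullable) since only top-level keys are parsed
    INSERT INTO dbo.ParsedJSON (RowId, [key], [value], [type])
    SELECT
        i.Id,
        j.[key],
        j.[value],
        CAST(j.[type] AS int)
    FROM
        inserted i
        CROSS APPLY OPENJSON(i.JsonValue, '$') j;
END;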

Once you decide how to store your parsed JSON data, add some indexes that will help your search queries run nice and fast (CREATE NONCLUSTERED INDEX IX_Value_Include ON dbo.ParsedJSON ([value]) INCLUDE ([key],RowId) would probably be a good starting point for many search queries) and you’ll be doing a lot better than WHERE JsonValue LIKE '%Property4%'.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!