Window Functions vs GROUP BYs

Published on: 2019-04-16

There are many options available for improving the performance of a query: indexes, statistics, configuration settings, etc…

However, not all environments allow you to use those features (eg. vendor databases), leaving query rewriting as the only option.

This is the first post in a series to document common ways to refactor queries without otherwise altering the database. The goal of these posts will be to provide examples of performance pitfalls in queries and how to rewrite those queries to generate different query plans that (hopefully) improve performance.

I’ll be using the StackOverflow 2014 data dump for these examples if you want to play along at home.

Who was first to earn each badge?

StackOverflow awards users badges for things like asking good questions, hitting certain vote thresholds, and more.

I want to write a query that figures out who is the first person awarded each badge. In cases where there is a tie for the first person to receive that badge, I want to return the user with the lowest UserId.

Window functions make this type of question easy to write a query for:

SELECT DISTINCT
    Name,
    FIRST_VALUE(UserId) OVER (PARTITION BY Name ORDER BY Date,UserId) AS UserId
FROM
    dbo.Badges b
ORDER BY
    Name,UserId

If you’ve used FIRST_VALUE before, this query should be easy to interpret: for each badge Name, return the first UserId sorted by Date (earliest date to receive the badge) and UserId (pick the lowest UserId when there are ties on Date).

This query was easy to write and is simple to understand. However, the performance is not great: it takes 46 seconds to finish returning results on my machine.

Window function execution plan

Note: I assumed this table started off with the following index:

CREATE NONCLUSTERED INDEX IX_Badges__Name_Date_UserId ON [dbo].[Badges] (Name,Date,UserId);

Why so slow?

If we SET STATISTICS IO ON, we’ll notice that SQL Server reads 46,767 pages from a nonclustered index. Since we aren’t filtering our data, there’s not much we can do to make that faster.
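
If you want to follow along, turning on the statistics output is a one-liner; run it once in your session and then rerun the query above:

SET STATISTICS IO ON;
-- rerun the window function query; the Messages tab will list logical reads per table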

Reading right to left, next up we see two Segment operators. These don’t add much overhead: since our data arrives sorted on our segment/group columns, it’s trivial for SQL Server to identify when the values change from one group to the next.

Next up is the Window Spool operator, which “Expands each row into the set of rows that represent the window associated with it.” While its low relative cost makes it look innocent, this operator is writing 8 million rows to and reading 16 million rows from tempdb (because of how Window Spool works). Ouch.

After that, the Stream Aggregate and Compute Scalar operators check whether the first value in each window coming back from the Window Spool is NULL and then return the first non-null value. These operations are also relatively painless since the data flowing through is already sorted.

The Hash Match operator then dedupes the data for our DISTINCT, and finally we sort the remaining ~2k rows for our output.

So while our query looks simple, the fact that our whole table of data is getting written to and read from tempdb before being deduped and sorted is a real performance killer.
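
As an aside, part of the blame lies with the window frame: when an OVER clause has an ORDER BY but no explicit frame, SQL Server defaults to RANGE UNBOUNDED PRECEDING, and RANGE forces the Window Spool to use that on-disk worktable in tempdb. Specifying a ROWS frame can let the spool stay in memory, so a sketch like the following is worth testing before reaching for a full rewrite (results will vary by environment):

SELECT DISTINCT
    Name,
    FIRST_VALUE(UserId) OVER (PARTITION BY Name ORDER BY Date,UserId
                              ROWS UNBOUNDED PRECEDING) AS UserId
FROM
    dbo.Badges b
ORDER BY
    Name,UserId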

Removing tempdb usage the old-fashioned way

When I say “the old-fashioned way”, I mean rewriting our window function to use more traditional aggregate functions and a GROUP BY:

SELECT
    b.Name,
    MIN(b.UserId) AS UserId
FROM
    dbo.Badges b
    INNER JOIN
    (
    SELECT
        Name,
        MIN(Date) AS Date
    FROM
        dbo.Badges
    GROUP BY
        Name
    ) m
        ON b.Name = m.Name
        AND b.Date = m.Date
GROUP BY
    b.Name
ORDER BY
    Name,UserId

I think by most people’s standards, this query is not as easy to read. While not overly complex, it does take up a lot more screen space and is complicated by multiple GROUP BYs and a derived table.

And while the query may look ugly on the outside, it’s what lies below the surface that really matters:

GROUP BY execution plan

What a beautifully simple execution plan. And it finishes executing almost instantly.

Let’s break down what’s going on. First, we start with Index Scan and Segment operators similar to those in the previous query, so no real difference there.

You may have noticed that while the written query uses two GROUP BYs and two MIN functions joined together, the execution plan doesn’t contain two Index Scans, two sets of aggregations, or any join at all.

SQL Server can use an optimization with the Top operator that takes the sorted data and returns only the Name and UserId values for the rows with the lowest Date within each Name group (essentially matching our MIN logic). This is a great example of how the optimizer can take a declarative SQL query and decide for itself how to efficiently return the data it needs.

At this point, the Top operator filters our 8 million rows down to around 30k rows. 30k rows get deduped a lot faster with our Stream Aggregate operator, and since the data is already sorted we don’t need an extra Sort operator.

Overall, this second query runs so much better than the original because SQL Server doesn’t have to go to tempdb for any operations – all the data is pre-sorted in the index and can flow through.
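
If it helps to see that Top logic written out as SQL instead of inferred from the plan, here’s a sketch of the same “first row per group” idea expressed explicitly with CROSS APPLY and TOP (an illustration of the logic only, not what the optimizer literally executes):

SELECT
    n.Name,
    t.UserId
FROM
    (SELECT DISTINCT Name FROM dbo.Badges) n
    CROSS APPLY
    (
    SELECT TOP (1) b.UserId
    FROM dbo.Badges b
    WHERE b.Name = n.Name
    ORDER BY b.Date, b.UserId
    ) t
ORDER BY
    n.Name, t.UserId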

So I shouldn’t use Window Functions?

Not necessarily – it comes down to trade-offs.

I almost always start with a window function because of how easy they are to write and read. Plus, I think they’re fun to write.

However, if the window function is having to read/write a lot of data to tempdb and it’s affecting the overall performance of your query, a rewrite may be necessary.

In that case, I’d much rather take the more verbose syntax to get a 2000x performance boost.

Gaps and Islands Across Date Ranges

Published on: 2019-03-12

In a traditional gaps and islands problem, the goal is to identify groups of continuous data sequences (islands) and groups of data where the sequence is missing (gaps).
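
For anyone new to the pattern, here’s a minimal sketch of the classic islands solution, assuming a hypothetical dbo.Sequence table of distinct integers in a Value column: subtracting each row’s ROW_NUMBER from its value produces a constant within each island, which we can then group on.

SELECT
    MIN(Value) AS IslandStart,
    MAX(Value) AS IslandEnd
FROM
    (
    SELECT
        Value,
        Value - ROW_NUMBER() OVER (ORDER BY Value) AS IslandGroup
    FROM
        dbo.Sequence -- hypothetical table of distinct integers
    ) g
GROUP BY
    IslandGroup
ORDER BY
    IslandStart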

Many people encounter gaps and islands problems when dealing with ranges of dates, and recently I did too, but with an interesting twist:

How do you determine gaps and islands of data that has overlapping date ranges?

Continue reading “Gaps and Islands Across Date Ranges”

Ignoring NULLs with FIRST_VALUE

Published on: 2018-08-28

The SQL Server FIRST_VALUE function makes it easy to return the “first value in an ordered set of values.”

The problem is that if that first value happens to be a NULL, there is no easy, built-in way to skip it.

While a UserVoice item exists to add the ability to ignore nulls (go vote!), today we’re going to accomplish that end result with some alternative queries.
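
(A side note for readers on newer versions: SQL Server 2022 eventually added built-in support via FIRST_VALUE(Value1) IGNORE NULLS OVER (...), but the queries below don’t require it.)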

The Setup

Here’s the example data we’ll be skipping nulls on:

CREATE TABLE ##Data
(
       Id int IDENTITY(0,1),
       GroupId int,
       Value1 int
);
GO
INSERT INTO ##Data (GroupId, Value1) VALUES (1,1)
INSERT INTO ##Data (GroupId, Value1) VALUES (1,1)
INSERT INTO ##Data (GroupId, Value1) VALUES (1,3)
INSERT INTO ##Data (GroupId, Value1) VALUES (2,NULL)
INSERT INTO ##Data (GroupId, Value1) VALUES (2,NULL)
INSERT INTO ##Data (GroupId, Value1) VALUES (2,6)
INSERT INTO ##Data (GroupId, Value1) VALUES (2,4)
INSERT INTO ##Data (GroupId, Value1) VALUES (2,5);
GO

We’ve got an integer identity column, two groups of rows, and NULLs that are sprinkled into otherwise unsuspecting integer values.

If we write a query that uses the FIRST_VALUE function, you’ll notice that our NULL gets chosen in group two – not quite what we want:

SELECT
       Id,
       GroupId,
       Value1,
       FIRST_VALUE(Value1) OVER (PARTITION BY GroupId ORDER BY Id) AS FirstValue1
FROM
       ##Data
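
Ordered by Id for readability, the output looks like this – note the NULL FirstValue1 for everything in group two:

Id  GroupId  Value1  FirstValue1
0   1        1       1
1   1        1       1
2   1        3       1
3   2        NULL    NULL
4   2        NULL    NULL
5   2        6       NULL
6   2        4       NULL
7   2        5       NULL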

Let’s look at two queries that will help us get the number 6 into that FirstValue1 column for the second group.

The Contenders

“The Derived FIRST_VALUE”

First up is still the FIRST_VALUE function, but inside of a derived table:

SELECT
    d.Id,
    d.GroupId,
    d.Value1,
    d2.FirstNotNullValue1
FROM
    ##Data d
    INNER JOIN
    (
    SELECT DISTINCT
        GroupId,
        FIRST_VALUE(Value1) OVER (PARTITION BY GroupId ORDER BY Id) AS FirstNotNullValue1
    FROM ##Data
    WHERE Value1 IS NOT NULL
    ) d2
        ON d.GroupId = d2.GroupId

By filtering out NULLs in our derived table query, FIRST_VALUE returns the first non-null value like we want.  We then join that back to the original data and all is right again.

“The Triple Join”

Our second attempt at this query sends us back to the dark ages of SQL Server 2008 before the FIRST_VALUE function existed:

SELECT
    d.Id,
    d.GroupId,
    d.Value1,
    d2.Value1 AS FirstNotNullValue1
FROM
    ##Data d
    LEFT JOIN
    (
    SELECT
        GroupId,
        MIN(Id) AS FirstNotNullIdValue1
    FROM
        ##Data
    WHERE
        Value1 IS NOT NULL
    GROUP BY
        GroupId
    ) m
        ON d.GroupId = m.GroupId
    INNER JOIN ##Data d2
        ON m.FirstNotNullIdValue1 = d2.Id;

We perform a triple join, with the critical element being our derived table which gets the MIN Id for each group of rows where Value1 IS NOT NULL.  Once we have the minimum Id for each group, we join back in the original data and produce the same final result:
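
Id  GroupId  Value1  FirstNotNullValue1
0   1        1       1
1   1        1       1
2   1        3       1
3   2        NULL    6
4   2        NULL    6
5   2        6       6
6   2        4       6
7   2        5       6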

The Performance

Both of the above queries produce the same output – which one should you use in your production code?

Well, the “Derived FIRST_VALUE” query has a lower relative cost than the “Triple Join” query, so maybe it’s better?

This isn’t a real-world execution plan though – surely we never scan heaps in our production environments.

Let’s add a quick clustered index and see if that changes anything:

CREATE CLUSTERED INDEX CL_Id ON ##Data (GroupId,Id,Value1)

Okay, a closer matchup, but the “Derived FIRST_VALUE” query still appears to have a slight edge.

If we SET STATISTICS IO ON though we start to see a different story:

With only 8 rows of data, our “Derived FIRST_VALUE” query sure is performing a lot of reads.

What if we increase the size of our sample dataset?

SET STATISTICS IO, TIME OFF;
SET NOCOUNT ON;
GO
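-- Each run of this batch doubles the row count, so GO 10 grows our 8 rows to 8,192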
INSERT INTO ##Data (GroupId, Value1)  
SELECT GroupId, Value1 FROM ##Data
GO 10

And now check our plans and stats IO:

WOW that’s a lot of reads in the “Derived FIRST_VALUE” query.

Conclusion

Besides sharing some solutions, the point I tried to make above is this: DON’T TRUST CODE YOU FIND ON THE INTERNET (or in books, or copied from colleagues, etc…).

Both of the above queries will return the first non-NULL value. But they probably won’t perform the same for you as they did in my examples above.

Copy the above code for sure – but test it out. See what works better on your specific server configuration, data size, and indexes.  Maybe both queries are terrible and you need a third, better way of doing it (if you write one, let me know!) – but please, please, please, always test your code.
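
A minimal sketch of that kind of test, assuming you run both candidate queries in the same session:

SET STATISTICS IO, TIME ON;
-- run the "Derived FIRST_VALUE" query here
-- run the "Triple Join" query here
SET STATISTICS IO, TIME OFF;
-- then compare logical reads and CPU/elapsed time in the Messages tab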
