Visualizing Nested Loops Joins And Understanding Their Implications

Published on: 2018-12-11

Watch this week’s episode on YouTube.

What Physical Join Operators Tell Us

Everyone has their own method of reading an execution plan when performance tuning a slow SQL query.  One of the first things I like to look at is what kind of join operators are being used:

These three little icons may not seem like the most obvious place to begin troubleshooting a slow query, but especially with larger plans, I like starting with a quick glance at the join operators because they allow you to infer a lot about what SQL Server thinks about your data.

This will be a three-part series where we’ll learn how each join algorithm works and what each can reveal about our upstream execution plan operators.

Nested Loops Join

Nested loops joins work like this: SQL Server takes the first value from our first table (our “outer” table – by default SQL Server decides for us which table of the two this will be), and compares it to every value in our second “inner” table to see if they match. 

Once every inner value has been checked, SQL Server moves to the next value in the outer table and the process repeats until every value from our outer table has been compared to every value in our inner table.

This description is a worst-case example of the performance of a nested loops join.  Several optimizations exist that can make the join more efficient.  For example, if the inner table’s join values are sorted (because of an index you created or a spool that SQL Server created), SQL Server can process the rows much faster:

In the above animation, SQL Server has the inner input data sorted, allowing it to seek directly to the rows it needs, reducing the total number of comparisons that need to be made.

For more in-depth explanations of the internals and optimizations of nested loops joins, I recommend reading this post by Craig Freedman as well as Hugo Kornelis’s reference on nested loops.
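If you want to see the operator in action, you can force the optimizer’s hand with a join hint.  This is only a sketch – the table and column names here are hypothetical stand-ins:

```sql
-- Hypothetical tables: a small outer input joined to a larger inner one.
-- OPTION (LOOP JOIN) forces a nested loops join so you can examine the
-- operator in the actual execution plan.
SELECT
    c.CustomerId,
    o.OrderId
FROM dbo.Customers c
INNER JOIN dbo.Orders o
    ON o.CustomerId = c.CustomerId
OPTION (LOOP JOIN);
```

With an index on dbo.Orders (CustomerId), the inner side of the loop becomes a seek instead of a scan – the optimization described above.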

What Do Nested Loops Joins Reveal?

Knowing the internals of how a nested loops join works allows us to infer what the optimizer thinks about our data and the join’s upstream operators, helping us focus our performance tuning efforts. 

Here are a few scenarios to consider the next time you see a nested loops join being used in your execution plan:

  • Nested loops joins are CPU intensive; at worst, every row from one input needs to be compared to every row from the other, and this can take some time.  This means when you see a nested loops join, SQL Server probably thinks that one of the two inputs is relatively small.
    • … and if one of the inputs is relatively small, great!  If instead you see upstream operators that are moving large amounts of data, you may have an estimation problem going on in this area of the plan and may need to update stats/add indexes/refactor the query to have SQL Server provide better estimates (and maybe a more appropriate join).
  • Nested loops sometimes accompany RID or key lookups.  I always check for one of these because they often leave room for some performance improvements:
    • If a RID lookup exists, it’s usually easy enough to add a clustered index to that underlying table to squeeze out some extra performance.
    • If either a RID or key lookup exists, I always check what columns are being returned to see if a smaller index could be used instead (by adding a key or included column to an existing index) or if the query can be refactored to not bring back those columns (e.g. get rid of the SELECT *).
  • Nested loops joins do not require data to be sorted on input.  However, performance can improve with a sorted inner data source (see animation above), and SQL Server might choose a more efficient operator if the inputs are both sorted. 
    • At the very least, nested loops joins prompt me to check whether the input data is unsorted because of some upstream transformations, or because of missing indexes.
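As a sketch of that lookup tuning process, imagine a hypothetical dbo.Orders table where the plan shows a nested loops join into a key lookup (all names here are assumptions for illustration):

```sql
-- This query seeks on a narrow index over CustomerId, then performs a
-- key lookup for TotalDue on every matching row:
SELECT OrderId, TotalDue
FROM dbo.Orders
WHERE CustomerId = 42;

-- Adding TotalDue as an included column makes the index covering and
-- eliminates the nested loops join into the key lookup:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_TotalDue
    ON dbo.Orders (CustomerId)
    INCLUDE (TotalDue);
```

The same idea applies to a RID lookup on a heap, where adding a clustered index is often the first fix to try.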

So while nested loops in your plans will always require more investigation, looking at them and the operators around them can provide some good insight into what SQL Server thinks about your data.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

Bert & Pinal Troubleshoot a Slow Performing SQL Server

Published on: 2018-11-13

What happens when you run into performance tuning expert Pinal Dave?  Talk about how to troubleshoot a slow performing SQL Server of course!

This week is all about the video, so if you can’t view it above, head on over to my YouTube channel to watch it there.

After watching, be sure to check out Pinal’s free performance diagnostic scripts over at SQLAuthority.com.




Ignoring NULLs with FIRST_VALUE

Published on: 2018-08-28

Watch this week’s episode on YouTube.

The SQL Server FIRST_VALUE function makes it easy to return the “first value in an ordered set of values.”

The problem is that if that first value happens to be a NULL, there is no easy, built-in way to skip it.

While a UserVoice item exists to add the ability to ignore nulls (go vote!), today we’re going to accomplish that end result with some alternative queries.

The Setup

Here’s the example data we’ll be skipping nulls on:

We’ve got an integer identity column, two groups of rows, and NULLs sprinkled into otherwise unsuspecting integer values.
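The original post shows this data as an image, so here is a stand-in setup script along the same lines (column names other than Value1 are assumptions):

```sql
DROP TABLE IF EXISTS dbo.TestData;
CREATE TABLE dbo.TestData
(
    Id INT IDENTITY(1,1),
    GroupId INT,
    Value1 INT
);

-- Note: no clustered index yet - this heap becomes relevant later on.
-- Group 1 starts with a real value; group 2 starts with NULLs.
INSERT INTO dbo.TestData (GroupId, Value1) VALUES
(1, 1), (1, NULL), (1, 3), (1, 4),
(2, NULL), (2, NULL), (2, 6), (2, 7);
```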

If we write a query that uses the FIRST_VALUE function, you’ll notice that our NULL gets chosen in group two – not quite what we want:
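Sketched against a hypothetical dbo.TestData table with Id, GroupId, and Value1 columns, that query looks like:

```sql
SELECT
    Id,
    GroupId,
    Value1,
    FIRST_VALUE(Value1) OVER
        (PARTITION BY GroupId ORDER BY Id) AS FirstValue1
FROM dbo.TestData;
```

Because the first row of group two contains a NULL, FirstValue1 comes back NULL for that entire group.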

Let’s look at two queries that will help us get the number 6 into that FirstValue1 column for the second group.

The Contenders

“The Derived FIRST_VALUE”

First up is still the FIRST_VALUE function, but inside of a derived table:
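A sketch of that approach (again assuming a hypothetical dbo.TestData table with Id, GroupId, and Value1 columns):

```sql
SELECT
    t.Id,
    t.GroupId,
    t.Value1,
    d.FirstValue1
FROM dbo.TestData t
INNER JOIN
(
    -- Filter out NULLs before FIRST_VALUE ever sees them
    SELECT DISTINCT
        GroupId,
        FIRST_VALUE(Value1) OVER
            (PARTITION BY GroupId ORDER BY Id) AS FirstValue1
    FROM dbo.TestData
    WHERE Value1 IS NOT NULL
) d
    ON t.GroupId = d.GroupId;
```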

By filtering out NULLs in our derived table query, FIRST_VALUE returns the first non-null value like we want.  We then join that back to the original data and all is right again.

“The Triple Join”

Our second attempt at this query sends us back to the dark ages of SQL Server 2008 before the FIRST_VALUE function existed:
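One way to write it (table and column names are the same hypothetical dbo.TestData stand-ins):

```sql
SELECT
    t.Id,
    t.GroupId,
    t.Value1,
    f.Value1 AS FirstValue1
FROM dbo.TestData t
INNER JOIN
(
    -- The smallest Id per group where the value is not NULL
    SELECT GroupId, MIN(Id) AS MinId
    FROM dbo.TestData
    WHERE Value1 IS NOT NULL
    GROUP BY GroupId
) m
    ON t.GroupId = m.GroupId
INNER JOIN dbo.TestData f
    ON f.Id = m.MinId;
```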

We perform a triple join, with the critical element being our derived table which gets the MIN Id for each group of rows where Value1 IS NOT NULL.  Once we have the minimum Id for each group, we join back in the original data and produce the same final result:

The Performance

Both of the above queries produce the same output – which one should you use in your production code?

Well, the “Derived FIRST_VALUE” query has a lower relative cost than the “Triple Join” query, so maybe it’s better?

This isn’t a real-world execution plan though – surely we never scan heaps in our production environments.

Let’s add a quick clustered index and see if that changes anything:
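Something along these lines (index and table names are hypothetical):

```sql
CREATE CLUSTERED INDEX CL_TestData_Id ON dbo.TestData (Id);
```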

Okay, a closer matchup, but the “Derived FIRST_VALUE” query still appears to have a slight edge.

If we SET STATISTICS IO ON though, we start to see a different story:

With only 8 rows of data, our “Derived FIRST_VALUE” query sure is performing a lot of reads.

What if we increase the size of our sample dataset?
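One quick way to inflate a small sample table like this is to repeatedly double it (a hypothetical approach; GO with a count only works in SSMS/sqlcmd):

```sql
-- Doubling an 8-row table 10 times produces 8,192 rows.
INSERT INTO dbo.TestData (GroupId, Value1)
SELECT GroupId, Value1
FROM dbo.TestData;
GO 10
```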

And now check our plans and stats IO:

WOW that’s a lot of reads in the “Derived FIRST_VALUE” query.

Conclusion

Besides sharing some solutions, the point I tried to make above is this: DON’T TRUST CODE YOU FIND ON THE INTERNET (or in books, or copied from colleagues, etc.).

Both of the above queries will return the first value without NULLs.  But they probably won’t perform exactly the same as they did in my examples above.

Copy the above code for sure – but test it out. See what works better on your specific server configuration, data size, and indexes.  Maybe both queries are terrible and you need a third, better way of doing it (if you write one, let me know!) – but please, please, please, always test your code.
