In-Memory OLTP: A Case Study

Watch this week’s episode on YouTube.

When In-Memory OLTP was first released in SQL Server 2014, I was excited to start using it.  All I could think was “my queries are going to run so FAST!”

Well, I never got around to implementing In-Memory OLTP.  Besides having an incompatible version of SQL Server at the time, the in-memory features had too many limitations for my specific use cases.

Fast forward a few years, and I’ve done nothing with In-Memory OLTP.  Nothing, that is, until I saw Erin Stellato present at our Northern Ohio SQL Server User Group a few weeks ago – her presentation inspired me to take a look at In-Memory OLTP again to see if I could use it.

Use case: Improving ETL staging loads

After being refreshed on the ins and outs of in-memory SQL Server, I wanted to see if I could apply some of the techniques to one of my ETLs.

The ETL consists of two major steps:

  1. Shred documents into row/column data and then dump that data into a staging table.
  2. Delete some of the documents from the staging table.

In the real world, there’s a step 1.5 that does some processing of the data, but it’s not relevant to these in-memory OLTP demos.

So step one was to create my staging tables.  The memory optimized table is called “NewStage1” and the traditional disk-based table is called “OldStage1”:
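A minimal sketch of the two tables – the column definitions and index are my own illustration; only the table names and the SCHEMA_ONLY durability setting come from this scenario, and the database needs a MEMORY_OPTIMIZED_DATA filegroup before the in-memory table can be created:

-- Memory optimized staging table; SCHEMA_ONLY means no transaction log
-- writes, but also no data recovery after a crash or restart
CREATE TABLE dbo.NewStage1
(
    DocumentID int NOT NULL,
    RowData nvarchar(1000) NOT NULL,
    INDEX IX_NewStage1_DocumentID NONCLUSTERED HASH (DocumentID)
        WITH (BUCKET_COUNT = 16)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO

-- Traditional disk-based staging table with the same columns and datatypes
CREATE TABLE dbo.OldStage1
(
    DocumentID int NOT NULL,
    RowData nvarchar(1000) NOT NULL
);
GO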

A few things to keep in mind:

  • The tables have the same columns and datatypes, with the only difference being that the NewStage1 table is memory optimized.
  • My database is using simple recovery so I am able to perform minimal logging/bulk operations on my disk-based table.
  • Additionally, I’m using the SCHEMA_ONLY durability setting.  This gives me outstanding performance because there is no writing to the transaction log!  However, this means if I lose my in-memory data for any reason (crash, restart, corruption, etc…) I am completely out of luck.  This is fine for my staging data scenario since I can easily recreate the data if necessary.

Inserting and deleting data

Next I’m going to create procedures for inserting and deleting my data into both my new and old staging tables:
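Here’s a rough sketch of what those procedures could look like.  The procedure bodies are illustrative: natively compiled procedures can only read from memory-optimized objects, so these sketches simulate the document shredding with a row-generating loop, where the real ETL would use a single INSERT INTO…SELECT from the shredded document source.

-- Natively compiled insert: compiled to machine code up front
CREATE PROCEDURE dbo.Insert_NewStage1
    @DocumentID int,
    @RowCount int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Simulate shredding a document into rows
    DECLARE @i int = 0;
    WHILE @i < @RowCount
    BEGIN
        INSERT INTO dbo.NewStage1 (DocumentID, RowData)
        VALUES (@DocumentID, N'shredded row data');
        SET @i = @i + 1;
    END
END
GO

-- Natively compiled delete: no chunking needed because a SCHEMA_ONLY
-- table never writes to the transaction log
CREATE PROCEDURE dbo.Delete_NewStage1
    @DocumentID int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    DELETE FROM dbo.NewStage1 WHERE DocumentID = @DocumentID;
END
GO

-- Traditional insert against the disk-based table
CREATE PROCEDURE dbo.Insert_OldStage1
    @DocumentID int,
    @RowCount int
AS
BEGIN
    DECLARE @i int = 0;
    WHILE @i < @RowCount
    BEGIN
        INSERT INTO dbo.OldStage1 (DocumentID, RowData)
        VALUES (@DocumentID, N'shredded row data');
        SET @i = @i + 1;
    END
END
GO

-- Traditional delete: chunked so the transaction log doesn't fill up
CREATE PROCEDURE dbo.Delete_OldStage1
    @DocumentID int
AS
BEGIN
    WHILE 1 = 1
    BEGIN
        DELETE TOP (100000) FROM dbo.OldStage1 WHERE DocumentID = @DocumentID;
        IF @@ROWCOUNT = 0 BREAK;
    END
END
GO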

A few more things to note:

  • My new procedures are natively compiled: SQL Server compiles them to machine code up front so at run time it can just execute without any extra steps.  The procedures that target my old disk-based tables have to be compiled when they run (the plan is cached after the first execution, but it’s still interpreted T-SQL rather than native code).
  • In the old delete procedure, I am deleting data in chunks so my transaction log doesn’t get full.  In the new version of the procedure, I don’t have to worry about this because, as I mentioned earlier, my memory optimized table doesn’t have to use the transaction log.

Let’s simulate a load

It’s time to see if all of this fancy in-memory stuff is actually worth all of the restrictions.

In my load, I’m going to mimic loading three documents with around 3 million rows each.  Then, I’m going to delete the second document from each table:
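A sketch of that load, using the illustrative procedure names from above (SET STATISTICS TIME ON, or comparing SYSDATETIME() values, can capture the timings):

SET STATISTICS TIME ON;

-- Load three ~3 million row documents into the disk-based table,
-- then delete the second document
EXEC dbo.Insert_OldStage1 @DocumentID = 1, @RowCount = 3000000;
EXEC dbo.Insert_OldStage1 @DocumentID = 2, @RowCount = 3000000;
EXEC dbo.Insert_OldStage1 @DocumentID = 3, @RowCount = 3000000;
EXEC dbo.Delete_OldStage1 @DocumentID = 2;

-- Repeat the same load against the memory-optimized table
EXEC dbo.Insert_NewStage1 @DocumentID = 1, @RowCount = 3000000;
EXEC dbo.Insert_NewStage1 @DocumentID = 2, @RowCount = 3000000;
EXEC dbo.Insert_NewStage1 @DocumentID = 3, @RowCount = 3000000;
EXEC dbo.Delete_NewStage1 @DocumentID = 2;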

The in-memory version should have a significant advantage because:

  1. The natively compiled procedure is precompiled (shouldn’t be a huge deal here since we are doing everything in a single INSERT INTO…SELECT).
  2. The in-memory table inserts/deletes don’t have to write to the transaction log (this should be huge!)

Results

                      Disk-based    In-Memory
INSERT 3 documents    65 sec        6 sec
DELETE 1 document     46 sec        0 sec
Total time            111 sec       6 sec
Difference                          95% less time (1750% faster)

The results speak for themselves.  In this particular example, in-memory blows the disk-based solution out of the water.

Obviously there are downsides to in-memory (like consuming a lot of memory), but if you are going for pure speed, there’s nothing faster.

Warning! I am not you.

And you are not me.

While in-memory works great for my ETL scenario, there are many requirements and limitations.  It’s not going to work in every scenario.  Be sure you understand the in-memory durability options to prevent any potential data loss and try it out for yourself!  You might be surprised by the performance gains you’ll see.


OR vs UNION ALL – Is One Better For Performance?

Today I want to show you a trick that could make your queries run faster.

It won’t always work, but when it does everyone will be impressed with your performance tuning prowess.  Let’s go!

Watch this week’s episode on YouTube.

Our Skewed Data

Let’s create a table and insert some data.
Notice the heavily skewed value distribution.  Also notice how we have a clustered index and a very skimpy nonclustered index:
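A sketch of that setup – the table, column, and index names are my own illustration, and the exact row counts needed to reproduce the plans below may vary with your environment and statistics:

CREATE TABLE dbo.TestData
(
    Id int IDENTITY(1,1) NOT NULL,
    Col1 int NOT NULL,
    Col2 int NOT NULL,
    Col3 int NOT NULL,
    CONSTRAINT PK_TestData PRIMARY KEY CLUSTERED (Id)
);
GO

-- Heavily skewed: the bulk of the table shares one value of Col3...
INSERT INTO dbo.TestData (Col1, Col2, Col3)
SELECT TOP (1000000) 1, 2, 0
FROM sys.all_columns a CROSS JOIN sys.all_columns b;

-- ...while a couple of values occur rarely
INSERT INTO dbo.TestData (Col1, Col2, Col3)
SELECT TOP (1000) 1, 2, 100 FROM sys.all_columns;
INSERT INTO dbo.TestData (Col1, Col2, Col3)
SELECT TOP (1000) 1, 2, 200 FROM sys.all_columns;
GO

-- The "skimpy" nonclustered index: only Col3, so any other column in the
-- SELECT list has to come from a key lookup against the clustered index
CREATE NONCLUSTERED INDEX IX_TestData_Col3 ON dbo.TestData (Col3);
GO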

If we write a query that filters on one of the low-occurrence values in Col3, SQL Server will perform an index seek with a key lookup (since our skimpy nonclustered index doesn’t cover all of the columns in our SELECT):
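For example, against the illustrative table above:

SELECT Col2, Col3
FROM dbo.TestData
WHERE Col3 = 100;
-- Plan: index seek on IX_TestData_Col3 plus a key lookup on the
-- clustered index to retrieve Col2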

If we then add an OR to our WHERE clause and filter on another low-occurrence value in Col3, SQL Server changes how it wants to retrieve results:
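Something like:

SELECT Col2, Col3
FROM dbo.TestData
WHERE Col3 = 100
   OR Col3 = 200;
-- Plan: the seek + key lookup strategy is abandoned in favor of a
-- clustered index scan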

Suddenly those key-lookups become too expensive for SQL Server and the query optimizer thinks it’ll be faster to just scan the entire clustered index.

In general this makes sense; SQL Server tries to pick plans that are good enough for most scenarios, and most of the time I think it chooses wisely.

However, sometimes SQL Server doesn’t pick great plans.  Sometimes the plans it picks are downright terrible.

If we encountered a similar scenario in the real-world where our tables had more columns, more rows, and larger datatypes, having SQL Server switch from a seek to a scan could kill performance.

So what can we do?

Solutions…maybe

The first thing that comes to mind is to modify or add some indexes.

But maybe our (real-world) table already has too many indexes.  Or maybe we are working with a data source where we can’t modify our indexes.

We could also use the FORCESEEK hint, but I don’t like using hints as permanent solutions because they feel dirty (and are likely to do unexpected things as your data changes).

One solution: UNION ALL

One solution that a lot of people overlook is rewriting the query so that it uses UNION ALLs instead of ORs.

A lot of the time it’s pretty easy to refactor the query to multiple SELECT statements with UNION ALLs while remaining logically the same and returning the same results:
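Using the illustrative example from earlier, the OR query can be rewritten as:

SELECT Col2, Col3
FROM dbo.TestData
WHERE Col3 = 100

UNION ALL

SELECT Col2, Col3
FROM dbo.TestData
WHERE Col3 = 200;

Since Col3 can’t equal both values in the same row, the two branches never overlap and UNION ALL returns exactly the same rows as the OR version.  If your predicates could overlap, you’d need extra logic (or UNION) to avoid returning duplicate rows.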

Sure, the query is uglier and will be a bigger pain to maintain if you need to make changes in the future, but sometimes we have to suffer for fashion (er, query performance).

But does our UNION ALL query perform better?

Well, the plan shows seeks, but as Erik Darling recently pointed out, seeks aren’t always a good thing.

So let’s compare the reads of the OR query versus the UNION ALL query using SET STATISTICS IO ON:
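For example:

SET STATISTICS IO ON;

-- OR version
SELECT Col2, Col3 FROM dbo.TestData WHERE Col3 = 100 OR Col3 = 200;

-- UNION ALL version
SELECT Col2, Col3 FROM dbo.TestData WHERE Col3 = 100
UNION ALL
SELECT Col2, Col3 FROM dbo.TestData WHERE Col3 = 200;

-- Compare the "logical reads" reported for each query on the Messages tab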

So in this case, tricking SQL Server into picking a different plan by using UNION ALLs gave us a performance boost.  The difference in reads isn’t that large in the above scenario, but I’ve had this trick take my queries from minutes to seconds in the real world.

So the next time you are experiencing poor performance from a query with OR operators in it, try rewriting it using UNION ALLs.

It’s not always going to fix your performance problem but you won’t know until you give it a try.


How to use Statistics IO to Improve Your Query Performance

SQL Server’s STATISTICS IO reporting is a great tool to help you performance tune queries.

Usually the goal of performance tuning is to make your query run faster.  One of the easiest ways to get a faster query is to reduce the amount of data a query is processing.  STATISTICS IO makes it easy to see how much data SQL Server is actually processing.

Specifically, the STATISTICS IO output helps with performance tuning because:

  1. The data it shows acts as a measuring stick for your performance tuning changes.
  2. It provides a good way of isolating the query changes you are making from other changes that may be happening on the server.

Rather watch than read?  Head over to my YouTube channel and watch this week’s content instead.

So what does STATISTICS IO data look like?

To show IO statistics on your query, you first need to execute:
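SET STATISTICS IO ON;
-- Session-scoped; run SET STATISTICS IO OFF to turn the reporting back off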

After running a query with the above setting turned on, check SQL Server Management Studio’s Messages tab to see output that looks something like this (the table names and counts here are illustrative):
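Table 'TestData'. Scan count 1, logical reads 3351, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.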

The key things that my eyes are drawn to on initial examination of the STATISTICS IO output are the following:

  • Logical reads: The number of 8kB pages SQL Server had to read from the buffer cache (memory) in order to process and return the results of your query.  The more pages that need to be read, the slower your query.
  • Worktables/Workfiles: These are temporary objects that SQL Server creates in tempdb in order to process query results.  Although not always bad, it might indicate that SQL is doing more work than it needs to (perhaps an index could help?)
  • Lob logical reads: The number of pages read for large object data (e.g. varchar(max) values).  I take the most cursory glance at this – if I’m returning high numbers of lob reads, I might want to make sure I actually need them.  If not, I may add an index or move the lobs off to a separate table.

There are more properties in the STATISTICS IO output, but if we can significantly decrease the above three indicators then chances are good that we’ll improve our query performance.

So why are these three indicators so useful?

Tracking performance changes

The main reason I like performance tuning with STATISTICS IO is because it makes it easy to create a baseline for my query performance.  When I make changes to the query, it’s then easy to see if my changes helped or hurt the query.

The main metric I use for this is logical reads.  Logical reads counts the pages pulled from the cache (memory), while physical reads counts the pages pulled from disk.  Since every page gets loaded from disk into cache before SQL Server can use it, logical reads reflects the total work regardless of whether the data was already cached.

This makes logical reads great for tracking performance changes because it clearly tells me how many 8kB pages in total SQL Server needed to read in order to return my data.

If I add an index, does the total number of pages read go up or down?  Let me check my logical read counts and see.

What if I add some additional filtering or restructure my query?  I can easily tell if my changes hurt performance by seeing if the total number of logical reads went up or down.

Logical reads allow me to easily track the effectiveness of my tuning tactics.

The same concept applies to my worktable and lob logical read properties.  For the former, any time SQL Server has to write data out to disk (tempdb in this case), performance will be slower.

In the latter case, if SQL Server needs to move around large objects that span multiple 8kB pages each, things will be slow.  If I can keep track of how many lob logical reads SQL Server is performing, then I can focus on removing that overhead from my query.

Isolating other factors that impact performance

For the same reason that logical reads make it easy to track query performance after making changes, they also make it easy to rule out other factors that might affect performance.

For example, I used to think that simply watching how long it took a query to run was a good indicator of helping me performance tune.  If the total number of seconds it took my query to run decreased, my changes helped improve performance!

This is a potentially deceptive way to measure performance though: what if during my first run the server was getting slammed by other queries?  Elapsed run time depends on everything else happening on the server, so it isn’t an effective way to measure performance.

Also, server environment hardware isn’t always the same.  I might test a query in one environment and then deploy it to another.  My testing on an empty dev box might have been great, but as soon as the query runs in production along with all other queries, it might not perform as well.

More than one way to analyze a query

STATISTICS IO is a great place to start your performance tuning process.

It doesn’t mean that you have to stop there though.  While being able to track the effects of your tuning changes and isolating other environment variables is important, ultimately you will have to use other means to actually improve performance.

So be sure to look at execution plans, dig into index and table statistics, rewrite the order of your table joins to see if it makes a difference, etc…  Just remember, performance always comes back to how much data SQL Server needs to process – reduce that and your queries will surely perform better.
