
Why Is My VARCHAR(MAX) Variable Getting Truncated?

Watch this week's video on YouTube


Sometimes SQL Server doesn't do what you tell it to do.

Normally that's ok - SQL is a declarative language after all, so we're supposed to tell it what we want it to do, not how we want it done.

And while that's fine for most querying needs, it can become really frustrating when SQL Server decides to completely disregard what you explicitly asked it to do.

Why Is My VARCHAR(MAX) Truncated to 8000 Characters?

A prime example of this is when you declare a variable as VARCHAR(MAX) because you want to assign a long string to it.  Storing values longer than 8000 characters is the whole point of VARCHAR(MAX), right?

DECLARE @dynamicQuery VARCHAR(MAX);

SET @dynamicQuery = REPLICATE('a',8000) + 'b'

SELECT @dynamicQuery as dynamicQueryValue, LEN(@dynamicQuery) AS dynamicQueryLength

If we look at the above query, I would expect my variable @dynamicQuery to be 8001 characters long; it should be 8000 letter 'a's followed by a single letter 'b'.  8001 characters total, stored in a VARCHAR(MAX) defined variable.

But does SQL Server actually store all 8001 characters like we explicitly asked it to?

No:


Running the query shows that the LEN() of our variable is only 8000 - not 8001 - characters long!


Copying and pasting our resulting value into a new query window also shows us that there is no character 'b' at position 8001 like we expected.
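If you don't want to copy and paste, a quicker sanity check (just a sketch, meant to run in the same batch as the assignment above) is to look at the length and the character at position 8001 directly:

SELECT
    LEN(@dynamicQuery) AS dynamicQueryLength,
    SUBSTRING(@dynamicQuery, 8001, 1) AS characterAt8001; -- returns an empty string instead of 'b'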

The Miserly SQL Server

The reason this happens is that SQL Server evaluates the expression on the right-hand side of the assignment first, and since none of that expression's components are defined as VARCHAR(MAX), the result gets typed as a regular VARCHAR and truncated to 8000 characters before it is ever assigned to our VARCHAR(MAX) variable.  I guess SQL Server doesn't want to use the less efficient MAX data type if it thinks there's no need for it.


However, this logic is flawed since we clearly DO want to store more than 8000 characters.  So what can we do?

Make Something VARCHAR(MAX)

Seriously, that's it.  You can do something like CAST the single character 'b' as VARCHAR(MAX) and your @dynamicQuery variable will now contain 8001 characters.
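For example, here's a quick sketch of that approach using the same variable as before:

DECLARE @dynamicQuery VARCHAR(MAX);

-- CASTing any one component as VARCHAR(MAX) makes the whole expression VARCHAR(MAX)
SET @dynamicQuery = REPLICATE('a',8000) + CAST('b' AS VARCHAR(MAX));

SELECT LEN(@dynamicQuery) AS dynamicQueryLength; -- 8001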


But casting a single character as VARCHAR(MAX) isn't very intuitive.

Instead, I recommend casting an empty string as VARCHAR(MAX) and prefixing it to the start of your variable string.  Leave yourself a comment for the future and hopefully you'll remember why this superfluous-looking piece of code is needed:

-- using CAST('') to force SQL to define
-- as varchar(MAX)
SET @dynamicQuery =  CAST('' AS varchar(MAX))
    + REPLICATE('a',8000)+ 'b'

Contributing to Community

This post is a response to this month's T-SQL Tuesday #102 prompt by Riley Major.  T-SQL Tuesday is a way for SQL Server users to share ideas about different database and professional topics every month.

The prompt I've chosen to write about this month is how and why I got started contributing to the SQL Server community.


Watch this week's video on YouTube

About a year ago, I was determined to improve my presentation skills.  I knew that in order to do that I needed to get more practice speaking.

I was already at my max for presenting at local user groups, conferences, etc. because at some point it becomes too cost- and time-prohibitive to travel to more events.  As an alternative, I decided that if I couldn't get more practice by speaking in person, I could at least film myself presenting.

And I figured if I'm already filming myself presenting, I might as well put a little extra polish on it and make the content available for others to watch.

And that is how I started filming weekly videos about SQL Server.

SQL Server Videos

There are already plenty of great SQL Server presentations on YouTube, spanning a plethora of topics from a variety of experts who know way more about SQL Server than me.

Whenever I want to learn about a SQL Server topic, I search for something like "SQL Server backups" or "SQL Server columnstore indexes" on YouTube.  There are plenty of great recorded presentations, virtual chapter screencasts, Q&As, and other tutorials for learning almost any topic you can imagine.

However, sometimes I'm not in the mood to watch in-depth hour-long presentations.  Sometimes I want to watch short, informative, entertaining SQL videos on a regular schedule - and this is where I saw a gap in programming.

So what better way to get what you want than by scratching your own itch?  I figured if I want to watch that type of SQL Server video, then I'm sure other people out there want to watch those same kinds of short SQL videos too.

Bert, the Director

When I was a kid, I wanted to be a movie maker.  In particular, I was entranced by special effects, so I made movies with friends that involved plenty of lightsabers, explosions, and green screen effects all throughout middle school and high school.

So while making SQL videos wasn't going to be totally new territory, I sure was unprepared for all of the initial work involved.

For the first three months, I was spending 15-20 hours per week writing, creating demos, shooting, editing, publishing, and marketing my videos.  Over time I've cut this process down to 8-10 hours a week, a more manageable amount of work that I can mostly get done on weekend mornings before the rest of the house wakes up.

Results

Making videos about SQL Server has been an amazing experience.  Not only do I personally feel fulfilled creating something week after week that improves my own skills, but it's rewarding to receive positive feedback via comments, messages, and emails that I'm also helping others become better SQL developers.

Contributing has also made me appreciate how amazing the #sqlfamily community truly is.  Everyone I talk to is wonderful and supportive, and everyone I meet wants to see one another succeed.

Your Turn

If you aren't already, I hope you consider contributing to the community.  Whether it be via blog posts, code contributions, presenting, tweeting, or making videos, giving back to the SQL Server community will grow your own skills and allow you to meet some really great people.

It can be scary putting yourself out there publicly, but don't let that stop you.  If you give it your best then the SQL Server community won't let you down.

Is It Possible To Conditionally Index JSON Data?

Watch this week's video on YouTube

Recently I received a great question from an attendee to one of my sessions on JSON (what's up Nam!):

(Paraphrasing the question: is it possible to index only a subset of your JSON data - for example, only the rows of a certain type?)

At first glance it sounds like a filtered index question, and ultimately it is, but because of some of the intricacies involved in the response I thought it would make for a good blog post.

The Problem: Schema On Read

Imagine I have a central table that keeps track of warnings and errors for my burrito ordering app:

DROP TABLE IF EXISTS dbo.BurritoAppLog;
GO

CREATE TABLE dbo.BurritoAppLog 
( 
    Id int IDENTITY PRIMARY KEY,
    ErrorDetails nvarchar(1000)
); 
GO 

INSERT INTO dbo.BurritoAppLog VALUES (N'{"Type":"Warning", "MessageId": 100, "Severity": "High", "Information":"Running low on steak." }'); 
INSERT INTO dbo.BurritoAppLog VALUES (N'{"Type":"Warning", "MessageId": 50, "Severity": "Low", "Information":"Running low on queso." }');
GO 4000
INSERT INTO dbo.BurritoAppLog VALUES (N'{"Type":"Error", "MessageId": 10, "User":"Bert", "ErrorMessage":"Lettuce not available." }'); 
INSERT INTO dbo.BurritoAppLog VALUES (N'{"Type":"Error", "MessageId": 20, "User":"Jim", "ErrorMessage":"Cannot wrap burrito with quadruple meat." }'); 
GO 100


Now imagine wanting to generate a report of only the rows that are errors.

Obviously, you'd want to index this data for faster querying performance.  Adding a non-clustered index on a non-persisted computed column of our JSON "Type" property will accomplish that:

ALTER TABLE dbo.BurritoAppLog 
ADD ErrorType AS JSON_VALUE(ErrorDetails, '$.Type');

ALTER TABLE dbo.BurritoAppLog 
ADD MessageId AS JSON_VALUE(ErrorDetails, '$.MessageId');

CREATE INDEX IX_ErrorType ON dbo.BurritoAppLog (ErrorType) INCLUDE (MessageId);

SELECT MessageId FROM dbo.BurritoAppLog WHERE ErrorType = 'Error'

And that works great.  Except that error entries in our table make up only 2.5% of our total rows.  Assuming we'll never need to query WHERE ErrorType = 'Warning', this index is using a lot of unnecessary space.
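If you want to verify that ratio for yourself, a quick sanity-check query (just a sketch) that groups the rows by their JSON "Type" property looks like this:

SELECT
    JSON_VALUE(ErrorDetails, '$.Type') AS ErrorType,
    COUNT(*) AS TotalRows
FROM dbo.BurritoAppLog
GROUP BY JSON_VALUE(ErrorDetails, '$.Type');
-- 8000 'Warning' rows vs. 200 'Error' rows (~2.5% errors)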

So what if we create a filtered index instead?

Filtered JSON Indexes...

A filtered index should benefit us significantly here: it should save us space (since it won't include all of those warning rows) and it should make our INSERT queries into this table faster since the index won't need to be maintained for our non-"Error" rows.

So let's create a filtered index:

CREATE INDEX FX_ErrorType ON dbo.BurritoAppLog (ErrorType) INCLUDE (MessageId) WHERE ErrorType = 'Error'

Oh.


So I guess we can't create a filtered index where the filter is on a computed column.  Maybe SQL Server won't mind if we persist the computed column?

DROP INDEX IX_ErrorType ON dbo.BurritoAppLog

ALTER TABLE dbo.BurritoAppLog
DROP COLUMN ErrorType;

ALTER TABLE dbo.BurritoAppLog 
ADD ErrorType AS JSON_VALUE(ErrorDetails, '$.Type') PERSISTED;

CREATE INDEX FX_ErrorType ON dbo.BurritoAppLog (ErrorType) INCLUDE (MessageId) WHERE ErrorType = 'Error'

NOOOOOOPPPPEEEE.  Same error message.

The issue is that SQL Server does not like computed columns, persisted or not, in a filtered index's WHERE clause.  It's one of the many limitations of filtered indexes (Aaron Bertrand has a great post outlining many of the shortcomings).

Computed Column Filtered Index Workaround

What is a performance minded, space-cautious, JSON-loving developer supposed to do?

One workaround to get our filtered index would be to parse our ErrorType property into its own table column on insert:

ALTER TABLE dbo.BurritoAppLog 
ADD PermanentErrorType varchar(10);

UPDATE dbo.BurritoAppLog SET PermanentErrorType = JSON_VALUE(ErrorDetails, '$.Type');
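Going forward, new rows also need PermanentErrorType populated at insert time.  A hypothetical sketch of what the application's insert could look like (the @json variable and its contents are made up for illustration):

DECLARE @json nvarchar(1000) = N'{"Type":"Error", "MessageId": 30, "User":"Bert", "ErrorMessage":"Out of guacamole." }';

INSERT INTO dbo.BurritoAppLog (ErrorDetails, PermanentErrorType)
VALUES (@json, JSON_VALUE(@json, '$.Type'));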


With our PermanentErrorType column in place, we have no problem generating our filtered index:

CREATE INDEX FX_PermanentErrorType ON dbo.BurritoAppLog (PermanentErrorType) INCLUDE (MessageId) WHERE PermanentErrorType = 'Error'

If we compare the sizes of our nonclustered index to our filtered index, you'll immediately see that the filtered index is significantly smaller.
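If you want to check the sizes yourself, a query along these lines (a sketch using sys.dm_db_partition_stats) lists how much space each index on the table is using:

SELECT
    i.name AS IndexName,
    SUM(ps.used_page_count) * 8 AS UsedSpaceKB
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
    AND i.index_id = ps.index_id
WHERE ps.object_id = OBJECT_ID('dbo.BurritoAppLog')
GROUP BY i.name;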


However, our table size is now slightly larger because of the added table column.

Conclusion

So what do you do if you run into this situation?  Well, if the ratio of undesired records to desired records is large like in the example above, you might want to make a permanent column to include in your filtered index - the size/performance benefit is certainly there.  This does mean that your table size will be larger (additional column) but performance will be faster if your queries are able to use the smaller filtered index.

The Forgotten Fourth SQL Server Recovery Model

Watch this week's video on YouTube

SQL Server recovery models control how transactions are written to the transaction log, whether the log can be backed up, and what restore options are available.  Understanding these models is critical for backup and recovery purposes as well as for understanding how their behaviors impact query performance.

Today we'll examine the differences between SQL Server's three official recovery models as well as an unofficial "fourth" recovery model that won't help in backup/recovery, but will help in performance of certain processes.

Full Recovery

Full recovery is the only recovery model that can potentially save all of your data when something bad happens (NOTE: "potentially" because if you aren't taking backups frequently enough and/or testing your backups, you might still experience data loss).

Under the full recovery model, every transaction is written to the transaction log first, and then persisted to the actual database.  This means if something disastrous happens to your server, as long as the change made its way into the transaction log AND your transaction log is readable AND your previous full/differential/log backups can be restored, you shouldn't experience any data loss (there are a lot of assumptions made with that statement though, so don't use this post as your only data loss prevention guide).

From a performance standpoint, full recovery is the slowest of the bunch because every transaction needs to be logged, and that creates some overhead.  It might be good for your OLTP databases, but maybe not so much for your analytical staging databases (assuming you can recreate that data).
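To check which recovery model your databases are currently using, or to switch one, something like this works (a sketch with a hypothetical database name):

-- Check the current recovery model of every database
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch a (hypothetical) database to full recovery
ALTER DATABASE YourDatabase SET RECOVERY FULL;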

Simple Recovery

Some people incorrectly believe that simple recovery means no writing to the transaction log (need proof that a database in simple recovery still writes to the trans log?  Try running ROLLBACK TRANSACTION after a huge delete).  What it actually means is that the transaction log is cleared as soon as SQL Server is done using it and the data has made its way to disk.
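Here's a minimal sketch of that proof, assuming a throwaway table in a database using simple recovery - the rollback only works because the delete was still written to the transaction log:

BEGIN TRANSACTION;

    -- hypothetical table; this delete is still fully logged even under simple recovery
    DELETE FROM dbo.SomeBigTable;

ROLLBACK TRANSACTION; -- every row comes back, courtesy of the transaction log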

Since the transaction log is cleared regularly, your overall log size can stay smaller because the space is regularly reused.  Additionally, since that space is cleared, you can't (and don't have to) back up the log.

No persistence of the transaction log means you won't be able to recover all of your data in case of server failure though.  This is generally OK if you are using simple recovery in databases where it's easy to recreate any data since your last full backup (e.g. staging data where you can easily redo the transactions that were lost).

Simple recovery also allows eligible bulk operations to be minimally logged, making throughput faster.  This works well in staging databases and for ETLs where data is in flux and can be easily recreated.

Bulk-Logged Recovery

If Goldilocks thinks the full recovery model has too much logging, and the simple model not enough logging, then she'll find the amount of logging in the bulk-logged recovery model to be just right.

Under bulk-logged, most transactions are fully logged, allowing for data restoration of those fully logged transactions if the need arises.  Bulk operations, however, are minimally logged, allowing for better performance of things like bulk inserts (at the cost of some restore flexibility).

While restorations under the bulk-logged recovery model aren't as flexible as full recovery (e.g. if a transaction log backup contains any bulk operations, you have to restore that whole log backup instead of stopping at a specific point in time), it does allow full logging for when most people need it and minimal logging for when most people don't.  Meaning for certain situations you can have your cake and eat it too!
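A common pattern is to switch to bulk-logged just for the duration of a big load and then switch back (a sketch with hypothetical database, table, and file names):

ALTER DATABASE StagingDB SET RECOVERY BULK_LOGGED;

-- Minimally logged bulk load
BULK INSERT dbo.StagingTable
FROM 'C:\Loads\data.csv'
WITH (TABLOCK);

ALTER DATABASE StagingDB SET RECOVERY FULL;

-- Back up the log so point-in-time restore is available again going forward
BACKUP LOG StagingDB TO DISK = 'C:\Backups\StagingDB_log.trn';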

The Fourth Unofficial Recovery Model: In-Memory SCHEMA_ONLY Durability

The SCHEMA_ONLY durability setting on a memory optimized table isn't a recovery model.  But it does behave a little bit like a recovery model in the sense that it defines how operations against your memory optimized table interact with your transaction log:

They don't.  Well, almost.

And that's the beauty of it, at least from a performance standpoint.  If you are willing to trade off the ability to recover data for performance, then the SCHEMA_ONLY durability fits the bill - so long, transaction log overhead.

So while none of the official recovery models allow you to prevent writing to the transaction log, the SCHEMA_ONLY durability setting does!

In-Memory OLTP: A Case Study

Watch this week's video on YouTube

When In-Memory OLTP was first released in SQL Server 2014, I was excited to start using it.  All I could think was "my queries are going to run so FAST!"

Well, I never got around to implementing In-Memory OLTP.  Besides having an incompatible version of SQL Server at the time, the in-memory features had too many limitations for my specific use-cases.

Fast forward a few years, and I've done nothing with In-Memory OLTP.  Nothing that is until I saw Erin Stellato present at our Northern Ohio SQL Server User Group a few weeks ago - her presentation inspired me to take a look at In-Memory OLTP again to see if I could use it.

Use case: Improving ETL staging loads

After being refreshed on the ins and outs of in-memory SQL Server, I wanted to see if I could apply some of the techniques to one of my ETLs.

The ETL consists of two major steps:

  1. Shred documents into row/column data and then dump that data into a staging table.
  2. Delete some of the documents from the staging table.

In the real world, there's a step 1.5 that does some processing of the data, but it's not relevant to these in-memory OLTP demos.

So step one was to create my staging tables.  The memory optimized table is called "NewStage1" and the traditional disk-based table is called "OldStage1":

DROP DATABASE IF EXISTS InMemoryTest;
GO
CREATE DATABASE InMemoryTest;
GO
USE InMemoryTest;
GO

ALTER DATABASE InMemoryTest ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO 
ALTER DATABASE InMemoryTest ADD FILE (name='imoltp_mod1', filename='C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA\imoltp_mod1') TO FILEGROUP imoltp_mod;
GO  
ALTER DATABASE InMemoryTest SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON;
GO 
ALTER DATABASE InMemoryTest SET RECOVERY SIMPLE
GO

--Numbers Table
-- This needs to be in-memory to be called from a natively compiled procedure
DROP TABLE IF EXISTS InMemoryTest.dbo.Numbers;
GO
CREATE TABLE InMemoryTest.dbo.Numbers
(
    n int
    INDEX ix_n NONCLUSTERED HASH (n) WITH (BUCKET_COUNT=400000)
)
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY);  
GO 

INSERT INTO dbo.Numbers (n)
SELECT TOP (4000000) n = CONVERT(INT, ROW_NUMBER() OVER (ORDER BY s1.[object_id]))
FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2
OPTION (MAXDOP 1);

-- Set up on-disk tables
DROP TABLE IF EXISTS InMemoryTest.dbo.OldStage1;
GO
CREATE TABLE InMemoryTest.dbo.OldStage1
(
    Id int,
    Col1 uniqueidentifier,
    Col2 uniqueidentifier,
    Col3 varchar(1000),
    Col4 varchar(50),
    Col5 varchar(50),
    Col6 varchar(50),
    Col7 int,
    Col8 int,
    Col9 varchar(50),
    Col10 varchar(900),
    Col11 varchar(900),
    Col12 int,
    Col13 int,
    Col14 bit
);
GO
CREATE CLUSTERED INDEX CL_Id ON InMemoryTest.dbo.OldStage1 (Id);
GO


--  Set up in-memory tables and natively compiled procedures
DROP TABLE IF EXISTS InMemoryTest.dbo.NewStage1;
GO
CREATE TABLE InMemoryTest.dbo.NewStage1
(
    Id int,
    Col1 uniqueidentifier,
    Col2 uniqueidentifier,
    Col3 varchar(1000),
    Col4 varchar(50),
    Col5 varchar(50),
    Col6 varchar(50),
    Col7 int,
    Col8 int,
    Col9 varchar(50),
    Col10 varchar(900),
    Col11 varchar(900),
    Col12 int,
    Col13 int,
    Col14 bit
    INDEX ix_id NONCLUSTERED HASH (id) WITH (BUCKET_COUNT=10)
)
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY);  
GO

A few things to keep in mind:

  • The tables have the same columns and datatypes, with the only difference being that the NewStage1 table is memory optimized.
  • My database is using simple recovery so I am able to perform minimal logging/bulk operations on my disk-based table.
  • Additionally, I'm using  the SCHEMA_ONLY durability setting.  This gives me outstanding performance because there is no writing to the transaction log!  However, this means if I lose my in-memory data for any reason (crash, restart, corruption, etc...) I am completely out of luck.  This is fine for my staging data scenario since I can easily recreate the data if necessary.
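If you want to see the transaction log difference for yourself, one option (just a sketch) is to check log space usage with sys.dm_db_log_space_usage before and after running the loads later in this post:

USE InMemoryTest;
GO
SELECT
    used_log_space_in_bytes / 1024 / 1024 AS UsedLogSpaceMB,
    used_log_space_in_percent
FROM sys.dm_db_log_space_usage;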

Inserting and deleting data

Next I'm going to create procedures for inserting and deleting my data into both my new and old staging tables:

DROP PROCEDURE IF EXISTS dbo.Insert_OldStage1;
GO
CREATE PROCEDURE dbo.Insert_OldStage1
    @Id int,
    @Rows int
AS
BEGIN
    INSERT INTO InMemoryTest.dbo.OldStage1 (Id, Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col13, Col14)
    SELECT Id, Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col13, Col14
    FROM
    (
    SELECT
    @Id as Id,
    '92D14DA3-2C55-4E50-A965-7D3C941417B3' as Col1,
    '92D14DA3-2C55-4E50-A965-7D3C941417B3' as Col2,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col3,
    'aaaaaaaaaaaaaaaaaaaa' as Col4,
    'aaaaaaaaaaaaaaaaaaaa' as Col5,
    'aaaaaaaaaaaaaaaaaaaa' as Col6,
    0 as Col7,
    0 as Col8,
    'aaaaaaaaaaaaaaaaaaaa' as Col9,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col10,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col11,
    1 as Col12,
    1 as Col13,
    1 as Col14
    )a
    CROSS APPLY
    (
    SELECT TOP (@Rows) n FROM dbo.Numbers
    )b
END

DROP PROCEDURE IF EXISTS dbo.Delete_OldStage1;
GO
CREATE PROCEDURE dbo.Delete_OldStage1
    @Id int
AS
BEGIN
    -- Use loop to delete to prevent filling transaction log
    DECLARE 
        @Count int = 0,
        @for_delete int,
        @chunk_size int = 1000000

    SELECT @for_delete = COUNT(Id) FROM InMemoryTest.dbo.OldStage1 
                            WHERE Id = @Id;

    WHILE (@Count < @for_delete)
    BEGIN
        SELECT @Count = @Count + @chunk_size;

        BEGIN TRAN
            DELETE TOP(@chunk_size) FROM InMemoryTest.dbo.OldStage1 WHERE Id = @Id
        COMMIT TRAN
    END
END;
GO



DROP PROCEDURE IF EXISTS dbo.Insert_NewStage1;
GO
CREATE PROCEDURE dbo.Insert_NewStage1
    @Id int,
    @Rows int
WITH NATIVE_COMPILATION, SCHEMABINDING  
AS   
BEGIN ATOMIC   
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')  

    INSERT INTO dbo.NewStage1 (Id, Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col13, Col14)
    SELECT Id, Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11, Col12, Col13, Col14 
    FROM
    (
    SELECT
    @Id as Id,
    '92D14DA3-2C55-4E50-A965-7D3C941417B3' as Col1,
    '92D14DA3-2C55-4E50-A965-7D3C941417B3' as Col2,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col3,
    'aaaaaaaaaaaaaaaaaaaa' as Col4,
    'aaaaaaaaaaaaaaaaaaaa' as Col5,
    'aaaaaaaaaaaaaaaaaaaa' as Col6,
    0 as Col7,
    0 as Col8,
    'aaaaaaaaaaaaaaaaaaaa' as Col9,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col10,
    'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' as Col11,
    1 as Col12,
    1 as Col13,
    1 as Col14
    )a
    CROSS APPLY
    (
    SELECT TOP (@Rows) n FROM dbo.Numbers
    )b


END;   
GO  

DROP PROCEDURE IF EXISTS dbo.Delete_NewStage1;
GO
CREATE PROCEDURE dbo.Delete_NewStage1
    @Id int 
WITH NATIVE_COMPILATION, SCHEMABINDING  
AS   
BEGIN ATOMIC   
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')  

    DELETE FROM dbo.NewStage1 WHERE Id = @Id;

END   
GO  

A few more things to note:

  • My new procedures are natively compiled: SQL Server compiles them up front so at run time it can just execute without any extra steps.  The procedures that target my old disk-based tables are interpreted at execution time instead.
  • In the old delete procedure, I am deleting data in chunks so my transaction log doesn't get full.  In the new version of the procedure, I don't have to worry about this because, as I mentioned earlier, my memory optimized table doesn't have to use the transaction log.

Let's simulate a load

It's time to see if all of this fancy in-memory stuff is actually worth all of the restrictions.

In my load, I'm going to mimic loading three documents with around 3 million rows each.  Then, I'm going to delete the second document from each table:

-- Old on-disk method
-- Insert data for processing
EXEC InMemoryTest.dbo.Insert_OldStage1 @Id=1, @Rows=2500000;
GO
EXEC InMemoryTest.dbo.Insert_OldStage1 @Id=2, @Rows=3400000;
GO 
EXEC InMemoryTest.dbo.Insert_OldStage1 @Id=3, @Rows=2800000;
GO 

-- Delete set of records after processed
EXEC InMemoryTest.dbo.Delete_OldStage1 @Id = 2
GO

-- New in-memory method
-- Insert data for processing
EXEC InMemoryTest.dbo.Insert_NewStage1 @Id=1, @Rows=2500000;
GO
EXEC InMemoryTest.dbo.Insert_NewStage1 @Id=2, @Rows=3400000;
GO 
EXEC InMemoryTest.dbo.Insert_NewStage1 @Id=3, @Rows=2800000;
GO 

-- Delete set of records after processed
EXEC InMemoryTest.dbo.Delete_NewStage1 @Id = 2
GO
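If you want to measure these timings yourself, a simple sketch is to capture a timestamp around each call (repeat for each insert and delete):

DECLARE @start datetime2 = SYSDATETIME();

EXEC InMemoryTest.dbo.Insert_OldStage1 @Id=1, @Rows=2500000;

SELECT DATEDIFF(SECOND, @start, SYSDATETIME()) AS ElapsedSeconds;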

The in-memory version should have a significant advantage because:

  1. The natively compiled procedure is precompiled (shouldn't be a huge deal here since we are doing everything in a single INSERT INTO...SELECT).
  2. The in-memory table inserts/deletes don't have to write to the transaction log (this should be huge!)

Results

|                    | Disk-based   | In-Memory     |
|--------------------|--------------|---------------|
| INSERT 3 documents | 65 sec       | 6 sec         |
| DELETE 1 document  | 46 sec       | 0 sec         |
| Total time         | 111 sec      | 6 sec         |
| Difference         | -95% slower  | 1750% faster  |

The results speak for themselves.  In this particular example, in-memory blows the disk-based solution out of the water.

Obviously there are downsides to in-memory (like consuming a lot of memory) but if you are going for pure speed, there's nothing faster.
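If you're curious how much memory your memory-optimized tables are actually consuming, a query like this (a sketch using sys.dm_db_xtp_table_memory_stats, run in the InMemoryTest database) can tell you:

SELECT
    OBJECT_NAME(object_id) AS TableName,
    memory_used_by_table_kb,
    memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats;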

Warning! I am not you.

And you are not me.

While in-memory works great for my ETL scenario, there are many requirements and limitations.  It's not going to work in every scenario.  Be sure you understand the in-memory durability options to prevent any potential data loss and try it out for yourself!  You might be surprised by the performance gains you'll see.