Behind the Scenes of an Online Presentation

Charlie is an attentive audience member

Last week I presented my session “DBAs vs Developers: JSON in SQL Server 2016” at the online GroupBy Conference.

As I prepared for the event, I thought about all of the things that were different about getting ready for an online versus an in-person event.

Thinking that others might be interested in seeing what I do to get ready for an online talk, I filmed myself as I prepared for presentation day and put together this “behind the scenes” video.

Check it out, along with my actual talk on JSON in SQL Server 2016, in the videos below!

Slides and demo code from the presentation are available at https://bertwagner.com/presentations/

And the presentation video itself:

And slide deck:

 

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

5 Things You Need to Do When Performance Testing JSON in SQL and C#

You can watch this blog post on YouTube too!

Want to learn more about using JSON in SQL Server? Watch me present at the online GroupBy conference on June 9, 2017 at 8am.

I’ve written a few articles this year about how awesome JSON performance is in SQL Server 2016.

The more I continue to use JSON in SQL Server, the more impressed I become with its speed and versatility. Over time I’ve learned new techniques and realize that JSON in SQL Server is actually much faster than I initially thought.

Today I want to correct some performance tests where I think I unfairly compared SQL Server JSON performance the first time around.

Major thanks to @JovanPop_MSFT for his help with performance testing suggestions.

Performance testing is hard

Before I dive into the performance tests, I want to be clear that these tests are still not perfect.

Performance testing in SQL Server is hard enough. When you start trying to compare SQL Server functions to code in .NET, lots of other factors come into play.

I’ll try to highlight where there still might be some problems with my methodology in the tests below, but overall I think these tests are more accurate comparisons of these features.

SQL Server JSON vs. Json.Net

There are two major issues with comparing SQL Server JSON functions to Json.NET functions in C#:

  1. Queries running in SQL Server Management Studio have significant overhead when rendering results to the results grid.
  2. The way SQL Server retrieves pages of data from disk or memory is not the same as how C# retrieves data from disk or memory.

The below tests should provide a more accurate comparison between SQL Server and .NET.

I am capturing SQL run times for the below tests using SET STATISTICS TIME ON. All of the test data for the below tests is available here: https://gist.github.com/bertwagner/f0645cf1b244af7d6bb75856db8744e0
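
For reference, the timing pattern is just a minimal harness like this (the query under test is whatever each section below describes):

    SET STATISTICS TIME ON;

    -- query under test goes here

    SET STATISTICS TIME OFF;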

Test #1 — Deserializing 20k JSON elements

For this first test, we will deserialize ~20k rows of car year-make-model data comparing the SQL Server OPENJSON function against Json.NET’s DeserializeObject.

Previously this test used JSON_VALUE which was adding unnecessary processing overhead. With the query rewritten to run more efficiently, it looks like this:
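
The original gist isn’t embedded here, but a minimal sketch of the rewritten query looks something like this (the variable name, property names, and sample rows are my assumptions; the real ~20k element array is in the gist above):

    -- Two sample elements stand in for the ~20k element array from the gist.
    DECLARE @carsJson NVARCHAR(MAX) = N'[
        {"year":2017,"make":"Ford","model":"Fusion"},
        {"year":2016,"make":"Subaru","model":"Impreza"}
    ]';

    -- OPENJSON deserializes every element in a single pass instead of
    -- calling JSON_VALUE once per property.
    SELECT CarYear, Make, Model
    FROM OPENJSON(@carsJson)
    WITH (
        CarYear INT          '$.year',
        Make    NVARCHAR(50) '$.make',
        Model   NVARCHAR(50) '$.model'
    );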

Now the problem with this query is that we are still drawing all ~20k rows of data to the screen in SQL Server Management Studio. The best way to avoid this extra processing is to simply convert the query to use COUNT:
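
And the COUNT version, which keeps OPENJSON doing the same parsing work while skipping the grid rendering (same assumed names as above):

    -- Aggregating avoids streaming ~20k rows to the SSMS results grid.
    SELECT COUNT(*) AS RowsParsed
    FROM OPENJSON(@carsJson)
    WITH (
        CarYear INT          '$.year',
        Make    NVARCHAR(50) '$.make',
        Model   NVARCHAR(50) '$.model'
    );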

Looking at the execution plans, the OPENJSON function is still processing all ~20k rows in both queries; only the number of rows being brought back to the SSMS GUI differs.

This still isn’t the same as what the C# test below does (all data in the C# example stays in memory at all times), but it is as close a comparison as I could come up with:

(Full C# code available at: https://gist.github.com/bertwagner/8e5e8b6ec977c1704355166f96ae3efa)

And the result of this test? SQL Server was nearly as fast as Json.NET!

Test #2 — Deserializing ~20k rows with a predicate

In this next test we filter and return only a subset of rows.

SQL:
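
The SQL half, sketched with the same assumed shape as Test #1 (the actual predicate in the gist may differ):

    -- Filter during deserialization; only matching rows come back.
    SELECT COUNT(*) AS MatchingRows
    FROM OPENJSON(@carsJson)
    WITH (
        CarYear INT          '$.year',
        Make    NVARCHAR(50) '$.make',
        Model   NVARCHAR(50) '$.model'
    )
    WHERE Make = N'Ford';  -- hypothetical predicate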

C#

Result: SQL Server is nearly as fast once again!

One more important thing to note about this specific test — if you add this data into a SQL table and add a computed column index, SQL Server will beat out Json.NET every time.

Test #3 — Serializing ~20k elements into JSON

This scenario is particularly difficult to test. If I want to serialize data in a SQL table to a JSON string, how do I write the equivalent of that in C#? Do I use a DataTable and hope that SQL’s data is all in cache? Is retrieval speed from the SQL Server buffer pool equivalent to reading from a C# DataTable? Would a collection of Lists in C# be more appropriate than a DataTable?

In the end, I decided to force SQL to read pages from disk by clearing the cache and have C# read the object data from a flat file. This still isn’t perfect, but it is as close as I think we can get:

SQL:
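
A sketch of the SQL half (the table and column names are my assumptions):

    -- Force the next read to come from disk; test servers only!
    DBCC DROPCLEANBUFFERS;

    -- Serialize the whole table back into a JSON array.
    SELECT CarYear, Make, Model
    FROM dbo.Cars
    FOR JSON PATH;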

C#:

This test still isn’t perfect though because SSMS is outputting the JSON string to the screen while C# never has to. I didn’t want to play around with outputting the C# version to a form or the console window because it still wouldn’t have been an equal comparison.

Result: Json.NET is about twice as fast, although this test is by far the most inaccurate. Even so, SQL is much faster than I initially thought.

SQL Server JSON vs. XML

In my previous article comparing SQL Server JSON to SQL Server XML, I focused on tests that were all done within SQL Server.

These tests were incomplete, however: most of the time, a developer’s app will have to do additional processing to get data into an XML format, while JSON data usually already exists in JSON format (assuming we have a JavaScript web app).

These two tests examine cases where XML may have been slightly faster than JSON on SQL Server, but if you consider the entire environment (app + database), using JSON wins.

Scenario #1 — XML data needs to be serialized

Although inserting XML data that is already in memory into a SQL Server table is faster than the equivalent operation in JSON, what happens if we need to serialize the data in our app first before sending the data to SQL Server?

Using the most common libraries available for serializing data to XML and JSON, serializing data to JSON is twice as fast as serializing to XML (and as mentioned before, a lot of the time apps already have JSON data available — no need to serialize). This means the app serialization code will run faster and the data will make it to SQL Server sooner.

Scenario #2 — Transferring XML and JSON to SQL Server

Finally, after we have our serialized XML and JSON data in C#, how long does it take to transfer that data to SQL Server?
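
On the SQL side, the landing targets for this test could look like this sketch (table and column names are mine):

    -- Hypothetical landing tables for the transfer test.
    CREATE TABLE dbo.XmlImport  (Payload XML);
    CREATE TABLE dbo.JsonImport (Payload NVARCHAR(MAX));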

Result: Writing JSON data to a nvarchar SQL Server column is much faster than writing XML data to an XML typed (or even an nvarchar typed) column.

Not only does SQL Server need to parse the XML data upon insert, the physical size of the XML data being sent over TCP is larger due to the repetitive nature of XML syntax.

Conclusion

JSON performance in SQL Server is still awesome. In fact, it’s even better than I had previously thought.

These tests are not meant to be conclusive; think of them more as errata for my previous JSON performance posts.

However, I think that these comparisons show that SQL Server’s JSON functions are competitive with other languages’ performance of handling JSON data.

Additionally, if serializing/deserializing reduces the amount of data transferred over TCP, using the JSON functions in SQL Server will most likely give you better total app/environment performance.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

XML vs JSON Shootout: Which is Superior in SQL Server 2016?

“A duel is a duel” by Emanuele Rosso is licensed under CC BY-NC-ND 2.0

You can watch this blog post on YouTube too!

Additional performance comparisons available in an updated post.

Starting with the 2016 release, SQL Server offers native JSON support. Although the implementation is not perfect, I am still a huge fan.

Even if a new feature like JSON support is awesome, I am only likely to use it if it is practical and performs better than the alternatives.

Today I want to pit JSON against XML and see which is the better format to use in SQL Server.

Enter XML, SQL’s Bad Hombre

Full disclosure: I don’t love XML and I also don’t love SQL Server’s implementation of it.

XML is too wordy (lots of characters wasted on closing tags), it has elements AND attributes (I don’t like having to program for two different scenarios), and depending on what language you are programming in, sometimes you need schema files and sometimes you don’t.

SQL Server’s implementation of XML does have some nice features like a dedicated datatype that reduces storage space and validates syntax, but I find the querying of XML to be clumsy.

All XML grievances aside, I am still willing to use XML if it outperforms JSON. So let’s run some test queries!

Is JSON SQL Server’s New Sheriff in Town?

Although performance is the final decider in these comparison tests, I think JSON has a head start over XML purely in terms of usability. SQL Server’s JSON function signatures are easier to remember and cleaner to write on screen.

The test data I’m using is vehicle year/make/model data from https://github.com/arthurkao/vehicle-make-model-data. Here’s what it looks like once I loaded it into a table called dbo.XmlVsJson:
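
The screenshot isn’t reproduced here, but the table essentially holds the full dataset once in each format; a sketch of the definition (column names are my assumptions):

    CREATE TABLE dbo.XmlVsJson
    (
        Id       INT IDENTITY PRIMARY KEY,
        XmlData  XML,            -- native XML datatype
        JsonData NVARCHAR(MAX)   -- JSON is stored as plain nvarchar
    );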

(The full data query is available in this gist if you want to play along at home)

Data Size

So XML should be larger, right? It’s got all of those repetitive closing tags?

Turns out the XML is actually smaller! How can this be? This is the magic behind the SQL Server XML datatype. SQL doesn’t store XML as a giant string; it stores only the XML InfoSet, leading to a reduction in space.

The JSON on the other hand is stored as regular old nvarchar(max) so its full string contents are written to disk. XML wins in this case.
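
You can check the sizes yourself with DATALENGTH (using my assumed column names):

    -- DATALENGTH reports the stored bytes: the XML column reflects the
    -- compact InfoSet representation, the JSON column the full string.
    SELECT
        DATALENGTH(XmlData)  AS XmlBytes,
        DATALENGTH(JsonData) AS JsonBytes
    FROM dbo.XmlVsJson;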

INSERT Performance

So XML physically stores less data in the XML data type than JSON does in the nvarchar(max) data type. Does that mean it will insert faster as well? Here’s our query that tries to insert 100 duplicates of the row from our first query:
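
A sketch of that insert (GO <n> is the SSMS trick that repeats a batch; names assumed as before):

    SET STATISTICS TIME ON;

    -- Re-insert the original row's XML 100 times...
    INSERT INTO dbo.XmlVsJson (XmlData)
    SELECT XmlData FROM dbo.XmlVsJson WHERE Id = 1;
    GO 100

    -- ...and the original row's JSON 100 times.
    INSERT INTO dbo.XmlVsJson (JsonData)
    SELECT JsonData FROM dbo.XmlVsJson WHERE Id = 1;
    GO 100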

And the results? Inserting the 100 XML rows took 613ms on my machine, while inserting the 100 JSON rows took 1305ms…XML wins again!

JSON ain’t looking too hot. Wait for it…

My guess is that since the XML data type physically stores less data, it also gets written out to the table faster.

CRUD Operations

I’m incredibly impressed by SQL Server’s JSON performance when compared to .NET — but how does it compare to XML on SQL Server?

Read

Let’s select the fragment for our second car from our XML and JSON:
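
Roughly like this (assuming a "cars" root and my column names; note that JSON paths are 0-based while XPath is 1-based):

    SELECT
        XmlData.query('/cars/car[2]')      AS XmlFragment,   -- XPath: 1-based
        JSON_QUERY(JsonData, '$.cars[1]')  AS JsonFragment   -- JSON path: 0-based
    FROM dbo.XmlVsJson;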

Result? JSON wins (at 0ms vs 63ms for XML) when needing to pluck out a fragment from our larger object string.

What if we want to grab a specific value instead of a fragment?
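
Same idea, but plucking a scalar (again a sketch with assumed names):

    SELECT
        XmlData.value('(/cars/car[2]/model)[1]', 'NVARCHAR(100)') AS XmlModel,
        JSON_VALUE(JsonData, '$.cars[1].model')                   AS JsonModel
    FROM dbo.XmlVsJson;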

Once again JSON wins with 0ms vs 11ms for XML.

If you look at the execution plans for these last two queries, it’s easy to see that XML has a lot more to do behind the scenes to retrieve the data:

XML:

JSON:

Create

We saw above that inserting rows of XML data is faster than inserting rows of JSON, but what if we want to insert new data into the object strings themselves? Here I want to insert the property “mileage” into the first car object:
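
A sketch of both inserts (the starting mileage value is my placeholder):

    -- JSON: a new path in lax mode simply gets added.
    UPDATE dbo.XmlVsJson
    SET JsonData = JSON_MODIFY(JsonData, '$.cars[0].mileage', 100000);

    -- XML: XML DML "insert ... into" adds the new element.
    UPDATE dbo.XmlVsJson
    SET XmlData.modify('insert <mileage>100000</mileage>
                        into (/cars/car[1])[1]');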

In addition to the cleaner syntax (JSON_MODIFY() is essentially the same as a REPLACE()), the JSON insert runs in 22ms compared to 206ms for XML. Another JSON win.

Update

Let’s update the mileage properties we just added to have values of 110,000:
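
Sketched as:

    UPDATE dbo.XmlVsJson
    SET JsonData = JSON_MODIFY(JsonData, '$.cars[0].mileage', 110000);

    UPDATE dbo.XmlVsJson
    SET XmlData.modify('replace value of (/cars/car[1]/mileage/text())[1]
                        with "110000"');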

Result? JSON has the quicker draw and was able to perform this update in 54ms vs XML’s 194ms.

Delete

Deleting large string data, a DBA’s dream *snicker*.

Let’s delete the mileage property, undoing all of that hard work we just did:
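
Sketched as:

    -- JSON: NULL (in lax mode) removes the property.
    UPDATE dbo.XmlVsJson
    SET JsonData = JSON_MODIFY(JsonData, '$.cars[0].mileage', NULL);

    -- XML: XML DML delete removes the element.
    UPDATE dbo.XmlVsJson
    SET XmlData.modify('delete /cars/car[1]/mileage');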

JSON doesn’t take any time to reload, winning against XML once again, 50ms to 159ms.

Read Part 2: Indexes

So above we saw that JSON was faster than XML at reading fragments and properties from a single row of serialized data. But our SQL Servers probably have LOTS of rows of data — how well does indexed data parsing do in our matchup?

First let’s expand our data — instead of storing all of our car objects in a single field, let’s build a new table that has each car on its own row:

(once again, full dataset at GitHub if you are playing along at home)

Now that we have our expanded data in our table, let’s add some indexes. The XML datatype in SQL Server has its own types of indexes, while JSON simply needs a computed column with a regular index applied to it.

(Note: I also tried adding an XML secondary index for even better performance, but I couldn’t get the query engine to use that secondary index on such a basic dataset)
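
A sketch of both indexing approaches on the expanded table (all names here are mine; the real definitions are in the gist):

    -- JSON: computed column + ordinary nonclustered index.
    ALTER TABLE dbo.CarRows
        ADD JsonMake AS JSON_VALUE(JsonData, '$.make');

    CREATE NONCLUSTERED INDEX IX_CarRows_JsonMake
        ON dbo.CarRows (JsonMake);

    -- XML: the XML datatype gets its own index type
    -- (requires a clustered primary key on the table).
    CREATE PRIMARY XML INDEX PXML_CarRows_XmlData
        ON dbo.CarRows (XmlData);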

If we try to find all rows that match a predicate:
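
Something along these lines (the make is a placeholder):

    SELECT COUNT(*)
    FROM dbo.CarRows
    WHERE JsonMake = N'Subaru';                    -- seeks the computed-column index

    SELECT COUNT(*)
    FROM dbo.CarRows
    WHERE XmlData.exist('/car[make="Subaru"]') = 1;  -- uses the XML index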

XML is able to filter down to the 96 matching rows in 200ms, while JSON accomplishes the same in 9ms. A final win for JSON.

Conclusion

If you need to store and manipulate serialized string data in SQL Server, there’s no question: JSON is the format of choice. Although JSON’s storage size is a little larger than its XML predecessor, SQL Server’s JSON functions outperform XML in speed in nearly all cases.

Is there enough performance difference to rewrite all of your old XML code to JSON? Probably not, but every case is different.

One thing is clear: new development should consider taking advantage of SQL Server’s new JSON functions.

 

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

One SQL Cheat Code For Amazingly Fast JSON Queries

How non-persisted computed column indexes make your JSON queries high performance

You can watch this blog post on YouTube too!

Recently I’ve been working with JSON in SQL Server 2016 a lot.

One of the hesitations many people have with using JSON in SQL Server is that they think querying it must be really slow — SQL is supposed to excel at relational data, not string parsing, right?

It turns out that performance is pretty good with the standalone SQL Server JSON functions. Even better is that it’s possible to make queries against JSON data run at ludicrous speeds by using indexes on JSON parsed computed columns. In this post I want to take a look at how SQL is able to parse* with such great performance.

*“Parse” here is actually a lie — it’s doing something else behind the scenes. You’ll see what I mean; keep reading!

Computed Columns in SQL Server

The only way to get JSON indexes working in SQL Server is to use a computed column. A computed column is basically a column that runs a function to calculate its values.

For example, let’s say we have a table with some car JSON data in it:
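
A minimal stand-in for that table (dbo.DealerInventory is the name used below; the JSON column, makes, and models are my placeholders):

    CREATE TABLE dbo.DealerInventory
    (
        Id         INT IDENTITY PRIMARY KEY,
        CarDetails NVARCHAR(MAX)
    );

    -- Three rows, matching the three rows shown in the page dumps below.
    INSERT INTO dbo.DealerInventory (CarDetails) VALUES
        (N'{"make":"Volkswagen","model":"Golf"}'),
        (N'{"make":"Subaru","model":"Impreza"}'),
        (N'{"make":"Ford","model":"Fusion"}');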

We can add a new computed column to the table, “Make”, which parses and extracts the Make property from each row’s JSON string:
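
Which is just:

    ALTER TABLE dbo.DealerInventory
        ADD Make AS JSON_VALUE(CarDetails, '$.make');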

By default, the above Make computed column is non-persisted, meaning its values are never stored to the database (persisted computed columns can also be created, but that’s a topic for a different time). Instead, every time a query runs against our dbo.DealerInventory table, SQL Server will calculate the value for each row.

The performance of this isn’t great — it’s essentially a scalar function running for each row of our output :(. However, when you combine a computed column with an index, something interesting happens.

Time to dive in with DBCC Page

DBCC PAGE is an undocumented SQL Server command that shows what the raw data stored in a SQL Server page looks like. Pages are the structures SQL Server uses to store its data on disk.

In the rest of this post we’ll be looking at how data pages (where the actual table data in SQL is stored) and index pages (where our index data is stored) are affected by non-persisted computed columns — and how they make JSON querying super fast.

First, let’s take a look at the existing data we have. We do this by first turning on trace flag 3604 and using DBCC IND to get the page ids of our data. Additional details on the column definitions in DBCC IND and DBCC PAGE can be found in Paul Randal’s blog post on the topic.
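
The commands look roughly like this (the database name is a placeholder; DBCC IND and DBCC PAGE are undocumented, so stick to a test box):

    DBCC TRACEON(3604);                              -- send DBCC output to the client
    DBCC IND('TestDB', 'dbo.DealerInventory', -1);   -- list all pages for the table

    -- Then dump the data page found above (file id 1 assumed here).
    DBCC PAGE('TestDB', 1, 305088, 3);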

If you look at the results above, row 2 contains our data page (indicated by PageType = 1) and the PagePID of that page is 305088 (if you are playing along at home, your PagePID is most likely something else). If we then look up that PagePID using DBCC PAGE we get something like this:

You can see our three rows of data highlighted in red. The important thing to note here is that our computed column of the parsed “Make” value is truly non-persisted and nowhere to be found, meaning it has to be generated for every row during query execution.

Now, what if we add an index to our non-persisted computed column and then run DBCC IND again:
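
Sketched as:

    CREATE NONCLUSTERED INDEX IX_DealerInventory_Make
        ON dbo.DealerInventory (Make);

    DBCC IND('TestDB', 'dbo.DealerInventory', -1);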

You’ll now notice that in addition to data page 305088 (PageType = 1), we also have an index page 305096 (PageType = 2). If we examine both the data page and the index page we see something interesting:

Nothing has changed with our data page:

But our index page contains the parsed values for our “Make” column:

What does this mean? I thought non-persisted computed columns aren’t saved to disk!

Exactly right: our non-persisted computed column “Make” isn’t saved to the data page on disk. However, if we create an index on our non-persisted computed column, the computed value is persisted on the index page!

This is basically a cheat code for indexing computed columns.

SQL will only compute the “Make” value on a row’s insert or update into the table (or during the initial index creation) — all future retrievals of our computed column will come from the pre-computed index page.

This is how SQL is able to parse indexed JSON properties so fast; instead of needing to do a table scan and parsing the JSON data for each row of our table, SQL Server can go look up the pre-parsed values in the index and return the correct data incredibly fast.

Personally, I think this makes JSON that much easier (and practical) to use in SQL Server 2016. Even though we are storing large JSON strings in our database, we can still index individual properties and return results incredibly fast.

 

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

[Video] JSON Usage and Performance in SQL Server 2016

Using JSON because you are lazy is not a good excuse!

Last night I had the privilege to present to the Ohio North SQL Server User Group about JSON in SQL Server 2016. There was a great crowd present (they laughed at all of my terrible jokes so how can they not be great!?) and I had a wonderful time sharing what I know about JSON.

Below you can find my video recording of the presentation as well as the slides and demo code.

Also worth highlighting is OnTopReplica, an open source piece of software I used that basically does picture-in-picture display of another window on your desktop. You can hear everyone get excited by it at 18 minutes and 30 seconds into the video.

Enjoy the resources below :). These as well as resources from other past presentations are available at https://bertwagner.com/presentations/ .

Video

Slides

Demo Scripts

https://bertwagner.com/presentations/JSON%20in%20SQL%20Server%202016%20-%20Bert%20Wagner.zip

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

Who Stuck These Letters In My DateTimes?

How to handle JSON DateTimes in SQL Server 2016

Parsing, creating, and modifying JSON in SQL Server 2016 is really easy. JSON dates and times are not.

Coming from a predominantly SQL background, the JSON DateTime format took some getting used to, especially when it came to converting SQL datetimes to JSON and vice versa.

The remainder of this post will get you well on your way to working with JSON date times in SQL Server.

Breakdown of JSON date/time

In SQL Server, datetime2’s format is defined as follows:
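
    YYYY-MM-DD hh:mm:ss[.nnnnnnn]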

JSON date time strings are defined like:
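
    YYYY-MM-DDTHH:mm:ss.sssZ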

Honestly, they look pretty similar. However, there are a few key differences:

  • JSON separates the date and time portion of the string with the letter T
  • The Z is optional and indicates that the datetime is in UTC (if the Z is left off, JavaScript defaults to UTC). You can also specify a different timezone by replacing the Z with a + or - along with HH:mm (e.g. -05:00 for Eastern Standard Time)
  • The precision of SQL’s datetime2 goes out to 7 decimal places; in JSON and JavaScript it only goes out to 3 places, so truncation may occur.

Now that we know the key differences between SQL datetime2 and JSON date time strings, let’s explore common transformations when working with JSON data in SQL.

Parsing JSON date time into SQL datetime2

The most common operation I perform with these new JSON functions is parsing, so let’s start with those. Let’s see how we can parse the date/times from JSON using SQL Server 2016’s JSON_VALUE() function:
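
A sketch (the sample JSON and property name are mine; going through datetimeoffset handles the trailing Z):

    DECLARE @json NVARCHAR(MAX) =
        N'{"createDate":"2017-03-28T12:45:00.000Z"}';

    -- datetimeoffset understands the Z suffix; cast again to drop the offset.
    SELECT CAST(CAST(JSON_VALUE(@json, '$.createDate') AS DATETIMEOFFSET)
                AS DATETIME2) AS CreateDate;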

Inserting SQL datetime2 into JSON

Taking date/time data out of JSON and into SQL was pretty easy. What about going the opposite direction and inserting SQL date/time data into JSON?
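
Two common options, sketched: FOR JSON converts date/time types to ISO 8601 automatically, and CONVERT style 126 produces ISO 8601 text for JSON_MODIFY:

    -- Option 1: FOR JSON emits ISO 8601 date/time strings on its own.
    SELECT SYSUTCDATETIME() AS createDate
    FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;

    -- Option 2: splice a converted value into existing JSON.
    SELECT JSON_MODIFY(N'{}', '$.createDate',
                       CONVERT(NVARCHAR(30), SYSUTCDATETIME(), 126));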

Modifying JSON date time with SQL

So we’ve seen how easy it is to parse and create JSON date/time strings, but what about modifying JSON data?
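
Modifying works the same way; a sketch with my sample data:

    DECLARE @json NVARCHAR(MAX) =
        N'{"createDate":"2017-03-28T12:45:00.000Z"}';

    SELECT JSON_MODIFY(@json, '$.createDate',
                       CONVERT(NVARCHAR(30), SYSUTCDATETIME(), 126));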


Overall, working with JSON dates/times is really easy using SQL Server 2016’s new JSON functions. Microsoft could have done a really bad job not following the ECMA standards, but they did a great job crossing their T’s and placing their Z’s.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

When Is It Appropriate To Store JSON in SQL Server?

Who needs a relational database when everything can be stored in a JSON string?

Every once in a while I hear some technologist say that relational databases are dead; instead, a non-table-based NoSQL storage format is the way of the future. SQL Server 2016 introduced JSON functionality, making it possible for some “non-SQL” data storage to make its way into the traditionally table-based SQL Server.

Does this mean all data in SQL Server going forward should be stored in long JSON strings? No, that would be a terrible idea. There are instances when storing JSON in SQL Server is a good choice though. In this post I want to lay out recommendations for when data should be stored as JSON and when it shouldn’t.

Databases Should Not Be Entirely Comprised Of JSON

The screenshot below is an example of what I think some developers would do if they were given free rein in SQL Server 2016:

Here we have an application database (“InventoryApp”) that consists of only a single table (“dbo.Data”) with three JSON NVARCHAR(MAX) columns representing all of the data required by the app. Relationships exist between Sales, Purchases, and Customers, but these are not defined on the database side.
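
A hypothetical recreation of that screenshot’s schema:

    CREATE TABLE dbo.Data
    (
        Sales     NVARCHAR(MAX),  -- JSON blob
        Purchases NVARCHAR(MAX),  -- JSON blob
        Customers NVARCHAR(MAX)   -- JSON blob
    );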

If you are from the world of relational SQL, you might not believe that anyone would design such a database structure. Believe me though, this is a realistic scenario. Entire companies (e.g. Firebase: https://firebase.google.com/) build their services around abstracting the database layer away from developers, essentially storing entire tables or databases in large JSON strings.

Many developers like storing data this way because it is easy to deserialize JSON strings into objects in their programming languages to use in their apps. They like the fact that with JSON they can have an infinitely changing storage schema (just add new keys, values, and arrays!) so if they need a new field for their app, they can just add it in, serialize the object to a JSON string, and store it again in the database.

Obviously, going completely “NoSQL” might make short term development easier/quicker, but using SQL Server 2016 to only store data this way is a travesty: there’s no way to use many of SQL Server’s amazing performance, schema definition and validation, and security features.

So when is it appropriate to store JSON in SQL Server?

Appropriate Use Case #1: Error Logging

Errors happen. When they do, it’s nice to be able to go back and look at the error message to see what happened.

The problem is that the structure of error messages isn’t always consistent. Sometimes only the value of a single property will help identify the cause of failure. Other times, something more complex fails and it would be nice to have all of the values of a complex object available to make troubleshooting easier.

This is where JSON steps in: in most programming languages, it is easy to convert error messages and run time values to a JSON object on error. And since error messages and data values change in structure depending on where they occur, it’s easy to dynamically turn any type of object into JSON data.

This data is perfect to store in SQL to be looked at later. None of these ideas are new — nvarchar(max) has been in SQL for a while now, and so programmers everywhere have been storing error information in that datatype.

With SQL Server 2016, it is now easier to examine and parse the error information directly in SQL Server Management Studio with the variety of JSON parsing functions available. No longer do programmers have to copy the code into some different tool — they can easily do it in SSMS.

Appropriate Use Case #2: Piloting Ideas

Most large workplaces have controls in place that prevent developers from making changes in production. In general this is a Good Idea™.

However, controls are sometimes too restrictive. For example, due to security restrictions, lack of server space, company politics, etc… developers are sometimes stuck developing in production. It’s an unfortunate fact of life. In those scenarios, developers have to go through hell if they have to elevate each database structure change every time they want to test something in production.

JSON to the rescue! The JSON in an nvarchar(max) column can easily be added to and modified to hold more data than it was originally intended to, all without any database structure change requests.

Now this is not an ideal situation. In fact, it’s a scenario that can add a lot of technical debt to the application long-term if not planned for.

However, if a “flexible” JSON column is built with eventual conversion to a traditional table structure in mind from the start, it’s actually simple for a developer to transition an entirely-JSON storage structure to a relational format later on. The key here is that the developer needs to have this conversion planned from day one.

Appropriate Use Case #3: Non-Analytical Data

Analytical data is SQL Server’s bread and butter. Need to store lots of data and be able to query against it all day long? No problem, there are a plethora of performance tuning options to make your queries run fast and efficiently.

However, sometimes not all data needs to be analyzed. Often an app might need to save some session data to a database temporarily — why bother creating all of the maintenance overhead of strict database schemas if the data will never be queried for analytical purposes? Another example might be a website’s dynamically created user profile settings. You can build normalized table(s) to store all of that data, but then you will be writing programming logic to normalize and denormalize your data out of the app.

If this data will not have to be searched, then why bother adding all of the overhead? Keep it in JSON and be done with it.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

The Ultimate SQL Server JSON Cheat Sheet

Examples for handling JSON data in SQL Server 2016+

This post is a reference of my examples for processing JSON data in SQL Server. For more detailed explanations of these functions, please see my post series on JSON in SQL Server 2016:

Additionally, the complete reference for SQL JSON handling can be found at MSDN: https://msdn.microsoft.com/en-us/library/dn921897.aspx


Parsing JSON

Getting string JSON data into a SQL readable form.

ISJSON()

Checks to see if the input string is valid JSON.
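
For example:

    SELECT ISJSON(N'{"make":"Volkswagen"}');  -- 1
    SELECT ISJSON(N'not json');               -- 0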

JSON_VALUE()

Extracts a specific scalar string value from a JSON string using JSON path expressions.
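
For example (sample JSON is mine):

    DECLARE @json NVARCHAR(MAX) = N'{"car":{"make":"Volkswagen","year":2017}}';

    SELECT JSON_VALUE(@json, '$.car.make');  -- Volkswagen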

Strict vs. Lax mode

If the JSON path cannot be found, determines if the function should return a NULL or an error message.
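
For example:

    DECLARE @json NVARCHAR(MAX) = N'{"car":{"make":"Volkswagen"}}';

    SELECT JSON_VALUE(@json, 'lax $.car.color');     -- NULL: path not found
    SELECT JSON_VALUE(@json, 'strict $.car.color');  -- raises an error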

JSON_QUERY()

Returns a JSON fragment for the specified JSON path.

This is useful to help filter an array and then extract values with JSON_VALUE():
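
For example:

    DECLARE @json NVARCHAR(MAX) =
        N'{"cars":[{"make":"Volkswagen"},{"make":"Subaru"}]}';

    -- Pull out a fragment...
    SELECT JSON_QUERY(@json, '$.cars[1]');  -- {"make":"Subaru"}

    -- ...then extract a scalar from it.
    SELECT JSON_VALUE(JSON_QUERY(@json, '$.cars[1]'), '$.make');  -- Subaru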

OPENJSON()

Returns a SQL result set for the specified JSON path. The result set includes columns identifying the datatypes of the parsed data.
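
For example:

    DECLARE @json NVARCHAR(MAX) =
        N'{"cars":[{"make":"Volkswagen","year":2017},{"make":"Subaru","year":2016}]}';

    -- The key / value / type columns describe each parsed element.
    SELECT [key], [value], [type]
    FROM OPENJSON(@json, '$.cars');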

Creating JSON

Creating JSON data from either strings or result sets.

FOR JSON AUTO

Automatically creates a JSON string from a SELECT statement. Quick and dirty.
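
For example:

    -- AUTO decides the JSON structure for you.
    SELECT TOP 2 name, database_id
    FROM sys.databases
    FOR JSON AUTO;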

FOR JSON PATH

Formats a SQL query into a JSON string, allowing the user to define structure and formatting.
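
For example:

    -- Dotted aliases define nesting; ROOT adds a wrapper object.
    SELECT TOP 2
        name        AS [db.name],
        database_id AS [db.id]
    FROM sys.databases
    FOR JSON PATH, ROOT('databases');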

Modifying JSON

Updating, adding to, and deleting from JSON data.

JSON_MODIFY()

Allows the user to update properties and values, add properties and values, and delete properties and values (the delete is unintuitive, see below).

Modify:
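
For example (sample JSON in these sketches is mine):

    DECLARE @json NVARCHAR(MAX) = N'{"make":"Volkswagen","color":"silver"}';
    SELECT JSON_MODIFY(@json, '$.color', 'blue');  -- update an existing value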

Add:
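
    DECLARE @json NVARCHAR(MAX) = N'{"make":"Volkswagen"}';
    SELECT JSON_MODIFY(@json, '$.color', 'blue');  -- lax mode adds missing keys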

Delete property:
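
    DECLARE @json NVARCHAR(MAX) = N'{"make":"Volkswagen","color":"blue"}';
    SELECT JSON_MODIFY(@json, '$.color', NULL);    -- NULL removes the property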

Delete from array (this is not intuitive, see my Microsoft Connect item to fix this: https://connect.microsoft.com/SQLServer/feedback/details/3120404/sql-modify-json-null-delete-is-not-consistent-between-properties-and-arrays)
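
A sketch of the surprising behavior (sample JSON is mine):

    DECLARE @json NVARCHAR(MAX) = N'{"colors":["silver","blue"]}';
    -- NULL does not remove an array element the way it removes a
    -- property; see the Connect item above for the inconsistency.
    SELECT JSON_MODIFY(@json, '$.colors[0]', NULL);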

SQL JSON Performance Tuning

SQL JSON functions are already fast. Adding computed columns and indexes makes them extremely fast.

Computed Column JSON Indexes

JSON indexes are simply regular indexes on computed columns.

Add a computed column:
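
A sketch (the table and raw-JSON column names are my assumptions):

    ALTER TABLE dbo.Cars
        ADD CarMake AS JSON_VALUE(CarData, '$.make');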

Add an index to our computed column:
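
Continuing the sketch:

    CREATE NONCLUSTERED INDEX IX_Cars_CarMake ON dbo.Cars (CarMake);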

Performance test:
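
And a way to verify the speedup (placeholder predicate):

    SET STATISTICS TIME ON;

    -- Both forms can seek on the computed-column index.
    SELECT * FROM dbo.Cars WHERE CarMake = N'Subaru';
    SELECT * FROM dbo.Cars WHERE JSON_VALUE(CarData, '$.make') = N'Subaru';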

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

JSON Support Is The Best New Developer Feature in SQL 2016 — Part 4: Performance Comparisons

This is the fourth article in my series about learning how to use SQL Server 2016’s new JSON functions. If you haven’t already, you can read Part 1 — Parsing JSON, Part 2 — Creating JSON, and Part 3 — Updating, Adding, and Deleting JSON.


Additional performance comparisons available in an updated post.

We’ve finally come to my favorite part of analyzing any new software feature: performance testing. SQL Server 2016’s new JSON functions are great for parsing JSON data, creating JSON data, and modifying JSON data, but are they efficient?

Today we’ll examine three areas of SQL Server JSON performance:

  1. How to maximize performance for the new SQL Server JSON functions
  2. How the new SQL Server JSON functions compare against what was previously available in SQL Server
  3. How the new SQL JSON functions compare against Newtonsoft’s Json.NET

Maximizing SQL Server JSON Function Performance

I wanted to use a sizable data set in order to test the performance of the new JSON functions in SQL Server 2016. I found arthurkao’s car year/make/model data on GitHub and decided this ~20k element JSON array would be perfect for performance testing purposes. For my tests I’ll be using both the original JSON string as well as a SQL table that I created from the original JSON array:

Unlike XML in SQL Server (which is stored in its own datatype), JSON in SQL Server 2016 is stored as an NVARCHAR. This means instead of needing to use special indexes, we can use indexes that we are already familiar with.

To maximize performance, we can use Microsoft’s recommendation of adding a computed column for one of the JSON properties and then indexing that computed column:
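
Sketched (the column names are my assumptions):

    ALTER TABLE dbo.Cars
        ADD CarMake AS JSON_VALUE(CarData, '$.make');

    CREATE INDEX IX_Cars_CarMake ON dbo.Cars (CarMake);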

Non-persisted computed columns (like in the example above) do not take up any additional space in the table. You can verify this for yourself by running sp_spaceused 'dbo.Cars' before and after adding the non-persisted column to the table.

Having a computed column doesn’t add any performance to our query on its own, but it does now allow us to add an index to our parsed/computed JSON property.

Having the computed column doesn’t improve performance — we are still seeing a Table Scan

The index that we add next stores the parsed/computed values in its own pages, so the table itself doesn’t take up any additional space; the SQL engine only recomputes the column values when the index needs to be rebuilt:

And the resulting execution plan now shows both queries (the one using JSON_VALUE() in the WHERE clause directly, as well as the one referencing our computed column) using index seeks to find the data we are looking for:

Yay index seeks!

Overall, adding computed columns to our table adds no overhead in terms of storage space and allows us to then add indexes on JSON properties which improve performance significantly.

SQL Server 2016 JSON vs SQL Server pre-2016 JSON

As I’ve mentioned before, the best option for processing JSON data in SQL Server before 2016 was by using Phil Factor’s amazing JSON parsing function. Although the function works well, it is limited by what SQL Server functionality was available at the time and therefore wasn’t all that efficient.

The above query should work for getting the data we need. I’m abusing what the parseJSON function was probably built to do (I don’t think it was intended to parse ~20k element JSON arrays), and I’ll be honest I waited 10 minutes before killing the query. Basically, trying to parse this much data in SQL before 2016 just wasn’t possible (unless you wrote CLR).

Compare that to the following query using our indexed computed column, where SQL Server 2016 is able to return all of the results to us in 1 ms:
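
Shaped roughly like this (placeholder predicate):

    SELECT * FROM dbo.Cars
    WHERE CarMake = N'Subaru';  -- seeks the computed-column index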

SQL Server 2016 JSON vs Newtonsoft’s Json.NET

In cases like the above where parsing JSON in SQL Server was never an option, my preferred method has always been to parse data in C#. In particular, Newtonsoft’s Json.NET is the standard for high performance JSON parsing, so let’s take a look at how SQL Server 2016 compares to that.

The following code shows 6 tests I ran in SQL Server 2016:

And then the same tests in a C# console app using Json.Net:

And the results compared side by side:

Essentially, it seems like Json.Net beats SQL Server 2016 on larger JSON manipulations, both are equal with small JSON objects, and SQL Server 2016 has the advantage at filtering JSON data when indexes are used.

Conclusion

SQL Server 2016 is excellent at working with JSON. Even though Json.NET beats SQL Server 2016 at working with large JSON objects (on the magnitude of milliseconds), SQL Server is equally fast on smaller objects and is advantageous when JSON data needs to be filtered or searched.

I look forward to using the SQL Server 2016 JSON functions more in the future, especially in instances where network I/O makes it beneficial to process JSON on the SQL Server, or when working with applications that cannot process JSON data, like SQL Server Reporting Services.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

How SQL 2016’s JSON Functionality Will Make You Want To Use JSON

This post is a response to this month’s T-SQL Tuesday prompt. T-SQL Tuesday was created by Adam Machanic and is a way for SQL users to share ideas about interesting topics. This month’s topic is Fixing Old Problems with Shiny New Toys.


Why I love JSON

I know that lots of users in the SQL community cringe when they hear about JSON, NoSQL, and other non-relational database storage options. I get it. Lots of times a string storage format compromises the many benefits SQL has to offer: indexing, data integrity, simple querying, oversight into schema changes by developers, etc… I’ve seen (and written) some queries that perform like molasses in winter solely due to them being in a non-sargable string format.

Yet, with all of those downsides I still love JSON. I think there’s a happy middle ground where DBAs and developers can agree on when it is appropriate to use JSON. Would I ever want to use JSON in a data warehouse that is being heavily queried for analytics? No. Would I also want to use JSON where performance is extremely important? No. Would I want to use it in an application where data or schema integrity is extremely important? No.

However, I love using JSON when capturing dynamically structured user input, especially if the database is being used only to persist the data instead of analyze it. I love having JSON in the database because it so easily serializes/deserializes between JSON and my C# models. If I want low latency to my webapp, I also might write the JSON from the app to the database and then have it get queued to be transformed into normalized data by an ETL later on. These reasons in particular are what make me incredibly excited for the new JSON functionality in SQL Server 2016.

Okay, I guess there are some valid uses for JSON. What are my options for using JSON in SQL Server?

Before SQL Server 2016 was released there was no support for JSON in SQL Server. None. Zilch. Zero. The best thing we had was Phil Factor’s amazing JSON parsing user defined function or writing your own CLR.

Phil’s function is truly amazing and it cleverly uses some built-in SQL functionality to deserialize almost any type of JSON string you can throw at it. It does, however, have downsides — the function has a few edge-case bugs (try adding a space before any “:” in your JSON), it does not allow for easy querying of the JSON, and the function needs to be added to any database you want to use it on.

I’m not trying to downplay Phil’s function — I love it and use it in a lot of my code — but the fact of the matter is that it is a very clever workaround for something that Microsoft wasn’t supporting at the time.

So what did JSON parsing look like in the pre-2016 world? Let’s take a look at some example data I generated using http://www.json-generator.com/:

Basically we have a JSON string that represents five users and some of their attributes (note: I would argue that storing this much data about an individual would be better suited to fit into some structured tables, but alas this was the type of data that was easy to generate for example purposes).

If I wanted to parse out each user’s name to use for an infrequently run report or for ETL purposes, this is what my query would have to look like:

There are a couple funny things we have to do in pre-2016 SQL to parse this JSON:

  1. We have to add the parseJSON() function to our database
  2. Since there are multiple keys named “NAME”, we need to add a LAG() function to help identify each property’s key so we can ensure we have the correct one.
  3. Nested queries are required since windowed functions can’t be used in the WHERE clause.

Overall this code works and it gets the job done. However, we need to ensure our server has the parseJSON() function added, and filtering our JSON data to pull out the properties we are interested in takes a little bit of work. Additionally, the parseJSON() function does some heavy processing which affects our overall performance:

The Shiny New Toys in SQL Server 2016: JSON Functions

SQL Server 2016 includes many new JSON parsing functions that are available for us to use. Writing a query to return the same result set as above is now as simple as this:
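
A sketch of the shape of that query (a tiny stand-in for the generated Users data; property names assumed from the description below):

    DECLARE @json NVARCHAR(MAX) =
        N'{"Users":[{"name":"Anna"},{"name":"Bert"}]}';

    -- OPENJSON splits the Users array into one row per user, then
    -- JSON_VALUE pulls each user's name out of that row's fragment.
    SELECT JSON_VALUE(u.[value], '$.name') AS UserName
    FROM OPENJSON(@json, '$.Users') AS u;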

The OPENJSON() function allows us to write a JSON path query to filter the JSON Users object into five separate rows of data, one per user:

Then, we use the JSON_VALUE() function and a JSON path expression once again to filter out just the “name” property. Overall, this code is much simpler to write, behaves more consistently (the issue with a space character before a “:” is handled correctly), and performs much quicker:

I love the syntax of the new SQL JSON parsing functions: it’s easy to remember, easy to use in real life scenarios, and is very fast. This is one new feature of SQL Server 2016 that definitely makes my life easier and makes any other ways of parsing out JSON data obsolete.


If you are interested in learning more about JSON in SQL server, I’ve been blogging the past few weeks about how to use all of SQL’s new JSON functions.

There’s also one bug that I found with how SQL handles JSON deletes that I submitted to Microsoft Connect if you want to upvote and see JSON continue to evolve on SQL Server.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!