5 Things You Need to Do When Performance Testing JSON in SQL and C#

Published on: 2017-06-06

You can watch this blog post on YouTube too!

Want to learn more about using JSON in SQL Server? Watch me present at the online GroupBy conference on June 9, 2017 at 8am.

I’ve written a few articles this year about how awesome JSON performance is in SQL Server 2016.

The more I continue to use JSON in SQL Server, the more impressed I become with its speed and versatility. Over time I’ve learned new techniques and realize that JSON in SQL Server is actually much faster than I initially thought.

Today I want to correct some performance tests where I think I unfairly compared SQL Server JSON performance the first time around.

Major thanks to @JovanPop_MSFT for his help with performance testing suggestions.

Performance testing is hard

Before I dive into the performance tests, I want to be clear that these tests are still not perfect.

Performance testing in SQL Server is hard enough. When you start trying to compare SQL Server functions to code in .NET, lots of other factors come into play.

I’ll try to highlight where there still might be some problems with my methodology in the tests below, but overall I think these tests are more accurate comparisons of these features.

SQL Server JSON vs. Json.NET

There are two major issues with comparing SQL Server JSON functions to Json.NET functions in C#:

  1. Queries running in SQL Server Management Studio have significant overhead when rendering results to the results grid.
  2. The way SQL Server retrieves pages of data from disk or memory is not the same as how C# retrieves data from disk or memory.

The below tests should provide a more accurate comparison between SQL Server and .NET.

I am capturing SQL run times for the below tests using SET STATISTICS TIME ON. All of the test data for the below tests is available here: https://gist.github.com/bertwagner/f0645cf1b244af7d6bb75856db8744e0
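For reference, this is roughly how the SQL timings are captured; the elapsed times in the comments below come from the Messages tab in SSMS:

SET STATISTICS TIME ON;

-- Run the query being tested; SQL Server reports parse/compile and
-- execution CPU/elapsed times in the Messages tab.

SET STATISTICS TIME OFF;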

Test #1 — Deserializing 20k JSON elements

For this first test, we will deserialize ~20k rows of car year-make-model data comparing the SQL Server OPENJSON function against Json.NET’s DeserializeObject.
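In the SQL snippets that follow, @cars is assumed to be an nvarchar(max) variable holding the JSON array of car data from the gist linked above, declared along these lines (shortened here for illustration):

DECLARE @cars nvarchar(max) = N'[
  { "year": 2017, "make": "Volkswagen", "model": "Golf" },
  { "year": 2017, "make": "Honda", "model": "Civic" }
]'; -- the real tests load the full ~20k element array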

Previously this test used JSON_VALUE which was adding unnecessary processing overhead. With the query rewritten to run more efficiently, it looks like this:

SELECT year, make, model
FROM OPENJSON(@cars) WITH (year int, make nvarchar(50), model nvarchar(50));
-- 160ms

Now the problem with this query is that we are still drawing all ~20k rows of data to the screen in SQL Server Management Studio. The best way to avoid this extra processing is to simply convert the query to use COUNT:

SELECT COUNT(*)
FROM OPENJSON(@cars) WITH (year int, make nvarchar(50), model nvarchar(50));
-- 71ms

Looking at the execution plans, the OPENJSON function is still processing all ~20k rows in both queries; only the number of rows being brought back to the SSMS GUI differs.

This still isn’t the same as what the C# test below does (all data in the C# example stays in memory at all times), but it is as close a comparison as I could come up with:

var deserializedCars = JsonConvert.DeserializeObject<IEnumerable<Car>>(carsJSON);
// 66ms

(Full C# code available at: https://gist.github.com/bertwagner/8e5e8b6ec977c1704355166f96ae3efa)
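The Car class used by DeserializeObject isn’t shown in the snippet above; a minimal version, assuming simple properties that match the year-make-model JSON keys (Json.NET matches property names case-insensitively by default), might look like this:

public class Car
{
    public int Year { get; set; }
    public string Make { get; set; }
    public string Model { get; set; }
}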

And the result of this test? SQL Server was nearly as fast as Json.NET!

Test #2 — Deserializing ~20k rows with a predicate

In this next test we filter and return only a subset of rows.

SQL:

SELECT COUNT(*)
FROM OPENJSON(@cars) WITH (model nvarchar(20))
WHERE model = 'Golf';
-- 58ms

C#

var queriedCars = JsonConvert.DeserializeObject<IEnumerable<Car>>(carsJSON).Where(x => x.Model == "Golf");
// 52ms

Result: SQL Server is nearly as fast once again!

One more important thing to note about this specific test — if you add this data into a SQL table and add a computed column index, SQL Server will beat out Json.NET every time.
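I’m not including that variation here, but the idea is to expose the JSON property as a computed column and index it. A rough sketch, assuming the JSON strings live in an nvarchar(max) column named CarData on dbo.Cars, would look something like this:

ALTER TABLE dbo.Cars
ADD Model AS JSON_VALUE(CarData, '$.model');

CREATE NONCLUSTERED INDEX IX_Cars_Model ON dbo.Cars (Model);

-- Filtering on model can now seek the index instead of parsing
-- the JSON for every row:
SELECT COUNT(*) FROM dbo.Cars WHERE Model = 'Golf';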

Test #3 — Serializing ~20k rows into JSON

This scenario is particularly difficult to test. If I want to serialize data in a SQL table to a JSON string, how do I write the equivalent of that in C#? Do I use a DataTable and hope that SQL’s data is all in cache? Is reading from the SQL Server buffer pool equivalent in speed to reading from a C# DataTable? Would a collection of Lists in C# be more appropriate than a DataTable?

In the end, I decided to force SQL to read pages from disk by clearing the cache and having C# read the object data from a flat file. This still isn’t perfect, but it is as close as I think we can get:

SQL:

DBCC DROPCLEANBUFFERS
SELECT * FROM dbo.Cars FOR JSON AUTO
-- 108ms

C#:

string carsJSONFromFile = File.ReadAllText(@"../../CarData.json");
var deserializedCars = JsonConvert.DeserializeObject<IEnumerable<Car>>(carsJSONFromFile);
var serializedCars = JsonConvert.SerializeObject(deserializedCars);
// 63ms

This test still isn’t perfect though because SSMS is outputting the JSON string to the screen while C# never has to. I didn’t want to play around with outputting the C# version to a form or the console window because it still wouldn’t have been an equal comparison.

Result: Json.NET is about twice as fast, although this test is by far the most inaccurate. Still, SQL Server is much faster than I initially thought.

SQL Server JSON vs. XML

In my previous article comparing SQL Server JSON to SQL Server XML, I focused on tests that were all done within SQL Server.

These tests were incomplete, however: most of the time, a developer’s app will have to do additional processing to get data into an XML format, while JSON data usually already exists in JSON format (assuming we have a JavaScript web app).

These two tests examine cases where XML may have been slightly faster than JSON on SQL Server, but if you consider the entire environment (app + database), using JSON wins.

Scenario #1 — XML data needs to be serialized

Although inserting XML data that is already in memory into a SQL Server table is faster than the equivalent operation in JSON, what happens if we need to serialize the data in our app first before sending the data to SQL Server?

// Serialize Car objects to XML
var result1 = SerializeToXML(cars);
// 166ms
// Serialize Car objects to JSON
var result2 = SerializeToJSON(cars);
// 69ms
public static Tuple<long, string> SerializeToXML(List<Car> cars)
{
  Stopwatch sw = new Stopwatch();
  sw.Start();
  StringWriter writer = new StringWriter();
  XmlSerializer serializer = new XmlSerializer(typeof(List<Car>));
  serializer.Serialize(writer, cars);
  string result = writer.ToString();
  sw.Stop();
  return new Tuple<long, string>(sw.ElapsedMilliseconds, result);
}
public static Tuple<long, string> SerializeToJSON(List<Car> cars)
{
  Stopwatch sw = new Stopwatch();
  sw.Start();
  var json = JsonConvert.SerializeObject(cars);
  sw.Stop();
  return new Tuple<long, string>(sw.ElapsedMilliseconds, json);
}

Using the most common libraries available for serializing data to XML and JSON, serializing to JSON is twice as fast as serializing to XML (and as mentioned before, a lot of the time apps already have JSON data available — no need to serialize). This means the app serialization code will run faster and allow the data to make it to SQL Server sooner.

Scenario #2 — Transferring XML and JSON to SQL Server

Finally, after we have our serialized XML and JSON data in C#, how long does it take to transfer that data to SQL Server?

// Write XML string to SQL XML column
var result3 = WriteStringToSQL(
  result1.Item2, 
  "INSERT INTO dbo.XmlVsJson (XmlData) VALUES (@carsXML)", 
  new SqlParameter[]
  {
    new SqlParameter("carsXML", result1.Item2)
  });
// 142ms, 1.88mb of data
// Write JSON string to SQL
var result4 = WriteStringToSQL(
  carsJSON,
  "INSERT INTO dbo.XmlVsJson (JsonData) VALUES (@carsJSON)",
  new SqlParameter[]
  {
    new SqlParameter("carsJSON", carsJSON)
  });
// 20ms, 1.45mb of data
// Write XML string to nvarchar SQL column.  Taking the difference between this and result3, 100ms+ of time is spent converting to XML format on insert.
var result5 = WriteStringToSQL(
  result1.Item2, 
  "INSERT INTO dbo.XmlVsJson (JSONData) VALUES (@carsXML)",
  new SqlParameter[]
  {
    new SqlParameter("carsXML", result1.Item2)
  });
// 29ms, 1.88mb of data
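The WriteStringToSQL helper isn’t shown here (it lives in the linked gist). A plausible sketch of a helper with that shape, taking a string payload, an INSERT statement, and its parameters, and timing the ExecuteNonQuery call, might look like this (connectionString is assumed to be defined elsewhere):

public static Tuple<long, long> WriteStringToSQL(string payload, string insertSql, SqlParameter[] parameters)
{
  // Sketch only: assumes System.Data.SqlClient and a connectionString field defined elsewhere
  Stopwatch sw = new Stopwatch();

  using (SqlConnection connection = new SqlConnection(connectionString))
  using (SqlCommand command = new SqlCommand(insertSql, connection))
  {
    command.Parameters.AddRange(parameters);
    connection.Open();

    sw.Start();
    command.ExecuteNonQuery();
    sw.Stop();
  }

  // Elapsed milliseconds plus the payload size in bytes (2 bytes per char for nvarchar)
  return new Tuple<long, long>(sw.ElapsedMilliseconds, (long)payload.Length * 2);
}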

Result: Writing JSON data to an nvarchar SQL Server column is much faster than writing XML data to an XML typed (or even an nvarchar typed) column.

Not only does SQL Server need to parse the XML data upon insert, but the physical size of the XML data being sent over TCP is also larger due to the repetitive nature of XML syntax.
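As a rough illustration of that size difference, here is how a single record might look in each format (element names depend on the serializer settings, so this is illustrative only):

<Car>
  <Year>2017</Year>
  <Make>Volkswagen</Make>
  <Model>Golf</Model>
</Car>

{ "Year": 2017, "Make": "Volkswagen", "Model": "Golf" }

Every element name appears twice in the XML version (once in the opening tag and once in the closing tag), which adds up quickly over ~20k records.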

Conclusion

JSON performance in SQL Server is still awesome. In fact, it’s even better than I had previously thought.

These tests are not meant to be conclusive; think of them more as errata for my previous JSON performance posts.

However, I think these comparisons show that SQL Server’s JSON functions are competitive with how other languages handle JSON data.

Additionally, if serializing/deserializing reduces the amount of data transferred over TCP, using the JSON functions in SQL Server will most likely give you better total app/environment performance.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

Three Changes You Should Always Make Before Checking In Any Code

Published on: 2017-04-25

Photo by Maico Amorim

This story originally appeared in Hacker Noon on March 11, 2017.

A code refactor always leaves me with a feeling of accomplishment. Although major refactorings are the most satisfying, not every project warrants them. Here are 3 quick and easy refactorings that I make to all of my projects in order to improve code readability:

1. Clean up formatting

The overall format of your code is what makes it possible to quickly navigate to areas of interest. Consistent indentation, line breaks, and patterns help programmers skim large chunks of code. Take the following sloppily formatted code for example:

Inventory inventory = new Inventory();
for (int i = 0; i < cars.Count; i++){
    inventory.Cars.Add(cars[i]);

    var owner = owners.Where(x => x.VIN == cars[i].VIN).OrderByDescending(x => x.PurchaseDate).FirstOrDefault();
inventory.PreviousOwners.Add(new Owner { VIN = cars[i].VIN,
                                        Name = owner.Name});
}

and compare it to this:

Inventory inventory = new Inventory();
for (int i = 0; i < cars.Count; i++)
{
    inventory.Cars.Add(cars[i]);

    var owner = owners.Where(x => x.VIN == cars[i].VIN)
                      .OrderByDescending(x => x.PurchaseDate)
                      .FirstOrDefault();

    inventory.PreviousOwners.Add(new Owner
    {
        VIN = cars[i].VIN,
        Name = owner.Name
    });
}

The second example above consistently indents lines, adds new lines, and follows consistent coding patterns. This makes it easy to skim the code quickly.

Books have chapters, headings, and paragraphs defined by formatting that make it easy to find what is needed at a glance — formatting code makes it possible to find things easily too.

2. Rename classes, methods, and variables

Classes, methods, and variables should be named such that they help the programmer understand what is happening in the code. The shorter the scope of an object, the more permissible it is to use shorter names (e.g. “i” as a counter in a loop that’s only a line or two long).

It’s easy to use uninformative names when writing a “first draft” of your program, but using the first name that comes to mind isn’t always the best choice. Take a look at the following example:

public IEnumerable<string> GetData(int year)
{
    var result = CallApi("/Cars", year);
    List<string> output = new List<string>();

    foreach (var c in result)
    {
        output.Add(c.Make);
    }

    return output;
}

versus:

public IEnumerable<string> RetrieveCarMakes(int year)
{
    var inventory = CallApi("/Cars", year);
    List<string> carMakes = new List<string>();

    foreach (var car in inventory)
    {
        carMakes.Add(car.Make);
    }

    return carMakes;
}

Using names that make sense makes it much easier for someone else (or your future self) to figure out what your code is doing.

3. Break up long expressions

When you get into a code-writing groove it’s easy to keep chaining commands together or using single-line syntax to speed up your writing. Oftentimes I look back on my code a day later and see long expressions like these two lines:

bool hasHighSaleProbability = (daysOnLot < 60) ? true : (color == "Red" ? true : false);
var highSaleProbabilityVehicles = Inventory.Where(x => x.DaysOnLot < 60 || x.Color == "Red").Select(x => new { Make = x.Make, Model = x.Model, Year = x.Year }).Distinct().Select(x => new RecentInventoryView { YearDropdown = new SelectListItem { Text = x.Year, Value = x.Year }, MakeModelDropdown = new SelectListItem { Text = x.Make + " " + x.Model, Value = x.Make + " " + x.Model } });

Compared against this expression that has been broken out across multiple lines:

bool hasHighSaleProbability = false;
if (daysOnLot < 60 || color == "Red")
{
    hasHighSaleProbability = true;
}
var distinctMakesModelsYears = Inventory
    .Where(x => x.DaysOnLot < 60 || x.Color == "Red")
    .Select(x => new
    {
        Make = x.Make,
        Model = x.Model,
        Year = x.Year
    })
    .Distinct()
    .ToList();

var recentInventoryView = distinctMakesModelsYears
    .Select(x => new RecentInventoryView
    {
        YearDropdown = new SelectListItem
        {
            Text = x.Year,
            Value = x.Year
        },
        MakeModelDropdown = new SelectListItem
        {
            Text = x.Make + " " + x.Model,
            Value = x.Make + " " + x.Model
        }
    });

Although the first code snippet is technically shorter and has fewer lines, it is nearly unreadable. The second snippet breaks out the if logic and breaks up all of the chained methods into more logical variables. The result? Code logic that is much easier to follow.

 

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

How to write understandable code for your future self

Published on: 2017-04-18

This story originally appeared in Hacker Noon on February 10, 2017.

I can’t tell you the number of times the title of this post has crossed my mind as I dug through a piece of code that I hadn’t touched in years.

At the time I wrote it, I probably thought my code was beautiful. An elegant masterpiece. It should have been printed, framed, and hung on a wall of The Programming Hall of Fame. As clever as I thought I may have been a few years ago, I rarely am able to read my old code without some serious time wasted debugging.

This problem plagued me regularly. I tried different techniques to make my code easier to understand.

First I tried adding comments to my code. Pretty easy to do, but not all that helpful.

When comments weren’t cutting it, I tried to write self-documenting code instead: small, well-named classes and methods that described their limited functionality. This made the code more readable but I would still have questions.

I also tried documenting my code in a separate file. This had its benefits but didn’t solve the problem entirely either.

Eventually I figured out what I needed to do: I needed to use all three of the above techniques to write truly beautiful and understandable code.

Comments

Comments in your code should document the why, not the how. When I first started programming, I would often write very unhelpful comments like this:

public class Class1
{
  public List<string> DoWork(List<string> a)
  {
    List<string> numbers = new List<string>();

    // Loop over data
    for (int i = 1; i < a.Count; i++)
    {
      int s = a[i].IndexOf(" ");
      string num = a[i].Substring(0,s);

      // Save data
      numbers.Add(num);
    }

    return numbers;
  }
  ...
}

“Loop over data”? “Save data”? These comments add nothing to understanding the code. I can easily tell that I have a loop and that I’m adding my data to a collection, so why should I waste valuable screen real estate with unhelpful comments?

Instead of saying what or how, comments should explain why. A programmer will see the for loop and know that it’s looping over some type of collection of Addresses. However, a programmer will not know why we are starting our counter with int i = 1 — this is where adding a comment can improve the understanding of the code:

// i = 1 because the view will never display the first address
for (int i = 1; i < Addresses.Count; i++) {...

Now, we know some of the business logic driving our app. We know we don’t process the first address because it never gets outputted to our view. This comment answers the why behind skipping the first address, adding clarity to the code.

Additionally, we remove the // Save data comment completely since it adds no insightful value.

Self-Documentation

Comments alone won’t make code easy to interpret later, however. Let’s take a look at our method again with our improved comments:

public class Class1
{
  public List<string> DoWork(List<string> a)
  {
    List<string> numbers = new List<string>();

    // i = 1 because the view will never display the first address
    for (int i = 1; i < a.Count; i++)
    {
      int s = a[i].IndexOf(" ");
      string num = a[i].Substring(0,s);

      numbers.Add(num);
    }

    return numbers;
  }
  ...
}

What exactly is Class1? What kind of work is DoWork() doing? What about the use of int s? The names of the objects in our code don’t aid our understanding of what this code is doing.

This is where the idea of self-documenting code comes in: instead of creating objects with arbitrary, non-informative names (“I swear I’ll refactor this later”), we build descriptive objects. If I have a class, its name should give me a good idea about what its properties and methods could be. A method’s name should be descriptive enough to tell me what I should expect as an output without having to dig into the details of what that method is doing. Variables should add additional illumination that makes what and how comments obsolete.

In our example, let’s make our code self-documenting. First, this class is intended to help us clean address data. Let’s call it AddressStandardizer. With that simple renaming we know that all of the properties and methods of this class should pertain to dealing with dirty address data and making it cleaner.

What about the method name List<string> DoWork(List<string> a)? Well, I can tell you that this method is trying to parse out the number portion of a street address. So let’s change the method name and signature to something more informative, like List<string> ParseHouseNumbers(List<string> addresses). Now we can make an educated guess that this method accepts some address strings as an input and that it will return a list of parsed house numbers.

If we clean up some variable names, our code becomes much easier to read, like this:

public class AddressStandardizer
{
  public List<string> ParseHouseNumbers(List<string> addresses)
  {
    List<string> houseNumbers = new List<string>();

    // i = 1 because the view will never display the first address
    for (int i = 1; i < addresses.Count; i++)
    {
      int firstSpaceIndex = addresses[i].IndexOf(" ");
      string houseNumber = addresses[i].Substring(0,firstSpaceIndex);

      houseNumbers.Add(houseNumber);
    }

    return houseNumbers;
  }
  ...
}

Documentation

Our code is finally starting to shape up. We have comments explaining why we chose to do something and we refactored our code to have object names that are informative.

The code at this point is ok but not perfect. If we don’t look at this code for a few years, we probably have enough information now to look at the code and figure out what it’s doing with relative ease.

The big piece of information that we are still missing however is knowing why this code was written in the first place.

Oftentimes, I get a question from a manager or analyst about why we decided to build the project in the first place. Or I’ll get a request for information about how the logic in the program works. Without a proper documentation file, the best thing I can do is send the business user a copy of my code. Most of the time that isn’t very helpful.

What would be helpful though is an explanation of what our program is doing at a high-level. This is the purpose of formal documentation.

The documentation for this section of code might look something like this:

…After retrieving our customer information from our vendor, the program processes the data and cleans it up to load into our reporting warehouse. Cleaning up the data means parsing the addresses into multiple columns including house number, street name, street suffix, city, state, and postal code…

Now, when a business user needs to know what your program is doing, you can easily send the above documentation their way. The documentation also acts as a nice refresher for you, the programmer, when it comes time to revisit the code, as well as for any future coworkers who are new to the project.

Wrap up

All of these techniques are necessary to eliminate code headaches down the road. Learn from my experience — not doing all three may save a little bit of time in the short term, but it will hurt at some point in the future. Once you get in the habit of writing all three kinds of documentation, it will become second nature and make your life (and the life of your future self!) much easier.

Comments in code should explain the why, not the how:

  • The how should be explained by well-named classes and methods
  • Separate documentation is still needed for developers and non-developers alike. Think of business users who need to know how your process works and the business logic inside it; it’s nice to have a high-level document explaining the business uses of your process that someone non-technical can understand.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!