How To Create Multi-Object JSON Arrays in SQL Server

Recently I was discussing with Peter Saverman whether it would be possible to take some database tables that look like this:

And output them so that the Cars and Toys data would map to a multi-object JSON array like so:

Prefer visuals instead of text?  You can watch this week’s post on my YouTube channel.

Why would you ever need this?

If you are coming from a pure SQL background, at this point you might be wondering why you would ever want to create an object array that contains mixed object types.  Well, from an application development standpoint this type of scenario can be fairly common.

In a database, it makes sense to divide Home and Car and Toy into separate tables.  Sure, we could probably combine the latter two with some normalization, but imagine we will have many different types of entities that will be more difficult to normalize – sometimes it just makes sense to store this information separately.

Not to mention that analytical queries across many rows of data will typically run much faster against this three-table format.

The three table layout, while organized from a database standpoint, might not be the best way to organize the data in an object-oriented application.  Usually in a transaction oriented application, we want our data to all be together as one entity.  This is why NoSQL is all the rage among app developers.  Having all of your related data together makes it easy to manage, move, update, etc…  This is where the array of multi-type objects comes in – it’d be pretty easy to use this structure as an array of dynamic or inherited objects inside of our application.

Why not just combine these Car and Toy entities in app?

Reading the data into the app through multiple queries and mapping that data to objects is usually the first way you would try doing something like this.

However, depending on many different variables, like the size of the data, the number of requests, the speed of the network, the hardware the app is running on, etc… mapping your data from multiple queries might not be the most efficient way to go.

On the other hand, if you have a big beefy SQL Server available that can do those transformations for you, and you are willing to pay for the processing time on an $8k/core enterprise licensed machine, then performing all of these transformations on your SQL Server is the way to go.

The solution

UPDATE: Jovan Popovic suggested an even cleaner solution using CONCAT_WS.  See the update at the bottom of this post.

First, here’s the data if you want to play along at home:
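The original sample script isn’t reproduced here, but a minimal stand-in might look like this (all table names, column names, and values are assumptions based on the Home/Car/Toy description above):

```sql
-- Hypothetical sample schema and data for playing along at home.
CREATE TABLE dbo.Home (HomeId INT PRIMARY KEY, City NVARCHAR(50));
CREATE TABLE dbo.Car  (CarId INT PRIMARY KEY, HomeId INT, Make NVARCHAR(50), Model NVARCHAR(50));
CREATE TABLE dbo.Toy  (ToyId INT PRIMARY KEY, HomeId INT, Category NVARCHAR(50), RiderCapacity INT);

INSERT INTO dbo.Home VALUES (1, N'Cleveland'), (2, N'Malibu');
INSERT INTO dbo.Car  VALUES (1, 1, N'Volkswagen', N'Golf'), (2, 2, N'Tesla', N'Model S');
INSERT INTO dbo.Toy  VALUES (1, 1, N'Bicycle', 1), (2, 2, N'Kayak', 2);
```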

And here’s the query that does all of the transforming:
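A sketch of the approach (not necessarily the exact original query; names follow the assumed sample schema above). Each CROSS APPLY builds one JSON fragment per Home row, and because WITHOUT_ARRAY_WRAPPER returns comma-separated objects with no surrounding brackets, the two fragments can be spliced into a single array per HomeId:

```sql
SELECT
    h.HomeId,
    '[' + cars.CarsJson + ',' + toys.ToysJson + ']' AS Entities
FROM dbo.Home h
CROSS APPLY
(
    -- All of this home's cars as a bracket-less JSON object list
    SELECT (SELECT CarId, Make, Model
            FROM dbo.Car c
            WHERE c.HomeId = h.HomeId
            FOR JSON PATH, WITHOUT_ARRAY_WRAPPER) AS CarsJson
) cars
CROSS APPLY
(
    -- All of this home's toys, same idea
    SELECT (SELECT ToyId, Category, RiderCapacity
            FROM dbo.Toy t
            WHERE t.HomeId = h.HomeId
            FOR JSON PATH, WITHOUT_ARRAY_WRAPPER) AS ToysJson
) toys;
```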

There are a couple of key elements that make this work.

CROSS APPLY

When using FOR JSON PATH, ALL rows and columns from that result set will get converted to a single JSON string.

This creates a problem if, for example, you want to have a column for your JSON string and a separate column for something like a foreign key (in our case, HomeId).  Or if you want to generate multiple JSON strings filtered on a foreign key.

The way I chose to get around this is to use CROSS APPLY with a join back to our Home table – this way the JSON string for Cars or Toys still gets created, but we can output it along with additional columns.

WITHOUT_ARRAY_WRAPPER

When using FOR JSON PATH to turn a result set into a JSON string, SQL Server will automatically add square brackets around the JSON output as if it were an array.

This is a problem in our scenario because when we use FOR JSON PATH to turn the Car and Toy table into JSON strings, we eventually want to combine them together into the same array instead of two separate arrays.  The solution to this is using the WITHOUT_ARRAY_WRAPPER option to output the JSON string without the square brackets.
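For example (using the assumed Car table from earlier), the same SELECT behaves differently with and without the option:

```sql
-- Default: output wrapped in array brackets, e.g. [{"CarId":1,...},{"CarId":2,...}]
SELECT CarId, Make, Model FROM dbo.Car FOR JSON PATH;

-- WITHOUT_ARRAY_WRAPPER: same comma-separated objects, no surrounding brackets,
-- so the string can be concatenated with another fragment before we add
-- brackets ourselves.
SELECT CarId, Make, Model FROM dbo.Car FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;
```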

Conclusion

Your individual scenario and results may vary.  This solution was to solve a specific scenario in a specific environment.

Is it the right way to go about solving your performance problems all of the time? No.  But offloading these transformations onto SQL Server is an option to keep in mind.

Just remember – always test to make sure your performance changes are actually helping.

UPDATED Solution Using CONCAT_WS:

This solution, recommended by Jovan Popovic, is even easier than the one above.  It uses CONCAT_WS, which is available starting in SQL Server 2017 (the solution above requires STRING_AGG, also a 2017 feature, but it could be rewritten using FOR XML string aggregation for earlier versions).
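A sketch of the CONCAT_WS variant (schema names are still the assumed ones from earlier). CONCAT_WS joins the two object lists with a comma separator and skips NULL arguments, so a home with no cars or no toys doesn’t produce a dangling comma:

```sql
SELECT
    h.HomeId,
    '[' + CONCAT_WS(',',
        (SELECT CarId, Make, Model
         FROM dbo.Car c WHERE c.HomeId = h.HomeId
         FOR JSON PATH, WITHOUT_ARRAY_WRAPPER),
        (SELECT ToyId, Category, RiderCapacity
         FROM dbo.Toy t WHERE t.HomeId = h.HomeId
         FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
    ) + ']' AS Entities
FROM dbo.Home h;
```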


Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

Does The Join Order of My Tables Matter?


I had a great question submitted to me (thank you Brandman!) that I thought would make for a good blog post:

…I’ve been wondering if it really matters from a performance standpoint where I start my queries. For example, if I join from A-B-C, would I be better off starting at table B and then going to A & C?

The short answer: Yes.  And no.

More of a watcher than a reader?  Watch this week’s episode on YouTube!

Table join order matters for performance!

Disclaimer: For this post, I’m only going to be talking about INNER joins.  OUTER (LEFT, RIGHT, FULL, etc…) joins are a whole ‘nother animal that I’ll save for another time.

Let’s use the following query from WideWorldImporters for our examples:
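An approximation of that query (WideWorldImporters tables; exactly where the ‘USA’ value lives is an assumption, sketched here via the StockItems CustomFields JSON):

```sql
SELECT o.OrderID, ol.OrderLineID, s.StockItemName
FROM Sales.Orders o
INNER JOIN Sales.OrderLines ol
    ON o.OrderID = ol.OrderID
INNER JOIN Warehouse.StockItems s
    ON ol.StockItemID = s.StockItemID
    -- 'USA' filter kept in the ON clause for this example
    AND JSON_VALUE(s.CustomFields, '$.CountryOfManufacture') = 'USA';
```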

Note: with an INNER join, I normally would prefer putting my ‘USA’ filter in the WHERE clause, but for the rest of these examples it’ll be easier to have it part of the ON.

The key thing to notice is that we are joining  three tables – Orders, OrderLines, and StockItems – and that OrderLines is what we use to join between the other two tables.

We basically have two options for table join orders then – we can join Orders with OrderLines first and then join in StockItems, or we can join OrderLines and StockItems first and then join in Orders.

In terms of performance, it’s almost certain that the latter scenario (joining OrderLines with StockItems first) will be faster because StockItems will help us be more selective.

Selective?  Well, you might notice that our StockItems table is small, with only 227 rows.  It’s made even smaller by the ‘USA’ filter, which reduces it to only 8 rows.

Since the StockItems table has no duplicate rows (it’s a simple lookup table for product information) it is a great table to join with as early as possible since it will reduce the total number of rows getting passed around for the remainder of the query.

If we tried doing the Orders to OrderLines join first, we actually wouldn’t filter out any rows in our first step, causing our subsequent join to StockItems to be slower (because more rows would have to be processed).

Basically, join order DOES matter because if we can join two tables that will reduce the number of rows needed to be processed by subsequent steps, then our performance will improve.

So if the order that our tables are joined in makes a big difference for performance reasons, SQL Server follows the join order we define, right?

SQL Server doesn’t let you choose the join order

SQL is a declarative language: you write code that specifies *what* data to get, not *how* to get it.

Basically, the SQL Server query optimizer takes your SQL query and decides on its own how it thinks it should get the data.

It does this by using precalculated statistics on your table sizes and data contents in order to be able to pick a “good enough” plan quickly.

So even if we rearrange the order of the tables in our FROM statement like this:

Or if we add parentheses:

Or even if we rewrite the tables into subqueries:
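The three rewrites above might look like this (sketched against the same assumed WideWorldImporters query, with the ‘USA’ filter abbreviated to a placeholder predicate on CustomFields):

```sql
-- 1. Reordered FROM clause:
SELECT o.OrderID, ol.OrderLineID, s.StockItemName
FROM Warehouse.StockItems s
INNER JOIN Sales.OrderLines ol ON ol.StockItemID = s.StockItemID
    AND JSON_VALUE(s.CustomFields, '$.CountryOfManufacture') = 'USA'
INNER JOIN Sales.Orders o ON o.OrderID = ol.OrderID;

-- 2. Parentheses grouping one join:
SELECT o.OrderID, ol.OrderLineID, s.StockItemName
FROM Sales.Orders o
INNER JOIN (Sales.OrderLines ol
    INNER JOIN Warehouse.StockItems s ON ol.StockItemID = s.StockItemID
        AND JSON_VALUE(s.CustomFields, '$.CountryOfManufacture') = 'USA')
    ON o.OrderID = ol.OrderID;

-- 3. Tables pushed into subqueries:
SELECT o.OrderID, ol.OrderLineID, s.StockItemName
FROM (SELECT OrderID FROM Sales.Orders) o
INNER JOIN (SELECT OrderLineID, OrderID, StockItemID
            FROM Sales.OrderLines) ol
    ON o.OrderID = ol.OrderID
INNER JOIN (SELECT StockItemID, StockItemName, CustomFields
            FROM Warehouse.StockItems) s
    ON ol.StockItemID = s.StockItemID
    AND JSON_VALUE(s.CustomFields, '$.CountryOfManufacture') = 'USA';
```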

SQL Server will interpret and optimize our three separate queries (plus the original one from the top of the page) into the same exact execution plan:

Basically, no matter how we try to redefine the order of our tables in the FROM statement, SQL Server will still do what it thinks is best.

But what if SQL Server doesn’t know best?

The majority of the time I see SQL Server doing something inefficient with an execution plan, it’s due to something wrong with the statistics for that table/index.

Statistics are also a whole ‘nother topic for a whole ‘nother day (or month) of blog posts, so to not get too side tracked with this post, I’ll point you to Kimberly Tripp’s introductory blog post on the subject: https://www.sqlskills.com/blogs/kimberly/the-accidental-dba-day-15-of-30-statistics-maintenance/

The key thing to take away is that if SQL Server is generating an execution plan where the order of table joins doesn’t make sense, check your statistics first, because they are the root cause of many performance problems!

Forcing a join order

So you already checked to see if your statistics are the problem and exhausted all possibilities on that front.  SQL Server isn’t optimizing for the optimal table join order, so what can you do?

Row goals

If SQL Server isn’t behaving and I need to force a table join order, my preferred way is to do it via a TOP() command.

I learned this technique from watching Adam Machanic’s fantastic presentation on the subject and I highly recommend you watch it.

Since in our example query SQL Server is already joining the tables in the most efficient order, let’s force an inefficient join by joining Orders with OrderLines first.

Basically, we write a subquery around the tables we want to join together first and make sure to include a TOP clause. 
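A sketch of the technique (same assumed query as before): wrapping Orders and OrderLines in a TOP() subquery sets a row goal that forces that join to happen first.  TOP (9223372036854775807) is the maximum BIGINT value, so no rows are actually discarded:

```sql
SELECT j.OrderID, j.OrderLineID, s.StockItemName
FROM
(
    -- Row goal: force this join to be evaluated first
    SELECT TOP (9223372036854775807)
        o.OrderID, ol.OrderLineID, ol.StockItemID
    FROM Sales.Orders o
    INNER JOIN Sales.OrderLines ol ON o.OrderID = ol.OrderID
) j
INNER JOIN Warehouse.StockItems s
    ON j.StockItemID = s.StockItemID
    AND JSON_VALUE(s.CustomFields, '$.CountryOfManufacture') = 'USA';
```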

Including TOP forces SQL to perform the join between Orders and OrderLines first – inefficient in this example, but a great success in being able to control what SQL Server does.

This is my favorite way of forcing a join order because we get to inject control over the join order of two specific tables in this case (Orders and OrderLines) but SQL Server will still use its own judgement in how any remaining tables should be joined.

While forcing a join order is generally a bad idea (what happens if the underlying data changes in the future and your forced join is no longer the best option?), in certain scenarios where it’s required the TOP technique will cause the fewest performance problems (since SQL Server still gets to decide what happens with the rest of the tables).

The same can’t be said if using hints…

Query and join hints

Query and join hints will successfully force the order of the table joins in your query, however they have significant drawbacks.

Let’s look at the FORCE ORDER query hint.  Adding it to your query will successfully force the table joins to occur in the order that they are listed:
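Sketched against the same assumed query, the hint tacks onto the end of the statement:

```sql
SELECT o.OrderID, ol.OrderLineID, s.StockItemName
FROM Sales.Orders o
INNER JOIN Sales.OrderLines ol ON o.OrderID = ol.OrderID
INNER JOIN Warehouse.StockItems s
    ON ol.StockItemID = s.StockItemID
    AND JSON_VALUE(s.CustomFields, '$.CountryOfManufacture') = 'USA'
OPTION (FORCE ORDER);  -- join tables exactly in the order listed above
```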

Looking at the execution plan we can see that Orders and OrderLines were joined together first as expected:

The biggest drawback with the FORCE ORDER hint is that all tables in your query are going to have their join order forced (not evident in this example…but imagine we were joining 4 or 5 tables in total).

This makes your query incredibly fragile; if the underlying data changes in the future, you could be forcing multiple inefficient join orders.  Your query that you tuned with FORCE ORDER could go from running in seconds to minutes or hours.

The same problem exists when using join hints:
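For example (same assumed query), an INNER LOOP JOIN hint pins both the order and the physical join algorithm:

```sql
SELECT o.OrderID, ol.OrderLineID, s.StockItemName
FROM Sales.Orders o
INNER LOOP JOIN Sales.OrderLines ol  -- forces a nested loops join, in this order
    ON o.OrderID = ol.OrderID
INNER JOIN Warehouse.StockItems s
    ON ol.StockItemID = s.StockItemID
    AND JSON_VALUE(s.CustomFields, '$.CountryOfManufacture') = 'USA';
```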

Using the LOOP hint successfully forces our join order again, but once again the join order of all of our tables becomes fixed:

A join hint is probably the most fragile hint that forces table join order because not only is it forcing the join order, but it’s also forcing the algorithm used to perform the join.

In general, I only use query hints to force table join order as a temporary fix.

Maybe production has a problem and I need to get things running again; a query or join hint may be the quickest way to fix the immediate issue.  However, long term using the hint is probably a bad idea, so after the immediate fires are put out I will go back and try to determine the root cause of the performance problem.

Summary

  • Table join order matters for reducing the number of rows that the rest of the query needs to process.
  • By default SQL Server gives you no control over the join order – it uses statistics and the query optimizer to pick what it thinks is a good join order.
  • Most of the time, the query optimizer does a great job at picking efficient join orders.  When it doesn’t, the first thing I do is check to see the health of my statistics and figure out if it’s picking a sub-optimal plan because of that.
  • If I am in a special scenario and I truly do need to force a join order, I’ll use the TOP clause to force a join order since it only forces the order of a single join.
  • In an emergency “production-servers-are-on-fire” scenario, I might use a query or join hint to immediately fix a performance issue and go back to implement a better solution once things calm down.


Why Parameter Sniffing Isn’t Always A Bad Thing (But Usually Is)

Unexpected SQL Server Performance Killers #2


In this series I explore scenarios that hurt SQL Server performance and show you how to avoid them. Pulled from my collection of “things I didn’t know I was doing wrong for years.”

Prefer video? Watch this post on YouTube.

Last week we discussed how implicit conversions could be one reason why your meticulously designed indexes aren’t getting used.

Today let’s look at another reason: parameter sniffing.

Here’s the key: Parameter sniffing isn’t always a bad thing.

Most of the time it’s good: it means SQL Server is caching and reusing query plans to make your queries run faster.

Parameter sniffing only becomes a problem when the cached plan isn’t anywhere close to being the optimal plan for given input parameters.

So what’s parameter sniffing?

Let’s start with our table dbo.CoffeeInventory which you can grab from Github.

The key things to know about this table are that:

  1. We have a nonclustered index on our Name column.
  2. The data is not distributed evenly (we’ll see this in a minute)

Now, let’s write a stored procedure that will return a filtered list of coffees in our table, based on the country. Since there is no specific Country column, we’ll write it so it filters on the Name column:
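A sketch of such a procedure (the procedure name and column list beyond Name are assumptions; @ParmCountry matches the parameter referenced later in the post):

```sql
CREATE OR ALTER PROCEDURE dbo.CoffeeSearch
    @ParmCountry VARCHAR(30)
AS
BEGIN
    -- Country is embedded in the coffee name, so filter on a name prefix
    SELECT Name
    FROM dbo.CoffeeInventory
    WHERE Name LIKE @ParmCountry + '%';
END;
```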

Let’s take a look at parameter sniffing in action, then we’ll take a look at why it happens and how to solve it.
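Something like the following two executions, ‘Costa Rica’ first (dbo.CoffeeSearch is an assumed stand-in for the procedure name):

```sql
EXEC dbo.CoffeeSearch @ParmCountry = 'Costa Rica';
EXEC dbo.CoffeeSearch @ParmCountry = 'Ethiopia';
```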

Running the above statements gives us identical execution plans using table scans:

In this case we explicitly specified the parameter @ParmCountry. Sometimes SQL will parameterize simple queries on its own.

That’s weird. We have two query executions, they are using the same plan, and neither plan is using our nonclustered index on Name!

Let’s step back and try again. First, clear the query plan cache for this stored procedure:
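As mentioned later in the post, DBCC FREEPROCCACHE does the trick.  Note that with no arguments it clears the entire plan cache (fine on a dev box, dangerous in production; it also accepts a specific plan_handle to target one plan):

```sql
DBCC FREEPROCCACHE;
```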

Next, execute the same stored procedure with the same parameter values, but this time with the ‘Ethiopia’ parameter value first. Look at the execution plan:
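Same two calls as before, just reversed (again with the assumed procedure name):

```sql
EXEC dbo.CoffeeSearch @ParmCountry = 'Ethiopia';
EXEC dbo.CoffeeSearch @ParmCountry = 'Costa Rica';
```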

Now our nonclustered index on Name is being utilized. Both queries still receive the same plan as each other (although a different plan than before).

We didn’t change anything with our stored procedure code, only the order that we executed the query with different parameters.

What the heck is going on here!?

This is an example of parameter sniffing. The first time a stored procedure (or query) is run on SQL Server, SQL will generate an execution plan for it and store that plan in the query plan cache:

All subsequent executions of that same query will go to the query cache to reuse that same initial query plan — this saves SQL Server from having to generate a new query plan each time.

Note: A query with different values passed as parameters still counts as the “same query” in the eyes of SQL Server.

In the case of the examples above, the first time the query was executed was with the parameter for “Costa Rica”. Remember when I said this dataset was heavily skewed? Let’s look at some counts:

“Costa Rica” has more than 10,000 rows in this table, while all other country names are in the single digits.

This means that when we executed our stored procedure for the first time, SQL Server generated an execution plan that used a table scan because it thought this would be the most efficient way to retrieve 10,003 of the 10,052 rows.

This table scan query plan is only optimal for ‘Costa Rica’. Passing any other country name into the stored procedure would return only a handful of records, making it more efficient for SQL Server to use our nonclustered index.

However, since the Costa Rica plan was the first one to run, and therefore is the one that got added to the query plan cache, all other executions ended up using the same table scan execution plan.

After clearing our cached execution plan using DBCC FREEPROCCACHE, we executed our stored procedure again but with ‘Ethiopia’ as our parameter. SQL Server determined that a plan with an index seek is optimal to retrieve only 6 of the 10,052 rows in the table. It then cached that Index Seek plan, which is why the second time around the ‘Costa Rica’ parameter received the execution plan with Index Seek.

Ok, so how do I prevent parameter sniffing?

This question should really be rephrased as “how do I prevent SQL Server from using a sub-optimal plan from the query plan cache?”

Let’s take a look at some of the techniques.

1. Use WITH RECOMPILE or OPTION (RECOMPILE)

We can simply add these query hints to either our EXEC statement:

or to our stored procedure itself:
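Both placements, sketched (procedure name is the assumed one from earlier):

```sql
-- 1. On the EXEC call itself:
EXEC dbo.CoffeeSearch @ParmCountry = 'Ethiopia' WITH RECOMPILE;

-- 2. Inside the procedure, on the query:
CREATE OR ALTER PROCEDURE dbo.CoffeeSearch
    @ParmCountry VARCHAR(30)
AS
BEGIN
    SELECT Name
    FROM dbo.CoffeeInventory
    WHERE Name LIKE @ParmCountry + '%'
    OPTION (RECOMPILE);  -- never cache a plan for this query
END;
```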

What the RECOMPILE hint does is force SQL Server to generate a new execution plan every time these queries run.

Using RECOMPILE eliminates our parameter sniffing problem because SQL Server will regenerate the query plan every single time we execute the query.

The disadvantage here is that we lose all benefit from having SQL Server save CPU cycles by caching execution plans.

If your parameter sniffed query runs frequently, RECOMPILE is probably a bad idea because you will incur a lot of overhead regenerating the query plan regularly.

If your parameter sniffed query doesn’t run often, or doesn’t run often enough to stay in the query plan cache anyway, then RECOMPILE is a good solution.

2. Use the OPTIMIZE FOR query hint

Another option we have is to add either one of the following hints to our query. One of these would get added to the same location as OPTION (RECOMPILE) did in the above stored procedure:

or
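The two variants, as hint fragments that would go where OPTION (RECOMPILE) went above (the specific parameter value shown is just an assumed example):

```sql
-- Build the plan from average column statistics:
OPTION (OPTIMIZE FOR UNKNOWN);

-- Or pin the plan to a specific parameter value:
OPTION (OPTIMIZE FOR (@ParmCountry = 'Ethiopia'));
```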

OPTIMIZE FOR UNKNOWN will use a query plan that’s generated from the average distribution stats for that column/index.  Oftentimes it results in a mediocre or bad execution plan, so I don’t like using it.

OPTIMIZE FOR VALUE creates a plan using whatever parameter value is specified. This is great if you know your queries will mostly be retrieving data that’s optimal for the value you specified.

In our examples above, if we know the value ‘Costa Rica’ is rarely queried, we might optimize for index seeks. Most queries will then run the optimal cached query plan and we’ll only take a hit when ‘Costa Rica’ is queried.

3. IF/ELSE

This solution allows for ultimate flexibility. Basically, you create different stored procedures that are optimized for different values. Those stored procedures have their plans cached, and then an IF/ELSE statement determines which procedure to run for a passed in parameter:
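A sketch of the routing approach (all procedure names and the branch condition are assumptions); each inner procedure gets its own cached plan:

```sql
CREATE OR ALTER PROCEDURE dbo.CoffeeSearchRouter
    @ParmCountry VARCHAR(30)
AS
BEGIN
    IF @ParmCountry = 'Costa Rica'
        EXEC dbo.CoffeeSearchForCostaRica @ParmCountry;  -- plan cached as a scan
    ELSE
        EXEC dbo.CoffeeSearchForOthers @ParmCountry;     -- plan cached as a seek
END;
```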

This option is more work (How do you determine what the IF condition should be? What happens when more data is added to the table over time and the distribution of data changes?) but will give you the best performance if you want your plans to be cached and optimal for the data getting passed in.

Conclusion

  1. Parameter sniffing is only bad when your data values are unevenly distributed and cached query plans are not optimal for all values.
  2. SQL Server caches the query plan that is generated from the first run of a query/stored procedure with whatever parameter values were used during that first run.
  3. Using the RECOMPILE hint is a good solution when your queries don’t run often or don’t stay in the query cache most of the time anyway.
  4. The OPTIMIZE FOR hint is good to use when you can specify a value that will generate a query plan that is efficient for most parameter values and are OK with taking a hit for a sub-optimal plan on infrequently queried values.
  5. Using complex logic (like IF/ELSE) will give you ultimate flexibility and performance, but will also be the worst for long term maintenance.
