Is It Possible To Conditionally Index JSON Data?

Published on: 2018-05-01

Check out this week’s episode on YouTube.

Recently I received a great question from an attendee at one of my sessions on JSON (what’s up Nam!): is it possible to conditionally index JSON data?

At first glance it sounds like a filtered index question, and ultimately it is, but because of some of the intricacies involved in the response I thought it would make for a good blog post.

The Problem: Schema On Read

Imagine I have a central table that keeps track of warnings and errors for my burrito ordering app:

DROP TABLE IF EXISTS dbo.BurritoAppLog;
GO

CREATE TABLE dbo.BurritoAppLog 
( 
	Id int IDENTITY PRIMARY KEY,
	ErrorDetails nvarchar(1000)
); 
GO 

INSERT INTO dbo.BurritoAppLog VALUES (N'{"Type":"Warning", "MessageId": 100, "Severity": "High", "Information":"Running low on steak." }'); 
INSERT INTO dbo.BurritoAppLog VALUES (N'{"Type":"Warning", "MessageId": 50, "Severity": "Low", "Information":"Running low on queso." }');
GO 4000
INSERT INTO dbo.BurritoAppLog VALUES (N'{"Type":"Error", "MessageId": 10, "User":"Bert", "ErrorMessage":"Lettuce not available." }'); 
INSERT INTO dbo.BurritoAppLog VALUES (N'{"Type":"Error", "MessageId": 20, "User":"Jim", "ErrorMessage":"Cannot wrap burrito with quadruple meat." }'); 
GO 100

Now imagine wanting to generate a report of only the rows that are errors.

Obviously, you’d want to index this data for faster query performance.  Adding a non-clustered index on a non-persisted computed column of our JSON “Type” property will accomplish that:

ALTER TABLE dbo.BurritoAppLog 
ADD ErrorType AS JSON_VALUE(ErrorDetails, '$.Type');

ALTER TABLE dbo.BurritoAppLog 
ADD MessageId AS JSON_VALUE(ErrorDetails, '$.MessageId');

CREATE INDEX IX_ErrorType ON dbo.BurritoAppLog (ErrorType) INCLUDE (MessageId);

SELECT MessageId FROM dbo.BurritoAppLog WHERE ErrorType = 'Error'

And that works great.  Except that error entries in our table make up only 2.5% of our total rows.  Assuming we’ll never need to query WHERE ErrorType = 'Warning', this index is using a lot of unnecessary space.
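
If you want to double-check that ratio yourself, a quick sanity-check query (a sketch that uses the ErrorType computed column we just added) looks like this:

-- Sanity check: how do errors vs. warnings break down?
SELECT 
    ErrorType,
    COUNT(*) AS TotalRows,
    COUNT(*) * 100.0 / SUM(COUNT(*)) OVER () AS PercentOfTotal
FROM dbo.BurritoAppLog
GROUP BY ErrorType;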

So what if we create a filtered index instead?

Filtered JSON Indexes…

A filtered index should benefit us significantly here: it should save us space (since it won’t include all of those warning rows) and it should make our INSERT queries into this table faster since the index won’t need to be maintained for our non-“Error” rows.

So let’s create a filtered index:

CREATE INDEX FX_ErrorType ON dbo.BurritoAppLog (ErrorType) INCLUDE (MessageId) WHERE ErrorType = 'Error'

Oh.

So I guess we can’t create a filtered index where the filter is on a computed column.  Maybe SQL Server won’t mind if we persist the computed column?

DROP INDEX IX_ErrorType ON dbo.BurritoAppLog

ALTER TABLE dbo.BurritoAppLog
DROP COLUMN ErrorType;

ALTER TABLE dbo.BurritoAppLog 
ADD ErrorType AS JSON_VALUE(ErrorDetails, '$.Type') PERSISTED;

CREATE INDEX FX_ErrorType ON dbo.BurritoAppLog (ErrorType) INCLUDE (MessageId) WHERE ErrorType = 'Error'

NOOOOOOPPPPEEEE.  Same error message.

The issue is that SQL Server does not like computed columns, persisted or not, in a filtered index’s WHERE clause.  It’s one of the many limitations of filtered indexes (Aaron Bertrand has a great post outlining many of the shortcomings).

Computed Column Filtered Index Workaround

What is a performance minded, space-cautious, JSON-loving developer supposed to do?

One workaround to get our filtered index would be to parse our ErrorType property into its own table column on insert:

ALTER TABLE dbo.BurritoAppLog 
ADD PermanentErrorType varchar(10);

UPDATE dbo.BurritoAppLog SET PermanentErrorType = JSON_VALUE(ErrorDetails, '$.Type');
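
The UPDATE above backfills existing rows, but new rows need to populate PermanentErrorType as well.  A minimal sketch of one way to do that at insert time (the sample values below are made up for illustration):

-- Hypothetical new log entry (made-up values, just to show the pattern)
DECLARE @NewEntry nvarchar(1000) = N'{"Type":"Error", "MessageId": 30, "User":"Nam", "ErrorMessage":"Guacamole not available." }';

INSERT INTO dbo.BurritoAppLog (ErrorDetails, PermanentErrorType)
VALUES (@NewEntry, JSON_VALUE(@NewEntry, '$.Type'));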

With our PermanentErrorType column in place, we have no problem generating our filtered index:

CREATE INDEX FX_PermanentErrorType ON dbo.BurritoAppLog (PermanentErrorType) INCLUDE (MessageId) WHERE PermanentErrorType = 'Error'

If we compare the sizes of our nonclustered index to our filtered index, you’ll immediately see that the filtered index is significantly smaller:

However, our table size is now slightly larger because of the added table column.
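
If you want to compare index sizes on your own system, one way to get rough numbers (a sketch against sys.dm_db_partition_stats; it simply lists every index currently on the table) is:

-- Rough size (in KB) of every index on the table
SELECT
    i.name AS IndexName,
    SUM(ps.used_page_count) * 8 AS IndexSizeKB
FROM sys.dm_db_partition_stats ps
    INNER JOIN sys.indexes i
        ON ps.object_id = i.object_id
        AND ps.index_id = i.index_id
WHERE i.object_id = OBJECT_ID('dbo.BurritoAppLog')
GROUP BY i.name;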

Conclusion

So what do you do if you run into this situation?  Well, if the ratio of undesired records to desired records is large, like in the example above, you might want to add a permanent column to use in your filtered index – the size and performance benefits are certainly there.  Your table will be slightly larger because of the additional column, but queries that can use the smaller filtered index will be faster.


Importing GeoJSON Earthquake Data Into SQL Server

Published on: 2018-01-16

A significant portion of Yellowstone National Park sits on top of a supervolcano.  Although it’s not likely to erupt any time soon, the park is constantly monitored for geological events like earthquakes.

This week I want to take a look at how you can import this earthquake data, encoded in GeoJSON format, into SQL Server in order to be able to analyze it using SQL Server’s spatial functions.

Watch this week’s post on YouTube! I really enjoyed making all of the overlays for this episode.

GeoJSON

The source for the data we’ll be using is the 30-day earthquake feed from the USGS.  This data is encoded in the GeoJSON format, a specification that makes it easy to share spatial data via JSON.  To get an idea of how it looks, here’s an extract:

{
    "type": "FeatureCollection",
    "metadata": {
        "generated": 1515603955000,
        "url": "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/1.0_month.geojson",
        "title": "USGS Magnitude 1.0+ Earthquakes, Past Month",
        ...
    },
    "features": [{
            "type": "Feature",
            "properties": {
                "mag": 1.25,
                "place": "7km NW of The Geysers, CA",
                "time": 1515603166510,
                ...
            },
            "geometry": {
                "type": "Point",
                "coordinates": [-122.8221664, 38.8175011, 2.02]
            },
            "id": "nc72952110"
        },
        ...
        ],
    "bbox": [-179.921, -62.519, -3.35, 179.9551, 85.3951, 608.58]
}

The key thing we’ll be examining in this data is the “features” array: it contains one feature object for each earthquake that’s been recorded in the past 30 days.  You can see the “geometry” child object contains the coordinates (in GeoJSON’s longitude, latitude, depth order) that we’ll be importing into SQL Server.

If you want the same 30-day GeoJSON extract we’ll be using in all of the following demo code, you can download it here.

Importing GeoJSON into SQL Server

There’s no out of the box way to import GeoJSON data into SQL Server.

However, using SQL Server’s JSON functions we can build our own solution pretty easily.

First, let’s create a table where we can store all of the earthquake data:

CREATE TABLE dbo.EarthquakeData
(
    Id int IDENTITY PRIMARY KEY,
    EventDate DATETIME2,
    Magnitude FLOAT,
    Place VARCHAR(300),
    Coordinates GEOGRAPHY,
    Long varchar(100),
    Lat varchar(100)
)

CREATE SPATIAL INDEX IX_Coordinates ON dbo.EarthquakeData (Coordinates)

Then, let’s use the OPENJSON() function to parse our JSON and insert it into our table:

DECLARE @GeoJSON nvarchar(max) = N'{...}' -- Put the GeoJSON here

INSERT INTO dbo.EarthquakeData (EventDate,Magnitude,Place,Coordinates,Long,Lat)
SELECT 
    -- Convert unix epoch time to datetime
    -- We also know the source is in UTC, so we specify that using AT TIME ZONE for easier conversions down the road
    DATEADD(second,cast(UnixMillisecondsSinceEpoch/1000 as int),'1970-01-01 00:00:00') AT TIME ZONE 'UTC' AS EventDate,
    Magnitude,
    Place,
    -- Build our geography datatype
    geography::STPointFromText('POINT ('+Long + ' ' + Lat + ')', 4326) AS Geography,
    Long,
    Lat
FROM 
    OPENJSON(@GeoJSON, '$.features')
        WITH (
            UnixMillisecondsSinceEpoch bigint '$.properties.time',
            Magnitude float '$.properties.mag',
            Place varchar(300) '$.properties.place',
            Long varchar(100) '$.geometry.coordinates[0]',
            Lat varchar(100) '$.geometry.coordinates[1]'
        )

We use OPENJSON() to parse the JSON hierarchy and then concatenate the long and lat values into well-known text (WKT) format so we can pass them to SQL Server’s spatial function STPointFromText:

geography::STPointFromText('POINT ('+Long + ' ' + Lat + ')', 4326) AS Geography

What results is our earthquake data all nicely parsed out into our dbo.EarthquakeData table:

What about Yellowstone?

The above data includes earthquakes from around the world.  Since we only want to examine earthquakes in Yellowstone, we’ll need to filter everything else out.

There’s a handy Place column in the data that we could probably add a LIKE '%yellowstone%' filter to – but this is a post about spatial data in SQL, so we can do better!
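
(For reference, the quick-and-dirty text filter would look something like the sketch below – it works, but it depends entirely on how the USGS happens to word the place description.)

-- Text-based filter: simple, but fragile compared to a spatial filter
SELECT Id, EventDate, Magnitude, Place
FROM dbo.EarthquakeData
WHERE Place LIKE '%yellowstone%';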

The Wyoming State Geological Survey website has Shapefiles for the boundary of Yellowstone National Park.  Since we are practicing our GeoJSON import skills, I converted the Shapefiles to GeoJSON using an online converter and the resulting data looks like this:

{
    "type": "FeatureCollection",
    "bbox": [-111.15354785438899, 44.13238494057162, -109.816111731858, 45.102865336505396],
    "features": [{
        "type": "Feature",
        "geometry": {
            "type": "Polygon",
            "coordinates": [
                [
                    [-111.0970801722779, 44.487322588834374],
                    [-111.09707931336956, 44.48929779632266],
                    [-111.09707877845554, 44.49109904485493],
                    [-111.09707888353985, 44.49472122457225],
                    ...
                ]
            ]
        },
        ...
    }]
}

You can download the full park boundary GeoJSON file here.

Just like before, we’ll use SQL Server’s OPENJSON() function to parse our GeoJSON data into a well-known text POLYGON.

First we create our table:

DROP TABLE IF EXISTS dbo.ParkBoundaries
CREATE TABLE dbo.ParkBoundaries
(
    Id int IDENTITY PRIMARY KEY,
    ParkName varchar(100),
    ParkBoundary GEOGRAPHY
)
CREATE SPATIAL INDEX IX_Boundary ON dbo.ParkBoundaries (ParkBoundary)

And then populate it, this time using the STPolyFromText spatial function:

DECLARE @Yellowstone nvarchar(max) = N'{...}' -- Put the park boundary GeoJSON here

INSERT INTO dbo.ParkBoundaries (ParkName, ParkBoundary)
SELECT
    'Yellowstone National Park' AS ParkName,
    geography::STPolyFromText('POLYGON ((' + STRING_AGG(CAST(Long + ' ' + Lat as varchar(max)), ',') + '))',4326).ReorientObject() AS ParkBoundary
FROM
    (
    SELECT 
        Long,
        Lat
    FROM
        OPENJSON(@Yellowstone, '$.features[0].geometry.coordinates[0]')
        WITH
            (
                Long varchar(100) '$[0]',
                Lat varchar(100) '$[1]'
            )
)d

Filtering our data

Now we have two tables: dbo.EarthquakeData and dbo.ParkBoundaries.  What we want to do is select only the Earthquake data points that fall within the boundaries of Yellowstone National Park.

This is easy to do using the STIntersects spatial function, which returns a “1” for any rows where one geography instance (our lat/long earthquake coordinate) intersects another geography instance (our park boundary):

SELECT 
    d.Id,
    d.Magnitude,
    d.EventDate,
    d.Place,
    b.ParkName
FROM 
    dbo.EarthquakeData d
    CROSS JOIN dbo.ParkBoundaries b
WHERE
    d.Coordinates.STIntersects(b.ParkBoundary) = 1
    AND b.ParkName = 'Yellowstone National Park'
ORDER BY
    Magnitude DESC

The rest is up to you

So all it takes to import GeoJSON data into SQL Server is knowing how to use SQL Server’s JSON functions.

Once geographical data is imported into geography data types, SQL Server’s spatial functions offer lots of flexibility for how to efficiently slice and dice the data.
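
For example, here’s a sketch that uses STDistance to find earthquakes within 10 km of Old Faithful (the point below uses approximate coordinates, purely for illustration):

-- Approximate location of Old Faithful (longitude latitude), for illustration only
DECLARE @OldFaithful geography = geography::STPointFromText('POINT (-110.828 44.460)', 4326);

SELECT Id, EventDate, Magnitude, Place
FROM dbo.EarthquakeData
WHERE Coordinates.STDistance(@OldFaithful) <= 10000 -- STDistance on geography returns meters
ORDER BY Magnitude DESC;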
