
Conditionally Returning Rows Based On Query Results

Watch this week's video on YouTube

While I normally prefer formatting my query results in a downstream app/reporting layer, sometimes I can't get around adding some business formatting logic to my SQL queries.

Today I want to show you four different ways to conditionally output rows of data based on a SQL query.

Setting Up The Base Query

You've probably worked with a query that results in either 0 or 1 rows being returned:

DECLARE @CurrentDatetime datetime = getdate();
SELECT
    1 AS AreSecondsDivisibleBy2,
    @CurrentDatetime AS CurrentDatetime
WHERE
    DATEPART(second,@CurrentDatetime) % 2 = 0

If you put that query into a derived table and add some IIF() logic on top of it, you now have a situation where your result may contain a row with one of two distinct values, or no rows at all:

DECLARE @CurrentDatetime datetime = GETDATE();
SELECT
    IIF(DATEPART(second,d.CurrentDatetime) % 3 = 0, 1,0) AS AreSecondsDivisibleBy3And2,
    d.CurrentDatetime
FROM
    (
    SELECT
        1 AS AreSecondsDivisibleBy2,
        @CurrentDatetime AS CurrentDatetime
    WHERE
        DATEPART(second,@CurrentDatetime) % 2 = 0
    ) d

(Note: if you ever need to know whether the seconds part of the current time is divisible by 3 and 2, use SELECT IIF(DATEPART(second,GETDATE()) % 6 = 0,1,0) and not this monstrosity of a query I'm creating to demonstrate a scenario where the outer logic depends on a derived table.)

Sometimes we may want to force certain scenarios based on the output of the query above. Here are a few common patterns I find myself using when business requirements call for a query to return data conditionally.

Always Return 1 Row

Let's say we are happy with getting a result value of 0 or 1 for AreSecondsDivisibleBy3And2, but additionally want to return some other value when our derived table returns no rows. We can accomplish this with a UNION ALL and some sorting:

DECLARE @CurrentDatetime datetime = GETDATE();
SELECT TOP 1
    AreSecondsDivisibleBy3And2,
    CurrentDatetime
FROM
    (
    SELECT
        IIF(DATEPART(second,CurrentDatetime) % 3 = 0, 1,0) AS AreSecondsDivisibleBy3And2,
        d.CurrentDatetime,
        0 AS OrderPrecedence
    FROM
        (
        SELECT
            1 AS AreSecondsDivisibleBy2,
            @CurrentDatetime AS CurrentDatetime
        WHERE
            DATEPART(second,@CurrentDatetime) % 2 = 0
    ) d
    UNION ALL
    SELECT
        -1 AS AreSecondsDivisibleBy3And2,
        @CurrentDatetime AS CurrentDatetime,
        1 AS OrderPrecedence
    ) p
ORDER BY
     OrderPrecedence

We can limit our query to return TOP 1 and then add an OrderPrecedence column to determine which query result row to return. If our original query produces a row of data, it will be returned because of its OrderPrecedence. If our original query returns 0 rows, our fallback UNION ALL default value of -1 will be returned.

Return 1 row when value is 1, 0 rows when value is 0

What about a situation where we want to return a row when AreSecondsDivisibleBy3And2 is equal to 1, but no row when it is equal to 0?

The IIF function and CASE statements work great for conditionally returning a value in a SELECT, but how can we conditionally return a row?  Here's one approach:

DECLARE @CurrentDatetime datetime = GETDATE();
SELECT
     AreSecondsDivisibleBy3And2,
    CurrentDatetime
FROM
    (
    SELECT
        IIF(DATEPART(second,d.CurrentDatetime) % 3 = 0, 1,0) AS AreSecondsDivisibleBy3And2,
        CurrentDatetime
    FROM
        (
        SELECT
            1 AS AreSecondsDivisibleBy2,
            @CurrentDatetime AS CurrentDatetime
        WHERE
            DATEPART(second,@CurrentDatetime) % 2 = 0
        ) d
    ) d1
    INNER JOIN
    (
    SELECT
        0 AS ValueToNotReturn
    ) d2
        ON d1.AreSecondsDivisibleBy3And2 != d2.ValueToNotReturn

In this scenario, we return a row when our value is 1, but do not return a row when the value is 0 or our derived table doesn't return any rows.

We use an INNER JOIN to filter out the value for which we want to return 0 rows.

Return 1 row when null, and 0 rows when the value is 1 or 0

In this scenario we want to return a row only when the underlying query produces no rows at all (the -1 fallback), and no rows when AreSecondsDivisibleBy3And2 is 1 or 0.

I've never had a real-world use for this one, but it's essentially a combination of the first two solutions.

DECLARE @CurrentDatetime datetime = GETDATE();
SELECT
    AreSecondsDivisibleBy3And2,
    CurrentDatetime
FROM
    (
    SELECT TOP 1
        AreSecondsDivisibleBy3And2,
        CurrentDatetime
    FROM
        (
        SELECT
            IIF(DATEPART(second,d.CurrentDatetime) % 3 = 0, 1,0) AS AreSecondsDivisibleBy3And2,
            d.CurrentDateTime,
            0 AS OrderPrecedence
        FROM
            (
            SELECT
                1 AS AreSecondsDivisibleBy2,
                @CurrentDatetime AS CurrentDatetime
            WHERE
                DATEPART(second,@CurrentDatetime) % 2 = 0
            ) d
        UNION ALL
        SELECT
            -1 AS AreSecondsDivisibleBy3And2,
            @CurrentDatetime AS CurrentDatetime,
            1 AS OrderPrecedence
        )u
        ORDER BY
             OrderPrecedence
    ) d1
    INNER JOIN
    (
    SELECT
        -1 AS ValueToReturn
    ) d2
        ON d1.AreSecondsDivisibleBy3And2 = d2.ValueToReturn

This one is by far the most difficult to reason through, but it's a pure reversal of the earlier patterns: return no rows when the original query produces a row, and return a row when it produces none.

Always return 0 rows

I'm not exactly sure what the business case for never returning any rows would be, but this one is pretty simple: just add a condition that will always evaluate to false:

DECLARE @CurrentDatetime datetime = GETDATE();
SELECT
    IIF(DATEPART(second,d.CurrentDatetime) % 3 = 0, 1,0) AS AreSecondsDivisibleBy3And2
FROM
    (
    SELECT
        1 AS AreSecondsDivisibleBy2,
        @CurrentDatetime AS CurrentDatetime
    WHERE
        DATEPART(second,@CurrentDatetime) % 2 = 0
    ) d
WHERE
    1=0

Since 1=0 will never be true, your query will never return any results regardless of what kind of logic is happening in your SELECT or derived table.

Displaying Long Values in SSMS

Watch this week's video on YouTube

I write a lot of dynamic SQL and frequently encounter variables that contain many characters:

DECLARE @LongValue NVARCHAR(MAX) = CAST('' AS NVARCHAR(MAX)) + 
N'SELECT
    ''A'' AS AShortValue,
    '''+REPLICATE(N'A',4000)+''' as ALongValue
ORDER BY 1';

This variable is 4059 characters long, and when I execute it, it runs great.

SELECT LEN(@LongValue); -- 4059 characters
EXEC(@LongValue);

(Screenshot: an homage to one of my favorite Uncyclopedia entries.)

If my programmatically built query had an error in it, the first thing I'd want to do when debugging would be to see the text of the entire @LongValue variable.

I could do this by just running SELECT @LongValue, and while recent versions of SSMS will display the whole value for me, it completely loses my formatting, which stinks (and is especially bad if the query contains any comments prefixed with --):

(Screenshot: you'd need an ultra HD wide display to fit this all on one screen.)

I can say PRINT @LongValue, which will keep the formatting, but it will get trimmed at 4,000 characters (notice the missing ORDER BY):

(Screenshot: the PRINT output, cut off before the ORDER BY.)
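
For reference, the two quick attempts above are just:

SELECT @LongValue; -- displays the full value, but strips the formatting
PRINT @LongValue;  -- keeps the formatting, but truncates nvarchar output at 4,000 characters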

Some Better Ways

Erik Darling posts one solution to this problem in his T-SQL Tuesday #104 entry (as well as some other problems/solutions for lengthy SQL variables). Specifically, he links to a SQL string printing script that loops through the lengthy variable and prints everything while maintaining formatting:

(Screenshot: not perfectly formatted, but good enough.)
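
If you're curious, the core idea behind that kind of script is to print the variable in chunks small enough for PRINT to handle. Here's a minimal sketch of the concept (not the actual procedure, which is smarter about splitting on line breaks so formatting stays fully intact):

DECLARE @Position int = 1;

WHILE @Position <= LEN(@LongValue)
BEGIN
    -- PRINT truncates nvarchar at 4,000 characters, so output the variable in 4,000 character chunks
    PRINT SUBSTRING(@LongValue, @Position, 4000);
    SET @Position = @Position + 4000;
END;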

And while I like using that stored procedure on my primary server, I'm too lazy to install it everywhere I need it.

Instead, I have a couple of go-to solutions that work on all SQL Server instances from 2008 forward.

Solution 1: CAST to XML

SELECT CAST(@LongValue AS XML) AS LongValue

Casting the long variable to XML allows SSMS to generate a clickable, single-row result that preserves formatting:

(Screenshot: IntelliSense complains, but I'm OK with it.)

The only downside to this approach is that certain characters, like "<" and ">", can't be converted to XML:
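
For instance, a string containing a bare "<" fails to cast (a quick made-up example, not our dynamic SQL):

SELECT CAST(N'SELECT 1 WHERE 2 < 3' AS XML); -- fails with an XML parsing error because of the unescaped "<"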


Solution 2: FOR XML PATH

As a slight variation on solution 1, we can get similar results using FOR XML PATH:

SET @LongValue = '<' + @LongValue -- Let's add in an invalid character
SELECT @LongValue FOR XML PATH('')

(Screenshot: FOR XML PATH is one of the most abused SQL Server functions.)

In this solution, the "<" is escaped to "&lt;", which isn't perfect, but at least my variable can be displayed with formatting intact.  A quick find and replace for any escaped characters and I'm good to go.

Good Enough

These techniques aren't perfect, but for purposes of debugging dynamically generated code they are good enough.

Maybe one day SSMS will print longer strings or include a syntax formatter and I won't care nearly as much.

And if not, I'll happily continue to abuse FOR XML to do things other than generate XML documents.

4 Ways To Define Lookup Values In A Query

Watch this week's video on YouTube

Occasionally you may need to create a lookup table of values for a single query.  Building a permanent table of these values is a good option if you need to reuse the values frequently, but for one-off, ad-hoc queries you may want to hard-code the lookup table of values directly in your query.

We can hard-code these values using several different techniques.  Below are the techniques I've seen used before, going in order from my least to most favorite.

Table Variables

DECLARE @T TABLE 
(
    ColorName varchar(20), 
    HexCode char(7)
);

INSERT INTO @T VALUES 
        ('FireBrick','#B22222'),
        ('HotPink','#FF69B4'),
        ('Tomato','#FF6347'),
        ('PapayaWhip','#FFEFD5'),
        ('RebeccaPurple','#663399'),
        ('LawnGreen','#7CFC00'),
        ('MidnightBlue','#191970');

SELECT * FROM @T;

Table variables get a bad rap in SQL Server because they don't provide accurate row estimates to the query optimizer.  This can cause SQL Server to come up with some really terrible execution plans that will kill your query's performance.

However, if your use case is a single small lookup table of fewer than 100 records, table variables might actually be a viable option.  They are quick and easy to set up and can be added to the top of your query.

With that said, I don't think I've ever used a table variable in this type of scenario (or any scenario really).  I know some people love using them and I think that's fine as long as you are keeping track of your query performance.  For me though, there are so many better options available...

Temporary Tables

CREATE TABLE #T 
(
    ColorName varchar(20), 
    HexCode char(7)
);

INSERT INTO #T VALUES 
        ('FireBrick','#B22222'),
        ('HotPink','#FF69B4'),
        ('Tomato','#FF6347'),
        ('PapayaWhip','#FFEFD5'),
        ('RebeccaPurple','#663399'),
        ('LawnGreen','#7CFC00'),
        ('MidnightBlue','#191970');

SELECT * FROM #T;

Temp tables are the answer to many of the table variable's shortcomings.

Temp tables can perform well with larger amounts of data because they can be indexed and can have statistics generated on them.  Both of these features typically help SQL Server generate better execution plans.
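
For example, a lookup temp table can be indexed just like a permanent table (overkill for our seven colors, but handy as the data grows):

CREATE CLUSTERED INDEX CL_ColorName ON #T (ColorName);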

There is some overhead in coding a temp table though: just like a table variable, a temp table needs to be created and inserted into before you can use it in your query.  While normally not a huge deal, this is not something I want to have to do in those instances where I want to define some lookup values quickly...

SELECT with UNION ALL

SELECT
    *
FROM
    (
        SELECT 'FireBrick','#B22222'     UNION ALL
        SELECT 'HotPink','#FF69B4'   UNION ALL
        SELECT 'Tomato','#FF6347'    UNION ALL
        SELECT 'PapayaWhip','#FFEFD5'    UNION ALL
        SELECT 'RebeccaPurple','#663399' UNION ALL
        SELECT 'LawnGreen','#7CFC00'     UNION ALL
        SELECT 'MidnightBlue','#191970'
    ) T(ColorName,HexCode);

The next option is hard-coding values in SELECT statements and then joining them together with UNION ALLs.

This is probably the most common technique I see, and for good reason: the syntax is straightforward and doesn't require any special setup; perfect for the one-time use ad-hoc scenario.

Its format also makes it easy to use the ALT + highlight shortcut to quickly create a derived table lookup from the results of another query or values copied from elsewhere.

I do like this method a lot, but there is one method that I like slightly more...

The VALUES() Constructor

SELECT
    *
FROM
    (VALUES 
        ('FireBrick','#B22222'),
        ('HotPink','#FF69B4'),
        ('Tomato','#FF6347'),
        ('PapayaWhip','#FFEFD5'),
        ('RebeccaPurple','#663399'),
        ('LawnGreen','#7CFC00'),
        ('MidnightBlue','#191970')
        ) T(ColorName,HexCode);

You've probably used the VALUES constructor in an INSERT INTO statement, but did you know you can also use it in a FROM clause?

This syntax is similar to our SELECT + UNION ALL technique above, except we swap in parentheses and commas for the SELECTs and UNION ALLs.  I find this slightly easier to write, if only because it requires typing out fewer characters and feels more programmatic.

One Last Tip: CTE

WITH HtmlColors AS (
        SELECT 'FireBrick' AS ColorName,'#B22222' AS HexCode     UNION ALL
        SELECT 'HotPink','#FF69B4'   UNION ALL
        SELECT 'Tomato','#FF6347'    UNION ALL
        SELECT 'PapayaWhip','#FFEFD5'    UNION ALL
        SELECT 'RebeccaPurple','#663399' UNION ALL
        SELECT 'LawnGreen','#7CFC00'     UNION ALL
        SELECT 'MidnightBlue','#191970'
)

SELECT * FROM HtmlColors

This isn't really an additional technique, but something related that I use often and feels appropriate to mention.

If using either the SELECT + UNION ALL or VALUES techniques, you can put those lookup queries into a common table expression for easier referencing.

This doesn't give any performance advantage, but it does help keep your code clean by putting your lookup logic right at the top of your file.  This becomes particularly useful when using those hard-coded lookup values as parameters, allowing all changes to your query to be made right at the top of your file during subsequent runs.
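
For example, here's a hypothetical sketch of that pattern, where dbo.Orders is a made-up table and the CTE acts as a set of run parameters defined once at the top of the file:

WITH Parameters AS (
    SELECT CAST('2018-01-01' AS date) AS StartDate, 50 AS MinimumQuantity
)
SELECT
    o.OrderId,
    o.OrderDate,
    o.Quantity
FROM
    dbo.Orders o -- hypothetical table
    CROSS JOIN Parameters p
WHERE
    o.OrderDate >= p.StartDate
    AND o.Quantity >= p.MinimumQuantity;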

Importing GeoJSON Earthquake Data Into SQL Server

A significant portion of Yellowstone National Park sits on top of a supervolcano.  Although it's not likely to erupt any time soon, the park is constantly monitored for geological events like earthquakes.

This week I want to take a look at how you can import this earthquake data, encoded in GeoJSON format, into SQL Server in order to be able to analyze it using SQL Server's spatial functions.

Watch this week's video on YouTube

GeoJSON

The source for the data we'll be using is the 30-day earthquake feed from the USGS.  This data is encoded in the GeoJSON format, a specification that makes it easy to share spatial data via JSON.  To get an idea of how it looks, here's an extract:

{
    "type": "FeatureCollection",
    "metadata": {
        "generated": 1515603955000,
        "url": "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/1.0_month.geojson",
        "title": "USGS Magnitude 1.0+ Earthquakes, Past Month",
        ...
    },
    "features": [{
            "type": "Feature",
            "properties": {
                "mag": 1.25,
                "place": "7km NW of The Geysers, CA",
                "time": 1515603166510,
                ...
            },
            "geometry": {
                "type": "Point",
                "coordinates": [-122.8221664, 38.8175011, 2.02]
            },
            "id": "nc72952110"
        },
        ...
        ],
    "bbox": [-179.921, -62.519, -3.35, 179.9551, 85.3951, 608.58]
}

The key thing we'll be examining in this data is the "features" array: it contains one feature object for each earthquake that's been recorded in the past 30 days.  You can see the "geometry" child object contains the longitude, latitude, and depth coordinates that we'll be importing into SQL Server.

If you want the same 30-day GeoJSON extract we'll be using in all of the following demo code, you can download it here.

Importing GeoJSON into SQL Server

There's no out-of-the-box way to import GeoJSON data into SQL Server.

However, using SQL Server's JSON functions we can build our own solution pretty easily.

First, let's create a table where we can store all of the earthquake data:

CREATE TABLE dbo.EarthquakeData
(
    Id int IDENTITY PRIMARY KEY,
    EventDate DATETIME2,
    Magnitude FLOAT,
    Place VARCHAR(300),
    Coordinates GEOGRAPHY,
    Long varchar(100),
    Lat varchar(100)
)

CREATE SPATIAL INDEX IX_Coordinates ON dbo.EarthquakeData (Coordinates)

Then, let's use the OPENJSON() function to parse our JSON and insert it into our table:

DECLARE @GeoJSON nvarchar(max) = N'{...}' -- Put the GeoJSON here

INSERT INTO dbo.EarthquakeData (EventDate,Magnitude,Place,Coordinates,Long,Lat)
SELECT 
    -- Convert unix epoch time to datetime
    -- We also know the source is in UTC, so we specify that using AT TIME ZONE for easier conversions down the road
    DATEADD(second,cast(UnixMillisecondsSinceEpoch/1000 as int),'1970-01-01 00:00:00') AT TIME ZONE 'UTC' AS EventDate,
    Magnitude,
    Place,
    -- Build our geography datatype
    geography::STPointFromText('POINT ('+Long + ' ' + Lat + ')', 4326) AS Geography,
    Long,
    Lat
FROM 
    OPENJSON(@GeoJSON, '$.features')
        WITH (
            UnixMillisecondsSinceEpoch bigint '$.properties.time',
            Magnitude float '$.properties.mag',
            Place varchar(300) '$.properties.place',
            Long varchar(100) '$.geometry.coordinates[0]',
            Lat varchar(100) '$.geometry.coordinates[1]'
        )

We use OPENJSON() to parse our JSON hierarchy and then concatenate the long and lat values into well-known text (WKT) format so we can use SQL Server's spatial function STPointFromText:

SELECT geography::STPointFromText('POINT ('+Long + ' ' + Lat + ')', 4326) AS Geography

What results is our earthquake data all nicely parsed out into our dbo.EarthquakeData table:

(Screenshot: the parsed rows in dbo.EarthquakeData.)

What about Yellowstone?

The above data includes earthquakes from around the world.  Since we only want to examine earthquakes in Yellowstone, we'll need to filter the rest out.

There's a handy Place column in the data that we could probably add a LIKE '%yellowstone%' filter to, but this is a post about spatial data in SQL - we can do better!

The Wyoming State Geological Survey website has Shapefiles for the boundary of Yellowstone National Park.  Since we are practicing our GeoJSON import skills, I converted the Shapefiles to GeoJSON using an online converter and the resulting data looks like this:

{
    "type": "FeatureCollection",
    "bbox": [-111.15354785438899, 44.13238494057162, -109.816111731858, 45.102865336505396],
    "features": [{
        "type": "Feature",
        "geometry": {
            "type": "Polygon",
            "coordinates": [
                [
                    [-111.0970801722779, 44.487322588834374],
                    [-111.09707931336956, 44.48929779632266],
                    [-111.09707877845554, 44.49109904485493],
                    [-111.09707888353985, 44.49472122457225],
                    ...
                ]
            ]
        },
        ...
    }]
}

You can download the full park boundary GeoJSON file here.

Just like before, we'll use SQL Server's OPENJSON() function to parse our GeoJSON data into a well-known text POLYGON.

First we create our table:

DROP TABLE IF EXISTS dbo.ParkBoundaries
CREATE TABLE dbo.ParkBoundaries
(
    Id int IDENTITY PRIMARY KEY,
    ParkName varchar(100),
    ParkBoundary GEOGRAPHY
)
CREATE SPATIAL INDEX IX_Boundary ON dbo.ParkBoundaries (ParkBoundary)

And then populate it, this time using the STPolyFromText spatial function:

DECLARE @Yellowstone nvarchar(max) = N'{...}' -- Put the park boundary GeoJSON here

INSERT INTO dbo.ParkBoundaries (ParkName, ParkBoundary)
SELECT
    'Yellowstone National Park' AS ParkName,
    geography::STPolyFromText('POLYGON ((' + STRING_AGG(CAST(Long + ' ' + Lat as varchar(max)), ',') + '))',4326).ReorientObject() AS ParkBoundary
FROM
    (
    SELECT 
        Long,
        Lat
    FROM
        OPENJSON(@Yellowstone, '$.features[0].geometry.coordinates[0]')
        WITH
            (
                Long varchar(100) '$[0]',
                Lat varchar(100) '$[1]'
            )
)d

Filtering our data

Now we have two tables: dbo.EarthquakeData and dbo.ParkBoundaries.  What we want to do is select only the Earthquake data points that fall within the boundaries of Yellowstone National Park.

This is easy to do using the STIntersects spatial function, which returns a "1" for any rows where one geography instance (our lat/long earthquake coordinate) intersects another geography instance (our park boundary):

SELECT 
    d.Id,
    d.Magnitude,
    d.EventDate,
    d.Place,
    b.ParkName
FROM 
    dbo.EarthQuakeData d
    CROSS JOIN dbo.ParkBoundaries b
WHERE
    Coordinates.STIntersects(ParkBoundary) =1
    AND b.ParkName = 'Yellowstone National Park'
ORDER BY
    Magnitude DESC

The rest is up to you

So all it takes to import GeoJSON data into SQL Server is knowing how to use SQL Server's JSON functions.

Once geographical data is imported into geography data types, SQL Server's spatial functions offer lots of flexibility for how to efficiently slice and dice the data.
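
For example, here's a sketch using the STDistance function to find all recorded earthquakes within 50 kilometers of Old Faithful (the coordinates below are approximate):

DECLARE @OldFaithful geography = geography::Point(44.4605, -110.8281, 4326); -- approximate lat/long

SELECT
    d.Id,
    d.Magnitude,
    d.EventDate,
    d.Place,
    d.Coordinates.STDistance(@OldFaithful) / 1000.0 AS DistanceInKm -- STDistance returns meters
FROM
    dbo.EarthquakeData d
WHERE
    d.Coordinates.STDistance(@OldFaithful) <= 50000
ORDER BY
    DistanceInKm;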