SQL Server Stored Procedures vs Functions vs Views

Published on: 2019-10-22

Watch this week’s episode on YouTube.

SQL Server has several ways to store queries for later execution.

This makes developers happy because it allows them to follow DRY principles: Don’t Repeat Yourself. The more code you have, the more difficult it is to maintain. Centralizing frequently used code into stored procedures, functions, etc… is attractive.

While following the DRY pattern is beneficial in many programming languages, it can often cause poor performance in SQL Server.

Today’s post will try to explain all of the different code organization features available in SQL Server and when to best use them (thank you to dovh49 on YouTube for recommending this week’s topic and reminding me how confusing all of these different features can be when first learning to use them).

Scalar Functions

CREATE OR ALTER FUNCTION dbo.GetUserDisplayName
(
	@UserId int
)
RETURNS nvarchar(40)
AS
BEGIN
	DECLARE @DisplayName nvarchar(40);
	SELECT @DisplayName = DisplayName FROM dbo.Users WHERE Id = @UserId

	RETURN @DisplayName
END
GO
SELECT TOP 10000 Title, dbo.GetUserDisplayName(OwnerUserId) AS DisplayName FROM dbo.Posts;

Scalar functions run statements that return a single value.

You’ll often read about SQL functions being evil, and scalar functions are a big reason for this reputation. If your scalar function executes a query within it to return a single value, then every row that calls that function runs that query. That’s not good if you have to run a query once for every row in a million-row table.

SQL Server 2019 can inline a lot of these, providing better performance in most cases. However, you can already do this yourself today by taking your scalar function and including it in your calling query as a subquery. The only downside is that you’ll be repeating that same logic in every calling query that needs it.
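For example, the TOP 10000 query above could be rewritten with the function’s lookup inlined as a correlated subquery (a quick sketch; no new objects needed):

SELECT TOP 10000
	p.Title,
	(SELECT u.DisplayName FROM dbo.Users u WHERE u.Id = p.OwnerUserId) AS DisplayName
FROM dbo.Posts p;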

Additionally, using a scalar function on the column side of a predicate prevents SQL Server from being able to seek to data in any of its indexes; a real performance killer.
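As a hypothetical illustration using the function above: wrapping the column in the function means SQL Server must evaluate it for every row instead of seeking, so this predicate forces a scan:

-- The column is wrapped in the scalar function, so no index seek is possible
SELECT Id
FROM dbo.Users
WHERE dbo.GetUserDisplayName(Id) = N'Community';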

For scalar functions that don’t execute a query, you can always use WITH SCHEMABINDING to gain a performance boost.
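For example, a hypothetical data-free helper like the one below can be schema bound; SQL Server can then determine that the function performs no data access, which lets it skip some of the safeguards (like unnecessary spools) it otherwise adds to calling plans:

CREATE OR ALTER FUNCTION dbo.FormatDisplayName
(
	@DisplayName nvarchar(40)
)
RETURNS nvarchar(40)
WITH SCHEMABINDING
AS
BEGIN
	-- No table access here, so schema binding marks this
	-- function as performing no data access
	RETURN LTRIM(RTRIM(@DisplayName));
END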

Inline Table-Valued Functions

CREATE OR ALTER FUNCTION dbo.SplitTags
(	
	@PostId int
)
RETURNS TABLE 
AS
RETURN 
(
	SELECT REPLACE(t.value,'>','') AS Tags 
	FROM dbo.Posts p 
	CROSS APPLY STRING_SPLIT(p.Tags,'<') t 
	WHERE p.Id = @PostId AND t.value <> ''
)
GO
SELECT * FROM dbo.SplitTags(4)

Inline table-valued functions return a table result set instead of just a single value. They are essentially a way to reuse a derived table query (you know, when you nest a child query in your main query’s FROM or WHERE clause).

These are usually considered the “good” SQL Server functions – their performance is decent because SQL Server can get relatively accurate estimates on the data they will return, as long as the statistics on the underlying data are accurate. That generally allows efficient execution plans to be created. As a bonus, they accept parameters, so if you find yourself reusing a subquery over and over again, an inline table-valued function (with or without a parameter) is a nice option.
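Since they accept parameters, they can also be applied row by row in a calling query. A sketch reusing the function above:

-- Reusing the inline TVF against many rows via CROSS APPLY
SELECT TOP 100 p.Id, t.Tags
FROM dbo.Posts p
CROSS APPLY dbo.SplitTags(p.Id) t;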

Multi-Statement Table-Valued Functions

CREATE OR ALTER FUNCTION dbo.GetQuestionWithAnswers
(
	@PostId int
)
RETURNS 
@results TABLE 
(
	PostId bigint,
	Body nvarchar(max),
	CreationDate datetime
)
AS
BEGIN
	-- Returns the original question along with all of its answers in one result set
	-- Would be better to do this with something like a union all or a secondary join. 
	-- But this is an MSTVF demo, so I'm doing it with multiple statements.
	
	-- Statement 1
	INSERT INTO @results (PostId,Body,CreationDate)
	SELECT Id,Body,CreationDate 
	FROM dbo.Posts
	WHERE Id = @PostId;

	-- Statement 2
	INSERT INTO @results (PostId,Body,CreationDate)
	SELECT Id,Body,CreationDate 
	FROM dbo.Posts
	WHERE ParentId = @PostId;
	
	RETURN
END
GO
SELECT * FROM dbo.GetQuestionWithAnswers(4);

Multi-statement table-valued functions at first glance look and feel just like their inline table-valued function cousins: they both accept parameter inputs and return results back into a query. The major difference is that multi-statement table-valued functions allow multiple statements to be executed before the results are returned in a table variable.

This is a great idea in theory – who wouldn’t want to encapsulate multiple operational steps into a single function to make their query logic easier?

However, the major downside is that, prior to SQL Server 2017, SQL Server knows nothing about what’s happening inside of a multi-statement table-valued function when optimizing the calling query. It simply estimates that the MSTVF returns 100 rows (1 row on versions prior to 2014; on 2017 and above, interleaved execution can produce more accurate estimates). This means that execution plans generated for queries that call MSTVFs will often be…less than ideal. Because of this, MSTVFs help add to the “evil” reputation of SQL functions.
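The fixed estimate hurts most when the function’s output feeds further work. In this sketch, the optimizer sizes the join for its guessed row count no matter how many answers actually exist:

-- On 2014/2016 the optimizer assumes 100 rows come out of the MSTVF
SELECT q.PostId, q.CreationDate, p.Title
FROM dbo.GetQuestionWithAnswers(4) q
INNER JOIN dbo.Posts p ON p.Id = q.PostId;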

Stored Procedures

CREATE OR ALTER PROCEDURE dbo.InsertQuestionsAndAnswers
    @PostId int
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.Questions (Id)
    SELECT Id
    FROM dbo.Posts
    WHERE Id = @PostId;

    INSERT INTO dbo.Answers (Id, PostId)
    SELECT Id, ParentId
    FROM dbo.Posts
    WHERE ParentId = @PostId;
END
GO
EXEC dbo.InsertQuestionsAndAnswers @PostId = 4;

Stored procedures encapsulate SQL query statements for easy execution. They return result sets, but those result sets can’t be easily used within another query.
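The closest workaround is INSERT…EXEC into a table you create first. A sketch, using a hypothetical procedure dbo.GetRecentPostIds that returns a single result set of Id values:

-- Capture a procedure's result set into a temp table for further querying
CREATE TABLE #RecentPosts (Id int);

INSERT INTO #RecentPosts (Id)
EXEC dbo.GetRecentPostIds;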

Stored procedures work great when you want to define single or multi-step processes in a single object for easier calling later.

Stored procedures also have the added benefit of being able to have more flexible security rules placed on them, allowing users to access data in specific ways where they don’t necessarily have access to the underlying sources.
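For example (ReportingUser here is hypothetical), ownership chaining lets a user run the procedure without any direct table permissions:

-- ReportingUser can execute the procedure even without SELECT or INSERT
-- permissions on dbo.Posts, dbo.Questions, or dbo.Answers
GRANT EXECUTE ON dbo.InsertQuestionsAndAnswers TO ReportingUser;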

Views

CREATE OR ALTER VIEW dbo.QuestionsWithUsers
WITH SCHEMABINDING
AS
SELECT
    p.Id AS PostId,
    u.Id AS UserId,
    u.DisplayName
FROM  
    dbo.Posts p
    INNER JOIN dbo.Users u
        ON p.OwnerUserId = u.Id
WHERE
    p.PostTypeId = 1;
GO

CREATE UNIQUE CLUSTERED INDEX CL_PostId ON dbo.QuestionsWithUsers (PostId);
SELECT * FROM dbo.QuestionsWithUsers;

Views are similar to inline table-valued functions – they allow you to centralize a query in an object that can be easily called from other queries. The results of the view can be used as part of the calling query; however, parameters can’t be passed in to the view.

Views also have some of the security benefits of a stored procedure; users can be granted access to a view that exposes a limited subset of data from an underlying table they don’t otherwise have access to.
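A sketch, again with a hypothetical ReportingUser:

-- ReportingUser can query the view without any permissions
-- on the underlying dbo.Posts or dbo.Users tables
GRANT SELECT ON dbo.QuestionsWithUsers TO ReportingUser;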

Views also have some performance advantages: indexes can be added to them, essentially materializing the result set in advance of the view being called (creating faster performance). If you are deciding between an inline table-valued function and a view and you don’t need to parameterize the input, a view is usually the better option.
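One caveat worth knowing: on non-Enterprise editions, a query will only use an indexed view’s index if you ask for it with the NOEXPAND hint:

-- Without NOEXPAND, Standard edition expands the view's definition
-- and ignores the index we created on it
SELECT PostId, UserId, DisplayName
FROM dbo.QuestionsWithUsers WITH (NOEXPAND);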

Natively Compiled Stored Procedures and Scalar Functions

CREATE TABLE dbo.QuestionsStaging (Id int PRIMARY KEY NONCLUSTERED) WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_ONLY );

CREATE TABLE dbo.AnswersStaging (Id int PRIMARY KEY NONCLUSTERED, PostId int) WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_ONLY );
GO

CREATE PROCEDURE dbo.InsertQuestionsAndAnswersCompiled
    @PostId int
WITH NATIVE_COMPILATION, SCHEMABINDING
AS BEGIN ATOMIC WITH
(
    TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english'
)
    -- Note: natively compiled modules can only reference memory-optimized
    -- tables, so this demo assumes dbo.Posts is memory-optimized as well
    INSERT INTO dbo.QuestionsStaging (Id)
    SELECT Id
    FROM dbo.Posts
    WHERE Id = @PostId;

    INSERT INTO dbo.AnswersStaging (Id, PostId)
    SELECT Id, ParentId
    FROM dbo.Posts
    WHERE ParentId = @PostId;
END
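And a demo call to match the earlier examples:

GO
EXEC dbo.InsertQuestionsAndAnswersCompiled @PostId = 4;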

These are the same as the stored procedures and scalar functions mentioned above, except they are pre-compiled for use with memory-optimized tables in SQL Server.

This means that instead of SQL Server interpreting the SQL query every time the procedure or scalar function runs, it creates the compiled version ahead of time, reducing the startup overhead of executing one of these objects. This is a great performance benefit, but these objects come with several limitations. If you are able to use them, you should; just be aware of what they can and can’t do.

Conclusion

While writing this post I thought about when I was first learning all of these objects for storing SQL queries. Knowing the differences between all of the options available (or what those options even are!) can be confusing. I hope this post helps ease some of this confusion and helps you choose the right objects for storing your queries.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

Are Stored Procedures Faster Than Stand-Alone Queries?

Published on: 2019-10-15

Watch this week’s episode on YouTube.

A few months ago I was presenting for a user group when someone asked the following question:

Does a query embedded in a stored procedure execute faster than that same query submitted to SQL Server as a stand-alone statement?

The room was pretty evenly split on the answer: some thought the stored procedures will always perform faster while others thought it wouldn’t really matter.

In short, the answer is that the query optimizer will treat a query defined in a stored procedure exactly the same as a query submitted on its own.

Let’s talk about why.

Start with a Plan

While submitting an “EXEC <stored procedure>” statement to SQL Server may require fewer packets of network traffic than submitting the several hundred (thousands?) lines that make up the query embedded in the procedure itself, that is where the efficiencies of a stored procedure end*.

*NOTE: There are certain SQL Server performance features, like temporary object caching, natively compiled stored procedures for memory-optimized tables, etc., that will improve the performance of a stored procedure over an ad hoc query. However, in my experience most people aren’t utilizing these types of features, so it’s a moot point.

After receiving a query, SQL Server’s query optimizer treats both submissions exactly the same. It checks whether a cached plan already exists for the query (and if one does, it uses it); otherwise it sends the query through the optimization process to find a suitable execution plan. If the standalone query and the query defined in the stored procedure are exactly the same, and all other conditions on the server are exactly the same at the time of execution, SQL Server will generate the same plan for both.

To prove this point, let’s look at the following query’s plan as well as the plan for a stored procedure containing that same query:

CREATE OR ALTER PROCEDURE dbo.USP_GetUpVotes
	@UserId int
AS
SELECT  
    COUNT(*) AS UpVotes 
FROM 
	dbo.Posts p
    INNER JOIN dbo.Votes v
		ON v.PostId = p.Id 
WHERE 
    p.OwnerUserId = @UserId
	AND v.VoteTypeId = 2
ORDER BY UpVotes DESC;
GO

EXEC dbo.USP_GetUpVotes 23;
GO

DECLARE @UserId int = 23;

SELECT 
    COUNT(*) AS UpVotes 
FROM 
	dbo.Posts p
    INNER JOIN dbo.Votes v
		ON v.PostId = p.Id 
WHERE 
    p.OwnerUserId = @UserId
	AND v.VoteTypeId = 2
ORDER BY UpVotes DESC;
(Figure: execution plan for both the stored procedure and the ad hoc query.)

I didn’t include a screenshot of the second plan because it is identical.

As you can see, the optimizer generates identical plans for both the standalone query and the stored procedure. In the eyes of SQL Server, both of these queries will be executed in exactly the same way.
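You can verify this yourself by peeking into the plan cache. A sketch (the LIKE filter is just a quick way to find our two queries):

-- objtype shows 'Proc' for the stored procedure and 'Adhoc' for the
-- stand-alone statement, but the plans themselves are the same shape
SELECT cp.objtype, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE N'%UpVotes%';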

But I Swear My Stored Procedures Run Faster!

I think that a lot of the confusion for thinking that stored procedures execute faster comes from caching.

As I wrote about a little while back, SQL Server is very particular about needing every little detail about a query to be exactly the same in order for it to reuse its cached plan. This includes things like white space and case sensitivity.
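For example, these two ad hoc statements differ only in capitalization, so SQL Server caches a separate plan for each:

SELECT DisplayName FROM dbo.Users WHERE Id = 3;
select DisplayName from dbo.Users where Id = 3;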

It is much less likely that a query inside of a stored procedure will change compared to a query that is embedded in application code. Because of this, it’s more likely that your stored procedure executions are reusing cached plans while your individually submitted query texts are not utilizing the cache. The stored procedure may in fact execute faster, but only because it was able to reuse a cached plan – and that is not a fair comparison. If both queries pulled from the cache, or if both had to generate new execution plans, they would have the same execution performance.

So does it matter if I use stored procedures or not?

So while in the majority of cases a standalone query will perform just as quickly as that same query embedded in a stored procedure, I still think it’s better to use stored procedures when possible.

First, embedding your query inside of a stored procedure increases the likelihood that SQL Server will reuse that query’s cached execution plan as explained above.

Secondly, using stored procedures is cleaner for organization, storing all of your database logic in one location: the database itself.

Finally, and most importantly, using stored procedures gives your DBA better insight into your queries. Storing a query inside of a stored procedure means your DBA can easily access and analyze it, offering suggestions and advice on how to fix it in case it is performing poorly. If your queries are all embedded in your apps instead, it makes it harder for the DBA to see those queries, reducing the likelihood that they will be able to help you fix your performance issues in a timely manner.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

More “Wrong” SQL Server Math – Floating Point Errors

Published on: 2019-10-01

Watch this week’s episode on YouTube.

Last week we looked at how implicit conversions and datatype precedence can cause SQL Server to output unexpected results (if you aren’t aware of how it handles these features).

This week I want to share another example of when SQL Server’s output may surprise you: floating point errors.

Charts don’t add up to 100%

Years ago I was writing a query for a stacked bar chart in SSRS. The chart was intended to show the percentage breakdown of distinct values in a table. For example, the chart would show that value A made up 30% of the rows, B made up 3%, C made up 12% and so on. Since every row had a value, I was expecting the stacked bar chart percentages to add up to 100%.

However, in many instances the charts would come up short; instead of a full 100%, the percentages would only add up to 98% or 99%. What was going on?

Floats

To see an example of how this happens, let’s look at the following query:

SELECT CASE WHEN CAST(.1 AS FLOAT)+CAST(.2 AS FLOAT) = CAST(.3 AS FLOAT) THEN 1 ELSE 0 END

If you’ve never encountered this error before, you’d expect the query to return a result of 1. However, it doesn’t:

(Result: 0 – as far as float is concerned, .3 != .3.)

The reason this happens is that anytime you use the float datatype, SQL Server is trading off numeric precision for space savings – in actuality, that .1 in the query above is really stored as 0.10000000000000000555111512312578270212 and the .2 is stored as 0.20000000000000001110223024625156540424. Summed together, we get 0.30000000000000001665334536937734810636, not .3.

And to be clear, this isn’t a problem with SQL Server – any language that implements the IEEE standard for float data types experiences these same issues.

Float Approximation

In SQL Server, the int datatype can store every whole number from -2,147,483,648 to 2,147,483,647 in only 4 bytes of space.

A single precision float, using the same 4 bytes of data, can store almost any value between -340,000,000,000,000,000,000,000,000,000,000,000,000 and 340,000,000,000,000,000,000,000,000,000,000,000,000.

The reason floats can store such a large range is because they are only storing approximate values; some compression happens in those 4 bytes that allows SQL Server to store a wider range of data, but the increased range of values comes at the cost of losing some accuracy.

Close Enough For Horse Shoes, Hand Grenades, and Floats

In short, a float works by storing its value as a position within a range. As an oversimplified example, imagine representing the number 17 as 17% of the range from 0 to 100. All you need to store is the range (0 to 100) and the percentage (17%). This doesn’t gain us much for small numbers, but it allows us to store much larger numbers in exactly the same way. For example, to store the number 3 billion, you could make your range run from 1 billion (10^9) to 10 billion (10^10) and store where 3 billion falls within that range.

And while that oversimplified example uses base-10 because it’s easier for my brain to think about, computers like doing these calculations in base-2, where the math is a little more involved.

The Wikipedia page on floating point encoding is really good, but it uses a bunch of math notation that I haven’t seen since high school. Let’s reimagine that example with language we should be a little bit more familiar with: T-SQL.

A 4 byte number is made up of 32 bits. The floating point encoding breaks down these bits into 3 sections:

(Figure: the 32 bits of a single-precision float, divided into sign, exponent, and fraction sections. “Float example.svg” by en:User:Fresheneesz, licensed CC BY-SA 3.0.)

The first bit in blue is for the sign. This just indicates whether we will be left or right of 0 on the number line.

The next 8 bits in green indicate our exponent. This tells us which range of numbers we are in. Since we are using binary, the range is stored as a power of 2. And we only need to store the start of the range, since the end of the range would be the next power of 2.

Finally, the last 23 bits in red encode the fractional location of our value within the range. Calculating our actual value is then as simple as:

(-1)^sign × 2^(exponent − 127) × (1 + fraction)

Oh yeah, I promised to do the math in T-SQL. Let’s try that again.

First, we declare some variables to store the 3 encoded parts of our floating point number:

DECLARE 
	@sign int,
	@exponent int,
	@fraction decimal(38,38);

You’ll notice I am storing @fraction as a decimal and not float. This is some foreshadowing about how decimal is a precise datatype that I’ll come back to in a little bit.

Next we store the sign. Since we are encoding the value .15625, the sign is positive, so we set our @sign bit to 0:

SELECT @sign = 0;

Great. Now let’s calculate the value of our exponent. If you’ve never converted binary to decimal before, you multiply each bit by 2 raised to the power of its position and sum the results, so:

-- Returns 124 
SELECT @exponent =
	(0*POWER(2,7))
	+(1*POWER(2,6))
	+(1*POWER(2,5))
	+(1*POWER(2,4))
	+(1*POWER(2,3))
	+(1*POWER(2,2))
	+(0*POWER(2,1))
	+(0*POWER(2,0));

Next up is converting the last 23 bits to decimal. In this case, the encoding standard specifies that each bit is worth (1/2^n) instead of the regular 2^n, because we want a fraction:

-- Returns .25
SELECT @fraction = 
	(0*(1.0/POWER(2,1)))
	+(1*(1.0/POWER(2,2)))
	+(0*(1.0/POWER(2,3)))
	+(0*(1.0/POWER(2,4)))
	+(0*(1.0/POWER(2,5)))
	+(0*(1.0/POWER(2,6)))
	+(0*(1.0/POWER(2,7)))
	+(0*(1.0/POWER(2,8)))
	+(0*(1.0/POWER(2,9)))
	+(0*(1.0/POWER(2,10)))
	+(0*(1.0/POWER(2,11)))
	+(0*(1.0/POWER(2,12)))
	+(0*(1.0/POWER(2,13)))
	+(0*(1.0/POWER(2,14)))
	+(0*(1.0/POWER(2,15)))
	+(0*(1.0/POWER(2,16)))
	+(0*(1.0/POWER(2,17)))
	+(0*(1.0/POWER(2,18)))
	+(0*(1.0/POWER(2,19)))
	+(0*(1.0/POWER(2,20)))
	+(0*(1.0/POWER(2,21)))
	+(0*(1.0/POWER(2,22)))
	+(0*(1.0/POWER(2,23)));

Finally, we put it all together like so:

-- Result: .15625
SELECT 
	POWER(-1,@sign)
	* POWER(CAST(2 AS DECIMAL(38,37)),(@exponent-127)) 
	* (1+@fraction)

In this example, there is no floating point error – .15625 can be stored exactly as a float. However, if you go through the same exercise for a number like .1 or .2, you’ll notice your numbers are not so perfect.

Fixing Floating Point Errors

Floating point math errors can be fixed in a few ways.

One option is to stop caring about them. The error occurring on floats is very small (although when compounded through arithmetic, the error can grow large enough to be noticeable like in my reporting bar chart example). If you are writing queries or reports where such a small amount of error doesn’t matter, then you can continue on your merry way without having to change anything.

A second option is to still store values as floats (for that sweet, sweet storage space savings), but ensure your application code has business logic to correctly round the imprecise numbers.
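You can also round in T-SQL itself. A minimal sketch of rounding away the error before comparing:

-- Rounding the float sum to 10 decimal places makes the comparison behave
SELECT CASE 
	WHEN ROUND(CAST(.1 AS FLOAT) + CAST(.2 AS FLOAT), 10) = CAST(.3 AS FLOAT)
	THEN 1 ELSE 0 
END; -- returns 1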

However, if your data needs to be perfectly accurate every single time with no errors, use a different datatype. The logical choice here would be to use decimal in SQL Server, which uses a different internal method for storing your numbers, resulting in perfect results every time. However, the range of possible values is not as large as float, and you will pay for that precision with additional bytes of storage space.
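Rewriting the earlier comparison with decimal shows the difference:

-- With the precise decimal type, .1 + .2 = .3 exactly, so this returns 1
SELECT CASE 
	WHEN CAST(.1 AS DECIMAL(2,1)) + CAST(.2 AS DECIMAL(2,1)) = CAST(.3 AS DECIMAL(2,1))
	THEN 1 ELSE 0 
END;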

In the end, floating point is good enough for many applications. The important thing is that you are aware that these kinds of errors can happen and that you handle them appropriately.

Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!