While most of us strive to make as few mistakes as possible when it comes to our servers and data, accidents do occasionally happen.
Sometimes those accidents are easily fixed while other times the solutions require herculean efforts (usually accompanied by lots of caffeine and cursing…or is that just me?).
This week I’m excited to have guests Andy Mallon (t), Erin Stellato (t), and Mr. ANONYMOUS (t) (don’t spoil the fun by clicking these links until after watching!) share some of their most memorable SQL Server mishaps.
It’s a video-only post, so be sure to watch above or on my YouTube channel (and stick around until the end for a special…furry…cameo).
This post is a response to this month’s T-SQL Tuesday #110 prompt by Garry Bargsley. T-SQL Tuesday is a way for the SQL Server community to share ideas about different database and professional topics every month.
This month’s topic asks us to share how we automate certain processes.
I’m a fan of keeping documentation close to the code. I prefer writing my documentation directly above a procedure, function, or view definition because that’s where it will be most beneficial to myself and other developers.
Not to mention that’s the only place where the documentation has any chance of staying up to date when changes to the code are made.
What drives me crazy though is making a copy of that documentation somewhere else, into a different format. You know, like when someone without database access needs you to send them a description of all of the procedures for a project. Or if you are writing end-user documentation for your functions and views.
Not only is creating a copy of the documentation tedious, but there is no chance that it will stay up to date with future code changes.
So today I want to share how I automate some of my documentation generation directly from my code.

If you've written C#, you've probably seen XML documentation comments like these:
/// <summary>Retrieves the details for a user.</summary>
/// <param name="id">The internal id of the user.</param>
/// <returns>A user object.</returns>
public User GetUserDetails(int id)
{
    User user = ...
}
I like this format: the documentation sits directly next to the code, and it is structured as XML, making it easy to parse for other uses (e.g. feeding a static documentation generator that creates end-user documentation directly from these comments).
This format is easily transferable to T-SQL:
/* <documentation>
   <summary>Retrieves the details for a user.</summary>
   <param name="@UserId">The internal id of the user.</param>
   <returns>The username, user's full name, and join date.</returns>
   </documentation> */
CREATE PROCEDURE dbo.USP_SelectUserDetails
    @UserId int
AS
SELECT Username, FullName, JoinDate FROM dbo.[User] WHERE Id = @UserId;
The same structure works for functions:

/* <documentation>
   <summary>Returns the value 'A'.</summary>
   <param name="@AnyNumber">Can be any number. Will be ignored.</param>
   <param name="@AnotherNumber">A different number. Will also be ignored.</param>
   <returns>The value 'A'.</returns>
   </documentation> */
CREATE FUNCTION dbo.UDF_SelectA (@AnyNumber int, @AnotherNumber int)
RETURNS char(1)
AS BEGIN RETURN 'A' END
Sure, this might not be as visually appealing as the traditional starred comment block, but I’ve wrestled with parsing enough free-form text that I don’t mind a little extra structure in my comments.
Querying the Documentation
Now that our T-SQL object documentation has some structure, it’s pretty easy to query and extract those XML comments:
WITH DocumentationDefinitions AS (
    SELECT
        SCHEMA_NAME(o.schema_id) as schema_name,
        o.name as object_name,
        CASE WHEN m.definition LIKE '%<documentation>%' -- guard: undocumented objects get NULL instead of a CAST error
             THEN CAST(SUBSTRING(m.definition, CHARINDEX('<documentation>',m.definition),
                  CHARINDEX('</documentation>',m.definition) + LEN('</documentation>')
                  - CHARINDEX('<documentation>',m.definition)) AS XML)
        END AS Documentation,
        p.parameter_id as parameter_order,
        p.name as parameter_name,
        t.name as parameter_type
    FROM sys.objects o
        INNER JOIN sys.sql_modules m
            ON o.object_id = m.object_id
        LEFT JOIN sys.parameters p
            ON o.object_id = p.object_id
        LEFT JOIN sys.types t -- LEFT JOIN so parameterless objects still show up
            ON p.user_type_id = t.user_type_id
    WHERE o.type in ('P','FN','IF','TF') -- procedures and functions
)
SELECT
    d.schema_name,
    d.object_name,
    t.c.value('(author)[1]','varchar(100)') as Author,
    t.c.value('(summary)[1]','varchar(max)') as Summary,
    t.c.value('(returns)[1]','varchar(max)') as Returns,
    d.parameter_order,
    d.parameter_name,
    d.parameter_type,
    p.c.value('@name','varchar(100)') as DocumentedParamName,
    p.c.value('.','varchar(100)') as ParamDescription
FROM DocumentationDefinitions d
    OUTER APPLY d.Documentation.nodes('/documentation') as t(c)
    OUTER APPLY d.Documentation.nodes('/documentation/param') as p(c)
WHERE p.c.value('@name','varchar(100)') IS NULL -- objects that don't have documentation
    OR p.c.value('@name','varchar(100)') = d.parameter_name -- joining our documented params with the actual ones
ORDER BY d.schema_name, d.object_name, d.parameter_order;
This query pulls the parameters of our procedures and functions from sys.parameters and joins them with what we documented in our XML comments. The result is some nicely formatted documentation, as well as visibility into which objects haven’t been documented yet.
Only the Beginning
At this point, our procedure and function documentation is easily accessible via query. We can use this to dump the information into an Excel file for a project manager, or schedule a job to generate some static HTML documentation directly from the source every night.
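As a quick illustration of the static-documentation idea, here's a minimal sketch that renders the results as a bare-bones HTML table using T-SQL's FOR XML PATH trick. It assumes the query above has been saved as a view, which I'm calling dbo.ObjectDocumentation here (a hypothetical name):

-- Minimal sketch: render the documentation as a simple HTML table.
-- dbo.ObjectDocumentation is a hypothetical view wrapping the query above.
SELECT CAST((
    SELECT
        td = object_name, '',
        td = Summary, '',
        td = Returns, ''
    FROM dbo.ObjectDocumentation
    FOR XML PATH('tr'), ROOT('table')
) AS nvarchar(max)) AS DocumentationHtml;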
This can be extended even further depending on your needs, but at least this is an automated starting point for generating further documentation directly from the T-SQL source.
Hash Match joins are the dependable workhorses of physical join operators.
While Nested Loops joins become impractically slow when their inputs grow large, and Merge Joins require that the input data be sorted, a Hash Match will join any two data inputs you throw at it (as long as the join has an equality predicate and you have enough space in tempdb).
The base hash match algorithm has two phases that work like this:
During the first “Build” phase, SQL Server builds an in-memory hash table from one of the inputs (typically the smaller of the two). The hash is calculated from the join keys of each input row, and the row is then stored in the hash table under that hash bucket. Most of the time there is only one row of data per hash bucket, except when:
There are rows with duplicate join keys.
The hashing function produces a collision and totally different join keys receive the same hash (uncommon but possible).
Once the hash table is built, SQL Server begins the “Probe” phase. During this second phase, SQL Server calculates the join key hash for each row in the second input and checks whether it exists in the hash table created during the build phase. If it finds a match for that hash, it then verifies whether the join keys of the row(s) in the hash table and the row from the second input actually match (this verification is necessary due to potential hash collisions).
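To make those two phases concrete, here's a toy re-enactment written in plain T-SQL. This is only an illustration of the idea, not how SQL Server implements it internally, and the dbo.Customer and dbo.Orders tables are hypothetical:

-- Build phase: hash the join keys of the smaller input into a "hash table"
SELECT CHECKSUM(CustomerId) AS HashBucket, CustomerId, CustomerName
INTO #BuildHashTable
FROM dbo.Customer;

-- Probe phase: hash each row of the larger input, look up its bucket, then
-- re-check the real join keys to weed out hash collisions
SELECT o.OrderId, b.CustomerName
FROM dbo.Orders o
    INNER JOIN #BuildHashTable b
        ON CHECKSUM(o.CustomerId) = b.HashBucket -- match on the hash first
        AND o.CustomerId = b.CustomerId;         -- then verify the actual keys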
A common variation on this hash match algorithm occurs when the build phase cannot create a hash table that can be fully stored in memory:
This happens when the data is larger than the available memory, or when SQL Server under-estimates the input size and grants less memory than the join actually requires.
When SQL Server doesn't have enough memory to store the build phase hash table, it proceeds by keeping some of the buckets in memory while spilling the other buckets to tempdb.
During the probe phase, SQL Server joins the rows of data from the second input to buckets from the build phase that are in memory. If the bucket that the row potentially matches isn’t currently in memory, SQL Server writes that row to tempdb for later comparison.
Once the matches for one bucket are complete, SQL Server clears that data from memory and loads the next bucket(s) into memory. It then compares the second input’s rows (currently residing in tempdb) with the new in-memory buckets.
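If you want to catch these spills happening on your server without inspecting individual plans, Extended Events can help. A rough sketch, assuming you have permission to create event sessions:

-- Rough sketch: capture hash spill warnings server-wide with Extended Events
CREATE EVENT SESSION HashSpills ON SERVER
    ADD EVENT sqlserver.hash_warning
    ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION HashSpills ON SERVER STATE = START;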
Knowing the internals of how a hash match join works allows us to infer what the optimizer thinks about our data and the join’s upstream operators, helping us focus our performance tuning efforts.
Here are a few scenarios to consider the next time you see a hash match join being used in your execution plan:
While hash match joins are able to join huge sets of data, building the hash table from the first input is a blocking operation that prevents downstream operators from executing. Because of this, I always check whether there is an easy way to convert a hash match to either a nested loops or merge join. Sometimes that won't be possible (too many rows for nested loops, or unsorted data for merge joins), but it's always worth checking whether a simple index change or improved estimates from a statistics update would let SQL Server pick a non-blocking join operator instead.
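One quick way to test those alternatives is to force each physical join type with a hint and compare the resulting plans. This is a sketch against hypothetical tables, and the hints are for experimentation only, not something to leave in production code:

-- Force a physical join type so we can compare plans and costs
SELECT o.OrderId, c.CustomerName
FROM dbo.Orders o
    INNER JOIN dbo.Customer c
        ON o.CustomerId = c.CustomerId
OPTION (LOOP JOIN);  -- swap in MERGE JOIN or HASH JOIN to compare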
Hash match joins are great for large joins – since they can spill to tempdb, they can process datasets too large to be joined entirely in memory.
Seeing a hash match join operator means SQL Server thinks the upstream inputs are big. If we know our inputs shouldn’t be that big, then it’s worth checking if we have a stats/estimation problem that is causing SQL Server to choose a hash match join incorrectly.
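A quick first check when you suspect a stats problem is to look at when the statistics on the joined tables were last updated and how many rows they sampled. A sketch using sys.dm_db_stats_properties (dbo.Orders is a hypothetical table name):

-- When was each statistic on the table last updated, and from how many rows?
SELECT s.name, sp.last_updated, sp.rows, sp.rows_sampled
FROM sys.stats s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id = OBJECT_ID('dbo.Orders');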
When executed in memory, hash match joins are fairly efficient. Problems arise when the build phase spills to tempdb.
If I notice the little yellow triangle indicating that the join is spilling to tempdb, I take a look to see why. If the data is larger than the server's available memory, there's not much that can be done; but if the memory grant seems unusually small, we probably have another statistics problem that is feeding the SQL Server optimizer estimates that are too low.
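When that's the case, refreshing the statistics on the joined tables (hypothetical names again) is often the cheapest fix to try first:

-- Refresh stats so the optimizer's row estimates (and memory grant) improve
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
UPDATE STATISTICS dbo.Customer WITH FULLSCAN;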