
Pinal Dave Helps Me Fix My Performance Tuning Problems

Watch this week's video on YouTube

This week I was fortunate enough to film a video in collaboration with Pinal Dave, the SQL Authority himself.  Pinal is creative, hilarious, and kind; making this video with him was A BLAST!

Although the video is a little tongue in cheek, Pinal's recommendations are very real: I've encountered plenty of scenarios where these solutions fixed slow queries.  Will these recommendations fix the problem in every situation?  Of course not, but they are a great place to start.

Instead of creating a text version of the concepts covered in the video (you should really watch it), I thought it would be fun to do a behind-the-scenes narrative of how the video came together because it is unlike any other project I've done before.

The Idea

After agreeing to make a video together, we tossed around a few ideas.  Because we live in different time zones, we thought it would be fun to do something where I kept waking Pinal up in the middle of the night.

We iterated over what SQL Server examples to use (originally the second example was going to show my queries running out of space because autogrowth was turned off).  We also ended up adding another example after my wife suggested that building to three scenarios instead of two would be funnier - I agree!

Asynchronous Filming

You've probably already figured it out, but I didn't really wake Pinal up in the video (honestly, I think midnight would be too early to wake him up anyway; in our back and forth emails, I was seeing responses from him that were in the 1-2am range).

I filmed a preliminary version of my parts of the video, very roughly edited them together, and sent it over to Pinal.

He then filmed his segments, giving me lots of great footage (I'm not sure if it was ad-libbed or not, but I was dying of laughter when watching through his clips).

Then I re-filmed my parts to try to match his dialog as closely as possible.  Re-filming my parts also allowed me to self-edit and not ramble as much.

Everything Else

After that, it was just the usual process of editing, color correction, audio processing, etc...

I'm happy with how it turned out, especially given all of the technical challenges we had with filming separately.

Major thanks again to Pinal for being supportive and willing to make a fun SQL Server video.  Enjoy!

T-SQL Tuesday #104 Roundup

MJ-t-sql-Tuesday

This month's T-SQL Tuesday topic asked "What code would you hate to live without?" Turns out you like using scripts and code to automate boring, repetitive, and error-prone tasks.

Thank you to everyone who participated; I was nervous that July holidays and summer vacations would stunt turnout, but we wound up with 42 posts!

Watch tsqltuesday.com for next month's topic and consider signing up to host.

Watch this week's video on YouTube

Without further ado, here are this month's entries sorted in random order:

  • Stuart Moore shares the history behind needing to automate restore testing and writing the SqlAutoRestores PowerShell module to help.  Nowadays his commands are found in dbatools.  Great example of how a project can evolve through the community.
  • Arthur Daniels shares his script to identify the key and included columns of indexes in a given table.
  • Glenn Berry shares his DMV Diagnostic Queries and the story behind how he started developing them back in 2006.
  • Jason Brimhall links to multiple scripts he's shared in the past as well as a new script for remotely auditing server access to catch infiltrators red-handed.
  • Doug Purnell talks about how he uses database snapshots and shares some code for how he manages them.
  • Jay Robinson shares two C# extensions (shout out to my fellow devs!): one to check an enum for a value and a second to cleanly handle the lengthy DBNull.Value syntax.
  • Drew Furgiuele shares how he scripts out his indexes to re-apply after snapshot replication.  He then writes very similar functionality using PowerShell in only 6 lines!
  • Tim Weigel shares which community scripts he uses regularly, as well as sharing his own scripts around replication, stored procedure execution information, and file manipulation.
  • Hugo Kornelis submitted two posts.  The first post shares sp_metasearch which helps with performing impact analysis and the second post follows up with an enhancement he's made to Ola Hallengren's database maintenance scripts to ignore BizTalk databases during backups.
  • Andy Mallon shares his comprehensive script for checking database, file, data, log, etc... sizes.  Great explanations of his reasoning for writing the queries the way he did.
  • Dan Clemens shares his database search script with a switch that includes searching across agent jobs.
  • Jess Pomfret wrote a script that shows compression stats for database objects.  Wanting to run it against a whole instance (or across multiple servers), she wrote a dbatools command to automate the process.
  • Kenneth Fisher shows us how he organizes his toolbox using an SSMS solution.
  • Rob Farley shares code he's written to demonstrate the pain of using NOLOCK.
  • Steve Jones shares a procedure from Microsoft that he uses for transferring logins and passwords between instances.
  • Kevin Hill shares two scripts he uses for finding low-hanging index optimization fruit: one that finds queries performing heap or clustered index scans, and another that returns the top 5 missing indexes per database.
  • Michael Villegas learned that Azure SQL doesn't allow you to graphically show user roles and permissions, so he wrote a script to query those details (works for on-premise SQL Server as well).
  • Nate Johnson shares scripts that identify if tables are being replicated, whether SSRS subscriptions executed, and how much space certain objects and files are consuming.
  • William Andrus shares how he uses his search script to find similarly named fields or all instances of a piece of text within a database.
  • Bert Wagner (me!): I share my template for generating dynamic table-driven code, making queries more adaptable to future changes.
  • Rudy Rodarte shows us a script he uses for iterating over a date range to use for executing date based queries.
  • Brent Ozar admits he can't live without sp_Blitz, but this month he shares a script for checking how much plan cache history exists on a server.
  • Jeff Mlakar offers a solution for taking all databases on an instance offline (and then back online) again.
  • Erik Darling offers a solution for constructing dynamic SQL so that his MAX variables don't get truncated.  He also links to a script for printing long strings in SSMS.
  • Chrissy LeMaire takes the hard work out of instance to instance migrations by sharing her single-line dbatools command that will do it all for you.  She also shares how dbachecks automates manual checklist work.
  • Glenda Gable mentions two procedures, one that is a high-performance cursor rewrite and one that is a robust log shipping solution.
  • Aaron Bertrand shows us how he discovers undocumented SQL Server features by comparing new builds to the previous versions.
  • Ryan Desmond writes about his post-install configuration process and shares code he runs to customize Ola Hallengren's maintenance scripts for his environments.
  • Josh Simar shares his database file size code that is optimized for "very large databases" that span multiple files and filegroups.
  • Sander Stad discusses the importance of sharing code and offers a few dbatools commands that he's contributed to or authored around backup testing, log shipping, and SQL Server Agent manipulation.
  • Andy Levy wrote an SSMS snippet to generate a cursor.  Before you chew him out though, he has some really good use cases for needing to use them.
  • Andy Yun reveals what's in his T-SQL toolbox and explains his organization strategies for 10+ years of scripts he's collected.
  • Eduardo Pivaral shares a script he uses to output query results into an HTML table, making it easy to copy into an email.
  • Raul Gonzalez shows us a versatile script for searching database tables and returning information on attributes such as column name, size, key definitions, and more.
  • Matthew McGiffen wanted to find the most expensive queries on an instance using Query Store instead of the traditional DMVs, so he wrote a script to do just that.
  • Daniel Hutmacher shares his beefed up version of sp_help.  Includes ASCII art dependency graphs and database search.
  • Christian Gräfe provides a function he wrote for padding the left-side of a value with zeros.
  • Adrian Buckman shares his SQLUndercover Inspector HTML reporting tool, as well as scripts for helping to alter Availability Groups, checking for running jobs, and auditing failed logins.
  • Louis Davidson shares his technique for using relative positioning in date tables to make querying custom periods (e.g. your company's fiscal month) easier.
  • Lance England shares a PowerShell script to automate generating upsert merge statements for his ETLs.

Building Dynamic Table-Driven Queries

MJ-t-sql-Tuesday

This post is a response to this month's T-SQL Tuesday #104 prompt by me! T-SQL Tuesday is a way for SQL Server bloggers to share ideas about different database and professional topics every month.

This month's topic asks: what code would you hate to live without?


Watch this week's video on YouTube

When given the choice between working on new projects versus maintaining old ones, I'm always more excited to work on something new.

That means that when I build something that is going to be used for years to come, I try to build it so that it will require as little maintenance as possible in the future.

One technique I use for minimizing maintenance is making my queries dynamic.  Dynamic queries, while not right for every situation, do one thing really well: they allow you to modify functionality without needing a complete rewrite when your data changes.  The way I look at it, it's much easier to add rules and logic to rows in a table than to modify a table's columns or structure.

To show you what I mean, let's say I want to write a query selecting data from model.sys.database_permissions:

SELECT class
      ,class_desc
      ,major_id
      ,minor_id
      ,grantee_principal_id
      ,grantor_principal_id
      ,type
      ,permission_name
      ,state
      ,state_desc
  FROM model.sys.database_permissions

Writing the query as above is pretty simple, but it isn't flexible in case the table structure changes in the future or if we want to programmatically write some conditions.

Instead of hardcoding the query as above, here is a general pattern I use for writing dynamic table-driven queries.  SQL Server has the handy catalog views sys.all_views and sys.all_columns, which show which views exist and what columns each one contains:

2018-07-03_21-00-45
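
If you want to peek at that metadata yourself, here is a quick exploratory query (just a sketch; its output resembles the screenshot above):

-- List the columns that sys.all_columns reports for the database_permissions view
SELECT
    o.[name] AS view_name,
    c.[name] AS column_name,
    c.column_id
FROM
    sys.all_views o
    INNER JOIN sys.all_columns c
        ON o.object_id = c.object_id
WHERE
    o.[name] = 'database_permissions'
ORDER BY
    c.column_id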

Using these two views, I can use this dynamic SQL pattern to build the same exact query as above:

-- Declare some variables up front
DECLARE 
    @FullQuery nvarchar(max),
    @Columns nvarchar(max),
    @ObjectName nvarchar(128)

-- Build our SELECT statement and schema+table name
SELECT 
    @Columns = COALESCE(@Columns + ', ', '') + '[' + c.[name] + ']',
    @ObjectName = QUOTENAME(s.name) + '.' + QUOTENAME(o.name)
FROM 
    sys.all_views o  
    INNER JOIN sys.schemas s
        ON o.schema_id = s.schema_id
    INNER JOIN sys.all_columns c
        ON o.object_id = c.object_id
WHERE 
    o.[name] = 'database_permissions'
ORDER BY
    c.column_id 

-- Put all of the pieces together and execute
SET @FullQuery = 'SELECT ' + @Columns + ' FROM ' + @ObjectName

EXEC(@FullQuery)

The way building a dynamic statement like this works is that I build my SELECT statement as a string based on the values stored in my all_columns view.  If a column is ever added to this view, my dynamic code will handle it (I wouldn't expect this view to change that much in future versions of SQL, but in other real-world tables I can regularly expect changing data).

Yes, writing certain queries dynamically like this means more up front work.  It also means some queries won't run to their full potential (not necessarily reusing plans, not tuning every individual query, needing to be thoughtful about SQL injection attacks, etc...).  There are A LOT of downsides to building queries dynamically like this.
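
As one example of being thoughtful about injection, here is a variation of the pattern above that runs every identifier through QUOTENAME and executes with sp_executesql instead of EXEC() - a minimal sketch, not a complete defense:

-- Same pattern as above, but every identifier is wrapped by QUOTENAME
DECLARE 
    @FullQuery nvarchar(max),
    @Columns nvarchar(max),
    @ObjectName nvarchar(max)

SELECT 
    @Columns = COALESCE(@Columns + ', ', '') + QUOTENAME(c.[name]),
    @ObjectName = QUOTENAME(s.[name]) + '.' + QUOTENAME(o.[name])
FROM 
    sys.all_views o
    INNER JOIN sys.schemas s
        ON o.schema_id = s.schema_id
    INNER JOIN sys.all_columns c
        ON o.object_id = c.object_id
WHERE 
    o.[name] = 'database_permissions'
ORDER BY
    c.column_id

SET @FullQuery = N'SELECT ' + @Columns + N' FROM ' + @ObjectName

-- sp_executesql makes it easier to add real, safely-typed parameters later on
EXEC sys.sp_executesql @FullQuery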

But dynamically built queries make my systems flexible and drastically reduce the amount of work I have to do down the road.  In the next few weeks I hope to go into this type of dynamically built, table-driven process in more detail (so you should see the pattern in the example above get reused soon!).

Code You Would Hate To Live Without (T-SQL Tuesday #104 Invitation)

MJ-t-sql-Tuesday

The recent news about Microsoft acquiring GitHub has me thinking about how amazing it is for us to be part of today's online code community.

Before modern online programming communities, finding good code samples or sharing your own code was challenging.  Forums and email lists (if searchable) were good, but beyond that you had to rely on books, coworkers, and maybe a local meetup of like-minded individuals to help you work through your programming problems.

Watch this week's video on YouTube

Today, accessing and using code from the internet is second nature - I almost always first look online to see if a good solution already exists.  At the very least, searching blogs, GitHub, and StackOverflow for existing code is a great way to generate ideas.

For this month's T-SQL Tuesday, I want you to write about code you've written that you would hate to live without.

Maybe you built a maintenance script to free up disk space, wrote a query to gather system stats for monitoring, or coded some PowerShell to clean up string data.  Your work doesn't need to be completely original either - maybe you've improved the code in some open source project to better solve the problem for your particular situation.

There's probably someone out there in the world who is experiencing the same problem that you have already solved; let's make their life a little easier by sharing.

And don't worry if your code isn't perfect - just explain how your solution works and if you are aware of any caveats.  If it's not an exact solution for someone else's problem, at the very least it may help them generate some ideas.

Finally, here's a reminder of the official rules for T-SQL Tuesday:

  1. Publish your contribution on Tuesday, July 10, 2018. Let's use the "it's Tuesday somewhere" rule.
  2. Include the T-SQL Tuesday Logo and have it link to this post.
  3. Please comment below with a link to your post (trackbacks/pingbacks should work too but...comments ensure I don't miss your post).
  4. Tweet about your post using #tsql2sday.
  5. If you'd like to host in the future, contact Adam Machanic.

Filtering Tables, Procedures, And Other Objects In SSMS

Watch this week's video on YouTube

While the text of this post contains good information on SSMS object filters, I highly recommend watching this week's video on YouTube - I stretched my creativity with filming this week while I was on vacation.

This week I'm sharing a trick I learned at Jess Pomfret's Ohio North SQL Server User Group presentation on data compression.  Her presentation on data compression was awesome (check it out if you get the chance), but I was shocked that I have been working with SSMS for so many years and have never known about the following trick I watched her perform in her demos.

An "Organized" Nightmare

I'm guessing you've probably worked in a database that has hundreds or thousands of database objects:

lots-of-tables

I often come across this in applications where for one reason or another someone decides that there is no need to separate apps into different databases; why bother creating different databases when you can have lots of different schemas to organize your objects instead!? (*cough* vendor applications *cough*).

The problem with these enormous lists of tables, procedures, functions, etc... is that it can get pretty tiring to scroll through them to find what you need.

For years I wore down my mouse's scroll wheel, scrolling between thousands of objects across multiple server instances.  As it turns out, SSMS has a much better feature for handling this problem.

Filtering Objects in SSMS

You can apply filters to most objects in SSMS by right clicking and choosing "Filter Settings":

filter-settings

You can filter on attributes such as name, schema, create date, etc...:

filter-settings-dialog

For example, if I want to see only tables that are in the Travel schema, all I have to do is create a filter:

FILTER-SETTINGS-DIALOG-FILLED-IN-1

SSMS will even tell me that my list of objects is filtered so I don't go crazy later on wondering where all of my other tables went.

FILTERED-TABLES

Limitations

There are a few limitations with using SSMS object filters though.

For example, the different filter attributes work together as if they had "AND" operators between them, so you can do something like filter on tables in the Travel schema that contain the letter "a":

and-filter

However, there is no way to write multiple conditions with OR logic (e.g. you can't filter on the schemas "Travel" OR "Lodging").
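
If you really do need OR logic, one workaround (a quick metadata query sketch, not an SSMS feature) is to ask the catalog views directly, for example:

-- Tables in either the Travel or the Lodging schema
SELECT
    s.[name] AS schema_name,
    t.[name] AS table_name
FROM
    sys.tables t
    INNER JOIN sys.schemas s
        ON t.schema_id = s.schema_id
WHERE
    s.[name] = 'Travel'
    OR s.[name] = 'Lodging'
ORDER BY
    s.[name], t.[name]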

And while SSMS will indicate that your objects are filtered, it won't persist that filter after restarting SSMS.

Even with those drawbacks, I've used this filtering feature at least once per week since learning about it; it saves a lot of time and I can't believe I went so long without knowing about it.