4 Common Misconceptions About SQL Injection Attacks

Published on: 2017-11-07

Photo by Jaanus Jagomägi on Unsplash

Interested in learning more about SQL injection attacks, including how to prevent them?  Attend my online webcast on Tuesday November 14, 2017 at 1PM Eastern at the PASS Security Virtual Group.

SQL injection continues to be one of the biggest security risks that we face as database professionals.

Every year, millions of users’ personal information is leaked due to poorly written queries exploited by SQL injection.  The sad truth is that SQL injection is completely preventable with the right knowledge.

My goal today is to cover four common misconceptions people have about SQL injection, in an effort to dispel any illusion that an injection attack is something that can't happen to you.

Prefer watching me get angry about these common misconceptions?  You can watch this content on my YouTube channel.

1. “My database information isn’t public”

Let’s see, without me knowing anything about your databases, I’m guessing you might have some tables with names like:

  • Users
  • Inventory
  • Products
  • Sales
  • etc…

Any of those sound familiar?

You might not be publicly publishing your database object names, but that doesn’t mean they aren’t easy to guess.

All a malicious user needs is a list of common database table names and they can iterate over the ones they are interested in until they find the ones that match in your system.
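To illustrate how cheap that guessing is, here's a sketch of probing a list of common table names with OBJECT_ID (the names below are generic guesses for illustration, not from any real system):

```sql
-- Probe a list of common table names to see which exist.
-- These names are illustrative guesses, not from a real system.
DECLARE @CommonNames table (name sysname);
INSERT INTO @CommonNames VALUES ('Users'), ('Inventory'), ('Products'), ('Sales');

SELECT name AS ExistingTable
FROM @CommonNames
WHERE OBJECT_ID('dbo.' + name, 'U') IS NOT NULL; -- 'U' = user table
```

An attacker doing this through an injection point just needs the list to be long enough; tools ship with exactly such wordlists.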

2. “But I obfuscate all of my table and column names!”

Oh jeez.  I hope you don’t do this.

Some people do this for job security (“since only I can understand my naming conventions, I’m guaranteeing myself a job!”) and that’s a terrible reason in and of itself.

Doing it for security reasons is just as horrible though.  Why?  Well, have you ever heard of the system catalog views sys.objects and sys.columns?

-- List every object and column name in the database
SELECT
	t.name AS TableName,
	c.name AS ColumnName
FROM
	sys.objects t
	INNER JOIN sys.columns c
		ON t.object_id = c.object_id;

A hacker wanting to get into your system can easily write a query like the one above, revealing your "secure" naming conventions.

Security through obscurity doesn’t work.  If you have table names that aren’t common, that’s perfectly fine, but don’t use that as your only form of prevention.

3. “Injection is the developer’s/DBA’s/somebody else’s problem”

You’re exactly right.  SQL injection is a problem that should be tackled by the developer/DBA/other person.

But it’s also a problem that benefits from multiple layers of security, meaning it’s your problem to solve as well.

Preventing SQL injection is hard.

Developers should be validating, sanitizing, parameterizing, etc…  DBAs should be parameterizing, sanitizing, restricting access, etc…

Multiple layers of security in the app and in the database are the only way to confidently prevent an injection attack.
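As one concrete example of such a layer, here's a minimal sketch of parameterization in T-SQL using sp_executesql (the dbo.Users table and the input value are hypothetical):

```sql
-- Hypothetical user input, including an injection attempt
DECLARE @UserName nvarchar(128) = N'O''Brien''; DROP TABLE dbo.Users;--';

-- Unsafe: concatenating @UserName into the query string would let
-- the input be parsed as SQL.  Instead, pass it as a parameter:
EXEC sp_executesql
	N'SELECT * FROM dbo.Users WHERE Name = @Name',
	N'@Name nvarchar(128)',
	@Name = @UserName;
```

The parameter value is never parsed as part of the SQL statement, so the injection attempt becomes nothing more than a failed name lookup.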

4. “I’m too small of a fish in a big pond – no one would go out of their way to attack me”

So you run a niche business making and selling bespoke garden gnomes.

You only have a few dozen/hundred customers, so who would bother trying to steal your data with SQL injection?

Well, most SQL injection attacks can be completely automated with tools like sqlmap.  Someone might not care about your business enough to handcraft some SQL injection code, but that won’t stop them from stealing your handcrafted garden gnome customers’ data through automated means.

No app, big or small, is protected from the wrath of automated SQL injection tooling.


Thanks for reading. You might also enjoy following me on Twitter.

Want to learn even more SQL?

Sign up for my newsletter to receive weekly SQL tips!

How to Make SQL Server Act Like A Human By Using WAITFOR

Published on: 2017-10-17

Photo by Fischer Twins on Unsplash

You probably tune your queries for maximum performance.  You take pride in knowing how to add indexes and refactor code in order to squeeze out every last drop of your server’s performance potential.  Speed is usually king.

That’s why you probably don’t use SQL Server’s WAITFOR command regularly – it actually makes your overall query run slower.

However, slowness isn’t always a bad thing.  Today I want to show you two of my favorite ways for using the WAITFOR command.

You can also watch this week’s content on my YouTube channel.

1. Building A Human

Modern day computers are fast.  CPUs perform billions of actions per second, the amount of RAM manufacturers can cram onto a stick increases regularly, and SSDs are quickly making disk I/O concerns a thing of the past.

While all of those things are great for processing large workloads, they move computers further and further away from “human speed”.

But “human speed” is sometimes what you want.  Maybe you want to simulate app usage on your database or the load created by analysts running ad hoc queries against your server.

This is where I love using WAITFOR DELAY – it can simulate humans executing queries extremely well:

-- Run forever
WHILE (1=1)
BEGIN
	-- Simulate an action from our app
	EXEC dbo.BuyDonuts 12

	-- We usually average an order every 3 seconds
	WAITFOR DELAY '00:00:03'
END

Throw in some pseudo-random number generation and some IF statements, and you have a fake server load you can start using:

WHILE (1=1)
BEGIN
	-- Generate order amounts between 1 and 25
	DECLARE @RandomDonutAmount int = ABS(CHECKSUM(NEWID()) % 25) + 1

	-- Generate a delay between 0 and 5 seconds
	-- (WAITFOR DELAY needs a time-formatted string, not an int)
	DECLARE @RandomDelay varchar(8) = '00:00:0' + CAST(ABS(CHECKSUM(NEWID()) % 6) AS varchar(1))

	EXEC dbo.BuyDonuts @RandomDonutAmount

	WAITFOR DELAY @RandomDelay
END

2. Poor Man’s Service Broker

Service Broker is a great feature in SQL Server.  It handles messaging and queuing scenarios really well, but requires more setup time so I usually don’t like using it in scenarios where I need something quick and dirty.

Instead of having to set up Service Broker to know when some data is available or a process is ready to be kicked off, I can do the same with a WHILE loop and a WAITFOR:

DECLARE @Quantity smallint = NULL

-- Keep checking our table data until we find the data we want
WHILE (@Quantity IS NULL)
BEGIN
	-- Delay each iteration of our loop for 3 seconds
	WAITFOR DELAY '00:00:03'

	-- Check to see if someone has bought a dozen donuts yet
	SELECT @Quantity = Quantity FROM dbo.Orders WHERE Quantity = 12
END

-- Run some other query now that our dozen donut purchase condition has been met
EXEC dbo.GenerateCoupon

Fancy? No.  Practical? Yes.

No longer do I need to keep checking a table for results before I run a query – I can have WAITFOR do that for me.

If you know there is a specific time you want to wait until before you start checking on some process, you can incorporate WAITFOR TIME to make your polling even more intelligent:

DECLARE @Quantity smallint = NULL

-- Let's not start checking until 6am...that's when the donut shop opens
WAITFOR TIME '06:00:00'
-- Keep checking our table data until we find the data we want
WHILE (@Quantity IS NULL)
BEGIN
	-- Delay each iteration of our loop for 3 seconds
	WAITFOR DELAY '00:00:03'

	-- Check to see if someone has bought a dozen donuts yet
	SELECT @Quantity = Quantity FROM dbo.Orders WHERE Quantity = 12
END

-- Run some other query now that our dozen donut purchase condition has been met
EXEC dbo.GenerateCoupon


How NOLOCK Will Block Your Queries

Published on: 2017-10-10

Photo by James Sutton on Unsplash

Note: the problem described below applies to all SELECT queries, not just those adorned with NOLOCK hints.  The fact that it applies to NOLOCK queries was a huge surprise to me though, hence the title.

Lots of people don’t like NOLOCK (i.e. the read uncommitted isolation level) because it can return inaccurate data.  I’ve seen plenty of arguments cautioning developers against reading uncommitted data because doing so can produce dirty reads, phantom reads, and non-repeatable reads.

I’ve known about all of those problems, but there’s one problem I had never heard of until recently: NOLOCK can block other queries from running.

Watch this week’s post on YouTube

Let’s step back and understand why I’ve so often used NOLOCK in the past.  A fairly typical instance of when I use NOLOCK is when I want to let a query run overnight to return some large set of data.  I’m okay with some inconsistencies in the data (from dirty reads, etc…).  My primary concern is that I don’t want the long running query to get in the way of other processes.

I always thought NOLOCK was a perfect solution for this scenario because it never locks the data that it reads – the results might not be perfect, but at least the query won’t negatively impact any other process on the server.

This is where my understanding of NOLOCK was wrong: while NOLOCK won’t lock row level data, it will take out a schema stability lock.

A schema stability (Sch-S) lock prevents the structure of a table from changing while the query is executing.  All SELECT statements, including those in the read uncommitted/NOLOCK isolation level, take out a Sch-S lock.  This makes sense because we wouldn’t want to start reading data from a table and then have the column structure change half way through the data retrieval.

However, this also means there might be some operations that get blocked by a Sch-S lock.  For example, any command requesting a schema modification (Sch-M) lock gets blocked in this scenario.

What commands request Sch-M locks?

Things like an index REBUILD or running sp_recompile on a table.  These are the types of commands running in my nightly maintenance jobs that I was trying to avoid hurting by using NOLOCK in the first place!

To reiterate, I used to think that using the NOLOCK hint was a great way to prevent blocking during long running queries.  However, it turns out that my NOLOCK queries were actually blocking my nightly index jobs (all SELECT queries block in this example, but I find the NOLOCK to be particularly misleading), which then caused other SELECT statements to get blocked too!

Let’s take a look at this in action.  Here I have a query that creates a database, table, and then runs a long running query with NOLOCK:

DROP DATABASE IF EXISTS [Sandbox]
GO
CREATE DATABASE [Sandbox]
GO
USE [Sandbox]
GO

DROP TABLE IF EXISTS dbo.Test
CREATE TABLE dbo.Test
(
	c0 int IDENTITY PRIMARY KEY,
	c1 varchar(700) default REPLICATE('a',700)
)

CREATE NONCLUSTERED INDEX IX_Id ON dbo.Test (c1);
GO

INSERT INTO dbo.Test DEFAULT VALUES;
GO 1000


-- Read a billion records
SELECT * 
FROM 
	dbo.Test t1 (NOLOCK) 
	CROSS JOIN dbo.Test t2 (NOLOCK) 
	CROSS JOIN dbo.Test t3 (NOLOCK) 

Now, while that billion row read is occurring, we can verify that the query took out a Sch-S lock by looking at sys.dm_tran_locks:

SELECT *
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'

[Screenshot: sys.dm_tran_locks showing the granted Sch-S lock]

While that’s running, if we try to rebuild an index, that rebuild is blocked (shown as a WAIT):

USE [Sandbox]
GO

ALTER INDEX IX_Id ON dbo.Test REBUILD

[Screenshot: the index rebuild waiting, blocked by the Sch-S lock]

Our index rebuild query will remain blocked until our billion row NOLOCK SELECT query finishes running (or is killed).  This means the query that I intended to be completely unobtrusive is now blocking my nightly index maintenance job from running.

Even worse, any other queries that try to run after the REBUILD query (or after any other command requesting a Sch-M lock) are going to get blocked as well!  If I try to run a simple COUNT(*) query:

USE [Sandbox]
GO

SELECT COUNT(*) FROM dbo.Test

[Screenshot: the COUNT(*) query blocked behind the rebuild’s Sch-M lock request]

Blocked!  This means that not only is my initial NOLOCK query causing my index REBUILD maintenance jobs to wait, the Sch-M lock placed by the REBUILD maintenance job is causing any subsequent queries on that table to get blocked and be forced to wait as well.  I just derailed the timeliness of my maintenance job and subsequent queries with a blocking NOLOCK statement!

Solutions

Unfortunately, this is a tough problem, and there’s no one-size-fits-all remedy.

Solution #1: Don’t run long running queries

I could avoid running long queries at night when they might run into my index maintenance jobs.  This would prevent those index maintenance jobs and subsequent queries from getting delayed, but it means my initial billion row select query would then have to run earlier, negatively impacting server performance during a potentially busier time of day.

Solution #2: Use WAIT_AT_LOW_PRIORITY

Starting with SQL Server 2014, I could do an online index rebuild with the WAIT_AT_LOW_PRIORITY option set:

ALTER INDEX IX_Id ON dbo.Test REBUILD 
WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 1 MINUTES , ABORT_AFTER_WAIT = BLOCKERS)))

This query gives any currently running SELECT queries that are blocking the rebuild 1 minute to finish executing; after that, they are killed and the index rebuild proceeds.  Alternatively, we could set ABORT_AFTER_WAIT = SELF, and the rebuild query would kill itself instead, allowing the billion row NOLOCK SELECT to finish running and not preventing any other queries from running.
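For comparison, here's a sketch of the SELF variant against the same index:

```sql
-- If the Sch-M lock can't be acquired within 1 minute,
-- abort this REBUILD instead of killing the blocking SELECTs
ALTER INDEX IX_Id ON dbo.Test REBUILD
WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF)));
```

With SELF, the long running query survives at the cost of the maintenance job silently not happening that night.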

This is not a great solution because it means either the long running query gets killed or the index REBUILD gets killed.

Solution #3: REBUILD if no Sch-S, REORGANIZE otherwise

A programmatic solution can be written that tries to REBUILD the index, but falls back to REORGANIZE if it knows it will have to wait for a Sch-M lock.

I’ve created the boilerplate below as a starting point, but the sky is the limit with what you can do with it (e.g. create a WHILE loop to check for the lock every x seconds, create a timeout for when the script should stop trying to REBUILD and just REORGANIZE instead, etc…)

-- Idea for how to rebuild/reorganize based on a schema stability lock.
-- More of a starting point than fully functional code.
-- Not fully tested, you have been warned!
DECLARE 
	@TableName varchar(128) = 'Test',
	@HasSchemaStabilityLock bit = 0

SELECT TOP 1 @HasSchemaStabilityLock = 
	CASE WHEN l.request_mode IS NOT NULL THEN 1 ELSE 0 END
	FROM
		sys.dm_tran_locks as l
	WHERE
		l.resource_type = 'OBJECT'
		AND l.request_mode = 'Sch-S'
		AND l.request_type = 'LOCK'
		AND l.request_status = 'GRANT'
		AND OBJECT_NAME(l.resource_associated_entity_id) = @TableName

IF @HasSchemaStabilityLock = 0
BEGIN
	-- Perform a rebuild
	ALTER INDEX IX_Id ON dbo.Test REBUILD
	PRINT 'Index rebuilt'
END
ELSE
BEGIN
	-- Perform a REORG
	ALTER INDEX IX_Id ON dbo.Test REORGANIZE
	PRINT 'Index reorganized'
END

This solution is my favorite because:

  1. Ad hoc long running queries don’t get killed (all of that time spent processing doesn’t go to waste)
  2. Other select queries are not blocked by the Sch-M lock attempt by REBUILD
  3. Index maintenance still occurs, even if it ends up being a REORGANIZE instead of a REBUILD
