Use Query Store to help with SQL Server upgrades

The Query Store is a feature that helps you evaluate query performance. It captures execution information as well as up to 200 plans for a single query. For a thorough overview of the Query Store you can read Enrico van de Laar’s article here. Query Store is also available for Azure SQL Database since V12. To turn on Query Store you would run the following command:
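A minimal version of that command looks like this (the database name is just a placeholder; Query Store exposes plenty of options you can tune afterwards):

```sql
-- Turn on Query Store for a database (database name is a placeholder)
ALTER DATABASE [MyDatabase] SET QUERY_STORE = ON;
```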


You can also use the GUI in SQL Server to turn on Query Store:


Query Store GUI 1
Turn on and tune Query Store conveniently


I learned the following neat trick last week when I attended SQL Saturday. We know upgrading between SQL Server versions may lead to slow queries, as our queries are optimized for, well, the SQL Server we use. That means if your production server is a 2008 R2, and you upgrade to a 2016, you might end up with a few queries that in fact execute slower than what you are used to. If you are a smart DBA (and you are a smart DBA), you put a few test databases on the target server and let them run for a while. You evaluate performance, growth, make some changes, and when everything looks peachy you move on to upgrade your servers.

This is where the Query Store can help you in that process. Say I want to upgrade from my current 2008R2 server. I would grab a database from my current 2008R2 server, restore it on the 2016 server, but leave the compatibility level at 2008R2. I turn on Query Store here and allow it to capture all execution plans for all queries. This data stays in memory for 900 seconds by default, then gets flushed to disk. You heard that right: Query Store saves all this information to disk. When I restart the server, the execution plans are still going to be there for me. I will let that database happily spin around for, say, a month. I can always check on its progress by accessing Query Store in SSMS.
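Sketched out in T-SQL, that setup might look something like this (the database and file names are placeholders; 100 is the 2008/2008R2 compatibility level):

```sql
-- Restore the 2008R2 backup on the 2016 instance (names are placeholders)
RESTORE DATABASE [MyDatabase]
FROM DISK = N'D:\Backups\MyDatabase.bak'
WITH RECOVERY;

-- Keep the old compatibility level for now (100 = SQL Server 2008/2008R2)
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 100;

-- Turn on Query Store and have it capture every query, not just the relevant ones
ALTER DATABASE [MyDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [MyDatabase] SET QUERY_STORE (QUERY_CAPTURE_MODE = ALL);
```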


Query Store in SSMS


At that point I change the compatibility level to SQL Server 2016. What do you think will happen? You guessed right: nothing will change for the most part, except for a few queries that suddenly run slower. Way slower. I have heard stories of queries that used to run in mere seconds on 2008R2 but would take up to 15 minutes on 2016. That is because the Cardinality Estimator has undergone major changes over the past few years, especially with the introduction of SQL Server 2014.
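The switch itself is a one-liner (130 is the SQL Server 2016 compatibility level; the database name is a placeholder):

```sql
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 130;
```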

At this point I turn to my new best friend, Query Store, and look at the execution plans it saved for that particular query. I would see something like this:

Query Store GUI
Execution plans in the Query Store


I can see that plan 146 was the best performing execution plan. I can now select that plan and click the ‘Force Plan’ button right under the scatter plot.


Force and Unforce Plan buttons in Query Store
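If you would rather script it than click, the same thing can be done with sp_query_store_force_plan. The plan ID 146 comes from my screenshot; the query ID below is a hypothetical value you would look up yourself:

```sql
-- Force plan 146 for its query (query_id 41 is hypothetical; find the real
-- pair in sys.query_store_query and sys.query_store_plan)
EXEC sp_query_store_force_plan @query_id = 41, @plan_id = 146;
```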


I would do this for all my slow-performing queries. Tah-dah! I have now crossed off one item on my ‘upgrade to SQL Server 2016’ list, and I don’t have to worry about execution plans anymore. Isn’t that neat?!


For more information on the Query Store, I recommend Microsoft’s surprisingly reader-friendly page here, as well as the aforementioned Enrico van de Laar’s extensive article over on Simple Talk. Credit goes to these two articles for their images as well.

When the client asks for a unique ID based on name and address but gives you dirty data

case when len(convert(varchar(3), ROW_NUMBER() OVER(
         PARTITION BY left(replace(name,' ',''), 3) + left(replace(addrsscty,' ',''),3)
         ORDER BY left(replace(name,' ',''), 3) + left(replace(addrsscty,' ',''),3)))) < 2

then UPPER(left(replace(name,' ',''), 3) + left(replace(addrsscty,' ',''),3)) + '0' + convert(varchar(3),ROW_NUMBER() OVER( PARTITION BY left(replace(name,' ',''), 3) + left(replace(addrsscty,' ',''),3) ORDER BY left(replace(name,' ',''), 3) + left(replace(addrsscty,' ',''),3)))

else UPPER(left(replace(name,' ',''), 3) + left(replace(addrsscty,' ',''),3)) + convert(varchar(3),ROW_NUMBER() OVER( PARTITION BY left(replace(name,' ',''), 3) + left(replace(addrsscty,' ',''),3) ORDER BY left(replace(name,' ',''), 3) + left(replace(addrsscty,' ',''),3)))

end as ID
my work is done.

Setting database to trustworthy will let you deploy assemblies in unsafe mode

The title says it all, but let me expand on this a little bit.

I have an assembly that I wanted to deploy in SQL Server. The first, obvious choice is to use safe mode, which results in the following message:

Msg 6212, Level 16, State 1, Line 2
CREATE ASSEMBLY failed because method ‘x’ on type ‘x.x’ in safe assembly ‘x’ is storing to a static field. Storing to a static field is not allowed in safe assemblies.

Be quiet my racing heart. Sassy SQL Server be sassy. So evidently I go ahead and deploy it in unsafe mode, only to run into another error message:

Msg 10327, Level 14, State 1, Line 2
CREATE ASSEMBLY for assembly ‘CTXCRMEncryption’ failed because assembly ‘CTXCRMEncryption’ is not authorized for PERMISSION_SET = UNSAFE. The assembly is authorized when either of the following is true: the database owner (DBO) has UNSAFE ASSEMBLY permission and the database has the TRUSTWORTHY database property on; or the assembly is signed with a certificate or an asymmetric key that has a corresponding login with UNSAFE ASSEMBLY permission.

SQL Server is actually trying to help in her quiet, sassy way here, so let’s give her what she wants.
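Per the message above, that means two things: the database owner’s login needs the UNSAFE ASSEMBLY permission, and the TRUSTWORTHY property has to be on. A sketch (the database and login names are placeholders):

```sql
USE [master];

-- Give the database owner's login the server-level permission
-- (login name is a placeholder)
GRANT UNSAFE ASSEMBLY TO [MyDomain\DbOwnerLogin];

-- Flip the TRUSTWORTHY property on (database name is a placeholder)
ALTER DATABASE [MyDatabase] SET TRUSTWORTHY ON;
```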


And now we can deploy the assembly in unsafe mode. I wish I had known that sooner.

SSIS – SQL Server data type

This blog post here on BI Developer Network contains a nice list on data conversion between SSIS and SQL Server, something that gets my blood pressure going thanks to SSIS’s ‘check your metadata’ error message. A good cheat sheet that lists the data types.


| SSIS Data Type | SSIS Expression | SQL Server |
| --- | --- | --- |
| single-byte signed integer | DT_I1 | |
| two-byte signed integer | DT_I2 | smallint |
| four-byte signed integer | DT_I4 | int |
| eight-byte signed integer | DT_I8 | bigint |
| single-byte unsigned integer | DT_UI1 | tinyint |
| two-byte unsigned integer | DT_UI2 | |
| four-byte unsigned integer | DT_UI4 | |
| eight-byte unsigned integer | DT_UI8 | |
| float | DT_R4 | real |
| double-precision float | DT_R8 | float |
| string | (DT_STR, «length», «code_page») | char, varchar |
| Unicode string | (DT_WSTR, «length») | nchar, nvarchar, sql_variant, xml |
| date | DT_DATE | |
| Boolean | DT_BOOL | bit |
| numeric | (DT_NUMERIC, «precision», «scale») | decimal, numeric |
| decimal | (DT_DECIMAL, «scale») | |
| currency | DT_CY | smallmoney, money |
| unique identifier | DT_GUID | uniqueidentifier |
| byte stream | (DT_BYTES, «length») | binary, varbinary, timestamp |
| database date | DT_DBDATE | date |
| database time | DT_DBTIME | |
| database time with precision | (DT_DBTIME2, «scale») | time(p) |
| database timestamp | DT_DBTIMESTAMP | datetime, smalldatetime |
| database timestamp with precision | (DT_DBTIMESTAMP2, «scale») | datetime2 |
| database timestamp with timezone | (DT_DBTIMESTAMPOFFSET, «scale») | datetimeoffset(p) |
| file timestamp | DT_FILETIME | |
| image | DT_IMAGE | image |
| text stream | (DT_TEXT, «code_page») | text |
| Unicode text stream | DT_NTEXT | ntext |

Credit goes to their user, DevinKnight.


DBCC DBReindex

I ran into some ‘Message 823’ issues on my development DB, and upon running DBCC CHECKDB() I saw that my indexes were the ones throwing the errors.
I got this handy cursor from SQL Server Performance and decided to save it here for me, and for you, so that I don’t have to search for it over and over again. The script, by the way, is from the talented Brad McGehee, and I suggest checking out the article because it has more information and more cursors for failing indexes.

Without further ado:
USE DatabaseName -- Enter the name of the database you want to reindex
GO

DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'base table'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC DBREINDEX(@TableName, ' ', 90)
    FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor


Conclusion: you, too, can fix your indexes with a cursor now.


I will make this short and sweet for you.
If there is something you want to do to all tables in your database, there is a command for you.

I will demonstrate, watch my hands because I cheat. To add a column to every table:
exec sp_msforeachtable 'alter table ? add intGoat int NULL'
To delete a column from each table:
exec sp_msforeachtable 'alter table ? drop column intGoat'

And this is how you use this undocumented little gem.

Small print: emphasis on undocumented, aka can go away or change in nature without any warning. Test it on your dev DB before using it.
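As far as I know, the procedure also accepts an undocumented @whereand parameter that gets appended to its internal sysobjects query, so you can restrict which tables the command touches. The o alias and the filter below are assumptions about that internal query, so test before trusting it:

```sql
-- Only touch tables whose name starts with 'tbl' (the o alias is an
-- assumption about sp_msforeachtable's internal sysobjects query)
exec sp_msforeachtable
    @command1 = 'alter table ? drop column intGoat',
    @whereand = ' and o.name like ''tbl%'''
```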
Willy Wonka is telling you to watch out with undocumented features.

The thing with cursors

So our aggregated databases have experienced some loading errors lately, and until that gets fixed, I have to collect the data straight from the field databases for each report.
That is, go in the database, pull the data, load into temp.
Go in the next database, pull the data, load into temp.
Go in next, pull, process. Go in next, pull, process.

Sounds familiar? That’s a classic cursor right there.


-- Global temp table, so the INSERT can still reach it after USE
-- switches databases
CREATE TABLE ##pureHatred (
[DBName] varchar(50),
[text] varchar(max)
)

DECLARE @name varchar(50), @sql nvarchar(max)

DECLARE db_cursor CURSOR FOR
SELECT name FROM MASTER.dbo.sysdatabases
WHERE name NOT IN ('master','model','msdb','tempdb')

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE ['+@name+']
    INSERT INTO ##pureHatred (DBName, [text])
    SELECT '''+@name+''', sc.[text]
    FROM sys.[syscomments] sc
    JOIN sys.views v ON v.object_id = sc.id
    WHERE sc.[text] like ''%fill my sproc with hatred%'''
    exec (@sql)
    FETCH NEXT FROM db_cursor INTO @name
END
CLOSE db_cursor
DEALLOCATE db_cursor

select * from ##pureHatred
drop table ##pureHatred

Now you’ve heard a million times that cursors are a bad approach, and I won’t even deny that fact. They are. They lock rows and load everything into memory. Cursors also involve multiple steps, that is, selecting the data and then processing it, and the lock remains in place for the duration of all those steps. Also, the server has to execute the plan for each row in each database. (Case in point: pull 10K orders each from 500 databases. Oh look! Large numbers.)

I have the luxury to do so in our environment, because we are loaded with memory already. My box still completes the 5 million row pull-and-process cursor in less than 10 seconds, but imagine the spike in memory use as it does so!* However, we could actually use that memory for other processes, so my solution is only for ad-hoc reports, and only until I fix the data imports. I slap my wrist every time I write up a cursor and hope that the lord of database administrators doesn’t find out about my sin.

But! I don’t know of any other way to jump databases. Feel free to suggest a solution if you know of one. If you have to repeat the same process in 500-1000-5000 databases, you need a cursor to repeat said process.

So again, should someone toss in the question on your next interview: cursors are bad for using up resources and locking rows.
There are times you can’t avoid them (jumping databases), but if you want to, read up on set-based approaches. I’ll write another article on the matter soon.

Conclusion: if you have to jump databases, you will either use UNION ALL 5000 times or load the data into a temp table with a cursor. In this one case I approve of using a cursor; otherwise you should avoid it.

*You know what? Let me run a query on sys.dm_exec_query_memory_grants next time I set up a mildly complicated cursor, and I will tell you just how much memory it really costs to use one.
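Should you want to try that yourself, a starting point might look like this (run it from a second session while the cursor is churning; it requires the VIEW SERVER STATE permission):

```sql
-- Snapshot the current memory grants, biggest first
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       max_used_memory_kb
FROM sys.dm_exec_query_memory_grants
ORDER BY granted_memory_kb DESC;
```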

Cast datetime as varchar

I stumbled upon this great Q+A on Stack Overflow when I was looking for options to cast datetime for a report. (The customer didn’t like our standard mm/dd/yyyy format.)

I actually like the third answer the most, because it gives you a chart of all the styles. A pretty helpful shortcut, if you ask me.

declare @now datetime = getdate()

select convert(nvarchar(MAX), @now, 0) as [output], 0 as style
union select convert(nvarchar(MAX), @now, 1), 1
union select convert(nvarchar(MAX), @now, 2), 2
union select convert(nvarchar(MAX), @now, 3), 3
union select convert(nvarchar(MAX), @now, 4), 4
union select convert(nvarchar(MAX), @now, 5), 5
output, style
Apr 28 2014 9:31AM, 0
04/28/14, 1
14.04.28, 2
28/04/14, 3
28.04.14, 4
28-04-14, 5

Quite handy if you want to depart from your usual convert(101) format. I don’t even know why that answer is not marked correct. Check out the article here, and bookmark it for future use.