Use Query Store to help with SQL Server upgrades

The Query Store is a feature that helps you evaluate query performance. It captures execution information as well as up to 200 plans for a single query. For a thorough overview of the Query Store, you can read Enrico van de Laar's article here. Query Store is also available in Azure SQL Database since V12. To turn on Query Store, you would run the following command:

ALTER DATABASE <database_name> SET QUERY_STORE = ON;
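
If you want more control than a plain ON, the same statement accepts a handful of documented options. Here is a minimal sketch; the values shown are the documented defaults or illustrative choices, not recommendations:

-- Turn on Query Store with a few of its documented options.
-- The values here are illustrative; tune them for your workload.
ALTER DATABASE <database_name>
SET QUERY_STORE = ON
    (
        OPERATION_MODE = READ_WRITE,        -- capture new data and serve reads
        DATA_FLUSH_INTERVAL_SECONDS = 900,  -- flush to disk every 15 minutes (default)
        QUERY_CAPTURE_MODE = ALL,           -- capture every query, not just the significant ones
        MAX_PLANS_PER_QUERY = 200           -- default cap on stored plans per query
    );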

You can also use the GUI in SQL Server to turn on Query Store:

Query Store GUI 1
Turn on and tune Query Store conveniently

I learned the following neat trick last week when I attended SQL Saturday. We know upgrading between SQL Server versions may lead to slow queries, as our queries are optimized for, well, the SQL Server version we use. That means if your production server is a 2008 R2 and you upgrade to 2016, you might end up with a few queries that execute slower than what you are used to. If you are a smart DBA (and you are a smart DBA), you put a few test databases on the target server and let them run for a while. You evaluate performance and growth, make some changes, and when everything looks peachy you move on to upgrade your servers.

This is where the Query Store can help you in that process. Say I want to upgrade from my current 2008 R2 server. I would grab a database from it, restore it on the 2016 server, but leave the compatibility level at 2008 R2. I turn on Query Store there and allow it to capture all execution plans for all queries. These data stay in memory for 900 seconds by default, then get flushed to disk. You heard that right: Query Store saves all this information to disk, so when I restart the server, the execution plans are still going to be there for me. I will let that database happily spin around for, say, a month. I can always check on its progress by accessing Query Store in SSMS.
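
Sketched out in T-SQL, that setup looks something like this (the database name and backup path are placeholders):

-- Restore the 2008 R2 database on the 2016 server.
-- (Add WITH MOVE if the file paths differ on the new server.)
RESTORE DATABASE MyProdDb
    FROM DISK = N'C:\Backups\MyProdDb.bak';

-- Keep the old optimizer behavior for now: 100 = SQL Server 2008 / 2008 R2.
ALTER DATABASE MyProdDb SET COMPATIBILITY_LEVEL = 100;

-- Turn on Query Store and capture every query, not just the significant ones.
ALTER DATABASE MyProdDb SET QUERY_STORE = ON (QUERY_CAPTURE_MODE = ALL);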

Query Store in SSMS

At that point I change the compatibility level to SQL Server 2016. What do you think will happen? You guessed it: nothing will change for the most part, except for a few queries that suddenly run slower. Way slower. I have heard stories of queries that used to run in mere seconds on 2008 R2 but would take up to 15 minutes on 2016. That is because the cardinality estimator has undergone major changes over the past few years, especially with the new version introduced in SQL Server 2014.
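
Flipping that switch is a one-liner (130 is the compatibility level for SQL Server 2016; the database name is the same placeholder as above):

ALTER DATABASE MyProdDb SET COMPATIBILITY_LEVEL = 130;  -- 130 = SQL Server 2016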

At this point I turn to my new best friend, Query Store, and look at the execution plans it saved for that particular query. I would see something like this:

Query Store GUI
Execution plans in the Query Store

I can see that plan 146 was the best performing execution plan. I can now select that plan and click the ‘Force Plan’ button right under the scatter plot.
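
You can do the same thing in T-SQL with the documented sp_query_store_force_plan procedure. A sketch; the query_id below is made up, so look up the real one in the Query Store catalog views first:

-- Find the query and its plans, with average duration per plan.
SELECT q.query_id, p.plan_id, rs.avg_duration
FROM sys.query_store_query q
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id;

-- Force the well-behaved plan; 42 is a placeholder query_id,
-- 146 is the plan from the scatter plot above.
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 146;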

Force and Unforce Plan buttons in Query Store
Force and Unforce Plan buttons

I would do this for all my slow-performing queries. Tah-dah! I have now crossed one item off my 'upgrade to SQL Server 2016' list, and I don't have to worry about execution plans anymore. Isn't that neat?!


For more information on the Query Store, I recommend Microsoft's surprisingly reader-friendly page here, as well as the aforementioned extensive article by Enrico van de Laar over on Simple Talk. Credit goes to these two articles for their images as well.

When the client asks for a unique ID based on name and address but gives you dirty data

-- Pseudo-unique ID: first three non-space characters of the name plus the
-- first three non-space characters of the city, uppercased, followed by a
-- sequence number within that group, zero-padded when it is a single digit.
case when len(ROW_NUMBER() OVER (
         PARTITION BY left(replace(name,' ',''),3) + left(replace(addrsscty,' ',''),3)
         ORDER BY     left(replace(name,' ',''),3) + left(replace(addrsscty,' ',''),3))) < 2
     then UPPER(left(replace(name,' ',''),3) + left(replace(addrsscty,' ',''),3)
          + '0' + convert(varchar(3), ROW_NUMBER() OVER (
              PARTITION BY left(replace(name,' ',''),3) + left(replace(addrsscty,' ',''),3)
              ORDER BY     left(replace(name,' ',''),3) + left(replace(addrsscty,' ',''),3))))
     else UPPER(left(replace(name,' ',''),3) + left(replace(addrsscty,' ',''),3)
          + convert(varchar(3), ROW_NUMBER() OVER (
              PARTITION BY left(replace(name,' ',''),3) + left(replace(addrsscty,' ',''),3)
              ORDER BY     left(replace(name,' ',''),3) + left(replace(addrsscty,' ',''),3))))
end as ID
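
That repeated left(replace(...)) expression is begging to be factored out. Here is a sketch of the same logic with CROSS APPLY, assuming a source table called clients; the table name is made up, the column names match the snippet above:

SELECT UPPER(k.stub
           + CASE WHEN ROW_NUMBER() OVER (PARTITION BY k.stub ORDER BY k.stub) < 10
                  THEN '0' ELSE '' END
           + CONVERT(varchar(3),
                 ROW_NUMBER() OVER (PARTITION BY k.stub ORDER BY k.stub))) AS ID
FROM clients c
CROSS APPLY (SELECT LEFT(REPLACE(c.name, ' ', ''), 3)
                  + LEFT(REPLACE(c.addrsscty, ' ', ''), 3) AS stub) k;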

exhausted
my work is done

How I passed 70-461

Conclusion: the exam is easier than you would think. Take advantage of the second shot offer, and test your knowledge as soon as possible. 

I put the conclusion at the top of this article because it was my biggest surprise after completing the exam.
I used Transcender's exam simulator pack, which also comes with flashcards if you'd like to practice every question. The 70-461 exam prep does not have the same questions I saw in the exam, but the verbiage, format, and difficulty are quite similar, which is why I recommend this product.

For this particular exam I don't recommend the official prep book. It does cover the necessary topics, just not in enough depth to call it an adequate preparation tool for the exam.

Here is what I did:
I bought the exam pack and practiced whenever I had some free time at work, roughly one hour every day.
After a month of that I signed up for an exam the following week. Microsoft periodically offers Second Shot promos, meaning you can retake the exam for free if you fail the first time.
(By the way, Transcender also offers a money-back guarantee if you can't complete the exam within a certain time.)
I walked into that first exam with the sole intention of seeing how I measured up. I expected to fail.

I did not fail. My score wasn’t stellar, but I passed.

Therefore, please allow me to repeat the conclusion:

Conclusion: the exam is easier than you would think. Take advantage of the second shot offer, and test your knowledge as soon as possible. 

sp_msforeachtable

I will make this short and sweet for you.
If there is something you want to do to all tables in your database, there is a command for you.

I will demonstrate; watch my hands, because I cheat. To add a column to every table:
exec sp_msforeachtable 'alter table ? add intGoat int NULL'
To delete a column from each table:
exec sp_msforeachtable 'alter table ? drop column intGoat'
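
The question mark is the placeholder that sp_msforeachtable substitutes with each table's name. Another quick use, counting rows in every table; a sketch, and the same undocumented caveats apply:

exec sp_msforeachtable 'select ''?'' as tableName, count(*) as rowCnt from ?'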

And this is how you use this undocumented little gem.

Small print: emphasis on undocumented, aka it can go away or change in nature without any warning. Test it on your dev DB before using it.
Willy Wonka is telling you to watch out with undocumented features.

The thing with cursors

So our aggregated databases have been experiencing some loading errors lately, and until that gets fixed, I have to collect the data straight from the field databases for each report.
That is, go in the database, pull the data, load into temp.
Go in the next database, pull the data, load into temp.
Go in next, pull, process. Go in next, pull, process.

Sound familiar? That's a classic cursor right there.

Sample:

-- A temp table instead of a permanent one: it stays visible after the
-- dynamic USE switches databases, so every INSERT lands in one place.
CREATE TABLE #pureHatred (
    [DBName] varchar(50),
    [text]   varchar(max)
)
DECLARE @name VARCHAR(50)
DECLARE @sql NVARCHAR(max)
DECLARE db_cursor CURSOR FOR
    SELECT name
    FROM master.dbo.sysdatabases
    WHERE name NOT IN ('master','model','msdb','tempdb')
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Switch context inside the dynamic batch, then collect matching views.
    SET @sql = N'USE ['+@name+']
        INSERT INTO #pureHatred (DBName, [text])
        SELECT DB_NAME(), v.name
        FROM sys.syscomments sc
        JOIN sys.views v ON v.object_id = sc.id
        WHERE sc.text like ''%fill my sproc with hatred%'''
    EXEC (@sql)
    FETCH NEXT FROM db_cursor INTO @name
END
CLOSE db_cursor
DEALLOCATE db_cursor
SELECT * FROM #pureHatred
DROP TABLE #pureHatred

Now, you've heard a million times that cursors are a bad approach, and I won't even deny that fact. They are. They lock rows and load everything into memory. Cursors also involve multiple steps (selecting the data, then processing it), and the lock remains for the duration of all those steps. Also, the server has to execute the plan for each row, in each database. (Case in point: pull 10K orders each from 500 databases. Oh look! Large numbers.)

I have the luxury of doing so in our environment, because we are loaded with memory already. My box still completes the 5-million-row pull-and-process cursor in less than 10 seconds, but imagine the memory spike as it does so!* However, we could actually use that memory for other processes, so my solution is only for ad-hoc reports, and only until I fix the data imports. I slap my wrist every time I write up a cursor and hope that the lord of database administrators doesn't find out about my sin.

But! I don't know of any other way to jump databases. Feel free to suggest a solution if you have one. If you have to repeat the same process in 500, 1000, or 5000 databases, you need a cursor to repeat said process.

So again, should someone toss the question at you in your next interview: cursors are bad because they use up resources and lock rows.
There are times you can't avoid them (jumping databases), but when you can, read up on set-based approaches. I'll write another article on the matter soon.

Conclusion: if you have to jump databases, you will either repeat UNION ALL 5000 times or load the data into a temp table with a cursor. In this one case I approve of using a cursor; otherwise you should avoid it.

*You know what? Let me run a query against sys.dm_exec_query_memory_grants next time I set up a mildly complicated cursor, and I'll tell you just how much memory it really costs to use one.
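
For the curious, the check itself is just a select against that DMV; the columns below are documented ones, and you would run this while the cursor is executing:

-- Memory requested vs. granted per active request, largest grants first.
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb
FROM sys.dm_exec_query_memory_grants
ORDER BY granted_memory_kb DESC;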

Cast datetime as varchar

I stumbled upon this great Q&A on Stack Overflow when I was looking for options to cast a datetime for a report. (The customer didn't like our standard mm/dd/yyyy format.)

I actually like the third answer the most, because it gives you a chart of all the styles. A pretty helpful shortcut, if you ask me.

DECLARE @now datetime = GETDATE();

SELECT convert(nvarchar(MAX), @now, 0) AS [output], 0 AS style
union select convert(nvarchar(MAX), @now, 1), 1
union select convert(nvarchar(MAX), @now, 2), 2
union select convert(nvarchar(MAX), @now, 3), 3
union select convert(nvarchar(MAX), @now, 4), 4
union select convert(nvarchar(MAX), @now, 5), 5
output               style
Apr 28 2014 9:31AM   0
04/28/14             1
14.04.28             2
28/04/14             3
28.04.14             4
28-04-14             5

Helpful if you want to depart from your usual convert(..., 101) format. I don't even know why that answer wasn't marked as accepted. Check out the post here, and bookmark it for future use.
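
And if all you need is to swap one format for another, it's a one-argument change. Both styles below are documented convert styles:

SELECT CONVERT(varchar(10), GETDATE(), 101) AS mdy,  -- mm/dd/yyyy
       CONVERT(varchar(19), GETDATE(), 120) AS iso;  -- yyyy-mm-dd hh:mi:ss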