Lock escalation and ROWLOCK

Background to this post: I’m assuming you haven’t enabled read committed snapshot on your database. You will find DBAs online who tell you to just do it – it will solve all your locking problems. This is not the case. read committed snapshot is wonderful sometimes. But it comes with a cost. It’s not a one-size-fits-all solution to your blocking problems.

“Lock escalation in SQL Server”. Generated with Microsoft Copilot Designer

The business problem to solve

Consider the following SQL Server situation. You have a table with the really innovative name dbo.TableA. dbo.TableA has an identity column as Primary Key (and clustered key). You have an application that hammers dbo.TableA with single row inserts 24/7. This application uses SCOPE_IDENTITY() to figure out what value the identity column got for a single inserted row. You can’t change the application. And you are now given the task to move one billion rows from dbo.TableB to dbo.TableA. No rows may exist in both dbo.TableA and dbo.TableB at the same time.

Since you have an application hammering dbo.TableA with inserts, and this application relies on SCOPE_IDENTITY() to find the identity value for inserted rows, you can’t change the table to use a sequence to get default values – you need the identity column to stay an identity column. And you need to insert one billion rows into the table without causing blocking for the application hammering dbo.TableA with inserts.

What’s good for you though is that you don’t need to keep the PK value from dbo.TableB when it’s inserted into dbo.TableA. This is a made-up business case, but I was tasked with something very similar recently, and that gave me the idea to write this blog post.

First attempt. The naive approach.

Your first attempt to solve this problem may be as simple as:

INSERT INTO dbo.TableA (c1, c2, c3,..,cN)
SELECT c1, c2, c3,..,cN FROM dbo.TableB;
DELETE dbo.TableB;

Looks easy, right? But we’re talking about billions of rows in dbo.TableB. And users will not be happy if you block dbo.TableA. And inserting billions of rows into a table will cause blocking. Why? Because this single insert will first set PAGE-locks. These PAGE-locks will be exclusive locks, meaning no other session can read from or write to the pages. And sooner rather than later, there will be so many locks from this transaction, that lock escalation will happen. Lock escalation is when a single transaction has created so many locks that SQL Server decides it’s getting a bit much to handle, and changes the PAGE-locks to OBJECT locks. OBJECT locks in this case means TABLE locks. An exclusive OBJECT lock on a table means nobody can read from or write to the table.
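As a side note, you can check what escalation behaviour a table is configured for – TABLE (escalate to table level) is the default. A simple inspection query:

```sql
-- Check the lock escalation setting for dbo.TableA (TABLE is the default)
SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = N'TableA';
```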

Second attempt. Batch it crazy.

So we try something else. We batch our inserts and deletes. It could look something like this.

DECLARE @minId bigint;
DECLARE @maxId bigint;
SELECT @minId = MIN(ID), @maxId = MAX(ID) FROM dbo.TableB;
DECLARE @batchSize int = 1000;
WHILE @minId <= @maxId
BEGIN
    BEGIN TRAN;
    INSERT INTO dbo.TableA (c1, c2, c3,..,cN)
    SELECT c1, c2, c3,..,cN FROM dbo.TableB
      WHERE ID>=@minId AND ID<@minId + @batchSize;
    DELETE dbo.TableB WHERE ID>=@minId AND ID<@minId + @batchSize;
    COMMIT;
    SET @minId = @minId + @batchSize;
END;
IF @@TRANCOUNT > 0 ROLLBACK; -- safety net, in case an error left a transaction open

What we have done now is to batch our inserts and deletes into batches of 1,000 rows. 1,000 rows wasn’t randomly selected. You read up on lock escalation and learned that lock escalation happens at certain thresholds. Some of these thresholds are really hard to predict, as they have to do with the percentage of the lock manager’s memory being used. But you read that 5,000 is the magic number. When a transaction holds 5,000 locks on an object, lock escalation is triggered. So you decide that 1,000 rows should be a safe batch size.
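If you want to confirm whether escalation actually happens during a test run, one way is an Extended Events session on the lock_escalation event (a sketch – the session name is made up):

```sql
-- Sketch: capture lock escalation events while the batch loop runs
CREATE EVENT SESSION LockEscalationWatch ON SERVER
ADD EVENT sqlserver.lock_escalation(
    ACTION(sqlserver.session_id, sqlserver.sql_text))
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION LockEscalationWatch ON SERVER STATE = START;
```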

So you start running the above piece of code, and you still see a lot of blocking in the database. Why? How can this happen, you think to yourself. You let the code run for a bit, because you have been a responsible database developer and you run in a test environment where you have simulated the load from the application that hammers dbo.TableA with inserts, so that you won’t hurt users while you’re developing your solution. Well done!

So you read up on locks, and you learn that you can query sys.dm_tran_locks. You do that, and you filter on the session-id that runs the INSERT/DELETE-loop. After investigating what you can find from sys.dm_tran_locks, you end up with this query:

declare @sessionid int=56 --session-id of the SSMS-window that runs the INSERT/DELETE-loop
select count(*),resource_type,request_type,request_mode
from sys.dm_tran_locks
where request_session_id=@sessionid
group by resource_type,request_type,request_mode

You see that you have a lot of locks with resource_type=PAGE and request_mode=X. This means you have exclusive locks on PAGE level. What happens when your INSERT/DELETE loop executes is that it will sooner or later allocate new pages, and it will place an exclusive lock on those new pages. And since your simulated application workload hammers dbo.TableA with inserts, those inserts get blocked by your INSERT/DELETE loop. Every once in a while, the queued-up inserts from your simulated workload will sneak in and write a row, and your INSERT/DELETE loop will have to wait for them to finish before it can go ahead and place new exclusive page locks on the table. And while your INSERT/DELETE loop waits to acquire these locks, new insert attempts from your simulated workload will be placed in a queue behind your loop, etc, etc, etc.
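To see who is blocking whom while this is going on, you can also check sys.dm_exec_requests – a standard DMV query, not specific to this scenario:

```sql
-- Sessions that are currently blocked, and by whom
SELECT session_id, blocking_session_id, wait_type, wait_resource, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```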

Third and final attempt. Give the lock manager something to work with.

This is where ROWLOCK comes into play. I’m usually hesitant to use ROWLOCK hints on queries, because I don’t want to stress the lock manager with too many locks. But in this situation, it’s exactly what we want to do. We want to place fine-grained locks on row level instead of PAGE locks. When our loop needs to allocate a new page, we don’t want it to set an exclusive lock on the page. Instead, we want it to set exclusive locks only on the rows it inserts into dbo.TableA.

So our final INSERT/DELETE-loop will look a little something like this.

DECLARE @minId bigint;
DECLARE @maxId bigint;
SELECT @minId = MIN(ID), @maxId = MAX(ID) FROM dbo.TableB;
DECLARE @batchSize int = 1000;
WHILE @minId <= @maxId
BEGIN
    BEGIN TRAN;
    INSERT INTO dbo.TableA WITH (ROWLOCK) (c1, c2, c3,..,cN)
    SELECT c1, c2, c3,..,cN FROM dbo.TableB
      WHERE ID>=@minId AND ID<@minId + @batchSize;
    DELETE dbo.TableB WHERE ID>=@minId AND ID<@minId + @batchSize;
    COMMIT;
    SET @minId = @minId + @batchSize;
END;
IF @@TRANCOUNT > 0 ROLLBACK; -- safety net, in case an error left a transaction open

Now you don’t see massive blocking anymore. And when you query sys.dm_tran_locks, the only X locks you see have resource_type=KEY, meaning row locks.

Final words

Now you can start playing with the batch size parameter. If you set it too high, lock escalation to OBJECT level will happen and you will see massive blocking again. Remember that indexes on the table also need locks, and since you used a ROWLOCK hint, there will be a KEY lock per row that you inserted in each index. Lock escalation happens for individual objects, meaning if you have 3,000 key locks on the clustered index and another 3,000 key locks on a nonclustered index, lock escalation won’t kick in because of the 5,000-lock threshold – that threshold applies to a single object. But the more indexes you have on your table, the more memory will be consumed by the lock manager, and chances are you will end up with lock escalation because you pushed the lock manager over a memory threshold instead.
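Escalation behaviour can also be tuned per table with ALTER TABLE. I’m not saying you should reach for this knob here, but it’s good to know it exists:

```sql
-- Allowed values are TABLE (default), AUTO and DISABLE
ALTER TABLE dbo.TableA SET (LOCK_ESCALATION = AUTO);
```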

To be more sure you pick the right batch size before you move on to run your batch in production, make sure to test with a realistic workload, on an instance that has the same CPU and memory specs as your production server. If you don’t have such a test rig, stay conservative with your batch size.

Securely pass column names to dynamically change result set from SP

I want to describe a secure way to pass a string-parameter containing a comma-separated list of column names to include in a result set. This can be useful when you’re often querying the same tables with the same predicates, but wish to include different columns in the result set for different purposes.

To set up our example, we’re going to use two tables in the AdventureWorks2022 database: Sales.SalesOrderHeader and Sales.SalesOrderDetail.

Our first version of the Stored Procedure has a fixed column list, so no dynamic columns in the result set.

Version 1 of our Stored Procedure

As you can see, we now have a stored procedure that returns some information about orders and line details.
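The original post shows version 1 as a screenshot. It boils down to something like this (a sketch – the procedure name is assumed, and the column list is taken from the default parameter value of the final version later in the post):

```sql
-- Sketch of version 1: fixed column list, no dynamic SQL
CREATE OR ALTER PROC dbo.StaticColumnListExample  -- name assumed
    @CustomerId INT
AS
SELECT SOH.OrderDate, SOH.DueDate, SOH.ShipDate, SOH.Status,
       SOH.OnlineOrderFlag, SOH.PurchaseOrderNumber, SOH.AccountNumber,
       SOD.ProductId, SOD.LineTotal, SOD.OrderQty
FROM Sales.SalesOrderHeader AS SOH
    INNER JOIN Sales.SalesOrderDetail AS SOD
        ON SOD.SalesOrderID = SOH.SalesOrderID
WHERE SOH.CustomerID = @CustomerId;
```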

Moving on, we now want to change this stored procedure so that it can return a more dynamic column list. Perhaps some parts of an application aren’t interested in Totals at all, and other parts aren’t interested in DueDate or ShipDate at all, but apart from that the query is more or less the same.

Let’s start with an example of how NOT to do it. First the code, and then I will show why the below is a really bad idea.

Some dynamic SQL to take care of dynamic column list.
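The screenshot isn’t reproduced here, but judging by the statements it generates later in the post, the vulnerable version concatenates the raw parameter straight into the statement, roughly like this (a sketch – don’t use it):

```sql
-- INSECURE sketch: @ColumnList is concatenated without any sanitising
CREATE OR ALTER PROC dbo.DynamicColumnListExample
    @CustomerId INT,
    @ColumnList NVARCHAR(1000)
AS
DECLARE @sql NVARCHAR(MAX) = N'
    SELECT ' + @ColumnList + N'
    FROM Sales.SalesOrderHeader AS SOH
        INNER JOIN Sales.SalesOrderDetail AS SOD
            ON SOD.SalesOrderID = SOH.SalesOrderID
    WHERE SOH.CustomerID = @CustomerId;';
EXEC sp_executesql @stmt = @sql,
                   @params = N'@CustomerId int',
                   @CustomerId = @CustomerId;
```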

Although this works, sort of, it opens up our database for SQL injection.

We can do this:

EXEC dbo.DynamicColumnListExample @CustomerId=29825, @ColumnList=N'OrderDate, ShipDate, OnlineOrderFlag';


It will indeed give us the desired result set, with three columns.

But we can also do this.

EXEC dbo.DynamicColumnListExample @CustomerId=29825, @ColumnList=N'OrderDate, ShipDate, OnlineOrderFlag from (values('''','''',''''))t(orderdate,shipdate,onlineorderflag);--
select * from sys.tables;
select *  '

Wait, what???!!!

This will generate the following SQL-statements and the Stored Procedure will happily execute them with sp_executesql.

SELECT OrderDate, ShipDate, OnlineOrderFlag from (values('','',''))t(orderdate,shipdate,onlineorderflag);--
select * from sys.tables;
select *  
    FROM Sales.SalesOrderHeader AS SOH
        INNER JOIN Sales.SalesOrderDetail AS SOD
            ON SOD.SalesOrderID = SOH.SalesOrderID
    WHERE SOH.CustomerID = @CustomerId;

So first a result set with three empty strings.

Then all the rows from sys.tables.

And finally all the columns from Sales.SalesOrderHeader and Sales.SalesOrderDetail for a given customer. Not at all what we were looking for, right?

But what if we still want to allow for dynamic column lists? We DO want to use dynamic SQL. But we can’t really do it this way. The way to secure this is to use the STRING_SPLIT function to convert the ColumnList parameter to a data set, and then convert it back to a comma separated list with STRING_AGG, but apply the QUOTENAME function to each element. Like this:
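To see what this does to a malicious input, you can run the split-and-aggregate by itself (illustration only; note that STRING_AGG without WITHIN GROUP doesn’t guarantee element order):

```sql
DECLARE @ColumnList NVARCHAR(1000) = N'OrderDate, ShipDate from sys.tables;--';
SELECT STRING_AGG(QUOTENAME(TRIM(BOTH ' ' FROM value)), ',')
FROM STRING_SPLIT(@ColumnList, ',');
-- Produces something like: [OrderDate],[ShipDate from sys.tables;--]
```

The injected tail ends up inside brackets, as one long (nonexistent) column name.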

DECLARE @columnListSecure NVARCHAR(MAX);
SELECT @columnListSecure = STRING_AGG(QUOTENAME(TRIM(BOTH ' ' FROM value)), ',')
FROM STRING_SPLIT(@ColumnList, ',');

And then we use @columnListSecure instead of @ColumnList to build our column list in the dynamic SQL statement.

Our final procedure looks like this:

CREATE OR ALTER PROC dbo.DynamicColumnListExample
    @CustomerId INT,
    @ColumnList NVARCHAR(1000) = N'OrderDate, DueDate, ShipDate, Status, OnlineOrderFlag, PurchaseOrderNumber, AccountNumber, ProductId, LineTotal, OrderQty'
AS
    DECLARE @columnListSecure NVARCHAR(MAX);
    SELECT @columnListSecure = STRING_AGG(QUOTENAME(TRIM(BOTH ' ' FROM value)), ',')
    FROM STRING_SPLIT(@ColumnList, ',') AS SS;
    DECLARE @sql NVARCHAR(MAX)
        = N'
    SELECT ' + @columnListSecure
          + N'
    FROM Sales.SalesOrderHeader AS SOH
        INNER JOIN Sales.SalesOrderDetail AS SOD
            ON SOD.SalesOrderID = SOH.SalesOrderID
    WHERE SOH.CustomerID = @CustomerId;';
    PRINT @sql;
    EXEC sp_executesql @stmt = @sql,
                       @params = N'@CustomerId int',
                       @CustomerId = @CustomerId;


Let’s test it. First without a column list. Then using a column list with good intentions. And finally a column list with bad intentions.

Works like before
Also works like before
Haha evil hacker, here are a couple of error messages for you!

As you can see from the printed SQL statement, all the column identifiers are quoted, and the error messages we get say that there are no columns named “OnlineOrderFlag from (values(” etc. Meaning all the malicious code that was sent into the column list has been translated to quoted identifiers, which simply produce error messages when used as part of our column list. So we stopped an attacker from exploiting a SQL injection vulnerability in our dynamic SQL.

Read more about SQL Injection and other vulnerabilities on OWASP Top Ten: OWASP Top Ten | OWASP Foundation

Stream Teams meeting to YouTube

Recently, I started streaming SQL Friday directly to YouTube. The speaker still connects to a Teams meeting, and then I use OBS Studio to create scenes with components from the Teams call together with a title bar etc, and stream from OBS Studio directly to YouTube.

Since I struggled a little bit with getting all the bits and pieces right (and perhaps I still have some tuning to do), I wanted to write this blog post, in case anyone searches for a way to do the same.

First – setup NDI for your Teams computer

NDI, short for Network Device Interface, is a broadcasting technology that broadcasts streams from cameras and other devices to the local network. Teams has built-in capabilities for NDI broadcasts, but you will need some NDI drivers. These drivers can be downloaded from https://ndi.video/download-ndi-sdk/. Downloading the NDI SDK should be enough. You will need this software on the computer where you run your Teams client as well as on the computer where you run OBS Studio. In my case, it’s the same computer, but you may just as well connect to Teams from your laptop, and use OBS on a separate production computer.

Once you have installed the SDK, you need to configure your Teams user to allow it to broadcast with NDI. This is done in the Teams Admin portal, by changing an existing meeting policy or creating a new one, and assigning it to the users who should be allowed to use NDI broadcast.

This admin page is in Swedish, but just open the meeting policy, go down to Audio and Video, and select the NDI/SDI option.

Next step is to configure the Teams client.


So I stick to the classic/old Teams client. There, you enable NDI for yourself by clicking the three-dot menu next to your profile picture and selecting Settings. Under Settings, go to the App permissions tab, where you should find this setting.

Now you’re almost done with the NDI setup in Teams. The last step is to start broadcasting when you’re in a meeting. So start the meeting, select “… More” and enable Broadcast via NDI under the Broadcast menu (above the recording menu). If you don’t see it, make sure your Teams admin has enabled NDI, and that you have enabled NDI for yourself in Teams.

Next up – configure NDI in your OBS Studio computer

For the computer where you run OBS Studio, you also need to install the NDI SDK (see the previous section). I went for the full Tools-suite, but that’s not really necessary. If you run OBS Studio on the same computer as Teams, you’re all done.

Once we have the NDI-SDK in our OBS Studio computer, we need to install the NDI-Plugin for OBS Studio. You can do that from the OBS Project Plugins forum.

Now we’re ready to start configuring our NDI sources. The easiest way to do it is to start a Teams call with someone, start the NDI broadcast in Teams and then go into OBS Studio to create a scene.

In your OBS Studio Scene, click the Plus sign to add a new source. This will bring up this menu:

Again, Swedish, but the symbols are universal. Select “NDI™ Source”
Give the source a meaningful name and hit OK.
After hitting OK, you will start configuring the source. If your Teams call is broadcasting with NDI, you will be able to select your own camera, other participants’ cameras and whatever is shared by any of the participants.

One thing I realised late into the game is the Audio Video sync settings of an NDI source. The default is “Source Timing”. Change it to “Network” and you will most probably notice a remarkable improvement in audio quality from your Teams call. Also the sync itself actually works. If you get this wrong, chances are your stream from OBS Studio will get the audio sources mistimed, which makes for a quite funny way to follow conversations between two parties…

Network is the way to go here.

For the rest, it’s going to come down to your creativity in setting up the scenes in OBS Studio. I will leave that up to you.

Lastly – stream to YouTube

To stream to YouTube, you need a YouTube channel. If you already have that, you’re basically good to go. In OBS Studio, choose the stream settings over here.

So start by opening “Settings” and configure your streams.

Here you connect your YouTube account in OBS Studio, and then you can start creating or connecting to planned Live Streams.

With your YouTube account connected, you choose to manage streams in the lower right-hand corner of OBS Studio.

It’s “Hantera sändning” in Swedish. You’ll figure out where that button is in whatever language you use.

Now you can either create a new Live-stream, or select an existing one from your connected YouTube account. The settings if you choose to create a new Live Stream are pretty much the same you will find in YouTube Studio, so I’m not going to explain them in detail. Once you’re done selecting or creating your Live Stream and select it, you’re ready to start streaming.

When you have selected a Live Stream to connect to, you can start streaming.

DONE! Or maybe not

I’m sure you, as I, will run into some technical difficulties with NDI/OBS Studio/YouTube. You can see my results, and how they evolve over time, on my YouTube channel. SQL Friday Season 8 is the first playlist where I started live streaming the event: SQL Friday Season 8 – YouTube

Good luck! Let me know if I made any obvious mistakes in this post, and also let me know about your progress.

Personally, I feel like I live in the future. Live streaming to a global audience, from my own office room. Wow.

EXECUTE AS to test permissions

As a security-aware developer, you want the application services to be able to do exactly what they are supposed to do. Not more, not less. To test this, you can impersonate a login and see that the login has exactly the permissions you intend for it to have.

This is done using either execute as login or execute as user. In this post, I will show a few use cases for execute as user.
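The basic pattern looks like this – impersonation starts with EXECUTE AS and ends with REVERT (the PermTest user is created further down in the post):

```sql
-- Impersonate a database user, then switch back
EXECUTE AS USER = 'PermTest';
SELECT USER_NAME();  -- returns PermTest
REVERT;
SELECT USER_NAME();  -- back to your own user context
```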

Let’s start with an example where we test the permissions for a given SQL Server Table.

-- Setup the database user. I'm testing with a user without login here.
CREATE USER PermTest WITHOUT LOGIN;

-- Create a Customer Table
CREATE TABLE dbo.Customer(
	CustomerId int identity(1,1) CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED,
	CustomerName nvarchar(100) NOT NULL,
	CustomerRegion nvarchar(100) NOT NULL,
	CustomerRating tinyint CONSTRAINT DF_Customer_CustomerRating DEFAULT (3),
		CONSTRAINT CK_Customer_CustomerRating_1_to_5 CHECK (CustomerRating >0 and CustomerRating <6)
);

-- Give PermTest permission to read columns CustomerId, CustomerName and CustomerRegion but not CustomerRating
GRANT SELECT ON dbo.Customer (CustomerId, CustomerName, CustomerRegion) TO PermTest;

-- Now we test permissions, with EXECUTE AS
EXECUTE AS USER = 'PermTest';
	-- Test that PermTest can't read all columns
	SELECT * FROM dbo.Customer;
	-- Test that PermTest can read the columns it should be able to read.
	SELECT CustomerId, CustomerName, CustomerRegion
	FROM dbo.Customer;
	-- Test if permissions can be "fooled"
	SELECT CustomerId 
	FROM dbo.Customer 
	WHERE CustomerRating = 2;
REVERT;

The EXECUTE AS USER and REVERT statements are where the “magic” happens. Or rather where impersonation starts and stops. With EXECUTE AS USER=’PermTest’, our session impersonates the PermTest user, and with the REVERT command, we switch back to our logged-in user context.

The first test case will fail. The second will succeed and the third will fail.

Now let’s complicate things with some row-level security.

-- Add a SalesRep column for the predicate to filter on
-- (the SalesRep column, policy name and dbo exception are reconstructed details)
ALTER TABLE dbo.Customer ADD SalesRep nvarchar(128);
GO
-- Create the security predicate function (in a separate Security schema)
CREATE OR ALTER FUNCTION Security.UserRLSPredicate(@User as nvarchar(100))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 as RLSPredicateResult
       WHERE @User = USER_NAME() OR USER_NAME() = 'dbo';
GO
-- Apply the security predicate using a security policy on table dbo.Customer
CREATE SECURITY POLICY CustomerRLSPolicy
ADD FILTER PREDICATE Security.UserRLSPredicate(SalesRep)
ON dbo.Customer
WITH (STATE = ON);
GO
-- Finally we GRANT SELECT on the whole table to PermTest. The user should now see the rating, but only for their own customers.
-- First REVOKE existing permissions and then add only what we need.
REVOKE SELECT ON dbo.Customer TO PermTest;
GRANT SELECT ON dbo.Customer TO PermTest;

-- Now test if I as db_owner can see everything
SELECT * FROM dbo.Customer;

-- And finally test that the Row Level Security predicate works
EXECUTE AS USER = 'PermTest';
SELECT * FROM dbo.Customer;
REVERT;

These are some simple test cases using EXECUTE AS. They are somewhat silly, especially the predicate function – it should probably use metadata stored in tables instead of going straight at SalesRep = user-name. But that’s for you to figure out in your application. Using EXECUTE AS, you’re at least able to test if your permission structure works or not.

Final words: Don’t just give service accounts db_datareader, db_datawriter and EXECUTE (or even worse, db_owner) permissions. Unless that’s exactly what the service needs to be able to do. Give your service accounts just the right permissions, and test your security. A more advanced topic is to create stored procedures to elevate permissions for specific tasks, but that’s a bigger topic. One that Erland Sommarskog has written plenty about: Packaging Permissions in Stored Procedures (sommarskog.se)

Clone a docker volume used by SQL container

Erland asked me to have a look at an MS Q&A question today. The question in short was this:

I have SQL Server running in a container with /var/opt/mssql saved on a volume. Now I want to clone said volume to a new volume and spin up another container using the cloned volume, but doing so gives me an Access Denied error when starting the second container.

To reproduce the problem

# Create container sql1
docker run --name sql1 -v sql1volume:/var/opt/mssql -p1501:1433 -d -e "ACCEPT_EULA=YES" -e "SA_PASSWORD=Pa55w.rd" mcr.microsoft.com/mssql/server:latest

# Stop sql1 and clone sql1volume
docker stop sql1
docker volume create sql2volume
docker run --rm -i -t -v sql1volume:/original -v sql2volume:/clone alpine sh -c "cp -avr /original/* /clone"

# Start new container using the cloned volume
docker run --name sql2 -v sql2volume:/var/opt/mssql -p1502:1433 -d -e "ACCEPT_EULA=YES" -e "SA_PASSWORD=Pa55w.rd" mcr.microsoft.com/mssql/server:latest

# Check out the logs from sql2
docker logs sql2

Logs will look like this

SQL Server 2022 will run as non-root by default.
This container is running as user mssql.
Your master database file is owned by mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
/opt/mssql/bin/sqlservr: Error: The system directory [/.system] could not be created. File: LinuxDirectory.cpp:420 [Status: 0xC0000022 Access Denied errno = 0xD(13) Permission denied]

Ok, error reproduced. Why does this happen?

It happens because Microsoft did good. Microsoft’s SQL Server container images don’t run as root. They run as the user mssql, with user-id 10001. This is awesome, just as awesome as _not_ running SQL Server for Windows with a service account that is local admin on the server. However, the alpine container we use to clone the volume doesn’t run as user mssql. In fact, it doesn’t know who mssql is.

Thanks for the lesson. Now, how do we solve this mess?

It isn’t that hard to fix really. We need to create a user with user-id 10001 in our alpine container, and we need to change ownership of the files on the cloned volume to that user. So let’s clean up and start cloning the volume again, this time creating the user too.

# cleanup
docker rm sql2
docker volume rm sql2volume

# Clone again
docker run --rm -i -t -v sql1volume:/original -v sql2volume:/clone alpine sh -c "adduser -u 10001 mssql;cp -avr /original/* /clone;chown mssql -R /clone"
docker run --name sql2 -v sql2volume:/var/opt/mssql -p1502:1433 -d -e "ACCEPT_EULA=YES" -e "SA_PASSWORD=Pa55w.rd" mcr.microsoft.com/mssql/server:latest

# check logs
docker logs sql2

Voila, logs look good!

2023-08-31 23:23:10.38 spid65s     [4]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 1. ConcurrentGAMUpdate: 1. ConcurrentSGAMUpdate: 1, CleanupUnderUserTransaction: 0. TranLevelPVS: 0
2023-08-31 23:23:10.38 spid65s     Starting up database 'msdb'.
2023-08-31 23:23:10.49 spid47s     [2]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 1. ConcurrentGAMUpdate: 1. ConcurrentSGAMUpdate: 1, CleanupUnderUserTransaction: 0. TranLevelPVS: 0
2023-08-31 23:23:10.49 spid47s     Starting up database 'tempdb'.
2023-08-31 23:23:10.81 spid47s     The tempdb database has 8 data file(s).
2023-08-31 23:23:10.82 spid62s     The Service Broker endpoint is in disabled or stopped state.
2023-08-31 23:23:10.82 spid62s     The Database Mirroring endpoint is in disabled or stopped state.
2023-08-31 23:23:10.83 spid62s     Service Broker manager has started.
2023-08-31 23:23:10.83 spid45s     Recovery is complete. This is an informational message only. No user action is required.

Final comments

This was actually pretty easy, but it wasn’t so easy to find it on the interwebs, thus this blog post. When running the updated cloning process, the alpine container (or rather the adduser command) will ask for a password. Just type anything there – that password will be in the passwd file of the alpine container, which is thrown away as soon as it has finished copying the files. But it’s a bit annoying – getting prompted for passwords makes it impossible to automate the process. There’s probably a flag for adduser to not set a password, but I didn’t find it at first glance and it’s getting late here. Tomorrow perhaps…

SQL Server in containers, the basics

If you already run containers and even SQL Server in containers, this post is not for you. This is something to get started.

And I will start very basic. You heard the word “Container” and you heard it’s good. But you wonder, what the heck IS a container. Great. This is for you.

I created a video about creating your very first container. This post contains some more, and it’s a lot easier to copy/paste from a blog than from a YouTube video.

What the heck is a Container?

A container is a way to run a process, or a program. In my examples, that program is going to be the SQL Server executable. The surrounding operating system lends the container some resources, like a namespace, so that whatever is running inside the container can’t see everything in the hosting operating system. Once inside the container, the processes can’t see what’s running outside of the container. The operating system controls how much of its resources the processes inside the container can use – how much CPU, how much RAM etc. Since the operating system gives the container some resources of its own, it can even look to the processes running inside the container as if they are running in a host of their own – with its own hostname.

I heard this is all Linux stuff, I don’t know how that works

It is true that containers are mostly a Linux thing. You can run Windows containers as well, but the Windows operating system isn’t as good as Linux at creating this isolated environment for processes, which means Windows containers are substantially larger. That means more time to create them and more resources to host them. Therefore, containers are mostly a Linux thing. But you don’t need to know all that much about Linux. If you run SQL Server in a container, you need to know something about SQL Server on Linux. But not much, really. If you run a .NET Core application in a container, you just need to know about .NET Core. The Linux operating system, and how to configure it to be able to run the containers – that’s what you have container software for.

How do I run a container?

To be able to run containers in Windows 10 or Windows 11, you need software to maintain and run the containers. I prefer Docker Desktop for Windows, which you can download from docker.com. You also need a Linux environment running inside your Windows operating system. Either in a Hyper-V VM or in Windows Subsystem For Linux ver 2 (WSL2). These are Windows features which you will simply enable. Once you have enabled them, you will configure Docker Desktop to use either Hyper-V or WSL2. If you run in Hyper-V, you don’t need to create, configure or even login to the Linux VM. Docker Desktop takes care of that for you.

Ok, I have Docker Desktop, now what?

Once you have installed and configured Docker Desktop, you’re ready to start playing with containers. I will use SQL Server containers. But there are container images for basically everything. In some of my demos, I use a .NET Core SDK container to build a .NET Core application, which I then run in an ASP.NET Runtime container. Imagine something that can run in Linux. That something can run in a container, and there’s a 99.9% chance someone else already wanted to do it and created a container image for it. And despite what some people may think of people, people are mostly friendly and like to share things, so look and you’ll find.

Some simple docker commands to get you started

Fire up your favourite terminal and start playing. Here are some commands to get you started.

# Pull, or download if you will, the layers that make up the 
# container image for SQL Server 2022, latest build.
# Think of "pull" as "download installation media". Or even
# "download a snapshot of a preinstalled VM with SQL Server".
# Only there's no VM, because a container is not a VM, it's just
# isolated processes running in Linux.

docker pull mcr.microsoft.com/mssql/server:2022-latest

# Use docker create to create a container from an image. 
# If that image isn't already downloaded, 
# docker create will take care of the pull for you.
# parameters to docker create
# -e Environment variables to create inside the container
# -p Port-mapping. Map local port X to container internal port Y
# -v Volumes. Mount a persistent docker volume to a path 
#             inside the container file system. 
# Put the name of the image you want to create the container 
# from last in the docker create command

docker create -e "SA_PASSWORD=Pa55w.rd" -e "ACCEPT_EULA=Y" -p 1450:1433 -v datavolume:/var/opt/mssql --name myveryfirstcontainer mcr.microsoft.com/mssql/server:2022-latest

docker start myveryfirstcontainer

# docker start will start a container created with docker create
# docker stop will stop a container that is running
# docker rm will remove a container. Stop it before removing
# docker ps will show you running containers
# docker ps -a will show you all containers

# docker run is a combination of docker create and docker start. 
# You probably want to use the -d switch to start the container detached
# Otherwise your terminal will run the container process interactively and when 
# you close the terminal, your container dies

docker run -e "SA_PASSWORD=Pa55w.rd" -e "ACCEPT_EULA=Y" -p 1450:1433 -v datavolume:/var/opt/mssql --name myveryfirstcontainer -d mcr.microsoft.com/mssql/server:2022-latest

# finally, docker exec. It will run a command inside the container.
# if you want to run it as "fire and forget", or rather if you want an unattended
# execution, you want to run it with -d, detached that is.

docker exec -d myveryfirstcontainer touch /var/opt/mssql/emptyfile

# Sometimes you want to run commands interactively inside the container
# Then you want a shell. Bash or sh for example. 
# Then you want to run them with the -i and -t switches. 

docker exec -it myveryfirstcontainer /bin/sh

Happy playing with containers. Once you have played some, you will want to learn some more. When that happens, there’s a chance I have written some more on the topic and you will return here. I of course didn’t learn in a vacuum. Andrew Pruski knows a lot more than I do about containers, and what he writes and presents will take you to the next level once you start playing. Start on Andrew Pruski’s website: DBA From The Cold | Ramblings on working as a SQL Server DBA

My life as a YouTuber

The title of this post: A while ago, I did a Teams presentation for my son’s class, about my occupation. And to make it a bit more fun, I told them I’m a Database Specialist and a YouTuber. The latter didn’t amuse my son. He was upset when he came home from school. “You don’t even have 1000 subscribers on your channel, you’re not a ‘tuber’,” he said. I still don’t have 1000 subscribers, but it’s getting closer, so maybe I’ll have to start printing swag and call myself a SQL Influencer instead of a consultant soon 🙂

Top 4 viewed posts from my YouTube-channel. https://youtube.com/c/transmokopter

TL;DR: Keep videos short if you want viewers to watch the whole thing. That’s really hard if the video is a recording of a 60-minute-long presentation 😂

Since I started organising and hosting SQL Friday, I have recorded every presentation. Most of them are published on my YouTube-channel (the rest are editing-work-in-progress and will show up shortly).

I browsed through the statistics from the YouTube-channel yesterday. The most viewed video is from SQL Friday #37, with Deepthi Goguri, about SQL Server Internals and Architecture. That video has 1679 views.

Second most viewed, with 1090 views, is Bob Duffy’s presentation about Financial Modelling.

“Rank” 3 and 4 are my own videos, so not SQL Friday recordings. They are about Temporal tables and about Tally tables.

YouTube studio offers some more analytics than just the views, and here it becomes interesting and actionable for content publishers. As a publisher, you probably want viewers to watch the whole video, right? Sorry for you, but that’s not going to happen. At least not all viewers. People are going to watch the beginning and then drop off. Or watch some sections but fast forward through other sections. Or watch 20 minutes, and then the phone rings and they shut down their browser etc.

What I found out is, publishing full episodes from SQL Friday is not ideal. SQL Friday presentations are on average just over one hour, including the after-presentation chats, which I usually leave in the video. That’s almost the length of your typical Hollywood movie. And that’s honestly too long for a YouTube video. But what if my goal isn’t to get as many views as possible? I mean, of course I want everyone to watch and learn from the fantastic presentations that speakers have given to the SQL Friday members. But I realise that one-hour-long videos about SQL Server have a somewhat narrow possible audience. So I’m happy if just a handful of people, who missed the live presentation, watch the videos end-to-end.

For Deepthi’s video about SQL Server internals, the average view time is 8 minutes 45 seconds. That’s only 13.4% of the total video, so it must be really bad, right? Not really. Or not necessarily. Because only 1/3 of the viewers watched past the first 40 seconds. That’s very, very typical for a YouTube viewer. You watch something, and when the video ends, you get a new suggestion. You start watching, and maybe, just maybe, that wasn’t an awesome suggestion, so you skip to the next suggestion. So if viewers watch 13.4% of Deepthi’s video on average, and 2/3 drop off after less than 40 seconds, that means the 1/3 that stay past 40 seconds watch roughly 40% of the video. That’s a whole lot better than 13.4%, right?

Using YouTube Studio’s analytics section, you can go into depth with every video, and try to figure out what makes people lose attention, or keep watching. For real “tubers”, this will have a direct impact on income. For me, I don’t have the channel to make money on ads. But I do like it when the people who find the videos keep watching them, because that means they probably like the content and think they can learn something from it.

What’s the ideal video then? I have no idea to be honest 🙂 But it looks like at least “my” viewers stay on the video for a larger percentage of time if the video is around 10 minutes compared to 60 minute videos. So perhaps “Keep it short” is the advice? On the other hand, if the video is a recording of a 60 minute presentation, it’s pretty hard to make it 10 minutes long.

Dapper Parameters – Explicit Content

I recently started seeing lots of implicit conversions in execution plans in a system for which I oversee databases. Implicit conversions happen when two values with different datatypes are to be compared. Here’s first an example of implicit conversion that does NOT affect performance all that much. I’m running the query in the AdventureWorks2014 database.
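The query itself didn’t make it into this version of the post. Judging by the index and predicate discussed in this post, it was presumably something along these lines (the exact column list is an assumption):

```sql
SELECT BusinessEntityID, FirstName, MiddleName, LastName
FROM Person.Person
WHERE LastName = 'Sánchez';
```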

What is Implicit Conversion? Is it good or bad?


What’s gonna happen when this query executes? The SQL Server optimizer is going to find an index, ix_Person_LastName_FirstName_MiddleName, with the names as key columns and BusinessEntityId on the leaf level (BusinessEntityId is the clustering key, and the clustering key is always included on the leaf level of non-clustered indexes).
So the optimizer is going to choose to do an index seek on that covering index. But before seeking into the index, SQL Server has to CONVERT values, because ‘Sánchez’ is a varchar value and the LastName column is an nvarchar column. So implicit conversion is going to happen. The optimizer is going to choose to convert the varchar value to nvarchar and then perform the seek operation. The reason is that nvarchar has higher precedence than varchar. This makes sense. Nvarchar can store text that varchar can’t. Doing it the other way around – converting nvarchar to varchar – would mean losing some data in the conversion and therefore give us the wrong results.

Here’s where you can see implicit conversion in the execution plan.

Hover on the Index Seek operator and you’ll see that the varchar value is converted with CONVERT_IMPLICIT to nvarchar(4000). You could also right-click on the operator and look at properties to find out more.
Properties for the Seek Predicates of the Index Seek operator.

Will this implicit conversion affect the performance of our query? No, it won’t. At least not noticeably. Our constant is converted to nvarchar(4000) and then it can be used to seek into the index.

Let’s look at a more problematic example. Let’s run two queries against the Sales.Customer table.

SELECT CustomerID, AccountNumber FROM Sales.Customer WHERE
	AccountNumber = 'AW00000013';

SELECT CustomerID, AccountNumber FROM Sales.Customer WHERE
	AccountNumber = N'AW00000013';

What do you think will happen with these two very similar queries?
The first one will perform an index seek into the index AK_Customer_AccountNumber.
The second one will perform an index scan of the index AK_Customer_AccountNumber.

And it’s not only seek vs scan. We get an extra Compute Scalar and a Filter in the lower plan. But the important operators are the Index Scan and the Filter.
This is in the tooltip of the Filter operator and the cause of bad performance.

What happens here? The upper query performs just the way we expect: use the index, seek into it. But in the lower query, we send in an nvarchar value. Since the table column is varchar, we get implicit conversion on the COLUMN, not on the parameter/constant expression. This is really bad. It means we are scanning every single row of the table and passing it on to the Filter operator, where the predicate is evaluated.


Now we have looked at what implicit conversion is. We also now know a little more about when it can be bad for us.

I said in the beginning that I saw implicit conversion in Query Store. And it wasn’t the good kind of implicit conversion, where parameters or constant values are converted. I saw implicit conversion on the table side of things, resulting in index scans.

Investigating things got me to queries sent from a service, in a code path using Dapper. Dapper is a library that can be used to map database results to application code objects. It’s relatively easy to use, and lots of developers favour it over using SqlClient.SqlCommand directly, or frameworks like Entity Framework.

Dapper. And parameters.

Dapper provides a really simple way to pass parameters to queries. It looks a little something like this:

var template = new SalesCustomer { AccountNumber = "AW00000013" };
var parameters = new DynamicParameters(template);
var sql = "select CustomerId, AccountNumber from Sales.Customer where AccountNumber = @AccountNumber";
using (var connection = new SqlConnection(connString))
    var customers = connection.QuerySingle<SalesCustomer>(sql, parameters);

See how simple the parameter passing is? Just create a template from a SalesCustomer class with a specific AccountNumber and create a DynamicParameters result set from this template. But the parameters aren’t strongly typed. This means Dapper/.NET will choose a datatype for us. This will translate to an sp_executesql call with defined parameters. AccountNumber will be passed in as an nvarchar(4000) parameter to the query.
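On the SQL Server side, the call will look roughly like this (a sketch – the exact text Dapper generates may differ slightly – but the string parameter will be typed as nvarchar(4000)):

```sql
EXEC sp_executesql
	N'select CustomerId, AccountNumber from Sales.Customer where AccountNumber = @AccountNumber',
	N'@AccountNumber nvarchar(4000)',
	@AccountNumber = N'AW00000013';
```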

How is this bad?

As we saw in the execution plans above, when we send in an nvarchar value to a query that uses it to compare it to a varchar column, we will get implicit conversion on the table side of things. SQL Server will scan every row of the index and then apply the filter predicate. That’s not an effective use of a covering index. Effective use would have been an index seek.

What should we do then?

We saved a couple of characters of code by not specifying the datatype for the parameters when we created the Dapper code. But we make the database suffer every single time we run the query. What’s the correct way of doing this?

In Dapper, we could instead create an empty parameters collection and use the Add-method to add parameters, with datatype, direction and more. In Dapper, it would look a little something like this.

var parameters = new DynamicParameters();
var accountNumber = "AW00000013";
parameters.Add("@AccountNumber", accountNumber, DbType.AnsiString, ParameterDirection.Input, accountNumber.Length);
var sql = "select CustomerId, AccountNumber from Sales.Customer where AccountNumber = @AccountNumber";
using (var connection = new SqlConnection(connString))
    var customers = connection.QuerySingle<SalesCustomer>(sql, parameters);

In the above example, we explicitly define the AccountNumber parameter as DbType.AnsiString, which translates to varchar in SQL. Using DbType.String would give us an nvarchar parameter. But in our case, we want to pass a varchar parameter.
And while we’re at it, we’re setting the LENGTH of that varchar parameter. Because otherwise it’s going to be sent as varchar(8000). For the comparison itself this doesn’t matter too much. But the larger the datatypes sent in as parameters, the larger the memory grant you’ll need to execute the query. And if you’re going to explicitly set the datatype, you might as well explicitly set the length too.


Be explicit. A few extra lines of code could potentially save you from that 2AM call when you’re on call, because that one important batch brought the database to its knees.
Use AnsiString for varchar and String for nvarchar.

Begin automatic row versioning on an existing table (Temporal Tables part 2)

In part 1 I showed in a video how to create a new, temporal table for row versioning in Sql Server. In this post, I will show how to convert an existing table to a system versioned (temporal) table.

-- Column definitions and sample values are assumptions; the original
-- post's CREATE TABLE and INSERT were truncated here.
CREATE TABLE dbo.PriceList(
	ProductID INT NOT NULL CONSTRAINT PK_PriceList PRIMARY KEY,
	ListPrice MONEY NOT NULL
);

INSERT INTO dbo.PriceList (ProductID, ListPrice)
VALUES (1, 10), (2, 20), (3, 30);

We start with this small table, the same as we used in part 1. Only this table already contains data. The same principles apply to existing tables: we need to add the two columns for start and end dates.

So we could try this.

ALTER TABLE dbo.PriceList ADD 
	RowStart DATETIME2(7) GENERATED ALWAYS AS ROW START NOT NULL,
	RowEnd DATETIME2(7) GENERATED ALWAYS AS ROW END NOT NULL,
	PERIOD FOR SYSTEM_TIME (RowStart, RowEnd);

But that gives us this result

Msg 4901, Level 16, State 1, Line 12
ALTER TABLE only allows columns to be added that can contain nulls, or have a DEFAULT definition specified, or the column being added is an identity or timestamp column, or alternatively if none of the previous conditions are satisfied the table must be empty to allow addition of this column. Column 'RowStart' cannot be added to non-empty table 'PriceList' because it does not satisfy these conditions.

So we can do this in different ways. We could add the columns as normal columns, set their values and then decorate them with “Generated Always”. Instead, I’m going to add two default constraints to the columns. RowStart can be set to whatever you find suitable. The current date and time perhaps? I’m going with 1900-01-01, to indicate that the actual start date is unknown.

ALTER TABLE dbo.PriceList ADD 
	RowStart DATETIME2(7) GENERATED ALWAYS AS ROW START NOT NULL CONSTRAINT DF_PriceList_RowStart DEFAULT '1900-01-01',
	RowEnd DATETIME2(7) GENERATED ALWAYS AS ROW END NOT NULL CONSTRAINT DF_PriceList_RowEnd DEFAULT '9999-12-31 23:59:59.9999999',
	PERIOD FOR SYSTEM_TIME (RowStart, RowEnd);

More important than the value for RowStart is the value for RowEnd. It has to be the max value available for the datatype. Since I’m using datetime2 with the highest resolution, I need to set the date with seven decimals (three for milliseconds, three for microseconds and one for 100-nanosecond units).

Now I have prepared the table for system versioning. All that’s left to do is to alter the table to make it system versioned.
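Assuming we name the history table dbo.PriceList_History, the statement looks like this:

```sql
ALTER TABLE dbo.PriceList
	SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PriceList_History));
```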


And now, if we want to, we can drop the default constraints. They are kind of redundant, since the columns are decorated with Generated Always.

ALTER TABLE dbo.PriceList DROP CONSTRAINT DF_PriceList_RowStart;
ALTER TABLE dbo.PriceList DROP CONSTRAINT DF_PriceList_RowEnd;

And finally, let’s change values in the main table.

UPDATE dbo.PriceList SET ListPrice = 100 WHERE ProductID = 1;

If we now check the main table and the history table, we can see that the automatic row versioning is indeed in effect.

SELECT * FROM dbo.PriceList WHERE ProductID = 1;
SELECT * FROM dbo.PriceList_History WHERE ProductID = 1;


I hope you found this post useful. In part 3, we will look at how we can convert existing temporal pattern tables and make them automagically system versioned, temporal tables. That is: Tables where we have data in both a main table and a history table.

Temporal Tables for Row Versioning with Transact-SQL (part 1)

I’m working on a video series about Temporal Tables. In this first video I’m showing how to create a table which has System Versioning switched on. You might have heard the word Temporal Tables. What it means is that System Versioning is switched on for the table, and that all changes to the table are recorded in its history table.
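The table definition itself is shown in the video. As a rough sketch (the table and column names here are illustrative, not necessarily what the video uses), a system-versioned table can be created like this:

```sql
CREATE TABLE dbo.PriceList(
	-- a system-versioned table requires a primary key
	ProductID INT NOT NULL CONSTRAINT PK_PriceList PRIMARY KEY,
	ListPrice MONEY NOT NULL,
	RowStart DATETIME2(7) GENERATED ALWAYS AS ROW START NOT NULL,
	RowEnd DATETIME2(7) GENERATED ALWAYS AS ROW END NOT NULL,
	PERIOD FOR SYSTEM_TIME (RowStart, RowEnd)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PriceList_History));
```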

Part 1 of my Temporal Tables video series.

In upcoming videos and blog posts, we’re going to look at some more complex scenarios, such as

  • Add system versioning to an existing table
  • Switch from home-cooked versioning to automatic system versioning
  • Use the FOR SYSTEM_TIME features in Transact-SQL
  • Performance considerations for temporal tables

Do you have any specific use cases for temporal tables that you want me to bring up in this series? Drop a comment here, or on the YouTube video, and we’ll see what we can do.