I spoke at SQL Server User Group Sweden

SQL Server User Group (SQLUG) Sweden had a meeting on October 24th. This one felt a bit special to me, as it was something of a replay of my first time speaking in public, in two ways. My first public talk about SQL Server was at SQLUG Sweden, a few years ago, when SQLUG Sweden arranged a Local Community Edition with local speakers. That first time, it was me and Daniel Hutmacher (Twitter, Blog) speaking. And on October 24th, it was, once again, me and Daniel Hutmacher speaking.

Continue reading “I spoke at SQL Server User Group Sweden”

Duplicate key in sysclsobjs using DBCC CLONEDATABASE

This blog post is about an error message I got the other day when using DBCC CLONEDATABASE in a T-SQL script. But first, some background on DBCC CLONEDATABASE.

I was pretty excited about the DBCC CLONEDATABASE command, which was introduced in SQL Server 2014 SP2 and SQL Server 2016 SP1. It creates a schema-only copy of a database (that means all the database objects, but no data), keeping all the statistics data, so that you can troubleshoot query plans for certain queries without having to copy all the data. Before DBCC CLONEDATABASE (and to be honest, probably afterwards as well; DBCC CLONEDATABASE doesn’t replace all the needs), one had to make a full copy of a database to get the statistics data along, and that copy was usually restored to a test box. If the test box is identical to your production box, you’re almost fine. But your test box doesn’t have the cached execution plans from the production box, so you might end up with very different query plans there. With DBCC CLONEDATABASE, you get a read-only copy of the database on your production box, and you can use that to tweak your queries and see what new estimated execution plans they get.
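For reference, here’s a minimal sketch of how that looks. The DBCC CLONEDATABASE syntax is the documented one; the database, table and column names are made up:

-- Clone a (hypothetical) database named SalesDB into SalesDB_Clone.
DBCC CLONEDATABASE (SalesDB, SalesDB_Clone);
GO

-- The clone comes online read-only, with schema and statistics but no data.
USE SalesDB_Clone;
GO

-- Ask for estimated plans without executing anything:
SET SHOWPLAN_XML ON;
GO
SELECT OrderID, OrderDate        -- hypothetical table and columns
FROM dbo.Orders
WHERE OrderDate >= '2017-01-01';
GO
SET SHOWPLAN_XML OFF;
GO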

Continue reading “Duplicate key in sysclsobjs using DBCC CLONEDATABASE”

OPTION(USE HINT) – New SQL Server 2016 SP1 feature

Many SQL Server developers and admins found, after upgrading to SQL Server 2014, that some queries started taking much longer than before. The reason is the new cardinality estimation formula which was introduced in SQL Server 2014. Cardinality estimation is done all the time by the SQL Server optimizer: to produce a query plan, the optimizer makes assumptions about how many rows in the table match each condition. In most cases, the new cardinality estimation formula in SQL Server 2014 and onwards gives slightly better estimates, and the optimizer therefore produces slightly better plans. In some cases, however, mostly when there are predicates on more than one column in a WHERE clause or JOIN clause, the 2014 cardinality estimation is a lot worse than in previous versions of SQL Server.
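As a taste of the feature: with USE HINT, a query that regresses under the new estimator can ask for the old behaviour per query, without trace flags and without changing the database compatibility level. A minimal sketch (the tables and columns are made up; the hint name is the documented one):

SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2014-01-01'
  AND o.OrderStatus = 'Shipped'  -- predicates on more than one column: where the 2014 estimator often goes wrong
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));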

Continue reading “OPTION(USE HINT) – New SQL Server 2016 SP1 feature”

Archiving with the OUTPUT clause

Most database developers have been faced with the task to archive old data. It could look something like this:

CREATE TABLE dbo.Cars(
  CarID int IDENTITY(1,1) PRIMARY KEY,
  BrandName varchar(100),
  ModelName varchar(100),
  releaseYear smallint
);

CREATE TABLE dbo.Cars_Archive(
  CarID int,
  BrandName varchar(100),
  ModelName varchar(100),
  releaseYear smallint,
  ArchivedDateTime datetime DEFAULT CURRENT_TIMESTAMP,
  CONSTRAINT PK_Cars_Archive PRIMARY KEY(CarID, ArchivedDateTime)
);

And updating a row would often require a stored procedure and some explicit transactions, since the pre-update version of the row has to be copied into the archive table.
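As a taste of where the post is heading: with the OUTPUT clause, the update and the archiving can be done in a single atomic statement. A sketch against the tables above (the exact query in the post may differ):

-- The UPDATE writes the pre-update row (the deleted pseudo-table) straight
-- into the archive table; ArchivedDateTime gets its DEFAULT.
UPDATE dbo.Cars
SET ModelName = 'Model 3'        -- made-up change
OUTPUT deleted.CarID, deleted.BrandName, deleted.ModelName, deleted.releaseYear
INTO dbo.Cars_Archive (CarID, BrandName, ModelName, releaseYear)
WHERE CarID = 42;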

Continue reading “Archiving with the OUTPUT clause”

Statistics on ascending columns

I have had an annoying problem for a while. In a database used for a statistical survey system, reporting is painfully slow at the beginning of each reporting period.

The system
The tables contain a few million rows. Ola Hallengren’s index maintenance (which includes UPDATE STATISTICS) runs weekly. Each month is a new reporting period, and when a new reporting period opens, there are no rows yet for the current period. From the first day of the month, we receive input, each input being fewer than 2,000 new rows in the table.

The problem
Reporting on any previous period is always consistent in execution time – around 3 seconds to produce a full report. That’s OK performance. But when reporting is done for the current period early in a reporting period, execution takes up to 10 minutes.
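This is the classic ascending-column symptom the title hints at: the statistics histogram ends before the newest data, so the optimizer estimates close to zero rows for the current period. One way to see it (the table, column and index names here are made up):

-- Compare the highest histogram step with the actual newest data.
DBCC SHOW_STATISTICS ('dbo.SurveyInput', 'IX_SurveyInput_Period') WITH HISTOGRAM;

-- If MAX(PeriodDate) is above the last RANGE_HI_KEY in the histogram,
-- rows for the current period are invisible to the optimizer.
SELECT MAX(PeriodDate) AS NewestRow FROM dbo.SurveyInput;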

Continue reading “Statistics on ascending columns”

Generate time slots

Here’s an inline table-valued function (TVF) for generating time slots from a start date to an end date, given a slot length in minutes.

This would be useful for many applications, like scheduling systems, sales statistics broken down into certain slices of time, etc. The function does have some limitations, e.g. there can’t be more than 100,000 minutes between start and end time. This is easily fixed by adding another CROSS JOIN to CTE2, or by changing the DATEADD functions to use hour instead of minute, if that fits your purpose.
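A sketch of what such a function can look like, assuming the CTE structure hinted at above (the function name is made up, and the real function in the post may differ in details):

CREATE FUNCTION dbo.GenerateTimeSlots
(
  @StartTime datetime,
  @EndTime datetime,
  @SlotMinutes int
)
RETURNS TABLE
AS
RETURN
WITH CTE1 AS (
  -- 10 rows
  SELECT n FROM (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) AS v(n)
),
CTE2 AS (
  -- 10^5 = 100,000 rows; this is where the 100,000-minute limit comes from
  SELECT 0 AS n
  FROM CTE1 AS a CROSS JOIN CTE1 AS b CROSS JOIN CTE1 AS c
       CROSS JOIN CTE1 AS d CROSS JOIN CTE1 AS e
),
Numbers AS (
  SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS minuteNo
  FROM CTE2
)
SELECT DATEADD(minute, minuteNo, @StartTime) AS SlotStart,
       DATEADD(minute, minuteNo + @SlotMinutes, @StartTime) AS SlotEnd
FROM Numbers
WHERE minuteNo % @SlotMinutes = 0
  AND DATEADD(minute, minuteNo, @StartTime) < @EndTime;

-- Usage: 30-minute slots for one day.
-- SELECT SlotStart, SlotEnd FROM dbo.GenerateTimeSlots('20170101', '20170102', 30);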

Continue reading “Generate time slots”

DELETE and Non Clustered Indexes

I’ll start off with a disclaimer: I’m going to tell you about something that happened in a specific system environment. There’s no general advice you can build on this specific scenario. I’m just posting it because I was myself surprised by what order of magnitude I was able to speed up a specific query just by removing some of the work from the execution plan.

The other day I helped troubleshoot a database system. In a table with some 400 million records, a subset (50-60 million records) was to be deleted. The application kept timing out on this delete operation, so I advised the developer to split the delete operation into smaller chunks. I even helped write a T-SQL script to perform the delete in one-million-row chunks. The script was pretty basic: a WHILE loop which checked if any rows fulfilling the WHERE condition of the delete were left in the table, and inside the loop a DELETE TOP (one million) followed by an explicit checkpoint.
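A sketch of that kind of script, with a made-up table and WHERE condition:

-- Delete in one-million-row chunks until no qualifying rows remain.
WHILE EXISTS (SELECT 1 FROM dbo.BigTable WHERE SomeDate < '2010-01-01')
BEGIN
    DELETE TOP (1000000)
    FROM dbo.BigTable
    WHERE SomeDate < '2010-01-01';

    -- Explicit checkpoint between chunks, to let the log truncate
    -- (assuming SIMPLE recovery).
    CHECKPOINT;
END;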

Continue reading “DELETE and Non Clustered Indexes”