This blog post is about an error message I got the other day when using DBCC CLONEDATABASE in a T-SQL script. But first some background to DBCC CLONEDATABASE.
I was pretty excited about the DBCC CLONEDATABASE command, which was introduced in SQL Server 2014 SP2 and SQL Server 2016 SP1. It creates a schema-only copy of a database (that means all the database objects, but no data), keeping all statistics data, so that you can troubleshoot query plans for certain queries without having to copy all the data. Before DBCC CLONEDATABASE (and to be honest probably also afterwards; DBCC CLONEDATABASE doesn't cover every need), one had to make a full copy of a database to get the statistics data along. That copy is usually restored to a test box. If the test box is identical to your production box, you're almost fine. But on your test box, you don't have the cached execution plans from the production box, so you might end up with very different query plans there. With DBCC CLONEDATABASE, you get a read-only copy of a database on your production box, and you can use that to tweak your queries and see what new estimated execution plans they get.
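The basic usage is a single statement. A minimal sketch, where the database names are examples of my own:

```sql
-- Create a schema- and statistics-only, read-only clone of a database.
-- 'ProductionDb' and 'ProductionDb_Clone' are example names, not from the post.
DBCC CLONEDATABASE ('ProductionDb', 'ProductionDb_Clone');
```

You can then point your problem queries at the clone and use "Display Estimated Execution Plan" to see the plans the optimizer would choose, without touching the real data.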
Continue reading “Duplicate key in sysclsobjs using DBCC CLONEDATABASE”
Many SQL Server developers and admins found, after upgrading to SQL Server 2014, that some queries started taking much longer than before. The reason is the new cardinality estimation formula introduced in SQL Server 2014. Cardinality estimation is done all the time by the SQL Server optimizer: to produce a query plan, the optimizer makes assumptions about how many rows exist for each condition on the table. In most cases, the new cardinality estimation formula in SQL Server 2014 and onwards gives slightly better estimates, and the optimizer therefore produces slightly better plans. In some cases, however, mostly when there are predicates on more than one column in a WHERE clause or JOIN clause, the 2014 cardinality estimation is a lot worse than in previous versions of SQL Server.
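Since SQL Server 2016 SP1, you can ask for the old estimator for a single query with the new OPTION (USE HINT) syntax. A hedged sketch, with made-up table and column names:

```sql
-- Run one query under the pre-2014 (legacy) cardinality estimator,
-- without changing the database compatibility level or any trace flags.
-- dbo.Orders / dbo.Customers are illustrative names only.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2016-01-01'
  AND o.OrderStatus = 'Open'
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```

The hint affects only this statement, which makes it a much gentler tool than flipping the whole database back to an older compatibility level.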
Continue reading “OPTION(USE HINT) – New SQL Server 2016 SP1 feature”
If you have ever studied normalisation of databases, you have probably come to the same conclusion as I have regarding NULL: it is best if NULL values in the database can be avoided, but a NULL-free database is not always easy to achieve. Let's look at an example:
Continue reading “What is NULL?”
Most database developers have been faced with the task to archive old data. It could look something like this:
CREATE TABLE dbo.Cars(
    CarID int identity(1,1) PRIMARY KEY,
    CarName nvarchar(100) NOT NULL  -- illustrative column; the original listing is truncated
);

CREATE TABLE dbo.Cars_Archive(
    CarID int NOT NULL,
    CarName nvarchar(100) NOT NULL,
    ArchivedDateTime datetime DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT PK_Cars_Archive PRIMARY KEY(CarID, ArchivedDateTime)
);
And updating a row would often require a stored procedure and some explicit transactions.
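With the OUTPUT clause, the archiving can instead be folded into the modifying statement itself. A minimal sketch, assuming the (truncated) table definitions above plus an illustrative CarName column:

```sql
-- Archive and delete a car in a single atomic statement:
-- the deleted row is captured by OUTPUT and inserted into Cars_Archive,
-- so no separate SELECT + INSERT + DELETE inside an explicit transaction is needed.
-- @CarID is assumed to be a parameter of the surrounding procedure.
DELETE dbo.Cars
OUTPUT deleted.CarID, deleted.CarName
INTO dbo.Cars_Archive(CarID, CarName)   -- ArchivedDateTime falls back to its DEFAULT
WHERE CarID = @CarID;
```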
Continue reading “Archiving with the OUTPUT clause”
I have had an annoying problem for a while. In a database used for a statistical survey system, reporting is painfully slow at the beginning of each reporting period.
The tables contain a few million rows. Ola Hallengren's index maintenance (which includes UPDATE STATISTICS) runs weekly. Each month is a new reporting period, and when a new reporting period opens, there are no rows yet for the current period. From the first day of the month we receive input, each batch adding fewer than 2,000 new rows to the table.
Continue reading “Statistics on ascending columns”
Reporting on any previous period is always consistent in execution time: around 3 seconds to produce a full report. That's acceptable performance. But when the current period is reported early in a reporting period, execution takes up to 10 minutes.
Here's an inline table-valued function (TVF) for generating time slots from a start date to an end date, given a certain length for each slot, in minutes.

This is useful for many applications, like scheduling systems, sales statistics broken down into certain slices of time, etc. The function does have some limitations, e.g. there can't be more than 100,000 minutes between start and end time. This is easily fixed by adding another CROSS JOIN to CTE2, or by changing the DATEADD functions to use hours instead of minutes if that fits your purpose.
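A sketch of such a function, using the classic cross-joined tally-table pattern. The names and exact shape are my assumptions, not necessarily the post's code:

```sql
-- Inline TVF generating time slots of @SlotMinutes between @StartDate and @EndDate.
-- CTE1 has 10 rows; five CROSS JOINs give 10^5 = 100,000 candidate rows,
-- which is where the 100,000-minute limit mentioned above comes from.
CREATE FUNCTION dbo.GenerateTimeSlots
(
    @StartDate   datetime,
    @EndDate     datetime,
    @SlotMinutes int
)
RETURNS TABLE
AS
RETURN
WITH CTE1 AS (
    SELECT n FROM (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) AS t(n)
),
CTE2 AS (
    SELECT 1 AS n
    FROM CTE1 a CROSS JOIN CTE1 b CROSS JOIN CTE1 c
         CROSS JOIN CTE1 d CROSS JOIN CTE1 e
),
Numbers AS (
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n FROM CTE2
)
SELECT DATEADD(minute, n * @SlotMinutes, @StartDate)       AS SlotStart,
       DATEADD(minute, (n + 1) * @SlotMinutes, @StartDate) AS SlotEnd
FROM Numbers
WHERE DATEADD(minute, n * @SlotMinutes, @StartDate) < @EndDate;
```

Because it is an inline TVF, the optimizer can expand it into the calling query rather than treating it as a black box.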
Continue reading “Generate time slots”
I'll start off with a disclaimer: I'm going to tell you about something that happened in a specific system environment. There's no general advice you can build on this specific scenario. I'm just posting it because I was myself surprised by what order of magnitude I was able to speed up a specific query by removing a small part of the work in the execution plan.
The other day I helped troubleshoot a database system. In a table with some 400 million records, a subset (50-60 million records) was to be deleted. The application kept timing out on this delete operation, so I advised the developer to split it into smaller chunks. I even helped write a T-SQL script to perform the delete in one-million-row chunks. The script was pretty basic: a WHILE loop which checked whether any rows fulfilling the WHERE condition of the delete were left in the table, and inside the loop a DELETE TOP (one million) followed by an explicit checkpoint.
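The loop was roughly of this shape (table name, column, and cutoff value are made up for illustration):

```sql
-- Delete qualifying rows in 1,000,000-row chunks to keep each transaction short.
-- dbo.BigTable, CreatedDate and the cutoff are illustrative assumptions.
DECLARE @cutoff date = '2015-01-01';

WHILE EXISTS (SELECT 1 FROM dbo.BigTable WHERE CreatedDate < @cutoff)
BEGIN
    DELETE TOP (1000000)
    FROM dbo.BigTable
    WHERE CreatedDate < @cutoff;

    CHECKPOINT;  -- let the log be reused between chunks (simple recovery model)
END;
```

Each iteration commits on its own, so the log never has to hold the whole 50-60 million row delete at once, and no single statement runs long enough to hit the application timeout.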
Continue reading “DELETE and Non Clustered Indexes”
I'm currently on a train from Gothenburg back home to Enköping. I have attended my first SQL Saturday (thanks Mikael Wedham and crew for a great event!). I also did my first ever public SQL presentation at the event: a session about SQL Server partitioning.
The presentation and demo scripts can be downloaded from http://www.sqlsaturday.com/433/Sessions/Details.aspx?sid=38722
Continue reading “Impressions from Sql Saturday 433”
On September 5th, the first ever SQL Saturday in Sweden is held, in my favourite Swedish city, Gothenburg. SQL Saturday conferences are held all over the world, and this first Swedish event is number 433. And yeah, the conference is free. A full day of free training. If you are in the neighbourhood, you want to be there. Check out the sessions and register here.
I'm very proud to have been selected as one of 24 speakers. My session, "Eight hours of work in 20 minutes", is a case study of how a data load has evolved: from basically SSIS-loading data into a table, through some index maintenance as part of the data load, to table partitioning. The line-up makes me somewhat nervous, but it will be great fun to make a public appearance. Old friends showing up at the event make it even better.
This is the first post on this blog. Future posts will be mostly about T-SQL.