SQL Server has a long history: it has been around since the ’90s, and more than 20 years is a VERY long time as far as the IT world goes. Especially now that Microsoft has stepped up the release schedule for SQL Server (releases are now roughly yearly), I thought it might be a good time to go back and revisit the classic articles and recommendations by the gurus of the SQL scene, to see if they are still valid.

I was reading “Minimizing the impact of DBCC CHECKDB: DOs and DON’Ts” by Aaron Bertrand and wondered whether those six-year-old recommendations still hold, so I fired up my freshest SQL Server 2017 CU5 instance and started testing. Here we are.

I already had a 77GB database lying around (from my “insert test” article), which is pretty much the DB size used in the original article.

Step #1: I created a simple logging table for my test results in another database:

CREATE TABLE DBCC_Results (
    TestName  varchar(512),
    StartTime datetime2,
    EndTime   datetime2,
    Duration  AS DATEDIFF(SECOND, StartTime, EndTime) PERSISTED
)
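As a quick sanity check (not part of the original test), note that the persisted computed column calculates the duration for you at insert time; you only ever supply the two timestamps:

```sql
-- Duration is computed and persisted; computed columns are
-- excluded from the implicit INSERT column list
INSERT INTO DBCC_Results VALUES ('smoke test', '2018-01-01 10:00:00', '2018-01-01 10:02:51');

SELECT TestName, Duration FROM DBCC_Results;  -- Duration = 171 seconds
```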

Then, a simple script that runs DBCC CHECKDB with the various options and logs the results into the table just created:

DECLARE @t1 datetime2, @t2 datetime2

--Vanilla CHECKDB
SET @t1 = SYSDATETIME()
DBCC CHECKDB ([INSERT TEST])
SET @t2 = SYSDATETIME()
INSERT INTO TEST.dbo.DBCC_Results VALUES ('CHECKDB', @t1, @t2)

--CHECKDB WITH NO_INFOMSGS, ALL_ERRORMSGS
SET @t1 = SYSDATETIME()
DBCC CHECKDB ([INSERT TEST]) WITH NO_INFOMSGS, ALL_ERRORMSGS
SET @t2 = SYSDATETIME()
INSERT INTO TEST.dbo.DBCC_Results VALUES ('CHECKDB WITH NO_INFOMSGS, ALL_ERRORMSGS', @t1, @t2)

--CHECKDB WITH NO_INFOMSGS, ALL_ERRORMSGS +TF2549
DBCC TRACEON (2549)
SET @t1 = SYSDATETIME()
DBCC CHECKDB ([INSERT TEST]) WITH NO_INFOMSGS, ALL_ERRORMSGS
SET @t2 = SYSDATETIME()
DBCC TRACEOFF (2549)
INSERT INTO TEST.dbo.DBCC_Results VALUES ('TF 2549', @t1, @t2)

--CHECKDB WITH NO_INFOMSGS, ALL_ERRORMSGS +TF2562
DBCC TRACEON (2562)
SET @t1 = SYSDATETIME()
DBCC CHECKDB ([INSERT TEST]) WITH NO_INFOMSGS, ALL_ERRORMSGS
SET @t2 = SYSDATETIME()
DBCC TRACEOFF (2562)
INSERT INTO TEST.dbo.DBCC_Results VALUES ('TF 2562', @t1, @t2)

--CHECKDB WITH NO_INFOMSGS, ALL_ERRORMSGS +TF2549 +TF2562
DBCC TRACEON (2549, 2562)
SET @t1 = SYSDATETIME()
DBCC CHECKDB ([INSERT TEST]) WITH NO_INFOMSGS, ALL_ERRORMSGS
SET @t2 = SYSDATETIME()
DBCC TRACEOFF (2549, 2562)
INSERT INTO TEST.dbo.DBCC_Results VALUES ('TF 2549 + TF 2562', @t1, @t2)

--CHECKDB WITH NO_INFOMSGS, PHYSICAL_ONLY, ALL_ERRORMSGS
SET @t1 = SYSDATETIME()
DBCC CHECKDB ([INSERT TEST]) WITH NO_INFOMSGS, PHYSICAL_ONLY, ALL_ERRORMSGS
SET @t2 = SYSDATETIME()
INSERT INTO TEST.dbo.DBCC_Results VALUES ('CHECKDB WITH NO_INFOMSGS, PHYSICAL_ONLY, ALL_ERRORMSGS', @t1, @t2)

--CHECKDB WITH NO_INFOMSGS, PHYSICAL_ONLY, ALL_ERRORMSGS +TF2549
DBCC TRACEON (2549)
SET @t1 = SYSDATETIME()
DBCC CHECKDB ([INSERT TEST]) WITH NO_INFOMSGS, PHYSICAL_ONLY, ALL_ERRORMSGS
SET @t2 = SYSDATETIME()
DBCC TRACEOFF (2549)
INSERT INTO TEST.dbo.DBCC_Results VALUES ('PHYSICAL_ONLY + TF 2549', @t1, @t2)

--CHECKDB WITH NO_INFOMSGS, PHYSICAL_ONLY, ALL_ERRORMSGS +TF2562
DBCC TRACEON (2562)
SET @t1 = SYSDATETIME()
DBCC CHECKDB ([INSERT TEST]) WITH NO_INFOMSGS, PHYSICAL_ONLY, ALL_ERRORMSGS
SET @t2 = SYSDATETIME()
DBCC TRACEOFF (2562)
INSERT INTO TEST.dbo.DBCC_Results VALUES ('PHYSICAL_ONLY + TF 2562', @t1, @t2)

--CHECKDB WITH NO_INFOMSGS, PHYSICAL_ONLY, ALL_ERRORMSGS +TF2549 +TF2562
DBCC TRACEON (2549, 2562)
SET @t1 = SYSDATETIME()
DBCC CHECKDB ([INSERT TEST]) WITH NO_INFOMSGS, PHYSICAL_ONLY, ALL_ERRORMSGS
SET @t2 = SYSDATETIME()
DBCC TRACEOFF (2549, 2562)
INSERT INTO TEST.dbo.DBCC_Results VALUES ('PHYSICAL_ONLY + TF 2549 + TF 2562', @t1, @t2)
GO 10

Like Aaron, I’m executing the tests 10 times (with the GO 10 batch separator in SSMS) and averaging out the results.

The script should be self-explanatory, as everything is already detailed in the excellent original article.
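One detail worth calling out: the script enables the trace flags globally with DBCC TRACEON and turns them off again after each timed run. If you want to double-check that nothing is left on between tests (a sanity check I'd add, not part of the original script), you can ask for the status of the two flags:

```sql
-- Returns the status of the specified trace flags
-- (no rows / status 0 means they are not enabled)
DBCC TRACESTATUS (2549, 2562);
```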

Finally, the test results:

SELECT TestName, AVG(Duration) AS [Duration (s)]
FROM DBCC_Results
GROUP BY TestName
ORDER BY AVG(Duration) DESC

Test Name                                               Duration (s)
CHECKDB                                                 171
CHECKDB WITH NO_INFOMSGS, ALL_ERRORMSGS                 168
TF 2549                                                 166
TF 2562                                                 168
TF 2549 + TF 2562                                       168
CHECKDB WITH NO_INFOMSGS, PHYSICAL_ONLY, ALL_ERRORMSGS  98
PHYSICAL_ONLY + TF 2549                                 98
PHYSICAL_ONLY + TF 2562                                 120
PHYSICAL_ONLY + TF 2549 + TF 2562                       122

With modern hardware and the newest version of SQL Server, the once-suggested trace flags don’t seem to offer much improvement; some combinations actually slow the process down.

This is probably due to two things:

The design changes introduced in SQL Server 2016, which changed the way DBCC scans the objects (the new CheckScanner, as opposed to the old MultiObjectScanner)

The hardware that I’m testing this on, which has a VERY high I/O throughput compared to even the top of the enterprise line in 2012: the database sits on an M.2 NVMe SSD, which was happily reading 1.2GB/sec of data while DBCC was running
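If you want to verify the throughput claim on your own box rather than take my word for it, one way (a sketch, using the cumulative-since-startup counters, so you'd snapshot it twice and diff) is to watch the file stats DMV for the test database while CHECKDB is running:

```sql
-- Cumulative read stats for the test database's files;
-- sample twice during a CHECKDB run and diff num_of_bytes_read
SELECT DB_NAME(database_id) AS [Database],
       file_id,
       num_of_bytes_read,
       io_stall_read_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('INSERT TEST'), NULL);
```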

But is that really the case?

To verify my hardware assumption, I moved the database files from the fast SSD to a 10x slower RAID 5 array of old-school spinning disks. This way we’re DEFINITELY I/O bound, and any I/O optimization coming from the trace flags should be evident, right?

Well…

Test Name                                               Duration (s)
CHECKDB                                                 498
CHECKDB WITH NO_INFOMSGS, ALL_ERRORMSGS                 493
TF 2549                                                 496
TF 2562                                                 498
TF 2549 + TF 2562                                       491
CHECKDB WITH NO_INFOMSGS, PHYSICAL_ONLY, ALL_ERRORMSGS  434
PHYSICAL_ONLY + TF 2549                                 434
PHYSICAL_ONLY + TF 2562                                 492
PHYSICAL_ONLY + TF 2549 + TF 2562                       490

Even in a heavily I/O-bound scenario, the classic trace flags still don’t show any clear benefit.

Of course, this is just an example on my specific machine. You know what you should do? Test on your instance!