I was intrigued by the deferred-drop feature, which has been in SQL Server since SQL Server 2000 SP3, after watching Paul Randal’s Pluralsight course ‘Logging, Recovery and the Transaction Log’. Deferred drop is what allows the seemingly instant truncation or dropping of a table, regardless of how big the table is. I wanted to find out a little more about it, and after a quick Google the number one result was this blog post from Paul, which goes into the deferred-drop feature in more detail. After reading it, one particular comment in the post stood out:
‘It comes from the behaviour of DROP and TRUNCATE operations on large tables’
The word I picked up on was ‘large’, which seemed to suggest that the deferred-drop process only kicks in for tables past a certain size. So I set out to find, by running some tests, just how big a table has to be before deferred drop kicks in.
First, much like the original post, I set up a simple table in a clean database using the simple recovery model.
CREATE DATABASE DeferredDropTestDb
GO
ALTER DATABASE DeferredDropTestDb SET RECOVERY SIMPLE
GO
USE DeferredDropTestDb
GO
CREATE TABLE SomeTable
(
    SomeColumn CHAR(4000)
)
GO
Then I tried to find the tipping point by inserting a specific number of records into the table and checking how many pages the table took up in the database before truncating it, using the following script.
INSERT INTO SomeTable DEFAULT VALUES
GO 2040

SELECT used_page_count,
       reserved_page_count,
       row_count
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('SomeTable')
I kept a note of the reserved page count between tests before running the following to truncate the table and see whether the deferred-drop process kicked in. As per Paul’s blog post, we can tell whether deferred drop was used by monitoring the transaction log a few seconds after the actual truncate statement.
CHECKPOINT
GO
TRUNCATE TABLE SomeTable
GO
SELECT GETUTCDATE() AS 'DateTruncated'
GO
WAITFOR DELAY '00:00:08'
GO
SELECT [Current LSN],
       [Operation],
       [Begin Time],
       [Transaction ID],
       [Transaction Name],
       [Description]
FROM fn_dblog(NULL, NULL)
WHERE [Transaction Name] = 'DeferredAllocUnitDrop::Process'
Using the above, I truncated the table, waited a few seconds and then searched the transaction log for the deferred-drop transaction, which has the name “DeferredAllocUnitDrop::Process”. If the query found a record in the log, deferred drop was used; if it didn’t, the truncate bypassed the feature because the table wasn’t deemed large enough. I kept adjusting the number of records in the table and re-running the tests until I found that tipping point.
After a number of tests, adjusting the number of records inserted and also trying different column sizes/types, heaps versus clustered tables, non-clustered indexes and so on, it seemed that a reserved page count of approximately 1025 for the base table was what convinced the truncate statement to use deferred deallocation of the data pages. Anything less and it wasn’t triggered, even if non-clustered indexes took the total allocated pages over 1025.
I’m sure the internal algorithm may actually be a lot more complex than a simple reserved-page-count check, but from these simplistic tests, 1025 pages seemed to be the magic number. That does make some sense if you think about it: 1024 pages equates to exactly 128 extents. These tests were only for the TRUNCATE TABLE command; I didn’t test whether the behaviour is any different when dropping a table, so maybe that’s one for a future post.
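The extent arithmetic behind that observation is easy to sanity-check: an extent in SQL Server is eight 8 KB pages, so the 1024-page boundary lines up exactly with 128 extents. A quick back-of-envelope check:

```python
# Back-of-envelope check of the extent arithmetic behind the observed
# 1025-page tipping point. An extent in SQL Server is 8 pages of 8 KB each.
PAGES_PER_EXTENT = 8
PAGE_SIZE_BYTES = 8 * 1024

threshold_pages = 1024  # one page below the observed tipping point of 1025

extents = threshold_pages // PAGES_PER_EXTENT
reserved_bytes = threshold_pages * PAGE_SIZE_BYTES

print(extents)         # 128 extents exactly
print(reserved_bytes)  # 8388608 bytes, i.e. 8 MB of reserved space
```

So the apparent threshold sits one page past an exact 8 MB / 128-extent boundary, which fits the idea of a simple size cutoff.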
If you try this out yourself, the script above will cause deferred drop to occur. Change “GO 2040” to “GO 2020” to bring the number of reserved pages under 1025, and deferred drop will not be used.
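The row counts in that tip follow from simple page arithmetic. This is a rough sketch only: it assumes the standard figures of 8,096 usable bytes per data page and roughly 7 bytes of per-row overhead on top of the CHAR(4000) column, and it ignores the IAM pages and extent rounding that `reserved_page_count` also includes.

```python
import math

# Rough page arithmetic behind the GO 2040 / GO 2020 tip. The overhead
# figure (~7 bytes per row) is approximate, and reserved_page_count also
# counts IAM pages and extent rounding, which this sketch ignores.
USABLE_BYTES_PER_PAGE = 8096
ROW_SIZE = 4000 + 7  # CHAR(4000) column plus approximate per-row overhead

rows_per_page = USABLE_BYTES_PER_PAGE // ROW_SIZE  # 2 rows fit on each page

for rows in (2040, 2020):
    data_pages = math.ceil(rows / rows_per_page)
    print(rows, data_pages)  # 2040 -> 1020 data pages, 2020 -> 1010
```

With two rows per page, 2040 rows needs around 1020 data pages, which after allocation overhead lands just over the 1024-page boundary, while 2020 rows stays comfortably under it.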