Shrinking Database Files

There are times when you need to do things that you’re told not to do.

Scan the Internet, and you’ll find a lot of advice on Shrinking Database files. Generally, it’s “Don’t”.

Paul Randal – “Don’t do it”, Pinal Dave – “Shrinking Database is bad”, Brent Ozar – “Don’t touch that dial”.

In many cases, I’m in agreement. Unnecessary or repeated shrinking of Database files is a messy process, causing massive fragmentation, not to mention a huge amount of I/O. Sometimes, though, it has to be done.

I have shrunk Data Files, and I’m going to tell you why, and how I do it.

The best advice, often given, is not to Shrink Data Files at all: leave the space in the File, waiting for natural data growth to fill it once more. But that’s not always going to happen.

  • Implementing Data Compression can reduce the space required by up to 90 percent. You’re going to want that space back now, not over the next twenty years (there’s a quick sketch of enabling Compression after this list).
  • How about when a Database Developer takes a “local backup” of the largest Table inside a Database, causing the Database to grow, and then tells you “but I dropped the Table afterwards” – otherwise known as a “Jelly Flood”.
  • If there’s a disparity between environments regarding Compression settings, Developers can end up in a battle with deployment tools while trying to make the environments identical.
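
To give a rough idea of that first point, enabling PAGE compression is itself just a rebuild. A minimal sketch, where [dbo].[BigTable] is a placeholder name for illustration:

-- Estimate the saving first, if you want a preview of the reduction.
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'BigTable',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Rebuild the Table with PAGE compression applied.
ALTER TABLE [dbo].[BigTable] REBUILD WITH (DATA_COMPRESSION = PAGE);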

These have happened to me in the past, and I’ve found myself running DBCC SHRINKFILE.
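
For reference, the command itself is simple enough. A minimal sketch, where the logical file name and the target size (in MB) are placeholders to swap for your own:

-- Find the logical file names and current sizes for the current Database.
SELECT [name], size * 8 / 1024 AS size_mb
  FROM sys.database_files;

-- Shrink the named Data File down towards a target size, given in MB.
DBCC SHRINKFILE (N'MyDatabase_Data', 20480);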

When you run DBCC SHRINKFILE, SQL Server identifies Data pages at the end of the Data File. It picks them up and moves them to free space towards the beginning of the Data File, Data page by Data page. This is a slow process. SQL Server will squash pages in wherever it can, fragmenting your data all over the disk, making it slower to access.

Often, people get into a cycle of Shrinking Databases during the day only for their scheduled Index Maintenance to defragment them at night. At that point, the Data File will just grow again, undoing the Shrink and reclaiming the space (or even worse – maybe something else has already used the space that you released earlier – guess what happens then).

The whole process of using DBCC SHRINKFILE is slow… painfully slow. Time stands still.

At this point, it’s extremely hard to find out what SQL Server is doing and how long the Shrink is going to take.
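
One general-purpose place to look (nothing specific to this approach) is sys.dm_exec_requests, which does expose a percent_complete figure for DBCC SHRINKFILE, though the number it reports for a shrink is a rough guide at best:

-- Run from a second session while the Shrink is going.
-- The command column usually shows up as something like 'DbccFilesCompact'.
SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 1000 AS est_seconds_remaining,
       wait_type
  FROM sys.dm_exec_requests
 WHERE command LIKE 'Dbcc%';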

SQL Server doesn’t like Shrinking Data Files, so it’s not very efficient at doing it. Just like my kids, when given something to do that they don’t like, they will string out the job for as long as possible so that you either ask them to stop or (if they do complete the task), you won’t ask again.

A recommended practice is to create a new FileGroup and move all of the Data onto it, but this can also be very time consuming, provided that you have the space. You may be shrinking precisely because you have no space.

This led me to think about a more efficient way of performing the task – how about an Index Rebuild? SQL Server knows how to Rebuild Indexes more quickly and more efficiently, making better use of the CPU and memory available. While rebuilding the Index, data will be moved from the end of the Data File to a position “more inland”.

How can you tell what SQL Server is working on? SQL Server will take out Locks on Tables to move them safely.

If you check the Locking that’s going on while you’re running the SHRINK command, then you can see the Table that’s being moved.

The Query that I use looks like this:

-- Run from a separate session while the Shrink is going:
-- any Table that SQL Server has locked in order to move it will show up here.
SELECT DISTINCT '[' + s.[name] + '].[' + OBJECT_NAME(p.[object_id]) + ']' AS TableName
  FROM sys.dm_tran_locks l
  JOIN sys.partitions p
    ON l.resource_associated_entity_id = p.hobt_id
   AND l.resource_database_id = DB_ID()
  JOIN sys.objects o
    ON p.[object_id] = o.[object_id]
  JOIN sys.schemas s
    ON o.[schema_id] = s.[schema_id];

You may not get any results straight away as lots of other processes are going on, but when SQL Server gets stuck into a Table, it’ll show up in the results.

Take this Table name, stop the Shrink command and use ALTER TABLE [TableName] REBUILD. Once the Table rebuild has completed, set the SHRINK command to run again. Repeat the process until you achieve your required size.
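
In practice that cycle looks something like this, with the Table name being whatever the Locking Query returned (the names below are made up for the example):

-- Rebuild the Table that the Shrink was stuck on.
ALTER TABLE [dbo].[SomeLargeTable] REBUILD;

-- Then start the Shrink again, aiming at the same target size (in MB).
DBCC SHRINKFILE (N'MyDatabase_Data', 20480);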

This doesn’t work in every scenario – Heaps, for example, often stay put. These can be moved by adding, then removing, a Clustered Index.
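
A sketch of that trick for a Heap, with made-up index, Table and column names:

-- Creating a Clustered Index rewrites the Heap's pages into the free space...
CREATE CLUSTERED INDEX IX_Temp_Move ON [dbo].[SomeHeapTable] ([SomeColumn]);

-- ...and dropping it again converts the Table back to a Heap.
DROP INDEX IX_Temp_Move ON [dbo].[SomeHeapTable];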

Be aware that your Transaction Log will need to be big enough to store the Table rebuild. If it’s a big Table then you may just be creating an issue somewhere else.
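
It’s worth keeping an eye on the log headroom between rebuilds. One quick check (sys.dm_db_log_space_usage is available from SQL Server 2012 onwards):

-- How big is the current Database's log, and how much of it is in use?
SELECT total_log_size_in_bytes / 1048576 AS log_size_mb,
       used_log_space_in_bytes / 1048576 AS used_log_mb,
       used_log_space_in_percent
  FROM sys.dm_db_log_space_usage;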

Using this method, I’ve found that typically after a few Tables are moved, everything speeds up again, the file is shrunk and you’ve got your space back!

Usual disclaimers apply – try this out on a non-Production Database and use at your own risk. Never leave a Shrinking Database unattended.
