Out of control Database Log Files

I have previously written about the issues caused by Shrinking Data Files in SQL Server, along with some genuine reasons why you might need to do it, and how.

Shrinking Log files is a different matter. Log files can grow enormously for a few reasons, quite often by accident.

A common pattern occurs when someone has set the Recovery Model on a Database to “Full” without scheduling Log Backups. It’s often done with good intentions blended with a touch of ignorance.
Setting the Recovery Model on a Database to “Full” can provide point-in-time recovery, allowing you to meet your RPO (Recovery Point Objective). A Recovery Point Objective is defined by the Business – essentially it defines the amount of data that could be at risk in the event of a Disaster. For example, the Business may agree that it could lose five minutes of processing following a catastrophic “event”. In this case, running a Database in the “Full” Recovery Model with Log Backups scheduled every five minutes would allow data to be restored to a time of your choice, losing at most the five minutes before the “event” happened.
If you haven’t got Log Backups scheduled, then that’s not possible.

Using a Maintenance Tool such as Ola Hallengren’s Maintenance Solution or Minionware Backup can help enormously, as the default setting for User Databases means that if a Database exists, it will be backed up. Any Database you add to the SQL Server is picked up by the Backup jobs automatically. Make sure that you perform a Full Backup first, and the Log Backups will then follow.
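
As a rough sketch of what that looks like with Ola Hallengren’s solution (the backup folder and retention value here are placeholders – check the parameters against the documentation for your version), a Log Backup job for all User Databases would call something along these lines:

EXECUTE dbo.DatabaseBackup
    @Databases   = 'USER_DATABASES',   -- covers every User Database, including ones added later
    @Directory   = N'X:\SQLBackups',   -- placeholder backup folder/share
    @BackupType  = 'LOG',              -- transaction Log Backup
    @Verify      = 'Y',
    @CleanupTime = 168;                -- remove backup files older than 168 hours (7 days)

Schedule that via a SQL Agent job at an interval that matches your RPO – every five minutes in the example above.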

What happens when you’re not doing Log Backups? Not only do you lose the ability to do a “Point In Time Recovery”, but the Log file will also just continue to grow. Eventually the disk will fill, causing transactions on the Database to fail. This is made even worse if everything has been installed on the C: Drive, causing SQL Server itself to stop working. No, don’t laugh – I have seen this happen a few times.
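
A quick way to spot Databases heading in this direction (a sketch – nothing here is specific to any one environment) is to check which “Full” Recovery Databases are waiting on a Log Backup, and how full their Log files are:

SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc        -- LOG_BACKUP means the Log cannot clear until a Log Backup runs
FROM   sys.databases
WHERE  recovery_model_desc = 'FULL';

DBCC SQLPERF (LOGSPACE);          -- Log file size and percentage used for every Database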

If this happens to you, then the ideal solution would be to perform a Log Backup and then shrink the Log File. Performing a Log Backup allows SQL Server to free space within the Transaction Log, which is why performing regular Log Backups keeps the size of your Log files under control. Performing a Log Backup isn’t always possible though – an example would be where the Database is, say, 10 GB, the Log file is 200 GB, and there’s not enough disk space to back up the Log file (or it might take too long).
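
If you do have the space, the Log Backup itself is straightforward (a sketch – the Database name and path below are placeholders):

BACKUP LOG [MyDatabase]
TO DISK = N'X:\SQLBackups\MyDatabase_log.trn'
WITH CHECKSUM, COMPRESSION;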

There is an option to effectively “drop” the current Transaction Log by switching the Recovery Model.
By switching the Recovery Model of the Database to “Simple”, the committed transactions in the Log file are cleared. Be aware that this means you lose the ability to do a point-in-time recovery of your data up to now – you won’t lose data, but in the event of a problem you can only go back to your most recent Full Backup.
The good thing, though, is that after fixing this issue your Backups will be valid again in future – provided that you follow the next steps, which are to set the Recovery Model back to “Full” and schedule Backups again. You will need to take a Full Backup (SQL Server uses this as a “base point”) and then set your Log Backups running at regular intervals.
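
The whole sequence looks something like this (a sketch only – the Database name and backup path are placeholders, and it assumes you accept losing point-in-time recovery up to this moment):

USE [MyDatabase];

ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE;
CHECKPOINT;   -- in Simple recovery, a checkpoint clears committed transactions from the Log

ALTER DATABASE [MyDatabase] SET RECOVERY FULL;

BACKUP DATABASE [MyDatabase]   -- the new "base point" for the Log chain
TO DISK = N'X:\SQLBackups\MyDatabase_full.bak'
WITH CHECKSUM, COMPRESSION;
-- ...then re-enable / reschedule your regular Log Backups.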

You may now use DBCC SHRINKFILE to shrink your Log file back down to a reasonable size. What a lovely statement – define “reasonable size”?
Your Transaction Log should be big enough to handle the largest Transaction that you will be running, and then a bit more.
It’s possible that the largest Transaction will occur during routine maintenance – Index Rebuilds, for example – in which case that’s the minimum Log File size that you need. I often start with the “folklore” setting of 25% of the Data File size and let the Log file grow out automatically. Provided that you’re performing regular Log Backups and Autogrowth is on, it should grow to its optimum size. Once it’s there, leave it alone! Exceptional circumstances do occur, such as a one-off Data Import, and the Log file should be shrunk back down afterwards – but do not get into a loop of expanding / shrinking Log files; this can be (almost) as bad as doing it for Data files.
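
The shrink itself is a one-liner once you know the Log file’s logical name (a sketch – the names and the 10 GB target below are illustrative, roughly 25% of an assumed 40 GB Data file):

USE [MyDatabase];
SELECT name, type_desc FROM sys.database_files;   -- find the Log file's logical name

DBCC SHRINKFILE (N'MyDatabase_log', 10240);       -- target size in MB (10 GB)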

While you are there, check your file AutoGrowth settings to make sure that a) Autogrowth is on, and b) Autogrowth increments are fixed sizes, not percentages. This will help you avoid another potential pain point with Log Files – too many Virtual Log Files (VLFs). I have written about this in the past – it can have very nasty consequences.
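
Something like the following (a sketch – the Database and file names, and the 512 MB increment, are placeholders) will show which files are still growing by a percentage, and switch a Log file over to a fixed increment:

SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       growth               -- a percentage when is_percent_growth = 1
FROM   sys.master_files
WHERE  is_percent_growth = 1;

ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_log', FILEGROWTH = 512MB);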

Learnings and Additional Links:
Once again, know your environment. Find the SQL Servers on your Network before their problems find you. It’s easier to deal with setup issues such as not having Log Backups enabled before they become critical.

Great post by Kimberly Tripp on setting up your VLFs correctly

Transaction Log VLFs – too many or too few?
