Share backup files

When I have to establish a database mirror or restore a database in another environment, I usually restore directly from the original backup file without copying or moving it first. The restore is done over a UNC reference to the backup file.
This cannot be done in the Management Studio GUI, but must be done in T-SQL, as the GUI fails on a UNC reference to the backup file.
The UNC reference points to a share I create on the backup disk. I prefer to name the share "SQLBackup".
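A restore over a UNC reference can then look like this; the server, share, database, and file names are examples only:

```sql
-- Restore directly from a backup file on a UNC path (hypothetical names).
-- The SQL Server service account must have Read access to the share.
RESTORE DATABASE [MyDatabase]
FROM DISK = N'\\BackupServer\SQLBackup\MyDatabase.bak'
WITH
  MOVE N'MyDatabase_data' TO N'D:\MSSQL\Data\MyDatabase.mdf',
  MOVE N'MyDatabase_log'  TO N'E:\MSSQL\Log\MyDatabase.ldf',
  NORECOVERY,  -- leave the database restoring, e.g. as a mirror partner
  STATS = 10;
```

For a mirror the database is left in NORECOVERY; for a plain restore in another environment you would use RECOVERY instead.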
I have looked around for Best Practices on file shares and security. The best description I found is by Derek Melber (see Reference), but I have some issues...

In general it makes sense to control the detailed access and rights in NTFS, and grant broad access on the share. Derek suggests granting Full Control on the share to Authenticated Users in the domain. This is done while keeping in mind that the Windows default is that Everyone has Read access.
When we are dealing with database backup files, which contain sensitive data, I find this access a little too broad, as it grants access to too many accounts.

Instead I have granted specific Read access on the share, and rely on the general Read access for Everyone.
If Everyone is removed, which I have seen in several organisations, you also have to take care of the file access and rights.

And again: Keep in mind that the backup files are to be handled as sensitive data.


Derek Melber: "Share Permissions".


VLF count

The number of Virtual Log Files (VLFs) in a database is discussed by several people; some are listed in the Reference section below.
Michelle Ufford (SQLFool) wrote a blog entry that is the inspiration for this entry. Ufford's script is changed to fit my present needs.
The script is in three parts, where one part comes in two editions to handle either one database or all databases in an instance.

Part I (temporary data)


CREATE TABLE #results(
   [database_name] sysname
  ,[vlf_count] INT
  ,[log_file_count] INT
  ,[log_size_mb] NUMERIC(15,1)
);

CREATE TABLE #stage(  -- Columns match the output of DBCC LogInfo
   [file_id] INT
  ,[file_size] BIGINT
  ,[start_offset] BIGINT
  ,[f_seq_no] BIGINT
  ,[status] BIGINT
  ,[parity] BIGINT
  ,[create_lsn] NUMERIC(38)
);

The temporary table #stage holds the output of DBCC LogInfo as a staging area. The data is transformed into the temporary table #results.

Part IIa (Single database)

INSERT INTO #stage
EXECUTE (N'DBCC LogInfo WITH no_infomsgs');

INSERT INTO #results
SELECT DB_NAME()
  ,COUNT(*)
  ,COUNT(DISTINCT [file_id])
  ,(SELECT SUM([size]*8/1024.) FROM sys.database_files WHERE [type_desc] = N'LOG')
FROM #stage;

DROP TABLE #stage;

The value [size] is the number of 8 KiB database pages, which is recalculated to MiB in the column #results.log_size_mb.
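As a quick check of the conversion, a log file of 1280 pages of 8 KiB each is 10 MiB:

```sql
-- 1280 pages * 8 KiB per page / 1024 = 10.0 MiB
SELECT 1280 * 8 / 1024. AS [log_size_mb];
```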

Part IIb (All databases)

DECLARE @dbname nvarchar(128);
DECLARE csr_db CURSOR FAST_FORWARD FOR
  SELECT [name]
  FROM master.sys.sysdatabases;

OPEN csr_db;
FETCH NEXT FROM csr_db INTO @dbname;

WHILE (@@fetch_status <> -1)
BEGIN
  INSERT INTO #stage
  EXECUTE (N'DBCC LogInfo([' + @dbname + N']) WITH no_infomsgs');

  INSERT INTO #results
  SELECT @dbname
    ,COUNT(*)
    ,COUNT(DISTINCT [file_id])
    ,(SELECT SUM([size]*8/1024.)
      FROM master.sys.master_files
      INNER JOIN master.sys.databases
        ON [master_files].[database_id] = [databases].[database_id]
      WHERE [master_files].[type_desc] = N'LOG' AND [databases].[name] = @dbname)
  FROM #stage;

  TRUNCATE TABLE #stage;  -- Empty the staging table before the next database.

  FETCH NEXT FROM csr_db INTO @dbname;
END

CLOSE csr_db;
DEALLOCATE csr_db;
DROP TABLE #stage;

Ufford used the (undocumented) stored procedure master.sys.sp_MSforeachdb, where I use a cursor to avoid an undocumented object.

Part III (Analysis)

DECLARE @vlfPerGBHighLimit int;
SET @vlfPerGBHighLimit = 50;
DECLARE @vlfPerGBLowLimit int;
SET @vlfPerGBLowLimit = 2;

SELECT [database_name]
  ,[vlf_count]
  ,[log_file_count]
  ,([log_size_mb]/1024) AS N'log_size_gb'
  ,N'maintenance' = CASE
    WHEN [log_size_mb] < 1024 THEN N'Log too small to evaluate.'
    WHEN [vlf_count]/([log_size_mb]/1024) <= @vlfPerGBLowLimit THEN N'Too few VLFs. Shrink and grow log in 8 GB chunks.'
    WHEN [vlf_count]/([log_size_mb]/1024) >= @vlfPerGBHighLimit THEN N'Too many VLFs. Shrink and grow log in 8 GB chunks.'
    ELSE N'VLF count OK :-)'
  END
FROM #results
ORDER BY [vlf_count] DESC;

DROP TABLE #results;

This is inspired by a comment by Kendra Little on the blog entry by Michelle Ufford. I rewrote it to use a CASE expression instead of IF statements.
There was an error in SQL Server when expanding the log in multiples of 4 GB, which is why Tripp and others recommend expanding by 8000 MB instead of 8192 MB.
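The shrink-and-grow maintenance suggested in the analysis can be sketched like this, with a hypothetical database and log file name and a 24 GB target size, grown in chunks of 8000 MB:

```sql
USE [MyDatabase];

-- Shrink the log as far as possible.
-- May require one or more log backups first to free the active portion.
DBCC SHRINKFILE (N'MyDatabase_log', 0);

-- Grow the log back in chunks of 8000 MB (not 8192 MB) for a sane VLF layout.
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8000MB);
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 16000MB);
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 24000MB);
```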


Michelle Ufford (SQLFool): "Check VLF Counts"
Kimberly L. Tripp: "Transaction Log VLFs - too many or too few?"
Linchi Shea: "Performance impact: a large number of virtual log files" - Part I and Part II
Paul S. Randal: "Understanding Logging and Recovery in SQL Server"
David Levy (Adventures in SQL): "A Busy/Accidental DBA's Guide to Managing VLFs"