Why does a DB shrink take so much time?

We have often seen cases where a DB shrink task takes more time than we actually estimate. Imagine the shrink operation is going smoothly when you take a look at its progress, and a few hours later, when you check the percentage value again, you find that the shrink has stalled.

What could be the reason for this slowness? You would want to check whether there are any blocked processes at the back end, or any other unavoidable reasons for the very slow progress.

Have a look at Perfmon and check the disk performance counter Avg. Disk sec/Transfer to understand whether there are any disk performance issues (look at the drives hosting the data and log files), and get hold of the Windows team if there are.
The criterion suggested by Microsoft: if Avg. Disk sec/Transfer is larger than 0.09 seconds, it indicates a disk performance issue.

Also, enable a trace and verify what kind of data your shrink task is moving. When LOB (blob) data comes into the picture, the shrink operation is bound to take more time than expected.

The blog article below (by Paul Randal) explains why LOB data makes shrink run slowly.

What to do to get space released to the OS quickly?
Instead of running a shrink of the whole DB, execute DBCC SHRINKFILE and release space in small chunks. Say, after the application's data housekeeping, you see that the DB has 40 GB of free space; don't go for all 40 GB in one shot. Try 5 GB batches and run multiple iterations.
Also, shrink works in batches of ~32 pages, so it can be cancelled and only the last batch will be rolled back. This means we can start it, kill it, and restart it at a later time.
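As a minimal sketch of the iterative approach (the file name MyDB_Data and the sizes are hypothetical; DBCC SHRINKFILE target sizes are in MB):

```sql
-- Shrink the data file in 5 GB steps instead of one big operation.
USE MyDB;
GO
DBCC SHRINKFILE (N'MyDB_Data', 40960);  -- step 1: down to 40 GB
GO
DBCC SHRINKFILE (N'MyDB_Data', 35840);  -- step 2: down to 35 GB
GO
-- ...repeat until the desired size is reached.
-- Each step can be cancelled; only the in-flight batch rolls back.
```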

Best Practices
1) DBCC SHRINKFILE and DBCC SHRINKDATABASE hold only very small transactions, but it is better not to run them at the same time as DDL statements such as index rebuilds, which may require a schema lock and cause waiting.
2) Keep in mind that DB shrink is generally not recommended anyway, since it causes a lot of fragmentation and can ultimately become a major culprit in slowing down your SQL Server.
3) Consider using trace flag 2548, which skips the compaction step during the shrink (LOB data is compacted by default during a shrink operation). If we disable LOB data compaction, the shrink time will be reduced. (Recommendation: verify the behaviour in a test environment first, before going to PRD servers.)
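A sketch of how the trace flag might be combined with the chunked shrink (file name and size are hypothetical; as noted above, verify trace flag 2548's behaviour in a test environment first):

```sql
-- Skip LOB compaction during shrink via trace flag 2548.
DBCC TRACEON (2548);
DBCC SHRINKFILE (N'MyDB_Data', 40960);  -- hypothetical file/target size
DBCC TRACEOFF (2548);
```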

Hope this helps.

Aman Kharbanda


Contained Databases – New leaf from SQL Server 2012

SQL Server DB migration comes along with some added painful tasks, such as moving the user accounts. Contained Databases look like a great feature for people who have to go through this pain again and again.

SQL Server users reside either in Windows Active Directory or at the SQL Server level as SQL Server logins. So when we migrate a SQL Server database from one server to another, these users have to be recreated.
If you have a long list of user accounts, you must dedicate a set of hours to this particular task alone.

So one of the requirements, from an easy-migration perspective, is to create databases which are self-contained: in other words, a database with its metadata, security information, etc. within the database itself, so that when we migrate the database, we migrate everything with it. That's where the concept of a Contained Database came into the picture in SQL Server 2012.

Enable it at the server level (through the GUI)-

It can also be enabled using the T-SQL below-
sp_configure 'show advanced options', 1
sp_configure 'contained database authentication', 1
RECONFIGURE WITH OVERRIDE

Enable it at the DB level-
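In T-SQL this corresponds to setting the database's containment option (the database name TestDB is a hypothetical example):

```sql
-- Set the database to partial containment
-- (the contained type supported in SQL Server 2012).
ALTER DATABASE TestDB SET CONTAINMENT = PARTIAL;
```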

Now, let's test and see whether this Contained DB concept actually works. We want the user credentials to be part of the database.

1) To test, create a user as a "SQL user with password".
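The same user can also be created with T-SQL (user name and password here are hypothetical examples):

```sql
USE TestDB;
GO
-- A contained database user with its own password:
-- no server-level login is required.
CREATE USER ContainedUser WITH PASSWORD = N'Str0ngP@ssw0rd!';
```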

Now, if you try to log in with the user just created, you will get an error like the one below.
This helps us understand that the user is not available at the SQL Server (instance) level.


Now click on Options and specify the database name under 'Connect to database'; you should be able to log in. This proves that the user is part of the database and not of the SQL Server instance.


Best Practices while implementing Contained Databases-
1) As a best practice, do not create contained database users with passwords that have the same name as SQL Server logins.
2) Users in a contained database who are members of the db_owner or db_securityadmin fixed database roles (ALTER ANY USER permission) can grant access to the database without the knowledge or permission of the DBA. This is an added risk, so if a DB is partially contained, DBAs should periodically audit the capabilities of the users and modules in contained databases.
3) Do not configure contained databases to auto-close. If closed, opening the database to authenticate a user consumes additional resources and could contribute to a denial-of-service attack.

Hope this helps and gives you a brief understanding of this new feature introduced in SQL Server 2012.


Aman Kharbanda

Implementing Transparent Data Encryption – SQL Server 2008

Continuing from my earlier post 'Security in SQL Server – Implementing Encryption', where I described the implementation of encryption in SQL Server and briefly covered its process workflow, here I am posting on the same topic again with some screenshots, for your convenience and a smoother implementation.
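For reference, the TDE implementation boils down to four T-SQL steps; the database, certificate, and password names below are hypothetical placeholders:

```sql
USE master;
GO
-- 1) Create a database master key in master.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'Str0ngP@ssw0rd!';
GO
-- 2) Create a server certificate protected by the master key.
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
-- 3) Create the database encryption key in the user database.
USE MyDB;
GO
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
-- 4) Turn encryption on.
ALTER DATABASE MyDB SET ENCRYPTION ON;
```

Back up the certificate and its private key immediately after creating them; without them the encrypted database cannot be restored elsewhere.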

SQL Server Data Collector – a free monitoring tool.

Taking a close look at the monitoring features available in SQL Server 2008 and higher versions, there is quite an interesting and easy utility available for use: the Data Collector.

Below are some basic details on this facility, which is readily available in SQL Server, and how to enable it.

What is Data Collector?
The Data Collector is an optional feature of SQL Server 2008 that collects and stores information about a SQL Server 2008 instance's status and performance over time, using pre-defined data collection sets.

What version of SQL Server supports it?
SQL Server 2008 and above features this monitoring utility.

How it is beneficial?
The data collector enables you to adjust the scope of data collection to suit your test and production environments. The data collector also uses a data warehouse, a relational database that enables you to manage the data that you collect by setting different retention periods for your data.

What details does it capture?
The Data Collector includes three default data collection sets used to collect various data from SQL Server instances:

Disk Usage // Server Activity // Query Statistics

1.) Disk Usage:
Disk space used by databases; data file and log file growth rate of a database on a SQL Server 2008 instance.

2.) Query Statistics:
Execution count of a query; total duration of a query's execution;
I/O cost and CPU utilization of a query on a SQL Server 2008 instance;
number of logical disk reads performed by the server on behalf of the event;
number of physical disk writes performed by the server on behalf of the event.

3.) Server Activity:
Resource consumption details such as CPU, memory, disk I/O and network usage; SQL Server waits; and SQL Server activity (user connections, logins, logouts, transactions, batch requests, and SQL compilations and recompilations) for a SQL Server 2008 instance, as well as for the host OS where data collection is configured.

How to enable Data Collector?
To enable the data collector using SSMS – GUI
– In Object Explorer, expand the Management node.
– Right-click Data Collection, and then click Enable Data Collection.

To disable the data collector
– In Object Explorer, expand the Management node.
– Right-click Data Collection, and then click Disable Data Collection.
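If you prefer T-SQL over the GUI, the collector can also be toggled with stored procedures in msdb (a sketch; verify their availability on your build):

```sql
USE msdb;
GO
-- Enable the data collector instance-wide.
EXEC dbo.sp_syscollector_enable_collector;
GO
-- ...and to disable it again:
EXEC dbo.sp_syscollector_disable_collector;
```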

Screenshots showing what kind of reports you will get upon enabling this feature-

Server Activity ::


Query Statistics::


Disk Usage::


Limitations of the Data Collector
1) It can only collect data from SQL Server 2008 and later (not previous versions).
2) Memory, CPU and disk I/O resources are consumed on the SQL Server instances being monitored.
3) In most cases, a dedicated SQL Server instance is required, if many instances are to be monitored.
4) Customizing data collection sets and reports is not particularly easy.
5) Once installed, it can’t be uninstalled, only disabled.

I hope this information is useful to you and that you can implement it in your respective environments without spending more on third-party monitoring tools.


Aman Kharbanda