Error: 17836 // Length specified in network packet payload did not match number of bytes read.

One of the common steps in hardening SQL Server is changing the port from the default 1433 to a custom one.

While such a change is being made, application teams may unknowingly keep trying to establish connections, and in turn we notice the errors below in the SQL Server error log file-

Error: 17836, Severity: 20, State: 17.
Length specified in network packet payload did not match number of bytes read; the connection has been closed. Please contact the vendor of the client library. 

Users can lose connectivity to SQL Server when such an error message is reported in the SQL Server error log file.

This issue occurs when the "Detect services running on non-standard ports" option is enabled in the Discovery module of a scanner or antivirus product. The event is logged because SQL Server 2005/2008/2008 R2 is unable to interpret the incoming packet.

Perform an nslookup of the CLIENT IP address listed in the error message to find out which computer is connecting. Then check that machine and determine what specifically is connecting to the SQL Server. You might get more information by running a SQL Trace for the Errors and Warnings event class and including the ClientProcessID column in the trace data. When the error spikes, you may get the PID of the process connecting from that machine, and you can then find that process in Task Manager on that machine by adding the PID to the data displayed (View -> Select Columns).
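As a quick first step, the error log itself can be filtered for these events from T-SQL. This is a minimal sketch using the `xp_readerrorlog` extended procedure; the matching rows include the `[CLIENT: x.x.x.x]` address you can then feed into nslookup:

```
-- Search the current SQL Server error log (log 0, type 1 = SQL error log)
-- for entries mentioning error 17836.
EXEC master.dbo.xp_readerrorlog 0, 1, N'17836', NULL;
```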

If you have antivirus software running on your system and SQL Server is not running on the default port, disable the "Detect services running on non-standard ports" option in the scan configuration.

Hope this helps.

Aman Kharbanda


Troubleshooting 18456 :: Login Failure Alerts

At some point in time, every SQL Server DBA must have dealt with login failure alerts carrying error 18456 with different states and severity numbers.
Older versions of SQL Server did not present much data in the error log about these login failures, which makes them a little tricky for new DBAs.

Sample Alerts
2013-07-23 00:02:00.34 Logon     Error: 18456, Severity: 14, State: 8.
2013-07-23 00:02:00.34 Logon     Login failed for user ‘<user name>’. [CLIENT: <ip address>]

Severity indicates the seriousness of the error, while state tries to reflect the source of the problem.

Aaron Bertrand has written a very useful blog describing all the states in detail.

Error State    Error Description
2 and 5        Invalid userid
6              Attempt to use a Windows login name with SQL Authentication
7              Login disabled and password mismatch
8              Password mismatch
9              Invalid password
11 and 12      Valid login but server access failure
13             SQL Server service paused
16             Default database was inaccessible, most often because the database has been removed, renamed, or is offline (it may be set to Auto Close)
18             Change password required

You may also use a SQL Server Profiler trace to decode the exact error and get more details about it (preferred event classes to select) –
– Audit Login Failed (under Security Audit)
– User Error Messages (under Errors & Warnings)
– Error log (under Errors & Warnings)

State 16 means the required database is inaccessible, offline, or removed. In this case, check which database your login is trying to access, or whether an agent job scheduled at the back end is trying to reach the missing database or run a stored procedure inside it.
A SQL Server Profiler trace will surely help in such cases to identify the problematic jobs.
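For a quick first check against the states above, the login's metadata can be inspected directly. This is a sketch, substituting the login name reported in the alert:

```
-- Check whether the login is disabled, must change its password, or
-- points at a missing default database (relevant to states 7, 16, 18).
SELECT  name,
        is_disabled,
        default_database_name,
        LOGINPROPERTY(name, 'IsMustChange') AS must_change_password
FROM    sys.sql_logins
WHERE   name = N'<user name>';   -- the login from the alert
```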

Hope this helps you to troubleshoot with ease 🙂

Aman Kharbanda

Why is my transaction log file growing rapidly?

The blog's headline must have given you a brief idea of the context of this issue.

Many a time this issue comes up when the settings are not in order, and we end up getting ‘Error: 9002, Severity: 17, State: 2 Transaction Log of XYZ database is full’.

With this error, SQL Server may mark databases as suspect because of a lack of space for transaction log expansion.

Probable causes and adverse effects of this issue:
i) A very large transaction log file, which can cause transactions to fail and start rolling back.
ii) Transactions may take a long time to complete.
iii) Performance issues may occur.
iv) Blocking may occur.

How do I stop it from eating up all my disk space? What should I do?
1) Truncate the inactive transactions in your transaction log by performing an immediate log backup. (The inactive part of the transaction log file contains completed transactions that SQL Server no longer needs for recovery. SQL Server reuses this truncated, inactive space in the transaction log instead of letting the log continue to grow and consume more space.)
2) Shrink the transaction log file. (A log backup truncates the log internally but does not reduce the physical file size, so you still need a shrink task to reclaim disk space.)
3) Prevent the log file from growing unexpectedly –
    a) Pre-size the log file generously, as long as there is space on the drive.
    b) Configure the auto-grow setting (still being careful about free space on the disk).
    c) Consider changing the recovery model to Simple (you can change the recovery model from full to simple if you do not need the transaction log for point-in-time disaster recovery).
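The steps above can be sketched in T-SQL as follows (the database name XYZ comes from the sample error; the logical log file name, target size, and backup path are illustrative):

```
-- 1) See why the log cannot be truncated (e.g. LOG_BACKUP, ACTIVE_TRANSACTION).
SELECT name, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = N'XYZ';

-- 2) Truncate the inactive portion with a log backup (full/bulk-logged recovery).
BACKUP LOG XYZ TO DISK = N'X:\Backups\XYZ_log.trn';

-- 3) Shrink the physical log file back to a sensible size (target in MB).
DBCC SHRINKFILE (N'XYZ_log', 1024);

-- 4) Or, if point-in-time recovery is not required, switch to simple recovery.
ALTER DATABASE XYZ SET RECOVERY SIMPLE;
```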

Hope this clarifies.

Aman Kharbanda

Move the DB Files without taking the database into offline mode

Move the DB Files without taking the database into offline mode-

Over time, when you see the drive hosting the database running out of space, you tend to approach the Wintel/storage teams to add another LUN and allocate more disk space, or you ask the application teams to housekeep some of their data.

What if there is no such possibility, and you are stuck in a situation where moving the files to another, bigger drive becomes your ultimate challenge, without any downtime? Yes, management hates the word ‘Downtime’ 🙂

We all must have used the attach/detach method, which involves detaching the database, moving the files, then re-attaching the database. Another method involves taking the database offline, running the ALTER DATABASE command to change the file locations, moving the files, and bringing the database back online.

Common limitation with these methods – The database has to be offline.

There is another approach that avoids this situation – DBCC SHRINKFILE (logical_file_name, EMPTYFILE).

First, create a new file using the ALTER DATABASE command, then move the data using the DBCC SHRINKFILE command with the EMPTYFILE option.
Finally, use the ALTER DATABASE command again to remove the now-empty file.

Excerpts from BOL-
EMPTYFILE – Migrates all data from the specified file to other files in the same filegroup. Because the Database Engine no longer allows data to be placed in the empty file, the file can be removed by using the ALTER DATABASE statement.

USE Test;
-- Create a data file and assume it contains data.
ALTER DATABASE Test ADD FILE (
    NAME = Testdata,
    FILENAME = 'X:\data.ndf',
    SIZE = 100MB
);
-- Empty the data file.
DBCC SHRINKFILE (Testdata, EMPTYFILE);
-- Remove the data file from the database.
ALTER DATABASE Test REMOVE FILE Testdata;

Simple isn’t it! Hope this helps.

Aman Kharbanda

Shrink DB takes more time?

We have often seen occurrences where a DB shrink task takes longer than we estimate. Imagine the shrink operation is going smoothly when you look at its progress, and then a few hours later, when you look at the percentage value again, you find that the shrink has stalled.

What could be the reason for this slowness? You would want to check whether there are any blocked processes at the back end, or other causes for the very slow progress.

Take a look at perfmon and check the disk performance counter Avg. Disk sec/Transfer to understand whether there are any disk performance issues (look at the drives hosting the data/log files), and engage the Windows team if there are.
The criterion suggested by Microsoft is: if Avg. Disk sec/Transfer is larger than 0.09, it indicates a disk performance issue.
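The same latency can also be observed from inside SQL Server via the I/O statistics DMV, which is a useful cross-check against the perfmon counter. A minimal sketch:

```
-- Per-file average I/O latency (ms); compare with Avg. Disk sec/Transfer.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  * 1.0 / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms * 1.0 / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN   sys.master_files AS mf
       ON mf.database_id = vfs.database_id
      AND mf.file_id     = vfs.file_id;
```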

Also, enable a trace and check what kind of data your shrink task is moving. When BLOB data comes into the picture, the shrink operation is bound to take more time than expected.

The blog article below (by Paul Randal) explains why LOB data makes shrink run slowly.

What to do to get space released to the OS quickly?
Instead of running a DB shrink, execute DBCC SHRINKFILE and release space in small chunks. Say, after application data housekeeping you see the database has 40 GB of free space: don't go for the whole 40 GB in one shot. Try 5 GB batches and run them in multiple iterations.
Also, shrink works in batch sizes of ~32 pages, so it can be cancelled and only the last batch is rolled back. This means we can start it, kill it, and restart it at a later time.
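The chunked approach above can be sketched as a simple loop; the logical file name and the sizes here are hypothetical (a data file of ~140,000 MB with ~40 GB free):

```
-- Illustrative only: release ~40 GB in 5 GB steps instead of one big shrink.
DECLARE @target_mb INT = 135000;     -- first target, 5 GB below current size
WHILE @target_mb >= 100000           -- stop once ~40 GB has been released
BEGIN
    DBCC SHRINKFILE (N'MyDB_Data', @target_mb);
    SET @target_mb -= 5000;          -- next 5 GB chunk
END;
```

Since each DBCC SHRINKFILE call is a separate statement, the loop can be stopped between iterations with minimal work lost.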

Best Practices
1) DBCC SHRINKFILE and DBCC SHRINKDATABASE hold very small transactions. It is better not to run them at the same time as DDL statements such as index rebuilds, which may require schema locks and cause waiting.
2) Note that shrinking a database is not generally recommended, since it causes a lot of fragmentation and ultimately becomes a major culprit in slowing down your SQL Server.
3) Consider using trace flag -T2548, which skips the compaction step during the shrink (LOB data is compacted by default during a shrink operation). Disabling LOB data compaction reduces the shrink time. (The recommendation is to verify this in a test environment first, before using it on production servers.)

Hope this helps.

Aman Kharbanda

Contained Databases – New leaf from SQL Server 2012

SQL Server DB migration comes with some added painful tasks, such as moving the user accounts. Contained Databases seem to be a great feature for people who have to go through this pain again and again.

SQL Server users reside either in Windows Active Directory or at the SQL Server level as SQL Server logins. So when we migrate a SQL Server database from one server to another, these users have to be recreated.
If you have a long list of user accounts, you must dedicate a good number of hours to this particular task.

So one of the requirements, from an easy-migration perspective, is to create databases that are self-contained: a database that carries its metadata, security information, and so on within the database itself, so that when we migrate the database, we migrate everything with it. That is where the concept of the Contained Database came into the picture in SQL Server 2012.

Enable it at Server level (Thru’ GUI)-

It can also be enabled using the T-SQL below:
sp_configure 'show advanced options', 1;
RECONFIGURE;
sp_configure 'contained database authentication', 1;
RECONFIGURE WITH OVERRIDE;

Enable it at DB level-
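In T-SQL, containment is set per database with ALTER DATABASE; the database name below is illustrative:

```
-- Switch an existing database to partial containment (SQL Server 2012+).
USE [master];
ALTER DATABASE MyContainedDB SET CONTAINMENT = PARTIAL;
```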

Now, let's test whether this Contained DB concept actually works. We want the user credentials to be part of the database.

1)  To test, create a user as “SQL User with password”.
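In T-SQL, this corresponds to creating a contained user with a password inside the database itself; the names here are illustrative:

```
-- Create a contained database user (a "SQL user with password"),
-- stored in the database rather than as a server-level login.
USE MyContainedDB;
CREATE USER ContainedUser WITH PASSWORD = N'Str0ngP@ssw0rd!';
```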

Now, if you try to log in with the user just created, you will get an error like the one below.
This helps us understand that the user is not available at the SQL Server level.


Now click Options and specify the database name in ‘Connect to database’; you should then be able to log in. This proves that the user is part of the database, not of the SQL Server instance.


Best Practices while implementing Contained Databases-
1) As a best practice, do not create contained database users with passwords who have the same name as SQL Server logins.
2) Users in a contained database who are members of the db_owner or db_securityadmin fixed database roles (or who hold the ALTER ANY USER permission) can grant access to the database without the knowledge or permission of the DBA. This is an added risk, so if a database is partially contained, DBAs should periodically audit the capabilities of the users and modules in it.
3) Do not configure contained databases to auto close. If closed, opening the database to authenticate a user consumes additional resources and could contribute to a denial of service attack.

Hope this helps and gives you a brief idea/understanding of this new feature introduced in SQL Server 2012.


Aman Kharbanda

Implementing Transparent Data Encryption – SQL Server 2008

Continuing from my earlier post ‘Security in SQL Server – Implementing Encryption‘, where I described the implementation of encryption in SQL Server and gave a brief overview of its process workflow, here I am once again posting on the same topic, this time with screenshots, for your convenient understanding and a smoother implementation.