Before you plan on moving your workloads from an on-premises SQL Server to the cloud, if your preferred public cloud vendor is going to be AWS and you wish to use their DBaaS, i.e., RDS, then here are a few limitations, as of this writing, that you may encounter with SQL Server.
As always with every cloud vendor, there are product updates almost every day; refer to their website for more up-to-date changes.
Limits for Microsoft SQL Server DB Instances
The Amazon RDS implementation of Microsoft SQL Server on a DB instance has some limitations you should be aware of:
- You can create up to 30 databases on each of your DB instances running Microsoft SQL Server. The Microsoft system databases, such as model, don’t count toward this limit.
- Some ports are reserved for Amazon RDS use and you can’t use them when you create a DB instance.
- Amazon RDS for SQL Server does not support importing data into the msdb database.
- You can’t rename databases on a DB instance in a SQL Server Multi-AZ with Mirroring deployment.
- The maximum storage size for SQL Server DB instances is the following:
- General Purpose (SSD) storage: 16 TB for all editions
- Provisioned IOPS storage: 16 TB for all editions
- Magnetic storage: 1 TB for all editions
If you have a scenario that requires a larger amount of storage, you can use sharding across multiple DB instances to get around the limit. This approach requires data-dependent routing logic in applications that connect to the sharded system. You can use an existing sharding framework, or you can write custom code to enable sharding.
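As a minimal sketch of that data-dependent routing logic (the table and column names here are hypothetical, not part of any framework), the application could consult a small shard map that assigns key ranges to DB instance endpoints:

```sql
-- Hypothetical shard map stored in a small "directory" database.
-- Each row maps a range of customer IDs to the RDS instance that holds them.
CREATE TABLE ShardMap (
    RangeStart       INT           NOT NULL,
    RangeEnd         INT           NOT NULL,
    InstanceEndpoint NVARCHAR(256) NOT NULL  -- e.g., an RDS endpoint DNS name
);

-- The application looks up the endpoint for a given customer ID,
-- then opens its connection against that instance.
SELECT InstanceEndpoint
FROM ShardMap
WHERE @CustomerId BETWEEN RangeStart AND RangeEnd;
```

An off-the-shelf sharding framework would typically hide this lookup behind its own connection-routing API.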
- The minimum storage size for SQL Server DB instances is the following:
- General Purpose (SSD) storage: 200 GB for Enterprise and Standard editions, 20 GB for Web and Express editions
- Provisioned IOPS storage: 200 GB for Enterprise and Standard editions, 100 GB for Web and Express editions
- Magnetic storage: 200 GB for Enterprise and Standard editions, 20 GB for Web and Express editions
- Because of limitations in Microsoft SQL Server, restoring to a point in time before successful execution of a DROP DATABASE might not reflect the state of that database at that point in time. For example, the dropped database is typically restored to its state up to 5 minutes before the DROP DATABASE command was issued, which means that you can’t restore the transactions made during those few minutes on your dropped database. To work around this, you can reissue the DROP DATABASE command after the restore operation is completed. Dropping a database removes the transaction logs for that database.
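Sketched with a placeholder database name, the workaround above is simply to reissue the drop once the point-in-time restore has completed:

```sql
-- After the point-in-time restore finishes, reissue the drop so the
-- dropped database's transaction logs are removed as well.
DROP DATABASE mydatabase;
```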
Amazon RDS doesn’t support running SQL Server Analysis Services, SQL Server Integration Services, SQL Server Reporting Services, Data Quality Services, or Master Data Services on the same server as your Amazon RDS DB instance. To use these features, the recommended approach is to install SQL Server on an Amazon EC2 instance, or use an on-premises SQL Server instance, to act as the Reporting, Analysis, Integration, or Master Data Services server for your SQL Server DB instance on Amazon RDS. You can install SQL Server on an Amazon EC2 instance with Amazon EBS storage, pursuant to Microsoft licensing policies.
Because of licensing requirements, AWS can’t offer SQL Server Developer edition on Amazon RDS. You can use Express edition for many development, testing, and other nonproduction needs. However, if you need the full feature capabilities of an enterprise-level installation of SQL Server, you must use a dedicated host environment. You can download and install SQL Server Developer edition (and other MSDN products) on Amazon EC2. Dedicated infrastructure is not required for Developer edition. By using your own host, you also gain access to other programmability features that are not accessible on Amazon RDS.
Hope this helps while you plan to host your SQL Server in the AWS world!
Many of you might have come across a situation wherein you’re planning a SQL Server upgrade and, at the same time, are skeptical about the performance/throughput you’re going to get from the upgraded version of the database.
Database Experimentation Assistant (DEA) is a new A/B testing solution for SQL Server upgrades. It assists in evaluating a targeted version of SQL Server for a given workload. Customers upgrading from previous SQL Server versions (SQL Server 2005 and above) to any newer version of SQL Server can use the analysis metrics it provides, such as queries that have compatibility errors, degraded queries, query plans, and other workload comparison data, to build higher confidence in a successful upgrade experience.
The currently available version (as of this writing) is 2.1.51169.2.
Supported source and target versions
Source: SQL Server 2005 and above
Target: SQL Server 2005 and above
Analysis: SQL Server 2008 and above
Data Migration Assistant (DMA) enables you to upgrade to a modern data platform by detecting compatibility issues that can impact database functionality on your new version of SQL Server. It recommends performance and reliability improvements for your target environment, and it allows you to move not only your schema and data, but also uncontained objects, from your source server to your target server.
Here’s the download link
What is new in V3.3?
- DMA v3.3 enables migration of an on-premises SQL Server instance to SQL Server 2017, on both Windows and Linux.
- Migration blocking issues: DMA discovers the compatibility issues that block migrating on-prem SQL Server database(s) to Azure SQL Database(s). It then provides recommendations to help customers remediate those issues.
- Partially supported or unsupported features: DMA detects partially supported or unsupported features that are currently in use at the source SQL Server. It then provides a comprehensive set of recommendations, alternative approaches available in Azure, and mitigating steps, so that customers can plan for this effort in their migration projects.
- Discovery of issues that can affect an upgrade to an on-premises SQL Server. These are described as compatibility issues categorized under these areas:
- Breaking changes
- Behavior changes
- Deprecated features
- Discover new features in the target SQL Server platform that the database can benefit from after an upgrade. These are described as feature recommendations.
Supported source and target versions
- Source: SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, and SQL Server 2016
- Target: SQL Server 2012, SQL Server 2014, SQL Server 2016, SQL Server 2017, and Azure SQL Database
SQL Operations Studio is now available. Here’s the link to download and install its December release (the latest as of this writing):
Get SQL Operations Studio (preview) for Windows
This release of SQL Operations Studio (preview) includes a standard Windows installer experience, and a .zip:
- Download and run the SQL Operations Studio (preview) installer for Windows.
- Start the SQL Operations Studio (preview) app.
- Download SQL Operations Studio (preview) .zip for Windows.
- Browse to the downloaded file and extract it.
Get SQL Operations Studio (preview) for macOS
- Download SQL Operations Studio (preview) for macOS.
- To expand the contents of the zip, double-click it.
- To make SQL Operations Studio (preview) available in the Launchpad, drag sqlops.app to the Applications folder.
Supported Operating Systems
SQL Operations Studio (preview) runs on Windows, macOS, and Linux, and is supported on the following platforms:
- Windows 10 (64-bit)
- Windows 8.1 (64-bit)
- Windows 8 (64-bit)
- Windows 7 (SP1) (64-bit) – Requires KB2533623
- Windows Server 2016
- Windows Server 2012 R2 (64-bit)
- Windows Server 2012 (64-bit)
- Windows Server 2008 R2 (64-bit)
- macOS 10.13 High Sierra
- macOS 10.12 Sierra
- Red Hat Enterprise Linux 7.4
- Red Hat Enterprise Linux 7.3
- SUSE Linux Enterprise Server v12 SP2
- Ubuntu 16.04
It connects to the database smoothly and gives a nicely integrated GUI feel.
I’ll pen down more about this tool in the next write-up!
This seems to be one of the many strange errors you may encounter within SQL Server. You might have recently restored this DB from a server where the Replication flag was enabled; that’s the closest scenario to what happened in my case, although the system DBs were not restored as part of the exercise.
So, here’s what the error message will scare you with:
One or more recovery units belonging to database ‘mydatabase’ failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
The log scan number (643555:241:0) passed to log scan in database ‘mydatabase’ is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication. Otherwise, restore from backup if the problem results in a failure during startup.
Couple of options to start with:
- Check the sys.databases catalog view and see whether log_reuse_wait_desc is marked as Replication. If so, we must work to get rid of it.
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE log_reuse_wait_desc = 'Replication'
- You may execute this stored procedure to remove replication from the DB if you’re sure that it isn’t actually participating in replication.
EXEC sp_removedbreplication 'mydatabasename'
- After executing #2, if the DB still shows 'Replication' in the log_reuse_wait_desc column of sys.databases, try marking the transactions as replicated using this command:
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
- After executing #3, if you’re still hitting this error, the advice would be to install the Replication components using setup.exe, put the DB into a publication (use any dummy table), and then remove replication using the same command mentioned in #2.
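A rough sketch of that last step, assuming the Replication components are installed and you have sufficient privileges (the database name is a placeholder):

```sql
-- Mark the database as published, which re-creates the replication
-- metadata that sp_removedbreplication needs in order to clean up properly.
USE master;
EXEC sp_replicationdboption
    @dbname  = N'mydatabase',
    @optname = N'publish',
    @value   = N'true';

-- (Optionally create a dummy publication/article here via sp_addpublication
-- and sp_addarticle against any throwaway table.)

-- Now remove replication cleanly, as in step #2.
EXEC sp_removedbreplication N'mydatabase';
```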
Of course, prior to going through this series of steps, you’ll have to verify certain things with respect to your environment:
- Ensure that database integrity has been checked and validated (for example, with DBCC CHECKDB).
- The error log has been checked, and there are no other errors there apart from this one.
- Server resources (disk, memory) are plentiful.
- A clean restore was performed earlier when bringing this DB online.
Hope this helps!
By default, SQL Server will use jumbo-sized I/Os (typically 1 MB) to perform backup and restore operations. These can adversely affect the I/O latency of your array. Whenever possible, use MAXTRANSFERSIZE=65536 on your BACKUP or RESTORE statements. This could make your backups and restores take a tad longer, so test accordingly. If you are leveraging transaction log shipping, you may have no way to tune this, as sqllogship.exe doesn’t offer a means to adjust this parameter. If log shipping backups/restores are affecting your performance, consider rolling out scripts to perform log shipping manually.
If you are not achieving good throughput from your backup/restore operations after using the MAXTRANSFERSIZE=65536 parameter, consider bumping up the number of buffers available to the process by using the BUFFERCOUNT parameter. In our testing, a 1.5 TB database backed up in 24 minutes, 38 seconds at default settings, and in 25 minutes, 6 seconds with a 64 KB I/O block size and BUFFERCOUNT=2048.
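As a sketch, the two options above combine like this (the database name and file path are placeholders; test the timings in your own environment):

```sql
-- Back up with a 64 KB transfer size to avoid jumbo-sized I/Os,
-- and extra buffers to compensate for the smaller block size.
BACKUP DATABASE MyDatabase
TO DISK = N'X:\Backups\MyDatabase.bak'
WITH MAXTRANSFERSIZE = 65536,  -- 64 KB per I/O instead of ~1 MB
     BUFFERCOUNT = 2048,       -- more in-flight buffers to keep throughput up
     STATS = 10;               -- progress report every 10 percent
```

The same two options are accepted on RESTORE DATABASE statements as well.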