Considerations before changing Compatibility Level in SQL Server

You’ve upgraded your mission-critical database thinking that the latest version/release would help you overcome all the problems in the world, and then you realize that there’s a huge performance dip between the older and newer versions of your SQL Server databases. Queries that used to run lightning fast on the lower version now take minutes, and this can be a frustrating experience for your customers and business teams.

Note that every new release of SQL Server introduces a new compatibility level – for example, SQL Server 2008 uses Compatibility Level 100; SQL Server 2012 uses 110; SQL Server 2014 uses 120; and today, SQL Server 2016 uses 130.

Although most workloads would benefit from the upgraded Compatibility Level, here are a few considerations before changing that compatibility level in your SQL Server database.

  1. Move to the new Compatibility Level after the upgrade and see how things perform. If you notice regressions, you can simply set the Compatibility Level back to its original value, or keep the upgraded level and only revert the Cardinality Estimator to legacy mode.
  2. If you have multi-tenant applications spanning multiple databases, you may need to update your database provisioning logic to ensure a consistent Compatibility Level across all databases, old and newly provisioned alike. Your application workload’s performance could be sensitive to some databases running at different Compatibility Levels, so Compatibility Level consistency across databases may be required to provide the same experience to all your customers. Note that this is not a mandate; it really depends on how your application is affected by the Compatibility Level.
  3. Regarding the Cardinality Estimator, and just as with changing the Compatibility Level, it is recommended to test your production workload under the new conditions before proceeding in production, to determine whether your application benefits from the Cardinality Estimator improvements.
  4. There are improvements with every new version of SQL Server, but if for some reason your workload has not been fully tested against the higher version and is giving you performance headaches, consider using the ALTER DATABASE command to change the compatibility level and observe the behavior; a sketch follows this list.
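As a minimal sketch, assuming a SQL Server 2016 instance and a hypothetical database named SalesDB (both placeholders), the switches described in points 1 and 4 could look like this:

```sql
-- Move the database to the SQL Server 2016 compatibility level (130).
ALTER DATABASE [SalesDB] SET COMPATIBILITY_LEVEL = 130;

-- If regressions appear, either fall back to the previous level...
ALTER DATABASE [SalesDB] SET COMPATIBILITY_LEVEL = 110;

-- ...or keep level 130 but revert only the Cardinality Estimator to
-- legacy mode (database-scoped configurations require SQL Server 2016+;
-- run this while connected to SalesDB).
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
```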

Hope this helps!

Happy learning!

Cheers!
Aman Kharbanda

Migrating a Redshift Cluster to a different region

AWS recommends that we pause updates to the Amazon Redshift cluster during the migration process.

Here is a high-level overview of the steps for moving the entire cluster:

  1. Use the cross-region snapshot functionality to create a snapshot in the target region.
  2. Restore the cluster from the snapshot. When you do, Amazon Redshift creates a new cluster with all the snapshot data on the new cluster.

If you’re looking to migrate only a few tables out of your Redshift cluster, then the following steps may help:

  1. Connect to the Amazon Redshift cluster in the source region and use the UNLOAD command to export data from Redshift to Amazon S3.
  2. Copy your S3 data from the source region to the target region – refer to the AWS documentation for more details.
  3. Create an Amazon Redshift cluster and the required tables in the target region.
  4. Use the COPY command to load data from Amazon S3 into the required tables (see the sketch after this list).
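A minimal sketch of steps 1 and 4, assuming a hypothetical table named orders, placeholder bucket names, and an IAM role ARN with the required S3 permissions (all of these identifiers are illustrative):

```sql
-- Step 1 (source cluster): export the table to S3 in the source region.
UNLOAD ('SELECT * FROM orders')
TO 's3://source-region-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftS3Role'
GZIP;

-- Step 2 happens outside SQL: copy the unloaded files from the
-- source-region bucket to a bucket in the target region.

-- Step 4 (target cluster): load the copied files into the recreated table.
-- Add a REGION clause only if the bucket lives in a different region
-- than the target cluster.
COPY orders
FROM 's3://target-region-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftS3Role'
GZIP;
```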

Hope this helps.

Happy learning!

Cheers,
Aman Kharbanda


Key behind RDS MySQL’s Cross-Region Replicas

RDS uses native MySQL binary-log-based replication to create read replicas. There are mainly two threads involved in a MySQL replication setup: the IO thread and the SQL thread. The IO thread is responsible for connecting to the master, fetching the binary log events, and saving them as relay log files on the slave instance. The SQL thread reads those events from the relay logs and executes them on the slave.

The main drawback of this approach is that all the queries executed on the master by multiple connections in parallel must be executed by a single thread (the SQL thread) on the replica instance; in other words, MySQL replication is single-threaded by default.

Recent versions of MySQL include a feature to enable multiple applier threads by modifying the slave_parallel_workers variable, which improves replication performance (a sketch follows below). However, the time required to sync the replica depends on the write workload on the master. Even after the replica is in sync with the master, if the workload changes (for example, many write operations or large transactions modifying a large number of rows are executed on the master), replication lag will increase.
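As a minimal sketch on a self-managed MySQL 5.7 server (on RDS you would change these variables through a DB parameter group rather than SET GLOBAL), enabling parallel applier threads could look like this:

```sql
-- Stop the applier thread before changing replication settings.
STOP SLAVE SQL_THREAD;

-- LOGICAL_CLOCK allows transactions that committed concurrently on the
-- master to be applied in parallel on the slave (MySQL 5.7+).
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
SET GLOBAL slave_parallel_workers = 4;

START SLAVE SQL_THREAD;
```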

Replication lag is reported as the Seconds_Behind_Master value in the output of SHOW SLAVE STATUS, and this value can show a huge delay in seconds. However, it can be misleading, because it only measures the difference between the timestamp of the relay log event most recently executed by the SQL thread and that of the relay log entry most recently downloaded by the IO thread.
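To inspect the lag on a replica, a minimal sketch:

```sql
-- Run on the slave; look at the Seconds_Behind_Master field, keeping in
-- mind it compares relay-log timestamps rather than true end-to-end delay.
SHOW SLAVE STATUS\G
```

On RDS, the same value is surfaced per read replica as the ReplicaLag CloudWatch metric.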

Hope this helps you in planning an efficient solution for your RDS MySQL’s Cross-Region Read-Replicas.

Happy Learning!

Cheers,
Aman Kharbanda

Benefits of having a Read-Replica in AWS RDS

You can combine Multi-AZ deployments and read replicas to enjoy the benefits of each. For example, you can configure a source database as Multi-AZ for high availability and create a read replica (in Single-AZ) for read scalability.

With RDS for MySQL and MariaDB, you can also set the read replica as Multi-AZ, allowing you to use the read replica as a DR target. When you promote the read replica to be a standalone database, it will already be Multi-AZ enabled.

There are several benefits of having a Read-Replica in AWS RDS. To name a few:

1) You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.

2) You can promote a read replica if the source DB instance fails. You can also replicate DB instances across AWS Regions as part of your disaster recovery strategy.

3) When you create a read replica for Amazon RDS for MySQL, MariaDB and PostgreSQL, Amazon RDS sets up a secure communications channel using public key encryption between the source DB instance and the read replica, even when replicating across regions.

Happy learning!

Cheers!
Aman Kharbanda

Differences between Read-Replicas and Multi-AZ in Amazon RDS

While both of these features offer a secondary copy of your data, there are differences between the two. Here is a quick insight into the differences:

| Multi-AZ Deployments | Read Replicas |
| --- | --- |
| Synchronous replication – highly durable | Asynchronous replication – highly scalable |
| Only the database engine on the primary instance is active | All read replicas are accessible and can be used for read scaling |
| Automated backups are taken from the standby | No backups configured by default |
| Always spans two Availability Zones within a single Region | Can be within an Availability Zone, Cross-AZ, or Cross-Region |
| Database engine version upgrades happen on the primary | Database engine version upgrades are independent of the source instance |
| Automatic failover to standby when a problem is detected | Can be manually promoted to a standalone database instance |

Read replicas are available in Amazon RDS for MySQL, MariaDB, and PostgreSQL as well as Amazon Aurora.

The next write-up will cover how to promote your secondary (read replica) to become a primary/standalone database.

Happy learning!

Cheers!
Aman Kharbanda

Report Server DB in a SQL Server Failover Cluster

Failover clustering is supported only for the report server database; you cannot run the Report Server service as part of a failover cluster.

To host a report server database on a SQL Server failover cluster, the cluster must already be installed and configured. You can then select the failover cluster as the server name when you create the report server database in the Database Setup page of the Reporting Services Configuration tool.

Although the Report Server service cannot participate in a failover cluster, you can install Reporting Services on a computer that has a SQL Server failover cluster installed. The report server runs independently of the failover cluster. If you install a report server on a computer that is part of a SQL Server failover instance, you are not required to use the failover cluster for the report server database; you can use a different SQL Server instance to host the database.

Hope this helps!

Happy learning!

Cheers,
Aman Kharbanda