Data Migration Assistant (DMA) enables you to upgrade to a modern data platform by detecting compatibility issues that can impact database functionality on your new version of SQL Server. It recommends performance and reliability improvements for your target environment. It allows you to not only move your schema and data, but also uncontained objects from your source server to your target server.
Here’s the download link
What is new in V3.3?
- DMA v3.3 enables migration of an on-premises SQL Server instance to SQL Server 2017, on both Windows and Linux.
- Migration blocking issues: DMA discovers the compatibility issues that block migrating on-premises SQL Server database(s) to Azure SQL Database. It then provides recommendations to help customers remediate those issues.
- Partially supported or unsupported features: DMA detects partially supported or unsupported features that are currently in use on the source SQL Server. It then provides a comprehensive set of recommendations, alternative approaches available in Azure, and mitigating steps so that customers can plan for this effort in their migration projects.
- Discovery of issues that can affect an upgrade to an on-premises SQL Server. These are described as compatibility issues categorized under these areas:
- Breaking changes
- Behavior changes
- Deprecated features
- Discovery of new features in the target SQL Server platform that the database can benefit from after an upgrade. These are described as feature recommendations.
Supported source and target versions
- Source: SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, and SQL Server 2016
- Target: SQL Server 2012, SQL Server 2014, SQL Server 2016, SQL Server 2017, and Azure SQL Database
SQL Operations Studio is now available. Here’s the link to download and install its December release (the latest as of this writing):
Get SQL Operations Studio (preview) for Windows
This release of SQL Operations Studio (preview) includes a standard Windows installer experience, and a .zip:
- Download and run the SQL Operations Studio (preview) installer for Windows.
- Start the SQL Operations Studio (preview) app.
- Download SQL Operations Studio (preview) .zip for Windows.
- Browse to the downloaded file and extract it.
Get SQL Operations Studio (preview) for macOS
- Download SQL Operations Studio (preview) for macOS.
- To expand the contents of the zip, double-click it.
- To make SQL Operations Studio (preview) available in the Launchpad, drag sqlops.app to the Applications folder.
Supported Operating Systems
SQL Operations Studio (preview) runs on Windows, macOS, and Linux, and is supported on the following platforms:
- Windows 10 (64-bit)
- Windows 8.1 (64-bit)
- Windows 8 (64-bit)
- Windows 7 (SP1) (64-bit) – Requires KB2533623
- Windows Server 2016
- Windows Server 2012 R2 (64-bit)
- Windows Server 2012 (64-bit)
- Windows Server 2008 R2 (64-bit)
- macOS 10.13 High Sierra
- macOS 10.12 Sierra
- Red Hat Enterprise Linux 7.4
- Red Hat Enterprise Linux 7.3
- SUSE Linux Enterprise Server v12 SP2
- Ubuntu 16.04
It connects to databases smoothly and gives a nice, well-integrated GUI feel.
I’ll pen down more about this tool in the next writeup!
This seems to be one of the many strange errors you may encounter within SQL Server. You might have recently restored this database from a server where replication was enabled – that’s the closest scenario to what happened in my case, although the system databases were not restored as part of the exercise.
So, here’s what the error message will scare you with:
One or more recovery units belonging to database ‘mydatabase’ failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
The log scan number (643555:241:0) passed to log scan in database ‘mydatabase’ is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication. Otherwise, restore from backup if the problem results in a failure during startup.
Couple of options to start with:
- Check the sys.databases catalog view and see if log_reuse_wait_desc is marked with Replication. If so, then we must work to get rid of it.
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE log_reuse_wait_desc = 'REPLICATION'
- You may execute this stored procedure to remove replication from the database, if you’re sure it isn’t actually participating in replication.
EXEC sp_removedbreplication 'mydatabasename'
- After executing #2, if the database still shows ‘Replication’ under log_reuse_wait_desc in sys.databases, try marking the transactions as replicated using this command:
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
- After executing #3, if you’re still hitting this error, the advice would be to install the Replication component using setup.exe, put the database into a publication (use any dummy table), and then remove replication using the same command as mentioned in #2.
Of course, before going through this series of steps you’ll have to verify certain things specific to your environment:
- Ensure that database integrity has been checked and validated.
- Check the error log and confirm there are no other errors there, apart from this one.
- Confirm that server resources are available in plenty.
- Confirm that a clean restore was performed earlier while bringing this database online.
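For the first check, a minimal sketch of an integrity validation (assuming the database is named mydatabase, as in the error message above) could be:

```sql
-- Verify the physical and logical integrity of the database before
-- attempting any replication cleanup. NO_INFOMSGS limits the output
-- to actual errors; ALL_ERRORMSGS shows every error per object.
DBCC CHECKDB ('mydatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

If this reports corruption, resolve that first – the replication cleanup steps above assume a structurally healthy database.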
Hope this helps!
A ColumnStore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data has traditionally been stored. Row store does exactly as the name suggests – it stores rows of data on a page – while column store stores all the data of a column on the same pages. These columns are much easier to search: instead of a query scanning all the data in an entire row whether it is relevant or not, a column store query needs to read only a much smaller number of columns. This means major gains in search speed and disk usage.
Column-oriented storage is the data storage of choice for data warehouse and business analysis applications. It works well for mostly read-only queries that perform analysis on large data sets. Column-oriented storage allows for a high data compression rate and as such can increase processing speed, primarily by reducing IO needs. SQL Server allows creating column-oriented indexes, known as ColumnStore indexes, and thus brings the benefits of these highly efficient, BI-oriented indexes into the same engine that runs the OLTP workload. With SQL Server 2016, a rowstore table can have one updateable nonclustered columnstore index. Previously, the nonclustered columnstore index was read-only. Columnstore also supports index defragmentation by removing deleted rows without the need to explicitly rebuild the index.
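As a minimal sketch, assuming a hypothetical fact table dbo.FactSales, a nonclustered columnstore index could be created like this:

```sql
-- Create a nonclustered columnstore index covering the columns
-- most frequently scanned by analytical queries.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount, Quantity);
```

The table and column names here are illustrative; pick the columns your aggregation queries actually scan.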
How to change data in a Non-Clustered Columnstore Index (applies to SQL Server 2012/2014):
Basically, once you create a non-clustered columnstore index on a table, you cannot directly modify the data in that table. A query with INSERT, UPDATE, DELETE, or MERGE will fail and return an error message like this:
Msg 35330, Level 15, State 1, Line 1
INSERT statement failed because data cannot be updated in a table with a columnstore index. Consider disabling the columnstore index before issuing the INSERT statement, then rebuilding the columnstore index after INSERT is complete.
Disabling and rebuilding the index, or dropping and then recreating it, is one possible solution, but this can be an expensive process, especially in the middle of the business day. Another workaround is to use partitioning together with staging tables. This does not require you to disable the columnstore index, and you’ll still be able to update your data.
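A rough sketch of that staging-table approach (all table, index, and partition names here are hypothetical, and the staging table is assumed to already match the partitioned table’s schema, filegroup, and constraints) looks like this:

```sql
-- Switch the affected partition out into a staging table; this is a
-- metadata-only operation, so it is fast even on large tables.
ALTER TABLE dbo.FactSales SWITCH PARTITION 3 TO dbo.FactSales_Staging;

-- Drop the staging table's columnstore index so its rows become updateable.
DROP INDEX NCCI_FactSales_Staging ON dbo.FactSales_Staging;

-- Perform the data modification on the (comparatively small) staging table.
UPDATE dbo.FactSales_Staging SET SalesAmount = SalesAmount * 1.1;

-- Recreate the columnstore index so the staging table matches the
-- partitioned table again, then switch the partition back in.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales_Staging
ON dbo.FactSales_Staging (OrderDateKey, ProductKey, SalesAmount, Quantity);

ALTER TABLE dbo.FactSales_Staging SWITCH TO dbo.FactSales PARTITION 3;
```

The main design point is that only one partition’s worth of data ever has its columnstore index rebuilt, instead of the whole table.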
How to ignore Column Store Indexes if your query performance takes a hit?
There may be cases where a columnstore index is not ideal for a query and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to do so; the SQL Server engine will then use whichever other index is best, ignoring the columnstore index.
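As a quick sketch (again using the hypothetical dbo.FactSales table), the hint goes in an OPTION clause at the end of the query:

```sql
-- Ask the optimizer to skip the nonclustered columnstore index
-- for this query only; other indexes remain candidates.
SELECT ProductKey, SUM(SalesAmount) AS TotalSales
FROM dbo.FactSales
GROUP BY ProductKey
OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX);
```

Because it is a per-query hint, the index stays in place for every other workload that benefits from it.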
Hope this helps!
SSRS keys can be backed up and restored in two easy ways: one, of course, using the GUI of the Reporting Services Configuration Manager, and the other via the command line by executing the rskeymgmt utility.
The rskeymgmt utility can be found in the binn sub-directory of your SQL Server install directory. Opening the command prompt and navigating to this directory, we can run rskeymgmt -? to get a list of arguments and additionally some example commands.
To backup the key, issue this command within cmd prompt: rskeymgmt.exe -e -f D:\TestBackup\SSRSKeybackup -p Yourownpwd -i MSSQLServer
While, to restore, use this command: rskeymgmt.exe -a -f D:\TestBackup\SSRSKeybackup -p Yourownpwd -i MSSQLServer
In the event you restore a SSRS database to a new server, the encryption keys will need to be loaded onto the new server in order to allow that server to read and utilize all of the items noted in the list below.
Otherwise an error will result when attempting to navigate to the Report Server.
Also, your embedded data sources would become unreadable if you were to create a new key instead.
Of course, you could recreate a SSRS key on the new server and then redeploy all the data sets, data sources, and reports. Even then, though, you would still have to recreate all the folders and, more importantly, the security for those folders (and related reports). An easier alternative is to back up and restore the SSRS key. Before digging deep into that, let’s understand what gets encrypted within SSRS:
- Credentials used to connect to the Report Server database itself.
- The actual symmetric key used by SSRS to encrypt data.
- Data source credentials which are stored in the database in order to connect to external databases and data sources.
- The unattended user account information which is used to connect to a remote server in order get external images or data.
Moving a report database to another SQL Server can be tricky at times, especially when there are multiple data sources around and you need to know which ones must be changed afterwards.
With this query against the ReportServer database you get the connection strings of all Shared Data Sources, to document their usage or to search for a specific server/database.
-- Listing out connection strings of all SSRS Shared Data Sources
;WITH XMLNAMESPACES -- XML namespace def must be the first in with clause.
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/reporting/2006/03/reportdatasource')
,SDS AS
    (SELECT SDS.name AS SharedDsName
           ,SDS.[Path]
           ,CONVERT(xml, CONVERT(varbinary(max), content)) AS DEF
     FROM dbo.[Catalog] AS SDS
     WHERE SDS.Type = 5) -- 5 = Shared Datasource
SELECT CON.[Path]
      ,CON.SharedDsName
      ,CON.ConnString
FROM
    (SELECT SDS.[Path]
           ,SDS.SharedDsName
           ,DSN.value('ConnectString[1]', 'varchar(150)') AS ConnString
     FROM SDS
          CROSS APPLY
          SDS.DEF.nodes('/DataSourceDefinition') AS R(DSN)
    ) AS CON
-- Optional filter:
-- WHERE CON.ConnString LIKE '%Initial Catalog%=%TFS%'
ORDER BY CON.[Path]