Designing a data acquisition framework in SQL Server and SSIS – how to source and integrate external data for a decision support system or data warehouse (Part 1)

Note: Part 2 to this series can be found HERE, Part 3 HERE, Part 4 HERE and all the code and additional files for this post can be downloaded from my OneDrive folder HERE

Introduction

There is a wealth of literature and Internet resources on the subject of data warehouse and decision support systems architecture, covering considerations such as the Kimball vs Inmon approach, data storage and management vendor options, RDBMS vs NoSQL arguments etc. However, in a typical small-to-medium enterprise environment, the first step to creating a data warehouse is designing a data acquisition job to move the source data into a staging area. Provided the most prevalent approach to data warehouse design is employed, i.e. no near-real-time or streaming architecture is required, a staging server/database storing a copy of the transactional system data for further processing and transformations is created and populated in the first instance.

Most of the time, sourcing transactional data and placing a copy of it in the staging database simply involves a full or delta copy of the operational system(s) data. There is typically no schema denormalisation involved at this stage, but data cleansing routines can be employed to make the data cleaner and conformant to business definitions, e.g. missing value substitution, data type conversion, de-duplication etc. Sometimes a certain degree of 'pruning' may be employed to separate redundant and information-poor data from data which can be turned into insight, and thus competitive advantage. Also, since the advent of cloud providers/services, with their huge on-demand and cost-competitive processing and storage capabilities, an ELT (extract, load and transform) rather than ETL (extract, transform and load) approach may be more applicable in some scenarios. These may include dealing with large volumes of data, e.g. generated by a variety of dispersed systems such as IoT devices, or operating on a database engine designed for fast, high-concurrency data processing, e.g. a massively parallel processing (MPP) engine. Therefore, depending on how much you would like to massage the data before it finds its way into the landing/staging area, data acquisition can be as simple as like-for-like, source-to-target copying or as complex as an intricate collection of transformations, mostly to deal with data quality and data deluge issues.
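To make the cleansing point concrete, below is a minimal T-SQL sketch of the kind of light-touch routines mentioned above (missing value substitution, data type conversion and de-duplication). All table and column names are illustrative only and not part of the framework described in this series.

-- Hypothetical staging clean-up: substitute missing values, convert types
-- and de-duplicate on a business key (all object names are illustrative)
WITH deduped AS
(
    SELECT  customer_id,
            ISNULL(NULLIF(LTRIM(RTRIM(country_code)), ''), 'UNKNOWN') AS country_code,
            TRY_CONVERT(DATE, signup_date_raw, 120)                   AS signup_date,
            ROW_NUMBER() OVER (PARTITION BY customer_id
                               ORDER BY last_modified DESC)           AS rn
    FROM    dbo.stg_customers
)
SELECT customer_id, country_code, signup_date
FROM   deduped
WHERE  rn = 1;  -- keep only the most recent record per customer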

To demonstrate a sample workflow for a data acquisition job of moderate complexity and data volume, I will describe a sample SQL Server Integration Services (SSIS) package which anyone versed enough in T-SQL and SSIS can replicate and modify according to project and business needs. This package has been 'taken out' of one of my previous clients' environments and can serve as a template for sourcing transactional data into a staging database for further processing and massaging. To make this example more akin to a typical business scenario, and more flexible for future reuse, I have deliberately assumed the following:

The source database runs outside the local network on an engine from a vendor other than Microsoft, i.e. MySQL; therefore specific data incompatibilities, e.g. data types, numeric precisions, character maximum lengths etc., are likely to occur and need to be rectified automatically as part of the acquisition process. For the sake of completeness, I will also include an altered version of the code for SQL Server-to-SQL Server data acquisition

The source database schema is under constant development, so the target database, where the acquired data is stored, needs to be adjusted automatically. Alterations such as schema changes to existing tables, e.g. column names, data types, numeric precision and scale etc., need to be reconciled without developer intervention as part of the pre-acquisition tasks

Should any connectivity issues occur, the job will wait for a predefined period of time in a loop, itself executed a predefined number of times, before reporting a failed connectivity status (a minimal sketch of this retry pattern is shown after this list)

Any errors raised need to be logged and stored for reference, but halting the entire process should not be the default behaviour in case of a single table failure. When an exception is raised, the process should not stop but rather gracefully log the error details and continue to synchronise the remaining objects

Should any issues be encountered, the administrator(s) need to be notified

Any indexes will be dealt with as needed i.e. dropped/recreated, reorganised etc. Statistics will also be refreshed at the end of the process

Some source data (small tables) can be merged, while other data (larger tables) requires truncating the staging copy and copying the full table across (no row-by-row comparison)

We should be able to 'turn on' and 'turn off' which tables, and which columns from each table, will be brought across, e.g. some may contain irrelevant or sensitive data which does not need to be copied

On job completion, some rudimentary checks will compare source to target data, e.g. record counts for each table, and check for any errors logged
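As a preview of the connectivity check covered in Part 2, the retry requirement from the assumptions above can be sketched as a simple WHILE loop around sys.sp_testlinkedserver. The linked server name, delay and retry count below are placeholders to be adjusted to your environment; this illustrates the pattern rather than the exact code used by the package.

-- Minimal retry-loop sketch: attempt the linked server up to 5 times,
-- waiting 30 seconds between attempts (all values are placeholders)
DECLARE @retry INT = 0, @max_retries INT = 5, @connected BIT = 0;
WHILE @retry < @max_retries AND @connected = 0
BEGIN
    BEGIN TRY
        EXEC sys.sp_testlinkedserver N'RemoteMySQLDB';
        SET @connected = 1;
    END TRY
    BEGIN CATCH
        SET @retry += 1;
        WAITFOR DELAY '00:00:30';  -- pause before the next attempt
    END CATCH
END;
IF @connected = 0
    RAISERROR('Linked server RemoteMySQLDB could not be reached.', 16, 1);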

Conceptually, the acquisition process and its core components can be depicted as per the image below.

At a lower level, this framework blueprint becomes much more involved, as there is quite a bit of code to account for each step's functionality; at a higher level, however, all tasks involved can be roughly divided into three categories.

Pre-acquisition tasks – activities which facilitate subsequent data copying, e.g. source server availability checks, schema modification checks, pre-load index management etc.

Acquisition tasks – tasks which are directly responsible for source-to-target data copying

Post-acquisition tasks – activities which ensure post-load validation, e.g. statistics refresh, index re-creation/rebuild/reorganisation, error log checks etc.

Transactional systems data sourcing and staging can be as straightforward as simply selecting source data and inserting it into a pre-created table; however, it is always prudent to assume that, for example, changes to the source or target data/schema/environment will not always be communicated, or that the source system will not always be available for querying, and to take measures to prevent the process from falling over. From my experience, developers are not always diligent about relaying database change information upstream, and on many occasions I have witnessed even larger modifications being dismissed as having no impact on the decision support systems, sometimes resulting in the business being deprived of data for days or longer. To prevent situations where data cannot be sourced reliably, it is always better to assume the worst and hope for the best, so in the spirit of following best practice standards I will break this post into four parts, each dealing with its respective phase of the development process i.e.

Building the supporting scaffolding i.e. creating support databases and database objects, setting up linked server connection to the source data etc. – this post

Pre-acquisition activities, e.g. source server availability checks, schema modification checks etc., as well as large-table acquisition code development and overview – Part 2

Post-acquisition activities, e.g. statistics refresh, error log checks etc., as well as small-table acquisition code development and overview – Part 3

SSIS package structure and final conclusion – Part 4

As previously mentioned, the acquisition package (template) this blog series describes is logically comprised of three sections: pre-acquisition activities, data acquisition tasks and post-acquisition activities. At a high level, the package control flow may look as per image below.

Please note that this template, along with its individual tasks, is only a guide; if any of the steps are not applicable, or additional ones need to be added to conform to your technical requirements, it should be fairly straightforward to alter it with little effort. This post deals with the first step in this process, i.e. creating all the auxiliary structures to support further code and package development outlined in part 2, part 3 and part 4.

Environment and Supporting Objects Setup

Let's begin by setting the stage for the rest of this series and create all the necessary scaffolding, i.e. the staging database, the control database and the AdminDBA database (more on that later), a linked server to the source database etc.

Firstly, let's create two databases – ControlDB and StagingDB – and the associated objects/data. The StagingDB database will simply act as a local copy of the source data. The control database, on the other hand, will hold tables controlling data acquisition objects' metadata, e.g. table and field exceptions in case we want to exclude certain columns from the process, index names in case we want to drop and rebuild them, information on whether the source table is large or small (this attribute triggers a different acquisition process), notification recipients' e-mail addresses etc. One can omit creating the control database and circumvent dealing with this metadata by hard-coding it into the stored procedures directly; however, in my experience, it is a worthwhile feature to have, as changes/additions can be applied to a single repository transparently and effortlessly, e.g. excluding one or more attributes from a source table is a simple INSERT (into a control table) statement, as sketched below. I will demonstrate this functionality in more detail in parts 2 and 3.
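For example, under this design, excluding a hypothetical column named comments from the answers table (both the exceptions table and the sample answers entry are defined in the script further down) would boil down to a single statement along these lines:

-- Hedged sketch: deactivating a single column via the exceptions table
-- (the 'comments' column is hypothetical; other values match the sample data below)
INSERT INTO ControlDB.dbo.Ctrl_RemoteSvrs_Tables2Process_ColumnExceptions
    ( FK_ObjectID, Application_Name, Local_Field_Name, Local_Table_Name
    , Local_Schema_Name, Local_DB_Name, Remote_Field_Name, Remote_Table_Name
    , Remote_Schema_Name, Remote_DB_Name, Remote_Server_Name, Exception_Type, Is_Active )
VALUES
    ( 1, 'AppName', 'comments', 'answers', 'dbo', 'StagingDB'
    , 'comments', 'answers', 'Remote_Schema_Name', 'Remote_DB_Name'
    , 'RemoteMySQLDB', 'security', 1 );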

As part of this task we will also create all the database objects and populate them with test data. Notice that the code below creates four tables (in the ControlDB database) and two views (in the StagingDB database). Each object's functionality is described below:

Ctrl_RemoteSvrs_Tables2Process – metadata table holding object names and their corresponding environment variables, e.g. schema names (both remote and local servers), database names (both remote and local servers), whether the table is active, whether the data volume/record count is large etc. This table's content dictates which acquisition process is used for data copying, i.e. a dynamic MERGE SQL statement (see part 3 for details) or parallelised INSERTs (see part 2 for details), as well as providing some basic metadata information

Ctrl_RemoteSvrs_Tables2Process_ColumnExceptions – metadata table containing object attributes which are not to be acquired from the source database/server for data redundancy or security reasons. This table is referenced whenever a particular object on the source server contains columns which can be excluded, saving space and reducing security concerns

Ctrl_INDXandPKs2Process – control table containing index metadata, storing information on the index types, the objects they are built on, the columns they encompass etc.

Ctrl_ErrorMsg_Notification_List – control table containing the e-mail address distribution list for error message notifications and associated metadata. This table is referenced to build the list of addresses which should be notified when an unexpected event occurs

vw_MySQLReservedWords – a view containing a list of MySQL reserved words to allow for MySQL syntax compliance by substituting certain keywords with a delimited version, e.g. replacing words such as AS, CHAR or COLUMN with their `AS`, `CHAR` and `COLUMN` equivalents (delimited by backticks)

vw_MSSQLReservedWords – a view containing a list of SQL Server reserved words. It serves the same purpose as the view above but targets the SQL Server version (delimited by square brackets)
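To illustrate how these views might be consumed when SQL statements are generated dynamically, the hedged sketch below delimits any staging column name that collides with a MySQL reserved word. The join against INFORMATION_SCHEMA is purely for demonstration and is not the framework's actual code; the views themselves are created in the script that follows.

-- Sketch: substitute backtick-delimited versions for any column names
-- that collide with MySQL reserved words when building remote queries
SELECT  c.TABLE_NAME,
        COALESCE(rw.mysql_version, c.COLUMN_NAME) AS mysql_safe_column_name
FROM    StagingDB.INFORMATION_SCHEMA.COLUMNS AS c
LEFT JOIN StagingDB.dbo.vw_MysqlReservedWords AS rw
        ON UPPER(c.COLUMN_NAME) = rw.reserved_word
WHERE   c.TABLE_NAME = 'answers';  -- illustrative table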

/*==============================================================================
STEP 1
Create Staging and Control databases on the local instance
==============================================================================*/
USE [master];
GO
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'StagingDB')
BEGIN
    -- Close connections to the StagingDB database
    ALTER DATABASE StagingDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE StagingDB;
END;
GO
CREATE DATABASE StagingDB
ON PRIMARY
       ( NAME = N'StagingDB'
       , FILENAME = N'C:\DBFiles\StagingDB.mdf'
       , SIZE = 10MB
       , MAXSIZE = 1GB
       , FILEGROWTH = 10MB )
LOG ON ( NAME = N'StagingDB_log'
       , FILENAME = N'C:\DBFiles\StagingDB_log.LDF'
       , SIZE = 1MB
       , MAXSIZE = 1GB
       , FILEGROWTH = 10MB );
GO
--Assign database ownership to login SA
EXEC StagingDB.dbo.sp_changedbowner @loginame = N'SA', @map = false;
GO
--Change the recovery model to BULK_LOGGED
ALTER DATABASE StagingDB SET RECOVERY BULK_LOGGED;
GO
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'ControlDB')
BEGIN
    -- Close connections to the ControlDB database
    ALTER DATABASE ControlDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE ControlDB;
END;
GO
CREATE DATABASE ControlDB
ON PRIMARY
       ( NAME = N'ControlDB'
       , FILENAME = N'C:\DBFiles\ControlDB.mdf'
       , SIZE = 10MB
       , MAXSIZE = 1GB
       , FILEGROWTH = 10MB )
LOG ON ( NAME = N'ControlDB_log'
       , FILENAME = N'C:\DBFiles\ControlDB_log.LDF'
       , SIZE = 1MB
       , MAXSIZE = 1GB
       , FILEGROWTH = 10MB );
GO
--Assign database ownership to login SA
EXEC ControlDB.dbo.sp_changedbowner @loginame = N'SA', @map = false;
GO
--Change the recovery model to BULK_LOGGED
ALTER DATABASE ControlDB SET RECOVERY BULK_LOGGED;
GO

/*==============================================================================
STEP 2
Create ControlDB database objects
==============================================================================*/
USE [ControlDB];
GO
-- Create 'Ctrl_RemoteSvrs_Tables2Process' table
CREATE TABLE [dbo].[Ctrl_RemoteSvrs_Tables2Process]
(
      [ID] [SMALLINT] IDENTITY(1, 1) NOT NULL
    , [Application_Name] [VARCHAR](255) NOT NULL
    , [Local_Table_Name] [VARCHAR](255) NOT NULL
    , [Local_Schema_Name] [VARCHAR](55) NOT NULL
    , [Local_DB_Name] [VARCHAR](255) NOT NULL
    , [Remote_Table_Name] [VARCHAR](255) NOT NULL
    , [Remote_Schema_Name] [VARCHAR](55) NOT NULL
    , [Remote_DB_Name] [VARCHAR](255) NOT NULL
    , [Remote_Server_Name] [VARCHAR](255) NULL
    , [Is_Active] [BIT] NOT NULL
    , [Is_Big_Table] [BIT] NOT NULL
    , CONSTRAINT [pk_dbo_ctrl_remotesvrs_tables2process_id]
      PRIMARY KEY CLUSTERED ([ID] ASC)
      WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY];
GO
--Create 'Ctrl_RemoteSvrs_Tables2Process_ColumnExceptions' table
CREATE TABLE [dbo].[Ctrl_RemoteSvrs_Tables2Process_ColumnExceptions]
(
      [ID] [SMALLINT] IDENTITY(1, 1) NOT NULL
    , [FK_ObjectID] [SMALLINT] NOT NULL
    , [Application_Name] [VARCHAR](255) NOT NULL
    , [Local_Field_Name] [VARCHAR](255) NOT NULL
    , [Local_Table_Name] [VARCHAR](255) NOT NULL
    , [Local_Schema_Name] [VARCHAR](55) NOT NULL
    , [Local_DB_Name] [VARCHAR](255) NOT NULL
    , [Remote_Field_Name] [VARCHAR](255) NOT NULL
    , [Remote_Table_Name] [VARCHAR](255) NOT NULL
    , [Remote_Schema_Name] [VARCHAR](55) NOT NULL
    , [Remote_DB_Name] [VARCHAR](255) NOT NULL
    , [Remote_Server_Name] [VARCHAR](255) NOT NULL
    , [Exception_Type] [VARCHAR](55) NOT NULL
    , [Is_Active] [BIT] NOT NULL
    , CONSTRAINT [pk_dbo_ctrl_remotesvrs_tables2process_columnexceptions_id]
      PRIMARY KEY CLUSTERED ([ID] ASC)
      WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY];
GO
-- Create foreign key constraint between
-- 'Ctrl_RemoteSvrs_Tables2Process_ColumnExceptions' and 'Ctrl_RemoteSvrs_Tables2Process' tables
ALTER TABLE [dbo].[Ctrl_RemoteSvrs_Tables2Process_ColumnExceptions] WITH CHECK
ADD CONSTRAINT [fk_dbo_ctrl_remotesvrs_tables2process_id]
FOREIGN KEY ([FK_ObjectID]) REFERENCES [dbo].[Ctrl_RemoteSvrs_Tables2Process] ([ID]);
GO
ALTER TABLE [dbo].[Ctrl_RemoteSvrs_Tables2Process_ColumnExceptions]
CHECK CONSTRAINT [fk_dbo_ctrl_remotesvrs_tables2process_id];
GO
-- Create 'Ctrl_INDXandPKs2Process' table
CREATE TABLE [dbo].[Ctrl_INDXandPKs2Process]
(
      [ID] [SMALLINT] IDENTITY(1, 1) NOT NULL
    , [Program_Name] [VARCHAR](128) NOT NULL
    , [Database_Name] [VARCHAR](128) NOT NULL
    , [Schema_Name] [VARCHAR](25) NOT NULL
    , [Table_Name] [VARCHAR](256) NOT NULL
    , [Index_or_PKName] [VARCHAR](512) NOT NULL
    , [Index_Type] [VARCHAR](128) NOT NULL
    , [Is_Unique] [VARCHAR](56) NULL
    , [Is_PK] [VARCHAR](56) NULL
    , [PK_ColNames] [VARCHAR](1024) NULL
    , [Indx_ColNames] [VARCHAR](1024) NULL
    , [Indx_Options] VARCHAR(MAX) NULL
    , CONSTRAINT [pk_id_ctrl_indxandpks2process_id]
      PRIMARY KEY CLUSTERED ([ID] ASC)
      WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY];
GO
-- Create 'Ctrl_ErrorMsg_Notification_List' table
CREATE TABLE [dbo].[Ctrl_ErrorMsg_Notification_List]
(
      [ID] [INT] IDENTITY(1, 1) NOT NULL
    , [ServerName] [VARCHAR](128) NULL
    , [InstanceName] [VARCHAR](128) NULL
    , [TaskName] [VARCHAR](256) NULL
    , [EmailAddress] [VARCHAR](256) NULL
    , [IsActive] [BIT] NULL
    , CONSTRAINT [pk_dbo_ctrl_errorMsg_notification_list_id]
      PRIMARY KEY CLUSTERED ([ID] ASC)
      WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY];
GO
-- Insert sample data into the control objects created
INSERT INTO [dbo].[Ctrl_RemoteSvrs_Tables2Process]
    ( Application_Name, Local_Table_Name, Local_Schema_Name, Local_DB_Name
    , Remote_Table_Name, Remote_Schema_Name, Remote_DB_Name, Remote_Server_Name
    , Is_Active, Is_Big_Table )
SELECT 'AppName', 'answers', 'dbo', 'StagingDB', 'answers',
       'Remote_SchemaName', 'Remote_DBName', 'RemoteMySQLDB', 1, 1
UNION ALL
SELECT 'AppName', 'federal_states', 'dbo', 'StagingDB', 'federal_states',
       'Remote_SchemaName', 'Remote_DBName', 'RemoteMySQLDB', 1, 0;

INSERT INTO dbo.Ctrl_RemoteSvrs_Tables2Process_ColumnExceptions
    ( FK_ObjectID, Application_Name, Local_Field_Name, Local_Table_Name
    , Local_Schema_Name, Local_DB_Name, Remote_Field_Name, Remote_Table_Name
    , Remote_Schema_Name, Remote_DB_Name, Remote_Server_Name, Exception_Type, Is_Active )
SELECT 1, 'AppName', 'other_value', 'answers', 'dbo', 'StagingDB',
       'other_value', 'answers', 'Remote_Schema_Name', 'Remote_DB_Name',
       'RemoteMySQLDB', 'security', 1;

INSERT INTO dbo.Ctrl_INDXandPKs2Process
    ( Program_Name, Database_Name, Schema_Name, Table_Name, Index_or_PKName
    , Index_Type, Is_Unique, Is_PK, PK_ColNames, Indx_ColNames, Indx_Options )
SELECT 'AppName', 'StagingDB', 'dbo', 'answers',
       'cstore_nonclustered_idx_dbo_answers_multiplecols', 'CLUSTERED COLUMNSTORE', '', '', '',
       'id,oos_id,question_id,question_set_id,answer_provided_by_user_id,answer_option_id,timestamp,oos_questionset_id,owner_user_id',
       'WITH ( DATA_COMPRESSION = COLUMNSTORE_ARCHIVE )'
UNION ALL
SELECT 'AppName', 'StagingDB', 'dbo', 'federal_states',
       'nonclustered_idx_dbo_federal_states_name', 'NONCLUSTERED', '', '', '', 'name', ''
UNION ALL
SELECT 'AppName', 'StagingDB', 'dbo', 'answers',
       'pk_dbo_answers_id', 'CLUSTERED', 'UNIQUE', 'PRIMARY KEY', 'id', '', ''
UNION ALL
SELECT 'AppName', 'StagingDB', 'dbo', 'federal_states',
       'pk_dbo_federal_states_id', 'CLUSTERED', 'UNIQUE', 'PRIMARY KEY', 'id', '', '';

INSERT INTO dbo.Ctrl_ErrorMsg_Notification_List
    ( [ServerName], [InstanceName], [TaskName], [EmailAddress], [IsActive] )
SELECT 'BICortexTestServer', 'TestSQLServer', 'Data_Acquisition_Job', 'myname@emailaddress.com', 1;

/*==============================================================================
STEP 3
Create StagingDB database objects
==============================================================================*/
USE [StagingDB]
GO
CREATE VIEW [dbo].[vw_MysqlReservedWords]
AS
SELECT 'ACCESSIBLE' AS reserved_word, '`ACCESSIBLE`' AS mysql_version
UNION ALL SELECT 'ADD', '`ADD`' UNION ALL SELECT 'ALL', '`ALL`' UNION ALL SELECT 'ALTER', '`ALTER`' UNION ALL SELECT 'ANALYZE', '`ANALYZE`'
UNION ALL SELECT 'AND', '`AND`' UNION ALL SELECT 'AS', '`AS`' UNION ALL SELECT 'ASC', '`ASC`' UNION ALL SELECT 'ASENSITIVE', '`ASENSITIVE`'
UNION ALL SELECT 'BEFORE', '`BEFORE`' UNION ALL SELECT 'BETWEEN', '`BETWEEN`' UNION ALL SELECT 'BIGINT', '`BIGINT`' UNION ALL SELECT 'BINARY', '`BINARY`'
UNION ALL SELECT 'BLOB', '`BLOB`' UNION ALL SELECT 'BOTH', '`BOTH`' UNION ALL SELECT 'BY', '`BY`' UNION ALL SELECT 'CALL', '`CALL`'
UNION ALL SELECT 'CASCADE', '`CASCADE`' UNION ALL SELECT 'CASE', '`CASE`' UNION ALL SELECT 'CHANGE', '`CHANGE`' UNION ALL SELECT 'CHAR', '`CHAR`'
UNION ALL SELECT 'CHARACTER', '`CHARACTER`' UNION ALL SELECT 'CHECK', '`CHECK`' UNION ALL SELECT 'COLLATE', '`COLLATE`' UNION ALL SELECT 'COLUMN', '`COLUMN`'
UNION ALL SELECT 'CONDITION', '`CONDITION`' UNION ALL SELECT 'CONSTRAINT', '`CONSTRAINT`' UNION ALL SELECT 'CONTINUE', '`CONTINUE`' UNION ALL SELECT 'CONVERT', '`CONVERT`'
UNION ALL SELECT 'CREATE', '`CREATE`' UNION ALL SELECT 'CROSS', '`CROSS`' UNION ALL SELECT 'CURRENT_DATE', '`CURRENT_DATE`' UNION ALL SELECT 'CURRENT_TIME', '`CURRENT_TIME`'
UNION ALL SELECT 'CURRENT_TIMESTAMP', '`CURRENT_TIMESTAMP`' UNION ALL SELECT 'CURRENT_USER', '`CURRENT_USER`' UNION ALL SELECT 'CURSOR', '`CURSOR`' UNION ALL SELECT 'DATABASE', '`DATABASE`'
UNION ALL SELECT 'DATABASES', '`DATABASES`' UNION ALL SELECT 'DAY', '`DAY`' UNION ALL SELECT 'HOUR', '`HOUR`' UNION ALL SELECT 'DAY_MICROSECOND', '`DAY_MICROSECOND`'
UNION ALL SELECT 'DAY_MINUTE', '`DAY_MINUTE`' UNION ALL SELECT 'DAY_SECOND', '`DAY_SECOND`' UNION ALL SELECT 'DEC', '`DEC`' UNION ALL SELECT 'DECIMAL', '`DECIMAL`'
UNION ALL SELECT 'DECLARE', '`DECLARE`' UNION ALL SELECT 'DEFAULT', '`DEFAULT`' UNION ALL SELECT 'DELAYED', '`DELAYED`' UNION ALL SELECT 'DELETE', '`DELETE`'
UNION ALL SELECT 'DESC', '`DESC`' UNION ALL SELECT 'DESCRIBE', '`DESCRIBE`' UNION ALL SELECT 'DETERMINISTIC', '`DETERMINISTIC`' UNION ALL SELECT 'DISTINCT', '`DISTINCT`'
UNION ALL SELECT 'DISTINCTROW', '`DISTINCTROW`' UNION ALL SELECT 'DIV', '`DIV`' UNION ALL SELECT 'DOUBLE', '`DOUBLE`' UNION ALL SELECT 'DROP', '`DROP`'
UNION ALL SELECT 'DUAL', '`DUAL`' UNION ALL SELECT 'EACH', '`EACH`' UNION ALL SELECT 'ELSE', '`ELSE`' UNION ALL SELECT 'ELSEIF', '`ELSEIF`'
UNION ALL SELECT 'ENCLOSED', '`ENCLOSED`' UNION ALL SELECT 'ESCAPED', '`ESCAPED`' UNION ALL SELECT 'EXISTS', '`EXISTS`' UNION ALL SELECT 'EXIT', '`EXIT`'
UNION ALL SELECT 'EXPLAIN', '`EXPLAIN`' UNION ALL SELECT 'FALSE', '`FALSE`' UNION ALL SELECT 'FETCH', '`FETCH`' UNION ALL SELECT 'FLOAT', '`FLOAT`'
UNION ALL SELECT 'FLOAT4', '`FLOAT4`' UNION ALL SELECT 'FLOAT8', '`FLOAT8`' UNION ALL SELECT 'FOR', '`FOR`' UNION ALL SELECT 'FORCE', '`FORCE`'
UNION ALL SELECT 'FOREIGN', '`FOREIGN`' UNION ALL SELECT 'FROM', '`FROM`' UNION ALL SELECT 'FULLTEXT', '`FULLTEXT`' UNION ALL SELECT 'GRANT', '`GRANT`'
UNION ALL SELECT 'GROUP', '`GROUP`' UNION ALL SELECT 'HAVING', '`HAVING`' UNION ALL SELECT 'HIGH_PRIORITY', '`HIGH_PRIORITY`' UNION ALL SELECT 'HOUR_MICROSECOND', '`HOUR_MICROSECOND`'
UNION ALL SELECT 'HOUR_MINUTE', '`HOUR_MINUTE`' UNION ALL SELECT 'HOUR_SECOND', '`HOUR_SECOND`' UNION ALL SELECT 'IF', '`IF`' UNION ALL SELECT 'IGNORE', '`IGNORE`'
UNION ALL SELECT 'IN', '`IN`' UNION ALL SELECT 'INDEX', '`INDEX`' UNION ALL SELECT 'INFILE', '`INFILE`' UNION ALL SELECT 'INNER', '`INNER`'
UNION ALL SELECT 'INOUT', '`INOUT`' UNION ALL SELECT 'INSENSITIVE', '`INSENSITIVE`' UNION ALL SELECT 'INSERT', '`INSERT`' UNION ALL SELECT 'INT', '`INT`'
UNION ALL SELECT 'INT1', '`INT1`' UNION ALL SELECT 'INT2', '`INT2`' UNION ALL SELECT 'INT3', '`INT3`' UNION ALL SELECT 'INT4', '`INT4`'
UNION ALL SELECT 'INT8', '`INT8`' UNION ALL SELECT 'INTEGER', '`INTEGER`' UNION ALL SELECT 'INTERVAL', '`INTERVAL`' UNION ALL SELECT 'INTO', '`INTO`'
UNION ALL SELECT 'IS', '`IS`' UNION ALL SELECT 'ITERATE', '`ITERATE`' UNION ALL SELECT 'JOIN', '`JOIN`' UNION ALL SELECT 'KEY', '`KEY`'
UNION ALL SELECT 'KEYS', '`KEYS`' UNION ALL SELECT 'KILL', '`KILL`' UNION ALL SELECT 'LEADING', '`LEADING`' UNION ALL SELECT 'LEAVE', '`LEAVE`'
UNION ALL SELECT 'LEFT', '`LEFT`' UNION ALL SELECT 'LIKE', '`LIKE`' UNION ALL SELECT 'LIMIT', '`LIMIT`' UNION ALL SELECT 'LINEAR', '`LINEAR`'
UNION ALL SELECT 'LINES', '`LINES`' UNION ALL SELECT 'LOAD', '`LOAD`' UNION ALL SELECT 'LOCALTIME', '`LOCALTIME`' UNION ALL SELECT 'LOCALTIMESTAMP', '`LOCALTIMESTAMP`'
UNION ALL SELECT 'LOCK', '`LOCK`' UNION ALL SELECT 'LONG', '`LONG`' UNION ALL SELECT 'LONGBLOB', '`LONGBLOB`' UNION ALL SELECT 'LONGTEXT', '`LONGTEXT`'
UNION ALL SELECT 'LOOP', '`LOOP`' UNION ALL SELECT 'LOW_PRIORITY', '`LOW_PRIORITY`' UNION ALL SELECT 'MASTER_SSL_VERIFY_SERVER_CERT', '`MASTER_SSL_VERIFY_SERVER_CERT`' UNION ALL SELECT 'MATCH', '`MATCH`'
UNION ALL SELECT 'MAXVALUE', '`MAXVALUE`' UNION ALL SELECT 'MEDIUMBLOB', '`MEDIUMBLOB`' UNION ALL SELECT 'MEDIUMINT', '`MEDIUMINT`' UNION ALL SELECT 'MEDIUMTEXT', '`MEDIUMTEXT`'
UNION ALL SELECT 'MIDDLEINT', '`MIDDLEINT`' UNION ALL SELECT 'MINUTE_MICROSECOND', '`MINUTE_MICROSECOND`' UNION ALL SELECT 'MINUTE_SECOND', '`MINUTE_SECOND`' UNION ALL SELECT 'MOD', '`MOD`'
UNION ALL SELECT 'MODIFIES', '`MODIFIES`' UNION ALL SELECT 'NATURAL', '`NATURAL`' UNION ALL SELECT 'NOT', '`NOT`' UNION ALL SELECT 'NO_WRITE_TO_BINLOG', '`NO_WRITE_TO_BINLOG`'
UNION ALL SELECT 'NULL', '`NULL`' UNION ALL SELECT 'NUMERIC', '`NUMERIC`' UNION ALL SELECT 'ON', '`ON`' UNION ALL SELECT 'OPTIMIZE', '`OPTIMIZE`'
UNION ALL SELECT 'OPTION', '`OPTION`' UNION ALL SELECT 'OPTIONALLY', '`OPTIONALLY`' UNION ALL SELECT 'OR', '`OR`' UNION ALL SELECT 'ORDER', '`ORDER`'
UNION ALL SELECT 'OUT', '`OUT`' UNION ALL SELECT 'OUTER', '`OUTER`' UNION ALL SELECT 'OUTFILE', '`OUTFILE`' UNION ALL SELECT 'PRECISION', '`PRECISION`'
UNION ALL SELECT 'PRIMARY', '`PRIMARY`' UNION ALL SELECT 'PROCEDURE', '`PROCEDURE`' UNION ALL SELECT 'PURGE', '`PURGE`' UNION ALL SELECT 'RANGE', '`RANGE`'
UNION ALL SELECT 'READ', '`READ`' UNION ALL SELECT 'READS', '`READS`' UNION ALL SELECT 'READ_WRITE', '`READ_WRITE`' UNION ALL SELECT 'REAL', '`REAL`'
UNION ALL SELECT 'REFERENCES', '`REFERENCES`' UNION ALL SELECT 'REGEXP', '`REGEXP`' UNION ALL SELECT 'RELEASE', '`RELEASE`' UNION ALL SELECT 'RENAME', '`RENAME`'
UNION ALL SELECT 'REPEAT', '`REPEAT`' UNION ALL SELECT 'REPLACE', '`REPLACE`' UNION ALL SELECT 'REQUIRE', '`REQUIRE`' UNION ALL SELECT 'RESIGNAL', '`RESIGNAL`'
UNION ALL SELECT 'RESTRICT', '`RESTRICT`' UNION ALL SELECT 'RETURN', '`RETURN`' UNION ALL SELECT 'REVOKE', '`REVOKE`' UNION ALL SELECT 'RIGHT', '`RIGHT`'
UNION ALL SELECT 'RLIKE', '`RLIKE`' UNION ALL SELECT 'SCHEMA', '`SCHEMA`' UNION ALL SELECT 'SCHEMAS', '`SCHEMAS`' UNION ALL SELECT 'SECOND_MICROSECOND', '`SECOND_MICROSECOND`'
UNION ALL SELECT 'SELECT', '`SELECT`' UNION ALL SELECT 'SENSITIVE', '`SENSITIVE`' UNION ALL SELECT 'SEPARATOR', '`SEPARATOR`' UNION ALL SELECT 'SET', '`SET`'
UNION ALL SELECT 'SHOW', '`SHOW`' UNION ALL SELECT 'SIGNAL', '`SIGNAL`' UNION ALL SELECT 'SMALLINT', '`SMALLINT`' UNION ALL SELECT 'SPATIAL', '`SPATIAL`'
UNION ALL SELECT 'SPECIFIC', '`SPECIFIC`' UNION ALL SELECT 'SQL', '`SQL`' UNION ALL SELECT 'SQLEXCEPTION', '`SQLEXCEPTION`' UNION ALL SELECT 'SQLSTATE', '`SQLSTATE`'
UNION ALL SELECT 'SQLWARNING', '`SQLWARNING`' UNION ALL SELECT 'SQL_BIG_RESULT', '`SQL_BIG_RESULT`' UNION ALL SELECT 'SQL_CALC_FOUND_ROWS', '`SQL_CALC_FOUND_ROWS`' UNION ALL SELECT 'SQL_SMALL_RESULT', '`SQL_SMALL_RESULT`'
UNION ALL SELECT 'SSL', '`SSL`' UNION ALL SELECT 'STARTING', '`STARTING`' UNION ALL SELECT 'STRAIGHT_JOIN', '`STRAIGHT_JOIN`' UNION ALL SELECT 'TABLE', '`TABLE`'
UNION ALL SELECT 'TERMINATED', '`TERMINATED`' UNION ALL SELECT 'THEN', '`THEN`' UNION ALL SELECT 'TINYBLOB', '`TINYBLOB`' UNION ALL SELECT 'TINYINT', '`TINYINT`'
UNION ALL SELECT 'TINYTEXT', '`TINYTEXT`' UNION ALL SELECT 'TO', '`TO`' UNION ALL SELECT 'TRAILING', '`TRAILING`' UNION ALL SELECT 'TRIGGER', '`TRIGGER`'
UNION ALL SELECT 'TRUE', '`TRUE`' UNION ALL SELECT 'UNDO', '`UNDO`' UNION ALL SELECT 'UNION', '`UNION`' UNION ALL SELECT 'UNIQUE', '`UNIQUE`'
UNION ALL SELECT 'UNLOCK', '`UNLOCK`' UNION ALL SELECT 'UNSIGNED', '`UNSIGNED`' UNION ALL SELECT 'UPDATE', '`UPDATE`' UNION ALL SELECT 'USAGE', '`USAGE`'
UNION ALL SELECT 'USE', '`USE`' UNION ALL SELECT 'USING', '`USING`' UNION ALL SELECT 'UTC_DATE', '`UTC_DATE`' UNION ALL SELECT 'UTC_TIME', '`UTC_TIME`'
UNION ALL SELECT 'UTC_TIMESTAMP', '`UTC_TIMESTAMP`' UNION ALL SELECT 'VALUES', '`VALUES`' UNION ALL SELECT 'VARBINARY', '`VARBINARY`' UNION ALL SELECT 'VARCHAR', '`VARCHAR`'
UNION ALL SELECT 'VARCHARACTER', '`VARCHARACTER`' UNION ALL SELECT 'VARYING', '`VARYING`' UNION ALL SELECT 'WHEN', '`WHEN`' UNION ALL SELECT 'WHERE', '`WHERE`'
UNION ALL SELECT 'WHILE', '`WHILE`' UNION ALL SELECT 'WITH', '`WITH`' UNION ALL SELECT 'WRITE', '`WRITE`' UNION ALL SELECT 'XOR', '`XOR`'
UNION ALL SELECT 'YEAR_MONTH', '`YEAR_MONTH`' UNION ALL SELECT 'ZEROFILL', '`ZEROFILL`'
GO
CREATE VIEW [dbo].[vw_MssqlReservedWords]
AS
SELECT 'ADD' AS reserved_word, '[ADD]' AS mssql_version
UNION ALL SELECT 'EXTERNAL', '[EXTERNAL]' UNION ALL SELECT 'PROCEDURE', '[PROCEDURE]' UNION ALL SELECT 'ALL', '[ALL]' UNION ALL SELECT 'FETCH', '[FETCH]'
UNION ALL SELECT 'PUBLIC', '[PUBLIC]' UNION ALL SELECT 'ALTER', '[ALTER]' UNION ALL SELECT 'FILE', '[FILE]' UNION ALL SELECT 'RAISERROR', '[RAISERROR]'
UNION ALL SELECT 'AND', '[AND]' UNION ALL SELECT 'FILLFACTOR', '[FILLFACTOR]' UNION ALL SELECT 'READ', '[READ]' UNION ALL SELECT 'ANY', '[ANY]'
UNION ALL SELECT 'FOR', '[FOR]' UNION ALL SELECT 'READTEXT', '[READTEXT]' UNION ALL SELECT 'AS', '[AS]' UNION ALL SELECT 'FOREIGN', '[FOREIGN]'
UNION ALL SELECT 'RECONFIGURE', '[RECONFIGURE]' UNION ALL SELECT 'ASC', '[ASC]' UNION ALL SELECT 'FREETEXT', '[FREETEXT]' UNION ALL SELECT 'REFERENCES', '[REFERENCES]'
UNION ALL SELECT 'AUTHORIZATION', '[AUTHORIZATION]' UNION ALL SELECT 'FREETEXTTABLE', '[FREETEXTTABLE]' UNION ALL SELECT 'REPLICATION', '[REPLICATION]' UNION ALL SELECT 'BACKUP', '[BACKUP]'
UNION ALL SELECT 'FROM', '[FROM]' UNION ALL SELECT 'RESTORE', '[RESTORE]' UNION ALL SELECT 'BEGIN', '[BEGIN]' UNION ALL SELECT 'FULL', '[FULL]'
UNION ALL SELECT 'RESTRICT', '[RESTRICT]' UNION ALL SELECT 'BETWEEN', '[BETWEEN]' UNION ALL SELECT 'FUNCTION', '[FUNCTION]' UNION ALL SELECT 'RETURN', '[RETURN]'
UNION ALL SELECT 'BREAK', '[BREAK]' UNION ALL SELECT 'GOTO', '[GOTO]' UNION ALL SELECT 'REVERT', '[REVERT]' UNION ALL SELECT 'BROWSE', '[BROWSE]'
UNION ALL SELECT 'GRANT', '[GRANT]' UNION ALL SELECT 'REVOKE', '[REVOKE]' UNION ALL SELECT 'BULK', '[BULK]' UNION ALL SELECT 'GROUP', '[GROUP]'
UNION ALL SELECT 'RIGHT', '[RIGHT]' UNION ALL SELECT 'BY', '[BY]' UNION ALL SELECT 'HAVING', '[HAVING]' UNION ALL SELECT 'ROLLBACK', '[ROLLBACK]'
UNION ALL SELECT 'CASCADE', '[CASCADE]' UNION ALL SELECT 'HOLDLOCK', '[HOLDLOCK]' UNION ALL SELECT 'ROWCOUNT', '[ROWCOUNT]' UNION ALL SELECT 'CASE', '[CASE]'
UNION ALL SELECT 'IDENTITY', '[IDENTITY]' UNION ALL SELECT 'ROWGUIDCOL', '[ROWGUIDCOL]' UNION ALL SELECT 'CHECK', '[CHECK]' UNION ALL SELECT 'IDENTITY_INSERT', '[IDENTITY_INSERT]'
UNION ALL SELECT 'RULE', '[RULE]' UNION ALL SELECT 'CHECKPOINT', '[CHECKPOINT]' UNION ALL SELECT 'IDENTITYCOL', '[IDENTITYCOL]' UNION ALL SELECT 'SAVE', '[SAVE]'
UNION ALL SELECT 'CLOSE', '[CLOSE]' UNION ALL SELECT 'IF', '[IF]' UNION ALL SELECT 'SCHEMA', '[SCHEMA]' UNION ALL SELECT 'CLUSTERED', '[CLUSTERED]'
UNION ALL SELECT 'IN', '[IN]' UNION ALL SELECT 'SECURITYAUDIT', '[SECURITYAUDIT]' UNION ALL SELECT 'COALESCE', '[COALESCE]' UNION ALL SELECT 'INDEX', '[INDEX]'
UNION ALL SELECT 'SELECT', '[SELECT]' UNION ALL SELECT 'COLLATE', '[COLLATE]' UNION ALL SELECT 'INNER', '[INNER]' UNION ALL SELECT 'SEMANTICKEYPHRASETABLE', '[SEMANTICKEYPHRASETABLE]'
UNION ALL SELECT 'COLUMN', '[COLUMN]' UNION ALL SELECT 'INSERT', '[INSERT]' UNION ALL SELECT 'SEMANTICSIMILARITYDETAILSTABLE', '[SEMANTICSIMILARITYDETAILSTABLE]' UNION ALL SELECT 'COMMIT', '[COMMIT]'
UNION ALL SELECT 'INTERSECT', '[INTERSECT]' UNION ALL SELECT 'SEMANTICSIMILARITYTABLE', '[SEMANTICSIMILARITYTABLE]' UNION ALL SELECT 'COMPUTE', '[COMPUTE]' UNION ALL SELECT 'INTO', '[INTO]'
UNION ALL SELECT 'SESSION_USER', '[SESSION_USER]' UNION ALL SELECT 'CONSTRAINT', '[CONSTRAINT]' UNION ALL SELECT 'IS', '[IS]' UNION ALL SELECT 'SET', '[SET]'
UNION ALL SELECT 'CONTAINS', '[CONTAINS]' UNION ALL SELECT 'JOIN', '[JOIN]' UNION ALL SELECT 'SETUSER', '[SETUSER]' UNION ALL SELECT 'CONTAINSTABLE', '[CONTAINSTABLE]'
UNION ALL SELECT 'KEY', '[KEY]' UNION ALL SELECT 'SHUTDOWN', '[SHUTDOWN]' UNION ALL SELECT 'CONTINUE', '[CONTINUE]' UNION ALL SELECT 'KILL', '[KILL]'
UNION ALL SELECT 'SOME', '[SOME]' UNION ALL SELECT 'CONVERT', '[CONVERT]' UNION ALL SELECT 'LEFT', '[LEFT]' UNION ALL SELECT 'STATISTICS', '[STATISTICS]'
UNION ALL SELECT 'CREATE', '[CREATE]' UNION ALL SELECT 'LIKE', '[LIKE]' UNION ALL SELECT 'SYSTEM_USER', '[SYSTEM_USER]' UNION ALL SELECT 'CROSS', '[CROSS]'
UNION ALL SELECT 'LINENO', '[LINENO]' UNION ALL SELECT 'TABLE', '[TABLE]' UNION ALL SELECT 'CURRENT', '[CURRENT]' UNION ALL SELECT 'LOAD', '[LOAD]'
UNION ALL SELECT 'TABLESAMPLE', '[TABLESAMPLE]' UNION ALL SELECT 'CURRENT_DATE', '[CURRENT_DATE]' UNION ALL SELECT 'MERGE', '[MERGE]' UNION ALL SELECT 'TEXTSIZE', '[TEXTSIZE]'
UNION ALL SELECT 'CURRENT_TIME', '[CURRENT_TIME]' UNION ALL SELECT 'NATIONAL', '[NATIONAL]' UNION ALL SELECT 'THEN', '[THEN]' UNION ALL SELECT 'CURRENT_TIMESTAMP', '[CURRENT_TIMESTAMP]'
UNION ALL SELECT 'NOCHECK', '[NOCHECK]' UNION ALL SELECT 'TO', '[TO]' UNION ALL SELECT 'CURRENT_USER', '[CURRENT_USER]' UNION ALL SELECT 'NONCLUSTERED', '[NONCLUSTERED]'
UNION ALL SELECT 'TOP', '[TOP]' UNION ALL SELECT 'CURSOR', '[CURSOR]' UNION ALL SELECT 'NOT', '[NOT]' UNION ALL SELECT 'TRAN', '[TRAN]'
UNION ALL SELECT 'DATABASE', '[DATABASE]' UNION ALL SELECT 'NULL', '[NULL]' UNION ALL SELECT 'TRANSACTION', '[TRANSACTION]' UNION ALL SELECT 'DBCC', '[DBCC]'
UNION ALL SELECT 'NULLIF', '[NULLIF]' UNION ALL SELECT 'TRIGGER', '[TRIGGER]' UNION ALL SELECT 'DEALLOCATE', '[DEALLOCATE]' UNION ALL SELECT 'OF', '[OF]'
UNION ALL SELECT 'TRUNCATE', '[TRUNCATE]' UNION ALL SELECT 'DECLARE', '[DECLARE]' UNION ALL SELECT 'OFF', '[OFF]' UNION ALL SELECT 'TRY_CONVERT', '[TRY_CONVERT]'
UNION ALL SELECT 'DEFAULT', '[DEFAULT]' UNION ALL SELECT 'OFFSETS', '[OFFSETS]' UNION ALL SELECT 'TSEQUAL', '[TSEQUAL]' UNION ALL SELECT 'DELETE', '[DELETE]'
UNION ALL SELECT 'ON', '[ON]' UNION ALL SELECT 'UNION', '[UNION]' UNION ALL SELECT 'DENY', '[DENY]' UNION ALL SELECT 'OPEN', '[OPEN]'
UNION ALL SELECT 'UNIQUE', '[UNIQUE]' UNION ALL SELECT 'DESC', '[DESC]' UNION ALL SELECT 'OPENDATASOURCE', '[OPENDATASOURCE]' UNION ALL SELECT 'UNPIVOT', '[UNPIVOT]'
UNION ALL SELECT 'DISK', '[DISK]' UNION ALL SELECT 'OPENQUERY', '[OPENQUERY]' UNION ALL SELECT 'UPDATE', '[UPDATE]' UNION ALL SELECT 'DISTINCT', '[DISTINCT]'
UNION ALL SELECT 'OPENROWSET', '[OPENROWSET]' UNION ALL SELECT 'UPDATETEXT', '[UPDATETEXT]' UNION ALL SELECT 'DISTRIBUTED', '[DISTRIBUTED]' UNION ALL SELECT 'OPENXML', '[OPENXML]'
UNION ALL SELECT 'USE', '[USE]' UNION ALL SELECT 'DOUBLE', '[DOUBLE]' UNION ALL SELECT 'OPTION', '[OPTION]' UNION ALL SELECT 'USER', '[USER]'
UNION ALL SELECT 'DROP', '[DROP]' UNION ALL SELECT 'OR', '[OR]' UNION ALL SELECT 'VALUES', '[VALUES]' UNION ALL SELECT 'DUMP', '[DUMP]'
UNION ALL SELECT 'ORDER', '[ORDER]' UNION ALL SELECT 'VARYING', '[VARYING]' UNION ALL SELECT 'ELSE', '[ELSE]' UNION ALL SELECT 'OUTER', '[OUTER]'
UNION ALL SELECT 'VIEW', '[VIEW]' UNION ALL SELECT 'END', '[END]' UNION ALL SELECT 'OVER', '[OVER]' UNION ALL SELECT 'WAITFOR', '[WAITFOR]'
UNION ALL SELECT 'ERRLVL', '[ERRLVL]' UNION ALL SELECT 'PERCENT', '[PERCENT]' UNION ALL SELECT 'WHEN', '[WHEN]' UNION ALL SELECT 'ESCAPE', '[ESCAPE]'
UNION ALL SELECT 'PIVOT', '[PIVOT]' UNION ALL SELECT 'WHERE', '[WHERE]' UNION ALL SELECT 'EXCEPT', '[EXCEPT]' UNION ALL SELECT 'PLAN', '[PLAN]'
UNION ALL SELECT 'WHILE', '[WHILE]' UNION ALL SELECT 'EXEC', '[EXEC]' UNION ALL SELECT 'PRECISION', '[PRECISION]' UNION ALL SELECT 'WITH', '[WITH]'
UNION ALL SELECT 'EXECUTE', '[EXECUTE]' UNION ALL SELECT 'PRIMARY', '[PRIMARY]' UNION ALL SELECT 'WITHIN GROUP', '[WITHIN GROUP]' UNION ALL SELECT 'EXISTS', '[EXISTS]'
UNION ALL SELECT 'PRINT', '[PRINT]' UNION ALL SELECT 'WRITETEXT', '[WRITETEXT]' UNION ALL SELECT 'EXIT', '[EXIT]' UNION ALL SELECT 'PROC', '[PROC]'
UNION ALL SELECT 'USER_ID', '[USER_ID]' UNION ALL SELECT 'SEQUENCE', '[SEQUENCE]'
GO

These tables/views, as mentioned before, will be referenced in subsequent posts and code, as they provide the process with the relevant metadata controlling tables, table attributes, indexes, error notification alerts etc., and, should any change be required, provide a central point of reference for implementation. Also, all entries made into the four tables above correspond to my development environment, so if replicating this functionality is your goal, I suggest adjusting the data entered/used in this post to match your environment.

As part of this preliminary set-up we will also create the AdminDBA database (named this way, instead of ErrorsDB, only because it would be too much hassle to change the already well-documented code from one of my previous posts). This database will be used to log any execution errors, which can in turn determine further package workflow, e.g. whether a subsequent task should or shouldn't execute. A stored procedure responsible for sending out error notifications will also be located here, as will a function concatenating the e-mail addresses used by the package.
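Although the AdminDBA schema itself is covered in the posts linked below, the gating idea can be illustrated with a short, heavily hedged sketch; the log table name LogSSISErrors and its columns are placeholders standing in for whatever the actual error log objects are called in your implementation:

-- Hypothetical gate (object and column names are placeholders): halt
-- downstream tasks if the current execution has already logged errors
DECLARE @Execution_Instance_GUID UNIQUEIDENTIFIER;  -- populated from the SSIS ExecutionInstanceGUID system variable
IF EXISTS ( SELECT 1
            FROM AdminDBA.dbo.LogSSISErrors
            WHERE ExecutionInstanceGUID = @Execution_Instance_GUID )
BEGIN
    RAISERROR('Errors were logged during acquisition; halting downstream tasks.', 16, 1);
END;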

I have written extensively on how error capture and logging work in this process in my two previous blog posts (HERE and HERE), so I won't repeat myself here. For full details on the schema and the actual code used to create this database, please view my previous blog posts HERE and HERE.

Once all the databases and their objects have been created successfully, the stored procedure below, which sends out notifications on any errors that occur during package runtime (the highlighted line needs to be modified with a valid reporting platform URL pointing to the AdminDBA database log report), can be created, along with a scalar function converting tabular e-mail address entries into a semicolon-separated string. These two objects will later be incorporated into the SSIS package to manage error notification distribution via e-mail.

/*====================================================================================
STEP 1
Create 'error distribution' stored procedure to manage error notifications based on
the executing stored procedure name and the reporting platform in use (see the
highlighted line). When implementing, please replace 'https://YourReportingPlatform'
with a valid URL pointing to the reporting platform, e.g. SSRS, Tableau etc., where a
detailed report based on AdminDBA database logs can be accessed.
====================================================================================*/
USE [AdminDBA]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[usp_sendBIGroupETLFailMessage]
(
      @Execution_Instance_GUID UNIQUEIDENTIFIER
    , @Package_Start_DateTime DATETIME
    , @Error_Message NVARCHAR(MAX)
    , @DBMail_Profile_Name VARCHAR(100)
    , @DBMail_Recipients VARCHAR(1024)
    , @DBMail_Msg_Body_Format VARCHAR(20)
    , @DBMail_Msg_Subject NVARCHAR(255)
    , @DBMail_Msg_Importance VARCHAR(6)
    , @Package_Name NVARCHAR(255)
    , @Process_Name NVARCHAR(255)
    , @Object_Name NVARCHAR(255)
)
AS
BEGIN
    IF OBJECT_ID('tempdb..#Temp') IS NOT NULL
    BEGIN
        DROP TABLE #Temp
    END
    SELECT  COALESCE(@Package_Name, 'Unknown') AS PackageName,
            COALESCE(CAST(DB_NAME() AS VARCHAR(128)), 'Unknown') AS DatabaseName,
            COALESCE(CAST(@Execution_Instance_GUID AS VARCHAR(60)), 'Unknown') AS ExecutionInstanceGUID,
            COALESCE(CONVERT(VARCHAR(50), @Package_Start_DateTime, 120), 'Unknown') AS PackageStartDateTime,
            COALESCE(CONVERT(VARCHAR(50), SYSDATETIME(), 120), 'Unknown') AS EventDateTime,
            COALESCE(@Object_Name, 'Unknown') AS ObjectName,
            COALESCE(@Process_Name, 'Unknown') AS ErrorProcedure,
            COALESCE(@Error_Message, 'Unknown') AS ErrorMessage
    INTO    #Temp

    UPDATE  #Temp
    SET     ObjectName = 'Unknown'
    WHERE   ObjectName = ''

    IF OBJECT_ID('tempdb..#Msg') IS NOT NULL
    BEGIN
        DROP TABLE [#Msg]
    END
    CREATE TABLE #Msg
    (
          [ID] [INT] IDENTITY(1, 1) NOT NULL
        , [ProcessName] [VARCHAR](255) NULL
        , [MsgText] VARCHAR(1024) NULL
    );
    INSERT INTO #Msg ([ProcessName], [MsgText])
    SELECT 'usp_updateLogSSISErrorsDBObjects' AS ProcessName,
           @@SERVERNAME + ' instance metadata update process for package ' + @Package_Name + ' has encountered an error during processing' AS MsgText
    UNION ALL
    SELECT 'usp_checkRemoteSvrMySQLTablesSchemaChanges',
           'Table schema definition reconciliation failed between ' + @@SERVERNAME + ' and the remote server for package ' + @Package_Name
    UNION ALL
    SELECT 'usp_checkRemoteSvrConnectionStatus',
           'Connection from ' + @@SERVERNAME + ' to a remote/linked server cannot be established at this time for package ' + @Package_Name
    UNION ALL
    SELECT 'usp_checkRemoteSvrDBvsLocalDBRecCounts',
           'Preliminary record count between a remote/linked server and staging database on ' + @@SERVERNAME + ' server is different'
    UNION ALL
    SELECT 'usp_runCreateDropStagingIDXs',
           'Creating/dropping staging environment indexes procedure for package ' + @Package_Name + ' raised errors during execution on ' + @@SERVERNAME + ' server'
    UNION ALL
    SELECT 'Non-specyfic SSIS Job Transformation Failure',
           'SSIS package ' + @Package_Name + ' failed during execution on ' + @@SERVERNAME + ' server'
    UNION ALL
    SELECT 'usp_checkRemoteSvrDBvsLocalDBSyncErrors',
           'SSIS package ' + @Package_Name + ' finished executing; however, some errors were raised at runtime on ' + @@SERVERNAME + ' server'
    UNION ALL
    SELECT 'usp_runUpdateStagingDBStatistics',
           'Statistics update step in ' + @Package_Name + ' package failed during execution on ' + @@SERVERNAME + ' server'

    DECLARE @Heading NVARCHAR(1024) =
    (
        SELECT MsgText FROM #Msg WHERE ProcessName = @Process_Name
    )
    DECLARE @tableHTML NVARCHAR(MAX)
    SET @tableHTML =
        '<H3><span style="color: #ff0000;">' + @Heading
        + ' <img src="http://tinymce.cachefly.net/4.1/plugins/emoticons/img/smiley-frown.gif" alt="frown" /></H3>'
        + N'<p><span style="color: #333333;">Click on the <a class="btn" href="https://YourReportingPlatform">LINK</a> to view more detailed execution error logs or refer to the table below for info on the recent event(s).</p>'
        + N'<table border="1">'
        + N'<tr><th>Package Name</th>'
        + N'<th>Database Name</th>'
        + N'<th>Execution Instance GUID</th>'
        + N'<th>Package Start DateTime</th>'
        + N'<th>Event DateTime</th>'
        + N'<th>Affected Object Name</th>'
        + N'<th>Error Procedure/Process Name</th>'
        + N'<th>Error Message</th></tr><font size="2">'
        + CAST((SELECT td = PackageName, '',
                       td = DatabaseName, '',
                       td = ExecutionInstanceGUID, '',
                       td = PackageStartDateTime, '',
                       td = EventDateTime, '',
                       td = ObjectName, '',
                       td = ErrorProcedure, '',
                       td = ErrorMessage, ''
                FROM #Temp
                FOR XML PATH('tr'), TYPE) AS NVARCHAR(MAX))
        + N'</font></table>';
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = @DBMail_Profile_Name,
        @recipients = @DBMail_Recipients,
        @body_format = @DBMail_Msg_Body_Format,
        @subject = @DBMail_Msg_Subject,
        @body = @tableHTML,
        @importance = @DBMail_Msg_Importance

    IF OBJECT_ID('tempdb..#Temp') IS NOT NULL
    BEGIN
        DROP TABLE #Temp
    END
    IF OBJECT_ID('tempdb..#Msg') IS NOT NULL
    BEGIN
        DROP TABLE [#Msg]
    END
END
GO
/*====================================================================================
STEP 2
Create a row-merging function to concatenate multiple e-mail addresses into a single
line for error notification e-mail distribution.
====================================================================================*/
CREATE FUNCTION [dbo].[udf_getErrorEmailDistributionArray]
(
      @servername VARCHAR(128)
    , @taskname VARCHAR(128)
)
RETURNS VARCHAR(1024)
AS
BEGIN
    DECLARE @string VARCHAR(1024);
    SELECT @string =
    (
        SELECT STUFF((SELECT ';' + [EmailAddress]
                      FROM [ControlDB].[dbo].[Ctrl_ErrorMsg_Notification_List]
                      WHERE IsActive = 1
                            AND ServerName + '\' + InstanceName = @servername
                            AND TaskName = @taskname
                      FOR XML PATH('')), 1, 1, '') AS emailaddresses
    );
    RETURN @string;
END;
GO
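For reference, a hedged example of how these two objects could be wired together, e.g. from an Execute SQL Task, might look as follows. The Database Mail profile name, GUID and date values are illustrative assumptions, while the server, instance and task names match the sample control data inserted earlier:

-- Sketch: build the recipient list from the control table and fire the notification
DECLARE @recipients VARCHAR(1024) =
    AdminDBA.dbo.udf_getErrorEmailDistributionArray('BICortexTestServer\TestSQLServer',
                                                    'Data_Acquisition_Job');

EXEC AdminDBA.dbo.usp_sendBIGroupETLFailMessage
    @Execution_Instance_GUID = 'D8E54D2E-6A7C-4C56-9E4F-0F2D3C1B2A10', -- normally supplied by SSIS
    @Package_Start_DateTime  = '2017-01-01 06:00:00',                  -- illustrative value
    @Error_Message           = N'Sample error message',
    @DBMail_Profile_Name     = 'DefaultDBMailProfile',                 -- assumed profile name
    @DBMail_Recipients       = @recipients,
    @DBMail_Msg_Body_Format  = 'HTML',
    @DBMail_Msg_Subject      = N'Data acquisition job failure',
    @DBMail_Msg_Importance   = 'High',
    @Package_Name            = N'Data_Acquisition_Job',
    @Process_Name            = N'usp_checkRemoteSvrConnectionStatus',
    @Object_Name             = N'answers';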

A sample e-mail notification (provided Database Mail is enabled on the SQL Server instance used) may look as per the image below. Notice the embedded hyperlink pointing to a more detailed report which can be retrieved to analyse the error log entries in the AdminDBA database.

You will notice that upon running the above scripts, as well as creating the AdminDBA database with all its related tables and stored procedures, the following objects become available in Object Explorer.

One final thing in this preliminary phase is to create a linked server between the source and target databases. Since our source data resides on a remote MySQL instance, the simplest way to connect to it is through a linked server connection. In this example I have downloaded the MySQL ODBC driver (Connector/ODBC, distributed by Oracle) for the Windows environment and, after installation, configured it with the source database credentials.

Once the connection configuration is complete and we can connect to the remote host, it is just a matter of creating a linked server from SQL Server and validating the setup by querying remote data.
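While the ODBC configuration screens are environment-specific, the linked server definition and a quick smoke test can be sketched in T-SQL as below. The DSN name MySQL_DSN and the credentials are placeholders for your own ODBC setup, and the GUI route via SSMS achieves the same result:

-- Sketch: create a linked server over the MySQL ODBC DSN and validate it
EXEC master.dbo.sp_addlinkedserver
    @server     = N'RemoteMySQLDB',
    @srvproduct = N'MySQL',
    @provider   = N'MSDASQL',       -- OLE DB provider for ODBC
    @datasrc    = N'MySQL_DSN';     -- placeholder system DSN name

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname  = N'RemoteMySQLDB',
    @useself     = 'False',
    @locallogin  = NULL,
    @rmtuser     = N'mysql_user',   -- placeholder credentials
    @rmtpassword = N'mysql_password';

-- Validate the setup by querying remote data through OPENQUERY
SELECT *
FROM OPENQUERY(RemoteMySQLDB, 'SELECT * FROM federal_states LIMIT 10');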

In Part 2 of this series I will dive into the nuts and bolts of how large-table data can be migrated across, as well as some pre-acquisition activities, e.g. checking the remote server connection, dropping existing indexes etc.

