Microsoft SQL Server Notes For Professionals
SQL Server®
200+ pages of professional hints and tricks
Disclaimer
GoalKicker.com – Free Programming Books
This is an unofficial free book created for educational purposes and is not affiliated with official Microsoft® SQL Server® group(s) or company(s).
All trademarks and registered trademarks are the property of their respective owners.
Contents
About ................................................................................................................................................................................... 1
Chapter 1: Getting started with Microsoft SQL Server .............................................................................. 2
Section 1.1: INSERT / SELECT / UPDATE / DELETE: the basics of Data Manipulation Language ......................... 2
Section 1.2: SELECT all rows and columns from a table ............................................................................................ 6
Section 1.3: UPDATE Specific Row ................................................................................................................................ 6
Section 1.4: DELETE All Rows ........................................................................................................................................ 7
Section 1.5: Comments in code .................................................................................................................................... 7
Section 1.6: PRINT .......................................................................................................................................................... 8
Section 1.7: Select rows that match a condition ......................................................................................................... 8
Section 1.8: UPDATE All Rows ....................................................................................................................................... 8
Section 1.9: TRUNCATE TABLE ..................................................................................................................................... 9
Section 1.10: Retrieve Basic Server Information ......................................................................................................... 9
Section 1.11: Create new table and insert records from old table ............................................................................. 9
Section 1.12: Using Transactions to change data safely ......................................................................................... 10
Section 1.13: Getting Table Row Count ...................................................................................................................... 11
Chapter 2: Data Types ............................................................................................................................................. 12
Section 2.1: Exact Numerics ........................................................................................................................................ 12
Section 2.2: Approximate Numerics .......................................................................................................................... 13
Section 2.3: Date and Time ........................................................................................................................................ 13
Section 2.4: Character Strings .................................................................................................................................... 14
Section 2.5: Unicode Character Strings .................................................................................................................... 14
Section 2.6: Binary Strings .......................................................................................................................................... 14
Section 2.7: Other Data Types ................................................................................................................................... 14
Chapter 3: Converting data types ..................................................................................................................... 15
Section 3.1: TRY PARSE ............................................................................................................................................... 15
Section 3.2: TRY CONVERT ......................................................................................................................................... 15
Section 3.3: TRY CAST ................................................................................................................................................. 16
Section 3.4: Cast .......................................................................................................................................................... 16
Section 3.5: Convert .................................................................................................................................................... 16
Chapter 4: User Defined Table Types ............................................................................................................. 18
Section 4.1: Creating a UDT with a single int column that is also a primary key ...................................................... 18
Section 4.2: Creating a UDT with multiple columns ................................................................................................. 18
Section 4.3: Creating a UDT with a unique constraint: ............................................................................................ 18
Section 4.4: Creating a UDT with a primary key and a column with a default value: ......................................... 18
Chapter 5: SELECT statement .............................................................................................................................. 19
Section 5.1: Basic SELECT from table ........................................................................................................................ 19
Section 5.2: Filter rows using WHERE clause ........................................................................................................... 19
Section 5.3: Sort results using ORDER BY ................................................................................................................. 19
Section 5.4: Group result using GROUP BY ............................................................................................................... 19
Section 5.5: Filter groups using HAVING clause ....................................................................................................... 20
Section 5.6: Returning only first N rows .................................................................................................................... 20
Section 5.7: Pagination using OFFSET FETCH .......................................................................................................... 20
Section 5.8: SELECT without FROM (no data source) ............................................................................................... 20
Chapter 6: Alias Names in SQL Server ............................................................................................................. 21
Section 6.1: Giving alias after Derived table name .................................................................................................. 21
Section 6.2: Using AS ................................................................................................................................................... 21
Section 6.3: Using = ..................................................................................................................................................... 21
Section 6.4: Without using AS ..................................................................................................................................... 21
Chapter 7: NULLs ........................................................................................................................................................ 22
Section 7.1: COALESCE () ............................................................................................................................................ 22
Section 7.2: ANSI NULLS ............................................................................................................................................. 22
Section 7.3: ISNULL() ................................................................................................................................................... 23
Section 7.4: Is null / Is not null .................................................................................................................................... 23
Section 7.5: NULL comparison ................................................................................................................................... 23
Section 7.6: NULL with NOT IN SubQuery ................................................................................................................. 24
Chapter 8: Variables ................................................................................................................................................. 26
Section 8.1: Declare a Table Variable ........................................................................................................................ 26
Section 8.2: Updating variables using SELECT ......................................................................................................... 26
Section 8.3: Declare multiple variables at once, with initial values ........................................................................ 27
Section 8.4: Updating a variable using SET .............................................................................................................. 27
Section 8.5: Updating variables by selecting from a table ..................................................................................... 28
Section 8.6: Compound assignment operators ........................................................................................................ 28
Chapter 9: Dates ......................................................................................................................................................... 29
Section 9.1: Date & Time Formatting using CONVERT ............................................................................................ 29
Section 9.2: Date & Time Formatting using FORMAT ............................................................................................. 30
Section 9.3: DATEADD for adding and subtracting time periods ........................................................................... 31
Section 9.4: Create function to calculate a person's age on a specific date ........................................................ 32
Section 9.5: Get the current DateTime ...................................................................................................................... 32
Section 9.6: Getting the last day of a month ............................................................................................................ 33
Section 9.7: CROSS PLATFORM DATE OBJECT ....................................................................................................... 33
Section 9.8: Return just Date from a DateTime ....................................................................................................... 33
Section 9.9: DATEDIFF for calculating time period differences ............................................................................... 34
Section 9.10: DATEPART & DATENAME ..................................................................................................................... 34
Section 9.11: Date parts reference ............................................................................................................................. 35
Section 9.12: Date Format Extended ......................................................................................................................... 35
Chapter 10: Generating a range of dates ...................................................................................................... 39
Section 10.1: Generating Date Range With Recursive CTE ...................................................................................... 39
Section 10.2: Generating a Date Range With a Tally Table .................................................................................... 39
Chapter 11: Database Snapshots ........................................................................................................................ 40
Section 11.1: Create a database snapshot ................................................................................................................. 40
Section 11.2: Restore a database snapshot .............................................................................................................. 40
Section 11.3: DELETE Snapshot ................................................................................................................................... 40
Chapter 12: COALESCE .............................................................................................................................................. 41
Section 12.1: Using COALESCE to Build Comma-Delimited String .......................................................................... 41
Section 12.2: Getting the first not null from a list of column values ....................................................................... 41
Section 12.3: Coalesce basic Example ....................................................................................................................... 41
Chapter 13: IF...ELSE ................................................................................................................................................... 43
Section 13.1: Single IF statement ................................................................................................................................ 43
Section 13.2: Multiple IF Statements .......................................................................................................................... 43
Section 13.3: Single IF..ELSE statement ...................................................................................................................... 43
Section 13.4: Multiple IF... ELSE with final ELSE Statements ..................................................................................... 44
Section 13.5: Multiple IF...ELSE Statements ................................................................................................................ 44
Chapter 14: CASE Statement ................................................................................................................................ 45
Section 14.1: Simple CASE statement ......................................................................................................................... 45
Section 14.2: Searched CASE statement ................................................................................................................... 45
Chapter 15: INSERT INTO ........................................................................................................................................ 46
Section 15.1: INSERT multiple rows of data ............................................................................................................... 46
Section 15.2: Use OUTPUT to get the new Id ............................................................................................................ 46
Section 15.3: INSERT from SELECT Query Results ................................................................................................... 47
Section 15.4: INSERT a single row of data ................................................................................................................ 47
Section 15.5: INSERT on specific columns ................................................................................................................. 47
Section 15.6: INSERT Hello World INTO table ........................................................................................................... 47
Chapter 16: MERGE ..................................................................................................................................................... 48
Section 16.1: MERGE to Insert / Update / Delete ...................................................................................................... 48
Section 16.2: Merge Using CTE Source ...................................................................................................................... 49
Section 16.3: Merge Example - Synchronize Source And Target Table ................................................................. 49
Section 16.4: MERGE using Derived Source Table .................................................................................................... 50
Section 16.5: Merge using EXCEPT ............................................................................................................................. 50
Chapter 17: CREATE VIEW ....................................................................................................................................... 52
Section 17.1: CREATE Indexed VIEW ........................................................................................................................... 52
Section 17.2: CREATE VIEW ......................................................................................................................................... 52
Section 17.3: CREATE VIEW With Encryption ............................................................................................................ 53
Section 17.4: CREATE VIEW With INNER JOIN .......................................................................................................... 53
Section 17.5: Grouped VIEWs ...................................................................................................................................... 53
Section 17.6: UNION-ed VIEWs ................................................................................................................................... 54
Chapter 18: Views ........................................................................................................................................................ 55
Section 18.1: Create a view with schema binding ..................................................................................................... 55
Section 18.2: Create a view ......................................................................................................................................... 55
Section 18.3: Create or replace view .......................................................................................................................... 55
Chapter 19: UNION ...................................................................................................................................................... 56
Section 19.1: Union and union all ................................................................................................................................ 56
Chapter 20: TRY/CATCH ......................................................................................................................................... 59
Section 20.1: Transaction in a TRY/CATCH .............................................................................................................. 59
Section 20.2: Raising errors in try-catch block ........................................................................................................ 59
Section 20.3: Raising info messages in try catch block .......................................................................................... 60
Section 20.4: Re-throwing exception generated by RAISERROR ........................................................................... 60
Section 20.5: Throwing exception in TRY/CATCH blocks ....................................................................................... 60
Chapter 21: WHILE loop ............................................................................................................................................ 62
Section 21.1: Using While loop .................................................................................................................................... 62
Section 21.2: While loop with min aggregate function usage ................................................................................. 62
Chapter 22: OVER Clause ........................................................................................................................................ 63
Section 22.1: Cumulative Sum .................................................................................................................................... 63
Section 22.2: Using Aggregation functions with OVER ........................................................................................... 63
Section 22.3: Dividing Data into equally-partitioned buckets using NTILE ........................................................... 64
Section 22.4: Using Aggregation functions to find the most recent records .......................................................... 64
Chapter 23: GROUP BY ............................................................................................................................................. 66
Section 23.1: Simple Grouping .................................................................................................................................... 66
Section 23.2: GROUP BY multiple columns ............................................................................................................... 66
Section 23.3: GROUP BY with ROLLUP and CUBE .................................................................................................... 67
Section 23.4: Group by with multiple tables, multiple columns .............................................................................. 68
Section 23.5: HAVING .................................................................................................................................................. 69
Chapter 24: ORDER BY ............................................................................................................................................ 71
Section 24.1: Simple ORDER BY clause ...................................................................................................................... 71
Section 24.2: ORDER BY multiple fields .................................................................................................................... 71
Section 24.3: Custom Ordering .................................................................................................................................. 71
Section 24.4: ORDER BY with complex logic ............................................................................................................ 72
Chapter 25: The STUFF Function ........................................................................................................................ 73
Section 25.1: Using FOR XML to Concatenate Values from Multiple Rows ........................................................... 73
Section 25.2: Basic Character Replacement with STUFF() ..................................................................................... 73
Section 25.3: Basic Example of STUFF() function .................................................................................................... 74
Section 25.4: STUFF for comma-separated values in SQL Server ........................................................................... 74
Section 25.5: Obtain column names separated with comma (not a list) .............................................................. 74
Chapter 26: JSON in SQL Server ......................................................................................................................... 76
Section 26.1: Index on JSON properties by using computed columns .................................................................. 76
Section 26.2: Join parent and child JSON entities using CROSS APPLY OPENJSON ........................................... 77
Section 26.3: Format Query Results as JSON with FOR JSON .............................................................................. 78
Section 26.4: Parse JSON text .................................................................................................................................... 78
Section 26.5: Format one table row as a single JSON object using FOR JSON .................................................. 78
Section 26.6: Parse JSON text using OPENJSON function ..................................................................................... 79
Chapter 27: OPENJSON ........................................................................................................................................... 80
Section 27.1: Transform JSON array into set of rows ............................................................................................. 80
Section 27.2: Get key:value pairs from JSON text ................................................................................................... 80
Section 27.3: Transform nested JSON fields into set of rows ................................................................................ 80
Section 27.4: Extracting inner JSON sub-objects ..................................................................................................... 81
Section 27.5: Working with nested JSON sub-arrays .............................................................................................. 81
Chapter 28: FOR JSON ............................................................................................................................................. 83
Section 28.1: FOR JSON PATH ................................................................................................................................... 83
Section 28.2: FOR JSON PATH with column aliases ................................................................................................ 83
Section 28.3: FOR JSON clause without array wrapper (single object in output) ............................................... 83
Section 28.4: INCLUDE_NULL_VALUES .................................................................................................................... 84
Section 28.5: Wrapping results with ROOT object ................................................................................................... 84
Section 28.6: FOR JSON AUTO .................................................................................................................................. 84
Section 28.7: Creating custom nested JSON structure ........................................................................................... 85
Chapter 29: Queries with JSON data ................................................................................................................ 86
Section 29.1: Using values from JSON in query ....................................................................................................... 86
Section 29.2: Using JSON values in reports ............................................................................................................. 86
Section 29.3: Filter-out bad JSON text from query results ..................................................................................... 86
Section 29.4: Update value in JSON column ............................................................................................................ 86
Section 29.5: Append new value into JSON array ................................................................................................... 87
Section 29.6: JOIN table with inner JSON collection ............................................................................................... 87
Section 29.7: Finding rows that contain value in the JSON array .......................................................................... 87
Chapter 30: Storing JSON in SQL tables ......................................................................................................... 88
Section 30.1: JSON stored as text column ................................................................................................................ 88
Section 30.2: Ensure that JSON is properly formatted using ISJSON ................................................................... 88
Section 30.3: Expose values from JSON text as computed columns .................................................................... 88
Section 30.4: Adding index on JSON path ................................................................................................................ 88
Section 30.5: JSON stored in in-memory tables ...................................................................................................... 89
Chapter 31: Modify JSON text ............................................................................................................................... 90
Section 31.1: Modify value in JSON text on the specified path ................................................................................ 90
Section 31.2: Append a scalar value into a JSON array .......................................................................................... 90
Section 31.3: Insert new JSON Object in JSON text ................................................................................................. 90
Section 31.4: Insert new JSON array generated with FOR JSON query ................................................................ 91
Section 31.5: Insert single JSON object generated with FOR JSON clause ........................................................... 91
Chapter 32: FOR XML PATH ................................................................................................................................... 93
Section 32.1: Using FOR XML PATH to concatenate values .................................................................................... 93
Section 32.2: Specifying namespaces ....................................................................................................................... 93
Section 32.3: Specifying structure using XPath expressions ................................................................................... 94
Section 32.4: Hello World XML ................................................................................................................................... 95
Chapter 33: Join ........................................................................................................................................................... 96
Section 33.1: Inner Join ................................................................................................................................................ 96
Section 33.2: Outer Join .............................................................................................................................................. 97
Section 33.3: Using Join in an Update ....................................................................................................................... 99
Section 33.4: Join on a Subquery .............................................................................................................................. 99
Section 33.5: Cross Join ............................................................................................................................................ 100
Section 33.6: Self Join ............................................................................................................................................... 101
Section 33.7: Accidentally turning an outer join into an inner join ....................................................................... 101
Section 33.8: Delete using Join ................................................................................................................................ 102
Chapter 34: cross apply ........................................................................................................................................ 104
Section 34.1: Join table rows with dynamically generated rows from a cell ...................................................... 104
Section 34.2: Join table rows with JSON array stored in cell ............................................................................... 104
Section 34.3: Filter rows by array values ................................................................................................................ 104
Chapter 35: Computed Columns ....................................................................................................................... 106
Section 35.1: A column is computed from an expression ...................................................................................... 106
Section 35.2: Simple example we normally use in log tables ............................................................................... 106
Chapter 36: Common Table Expressions ...................................................................................................... 107
Section 36.1: Generate a table of dates using CTE ................................................................................................ 107
Section 36.2: Employee Hierarchy ........................................................................................................................... 107
Section 36.3: Recursive CTE ..................................................................................................................................... 108
Section 36.4: Delete duplicate rows using CTE ...................................................................................................... 109
Section 36.5: CTE with multiple AS statements ...................................................................................................... 110
Section 36.6: Find nth highest salary using CTE .................................................................................................... 110
Chapter 37: Move and copy data around tables ..................................................................................... 111
Section 37.1: Copy data from one table to another ............................................................................................... 111
Section 37.2: Copy data into a table, creating that table on the fly .................................................................... 111
Section 37.3: Move data into a table (assuming unique keys method) .............................................................. 111
Chapter 38: Limit Result Set ............................................................................................................................... 113
Section 38.1: Limiting With PERCENT ....................................................................................................................... 113
Section 38.2: Limiting with FETCH ........................................................................................................................... 113
Section 38.3: Limiting With TOP ............................................................................................................................... 113
Chapter 39: Retrieve Information about your Instance ....................................................................... 114
Section 39.1: General Information about Databases, Tables, Stored procedures and how to search them ........ 114
Section 39.2: Get information on current sessions and query executions .......................................................... 115
Section 39.3: Information about SQL Server version ............................................................................................. 116
Section 39.4: Retrieve Edition and Version of Instance ......................................................................................... 116
Section 39.5: Retrieve Instance Uptime in Days .................................................................................................... 116
Section 39.6: Retrieve Local and Remote Servers ................................................................................................. 116
Chapter 40: With Ties Option ............................................................................................................................ 117
Section 40.1: Test Data ............................................................................................................................................. 117
Chapter 41: String Functions .............................................................................................................................. 119
Section 41.1: Quotename ........................................................................................................................................... 119
Section 41.2: Replace ................................................................................................................................................ 119
Section 41.3: Substring .............................................................................................................................................. 120
Section 41.4: String_Split ........................................................................................................................................... 120
Section 41.5: Left ........................................................................................................................................................ 121
Section 41.6: Right ..................................................................................................................................................... 121
Section 41.7: Soundex ................................................................................................................................................ 122
Section 41.8: Format .................................................................................................................................................. 122
Section 41.9: String_escape ..................................................................................................................................... 124
Section 41.10: ASCII .................................................................................................................................................... 124
Section 41.11: Char ...................................................................................................................................................... 125
Section 41.12: Concat ................................................................................................................................................. 125
Section 41.13: LTrim ................................................................................................................................................... 125
Section 41.14: RTrim ................................................................................................................................................... 126
Section 41.15: PatIndex .............................................................................................................................................. 126
Section 41.16: Space ................................................................................................................................................... 126
Section 41.17: Difference ............................................................................................................................................. 127
Section 41.18: Len ....................................................................................................................................................... 127
Section 41.19: Lower ................................................................................................................................................... 128
Section 41.20: Upper ................................................................................................................................................. 128
Section 41.21: Unicode ............................................................................................................................................... 128
Section 41.22: NChar ................................................................................................................................................. 129
Section 41.23: Str ........................................................................................................................................................ 129
Section 41.24: Reverse .............................................................................................................................................. 129
Section 41.25: Replicate ............................................................................................................................................ 129
Section 41.26: CharIndex ........................................................................................................................................... 130
Chapter 42: Logical Functions ........................................................................................................................... 131
Section 42.1: CHOOSE ............................................................................................................................................... 131
Section 42.2: IIF .......................................................................................................................................................... 131
Chapter 43: Aggregate Functions ................................................................................................................... 132
Section 43.1: SUM() .................................................................................................................................................... 132
Section 43.2: AVG() ................................................................................................................................................... 132
Section 43.3: MAX() ................................................................................................................................................... 133
Section 43.4: MIN() .................................................................................................................................................... 133
Section 43.5: COUNT() .............................................................................................................................................. 133
Section 43.6: COUNT(Column_Name) with GROUP BY Column_Name ............................................................. 134
Chapter 44: String Aggregate functions in SQL Server ...................................................................... 135
Section 44.1: Using STUFF for string aggregation ................................................................................................. 135
Section 44.2: String_Agg for String Aggregation .................................................................................................. 135
Chapter 45: Ranking Functions ......................................................................................................................... 136
Section 45.1: DENSE_RANK () .................................................................................................................................. 136
Section 45.2: RANK() ................................................................................................................................................. 136
Chapter 46: Window functions .......................................................................................................................... 137
Section 46.1: Centered Moving Average ................................................................................................................. 137
Section 46.2: Find the single most recent item in a list of timestamped events ................................................ 137
Section 46.3: Moving Average of last 30 Items ...................................................................................................... 137
Chapter 47: PIVOT / UNPIVOT .......................................................................................................................... 138
Section 47.1: Dynamic PIVOT ................................................................................................................................... 138
Section 47.2: Simple PIVOT & UNPIVOT (T-SQL) ................................................................................................... 139
Section 47.3: Simple Pivot - Static Columns ............................................................................................................ 141
Chapter 48: Dynamic SQL Pivot ....................................................................................................................... 142
Section 48.1: Basic Dynamic SQL Pivot ................................................................................................................... 142
Chapter 49: Partitioning ....................................................................................................................................... 143
Section 49.1: Retrieve Partition Boundary Values .................................................................................................. 143
Section 49.2: Switching Partitions ............................................................................................................................ 143
Section 49.3: Retrieve partition table, column, scheme, function, total and min-max boundary values using a single query ..................................................................................................................................................... 143
Chapter 50: Stored Procedures ........................................................................................................................ 145
Section 50.1: Creating and executing a basic stored procedure .......................................................................... 145
Section 50.2: Stored Procedure with If...Else and Insert Into operation ............................................................... 146
Section 50.3: Dynamic SQL in stored procedure ................................................................................................... 147
Section 50.4: STORED PROCEDURE with OUT parameters ................................................................................. 148
Section 50.5: Simple Looping ................................................................................................................................... 149
Section 50.6: Simple Looping ................................................................................................................................... 150
Chapter 51: Retrieve information about the database ........................................................................ 151
Section 51.1: Retrieve a List of all Stored Procedures ............................................................................................ 151
Section 51.2: Get the list of all databases on a server ........................................................................................... 151
Section 51.3: Count the Number of Tables in a Database .................................................................................... 152
Section 51.4: Database Files ..................................................................................................................................... 152
Section 51.5: See if Enterprise-specific features are being used .......................................................................... 153
Section 51.6: Determine a Windows Login's Permission Path ............................................................................... 153
Section 51.7: Search and Return All Tables and Columns Containing a Specified Column Value .................... 153
Section 51.8: Get all schemas, tables, columns and indexes ................................................................................. 154
Section 51.9: Return a list of SQL Agent jobs, with schedule information ........................................................... 155
Section 51.10: Retrieve Tables Containing Known Column ................................................................................... 157
Section 51.11: Show Size of All Tables in Current Database ................................................................................... 158
Section 51.12: Retrieve Database Options ............................................................................................................... 158
Section 51.13: Find every mention of a field in the database ................................................................................ 158
Section 51.14: Retrieve information on backup and restore operations .............................................................. 158
Chapter 52: Split String function in SQL Server ....................................................................................... 160
Section 52.1: Split string in SQL Server 2008/2012/2014 using XML ...................................................................... 160
Section 52.2: Split a String in SQL Server 2016 ......................................................................................................... 160
Section 52.3: T-SQL Table variable and XML ......................................................................................................... 161
Chapter 53: Insert ..................................................................................................................................................... 162
Section 53.1: Add a row to a table named Invoices ............................................................................................... 162
Chapter 54: Primary Keys ................................................................................................................................... 163
Section 54.1: Create table w/ identity column as primary key ............................................................................. 163
Section 54.2: Create table w/ GUID primary key .................................................................................................. 163
Section 54.3: Create table w/ natural key .............................................................................................................. 163
Section 54.4: Create table w/ composite key ........................................................................................................ 163
Section 54.5: Add primary key to existing table .................................................................................................... 163
Section 54.6: Delete primary key ............................................................................................................................. 164
Chapter 55: Foreign Keys ..................................................................................................................................... 165
Section 55.1: Foreign key relationship/constraint .................................................................................................. 165
Section 55.2: Maintaining relationship between parent/child rows ..................................................................... 165
Section 55.3: Adding foreign key relationship on existing table .......................................................................... 166
Section 55.4: Add foreign key on existing table ..................................................................................................... 166
Section 55.5: Getting information about foreign key constraints ........................................................................ 166
Chapter 56: Last Inserted Identity ................................................................................................................... 167
Section 56.1: @@IDENTITY and MAX(ID) ................................................................................................................ 167
Section 56.2: SCOPE_IDENTITY() ............................................................................................................................ 167
Section 56.3: @@IDENTITY ...................................................................................................................................... 167
Section 56.4: IDENT_CURRENT('tablename') ........................................................................................................ 168
Chapter 57: SCOPE_IDENTITY() ........................................................................................................................ 169
Section 57.1: Introduction with Simple Example ..................................................................................................... 169
Chapter 58: Sequences .......................................................................................................................................... 170
Section 58.1: Create Sequence ................................................................................................................................. 170
Section 58.2: Use Sequence in Table ...................................................................................................................... 170
Section 58.3: Insert Into Table with Sequence ........................................................................................................ 170
Section 58.4: Delete From & Insert New ................................................................................................................. 170
Chapter 59: Index ...................................................................................................................................................... 171
Section 59.1: Create Clustered index ....................................................................................................................... 171
Section 59.2: Drop index ........................................................................................................................................... 171
Section 59.3: Create Non-Clustered index .............................................................................................................. 171
Section 59.4: Show index info ................................................................................................................................... 171
Section 59.5: Returns size and fragmentation indexes ......................................................................................... 171
Section 59.6: Reorganize and rebuild index ........................................................................................................... 172
Section 59.7: Rebuild or reorganize all indexes on a table ................................................................................... 172
Section 59.8: Rebuild all index database ................................................................................................................ 172
Section 59.9: Index on view ...................................................................................................................................... 172
Section 59.10: Index investigations .......................................................................................................................... 173
Chapter 60: Full-Text Indexing ........................................................................................................................... 174
Section 60.1: A. Creating a unique index, a full-text catalog, and a full-text index ............................................. 174
Section 60.2: Creating a full-text index on several table columns ....................................................................... 174
Section 60.3: Creating a full-text index with a search property list without populating it ................................. 174
Section 60.4: Full-Text Search .................................................................................................................................. 175
Chapter 61: Trigger .................................................................................................................................................. 176
Section 61.1: DML Triggers ........................................................................................................................................ 176
Section 61.2: Types and classifications of Trigger ................................................................................................. 177
Chapter 62: Cursors ................................................................................................................................................. 178
Section 62.1: Basic Forward Only Cursor ................................................................................................................ 178
Section 62.2: Rudimentary cursor syntax ............................................................................................................... 178
Chapter 63: Transaction isolation levels ...................................................................................................... 180
Section 63.1: Read Committed ................................................................................................................................. 180
Section 63.2: What are "dirty reads"? ..................................................................................................................... 180
Section 63.3: Read Uncommitted ............................................................................................................................ 181
Section 63.4: Repeatable Read ................................................................................................................................ 181
Section 63.5: Snapshot .............................................................................................................................................. 181
Section 63.6: Serializable .......................................................................................................................................... 181
Chapter 64: Advanced options .......................................................................................................................... 183
Section 64.1: Enable and show advanced options ................................................................................................. 183
Section 64.2: Enable backup compression default ................................................................................................ 183
Section 64.3: Enable cmd permission ...................................................................................................................... 183
Section 64.4: Set default fill factor percent ............................................................................................................ 183
Section 64.5: Set system recovery interval ............................................................................................................ 183
Section 64.6: Set max server memory size ............................................................................................................. 183
Section 64.7: Set number of checkpoint tasks ....................................................................................................... 183
Chapter 65: Migration ............................................................................................................................................ 184
Section 65.1: How to generate migration scripts ................................................................................................... 184
Chapter 66: Table Valued Parameters .......................................................................................................... 186
Section 66.1: Using a table valued parameter to insert multiple rows to a table ............................................... 186
Chapter 67: DBMAIL ................................................................................................................................................. 187
Section 67.1: Send simple email ............................................................................................................................... 187
Section 67.2: Send results of a query ...................................................................................................................... 187
Section 67.3: Send HTML email ................................................................................................................................ 187
Chapter 68: In-Memory OLTP (Hekaton) ...................................................................................................... 188
Section 68.1: Declare Memory-Optimized Table Variables ................................................................................... 188
Section 68.2: Create Memory Optimized Table ..................................................................................................... 188
Section 68.3: Show created .dll files and tables for Memory Optimized Tables ................................................. 189
Section 68.4: Create Memory Optimized System-Versioned Temporal Table ................................................... 190
Section 68.5: Memory-Optimized Table Types and Temp tables ........................................................................ 190
Chapter 69: Temporal Tables ............................................................................................................................ 192
Section 69.1: CREATE Temporal Tables .................................................................................................................. 192
Section 69.2: FOR SYSTEM_TIME ALL ..................................................................................................................... 192
Section 69.3: Creating a Memory-Optimized System-Versioned Temporal Table and cleaning up the SQL Server history table ......................... 192
Section 69.4: FOR SYSTEM_TIME BETWEEN <start_date_time> AND <end_date_time> ............................... 194
Section 69.5: FOR SYSTEM_TIME FROM <start_date_time> TO <end_date_time> ......................................... 194
Section 69.6: FOR SYSTEM_TIME CONTAINED IN (<start_date_time> , <end_date_time>) ........................... 194
Section 69.7: How do I query temporal data? ........................................................................................................ 194
Section 69.8: Return actual value specified point in time(FOR SYSTEM_TIME AS OF <date_time>) .............. 195
Chapter 70: Use of TEMP Table ........................................................................................................................ 196
Section 70.1: Dropping temp tables ......................................................................................................................... 196
Section 70.2: Local Temp Table .............................................................................................................................. 196
Section 70.3: Global Temp Table ............................................................................................................................. 196
Chapter 71: Scheduled Task or Job ................................................................................................................. 198
Section 71.1: Create a scheduled Job ...................................................................................................................... 198
Chapter 72: Isolation levels and locking ....................................................................................................... 200
Section 72.1: Examples of setting the isolation level .............................................................................................. 200
Chapter 73: Sorting/ordering rows ................................................................................................................. 201
Section 73.1: Basics .................................................................................................................................................... 201
Section 73.2: Order by Case ..................................................................................................................................... 203
Chapter 74: Privileges or Permissions ........................................................................................................... 205
Section 74.1: Simple rules .......................................................................................................................................... 205
Chapter 75: SQLCMD ............................................................................................................................................... 206
Section 75.1: SQLCMD.exe called from a batch file or command line ................................................................. 206
Chapter 76: Resource Governor ....................................................................................................................... 207
Section 76.1: Reading the Statistics ......................................................................................................................... 207
Chapter 77: File Group ........................................................................................................................................... 208
Section 77.1: Create filegroup in database ............................................................................................................. 208
Chapter 78: Basic DDL Operations in MS SQL Server ............................................................................ 210
Section 78.1: Getting started ..................................................................................................................................... 210
Chapter 79: Subqueries ......................................................................................................................................... 212
Section 79.1: Subqueries ............................................................................................................................................ 212
Chapter 80: Pagination ......................................................................................................................................... 214
Section 80.1: Pagination with OFFSET FETCH ........................................................................................................ 214
Section 80.2: Paginaton with inner query ............................................................................................................... 214
Section 80.3: Paging in Various Versions of SQL Server ....................................................................................... 214
Section 80.4: SQL Server 2012/2014 using ORDER BY OFFSET and FETCH NEXT ............................................. 215
Section 80.5: Pagination using ROW_NUMBER with a Common Table Expression .......................................... 215
Chapter 81: CLUSTERED COLUMNSTORE ...................................................................................................... 217
Section 81.1: Adding clustered columnstore index on existing table .................................................................... 217
Section 81.2: Rebuild CLUSTERED COLUMNSTORE index .................................................................................... 217
Section 81.3: Table with CLUSTERED COLUMNSTORE index ................................................................................ 217
Chapter 82: Parsename ......................................................................................................................................... 218
Section 82.1: PARSENAME ......................................................................................................................................... 218
Chapter 83: Installing SQL Server on Windows ......................................................................................... 219
Section 83.1: Introduction .......................................................................................................................................... 219
Chapter 84: Analyzing a Query ......................................................................................................................... 220
Section 84.1: Scan vs Seek ........................................................................................................................................ 220
Chapter 85: Query Hints ....................................................................................................................................... 221
Section 85.1: JOIN Hints ............................................................................................................................................ 221
Section 85.2: GROUP BY Hints ................................................................................................................................. 221
Section 85.3: FAST rows hint .................................................................................................................................... 222
Section 85.4: UNION hints ........................................................................................................................................ 222
Section 85.5: MAXDOP Option ................................................................................................................................. 222
Section 85.6: INDEX Hints ......................................................................................................................................... 222
Chapter 86: Query Store ....................................................................................................................................... 224
Section 86.1: Enable query store on database ....................................................................................................... 224
Section 86.2: Get execution statistics for SQL queries/plans ............................................................................... 224
Section 86.3: Remove data from query store ........................................................................................................ 224
Section 86.4: Forcing plan for query ....................................................................................................................... 224
Chapter 87: Querying results by page .......................................................................................................... 226
Section 87.1: Row_Number() .................................................................................................................................... 226
Chapter 88: Schemas ............................................................................................................................................. 227
Section 88.1: Purpose ................................................................................................................................................ 227
Section 88.2: Creating a Schema ............................................................................................................................ 227
Section 88.3: Alter Schema ....................................................................................................................................... 227
Section 88.4: Dropping Schemas ............................................................................................................................. 227
Chapter 89: Backup and Restore Database ............................................................................................... 228
Section 89.1: Basic Backup to disk with no options ............................................................................................... 228
Section 89.2: Basic Restore from disk with no options ......................................................................................... 228
Section 89.3: RESTORE Database with REPLACE .................................................................................................. 228
Chapter 90: Transaction handling ................................................................................................................... 229
Section 90.1: basic transaction skeleton with error handling ............................................................................... 229
Chapter 91: Natively compiled modules (Hekaton) ................................................................................ 230
Section 91.1: Natively compiled stored procedure ................................................................................................. 230
Section 91.2: Natively compiled scalar function ..................................................................................................... 230
Section 91.3: Native inline table value function ...................................................................................................... 231
Chapter 92: Spatial Data ...................................................................................................................................... 233
Section 92.1: POINT ................................................................................................................................................... 233
Chapter 93: Dynamic SQL ..................................................................................................................................... 234
Section 93.1: Execute SQL statement provided as string ...................................................................................... 234
Section 93.2: Dynamic SQL executed as different user ........................................................................ 234
Section 93.3: SQL Injection with dynamic SQL ....................................................................................................... 234
Section 93.4: Dynamic SQL with parameters ......................................................................................................... 235
Chapter 94: Dynamic data masking ............................................................................................................... 236
Section 94.1: Adding default mask on the column ................................................................................................. 236
Section 94.2: Mask email address using Dynamic data masking ........................................................................ 236
Section 94.3: Add partial mask on column ............................................................................................................. 236
Section 94.4: Showing random value from the range using random() mask .................................................... 236
Section 94.5: Controlling who can see unmasked data ........................................................................................ 237
Chapter 95: Export data in txt file by using SQLCMD ............................................................................ 238
Section 95.1: By using SQLCMD on Command Prompt ......................................................................................... 238
Chapter 96: Common Language Runtime Integration .......................................................................... 239
Section 96.1: Enable CLR on database .................................................................................................................... 239
Section 96.2: Adding .dll that contains Sql CLR modules ...................................................................................... 239
Section 96.3: Create CLR Function in SQL Server .................................................................................................. 239
Section 96.4: Create CLR User-defined type in SQL Server .................................................................................. 240
Section 96.5: Create CLR procedure in SQL Server ............................................................................................... 240
Chapter 97: Delimiting special characters and reserved words ...................................................... 241
Section 97.1: Basic Method ....................................................................................................................................... 241
Chapter 98: DBCC ..................................................................................................................................................... 242
Section 98.1: DBCC statement .................................................................................................................................. 242
Section 98.2: DBCC maintenance commands ....................................................................................................... 242
Section 98.3: DBCC validation statements ............................................................................................................. 243
Section 98.4: DBCC informational statements ....................................................................................................... 243
Section 98.5: DBCC Trace commands .................................................................................................................... 243
Chapter 99: BULK Import ...................................................................................................................................... 245
Section 99.1: BULK INSERT ....................................................................................................................................... 245
Section 99.2: BULK INSERT with options ................................................................................................................ 245
Section 99.3: Reading entire content of file using OPENROWSET(BULK) .......................................................... 245
Section 99.4: Read file using OPENROWSET(BULK) and format file .................................................................. 245
Section 99.5: Read json file using OPENROWSET(BULK) ..................................................................................... 246
Chapter 100: Service broker ............................................................................................................................... 247
Section 100.1: Basics .................................................................................................................................................. 247
Section 100.2: Enable service broker on database ................................................................................................ 247
Section 100.3: Create basic service broker construction on database (single database communication) ............ 247
Section 100.4: How to send basic communication through service broker ........................................................ 248
Section 100.5: How to receive conversation from TargetQueue automatically ................................................. 248
Chapter 101: Permissions and Security .......................................................................................................... 250
Section 101.1: Assign Object Permissions to a user ................................................................................................ 250
Chapter 102: Database permissions ............................................................................................................... 251
Section 102.1: Changing permissions ....................................................................................................................... 251
Section 102.2: CREATE USER .................................................................................................................................... 251
Section 102.3: CREATE ROLE .................................................................................................................................... 251
Section 102.4: Changing role membership ............................................................................................................. 251
Chapter 103: Row-level security ........................................................................................................................ 252
Section 103.1: RLS filter predicate ............................................................................................................................ 252
Section 103.2: Altering RLS security policy ............................................................................................................. 252
Section 103.3: Preventing updated using RLS block predicate ............................................................................. 253
Chapter 104: Encryption ....................................................................................................................................... 254
Section 104.1: Encryption by certificate ................................................................................................................... 254
Section 104.2: Encryption of database .................................................................................................................... 254
Section 104.3: Encryption by symmetric key .......................................................................................................... 254
Section 104.4: Encryption by passphrase ............................................................................................................... 255
Chapter 105: PHANTOM read .............................................................................................................................. 256
Section 105.1: Isolation level READ UNCOMMITTED .............................................................................................. 256
Chapter 106: Filestream ........................................................................................................................................ 257
Section 106.1: Example .............................................................................................................................................. 257
Chapter 107: bcp (bulk copy program) Utility ........................................................................................... 258
Section 107.1: Example to Import Data without a Format File(using Native Format ) ....................................... 258
Chapter 108: SQL Server Evolution through different versions (2000 - 2016) .......................... 259
Section 108.1: SQL Server Version 2000 - 2016 ....................................................................................................... 259
Chapter 109: SQL Server Management Studio (SSMS) .......................................................................... 262
Section 109.1: Refreshing the IntelliSense cache .................................................................................................... 262
Chapter 110: Managing Azure SQL Database ............................................................................................. 263
Section 110.1: Find service tier information for Azure SQL Database ................................................................... 263
Section 110.2: Change service tier of Azure SQL Database .................................................................................. 263
Section 110.3: Replication of Azure SQL Database ................................................................................................ 263
Section 110.4: Create Azure SQL Database in Elastic pool .................................................................................... 264
Chapter 111: System database - TempDb .................................................................................................... 265
Section 111.1: Identify TempDb usage ...................................................................................................................... 265
Section 111.2: TempDB database details ................................................................................................................. 265
Appendix A: Microsoft SQL Server Management Studio Shortcut Keys ...................................... 266
Section A.1: Shortcut Examples ................................................................................................................................ 266
Section A.2: Menu Activation Keyboard Shortcuts ................................................................................................ 266
Section A.3: Custom keyboard shortcuts ............................................................................................................... 266
Credits ............................................................................................................................................................................ 269
You may also like ...................................................................................................................................................... 273
About
Please feel free to share this PDF with anyone for free,
latest version of this book can be downloaded from:
https://goalkicker.com/MicrosoftSQLServerBook
This Microsoft® SQL Server® Notes for Professionals book is compiled from Stack
Overflow Documentation, the content is written by the beautiful people at Stack
Overflow. Text content is released under Creative Commons BY-SA; see the credits at
the end of this book for the people who contributed to the various chapters. Images may be
copyright of their respective owners unless otherwise specified.
This is an unofficial free book created for educational purposes and is not
affiliated with official Microsoft® SQL Server® group(s) or company(s) nor Stack
Overflow. All trademarks and registered trademarks are the property of their
respective company owners
-- Selecting rows from the table (see how the Description has changed after the update?)
SELECT * FROM HelloWorld
USE Northwind;
GO
SELECT TOP 10 * FROM Customers
ORDER BY CompanyName
will select the first 10 records of the Customers table, ordered by the column CompanyName, from the database
Northwind (one of Microsoft's sample databases, which can be downloaded from Microsoft).
Note that Use Northwind; changes the default database for all subsequent queries. You can still reference the
database by using the fully qualified syntax in the form of [Database].[Schema].[Table]:
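For instance, a sketch using the Northwind names from the previous example:

SELECT TOP 10 *
FROM Northwind.dbo.Customers
ORDER BY CompanyName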
This is useful if you're querying data from different databases. Note that dbo, specified "in between" is called a
schema and needs to be specified while using the fully qualified syntax. You can think of it as a folder within your
database. dbo is the default schema. The default schema may be omitted. All other user defined schemas need to
be specified.
If the database table contains columns which are named like reserved words, e.g. Date, you need to enclose the
column name in brackets, like this:
-- descending order
SELECT TOP 10 [Date] FROM dbo.MyLogTable
ORDER BY [Date] DESC
The same applies if the column name contains spaces (which is not recommended). An alternative
syntax is to use double quotes instead of square brackets, e.g.:
-- descending order
SELECT top 10 "Date" from dbo.MyLogTable
order by "Date" desc
is equivalent but not so commonly used. Notice the difference between double quotes and single quotes: Single
quotes are used for strings, i.e.
-- descending order
SELECT top 10 "Date" from dbo.MyLogTable
where UserId='johndoe'
is valid syntax. Notice that T-SQL has an N prefix for NChar and NVarchar data types, e.g.
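a query along these lines (a sketch against the Northwind Customers table):

SELECT *
FROM Customers
WHERE CompanyName LIKE N'AL%'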
returns all companies having a company name starting with AL (% is a wild card, use it as you would use the asterisk
in a DOS command line, e.g. DIR AL*). For LIKE, there are a couple of wildcards available, look here to find out
more details.
Joins
Joins are useful if you want to query fields which don't exist in one single table, but in multiple tables. For example:
You want to query all columns from the Territories table in the Northwind database. But you notice that you also
require the RegionDescription, which is stored in a different table, Region. However, there is a common key, RegionID,
which you can use to combine this information in a single query as follows (Top 5 just returns the first 5 rows, omit
it to get all rows):
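A sketch of such a query (names follow the Northwind sample database):

SELECT TOP 5 Territories.*,
       Region.RegionDescription
FROM Territories
INNER JOIN Region
    ON Territories.RegionID = Region.RegionID
ORDER BY TerritoryDescription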
will show all columns from Territories plus the RegionDescription column from Region. The result is ordered by
TerritoryDescription.
Table Aliases
When your query requires a reference to two or more tables, you may find it useful to use a Table Alias. Table
aliases are shorthand references to tables that can be used in place of a full table name, and can reduce typing and
editing. The syntax for using an alias is:
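In general terms (a sketch of the form rather than a runnable statement):

SELECT <column list>
FROM <table name> AS <alias name>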
Here AS is an optional keyword. For example, the previous query can be rewritten using aliases along these lines (a sketch):
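SELECT TOP 5 t.*,
       r.RegionDescription
FROM Territories AS t
INNER JOIN Region AS r
    ON t.RegionID = r.RegionID
ORDER BY t.TerritoryDescription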
Aliases must be unique for all tables in a query, even if you use the same table twice. For example, if your Employee
table included a SupervisorId field, you can use this query to return an employee and his supervisor's name:
SELECT e.*,
s.Name as SupervisorName -- Rename the field for output
FROM Employee e
INNER JOIN Employee s
ON e.SupervisorId = s.EmployeeId
Unions
As we have seen before, a Join adds columns from different table sources. But what if you want to combine rows
from different sources? In this case you can use a UNION. Suppose you're planning a party and want to invite not
only employees but also the customers. Then you could run this query to do it:
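A sketch against the Northwind Employees and Customers tables:

SELECT FirstName + ' ' + LastName AS ContactName, Address, City
FROM Employees
UNION
SELECT ContactName, Address, City
FROM Customers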
It will return names, addresses and cities from the employees and customers in one single table. Note that
duplicate rows (if there should be any) are automatically eliminated (if you don't want this, use a UNION ALL
instead). The column number, column names, order and data type must match across all the select statements that
are part of the union - this is why the first SELECT combines FirstName and LastName from Employee into
ContactName.
Table Variables
It can be useful, if you need to deal with temporary data (especially in a stored procedure), to use table variables:
The difference between a "real" table and a table variable is that it just exists in memory for temporary processing.
Example:
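A sketch (the column definitions are assumed to mirror the Northwind dbo.Region table):

DECLARE @Region TABLE
(
    RegionID int,
    RegionDescription nchar(50)
)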
creates a table in memory. In this case the @ prefix is mandatory because it is a variable. You can perform all DML
operations mentioned above to insert, delete and select rows, e.g.
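a statement along these lines (a sketch; the filter is chosen to match the description below):

INSERT INTO @Region
SELECT *
FROM dbo.Region
WHERE RegionDescription IN (N'Northern', N'Southern')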
which would read the filtered values from the real table dbo.Region and insert it into the memory table @Region -
where it can be used for further processing. For example, you could use it in a join like
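the following sketch:

SELECT t.*
FROM Territories AS t
INNER JOIN @Region AS r
    ON t.RegionID = r.RegionID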
which would in this case return all Northern and Southern territories. More detailed information can be found here.
Temporary tables are discussed here, if you are interested to read more about that topic.
NOTE: Microsoft only recommends the use of table variables if the number of rows of data in the table variable are
less than 100. If you will be working with larger amounts of data, use a temporary table, or temp table, instead.
SELECT *
FROM table_name
Using the asterisk operator * serves as a shortcut for selecting all the columns in the table. All rows will also be
selected because this SELECT statement does not have a WHERE clause, to specify any filtering criteria.
This would also work the same way if you added an alias to the table, for instance e in this case:
SELECT *
FROM Employees AS e
Or if you wanted to select all from a specific table you can use the alias + " .* ":
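For instance (a sketch; the server name below is a placeholder):

SELECT e.*
FROM Employees AS e

SELECT *
FROM [MyServer].[Northwind].[dbo].[Employees]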
This is not necessarily recommended, as changing the server and/or database names would cause the queries
using fully-qualified names to no longer execute due to invalid object names.
Note that the qualifiers before table_name can often be omitted if the queries are executed against a single server,
database and schema, respectively. However, it is common for a database to have multiple schemas, and in those
cases the schema name should always be specified.
Warning: Using SELECT * in production code or stored procedures can lead to problems later on (as new columns
are added to the table, or if columns are rearranged in the table), especially if your code makes simple assumptions
about the order of columns, or number of columns returned. So it's safer to always explicitly specify column names
in SELECT statements for production code.
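A statement of roughly this shape (a sketch reconstructed from the description that follows):

UPDATE HelloWorlds
SET HelloWorld = 'HELLO WORLD!!!'
WHERE Id = 5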
The above code updates the value of the HelloWorld field to 'HELLO WORLD!!!' for the record where Id = 5 in the
HelloWorlds table.
Note: in an UPDATE statement it is advisable to include a WHERE clause, so that you only change the rows you
actually intend to change; without one, the whole table is updated.
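For example (a sketch; the table name follows the earlier examples):

DELETE FROM HelloWorlds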
This will delete all the data from the table. The table will contain no rows after you run this code. Unlike DROP TABLE,
this preserves the table itself and its structure and you can continue to insert new rows into that table.
Restrictions Of TRUNCATE
[sic]
Comments are preceded by -- and are ignored until a new line is encountered:
-- This is a comment
SELECT *
FROM MyTable -- This is another comment
WHERE Id = 1;
Slash star comments begin with /* and end with */. All text between those delimiters is considered as a comment
block.
/* This is
a multi-line
comment block. */
SELECT Id = 1, [Message] = 'First row'
UNION ALL
SELECT 2, 'Second row'
/* This is a one liner */
SELECT 'More';
Slash star comments have the advantage of keeping the comment usable if the SQL Statement loses new line
characters. This can happen when SQL is captured during troubleshooting.
/*
SELECT *
FROM CommentTable
WHERE Comment = '/*'
*/
Even though the slash star appears inside a quoted string, it is still treated as the start of a comment, so it needs to
be closed with an additional star slash. The correct way would be
/*
SELECT *
FROM CommentTable
WHERE Comment = '/*'
*/ */
For example:
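a statement of this shape (a sketch; because there is no WHERE clause, every row is updated):

UPDATE HelloWorlds
SET HelloWorld = 'HELLO WORLD!!!'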
The following is an example that increments the Score field by 1 (in all rows):
UPDATE Scores
SET score = score + 1
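A sketch of the statement described next:

TRUNCATE TABLE HelloWorlds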
This code will delete all the data from the HelloWorlds table. TRUNCATE TABLE is similar to DELETE FROM, with the
difference that you cannot use a WHERE clause with TRUNCATE. TRUNCATE TABLE is often considered better than
DELETE because it uses less transaction log space.
Note that if an identity column exists, it is reset to the initial seed value (for example, an auto-incremented ID will
restart from 1). This can lead to inconsistency if the identity column is used as a foreign key in another table.
SELECT @@SERVERNAME
Returns the name of the server (and instance) that SQL Server is running on.
SELECT @@SERVICENAME
Returns the name of the Windows service MS SQL Server is running as.
SELECT serverproperty('ComputerNamePhysicalNetBIOS');
Returns the physical name of the machine where SQL Server is running. Useful to identify the node in a failover
cluster.
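One way to list the nodes (a sketch using the sys.dm_os_cluster_nodes DMV):

SELECT NodeName, status_description, is_current_owner
FROM sys.dm_os_cluster_nodes;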
In a failover cluster this returns every node on which SQL Server can run; it returns nothing if the instance is not clustered.
Section 1.11: Create new table and insert records from old
table
SELECT * INTO NewTable FROM OldTable
Creates a new table with structure of old table and inserts all rows into the new table.
Some Restrictions
1. You cannot specify a table variable or table-valued parameter as the new table.
2. You cannot use SELECT…INTO to create a partitioned table, even when the source table is
partitioned. SELECT...INTO does not use the partition scheme of the source table; instead, the new
table is created in the default filegroup. To insert rows into a partitioned table, you must first
create the partitioned table and then use the INSERT INTO...SELECT FROM statement.
3. Indexes, constraints, and triggers defined in the source table are not transferred to the new table,
[sic]
DML changes will take a lock on the rows affected. When you begin a transaction, you must end the transaction or
all objects being changed in the DML will remain locked by whoever began the transaction. You can end your
transaction with either ROLLBACK or COMMIT. ROLLBACK returns everything within the transaction to its original state.
COMMIT places the data into a final state where you cannot undo your changes without another DML statement.
Example:
BEGIN TRANSACTION -- start the transaction

INSERT INTO
    dbo.test_transaction
    ( column_1 )
VALUES
    ( 'a' )

UPDATE dbo.test_transaction
SET column_1 = 'B'
OUTPUT INSERTED.*
WHERE column_1 = 'A'

SELECT * FROM dbo.test_transaction --View the table after your changes have been run

-- End the transaction with one of the following:
-- COMMIT TRANSACTION    -- keep the changes
-- ROLLBACK TRANSACTION  -- undo the changes
Notes:
This is a simplified example which does not include error handling. But any database operation can fail and
raise an error, so in production code the transaction should be combined with error handling (for example a
TRY...CATCH block, as shown in the transaction handling chapter).
It is also possible to get the row count for all tables by joining back to each table's partitions, based on the table's HEAP
(index_id = 0) or clustered index (index_id = 1), using the following script:
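-- A sketch using the sys.tables, sys.schemas and sys.partitions catalog views
SELECT s.name AS SchemaName,
       t.name AS TableName,
       SUM(p.rows) AS [RowCount]
FROM sys.tables t
INNER JOIN sys.schemas s
    ON t.schema_id = s.schema_id
INNER JOIN sys.partitions p
    ON t.object_id = p.object_id
   AND p.index_id IN (0, 1)   -- 0 = heap, 1 = clustered index
GROUP BY s.name, t.name
ORDER BY s.name, t.name;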
This is possible as every table is essentially a single-partition table, unless extra partitions are added to it. This script
also has the benefit of not interfering with read/write operations on the tables' rows.
bit
tinyint
smallint
int
bigint
Integers are numeric values that never contain a fractional portion, and always use a fixed amount of storage. The
range and storage sizes of the integer data types are shown in this table:
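Data Type   Range                                                       Storage
bigint      -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807     8 bytes
int         -2,147,483,648 to 2,147,483,647                             4 bytes
smallint    -32,768 to 32,767                                           2 bytes
tinyint     0 to 255                                                    1 byte
bit         0, 1 or NULL                                                1 byte per group of up to 8 bit columns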
numeric
decimal
smallmoney
money
These data types are useful for representing numbers exactly. As long as the values can fit within the range of the
values storable in the data type, the value will not have rounding issues. This is useful for any financial calculations,
where rounding errors will result in clinical insanity for accountants.
Note that decimal and numeric are synonyms for the same data type.
When defining a decimal or numeric data type, you may need to specify the Precision [p] and Scale [s].
Precision is the number of digits that can be stored. For example, if you needed to store values between 1 and 999,
you would need a Precision of 3 (to hold the three digits of 999). If you do not specify a precision, the default
precision is 18.
Scale is the number of digits after the decimal point. If you needed to store a number between 0.00 and 999.99, you
would need to specify a Precision of 5 (five digits) and a Scale of 2 (two digits after the decimal point). You must
specify a precision to specify a scale. The default scale is zero.
Precision Table
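Precision   Storage
1 - 9       5 bytes
10 - 19     9 bytes
20 - 28     13 bytes
29 - 38     17 bytes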
These data types are intended specifically for accounting and other monetary data. These types have a fixed Scale of
4 - you will always see four digits after the decimal place. For most systems working with most currencies, using a
numeric value with a Scale of 2 will be sufficient. Note that no information about the type of currency represented is
stored with the value.
These data types are used to store floating point numbers. Since these types are intended to hold approximate
numeric values only, these should not be used in cases where any rounding error is unacceptable. However, if you
need to handle very large numbers, or numbers with an indeterminate number of digits after the decimal place,
these may be your best option.
The table below shows the n values for float numbers. If no value is specified in the declaration of the float, the
default value of 53 is used. Note that float(24) is the equivalent of a real value.
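n value   Precision    Storage
1 - 24    7 digits     4 bytes
25 - 53   15 digits    8 bytes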
datetime
smalldatetime
These types are in all versions of SQL Server from SQL Server 2008 onwards
date
For example, the source data is a string and we need to convert it to a date. If the conversion attempt fails, it returns
a NULL value.
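For instance:

SELECT TRY_PARSE('2016-09-05' AS date)   -- 2016-09-05
SELECT TRY_PARSE('not a date' AS date)   -- NULL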
It converts a value to the specified data type, and if the conversion fails it returns NULL. For example, when the
source value is a string and we need it as a date or an integer, this function helps us achieve that.
TRY_CONVERT() returns a value cast to the specified data type if the cast succeeds; otherwise, returns null.
Data_type - the data type into which to convert. Length is an optional part of the data type that restricts the result
to a specified length.
Expression - the value to be converted.
Style - an optional parameter which determines formatting. For example, if you want a date format like "May, 18 2013"
then you need to pass style 111.
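For instance:

SELECT TRY_CONVERT(date, '20160905')              -- 2016-09-05
SELECT TRY_CONVERT(int, 'abc')                    -- NULL
SELECT TRY_CONVERT(varchar(11), GETDATE(), 106)   -- e.g. 05 Sep 2016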
It converts a value to the specified data type, and if the conversion fails it returns NULL. For example, when the
source value is a string and we need it as a double or an integer, this function helps us achieve that.
TRY_CAST() returns a value cast to the specified data type if the cast succeeds; otherwise, returns null.
Syntax
The data type to which you are casting an expression is the target type. The data type of the expression from which
you are casting is the source type.
DECLARE @A varchar(2)
DECLARE @B varchar(2)
set @A='25a'
set @B='15'
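-- A statement along these lines presumably produced the result shown below;
-- note that @A was silently truncated to '25' because it was declared varchar(2):
SELECT CAST(@A AS int) + CAST(@B AS int)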
--Result
--40
Style - style values for datetime or smalldatetime conversion to character data. Add 100 to a style value to get a
four-place year that includes the century (yyyy).
select convert(varchar(20),GETDATE(),108)
13:27:16
SELECT *
FROM sys.objects
SELECT *
FROM sys.objects
WHERE type = 'IT'
SELECT *
FROM sys.objects
ORDER BY create_date
You can apply some function on each group (aggregate function) to calculate sum or count of the records in the
group.
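For instance, grouping the rows of sys.objects by type and counting them (a sketch that matches the result shown below):

SELECT type, COUNT(*) AS c
FROM sys.objects
GROUP BY type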
type c
SQ 3
S 72
IT 16
PK 1
U 5
SELECT TOP 10 *
FROM sys.objects
SELECT *
FROM sys.objects
ORDER BY object_id
OFFSET 50 ROWS FETCH NEXT 10 ROWS ONLY
You can use OFFSET without fetch to just skip first 50 rows:
SELECT *
FROM sys.objects
ORDER BY object_id
OFFSET 50 ROWS
SELECT *
FROM (SELECT firstname + ' ' + lastname
FROM AliasNameDemo) a (fullname)
SET @MyInt3 = 3
Although ISNULL() operates similarly to COALESCE(), the ISNULL() function only accepts two parameters - one to
check, and one to use if the first parameter is NULL. See also ISNULL, below
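A quick illustration:

SELECT ISNULL(NULL, 'fallback')            -- 'fallback'
SELECT COALESCE(NULL, NULL, 'fallback')    -- 'fallback'; COALESCE accepts two or more arguments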
In a future version of SQL Server, ANSI_NULLS will always be ON and any applications that explicitly set
the option to OFF will generate an error. Avoid using this feature in new development work, and plan to
modify applications that currently use this feature.
ANSI NULLS being set to off allows for a =/<> comparison of null values.
id someVal
----
0 NULL
1 1
2 2
SELECT id
FROM table
WHERE someVal = NULL
would produce no results. However the same query, with ANSI NULLS off:
SELECT id
FROM table
WHERE someVal = NULL
Would return id 0.
Parameters: ISNULL(check_expression, replacement_value) - check_expression is the expression to be tested for
NULL, and replacement_value is the value returned when check_expression is NULL.
The IsNull() function returns the same data type as the check expression.
DECLARE @MyInt int -- All variables are null until they are set with values.
The following statement will select the value 6, since all comparisons with null values evaluate to false or unknown:
Set the content of the @Date variable to null and try again; the following statement will return 5:
SET @Date = NULL -- Note that the '=' here is an assignment operator!
With a query:
SELECT id
FROM table
WHERE someVal = 1
would return id 1
SELECT id
FROM table
WHERE someVal <> 1
would return id 2
SELECT id
FROM table
WHERE someVal IS NULL
would return id 0
SELECT id
FROM table
WHERE someVal IS NOT NULL
would return ids 1 and 2
If you wanted NULLs to be "counted" as values in a =, <> comparison, it must first be converted to a countable data
type:
SELECT id
FROM table
WHERE ISNULL(someVal, -1) <> 1
OR
SELECT id
FROM table
WHERE someVal IS NULL OR someVal <> 1
returns 0 and 2
When handling a NOT IN subquery that can return NULLs, be cautious about the expected output:
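For example, using the sample table above (where someVal contains a NULL):

SELECT id
FROM table
WHERE 3 NOT IN (SELECT someVal FROM table)
-- returns no rows, even though 3 does not appear in someVal,
-- because the NULL in the subquery makes the NOT IN predicate evaluate to UNKNOWN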
When you create a normal table, you use CREATE TABLE Name (Columns) syntax. When creating a table variable,
you use DECLARE @Name TABLE (Columns) syntax.
To reference the table variable inside a SELECT statement, SQL Server requires that you give the table variable an
alias, otherwise you'll get an error:
i.e.
/*
-- the following two commented out statements would generate an error:
SELECT *
FROM @Table1
INNER JOIN @Table2 ON @Table1.Example = @Table2.Example
SELECT *
FROM @Table1
WHERE @Table1.Example = 1
*/
SELECT *
FROM @Table1 Table1
WHERE Table1.Example = 1
Hello
When using SELECT to update a variable from a table column, if there are multiple values, it will use the last value.
(Normal order rules apply - if no sort is given, the order is not guaranteed.)
PRINT @Variable
PRINT @Variable
If there are no rows returned by the query, the variable's value won't change:
PRINT @Variable
DECLARE @CurrentID int = (SELECT TOP 1 ID FROM Table ORDER BY CreateDate desc)
In most cases, you will want to ensure that your query returns only one value when using this method.
Example usage:
You can also use some built-in codes to convert into a specific format. Here are the options built into SQL Server:
Using this you can transform your DATETIME fields to your own custom VARCHAR format.
Example
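A minimal sketch of FORMAT() applied to a DATETIME value (the format string here is just for illustration):

DECLARE @Date datetime = '2016-09-05 00:01:02.333';

SELECT FORMAT(@Date, 'dd-MM-yyyy HH:mm:ss')   -- 05-09-2016 00:01:02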
Arguments
Given the DATETIME being formatted is 2016-09-05 00:01:02.333, the following chart shows what their output
would be for the provided argument.
Argument Output
yyyy 2016
yy 16
MMMM September
MM 09
M 9
dddd Monday
ddd Mon
dd 05
d 5
HH 00
H 0
hh 12
h 12
mm 01
m 1
ss 02
s 2
tt AM
t A
fff 333
ff 33
f 3
Note: The above list is using the en-US culture. A different culture can be specified for the FORMAT() via the third
parameter:
2016年9月5日 4:01:02
To add a time measure, the number must be positive. To subtract a time measure, the number must be negative.
Examples
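For instance:

SELECT DATEADD(day, 7, '2016-09-05')      -- 2016-09-12 00:00:00.000
SELECT DATEADD(month, -2, '2016-09-05')   -- 2016-07-05 00:00:00.000
SELECT DATEADD(hour, 3, GETDATE())        -- three hours from now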
NOTE: DATEADD also accepts abbreviations in the datepart parameter. Use of these abbreviations is generally
discouraged as they can be confusing (m vs mi, ww vs w, etc.).
RETURN @age
END
SELECT dbo.Calc_Age('2000-01-01',Getdate())
The return value of both functions is based on the operating system of the computer on which the instance of SQL
Server is running.
The return value of GETDATE represents the current time in the same timezone as operating system. The return
value of GETUTCDATE represents the current UTC time.
Either function can be included in the SELECT clause of a query or as part of boolean expression in the WHERE clause.
Examples:
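For instance (the Orders table in the second query is just a placeholder):

SELECT GETDATE() AS CurrentDateTime,
       GETUTCDATE() AS CurrentUtcDateTime;

SELECT OrderID
FROM Orders
WHERE OrderDate < GETDATE();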
There are a few other built-in functions that return different variations of the current date-time:
SELECT
GETDATE(), --2016-07-21 14:27:37.447
GETUTCDATE(), --2016-07-21 18:27:37.447
CURRENT_TIMESTAMP, --2016-07-21 14:27:37.447
SYSDATETIME(), --2016-07-21 14:27:37.4485768
SYSDATETIMEOFFSET(),--2016-07-21 14:27:37.4485768 -04:00
SYSUTCDATETIME() --2016-07-21 18:27:37.4485768
The EOMONTH function provides a more concise way to return the last date of a month, and has an optional
parameter to offset the month.
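For instance:

SELECT EOMONTH('2016-07-21')       -- 2016-07-31
SELECT EOMONTH('2016-07-21', 1)    -- 2016-08-31 (one month ahead)
SELECT EOMONTH(GETDATE(), -1)      -- last day of the previous month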
In Transact SQL, you may define an object as Date (or DateTime) using the DATEFROMPARTS (or
DATETIMEFROMPARTS) function, like the following:
The parameters you provide are Year, Month, Day for the DATEFROMPARTS function and, for the DATETIMEFROMPARTS
function you will need to provide year, month, day, hour, minutes, seconds and milliseconds.
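For instance:

SELECT DATEFROMPARTS(2016, 9, 5)                     -- 2016-09-05
SELECT DATETIMEFROMPARTS(2016, 9, 5, 14, 30, 0, 0)   -- 2016-09-05 14:30:00.000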
These methods are useful and worth being used because using the plain string to build a date(or datetime) may fail
depending on the host machine region, location or date format settings.
It will return a positive number if datetime_expr is in the past relative to datetime_expr2, and a negative number
otherwise.
Examples
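For instance:

SELECT DATEDIFF(day, '2016-01-01', '2016-01-31')     -- 30
SELECT DATEDIFF(month, '2016-01-15', '2016-03-01')   -- 2 (month boundaries crossed, not full months)
SELECT DATEDIFF(day, '2016-01-31', '2016-01-01')     -- -30 (start date is after end date)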
NOTE: DATEDIFF also accepts abbreviations in the datepart parameter. Use of these abbreviations is generally
discouraged as they can be confusing (m vs mi, ww vs w, etc.).
DATEDIFF can also be used to determine the offset between UTC and the local time of the SQL Server. The following
statement can be used to calculate the offset between UTC and local time (including timezone).
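A sketch of such a statement:

SELECT DATEDIFF(minute, GETUTCDATE(), GETDATE()) AS UtcOffsetInMinutes
-- e.g. -240 for Eastern Daylight Time, 60 for Central European Time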
DATENAME returns a character string that represents the specified datepart of the specified date. In practice
DATENAME is mostly useful for getting the name of the month or the day of the week.
There are also some shorthand functions to get the year, month or day of a datetime expression, which behave like
DATEPART with their respective datepart units.
Syntax:
Examples:
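For instance:

SELECT DATEPART(month, '2016-09-05')     -- 9
SELECT DATENAME(month, '2016-09-05')     -- September
SELECT DATENAME(weekday, '2016-09-05')   -- Monday
SELECT YEAR('2016-09-05'), MONTH('2016-09-05'), DAY('2016-09-05')   -- 2016, 9, 5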
NOTE: DATEPART and DATENAME also accept abbreviations in the datepart parameter. Use of these abbreviations is
generally discouraged as they can be confusing (m vs mi, ww vs w, etc.).
datepart Abbreviations
year yy, yyyy
quarter qq, q
month mm, m
dayofyear dy, y
day dd, d
week wk, ww
weekday dw, w
hour hh
minute mi, n
second ss, s
millisecond ms
microsecond mcs
nanosecond ns
NOTE: Use of abbreviations is generally discouraged as they can be confusing (m vs mi, ww vs w, etc.). The long
version of the datepart representation promotes clarity and readability, and should be used whenever possible
(month, minute, week, weekday, etc.).
The default MaxRecursion setting is 100. Generating more than 100 dates using this method will require the Option
(MaxRecursion N) segment of the query, where N is the desired MaxRecursion setting. Setting this to 0 will remove
the MaxRecursion limitation altogether.
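A sketch of such a recursive CTE (the variable names match the tally-table version below):

DECLARE @FromDate date = '2016-01-01';
DECLARE @ToDate   date = '2016-12-31';

;WITH Dates ([Date]) AS
(
    SELECT @FromDate
    UNION ALL
    SELECT DATEADD(Day, 1, [Date])
    FROM Dates
    WHERE [Date] < @ToDate
)
SELECT [Date]
FROM Dates
OPTION (MAXRECURSION 0);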
;With
E1(N) As (Select 1 From (Values (1), (1), (1), (1), (1), (1), (1), (1), (1), (1)) DT(N)),
E2(N) As (Select 1 From E1 A Cross Join E1 B),
E4(N) As (Select 1 From E2 A Cross Join E2 B),
E6(N) As (Select 1 From E4 A Cross Join E2 B),
Tally(N) As
(
Select Row_Number() Over (Order By (Select Null))
From E6
)
Select DateAdd(Day, N - 1, @FromDate) Date
From Tally
Where N <= DateDiff(Day, @FromDate, @ToDate) + 1
Warning: This will delete all changes made to the source database since the snapshot was taken!
Since a table variable is used, we need to execute the whole batch at once. To make it easier to read, the statements
are wrapped in a BEGIN and END block.
BEGIN
--COALESCE is used so that the comma-separated FirstName values are concatenated into the @Names variable
SELECT @Names = COALESCE(@Names + ',', '') + FirstName
FROM @Table
Section 12.2: Getting the first not null from a list of column
values
SELECT COALESCE(NULL, NULL, 'TechOnTheNet.com', NULL, 'CheckYourMath.com');
Result: 'TechOnTheNet.com'
In the example below, 1 = 1 is the expression; it evaluates to true, so control enters the BEGIN..END block and the
PRINT statement prints the string 'One is equal to One':
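A sketch:

IF ( 1 = 1 )
BEGIN
    PRINT 'One is equal to One'
END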
In the example below, each IF statement's expression is evaluated and if it is true the code inside the BEGIN...END
block is executed. In this particular example, the First and Third expressions are true and only those print
statements will be executed.
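A sketch of such a sequence (the literal comparisons stand in for real conditions):

IF (1 = 1)   -- true
BEGIN
    PRINT 'First IF is true'
END

IF (1 = 2)   -- false
BEGIN
    PRINT 'Second IF is true'
END

IF (2 = 2)   -- true
BEGIN
    PRINT 'Third IF is true'
END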
On the other hand if the expression evaluates to False the ELSE BEGIN..END block gets executed and the control
never enters the first BEGIN..END Block.
In the Example below the expression will evaluate to false and the Else block will be executed printing the string
'First expression was not true'
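A sketch:

IF (1 = 2)
BEGIN
    PRINT 'First expression was true'
END
ELSE
BEGIN
    PRINT 'First expression was not true'   --<-- this is printed
END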
In the example below none of the IF or ELSE IF expressions are true, hence only the ELSE block is executed and prints
'No other expression is true'
IF ( 1 = 1 + 1 )
BEGIN
PRINT 'First If Condition'
END
ELSE IF (1 = 2)
BEGIN
PRINT 'Second If Else Block'
END
ELSE IF (1 = 3)
BEGIN
PRINT 'Third If Else Block'
END
ELSE
BEGIN
PRINT 'No other expression is true' --<-- Only this statement will be printed
END
In this example all the expressions are evaluated from top to bottom. As soon as an expression evaluates to true,
the code inside that block is executed. If no expression is evaluated to true, nothing gets executed.
IF (1 = 1 + 1)
BEGIN
PRINT 'First If Condition'
END
ELSE IF (1 = 2)
BEGIN
PRINT 'Second If Else Block'
END
ELSE IF (1 = 3)
BEGIN
PRINT 'Third If Else Block'
END
ELSE IF (1 = 1) --<-- This is True
BEGIN
PRINT 'Last Else Block' --<-- Only this statement will be printed
END
SELECT CASE
WHEN LEFT(@FirstName, 1) IN ('a','e','i','o','u')
THEN 'First name starts with a vowel'
WHEN LEFT(@LastName, 1) IN ('a','e','i','o','u')
THEN 'Last name starts with a vowel'
ELSE
'Neither name starts with a vowel'
END
To insert multiple rows of data in earlier versions of SQL Server, use "UNION ALL" like so:
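For instance (table and column names are placeholders):

INSERT INTO Customers (Name, City)
SELECT 'Alice', 'Berlin'
UNION ALL
SELECT 'Bob', 'Paris'
UNION ALL
SELECT 'Carol', 'Madrid'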
Note, the "INTO" keyword is optional in INSERT queries. Another warning is that SQL server only supports 1000
rows in one INSERT so you have to split them in batches.
When programmatically calling this (e.g., from ADO.net) you would treat it as a normal query and read the values as
if you would've made a SELECT-statement.
-- CREATE TABLE OutputTest ([Id] INT NOT NULL PRIMARY KEY IDENTITY, [Name] NVARCHAR(50))
If the ID of the recently added row is required inside the same set of query or stored procedure.
-- CREATE a table variable having column with the same datatype of the ID
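-- (Sketch: the table variable and sample value below are assumptions; OutputTest is the table created above.)
DECLARE @InsertedIds TABLE (Id INT);

INSERT INTO OutputTest ([Name])
OUTPUT inserted.[Id] INTO @InsertedIds (Id)
VALUES (N'Example name');

SELECT Id FROM @InsertedIds;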
Note, 'student' in SELECT is a string constant that will be inserted in each row.
If required, you can select and insert data from/into the same table
Or
Note that the second insert statement only allows the values in exactly the same order as the table columns
whereas in the first insert, the order of the values can be changed like:
This will only work if the columns that you did not list are nullable, identity, timestamp data type or computed
columns; or columns that have a default value constraint. Therefore, if any of them are non-nullable, non-identity,
non-timestamp, non-computed, non-default valued columns...then attempting this kind of insert will trigger an
error message telling you that you have to provide a value for the applicable field(s).
The MERGE statement allows you to join a data source with a target table or view, and then perform multiple
actions against the target based on the results of that join.
USING sourceTable
ON (targetTable.PKID = sourceTable.PKID)
Description:
Comments:
If a specific action is not needed then omit the condition e.g. removing WHEN NOT MATCHED THEN INSERT will prevent
records from being inserted
Restrictions:
1. dbo.Product: This table contains information about the products that the company is currently selling.
2. dbo.ProductNew: This table contains information about the products that the company will sell in the future.
The following T-SQL will create and populate these two tables.
Now, suppose we want to synchronize the dbo.Product target table with the dbo.ProductNew table. Here are the
criteria for this task:
1. Any product that exists in both dbo.ProductNew and dbo.Product must be updated in the dbo.Product target
table with the latest information.
2. Any product in the dbo.ProductNew source table that does not exist in the dbo.Product target table must be
inserted into the dbo.Product target table.
3. Any product in the dbo.Product target table that does not exist in the dbo.ProductNew source table must be
deleted from the dbo.Product target table. Here is the MERGE statement to perform this task.
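A sketch of such a MERGE statement (the column names are assumptions, since the table definitions are not shown here):
MERGE dbo.Product AS TARGET
USING dbo.ProductNew AS SOURCE
    ON TARGET.ProductID = SOURCE.ProductID
WHEN MATCHED THEN
    UPDATE SET TARGET.ProductName = SOURCE.ProductName,
               TARGET.Price = SOURCE.Price
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductID, ProductName, Price)
    VALUES (SOURCE.ProductID, SOURCE.ProductName, SOURCE.Price)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;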
The view definition can reference one or more tables in the same database.
Once the unique clustered index is created, additional nonclustered indexes can be created against the view.
You can update the data in the underlying tables – including inserts, updates, deletes, and even truncates.
You can't modify the schema of the underlying tables and columns while the view exists, because the view is created
with the WITH SCHEMABINDING option.
It can’t contain COUNT, MIN, MAX, TOP, outer joins, or a few other keywords or elements.
For more information about creating indexed Views you can read this MSDN article
SELECT FirstName
FROM view_EmployeeInfo
You may also create a view with a calculated column. We can modify the view above by adding such a column, as sketched below.
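A sketch of the modified definition (using ALTER VIEW; it assumes the existing view selects from an Employee table with FirstName and LastName columns):
ALTER VIEW view_EmployeeInfo
AS
    SELECT FirstName,
           LastName,
           FirstName + ' ' + LastName AS FullName
    FROM Employee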
This view adds an additional column that will appear when you SELECT rows from it. The values in this additional
column will be dependent on the fields FirstName and LastName in the table Employee and will automatically
update behind-the-scenes when those fields are updated.
Views can use joins to select data from numerous sources like tables, table functions, or even other views. This
example uses the FirstName and LastName columns from the Person table and the JobTitle column from the
Employee table.
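A sketch of such a view (the join column is an assumption):
CREATE VIEW view_PersonEmployee
AS
    SELECT p.FirstName,
           p.LastName,
           e.JobTitle
    FROM Person p
    JOIN Employee e
        ON e.PersonID = p.PersonID -- join column assumed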
This view can now be used to see all corresponding rows for Managers in the database:
SELECT *
FROM view_PersonEmployee
WHERE JobTitle LIKE '%Manager%'
https://www.simple-talk.com/sql/t-sql-programming/sql-view-beyond-the-basics/
Views without schema binding can break if their underlying table(s) change or get dropped. Querying a broken view
results in an error message. sp_refreshview can be used to ensure existing views without schema binding aren't
broken.
1. The number and the order of the columns must be the same in all queries.
Example:
We have three tables : Marksheet1, Marksheet2 and Marksheet3. Marksheet3 is the duplicate table of Marksheet2
which contains same values as that of Marksheet2.
Table1: Marksheet1
Table2: Marksheet2
Table3: Marksheet3
OUTPUT
Union All
OUTPUT
BEGIN TRANSACTION
BEGIN TRY
INSERT INTO dbo.Sale(Price, SaleDate, Quantity)
VALUES (5.2, GETDATE(), 1)
INSERT INTO dbo.Sale(Price, SaleDate, Quantity)
VALUES (5.2, 'not a date', 1)
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION -- First Rollback and then throw.
THROW
END CATCH
BEGIN TRANSACTION
BEGIN TRY
INSERT INTO dbo.Sale(Price, SaleDate, Quantity)
VALUES (5.2, GETDATE(), 1)
INSERT INTO dbo.Sale(Price, SaleDate, Quantity)
VALUES (5.2, GETDATE(), 1)
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION -- First Rollback and then throw.
THROW
END CATCH
RAISERROR with second parameter greater than 10 (11 in this example) will stop execution in TRY BLOCK and raise
an error that will be handled in CATCH block. You can access error message using ERROR_MESSAGE() function.
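A sketch of the code this paragraph describes (reconstructed from the description and the output below):
BEGIN TRY
    PRINT 'First statement';
    RAISERROR('Here is a problem!', 11, 15);
    PRINT 'Second statement';
END TRY
BEGIN CATCH
    PRINT 'Error: ' + ERROR_MESSAGE();
END CATCH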
Output of this sample is:
First statement
Error: Here is a problem!
BEGIN TRY
print 'First statement';
RAISERROR( 'Here is a problem!', 10, 15);
print 'Second statement';
END TRY
BEGIN CATCH
print 'Error: ' + ERROR_MESSAGE();
END CATCH
After RAISERROR statement, third statement will be executed and CATCH block will not be invoked. Result of
execution is:
First statement
Here is a problem!
Second statement
Note that in this case we are raising error with formatted arguments (fourth and fifth parameter). This might be
useful if you want to add more info in message. Result of execution is:
First statement
Error: Here is a problem! Area: 'TRY BLOCK' Line:'2'
Msg 50000, Level 11, State 1, Line 26
Here is a problem! Area: 'TRY BLOCK' Line:'2'
The exception will be handled in the CATCH block and then re-thrown using THROW without parameters.
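A sketch of the code being described (reconstructed from the description and the output below; the error number and state match the Msg line shown):
BEGIN TRY
    PRINT 'First statement';
    THROW 51000, 'Here is a problem!', 15;
    PRINT 'Second statement';
END TRY
BEGIN CATCH
    PRINT 'Error: ' + ERROR_MESSAGE();
    THROW; -- re-throw the original error
END CATCH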
First statement
Error: Here is a problem!
Msg 51000, Level 16, State 15, Line 39
Here is a problem!
DECLARE @i int = 0;
WHILE(@i < 100)
BEGIN
PRINT @i;
SET @i = @i+1
END
SELECT CustomerId,
SUM(TotalCost) OVER(PARTITION BY CustomerId) AS Total,
AVG(TotalCost) OVER(PARTITION BY CustomerId) AS Avg,
COUNT(TotalCost) OVER(PARTITION BY CustomerId) AS Count,
MIN(TotalCost) OVER(PARTITION BY CustomerId) AS Min,
MAX(TotalCost) OVER(PARTITION BY CustomerId) AS Max
FROM CarsTable
WHERE Status = 'READY'
Beware that using OVER in this fashion will not aggregate the rows returned. The above query will return the
following:
The duplicated row(s) may not be that useful for reporting purposes.
If you wish to simply aggregate data, you will be better off using the GROUP BY clause along with the appropriate
aggregate functions Eg:
SELECT CustomerId,
SUM(TotalCost) AS Total,
AVG(TotalCost) AS Avg,
COUNT(TotalCost) AS Count,
MIN(TotalCost) AS Min,
MAX(TotalCost) AS Max
FROM CarsTable
WHERE Status = 'READY'
GROUP BY CustomerId
-- Setup data:
declare @values table(Id int identity(1,1) primary key, [Value] float, ExamId int)
insert into @values ([Value], ExamId) values
(65, 1), (40, 1), (99, 1), (100, 1), (90, 1), -- Exam 1 Scores
(91, 2), (88, 2), (83, 2), (91, 2), (78, 2), (67, 2), (77, 2) -- Exam 2 Scores
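For instance, a sketch that buckets each exam's scores into four roughly equal groups (the alias names are assumptions):
SELECT [Value],
       ExamId,
       NTILE(4) OVER (PARTITION BY ExamId ORDER BY [Value]) AS Quartile
FROM @values;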
ntile works great when you really need a set number of buckets and each filled to approximately the same level.
Notice that it would be trivial to separate these scores into percentiles by simply using ntile(100).
Instead of RANK, two other functions can be used to order. In the previous example the result will be the same, but
they give different results when the ordering gives multiple rows for each rank.
RANK(): duplicates get the same rank, the next rank takes the number of duplicates in the previous rank into
account
For example, if the table had a non-unique column CreationDate and the ordering was done based on that, the
following query:
SELECT Authors.Name,
Books.Title,
Books.CreationDate,
RANK() OVER (PARTITION BY Authors.Id ORDER BY Books.CreationDate DESC) AS RANK,
DENSE_RANK() OVER (PARTITION BY Authors.Id ORDER BY Books.CreationDate DESC) AS DENSE_RANK,
ROW_NUMBER() OVER (PARTITION BY Authors.Id ORDER BY Books.CreationDate DESC) AS ROW_NUMBER
FROM Authors
JOIN Books ON Books.AuthorId = Authors.Id
When grouping by a specific column, only unique values of this column are returned.
SELECT customerId
FROM orders
GROUP BY customerId;
Return value:
customerId
1
2
3
Aggregate functions like count() apply to each group and not to the complete table:
SELECT customerId,
COUNT(productId) as numberOfProducts,
sum(price) as totalPrice
FROM orders
GROUP BY customerId;
Return value:
CUBE generates a result set that shows aggregates for all combinations of values in the selected columns.
ROLLUP generates a result set that shows aggregates for a hierarchy of values in the selected columns.
(7 row(s) affected)
If the ROLLUP keyword in the query is changed to CUBE, the CUBE result set is the same, except these two
additional rows are returned at the end:
https://technet.microsoft.com/en-us/library/ms189305(v=sql.90).aspx
Subject_Id Subject
1 Maths
2 P.E.
3 Physics
And because one student can attend many subjects and one subject can be attended by many students (therefore
N:N relationship) we need to have third "bounding" table. Let's call the table Students_subjects:
Subject_Id Student_Id
1 1
2 2
2 1
3 2
1 3
1 1
Now let's say we want to know the number of subjects each student is attending. Here a standalone GROUP BY is not
sufficient, as the information is not available through a single table. Therefore we need to use GROUP BY together
with a JOIN:
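A sketch of such a query (it assumes a Students table with Student_Id and FullName columns, which is not shown above):
SELECT s.FullName,
       COUNT(ss.Subject_Id) AS SubjectNumber
FROM Students s
JOIN Students_subjects ss
    ON ss.Student_Id = s.Student_Id
GROUP BY s.FullName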
FullName SubjectNumber
Matt Jones 3
Frank Blue 2
Anthony Angel 1
For an even more complex example of GROUP BY usage, let's say student might be able to assign the same subject
to his name more than once (as shown in table Students_Subjects). In this scenario we might be able to count
number of times each subject was assigned to a student by GROUPing by more than one column:
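A sketch of such a query against the Students_subjects table:
SELECT Student_Id,
       Subject_Id,
       COUNT(*) AS TimesAssigned
FROM Students_subjects
GROUP BY Student_Id, Subject_Id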
If we want to get the number of orders each person has placed, we would use
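A sketch of such a query (the Orders table and Name column are assumptions based on the result shown below):
SELECT Name,
       COUNT(*) AS Orders
FROM Orders
GROUP BY Name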
and get
Name Orders
Matt 2
John 5
Luke 4
However, if we want to limit this to individuals who have placed more than two orders, we can add a HAVING clause.
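A sketch of the same query with the HAVING clause added (same assumed table as above):
SELECT Name,
       COUNT(*) AS Orders
FROM Orders
GROUP BY Name
HAVING COUNT(*) > 2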
will yield
Name Orders
John 5
Luke 4
Note that, much like GROUP BY, the expressions used in HAVING must be aggregate expressions or columns that also
appear in the GROUP BY clause. If in the above example we had instead said
Returns:
Id FName LName
2 John Johnson
1 James Smith
4 Johnathon Smith
3 Michael Williams
To sort in descending order add the DESC keyword after the field parameter, e.g. the same query in LName
descending order is:
Note that the ASC keyword is optional, and results are sorted in ascending order of a given field by default.
Group Count
Not Retired 6
Retired 4
Total 10
order by case group when 'Total' then 1 when 'Retired' then 2 else 3 end returns:
Group Count
SELECT
STUFF( (SELECT ';' + Email
FROM Customers
where (Email is not null and Email <> '')
ORDER BY Email ASC
FOR XML PATH('')),
1, 1, '')
In the example above, FOR XML PATH('')) is being used to concatenate email addresses, using ; as the delimiter
character. Also, the purpose of STUFF is to remove the leading ; from the concatenated string. STUFF is also
implicitly casting the concatenated string from XML to varchar.
Note: the result from the above example will be XML-encoded, meaning it will replace < characters with &lt; etc. If
you don't want this, change FOR XML PATH('')) to FOR XML PATH, TYPE).value('.[1]','varchar(MAX)'), e.g.:
SELECT
STUFF( (SELECT ';' + Email
FROM Customers
where (Email is not null and Email <> '')
ORDER BY Email ASC
FOR XML PATH, TYPE).value('.[1]','varchar(900)'),
1, 1, '')
This can be used to achieve a result similar to GROUP_CONCAT in MySQL or string_agg in PostgreSQL 9.0+, although
we use subqueries instead of GROUP BY aggregates. (As an alternative, you can install a user-defined aggregate
such as this one if you're looking for functionality closer to that of GROUP_CONCAT).
Executing this example will result in returning SQL Server Documentation instead of SQL Svr Documentation.
STUFF() function inserts Replacement_expression, at the start position specified, along with removing the
characters specified using Length parameter.
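A sketch of the call being described:
SELECT STUFF('SQL Svr Documentation', 5, 3, 'Server')
-- returns 'SQL Server Documentation'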
Given the above table If we want to find the row with the name = 'Adam', we would execute the following query.
SELECT *
FROM JsonTable Where
JSON_VALUE(jsonInfo, '$.Name') = 'Adam'
However this will require SQL Server to perform a full table scan, which on a large table is not efficient.
To speed this up we would like to add an index, however we cannot directly reference properties in the JSON
document. The solution is to add a computed column on the JSON path $.Name, then add an index on the
computed column.
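A sketch of those two steps (the computed column name follows the text; the index name is an assumption):
ALTER TABLE JsonTable
    ADD vName AS JSON_VALUE(jsonInfo, '$.Name');

CREATE NONCLUSTERED INDEX IX_JsonTable_vName
    ON JsonTable (vName);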
Now when we execute the same query, instead of a full table scan SQL server uses an index to seek into the non-
clustered index and find the rows that satisfy the specified conditions.
Note: For SQL server to use the index, you must create the computed column with the same expression that you
plan to use in your queries - in this example JSON_VALUE(jsonInfo, '$.Name'), however you can also use the
name of computed column vName
Query
SELECT
JSON_VALUE(person.value, '$.id') as Id,
JSON_VALUE(person.value, '$.user.name') as PersonName,
JSON_VALUE(hobbies.value, '$.name') as Hobby
FROM OPENJSON (@json) as person
CROSS APPLY OPENJSON(person.value, '$.hobbies') as hobbies
SELECT
Id, person.PersonName, Hobby
FROM OPENJSON (@json)
WITH(
Id int '$.id',
PersonName nvarchar(100) '$.user.name',
Hobbies nvarchar(max) '$.hobbies' AS JSON
) as person
CROSS APPLY OPENJSON(Hobbies)
WITH(
Hobby nvarchar(100) '$.name'
)
Result
Id PersonName Hobby
1 John Reading
1 John Surfing
2 Jane Programming
2 Jane Running
Id Name Age
1 John 23
2 Jane 31
Query
Result
[
{"Id":1,"Name":"John","Age":23},
{"Id":2,"Name":"Jane","Age":31}
]
SELECT
JSON_VALUE(@json, '$.id') AS Id,
JSON_VALUE(@json, '$.user.name') AS Name,
JSON_QUERY(@json, '$.user') AS UserObject,
JSON_QUERY(@json, '$.skills') AS Skills,
JSON_VALUE(@json, '$.skills[0]') AS Skill0
Result
Note: this option will produce invalid JSON output if more than one row is returned.
Id Name Age
1 John 23
2 Jane 31
Result
{"Id":1,"Name":"John","Age":23}
SELECT *
FROM OPENJSON (@json)
WITH(Id int '$.id',
Name nvarchar(100) '$.user.name',
UserObject nvarchar(max) '$.user' AS JSON,
Skills nvarchar(max) '$.skills' AS JSON,
Skill0 nvarchar(20) '$.skills[0]')
Result
SELECT *
FROM OPENJSON (@json)
WITH (
Number varchar(200),
Date datetime,
Customer varchar(200),
Quantity int
)
The WITH clause specifies the return schema of the OPENJSON function. Keys in the JSON objects are fetched by the
column names. If a key in the JSON is not specified in the WITH clause (e.g. Price in this example), it is ignored.
Values are automatically converted into the specified types.
The type column describes the type of each value, i.e. null (0), string (1), number (2), boolean (3), array (4), or object (5).
{"data":{"num":"SO43659","date":"2011-05-31T00:00:00"},"info":{"customer":"MSFT","Price":59.99,"qty
":1}},
{"data":{"number":"SO43661","date":"2011-06-01T00:00:00"},"info":{"customer":"Nokia","Price":24.99,
"qty":3}}
]'
The WITH clause specifies the return schema of the OPENJSON function. After each column type, the path to the JSON
node where the returned value should be found is specified. Keys in the JSON objects are fetched by these paths.
Values are automatically converted into the specified types.
{"Number":"SO43659","Date":"2011-05-31T00:00:00","info":{"customer":"MSFT","Price":59.99,"qty":1}},
{"Number":"SO43661","Date":"2011-06-01T00:00:00","info":{"customer":"Nokia","Price":24.99,"qty":3}}
]'
SELECT *
FROM OPENJSON (@json)
WITH (
Number varchar(200),
Date datetime,
Info nvarchar(max) '$.info' AS JSON
)
"Items":[{"Price":21.99,"Quantity":3},{"Price":22.99,"Quantity":2},{"Price":23.99,"Quantity":2}]}
]'
We can parse the root-level properties using OPENJSON, which will return the Items array as a JSON fragment. Then we can apply OPENJSON to that fragment using CROSS APPLY:
SELECT *
FROM
OPENJSON (@json)
WITH ( Number varchar(200), Date datetime,
Items nvarchar(max) AS JSON )
CROSS APPLY
OPENJSON (Items)
WITH ( Price float, Quantity int)
Results:
Column names will be used as keys in JSON, and cell values will be generated as JSON values. Result of the query
would be an array of JSON objects:
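A sketch of a query that produces output of this shape (the filter and TOP are assumptions):
SELECT TOP 3 object_id, name, type
FROM sys.objects
WHERE type = 'S'
FOR JSON PATH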
[
{"object_id":3,"name":"sysrscols","type":"S "},
{"object_id":5,"name":"sysrowsets","type":"S "},
{"object_id":6,"name":"sysclones","type":"S "}
]
NULL values in principal_id column will be ignored (they will not be generated).
A column alias will be used as the key name. Dot-separated column aliases (data.name and data.type) will be generated
as nested objects. If two columns have the same prefix in dot notation, they will be grouped together in a single object
(data in this example):
[
{"id":3,"data":{"name":"sysrscols","type":"S "}},
{"id":5,"data":{"name":"sysrowsets","type":"S "}},
{"id":6,"data":{"name":"sysclones","type":"S "}}
]
{"object_id":3,"name":"sysrscols","type":"S "}
[
{"object_id":3,"name":"sysrscols","type":"S ","principal_id":null},
{"object_id":5,"name":"sysrowsets","type":"S ","principal_id":null},
{"object_id":6,"name":"sysclones","type":"S ","principal_id":null}
]
Result of the query would be array of JSON objects inside the wrapper object:
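A sketch of such a query, using the ROOT option to add the wrapper object (the filter and TOP are assumptions):
SELECT TOP 3 object_id, name, type
FROM sys.objects
WHERE type = 'S'
FOR JSON PATH, ROOT('data')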
{
"data":[
{"object_id":3,"name":"sysrscols","type":"S "},
{"object_id":5,"name":"sysrowsets","type":"S "},
{"object_id":6,"name":"sysclones","type":"S "}
]
}
[
{
"object_id":3,
"name":"sysrscols",
"c":[
{"column_id":12,"name":"bitpos"},
{"column_id":6,"name":"cid"}
]
},
{
"object_id":5,
"name":"sysrowsets",
Each sub-query will produce JSON result that will be included in the main JSON content.
update Product
set Data = JSON_MODIFY(Data, '$.Price', 24.99)
where ProductID = 17;
The JSON_MODIFY function will update or create the Price key (if it does not exist). If the new value is NULL, the key
will be removed. JSON_MODIFY treats the new value as a string (it escapes special characters and wraps it with double
quotes to create a proper JSON string). If your new value is a JSON fragment, you should wrap it with the JSON_QUERY
function:
update Product
set Data = JSON_MODIFY(Data, '$.tags', JSON_QUERY('["promo","new"]'))
where ProductID = 17;
The JSON_QUERY function without a second parameter behaves like a "cast to JSON". Since the result of JSON_QUERY is a
valid JSON fragment (object or array), JSON_MODIFY will not escape this value when it modifies the input JSON.
update Product
set Data = JSON_MODIFY(Data, 'append $.tags', 'sales')
where ProductID = 17;
The new value will be appended to the end of the array, or a new array with the value ["sales"] will be created.
JSON_MODIFY treats the new value as a string (it escapes special characters and wraps it with double quotes to
create a proper JSON string). If your new value is a JSON fragment, you should wrap it with the JSON_QUERY function:
update Product
set Data = JSON_MODIFY(Data, 'append $.tags', JSON_QUERY('{"type":"new"}'))
where ProductID = 17;
The JSON_QUERY function without a second parameter behaves like a "cast to JSON". Since the result of JSON_QUERY is a
valid JSON fragment (object or array), JSON_MODIFY will not escape this value when it modifies the input JSON.
Result of the query is equivalent to the join between Product and Part tables.
OPENJSON will open inner collection of tags and return it as table. Then we can filter results by some value in the
table.
Use nvarchar(max) if you are not sure about the size of your JSON documents. nvarchar(4000) and
varchar(8000) have better performance but are limited to 8 KB.
If you already have a table, you can add check constraint using the ALTER TABLE statement:
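A sketch of such a constraint, using the ISJSON function (the constraint name is an assumption; Product and Data are the table and column used in the other examples in this chapter):
ALTER TABLE Product
    ADD CONSTRAINT [Data should be formatted as JSON]
    CHECK (ISJSON(Data) > 0);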
If you add PERSISTED computed column, value from JSON text will be materialized in this column. This way your
queries can faster read value from JSON text because no parsing is needed. Each time JSON in this row changes,
value will be re-calculated.
To optimize these kinds of queries, you can add a non-persisted computed column that exposes the JSON expression
used in the filter or sort (in this example JSON_VALUE(Data, '$.Color')), and create an index on this column:
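A sketch of those two steps (the column and index names are assumptions):
ALTER TABLE Product
    ADD vColor AS JSON_VALUE(Data, '$.Color');

CREATE NONCLUSTERED INDEX IX_Product_vColor
    ON Product (vColor);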
As a result, we will have new JSON text with "Price":39.99, and the other values will not be changed. If the object on the
specified path does not exist, JSON_MODIFY will insert the key:value pair.
By default, JSON_MODIFY deletes a key if the new value is NULL, so you can also use it to delete a key.
If array on the specified path does not exists, JSON_MODIFY(append) will create new array with a single element:
Since third parameter is text you need to wrap it with JSON_QUERY function to "cast" text to JSON. Without this
"cast", JSON_MODIFY will treat third parameter as plain text and escape characters before inserting it as string
value. Without JSON_QUERY results will be:
{"Id":1,"Name":"Toy Car","Price":'{\"Min\":34.99,\"Recommended\":45.49}'}
JSON_MODIFY will insert this object if it does not exist, or delete it if value of third parameter is NULL.
(1 row(s) affected)
{"Id":1,"Name":"master","tables":[{"name":"Colors"},{"name":"Colors_Archive"},{"name":"OrderLines"}
,{"name":"PackageTypes"},{"name":"PackageTypes_Archive"},{"name":"StockGroups"},{"name":"StockItemS
tockGroups"},{"name":"StockGroups_Archive"},{"name":"StateProvinces"},{"name":"CustomerTransactions
"},{"name":"StateProvinces_Archive"},{"name":"Cities"},{"name":"Cities_Archive"},{"name":"SystemPar
ameters"},{"name":"InvoiceLines"},{"name":"Suppliers"},{"name":"StockItemTransactions"},{"name":"Su
ppliers_Archive"},{"name":"Customers"},{"name":"Customers_Archive"},{"name":"PurchaseOrders"},{"nam
e":"Orders"},{"name":"People"},{"name":"StockItems"},{"name":"People_Archive"},{"name":"ColdRoomTem
peratures"},{"name":"ColdRoomTemperatures_Archive"},{"name":"VehicleTemperatures"},{"name":"StockIt
ems_Archive"},{"name":"Countries"},{"name":"StockItemHoldings"},{"name":"sysdiagrams"},{"name":"Pur
chaseOrderLines"},{"name":"Countries_Archive"},{"name":"DeliveryMethods"},{"name":"DeliveryMethods_
Archive"},{"name":"PaymentMethods"},{"name":"SupplierTransactions"},{"name":"PaymentMethods_Archive
"},{"name":"TransactionTypes"},{"name":"SpecialDeals"},{"name":"TransactionTypes_Archive"},{"name":
"SupplierCategories"},{"name":"SupplierCategories_Archive"},{"name":"BuyingGroups"},{"name":"Invoic
es"},{"name":"BuyingGroups_Archive"},{"name":"CustomerCategories"},{"name":"CustomerCategories_Arch
ive"}]}
JSON_MODIFY knows that a SELECT query with the FOR JSON clause generates a valid JSON array, so it will just insert it
into the JSON text.
You can use all FOR JSON options in the SELECT query, except WITHOUT_ARRAY_WRAPPER, which would
generate a single object instead of a JSON array. See the other example in this topic to see how to insert a single JSON
object.
(1 row(s) affected)
{"Id":17,"Name":"WWI","table":{"name":"Colors","create_date":"2016-06-02T10:04:03.280","schema_id":
13}}
You should wrap FOR JSON, WITHOUT_ARRAY_WRAPPER query with JSON_QUERY function in order to
cast result to JSON.
SELECT STUFF
(
(
SELECT ',' + [FirstName]
FROM @DataSource
ORDER BY [rowID] DESC
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1
,1
,''
);
the ORDER BY clause can be used to order the values in a preferred way
if a longer value is used as the concatenation separator, the STUFF function parameter must be changed too;
SELECT STUFF
(
(
SELECT '---' + [FirstName]
FROM @DataSource
ORDER BY [rowID] DESC
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1
,3 -- the "3" could also be represented as: LEN('---') for clarity
,''
);
as the TYPE option and .value function are used, the concatenation works with NVARCHAR(MAX) string
<html>
<head>
<title>XPath example</title>
</head>
<body>
<p>This example demonstrates <a href="https://www.w3.org/TR/xpath/">XPath
expressions</a></p>
</body>
</html>
In FOR XML PATH, columns without a name become text nodes. NULL or '' therefore become empty text nodes.
Note: you can convert a named column to an unnamed one by using AS *
<row>
<LIN NUM="1">
<FLD NAME="REF">100001</FLD>
<FLD NAME="DES">Normal</FLD>
<FLD NAME="QTY">1</FLD>
</LIN>
<LIN NUM="2">
<FLD NAME="REF">100002</FLD>
<FLD NAME="DES">Foobar</FLD>
<FLD NAME="QTY">1</FLD>
</LIN>
<LIN NUM="3">
<FLD NAME="REF">100003</FLD>
<FLD NAME="DES">Hello World</FLD>
<FLD NAME="QTY">2</FLD>
</LIN>
</row>
For example, without the empty strings between the element and the attribute in the SELECT statement, SQL
Server gives an error:
Attribute-centric column 'FLD/@NAME' must not come after a non-attribute-centric sibling in XML
hierarchy in FOR XML PATH.
Also note that this example also wrapped the XML in a root element named row, specified by ROOT('row')
<example>Hello World</example>
SELECT *
FROM table_1
INNER JOIN table_2
ON table_1.column_name = table_2.column_name
SELECT *
FROM table_1
JOIN table_2
ON table_1.column_name = table_2.column_name
Example
/* Sample data. */
DECLARE @Animal table (
AnimalId Int IDENTITY,
Animal Varchar(20)
);
SELECT
    *
FROM
    @Animal AS a
    JOIN @AnimalSound AS s
        ON a.AnimalId = s.AnimalId;
This query returns rows from Table1 that have a matching row in Table2 satisfying a condition on Table2, and that
have no matching row in Table3 (the LEFT JOIN combined with the IS NULL check acts as an anti-join on the key):
select *
from Table1 t1
inner join Table2 t2 on t1.ID_Column = t2.ID_Column
left join Table3 t3 on t1.ID_Column = t3.ID_Column
where t2.column_name = column_value
and t3.ID_Column is null
order by t1.column_name;
LEFT JOIN returns all rows from the left table, matched to rows from the right table where the ON clause conditions
are met. Rows in which the ON clause is not met have NULL in all of the right table's columns. The syntax of a LEFT
JOIN is:
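A generic form (using the placeholder names from the earlier examples):
SELECT *
FROM table_1 AS t1
LEFT JOIN table_2 AS t2
    ON t1.column_name = t2.column_name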
RIGHT JOIN returns all rows from the right table, matched to rows from the left table where the ON clause
conditions are met. Rows in which the ON clause is not met have NULL in all of the left table's columns. The syntax of
a RIGHT JOIN is:
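A generic form (same placeholder names):
SELECT *
FROM table_1 AS t1
RIGHT JOIN table_2 AS t2
    ON t1.column_name = t2.column_name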
FULL JOIN combines LEFT JOIN and RIGHT JOIN. All rows are returned from both tables, regardless of whether the
conditions in the ON clause are met. Rows that do not satisfy the ON clause are returned with NULL in all of the
opposite table's columns (that is, for a row in the left table, all columns in the right table will contain NULL, and vice
versa). The syntax of a FULL JOIN is:
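A generic form (same placeholder names):
SELECT *
FROM table_1 AS t1
FULL JOIN table_2 AS t2
    ON t1.column_name = t2.column_name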
Examples
SELECT *
FROM @Animal As t1
LEFT JOIN @AnimalSound As t2 ON t1.AnimalId = t2.AnimalId;
SELECT *
FROM @Animal As t1
RIGHT JOIN @AnimalSound As t2 ON t1.AnimalId = t2.AnimalId;
SELECT *
FROM @Animal As t1
FULL JOIN @AnimalSound As t2 ON t1.AnimalId = t2.AnimalId;
Update the SomeSetting column of the Preferences table filtering by a predicate on the Users table as follows:
UPDATE p
SET p.SomeSetting = 1
FROM Users u
JOIN Preferences p ON u.UserId = p.UserId
WHERE u.AccountId = 1234
p is an alias for Preferences defined in the FROM clause of the statement. Only rows with a matching AccountId
from the Users table will be updated.
Update t
SET t.Column1=100
FROM Table1 t LEFT JOIN Table12 t2
ON t2.ID=t.ID
UPDATE t1
SET t1.field1 = t2.field2Sum
FROM table1 t1
INNER JOIN (select field3, sum(field2) as field2Sum
from table2
group by field3) as t2
on t2.field3 = t1.field3
This example uses aliases which makes queries easier to read when you have multiple tables involved. In this case
we are retrieving all rows from the parent table Purchase Orders and retrieving only the last (or most recent) child
row from the child table PurchaseOrderLineItems. This example assumes the child table uses incremental numeric
Id's.
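One way to write such a query, using OUTER APPLY (the table and column names are assumptions based on the description):
SELECT po.*,
       li.*
FROM PurchaseOrders AS po
OUTER APPLY (
    SELECT TOP 1 x.*
    FROM PurchaseOrderLineItems AS x
    WHERE x.PurchaseOrderId = po.Id
    ORDER BY x.Id DESC -- latest child row, assuming incremental Ids
) AS li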
Example:
/* Sample data. */
DECLARE @Animal table (
AnimalId Int IDENTITY,
Animal Varchar(20)
);
SELECT
*
FROM
@Animal
CROSS JOIN @AnimalSound;
Results:
Note that there are other ways that a CROSS JOIN can be applied. This is an "old style" join (deprecated since ANSI
SQL-92) with no condition, which results in a cross/Cartesian join:
SELECT *
FROM @Animal, @AnimalSound;
This syntax also works due to an "always true" join condition, but is not recommended and should be avoided, in
favor of explicit CROSS JOIN syntax, for the sake of readability.
SELECT *
FROM
@Animal
JOIN @AnimalSound
ON 1=1
ID Name Boss_ID
1 Bob 3
2 Jim 1
3 Sam 2
Each employee's Boss_ID maps to another employee's ID. To retrieve a list of employees with their respective boss'
name, the table can be joined on itself using this mapping. Note that joining a table in this manner requires the use
of an alias (Bosses in this case) on the second reference to the table to distinguish itself from the original table.
SELECT Employees.Name,
Bosses.Name AS Boss
FROM Employees
INNER JOIN Employees AS Bosses
ON Employees.Boss_ID = Bosses.ID
Name Boss
Bob Sam
Jim Bob
Sam Jim
Table People
PersonID FirstName
1 Alice
Table Scores
PersonID Subject Score
1 Math 100
2 Math 54
2 Science 98
Returns:
If you wanted to return all the people, with any applicable math scores, a common mistake is to write:
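A sketch of that mistaken query, assuming the People and Scores tables above:
SELECT p.FirstName, s.Subject, s.Score
FROM People p
LEFT JOIN Scores s
    ON s.PersonID = p.PersonID
WHERE s.Subject = 'Math'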
This would remove Eve from your results, in addition to removing Bob's science score, as Subject is NULL for her.
The correct syntax to remove non-Math records while retaining all individuals in the People table would be:
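A sketch of the corrected query, with the filter moved into the ON clause:
SELECT p.FirstName, s.Subject, s.Score
FROM People p
LEFT JOIN Scores s
    ON s.PersonID = p.PersonID
    AND s.Subject = 'Math'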
We can delete rows from the Preferences table, filtering by a predicate on the Users table as follows:
DELETE p
Here p is an alias for Preferences defined in the FROM clause of the statement and we only delete rows that have a
matching AccountId from the Users table.
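Spelled out in full, mirroring the earlier UPDATE example (the predicate value is an assumption), the statement might look like:
DELETE p
FROM Users u
JOIN Preferences p ON u.UserId = p.UserId
WHERE u.AccountId = 1234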
Imagine that you have a Company table with a column that contains an array of products (ProductList column), and
a function that parse these values and returns a set of products. You can select all rows from a Company table,
apply this function on a ProductList column and "join" generated results with parent Company row:
SELECT *
FROM Companies c
CROSS APPLY dbo.GetProductList( c.ProductList ) p
For each row, value of ProductList cell will be provided to the function, and the function will return those products as
a set of rows that can be joined with the parent row.
Section 34.2: Join table rows with JSON array stored in cell
CROSS APPLY enables you to "join" rows from a table with collection of JSON objects stored in a column.
Imagine that you have a Company table with a column that contains an array of products (ProductList column)
formatted as JSON array. OPENJSON table value function can parse these values and return the set of products. You
can select all rows from a Company table, parse JSON products with OPENJSON and "join" generated results with
parent Company row:
SELECT *
FROM Companies c
CROSS APPLY OPENJSON( c.ProductList )
WITH ( Id int, Title nvarchar(30), Price money)
For each row, value of ProductList cell will be provided to OPENJSON function that will transform JSON objects to
rows with the schema defined in WITH clause.
Imagine that you have a Product table with a column that contains an array of comma separated tags (e.g.
promo,sales,new). STRING_SPLIT and CROSS APPLY enable you to join product rows with their tags so you can filter
products by tags:
SELECT *
FROM Products p
CROSS APPLY STRING_SPLIT( p.Tags, ',' ) tags
WHERE tags.value = 'promo'
For each row, value of Tags cell will be provided to STRING_SPLIT function that will return tag values. Then you can
filter rows by these values.
The value is computed and stored in the computed column automatically when the other values are inserted.
This gives the run-time difference in minutes, which is very handy.
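A sketch of such a computed column (the table and column names are assumptions):
ALTER TABLE JobRuns
    ADD RunDurationMinutes AS DATEDIFF(MINUTE, StartTime, EndTime);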
WITH CTE_DatesTable
AS
(
SELECT CAST(@startdate as date) AS [date]
UNION ALL
SELECT DATEADD(dd, 1, [date])
FROM CTE_DatesTable
WHERE DATEADD(dd, 1, [date]) <= DateAdd(DAY, @numberDays-1, @startdate)
)
SELECT [date]
FROM CTE_DatesTable
OPTION (MAXRECURSION 0)
This example returns a single-column table of dates, starting with the date specified in the @startdate variable, and
returning the next @numberDays worth of dates.
GO
UNION ALL
SELECT
FirstName + ' ' + LastName AS FullName,
EmpLevel,
(SELECT FirstName + ' ' + LastName FROM Employees WHERE EmployeeID = cteReports.SupervisorID)
AS ManagerName
FROM cteReports
ORDER BY EmpLevel, SupervisorID
Output:
FullName EmpLevel ManagerName
Ken Sánchez 1 null
Keith Hall 2 Ken Sánchez
Fred Bloggs 2 Ken Sánchez
Žydre Klybe 2 Ken Sánchez
Joseph Walker 3 Keith Hall
Peter Miller 3 Fred Bloggs
Sam Jackson 3 Žydre Klybe
Chloe Samuels 3 Žydre Klybe
George Weasley 3 Žydre Klybe
Michael Kensington 4 Sam Jackson
WITH yearsAgo
(
myYear
)
AS
(
-- Base Case: This is where the recursion starts
SELECT DATEPART(year, GETDATE()) AS myYear
UNION ALL
-- Recursive Section: This is what we're doing with the recursive call
SELECT yearsAgo.myYear - 1
FROM yearsAgo
WHERE yearsAgo.myYear >= 2012
)
SELECT myYear FROM yearsAgo; -- A single SELECT, INSERT, UPDATE, or DELETE
myYear
2016
2015
2014
2013
2012
2011
WITH yearsAgo
(
myYear
)
AS
(
-- Base Case
SELECT DATEPART(year , GETDATE()) AS myYear
UNION ALL
-- Recursive Section
SELECT yearsAgo.myYear - 1
FROM yearsAgo
WHERE yearsAgo.myYear >= 2002
)
SELECT * FROM yearsAgo
OPTION (MAXRECURSION 10);
Msg 530, Level 16, State 1, Line 2
The statement terminated. The maximum recursion 10 has been exhausted before statement completion.
WITH EmployeesCTE AS
(
SELECT *, ROW_NUMBER()OVER(PARTITION BY ID ORDER BY ID) AS RowNumber
FROM Employees
)
DELETE FROM EmployeesCTE WHERE RowNumber > 1
Execution result :
With common table expressions, it is possible to create multiple queries using comma-separated AS statements. A
query can then reference any or all of those queries in many different ways, even joining them.
WITH RESULT AS
(
SELECT SALARY,
DENSE_RANK() OVER (ORDER BY SALARY DESC) AS DENSERANK
FROM EMPLOYEES
)
SELECT TOP 1 SALARY
FROM RESULT
WHERE DENSERANK = 1
To find the 2nd highest salary, replace the value 1 in the DENSERANK filter with 2; similarly, to find the 3rd highest salary, use 3 (in general, DENSERANK = N for the Nth highest).
This code creates a new table called MyNewTable and puts that data into it
What did you insert? Normally in databases you need to have one or more columns that you can use to uniquely
identify rows so we will assume that and make use of it.
Now assuming records in both tables are unique on Key1,Key2, we can use that to find and delete data out of the
source table
DELETE SourceTable
WHERE EXISTS (
SELECT * FROM TargetTable
WHERE TargetTable.Key1 = SourceTable.Key1
AND TargetTable.Key2 = SourceTable.Key2
);
This will only work correctly if Key1, Key2 are unique in both tables
Lastly, we don't want the job half done. If we wrap this up in a transaction then either all data will be moved, or
nothing will happen. This ensures we don't insert the data in then find ourselves unable to delete the data out of
the source.
BEGIN TRAN;
DELETE SourceTable
WHERE EXISTS (
SELECT * FROM TargetTable
WHERE TargetTable.Key1 = SourceTable.Key1
AND TargetTable.Key2 = SourceTable.Key2
);
COMMIT TRAN;
As database tables grow, it's often useful to limit the results of queries to a fixed number or percentage. This can be
achieved using SQL Server's TOP keyword or OFFSET FETCH clause.
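For example, a sketch using TOP (table name as in the example below):
SELECT TOP 50 *
FROM table_name
ORDER BY 1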
FETCH is generally more useful for pagination, but can be used as an alternative to TOP:
SELECT *
FROM table_name
ORDER BY 1
OFFSET 0 ROWS
FETCH NEXT 50 ROWS ONLY
WITH LastRestores AS
(
SELECT
DatabaseName = [d].[name] ,
[d].[create_date] ,
[d].[compatibility_level] ,
[d].[collation_name] ,
r.*,
RowNum = ROW_NUMBER() OVER (PARTITION BY d.Name ORDER BY r.[restore_date] DESC)
FROM master.sys.databases d
LEFT OUTER JOIN msdb.dbo.[restorehistory] r ON r.[destination_database_name] = d.Name
)
SELECT *
FROM [LastRestores]
WHERE [RowNum] = 1
This procedure can be used to find information on current SQL server sessions. Since it is a procedure, it's often
helpful to store the results into a temporary table or table variable so one can order, filter, and transform the
results as needed.
-- Create a variable table to hold the results of sp_who2 for querying purposes
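-- (Sketch: the column list below is an assumption matching sp_who2's usual result set.)
DECLARE @who2 TABLE
(
    SPID INT, [Status] VARCHAR(MAX), [Login] VARCHAR(MAX), HostName VARCHAR(MAX),
    BlkBy VARCHAR(MAX), DBName VARCHAR(MAX), Command VARCHAR(MAX), CPUTime INT,
    DiskIO INT, LastBatch VARCHAR(MAX), ProgramName VARCHAR(MAX), SPID2 INT, REQUESTID INT
);
INSERT INTO @who2 EXEC sp_who2;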
SELECT *
FROM @who2 w
WHERE 1=1
Examples:
EXEC sp_helpserver;
Below is the output of the above table. As you can see, the Id column value 1 is repeated three times.
Id Name
1 A
1 B
1 C
2 D
Id Name
1 B
Output :
Id Name
1 A
1 B
1 C
As you can see, SQL Server outputs all the rows that are tied on the ORDER BY column. Let's look at one more example
to understand this better.
Output:
Id Name
1 A
In summary, when we use the WITH TIES option, SQL Server outputs all the tied rows irrespective of the limit we impose.
Parameters:
1. character string. A string of Unicode data, up to 128 characters (sysname). If the input string is longer than 128
characters, the function returns NULL.
2. quote character. Optional. A single character to use as a delimiter. It can be a single quotation mark (') or a
backtick (`), a left or right brace, bracket, parenthesis, or angle bracket ({ } [ ] ( ) < >), or a double quotation
mark ("). Any other value returns NULL. The default is square brackets.
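For example:
SELECT QUOTENAME('my table')      -- returns [my table]
SELECT QUOTENAME('my table', '"') -- returns "my table"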
Parameters:
1. string expression. This is the string that would be searched. It can be a character or binary data type.
2. pattern. This is the sub string that would be replaced. It can be a character or binary data type. The pattern
argument cannot be an empty string.
3. replacement. This is the sub string that would replace the pattern sub string. It can be a character or binary
data.
Notes:
If string expression is not of type varchar(max) or nvarchar(max), the replace function truncates the return
value at 8,000 chars.
Return data type depends on input data types - returns nvarchar if one of the input values is nvarchar, or
varchar otherwise.
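For example:
SELECT REPLACE('Hello, World!', 'World', 'SQL Server')
-- returns 'Hello, SQL Server!'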
Parameters:
1. Character expression. The character expression can be of any data type that can be implicitly converted to
varchar or nvarchar, except for text or ntext.
2. Start index. A number (int or bigint) that specifies the start index of the requested substring. (Note: strings
in SQL Server are 1-based, meaning that the first character of the string is index 1). This number can be
less than 1. In this case, if the sum of the start index and the max length is greater than 0, the return string would be
a string starting from the first char of the character expression and with the length of (start index + max
length - 1). If it's less than 0, an empty string would be returned.
3. Max length. An integer number between 0 and bigint max value (9,223,372,036,854,775,807). If the max
length parameter is negative, an error will be raised.
If the max length + start index is more than the number of characters in the string, the entire string is returned.
If the start index is bigger than the number of characters in the string, an empty string is returned.
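For example:
SELECT SUBSTRING('Hello World', 7, 5) -- returns 'World'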
Splits a string expression using a character separator. Note that STRING_SPLIT() is a table-valued function and
therefore must be used within FROM clause.
Parameters:
Returns a single column table where each row contains a fragment of the string. The name of the columns is value,
and the datatype is nvarchar if any of the parameters is either nchar or nvarchar, otherwise varchar.
SELECT value FROM STRING_SPLIT('Lorem ipsum dolor sit amet.', ' ');
Result:
value
-----
Lorem
ipsum
dolor
sit
amet.
Remarks:
The STRING_SPLIT function is available only under compatibility level 130. If your database compatibility
level is lower than 130, SQL Server will not be able to find and execute STRING_SPLIT function. You can
change the compatibility level of a database using the following command:
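The command (substitute your database name):
ALTER DATABASE [YourDatabaseName] SET COMPATIBILITY_LEVEL = 130;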
Older versions of sql server does not have a built in split string function. There are many user defined functions
that handles the problem of splitting a string. You can read Aaron Bertrand's article Split strings the right way – or
the next best way for a comprehensive comparison of some of them.
Parameters:
1. character expression. The character expression can be of any data type that can be implicitly converted to
varchar or nvarchar, except for text or ntext
2. max length. An integer number between 0 and bigint max value (9,223,372,036,854,775,807).
If the max length parameter is negative, an error will be raised.
If the max length is more than the number of characters in the string, the entire string is returned.
Parameters:
1. character expression. The character expression can be of any data type that can be implicitly converted to
varchar or nvarchar, except for text or ntext
2. max length. An integer number between 0 and bigint max value (9,223,372,036,854,775,807). If the max
length parameter is negative, an error will be raised.
If the max length is more than the number of characters in the string, the entire string is returned.
Parameters:
The SOUNDEX function creates a four-character code that is based on how the character expression would sound
when spoken. The first character is the upper-case version of the first character of the parameter; the remaining three
characters are numbers representing the letters in the expression (except a, e, i, o, u, h, w and y, which are ignored).
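For example:
SELECT SOUNDEX('Smith'), SOUNDEX('Smyth') -- both return 'S530'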
Returns a NVARCHAR value formatted with the specified format and culture (if specified). This is primarily used for
converting date-time types to strings.
Parameters:
1. value. An expression of a supported data type to format. valid types are listed below.
2. format. An NVARCHAR format pattern. See Microsoft official documentation for standard and custom format
strings.
3. culture. Optional. nvarchar argument specifying a culture. The default value is the culture of the current
session.
DATE
SELECT
FORMAT ( @d, 'd', 'en-US' ) AS 'US English Result' -- Returns '7/31/2016'
,FORMAT ( @d, 'd', 'en-gb' ) AS 'Great Britain English Result' -- Returns '31/07/2016'
,FORMAT ( @d, 'd', 'de-de' ) AS 'German Result' -- Returns '31.07.2016'
,FORMAT ( @d, 'd', 'zh-cn' ) AS 'Simplified Chinese (PRC) Result' -- Returns '2016/7/31'
,FORMAT ( @d, 'D', 'en-US' ) AS 'US English Result' -- Returns 'Sunday, July 31, 2016'
,FORMAT ( @d, 'D', 'en-gb' ) AS 'Great Britain English Result' -- Returns '31 July 2016'
,FORMAT ( @d, 'D', 'de-de' ) AS 'German Result' -- Returns 'Sonntag, 31. Juli 2016'
CURRENCY
PERCENTAGE
NUMBER
Important Notes:
See also Date & Time Formatting using FORMAT documentation example.
Escapes special characters in texts and returns text (nvarchar(max)) with escaped characters.
Parameters:
2. type. Escaping rules that will be applied. Currently the only supported value is 'json'.
SELECT STRING_ESCAPE('\ /
\\ " ', 'json') -- returns '\\\t\/\n\\\\\t\"\t'
If the string is Unicode and the leftmost character is not ASCII but representable in the current collation, a value
greater than 127 can be returned:
If the string is Unicode and the leftmost character cannot be represented in the current collation, the int value of 63
is returned: (which represents question mark in ASCII):
This can be used to introduce new line/line feed CHAR(10), carriage returns CHAR(13), etc. See AsciiTable.com for
reference.
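For example:
SELECT 'First line' + CHAR(13) + CHAR(10) + 'Second line'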
If the argument value is not between 0 and 255, the CHAR function returns NULL.
The return data type of the CHAR function is char(1)
Returns a string that is the result of two or more strings joined together. CONCAT accepts two or more arguments.
SELECT CONCAT('This', ' is', ' my', ' string') -- returns 'This is my string'
Note: Unlike concatenating strings using the string concatenation operator (+), when passing a null value to the
concat function it will implicitly convert it to an empty string:
SELECT CONCAT('This', NULL, ' is', ' my', ' string'), -- returns 'This is my string'
'This' + NULL + ' is' + ' my' + ' string' -- returns NULL.
SELECT CONCAT('This', ' is my ', 3, 'rd string') -- returns 'This is my 3rd string'
Non-string type variables will also be converted to string format; there is no need to manually convert or cast them to string:
Older versions do not support CONCAT function and must use the string concatenation operator (+) instead. Non-
string types must be cast or converted to string types in order to concatenate them this way.
SELECT 'This is the number ' + CAST(42 AS VARCHAR(5)) --returns 'This is the number 42'
Parameters:
1. character expression. Any expression of character or binary data that can be implicitly converted to varchar,
except text, ntext and image.
Parameters:
1. character expression. Any expression of character or binary data that can be implicitly converted to varchar,
except text, ntext and image.
Parameters:
1. pattern. A character expression the contains the sequence to be found. Limited to A maximum length of
8000 chars. Wildcards (%, _) can be used in the pattern. If the pattern does not start with a wildcard, it may
only match whatever is in the beginning of the expression. If it doesn't end with a wildcard, it may only match
whatever is in the end of the expression.
Parameters:
1. integer expression. Any integer expression, up to 8000. If negative, NULL is returned. If 0, an empty string is
returned. (To return a string longer than 8000 spaces, use REPLICATE.)
Parameters:
1. character expression 1.
2. character expression 2.
The integer returned is the number of chars in the soundex values of the parameters that are the same, so 4 means
that the expressions are very similar and 0 means that they are very different.
If the length including trailing spaces is desired there are several techniques to achieve this, although each has its
drawbacks. One technique is to append a single character to the string, and then use the LEN minus one:
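A sketch of this technique (the sample value is an assumption):
DECLARE @str VARCHAR(100) = 'Hello   ';
SELECT LEN(@str + 'x') - 1; -- returns 8, counting the three trailing spaces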
The drawback to this is if the type of the string variable or column is of the maximum length, the append of the
extra character is discarded, and the resulting length will still not count trailing spaces. To address that, the
following modified version solves the problem, and gives the correct results in all cases at the expense of a small
amount of additional execution time, and because of this (correct results, including with surrogate pairs, and
reasonable execution speed) appears to be the best technique to use:
It's important to note though that DATALENGTH returns the length in bytes of the string in memory. This will be
different for varchar vs. nvarchar.
You can adjust for this by dividing the datalength of the string by the datalength of a single character (which must
be of the same type). The example below does this, and also handles the case where the target string happens to
be empty, thus avoiding a divide by zero.
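A sketch of this approach (the sample value is an assumption):
DECLARE @str NVARCHAR(100) = N'Hello   ';
SELECT CASE
           WHEN DATALENGTH(@str) = 0 THEN 0
           ELSE DATALENGTH(@str) / DATALENGTH(LEFT(@str, 1))
       END; -- returns 8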
Even this, though, has a problem in SQL Server 2012 and above. It will produce incorrect results when the string
contains surrogate pairs (some characters can occupy more bytes than other characters in the same string).
Another technique is to use REPLACE to convert spaces to a non-space character, and take the LEN of the result. This
gives correct results in all cases, but has very poor execution speed with long strings.
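A sketch of this technique (the sample value is an assumption):
DECLARE @str VARCHAR(100) = 'Hello   ';
SELECT LEN(REPLACE(@str, ' ', '.')); -- returns 8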
Parameters:
1. Character expression. Any expression of character or binary data that can be implicitly converted to varchar.
Parameters:
1. Character expression. Any expression of character or binary data that can be implicitly converted to varchar.
Parameters:
Parameters:
1. integer expression. Any integer expression that is a positive number between 0 and 65535, or if the collation
of the database supports supplementary character (CS) flag, the supported range is between 0 to 1114111. If
the integer expression does not fall inside this range, null is returned.
Parameters:
Parameters:
1. string expression. Any string or binary data that can be implicitly converted to varchar.
Parameters:
Note: If string expression is not of type varchar(max) or nvarchar(max), the return value will not exceed 8000
chars. Replicate will stop before adding the string that will cause the return value to exceed that limit:
Parameters list:
If the string to search is varchar(max), nvarchar(max) or varbinary(max), the CHARINDEX function will return a
bigint value. Otherwise, it will return an int.
Returns the item at the specified index from a list of values. If index exceeds the bounds of values then NULL is
returned.
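For example (the list of values is an assumption matching the result shown below):
SELECT CHOOSE(1, 'apples', 'pears', 'oranges') AS chosen_result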
Parameters:
chosen_result
-------------
apples
Returns one of two values, depending on whether a given Boolean expression evaluates to true or false.
Parameters:
SELECT IIF (42 > 23, 'I knew that!', 'That is not true.') AS iif_result
iif_result
------------
I knew that!
Version < SQL Server 2012
SELECT CASE WHEN 42 > 23 THEN 'I knew that!' ELSE 'That is not true.' END AS iif_result
iif_result
------------
I knew that!
We have table as shown in figure that will be used to perform different aggregate functions. The table name is
Marksheet.
The sum function doesn't consider rows with NULL value in the field used as parameter
We have table as shown in figure that will be used to perform different aggregate functions. The table name is
Marksheet.
The average function doesn't consider rows with NULL value in the field used as parameter
We have table as shown in figure that will be used to perform different aggregate functions. The table name is
Marksheet.
We have table as shown in figure that will be used to perform different aggregate functions. The table name is
Marksheet.
We have table as shown in figure that will be used to perform different aggregate functions. The table name is
Marksheet.
The count function doesn't consider rows with NULL value in the field used as parameter. Usually the count
parameter is * (all fields) so only if all fields of row are NULLs this row will not be considered
NOTE
The function COUNT(*) returns the number of rows in a table. This value can also be obtained by using a constant
non-null expression that contains no column references, such as COUNT(1).
Example
ReportName ReportPrice
Test 10.00 $
Test 10.00 $
Test 10.00 $
Test 2 11.00 $
Test 10.00 $
Test 3 14.00 $
Test 3 14.00 $
Test 4 100.00 $
SELECT
ReportName AS [REPORT NAME],
COUNT(ReportName) AS COUNT
FROM
REPORTS
GROUP BY
ReportName
REPORT NAME COUNT
Test 4
Test 2 1
Test 3 2
Test 4 1
select subjectid, stuff(( select concat( ',', studentname) from #yourstudent y where y.subjectid =
u.subjectid for xml path('')),1,1, '')
from #yourstudent u
group by subjectid
Eg :
Select Studentid,Name,Subject,Marks,
RANK() over(partition by name order by Marks desc)Rank
From Exam
order by name,subject
SELECT TradeDate, AVG(Px) OVER (ORDER BY TradeDate ROWS BETWEEN 63 PRECEDING AND 63 FOLLOWING) AS
PxMovingAverage
FROM HistoricalPrices
Note that, because it will take up to 63 rows before and after each returned row, at the beginning and end of the
TradeDate range it will not be centered: When it reaches the largest TradeDate it will only be able to find 63
preceding values to include in the average.
select *
from (
select
*,
row_number() over (order by crdate desc) as my_ranking
from sys.sysobjects
) g
where my_ranking=1
This same technique can be used to return a single row from any dataset with potentially duplicate values.
SELECT
value_column1,
( SELECT
AVG(value_column1) AS moving_average
FROM Table1 T2
WHERE ( SELECT
COUNT(*)
FROM Table1 T3
WHERE date_column1 BETWEEN T2.date_column1 AND T1.date_column1
) BETWEEN 1 AND 30
) as MovingAvg
FROM Table1 T1
For demonstration we will use a table Books in a Bookstore’s database. We assume that the table is quite de-
normalised and has following columns
Table: Books
-----------------------------
BookId (Primary Key Column)
Name
Language
NumberOfPages
EditionNumber
YearOfPrint
YearBoughtIntoStore
ISBN
AuthorName
Price
NumberOfUnitsSold
GO
Now if we need to query on the database and figure out number of books in English, Russian, German, Hindi, Latin
languages bought into the bookstore every year and present our output in a small report format, we can use PIVOT
query like this
SELECT * FROM
(
SELECT YearBoughtIntoStore AS [Year Bought],[Language], NumberOfBooks
FROM BookList
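The PIVOT part of the query is cut off above; a hedged completion of the whole statement (the aggregation and the BookList/NumberOfBooks names come from the visible fragment and presumably from a part of the example not shown) might look like:
SELECT * FROM
(
    SELECT YearBoughtIntoStore AS [Year Bought], [Language], NumberOfBooks
    FROM BookList
) AS SourceData
PIVOT
(
    SUM(NumberOfBooks)
    FOR [Language] IN ([English], [Russian], [German], [Hindi], [Latin])
) AS PivotReport
ORDER BY [Year Bought]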
Special case is when we do not have a full list of the languages, so we'll use dynamic SQL like below
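A sketch of such a dynamic PIVOT (built on the same assumed BookList source as above):
DECLARE @cols  NVARCHAR(MAX),
        @query NVARCHAR(MAX);

-- Build the IN (...) column list from the languages actually present
SELECT @cols = STUFF((SELECT ',' + QUOTENAME([Language])
                      FROM BookList
                      GROUP BY [Language]
                      FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

SET @query = N'SELECT * FROM
               (
                   SELECT YearBoughtIntoStore AS [Year Bought], [Language], NumberOfBooks
                   FROM BookList
               ) AS SourceData
               PIVOT
               (
                   SUM(NumberOfBooks) FOR [Language] IN (' + @cols + N')
               ) AS PivotReport;';

EXEC sp_executesql @query;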
First, suppose we have a table which keeps daily records of all items' prices.
+========+=========+=======+
| item | weekday | price |
+========+=========+=======+
| Item1 | Mon | 110 |
+--------+---------+-------+
| Item2 | Mon | 230 |
+--------+---------+-------+
| Item3 | Mon | 150 |
+--------+---------+-------+
| Item1 | Tue | 115 |
+--------+---------+-------+
| Item2 | Tue | 231 |
+--------+---------+-------+
| Item3 | Tue | 162 |
+--------+---------+-------+
| . . . |
+--------+---------+-------+
| Item2 | Wed | 220 |
+--------+---------+-------+
In order to perform aggregation which is to find the average price per item for each week day, we are going to use
the relational operator PIVOT to rotate the column weekday of table-valued expression into aggregated row values
as below:
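A sketch of such a PIVOT query (the source table name is an assumption):
SELECT *
FROM
(
    SELECT item, weekday, price
    FROM ItemPrices
) AS SourceData
PIVOT
(
    AVG(price) FOR weekday IN ([Mon], [Tue], [Wed], [Thu], [Fri])
) AS PivotTable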
Result:
+--------+------+------+------+------+------+
| item | Mon | Tue | Wed | Thu | Fri |
+--------+------+------+------+------+------+
| Item1 | 116 | 112 | 117 | 109 | 120 |
| Item2 | 227 | 233 | 230 | 228 | 210 |
| Item3 | 145 | 158 | 152 | 145 | 125 |
+--------+------+------+------+------+------+
Lastly, in order to perform the reverse operation of PIVOT, we can use the relational operator UNPIVOT to rotate
columns into rows as below:
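A sketch of such an UNPIVOT query (PivotedPrices stands for the pivoted result above; the name is an assumption):
SELECT item, price, weekday
FROM PivotedPrices
UNPIVOT
(
    price FOR weekday IN ([Mon], [Tue], [Wed], [Thu], [Fri])
) AS UnpivotTable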
Result:
+=======+========+=========+
| item | price | weekday |
+=======+========+=========+
| Item1 | 116 | Mon |
+-------+--------+---------+
| Item1 | 112 | Tue |
+-------+--------+---------+
| Item1 | 117 | Wed |
+-------+--------+---------+
| Item1 | 109 | Thu |
+-------+--------+---------+
| Item1 | 120 | Fri |
+-------+--------+---------+
| Item2 | 227 | Mon |
+-------+--------+---------+
| Item2 | 233 | Tue |
+-------+--------+---------+
| Item2 | 230 | Wed |
+-------+--------+---------+
| Item2 | 228 | Thu |
+-------+--------+---------+
| Item2 | 210 | Fri |
This can be easily done with a GROUP BY, but let's assume we want to 'rotate' our result table in a way that for each
Product Id we have a column.
Since our 'new' columns are numbers (in the source table), we need to use square brackets []
100 145
45 18
DECLARE
@cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX);
FROM sys.partition_schemes ps
INNER JOIN sys.destination_data_spaces dds
ON dds.partition_scheme_id = ps.data_space_id
INNER JOIN sys.filegroups fg
ON dds.data_space_id = fg.data_space_id
INNER JOIN sys.partition_functions f
ON f.function_id = ps.function_id
INNER JOIN sys.partition_range_values prv
ON f.function_id = prv.function_id
AND dds.destination_id = prv.boundary_id
Partitioning data enables you to manage and access subsets of your data quickly and efficiently while
maintaining the integrity of the entire data collection.
When you call the following query the data is not physically moved; only the metadata about the location of the
data changes.
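A sketch of such a partition switch (the table names and partition number are assumptions):
ALTER TABLE dbo.Orders
    SWITCH PARTITION 1 TO dbo.OrdersArchive PARTITION 1;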
The tables must have the same columns with the same data types and NULL settings, they need to be in the same
file group and the new target table must be empty. See the page link above for more info on switching partitions.
Just un-comment the WHERE clause and replace table_name with the actual table name to view the details of the desired object.
You can execute a procedure with a few different syntaxes. First, you can use EXECUTE or EXEC
Additionally, you can omit the EXEC command. Also, you don't have to specify what parameter you are passing in,
as you pass in all parameters.
When you want to specify the input parameters in a different order than how they are declared in the procedure
you can specify the parameter name and assign values. For example
SELECT
Param1 = @Param1,
Param2 = @Param2
END
the normal order to execute this procedure is to specify the value for @Param1 first and then @Param2 second. So
it will look something like this
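For example (the procedure name is an assumption):
EXECUTE dbo.MyStoredProcedure 'Value for Param1', 'Value for Param2'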
But it's also possible that you can use the following
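For example (same assumed procedure name):
EXECUTE dbo.MyStoredProcedure @Param2 = 'Value for Param2', @Param1 = 'Value for Param1'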
Here you are specifying the value for @Param2 first and @Param1 second. This means you do not have to keep
the same order as declared in the procedure; you can use any order you wish, but you need to
name the parameter you are assigning each value to.
You can also create a procedure with the prefix sp_. These procedures, like all system stored procedures, can be
executed without specifying the database because of the default behavior of SQL Server. When you execute a
stored procedure that starts with "sp_", SQL Server looks for the procedure in the master database first. If the
procedure is not found in master, it looks in the active database. If you have a stored procedure that you want to
access from all your databases, create it in master and use a name that includes the "sp_" prefix.
Use Master
Creates a stored procedure that checks whether the values passed to it are not null or empty
and then performs an insert into the Employee table.
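A sketch of such a procedure (the column names and data types of the Employee table are assumptions; the parameters match the EXECUTE call below):
CREATE PROCEDURE spSetEmployeeDetails
    @ID     INT,
    @Name   NVARCHAR(100),
    @Gender NVARCHAR(10),
    @DeptId INT
AS
BEGIN
    IF (@Name IS NOT NULL AND LTRIM(RTRIM(@Name)) <> ''
        AND @Gender IS NOT NULL AND LTRIM(RTRIM(@Gender)) <> '')
    BEGIN
        INSERT INTO Employee (ID, Name, Gender, DeptId)
        VALUES (@ID, @Name, @Gender, @DeptId);
    END
END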
EXECUTE spSetEmployeeDetails
@ID = 1,
@Name = 'Subin Nepal',
@Gender = 'Male',
@DeptId = 182666
In the above SQL query, we can see that we can reuse the query by defining values for @table_name, @col_name,
and @col_value at run time. The query is generated at runtime and executed. This is a technique in which we can
build whole scripts as a string in a variable and execute it. We can create more complex queries using dynamic SQL.
Output
select
o.name,
row_number() over (order by o.name) as rn
into
#systables
from
sys.objects as o
where
o.type = 'S'
Next we declare some variables to control the looping and store the table name in this example
declare
@rn int = 1,
@maxRn int = (
select
max(rn)
from
#systables as s
)
declare @tablename sysname
Now we can loop using a simple WHILE. We increment @rn in the SELECT statement, but this could also have been a
separate statement (for example SET @rn = @rn + 1); it will depend on your requirements. We also use the value of @rn
before it is incremented to select a single record from #systables. Lastly we print the table name.
while @rn <= @maxRn
begin
    select
        @tablename = name,
        @rn = @rn + 1
    from
        #systables as s
    where
        s.rn = @rn
    print @tablename
end
RETURN;
END
GO
The ROUTINE_NAME, ROUTINE_SCHEMA and ROUTINE_DEFINITION columns are generally the most useful.
Note that this version has an advantage over selecting from sys.objects since it includes the additional columns
is_auto_executed, is_execution_replicated, is_repl_serializable, and skips_repl_constraints.
Note that the output contains many columns that will never relate to a stored procedure.
The next set of queries will return all Stored Procedures in the database that include the string 'SearchTerm':
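The queries themselves are not shown here; sketches of the two usual approaches (INFORMATION_SCHEMA.ROUTINES, and sys.procedures joined to sys.sql_modules) are:

SELECT ROUTINE_SCHEMA, ROUTINE_NAME
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE = 'PROCEDURE'
  AND ROUTINE_DEFINITION LIKE '%SearchTerm%';

SELECT s.name AS schema_name, p.name AS procedure_name
FROM sys.procedures p
JOIN sys.schemas s ON p.schema_id = s.schema_id
JOIN sys.sql_modules m ON p.object_id = m.object_id
WHERE m.definition LIKE '%SearchTerm%';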
Note: This is a catalog view and is available in SQL Server 2005+ versions.
Method 3: To see just database names you can use undocumented sp_MSForEachDB
Method 4: The stored procedure below returns the database size along with the database name, owner, status, etc. for each database on the
server.
EXEC sp_helpdb
Method 5: Similarly, the stored procedure below returns the database name, database size and remarks.
EXEC sp_databases
USE YourDatabaseName
SELECT COUNT(*) from INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
Following is another way this can be done for all user tables with SQL Server 2008+. The reference is here.
FROM Master.SYS.SYSALTFILES SF
Join Master.SYS.Databases d on sf.fileid = d.database_id
Order by d.name
You can do this using the sys.dm_db_persisted_sku_features system view, like so:
xp_logininfo 'DOMAIN\user'
SET NOCOUNT ON
BEGIN
SET @ColumnName = ''
SET @TableName =
(
SELECT MIN(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME))
FROM INFORMATION_SCHEMA.TABLES
BEGIN
SET @ColumnName =
(
SELECT MIN(QUOTENAME(COLUMN_NAME))
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = PARSENAME(@TableName, 2)
AND TABLE_NAME = PARSENAME(@TableName, 1)
AND DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar', 'int', 'decimal')
AND QUOTENAME(COLUMN_NAME) > @ColumnName
)
BEGIN
INSERT INTO #Results
EXEC
(
'SELECT ''' + @TableName + '.' + @ColumnName + ''', LEFT(' + @ColumnName + ',
3630) FROM ' + @TableName + ' (NOLOCK) ' +
' WHERE ' + @ColumnName + ' LIKE ' + @SearchStr2
)
END
END
END
CASE len(run_duration)
WHEN 1 THEN cast('00:00:0'
+ cast(run_duration as char) as char (8))
WHEN 2 THEN cast('00:00:'
+ cast(run_duration as char) as char (8))
WHEN 3 THEN cast('00:0'
+ Left(right(run_duration,3),1)
+':' + right(run_duration,2) as char (8))
WHEN 4 THEN cast('00:'
+ Left(right(run_duration,4),2)
+':' + right(run_duration,2) as char (8))
WHEN 5 THEN cast('0'
+ Left(right(run_duration,5),1)
UNION
CASE len(run_duration)
WHEN 1 THEN cast('00:00:0'
+ cast(run_duration as char) as char (8))
WHEN 2 THEN cast('00:00:'
+ cast(run_duration as char) as char (8))
WHEN 3 THEN cast('00:0'
+ Left(right(run_duration,3),1)
+':' + right(run_duration,2) as char (8))
WHEN 4 THEN cast('00:'
+ Left(right(run_duration,4),2)
+':' + right(run_duration,2) as char (8))
WHEN 5 THEN cast('0'
+ Left(right(run_duration,5),1)
+':' + Left(right(run_duration,4),2)
+':' + right(run_duration,2) as char (8))
WHEN 6 THEN cast(Left(right(run_duration,6),2)
+':' + Left(right(run_duration,4),2)
+':' + right(run_duration,2) as char (8))
END as 'Max Duration',
CASE(dbo.sysschedules.freq_subday_interval)
WHEN 0 THEN 'Once'
ELSE cast('Every '
+ right(dbo.sysschedules.freq_subday_interval,2)
+ ' '
+ CASE(dbo.sysschedules.freq_subday_type)
WHEN 1 THEN 'Once'
WHEN 4 THEN 'Minutes'
WHEN 8 THEN 'Hours'
END as char(16))
END as 'Subday Frequency'
FROM dbo.sysjobs
LEFT OUTER JOIN dbo.sysjobschedules ON dbo.sysjobs.job_id = dbo.sysjobschedules.job_id
INNER JOIN dbo.sysschedules ON dbo.sysjobschedules.schedule_id = dbo.sysschedules.schedule_id
LEFT OUTER JOIN (SELECT job_id, max(run_duration) AS run_duration
FROM dbo.sysjobhistory
GROUP BY job_id) Q1
ON dbo.sysjobs.job_id = Q1.job_id
WHERE Next_run_time <> 0
SELECT
c.name AS ColName,
t.name AS TableName
FROM
sys.columns c
JOIN sys.tables t ON c.object_id = t.object_id
WHERE
c.name LIKE '%MyName%'
To get the list of all restore operations performed on the current database instance:
SELECT
[d].[name] AS database_name,
[r].restore_date AS last_restore_date,
[r].[user_name],
[bs].[backup_finish_date] AS backup_creation_date,
[bmf].[physical_device_name] AS [backup_file_used_for_restore]
FROM master.sys.databases [d]
LEFT OUTER JOIN msdb.dbo.[restorehistory] r ON r.[destination_database_name] = d.Name
INNER JOIN msdb.dbo.backupset [bs] ON [r].[backup_set_id] = [bs].[backup_set_id]
Example:
Result:
+-----+
|Value|
+-----+
|A |
+-----+
|B |
+-----+
|C |
+-----+
string:
Is an expression of any character type (e.g. nvarchar, varchar, nchar or char).
separator:
Is a single character expression of any character type (e.g. nvarchar(1), varchar(1), nchar(1) or char(1)) that
is used as the separator for concatenated strings.
Example:
Select Value
From STRING_SPLIT('a|b|c','|')
In the above example, the string 'a|b|c' is split using the pipe character '|' as the separator.
Result :
+-----+
|Value|
+-----+
|a |
+-----+
|b |
+-----+
|c |
+-----+
SELECT value
FROM STRING_SPLIT('',',')
Result :
+-----+
|Value|
+-----+
1 | |
+-----+
SELECT value
FROM STRING_SPLIT('',',')
WHERE LTRIM(RTRIM(value))<>''
Select @text
Column names are required if the table you are inserting into contains a column with the IDENTITY attribute.
Note: if more than one inserted row has the same value for the primary key column (in this case ssn), the above
statement will fail, as primary key values must be unique.
In this example we have parent Company table with CompanyId primary key, and child Employee table that has id
of the company where this employee works.
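The original CREATE TABLE statements are not shown; a minimal sketch of such a parent/child pair with the foreign key might be:

CREATE TABLE dbo.Company
(
    CompanyId INT NOT NULL PRIMARY KEY,
    CompanyName NVARCHAR(100) NOT NULL
);

CREATE TABLE dbo.Employee
(
    EmployeeId INT NOT NULL PRIMARY KEY,
    EmployeeName NVARCHAR(100) NOT NULL,
    CompanyId INT NOT NULL
        FOREIGN KEY REFERENCES dbo.Company (CompanyId)
);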
The foreign key reference ensures that values inserted into the Employee.CompanyId column must also exist in the
Company.CompanyId column. Also, nobody can delete a company from the Company table if there is at least one
employee with a matching CompanyId in the child table.
FOREIGN KEY relationship ensures that rows in two tables cannot be "unlinked".
Msg 547, Level 16, State 0, Line 12 The INSERT statement conflicted with the FOREIGN KEY constraint
"FK__Employee__Compan__1EE485AA". The conflict occurred in database "MyDb", table "dbo.Company", column
'CompanyId'. The statement has been terminated.
Also, we cannot delete a parent row from the Company table as long as there is at least one child row in the Employee table that
references it.
Msg 547, Level 16, State 0, Line 14 The DELETE statement conflicted with the REFERENCE constraint
"FK__Employee__Compan__1EE485AA". The conflict occurred in database "MyDb", table "dbo.Employee", column
'CompanyId'. The statement has been terminated.
select name,
OBJECT_NAME(referenced_object_id) as [parent table],
OBJECT_NAME(parent_object_id) as [child table],
delete_referential_action_desc,
update_referential_action_desc
from sys.foreign_keys
SELECT SCOPE_IDENTITY();
This will return the most recently added identity value produced on the same connection, within the current scope.
In this case, 1, for the first row in the dbo.person table.
SELECT @@IDENTITY;
This will return the most recently-added identity on the same connection, regardless of scope. In this case,
whatever the current value of the identity column on logging_table is, assuming no other activity is occurring on the
instance of SQL Server and no other triggers fire from this insert.
This will select the most recently-added identity value on the selected table, regardless of connection or scope.
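The statement being described here is presumably IDENT_CURRENT; a minimal sketch, reusing the dbo.person table mentioned above:

SELECT IDENT_CURRENT('dbo.person');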
Results
CustomerID CustomerName
10001 Jerry
10002 Gorge
Results
CustomerID CustomerName
10001 Jerry
10003 George
Sample :
This drops the index and recreates it, removing fragmentation, reclaiming disk space and reordering the index pages.
Alternatively, reorganizing an index uses minimal system resources and defragments the leaf level of clustered and nonclustered indexes on
tables and views by physically reordering the leaf-level pages to match the logical, left-to-right order of the leaf
nodes, as sketched below.
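Minimal sketches of the two operations just described, assuming an index named IX_Product_Name on dbo.Product:

-- drops and recreates the index: removes fragmentation and reclaims disk space
ALTER INDEX IX_Product_Name ON dbo.Product REBUILD;

-- lighter-weight alternative: defragments only the leaf level
ALTER INDEX IX_Product_Name ON dbo.Product REORGANIZE;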
USE Adventureworks
EXEC sp_SQLskills_SQL2012_helpindex 'dbo.Product'
Alternatively, Tibor Karaszi has a stored procedure (found here). The latter will show information on index usage too,
and optionally provide a list of index suggestions. Usage:
USE Adventureworks
EXEC sp_indexinfo 'dbo.Product'
USE AdventureWorks2012;
GO
CREATE UNIQUE INDEX ui_ukJobCand ON HumanResources.JobCandidate(JobCandidateID);
CREATE FULLTEXT CATALOG ft AS DEFAULT;
CREATE FULLTEXT INDEX ON HumanResources.JobCandidate(Resume)
KEY INDEX ui_ukJobCand
WITH STOPLIST = SYSTEM;
GO
https://www.simple-talk.com/sql/learn-sql-server/understanding-full-text-indexing-in-sql-server/
https://msdn.microsoft.com/en-us/library/cc879306.aspx
https://msdn.microsoft.com/en-us/library/ms142571.aspx
SELECT candidate_name, SSN
FROM candidates
WHERE CONTAINS(candidate_resume, '"SQL Server"') AND candidate_division = 'DBA';
It is usually bound to a table and fires automatically. You cannot explicitly call any trigger.
DML Triggers provides access to inserted and deleted tables that holds information about the data that was / will
be affected by the insert, update or delete statement that fired the trigger.
Note that DML triggers are statement based, not row based. This means that if the statement affected more than
one row, the inserted or deleted tables will contain more than one row.
Examples:
GO
GO
GO
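The trigger bodies themselves are not reproduced here; a minimal sketch of one such audit trigger, assuming tblSomething has an Id column and tblAudit stores free-text audit messages, might be:

CREATE TRIGGER dbo.trg_tblSomething_Insert
ON dbo.tblSomething
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- the inserted pseudo-table can contain more than one row
    INSERT INTO dbo.tblAudit (AuditMessage)
    SELECT 'Inserted row with Id = ' + CAST(i.Id AS NVARCHAR(20))
    FROM inserted AS i;
END
GO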
All the examples above will add records to tblAudit whenever a record is added, deleted or updated in
tblSomething.
DDL Triggers are fired in response to Data Definition Language (DDL) events. These events primarily correspond to
Transact-SQL statements that start with the keywords CREATE, ALTER and DROP.
DML Triggers are fired in response to Data Manipulation Language (DML) events. These events corresponds to
Transact-SQL statements that start with the keywords INSERT, UPDATE and DELETE.
2. INSTEAD OF triggers
-- here we are creating our cursor, as a local cursor and only allowing
-- forward operations
DECLARE @orderId INT

DECLARE rowCursor CURSOR LOCAL FAST_FORWARD FOR
    -- this is the query that we want to loop through record by record
    SELECT [OrderId]
    FROM [dbo].[Orders]

-- the cursor must be opened before rows can be fetched
OPEN rowCursor

-- now we will initialize the cursor by pulling the first row of data, in this example
-- the [OrderId] column, and storing the value into a variable called @orderId
FETCH NEXT FROM rowCursor INTO @orderId
-- start our loop and keep going until we have no more records to loop through
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @orderId
-- this is important, as it tells SQL Server to get the next record and store
-- the [OrderId] column value into the @orderId variable
FETCH NEXT FROM rowCursor INTO @orderId
END

CLOSE rowCursor
DEALLOCATE rowCursor
/* @@FETCH_STATUS global variable will be 1 / true until there are no more rows to fetch */
WHILE @@FETCH_STATUS = 0
BEGIN
/* Write operations to perform in a loop here. Simple SELECT used for example */
SELECT Id, Val
FROM @test_table
WHERE Id = @myId;
/* Set variable(s) to the next value returned from iterator; this is needed otherwise the
cursor will loop infinitely. */
FETCH NEXT FROM myCursor INTO @myId;
END
/* After all is done, clean up */
CLOSE myCursor;
DEALLOCATE myCursor;
Results from SSMS. Note that these are all separate queries, they are in no way unified. Notice how the query
engine processes each iteration one by one instead of as a set.
Id Val
1 Foo
(1 row(s) affected)
Id Val
2 Bar
(1 row(s) affected)
Id Val
3 Baz
(1 row(s) affected)
This isolation level is the 2nd most permissive. It prevents dirty reads. The behavior of READ COMMITTED depends on
the setting of the READ_COMMITTED_SNAPSHOT:
If set to OFF (the default setting) the transaction uses shared locks to prevent other transactions from
modifying rows used by the current transaction, as well as block the current transaction from reading rows
modified by other transactions.
If set to ON, the transaction uses row versioning to provide statement-level read consistency; the READCOMMITTEDLOCK
table hint can be used to request shared locking instead of row versioning for transactions running in READ COMMITTED mode.
This behavior can be replicated by using 2 separate queries: one to open a transaction and write some data to a
table without committing, the other to select the data to be written (but not yet committed) with this isolation level.
Returns:
col1 col2
----------- ---------------------------------------
99 Normal transaction
COMMIT TRANSACTION;
DROP TABLE dbo.demo;
GO
This is the most permissive isolation level, in that it does not cause any locks at all. It specifies that statements can
read all rows, including rows that have been written in transactions but not yet committed (i.e., they are still in
transaction). This isolation level can be subject to "dirty reads".
This transaction isolation level is slightly less permissive than READ COMMITTED, in that shared locks are placed on all
data read by each statement in the transaction and are held until the transaction completes, as opposed to
being released after each statement.
Note: Use this option only when necessary, as it is more likely to cause database performance degradation as well
as deadlocks than READ COMMITTED.
Specifies that data read by any statement in a transaction will be the transactionally consistent version of the data
that existed at the start of the transaction, i.e., it will only read data that has been committed prior to the
transaction starting.
SNAPSHOT transactions do not request or cause any locks on the data that is being read, as it is only reading the
version (or snapshot) of the data that existed at the time the transaction began.
A transaction running at the SNAPSHOT isolation level can see its own data changes while it is running. For example, a
transaction could update some rows and then read the updated rows, but that change will only be visible to the
current transaction until it is committed.
Note: The ALLOW_SNAPSHOT_ISOLATION database option must be set to ON before the SNAPSHOT isolation level can
be used.
This isolation level is the most restrictive. It requests range locks on the range of key values that are read by each
statement in the transaction. This also means that INSERT statements from other transactions will be blocked if the
rows to be inserted are in the range locked by the current transaction.
This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.
Note: This transaction isolation has the lowest concurrency and should only be used when necessary.
The server must be restarted before the change can take effect.
2. The wizard will open. Click Next, then choose the objects you want to migrate and click Next again, then click Advanced,
scroll down a bit and under Types of data to script choose Schema and data (unless you want only the
structures).
4. Run the .sql file on your new server, and you should be done.
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'The Profile Name',
@recipients = '[email protected]',
@body = 'This is a simple email sent from SQL Server.',
@subject = 'Simple email'
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'The Profile Name',
@recipients = '[email protected]',
@query = 'SELECT * FROM Users',
@subject = 'List of users',
@attach_query_result_as_file = 1;
Then use the @html variable with the @body argument. The HTML string can also be passed directly to @body,
although it may make the code harder to read.
EXEC msdb.dbo.sp_send_dbmail
@recipients='[email protected]',
@subject = 'Some HTML content',
@body = @html,
@body_format = 'HTML';
To define memory-optimized variables, you must first create a memory-optimized table type and then declare a
variable from it:
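The original type definition is not shown; a minimal sketch, using an assumed type name and the two columns that appear in the result below:

CREATE TYPE dbo.MemOptTableType AS TABLE
(
    Col1 INT NOT NULL,
    Col2 INT NOT NULL,
    INDEX ix_Col1 NONCLUSTERED (Col1)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

DECLARE @tv dbo.MemOptTableType;
INSERT INTO @tv (Col1, Col2)
VALUES (1,1), (2,2), (3,3), (4,4), (5,5), (6,6);
SELECT Col1, Col2 FROM @tv;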
Result:
Col1 Col2
1 1
2 2
3 3
4 4
5 5
6 6
use SQL2016_Demo
go
CONSTRAINT CHK_soSessionC_SpidFilter
CHECK ( SpidFilter = @@spid ),
)
WITH
(MEMORY_OPTIMIZED = ON,
DURABILITY = SCHEMA_AND_DATA); --or DURABILITY = SCHEMA_ONLY
go
Section 68.3: Show created .dll files and tables for Memory
Optimized Tables
SELECT
OBJECT_ID('MemOptTable1') AS MemOptTable1_ObjectID,
SELECT
name,description
FROM sys.dm_os_loaded_modules
WHERE name LIKE '%XTP%'
GO
SELECT
name,type_desc,durability_desc,Is_memory_Optimized
FROM sys.tables
WHERE Is_memory_Optimized = 1
GO
more information
To memory-optimize this table type simply add the option memory_optimized=on, and add an index if there is none
on the original type:
1. Create a new SCHEMA_ONLY memory-optimized table with the same schema as the global ##temp table
Ensure the new table has at least one index
2. Change all references to ##temp in your Transact-SQL statements to the new memory-optimized table temp
3. Replace the DROP TABLE ##temp statements in your code with DELETE FROM temp, to clean up the contents
4. Remove the CREATE TABLE ##temp statements from your code – these are now redundant
more information
INSERTS: On an INSERT, the system sets the value for the ValidFrom column to the begin time of the current
transaction (in the UTC time zone) based on the system clock and assigns the value for the ValidTo column to the
maximum value of 9999-12-31. This marks the row as open.
UPDATES: On an UPDATE, the system stores the previous value of the row in the history table and sets the value
for the ValidTo column to the begin time of the current transaction (in the UTC time zone) based on the system
clock. This marks the row as closed, with a period recorded for which the row was valid. In the current table, the
row is updated with its new value and the system sets the value for the ValidFrom column to the begin time for the
transaction (in the UTC time zone) based on the system clock. The value for the updated row in the current table for
the ValidTo column remains the maximum value of 9999-12-31.
DELETES: On a DELETE, the system stores the previous value of the row in the history table and sets the value for
the ValidTo column to the begin time of the current transaction (in the UTC time zone) based on the system clock.
This marks the row as closed, with a period recorded for which the previous row was valid. In the current table, the
row is removed. Queries of the current table will not return this row. Only queries that deal with history data return
data for which a row is closed.
MERGE: On a MERGE, the operation behaves exactly as if up to three statements (an INSERT, an UPDATE, and/or a
DELETE) executed, depending on what is specified as actions in the MERGE statement.
Tip : The times recorded in the system datetime2 columns are based on the begin time of the transaction itself. For
example, all rows inserted within a single transaction will have the same UTC time recorded in the column
corresponding to the start of the SYSTEM_TIME period.
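The behavior described above applies to system-versioned (temporal) tables; for reference, a minimal sketch of such a table with the ValidFrom/ValidTo columns (the dbo.Product and dbo.ProductHistory names are assumptions):

CREATE TABLE dbo.Product
(
    ProductId INT NOT NULL PRIMARY KEY CLUSTERED,
    Name NVARCHAR(100) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));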
Cleaning up the SQL Server history table
Over time the history table can grow significantly. Since inserting, updating or deleting data in the history table is not allowed,
the only way to clean it up is first to disable system versioning:
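A sketch of that cleanup sequence, reusing the assumed dbo.Product/dbo.ProductHistory names from the sketch above:

ALTER TABLE dbo.Product SET (SYSTEM_VERSIONING = OFF);

DELETE FROM dbo.ProductHistory
WHERE ValidTo < DATEADD(DAY, -90, SYSUTCDATETIME());

ALTER TABLE dbo.Product
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory, DATA_CONSISTENCY_CHECK = ON));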
Cleaning the history table in Azure SQL Database is a little different, since Azure SQL databases have built-in
support for cleaning of the history table. First, temporal history retention cleanup needs to be enabled at the database
level:
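The two statements being referred to are presumably along these lines (the 90-day period matches the description that follows; the database and table names are assumptions):

ALTER DATABASE [MyAzureDb] SET TEMPORAL_HISTORY_RETENTION ON;

ALTER TABLE dbo.Product
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory, HISTORY_RETENTION_PERIOD = 90 DAYS));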
This will delete all data in the history table older than 90 days. SQL Server 2016 on-premises databases do not
support TEMPORAL_HISTORY_RETENTION and HISTORY_RETENTION_PERIOD; if either of the above two queries
is executed on a SQL Server 2016 on-premises database, the following errors will occur.
If your query produces temp tables, and you want to run it more than once, you will need to drop the tables before
trying to generate them again. The basic syntax for this is:
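A minimal sketch, assuming a temp table named #tempTable:

DROP TABLE #tempTable;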
Trying to execute this syntax before the table exists (e.g. on the first run of your syntax) will cause another error:
Cannot drop the table '#tempTable', because it does not exist or you do not have permission.
To avoid this, you can check to see if the table already exists before dropping it, like so:
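For example (SQL Server 2016 and later also offer DROP TABLE IF EXISTS):

IF OBJECT_ID('tempdb..#tempTable') IS NOT NULL
    DROP TABLE #tempTable;

-- SQL Server 2016+
DROP TABLE IF EXISTS #tempTable;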
After executing all these statements, if we close the query window, open it again and try inserting and selecting,
it will show an error message.
Note: These are viewable by all users of the database, irrespective of permissions level.
USE msdb ;
GO
EXEC dbo.sp_add_job
@job_name = N'Weekly Job' ; -- the job name
Then we have to add a job step using a stored procedure named sp_add_jobStep
EXEC sp_add_jobstep
@job_name = N'Weekly Job', -- Job name to add a step
@step_name = N'Set database to read only', -- step name
@subsystem = N'TSQL', -- Step type
@command = N'ALTER DATABASE SALES SET READ_ONLY', -- Command
@retry_attempts = 5, --Number of attempts
@retry_interval = 5 ; -- in minutes
EXEC dbo.sp_add_jobserver
@job_name = N'Weekly Sales Data Backup',
@server_name = 'MyPC\data'; -- Default is LOCAL
GO
USE msdb
GO
EXEC sp_add_schedule
@schedule_name = N'NightlyJobs' , -- specify the schedule name
@freq_type = 4, -- A value indicating when a job is to be executed (4) means Daily
@freq_interval = 1, -- The days that a job is executed; depends on the value of freq_type
@active_start_time = 010000 ; -- The time on which execution of a job can begin
GO
There are more parameters that can be used with sp_add_schedule; you can read more about them in the link
provided above.
To attach a schedule to an SQL agent job you have to use a stored procedure called sp_attach_schedule
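A minimal sketch, reusing the job and schedule names from the examples above:

EXEC msdb.dbo.sp_attach_schedule
    @job_name = N'Weekly Job',
    @schedule_name = N'NightlyJobs';
GO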
1. READ UNCOMMITTED - means that a query in the current transaction can read data modified by
another transaction that is not yet committed - dirty reads are possible! Nonrepeatable reads and phantom reads
are also possible, because data can still be modified by other transactions.
2. REPEATABLE READ - means that a query in the current transaction can't access data modified by
another transaction that is not yet committed - no dirty reads! No other transactions can modify data being
read by the current transaction until it is completed, which eliminates NONREPEATABLE reads. BUT, if
another transaction inserts NEW ROWS and the query is executed more than once, phantom rows can
appear starting from the second read (if they match the WHERE clause of the query).
3. SNAPSHOT - only able to return data that existed at the beginning of the query. Ensures consistency of the data.
It prevents dirty reads, nonrepeatable reads and phantom reads. To use it, DB configuration is required (see the sketch after this list):
4. READ COMMITTED - the default isolation level of SQL Server. It prevents reading data that is changed by another
transaction until committed. It uses shared locking or row versioning on the tables, which prevents dirty
reads. Whether row versioning is used depends on the DB configuration option READ_COMMITTED_SNAPSHOT - if enabled, row versioning is used; to
enable it, use the statement shown in the sketch after this list:
5. SERIALIZABLE - uses physical locks that are acquired and held until the end of the transaction, which prevents
dirty reads, phantom reads and nonrepeatable reads. BUT, it impacts the performance of the database,
because concurrent transactions are effectively serialized and executed one by one.
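The configuration statements referred to in items 3 and 4 are sketched below (the database name is an assumption):

-- item 3: allow SNAPSHOT isolation, then use it in a session
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

-- item 4: switch READ COMMITTED to row versioning
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;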
Remember that when retrieving data, if you don't specify a row ordering clause (ORDER BY), SQL Server does not
guarantee the order of the rows at any time. Really, at any time. And there's no point arguing about
that; it has been shown literally thousands of times all over the internet.
-- Ascending - upwards
SELECT * FROM SortOrder ORDER BY ID ASC
GO
-- Ascending is default
SELECT * FROM SortOrder ORDER BY ID
GO
-- Descending - downwards
SELECT * FROM SortOrder ORDER BY ID DESC
GO
When ordering by a textual column ((n)char or (n)varchar), pay attention that the order respects the collation. For
more information, look up the topic on collation.
There is a possibility to pseudo-randomize the order of rows in your resultset. Just force the ordering to appear
nondeterministic.
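The usual way to do this is to order by NEWID(); for example, with the SortOrder table used above:

SELECT * FROM SortOrder ORDER BY NEWID()
GO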
Ordering can be remembered in a stored procedure, and that's the way you should do it if it is the last step of
manipulating the rowset before showing it to the end user.
EXEC GetSortOrder
GO
There is a limited (and hacky) support for ordering in the SQL Server views as well, but be encouraged NOT to use it.
-- This will work, but hey... should you really use it?
CREATE VIEW VwSortOrder2
AS
SELECT TOP 99999999 *
FROM SortOrder
ORDER BY ID DESC
GO
For ordering you can either use column names, aliases or column numbers in your ORDER BY.
SELECT *
FROM SortOrder
ORDER BY [Text]
-- New resultset column aliased as 'Msg', feel free to use it for ordering
SELECT ID, [Text] + ' (' + CAST(ID AS nvarchar(10)) + ')' AS Msg
FROM SortOrder
ORDER BY Msg
-- Can be handy if you know your tables, but really NOT GOOD for production
I advise against using the numbers in your code, except if you want to forget about it the moment after you execute
it.
Group
-----
Total
Young
MiddleAge
Old
Male
Female
Group
-----
Female
Male
MiddleAge
Old
Total
Young
Adding a 'case' statement, assigning ascending numerical values in the order you want your data sorted:
Group
-----
Total
Male
USE AdventureWorks;
GRANT CREATE TABLE TO MelanieK;
GO
USE AdventureWorks2012;
GRANT SHOWPLAN TO AuditMonitor;
GO
USE AdventureWorks2012;
GRANT CREATE VIEW TO CarmineEs WITH GRANT OPTION;
GO
use YourDatabase
go
exec sp_addrolemember 'db_owner', 'UserName'
go
cls
sqlcmd.exe -S "your server name" -U "sql user name" -P "sql password" -d "name of database" -Q "here
you may write your query/stored procedure"
Batch files like these can be used to automate tasks, for example to make backups of databases at a specified time
(can be scheduled with Task Scheduler) for a SQL Server Express version where Agent Jobs can't be used.
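A sketch of such a batch file line (server name, database name and backup path are assumptions), using Windows authentication via -E instead of a SQL login:

sqlcmd.exe -S ".\SQLEXPRESS" -E -Q "BACKUP DATABASE [MyDatabase] TO DISK='C:\Backups\MyDatabase.bak' WITH INIT"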
select *
from sys.dm_resource_governor_resource_pools
USE master;
GO
-- Create the database with the default data
-- filegroup and a log file. Specify the
-- growth increment and the max size for the
-- primary data file.
go
ALTER DATABASE TestDB MODIFY FILEGROUP TestDB_FG1 DEFAULT;
go
Create Database
The following SQL command creates a new database Northwind on the current server, using the path C:\Program
Files\Microsoft SQL Server\MSSQL11.INSTSQL2012\MSSQL\DATA\:
USE [master]
GO
Note: A SQL Server database consists of two files, the data file *.mdf and its transaction log *.ldf. Both are
specified when a new database is created with explicit file locations.
Create Table
The following SQL command creates a new table Categories in the current database, using schema dbo (you can
switch database context with Use <DatabaseName>):
Create View
The following SQL command creates a new view Summary_of_Sales_by_Year in the current database, using schema
dbo (you can switch database context with Use <DatabaseName>):
This will join tables Orders and [Order Subtotals] to display the columns ShippedDate, OrderID and Subtotal.
Because table [Order Subtotals] has a blank in its name in the Northwind database, it needs to be enclosed in
square brackets.
Create Procedure
The following SQL command creates a new stored procedure CustOrdersDetail in the current database, using
schema dbo (you can switch database context with Use <DatabaseName>):
This stored procedure, after it has been created, can be invoked as follows:
which will return all order details with @OrderId=10248 (and quantity >=0 as default). Or you can specify the
optional parameter
which will return only orders with a minimum quantity of 10 (or more).
Note
Subqueries can be used with SELECT, INSERT, UPDATE and DELETE statements, within the WHERE, FROM or SELECT clause, along
with IN, comparison operators, etc.
We have a table named ITCompanyInNepal on which we will perform queries to show subqueries examples:
SELECT *
FROM ITCompanyInNepal
WHERE Headquarter IN (SELECT Headquarter
FROM ITCompanyInNepal
WHERE Headquarter = 'USA');
SELECT *
FROM ITCompanyInNepal
WHERE NumberOfEmployee < (SELECT AVG(NumberOfEmployee)
FROM ITCompanyInNepal
)
SELECT CompanyName,
CompanyAddress,
Headquarter,
(Select SUM(NumberOfEmployee)
FROM ITCompanyInNepal
We have to insert data from IndianCompany table to ITCompanyInNepal. The table for IndianCompany is shown
below:
Suppose all the companies whose headquarter is USA decided to fire 50 employees from all US based companies of
Nepal due to some change in policy of USA companies.
UPDATE ITCompanyInNepal
SET NumberOfEmployee = NumberOfEmployee - 50
WHERE Headquarter IN (SELECT Headquarter
FROM ITCompanyInNepal
WHERE Headquarter = 'USA')
Suppose all the companies whose headquarter is Denmark decided to shut down their operations in Nepal.
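Following the same pattern as the UPDATE above, a DELETE with a subquery might look like this:

DELETE
FROM ITCompanyInNepal
WHERE Headquarter IN (SELECT Headquarter
                      FROM ITCompanyInNepal
                      WHERE Headquarter = 'Denmark')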
The OFFSET FETCH clause implements pagination in a more concise manner. With it, it's possible to skip N1 rows
(specified in OFFSET) and return the next N2 rows (specified in FETCH):
SELECT *
FROM sys.objects
ORDER BY object_id
OFFSET 40 ROWS FETCH NEXT 10 ROWS ONLY
SELECT TOP 10 *
FROM
(
SELECT
TOP 50 object_id,
name,
type,
create_date
FROM sys.objects
ORDER BY name ASC
) AS data
ORDER BY name DESC
The inner query will return the first 50 rows ordered by name. Then the outer query will reverse the order of these
50 rows and select the top 10 rows (these will be last 10 rows in the group before the reversal).
SELECT * FROM TableName ORDER BY id OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;
The ROW_NUMBER function can assign an incrementing number to each row in a result set. Combined with a Common
Table Expression that uses a BETWEEN operator, it is possible to create 'pages' of result sets. For example: page one
containing results 1-10, page two containing results 11-20, page three containing results 21-30, and so on.
WITH data
AS
(
SELECT ROW_NUMBER() OVER (ORDER BY name) AS row_id,
object_id,
name,
type,
create_date
FROM sys.objects
)
SELECT *
FROM data
WHERE row_id BETWEEN 41 AND 50
SELECT object_id,
       name,
       type,
       create_date
FROM sys.objects
WHERE ROW_NUMBER() OVER (ORDER BY name) BETWEEN 41 AND 50
Although this would be more convenient, SQL server will return the following error in this case:
Rebuilding a CLUSTERED COLUMNSTORE index will "reload" data from the current table into a new one, apply
compression again, remove deleted rows, etc.
COLUMNSTORE tables are better for tables where you expect full scans and reports, while row store tables are
better for tables where you will read or update smaller sets of rows.
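A minimal sketch of creating and rebuilding a clustered columnstore index, assuming a table named dbo.FactSales:

CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales;

-- reload/recompress the data and remove rows marked as deleted
ALTER INDEX cci_FactSales ON dbo.FactSales REBUILD;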
SELECT
PARSENAME(@ObjectName, 4) as Server
,PARSENAME(@ObjectName, 3) as DB
,PARSENAME(@ObjectName, 2) as Owner
,PARSENAME(@ObjectName, 1) as Object
Returns:

Server          DB         Owner  Object
HeadofficeSQL1  Northwind  dbo    Authors
Express: Entry-level free edition. Includes core RDBMS functionality. Limited to 10 GB of database size. Ideal for
development and testing.
Standard Edition: Standard Licensed edition. Includes core functionality and Business Intelligence
capabilities.
Enterprise Edition: Full-featured SQL Server edition. Includes advanced security and data warehousing
capabilities.
Developer Edition: Includes all of the features from Enterprise Edition and no limitations, and it is free to
download and use for development purposes only.
After downloading/acquiring SQL Server, the installation gets executed with SQLSetup.exe, which is available as a
GUI or a command-line program.
Installing via either of these will require you to specify a product key and run some initial configuration that
includes enabling features, separate services and setting the initial parameters for each of them. Additional services
and features can be enabled at any time by running the SQLSetup.exe program in either the command-line or the
GUI version.
A Seek occurs when SQL Server knows where it needs to go and only grabs specific items. This typically occurs when
good filters are put in a query, such as WHERE name = 'Foo'.
A Scan is when SQL Server doesn't know exactly where all of the data it needs is, or decided that the Scan would be
more efficient than a Seek if enough of the data is selected.
Seeks are typically faster since they are only grabbing a sub-section of the data, whereas Scans are selecting a
majority of the data.
HASH join
LOOP join
MERGE join
QO will explore plans and choose the optimal operator for joining tables. However, if you are sure that you know
what would be the optimal join operator, you can specify what kind of JOIN should be used. Inner LOOP join will
force QO to choose Nested loop join while joining two tables:
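A sketch of such a hint; Sales.OrderLines appears elsewhere in this section, while Sales.Orders is an assumption:

SELECT o.OrderID, ol.Quantity
FROM Sales.Orders AS o
INNER LOOP JOIN Sales.OrderLines AS ol
    ON ol.OrderID = o.OrderID;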
You can explicitly require that QO picks one or another aggregate operator if you know what would be the optimal.
With OPTION (ORDER GROUP), QO will always choose Stream aggregate and add Sort operator in front of Stream
aggregate if input is not sorted:
Merge (Union)
Concat (Union)
Hash Match (Union)
You can explicitly specify what operator should be used using OPTION() hint:
SELECT OrderID,
AVG(Quantity)
FROM Sales.OrderLines
GROUP BY OrderID
OPTION (MAXDOP 2);
This option overrides the MAXDOP configuration option of sp_configure and Resource Governor. If MAXDOP is set
to zero then the server chooses the max degree of parallelism.
SQL Server/Azure SQL Database will collect information about executed queries and provide information in
sys.query_store views:
sys.query_store_query
sys.query_store_query_text
sys.query_store_plan
sys.query_store_runtime_stats
sys.query_store_runtime_stats_interval
sys.database_query_store_options
sys.query_context_settings
EXEC sp_query_store_remove_query 4;
EXEC sp_query_store_remove_plan 3;
Parameters for these stored procedures are query/plan id retrieved from system views.
You can also just remove execution statistics for particular plan without removing the plan from the store:
EXEC sp_query_store_reset_exec_stats 3;
From this point, QO will always use plan provided for the query.
If you want to remove this binding, you can use the following stored procedure:
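That stored procedure is sp_query_store_unforce_plan; for example, using the same example query and plan ids as above:

EXEC sp_query_store_unforce_plan @query_id = 4, @plan_id = 3;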
From this point, QO will again try to find the best plan.
This will yield a result set with a RowID field which you can use to page through the results.
SELECT *
FROM
( SELECT Row_Number() OVER(ORDER BY UserName) As RowID, UserFirstName, UserLastName
FROM Users
) As RowResults
WHERE RowID Between 5 AND 10
This would transfer the tbl_Staging table from the dbo schema to the dvr schema
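The statement being described is presumably an ALTER SCHEMA ... TRANSFER:

ALTER SCHEMA dvr TRANSFER dbo.tbl_Staging;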
Error 3154: The backup set holds a backup of a database other than the existing database.
In that case you should use WITH REPLACE option to replace database with the database from backup:
Even in this case you might get the errors saying that files cannot be located on some path:
Msg 3156, Level 16, State 3, Line 1 File 'WWI_Primary' cannot be restored to
'D:\Data\WideWorldImportersDW.mdf'. Use WITH MOVE to identify a valid location for the file.
This error probably happens because your files were not placed on the same folder path that exists on the new server.
In that case you should move individual database files to new location:
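A sketch of such a restore, reusing the WWI_Primary logical file name from the error above (the log file's logical name and the target paths are assumptions):

RESTORE DATABASE WideWorldImportersDW
FROM DISK = 'C:\Backup\WideWorldImportersDW.bak'
WITH REPLACE,
     MOVE 'WWI_Primary' TO 'C:\Data\WideWorldImportersDW.mdf',
     MOVE 'WWI_Log' TO 'C:\Data\WideWorldImportersDW.ldf';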
With this statement you can replace database with all database files moved to new location.
Instead of standard BEGIN END block, you need to use BEGIN ATOMIC block:
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE='us_english')
-- T-Sql code goes here
END
Example:
Instead of standard BEGIN END block, you need to use BEGIN ATOMIC block:
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE='us_english')
-- T-Sql code goes here
END
RETURN (@ReturnValue);
END
-- usage sample:
SELECT dbo.udfMultiply(10, 12)
Instead of standard BEGIN END block, you need to use BEGIN ATOMIC block:
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE='us_english')
-- T-Sql code goes here
END
Example:
Geography A latitude/longitude coordinate system for a curved surface (the earth). There are multiple projections
of curved surfaces, so each geography spatial instance must let SQL Server know which projection to use. The usual Spatial
Reference ID (SRID) is 4326, which measures distances in meters. This is the default SRID used in most web
maps.
Parameter Detail
Lat or X Is a float expression representing the x-coordinate of the Point being generated
Long or Y Is a float expression representing the y-coordinate of the Point being generated
String Well Known Text (WKT) of a geometry/geography shape
Binary Well Known Binary (WKB) of a geometry/geography shape
SRID Is an int expression representing the spatial reference ID (SRID) of the geometry/geography instance you wish to return
--Explicit constructor
DECLARE @gm1 GEOMETRY = GEOMETRY::Point(10,5,0)
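A geography point can be constructed in the same way; the coordinates below are arbitrary example values and SRID 4326 is the default mentioned above:

DECLARE @gg1 GEOGRAPHY = GEOGRAPHY::Point(47.6062, -122.3321, 4326);
SELECT @gg1.STAsText();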
This procedure will return the same result set as the SQL query provided as statement text. sp_executesql can execute a
SQL query provided as a string literal or as a variable/parameter (unlike EXEC(), it does not accept an expression such as a string concatenation):
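A minimal sketch of the variants:

-- string literal
EXEC sp_executesql N'SELECT name FROM sys.objects';

-- variable
DECLARE @stmt NVARCHAR(MAX) = N'SELECT name FROM sys.objects';
EXEC sp_executesql @stmt;

-- parameterized statement
EXEC sp_executesql
    N'SELECT name FROM sys.objects WHERE type = @type',
    N'@type CHAR(2)',
    @type = 'U';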
You need QUOTENAME function to escape special characters in @table variable. Without this function you would
get syntax error if @table variable contains something like spaces, brackets, or any other special character.
SQL query will be executed under dbo database user. All permission checks applicable to dbo user will be checked
on SQL query.
SET @sql = N'SELECT COUNT(*) FROM AppUsers WHERE Username = ''' + @user + ''' AND Password = ''' +
@pass + ''''
EXEC(@sql)
If the value of the user variable is myusername' OR 1=1 -- the following query will be executed:
SELECT COUNT(*)
FROM AppUsers
WHERE Username = 'myusername' OR 1=1 --' AND Password = ''
The comment at the end of the value of the variable will comment out the trailing part of the query, and the condition 1=1
will be evaluated. An application that checks whether at least one user is returned by this query will get a count greater
than 0 and the login will succeed.
SET @sql = N'SELECT COUNT(*) FROM AppUsers WHERE Username = @user AND Password = @pass'
EXEC sp_executesql @sql, N'@user nvarchar(50), @pass nvarchar(50)', @username, @password
The second parameter is a list of the parameters used in the query together with their types; after this list, the variables that will
be used as parameter values are provided.
When a user tries to select emails from the Company table, he will get something like the following values:
In the parameters of the partial function you can specify how many characters from the beginning will be shown, how
many characters from the end will be shown, and what the pattern shown in the middle will be.
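A sketch of such a mask on a phone-number column (the Phone column name is an assumption); the pattern matches the masked values shown below:

ALTER TABLE Company
ALTER COLUMN Phone ADD MASKED WITH (FUNCTION = 'partial(5,"XXXXXXX",2)');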
When a user tries to select phone numbers from the Company table, he will get something like the following values:
(381)XXXXXXX39
(360)XXXXXXX01
(415)XXXXXXX05
Note that in some cases the displayed value might match the actual value in the column (if a randomly selected number happens to match the actual one).
If some user already has unmask permission, you can revoke this permission:
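For example, assuming a user named TestUser:

REVOKE UNMASK FROM TestUser;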
-S the server (and instance) to connect to
-d the database to use
-o the output file to write results to
-Q the query to execute (sqlcmd exits after running it)
In addition, if some CLR module need external access, you should set TRUSTWORTHY property to ON in your
database:
PERMISSION_SET is Safe by default meaning that code in .dll don't need permission to access external resources
(e.g. files, web sites, other servers), and that it will not use native code that can access memory.
PERMISSION_SET = EXTERNAL_ACCESS is used to mark assemblies that contain code that will access external
resources.
you can find information about current CLR assembly files in sys.assemblies view:
SELECT *
FROM sys.assemblies asms
WHERE is_user_defined = 1
You need to specify the name of the function and a signature with input parameters and return values that match the .NET
function. In the AS EXTERNAL NAME clause you need to specify the assembly name, the namespace/class name where this
function is placed, and the name of the method in the class that contains the code that will be exposed as the function.
You can find information about the CLR functions using the following query:
You need to specify name of the type that will be used in T-SQL queries. In EXTERNAL NAME clause you need to
specify assembly name, namespace, and class name.
You need to specify name of the procedure and signature with input parameters that match .Net method. In AS
EXTERNAL NAME clause you need to specify assembly name, namespace/class name where this procedure is
placed and name of the method in the class that contains the code that will be exposed as procedure.
SELECT [Description]
FROM dbo.TableName
WHERE [Name] = 'foo'
The only special character for SQL Server is the single quote ' and it is escaped by doubling its usage. For example,
to find the name O'Shea in the same table, the following syntax would be used:
SELECT [Description]
FROM dbo.TableName
WHERE [Name] = 'O''Shea'
The following example returns all DBCC statements for which Help is available:
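That statement is DBCC HELP with the '?' argument:

DBCC HELP ('?');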
Examples are:
DBCC DROPCLEANBUFFERS
Removes all clean buffers from the buffer pool, and columnstore objects from the columnstore object pool.
DBCC FREEPROCCACHE
-- or
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);
Removes all SQL query plans from the plan cache, so every new query will be recompiled. You can specify a plan handle
or SQL handle to remove the plans for a specific query plan or SQL statement.
Cleans all cached entries created by the system. It can clean entries in all resource pools or in a specified resource pool
(myresourcepool in the example above).
DBCC FLUSHAUTHCACHE
Empties the database authentication cache containing information about logins and firewall rules.
Shrinks database MyDB to 10%. The second parameter is optional. You can use the database id instead of the name.
Shrinks the data file named DataFile1 in the current database. The target size is 7 MB (this parameter is optional).
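The two statements being described are presumably:

DBCC SHRINKDATABASE (MyDB, 10);

DBCC SHRINKFILE (DataFile1, 7);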
ALTER TABLE Table1 WITH NOCHECK ADD CONSTRAINT chkTab1 CHECK (Col1 > 100);
GO
DBCC CHECKCONSTRAINTS(Table1);
--OR
DBCC CHECKCONSTRAINTS ('chkTab1');
The check constraint is added with the NOCHECK option, so it will not be checked against existing data. DBCC CHECKCONSTRAINTS
will trigger the constraint check.
DBCC PROCCACHE
Returns the current output buffer in hexadecimal and ASCII format for the specified session_id (and optional
request_id).
Displays the last statement sent from a client to an instance of Microsoft SQL Server.
The following example switches on trace flag 3205 globally and 3206 for the current session:
The following example switches off trace flag 3205 globally and 3206 for the current session:
The following example displays the status of trace flags 2528 and 3205:
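The three statements being described are presumably:

DBCC TRACEON (3205, -1);
DBCC TRACEON (3206);

DBCC TRACEOFF (3205, -1);
DBCC TRACEOFF (3206);

DBCC TRACESTATUS (2528, 3205);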
The BULK INSERT command will map columns in the file to columns in the target table.
In this example, CODEPAGE specifies that the source file is a UTF-8 file, and the terminators are a comma and a new line.
The SINGLE_BLOB option will read the entire content of the file as a single cell.
9.0
2
1 SQLCHAR 0 10 "\t" 1 ID SQL_Latin1_General_Cp437_BIN
2 SQLCHAR 0 40 "\r\n" 2 Description SQL_Latin1_General_Cp437_BIN
The following example shows how to read the entire content of a JSON file using OPENROWSET(BULK) and then provide the
BulkColumn to the OPENJSON function, which will parse the JSON and return the columns:
SELECT book.*
FROM OPENROWSET (BULK 'C:\JSON\Books\books.json', SINGLE_CLOB) as j
CROSS APPLY OPENJSON(BulkColumn)
WITH( id nvarchar(100), name nvarchar(100), price float,
pages int, author nvarchar(100)) AS book
More: https://msdn.microsoft.com/en-us/library/bb522893.aspx
USE [MyDatabase]
SET @msg = (
SELECT 'HelloThere' "elementNum1"
FOR XML PATH(''), ROOT('ExampleRoot'), ELEMENTS XSINIL, TYPE
);
First we need to create a procedure that is able to read and process data from the Queue
USE [MyDatabase]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
AS
BEGIN
declare
@message_body xml,
@message_type_name nvarchar(256),
@conversation_handle uniqueidentifier,
@messagetypename nvarchar(256);
BEGIN TRANSACTION
WAITFOR(
RECEIVE TOP(1)
@message_body = CAST(message_body as xml),
@message_type_name = message_type_name,
@conversation_handle = conversation_handle,
@messagetypename = message_type_name
FROM DwhInsertSmsQueue
), TIMEOUT 1000;
IF (@@ROWCOUNT = 0)
BEGIN
ROLLBACK TRANSACTION
BREAK
END
IF (@messagetypename = '//initiator')
BEGIN
END
IF (@messagetypename = 'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog')
BEGIN
END CONVERSATION @conversation_handle;
END
COMMIT TRANSACTION
END
END
USE [MyDatabase]
Instead of individually assigning permissions to a user on a piecemeal basis, just run the script below, copy the output
and then run it in a query window.
-- SQL 2008+
ALTER ROLE [myRole] ADD MEMBER [aUser];
ALTER ROLE [myRole] DROP MEMBER [aUser];
Note: role members can be any database-level principal. That is, you can add a role as a member in another role.
Also, adding/dropping role members is idempotent. That is, attempting to add/drop will result in their
presence/absence (respectively) in the role regardless of the current state of their role membership.
First, you need a table-valued function that contains some predicate describing the condition that will
allow users to read data from some table:
CREATE FUNCTION
dbo.pUserCanAccessCompany(@CompanyID int)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN (
SELECT 1 as canAccess WHERE
CAST(SESSION_CONTEXT(N'CompanyID') as int) = @CompanyID
)
In this example, the predicate says that only users that have a value in SESSION_CONTEXT that is matching input
argument can access the company. You can put any other condition e.g. that checks database role or database_id
of the current user, etc.
Most of the code above is a template that you will copy-paste. The only thing that will change here is the
name and arguments of predicate and condition in WHERE clause. Now you create security policy that will
apply this predicate on some table.
Now you can create security policy that will apply predicate on some table:
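A sketch of such a security policy (the policy name is an assumption; the predicate function and Company table are the ones described above):

CREATE SECURITY POLICY dbo.CompanyAccessPolicy
ADD FILTER PREDICATE dbo.pUserCanAccessCompany(CompanyID) ON dbo.Company
WITH (STATE = ON);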
This security policy assigns the predicate to the Company table. Whenever someone tries to read data from the Company table,
the security policy will apply the predicate on each row, pass the CompanyID column as a parameter of the predicate, and the
predicate will evaluate whether this row should be returned in the result of the SELECT query.
You can add more predicates on tables in the existing security policy.
CREATE FUNCTION
dbo.pUserCanAccessProduct(@CompanyID int)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN (
SELECT 1 as canAccess WHERE
CAST(SESSION_CONTEXT(N'CompanyID') as int) = @CompanyID
)
In this example, the predicate says that only users that have a value in SESSION_CONTEXT that is matching input
argument can access the company. You can put any other condition e.g. that checks database role or database_id
of the current user, etc.
Most of the code above is a template that you will copy-paste. The only thing that will change here is the
name and arguments of predicate and condition in WHERE clause. Now you create security policy that will
apply this predicate on some table.
Now we can create a security policy with a predicate that will block updates on the Product table if the CompanyID column
in the table does not satisfy the predicate.
This predicate will be applied on all operations. If you want to apply predicate on some operation you can write
something like:
Possible options that you can add after the block predicate definition are AFTER INSERT, AFTER UPDATE, BEFORE UPDATE and BEFORE DELETE.
SELECT EncryptByCert(Cert_ID('My_New_Cert'),
'This text will get encrypted') encryption_test
Usually, you would encrypt with a symmetric key, that key would get encrypted by the asymmetric key (public key)
from your certificate.
Also, note that encryption is limited to certain lengths depending on key length and returns NULL otherwise.
Microsoft writes: "The limits are: a 512 bit RSA key can encrypt up to 53 bytes, a 1024 bit key can encrypt up to 117
bytes, and a 2048 bit key can encrypt up to 245 bytes."
EncryptByAsymKey has the same limits. For UNICODE this would be divided by 2 (16 bits per character), so 58
characters for a 1024 bit key.
-- Encrypt
SELECT EncryptByKey(Key_GUID('SSN_Key_01'), 'This text will get encrypted');
This will also encrypt, but using a passphrase instead of an asymmetric (certificate) key or an explicit symmetric key.
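For example, with EncryptByPassPhrase:

SELECT EncryptByPassPhrase('MyPassPhrase', 'This text will get encrypted');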
Now open a first query editor (on the database), insert the code below, and execute it (do not touch the --rollback).
In this case you insert a row in the DB but do not commit the changes.
begin tran
--rollback
Now open a Second Query Editor (on the database), insert the code below and execute
begin tran
You may notice that in the second editor you can see the newly created (but not committed) row from the first transaction.
In the first editor execute the rollback (select the rollback word and execute).
Execute the query in the second editor and you will see that the record disappears (a dirty read); this occurs because you
told the 2nd transaction to read all rows, including uncommitted ones.
The following features added in version 2005 from its previous version:
The following features added in version 2008 from its previous version:
The following features added in version 2008 R2 from its previous version:
The following features added in version 2012 from its previous version:
1. Column store indexes - reduces I/O and memory utilization on large queries.
2. Pagination - pagination can be done by using the "OFFSET" and "FETCH" commands.
3. Contained database – Great feature for periodic data migrations.
4. AlwaysOn Availability Groups
5. Windows Server Core Support
6. User-Defined Server Roles
7. Big Data Support
8. PowerView
9. SQL Azure Enhancements
10. Tabular Model (SSAS)
11. DQS Data quality services
12. File Table - an enhancement to the FILESTREAM feature which was introduced in 2008.
13. Enhancement in Error Handling including THROW statement
14. Improvement to SQL Server Management Studio Debugging:
a. SQL Server 2012 introduces more options to control breakpoints.
b. Improvements to debug-mode windows.
c. Enhancement in IntelliSense - like Inserting Code Snippets.
The following features added in version 2014 from its previous version:
The following features added in version 2016 from its previous version:
2. DROP IF EXISTS
4. ALTER TABLE can now alter many columns while the table remains online, using WITH (ONLINE = ON | OFF).
9. FORMATMESSAGE Statement
a. InstanceDefaultDataPath
b. InstanceDefaultLogPath
c. ProductBuild
d. ProductBuildType
e. ProductMajorVersion
f. ProductMinorVersion
g. ProductUpdateLevel
h. ProductUpdateReference
Within an query editor window either press Ctrl + Shift + R or select Edit | IntelliSense | Refresh Local
Cache from the menu.
After this all changes since the last refresh will be available to IntelliSense.
You can find version, edition (basic, standard, or premium), and service objective (S0,S1,P4,P11, etc.) of SQL
Database that is running as a service in Azure using the following statements:
select @@version
SELECT DATABASEPROPERTYEX('Wwi', 'EDITION')
SELECT DATABASEPROPERTYEX('Wwi', 'ServiceObjective')
If you try to change the service level while a change of the service level of the current database is still in progress, you will get
the following error:
Msg 40802, Level 16, State 1, Line 1 A service objective assignment on server '......' and database '.......' is
already in progress. Please wait until the service objective assignment state for the database is marked as
'Completed'.
Target server may be in another data center (usable for geo-replication). If a database with the same name already
exists on the target server, the command will fail. The command is executed on the master database on the server
hosting the local database that will become the primary. When ALLOW_CONNECTIONS is set to ALL (it is set to NO
by default), secondary replica will be a read-only database that will allow all logins with the appropriate permissions
to connect.
Secondary database replica might be promoted to primary using the following command:
You can create copy of an existing database and place it in some elastic pool:
SELECT
SUM (user_object_reserved_page_count)*8 as usr_obj_kb,
SUM (internal_object_reserved_page_count)*8 as internal_obj_kb,
SUM (version_store_reserved_page_count)*8 as version_store_kb,
SUM (unallocated_extent_page_count)*8 as freespace_kb,
SUM (mixed_extent_page_count)*8 as mixedextent_kb
FROM sys.dm_db_file_space_usage
USE [MASTER]
SELECT * FROM sys.databases WHERE database_id = 2
OR
USE [MASTER]
SELECT * FROM sys.master_files WHERE database_id = 2
With the help of the DMV below, you can check how much TempDb space your session is using. This query is quite
helpful while debugging TempDb issues.
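The DMV referred to is presumably sys.dm_db_session_space_usage; a minimal sketch for the current session:

SELECT
    session_id,
    user_objects_alloc_page_count * 8 AS user_objects_kb,
    internal_objects_alloc_page_count * 8 AS internal_objects_kb
FROM sys.dm_db_session_space_usage
WHERE session_id = @@SPID;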
On the right side you can see some shortcuts which are set by default in SSMS. If you need to add a new one, just
click on any column under the Stored Procedure column.
Now, suppose you want to select all the records from a table when you select the table name and press CTRL+5 (you can
choose any key). You can create the shortcut as follows.