In December 2016, ISO released a new version of the international SQL standard (ISO/IEC 9075:2016). It supersedes the previous version from 2011.

This article is a brief overview of the new features being introduced into the SQL language. Strictly speaking, this article covers the additions to part 2 of the standard (SQL/Foundation), i.e. the most commonly used part.

This article also shows the availability of these features among six major databases. Note that respective figures—shown below—only reflect whether or not the databases support the features in the way described by the standard. For example, an X in the JSON row does not mean the database has no JSON support—it simply means the database doesn’t support the JSON features described by the SQL standard. As a matter of fact, all tested databases support JSON in some way—but not necessarily in the way described by the standard.

Before going through all the shiny new features, let’s look at some trivia: part 2 of the SQL:2016 standard has 1732 pages—that’s 260 pages more (~18%) than the 2011 edition. It introduces 44 new optional features (+14%). Let’s take a look at them…

Row Pattern Recognition

Row pattern recognition captures groups of rows that follow a pattern. Rows in matched groups can be filtered, grouped, and aggregated. The pattern itself is described with a simple regular expression syntax.

The main use of row pattern recognition is to check time series for patterns. However, the match_recognize clause combines aspects of the where, group by, having and over clauses (window functions), so it is also useful in many other cases.
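
To give a first impression of the syntax, here is a minimal sketch that finds streaks of consecutively falling prices. The stock_prices table and its columns are made up for illustration:

SELECT *
  FROM stock_prices
       MATCH_RECOGNIZE (
         PARTITION BY symbol                  -- search each symbol separately
         ORDER BY trade_date                  -- the pattern runs along the time axis
         MEASURES first(trade_date) AS start_date,
                  last(trade_date)  AS end_date
         ONE ROW PER MATCH                    -- one summary row per streak
         PATTERN ( strt down+ )               -- any row followed by one or more "down" rows
         DEFINE down AS price < prev(price)   -- "down" rows have a lower price than the previous row
       )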

I have given a presentation about row pattern recognition. It discusses several examples in two implementation variants: with and without the new match_recognize clause. The examples cover some typical use cases, and also some atypical use cases for row pattern recognition:

Consecutive events: identifying sessions in a web-log; tolerating gaps in sequences

Top-N per group (might be faster than window functions!)

Time intervals: finding gaps (e.g. for scheduling)

Time intervals: closing gaps (creating new rows)

If you understand German, you can watch the video recording here.

Readers interested in more details may refer to the technical report “Row Pattern Recognition in SQL” (ISO/IEC TR 19075-5:2016) available for free at ISO.

JSON

SQL has been supporting arrays, multisets (nested tables) and composite types for 18 years. In 2003, SQL was given a native XML type. Now it has JSON support.

The following sections briefly describe the key parts of the new standard JSON functionality. A more detailed discussion is available in the technical report by ISO.

No Native JSON Type

Even though XML and JSON are somewhat similar—they are documents with nested structures—their integration into SQL is quite different. The most striking difference is that the standard does not define a native JSON type like it does for XML. Instead, the standard uses strings to store JSON data.

Note that this does not prevent vendors from providing a JSON type. The standard just defines functions that interpret strings as JSON data. To claim conformity to the standard’s JSON features, a database must support these string-based functions. Vendors are still free to add a JSON type and related functions. They are even allowed to provide the standard’s JSON functions for their native JSON type—a very reasonable option in my opinion.

The only annoying consequence of using strings for JSON data is the error handling. If there were a JSON type, parsing errors could only happen when casting a string into the JSON type. Instead, the standard defines an on error clause for all functions that interpret strings as JSON data.

json_value(json, '$.id' null on error)

The default is null on error. The string '$.id' is a JSON path (described below).

Note that it is still possible to declare table columns that only accept valid JSON data:

CREATE TABLE … ( jcol CLOB CHECK (jcol IS JSON) )

The check constraint uses the new is json predicate to test whether the string contains valid JSON data or not. The is json predicate can even test for specific JSON types and is thus more flexible than a single JSON type:

<expr> is [not] json [ value | array | object | scalar ]
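
For example, a check constraint can require not just any valid JSON, but specifically a JSON array (a variation of the example above):

CREATE TABLE … ( jcol CLOB CHECK (jcol IS JSON ARRAY) )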

JSON Formats

The SQL standard allows database vendors to support different JSON formats. The one described by RFC 7159 is the mandatory default.

Functions that generate or parse JSON data accept the optional format clause to specify which format to use (format json is the default). The alternatives offered might have a very different syntax (like BSON).

Creating JSON Data

The standard defines the following functions to create JSON strings:

json_object([key] <expr> value <expression> [, …])
Creates a JSON object. The keywords key (optional) and value introduce the attribute name and value:

json_object(key 'id' value 1234, 'name' value 'Markus')

The standard also accepts a colon (:) between key and value:

json_object('id': 1234, 'name': 'Markus')

A comma is only used to list multiple key/value pairs. This is a noteworthy difference to json_object (MySQL, SQLite) and json_build_object (PostgreSQL): they use the comma both to separate keys from values and to list multiple key/value pairs.

json_array([<expr> [, …]])
Creates a JSON array from the values provided.

json_array(<query>)
Creates a JSON array from the values returned by <query>. The query must return exactly one column.

json_arrayagg(<expr> [order by …])
Creates a JSON array from the values of a group (like array_agg):

SELECT json_arrayagg(col [order by seq]) FROM … GROUP BY x

Note that the optional order by clause is inside the parentheses, not in a within group clause as for listagg.

json_objectagg([key] <expr> value <expression>)
Creates a JSON object from the key/value pairs of a group. It uses the same syntax to denote the key and value as json_object:

SELECT json_objectagg(k value v) FROM … GROUP BY x

An order by clause is not allowed because JSON objects are unordered key/value pairs.
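
Putting these functions together, the following sketch builds one JSON array per department that contains one object per employee. The employees table and its columns are made up for illustration:

SELECT department
     , json_arrayagg(
         json_object('id': emp_id, 'name': emp_name)
         order by emp_name
       ) AS staff
  FROM employees
 GROUP BY department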

Accessing JSON Items

The following functions use the so-called SQL/JSON path language (described below) to access parts of a JSON document. They interpret strings as JSON data and thus accept the on error clause.

json_exists(<json>, <path>)
Tests whether a specific path exists in a JSON document. It evaluates to true, false or unknown and can be used in the where clause directly:

WHERE json_exists(jcol, '$.name')

json_value(<json>, <path> [returning <type>])
Extracts a scalar JSON value (everything except object and array) and returns it as a native SQL type. The optional returning clause performs a typecast. Without a returning clause, json_value returns a string.

json_query(<json>, <path> …)
Extracts a part of a JSON document and returns it as a JSON string. The main differences to json_value are: (1) it can extract any JSON type; (2) it always returns a string; (3) it can extract multiple elements from a JSON document. Due to these differences, there are two special cases:

Multiple hits: By default, json_query raises an exception if the JSON path matches more than one element (like json_value). The optional with [conditional | unconditional] [array] wrapper clause wraps the result into an array and returns all hits.

JSON path denotes a single string: json_query generally returns JSON data. If the JSON path matches a single JSON string, the result is still enclosed in double quotes with JSON special characters escaped. The omit quotes [on scalar string] clause returns the raw data instead (like json_value).
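
The following sketch combines these functions. It assumes a table t whose jcol column stores documents such as {"id": 1, "name": "Markus", "tags": ["sql", "json"]}:

SELECT json_value(jcol, '$.id' returning numeric)       AS id    -- scalar value as a native NUMERIC
     , json_value(jcol, '$.name')                       AS name  -- scalar value as a string (the default)
     , json_query(jcol, '$.tags[*]' with array wrapper) AS tags  -- multiple hits, wrapped into one array
  FROM t
 WHERE json_exists(jcol, '$.name')                               -- keep only rows that have a name attribute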

Finally, there is the json_table function: it is basically a table function—i.e. it is used in the from clause.

json_table(<json>, <path> columns …)
Transforms JSON data into a table. Json_table does not introduce any new functionality but is often simpler (and faster) than using multiple json_value calls. Please refer to ISO's technical report (paragraph 5.3.4) for the full syntax.

As a teaser, consider the following example. It transforms a JSON document (an array containing two objects) into a table:

[{id: 1, name: "Marvin"},
 {id: 2, name: "Arthur"}
]

SELECT jt.*
  FROM t
     , JSON_TABLE
       ( jcol
       , '$[*]'
         COLUMNS (id   NUMERIC      PATH '$.id',
                  name VARCHAR(255) PATH '$.name'
                 )
       ) jt

The json_table function produces one row for each element matched by the JSON path $[*]. The columns clause declares the names and types of the result columns and the JSON paths to the actual values (relative to the main JSON path).

id  name
1   Marvin
2   Arthur

JSON Path

The SQL standard specifies a path language for JSON. It “adopts many features of ECMAscript, though it is neither a subset nor a superset of ECMAscript.”

In the JSON path language, the dollar sign ( $ ) represents the current context element, the period ( . ) an object member, and the brackets ( [] ) an array element. The following examples illustrate this:

$.name
Denotes the value of the name attribute of the current JSON object.

$[0]
Denotes the first element of the current JSON array.

$.events[last]
Denotes the last element of the array stored in the attribute events of the current JSON object.

$.events[0 to 4, last]
Denotes the first five and the last element of the array stored in the attribute events of the current JSON object.

$.*
Denotes the values of all attributes of the current JSON object.

JSON path supports filter expressions in the form ?(<expression>) . In filter expressions, the at sign ( @ ) denotes the current context.

$.* ?(@.type()=="number")
Denotes all attribute values of type number.

$.events[*] ?(exists(@.name))
Denotes all array elements of the attribute events that contain a JSON object with the attribute name.

$?(@.name starts with "Lorem")
Denotes the full JSON document if it is a JSON object that has an attribute name that contains a string that starts with Lorem.

Other functions available in filter expressions include size() (array size), ceiling() , floor() , abs() , and datetime() (for parsing, see below).

The SQL/JSON path language defines two modes: lax , which is the default, and strict . The mode can be specified by adding a prefix to the JSON path expression (e.g. 'strict $.name' ). The strict mode triggers error handling (subject to the on error clause) for all errors. This includes accessing non-existing object members and using an array accessor on a scalar value or JSON object.

The lax mode suppresses these errors. If required, it unwraps arrays or wraps scalar values so that the document structure and JSON path expression fit to each other. The lax mode allows working with variable document structures without adjusting the JSON path to each document.
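
To illustrate the difference, consider a document that has no name attribute: the lax path below simply yields a null, while the strict path triggers the on error handling, here a default value. This is just a sketch based on the json_value function described above:

SELECT json_value(jcol, 'lax $.name')                           AS name_lax     -- null if the attribute is missing
     , json_value(jcol, 'strict $.name' default 'n/a' on error) AS name_strict  -- 'n/a' if the attribute is missing
  FROM t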

What’s missing in SQL/JSON

The SQL standard does not provide functions to update parts of JSON documents (like json_set in MySQL and SQLite, jsonb_set in PostgreSQL, or json_modify in SQL Server).

Date and Time Formatting

Formatting dates and times (temporal data) is one of the gaps in the SQL standard that was filled in pretty much every database—of course, every vendor filled it differently. SQL:2016 finally added this functionality to the standard.

The SQL standard uses a format template such as 'YYYY-MM-DD' to describe how to format or parse temporal data. The following table summarizes the available mnemonics; notice the similarities with the “datetime format models” used by the Oracle database.

Mnemonic             Meaning               extract field
YYYY | YYY | YY | Y  Year                  YEAR
RRRR | RR            Rounded year
MM                   Month                 MONTH
DD                   Day of month          DAY
DDD                  Day of year
HH | HH12            12 hour
HH24                 24 hour               HOUR
MI                   Minute                MINUTE
SS                   Second of minute      SECOND
SSSSS                Second of day
FF1 | … | FF9        Fraction (in SECOND)
A.M. | P.M.          AM or PM
TZH                  Time zone hour        TIMEZONE_HOUR
TZM                  Time zone minute      TIMEZONE_MINUTE

Remember that the extract expression (good old SQL-92) can access the individual components of temporal types. I’ve added the respective extract field names for reference.

Format templates can be used in two ways: (1) in the JSON path method datetime (see above); (2) in a cast specification:

CAST(<datetime> AS <char string type> [FORMAT <template>])
CAST(<char string> AS <datetime type> [FORMAT <template>])
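
For example, the following sketches parse a string into a date and render a timestamp as text using the mnemonics from the table above:

CAST('2017-06-01' AS DATE FORMAT 'YYYY-MM-DD')
CAST(CURRENT_TIMESTAMP AS VARCHAR(16) FORMAT 'YYYY-MM-DD HH24:MI')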

Listagg

Listagg is a new ordered set function that resembles the group_concat and string_agg functions offered by some databases. It transforms values from a group of rows into a delimited string.

The minimal syntax is:

LISTAGG(<expr>, <separator>) WITHIN GROUP(ORDER BY …)
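
For example, the following query returns one comma-separated, alphabetically sorted list of names per department. The employees table and its columns are made up for illustration:

SELECT department
     , LISTAGG(emp_name, ', ') WITHIN GROUP(ORDER BY emp_name) AS names
  FROM employees
 GROUP BY department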

Listagg accepts the optional on overflow clause to define the behavior if the result becomes too long:

LISTAGG(<expr>, <separator> ON OVERFLOW …)

The default is on overflow error. The on overflow truncate clause prevents the overflow by only concatenating as many elements as the result type can accommodate. Furthermore, the on overflow truncate clause allows you to specify how to terminate the result:

ON OVERFLOW TRUNCATE [<filler>] WITH[OUT] COUNT

The optional <filler> defaults to three periods ( ... ) and this is added as the last element if truncation happens. If with count is specified, the number of omitted values is put in parentheses and appended to the result.

I have written a more detailed article about listagg . Please keep the type-safe alternatives in mind, which are a better choice in many cases.

Trigonometric and Logarithmic Functions

SQL:2016 introduces trigonometric functions (sin, cos, tan, asin, acos, atan, sinh, cosh, tanh), a general logarithm function (log(<base>, <value>)) and a shortcut for the logarithm with base 10 (log10(<value>)).

Note that the natural logarithm ln(<value>) was introduced with SQL:2003.
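
As a quick illustration, log(2, 16) evaluates to 4, log10(1000) to 3 and sin(0) to 0. The one-row derived table is only there so the query does not need a real table:

SELECT log(2, 16) AS lb, log10(1000) AS lg, sin(0) AS s
  FROM (VALUES (0)) AS dummy(x)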

Polymorphic Table Functions

SQL table functions—standardized with SQL:2003—are functions that return tables. Table functions have to specify the names and types of the columns they return—the so-called row-type—at the time of creation:

CREATE FUNCTION <name> (<parameters>) RETURNS TABLE (c1 NUMERIC(10,2), c2 TIMESTAMP) ...

This function returns a table with two columns: c1 and c2 with the respective types.

Once declared, table functions can be used in the from and join clauses similarly to regular tables.
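
Invoking such a function could then look like the following sketch. The function name get_transactions and its argument are made up; the columns c1 and c2 come from the declaration above:

SELECT t.c1, t.c2
  FROM TABLE(get_transactions(42)) AS t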

Prominent examples of table functions are:

SQL:2016 introduces polymorphic table functions (PTF) that don’t need to specify the result type upfront. Instead, they can provide a describe component procedure that determines the return type at run time. Neither the author of the PTF nor the user of the PTF needs to declare the returned columns in advance.

PTFs as described by SQL:2016 are not yet available in any tested database.

Update 2018: Oracle Database 18c introduced polymorphic table functions with a proprietary syntax. See “Polymorphic Table Functions (PTF)”.

Interested readers may refer to the free technical report “Polymorphic table functions in SQL” released by ISO. The following are some of the examples discussed in the report:

CSVreader, which reads the header line of a CSV file to determine the number and names of the return columns

Pivot (actually unpivot), which turns column groups into rows (example: phonetype , phonenumber )

TopNplus, which passes through N rows per partition and one extra row with the totals of the remaining rows

Personally, I think many of the examples from the technical report could be solved using other SQL features. Implementing the json_table function as a PTF might have been an interesting example.

Miscellaneous Features

Join … using now accepts as (F404):

FROM A JOIN B USING (…) AS correlation_name

There is a new type decfloat[(<precision>)] (T076).

Named arguments in function and procedure invocations ( name => value ) are no longer limited to call statements (T524).

The default clause is allowed for arguments to functions (T525) and inout arguments (T523).
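
The following sketch shows what the two features allow in combination. The function, the orders table and all names are made up for illustration:

CREATE FUNCTION order_total(net NUMERIC, tax_rate NUMERIC DEFAULT 0.2)
  RETURNS NUMERIC
  RETURN net * (1 + tax_rate);

SELECT order_total(net => amount)                   -- tax_rate falls back to its default of 0.2
     , order_total(net => amount, tax_rate => 0.0)  -- named arguments outside a call statement
  FROM orders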

Feature Taxonomy: Obsolete and New Features

For the sake of completeness: part 2 of SQL:2016 introduces no new mandatory features.

The feature T581, “Regular expression substring function”, has been deprecated (“The functionality is essentially subsumed by Feature F844, “SUBSTRING_REGEX””). Note that feature T141, “SIMILAR predicate”, was already deprecated in 2011.

The following table lists the new optional features in ISO/IEC 9075-2:2016 (compared to 2011).