Teradata JSON
Release 15.0
B035-1150-015K
June 2014
The product or products described in this book are licensed products of Teradata Corporation or its affiliates.
Teradata, Active Data Warehousing, Active Enterprise Intelligence, Applications-Within, Aprimo Marketing Studio, Aster, BYNET,
Claraview, DecisionCast, Gridscale, MyCommerce, SQL-MapReduce, Teradata Decision Experts, "Teradata Labs" logo, Teradata
ServiceConnect, Teradata Source Experts, WebAnalyst, and Xkoto are trademarks or registered trademarks of Teradata Corporation or its
affiliates in the United States and other countries.
Adaptec and SCSISelect are trademarks or registered trademarks of Adaptec, Inc.
AMD Opteron and Opteron are trademarks of Advanced Micro Devices, Inc.
Apache, Apache Hadoop, Hadoop, and the yellow elephant logo are either registered trademarks or trademarks of the Apache Software
Foundation in the United States and/or other countries.
Apple, Mac, and OS X all are registered trademarks of Apple Inc.
Axeda is a registered trademark of Axeda Corporation. Axeda Agents, Axeda Applications, Axeda Policy Manager, Axeda Enterprise, Axeda
Access, Axeda Software Management, Axeda Service, Axeda ServiceLink, and Firewall-Friendly are trademarks and Maximum Results and
Maximum Support are servicemarks of Axeda Corporation.
Data Domain, EMC, PowerPath, SRDF, and Symmetrix are registered trademarks of EMC Corporation.
GoldenGate is a trademark of Oracle.
Hewlett-Packard and HP are registered trademarks of Hewlett-Packard Company.
Hortonworks, the Hortonworks logo and other Hortonworks trademarks are trademarks of Hortonworks Inc. in the United States and other
countries.
Intel, Pentium, and XEON are registered trademarks of Intel Corporation.
IBM, CICS, RACF, Tivoli, and z/OS are registered trademarks of International Business Machines Corporation.
Linux is a registered trademark of Linus Torvalds.
LSI is a registered trademark of LSI Corporation.
Microsoft, Active Directory, Windows, Windows NT, and Windows Server are registered trademarks of Microsoft Corporation in the United
States and other countries.
NetVault is a trademark or registered trademark of Dell Inc. in the United States and/or other countries.
Novell and SUSE are registered trademarks of Novell, Inc., in the United States and other countries.
Oracle, Java, and Solaris are registered trademarks of Oracle and/or its affiliates.
QLogic and SANbox are trademarks or registered trademarks of QLogic Corporation.
Quantum and the Quantum logo are trademarks of Quantum Corporation, registered in the U.S.A. and other countries.
Red Hat is a trademark of Red Hat, Inc., registered in the U.S. and other countries. Used under license.
SAS and SAS/C are trademarks or registered trademarks of SAS Institute Inc.
SPARC is a registered trademark of SPARC International, Inc.
Symantec, NetBackup, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the United States
and other countries.
Unicode is a registered trademark of Unicode, Inc. in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and company names mentioned herein may be the trademarks of their respective owners.
The information contained in this document is provided on an "as-is" basis, without warranty of any kind, either express
or implied, including the implied warranties of merchantability, fitness for a particular purpose, or non-infringement.
Some jurisdictions do not allow the exclusion of implied warranties, so the above exclusion may not apply to you. In no
event will Teradata Corporation be liable for any indirect, direct, special, incidental, or consequential damages, including
lost profits or lost savings, even if expressly advised of the possibility of such damages.
The information contained in this document may contain references or cross-references to features, functions, products, or services that are
not announced or available in your country. Such references do not imply that Teradata Corporation intends to announce such features,
functions, products, or services in your country. Please consult your local Teradata Corporation representative for those features, functions,
products, or services available in your country.
Information contained in this document may contain technical inaccuracies or typographical errors. Information may be changed or
updated without notice. Teradata Corporation may also make improvements or changes in the products or services described in this
information at any time without notice.
To maintain the quality of our products and services, we would like your comments on the accuracy, clarity, organization, and value of this
document. Please e-mail: [email protected]
Any comments or materials (collectively referred to as "Feedback") sent to Teradata Corporation will be deemed non-confidential. Teradata
Corporation will have no obligation of any kind with respect to Feedback and will be free to use, reproduce, disclose, exhibit, display,
transform, create derivative works of, and distribute the Feedback and derivative works thereof without limitation on a royalty-free basis.
Further, Teradata Corporation will be free to use any ideas, concepts, know-how, or techniques contained in such Feedback for any purpose
whatsoever, including developing, manufacturing, or marketing products or services incorporating Feedback.
Preface
Purpose
Teradata JSON describes Teradata support for JSON data, including the JSON data type and
the functions and methods available for processing, shredding, and publishing JSON data.
Use Teradata JSON with the other books in the SQL book set.
Audience
Database administrators, application programmers, and other technical personnel
responsible for designing, maintaining, and using Teradata Database will find Teradata
JSON useful.
Prerequisites
If you are not familiar with Teradata Database, you will find it useful to read Introduction to
Teradata, B035-1091 before reading Teradata JSON.
You should be familiar with basic relational database management technology and SQL. You
should also have basic knowledge of programming languages, such as JavaScript, C++, and
so on, and common programming language operations.
Changes to this Book

Teradata Database 15.0
  March 2014: Initial publication.
  April 2014
  June 2014
Additional Information

www.info.teradata.com
www.teradata.com
Product Safety Information

www.teradata.com/TEN/
tays.teradata.com
Teradata Developer Exchange
Teradata Database Optional Features

CHAPTER 1
Teradata JSON
Client Product

CLI

ODBC
  The ODBC specification does not have a unique data type code for JSON. Therefore, the ODBC driver maps the JSON data type to SQL_LONGVARCHAR or SQL_WLONGVARCHAR, which are the ODBC CLOB data types. The metadata clearly differentiates between a Teradata CLOB data type mapped to SQL_LONGVARCHAR and a Teradata JSON data type mapped to SQL_LONGVARCHAR.
  The ODBC driver supports LOB Input, Output, and InputOutput parameters. Therefore, it can load JSON data. The Catalog (Data Dictionary) functions also support JSON.

JDBC
  No support.

Teradata Parallel Transporter (TPT)
  JSON columns are treated exactly like CLOB columns and are subject to the same limitations. JSON columns cannot exceed 16 MB (16,776,192 LATIN characters or 8,388,096 UNICODE characters). TPT accommodates the JSON keyword in the object schema but internally converts it to CLOB. Import and export are both fully supported.

BTEQ

Standalone Utilities
  No support.
For more information about the Teradata Database client products, see the following books:
Teradata Call-Level Interface Version 2 Reference for Mainframe-Attached Systems,
B035-2417
Teradata Call-Level Interface Version 2 Reference for Workstation-Attached Systems,
B035-2418
ODBC Driver for Teradata User Guide, B035-2509
Teradata JDBC Driver User Guide, B035-2403
Teradata Parallel Transporter Reference, B035-2436
Teradata Parallel Transporter User Guide, B035-2445
Basic Teradata Query Reference, B035-2414
Terminology
A JSON document or JSON message is any string that conforms to the JSON format. When
discussing JSON values in the SQL context, JSON documents are referred to as an instance
of the JSON data type or simply as a JSON instance.
A JSON document structured as an object is enclosed in {}. A JSON document
structured as an array is enclosed in []. In the context of SQL, both are JSON data
type instances. For details, see JSON String Syntax.
When we discuss the serialized form (such as the example below), we call it a JSON
document. To describe the structure of the JSON document, we say "a JSON document
structured as an array or object" or simply JSON array and JSON object. The following is an
example of a JSON document.
{
"name": "Product",
"properties": {
"id": {
"type": "number",
"description": "Product identifier",
"required": true
},
"name": {
"type": "string",
"description": "Name of the product",
"required": true
},
"price": {
"type": "number",
"minimum": 0,
"required": true
},
"tags": {
"type": "array",
"items": {
"type": "string"
}
},
"stock": {
"type": "object",
"properties": {
"warehouse": {
"type": "number"
},
"retail": {
"type": "number"
}
}
}
}
}
Standards Compliance
The Teradata JSON data type is compliant with the standard for JSON syntax defined by
IETF RFC 4627. The standard is available at http://www.ietf.org/rfc/rfc4627.txt and
http://www.json.org/.
Syntax

JSON [ ( integer ) ] [ CHARACTER SET { LATIN | UNICODE } ] [ attributes ]

Syntax Elements

integer
A positive integer value that specifies the maximum length in characters of the JSON
type.
If you do not specify a maximum length, the default maximum length for the character
set is used. If specified, the length is subject to a minimum of two characters and cannot
exceed the maximum length allowed for the character set.
For example, if you try to insert data that is larger than the maximum length defined for a
JSON column, or pass data that is larger than what is supported by a JSON parameter, an
error is returned.
JSON data is stored inline and in LOB subtables depending on the size of the data.
1. If the maximum length specified is less than or equal to 64000 bytes (64000 LATIN
characters or 32000 UNICODE characters), the data in all rows of the table is stored
inline.
2. If the maximum length specified exceeds 64000 bytes, rows with less than 4K bytes
of data are stored inline, and rows with more than 4K bytes of data are stored in a LOB
subtable. A table with a column defined in this manner may have some rows stored inline
and some rows in a LOB subtable, depending on data size.
Result: This creates a table with the LATIN character set and the maximum length for that
character set, 16776192 LATIN characters. When a character set is not specified for the JSON
type, the default character set for the user is used. The result for this example assumes the
user has LATIN as the default character set.
Example: Create a table with a JSON type column, with no maximum length specified and
specify UNICODE character set:
CREATE TABLE json_table(id INTEGER, json_j1 JSON CHARACTER SET
UNICODE);
Result: This creates a table with UNICODE character set and a maximum length of that
character set, 8388096 UNICODE characters.
Example: Create a table with a JSON type column, with a maximum length specified and no
character set specified:
CREATE TABLE json_table(id INTEGER, json_j1 JSON(100000));
Result: This creates a table with a maximum length of 100000 LATIN characters. The
result for this example assumes the user has LATIN as the default character set.
Example: Create a table with a JSON type column, with a maximum length specified and
UNICODE character set specified:
CREATE TABLE json_table(id INTEGER, json_j1 JSON(100000) CHARACTER SET
UNICODE);
Result: This creates a table with a maximum length of 100000 UNICODE characters.
Example: Create a table with JSON type columns, with a maximum length specified that
exceeds the allowed length and no character set specified:
CREATE TABLE json_table(id INTEGER, json_j1 JSON(64000), json_j2
JSON(12000));
Result: This fails because the maximum possible amount of data stored in the row could grow
to approximately 76000 bytes. This exceeds the maximum row size, as described in item 1
earlier.
Example: Create a table with JSON type columns, with a maximum length specified and no
character set specified:
CREATE TABLE json_table(id INTEGER, json_j1 JSON(64001), json_j2
JSON(12000));
Result: This succeeds because the maximum possible amount of data stored in the row is
approximately 16000 bytes, which is within the maximum row size. This is because the
json_j1 column has the storage scheme described in item 2 earlier, in which a maximum of
4K bytes is stored in the row.
Example: Error: JSON Value Is Too Large to Store in the Defined JSON
Type
In this example, an error is returned when data being inserted into a JSON column is larger
than the maximum length defined.
The smallJSONTable table in this example has a JSON column with a maximum length of 10
LATIN characters.
CREATE TABLE smallJSONTable(id INTEGER, j JSON(10));
The following INSERT statement succeeds because the data inserted into the JSON column is
less than 10 characters.
INSERT INTO smallJSONTable(1, '{"a":1}');
*** Insert completed. One row added.
*** Total elapsed time was 1 second.
The following INSERT statement fails because '{"a":12345}' is greater than the maximum
length of 10 characters.
INSERT INTO smallJSONTable(1, '{"a":12345}');
*** Failure JSON value is too large to store in the defined JSON
type.
The NEW JSON constructor allocates a JSON type instance. The constructor can be used
without arguments (default constructor) or with arguments to set the value and the
character set of the JSON instance. The resulting JSON instance can be used to insert a JSON
type into a column or as a JSON type argument to a function or method.
Syntax

NEW JSON( [ 'JSON_string' [ , { LATIN | UNICODE } ] ] )
Syntax Elements
'JSON_string'
The value of the resulting JSON type.
The string must conform to JSON syntax as described in JSON String Syntax.
LATIN or UNICODE
The character set of the resulting JSON type.
If you do not specify a character set, the default character set of the user is used.
Rules and Restrictions
When inserting a JSON type into a column or using the JSON type as an argument to a
function or method, you can pass in a string of any size as long as it is less than or equal to
the maximum possible length of the resulting JSON type, which is 16776192 LATIN
characters or 8388096 UNICODE characters. If the data being inserted is longer than the
maximum length specified for a particular JSON instance, an error is reported.
Usage Notes
In the default constructor no arguments are passed to the constructor expression. NEW
JSON() initializes an empty JSON type value with the character set based on the default
character set of the user. The data is set to a null string.
You can append a JSON entity reference to the end of a constructor expression as described
in JSON Entity Reference.
Example: Default JSON Constructor
NEW JSON();
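Example: JSON Constructor with Arguments

A sketch of the constructor with an explicit value and character set, per the syntax above. The table and values here are illustrative, not from the original examples:

```sql
-- Hypothetical table with a UNICODE JSON column.
CREATE TABLE t1 (id INTEGER, j JSON(100) CHARACTER SET UNICODE);

-- NEW JSON with a JSON_string argument and an explicit character set.
INSERT INTO t1 VALUES (1, NEW JSON('{"name":"Cameron"}', UNICODE));
```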
The JSON type can be cast to all other forms of the JSON type.
The JSON type can be cast to a JSON type of a different character set, such as JSON
LATIN from JSON UNICODE.
The JSON type can be cast to and from VARCHAR and CLOB of the same character set:
VARCHAR(32000) CHARACTER SET UNICODE
VARCHAR(64000) CHARACTER SET LATIN
CLOB(8388096) CHARACTER SET UNICODE
CLOB(16776192) CHARACTER SET LATIN
The JSON type can be cast to a CHAR of the same character set:
CHAR(32000) CHARACTER SET UNICODE
CHAR(64000) CHARACTER SET LATIN
The casting functionality can be implicitly invoked, and the format of the data cast to/from
in the text conforms to JSON syntax.
If any truncation occurs as a result of the cast, an error is reported.
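The casting rules above can also be exercised explicitly; a minimal sketch, assuming a json_table like the ones created earlier (table and column names are illustrative):

```sql
-- JSON to VARCHAR of the same character set.
SELECT CAST(json_j1 AS VARCHAR(100)) FROM json_table;

-- String to JSON; the text must conform to JSON syntax.
SELECT CAST('{"a":1}' AS JSON(100));
```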
Teradata JSON
17
This example shows an SQL definition for a UDF using parameter style SQL with a JSON
parameter, and a C function that shows the corresponding parameter list.
/* Parameter Style SQL */
CREATE FUNCTION MyJSONUDF (a1 JSON(100))
RETURNS VARCHAR(100)
NO SQL
PARAMETER STYLE SQL
DETERMINISTIC
LANGUAGE C
EXTERNAL NAME 'CS!MyJSONUDF!MyJSONUDF.c!F!MyJSONUDF';
/* C source file name: MyJSONUDF.c */
void MyJSONUDF (
    JSON_HANDLE   *json_handle,
    VARCHAR_LATIN *result,
    int           *indicator_ary,
    int           *indicator_result,
    char           sqlstate[6],
    SQL_TEXT       extname[129],
    SQL_TEXT       specific_name[129],
    SQL_TEXT       error_message[257])
{
    /* body function */
}
This example shows an SQL definition for a UDF using parameter style TD_GENERAL with
a JSON parameter, and a C function that shows the corresponding parameter list.
/* Parameter Style TD_GENERAL */
CREATE FUNCTION MyJSONUDF2 (a1 JSON(100))
RETURNS VARCHAR(100)
NO SQL
PARAMETER STYLE TD_GENERAL
DETERMINISTIC
This example shows an SQL UDF with a RETURN expression that evaluates to JSON type.
CREATE FUNCTION New_Funct ()
RETURNS JSON(100)
LANGUAGE SQL
CONTAINS SQL
DETERMINISTIC
SPECIFIC New_Funct
SQL SECURITY DEFINER
COLLATION INVOKER
INLINE TYPE 1
RETURN (new JSON('[1,2,3]'));
This example shows an SQL definition for an external stored procedure using parameter
style SQL with an IN JSON parameter, and a C function that shows the corresponding
parameter list.
/* Parameter Style SQL */
CREATE PROCEDURE myJSONXSP( IN a JSON(100),
OUT phonenum VARCHAR(100) )
LANGUAGE C
NO SQL
PARAMETER STYLE SQL
EXTERNAL NAME 'CS!myJSONXSP!myJSONXSP.c';
/* C source file name: myJSONXSP.c */
void myJSONXSP (JSON_HANDLE *json_handle,
VARCHAR_LATIN *result,
This example shows an SQL definition for an external stored procedure using parameter style
TD_GENERAL with an IN JSON parameter, and a C function that shows the corresponding
parameter list.
/* Parameter Style TD_GENERAL */
CREATE PROCEDURE myJSONXSP2 ( IN a1 JSON(100),
OUT phonenum VARCHAR(100))
NO SQL
PARAMETER STYLE TD_GENERAL
LANGUAGE C
EXTERNAL NAME 'CS!myJSONXSP2!myJSONXSP2.c!F!myJSONXSP2';
/* C source file name: myJSONXSP2.c */
void myJSONXSP2 (
    JSON_HANDLE   *json_handle,
    VARCHAR_LATIN *result,
    char           sqlstate[6])
{
    /* body function */
}
You can create a stored procedure containing one or more parameters that are a JSON type.
You can use the JSON type to define IN, OUT, or INOUT parameters.
Example: Stored Procedures with JSON Type Parameters
The following statement defines a stored procedure with an IN parameter that is a JSON type
with a maximum length of 100.
CREATE PROCEDURE my_tdsp1 (IN p1 JSON(100))
...
;
The following statement defines a stored procedure with an OUT parameter that is a JSON
type with a maximum length of 100.
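The OUT case presumably mirrors the IN case; a hedged sketch (the procedure name my_tdsp2 is hypothetical):

```sql
CREATE PROCEDURE my_tdsp2 (OUT p1 JSON(100))
...
;
```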
You can create a stored procedure containing one or more local variables that are JSON data
type. You can use the DEFAULT clause with the declaration of these local variables:
DEFAULT NULL
DEFAULT value
If you specify DEFAULT value, you can use a JSON constructor expression to initialize the
local variable. See About JSON Type Constructor.
You can use the stored procedure semantics, including cursors, to access a particular value,
name/value pair, or nested documents structured as objects or arrays. You can use this with
the JSON methods and functions that allow JSONPath request syntax for access to specific
portions of a JSON instance.
Example: Stored Procedures with JSON Type Local Variables
The following stored procedure contains local variables, local1 and local2, that are of JSON
type with a maximum length of 100. The local1 variable is initialized using the NEW JSON
constructor method.
CREATE PROCEDURE TestJSON_TDSP1
(IN id INTEGER, OUT outName VARCHAR(20))
BEGIN
DECLARE local1 JSON(100) DEFAULT
NEW JSON('{"name":"Cameron", "age":24}');
DECLARE local2 JSON(100);
...
END;
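A local JSON variable such as local1 above can then be probed for a specific value inside the procedure body; a hedged sketch, assuming the JSONExtractValue method that this book describes for JSONPath-style access:

```sql
-- Inside the procedure body, after the declarations above.
SET outName = local1.JSONExtractValue('$.name');
```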
The example shows an SQL definition for a UDM with a JSON parameter, and a C function
that defines the method.
/* Parameter style SQL: */
CREATE INSTANCE METHOD JSONMethod (p1 JSON(100))
FOR Some_UDT
RETURNS INTEGER
/* C source file name: JSONMethod.c */
void JSONMethod (
    UDT_HANDLE  *someUdt,
    JSON_HANDLE *jsonval,
    INTEGER     *result,
    int         *indicator_this,
    int         *indicator_aryval,
    int         *indicator_result,
    char         sqlstate[6],
    SQL_TEXT     extname[129],
    SQL_TEXT     specific_name[129],
    SQL_TEXT     error_message[257])
{
    /* body function */
}
The example shows an SQL definition for a UDM using parameter style TD_GENERAL with
a JSON parameter, and a C function that defines the method.
/* Parameter style TD_GENERAL */
CREATE INSTANCE METHOD JSONMethod (p1 JSON(100))
FOR Some_UDT
RETURNS INTEGER
FOR Some_UDT
NO SQL
PARAMETER STYLE TD_GENERAL
DETERMINISTIC
LANGUAGE C
EXTERNAL NAME 'CS!JSONMethod!JSONMethod.c!F!JSONMethod';
/* C source file name: JSONMethod.c */
void JSONMethod (
UDT_HANDLE
JSON_HANDLE
INTEGER
char
{
/* body function
}
22
*someUdt,
*jsonval,
*result,
sqlstate[6])
*/
Teradata JSON
This example shows how to create a structured user-defined type (UDT) which has a JSON
type attribute. The routines in this example are created in the SYSUDTLIB database.
Therefore, the user must have the UDTMETHOD privilege on the SYSUDTLIB database.
SQL Definition
This section shows the SQL DDL statements necessary to create the structured UDT.
Create a Structured UDT with a JSON Attribute
The following statement creates a structured UDT named judt with a JSON attribute named
Att1. The maximum length of the JSON attribute is 100000 characters, and the character set
of the JSON attribute is UNICODE.
CREATE TYPE judt AS (Att1 JSON(100000) CHARACTER SET UNICODE) NOT FINAL
CONSTRUCTOR METHOD judt (p1 JSON(100000) CHARACTER SET UNICODE)
RETURNS judt
SELF AS RESULT
SPECIFIC judt_cstr
LANGUAGE C
PARAMETER STYLE TD_GENERAL
RETURNS NULL ON NULL INPUT
DETERMINISTIC
NO SQL;
The following statement associates the tosql and fromsql transform routines with the judt
UDT.
CREATE TRANSFORM FOR judt
judt_io (TO SQL WITH SPECIFIC FUNCTION SYSUDTLIB.judt_tosql,
FROM SQL WITH SPECIFIC FUNCTION SYSUDTLIB.judt_fromsql);
C Source Files
This section shows the C code for the methods and functions created in the previous section.
This is just sample code so there is no meaningful logic in the tosql or Ordering functions.
However, based on the examples for the Constructor and fromsql routines, you can enhance
the previous routines to perform the necessary functions.
judt_constructor.c
#define SQL_TEXT Latin_Text
#include <sqltypes_td.h>
judt_fromsql.c
#define SQL_TEXT Latin_Text
#include <sqltypes_td.h>
#include <string.h>
#define buffer_size 200000
void judt_fromsql (
    UDT_HANDLE         *udt,
    LOB_RESULT_LOCATOR *result,
    int                *inNull,
    int                *outNull,
    char                sqlstate[6])
{
    /* body function */
}
judt_tosql.c
#define SQL_TEXT Latin_Text
#include <sqltypes_td.h>
#include <string.h>
void judt_tosql (
    LOB_LOCATOR *p1,
    UDT_HANDLE  *result,
    char         sqlstate[6])
{
/* Using the LOB FNC routines, read from 'p1' and load the data
into the JSON attribute, depending on its length. See judt_cstr() for
an example of loading the JSON attribute. */
}
judt_order.c
#define SQL_TEXT Latin_Text
#include "sqltypes_td.h"
#define buffer_size 512
void judt_order (
    UDT_HANDLE *UDT,
    INTEGER    *result,
    int        *indicator_udt,
    int        *indicator_result,
    char        sqlstate[6],
    SQL_TEXT    extname[129],
    SQL_TEXT    specific_name[129],
    SQL_TEXT    error_message[257])
{
/* Read out as much data as necessary, using either
FNC_GetStructuredAttribute or FNC_GetStructuredInputLobAttribute + LOB
FNC routines, following the example in judt_fromsql. Then use this data
to make the determination about the value of this instance in terms of
ordering. */
}
Result:

 id  j1
 --- ------------------
   1 {"name":"Cameron"}
   2 {"name":"Melissa"}
FNC_GetInternalValue
FNC_GetJSONInfo
FNC_GetJSONInputLob
FNC_GetJSONResultLob
FNC_SetInternalValue
Use the JSON_HANDLE C data type to pass a JSON type instance as an argument to an
external routine. Similarly, use JSON_HANDLE to return a JSON type result from an
external routine. JSON_HANDLE is defined in sqltypes_td.h as:

typedef int JSON_HANDLE;
You can specify the JSON data type as an attribute of a structured UDT. When passing a
structured UDT that includes a JSON attribute to or from an external routine, you can use
the following interface functions to access or set the value of the JSON attribute, or get
information about a JSON attribute.
FNC_GetStructuredAttribute
FNC_GetStructuredAttributeInfo and FNC_GetStructuredAttributeInfo_EON
FNC_GetStructuredInputLobAttribute
FNC_GetStructuredResultLobAttribute
FNC_SetStructuredAttribute
For details about the JSON type interface functions, see SQL External Routine Programming,
B035-1147.
Syntax

JSON document:
  object | array

object:
  { } | { name/value pair [ , name/value pair ... ] }

name/value pair:
  "string" : value

value:
  "string" | number | object | array | true | false | null

array:
  [ ] | [ value [ , value ... ] ]
Syntax Elements
object
An unordered collection of name/value pairs.
array
An ordered sequence of values.
name/value pair
An attribute and its associated value.
string
A JSON string literal enclosed in double quotation marks.
number
A JSON numeric literal.
CHAPTER 2
For details about CREATE TABLE and ALTER TABLE, see SQL Data Definition Language Syntax and Examples, B035-1144.
You can use TD_LZ_COMPRESS to compress JSON data; however, Teradata recommends
that you use JSON_COMPRESS instead because the JSON_COMPRESS function is
optimized for compressing JSON data.
JSON_COMPRESS and JSON_DECOMPRESS can be used to compress JSON type columns
only. These functions cannot be used to compress columns of other data types.
You cannot create your own compression and decompression user-defined functions to
perform algorithmic compression on JSON type columns. You must use the functions
previously listed.
Note: Using ALC together with block-level compression (BLC) may degrade performance, so
this practice is not recommended. For more information on compression use cases and
examples, see Teradata Orange Book Block-Level Compression in Teradata. Access Orange
Books through Teradata @ Your Service: http://tays.teradata.com.
For more information about compression functions, see SQL Functions, Operators,
Expressions, and Predicates, B035-1145.
For information about the COMPRESS and DECOMPRESS phrases, see SQL Data Types and
Literals, B035-1143.
Example: JSON_COMPRESS and JSON_DECOMPRESS Functions
In this example, the JSON data in the "json_col" column is compressed using the
JSON_COMPRESS function. The compressed data is uncompressed using the
JSON_DECOMPRESS function.
CREATE TABLE temp (
  id       INTEGER,
  json_col JSON(1000) CHARACTER SET LATIN
           COMPRESS USING JSON_COMPRESS
           DECOMPRESS USING JSON_DECOMPRESS);
The following shows a sample USING clause generated by the TPT SQL Inserter operator:
USING COL1_INT(INTEGER),
COL2_CHAR(CHAR(10)),
COL3_JSON(CLOB(1000)),
COL4_BIGINT(BIGINT),
COL5_JSON(CLOB(16776192) AS DEFERRED),
COL6_VARCHAR(VARCHAR(20))
INSERT INTO target_table
VALUES (:COL1_INT, :COL2_CHAR, :COL3_JSON, :COL4_BIGINT,
:COL5_JSON, :COL6_VARCHAR);
You cannot load JSON data using FastLoad, MultiLoad, or FastExport protocols using either
the legacy stand-alone load tools or Parallel Transporter. However, if the JSON data is less
than 64 KB, and if the target table defines the column as CHAR or VARCHAR, then you can
load the JSON data using these utilities. Similarly, you can load the JSON data into a staging
table with the columns defined as CHAR or VARCHAR, and then use INSERT-SELECT to
load the data into JSON columns of a target table.
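The staging-table approach described above can be sketched as follows (table and column names are illustrative):

```sql
-- Stage the JSON text in a VARCHAR column, which the FastLoad/MultiLoad
-- protocols can populate (the JSON text must be less than 64 KB).
CREATE TABLE json_stage (id INTEGER, j_txt VARCHAR(32000) CHARACTER SET LATIN);

-- ... load json_stage with a standard load utility ...

-- Move the staged text into the JSON column of the target table.
INSERT INTO json_target (id, j_col)
SELECT id, j_txt FROM json_stage;
```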
For more information about the TPT load utility, see the following books:
Teradata Parallel Transporter Reference, B035-2436
Teradata Parallel Transporter User Guide, B035-2445
The example creates a table with a JSON column, allocates and initializes a JSON instance
using the JSON constructor, then inserts the JSON and integer values into the table.
CREATE TABLE my_table (eno INTEGER, edata JSON(100));
INSERT INTO my_table VALUES(1,
NEW JSON('{"name" : "Cameron", "age" : 24}'));
The example inserts a JSON string into a table that contains a JSON column.
INSERT INTO my_table VALUES(2,
'{"name" : "Cameron", "age" : 24}');
The example creates two tables, then inserts JSON data into the second table from the first
table.
CREATE TABLE my_table (eno INTEGER, edata JSON(100));
CREATE TABLE my_table2 (eno INTEGER, edata JSON(20));
INSERT INTO my_table VALUES(1,
NEW JSON('{"name" : "Cam"}'));
INSERT INTO my_Table2
SELECT * FROM my_table;
Note: If the JSON data is too large to fit in the column, an error is reported.
INSERT INTO my_table VALUES(1,
NEW JSON('{"name" : "Cameron", "age" : 24}'));
INSERT INTO my_Table2
SELECT * FROM my_table;
*** Failure 7548: Data too large for this JSON instance.
VARCHAR columns.
2. Verify that the intended JSON data is well-formed and conforms to the rules of JSON
formatting.
3. Create new versions of the tables using the JSON type for columns that will hold the
JSON data.
4. Insert the JSON text (for example, the JSON constructor or string) into the JSON
columns.
   JSON cannot be loaded with FastLoad, MultiLoad, or FastExport due to restrictions on
   all LOB types.
Syntax

JSON_expr{ .Object Member | Array Element }...

Object Member:
  nonreserved_word | "string_literal"

Array Element:
  [ integer ]...
Syntax Elements
JSON_expr
An expression that evaluates to a JSON data type.
Object Member
A Teradata nonreserved word or a string literal enclosed in double quotation marks.
integer
A positive integer value that is within the range of the JSON instance being traversed.
Return Value
When the desired entity is found, the resulting data is a VARCHAR value. If more than one
entity is found which satisfies the specified syntax, the result is a VARCHAR representation
of a JSON array composed of the entities found.
The default length of the return value is 4096 characters, but you can use the
JSON_AttributeSize DBS Control field to change the default length. An error is returned if
the result exceeds the specified length. The maximum length that can be set for the return
value is 32000 characters.
For details about the JSON_AttributeSize DBS Control field, see Utilities, B035-1102.
If the entity is not found or if the JSON instance being referenced is null, a Teradata NULL
value is returned.
If the result is an empty string, an empty string is returned.
Restrictions
You cannot use a JSON entity reference in the target portion of a SET clause because you
cannot update the entities of a JSON instance.
The following logic is employed to differentiate between a standard column reference in the
format table.column and a reference to an entity in a JSON instance in the format
name1.name2.
1. If standard resolution is successful, the reference is interpreted as a standard
table.column reference.
2. Otherwise, if there is one source table with a JSON column named name1, the reference
is interpreted as a reference to an entity called name2 on a JSON column called name1.
If there is more than one source table with a JSON column named name1, an ambiguity
error is returned.
3. If there are no source tables with a JSON column named name1, an error is returned.
Column Reference with a Specified Table and Database
The following logic is employed to differentiate between a standard column reference in the
format database.table.column and a reference to an entity in a JSON instance in the format
name1.name2.name3...nameN.
1. If standard resolution is successful and there are more than 3 names, the reference is
interpreted as a reference to an entity called name4...nameN on a JSON column called
name1.name2.name3. An error is returned if the column is not a JSON column.
If there are 3 names, the reference is interpreted as a standard database.table.column
reference.
2. Otherwise, if standard resolution is not successful, the standard disambiguation logic is
used as follows:
a. If a source table named name1 exists and it has a JSON column named name2, the
reference is interpreted as a reference to an entity called name3...nameN on a JSON
column called name1.name2. Otherwise, an error is returned indicating that column
name2 was not found.
b. If there is one source table with a JSON column named name1, the reference is
interpreted as a reference to an entity called name2...nameN on a JSON column
called name1 that is present in one source table.
If there is more than one source table with a JSON column named name1, an
ambiguity error is returned.
c. If there is a table in the current database named name1 with a JSON column named
name2, the reference is interpreted as a reference to an entity called name3...nameN
on a JSON column called CurrentDatabase.name1.name2.
d. Otherwise, processing continues with standard error handling.
Result:
jsonCol.name
------------
Cameron
Example
SELECT jsonCol.numbers
FROM test.jsonTable
WHERE id=1;
Result:
jsonCol.numbers
--------------[1, 2, 3, [1, 2]]
Example
SELECT jsonCol.numbers[1]
FROM test.jsonTable
WHERE id=1;
Result:
jsonCol.numbers[1]
------------------
2
Example
SELECT new JSON('{"name" : "Cameron"}').name;
Result:
new JSON('{"name" : "Cameron"}').name
-------------------------------------
Cameron
Example
SELECT jsonCol.name
FROM test.jsonTable
WHERE id=2;
Result:
Example
SELECT id, jsonCol.numbers
FROM test.jsonTable
WHERE jsonCol.name='Cameron'
ORDER BY id;
Result:
id  jsonCol.numbers
--  -----------------
1   [1, 2, 3, [1, 2]]
Example
SELECT id, jsonCol.numbers
FROM test.jsonTable
WHERE id < 3
ORDER BY id;
Result:
id  jsonCol.numbers
--  -----------------
1   [1, 2, 3, [1, 2]]
2   ?
/* There are no numbers in this JSON */
Result:
id  jsonCol
--  -------
1   {"name" : "Cameron", "numbers" : [1,2,3,[1,2]]}
2   {"name" : "Cameron", "name" : "Lewis"}
Example
In the following query, jsonCol.name is interpreted as a JSON entity reference.
SELECT jsonCol.name
FROM test.jsonTable
WHERE id=1;
Result:
Example
The following query returns an error because there is more than one source table with a
JSON column named jsonCol.
SELECT jsonCol.name
FROM test.jsonTable, test.jsonTable2;
Result:
*** Failure 3809 Column 'jsonCol' is ambiguous.
Example
The following query shows a JSON entity reference specified as a fully qualified column
reference.
SELECT id, test.jsonTable.jsonCol.name
FROM test.jsonTable
WHERE id=1;
Result:
id  jsonTable.jsonCol.name
--  ----------------------
1   Cameron
Example
The following shows an incorrect JSON entity reference specified as a fully qualified column
reference.
SELECT test.jsonTable.id.name
FROM test.jsonTable
WHERE id=1;
Example
In the following query, jsonTable.jsonCol.name is a JSON entity reference that looks
like a database.table.column reference.
SELECT id, jsonTable.jsonCol.name
FROM test.jsonTable
WHERE id=1;
Result:
id  jsonTable.jsonCol.name
--  ----------------------
1   Cameron
Example
The following shows an incorrect JSON entity reference.
SELECT jsonTable.id.name
FROM test.jsonTable
WHERE id=1;
Result:
*** Failure 3802 Database 'jsonTable' does not exist.
Example
In the following query, jsonCol.name."first" is interpreted as an entity reference on the
jsonCol column of the source table, test.jsonTable.
SELECT T.id, jsonCol.name."first"
FROM test.jsonTable T, test.jsonTable3 T3
ORDER BY T.id;
Result:
id  jsonCol.name.first
--  ------------------
1   ?
2   ?
3   Cameron
Example
In the following query, the reference to jsonCol is ambiguous because both source tables
have a JSON column named jsonCol.
SELECT T.id, jsonCol.name."first"
FROM test.jsonTable T, test.jsonTable2 T2
ORDER BY T.id;
Example
In this example, jsonTable2 is in the current database and it has a JSON column called
jsonCol, so jsonTable2.jsonCol.name is interpreted as a JSON entity reference.
SELECT jsonTable2.id, jsonTable2.jsonCol.name
FROM test.jsonTable3;
Result:
id  jsonCol.name
--  ------------
1   Cameron
Syntax
The following shows the JSONPath syntax as specified in the JSONPath Specification located
at http://goessner.net/articles/JsonPath/.
JSONPath := '$' children...
children := ( '.' | '..' ) child_specification
child_specification := '*' | name_string [ options... ]
options := index | expression | filter
index := '*'
       | integer [ ',' integer ... ]
       | [ integer ] ':' [ integer ] [ ':' integer ]
expression := '(' @.LENGTH [ ( '+' | '-' | '*' | '/' ) integer ] ')'
filter := '?(' @.element_string ( number_comparison | '=~' string ) ')'
number_comparison := ( '<' | '<=' | '>' | '>=' | '==' | '!=' ) integer
Syntax Elements
$
The root object or element.
children
The descent operator ('.' or '..') followed by a child specification or options.
child specification
The wildcard character ('*') which matches all objects or elements.
A string specifying the name of a particular object or element and associated options
if needed.
options
An index, an expression, or a filter.
integer
A signed integer.
expression
An arithmetic expression over @.LENGTH, for example (@.LENGTH-1). In this context,
LENGTH is the length of the current JSON array, equal to the number of elements in
the array.
@
The current object or element.
filter
Applies a filter (script) expression.
element_string
A string specifying the name of an element.
=~ string
String comparison expression.
JSONPath: $
Description: The root object/element.
Example: $.customer
Result: CustomerName

JSONPath: @
Description: The current object/element.
Example: $.items[(@.length-1)]
Explanation of Example: The last item in the order.
Result: {"ID":8,"name":"pen","amt":80}
The use of the 'length' keyword in this context is interpreted as the length of the
current JSON array and is treated as a property of the JSON array. This is only
interpreted in this manner if 'length' occurs immediately after the '@.' syntax. If the
word 'length' is found later in the expression (for example, '@.firstChild.length'), it is
interpreted as the name of a child of some entity, not as a property of that entity.

JSONPath: ..
Description: Recursive descent.
Example: $..name
Result: ["disk","RAM","monitor","keyboard","camera","button","mouse","pen"]

JSONPath: *
Description: Wildcard. All objects/elements regardless of their names.
Example: $.items[0].*

JSONPath: []
Description: Subscript operator.
Example: $.items[0]
Result: {"ID":1,"name":"disk","amt":10}

JSONPath: [start,end]
Description: Name or index list.
Example: $.items[0,1]
Result: [{"ID":1,"name":"disk","amt":10}, {"ID":2,"name":"RAM","amt":20}]

JSONPath: [start:end:step]
Description: Index slice.
Example: $.items[0:4:2]

JSONPath: ?()
Description: Filter.
Example: $.items[?(@.amt<50)]
Result: [{"ID":1,"name":"disk","amt":10}, {"ID":2,"name":"RAM","amt":20},
{"ID":3,"name":"monitor","amt":30}, {"ID":4,"name":"keyboard","amt":40}]

JSONPath: ()
Description: Script expression, using the underlying script engine.
Example: $.items[(@.length-1)]
Explanation of Example: The last item in the order.
Result: {"ID":8,"name":"pen","amt":80}
Note: You cannot include a JSON column in the ORDER BY, HAVING or GROUP BY
clauses of a SELECT statement.
In the SELECT or WHERE clause, you can add a JSON entity reference to the end of a
column reference or any expression which evaluates to a JSON type.
Result:
'{"name" : "Justin", "phoneNumber" : 8584852611}'
'{"name" : "Cameron", "phoneNumber" : 8584852612}'
Result:
eno  edata
---  -----
1    '{"name" : "Cameron", "phoneNumber" : 8584852612}'
Note: The search is optimized in that it does not always need to search the entire document.
However, this means that the following scenario is possible.
1. Disable JSON validation.
2. Insert malformed JSON data. For example, something similar to this:
{"name":"Cameron" 123456}
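When validation has been disabled, a candidate string can still be checked explicitly before its search results are trusted; a sketch using the JSON_CHECK function described later in this book:

```sql
/* JSON_CHECK reports whether a string is valid JSON syntax. */
SELECT JSON_CHECK('{"name":"Cameron" 123456}');  /* malformed: error message */
SELECT JSON_CHECK('{"name":"Cameron"}');         /* valid JSON */
```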
Result: To see that the name has been updated, run: SELECT edata FROM my_table;
edata
-----
{"name" : "George"}
To see the result of the query, run: SELECT * FROM my_table ORDER BY eno;
eno  edata
---  -----
1    {"name" : "Kevin", "phoneNumber" : 8584852613}
2    {"name" : "Cameron", "phoneNumber" : 8584852612}
To see the result of the query, run: SELECT * FROM my_table ORDER BY eno;
eno  edata
---  -----
1    {"name" : "Kevin", "phoneNumber" : 8584852613}
2    {"name" : "Cameron", "phoneNumber" : 8584852612}
2    {"name" : "Mike", "phoneNumber" : 8584852614}
Result:
*** Failure 3513 JSON Abort.
Methods
Combine: This method takes two JSON documents and combines them into a JSON
document structured as an array or as an object.
ExistValue: This method allows you to specify a name or path in JSONPath syntax to
determine if that name or path exists in the JSON document.
JSONExtract: The JSONExtract method extracts data from a JSON instance. The desired
data is specified in a JSONPath expression. The result is a JSON array composed of the
values found, or NULL if there are no matches.
JSONExtractValue: This method allows you to retrieve the text representation of the
value of an entity in a JSON instance, specified using JSONPath syntax.
JSONExtractValue returns a VARCHAR. The returned VARCHAR length defaults to 4K,
but this can be increased to 32000 characters (not bytes) in DBS Control using the
JSON_AttributeSize flag.
JSONExtractLargeValue: This method is the same as JSONExtractValue, except for the
return size and type. For LOB based JSON objects this method returns a CLOB of
16776192 characters for CHARACTER SET LATIN or 8388096 characters for
CHARACTER SET UNICODE.
Functions
ARRAY_TO_JSON: This function allows any Teradata ARRAY type to be converted to a
JSON type composed of an array.
GeoJSONFromGeom: This function converts an ST_Geometry object into a JSON
document that conforms to the GeoJSON standards.
GeomFromGeoJSON: This function converts a JSON document that conforms to the
GeoJSON standards into an ST_Geometry object.
JSON_AGG: This aggregate function takes a variable number of input parameters and
packages them into a JSON document.
JSON_COMPOSE: This scalar function takes a variable number of input parameters and
packages them into a JSON document. This function provides a complex composition of
a JSON document when used in conjunction with the JSON_AGG function.
JSON_CHECK: This function checks a string for valid JSON syntax and provides an
informative error message about the cause of the syntax failure if the string is invalid.
Table Operators
JSON_KEYS: This table operator parses a JSON instance, from either CHAR or
VARCHAR input and returns a list of key names.
JSON_TABLE: This table operator creates a temporary table based on all, or a subset, of
the data in a JSON object.
CHAPTER 3
JSON Methods
Combine
Purpose
The Combine method takes two JSON documents (specified by the JSON expression),
combines them, and returns a JSON document structured as an array or structured as an
object.
Syntax
JSON_expr.Combine ( JSON_expr [, 'ARRAY' | 'OBJECT' ] )
Syntax Elements
JSON_expr
An expression that evaluates to a JSON data type; for example, this can be a column in
a table, a constant, or the result of a function or method.
'ARRAY' or 'OBJECT'
Optional. Explicitly specifies the result type as 'ARRAY' or 'OBJECT'.
If 'ARRAY' is specified, the result is a JSON document structured as an array. If
'OBJECT' is specified, the result is a JSON document structured as an object.
When specifying 'OBJECT', both JSON documents being combined must be JSON
objects; otherwise an error is reported.
If the character set is LATIN, the maximum length is 16776192 characters. This is
equivalent to 16776192 bytes, which is the maximum possible length for the JSON type.
A shorter maximum length can be specified wherever the JSON type is used, for example, as
the data type of a table column or as a parameter to a function.
If the result of the Combine method is used to insert into a column of a table or as a
parameter to a UDF, UDM, external stored procedure (XSP), Teradata Stored Procedure
(TDSP), and so on, the resulting length is subject to the length of that defined JSON instance.
If it is too large, an error will result.
Usage Notes
The result of combining two JSON documents is a JSON document containing data from
both the input documents. The resulting document is structured as a JSON object or a JSON
array. The JSON documents being combined can be structured differently than each other;
for example, a JSON document structured as an array can be combined with a JSON
document structured as an object and the resulting combination is structured as a JSON
array. The following explains the result of various combinations.
If the optional 'ARRAY' parameter is specified in the command:
If both input JSON documents are structured as JSON arrays, the result is an array
with values from each JSON document. For example, if j1 = [1,2] and j2 = [3,4], the
combination is [1,2,3,4].
If one of the JSON documents is structured as a JSON array and the other is structured
as a JSON object, the result is a JSON array composed of all the elements of the JSON
document structured as an array, plus one more element, which is the entire JSON
document structured as an object (the second document). For example, if j1 = [1,2]
and j2 = {"name" : "Jane"}, the combination is [1,2, { "name" : "Jane" } ].
If both JSON documents are structured as JSON objects, the result is a JSON
document structured as an array composed of JSON documents structured as objects
from each JSON object being combined. For example, if j1 = { "name" : "Harry" } and j2
= { "name" : "Jane" }, the combination is [ { "name" : "Harry" }, {"name" : "Jane"} ].
If the optional 'OBJECT' parameter is specified, the result is a combined JSON document
structured as an object containing all the members of each input object. For example, if j1
= { "name" : "Jane" , "age" : "30" } and j2 = { "name" : "Harry", "age" : "41" }, the combination
is { "name" : "Jane" , "age" : "30" , "name" : "Harry" , "age" : "41" }.
If either JSON document being combined is structured as an array, an error is reported.
If 'ARRAY' or 'OBJECT' is not specified in the command:
If both JSON documents are structured as JSON arrays, the result is a JSON document
structured as an array with values from each JSON document.
If either JSON document is structured as an array, the result is a JSON document
structured as an array, which is composed of all the elements of the JSON document
which is an array, plus one more element which is the entire JSON document
structured as an object; this is the same behavior as specifying 'ARRAY'.
If both JSON documents are structured as objects, the result is a combined JSON
document structured as an object containing all the members of each input object; this
is the same behavior as specifying 'OBJECT'.
If one of the JSON documents being combined is NULL, the result is the non-NULL JSON
document.
If both JSON documents being combined are NULL, the result is a Teradata NULL value.
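A sketch of the NULL rules above (illustrative literal values; the CAST form is only one way to produce a typed NULL):

```sql
/* One NULL operand: the result is the non-NULL document, [1,2]. */
SELECT NEW JSON('[1,2]').Combine(CAST(NULL AS JSON));

/* Both operands NULL: the result is a Teradata NULL. */
SELECT CAST(NULL AS JSON).Combine(CAST(NULL AS JSON));
```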
If one of the JSON documents being combined has CHARACTER SET UNICODE, the
resulting JSON instance has text in CHARACTER SET UNICODE.
If both JSON documents being combined have text in CHARACTER SET LATIN, the
resulting JSON instance has text in CHARACTER SET LATIN.
Result: When combining LATIN and UNICODE, the result is a JSON instance with text in
CHARACTER SET UNICODE.
edata.Combine(edata2)
--------------------{"name" : "Cameron", "name" : "Lewis"}
{"name" : "Melissa", "name" : "Lewis"}
Result: When combining UNICODE and UNICODE, the result is a JSON instance with text
in CHARACTER SET UNICODE.
edata.Combine(edata)
--------------------
{"name" : "Cameron","name" : "Cameron"}
{"name" : "Cameron","name" : "Melissa"}
{"name" : "Melissa","name" : "Cameron"}
{"name" : "Melissa","name" : "Melissa"}
Result: The result is a JSON instance with text in CHARACTER SET LATIN.
edata2.Combine(edata2)
----------------------
{"name" : "Lewis","name" : "Lewis"}
{"name" : "Lewis","name" : "Lewis"}
{"name" : "Lewis","name" : "Lewis"}
{"name" : "Lewis","name" : "Lewis"}
Result: The result of combining a JSON array with a JSON object is implicitly a JSON
ARRAY.
edata.Combine(edata2)
---------------------
[1,2,3, {"name" : "Lewis"}]
[1,2,3, {"name" : "Lewis"}]
[1,2,3, {"name" : "Lewis"}]
[1,2,3, {"name" : "Lewis"}]
Example: Combine Two JSON ARRAYs and Explicitly State the Result Is a
JSON ARRAY
The following examples show how to use the JSON Combine method with the 'ARRAY'
parameter to explicitly set the result to a JSON array.
The example specifies the 'ARRAY' parameter. Note, the data being combined are both JSON
array instances, so the result is implicitly a JSON ARRAY even if the 'ARRAY' parameter is
not specified.
Note: The example uses the table(s) created earlier.
/* Explicit statement of result type as JSON ARRAY. */
SELECT edata.Combine(edata, 'ARRAY') FROM my_table WHERE eno=3;
Example: Combine a JSON ARRAY and a JSON Expression and Specify the
Result Is a JSON ARRAY
Combine a JSON array and an expression that evaluates to a JSON array and specify
'ARRAY' as the result. Note, the result is implicitly a JSON array even if the 'ARRAY'
parameter is not specified.
Note: The example uses the table(s) created earlier.
/* Explicit statement of result type as JSON 'ARRAY' */
SELECT edata.Combine(NEW JSON('[1,2,[3,4]]'), 'ARRAY') FROM my_table
WHERE eno=3;
Result: The result is an array of several elements, with the last element an array itself.
edata.Combine(NEW JSON('[1,2,[3,4]]'))
--------------------------------------
[1,2,3,1,2,[3,4]]
Example: Error - Combine Two JSON ARRAYs and State the Result Is a
JSON OBJECT
The example specifies the 'OBJECT' parameter; however, the data being combined are both
JSON array instances, so the result must be a JSON ARRAY. This example results in an error
because 'OBJECT' was specified.
Note: The example uses the table(s) created earlier.
/*Error case*/
SELECT edata.Combine(edata2, 'OBJECT') FROM my_table WHERE eno=3;
The example results in an error similar to this: *** Failure 7548: Invalid options
for JSON Combine method.
ExistValue
Purpose
The ExistValue method determines if a name represented by a JSONPath-formatted string
exists in a JSON instance.
Syntax
JSON_expr.ExistValue (JSONPath_expr )
Syntax Elements
JSON_expr
An expression that evaluates to a JSON data type.
JSONPath_expr
A name in JSONPath syntax.
The name can be either UNICODE or LATIN, depending on the character set of the
JSON type that invoked this method. If the parameter character set does not match the
character set of the JSON type, Teradata attempts to translate the parameter character
set to the correct character set.
JSONPath_expr cannot be NULL. If the expression is NULL an error is reported.
Functional Description
ExistValue determines if the name specified by JSONPath_expr exists in the JSON instance
specified by JSON_expr.
Return Value
1, if the specified name is found at least once in the JSON instance.
0, if the name is not found.
NULL, if the JSON_expr argument is null.
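A query of roughly the following shape exercises these return values (assuming the my_table employee table used elsewhere in this chapter):

```sql
/* Return only rows whose JSON document contains a name entity. */
SELECT eno, 'True'
FROM my_table
WHERE edata.ExistValue('$.name') = 1;
```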
Result:
ENO  'True'
---  ------
1    True
Result:
*** No rows found ***
The JSON column contains a child named 'schools' which is an array, and the second element
of the array does not have a child named 'location'; therefore, the method did not return any
rows.
JSONExtract
Purpose
The JSONExtract method operates on a JSON instance, to extract data identified by the
JSONPath formatted string. If one or more entities are found, the result of this method is a
JSON array composed of the values found; otherwise, NULL is returned.
Syntax
JSON_expr.JSONExtract ( JSONPath_expr )
Syntax Elements
JSON_expr
An expression that evaluates to a JSON data type.
JSONPath_expr
An expression to extract information about a particular portion of a JSON instance. For
example, $.employees.info[*] provides all the information about each employee.
The desired information can be any portion of a JSON instance; for example, a
name/value pair, object, array, array element, or a value.
The JSONPath expression must be in JSONPath syntax.
JSONPath_expr cannot be NULL. If the expression is NULL, an error is reported.
Functional Description
JSONExtract searches the JSON object specified by JSON_expr and retrieves the data that
matches the entity name specified by JSONPath_expr.
Return Value
A JSON array whose elements are all the matches for the JSONPath_expr in the JSON
instance.
NULL, if the entity was not found in the JSON object.
NULL, if the JSON_expr argument is null.
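A sketch of a typical call (assuming the my_table employee table used elsewhere in this chapter):

```sql
/* Collect every match for $.name into a JSON array; rows with no
   match return NULL (shown as ? in the results below). */
SELECT eno, edata.JSONExtract('$.name')
FROM my_table
ORDER BY eno;
```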
Result:
ENO  edata.JSONExtract()
---  -------------------
1    [ "Cameron" ]
2    ?
3    [ "Alex" ]
4    [ "David" ]
Result: A NULL value is returned for a person in the table who does not have a job.
ENO  edata.JSONExtract()
---  -------------------
1    [ "programmer" ]
2    ?
3    [ "CPA" ]
4    [ "small business owner" ]
Syntax
JSON_expr.JSONExtractValue ( JSONPath_expr )
JSON_expr.JSONExtractLargeValue ( JSONPath_expr )
Syntax Elements
JSON_expr
An expression that evaluates to a JSON data type.
JSONPath_expr
An expression to extract information about a particular portion of a JSON instance. For
example, $.employees.info[*] provides all the information about each employee.
The desired information can be any portion of a JSON instance; for example, a
name/value pair, object, array, array element, or a value.
The JSONPath expression must be in JSONPath syntax.
JSONPath_expr cannot be NULL. If the expression is NULL, an error is reported.
Functional Description
These methods search the JSON object specified by JSON_expr and get the value of the entity
name specified by JSONPath_expr. The entity name is represented by a JSONPath formatted
string.
Return Value
A string that is the value of the entity, if the entity was found in the JSON instance.
A JSON array, formatted as a string, if the result of the search found two or more paths
that meet the search criteria specified by JSONPath_expr. Each element of the array
represents one of the values found.
An empty string if the result is an empty string.
A Teradata NULL, if the result is a JSON null.
A Teradata NULL, if the entity was not found in the JSON object.
A Teradata NULL, if the JSON_expr argument is null.
JSONExtractValue returns a VARCHAR of the desired attribute. The returned length defaults
to 4K, but this can be increased to 32000 characters (not bytes) in DBS Control using the
JSON_AttributeSize flag. If the result of the method is too large for the buffer, an error will be
reported.
Usage
JSONExtractLargeValue should only be used when the results of the extraction are large
(greater than 32000 characters); otherwise use JSONExtractValue.
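A sketch contrasting the two methods (assuming the my_table employee table used elsewhere in this chapter):

```sql
/* VARCHAR result, subject to the JSON_AttributeSize limit. */
SELECT eno, edata.JSONExtractValue('$.name')
FROM my_table
ORDER BY eno;

/* CLOB result, for extracted values larger than 32000 characters. */
SELECT eno, edata.JSONExtractLargeValue('$.name')
FROM my_table
ORDER BY eno;
```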
Result:
ENO  edata.JSONExtractValue()
---  ------------------------
1    UCI
2    Mira Costa
3    CSUSM
4    ?
Result:
ENO  edata.JSONExtractValue()
---  ------------------------
1    Cameron
2    ?
3    Alex
4    David
Result:
ENO  edata.JSONExtractValue()
---  ------------------------
1    programmer
2    ?
3    CPA
4    small business owner
Result:
ENO  edata.JSONExtractValue()
---  --------------------------------------------
1    ["Lake", "Madison", "Rancho", "UCI"]
2    ["Lake", "Madison", "Rancho", "Mira Costa"]
3    ["Lake", "Madison", "Rancho", "CSUSM"]
4    ["Lake", "Madison", "Rancho"]
CHAPTER 4
ARRAY_TO_JSON
Purpose
The ARRAY_TO_JSON function converts a Teradata ARRAY type to a JSON type composed
of an ARRAY.
Syntax
[TD_SYSFNLIB.] ARRAY_TO_JSON ( ARRAY_expr )
If you specify a RETURNS clause, you must enclose the function call in parentheses.
( [TD_SYSFNLIB.] ARRAY_TO_JSON ( ARRAY_expr ) RETURNS_clause )

RETURNS clause:
RETURNS data_type [ ( integer ) ] [ CHARACTER SET { UNICODE | LATIN } ]
Syntax Elements
TD_SYSFNLIB
The name of the database where the function is located.
ARRAY_expr
An expression that evaluates to an ARRAY data type.
ARRAY_expr specifies the array to be converted to the JSON type.
RETURNS data_type
Specifies that data_type is the return type of the function.
data_type can only be JSON.
integer
A positive integer value that specifies the maximum length in characters of the JSON
type.
If you do not specify a maximum length, the default maximum length for the character
set is used. If specified, the length is subject to a minimum of two characters and cannot
be greater than the maximum of 16776192 LATIN characters or 8388096 UNICODE
characters.
CHARACTER SET
The character set for the return value of the function, which can be LATIN or
UNICODE.
If you do not specify a character set, the default character set for the user is used.
RETURNS STYLE column_expr
Specifies that the return type of the function is the same as the data type of the specified
column. The data type of the column must be JSON.
column_expr can be any valid table or view column reference.
Functional Description
The ARRAY data type is mapped to a JSON-formatted string composed of an array, which
can also be a multidimensional array. If the data type of the ARRAY is a numeric predefined
type, the array element maps to a numeric type in the JSON instance. For all other types, the
value added to the JSON instance is the transformed value of each element of the ARRAY,
which is stored in the JSON instance as a string. Note, the JSON array should have the same
number of elements as the ARRAY type.
Return Value
The return type of this function is JSON.
You can use the RETURNS clause to specify the maximum length and character set of the
JSON type.
If you do not specify a RETURNS clause, the return type defaults to JSON data type with
UNICODE character set and a return value length of 64000 bytes, which supports up to
32000 UNICODE characters.
ARRAY_TO_JSON returns NULL if the ARRAY_expr argument is null.
Usage Notes
ARRAY_TO_JSON can be particularly powerful when used in conjunction with the
ARRAY_AGG function, which allows columns of a table to be aggregated into an ARRAY
object. You can then use ARRAY_TO_JSON to convert the aggregated ARRAY into a JSON
array.
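A sketch of that combination, modeled on the results shown in the examples that follow (the table name emp_table is assumed; intarr5 is a five-element integer ARRAY type):

```sql
/* Aggregate ages per position into an ARRAY, then convert the
   aggregated ARRAY into a JSON array. */
SELECT pos,
       ARRAY_TO_JSON(ARRAY_AGG(age ORDER BY empId ASC, NEW intarr5()))
FROM emp_table
GROUP BY pos;
```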
Related Topics
Result:
ARRAY_TO_JSON(a)  ARRAY_TO_JSON(b)
----------------  ---------------------------
[1,2,3,4,5]       [[1,2,3], [4,5,6], [7,8,9]]
The examples below use an employee table with position, age, and salary columns,
populated with rows for engineers, salesmen, managers, and executives at salaries
ranging from 50000 to 1000000.
Result:
pos        ARRAY_AGG(age ORDER BY empId ASC, NEW intarr5())
---------  ------------------------------------------------
engineer   (24,34,25,21)
executive  (50,51,52,52,60)
manager    (40,41,45,48)
salesman   (31,32,33,40)
Result:
pos        ARRAY_TO_JSON(ARRAY_AGG())
---------  --------------------------
engineer   [24,34,25,21]
executive  [50,51,52,52,60]
manager    [40,41,45,48]
salesman   [31,32,33,40]
Result:
salary   ARRAY_AGG(pos ORDER BY empId ASC, NEW varchararr5())
-------  -----------------------------------------------------
50000    ('engineer','engineer','salesman','salesman','manager')
75000    ('engineer','salesman','manager')
100000   ('engineer','salesman','manager')
125000   ('manager','executive')
150000   ('executive','executive')
200000   ('executive')
1000000  ('executive')
Result:
salary   ARRAY_TO_JSON(ARRAY_AGG())
-------  -------------------------------------------------------
50000    ["engineer","engineer","salesman","salesman","manager"]
75000    ["engineer","salesman","manager"]
100000   ["engineer","salesman","manager"]
125000   ["manager","executive"]
150000   ["executive","executive"]
200000   ["executive"]
1000000  ["executive"]
GeoJSONFromGeom
Purpose
The GeoJSONFromGeom function converts an ST_Geometry object into a JSON document
that conforms to the GeoJSON standards.
Syntax
[TD_SYSFNLIB.] GeoJSONFromGeom ( geom_expr [, precision ] )

If you specify a RETURNS clause, you must enclose the function call in parentheses.

( [TD_SYSFNLIB.] GeoJSONFromGeom ( geom_expr [, precision ] ) RETURNS_clause )

RETURNS clause:
RETURNS data_type [ ( integer ) ] [ CHARACTER SET { UNICODE | LATIN } ]
Syntax Elements
TD_SYSFNLIB
The name of the database where the function is located.
geom_expr
An expression which evaluates to an ST_Geometry object that represents a Point,
MultiPoint, LineString, MultiLineString, Polygon, MultiPolygon, or
GeometryCollection.
precision
An integer specifying the maximum number of decimal places in the coordinate values.
If this argument is NULL or not specified, the default precision is 15.
RETURNS data_type
Specifies that data_type is the return type of the function.
data_type can be VARCHAR, CLOB, or JSON.
integer
Specifies the maximum length of the return type.
If you do not specify a maximum length, the default is the maximum length supported
by the return data type.
CHARACTER SET
The character set for the return value of the function, which can be LATIN or
UNICODE.
If you do not specify a character set, the default character set for the user is used.
Return Value
The result of this function is a JSON document which has a format that conforms to the
GeoJSON standards as specified in http://geojson.org/geojson-spec.html.
If you do not specify a RETURNS clause, the default return type is JSON with UNICODE as
the character set. The length of the return value is 64000 bytes, which supports up to 32000
UNICODE characters. This can result in a failure if the underlying ST_Geometry object
exceeds 64000 bytes when converted to a GeoJSON value.
Note: The length of the converted GeoJSON value is greater than the underlying length of
the ST_Geometry object.
The usual rules apply for the maximum length of each data type.
Data Type  Character Set  Maximum Length
---------  -------------  --------------
VARCHAR    LATIN          64000
VARCHAR    UNICODE        32000
CLOB       LATIN          2097088000
CLOB       UNICODE        1048544000
JSON       LATIN          16776192
JSON       UNICODE        8388096
ANSI Compliance
This function is not compliant with the ANSI SQL:2011 standard.
Related Topics
Examples: GeoJSONFromGeom
These examples show the following conversions of ST_Geometry objects to JSON
documents that conform to GeoJSON standards:
Conversion of Point ST_Geometry objects to JSON documents that have a data type of
VARCHAR, CLOB, or JSON.
Conversion of various ST_Geometry objects to JSON documents that have a data type of
VARCHAR(2000) CHARACTER SET LATIN.
Example: Conversion to a JSON Document with a VARCHAR Type
SELECT (GeoJSONFromGeom(new ST_Geometry('Point(45.12345 85.67891)'))
RETURNS VARCHAR(2000) CHARACTER SET LATIN);
Result:
GeoJSONFromGeom( NEW ST_GEOMETRY('Point(45.12345 85.67891)'))
-------------------------------------------------------------
{ "type": "Point", "coordinates": [ 45.12345, 85.67891 ] }
Result:
GeoJSONFromGeom( NEW ST_GEOMETRY('Point(45.12345 85.67891)'))
-------------------------------------------------------------
{ "type": "Point", "coordinates": [ 45.12345, 85.67891 ] }
Result:
GeoJSONFromGeom( NEW ST_GEOMETRY('Point(45.12345 85.67891)'))
-------------------------------------------------------------
{ "type": "Point", "coordinates": [ 45.12345, 85.67891 ] }
Result:
GeoJSONFromGeom( NEW ST_GEOMETRY('LineString(10 20, 50 80, 200 50)'))
---------------------------------------------------------------------
{ "type": "LineString", "coordinates": [ [ 10.0, 20.0 ], [ 50.0, 80.0 ],
[ 200.0, 50.0 ] ] }
Result:
GeoJSONFromGeom( NEW ST_GEOMETRY('Polygon((0 0, 0 10, 10 10, 10 0, 0 0))', 4326))
---------------------------------------------------------------------------------
{ "type": "Polygon", "coordinates": [ [ [ 0.0, 0.0 ], [ 0.0, 10.0 ], [ 10.0, 10.0 ],
[ 10.0, 0.0 ], [ 0.0, 0.0 ] ] ] }
Result:
GeoJSONFromGeom( NEW ST_GEOMETRY('MultiPoint(10 20, 50 80, 200 50)', 4326))
--------------------------------------------------------------------------{ "type": "MultiPoint", "coordinates": [ [ 10.0, 20.0 ], [ 50.0, 80.0 ],
[ 200.0, 50.0 ] ] }
Result:
GeoJSONFromGeom( NEW ST_GEOMETRY('MultiLineString((10 20, 50 80, 200 50),
(0 100, 10 220, 20 240))', 4326))
-------------------------------------------------------------------------
{ "type": "MultiLineString", "coordinates": [ [ [ 10.0, 20.0 ], [ 50.0, 80.0 ], [ 200.0, 50.0 ] ], [ [ 0.0, 100.0 ], [ 10.0, 220.0 ], [ 20.0, 240.0 ] ] ] }
Result:
GeoJSONFromGeom( NEW ST_GEOMETRY('MultiPolygon(((0 0, 0 10, 10 10, 10 0, 0 0)),
((0 50, 0 100, 100 100, 100 50, 0 50)))', 4326))
-------------------------------------------------------------------------------
{ "type": "MultiPolygon", "coordinates": [ [ [ [ 0.0, 0.0 ], [ 0.0, 10.0 ], [ 10.0, 10.0 ], [ 10.0, 0.0 ], [ 0.0, 0.0 ] ] ], [ [ [ 0.0, 50.0 ], [ 0.0, 100.0 ], [ 100.0, 100.0 ], [ 100.0, 50.0 ], [ 0.0, 50.0 ] ] ] ] }
Result:
GeoJSONFromGeom( NEW ST_GEOMETRY('GeometryCollection(point(10 20),
linestring(50 80, 200 50))', 4326))
------------------------------------------------------------------
{ "type": "GeometryCollection", "geometries": [ { "type": "Point", "coordinates": [ 10.0, 20.0 ] }, { "type": "LineString", "coordinates": [ [ 50.0, 80.0 ], [ 200.0, 50.0 ] ] } ] }
GeomFromGeoJSON
Purpose
The GeomFromGeoJSON function converts a JSON document that conforms to the
GeoJSON standards into an ST_Geometry object.
Syntax
[TD_SYSFNLIB.]GeomFromGeoJSON ( geojson_expr , asrid )
Syntax Elements
TD_SYSFNLIB
The name of the database where the function is located.
geojson_expr
An expression which evaluates to a JSON document conforming to the GeoJSON
standards as specified in http://geojson.org/geojson-spec.html. The GeoJSON text string
can be represented as a VARCHAR, CLOB, or JSON instance in the LATIN or
UNICODE character set.
This GeoJSON text string must represent a geometry object. The value of the type
member must be one of these strings: "Point", "MultiPoint", "LineString",
"MultiLineString", "Polygon", "MultiPolygon", or "GeometryCollection".
asrid
An integer which specifies the Spatial Reference System (SRS) identifier assigned to the
returned ST_Geometry object.
Return Value
An ST_Geometry object which contains the data that was stored in the JSON document.
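As a sketch of the call shape (the GeoJSON string here is the standard Point example from the GeoJSON specification; the SRS identifier 4326 is illustrative), the function can be invoked directly in a SELECT:

```sql
SELECT GeomFromGeoJSON('{ "type": "Point", "coordinates": [100.0, 0.0] }', 4326);
```

The returned value is an ST_Geometry Point with the specified SRS identifier assigned to it.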
ANSI Compliance
This function is not compliant with the ANSI SQL:2011 standard.
Examples: GeomFromGeoJSON
Examples: Valid GeoJSON Geometry Objects
The following show examples of valid GeoJSON geometry objects. The objects are valid based on the GeoJSON format specification, which you can access at: http://geojson.org/geojson-spec.html.
Point
{ "type": "Point", "coordinates": [100.0, 0.0] }
LineString
{ "type": "LineString",
"coordinates": [ [100.0, 0.0], [101.0, 1.0] ]
}
MultiPoint
{ "type": "MultiPoint",
"coordinates": [ [100.0, 0.0], [101.0, 1.0] ]
}
MultiLineString
{ "type": "MultiLineString",
"coordinates": [
[ [100.0, 0.0], [101.0, 1.0] ],
[ [102.0, 2.0], [103.0, 3.0] ]
]
}
MultiPolygon
{ "type": "MultiPolygon",
"coordinates": [
[[[102.0, 2.0], [103.0, 2.0], [103.0, 3.0], [102.0, 3.0], [102.0, 2.0]]],
[[[100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0]],
[[ 100.2, 0.2], [100.8, 0.2], [100.8, 0.8], [100.2, 0.8], [100.2, 0.2]]]
]
}
GeometryCollection
{ "type": "GeometryCollection",
"geometries": [
{ "type": "Point",
"coordinates": [100.0, 0.0]
},
{ "type": "LineString",
"coordinates": [ [101.0, 0.0], [102.0, 1.0] ]
}
]
}
Result:
Result
-------------
POINT (100 0)
In this example, an error is returned because "Poit" is not valid JSON geometry data.
SELECT GeomFromGeoJSON('{ "type": "Poit", "coordinates": [100.0, 0.0] }', 4326);
JSON_CHECK
Purpose
The JSON_CHECK function checks a string for valid JSON syntax and provides an
informative error message about the cause of the syntax failure if the string is invalid. It does
not create a JSON instance.
This function can be used to validate text before loading JSON data in order to save time in
case of syntax errors.
Syntax
[TD_SYSFNLIB.]JSON_CHECK ( '{string}' )
Syntax Elements
TD_SYSFNLIB
The name of the database where the function is located.
'{string}'
The string to be tested for compliance to JSON syntax.
CHAR, VARCHAR, and CLOB are allowed as input types.
LATIN and UNICODE are allowed for all data types.
Return Value
'OK', if the string is valid JSON syntax.
'INVALID: error message', if the string is not valid JSON syntax. The error message
provides an informative message about the cause of the syntax failure.
NULL, if the string is null.
Usage Notes
You can use this function to validate before loading a large amount of JSON data. This
prevents the rollback of an entire transaction in the event of a JSON syntax error and
provides the necessary information to fix any syntax errors that may be present.
Examples: JSON_CHECK
Valid JSON String Argument
The string passed to JSON_CHECK in this query has valid JSON syntax; therefore, the
function returns 'OK'.
SELECT JSON_CHECK('{"name" : "Cameron", "age" : 24, "schools" :
[ {"name" : "Lake", "type" : "elementary"}, {"name" : "Madison",
"type" : "middle"}, {"name" : "Rancho", "type" : "high"}, {"name" :
"UCI", "type" : "college"} ], "job" : "programmer" }');
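By contrast, a malformed string returns an 'INVALID: error message' value instead of failing the query. The following is an illustrative sketch, not an example from this manual; the string is missing the comma between the two name/value pairs:

```sql
SELECT JSON_CHECK('{"name" : "Cameron" "age" : 24}');
```

The result begins with 'INVALID:' followed by a message describing the cause of the syntax failure.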
JSON_KEYS
Purpose
The JSON_KEYS table operator parses a JSON instance (represented as CHAR, VARCHAR,
CLOB, or JSON) and returns a list of key names.
Syntax
[TD_SYSFNLIB.]JSON_KEYS ( ON ( json_expr ) [ USING name ( value ) ] ) [AS] correlation_name
Syntax Elements
TD_SYSFNLIB
The name of the database where the function is located.
json_expr
An expression that evaluates to correct JSON syntax. The expression can be a CHAR,
VARCHAR, CLOB, or JSON representation of a JSON data type in LATIN or
UNICODE.
USING name (value)
An optional clause. For this table operator, name can be DEPTH, with an integer value that specifies the maximum nesting depth to search, or QUOTES, with a value of 'Y' or 'N' that controls whether the key names in the result are enclosed in double quotation marks.
AS
An optional keyword that introduces correlation_name, an alias for the output of the table operator.
Return Values
JSON_KEYS performs a search on the JSON instance to the specified depth and returns a
VARCHAR column of key names.
If a JSON array is present in the document, one result per index of each array is generated.
All results are given according to their "path" in the JSON document. Therefore, the output
corresponding to a nested key will contain all of the parent keys, in addition to itself.
If you specify the USING QUOTES ('Y') clause, the key names in the result set are enclosed
in double quotation marks. This is the default behavior and allows you to copy and paste a
path into an SQL statement, using the path as an entity reference on a JSON document,
without any potential for improper use of SQL keywords.
If you specify the USING QUOTES ('N') clause, the key names in the result set are not
enclosed in double quotation marks. This allows you to use the output as input to one of the
JSON extraction methods, such as JSONExtractValue.
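A minimal sketch of the two forms, assuming a table json_table with a JSON column json_data as in the other examples in this chapter:

```sql
-- Key names enclosed in double quotation marks (the default behavior)
SELECT * FROM JSON_KEYS
(ON (SELECT json_data FROM json_table) USING QUOTES('Y')) AS j;

-- Key names without quotation marks, suitable as input to JSONExtractValue
SELECT * FROM JSON_KEYS
(ON (SELECT json_data FROM json_table) USING QUOTES('N')) AS j;
```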
Result:
KEYS
----
coord
coord.lon
coord.lat
sys
sys.country
sys.sunrise
sys.sunset
weather
weather[0].id
weather[0].main
weather[0].description
weather[0].icon
base
main
main.temp
main.humidity
main.pressure
main.temp_min
main.temp_max
wind
wind.speed
wind.deg
clouds
clouds.all
dt
id
name
cod
Example: Use JSON_Keys to Obtain Key Names for the Top Level Depth
The example uses JSON_KEYS to obtain the key names from the top level depth.
SELECT * from JSON_KEYS
(
ON (SELECT json_data FROM json_table) USING DEPTH(1))
AS json_data;
Result:
JSONKEYS
--------
coord
sys
weather
base
main
wind
clouds
dt
id
Result:
KEYS
----
"x"
"x"."a"
"x"."a"."b"
"y"
Result:
KEYS
----
x
x.a
x.a.b
y
Result:
KEYS
----
"x"
Result:
JSONKeys
--------
"base"
"clouds"
"clouds"."all"
"cod"
"coord"
"coord"."lat"
"coord"."lon"
"dt"
"id"
"main"
"main"."humidity"
"main"."pressure"
"main"."temp"
"main"."temp_max"
"main"."temp_min"
"main"."temp_scale"
"name"
"sys"
"sys"."country"
"sys"."sunrise"
"sys"."sunset"
"weather"
"weather"[0]
"weather"[0]."description"
"weather"[0]."icon"
"weather"[0]."id"
"weather"[0]."main"
"wind"
"wind"."deg"
"wind"."speed"
Result:
JSONKeys
--------
base
clouds
clouds.all
cod
coord
coord.lat
coord.lon
dt
id
main
main.humidity
main.pressure
main.temp
main.temp_max
main.temp_min
main.temp_scale
name
sys
sys.country
sys.sunrise
sys.sunset
weather
weather[0]
weather[0].description
weather[0].icon
weather[0].id
weather[0].main
wind
wind.deg
wind.speed
SELECT CAST(JSONKeys AS VARCHAR(30)),
T.json_data.JSONExtractValue('$.'||JSONKeys) from json_table T, JSON_KEYS
(
ON (SELECT json_data FROM json_table WHERE id=2) USING QUOTES('N'))
AS json_data
where T.id=2
ORDER BY 1;
Result:
JSONKeys                       json_data.JSONEXTRACTVALUE(('$.'||JSONKeys))
------------------------------ --------------------------------------------
base                           global stations
clouds                         {"all":0}
clouds.all                     0
cod                            200
coord                          {"lon":145.766663,"lat":-16.91667}
coord.lat                      -16.91667
coord.lon                      145.766663
dt                             1375292971
id                             2172797
main                           {"temp":288.97,"humidity":99,"pressure":1007,"temp_min":288.71,"temp_max":289.15}
main.humidity                  99
main.pressure                  1007
main.temp                      288.97
main.temp_max                  289.15
main.temp_min                  288.71
name                           Cairns
sys                            {"country":"AU","sunrise":1375216946,"sunset":1375257851}
sys.country                    AU
sys.sunrise                    1375216946
sys.sunset                     1375257851
weather                        [{"id":800,"main":"Clear","description":"Sky is Clear","icon":"01n"}]
weather[0]                     {"id":800,"main":"Clear","description":"Sky is Clear","icon":"01n"}
weather[0].description         Sky is Clear
weather[0].icon                01n
weather[0].id                  800
weather[0].main                Clear
wind                           {"speed":5.35,"deg":145.001}
wind.deg                       145.001
wind.speed                     5.35
CHAPTER 5
JSON Shredding
JSON_TABLE
Purpose
JSON_TABLE creates a temporary table based on all, or a subset, of the data in a JSON
object.
Syntax
[TD_SYSFNLIB.]JSON_TABLE ( ON ( json_documents_retrieving_expr )
USING ROWEXPR ( row_expr_literal ) COLEXPR ( column_expr_literal ) )
[AS] correlation_name [ ( column_name [,...] ) ]
Syntax Elements
json_documents_retrieving_expr
A query expression that must result in at least two columns. The first column is an ID
that identifies a JSON document. The ID is returned in the output row to correlate
output rows to the input rows, and in case of errors, it is used to identify the input row
that caused the error. The second column is the JSON document itself. Other columns
can be included, but they must be after the JSON document column. Extra columns are
returned in the output row without being modified from the input.
This is an example of the json_documents_retrieving_expr.
SELECT id, jsonDoc FROM jsonTable;
Functional Description
JSON_TABLE takes a JSON instance and creates a temporary table based on all, or a subset,
of the data in the instance. The JSON instance is retrieved from a table with an SQL
statement. JSONPath syntax is used to define the portions of the JSON instance to retrieve.
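As an illustrative sketch (the JSONPath expressions, table name, and column names here are assumptions, not taken from a worked example in this manual), a call has the following general shape:

```sql
SELECT * FROM JSON_TABLE
(ON (SELECT id, jsonDoc FROM jsonTable)
 USING ROWEXPR('$.schools[*]')
       COLEXPR('[ {"jsonpath" : "$.name", "type" : "VARCHAR(20)"},
                  {"jsonpath" : "$.type", "type" : "VARCHAR(20)"} ]'))
AS JT(id, schoolName, schoolType);
```

The ROWEXPR JSONPath selects the repeating portion of the document; each element it matches produces one output row, and the COLEXPR entries map values within that element to the output columns.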
Return Value
The JSON_TABLE table operator produces output rows which conform to the row format
defined by the column_expr_literal. That literal describes the columns in the output row and
their data types.
The rows returned by JSON_TABLE have the following columns:
The first column returned contains the JSON document ID obtained from the first
column in the json_documents_retrieving_expression.
The next N columns returned are generated based on the colexpr parameter, where N is
the number of objects in the JSON array represented by the column_expression_literal.
If json_documents_retrieving_expr returns more than two columns, all the extra columns
from the third column onward are added to the output row without being modified.
The column_expr_literal parameter requires a mapping of the columns in the
row_expr_literal to the columns of the output table of this function. Each column in the
column_expr_literal is defined by a JSON instance that must conform to one of the following
structures.
Ordinal Column
An Ordinal Column contains an integer sequence number. The sequence number is
not guaranteed to be unique in itself, but the combination of the id column, the first
column of the output row, and the ordinal column is unique.
{ "ordinal" : true}
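For instance, a column expression literal might combine an ordinal column with an ordinary JSONPath column. This fragment is a sketch; the path and type shown are illustrative:

```sql
COLEXPR('[ {"ordinal" : true},
           {"jsonpath" : "$.name", "type" : "VARCHAR(20)"} ]')
```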
JSON_TABLE does not support UDTs and LOB types in the output so the JSON data type
cannot be the type for the output columns. The data type of the columns of the output table
may be any non-LOB predefined Teradata type. The supported data types for JSON_TABLE
output are listed next.
Result:
id  schoolName  type
--  ----------  ----------
 1  Lake        elementary
 1  Madison     middle
 1  Rancho      high
 1  UCI         college
Result:
id  schoolName  type        studentName
--  ----------  ----------  -----------
 1  Lake        elementary  Cameron
 1  Madison     middle      Cameron
 1  Rancho      high        Cameron
 1  UCI         college     Cameron
Result:
id  name     type        State  Nation
--  -------  ----------  -----  ------
 1  Lake     elementary  CA     USA
 1  Madison  middle      CA     USA
 1  Rancho   high        CA     USA
 1  UCI      college     CA     USA
Result:
idcol  ordnum  res1        res2        State  Nation
-----  ------  ----------  ----------  -----  ------
    3       0  Lake        elementary  CA     USA
    4       0  Lake        elementary  CA     USA
    3       1  Madison     middle      CA     USA
    4       1  Madison     middle      CA     USA
    3       2  Rancho      high        CA     USA
    4       2  Rancho      high        CA     USA
    3       3  CSUSM       college     CA     USA
    1       4  Lake        elementary  CA     USA
    1       5  Madison     middle      CA     USA
    1       6  Rancho      high        CA     USA
    1       7  UCI         college     CA     USA
    2       8  Lake        elementary  CA     USA
    2       9  Madison     middle      CA     USA
    2      10  Rancho      high        CA     USA
    2      11  Mira Costa  college     CA     USA
JSON_SHRED_BATCH and
JSON_SHRED_BATCH_U
Purpose
JSON_SHRED_BATCH and JSON_SHRED_BATCH_U are SQL stored procedures that use
any number of JSON instances to populate existing tables, providing a flexible form of
loading data from the JSON format into a relational model. Two shred procedures are
provided; however, the only difference between them is the character set of the data. To
explain the functionality, we only describe JSON_SHRED_BATCH (the version that
operates on LATIN character set data), but the explanation applies equally to
JSON_SHRED_BATCH_U (the UNICODE version).
Functional Description
The batch shredding procedures map into a number of successive calls to JSON_TABLE to
create a conglomerate temporary table, the values of which can be assigned to existing tables.
Syntax
CALL SYSLIB.JSON_SHRED_BATCH ( input_query , shred_statement , :result_code )
CALL SYSLIB.JSON_SHRED_BATCH_U ( input_query , shred_statement , :result_code )

shred_statement is a JSON array of one or more shred statements, each with the following structure:

{ "rowexpr" : "JSONPath_expr",
  "colexpr" : [ { "temp_column_name" : "JSONPath_expr", "type" : "data_type" [, "fromRoot" : true ] } [,...] ],
  "queryexpr" : [ { "temp_column_name" : "column_type" } [,...] ],
  "tables" : [
    { "table_name" : {
        "metadata" : { "operation" : "insert" | "update" | "merge" | "delete"
                       [, "keys" : [ "table_column_name" [,...] ] ]
                       [, "filter" : "filter_expression" ] },
        "columns" : { "table_column_name" : temp_column_name | temp_expr |
                      numeric_constant | [ string_constant ] | boolean_constant | null } }
    } [,...]
  ]
}
Syntax Elements
CALL JSON_SHRED_BATCH
input query
A string input parameter which specifies a query that results in a group of JSON
instances from which the user can perform shredding. Extra columns can result and be
referred to in the shred statement.
If this parameter is NULL, an error is reported.
The input query parameter can operate on one or more JSON objects in a source table. This
query is mapped to a JSON_TABLE function call. Since JSON_TABLE requires that the first
two columns specified be an ID value and a JSON object, respectively, the input query
parameter also requires the first two columns to be an ID value and a JSON object.
The following are examples of an input query string.
'SELECT id, empPersonalInfo, site FROM test.json_table'
'SELECT JSONDOCID, JSONDT1, a, b FROM jsonshred.JSON_TABLE3 WHERE
JSONID=100'
JSONID (uppercase or lowercase) is a keyword. It is a temporary column name used for the
JSON document ID value. JSONID is allowed in the input query and table expression clauses.
You cannot use JSONID as a temp_column_name in "colexpr" or "queryexpr".
The execution of JSON_TABLE on multiple JSON objects requires a join between the result
of one invocation and the source table. In order to avoid a full table join, we require an ID
column to be specified in the input query parameter, so that a join condition can be built off
that column.
The data types in the queryexpr (discussed later) must match the actual data type of the
columns specified in the input query. No explicit cast will be added, so the data must be
implicitly castable to the data type defined in the query expr, if not the exact data type. Any
errors encountered will result in a failed shred, and will be reported to the user.
If there is a problem encountered during the execution of JSON_TABLE, the ID column is
used in the error message to isolate which row caused the problem.
shred statement
The shred statement element defines the mapping of the JSON instance, that resulted from
the input query, to where the data will be loaded in the user table(s).
If the shred statement is NULL an error is reported.
All keywords in the shred statement must be specified in lowercase.
The following sections discuss the structure and syntax of the shred statement. Multiple
shred statements can be run, but there are performance impacts.
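The overall shape of a call, sketched with illustrative names (the source table, JSONPath expressions, temporary columns, and target table here are assumptions, not the manual's worked example):

```sql
CALL SYSLIB.JSON_SHRED_BATCH(
  'SELECT id, empPersonalInfo FROM test.json_table',
  '[ { "rowexpr" : "$.employees[*]",
       "colexpr" : [ { "col1" : "$.name", "type" : "VARCHAR(20)" },
                     { "col2" : "$.age",  "type" : "INTEGER" } ],
       "tables"  : [ { "test.emp_table" : {
           "metadata" : { "operation" : "insert" },
           "columns"  : { "empName" : "col1",
                          "empAge"  : "col2",
                          "empID"   : "JSONID" } } } ] } ]',
  :res );
```

Each element matched by rowexpr becomes one temporary-table row; the columns clause then maps temporary columns (and the JSONID keyword) onto columns of the target table.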
row expression
The data type of the temporary columns in the output table of JSON_TABLE must be
specified. This is enforced with JSON_SHRED_BATCH and JSON_SHRED_BATCH _U in
the column expression. It is necessary for the user to provide this information so that the data
may be correctly interpreted and used with the target table(s).
query expression
Teradata JSON
105
table expression
JSONID and ROWINDEX are keywords used to track the input JSON document ID value (the first column of the input query) and the index number for an input row, respectively. JSONID and ROWINDEX may be referenced in the table expression clause as a source value for the shredding operation.
Note: In the process of shredding, a volatile table is created for each shred statement. A table
can have a maximum of 2048 columns, so all the columns together from all the table
mappings should not exceed 2044 columns (there are four internal columns). You can have 1
to N target tables, which can each have 1 to N columns, but the total number of all columns
must not exceed 2044.
metadata
The keys in the metadata statement are used to perform the join between the temporary table created by the row/column expressions and the target table. This should be used carefully as it can drastically affect performance. In the case of a MERGE operation, the target table must have a primary index, and the primary index has to be a member of the specified keys.
"table_column_name"
The name of any column in the table referenced by table_name, which is the name of an existing table the executor has access to. table_name is referenced in the table expression of JSON_SHRED_BATCH as well.
"filter":
Filtering is optional. If used, "filter": is a required, literal entry.
filter_expression
SQL statement referencing elements of the column or query expressions.
Example filter statement: "filter" : "empId<5000"
column assignment
result code
An output parameter representing the result of the shred operation. A value of 0
indicates success. All non-zero values indicate specific error conditions and an
appropriate error message is returned.
Return Values
The return value is an output parameter representing the result of the shred operation.
A value of 0 indicates success.
All non-zero values indicate specific error conditions, and an appropriate error message is
returned.
Usage Notes
When the character set of the data to be shredded is:
LATIN, use the JSON_SHRED_BATCH procedure
UNICODE, use the JSON_SHRED_BATCH_U procedure
Other than the difference regarding the character set of the data, the functionality of the two
procedures is identical.
Note: When a JSON value is shredded to populate a CHAR, VARCHAR, or VARBYTE
column, if the size of the value is larger than the size of the target column, the value is
truncated to fit the column.
The JSON_SHRED_BATCH query provides flexibility between the source JSON instance and
the table(s) the source data is loaded into. This flexibility allows for efficient and non-efficient
queries, depending on the query itself and how the mapping (shred statement) is performed.
The following guidelines assist in achieving the optimal performance with these procedures.
For each shred statement, a JSON_TABLE function call is made, to shred the JSON object
into a temporary table based on the row expression and column expressions. The resulting
temporary table may be used to assign values to any column of any table for which the
user has the proper privileges. The best performing queries optimize the mapping such
that each shred statement updates the maximum possible number of tables. Only if
complications of the mapping (such as hierarchical relationships) make it impossible to
map a shredding to an actual column should another shred statement be included in the
query.
The performance is largely dependent upon the usage of the procedure. If the mapping
minimizes the number of separate queries needed, it will perform best. It is not always the
case that everything can fit into one shred statement; for this reason multiple statements
are allowed.
This procedure allows INSERT, UPDATE, MERGE and DELETE operations, which can be
specified in the operation portion of the metadata portion of the statement. The keys in
the metadata statement are used to perform the join between the temporary table created
by the row/column expressions and the target table. This should be used carefully as it can
drastically affect performance. In a MERGE operation, the target table must have a
primary index, and the primary index has to be a member of the keys in the metadata.
In order to avoid a full table join, we require an ID column to be specified in the input
query parameter, so that a join condition can be built off that column.
Columns of a target table may be assigned values in the temporary table created by the row
and column expressions, constants, or the results of SQL expressions. The use of an SQL
expression requires the user to submit a proper SQL statement (in terms of syntax and actual
results of the query). This is a powerful and flexible way to manipulate the data in a target
table, but can cause a problem if queries are not structured properly. Any errors reported by
the DBS based on an SQL expression will be reported to the user and cause the query to fail.
Columns of the temporary table created by the row and column expressions and the extra
columns created by the input query may be used in the SQL expression.
In the process of shredding, a volatile table is created for each shred statement. A table can
have a maximum of 2048 columns, so all the columns together from all the table mappings
should not exceed 2044 columns (there are four internal columns). You can have 1 to N
target tables, which can each have 1 to N columns, but the total number of all columns must
not exceed 2044.
All keywords in the shred statement must be specified in lowercase.
The names assigned to the temporary columns (temp_column_name) and the names of extra
columns created by the input query must be unique. They can be referenced in the table
expression clause, so there cannot be any ambiguity. Note, names are not case-sensitive. If a
non-unique name is detected, an error is reported. For example, col1 and COL1 will fail
because they are used in internal queries and are not unique.
Note: All the names given in the keys clause must be present in the column assignment
clause.
You must specify the data type of the temporary column in the output table in the column
expression. It is necessary to provide this information so that the data may be correctly
interpreted and used with the target table(s).
JSONID and ROWINDEX (uppercase or lowercase) are keywords. They are used to track the
input JSON document ID value (the first column of the input query) and the index number
for an input row, respectively. JSONID and ROWINDEX are not allowed in colexpr and
queryexpr because they are fixed temporary column names. A syntax error is reported if they
are used in those clauses. However, they may be referenced in the table expression clause as a
source value for the shredding operation.
The supported data types are:
CHAR
VARCHAR
CHARACTER(n) CHARACTER SET GRAPHIC
VARCHAR(n) CHARACTER SET GRAPHIC
CLOB
BYTE
VARBYTE
BLOB
BYTEINT
SMALLINT
INTEGER
BIGINT
DECIMAL
NUMBER
DATE
TIME
TIMESTAMP
INTERVAL YEAR
INTERVAL DAY
INTERVAL HOUR TO MINUTE
INTERVAL HOUR TO SECOND
INTERVAL MINUTE
INTERVAL MINUTE TO SECOND
INTERVAL SECOND
PERIOD(DATE)
PERIOD(TIME)
PERIOD(TIMESTAMP)
JSON
Privileges
JSON_SHRED_BATCH resides in the SYSLIB database. The user executing the
JSON_SHRED_BATCH procedures requires privileges on the tables being updated, which
include the following:
The database where the procedure is executing must have all privileges on SYSUDTLIB,
SYSLIB, and the database where the target table exists and EXECUTE PROCEDURE on
SYSLIB.
SYSLIB must have all privileges on the database which is executing the procedure.
For example, if the database where the procedure is executing and where the target tables
exists is called JSONShred, then the following statements will assign the required privileges:
GRANT ALL ON SYSUDTLIB TO JSONShred;
GRANT ALL ON SYSLIB TO JSONShred;
GRANT ALL ON JSONShred TO JSONShred;
GRANT EXECUTE PROCEDURE ON SYSLIB TO JSONShred;
GRANT ALL ON JSONShred TO SYSLIB;
The result of the shred populates the emp_table table with three rows, corresponding to the
three items in the JSON object used as source data.
To see the result, run: SELECT empID, company, empName, empAge, startDate,
site FROM emp_table ORDER BY empID;
empID  company   empName  empAge  startDate  site
-----  --------  -------  ------  ---------  ----
  100  Teradata  Cameron      24  13/09/19   RB
  200  Teradata  Justin       34  13/09/19   RB
  300  Teradata  Melissa      24  13/09/19   RB
Result: To view the updated data in the employee table, run: SELECT empID, company,
empName, empAge, startDate, site FROM emp_table ORDER BY empID;
empID  company   empName  empAge  startDate  site
-----  --------  -------  ------  ---------  ----
  100  Teradata  Cameron      24  15/02/10   RB
  200  Teradata  Justin       34  15/02/07   RB
  300  Teradata  Melissa      24  15/02/07   RB
The result of the above shred will populate the emp_table and dept_table tables with three
rows, corresponding to the three items in the JSON object used as source data.
Result: To view the data inserted into the employee table, run: SELECT * FROM
emp_table ORDER BY empID;
empID  company   empName  empAge  dept         startDate  site
-----  --------  -------  ------  -----------  ---------  ----
  100  Teradata  Cameron      24  engineering  15/02/10   RB
  200  Teradata  Justin       30  engineering  15/02/07   RB
  300  Teradata  Melissa      24  marketing    ?          RB
Result: To view the data inserted into the department table, run: SELECT * FROM
dept_table ORDER BY dept, empID;
dept         description           empID
-----------  --------------------  -----
engineering  CONSTANT DESCRIPTION      1
engineering  CONSTANT DESCRIPTION      2
marketing    CONSTANT DESCRIPTION      3
empId  empName  company   dept        jsonDocId  site  country
-----  -------  --------  ----------  ---------  ----  -------
 1001  Cameron  Teradata  engineerin          1  HYD   USA
 1002  Justin   Teradata  engineerin          1  HYD   USA
 1003  Madhu    Teradata  engineerin          2  HYD   USA
 1004  Srini    Teradata  engineerin          2  HYD   USA
CHAPTER 6
JSON Publishing
JSON_AGG
Purpose
The JSON_AGG function returns a JSON document composed of aggregated values from
each input parameter. The input parameters can be a column reference or an expression.
Each input parameter results in a name/value pair in the returned JSON document.
Syntax
JSON_AGG ( param [ ( FORMAT 'format string' ) ] [ AS name ] [,...] )

If you specify a RETURNS clause, you must enclose the function call in parentheses.

( JSON_AGG ( param [ ( FORMAT 'format string' ) ] [ AS name ] [,...] ) RETURNS_clause )

RETURNS clause:
RETURNS data_type [ ( integer ) ] [ CHARACTER SET { UNICODE | LATIN } ]
Syntax Elements
param
An input parameter that can be any supported data type, a column reference, constant,
or expression that evaluates to some value. A variable number of these parameters are
accepted and each input parameter results in a name/value pair in the returned JSON
document.
FORMAT 'format string'
format string is any allowable format string in Teradata.
For an example using the format string see Example: Use JSON_COMPOSE with
Subqueries and GROUP BY.
AS name
name is any allowable name in Teradata.
The string created conforms to the JSON standard escaping scheme. A subset of
UNICODE characters are required to be escaped by the '\' character. This is not the case
for strings in Teradata. Thus, when porting a Teradata string to a JSON string, proper
JSON escape characters are used where necessary. This also applies to the values of the
JSON instance and to the JSON_COMPOSE function. If the character set is LATIN, '\'
escaped characters must be part of that character set; otherwise a syntax error is
reported.
RETURNS data_type
Specifies that data_type is the return type of the function.
data_type must be JSON for this function.
integer
A positive integer value that specifies the maximum length in characters of the JSON
type. If specified, the length is subject to a minimum of two characters and cannot be
greater than the absolute maximum allowed for the function. Shorter lengths may be
specified.
Note: As an aggregate function, JSON_AGG supports up to 64000 bytes, which is 32000
UNICODE characters or 64000 LATIN characters. The RETURNS clause can specify a
larger return value, but the actual data returned by JSON_AGG is 64000 bytes. If the
data length is greater than this an error is returned. Note, JSON_COMPOSE can specify
larger values than JSON_AGG.
If you do not specify a RETURNS clause, the return type defaults to JSON(32000)
CHARACTER SET UNICODE. In other words, the default return type is a JSON data
type with UNICODE character set and a return value length of 32000 characters.
CHARACTER SET UNICODE | LATIN
The character set for the data type in the RETURNS data_type clause.
The character set can be LATIN or UNICODE.
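A sketch of a call with an explicit RETURNS clause, using the emp_table columns that appear in the later examples (the length of 10000 is arbitrary):

```sql
SELECT (JSON_AGG(empID AS id, empName AS name)
        RETURNS JSON(10000) CHARACTER SET LATIN)
FROM emp_table;
```

Note that the function call itself is enclosed in parentheses, as required when a RETURNS clause is specified.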
Return Value
By default, JSON_AGG returns a JSON document in character set UNICODE and a
maximum length of 32000 UNICODE characters (64000 bytes), unless otherwise specified
with the optional RETURNS clause.
A hierarchical relationship is not possible with this function. The resulting JSON instance is
flat, with each input parameter corresponding to one child of the result. The resulting
document will be in the following format.
{
name1 : data1,
name2 : data2,
...,
nameN : dataN
}
If one of the values used to compose the JSON document is a Teradata NULL, it is returned
in the JSON instance as a JSON null.
Usage Notes
The GROUP BY clause can be used in the SELECT statement which invokes the JSON_AGG
function. Existing rules for the GROUP BY clause and aggregate functions apply to
JSON_AGG. When this is used, the resulting JSON document is structured as an array with
objects as its elements that represent members of the resulting group. Each group is in a
different output row.
If one of the values used to compose the JSON object is a Teradata NULL, it is returned in the
JSON instance as a JSON null.
Result:
JSON_agg
--------
[{ "empID" : 1, "company" : "Teradata", "empName" : "Cameron", "empAge" : 24 },
{ "empID" : 2, "company" : "Teradata", "empName" : "Justin", "empAge" : 34 },
{ "empID" : 3, "company" : "Teradata", "empName" : "Someone", "empAge" : 24 }]
Result:
JSON_agg
--------
{ "id" : 1, "company" : "Teradata", "name" : "Cameron", "age" : 24 },
{ "id" : 2, "company" : "Teradata", "name" : "Justin", "age" : 34 },
{ "id" : 3, "company" : "Teradata", "name" : "Someone", "age" : 24 }
Example: Using JSON_AGG with GROUP BY and with All Parameter Names
The example shows how to use JSON_Agg to assign parameter names in the resulting JSON
instance and use the GROUP BY clause.
SELECT JSON_agg(empID AS id, empName AS name, empAge AS age)
FROM emp_table
GROUP BY company;
Result: The query returns one line of output (note, output line is wrapped).
JSON_agg(empID AS id,empName AS name,empAge AS age)
---------------------------------------------------
[{"id":3,"name":"Someone","age":24},
{"id":1,"name":"Cameron","age":24},
{"id":2,"name":"Justin","age":34}]
Result:
JSON_AGG(empID AS id,empName AS name)
-------------------------------------
{"id":2,"name":"Justin"}
[{"id":3,"name":"Someone"},{"id":1,"name":"Cameron"}]
JSON_COMPOSE
Purpose
JSON_COMPOSE creates a JSON document composed of the input parameters specified.
This function provides a complex composition of a JSON document when used in
conjunction with the JSON_AGG function.
Syntax
JSON_COMPOSE ( param [ ( FORMAT 'format string' ) ] [ AS name ] [, ...] )

If you specify a RETURNS clause, you must enclose the function call in parentheses:

( JSON_COMPOSE ( param [ ( FORMAT 'format string' ) ] [ AS name ] [, ...] ) RETURNS clause )

RETURNS clause
RETURNS data_type [ ( integer ) ] [ CHARACTER SET { UNICODE | LATIN } ]
Syntax Elements
param
An input parameter that can be a column reference, constant, or expression that
evaluates to some value. A variable number of these parameters are accepted, and each
input parameter results in a name/value pair in the returned JSON document.
FORMAT 'format string'
format string is any allowable format string in Teradata.
For an example using the format string see Example: Use JSON_COMPOSE with
Subqueries and GROUP BY.
AS name
name is any allowable name in Teradata.
The string created conforms to the JSON standard escaping scheme, which requires a
subset of UNICODE characters to be escaped with the '\' character. This is not the case
for strings in Teradata. Thus, when porting a Teradata string to a JSON string, proper
JSON escape characters are used where necessary. This also applies to the values of the
JSON instance and to the JSON_AGG function. If the character set is LATIN, '\' escaped
characters must be part of that character set; otherwise a syntax error is reported.
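For example (a sketch, not output from a live system), a Teradata string containing a quotation mark or a backslash would be escaped when composed into the JSON result:

```sql
SELECT JSON_Compose('He said "hi"' AS quote, 'C:\temp' AS path);

-- The special characters are escaped per the JSON standard, roughly:
-- { "quote" : "He said \"hi\"", "path" : "C:\\temp" }
```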
RETURNS data_type
Specifies that data_type is the return type of the function.
data_type must be JSON for this function.
integer
A positive integer value that specifies the maximum length in characters of the JSON
type. If specified, the length is subject to a minimum of two characters and cannot be
greater than the absolute maximum for the character set. Shorter lengths may be
specified.
If you do not specify a RETURNS clause, the return type defaults to JSON(32000)
CHARACTER SET UNICODE. In other words, the default return type is a JSON data
type with UNICODE character set and a return value length of 32000 characters.
If you specify the optional RETURNS clause, the maximum lengths allowed are
16776192 characters for LATIN and 8388096 characters for UNICODE.
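As a sketch using the emp_table from the JSON_AGG examples, a call that overrides the default return type might look like the following (note the outer parentheses required when a RETURNS clause is present):

```sql
SELECT (JSON_Compose(empID AS id, empName AS name)
        RETURNS JSON(1000) CHARACTER SET LATIN)
FROM emp_table;
```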
CHARACTER SET UNICODE | LATIN
The character set for the data type in the RETURNS data_type clause.
The character set can be LATIN or UNICODE.
Return Value
By default, JSON_COMPOSE returns a LOB-based JSON document with character set
UNICODE and a maximum length of 32000 UNICODE characters (64000 bytes), unless
otherwise specified with the optional RETURNS clause.
A hierarchical relationship is not possible with this function. The resulting JSON document
is flat, with each input parameter corresponding to one child of the result. The resulting
document will be in this format:
{
  name1 : data1,
  name2 : data2,
  ...,
  nameN : dataN
}
If one of the values used to compose the JSON document is a Teradata NULL, it is returned
in the JSON instance as a JSON null.
Usage Notes
JSON_COMPOSE is most useful when used in conjunction with JSON_AGG. JSON_AGG
is limited in that it provides groups as identified by the GROUP BY clause, but it does not
provide the value that was used to create the group. To obtain this, use JSON_AGG in a
subquery that results in a derived table, and reference the result of JSON_AGG as one of the
parameters to the JSON_COMPOSE function. To ensure the values being grouped on are
included with the proper groups, the columns used in the GROUP BY clause of the subquery
with the JSON_AGG function should be used as parameters to the JSON_COMPOSE
function along with the result of JSON_AGG. In this way, the values being grouped on will be
included alongside the group.
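Following that pattern, the grouped output shown below could be produced by a query along these lines (a sketch built on the emp_table used in the JSON_AGG examples):

```sql
SELECT JSON_Compose(T.company, T.employees)
FROM (SELECT company,
             JSON_agg(empID AS id, empName AS name, empAge AS age)
      FROM emp_table
      GROUP BY company) AS T(company, employees);
```

The derived table carries both the grouping value (company) and the aggregate, so JSON_COMPOSE can place them side by side in each output row.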
Result:
JSON_Compose
------------
{
  "company" : "Teradata",
  "employees" : [
    { "id" : 1, "name" : "Cameron", "age" : 24 },
    { "id" : 2, "name" : "Justin", "age" : 34 }
  ]
}
{
  "company" : "Apple",
  "employees" : [
    { "id" : 3, "name" : "Someone", "age" : 24 }
  ]
}
Result:
JSON_Compose
------------
{
"company" : "Teradata",
"age" : 24,
"employees" : [
{ "id" : 1, "name" : "Cameron" }
]
}
{
"company" : "Teradata",
"age" : 34,
"employees" : [
{ "id" : 2, "name" : "Justin" }
]
}
{
"company" : "Apple",
"age" : 24,
"employees" : [ { "id" : 3, "name" : "Someone" } ]
}
INSERT INTO item_table(1, 1, 'disk', 100);
INSERT INTO item_table(1, 2, 'RAM', 200);
INSERT INTO item_table(2, 1, 'disk', 10);
INSERT INTO item_table(2, 2, 'RAM', 20);
INSERT INTO item_table(2, 3, 'monitor', 30);
INSERT INTO item_table(2, 4, 'keyboard', 40);
INSERT INTO item_table(3, 1, 'disk', 10);
INSERT INTO item_table(3, 2, 'RAM', 20);
INSERT INTO item_table(3, 3, 'monitor', 30);
INSERT INTO item_table(3, 4, 'keyboard', 40);
INSERT INTO item_table(3, 5, 'camera', 50);
INSERT INTO item_table(3, 6, 'button', 60);
INSERT INTO item_table(3, 7, 'mouse', 70);
INSERT INTO item_table(3, 8, 'pen', 80);
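The per-order results that follow could be produced by pairing JSON_AGG over item_table with JSON_COMPOSE, roughly as sketched here. The order_table name and its columns (orderID, custName, price), the item_table column names, and the FORMAT string are assumptions for illustration:

```sql
SELECT JSON_Compose(O.custName AS customer,
                    O.orderID,
                    O.price (FORMAT '$(9).9(2)') AS price,
                    T.items)
FROM order_table AS O,
     (SELECT orderID,
             JSON_agg(itemID AS ID, itemName AS name, itemAmt AS amt)
      FROM item_table
      GROUP BY orderID) AS T(orderID, items)
WHERE O.orderID = T.orderID;
```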
Result:
JSON_Compose
------------
{
  "customer" : "Teradata",
  "orderID" : 1,
  "price" : "$1000.00",
  "items" :
  [
    { "ID" : 1, "name" : "disk", "amt" : 100 },
Result:
JSON_Compose
------------
{
  "customer" : "Teradata",
  "orders" :
  [
    {
      "orderID" : 1,
      "price" : 1000,
      "items" :
      [
        { "ID" : 1, "name" : "disk", "amt" : 100 },
        { "ID" : 2, "name" : "RAM", "amt" : 200 }
      ]
    },
    {
      "orderID" : 2,
      "price" : 10000,
      "items" :
      [
        { "ID" : 1, "name" : "disk", "amt" : 10 },
        { "ID" : 2, "name" : "RAM", "amt" : 20 },
        { "ID" : 3, "name" : "monitor", "amt" : 30 },
        { "ID" : 4, "name" : "keyboard", "amt" : 40 }
      ]
    }
  ]
}
{
  "customer" : "Apple",
  "orders" :
  [
    {
      "orderID" : 3,
      "price" : 100000,
      "items" :
      [
        { "ID" : 1, "name" : "disk", "amt" : 10 },
        { "ID" : 2, "name" : "RAM", "amt" : 20 },
        { "ID" : 3, "name" : "monitor", "amt" : 30 },
        { "ID" : 4, "name" : "keyboard", "amt" : 40 },
        { "ID" : 5, "name" : "camera", "amt" : 50 },
        { "ID" : 6, "name" : "button", "amt" : 60 },
        { "ID" : 7, "name" : "mouse", "amt" : 70 },
        { "ID" : 8, "name" : "pen", "amt" : 80 }
      ]
    }
  ]
}
APPENDIX A
Notation Conventions
Letter
An uppercase or lowercase alphabetic character ranging from A through Z.
Number
A digit ranging from 0 through 9.
Word
Keywords and variables.
UPPERCASE LETTERS represent a keyword.
lowercase letters represent a keyword that you must type as shown.
lowercase italic letters represent a variable such as a column or table name. Substitute the variable with a proper value.
lowercase bold letters represent an excerpt from the diagram. The excerpt is defined immediately following the diagram that contains it.
UNDERLINED LETTERS represent the default value.
This applies to both uppercase and lowercase words.
Spaces
Use one space between items such as keywords or variables.
Punctuation
Enter all punctuation exactly as it appears in the diagram.
Paths
The main path along the syntax diagram begins at the left with a keyword, and proceeds, left
to right, to the vertical bar, which marks the end of the diagram. Paths that do not have an
arrow or a vertical bar only show portions of the syntax.
The only part of a path that reads from right to left is a loop.
Continuation Links
Paths that are too long for one line use continuation links. Continuation links are circled
letters indicating the beginning and end of a link:
A
When you see a circled letter in a syntax diagram, go to the corresponding circled letter and
continue reading.
Required Entries
Required entries appear on the main path:
SHOW
If you can choose from more than one entry, the choices appear vertically, in a stack. The first
entry appears on the main path:
SHOW
CONTROLS
VERSIONS
Optional Entries
You may choose to include or disregard optional entries. Optional entries appear below the
main path:
SHOW
CONTROLS
If you can optionally choose from more than one entry, all the choices appear below the main
path:
READ
SHARE
ACCESS
Some commands and statements treat one of the optional choices as a default value. This
value is UNDERLINED. It is presumed to be selected if you type the command or statement
without specifying one of the options.
Strings
String literals appear in apostrophes:
'msgtext '
Abbreviations
If a keyword or a reserved word has a valid abbreviation, the unabbreviated form always
appears on the main path. The shortest valid abbreviation appears beneath.
CONTROLS
CONTROL
Loops
A loop is an entry or a group of entries that you can repeat one or more times. Syntax
diagrams show loops as a return path above the main path, over the item or items that you
can repeat:
In the example, cname is the repeatable entry; the separator character is a comma, the
minimum number of entries is 3, the maximum is 4, and the entries are delimited by
parentheses:
( cname [, cname ...] )
maximum number of entries allowed
The number appears in a circle on the return path. In the example, you may enter
cname a maximum of four times.
minimum number of entries allowed
The number appears in a square on the return path. In the example, you must enter
cname at least three times.
separator character required between entries
The character appears on the return path. If the diagram does not show a separator
character, use one blank space. In the example, the separator character is a comma.
delimiter character required around entries
The beginning and end characters appear outside the return path. Generally, a space
is not needed between delimiter characters and entries. In the example, the delimiter
characters are the left and right parentheses.
Excerpts
Sometimes a piece of a syntax phrase is too large to fit into the diagram. Such a phrase is
indicated by a break in the path, marked by (|) terminators on each side of the break. The
name for the excerpted piece appears between the terminators in boldface type.
The boldface excerpt name and the excerpted phrase appear immediately after the main
diagram. The excerpted phrase starts and ends with a plain horizontal line.
Teradata JSON
133
Character Symbols
The symbols, along with the character sets with which they are used, are defined in the
following table.

Symbol           Encoding      Meaning
------           --------      -------
a-z, A-Z, 0-9    Any           Any single byte or multibyte character.
<                KanjiEBCDIC   Shift Out [SO] (0x0E). Indicates the transition from single
                               byte to multibyte character data.
>                KanjiEBCDIC   Shift In [SI] (0x0F). Indicates the transition from multibyte
                               to single byte character data.
ss2              KanjiEUC      Second single shift character (0x8E).
ss3              KanjiEUC      Third single shift character (0x8F).
For example, the string TEST, where each letter is intended to be a fullwidth character, is
written as ＴＥＳＴ. Occasionally, when encoding is important, hexadecimal representation is
used.
For example, the following mixed single byte/multibyte character data in the KanjiEBCDIC
character set
LMN<ＴＥＳＴ>QRS
is represented as:
D3 D4 D5 0E 42E3 42C5 42E2 42E3 0F D8 D9 E2
Pad Characters
The following table lists the pad characters for the various character data types.
Character Data Type   Pad Character Name   Pad Character Value
-------------------   ------------------   -------------------
LATIN                 SPACE                0x20
UNICODE               SPACE                U+0020
GRAPHIC               IDEOGRAPHIC SPACE    U+3000
KANJISJIS             ASCII SPACE          0x20
KANJI1                ASCII SPACE          0x20
APPENDIX B
The following table lists the data type codes for the JSON type.

NULL Property             Parameter Type
Non-nullable   Nullable   IN     INOUT   OUT
880            881        1380   1381    1382
884            885        1384   1385    1386
888            889        1388   1389    1390
These codes are sent from the server to the client, and are accepted by the server from the
client in the parcels described in the following sections. The only restriction is that the JSON
type may not be used in the USING clause. VARCHAR and CLOB can be used in the USING
clause instead, and when necessary, this data is implicitly cast to the JSON type.
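For instance (a sketch; the table and column names jsontab and jsoncol match the StatementInfo example later in this appendix), data bound through a USING clause as VARCHAR is implicitly cast on insert into a JSON column:

```sql
USING (j VARCHAR(32000))
INSERT INTO test_db.jsontab (jsoncol) VALUES (:j);
```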
SQLCap_Feature;
SQLCap_Length;
SQLCap_UPSERT;
SQLCap_ArraySupport;
padbyte_boolean;
SQLCap_JSON;
-Database-limit
-Maximum parcel size
-32767
-65535
UNUSED
-Maximum segment count
-Segments not supported
-Maximum segment size
-Max. avail. bytes in perm row
StatementInfo Parcel
The following fields of the StatementInfo Parcel contain information relevant to a particular
instance of the JSON data type and show typical values that are expected for the JSON data
type:
Data Type Code = JSON Data Type, according to the table in the Data Type Encoding
section.
UDT indicator = 0 (JSON data type is treated as a system built-in type)
Fully qualified type name length = 0
Fully qualified type name = ""
Field Size = the maximum possible length in bytes for this particular JSON instance
Character Set Code = 1 or 2, depending on the character set for this particular JSON
instance
Maximum number of characters = the number of characters of the JSON column (the same
as the number in the column definition)
Case Sensitive: 'Y' (JSON is case specific)
When executing the SELECT statement, the StatementInfo parcel looks like the following:
Database Name            test_db
Table/View Name          jsontab
Column Name              jsoncol
Column Index
As Name
Title                    jsoncol
Format
Default Value
Is Identity Column
Is Definitely Writable
Is Nullable
Is Searchable
Is Writable
UDT Indicator
UDT Name
UDT Misc
Digits
Interval Digits
Fractional Digits
543000
Is CaseSensitive
Is Signed
Is Key Column
Is Unique
Is Expression
Is Sortable
Parameter Type
Struct Depth
Is Temporal Column