App Dev
MarkLogic 10
May, 2019
Table of Contents
5.0 Importing XQuery Modules, XSLT Stylesheets, and Resolving Paths 86
5.1 XQuery Library Modules and Main Modules 86
5.1.1 Main Modules 86
5.1.2 Library Modules 87
5.2 Rules for Resolving Import, Invoke, and Spawn Paths 87
5.3 Module Caching Notes 89
5.4 Example Import Module Scenario 90
16.0 Creating an Interpretive XQuery Rewriter to Support REST Web Services 201
16.1 Terms Used in this Chapter 201
16.2 Overview of the REST Library 202
16.3 A Simple XQuery Rewriter and Endpoint 203
16.4 Notes About Rewriter Match Criteria 205
16.5 The options Node 207
16.6 Validating options Node Elements 209
16.7 Extracting Multiple Components from a URL 210
16.8 Handling Errors 212
16.9 Handling Redirects 212
16.10 Handling HTTP Verbs 214
16.10.1 Handling OPTIONS Requests 215
16.10.2 Handling POST Requests 217
16.11 Defining Parameters 218
16.11.1 Parameter Types 219
16.11.2 Supporting Parameters Specified in a URL 219
16.11.3 Required Parameters 220
16.11.4 Default Parameter Value 220
16.11.5 Specifying a List of Values 221
16.11.6 Repeatable Parameters 221
16.11.7 Parameter Key Alias 221
16.11.8 Matching Regular Expressions in Parameters with the match and pattern Attributes 222
16.12 Adding Conditions 224
16.12.1 Authentication Condition 225
16.12.2 Accept Headers Condition 225
16.12.3 User Agent Condition 225
16.12.4 Function Condition 226
16.12.5 And Condition 226
16.12.6 Or Condition 227
16.12.7 Content-Type Condition 227
16.13 Preparing to Run the Examples 227
16.13.1 Load the Example Data 227
16.13.2 Create the Example App Server 228
17.0 Creating a Declarative XML Rewriter to Support REST Web Services 230
17.1 Overview of the XML Rewriter 230
17.2 Configuring an App Server to use the XML Rewriter 231
17.3 Input and Output Contexts 231
17.3.1 Input Context 232
17.3.2 Output Context 233
17.4 Regular Expressions (Regex) 234
This chapter describes application development in MarkLogic Server in general terms, and
includes the following sections:
This Application Developer’s Guide provides general information about creating applications
using MarkLogic Server. For information about developing search applications using the powerful
XQuery search features of MarkLogic Server, see the Search Developer’s Guide.
This Application Developer’s Guide focuses primarily on techniques, design patterns, and
concepts needed to use XQuery or Server-Side JavaScript to build content and search applications
in MarkLogic Server. If you are using the Java Client API, Node.js Client API, or the REST APIs,
some of the concepts in this guide might also be helpful, but see the guides about those APIs for
more specific guidance. For information about developing applications with the Java Client API,
see the Java Application Developer’s Guide. For information about developing applications with
the REST API, see REST Application Developer’s Guide. For information about developing
applications with the Node.js Client API, see Node.js Application Developer’s Guide.
Application development in MarkLogic Server typically draws on the following skills:
• Web development skills (XHTML, HTTP, cross-browser issues, CSS, JavaScript, and so
on), especially if you are developing applications that run on an HTTP App Server.
• Overall understanding and knowledge of XML.
• XQuery skills. To get started with XQuery, see the XQuery and XSLT Reference Guide.
• JavaScript skills. For information about Server-Side JavaScript in MarkLogic, see the
JavaScript Reference Guide.
• Understanding of search engines and full-text queries.
• Java, if you are using the Java Client API or XCC to develop applications. For details, see
the Java Application Developer’s Guide or the XCC Developer’s Guide.
• Node.js, if you use the Node.js Client to develop applications. For more details, see the
Node.js Application Developer’s Guide.
• General application development techniques, such as solidifying application requirements,
source code control, and so on.
• If you will be deploying large-scale applications, administration and operations techniques
such as creating and managing large filesystems, managing multiple machines, network
bandwidth issues, and so on.
• For information about installing and upgrading MarkLogic Server, see the Installation
Guide. Additionally, for a list of new features and any known incompatibilities with other
releases, see the Release Notes.
• For information about creating databases, forests, App Servers, users, privileges, and so
on, see the Administrator’s Guide.
• For information on how to use security in MarkLogic Server, see Security Guide.
• For information on creating pipeline processes for document conversion and other
purposes, see Content Processing Framework Guide.
• For syntax and usage information on individual XQuery functions, including the XQuery
standard functions, the MarkLogic Server built-in extension functions for updates, search,
HTTP server functionality, and other XQuery library functions, see the MarkLogic
XQuery and XSLT Function Reference.
• For information about Server-Side JavaScript in MarkLogic, see the JavaScript Reference
Guide.
• For information on using XCC to access content in MarkLogic Server from Java, see the
XCC Developer’s Guide.
• For information on how languages affect searches, see Language Support in MarkLogic
Server in the Search Developer’s Guide. It is important to understand how languages
affect your searches regardless of the language of your content.
• For information about developing search applications, see the Search Developer’s Guide.
• For information on what constitutes a transaction in MarkLogic Server, see
“Understanding Transactions in MarkLogic Server” on page 28 in this Application
Developer’s Guide.
• For other developer topics, review the contents for this Application Developer’s Guide.
• For performance-related issues, see the Query Performance and Tuning Guide.
MarkLogic Server has the concept of a schema database. The schema database stores schema
documents that can be shared across many different databases within the same MarkLogic Server
cluster. This chapter introduces the basics of loading schema documents into MarkLogic Server,
and includes the following sections:
For more information about configuring schemas in the Admin Interface, see the “Understanding
and Defining Schemas” chapter of the Administrator’s Guide.
Every document database that is created references both a schema database and a security
database. By default, when a new database is created, it automatically references Schemas as its
schema database. In most cases, this default configuration (shown in the following figure) will be
correct:
In other cases, it may be desirable to configure your database to reference a different schema
database. For example, you may need two different databases to reference different versions of
the same schema using a common schema name. In these situations, simply select the database
you want to use in place of the default Schemas database from the drop-down schema database
menu. Any database in the system can be used as a schema database.
In select cases, it may be efficient to configure your database to reference itself as the schema
database. This is a perfectly acceptable configuration which can be set up through the same
drop-down menu. In these situations, a single database stores both content and schema relevant to
a set of applications.
Note: To create a database that references itself as its schema database, you must first
create the database in a configuration that references the default Schemas database.
Once the new database has been created, you can change its schema database
configuration to point to itself using the drop-down menu.
This makes loading schemas slightly tricky. Because the system looks in the schema database
referenced by the current document database when requesting schema documents, you need to
make sure that the schema documents are loaded into the current database's schema database
rather than into the current document database.
There are several ways to load documents into a schema database:
1. You can use the Admin Interface’s load utility to load schema documents directly into a
schema database. Go to the Database screen for the schema database into which you want
to load documents. Select the load tab at top-right and proceed to load your schema as you
would load any other document.
2. You can create an XQuery program that uses the xdmp:eval built-in function, specifying
the <database> option to load a schema directly into the current database’s schema
database:
xdmp:eval('xdmp:document-load("sample.xsd")', (),
<options xmlns="xdmp:eval">
<database>{xdmp:schema-database()}</database>
</options>)
3. You can create an XDBC or HTTP Server that directly references the schema database in
question as its document database, and then use any document insertion function to load
one or more schemas into that schema database. This approach is not necessary in most cases.
4. You can create a WebDAV Server that references the Schemas database and then
drag-and-drop schema documents in using a WebDAV client.
1. If a schema with a matching target namespace is not found, a schema is not used in
processing the document.
2. If one matching schema is found, that schema is used for processing the document.
3. If there is more than one matching schema in the schema database, a schema is selected
based on the following precedence rules, in the order listed:
a. If the xsi:schemaLocation or xsi:noNamespaceSchemaLocation attribute of the document
root element specifies a schema document, that schema is used.
b. If there is an import schema prolog expression with a matching target namespace, the
schema with the specified URI is used. Note that if the target namespace of the import
schema expression and that of the schema document referenced by that expression do not
match, the import schema expression is not applied.
c. If there is a schema with a matching namespace configured within the current HTTP or
XDBC Server's Schema panel, that schema is used. Note that if the target namespace
specified in the configuration panel does not match the target namespace of the schema
document, the Admin Interface schema configuration information is not used.
d. If none of these rules apply, the server uses the first schema that it finds. Given that
document ordering within the database is not defined, this is not generally a predictable
selection mechanism, and is not recommended.
Schemas are treated just like any other document by the system. They can be inserted, read,
updated and deleted just like any other document. The difference is that schemas are usually
stored in a secondary schema database, not in the document database itself.
The most common activity developers want to carry out with schema is to read them. There are
two approaches to fetching a schema from the server explicitly:
1. You can create an XQuery that uses xdmp:eval with the <database> option to read a
schema directly from the current database’s schema database. For example, the following
expression will return the schema document loaded in the code example given above:
xdmp:eval('doc("sample.xsd")', (),
<options xmlns="xdmp:eval">
<database>{xdmp:schema-database()}</database>
</options>)
The use of the xdmp:schema-database built-in function ensures that the sample.xsd
document is read from the current database’s schema database.
2. You can create an XDBC or HTTP Server that directly references the schema database in
question as its document database, and then submit any XQuery as appropriate to read,
analyze, update or otherwise work with the schemas stored in that schema database. This
approach is not necessary in most instances.
Other tasks that involve working with schema can be accomplished similarly. For example, if you
need to delete a schema, an approach modeled on either of the above (using
xdmp:document-delete("sample.xsd")) will work as expected.
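For example, the following sketch deletes the sample schema from the current database’s
schema database:

xdmp:eval('xdmp:document-delete("sample.xsd")', (),
  <options xmlns="xdmp:eval">
    <database>{xdmp:schema-database()}</database>
  </options>)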
Schematron is an open source project hosted on GitHub and licensed under the MIT license.
MarkLogic supports the latest version of Schematron, the "skeleton" XSLT implementation of
ISO Schematron. See the Schematron XQuery and JavaScript API reference documentation for
more information.
The open source XSLT based Schematron implementation can be found at:
https://github.com/Schematron/schematron.
1. Open Query Console, and use the following query to insert the example schema document
into the Schemas database.
Note: The queryBinding="xslt2" attribute in the schema file directs Schematron to make
use of the XSLT 2.0 engine.
xdmp:document-insert("/userSchema.sch",
<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron"
queryBinding="xslt2" schemaVersion="1.0">
<sch:title>user-validation</sch:title>
<sch:phase id="phase1">
<sch:active pattern="structural"></sch:active>
</sch:phase>
<sch:phase id="phase2">
<sch:active pattern="co-occurence"></sch:active>
</sch:phase>
<sch:pattern id="structural">
<sch:rule context="user">
<sch:assert test="@id">user element must have an id
attribute</sch:assert>
<sch:assert test="count(*) = 5">
user element must have 5 child elements: name, gender,
age, score and result
</sch:assert>
<sch:assert test="score/@total">score element must have a total
attribute</sch:assert>
<sch:assert test="score/count(*) = 2">score element must have two
child elements</sch:assert>
</sch:rule>
</sch:pattern>
<sch:pattern id="co-occurence">
<sch:rule context="score">
<sch:assert test="@total = test-1 + test-2">
total score must be a sum of test-1 and test-2 scores
</sch:assert>
<sch:assert test="(@total gt 30 and ../result = 'pass') or
(@total le 30 and ../result = 'fail')" diagnostics="d1">
if the score is greater than 30 then the result will be
'pass' else 'fail'
</sch:assert>
</sch:rule>
</sch:pattern>
<sch:diagnostics>
<sch:diagnostic id="d1">the score does not match with the
result</sch:diagnostic>
</sch:diagnostics>
</sch:schema>)
2. Switch Query Console to the Documents database and use the following schematron:put
query to compile the userSchema.sch Schematron document and insert the generated
validator XSLT into the Modules database.
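The Schematron library module must be imported first; the namespace URI and module path
below are assumptions based on a typical MarkLogic installation:

xquery version "1.0-ml";
import module namespace schematron = "http://marklogic.com/xdmp/schematron"
  at "/MarkLogic/schematron/schematron.xqy";

schematron:put("/userSchema.sch")

3. Insert the following sample document, then validate it with schematron:validate: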
xdmp:document-insert("user001.xml",
<user id="001">
<name>Alan</name>
<gender>Male</gender>
<age>14</age>
<score total="90">
<test-1>50</test-1>
<test-2>40</test-2>
</score>
<result>fail</result>
</user>)
schematron:validate(fn:doc("user001.xml"),
schematron:get("/userSchema.sch"))
If you want to validate a document before loading it, you can do so by first getting the node for
the document, validating the node, and then inserting it into the database. For example:
(:
this will validate against the schema if it is in scope, but
will validate it without a schema if there is no in-scope schema
:)
let $node := xdmp:document-get("c:/tmp/test.xml")
return
try { xdmp:document-insert("/my-valid-document.xml",
validate lax { $node } )
}
catch ($e) { "Validation failed: ",
$e/error:format-string/text() }
The following uses strict validation and imports the schema from which it validates:
(:
this will validate against the specified schema, and will fail
if the schema does not exist (or if it is not valid according to
the schema)
:)
let $node := xdmp:document-get("c:/tmp/test.xml")
return
try { xdmp:document-insert("/my-valid-document.xml",
validate strict { $node } )
}
catch ($e) { "Validation failed: ",
$e/error:format-string/text() }
For example, suppose the following schema is stored in your schemas database with the URI
/schemas/example.json:
{
"language": "zxx",
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"count": { "type":"integer", "minimum":0 },
"items": { "type":"array",
"items": {"type":"string", "minLength":1 } }
}
}
You can validate the following node against the example.json schema as follows:
xdmp:json-validate(
object-node{ "count": 3, "items": array-node{12} },
"/schemas/example.json" )
You can also use the xdmp:json-validate-node function to validate JSON documents against ad
hoc schema nodes. For example:
xdmp:json-validate-node(
object-node{ "count": 3, "items": array-node{12} },
object-node{
"properties": object-node{
"count": object-node{ "type":"integer", "minimum":0 },
"items": object-node{ "type":"array",
"items": object-node{"type":"string", "minLength":1 }
}
}
}
)
MarkLogic Server is a transactional system that ensures data integrity. This chapter describes the
transaction model of MarkLogic Server, and includes the following sections:
• Commit Mode
• Transaction Type
• Transaction Mode
• Administering Transactions
• Transaction Examples
For additional information about using multi-statement and XA/JTA transactions from XCC Java
applications, see the XCC Developer’s Guide.
update statement — A statement with the potential to perform updates (that is, it contains
one or more update calls).

transaction — A set of one or more statements which either all fail or all succeed.

transaction mode — Controls the transaction type and the commit semantics of newly
created transactions. If you need to control the transaction type and/or commit semantics of
a transaction, set them individually, rather than setting transaction mode. For details, see
“Transaction Mode” on page 56.

update transaction — A transaction that can perform updates (make changes to the
database). A transaction consisting of a single update statement in auto commit mode, or
any transaction created with update transaction type.

commit — End a transaction and make the changes made by the transaction visible in the
database. Single-statement transactions are automatically committed upon successful
completion of the statement. Multi-statement transactions are explicitly committed using
xdmp:commit, but the commit only occurs if and when the calling statement successfully
completes.

system timestamp — A number maintained by MarkLogic Server that increases every time
a change or a set of changes occurs in any of the databases in a system (including
configuration changes from any host in a cluster). Each fragment stored in a database has
system timestamps associated with it to determine the range of timestamps during which
the fragment is valid.

readers/writers locks — A set of read and write locks that lock documents for reading and
update at the time the documents are accessed.
An application can use either or both transaction models. Single-statement transactions are
suitable for most applications. Multi-statement transactions are powerful, but introduce more
complexity to your application. Focus on the concepts that match your chosen transactional
programming model.
In addition to being single or multi-statement, transactions are typed as either update or query.
The transaction type determines what operations are permitted and if, when, and how locks are
acquired. By default, MarkLogic automatically detects the transaction type, but you can also
explicitly specify the type.
The transactional model (single or multi-statement), commit mode (auto or explicit), and the
transaction type (auto, query, or update) are fixed at the time a transaction is created. For example,
if a block of code is evaluated by an xdmp:eval (XQuery) or xdmp.eval (JavaScript) call using
same-statement isolation, then it runs in the caller’s transaction context, so the transaction
configuration is fixed by the caller, even if the called code attempts to change the settings.
The default transaction semantics vary slightly between XQuery and Server-Side JavaScript. The
default behavior for each language is shown below, along with information about changing the
behavior. For details, see “Transaction Type” on page 38.

XQuery: single-statement, auto-commit, with auto-detection of transaction type. Use the update
prolog option to explicitly set the transaction type to auto, update, or query. Use the commit
option to set the commit mode to auto (single-statement) or explicit (multi-statement). Similar
controls are available through options on functions such as xdmp:eval and xdmp:invoke.

JavaScript: single-statement, auto-commit, with query transaction type assumed. Call
declareUpdate to enable updates; declareUpdate({explicitCommit: true}) switches to explicit
(multi-statement) commit mode. For details, see “Controlling Transaction Type in JavaScript”
on page 42.
A statement can be either a query statement (read only) or an update statement. In XQuery, the
first (or only) statement type determines the transaction type unless you explicitly set the
transaction type. The statement type is determined through static analysis. In JavaScript, query
statement type is assumed unless you explicitly set the transaction to update.
In the context of transactions, a “statement” has different meanings for XQuery and JavaScript.
For details, see “Understanding Statement Boundaries” on page 33.
If you evaluate the following code as a multi-statement transaction, both statements would
execute in the same transaction; depending on the evaluation context, the transaction might
remain open or be rolled back at the end of the code, since there is no explicit commit.
'use strict';
declareUpdate();
xdmp.documentInsert('/some/uri/doc.json', {property: 'value'});
console.log('I did something!');
// end of module
By default, the above code executes in a single transaction that completes at the end of the script.
If you evaluate this code in the context of a multi-statement transaction, the transaction remains
open after completion of the script.
Updates made by a statement are not visible until the statement (transaction) completes. (See
Update Transactions: Readers/Writers Locks.)

Setting the commit mode to explicit always creates a multi-statement, explicit-commit
transaction. (See Single vs. Multi-statement Transactions and Multi-Statement, Explicitly
Committed Transactions.)
The default behavior for a single-statement transaction is auto commit, which means MarkLogic
commits the transaction at the end of a statement, as defined in “Understanding Statement
Boundaries” on page 33.
Explicit commit mode is intended for multi-statement transactions. In this mode, you must
explicitly commit the transaction by calling xdmp:commit (XQuery) or xdmp.commit (JavaScript),
or explicitly roll back the transaction by calling xdmp:rollback (XQuery) or xdmp.rollback
(JavaScript). This enables you to leave a transaction open across multiple statements or requests.
The following functions support commit and update options that enable you to control the commit
mode (explicit or auto) and transaction type (update, query, or auto). For details, see the function
reference for xdmp:eval or xdmp.eval.
XQuery JavaScript
xdmp:eval xdmp.eval
xdmp:javascript-eval xdmp.xqueryEval
xdmp:invoke xdmp.invoke
xdmp:invoke-function xdmp.invokeFunction
xdmp:spawn xdmp.spawn
xdmp:spawn-function
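For example, the following call (a sketch; the document URI is illustrative) evaluates its code in
a separate multi-statement update transaction by setting both options:

xdmp:eval(
  'xdmp:document-insert("/docs/opts.xml", <data/>), xdmp:commit()', (),
  <options xmlns="xdmp:eval">
    <commit>explicit</commit>
    <update>true</update>
  </options>)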
Update transactions and statements can perform both query and update operations. Query
transactions and statements are read-only and may not attempt update operations. A query
transaction can contain an update statement, but an error is raised if that statement attempts an
update operation at runtime; for an example, see “Query Transaction Mode” on page 59.
• Auto: (XQuery only) MarkLogic determines the transaction type through static analysis of
the first (or only) statement in the transaction. Auto is the default behavior in XQuery.
• Explicit: Your code explicitly specifies the transaction type as update or query through an
option, a call to xdmp:set-transaction-mode (XQuery) or xdmp.setTransactionMode
(JavaScript), or by calling declareUpdate (JavaScript only).
For more details, see “Controlling Transaction Type in XQuery” on page 39 or “Controlling
Transaction Type in JavaScript” on page 42.
Query transactions use a system timestamp to access a consistent snapshot of the database at a
particular point in time, rather than using locks. Update transactions use readers/writers locks. See
“Query Transactions: Point-in-Time Evaluation” on page 44 and “Update Transactions:
Readers/Writers Locks” on page 45.
The following summarizes the interactions between transaction types, statements, and locking
behavior; these interactions apply to both single-statement and multi-statement transactions:
• Query transaction, query statement: runs lock-free, at the transaction timestamp.
• Query transaction, update statement: allowed, but raises XDMP-UPDATEFUNCTIONFROMQUERY
if an update operation is actually attempted at runtime.
• Update transaction, query or update statement: acquires readers/writers locks as documents
are accessed.
Use the xdmp:update prolog option when you need to set the transaction type before the first
transaction is created, such as at the beginning of a main module. For example, the following code
runs as a multi-statement update transaction because of the prolog options:
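xquery version "1.0-ml";
declare option xdmp:update "true";
declare option xdmp:commit "explicit";

(: a sketch; the URI is illustrative. Both statements run in one update transaction :)
xdmp:document-insert("/docs/update-option.xml", <data/>);
xdmp:commit()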
For more details, see xdmp:update and xdmp:commit in the XQuery and XSLT Reference Guide.
Setting transaction mode with xdmp:set-transaction-mode affects both the commit semantics
(auto or explicit) and the transaction type (auto, query, or update). Setting the transaction mode in
the middle of a transaction does not affect the current transaction. Setting the transaction mode
affects the transaction creation semantics for the entire session.
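For example (a sketch: each transaction’s type is inferred from xdmp:request-timestamp, which
returns empty only in update transactions):

xquery version "1.0-ml";
declare option xdmp:transaction-mode "update";

(: transaction 1 is created in update mode, per the prolog option :)
fn:concat("ExampleTransaction-1: ",
  if (fn:empty(xdmp:request-timestamp())) then "update" else "query"),
xdmp:set-transaction-mode("query"), (: applies only to transactions created later :)
xdmp:commit();

(: transaction 2 is created in query mode, per xdmp:set-transaction-mode :)
fn:concat("ExampleTransaction-2: ",
  if (fn:empty(xdmp:request-timestamp())) then "update" else "query"),
xdmp:commit()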
If you paste the above example into Query Console, and run it with results displayed as text, you
see the first transaction runs in update mode, as specified by xdmp:transaction-mode, and the
second transaction runs in query mode, as specified by xdmp:set-transaction-mode:
ExampleTransaction-1: update
ExampleTransaction-2: query
You can include multiple option declarations and calls to xdmp:set-transaction-mode in your
program, but the settings are only considered at transaction creation. A transaction is implicitly
created just before evaluating the first statement. For example:
xquery version "1.0-ml";
declare option xdmp:transaction-mode "update";

(: begin transaction :)
"this is an update transaction";
xdmp:set-transaction-mode("query"), (: only affects transactions created later :)
xdmp:commit();
(: end transaction :)

(: begin transaction :)
"this is a query transaction";
xdmp:commit();
(: end transaction :)
The following functions support commit and update options that enable you to control the commit
mode (explicit or auto) and transaction type (update, query, or auto). For details, see the function
reference for xdmp:eval or xdmp.eval.
XQuery JavaScript
xdmp:eval xdmp.eval
xdmp:javascript-eval xdmp.xqueryEval
xdmp:invoke xdmp.invoke
xdmp:invoke-function xdmp.invokeFunction
xdmp:spawn xdmp.spawn
xdmp:spawn-function
• Use the declareUpdate function to set the transaction type to update and/or specify the
commit semantics, or
• Set the update option in the options node passed to functions such as xdmp.eval,
xdmp.invoke, or xdmp.spawn; or
• Call xdmp.setTransactionMode prior to creating transactions that will run in that mode.
MarkLogic cannot use static analysis to determine whether or not JavaScript code performs
updates. If your JavaScript code makes updates, one of the following requirements must be met:
• You call the declareUpdate function to indicate your code will make updates.
• The caller of your code sets the transaction type to one that permits updates.
Calling declareUpdate with no arguments is equivalent to auto commit mode and update
transaction type. This means the code can make updates and runs as a single-statement
transaction. The updates are automatically committed when the JavaScript code completes.
You can also pass an explicitCommit option to declareUpdate, as shown below. The default value
of explicitCommit is false.
declareUpdate({explicitCommit: boolean});
If you set explicitCommit to true, then your code starts a new multi-statement update transaction.
You must explicitly commit or rollback the transaction, either before returning from your
JavaScript code or in another context, such as the caller of your JavaScript code or another request
executing in the same transaction.
For example, you might use explicitCommit to start a multi-statement transaction in an ad-hoc
query request through XCC, and then subsequently commit the transaction through another
request.
If the caller sets the transaction type to update, then your code is not required to call
declareUpdate in order to perform updates. If you do call declareUpdate in this situation, then the
resulting mode must not conflict with the mode set by the caller.
For more details, see declareUpdate Function in the JavaScript Reference Guide.
• Your code is called via an eval/invoke function such as the XQuery function
xdmp:javascript-eval or the JavaScript functions xdmp.eval, and the caller specifies the
commit, update, or transaction-mode option.
• Your code is a server-side import transformation for use with the mlcp command line tool.
• Your code is a server-side transformation, extension, or other customization called by the
Java, Node.js, or REST Client APIs. The pre-set mode depends on the operation which
causes your code to run.
• Your code runs in the context of an XCC session where the client sets the commit mode
and/or transaction type.
The following functions support commit and update options that enable you to control the commit
mode (explicit or auto) and transaction type (update, query, or auto). For details, see the function
reference for xdmp:eval (XQuery) or xdmp.eval (JavaScript).
XQuery JavaScript
xdmp:eval xdmp.eval
xdmp:javascript-eval xdmp.xqueryEval
xdmp:invoke xdmp.invoke
xdmp:invoke-function xdmp.invokeFunction
xdmp:spawn xdmp.spawn
xdmp:spawn-function
Each fragment in a stand has system timestamps associated with it, which correspond to the range
of system timestamps in which that version of the fragment is valid. When a document is updated,
the update process creates new versions of any fragments that are changed. The new versions of
the fragments are stored in a new stand and have a new set of valid system timestamps associated
with them. Eventually, the system merges the old and new stands together and creates a new stand
with only the latest versions of the fragments. Point-in-time queries also affect which versions of
fragments are stored and preserved during a merge. After the merge, the old stands are deleted.
The range of valid system timestamps associated with fragments are used when a statement
determines which version of a document to use during a transaction. For more details about
merges, see Understanding and Controlling Database Merges in the Administrator’s Guide. For more
details on how point-in-time queries affect which versions of documents are stored, see
“Point-In-Time Queries” on page 139.
• Visibility of Updates
Depending on the specific logic of the transaction, it might not actually update anything, but a
transaction that MarkLogic determines to be an update transaction always runs as an update
transaction, not a query transaction.
For example, the following transaction runs as an update transaction even though the
xdmp:document-insert can never occur:
if ( 1 = 2 )
then ( xdmp:document-insert("fake.xml", <a/>) )
else ()
In a multi-statement transaction, the transaction type always corresponds to the transaction type
settings in effect when the transaction is created. If the transaction type is explicitly set to update,
then the transaction is an update transaction, even if none of the contained statements perform
updates. Locks are acquired for all statements in an update transaction, whether or not they
perform updates.
Similarly, if you use auto-detect mode and MarkLogic determines the first statement in a
multi-statement transaction is a query statement, then the transaction is created as a query
transaction. If a subsequent statement in the transaction attempts an update operation, MarkLogic
throws an exception.
Calls to xdmp:request-timestamp always return the empty sequence during an update transaction;
that is, if xdmp:request-timestamp returns a value, the transaction is a query transaction, not an
update transaction.
Once a lock is acquired, it is held until the transaction ends. This prevents other transactions from
updating the read locked document and ensures a read-consistent view of the document. Query
(read) operations require read locks. Update operations require readers/writers locks.
If the same example is rewritten as a multi-statement transaction, locks are held across all three
statements:
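xquery version "1.0-ml";
declare option xdmp:commit "explicit";
declare option xdmp:update "true";

(: a sketch; the URI and operations are illustrative :)
fn:doc("/docs/lock.xml");           (: statement 1: read lock acquired and held :)
xdmp:node-replace(fn:doc("/docs/lock.xml")/data,
                  <data>new</data>);  (: statement 2: write lock acquired :)
fn:doc("/docs/lock.xml"),           (: statement 3: sees the update :)
xdmp:commit()                       (: locks released at commit :)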
In the default single-statement transaction model, the commit occurs automatically when the
statement completes. To use a newly updated document, you must separate the update and the
access into two single-statement transactions or use multi-statement transactions.
In a multi-statement transaction, changes made by one statement in the transaction are visible to
subsequent statements in the same transaction as soon as the updating statement completes.
Changes are not visible outside the transaction until you call xdmp:commit.
An update statement cannot perform an update to a document that will conflict with other updates
occurring in the same statement. For example, you cannot update a node and add a child element
to that node in the same statement. An attempt to perform such conflicting updates to the same
document in a single statement will fail with an XDMP-CONFLICTINGUPDATES exception.
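For example, assuming /docs/conflict.xml already exists with an <a/> root element, the
following statement raises XDMP-CONFLICTINGUPDATES because it updates a node and adds a child
to the same node in one statement:

xdmp:node-replace(fn:doc("/docs/conflict.xml")/a, <a>new</a>),
xdmp:node-insert-child(fn:doc("/docs/conflict.xml")/a, <b/>)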
The timeline: T1 (update) updates doc.xml, starting at timestamp 10 and committing at
timestamp 40. T2 (query) reads doc.xml at timestamp 20 and sees the version from before T1’s
update. T3 (update) also updates doc.xml; it waits for T1 to commit, then commits based on the
new version at timestamp 41 or later.
Assume T1 is a long-running update transaction which starts when the system is at timestamp 10
and ends up committing at timestamp 40 (meaning there were 30 updates or other changes to the
system while this update statement runs).
When T2 reads the document being updated by T1 (doc.xml), it sees the latest version that has a
system timestamp of 20 or less, which turns out to be the same version T1 uses before its update.
When T3 tries to update the document, it finds that T1 has readers/writers locks on it, so it waits
for them to be released. After T1 commits and releases the locks, then T3 sees the newly updated
version of the document, and performs its update which is committed at a new timestamp of 41.
In a single statement transaction, updates made by a statement are not visible outside the
statement until the statement completes and the transaction is committed.
The single-statement model is suitable for most applications. This model requires less familiarity
with transaction details and introduces less complexity into your application:
Note: In Server-Side JavaScript, you need to use the declareUpdate() function to run an
update. For details, see “Controlling Transaction Type in JavaScript” on page 42.
• Sessions
For details on setting the transaction type and commit mode, see “Transaction Type” on page 38.
For additional information about using multi-statement transactions in Java, see “Multi-Statement
Transactions” in the XCC Developer’s Guide.
The statements in a multi-statement transaction are serialized, even if they run in different
requests. That is, one statement in the transaction completes before another one starts, even if the
statements execute in different requests.
The following example contains 3 multi-statement transactions (because of the use of the commit
prolog option). The first transaction is explicitly committed, the second is explicitly rolled back,
and the third is implicitly rolled back when the session ends without a commit or rollback call.
Running the example in Query Console is equivalent to evaluating it using xdmp:eval with
different transaction isolation, so the final transaction rolls back when the end of the query is
reached because the session ends. For details about multi-statement transaction interaction with
sessions, see “Sessions” on page 53.
xquery version "1.0-ml";
declare option xdmp:commit "explicit";
xdmp:document-insert('/docs/mst1.xml', <data/>);
xdmp:document-insert('/docs/mst2.xml', fn:doc('/docs/mst1.xml'));
xdmp:commit();
(: transaction 1 ends and its updates become visible; transactions 2 (xdmp:rollback) and 3 (no commit) are omitted here :)
Instead of acquiring locks, a multi-statement query transaction uses a system timestamp to give all
statements in the transaction a read consistent view of the database, as discussed in “Query
Transactions: Point-in-Time Evaluation” on page 44. The system timestamp is determined when
the query transaction is created, so all statements in the transaction see the same version of
accessed documents.
Once updates are committed, the transaction ends and evaluation of the next statement continues
in a new transaction. For example:
xquery version "1.0-ml";
declare option xdmp:commit "explicit"; (: multi-statement transactions :)
(: Begin transaction 1 :)
xdmp:document-insert('/docs/mst1.xml', <data/>);
(: This statement runs in the same txn, so sees /docs/mst1.xml :)
xdmp:document-insert('/docs/mst2.xml', fn:doc('/docs/mst1.xml'));
xdmp:commit();
(: Transaction ends, updates visible in database :)
Calling xdmp:commit commits updates and ends the transaction only after the calling statement
successfully completes. This means updates can be lost even after calling xdmp:commit, if an error
occurs before the committing statement completes. For this reason, it is best practice to call
xdmp:commit at the end of a statement.
The following example preserves updates even in the face of error, because the statement calling
xdmp:commit always completes:
(: transaction created :)
xdmp:document-insert("not-lost.xml", <data/>)
, xdmp:commit();
fn:error(xs:QName("EXAMPLE-ERROR"), "An error occurs here");
(: end of session or program :)
By contrast, the update in this example is lost because the error occurring in the same statement as
the xdmp:commit call prevents successful completion of the committing statement:
(: transaction created :)
xdmp:document-insert("lost.xml", <data/>)
, xdmp:commit()
, fn:error(xs:QName("EXAMPLE-ERROR"), "An error occurs here");
(: end of session or program :)
xdmp:document-insert("/docs/test.xml", <a>hello</a>);
try {
xdmp:document-delete("/docs/nonexistent.xml")
} catch ($ex) {
(: handle error or rethrow :)
if ($ex/error:code eq 'XDMP-DOCNOTFOUND') then ()
else xdmp:rethrow()
}, xdmp:commit();
(: start of a new txn :)
fn:doc("/docs/test.xml")//a/text()
The result of a statement terminated with xdmp:rollback is always the empty sequence.
Best practice is to explicitly rollback when necessary. Waiting on implicit rollback at session end
leaves the transaction open and ties up locks and other resources until the session times out. This
can be a relatively long time. For example, an HTTP session can span multiple HTTP requests.
For details, see “Sessions” on page 53.
3.5.2.4 Sessions
A session is a “conversation” with a database in a MarkLogic Server instance. A session
encapsulates state about the conversation, such as connection information, credentials, and
transaction settings. When using multi-statement transactions, you must understand when
evaluation might occur in a different session because:
By contrast, in an HTTP session, the transaction settings might apply to queries run in response to
multiple HTTP requests. Uncommitted transactions remain open until the HTTP session times
out, which can be a relatively long time.
The exact nature of a session depends on the “conversation” context. The following table
summarizes the most common types of sessions encountered by a MarkLogic Server application
and their lifetimes:
Semi-colon separated statements in auto commit mode (the default) are not multi-statement
transactions. Each statement is a single-statement transaction. If one update statement commits
and the next one throws a runtime error, the first transaction is not rolled back. If you have logic
that requires a rollback if subsequent transactions fail, you must add that logic to your XQuery
code, use multi-statement transactions, or use a pre-commit trigger. For information about
triggers, see “Using Triggers to Spawn Actions” on page 415.
In a multi-statement transaction, the semi-colon separator does not act as a transaction separator.
The semi-colon separated statements in a multi-statement transaction see updates made by
previous statements in the same transaction, but the updates are not committed until the
transaction is explicitly committed. If the transaction is rolled back, updates made by previously
evaluated statements in the transaction are discarded.
The following diagram contrasts the relationship between statements and transactions in single
and multi-statement transactions:
In the default (auto commit) model, a program is either a single statement running in its own
transaction, or a sequence of semicolon-separated statements, each automatically committed as
its own transaction. With multi-statement transactions, a program contains one or more
transactions, each consisting of multiple semicolon-separated statements and ended by an
explicit xdmp:commit.
Use the more specific commit mode and transaction type controls instead of setting transaction
mode. These controls provide finer grained control over transaction configuration.
For example, use the following table to map the xdmp:transaction-mode XQuery prolog options
to the xdmp:commit and xdmp:update prolog options. For more details, see “Controlling
Transaction Type in XQuery” on page 39.
xdmp:transaction-mode Value | Equivalent xdmp:commit and xdmp:update Option Settings
Use the following table to map between the transaction-mode option and the commit and update
options for xdmp:eval and related eval/invoke/spawn functions.
transaction-mode Option Value | Equivalent commit and update Option Values
Server-Side JavaScript modules use the declareUpdate function to indicate when the transaction
mode is update-auto-commit or update. For more details, see “Controlling Transaction Type in
JavaScript” on page 42.
To use multi-statement transactions in XQuery, you must explicitly set the transaction mode to
multi-auto, query, or update. This sets the commit mode to “explicit” and specifies the
transaction type. For details, see “Transaction Type” on page 38.
Selecting the appropriate transaction mode enables the server to properly optimize your queries.
For more information, see “Multi-Statement, Explicitly Committed Transactions” on page 49.
The transaction mode is only considered during transaction creation. Changing the mode has no
effect on the current transaction.
Explicitly setting the transaction mode affects only the current session. Queries run under
xdmp:eval or xdmp.eval or a similar function with different-transaction isolation, or under
xdmp:spawn, do not inherit the transaction mode from the calling context. See “Interactions with
xdmp:eval/invoke” on page 61.
Most XQuery applications use auto transaction mode. Using auto transaction mode allows the
server to optimize each statement independently and minimizes locking on your files. This leads
to better performance and decreases the chances of deadlock, in most cases.
Most Server-Side JavaScript applications use auto mode for code that does not perform updates,
and update-auto-commit mode for code that performs updates. Calling declareUpdate with no
arguments activates update-auto-commit mode; for more details, see “Controlling Transaction
Type in JavaScript” on page 42.
• All transactions are single-statement transactions, so a new transaction is created for each
statement.
• Static analysis of the statement prior to evaluation determines whether the created
transaction runs in update or query mode.
• The transaction associated with a statement is automatically committed when statement
execution completes, or automatically rolled back if an error occurs.
The update-auto-commit mode differs only in that the transaction is always an update transaction.
In XQuery, you can set the mode to auto explicitly with xdmp:set-transaction-mode or the
xdmp:transaction-mode prolog option, but this is not required unless you’ve previously explicitly
set the mode to update or query.
In XQuery, query transaction mode is only in effect when you explicitly set the mode using
xdmp:set-transaction-mode or the xdmp:transaction-mode prolog option. Transactions created in
this mode are always multi-statement transactions, as described in “Multi-Statement, Explicitly
Committed Transactions” on page 49.
xquery version "1.0-ml";
declare option xdmp:transaction-mode "query";

if (fn:false()) then
  (: XDMP-UPDATEFUNCTIONFROMQUERY only if this executes :)
  xdmp:document-insert("/docs/test.xml", <a/>)
else ();
xdmp:commit();
In XQuery, update transaction mode is only in effect when you explicitly set the mode using
xdmp:set-transaction-mode or the xdmp:transaction-mode prolog option. Transactions created in
update mode are always multi-statement transactions, as described in “Multi-Statement,
Explicitly Committed Transactions” on page 49.
In Server-Side JavaScript, setting explicitCommit to true when calling declareUpdate puts the
transaction into update mode.
In XQuery, this transaction mode is only in effect when you explicitly set the mode using
xdmp:set-transaction-mode or the xdmp:transaction-mode prolog option. Transactions created in
this mode are always single-statement transactions, as described in “Single-Statement Transaction
Concept Summary” on page 35.
• All transactions are single-statement transactions, so a new transaction is created for each
statement.
• The transaction is assumed to be read-only, so no locks are acquired. The statement is
evaluated as a point-in-time query, using the system timestamp at the start of the
transaction.
• An error is raised at runtime if an update operation is attempted by the transaction.
• The transaction is automatically committed when statement execution completes, or
automatically rolled back if an error occurs.
An update operation can appear in this type of transaction, but it must not actually make any
updates at runtime. If a transaction running in query mode attempts an update operation,
XDMP-UPDATEFUNCTIONFROMQUERY is raised.
• Preventing Deadlocks
• same-statement
• different-transaction
In same-statement isolation, the code executed by eval or invoke runs as part of the same
statement and in the same transaction as the calling statement. Any updates done in the
eval/invoke operation with same-statement isolation are not visible to subsequent parts of the
calling statement. However, when using multi-statement transactions, those updates are visible to
subsequent statements in the same transaction.
You may not perform update operations in code run under eval/invoke in same-statement
isolation called from a query transaction. Since query transactions run at a timestamp, performing
an update would require a switch between timestamp mode and readers/writers locks in the
middle of a transaction, and that is not allowed. Statements or transactions that do so will throw
XDMP-UPDATEFUNCTIONFROMQUERY.
You may not use same-statement isolation when using the database option of eval or invoke to
specify a different database than the database in the calling statement’s context. If your
eval/invoke code needs to use a different database, use different-transaction isolation.
When you set the isolation to different-transaction, the code that is run by eval/invoke runs in
a separate session and a separate transaction from the calling statement. The eval/invoke session
and transaction will complete before continuing the rest of the caller’s transaction. If the calling
transaction is an update transaction, any committed updates done in the eval/invoke operation
with different-transaction isolation are visible to subsequent parts of the calling statement and
to subsequent statements in the calling transaction. However, if you use different-transaction
isolation (which is the default isolation level), you need to ensure that you do not get into a
deadlock situation (see “Preventing Deadlocks” on page 63).
The following table shows which isolation options are allowed from query statements and update
statements.
Note: This table is slightly simplified. For example, if an update statement calls a query
statement with same-statement isolation, the “query statement” is actually run as
part of the update statement (because it is run as part of the same transaction as the
calling update statement), and it therefore runs with readers/writers locks, not in a
timestamp.
There are, however, some deadlock situations that MarkLogic Server cannot do anything about
except wait for the transaction to time out. When you run an update statement that calls an
xdmp:eval or xdmp:invoke statement, and the eval/invoke in turn is an update statement, you run
the risk of creating a deadlock condition. These deadlocks can only occur in update statements;
query statements will never cause a deadlock.
A deadlock condition occurs when a transaction acquires a lock of any kind on a document and
then an eval/invoke statement called from that transaction attempts to get a write lock on the same
document. These deadlock conditions can only be resolved by cancelling the query or letting the
query time out.
To be completely safe, you can prevent these deadlocks from occurring by setting the
prevent-deadlocks option to true, as in the following example:
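if (1 = 2) then xdmp:document-insert("fake.xml", <a/>) else (), (: forces an update statement; example reconstructed, URIs illustrative :)
doc("/docs/test.xml"), (: line 2: read lock placed on /docs/test.xml :)
xdmp:eval('xdmp:document-insert("/docs/test.xml", <a>goodbye</a>)', (),
  <options xmlns="xdmp:eval">
    <prevent-deadlocks>true</prevent-deadlocks>
  </options>)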
In this case, it will indeed prevent a deadlock from occurring because this statement runs as an
update statement, due to the xdmp:document-insert call, and therefore uses readers/writers locks.
In line 2, a read lock is placed on the document with URI /docs/test.xml. Then, the xdmp:eval
statement attempts to get a write lock on the same document, but it cannot get the write lock until
the read lock is released. This creates a deadlock condition. Therefore the prevent-deadlocks
option stopped the deadlock from occurring.
If you remove the prevent-deadlocks option, then it defaults to false (that is, it will allow
deadlocks). Therefore, the following statement results in a deadlock:
Warning This code is for demonstration purposes; if you run this code, it will cause a
deadlock and you will have to cancel the query or wait for it to time out to clear the
deadlock.
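if (1 = 2) then xdmp:document-insert("fake.xml", <a/>) else (), (: forces an update statement; reconstruction, URIs illustrative :)
doc("/docs/test.xml"), (: line 2: read lock on /docs/test.xml :)
xdmp:eval('xdmp:document-insert("/docs/test.xml", <a>goodbye</a>)', ()),
doc("/docs/test.xml")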
This is a deadlock condition, and the deadlock will remain until the transaction either times out, is
manually cancelled, or MarkLogic is restarted. Note that if you take out the first call to
doc("/docs/test.xml") in line 2 of the above example, the statement will not deadlock because
the read lock on /docs/test.xml is not called until after the xdmp:eval statement completes.
The call to doc("/docs/test.xml") in the last line of the example returns <a>goodbye</a>, which
is the new version that was updated by the xdmp:eval operation.
You can often solve the same problem by using multi-statement transactions. In a multi-statement
transaction, updates made by one statement are visible to subsequent statements in the same
transaction. Consider the above example, rewritten as a multi-statement transaction. Setting the
transaction mode to update removes the need for “fake” code to force classification of statements
as updates, but adds a requirement to call xdmp:commit to make the updates visible in the database.
xquery version "1.0-ml";
declare option xdmp:transaction-mode "update";
(: head reconstructed: the update runs in an eval with same-statement isolation :)
xdmp:eval('xdmp:document-insert("/docs/test.xml", <a>goodbye</a>)', (),
  <options xmlns="xdmp:eval"><isolation>same-statement</isolation></options>);
(: returns <a>goodbye</a> within this transaction :)
doc("/docs/test.xml"),
(: make updates visible in the database :)
xdmp:commit()
• Set the commit option to “explicit” in the options node if the transaction must run as a
multi-statement transaction, or use the XQuery xdmp:commit prolog option or the JavaScript
declareUpdate function to specify explicit commit mode.
Transactions run under same-statement isolation run in the caller’s context, and so use the same
transaction mode and benefit from committing the caller’s transaction. For a detailed example,
see “Example: Multi-statement Transactions and Same-statement Isolation” on page 69.
Some functions evaluate asynchronously as soon as they are called, whether called from an update
transaction or a query transaction. These functions have side effects outside the scope of the
calling statement or the containing transaction (non-transactional side effects). Examples include
xdmp:log (xdmp.log), xdmp:spawn (xdmp.spawn), and xdmp:http-get (xdmp.httpGet).
Use care or avoid calling any of these functions from an update transaction, as they are not
guaranteed to only evaluate once (or to not evaluate if the transaction rolls back). If you are
logging some information with xdmp:log or xdmp.log in your transaction, it might or might not be
appropriate for that logging to occur on retries (for example, if the transaction is retried because a
deadlock is detected). Even if it is not what you intended, it might not do any harm.
Other side effects, however, can cause problems in updates. For example, if you use xdmp:spawn
or xdmp.spawn in this context, the action might be spawned multiple times if the calling transaction
retries, or the action might be spawned even if the transaction fails; the spawn call evaluates
asynchronously as soon as it is called. Similarly, if you are calling a web service with
xdmp:http-get or xdmp.httpGet from an update transaction, it might evaluate when you did not
mean for it to evaluate.
If you do use these functions in updates, your application logic must handle the side effects
appropriately. These types of use cases are usually better suited to triggers and the Content
Processing Framework. For details, see “Using Triggers to Spawn Actions” on page 415 and the
Content Processing Framework Guide manual.
In nonblocking mode, the server chooses the latest timestamp for which all transactions are
known to have committed, even if there is a slightly later timestamp for which another transaction
has committed. In this mode, queries do not block waiting for contemporaneous transactions, but
they might not see the most up-to-date results.
You can run App Servers with different multi-version concurrency control settings against the
same database.
Use xdmp:host-status to get information about running transactions. The status information
includes a <transactions> element containing detailed information about every running
transaction on the host. For example:
<transactions xmlns="http://marklogic.com/xdmp/status/host">
<transaction>
<transaction-id>10030469206159559155</transaction-id>
<host-id>8714831694278508064</host-id>
<server-id>4212772872039365946</server-id>
<name/>
<mode>query</mode>
<timestamp>11104</timestamp>
<state>active</state>
<database>10828608339032734479</database>
<canceled>false</canceled>
<start-time>2011-05-03T09:14:11-07:00</start-time>
<time-limit>600</time-limit>
<max-time-limit>3600</max-time-limit>
<user>15301418647844759556</user>
<admin>true</admin>
</transaction>
...
</transactions>
In a clustered installation, transactions might run on remote hosts. If a remote transaction does not
terminate normally, it can be committed or rolled back remotely using xdmp:transaction-commit
or xdmp:transaction-rollback. These functions are equivalent to calling xdmp:commit and
xdmp:rollback when xdmp:host is passed as the host id parameter. You can also roll back a
transaction through the Host Status page of the Admin Interface. For details, see Rolling Back a
Transaction in the Administrator’s Guide.
Though a call to xdmp:transaction-commit returns immediately, the commit only occurs after the
currently executing statement in the target transaction successfully completes. Calling
xdmp:transaction-rollback immediately interrupts the currently executing statement in the target
transaction and terminates the transaction.
For an example of using these features, see “Example: Generating a Transaction Report With
xdmp:host-status” on page 72. For details on the built-ins, see the XQuery & XSLT API Reference.
For an example of tracking system timestamp in relation to wall clock time, see “Keeping Track
of System Timestamps” on page 147.
The goal of the sample is to insert a document in the database using xdmp:eval, and then examine
and modify the results in the calling module. The inserted document will be visible to the calling
module immediately, but not visible outside the module until transaction completion.
The same operation (inserting and then modifying a document before making it visible in the
database) cannot be performed as readily using the default transaction model. If the module
attempts the document insert and child insert in the same single-statement transaction, an
XDMP-CONFLICTINGUPDATES error occurs. Performing these two operations in different
single-statement transactions makes the inserted document immediately visible in the database,
prior to inserting the child node. Attempting to perform the child insert using a pre-commit trigger
creates a trigger storm, as described in “Avoiding Infinite Trigger Loops (Trigger Storms)” on
page 425.
The eval’d query runs as part of the calling module’s multi-statement update transaction since the
eval uses same-statement isolation. Since transaction mode is not inherited by transactions
created in a different context, using different-transaction isolation would evaluate the eval’d
query as a single-statement transaction, causing the document to be immediately visible to other
transactions.
The call to xdmp:commit is required to preserve the updates performed by the module. If
xdmp:commit is omitted, all updates are lost when evaluation reaches the end of the module. In this
example, the commit must happen in the calling module, not in the eval’d query. If the
xdmp:commit occurs in the eval’d query, the transaction completes when the statement containing
the xdmp:eval call completes, making the document visible in the database prior to inserting the
child node.
In this example, xdmp:eval is used to create a new transaction that inserts a document whose
content includes the current transaction id using xdmp:transaction. The calling query prints its
own transaction id and the transaction id from the eval’d query.
let $sub-query :=
'xquery version "1.0-ml";
declare option xdmp:transaction-mode "update"; (: 1 :)
xdmp:document-insert("/docs/mst.xml", <myData/>);
xdmp:node-insert-child(
fn:doc("/docs/mst.xml")/myData,
<child>{xdmp:transaction()}</child>
);
xdmp:commit() (: 2 :)
'
return xdmp:eval($sub-query, (),
<options xmlns="xdmp:eval">
<isolation>different-transaction</isolation>
</options>);
The xdmp:commit call at statement (: 3 :) ends the multi-statement query transaction that called
xdmp:eval and starts a new transaction for printing out the results. This causes the final transaction
at statement (: 4 :) to run at a new timestamp, so it sees the document inserted by xdmp:eval.
Since the system timestamp is fixed at the beginning of the transaction, omitting this commit
means the inserted document is not visible. For more details, see “Query Transactions:
Point-in-Time Evaluation” on page 44.
If the query calling xdmp:eval is an update transaction instead of a query transaction, the
xdmp:commit at statement (: 3 :) can be omitted. An update transaction sees the latest version of
a document at the time the document is first accessed by the transaction. Since the example
document is not accessed until after the xdmp:eval call, running the example as an update
transaction sees the updates from the eval’d query. For more details, see “Update Transactions:
Readers/Writers Locks” on page 45.
This example generates a simple HTML report of the duration of all transactions on the local host:
declare namespace hs = "http://marklogic.com/xdmp/status/host";

<html>
<body>
<h2>Running Transaction Report for {xdmp:host-name()}</h2>
<table border="1" cellpadding="5">
<tr>
<th>Transaction Id</th>
<th>Database</th><th>State</th>
<th>Duration</th>
</tr>
{
let $txns := xdmp:host-status(xdmp:host())//hs:transaction
let $now := fn:current-dateTime()
for $t in $txns
return
<tr>
<td>{$t/hs:transaction-id}</td>
<td>{xdmp:database-name($t/hs:database-id)}</td>
<td>{$t/hs:transaction-state}</td>
<td>{$now - $t/hs:start-time}</td>
</tr>
}
</table>
</body>
</html>
If you paste the above query into Query Console and run it with HTML output, the query
generates an HTML report listing the id, database, state, and duration of each running transaction.
Many details about each transaction are available in the xdmp:host-status report. For more
information, see xdmp:host-status in the XQuery & XSLT API Reference.
If we assume the first transaction in the report represents a deadlock, we can manually cancel it by
calling xdmp:transaction-rollback and supplying the transaction id. For example:
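(: a sketch, using the transaction id from the sample status output above :)
xdmp:transaction-rollback(xdmp:host(), 10030469206159559155)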
You can also rollback transactions from the Host Status page of the Admin Interface.
This section describes configuring and managing binary documents in MarkLogic Server. Binary
documents require special consideration because they are often much larger than text or XML
content. The following topics are included:
• Terminology
4.1 Terminology
The following terms are used to describe binary document support in MarkLogic Server:

• small binary document: A binary document whose contents are managed by the server and
whose size does not exceed the large size threshold.

• large binary document: A binary document whose contents are managed by the server and
whose size exceeds the large size threshold.

• external binary document: A binary document whose contents are not managed by the
server.

• large size threshold: A database configuration setting defining the upper bound on the size
of small binary documents. Binary documents larger than the threshold are automatically
classified as large binary documents.

• Large Data Directory: The per-forest area where the contents of large binary documents are
stored.

• static content: Content stored in the modules database of the App Server. MarkLogic
Server responds directly to HTTP range requests (partial GETs) of static content. See
“Downloading Binary Content With HTTP Range Requests” on page 81.
External binaries require special handling at load time because they are not managed by
MarkLogic. For more information, see Loading Binary Documents.
The threshold is expressed in kilobytes. For example, a threshold of 1024 sets the size threshold
to 1 MB; any (managed) binary document larger than 1 MB is automatically handled as a large
binary object.
The range of acceptable threshold values on a 64-bit machine is 32 KB to 512 MB, inclusive.
Many factors must be considered in choosing the large size threshold, including the data
characteristics, the access patterns of the application, and the underlying hardware and operating
system. Ideally, set the threshold such that smaller, frequently accessed binary content such as
thumbnails and profile images are classified as small for efficient access, while larger documents
such as movies and music, which may be streamed by the application, are classified as large for
efficient memory usage.
The threshold may be set through the Admin Interface or by calling an admin API function. To set
the threshold through the Admin Interface, use the large size threshold setting on the database
configuration page. To set the threshold programmatically, use the XQuery built-in
admin:database-set-large-size-threshold:

import module namespace admin = "http://marklogic.com/xdmp/admin"
  at "/MarkLogic/admin.xqy";
When the threshold changes, the reindexing process automatically moves binary documents into
or out of the Large Data Directory as needed to match the new setting.
For more information on sizing and scalability, see the Scalability, Availability, and Failover
Guide and the Query Performance and Tuning Guide.
As a rule of thumb, size the in memory tree size setting to at least:

max(large-size-threshold, largest-expected-non-binary-document)
As described in “Selecting a Location For Binary Content” on page 77, the maximum size for
small binary documents is 512 MB on a 64-bit system. Large and external binary document size is
limited only by the maximum file size supported by the operating system.
To change the in memory tree size setting, see the Database configuration page in the Admin
Interface or admin:database-set-in-memory-limit in the XQuery and XSLT Reference Guide.
When a small binary is cached, the entire document is cached in memory. When a large or
external binary is cached, the content is fetched into the compressed tree cache in chunks, as
needed.
The chunks of a large binary are fetched into the compressed tree cache of the d-node containing
the fragment or document. The chunks of an external binary are fetched into the compressed tree
cache of the e-node evaluating the accessing query. Therefore, you may need a larger compressed
tree cache size on e-nodes if your application makes heavy use of external binary documents.
To change the compressed tree cache size, see the Groups configuration page in the Admin
Interface or admin:group-set-compressed-tree-cache-size in the XQuery and XSLT Reference
Guide.
For details on sizing and scalability, see Scalability Considerations in MarkLogic Server in the
Scalability, Availability, and Failover Guide.
The external file associated with an external binary document must be located outside the forest
containing the document. The external file must be accessible to any server instance evaluating
queries that manipulate the document. That is, the external file path used when creating an
external-binary node must be resolvable on any server instance running queries against the
document.
External binary files may be shared across a cluster by placing them on a network shared file
system, as long as the files are accessible along the same path from any e-node running queries
against the external binary documents. The reference fragment containing the associated
external-binary node may be located on a remote d-node that does not have access to the
external storage.
The diagram below demonstrates sharing external binary content across a cluster with different
host configurations. On the left, the evaluator node (e-node) and data node (d-node) are separate
hosts. On the right, the same host serves as both an evaluator and data node. The database in both
configurations contains an external binary document referencing /images/my.jpg. The JPEG
content is stored on shared external storage, accessible to the evaluator nodes through the external
file path stored in the external binary document in the database.
To check the size of the Large Data Directory using the Admin Interface:
1. Log into the Admin Interface.
2. Click Forests in the left tree menu. The Forest summary is displayed.
3. Click the name of the forest you want to examine.
4. Click the Status tab at the top to display the forest status page.
5. Observe the “Large Data Size” status, which reflects the total size of the contents of the
large data directory.
The following example uses xdmp:forest-status to retrieve the size of the Large Data Directory:
declare namespace fs = "http://marklogic.com/xdmp/status/forest";

xdmp:forest-status(
  xdmp:forest("samples-1"))/fs:large-data-size
Normally, the server ensures that the binary content is removed when the containing forest no
longer contains references to the data. However, content may be left behind in the Large Data
Directory under some circumstances, such as a failover in the middle of inserting a binary
document. Content left behind in the Large Data Directory with no corresponding database
reference fragment is an orphaned binary.
If your data includes large binary documents, periodically check for and remove orphaned
binaries. Use xdmp:get-orphaned-binaries and xdmp:remove-orphaned-binary to perform this
cleanup. For example:
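(: a sketch; "samples-1" is a placeholder forest name :)
for $binary-id in xdmp:get-orphaned-binaries(xdmp:forest("samples-1"))
return xdmp:remove-orphaned-binary(xdmp:forest("samples-1"), $binary-id)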
For example, to remove all external binary documents associated with the external binary file
/external/path/sample.jpg, use xdmp:external-binary-path:
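(: a sketch: scans every document, so practical only for small databases :)
for $doc in fn:collection()
let $node := $doc/binary()
where fn:exists($node)
  and xdmp:binary-is-external($node)
  and xdmp:external-binary-path($node) eq "/external/path/sample.jpg"
return xdmp:document-delete(xdmp:node-uri($node))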
MarkLogic Server offers the XQuery built-in xdmp:document-filter and the JavaScript
method xdmp.documentFilter to assist with adding metadata to binary documents. These
functions extract metadata and text from binary documents as a node, each of whose child
elements represents a piece of metadata. The results may be used as document properties. The
extracted text contains little formatting or structure, so it is best used for search, classification, or
other text processing.
For example, the following code creates properties corresponding to just the metadata extracted
by xdmp:document-filter from a Microsoft Word document:
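(: a sketch; the document URI is hypothetical :)
let $uri := "/docs/beta_overview.doc"
return xdmp:document-set-properties($uri,
  for $meta in xdmp:document-filter(fn:doc($uri))//*:meta
  return element {fn:string($meta/@name)} {fn:string($meta/@content)})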
The result properties document contains properties such as Author, AppName, and
Creation_Date, extracted by xdmp:document-filter:
<prop:properties xmlns:prop="http://marklogic.com/xdmp/property">
<content-type>application/msword</content-type>
<filter-capabilities>text subfiles HD-HTML</filter-capabilities>
<AppName>Microsoft Office Word</AppName>
<Author>MarkLogic</Author>
<Company>Mark Logic Corporation</Company>
<Creation_Date>2011-09-05T16:21:00Z</Creation_Date>
<Description>This is my comment.</Description>
<Last_Saved_Date>2011-09-05T16:22:00Z</Last_Saved_Date>
<Line_Count>1</Line_Count>
<Paragraphs_Count>1</Paragraphs_Count>
<Revision>2</Revision>
<Subject>Creating binary doc props</Subject>
<Template>Normal.dotm</Template>
<Typist>MarkLogician</Typist>
<Word_Count>7</Word_Count>
<isys>SubType: Word 2007</isys>
<size>10047</size>
<prop:last-modified>2011-09-05T09:47:10-07:00</prop:last-modified>
</prop:properties>
This section covers the following topics related to serving binary content in response to range
requests:
For example, suppose your database contains a large binary document with the URI
“/images/really_big.jpg” and you create an HTTP App Server on port 8010 that uses this database
as its modules database. Sending a GET request of the following form to port 8010 directly
fetches the binary document:
GET http://host:8010/images/really_big.jpg
If you include a range in the request, then you can incrementally stream the document out of the
database. For example:
GET http://host:8010/images/really_big.jpg
Range: bytes=0-499
MarkLogic returns the first 500 bytes of the document /images/really_big.jpg in a Partial
Content response with a 206 (Partial Content) status, similar to the following (some headers are
omitted for brevity):
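The content type and total size shown here are hypothetical:

HTTP/1.1 206 Partial Content
Content-Type: image/jpeg
Content-Range: bytes 0-499/1969624
Content-Length: 500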
If the range request includes multiple non-overlapping ranges, the App Server responds with a
206 and a multi-part message body with media type “multipart/byteranges”.
If a range request cannot be satisfied, the App Server responds with a 416 status (Requested
Range Not Satisfiable).
The following code demonstrates how to interpret a Range header and return dynamically
generated content in response to a range request:
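A minimal sketch of such a module follows; it generates 1000 bytes of content and honors a
single-range header of the form bytes=START-END (multi-range handling and 416 error responses
are omitted):

xquery version "1.0-ml";

let $data := binary {
  xs:hexBinary(fn:string-join((for $i in 1 to 1000 return "AB"), ""))
}
let $range := xdmp:get-request-header("Range")[1]  (: e.g. "bytes=0-99" :)
return
  if (fn:exists($range) and fn:matches($range, "^bytes=\d+-\d+$")) then
    let $start := xs:unsignedLong(fn:replace($range, "^bytes=(\d+)-\d+$", "$1"))
    let $end := xs:unsignedLong(fn:replace($range, "^bytes=\d+-(\d+)$", "$1"))
    return (
      xdmp:set-response-code(206, "Partial Content"),
      xdmp:set-response-content-type("application/octet-stream"),
      xdmp:add-response-header("Content-Range",
        fn:concat("bytes ", $start, "-", $end, "/", xdmp:binary-size($data))),
      (: xdmp:subbinary offsets are 1-based :)
      xdmp:subbinary($data, $start + 1, $end - $start + 1)
    )
  else $data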
If the above code is in an XQuery module fetch-bin.xqy, then a request such as the following
returns the first 100 bytes of a binary. (The -r option to the curl command specifies a byte range.)
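For example (the host and port are placeholders):

curl -r 0-99 http://localhost:8010/fetch-bin.xqy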
let $content-type := "text/plain"  (: hypothetical values for the variables used below :)
let $content := "Sample message body"
return
xdmp:email(
<em:Message
xmlns:em="URN:ietf:params:email-xml:"
xmlns:rf="URN:ietf:params:rfc822:">
<rf:subject>Sample Email</rf:subject>
<rf:from>
<em:Address>
<em:name>Myself</em:name>
<em:adrs>[email protected]</em:adrs>
</em:Address>
</rf:from>
<rf:to>
<em:Address>
<em:name>Somebody</em:name>
<em:adrs>[email protected]</em:adrs>
</em:Address>
</rf:to>
<rf:content-type>{$content-type}</rf:content-type>
<em:content xml:space="preserve">
{$content}
</em:content>
</em:Message>)
• xdmp:subbinary
• xdmp:binary-size
• xdmp:external-binary
• xdmp:external-binary-path
• xdmp:binary-is-small
• xdmp:binary-is-large
• xdmp:binary-is-external
In addition, the following XQuery built-ins may be useful when creating or testing the integrity of
external binary content:
• xdmp:filesystem-file-length
• xdmp:filesystem-file-exists
You can import XQuery into other XQuery and/or Server-Side JavaScript modules. Similarly, you
can import XSLT stylesheets into other stylesheets, you can import XQuery modules into XSLT
stylesheets, and you can import XSLT stylesheets into XQuery modules.
This chapter describes the two types of XQuery modules and specifies the rules for importing
modules and resolving URI references. To import XQuery into Server-Side JavaScript modules,
see Using XQuery Functions and Variables in JavaScript in the JavaScript Reference Guide.
For details on importing XQuery library modules into XSLT stylesheets and vice-versa, see Notes
on Importing Stylesheets With <xsl:import> and Importing a Stylesheet Into an XQuery Module in the
XQuery and XSLT Reference Guide.
• Main Modules
• Library Modules
For more details about the XQuery language, see the XQuery and XSLT Reference Guide.
"hello world"
Main modules can have prologs, but the prolog is optional. As part of a prolog, a main module can
have function definitions. Function definitions in a main module, however, are only available to
that module; they cannot be imported to another module.
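By contrast, a library module declares its functions in a namespace so that other modules can
import them. For example, here is a minimal library module consistent with the import statement
used below (the namespace URI "helloworld" is an assumption):

xquery version "1.0-ml";

module namespace hw = "helloworld";

declare function hw:helloworld()
{
  "hello world"
};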
If you insert the module into the modules database of your App Server or save it on the filesystem
under the modules root directory of your App Server, then you can import the module and call the
“helloworld” function.
For example, suppose you save the above module to the filesystem with the pathname
/my/app/helloworld.xqy. If you configure an App Server to use “Modules” as the modules
database and “/” as the modules root, then you can store the module in the modules database as
follows:
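(: a sketch: run this against the "Modules" database; it assumes the file
   exists at this path on the MarkLogic host :)
xdmp:document-load("/my/app/helloworld.xqy",
  <options xmlns="xdmp:document-load">
    <uri>/my/app/helloworld.xqy</uri>
  </options>)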
The inserted module has the URI /my/app/helloworld.xqy. Now, you can import the module in a
main module or library module and call the “helloworld” function as follows:
import module namespace hw = "helloworld"
  at "/my/app/helloworld.xqy";

hw:helloworld()
The same import statement works if you configure an App server to use the filesystem as the
modules “database” and “/” as the modules root. In this case, the query imports the module from
the filesystem instead of from the modules database.
The XQuery module that is imported/invoked/spawned can reside in the Modules directory, under
the App Server root (on the filesystem or in the modules database), or relative to the calling
module. The path is resolved according to the following rules:
1. When an import/invoke/spawn path starts with a leading slash, first look under the
Modules directory (on Windows, typically c:\Program Files\MarkLogic\Modules). For
example:

import module namespace foo = "foo" at "/foo.xqy";

In this case, the server looks for a module file with the namespace foo in
c:\Program Files\MarkLogic\Modules\foo.xqy.
2. If the import/invoke/spawn path starts with a slash and is not found under the Modules
directory, then start at the App Server root. For example, if the App Server root is
/home/mydocs/, then the following import:

import module namespace foo = "foo" at "/foo.xqy";

looks for the module in /home/mydocs/foo.xqy.

Note that you start at the App Server root, both for filesystem roots and Modules database
roots. For example, in an App Server configured with a modules database and a root of
http://foo/, the same import will look for a module with namespace foo in the modules
database with the URI http://foo/foo.xqy (resolved by appending foo.xqy to the App
Server root).
3. If the import/invoke/spawn path does not start with a slash, first look under the Modules
directory. If the module is not found there, then look relative to the location of the module
that called the function. For example, if a module at /home/mydocs/bar.xqy has the
following import:

import module namespace foo = "foo" at "foo.xqy";

the server looks for the module in /home/mydocs/foo.xqy.

Note that you start at the calling module location, both for App Servers configured to use
the filesystem and for App Servers configured to use modules databases. For example, a
module with a URI of http://foo/bar.xqy that resides in the modules database and has
the same import statement will look for the module with the URI http://foo/foo.xqy in
the modules database.
4. If the import/invoke/spawn path contains a scheme or network location, then the server
throws an exception. For example, the following import is not a valid path and throws an
exception:

import module namespace foo = "foo" at "http://foo/foo.xqy";
my:hello()
The library module lib.xqy is imported relative to the App Server root (in this case, relative to
c:/mydir).
This chapter describes how to use Library Services, which enable you to create and manage
versioned content in MarkLogic Server in a manner similar to a Content Management System
(CMS). This chapter includes the following sections:
When you initially put a document under Library Services management, it creates Version 1 of the
document. Each time you update the document, a new version of the document is created. Old
versions of the updated document are retained according to your retention policy, as described in
“Defining a Retention Policy” on page 100.
The Library Services include functions for managing modular documents so that various versions
of linked documents can be created and managed, as described in “Managing Modular
Documents in Library Services” on page 107.
The following diagram illustrates the workflow of a typical managed document. In this example,
the document is added to the database and placed under Library Services management. The
managed document is checked out, updated several times, and checked in by Jerry. Once the
document is checked in, Elaine checks out, updates, and checks in the same managed document.
Each time the document is updated, the previous versions of the document are purged according
to the retention policy.
[Diagram: the lifecycle of a managed document. The document is added to the database and
placed under Library Services management as Version 1. Jerry checks out the document, updates
it several times, and checks it in as Version 4. Elaine then checks out Version 4, updates it
(Versions 5 and 6), and checks in Version 6.]
• dls:document-add-collections
• dls:document-add-permissions
• dls:document-add-properties
• dls:document-set-collections
• dls:document-set-permissions
• dls:document-set-properties
• dls:document-remove-properties
• dls:document-remove-permissions
• dls:document-remove-collections
• dls:document-set-property
• dls:document-set-quality
Note: If you only change the collection or property settings on a document, these settings
are not maintained in the version history when the document is checked in. You
must also change the content of the document for collection or property changes to
be versioned.
• dls-admin Role
• dls-user Role
• dls-internal Role
Note: Do not log in with the Admin role when inserting managed documents into the
database or when testing your Library Services applications. Instead create test
users with the dls-user role and assign them the various permissions needed to
access the managed documents. When testing your code in Query Console, you
must also assign your test users the qconsole-user role.
The dls-user role only has privileges that are needed to run the Library Services API; it does not
provide execute privileges to any functions outside the scope of the Library Services API. The
Library Services API uses the dls-user role as a mechanism to amp more privileged operations in
a controlled way. It is therefore reasonably safe to assign this role to any user whom you trust to
use your Library Services application. Assign the dls-user role to all users of your Library
Services application.
When inserting a managed document, specify at least read and update permissions to the roles
assigned to the users that are to manage the document. If no permissions are supplied, the default
permissions of the user inserting the managed document are applied. The default permissions can
be obtained by calling the xdmp:default-permissions function. When adding a collection to a
document, as shown in the example below, the user will also need the unprotected-collections
privilege.
For example, the following query inserts a new document into the database and places it under
Library Services management. This document can only be read or updated by users who are
assigned the writer and/or editor role and who have permission to read and update the
http://marklogic.com/engineering/specs collection.
dls:document-insert-and-manage(
"/engineering/beta_overview.xml",
fn:true(),
<TITLE>Project Beta Overview</TITLE>,
"Manage beta_overview.xml",
(xdmp:permission("writer", "read"),
xdmp:permission("writer", "update"),
xdmp:permission("editor", "read"),
xdmp:permission("editor", "update")),
("http://marklogic.com/engineering/specs"))
dls:document-checkout(
"/engineering/beta_overview.xml",
fn:true(),
"Updating doc")
You can specify an optional timeout parameter to dls:document-checkout that specifies how long
(in seconds) to keep the document checked out. For example, to check out the beta_overview.xml
document for one hour, specify the following:
dls:document-checkout(
"/engineering/beta_overview.xml",
fn:true(),
"Updating doc",
3600)
dls:document-checkout-status("/engineering/beta_overview.xml")
<dls:checkout xmlns:dls="http://marklogic.com/xdmp/dls">
<dls:document-uri>/engineering/beta_overview.xml</dls:document-uri>
<dls:annotation>Updating doc</dls:annotation>
<dls:timeout>0</dls:timeout>
<dls:timestamp>1240528210</dls:timestamp>
<sec:user-id xmlns:sec="http://marklogic.com/xdmp/security">
10677693687367813363
</sec:user-id>
</dls:checkout>
dls:document-checkin(
"/engineering/beta_overview.xml",
fn:true() )
Note: You cannot use node update functions, such as xdmp:node-replace, with managed
documents. Updates to the document must be done in memory before calling the
dls:document-update function. For information on how to do in-memory updates
on document nodes, see “Transforming XML Structures With a Recursive
typeswitch Expression” on page 113.
let $contents :=
<BOOK>
<TITLE>Project Beta Overview</TITLE>
<CHAPTER>
<TITLE>Objectives</TITLE>
<PARA>
The objective of Project Beta, in simple terms, is to corner
the widget market.
</PARA>
</CHAPTER>
</BOOK>
return
dls:document-update(
"/engineering/beta_overview.xml",
$contents,
"Roughing in the first chapter",
fn:true())
Note: The dls:document-update function replaces the entire contents of the document.
You can also use dls:purge or dls:document-purge to determine what documents would be
deleted by the retention policy without actually deleting them. This option can be useful when
developing your retention rules. For example, if you change your retention policy and want to
determine specifically what document versions will be deleted as a result, you can use:
dls:purge(fn:false(), fn:true())
You can define retention rules to keep various numbers of document versions, to keep documents
matching a cts-query expression, and/or to keep documents for a specified period of time.
Restrictions in a retention rule are combined with a logical AND, so that all of the expressions in
the retention rule must be true for the document versions to be retained. When you combine
separate retention rules, the resulting retention policy is an OR of the combined rules (that is, the
document versions are retained if they are matched by any of the rules). Multiple rules do not
have an order of operation.
Warning The retention policy specifies what is retained, not what is purged. Therefore,
anything that does not match the retention policy is removed.
For example, the following retention rule retains all versions of all documents because the empty
cts:and-query function matches all documents:
dls:retention-rule-insert(
dls:retention-rule(
"All Versions Retention Rule",
"Retain all versions of all documents",
(),
(),
"Locate all of the documents",
cts:and-query(()) ) )
The following retention rule retains the last five versions of all of the documents located under the
/engineering/ directory:
dls:retention-rule-insert(
dls:retention-rule(
"Engineering Retention Rule",
"Retain the five most recent versions of Engineering docs",
5,
(),
"Locate all of the Engineering documents",
cts:directory-query("/engineering/", "infinity") ) )
The following retention rule retains the latest three versions of the engineering documents with
“Project Alpha” in the title that were authored by Jim:
dls:retention-rule-insert(
dls:retention-rule(
"Project Alpha Retention Rule",
"Retain the three most recent engineering documents with
the title ‘Project Alpha’ and authored by Jim.",
3,
(),
"Locate the engineering docs with 'Project Alpha' in the
title authored by Jim",
cts:and-query((
cts:element-word-query(xs:QName("TITLE"), "Project Alpha"),
cts:directory-query("/engineering/", "infinity"),
dls:author-query(xdmp:user("Jim")) )) ) )
The following retention rule retains the five most recent versions of documents in the “specs”
collection that are no more than thirty days old:
dls:retention-rule-insert(
dls:retention-rule(
"Specs Retention Rule",
"Keep the five most recent versions of documents in the ‘specs’
collection that are 30 days old or newer",
5,
xs:duration("P30D"),
"Locate documents in the 'specs' collection",
cts:collection-query("http://marklogic.com/documents/specs") ) )
For example, the following retention rule retains the latest versions of the engineering documents
created before 5:00pm on 4/23/09:
dls:retention-rule-insert(
dls:retention-rule(
"Draft 1 of the Engineering Docs",
"Retain each engineering document that was update before
5:00pm, 4/23/09",
(),
(),
(),
cts:and-query((
cts:directory-query("/documentation/", "infinity"),
dls:as-of-query(xs:dateTime("2009-04-23T17:00:00-07:00")) )) ))
If you want to retain two separate snapshots of the engineering documents, you can add a
retention rule that contains a different dls:as-of-query. For example:

cts:and-query((
cts:directory-query("/documentation/", "infinity"),
dls:as-of-query(xs:dateTime("2009-12-25T09:00:01-07:00")) ))
Consider the two rules shown below. The first rule retains the latest five versions of all of the
documents under the /engineering/ directory. The second rule retains the latest ten versions of
all of the documents under the /documentation/ directory. The ORed result of these two rules
does not impact the intent of each individual rule, and each rule can be updated independently
of the other.
dls:retention-rule-insert((
dls:retention-rule(
"Engineering Retention Rule",
"Retain the five most recent versions of Engineering docs",
5,
(),
"Apply to all of the Engineering documents",
cts:directory-query("/engineering/", "infinity") ),
dls:retention-rule(
"Documentation Retention Rule",
"Retain the ten most recent versions of the documentation",
10,
(),
"Apply to all of the documentation",
cts:directory-query("/documentation/", "infinity") ) ))
As previously described, multiple retention rules define a logical OR between them, so there may
be circumstances when multiple retention rules are needed to define the desired retention policy
for the same set of documents.
For example, you want to retain the last five versions of all of the engineering documents, as well
as all engineering documents that were updated before 8:00am on 4/24/09 and 9:00am on 5/12/09.
The following two retention rules are needed to define the desired retention policy:
dls:retention-rule-insert((
dls:retention-rule(
"Engineering Retention Rule",
"Retain the five most recent versions of Engineering docs",
5,
(),
"Retain all of the Engineering documents",
cts:directory-query("/engineering/", "infinity") ),
dls:retention-rule(
"Project Alpha Retention Rule",
"Retain the engineering documents that were updated before
the review dates below.",
(),
(),
"Retain all of the Engineering documents updated before
the two dates",
cts:and-query((
cts:directory-query("/engineering/", "infinity"),
cts:or-query((
dls:as-of-query(xs:dateTime("2009-04-24T08:00:17.566-07:00")),
dls:as-of-query(xs:dateTime("2009-05-12T09:00:01.632-07:00"))
))
)) ) ))
It is important to understand the difference between the logical OR combination of the above two
retention rules and the logical AND within a single rule. For example, the OR combination of the
above two retention rules is not the same as the single rule below, which is an AND between
retaining the last five versions and the as-of versions. The end result of this rule is that the last five
versions are not retained and the as-of versions are only retained as long as they are among the
last five versions. Once the last five versions have moved past the as-of dates, the AND logic is no
longer true and you no longer have an effective retention policy, so no versions of the documents
are retained.
dls:retention-rule-insert(
dls:retention-rule(
"Project Alpha Retention Rule",
"Retain the 5 most recent engineering documents",
5,
(),
"Retain all of the Engineering documents updated before
the two dates",
cts:and-query((
cts:directory-query("/engineering/", "infinity"),
cts:or-query((
dls:as-of-query(xs:dateTime("2009-04-24T08:56:17.566-07:00")),
dls:as-of-query(xs:dateTime("2009-05-12T08:59:01.632-07:00"))
)) )) ) )
dls:retention-rule-remove(fn:data(dls:retention-rules("*")//dls:name))
For example, the following function call extracts Chapter 1 from the “Project Beta Overview”
document:
dls:document-extract-part("/engineering/beta_overview_chap1.xml",
fn:doc("/engineering/beta_overview.xml")//CHAPTER[1],
"Extracting Chapter 1",
fn:true() )
<BOOK>
<TITLE>Project Beta Overview</TITLE>
<xi:include href="/engineering/beta_overview_chap1.xml"/>
</BOOK>
<CHAPTER>
<TITLE>Objectives</TITLE>
<PARA>
The objective of Project Beta, in simple terms, is to corner
the widget market.
</PARA>
</CHAPTER>
Note: The newly created managed document containing the extracted child element is
initially checked-in and must be checked out before you can make any updates.
The dls:document-extract-part function can only be called once in a transaction for the same
document. There may be circumstances in which you want to extract multiple elements from a
document and replace them with XInclude statements. For example, the following query creates
separate documents for all of the chapters from the “Project Beta Overview” document and
replaces them with XInclude statements:
let $includes :=
for $chap at $num in fn:doc("/engineering/beta_overview.xml")/BOOK/CHAPTER
return (
dls:document-insert-and-manage(
fn:concat("/engineering/beta_overview_chap", $num, ".xml"),
fn:true(),
$chap),
<xi:include href="/engineering/beta_overview_chap{$num}.xml"
xmlns:xi="http://www.w3.org/2001/XInclude"/>
)
let $contents :=
<BOOK>
<TITLE>Project Beta Overview</TITLE>
{$includes}
</BOOK>
return
dls:document-update(
"/engineering/beta_overview.xml",
$contents,
"Chapters are XIncludes",
fn:true() )
This query produces a “Project Beta Overview” document similar to the following:
<BOOK>
<TITLE>Project Beta Overview</TITLE>
<xi:include href="/engineering/beta_overview_chap1.xml"
xmlns:xi="http://www.w3.org/2001/XInclude"/>
<xi:include href="/engineering/beta_overview_chap1.xml"
xmlns:xi="http://www.w3.org/2001/XInclude"/>
<xi:include href="/engineering/beta_overview_chap2.xml"
xmlns:xi="http://www.w3.org/2001/XInclude"/>
</BOOK>
Note: When using the dls:node-expand function to expand documents that contain
XInclude links to specific versioned documents, specify the $restriction
parameter as an empty sequence.
For example, to return the expanded beta_overview.xml document, you can use:
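(: a sketch; the empty sequence applies no version restriction :)
let $node := fn:doc("/engineering/beta_overview.xml")
return dls:node-expand($node, ())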
To return the first linked node in the beta_overview.xml document, you can use:

declare namespace xi = "http://www.w3.org/2001/XInclude";

let $node := fn:doc("/engineering/beta_overview.xml")
return dls:link-expand(
$node,
$node/BOOK/xi:include[1],
() )
To expand the document using the versions of the linked documents as they existed at a specific
point in time, you can use:

let $node := fn:doc("/engineering/beta_overview.xml")
return dls:node-expand(
$node,
dls:as-of-query(
xs:dateTime("2009-04-06T13:30:33.576-07:00")) )
For example, as shown in “Creating Managed Modular Documents” on page 107, the “Project
Beta Overview” document contains three chapters that are linked as separate documents. The
following query takes a snapshot of the latest version of each chapter and creates a new version of
the “Project Beta Overview” document that includes the versioned chapters:
return
dls:document-update(
"/engineering/beta_overview.xml",
$contents,
"Latest Draft",
fn:true() )
The above query results in a new version of the “Project Beta Overview” document that looks
like:
<BOOK>
<TITLE>Project Beta Overview</TITLE>
<xi:include
href="/engineering/beta_overview_chap1.xml_versions/4-beta_overview_
chap1.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
<xi:include
href="/engineering/beta_overview_chap2.xml_versions/3-beta_overview_
chap2.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
<xi:include
href="/engineering/beta_overview_chap3.xml_versions/3-beta_overview_
chap3.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
</BOOK>
Note: When using the dls:node-expand function to expand modular documents that
contain XInclude links to specific versioned documents, specify the $restriction
parameter as an empty sequence.
You can also create modular documents that contain different versions of linked documents. For
example, in the illustration below, Doc R.xml, Version 1 contains the contents of specific versions
of each of its linked documents.

[Diagram: the version history of R.xml, showing which version of each linked document is
included in each version of R.xml.]
A common task required with XML is to transform one structure to another structure. This
chapter describes a design pattern using the XQuery typeswitch expression which makes it easy
to perform complex XML transformations with good performance, and includes some samples
illustrating this design pattern. It includes the following sections:
• XML Transformations
XQuery is a powerful programming language, and MarkLogic Server provides very fast access to
content, so together they work extremely well for transformations. MarkLogic Server is
particularly well suited to transformations that require searches to get the content which needs
transforming. For example, you might have a transformation that uses a lexicon lookup to get a
value with which to replace the original XML value. Another transformation might need to count
the number of authors in a particular collection.
Similarly, you can write an XQuery program that returns XSL-FO, which is a common path to
build PDF output. Again, XSL-FO is just an XML structure, so it is easy to write XQuery that
returns XML in that structure.
For the syntax of the typeswitch expression, see The typeswitch Expression in XQuery and XSLT
Reference Guide. The case clause allows you to perform a test on the input to the typeswitch and
then return something. For transformations, the tests are often what are called kind tests. A kind
test tests to see what kind of node something is (for example, an element node with a given
QName). If that test returns true, then the code in the return clause is executed. The return clause
can be arbitrary XQuery, and can therefore call a function.
Because XML is an ordered tree structure, you can create a function that recursively walks
through an XML node, each time doing some transformation on the node and sending its child
nodes back into the function. The result is a convenient mechanism to transform the structure
and/or content of an XML node.
• Simple Example
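The examples in this section call a recursive local:transform function. A minimal sketch
consistent with the input and output shown below (the original sample may differ):

xquery version "1.0-ml";

declare function local:transform($nodes as node()*) as node()*
{
  for $n in $nodes
  return
    typeswitch ($n)
      case element(foo) return <fooo>{local:transform($n/node())}</fooo>
      case element(bar) return <barr>{local:transform($n/node())}</barr>
      case element(baz) return <bazz>{local:transform($n/node())}</bazz>
      case element(buzz) return <buzzz>{local:transform($n/node())}</buzzz>
      default return $n
};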
let $x :=
<foo>foo
<bar>bar</bar>
<baz>baz
<buzz>buzz</buzz>
</baz>
foo
</foo>
return
local:transform($x)
<fooo>
foo
<barr>bar</barr>
<bazz>baz
<buzzz>buzz</buzzz>
</bazz>
foo
</fooo>
let $x :=
<foo>foo
<bar>bar</bar>
<baz>baz
<buzz>buzz</buzz>
</baz>
foo
</foo>
return
cts:highlight(local:transform($x), cts:word-query("foo"),
<b>{$cts:text}</b>)
<fooo>
<b>foo</b>
<barr>bar</barr>
<bazz>baz
<buzzz>buzz</buzzz>
</bazz>
<b>foo</b>
</fooo>
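The next example maps a simple document structure to XHTML. A sketch of a transform function
consistent with the output shown below (the original sample may differ; here result elements are
built with an explicit prefix bound to the XHTML namespace):

declare namespace xh = "http://www.w3.org/1999/xhtml";

declare function local:transform($nodes as node()*) as node()*
{
  for $n in $nodes
  return
    typeswitch ($n)
      case element(title) return <xh:h1>{local:transform($n/node())}</xh:h1>
      case element(sectionTitle) return <xh:h2>{local:transform($n/node())}</xh:h2>
      case element(para) return <xh:p>{local:transform($n/node())}</xh:p>
      case element(numbered) return <xh:ol>{local:transform($n/node())}</xh:ol>
      case element(number) return <xh:li>{local:transform($n/node())}</xh:li>
      case element(a) return local:transform($n/node())
      default return $n
};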
let $x :=
<a>
<title>This is a Title</title>
<para>Some words are here.</para>
<sectionTitle>A Section</sectionTitle>
<para>This is a numbered list.</para>
<numbered>
<number>Install MarkLogic Server.</number>
<number>Load content.</number>
<number>Run very big and fast XQuery.</number>
</numbered>
</a>
return
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>MarkLogic Sample Code</title></head>
<body>{local:transform($x)}</body>
</html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>MarkLogic Sample Code</title>
</head>
<body>
<h1>This is a Title</h1>
<p>Some words are here.</p>
<h2>A Section</h2>
<p>This is a numbered list.</p>
<ol>
<li>Install MarkLogic Server.</li>
<li>Load content.</li>
<li>Run very big and fast XQuery.</li>
</ol>
</body>
</html>
If you run this code against an HTTP App Server (for example, copy the code to a file in the App
Server root and access the page from a browser), you will see results similar to the following:
Note that the return clauses of the typeswitch case statements in this example are simplified, and
look like the following:
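(: a hypothetical shape of one simplified case clause :)
case element(para) return <p>{local:transform($n/node())}</p>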
The function can then perform arbitrarily complex logic. Typically, each case statement calls a
function with code appropriate to how that element needs to be transformed.
Suppose you want your transformation to exclude certain elements based on the place in the XML
hierarchy in which the elements appear. You can then add logic to the function to exclude the
passed in elements, as shown in the following code snippet:
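A sketch of one possible approach, passing the QNames to exclude as a second parameter:

declare function local:transform(
  $nodes as node()*,
  $exclude as xs:QName*
) as node()*
{
  for $n in $nodes
  where fn:empty($exclude[. eq fn:node-name($n)])
  return
    typeswitch ($n)
      case element(para) return <p>{local:transform($n/node(), $exclude)}</p>
      default return $n
};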
There are plenty of other extensions to this design pattern you can use. What you do depends on
your application requirements. XQuery is a powerful programming language, and therefore these
types of design patterns are very extensible to new requirements.
This chapter describes locks on documents and directories, and includes the following sections:
• Overview of Locks
• Lock APIs
Note: This chapter is about document and directory locks that you set explicitly, not
about transaction locks which MarkLogic sets implicitly. To understand
transactions, see “Understanding Transactions in MarkLogic Server” on page 28.
• Write Locks
• Persistent
• Searchable
• Exclusive or Shared
• Hierarchical
8.1.2 Persistent
Locks are persistent in the database. They are not tied to a transaction. You can set locks to last a
specified time period or to last indefinitely. Because they are persistent, you can use locks to
ensure that a document is not modified during a multi-transaction operation.
8.1.3 Searchable
Because locks are persistent XML documents, they are therefore searchable XML documents,
and you can write queries to give information about locks in the database. For an example, see
“Example: Finding the URI of Documents With Locks” on page 122.
8.1.5 Hierarchical
When you are locking a directory, you can specify the depth in a directory hierarchy you want to
lock. Specifying "0" means only the specified URI is locked, and specifying "infinity" means
the URI (for example, the directory) and all of its children are locked.
If you set a lock on every document and directory in the database, that can have the effect of not
allowing any data to change in the database (except by the user who owns the lock). Combining
an application development practice of locking with effective use of security permissions can
provide a robust multi-user development environment.
• xdmp:document-locks
• xdmp:directory-locks
• xdmp:collection-locks
The xdmp:document-locks function with no arguments returns a sequence of locks, one for each
document lock. The xdmp:document-locks function with a sequence of URIs as an argument
returns the locks for the specified document(s). The xdmp:directory-locks function returns locks
for all of the documents in the specified directory, and the xdmp:collection-locks function
returns all of the locks for documents in the specified collection.
You can set and remove locks on directories and documents with the following functions:
• xdmp:lock-acquire
• xdmp:lock-release
The basic procedure to set a lock on a document or a directory is to submit a query using the
xdmp:lock-acquire function, specifying the URI, the scope of the lock requested (exclusive or
shared), the hierarchy affected by the lock (just the URI, or the URI and all of its children), the
owner of the lock, and the duration of the lock.
Note: The owner of the lock is not the same as the sec:user-id of the lock. The owner can
be specified as an option to xdmp:lock-acquire. If owner is not explicitly specified,
then the owner defaults to the name of the user who issued the lock command. For
an example, see “Example: Finding the User to Whom a Lock Belongs” on
page 124.
<root>
{
for $locks in xdmp:document-locks()
return <document-URI>{xdmp:node-uri($locks)}</document-URI>
}
</root>
For example, if the only document in the database with a lock has a URI
/documents/myDocument.xml, then the above query would return the following.
<root>
<document-URI>/documents/myDocument.xml</document-URI>
</root>
xdmp:lock-acquire("/documents/myDocument.xml",
"exclusive",
"0",
"Raymond is editing this document",
xs:unsignedLong(120))
You can view the resulting lock document with the xdmp:document-locks function as follows:
xdmp:document-locks("/documents/myDocument.xml")
=>
<lock:lock xmlns:lock="http://marklogic.com/xdmp/lock">
<lock:lock-type>write</lock:lock-type>
<lock:lock-scope>exclusive</lock:lock-scope>
<lock:active-locks>
<lock:active-lock>
<lock:depth>0</lock:depth>
<lock:owner>Raymond is editing this document</lock:owner>
<lock:timeout>120</lock:timeout>
<lock:lock-token>
http://marklogic.com/xdmp/locks/4d0244560cc3726c
</lock:lock-token>
<lock:timestamp>1121722103</lock:timestamp>
<sec:user-id xmlns:sec="http://marklogic.com/xdmp/security">
8216129598321388485
</sec:user-id>
</lock:active-lock>
</lock:active-locks>
</lock:lock>
xdmp:lock-release("/documents/myDocument.xml")
If you acquire a lock with no timeout period, be sure to release the lock when you are done with it.
If you do not release the lock, no other users can update any documents or directories locked by
the xdmp:lock-acquire action.
for $x in xdmp:document-locks()//sec:user-id
return <lock>
<URI>{xdmp:node-uri($x)}</URI>
<user-id>{data($x)}</user-id>
</lock>
A sample result is as follows (this result assumes there is only a single lock in the database):
<lock>
<URI>/documents/myDocument.xml</URI>
<user-id>15025067637711025979</user-id>
</lock>
This chapter describes properties documents and directories in MarkLogic Server. It includes the
following sections:
• Properties Documents
• Directories
• Protected Properties
The properties schema is assigned the prop namespace prefix, which is predefined in the server:
http://marklogic.com/xdmp/property
<xs:schema targetNamespace="http://marklogic.com/xdmp/property"
xsi:schemaLocation="http://www.w3.org/2001/XMLSchema XMLSchema.xsd
http://marklogic.com/xdmp/security security.xsd"
xmlns="http://marklogic.com/xdmp/property"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xhtml="http://www.w3.org/1999/xhtml"
xmlns:sec="http://marklogic.com/xdmp/security">
<xs:complexType name="properties">
<xs:annotation>
<xs:documentation>
A set of document properties.
</xs:documentation>
<xs:appinfo>
</xs:appinfo>
</xs:annotation>
<xs:choice minOccurs="1" maxOccurs="unbounded">
<xs:any/>
</xs:choice>
</xs:complexType>
<xs:simpleType name="directory">
<xs:annotation>
<xs:documentation>
A directory indicator.
</xs:documentation>
<xs:appinfo>
</xs:appinfo>
</xs:annotation>
<xs:restriction base="xs:anySimpleType">
</xs:restriction>
</xs:simpleType>
<xs:simpleType name="last-modified">
<xs:annotation>
<xs:documentation>
A timestamp of the last time something was modified.
</xs:documentation>
<xs:appinfo>
</xs:appinfo>
</xs:annotation>
<xs:restriction base="xs:dateTime">
</xs:restriction>
</xs:simpleType>
</xs:schema>
For the signatures and descriptions of these APIs, see the MarkLogic XQuery and XSLT Function
Reference.
The property axis is similar to the forward and reverse axes in an XPath expression. For example,
you can use the child:: forward axis to traverse to a child element in a document. For details on
the XPath axes, see the XPath 2.0 specification and XPath Quick Reference in the XQuery and XSLT
Reference Guide.
The property axis contains all of the children of the properties document node for a given URI.
The following example shows how you can use the property axis to access properties for a
document while querying the document:
xdmp:document-insert("/test/123.xml",
<test>
<element>123</element>
</test>);
xdmp:document-add-properties("/test/123.xml",
<hello>hello there</hello>)
If you list the properties for the /test/123.xml document, you will see the property you just
added:
xdmp:document-properties("/test/123.xml")
=>
<prop:properties xmlns:prop="http://marklogic.com/xdmp/property">
<hello>hello there</hello>
</prop:properties>
You can now search through the property axis of the /test/123.xml document, as follows:
doc("/test/123.xml")/property::hello
=>
<hello>hello there</hello>
• prop:directory
• prop:last-modified
These properties are reserved for use directly by MarkLogic Server; attempts to add or delete
properties with these names fail with an exception.
xdmp:document-properties("http://myDirectory/")
=>
<prop:properties xmlns:prop="http://marklogic.com/xdmp/property">
<prop:directory/>
</prop:properties>
You can add whatever you want to a properties document (as long as it conforms to the properties
schema). If you run the function xdmp:document-properties with no arguments, it returns a
sequence of all the properties documents in the database.
The following example creates a properties document and sets permissions on it:
xdmp:document-set-properties("/my-props.xml", <my-props/>),
xdmp:document-set-permissions("/my-props.xml",
(xdmp:permission("dls-user", "read"),
xdmp:permission("dls-user", "update")))
If you then run xdmp:document-properties on the URI, it returns the new properties document:
xdmp:document-properties("/my-props.xml")
(: returns:
<?xml version="1.0" encoding="ASCII"?>
<prop:properties xmlns:prop="http://marklogic.com/xdmp/property">
<my-props/>
<prop:last-modified>2010-06-18T18:19:10-07:00</prop:last-modified>
</prop:properties>
:)
Similarly, you can pass in functions to set the collections and quality on the standalone properties
document, either when you create it or after it is created.
This section describes how to use properties to store metadata for use in a document processing
pipeline, it includes the following subsections:
Joins across the properties axis that have predicates are optimized for performance. For example,
the following returns foo root elements from documents that have a property bar:
foo[property::bar]
The following examples show the types of queries that are optimized for performance (where
/a/b/c is some XPath expression):
/a/b/c[property::bar]
/a/b/c[not(property::bar = "baz")]
for $f in /a/b/c
where $f/property::bar = "baz"
return $f
Other types of expressions will work but are not optimized for performance, including the
following:
• If you want the bar property of documents whose root elements are foo:
/foo/property::bar
• “I have already loaded 1 million documents and now want to update all of them.” The
pseudo-code for this is as follows:
for $d in fn:doc()
return some-update($d)
These types of queries will eventually run out of tree cache memory and fail.
For these types of scenarios, using properties to test whether a document needs processing is an
effective way of being able to batch up the updates into manageable chunks.
1. Take an iterative strategy, but one that does not become progressively slow.

2. Use properties (or lack thereof) to identify the documents that (still) need processing.

3. Repeatedly call the same module, updating each document’s property as well as updating
the document.

4. If there are any documents that still need processing, invoke the module again, for
example:

xdmp:invoke($path, $external-vars)

5. The pseudo-code for the module that processes documents that do not have a specific
property is as follows:
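(: a sketch; the element names, batch size, and <processed/> property
   are hypothetical :)
for $d in (/a/b/c[fn:not(property::processed)])[1 to 1000]
return (
  (: perform the real content update on $d here :)
  xdmp:document-add-properties(xdmp:node-uri($d), <processed/>)
)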
9.3 Directories
Directories have many uses, including organizing your document URIs and using them with
WebDAV servers. This section includes the following items about directories:
xdmp:directory-create("/myDirectory/");
xdmp:document-properties("/myDirectory/")
=>
<prop:properties xmlns:prop="http://marklogic.com/xdmp/property">
<prop:directory/>
</prop:properties>
Note: You can create a directory with any unique URI, but the convention is for directory
URIs to end with a forward slash (/). It is possible to create a document with the
same URI as a directory, but this is not recommended; the best practice is to
reserve URIs ending in slashes for directories.
Because xdmp:document-properties with no arguments returns the properties documents for all
properties documents in the database, and because each directory has a prop:directory element,
you can easily write a query that returns all of the directories in the database. Use the
xdmp:node-uri function to accomplish this as follows:
for $x in xdmp:document-properties()/prop:properties/prop:directory
return <directory-uri>{xdmp:node-uri($x)}</directory-uri>
/a/b/hello/
Except for the fact that you can use both directories and collections to organize documents,
directories are unrelated to collections. For details on collections, see Collections in the Search
Developer’s Guide. For details on WebDAV servers, see WebDAV Servers in the Administrator’s
Guide.
(: browser.xqy :)
This application writes out an HTML document with links to the documents and directories in the
root of the server. The application finds the documents in the root directory using the
xdmp:directory function, finds the directories using the xdmp:directory-properties function,
does some string manipulation to get the last part of the URI to display, and keeps the state using
the application server request object built-in XQuery functions (xdmp:get-request-field and
xdmp:get-request-path).
a. Set the Modules database to be the same database as the Documents database. For
example, if the database setting is set to the database named my-database, set the modules
database to my-database as well.
b. Set the HTTP Server root to http://myDirectory/, or set the root to another value and
modify the $rootdir variable in the directory browser code so it matches your HTTP
Server root.
2. Copy the sample code into a file named browser.xqy. If needed, modify the $rootdir
variable to match your HTTP Server root. Using the xdmp:modules-root function, as in the
sample code, will automatically get the value of the App Server root.
3. Load the browser.xqy file into the Modules database at the top level of the HTTP Server
root. For example, if the HTTP Server root is http://myDirectory/, load the browser.xqy
file into the database with the URI http://myDirectory/browser.xqy. You can load the
document either via a WebDAV client (if you also have a WebDAV server pointed to this
root) or with the xdmp:document-load function.
4. Make sure the browser.xqy document has execute permissions. You can check the
permissions with the following function:
xdmp:document-get-permissions("http://myDirectory/browser.xqy")
This command returns all of the permissions on the document. It must have “execute”
capability for a role possessed by the user running the application. If it does not, you can
add the permissions with a command similar to the following:
xdmp:document-add-permissions("http://myDirectory/browser.xqy",
xdmp:permission("myRole", "execute"))
5. Load some other documents into the HTTP Server root. For example, drag and drop some
documents and folders into a WebDAV client (if you also have a WebDAV server pointed
to this root).
6. Access the browser.xqy file with a web browser using the host and port number from the
HTTP Server. For example, if you are running on your local machine and you have set the
HTTP Server port to 9001, you can run this application from the URL
http://localhost:9001/browser.xqy.
You will see links to the documents and directories you loaded into the database. If you did not
load any other documents, you will just see a link to the browser.xqy file.
You can configure MarkLogic Server to retain old versions of documents, allowing you to
evaluate a query statement as if you had travelled back to a point-in-time in the past. When you
specify a timestamp at which a query statement must evaluate, that statement will evaluate against
the newest version of the database up to (but not beyond) the specified timestamp.
This chapter describes point-in-time queries and forest rollbacks to a point-in-time, and includes
the following sections:
For more information on how merges work, see the “Understanding and Controlling Database
Merges” chapter of the Administrator’s Guide. For background material for this chapter, see
“Understanding Transactions in MarkLogic Server” on page 28.
To maximize efficiency and improve performance, the fragments are maintained using a method
analogous to a log-structured filesystem. A log-structured filesystem is a very efficient way of
adding, deleting, and modifying files, with a garbage collection process that periodically removes
obsolete versions of the files. In MarkLogic Server, fragments are stored in a log-structured
database. MarkLogic Server periodically merges two or more stands together to form a single
stand. This merge process is equivalent to the garbage collection of log-structured filesystems.
When you modify or delete an existing document or node, it affects one or more fragments. In the
case of modifying a document (for example, an xdmp:node-replace operation), MarkLogic Server
creates new versions of the fragments involved in the operation. The old versions of the fragments
are marked as obsolete, but they are not yet deleted. Similarly, if a fragment is deleted, it is simply
marked as obsolete, but it is not immediately deleted from disk (although you will no longer be
able to query it without a point-in-time query).
There is a control at the database level called the merge timestamp, set via the Admin Interface.
By default, the merge timestamp is set to 0, which sets the timestamp of a merge to the timestamp
corresponding to when the merge completes. To use point-in-time queries, you can set the merge
timestamp to a static value corresponding to a particular time. Then, any merges that occur after
that time will preserve all fragments, including obsolete fragments, whose timestamps are equal
to or later than the specified merge timestamp.
The effect of preserving obsolete fragments is that you can perform queries that look at an older
view of the database, as if you are querying the database from a point-in-time in the past. For
details on setting the merge timestamp, see “Enabling Point-In-Time Queries in the Admin
Interface” on page 142.
The following figure shows a stand with a merge timestamp of 100. Fragment 1 is a version that
was changed at timestamp 110, and fragment 2 is a version of the same fragment that was
changed at timestamp 120.
[Figure: a stand with a merge timestamp of 100, containing two versions of the same fragment (Fragment ID 1): Fragment 1 with timestamp 110 and Fragment 2 with timestamp 120.]
In this scenario, if you assume that the current time is timestamp 200, then a query at the current
time will see Fragment 2, but not Fragment 1. If you perform a point-in-time query at
timestamp 115, you will see Fragment 1, but not Fragment 2 (because Fragment 2 did not yet
exist at timestamp 115).
There is no limit to the number of different versions that you can keep around. If the merge
timestamp is set to the current time or a time in the past, then all subsequently modified fragments
will remain in the database, available for point-in-time queries.
In the Merge Policy Configuration page of the Admin Interface, there is a merge timestamp
parameter. When this parameter is set to 0 (the default) and merges are enabled, point-in-time
queries are effectively disabled. To access the Merge Policy Configuration page, click the
Databases > db_name > Merge Policy link from the tree menu of the Admin Interface.
When deciding on a value for the merge timestamp parameter, the most likely choice is the
current system timestamp. Setting the value to the current system timestamp will
preserve all versions of fragments from the current time forward. To set the merge
timestamp parameter to the current timestamp, click the get current timestamp button on the
Merge Control Configuration page and then click OK.
If you set a value for the merge timestamp parameter higher than the current timestamp,
MarkLogic Server will use the current timestamp when it merges (the same behavior as when set
to the default of 0). When the system timestamp grows past the specified merge timestamp
number, it will then start using the merge timestamp specified. Similarly, if you set a merge
timestamp lower than the lowest timestamp preserved in a database, MarkLogic Server will use
the lowest timestamp of any preserved fragments in the database, or the current timestamp,
whichever is lower.
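You can also set the merge timestamp programmatically with the Admin API. The following sketch (the database name and the timestamp value are illustrative) sets it to a previously recorded system timestamp:

xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
  at "/MarkLogic/admin.xqy";

let $config := admin:get-configuration()
let $ts := 92883  (: illustrative: a system timestamp you recorded earlier :)
return admin:save-configuration(
  admin:database-set-merge-timestamp(
    $config, xdmp:database("my-database"), $ts))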
You might want to keep track of your system timestamps over time, so that when you go to run
point-in-time queries, you can map actual time with system timestamps. For an example of how to
create such a timestamp record, see “Keeping Track of System Timestamps” on page 147.
Note: After the system merges when the merge timestamp is set to 0, all obsolete versions
of fragments will be deleted; that is, only the latest versions of fragments will
remain in the database. If you set the merge timestamp to a value lower than the
current timestamp, any obsolete versions of fragments will not be available
(because they no longer exist in the database). Therefore, if you want to preserve
versions of fragments, you must configure the system to do so before you update
the content.
The timestamp you specify must be valid for the database. If you specify a system timestamp that
is less than the oldest timestamp preserved in the database, the statement will throw an
XDMP-OLDSTAMP exception. If you specify a timestamp that is newer than the current timestamp, the
statement will throw an XDMP-NEWSTAMP exception.
Note: If the merge timestamp is set to the default of 0, and if the database has completed
all merges since the last updates or deletes, query statements that specify any
timestamp older than the current system timestamp will throw the XDMP-OLDSTAMP
exception. This is because the merge timestamp value of 0 specifies that no
obsolete fragments are to be retained.
xdmp:eval("doc('/docs/mydocument.xml')", (),
<options xmlns="xdmp:eval">
<timestamp>99225</timestamp>
</options>)
This statement will return the version of the /docs/mydocument.xml document that existed at
system timestamp 99225.
In XCC for Java, you can set options to requests with the RequestOptions class, which allows you
to modify the environment in which a request runs. The setEffectivePointInTime method sets the
timestamp in which the request runs. The core design pattern is to set up options for your requests
and then use those options when the requests are submitted to MarkLogic Server for evaluation.
You can also set request options on the Session object. The following Java code snippet shows
the basic design pattern:
RequestOptions options = new RequestOptions();
options.setEffectivePointInTime(timestamp); // timestamp is a java.math.BigInteger
session.setDefaultRequestOptions(options);
For an example of how you might use a Java environment to run point-in-time queries, see
“Example: Query Old Versions of Documents Using XCC” on page 146.
For more details on scores and the scoring methods, see Relevance Scores: Understanding and
Customizing in the Search Developer’s Guide.
Point-in-time queries allow you to do this all within the same database. The only thing that you
need to change in the application is the timestamps at which the query statements run. XCC
provides a convenient mechanism for accomplishing this goal.
This example demonstrates how you can query deleted documents with point-in-time queries. For
simplicity, assume that no other query or update activity is happening on the system for the
duration of the example. To follow along in the example, run the following code samples in the
order shown below.
xdmp:document-insert("/docs/test.xml", <a>hello</a>))
2. When you query the document, it returns the node you inserted:
doc("/docs/test.xml")
(: returns the node <a>hello</a> :)
xdmp:document-delete("/docs/test.xml")
4. Query the document again. It returns the empty sequence because it was just deleted.
5. Run a point-in-time query, specifying the current timestamp (this is semantically the same
as querying the document without specifying a timestamp):
xdmp:eval("doc('/docs/test.xml')", (),
<options xmlns="xdmp:eval">
<timestamp>{xdmp:request-timestamp()}</timestamp>
</options>)
(: returns the empty sequence because the document has been deleted :)
6. Run the point-in-time query at one less than the current timestamp, which is the old
timestamp in this case because only one change has happened to the database. The
following query statement returns the old document.
xdmp:eval("doc('/docs/test.xml')", (),
<options xmlns="xdmp:eval">
<timestamp>{xdmp:request-timestamp()-1}</timestamp>
</options>)
(: returns the deleted version of the document :)
Note: It might not be important to your application to map system timestamps to actual
time. For example, you might simply set up your merge timestamp to the current
timestamp, and know that all versions from then on will be preserved. If you do
not need to keep track of the system timestamp, you do not need to create this
application.
The first step is to create a document in which the timestamps are stored, with an initial entry of
the current timestamp. To avoid possible confusion of future point-in-time queries, create this
document in a different database than the one in which you are running point-in-time queries. You
can create the document as follows:
xdmp:document-insert("/system/history.xml",
<timestamp-history>
<entry>
<datetime>{fn:current-dateTime()}</datetime>
<system-timestamp>{
(: use eval because this is an update statement :)
xdmp:eval("xdmp:request-timestamp()")}
</system-timestamp>
</entry>
</timestamp-history>)
The resulting document looks similar to the following:

<timestamp-history>
<entry>
<datetime>2006-04-26T19:35:51.325-07:00</datetime>
<system-timestamp>92883</system-timestamp>
</entry>
</timestamp-history>
Note that the code uses xdmp:eval to get the current timestamp. It must use xdmp:eval because the
statement is an update statement, and update statements always return the empty sequence for
calls to xdmp:request-timestamp. For details, see “Understanding Transactions in MarkLogic
Server” on page 28.
Next, set up a process to run code similar to the following at periodic intervals. For example, you
might run the following every 15 minutes:
xdmp:node-insert-child(doc("/system/history.xml")/timestamp-history,
<entry>
<datetime>{fn:current-dateTime()}</datetime>
<system-timestamp>{
(: use eval because this is an update statement :)
xdmp:eval("xdmp:request-timestamp()")}
</system-timestamp>
</entry>)
After running this periodically, the timestamp history document accumulates entries like the following:

<timestamp-history>
<entry>
<datetime>2006-04-26T19:35:51.325-07:00</datetime>
<system-timestamp>92883</system-timestamp>
</entry>
<entry>
<datetime>2006-04-26T19:46:13.225-07:00</datetime>
<system-timestamp>92884</system-timestamp>
</entry>
</timestamp-history>
To call this code at periodic intervals, you can set up a cron job, write a shell script, write a Java or
.NET program, or use any method that works in your environment. Once you have the document
with the timestamp history, you can easily query it to find out what the system timestamp was at a
given time.
A typical use case for forest rollbacks is to guard against some sort of data-destroying event,
providing the ability to get back to the point in time before that event without doing a full
database restore. If you wanted to allow your application to go back to some state within the last
week, for example, you can create a process whereby you update the merge timestamp every day
to the system timestamp from 7 days ago. This would allow you to go back any point in time in
the last 7 days. To set up this process, you would need to do the following:
• Maintain a mapping between the system timestamp and the actual time, as described in
“Keeping Track of System Timestamps” on page 147.
• Create a script (either a manual process or an XQuery script using the Admin API) to
update the merge timestamp for your database once every 7 days. The script would update
the merge timestamp to the system timestamp that was active 7 days earlier.
• If a rollback was needed, roll back all of the forests in the database to a time between the
current timestamp and the merge timestamp. For example:
xdmp:forest-rollback(
xdmp:database-forests(xdmp:database("my-db")),
3248432)
(: where 3248432 is the timestamp to which you want to roll back :)
Another use case to set up an environment for using forest rollback operations is if you are
pushing a new set of code and/or content out to your application, and you want to be able to roll it
back to the previous state. To set up this scenario, you would need to do the following:
• When your system is in a steady state before pushing the new content/code, set the merge
timestamp to the current timestamp.
• Load your new content/code.
• Decide whether you are happy with your changes:
• If yes, then you can set the merge timestamp back to 0, which will eventually
merge out your old content/code (because they are deleted fragments).
• If no, then roll all of the forests in the database back to the timestamp that you set
in the merge timestamp.
• Use caution when rolling back one or more forests that are in the context database (that is,
forests that belong to the database against which your query is evaluated). When run
against a forest in the context database, the xdmp:forest-rollback operation runs
asynchronously. The new state of the forest is not seen until the forest restarts; before
the forest is unmounted, the old state is still reflected. Additionally, any errors that
occur as part of the rollback operation are not reported back to the query that
performs the operation (although, if possible, they are logged to the ErrorLog.txt file). As
a best practice, MarkLogic recommends running xdmp:forest-rollback operations against
forests that are not attached to the context database.
• If you do not specify all of the forests in a database to roll back, you might end up in a
state where the rolled-back forest is not consistent with the other forests. In most
cases, it is a good idea to roll back all of the forests in a database, unless you are sure that
the content of the forest being rolled back will not become inconsistent if other forests are
not rolled back to the same state (for example, if you know that all of the content you are
rolling back is in a single forest).
• If your database indexing configuration has changed since the point in time to which you
are rolling back, and if you have reindexing enabled, a rollback operation will begin
reindexing as soon as the rollback operation completes. If reindexing is not enabled, the
rolled-back fragments will remain indexed as they were at the time they were last
updated, which might be inconsistent with the current database configuration.
• As a best practice, MarkLogic recommends running a rollback operation only on forests
that have no update activity at the time of the operation (that is, forests that are
quiesced).
1. At the state of the database to which you want to be able to roll back, set the merge
timestamp to the current timestamp.
2. Keep track of your system timestamps, as described in “Keeping Track of System
Timestamps” on page 147.

3. Perform updates to your application as usual. Old versions of documents will remain in the
database.

4. If you know you will not need to roll back to a time earlier than the present, go back to
step 1.
5. If you want to roll back, you can roll back to any time between the merge timestamp and
the current timestamp. When you perform the rollback, it is a good idea to do so from the
context of a different database. For example, to roll back all of the forests in the my-db
database, perform an operation similar to the following, which sets the database context to
a different one than the forests that are being rolled back:
xdmp:eval(
'xdmp:forest-rollback(
xdmp:database-forests(xdmp:database("my-db")),
3248432)
(: where 3248432 is the timestamp to which you want
to roll back :)',
(),
<options xmlns="xdmp:eval">
<database>{xdmp:database("Documents")}</database>
</options>)
This chapter describes the system plugin framework in MarkLogic Server, and includes the
following sections:
Consider the following notes about how the plugin framework works:
• After MarkLogic starts up, each module in the Plugins directory is evaluated before the
first request against each App Server is evaluated on each node in the cluster. This process
repeats again after the Plugins directory is modified.
• When using a cluster, any files added to the Plugins directory must be added to the
Plugins directory on each node in a MarkLogic Server cluster.
• Any errors (for example, syntax errors) in a plugin module are thrown whenever any
request is made to any App Server in the cluster (including the Admin Interface). It is
therefore extremely important that you test the plugin modules before deploying them to
the <marklogic-dir>/Plugins directory. If there are any errors in a plugin module, you
must fix them before you will be able to successfully evaluate any requests against any App
Server.
• Plugins are cached and, for performance reasons, MarkLogic Server only checks for
updates once per second, and only refreshes the cache after the Plugins directory is
modified; it does not check for modifications of the individual files in the Plugins
directory. If you are using an editor to modify a plugin that creates a new file (which in
turn modifies the directory) upon each update, then MarkLogic Server will see the update
within the next second. If your editor modifies the file in place, then you will have to
touch the directory to change the modification date for the latest changes to be loaded
(alternatively, you can restart MarkLogic Server). If you delete a plugin from the Plugins
directory, it remains registered on any App Servers that have already evaluated the plugin
until either you restart MarkLogic Server or another plugin registers with the same name
on each App Server.
System plugins use the built-in plugin framework in MarkLogic Server along with the
xdmp:set-server-field and xdmp:get-server-field functions. As described in “Overview of
System Plugins” on page 152, system plugins are stored in the <marklogic-dir>/Plugins
directory and any errors in them are thrown on all App Servers in the cluster.
Application plugins are built on top of system plugins and are designed for use by applications.
Application plugins are stored in the <marklogic-dir>/Assets/plugins/marklogic/appservices
directory, and, unlike system plugins, they do not cause errors to other applications if the plugin
code contains errors.
With the plugin API, you can register a set of plugins, and then you can ask for all of the plugins
with a particular capability, and the functionality delivered by each plugin is available to your
application. For details about the plugin API, see the MarkLogic XQuery and XSLT Function
Reference.
Warning Any errors in a system plugin module will cause all requests to hit the error. It is
therefore extremely important to test your plugins before deploying them in a
production environment.
To use a system plugin, you must deploy the plugin main module to the Plugins directory. To
deploy a plugin to a MarkLogic Server cluster, you must copy the plugin main module to the
Plugins directory of each host in the cluster.
Warning Any system plugin module you write must have a unique filename. Do not modify
any of the plugin files that MarkLogic ships in the <marklogic-dir>/Plugins
directory. Any changes you make to MarkLogic-installed files in the Plugins
directory will be overridden after each upgrade of MarkLogic Server.
When a password is set using the security XQuery library (security.xqy), it calls the plugin to
check the password using the plugin capability with the following URI:
http://marklogic.com/xdmp/security/password-check
When no plugins are registered with the above capability in the <marklogic-dir>/Plugins
directory, then no other work is done upon setting a password. If you include plugins that register
with the above password-check capability in the <marklogic-dir>/Plugins directory, then the
module(s) are run when you set a password. If multiple plugins are registered with that capability,
then they will all run. The order in which they run is undetermined, so the code must be designed
such that the order does not matter.
There is a sample included that checks for a minimum length and a sample included that checks
whether the password contains digits. You can create your own plugin module to perform any sort of
password checking you require (for example, check for a particular length, the existence of
various special characters, repeated characters, upper or lower case, and so on).
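As a sketch of what such a plugin looks like, consider the following, modeled on the shipped minimum-length sample. The namespace URI and the use of xdmp:function to register the capability are assumptions; see the files in <marklogic-dir>/Samples/Plugins for the authoritative form.

xquery version "1.0-ml";
(: a sketch of a password-check plugin; details are assumptions :)
declare namespace pwd = "http://marklogic.com/xdmp/security/password-check";

import module namespace plugin = "http://marklogic.com/extension/plugin"
  at "/MarkLogic/plugin/plugin.xqy";

(: return an error message if the password fails the check, else empty :)
declare function pwd:minimum-length($password as xs:string) as xs:string*
{
  if (fn:string-length($password) < 4)
  then "password too short"
  else ()
};

(: register this module under the password-check capability :)
let $map := map:map()
let $_ := map:put($map,
  "http://marklogic.com/xdmp/security/password-check",
  xdmp:function(xs:QName("pwd:minimum-length")))
return plugin:register($map, "password-check-minimum-length.xqy")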
Additionally, you can write a plugin to save extra history in the Security database user document,
which stores information that you can use or update in your password checking code. The element
you can use to store information for password checking applications is sec:password-extra. You
can use the sec:user-set-password-extra and sec:user-get-password-extra functions (in
security.xqy) to modify the sec:password-extra element in the user document. Use these APIs
to create elements as children of the sec:password-extra element.
Note: Use a unique name to register your plugin (the second argument to
plugin:register). If the name is used by another plugin, only one of them will end
up being registered (because the other one will overwrite the registration).
If you want to implement your own logic that is performed when a password is checked (both on
creating a user and on changing the password), then you can write a plugin, as described in the
next section.
Warning Any errors in a plugin module will cause all requests to hit the error. It is therefore
extremely important to test your plugins before deploying them in a production
environment.
To use and modify the sample password plugins, perform the following steps:
1. Change to the Plugins directory:

cd /opt/MarkLogic/Plugins

2. Copy the sample password plugin files into the Plugins directory:

cp ../Samples/Plugins/password-check-*.xqy .
3. Make any changes you desire. For example, to change the minimum length, find the
pwd:minimum-length function and change the 4 to a 6 (or to whatever you prefer). When
you are done, the body of the function looks as follows:
if (fn:string-length($password) < 6)
then "password too short"
else ()
4. Optionally, if you have renamed the files, change the second parameter to
plugin:register to the name you called the plugin files in the first step. For example, if
you named the plugin file my-password-plugin.xqy, change the plugin:register call as
follows:
plugin:register($map, "my-password-plugin.xqy")
Warning If you made a typo or some other mistake that causes a syntax error in the plugin,
any request you make to any App Server will throw an exception. If that happens,
edit the file to correct any errors.
6. If you are using a cluster, copy your plugin to the Plugins directory on each host in your
cluster.
7. Test your code to make sure it works the way you intend.
The next time you try and change a password, your new checks will be run. For example, if you
try to make a single-character password, it will be rejected.
This chapter describes how to use the map functions and includes the following sections:
• Map API
• Map Operators
• Examples
MarkLogic Server has a set of XQuery functions to create and manipulate maps. Like the xdmp:set
function, maps have side effects and can change within your program. Therefore maps are not
strictly functional like most other aspects of XQuery. While the map is in memory, its structure is
opaque to the developer, and you access it with the built-in XQuery functions. You can persist the
structure of the map as an XML node, however, if you want to save it for later use. A map is a
node and therefore has an identity, and the identity remains the same as long as the map is in
memory. However, if you serialize the map as XML and store it in a document, when you retrieve
it, it will have a different node identity (that is, comparing the identity of the map and the serialized
version of the map would return false). Similarly, if you store XML values retrieved from the
database in a map, the node in the in-memory map will have the same identity as the node from
the database while the map is in memory, but will have different identities after the map is
serialized to an XML document and stored in the database. This is consistent with the way
XQuery treats node identity.
The keys are xs:string values, and the values are item()* values. Therefore, you can pass a
string, an element, or a sequence of items as a value. Maps are a nice alternative to storing
values in an in-memory XML node and then using XPath to access them. Maps make it very
easy to update the values.
For example, the following code (a minimal reconstruction; returning a map from a query serializes it as XML) constructs a map with two entries:
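let $map := map:map()
let $key := map:put($map, "1", "hello")
let $key := map:put($map, "2", "world")
return $map

The XML serialization of the constructed map looks like this: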
<map:map xmlns:map="http://marklogic.com/xdmp/map"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xs="http://www.w3.org/2001/XMLSchema">
<map:entry key="1">
<map:value xsi:type="xs:string">hello</map:value>
</map:entry>
<map:entry key="2">
<map:value xsi:type="xs:string">world</map:value>
</map:entry>
</map:map>
• map:clear
• map:count
• map:delete
• map:get
• map:keys
• map:map
• map:put
+ The union of two maps. The result is the combination of the keys and
values of the first map (Map A) and the second map (Map B). For an
example, see “Creating a Map Union” on page 162.
* The intersection of two maps (similar to a set intersection). The result is
the key-value pairs that are common to both maps (Map A and Map B).
For an example, see “Creating a Map Intersection” on page 163.
- The difference between two maps (similar to a set difference). The result
is the key-value pairs that exist in the first map (Map A) but do not exist
in the second map (Map B). For an example, see “Applying a Map
Difference Operator” on page 164.
12.6 Examples
This section includes example code that uses maps and includes the following examples:
This returns a map with two key-value pairs in it: the key “1” has a value “hello”, and the key “2”
has a value “world”.
This returns a map with three key-value pairs in it: the key “1” has a value “hello”, the key “2” has
a value “world”, and the key “3” has a value “fair”. Note that the map bound to the $map variable
is not the same as the map bound to $map2. After it was serialized to XML, a new map was
constructed in the $map2 variable.
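The code for that example is along these lines (a reconstruction; the element name used to hold the serialization is illustrative). Embedding the map in an element serializes it, and map:map can then construct a new map from the serialized node:

let $map := map:map()
let $key := map:put($map, "1", "hello")
let $key := map:put($map, "2", "world")
let $key := map:put($map, "3", "fair")
let $node := <stored>{$map}</stored>   (: serializes the map as XML :)
let $map2 := map:map($node/map:map)    (: a new map built from the XML :)
return $map2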
let $map :=
map:map()
let $key :=
map:put($map, "1", "hello")
let $key :=
map:put($map, "2", "world")
let $seq :=
("fair",
<some-xml>
<another-tag>with text</another-tag>
</some-xml>)
let $key := map:put($map, "3", $seq)
return
for $x in map:keys($map) return
<result>{map:get($map, $x)}</result>
This returns output similar to the following (the order of the keys is indeterminate):

<result>fair
<some-xml>
<another-tag>with text</another-tag>
</some-xml>
</result>
<result>world</result>
<result>hello</result>
When you take the union of two maps (+), any key-value pairs common to both maps are included
only once. When you take the intersection (*), only the key-value pairs common to both maps are returned.
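The following sketch (the keys and values are illustrative) builds two overlapping maps and returns their union, intersection, and difference using the operators described above:

let $mapA := map:map()
let $key := map:put($mapA, "1", "hello")
let $key := map:put($mapA, "2", "world")
let $mapB := map:map()
let $key := map:put($mapB, "2", "world")
let $key := map:put($mapB, "3", "fair")
return ($mapA + $mapB,   (: union: keys 1, 2, and 3 :)
        $mapA * $mapB,   (: intersection: key 2 :)
        $mapA - $mapB)   (: difference: key 1 :)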
This chapter describes how to use function values, which allow you to pass function values as
parameters to XQuery functions. It includes the following sections:
You pass a function value to another function by telling it the name of the function you want to
pass. The actual value returned by the function is evaluated dynamically during query runtime.
Passing these function values allows you to define an interface to a function and have a default
implementation of it, while allowing callers of that function to implement their own version of the
function and specify it instead of the default version.
• xdmp:function
• xdmp:apply
You use xdmp:function to specify the function to pass in, and xdmp:apply to run the function that
is passed in. For details and the signature of these APIs, see the MarkLogic XQuery and XSLT
Function Reference.
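For example, the following applies a function value dynamically (fn:upper-case is used purely for illustration):

let $function := xdmp:function(xs:QName("fn:upper-case"))
return xdmp:apply($function, "hello")
(: returns "HELLO" :)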
If you have code that you will apply that performs an update, and if the calling query does not
have any update statements, then you must make the calling query an update statement. To change
a query statement to be an update statement, either use the xdmp:update prolog option or put an
update call somewhere in the statement. For example, to force a query to run as an update
statement, you can add the following to your XQuery prolog:
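declare option xdmp:update "true";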
Without the prolog option, any update expression in the query will force it to run as an update
statement. For example, the following expression will force the query to run as an update
statement and not change anything else about the query:
if ( fn:true() )
then ()
else xdmp:document-insert("fake.xml", <fake/>)
For details on the difference between update statements and query statements, see “Understanding
Transactions in MarkLogic Server” on page 28.
This returns 5052, which is the sum of all of the numbers between 2 and 100.
If you want to use a different formula for adding up the numbers, you can create an XQuery
library module with a different implementation of the same function and specify it instead. For
example, assume you want to use a different formula to add up the numbers, and you create
another library module named /my.xqy that has the following code (it multiplies the second
number by two before adding it to the first):
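A sketch of such a module (the namespace URI and the function signature are assumptions, since the original listing is not shown):

xquery version "1.0-ml";
(: /my.xqy, a sketch; the namespace URI is illustrative :)
module namespace my = "my-namespace";

declare function my:add($x as xs:decimal, $y as xs:decimal) as xs:decimal
{
  (: multiply the second number by two before adding it to the first :)
  $x + (2 * $y)
};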
You can now call the my:sum-sequence function specifying your new implementation of the
my:add function as follows:
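The exact call depends on the signature of my:sum-sequence, which is not shown here; assuming it takes a function value and the sequence of numbers, it would look something like the following (the import path of the module that defines my:sum-sequence is hypothetical):

import module namespace my = "my-namespace" at "/sum-lib.xqy";
my:sum-sequence(xdmp:function(xs:QName("my:add"), "/my.xqy"), (1 to 100))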
This returns 10102 using the new formula. This technique makes it possible for the caller to
specify a completely different implementation of the specified function that is passed.
This chapter describes how to create applications that reuse content by using XML that includes
other content. It contains the following sections:
• Modular Documents
Modular documents allow you to manage and reuse content. MarkLogic Server includes a
Content Processing Framework (CPF) application that expands the documents based on all of the
XInclude references. The CPF application creates a new document for the expanded document,
leaving the original documents untouched. If any of the parts are updated, the expanded document
is recreated, automatically keeping the expanded document up to date.
The CPF application for modular documents takes care of all of the work involved in expanding
the documents. All you need to do is add or update documents in the database that have XInclude
references, and then anything under a CPF domain is automatically expanded. For details on CPF,
see the Content Processing Framework Guide.
Content can be reused by referencing it in multiple documents. For example, imagine you are a
book publisher and you have boilerplate passages such as legal disclaimers, company
information, and so on, that you include in many different titles. Each book can then reference the
boilerplate documents. If you are using the CPF application, then if the boilerplate is updated, all
of the documents are automatically updated. If you are not using the CPF application, you can still
update the documents with a simple API call.
• XInclude: http://www.w3.org/TR/xinclude/
• XPointer: http://www.w3.org/TR/WD-xptr
XInclude provides a syntax for including XML documents within other XML documents. It
allows you to specify a relative or absolute URI for the document to include. XPointer provides a
syntax for specifying parts of an XML document. It allows you to specify a node in the document
using a syntax based on (but not quite the same as) XPath. MarkLogic Server supports the
XPointer framework, including the element() and xmlns() schemes of XPointer, as well as the
xpath() scheme.
Each of these schemes is used within an attribute named xpointer. The xpointer attribute is an
attribute of the <xi:include> element. If you specify a string corresponding to an idref, then it
selects the element with that id attribute, as shown in “Example: Simple id” on page 174.
The examples that follow show XIncludes that use XPointer to select parts of documents:
• Example: Simple id
<el-name>
<p id="myID">This is the first para.</p>
<p>This is the second para.</p>
</el-name>
The following selects the element with an id attribute with a value of myID from the /test2.xml
document:
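<!-- reconstructed: the shorthand id form described above -->
<xi:include href="/test2.xml" xpointer="myID" />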
<el-name>
<p id="myID">This is the first para.</p>
<p>This is the second para.</p>
</el-name>
The following selects the second p element that is a child of the root element el-name from the
/test2.xml document:
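<!-- one plausible form, using the xpath() scheme -->
<xi:include href="/test2.xml" xpointer="xpath(/el-name/p[2])" />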
<el-name>
<p id="myID">This is the first para.</p>
<p>This is the second para.</p>
</el-name>
The following selects the second p element that is a child of the root element el-name from the
/test2.xml document:
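<!-- one plausible form, using the element() scheme (second child of the first root element) -->
<xi:include href="/test2.xml" xpointer="element(/1/2)" />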
<pref:el-name xmlns:pref="pref-namespace">
<pref:p id="myID">This is the first para.</pref:p>
<pref:p>This is the second para.</pref:p>
</pref:el-name>
The following selects the first pref:p element that is a child of the root element pref:el-name
from the /test2.xml document:
<xi:include href="/test2.xml"
xpointer="xmlns(pref=pref-namespace)
xpath(/pref:el-name/pref:p[1])" />
Note that the namespace prefixes for the XPointer must be entered in an xmlns() scheme; it does
not inherit the prefixes from the query context.
• The XQuery module library xinclude.xqy. The key function in this library is the
xinc:node-expand function, which takes a node and recursively expands any XInclude
references, returning the fully expanded node.
• The XQuery module library xpointer.xqy.
• The XInclude pipeline and its associated actions.
• You can create custom pipelines based on the XInclude pipeline that use the following
<options> to the XInclude pipeline. These options control the expansion of XInclude
references for documents under the domain to which the pipeline is attached:
• <destination-root> specifies the directory in which the expanded version of
documents are saved. This must be a directory path in the database, and the
expanded document will be saved to the URI that is the concatenation of this root
and the base name of the unexpanded document. For example, if the URI of the
unexpanded document is /mydocs/unexpanded/doc.xml, and the destination-root is
set to /expanded-docs/, then this document is expanded into a document with the
URI /expanded-docs/doc.xml.
• <destination-collection> specifies the collection in which to put the expanded
version. You can specify multiple collections by specifying multiple
<destination-collection> elements in the pipeline.
• xdmp:with-namespaces
• xdmp:value
Therefore, any users who will be expanding documents require these privileges. There is a
predefined role called xinclude that has the privileges needed to execute this code. You must
either assign the xinclude role to your users or they must have the above execute privileges in
order to run the XInclude code used in the XInclude CPF application.
• <xi:include> Elements
• <xi:fallback> Elements
• Simple Examples
<xi:include href="/blahblah.xml">
<xi:fallback><p>NOT FOUND</p></xi:fallback>
</xi:include>
The <p>NOT FOUND</p> will be substituted when expanding the document with this <xi:include>
element if the document with the URI /blahblah.xml is not found.
You can also put an <xi:include> element within the <xi:fallback> element to fallback to some
content that is in the database, as follows:
<xi:include href="/blahblah.xml">
<xi:fallback><xi:include href="/fallback.xml" /></xi:fallback>
</xi:include>
The previous element says to include the document with the URI /blahblah.xml when expanding
the document, and if that is not found, to use the content in /fallback.xml.
xdmp:document-insert("/test1.xml", <document>
<p>This is a sample document.</p>
<xi:include href="test2.xml"/>
</document>);
xdmp:document-insert("/test2.xml",
<p>This document will get inserted where
the XInclude references it.</p>);
import module namespace xinc = "http://marklogic.com/xinclude"
  at "/MarkLogic/xinclude/xinclude.xqy";
xinc:node-expand(fn:doc("/test1.xml"))
The following is the expanded document returned from the xinc:node-expand call:
<document>
<p>This is a sample document.</p>
<p xml:base="/test2.xml">This document will get inserted where
the XInclude references it.</p>
</document>
The base URI from the URI of the included content is added to the expanded node as an xml:base
attribute.
xdmp:document-insert("/test1.xml", <document>
<p>This is a sample document.</p>
<xi:include href="/blahblah.xml">
<xi:fallback><p>NOT FOUND</p></xi:fallback>
</xi:include>
</document>);
xdmp:document-insert("/test2.xml",
<p>This document will get inserted where the XInclude references
it.</p>);
xdmp:document-insert("/fallback.xml",
<p>Sorry, no content found.</p>);
import module namespace xinc = "http://marklogic.com/xinclude"
  at "/MarkLogic/xinclude/xinclude.xqy";
xinc:node-expand(fn:doc("/test1.xml"))
The following is the expanded document returned from the xinc:node-expand call:
<document>
<p>This is a sample document.</p>
<p xml:base="/test1.xml">NOT FOUND</p>
</document>
1. Install Content Processing in your database, if it is not already installed. For example, if
your database is named modular, in the Admin Interface click the Databases > modular >
Content Processing link. If Content Processing is not installed, the Content Processing
Summary page indicates that it is not installed; click the Install tab and then click install
(you can install it with or without enabling conversion).
2. Click the domains link from the left tree menu. Either create a new domain or modify an
existing domain to encompass the scope of the documents you want processed with the
XInclude processing. For details on domains, see the Content Processing Framework
Guide.
3. Under the domain you have chosen, click the Pipelines link from the left tree menu.
4. Check the Status Change Handling and XInclude Processing pipelines. You can also
attach or detach other pipelines, depending on whether they are needed for your
application.
Note: If you want to change any of the <options> settings on the XInclude Processing
pipeline, copy that pipeline to another file, make the changes (make sure to change
the value of the <pipeline-name> element as well), and load the pipeline XML file.
It will then be available to attach to a domain. For details on the options for the
XInclude pipeline, see “CPF XInclude Application and API” on page 175.
5. Click OK. The Domain Pipeline Configuration screen shows the attached pipelines.
Any documents with XIncludes that are inserted or updated under your domain will now be
expanded. The expanded document will have a URI ending in _expanded.xml. For example, if you
insert a document with the URI /test.xml, the expanded document will be created with a URI of
/test_xml_expanded.xml (assuming you did not modify the XInclude pipeline options).
Note: If there are existing XInclude documents in the scope of the domain, they will not
be expanded until they are updated.
MarkLogic Server evaluates XQuery programs against App Servers. This chapter describes ways
of controlling the output, both by App Server configuration and with XQuery built-in functions.
Primarily, the features described in this chapter apply to HTTP App Servers, although some of
them are also valid with XDBC Servers and with the Task Server. This chapter contains the
following sections:
• Error Detail
• Execute Permissions Are Needed On Error Handler Document for Modules Databases
You can implement a custom error handler module in either XQuery or Server-Side JavaScript.
The language you choose is independent from the language(s) in which you implement your
application.
The error handler module can get the HTTP error code and the contents of the HTTP response
using the xdmp:get-response-code (XQuery) or xdmp.getResponseCode (JavaScript) function.
The error handler module also has access to additional error details, including stack trace
information, when available. For details, see “Error Detail” on page 182.
If the error is a 503 (unavailable) error, then the error handler is not invoked and the 503
exception is returned to the client.
If the error handler itself throws an exception, that exception is passed to the client with the error
code from the error handler. It will also include a stack trace that includes the original error code
and exception.
An error handler accesses the error detail through a special $error:errors external variable that
MarkLogic populates. To access the error details, include a declaration of the following form in
your error handler:
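declare namespace error = "http://marklogic.com/xdmp/error";
declare variable $error:errors as node()* external;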
The following is a sample error detail node, generated by an XQuery module with a syntax error
that caused MarkLogic to raise an XDMP-CONTEXT exception:
<error:error xsi:schemaLocation="http://marklogic.com/xdmp/error
error.xsd"
xmlns:error="http://marklogic.com/xdmp/error"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<error:code>XDMP-CONTEXT</error:code>
<error:name>err:XPDY0002</error:name>
<error:xquery-version>1.0-ml</error:xquery-version>
<error:message>Expression depends on the context where none
is defined</error:message>
<error:format-string>XDMP-CONTEXT: (err:XPDY0002) Expression
depends on the context where none is defined</error:format-string>
<error:retryable>false</error:retryable>
<error:expr/> <error:data/>
<error:stack>
<error:frame>
<error:uri>/blaz.xqy</error:uri>
<error:line>1</error:line>
<error:xquery-version>1.0-ml</error:xquery-version>
</error:frame>
</error:stack>
</error:error>
The following is an example error detail object resulting from a JavaScript module that caused
MarkLogic to throw an XDMP-DOCNOTFOUND exception:
{ "code": "XDMP-DOCNOTFOUND",
"name": "",
"message": "Document not found",
"retryable": "false",
"data": [ ],
"stack": "XDMP-DOCNOTFOUND:xdmp.documentDelete(\"nonexistent.json\")
-- Document not found\n
in /eh-ex/eh-app.sjs, at 3:7, in panic() [javascript]\n
in /eh-ex/eh-app.sjs, at 6:0 [javascript]\n
in /eh-ex/eh-app.sjs [javascript]",
"stackFrames": [
{
"uri": "/eh-ex/eh-app.sjs",
"line": "3",
"column": "7",
"operation": "panic()"
}, {
"uri": "/eh-ex/eh-app.sjs",
"line": "6",
"column": "0"
}, {
"uri": "/eh-ex/eh-app.sjs"
}
]
}
As a consequence of needing the execute permission on the error handler, if a user who is not
authorized to run the error handler attempts to access the App Server, that user runs as the
default user configured for the App Server until authentication. If authentication fails, the
error handler is called as the default user, but because that default user does not have permission
to execute the error handler, the error handler cannot be found and a 404 (not found) error is
returned. Therefore, if you want all users (including unauthorized users) to have
permission to run the error handler, give the default user a role (it does not need to have any
privileges on it) and assign an execute permission to the error handler paired with that role.
The following examples return the response code and error detail as plain text.

XQuery:

xdmp:set-response-content-type("text/plain"),
xdmp:get-response-code(),
$error:errors

Server-Side JavaScript (a minimal sketch):

xdmp.setResponseContentType('text/plain');
const response = xdmp.getResponseCode();
response;
In a typical error page, you would use some or all of this information to create a user-friendly
representation of the error to display to users. Since you can write arbitrary code in the error
handler, you can do a wide variety of things, such as sending an email to the application
administrator or redirecting to a different page.
Without URL rewriting, an application typically exposes a raw dispatch URL such as:

http://<dispatcher-program.xqy>?instructions=foo
Users of web applications typically prefer short, neat URLs to raw query string parameters. A
concise URL, also referred to as a “clean URL,” is easy to remember, and less time-consuming to
type in. If the URL can be made to relate clearly to the content of the page, then errors are less
likely to happen. Also crawlers and search engines often use the URL of a web page to determine
whether or not to index the URL and the ranking it receives. For example, a search engine may
give a better ranking to a well-structured URL such as:
http://marklogic.com/technical/features.html

than to one that passes raw query string parameters, such as:

http://marklogic.com/document?id=43759
In a “RESTful” environment, URLs must be well-structured, predictable, and decoupled from the
physical location of a document or program. When an HTTP server receives an HTTP request
with a well-structured, external URL, it must be able to transparently map that to the internal URL
of a document or program.
The URL Rewriter feature allows you to configure your HTTP App Server to enable the rewriting
of external URLs to internal URLs, giving you the flexibility to use any URL to point to any
resource (web page, document, XQuery program and arguments). The URL Rewriter
implemented by MarkLogic Server operates similarly to the Apache mod_rewrite module, except
you write an XQuery or Server-Side JavaScript program to perform the rewrite operation.
The URL rewriting happens through an internal redirect mechanism so the client is not aware of
how the URL was rewritten. This makes the inner workings of a web site's address opaque to
visitors. The internal URLs can also be blocked or made inaccessible directly if desired by
rewriting them to non-existent URLs, as described in “Prohibiting Access to Internal URLs” on
page 189.
For an end to end example of a simple rewriter, see “Example: A Simple URL Rewriter” on
page 191.
For information about creating a URL rewriter to directly invoke XSLT stylesheets, see Invoking
Stylesheets Directly Using the XSLT Rewriter in the XQuery and XSLT Reference Guide.
Note: If your application code is in a modules database, the URL rewriter needs to have
permissions for the default App Server user (nobody by default) to execute the
module. This is the same as with an error handler that is stored in the database, as
described in “Execute Permissions Are Needed On Error Handler Document for
Modules Databases” on page 184.
You can implement a rewrite module in XQuery or Server-Side JavaScript. The language you
choose for the rewriter implementation is independent of the implementation language of any
module the rewriter may redirect to. For example, you can create a JavaScript rewriter that
redirects a request to an XQuery application module, and vice versa.
You can use the pattern matching features in regular expressions to create flexible URL rewrite
modules. For example, you want the user to only have to enter / after the scheme and network
location portions of the URL (for example, http://localhost:8060/) and have it rewritten as
/app.xqy:
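A minimal XQuery sketch of such a rewriter (a Server-Side JavaScript rewriter would be analogous); URLs that do not match are passed through unchanged:

xquery version "1.0-ml";
(: rewrite a bare "/" to the application page; pass other URLs through :)
fn:replace(xdmp:get-request-url(), "^/$", "/app.xqy")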
The following example converts a portion of the original URL into a request parameter of a new
dynamic URL:
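An XQuery sketch (the URL pattern follows the example below):

xquery version "1.0-ml";
(: rewrite /product-<number>.html to /product.xqy?id=<number> :)
fn:replace(xdmp:get-request-url(),
  "^/product-(\d+)\.html$", "/product.xqy?id=$1")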
The product ID can be any number. For example, the URL /product-12.html is converted to
/product.xqy?id=12 and /product-25.html is converted to /product.xqy?id=25.
Search engine optimization experts suggest displaying the main keyword in the URL. In the
following URL rewriting technique you can display the name of the product in the URL:
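An XQuery sketch that ignores the product-name segment and extracts only the ID:

xquery version "1.0-ml";
(: rewrite /product/<name>/<number>.html to /product.xqy?id=<number> :)
fn:replace(xdmp:get-request-url(),
  "^/product/[^/]+/(\d+)\.html$", "/product.xqy?id=$1")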
The product name can be any string. For example, /product/canned_beans/12.html is converted
to /product.xqy?id=12 and /product/cola_6_pack/8.html is converted to /product.xqy?id=8.
If you need to rewrite multiple pages on your HTTP server, you can create a URL rewrite script
like the following:
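An XQuery sketch that dispatches several patterns from one rewriter; unmatched URLs pass through unchanged:

xquery version "1.0-ml";
let $url := xdmp:get-request-url()
return
  if (fn:matches($url, "^/$"))
  then "/app.xqy"
  else if (fn:matches($url, "^/product-\d+\.html$"))
  then fn:replace($url, "^/product-(\d+)\.html$", "/product.xqy?id=$1")
  else $url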
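To block direct access to internal URLs, rewrite them to a non-existent page; a sketch (the /internal/ prefix is hypothetical):

xquery version "1.0-ml";
let $url := xdmp:get-request-url()
return
  if (fn:matches($url, "^/internal/"))  (: hypothetical internal path :)
  then "/nowhere.html"
  else $url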
Where /nowhere.html is a non-existent page for which the browser returns a “404 Not Found”
error. Alternatively, you could redirect to a URL consisting of a random number generated using
xdmp:random (XQuery) or xdmp.random (JavaScript), or some other scheme guaranteed to
generate non-existent URLs.
If you are going to rewrite a URL to a page that uses page-relative URLs, convert the
page-relative URLs to server-relative or canonical URLs. For example, if your application is
located in C:\Program Files\MarkLogic\myapp and the page builds a frameset with page-relative
URLs, like:
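<frameset>
  <frame src="top.html"/>  <!-- page-relative reference; illustrative -->
</frameset>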
rewrite the frame references in server-relative form (for example, src="/myapp/top.html") or
canonical form (for example, src="http://<host>:<port>/myapp/top.html").
After you configure the URL Rewrite trace event, when any URL Rewrite script is invoked, a
line, like that shown below, is added to the ErrorLog.txt file, indicating the URL received from
the client and the converted URL from the URL rewriter:
Note: The trace events are designed as development and debugging tools, and they might
slow the overall performance of MarkLogic Server. Also, enabling many trace
events will produce a large quantity of messages, especially if you are processing a
high volume of documents. When you are not debugging, disable the trace event
for maximum performance.
The example assumes the existence of an HTTP App Server with the following characteristics. If
you choose to use different settings, you will need to modify the subsequent instructions to match.
For instructions on creating an HTTP App Server, see Creating a New HTTP Server in the
Administrator’s Guide.
• root: /
• port: 8020
• modules database: Modules
• database: Documents
Before running the code, set the Database to Documents and the Query Type as appropriate in
Query Console.
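The original listing is not shown here; the following XQuery sketch installs a small example document (the URI and content are illustrative, but they match the sketches below):

xquery version "1.0-ml";
xdmp:document-insert("/rewriter-ex/example.xml",
  <example>Hello from the rewriter example.</example>)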
Before running the code, set the Database to Modules and the Query Type as appropriate in
Query Console.
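An XQuery sketch that installs the application module at /rewriter-ex/app.xqy; the module body simply returns the example document. Depending on your security configuration, you may also need to add execute permissions, as described earlier.

xquery version "1.0-ml";
xdmp:document-insert("/rewriter-ex/app.xqy",
  text{ 'xquery version "1.0-ml";
         fn:doc("/rewriter-ex/example.xml")' })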
If you used the XQuery example app, navigate to the following URL, assuming MarkLogic is
installed on localhost:
http://localhost:8020/rewriter-ex/app.xqy
If you used the Server-Side JavaScript example app, navigate to the following URL, assuming
MarkLogic is installed on localhost:
http://localhost:8020/rewriter-ex/app.sjs
The example document from “Install the Example Content” on page 192 will appear. If you get a
404 (Page Not Found) error, use Query Console to confirm that you correctly installed the
example application module in the Modules database, and not in the Documents database.
Run the following code in Query Console to insert the rewriter into the modules database. Set the
Database to Modules and the Query Type as appropriate in Query Console.
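An XQuery sketch that installs a rewriter at /rewriter-ex/rewriter.xqy; it maps /test-rewriter to the application module and passes other URLs through:

xquery version "1.0-ml";
xdmp:document-insert("/rewriter-ex/rewriter.xqy",
  text{ 'xquery version "1.0-ml";
         fn:replace(xdmp:get-request-url(),
           "^/test-rewriter$", "/rewriter-ex/app.xqy")' })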
Note that xdmp:get-request-url and xdmp.getRequestUrl also return any request parameters
(fields). Your rewriter can modify the request parameters. For example, you could add a
parameter, changing the URL to /test-rewriter?someparam=value. If you just want the request
path (/test-rewriter, here), you can use xdmp:get-request-path (XQuery) or
xdmp.getRequestPath (JavaScript).
You can create more elaborate URL rewrite modules, as described in “Creating URL Rewrite
Modules” on page 187 and “Creating an Interpretive XQuery Rewriter to Support REST Web
Services” on page 201.
In the Admin Interface, go to the configuration page for the rewriter-ex App Server you created in
“Create the Example App Server” on page 191.
Find the url rewriter configuration setting. Set the rewriter to one of the following paths,
depending on whether you’re using the XQuery or JavaScript example rewriter:
• XQuery: /rewriter-ex/rewriter.xqy
• JavaScript: /rewriter-ex/rewriter.sjs
Click OK at the top or bottom of the App Server configuration page to save your change.
You can also configure the rewriter for an App Server using the Admin library function
admin:appserver-set-url-rewriter, or the REST Management API.
http://localhost:8020/test-rewriter
Your request returns the same test document as when you queried the example application
directly using http://localhost:8020/rewriter-ex/app.xqy or
http://localhost:8020/rewriter-ex/app.sjs in “Exercise the Example Application” on
page 193.
The mappings are based on the W3C XML Entities for Characters specification:
• http://www.w3.org/TR/2008/WD-xml-entity-names-20080721/
with the following modifications to the specification:
• Entities that map to multiple codepoints are not output, unless there is an alternate
single-codepoint mapping available. Most of these entities are negated mathematical
symbols (nrarrw from isoamsa is an example).
• The gcedil set is also included (it is not included in the specification).
The following table describes the different SGML character mapping settings:
[Table: SGML Character Mapping Setting and Description columns.]
1. In the Admin Interface, navigate to the App Server you want to configure (for example,
Groups > Default > App Servers > MyAppServer).
2. Select the Output Options page from the left tree menu. The Output Options Configuration
page appears.
3. Locate the Output SGML Entity Characters drop list (it is towards the top).
4. Select the setting you want. The settings are described in the table in the previous section.
5. Click OK.
Codepoints that map to an SGML entity will now be serialized as the entity by default for requests
against this App Server.
• xdmp:quote
• xdmp:save
For details, see the MarkLogic XQuery and XSLT Function Reference for these functions.
To configure output encoding for an App Server using the Admin Interface, perform the following
steps:
1. In the Admin Interface, navigate to the App Server you want to configure (for example,
Groups > Default > App Servers > MyAppServer).
2. Select the Output Options page from the left tree menu. The Output Options Configuration
page appears.
3. Locate the Output Encoding drop list (it is towards the top).
4. Select the encoding you want. The settings correspond to different languages, as described
in the table in Collations and Character Sets By Language in the Encodings and Collations
chapter of the Search Developer’s Guide.
5. Click OK.
By default, queries against this App Server will now be output in the specified encoding.
• xdmp:get-response-encoding
• xdmp:set-response-encoding
Additionally, you can specify the output encoding for XML output in an XQuery program using
the <output-encoding> option to the following XML-serializing APIs:
• xdmp:quote
• xdmp:save
For details, see the MarkLogic XQuery and XSLT Function Reference for these functions.
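For example, the following quotes an element using a specific output encoding (a sketch; the element and the ISO-8859-1 encoding are purely illustrative):

xdmp:quote(<el>some text</el>,
  <options xmlns="xdmp:quote">
    <output-encoding>ISO-8859-1</output-encoding>
  </options>)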
This configuration page allows you to specify defaults that correspond to the XSLT output options
(http://www.w3.org/TR/xslt20#serialization) as well as some MarkLogic-specific options. For details
on these options, see xdmp:output in the XQuery and XSLT Reference Guide. For details on
configuring default options for an App Server, see Setting Output Options for an HTTP Server in the
Administrator’s Guide.
The REST Library enables you to create RESTful functions that are independent of the language
used in applications.
Note: The procedures in this chapter assume you performed the steps described in
“Preparing to Run the Examples” on page 227.
• Handling Errors
• Handling Redirects
• Defining Parameters
• Adding Conditions
When you have enabled RESTful access to MarkLogic Server resources, applications access
these resources by means of a URL that invokes an endpoint module on the target MarkLogic
Server host.
3. Rewrites the resource path to one understood internally by the server before invoking the
endpoint module.
If the request is valid, the endpoint module executes the requested operation and returns any data
to the application. Otherwise, the endpoint module returns an error message.
Note: The API signatures for the REST Library are documented in the MarkLogic
XQuery and XSLT Function Reference. For additional information on URL
rewriting, see “Setting Up URL Rewriting for an HTTP App Server” on page 185.
Navigate to the /<MarkLogic_Root>/bill directory and create the following files with the
described content.
(: in requests.xqy, the library module that declares the $requests:options node :)
module namespace
requests="http://marklogic.com/appservices/requests";

(: in rewriter.xqy, the rewriter evaluates :)
rest:rewrite($requests:options)

(: in endpoint.xqy, the endpoint ends by returning the requested document :)
return
fn:doc($play)
Enter the following URL, which uses the bill App Server created in “Preparing to Run the
Examples” on page 227:
http://localhost:8060/macbeth
The rest:rewrite function in the rewriter uses an options node to map the incoming request to an
endpoint. The options node includes a request element with a uri attribute that specifies a
regular expression and an endpoint attribute that specifies the endpoint module to invoke in the
event the URL of an incoming request matches the regular expression. In the event of a match, the
portion of the URL that matches (.+) is bound to the $1 variable. The uri-param element in the
request element assigns the value of the $1 variable, along with the .xml extension, to the play
parameter.
<options xmlns="http://marklogic.com/appservices/rest">
  <request uri="^/(.+)" endpoint="/endpoint.xqy">
    <uri-param name="play">$1.xml</uri-param>
  </request>
</options>
In the example rewriter module above, this options node is passed to the rest:rewrite function,
which outputs a URL that calls the endpoint module with the parameter play=macbeth.xml:
/endpoint.xqy?play=macbeth.xml
The rest:process-request function in the endpoint locates the first request element associated
with the endpoint.xqy module and uses it to identify the parameters defined by the rewriter. In
this example, there is only a single parameter, play, but for reasons described in “Extracting
Multiple Components from a URL” on page 210, when there are multiple request elements for
the same endpoint, the request element that extracts the greatest number of parameters from a
URL will be listed in the options node ahead of those that extract fewer parameters.
The rest:process-request function in the endpoint uses the request element to parse the
incoming request and return a map that contains all of the parameters as typed values. The
map:get function extracts each parameter from the map (there is only one in this example).
For example, a request for /path/to/resource?limit=test does not match a request node that
declares limit as a decimal parameter (because the value test is not a decimal). If there are no
other request nodes which match, then the request will return 404 (Not Found).
That may be surprising. Using additional request nodes to match more liberally is one way to
address this problem. However, as the number and complexity of the requests grows, it may
become less attractive. Instead, the rewriter can be instructed to match only on specific parts of
the request. In this way, error handling can be addressed by the called module.
The match criteria are specified in the call to rest:rewrite. For example:
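A sketch of such a call, passing the match criteria as a second argument (the exact criteria strings are assumptions based on the description that follows):

rest:rewrite($requests:options, ("uri", "method"))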
In this case, only the URI and HTTP method will be used for the purpose of matching.
The table below summarizes all of the possible elements and attributes in an options node. On the
left are the elements that have attributes and/or child elements. Attributes for an element are listed
in the Attributes column. Attributes are optional, unless designated as "(required)". Any first-level
elements of the element listed on the left are listed in the Child Elements column. The difference
between the user-params="allow" and user-params="allow-dups" attribute values is that allow
permits a single parameter for a given name, and allow-dups permits multiple parameters for a
given name.
Element      Attributes                Number of Children    For More Information
and          —                         0..n
or           —                         0..n
uri-param    name=string (required)                          "Defining Parameters" on page 218
param        name=string (required)                          "Defining Parameters" on page 218
             as=string
             pattern=<regex>
user-agent   —                         0..n
function     at=string (required)
(partial table; the remaining rows are not recoverable)
You can use the rest:check-options function to validate an options node. For example, to
validate the options node defined in the requests.xqy module:
rest:check-options($requests:options)
An empty sequence is returned if the options node is valid. Otherwise an error is returned.
You can also use the rest:check-request function to validate request elements in an options
node. For example, to validate all of the request elements in the options node defined in the
requests.xqy module described in “A Simple XQuery Rewriter and Endpoint” on page 203, you
would do the following:
rest:check-request($requests:options/rest:request)
An empty sequence is returned if the request elements are valid. Otherwise an error is returned.
Note: Before calling the rest:check-request function, you must set xdmp:mapping to
false to disable function mapping.
For example, suppose you want to expand the capability of the rewriter described in “A Simple XQuery
Rewriter and Endpoint” on page 203 and add the ability to use a URL like the one below to
display an individual act in a Shakespeare play:
http://localhost:8060/macbeth/act3
The options node in requests.xqy might look like the one below, which contains two request
elements. The rewriter employs a “first-match” rule, which means that it tries to match the
incoming URL to the request elements in the order they are listed and selects the first one
containing a regular expression that matches the URL. In the example below, if an act is specified
in the URL, the rewriter uses the first request element. If only a play is specified in the URL,
there is no match in the first request element, but there is in the second request element.
Note: The default parameter type is string. Non-string parameters must be explicitly
typed, as shown for the act parameter below. For more information on typing
parameters, see “Parameter Types” on page 219.
<options>
<request uri="^/(.+)/act(\d+)$" endpoint="/endpoint.xqy">
<uri-param name="play">$1.xml</uri-param>
<uri-param name="act" as="integer">$2</uri-param>
</request>
<request uri="^/(.+)/?$" endpoint="/endpoint.xqy">
<uri-param name="play">$1.xml</uri-param>
</request>
</options>
When an act is specified in the incoming URL, the first request element binds macbeth and 3 to
the variables $1 and $2, respectively, and then assigns them to the parameters named play and
act. The URL rewritten by the rest:rewrite function looks like:
/endpoint.xqy?play=macbeth.xml&act=3
The following is an example endpoint module that can be invoked by a rewriter that uses the
options node shown above. As described in “A Simple XQuery Rewriter and Endpoint” on
page 203, the rest:process-request function in the endpoint uses the request element to parse
the incoming request and return a map that contains all of the parameters as typed values. Each
parameter is then extracted from the map by means of a map:get function. If the URL that invokes
this endpoint does not include the act parameter, the value of the $num variable will be an empty
sequence.
Note: The first request element that calls the endpoint.xqy module is used in this
example because, based on the first-match rule, this element is the one that
supports both the play and act parameters.
(: excerpt from endpoint.xqy; the prolog and let clauses follow the sketch above :)
let $request := $requests:options/rest:request[@endpoint = "/endpoint.xqy"][1]
let $map  := rest:process-request($request)
let $play := map:get($map, "play")
let $num  := map:get($map, "act")
return
  if (fn:empty($num))
  then
    fn:doc($play)
  else
    fn:doc($play)/PLAY/ACT[$num]
To handle errors in an endpoint, you can wrap the rest:process-request call in a try/catch and
format any error with the rest:report-error function:
try {
let $params := rest:process-request($request)
return
...the non-error case...
} catch ($e) {
rest:report-error($e)
}
If the user agent making the request accepts text/html, a simple HTML-formatted response is
returned. Otherwise, it returns the raw error XML.
You can also use this function in an error handler to process all of the errors for a particular
application.
For example, suppose previous users accessed the macbeth play using the following URL pattern:
http://localhost:8060/Shakespeare/macbeth
You now want requests for the old URL to be redirected to the new URL:
http://localhost:8060/macbeth
The user can tell that this redirection happened because the URL in the browser address bar
changes from the old URL to the new URL, which can then be bookmarked by the user.
You can support such redirects by adding a redirect.xqy module like this one to your application:
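A minimal sketch of such a module, assuming the redirect target arrives in the __ml_redirect__ request field:

xquery version "1.0-ml";

(: redirect.xqy — issue an HTTP redirect to the rewritten URL :)
let $uri := xdmp:get-request-field("__ml_redirect__")
return
  xdmp:redirect-response($uri)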
In the options node in requests.xqy, add request elements that route the old URLs to the
redirect.xqy module. Your options node will then look like the following. Note that the request
elements for the redirect.xqy module are listed before those for the endpoint.xqy module,
because of the “first-match” rule described in “Extracting Multiple Components from a URL” on page 210.
<options xmlns="http://marklogic.com/appservices/rest">
<request uri="^/shakespeare/(.+)/(.+)" endpoint="/redirect.xqy">
<uri-param name="__ml_redirect__">/$1/$2</uri-param>
</request>
<request uri="^/shakespeare/(.+)" endpoint="/redirect.xqy">
<uri-param name="__ml_redirect__">/$1</uri-param>
</request>
<request uri="^/(.+)/act(\d+)" endpoint="/endpoint.xqy">
<uri-param name="play">$1.xml</uri-param>
<uri-param name="act" as="integer">$2</uri-param>
</request>
<request uri="^/(.+)$" endpoint="/endpoint.xqy">
<uri-param name="play">$1.xml</uri-param>
</request>
</options>
You can employ as many redirects as you want through the same redirect.xqy module by
changing the value of the __ml_redirect__ parameter.
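A request element is not limited to GET requests; a method attribute lists the HTTP verbs it matches. A sketch following the conventions used in this chapter:

<request uri="^/(.+)$" endpoint="/endpoint.xqy" method="GET POST">
  <uri-param name="play">$1.xml</uri-param>
</request>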
This request will match (and validate) if the request method is either an HTTP GET or an HTTP
POST.
The following topics describe use cases for mapping requests with verbs and simple endpoints
that service those requests:
Below is a simple options.xqy module that handles requests that specify an OPTIONS method. If
the request URL is /, the options.xqy module returns the entire options element, exposing the
complete set of endpoints. When the URL is not /, the module returns the request element that
matches the URL.
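A sketch of such a module (the URL test and the request selection are assumptions):

xquery version "1.0-ml";

import module namespace requests = "http://marklogic.com/appservices/requests"
  at "requests.xqy";

declare namespace rest = "http://marklogic.com/appservices/rest";

let $url := xdmp:get-request-url()
return
  if ($url = "/")
  then $requests:options
  else $requests:options/rest:request[fn:matches($url, @uri)][1]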
Add the following request element to requests.xqy to match any HTTP request that includes an
OPTIONS method.
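A sketch of the request element (a catch-all URI dispatching to the module above):

<request uri="^(.+)$" endpoint="/options.xqy" method="OPTIONS"/>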
Open Query Console and enter the following query, replacing name and password with your login
credentials:
xdmp:http-options("http://localhost:8011/",
<options xmlns="xdmp:http">
<authentication method="digest">
<username>name</username>
<password>password</password>
</authentication>
</options>)
Because the request URL is /, the entire options node will be returned. To see the results when
another URL is used, enter the following query in Query Console:
xdmp:http-options("http://localhost:8011/shakespeare/macbeth",
<options xmlns="xdmp:http">
<authentication method="digest">
<username>name</username>
<password>password</password>
</authentication>
</options>)
Rather than the entire options node, the request element that matches the given URL is returned:
You can use it by adding the following request to the end of your options:
If some earlier request directly supports OPTIONS then it will have priority for that resource.
Below is a simple post.xqy module that accepts requests that include the POST method and inserts
the body of the request into the database at the URL specified by the request.
return
(xdmp:document-insert($posturi, $body),
concat("Successfully uploaded: ", $posturi, " "))
Add the following request element to requests.xqy. If the request URL is /post/filename, the
rewriter will issue an HTTP request to the post.xqy module that includes the POST method.
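A sketch of that request element (the posturi parameter name matches the module sketch above):

<request uri="^/post/(.+)$" endpoint="/post.xqy" method="POST">
  <uri-param name="posturi">/$1</uri-param>
</request>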
To test the post.xqy endpoint, open Query Console and enter the following query, replacing
‘name’ and ‘password’ with your MarkLogic Server login credentials:
let $document := <mydata>some sample content</mydata>  (: any test document :)
return
xdmp:http-post("http://localhost:8011/post/mydoc.xml",
<options xmlns="xdmp:http">
<authentication method="digest">
<username>name</username>
<password>password</password>
</authentication>
<data>{$document}</data>
<headers>
<content-type>text/xml</content-type>
</headers>
</options>)
Click the Query Console Explore button and locate the /mydoc.xml document in the Documents
database.
• Parameter Types
• Required Parameters
• Repeatable Parameters
• Matching Regular Expressions in Parameters with the match and pattern Attributes
You can define a parameter type using any of the types supported by XQuery, as described in the
specification, XML Schema Part 2: Datatypes Second Edition:
http://www.w3.org/TR/xmlschema-2/
You can support parameters specified at the end of a URL in the conventional query-string form:
http://host:port/url-path?param=value
For example, you want the endpoint.xqy module to support a "scene" parameter, so you can enter
the following URL to return Macbeth, Act 4, Scene 2:
http://localhost:8011/macbeth/act4?scene=2
To support the scene parameter, modify the first request element for the endpoint module as
shown below. The match attribute in the param element defines a subexpression, so the parameter
value is assigned to the $1 variable, which is separate from the $1 variable used by the uri-param
element.
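A sketch of the modified request element (the scene parameter's match expression is an assumption consistent with the description above):

<request uri="^/(.+)/act(\d+)$" endpoint="/endpoint.xqy">
  <uri-param name="play">$1.xml</uri-param>
  <uri-param name="act" as="integer">$2</uri-param>
  <param name="scene" as="integer" match="(\d+)">$1</param>
</request>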
Rewrite the endpoint.xqy module as follows to add support for the scene parameter:
(: excerpt from endpoint.xqy; the prolog and let clauses follow the earlier sketch :)
let $request := $requests:options/rest:request[@endpoint = "/endpoint.xqy"][1]
let $map   := rest:process-request($request)
let $play  := map:get($map, "play")
let $num   := map:get($map, "act")
let $scene := map:get($map, "scene")
return
  if (fn:empty($num))
  then
    fn:doc($play)
  else if (fn:empty($scene))
  then
    fn:doc($play)/PLAY/ACT[$num]
  else
    fn:doc($play)/PLAY/ACT[$num]/SCENE[$scene]
Now the rewriter and the endpoint will both recognize a scene parameter. You can define any
number of parameters in a request element.
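For example, a repeatable parameter declaration might look like this sketch (using the repeatable attribute):

<param name="css" repeatable="true"/>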
In the rewriter, this would allow any number of css parameters. In the endpoint, there would be a
single css key in the parameters map but its value would be a list.
For example, jQuery changes the key names if the value of a key is an array. So, if you ask
jQuery to invoke a call with { "a": "b", "c": [ "d", "e" ] }, you get the following URL:
http://whatever/endpoint?a=b&c[]=d&c[]=e
You can use the alias attribute as shown below so that the map you get back from the
rest:process-request function will have a key value of "c" regardless of whether the incoming
URL uses c= or c[]= in the parameters:
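A sketch of such a declaration (the alias attribute value is an assumption):

<param name="c" alias="c[]" repeatable="true"/>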
• match Attribute
• pattern Attribute
16.11.8.1 match Attribute
The match attribute in the param element defines a subexpression with which to test the value of
the parameter, so the captured group in the regular expression is assigned to the $1 variable.
You can use the match attribute to translate parameters. For example, you want to translate a
parameter that contains an internet media type and you want to extract part of that value using the
match attribute. The following will translate format=application/xslt+xml to format=xslt.
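A sketch of such a param element (the regular expression is an assumption matching the described translation):

<param name="format" match="^application/(\w+)\+xml$">$1</param>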
If you combine matching in parameters with validation, make sure that you validate against the
transformed value; a parameter whose validation values only match the untransformed value will never match.
16.11.8.2 pattern Attribute
The param element supports a pattern attribute, which uses the specified regular expression to
match the name of the parameter. This allows you to specify a regular expression for matching
parameter names, for example:
pattern='xmlns:.+'
pattern='val[0-9]+'
Exactly one of name or pattern must be specified. It is an error if the name of a parameter passed
to the endpoint matches more than one pattern.
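Conditions can be attached to a request element. For example, the POST request element from "Handling POST Requests" on page 217 might carry an auth condition like this sketch (the privilege URI is an assumption):

<request uri="^/post/(.+)$" endpoint="/post.xqy" method="POST">
  <uri-param name="posturi">/$1</uri-param>
  <auth>
    <privilege>http://marklogic.com/xdmp/privileges/infostudio</privilege>
  </auth>
</request>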
With this request, only users with the specified execute privilege can POST to that URL. If a user
without that privilege attempts to post, this request won't match and control will fall through to
the next request. In this way, you can provide fallbacks if you wish.
In a rewriter, failing to match a condition causes the request not to match. In an endpoint, failing
to match a condition raises an error.
• Authentication Condition
• Function Condition
• And Condition
• Or Condition
• Content-Type Condition
<auth>
<privilege>privilege-uri</privilege>
<kind>kind</kind>
</auth>
For example, the request element described for POST requests in “Handling POST Requests” on
page 217 allows any user to load documents into the database. To restrict this POST capability to
users with infostudio execute privilege, you can add the following auth condition to the request
element:
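A sketch of the condition (the privilege URI is an assumption):

<auth>
  <privilege>http://marklogic.com/xdmp/privileges/infostudio</privilege>
</auth>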
The privilege can be any specified execute or URL privilege. If unspecified, kind defaults to
execute.
For example, to match only user agent requests that can accept JSON responses, specify the
following accept condition in the request:
<accept>application/json</accept>
Similarly, to match only requests from a particular user agent, specify a user-agent condition in the request:
<user-agent>ELinks</user-agent>
<function ns="http://example.com/module"
apply="my-function"
at="utils.xqy"/>
A request that specifies the function shown above will only match requests for which the
specified function returns true. The function will be passed the URL string and the function
condition element.
<and>
...conditions...
</and>
If more than one condition is present at the top level in a request, they are treated as if they occurred
in an and.
For example, the following condition matches only user agent requests that can accept responses
in HTML from an ELinks browser:
<and>
<accept>text/html</accept>
<user-agent>ELinks</user-agent>
</and>
Note: There is no guarantee that conditions will be evaluated in any particular order or
that all conditions will be evaluated.
16.12.6 Or Condition
An or condition must contain only conditions. It returns true if and only if at least one of its child
conditions return true.
<or>
...conditions...
</or>
For example, the following condition matches only user agent requests that can accept responses
in HTML or plain text:
<or>
<accept>text/html</accept>
<accept>text/plain</accept>
</or>
Note: There is no guarantee that conditions will be evaluated in any particular order or
that all conditions will be evaluated.
xdmp:set-response-content-type("text/plain"),
let $zip-file :=
xdmp:document-get("http://www.ibiblio.org/bosak/xml/eg/shaks200.zip")
Note: The XML source for the Shakespeare plays is subject to the copyright stated in the
shaksper.htm file contained in the zip file.
1. In the Admin Interface, click the Groups icon in the left tree menu.
2. Click the group in which you want to define the HTTP server (for example, Default).
3. Click the App Servers icon on the left tree menu and create a new HTTP App Server.
4. Name the HTTP App Server bill, assign it port 8060, specify bill as the root directory,
and Documents as the database.
5. Create a new directory under the MarkLogic root directory, named bill to hold the
modules you will create as part of the examples in this chapter.
The Declarative XML Rewriter serves the same purpose as the Interpretive XQuery Rewriter
described in “Creating an Interpretive XQuery Rewriter to Support REST Web Services” on
page 201. The XML rewriter has many more options for affecting the request environment than
the XQuery rewriter. However, because it is designed for efficiency, the XML rewriter doesn't have
the expressive power of the XQuery rewriter or access to the system function calls. Instead, a
select set of match and evaluation rules is available to support a large set of common cases.
• Match Rules
• System Variables
• Evaluation Rules
• Termination Rules
The XML rewriter enables XCC clients to communicate on the same port as REST and HTTP
clients. You can also execute requests with the same features as XCC but without using the XCC
library.
For example, the XML rewriter for the App-Services server at port 8000 is located in:
<marklogic-dir>/Modules/MarkLogic/rest-api/8000-rewriter.xml
• Input Context
• Output Context
The properties listed in the table below are available as the context of a match rule. Where
"regex" is indicated, a match is done by a regular expression. Otherwise matches are “equals” of
one or more components of the property.

Property    Match
path        regex
param       name [value]
user        name or id
The input context properties, external path and external query, can be modified in the output
context. There are other properties that can be added to the output context, such as to direct the
request to a particular database or to set up a particular transaction, as shown in the table below.
Property Description
database Database
transaction Transaction ID
error format Specifies the error format for server generated errors
As is the case with the regular expression rules for the fn:replace XQuery function, only the first
(non-overlapping) match in the string is processed and the rest are ignored.
For example, given the path shown below, you may want to both match the general form and at the
same time extract components in one expression.
/admin/v2/meters/databases/12345/total/file.xqy
The following path match rule regex matches the above path and also extracts the desired
components ("match groups") and sets them into the local context as numbered variables, as
shown in the table below.
<match-path matches="/admin/v(.)/([a-z]+)/([a-z]+)/([0-9]+)/([a-z]+)/.+\.xqy">
Variable Value
$0 /admin/v2/meters/databases/12345/total/file.xqy
$1 2
$2 meters
$3 databases
$4 12345
$5 total
The extracted values could then be used to construct output values such as additional query
parameters.
Note: No anchors (“^ .....$”) are used in this example, so the expression could also match
a string, such as the one below and provide the same results.
somestuff/admin/v2/meters/databases/12345/total/file.xqy/morestuff
Wherever a rule matches a regex (indicated by the matches attribute), a flags attribute is
allowed. Only the "i" flag (case-insensitive) is currently supported.
1. Each match rule is tested against the request context.
2. If it is a match, then the rule may produce zero or more "Eval Expressions" (local
variables $*, $0 ... $n).
3. If it is a match, then the evaluator descends into the match rule; otherwise the rule is
considered "not matched" and the evaluator continues on to the next sibling.
4. If this is the last sibling, then the evaluator "ascends" to the parent.
Descending: When descending a match rule on match the following steps occur:
1. If "scoped" (attribute scoped=true) the current context (all in-scope user-defined variables
and all currently active modification requests) is pushed.
2. Any Eval Expressions from the parent are cleared ($*,$0..$n) and replaced with the Eval
Expressions produced by the matching node.
Ascending: When Ascending (after evaluating the last of the siblings) the evaluator Ascends to
the parent node. The following steps occur:
1. If the parent was scoped (attribute scoped=true) then the current context is popped and
replaced by the context of the parent node. Otherwise the context is left unchanged.
2. Eval Expressions ($*, $0...) are popped and replaced with the parent's in-scope eval
expressions.
Note: This is unaffected by the scoped attribute; Eval Expressions are always scoped to
only the immediate children of the match node that produced them.
The table below summarizes the match rules. A detailed description of each rule follows.

Element                  Description
rewriter                 Root element of the rewriter rule tree
match-accept             Matches on the Accept HTTP header
match-content-type       Matches on the Content-Type HTTP header
match-cookie             Matches on a cookie by name
match-execute-privilege  Matches on the user's execute privileges
match-header             Matches on an HTTP header
match-method             Matches on the HTTP method
match-path               Matches on the request path
match-query-param        Matches on a query parameter
match-role               Matches on the user's assigned roles
match-string             Matches a string expression against a regular expression
match-user               Matches on a user name or default user
17.5.1 rewriter
Root element of the rewriter rule tree.
Attributes: none
Example:
A simple rewriter that dispatches anything under /home/ to the module gohome.xqy and otherwise
passes the request through:
<rewriter xmlns="http://marklogic.com/xdmp/rewriter">
<match-path prefix="/home/">
<dispatch>gohome.xqy</dispatch>
</match-path>
</rewriter>
17.5.2 match-accept
Matches on the Accept HTTP Header.
Attributes:
@any-of   list of strings   (required)   Matches if the Accept header contains any of the
media types specified.
Note: The match is performed as a case sensitive match against the literal strings of the
type/subtype. No evaluation of expanding subtype, media ranges or quality factors
are performed.
Example:
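A sketch of a rule that dispatches JSON-capable clients (the media type and module path are illustrative):

<match-accept any-of="application/json">
  <dispatch>/json-endpoint.xqy</dispatch>
</match-accept>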
17.5.3 match-content-type
Matches on the Content-Type HTTP Header.
Attributes:
@any-of   list of strings   (required)   Matches if the Content-Type header is any of the
media types specified.
Note: The match is performed as a case sensitive match against the literal strings of the
type/subtype. No evaluation of expanding subtype, media ranges or quality factors
are performed.
Example:
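A sketch (the media types and module path are illustrative):

<match-content-type any-of="application/xml text/xml">
  <dispatch>/xml-handler.xqy</dispatch>
</match-content-type>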
17.5.4 match-cookie
Matches on a cookie by name. Cookies are an HTTP Header with a well-known structured
format.
Attributes
@name string yes Matches if the cookie of the specified name exists.
Example:
<match-cookie name="SESSIONID">
<set-var name="session">$0</set-var>
....
</match-cookie>
17.5.5 match-execute-privilege
Match on the user's execute privileges.
Attributes:
@any-of   list of URIs   no*   Matches if the user has at least one of the specified
execute privileges.
@all-of   list of URIs   no*   Matches if the user has all of the specified execute
privileges.
@scoped   boolean (default false)   no   Indicates this rule creates a new "scope" context for
its children.
Note: The execute privilege must be the URI, not the name. See the example.
$0   string   The matching privileges. For more than one match, the value is converted
to a space-delimited string.
Example:
<match-execute-privilege
any-of="http://marklogic.com/xdmp/privileges/admin-module-read
http://marklogic.com/xdmp/privileges/admin-ui">
<dispatch/>
</match-execute-privilege>
Note: In the XML format you can use newlines in the attribute
17.5.6 match-header
Match on an HTTP header.
Attributes:
@name       string   (required)   The name of the header to match.
@matches    regex string   no   Matches if the header value matches the regular expression.
@value      string   no   Matches if the header value equals the string.
@repeated   boolean (default false)   no   Indicates whether multiple headers of the same name are allowed.
If there is no @matches or @value attribute, then $0 is the entire text content of the header of that
name. If more than one header matches, then @repeated indicates whether this is an error or allowed. If
allowed (true), then $* is set to each individual value and $0 to the space-delimited concatenation
of all headers. If false (the default), multiple matches generate an error.
If @matches is specified then, as with match-path and match-string, $0 .. $N are the results of the
regex match.
Example:
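A sketch that captures a header value into a variable (the header name is illustrative):

<match-header name="User-Agent" matches="ELinks">
  <set-var name="browser">$0</set-var>
  ....
</match-header>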
17.5.7 match-method
Match on the HTTP Method
Attributes
@any-of   list of strings   (required)   Matches if the HTTP method is one of the
values in the list. Method names are case-sensitive matches.
The value of the HTTP method is a system global variable, $_method, as described in “System
Variables” on page 252.
Example:
Dispatch if the method is either GET, HEAD, or OPTIONS AND the user has the execute
privilege http://marklogic.com/xdmp/privileges/manage.
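A sketch of these combined rules:

<match-method any-of="GET HEAD OPTIONS">
  <match-execute-privilege
    any-of="http://marklogic.com/xdmp/privileges/manage">
    <dispatch/>
  </match-execute-privilege>
</match-method>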
17.5.8 match-path
Match on the request path. The "path" refers to the path component of the request URI as per
RFC3986 [https://tools.ietf.org/html/rfc3986]. Simply, this is the part of the URL after the
scheme and authority section, starting with a "/" (even if none was given), up to but not including
the query parameter separator "?", and not including any fragment ("#").
The path is NOT URL-decoded for the purposes of match-path, but query parameter values are
decoded (as per the HTTP specifications). This is intentional, so that path components can contain
what would otherwise be considered path component separators. The HTTP specifications
make the intent clear that characters in a path are only supposed to be URL-encoded
when they are intended NOT to be considered as path separators (or reserved URL
characters).
http://localhost:8040//root%2Ftestme.xqy?name=%2Ftest
GET /root%2Ftestme.xqy?name=%2Ftest
PATH: /root%2Ftestme.xqy
http://localhost:8040//root/testme.xqy?name=%2Ftest
PATH: /root/testme.xqy
For example, <match-path matches="/root([^/].*)"> would match the first URL but not the
second, even though they would decode to the same path.
When match results are placed into $0..$n, the default behavior is to decode the results, so that
in the above case $1 would be "/testme.xqy". This keeps them consistent with other values,
which are also in decoded form; in particular, when a value is set as a query parameter it is then
URL-encoded as part of the rewriting. If the value were already in encoded form, it would be
encoded twice, resulting in the wrong value.
In the (rare) case where it is not desired for match-path to decode the results after a match, the
attribute @uri-decode can be specified and set to false.
Attributes:
@matches   regex string   no*   Matches if the path matches the regular expression.
@prefix    string   no*   Matches if the path starts with the prefix
(literal string).
@any-of    list of strings   no*   Matches if the path is one of the list of
exact matches.
If none of @matches, @prefix, or @any-of is specified, then match-path matches all paths.
To match an empty path, use matches="^$" (not matches="", which will match anything).
$0 string The entire portion of the path that matched. For matches this is the full
matching text.
Example:
<match-path
  matches="^/manage/(v2|LATEST)/meters/labels/([^/?&]+)/?$">
<add-query-param name="version">$1</add-query-param>
<add-query-param name="label-id">$2</add-query-param>
<set-path>/history/endpoints/labels-item.xqy</set-path>
...
</match-path>
17.5.9 match-query-param
Match on a query parameter.
Query parameters can be matched exactly (by name and value equality) or partially (by only
specifying a name match). For exact matches, only one name/value pair is matched. For partial
matches, it is possible to match multiple name/value pairs with the same name when the query
string has multiple parameters with the same name. The repeated attribute specifies whether this is
an error; the default (false) indicates that repeated matching parameters are an error.
Attributes:
@name       string   (required)   Matches if a query parameter exists with the name.
@value      string   no   Matches if the query parameter also has exactly this value.
@repeated   boolean (default false)   no   Indicates whether multiple matching parameters with the same name are allowed.
@scoped     boolean (default false)   no   Indicates this rule creates a new "scope" context for
its children.
Note: A @value attribute that is present but empty is valid, and matches the presence
of a query parameter with an empty value.
$*   list of strings   A list of all the matched values, as in $0 but as a List of Strings.
Example:
If the query param user has the value "admin" AND the user has the execute privilege
http://marklogic.com/xdmp/privileges/manage, then dispatch to /admin.xqy.
If the query parameter contains a transaction then set the transaction ID.
<match-query-param name="transaction">
<set-transaction>$0</set-transaction>
...
</match-query-param>
See match-string for an example of multiple query parameters with the same name.
17.5.10 match-role
Match on the user's assigned roles.
Attributes:
@any-of   list of role names (strings)   no*   Matches if the user has at least
one of the specified roles.
@all-of   list of role names (strings)   no*   Matches if the user has all of the
specified roles.
$0 is set to the matching roles when @any-of matches; otherwise it is unset (if @all-of matched,
the matching roles are known: they are the contents of @all-of).
Example:
Matches if the user has both of the roles infostudio-user AND rest-user
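A sketch:

<match-role all-of="infostudio-user rest-user">
  ....
</match-role>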
17.5.11 match-string
Matches a string expression against a regular expression. If the value matches then the rule
succeeds and its children are descended.
This rule is intended to fill in gaps where the current rules are not sufficient, and where it would
be overly complex to add regular expression matching to every rule. Avoid using this rule
unless it is absolutely necessary.
Attributes:
@value     string   (required)   The value to match against. May be a literal string
or a single variable expression.
@matches   regex string   (required)   Matches if the value matches the regular
expression.
Repeated matches: Regular expressions can match multiple non-overlapping portions of a string,
if the regex is not anchored to the begin and end of the string.
17.5.12 match-user
Match on a user name or default user.
Attributes:
@name           string    no*   Matches if the user name equals the value.
@default-user   boolean   no*   Matches if the request is running as the App Server default user.
Child Context modifications: None. One of name or default-user is available as a system
variable; see System Variables.
Examples:
Matches the default user (note that there is no need to specify the "name" attribute in this case):
<match-user default-user="true">
...
</match-user>
System variables substitute for the mechanism used by the XQuery rewriter, which can
get this information (and much more) by calling XQuery APIs. The Declarative
rewriter does not expose any API calls, so in cases where the values may be needed in outputs they
are made available as global variables. There is some overlap between these variables and the match
rules, to simplify the case where you simply need to set a value but don't need to match on it. For
example, the set-database rule may want to set the database to the current modules database (to
allow GET and PUT operations on module files). By supplying a system variable for the
modules database ($_modules-database), there is no need for a matching rule on
modules-database for the sole purpose of extracting the value.
System variables use a reserved prefix "_" to avoid accidental conflicts with current or future code if new
variables are added. Overwriting a system variable only affects the current scope and does not
produce changes to the system.
The period (".") is a convention that suggests the idea of property access but is really just part of
the variable name. Where variables start with the same prefix but have ".<name>" as a suffix this
is a convention that the name without the dot evaluates to the most useful value and the name with
the dot specifics a specific property or type for that variable. For example $_database is the
database Name, $_database.id is the database ID.
As noted in Variables and Types the actual type of all variable is a String (or List of String), the
Type column in the table below is to indicate what range of values is possible for that variable.
For example a database id originates as an unsigned long so can be safely used in any expression
that expects a number.
$_cookie.<name>        string            The value of the cookie <name>. Only the text value of
                                         the cookie is returned, not the extra metadata (path,
                                         domain, expires, etc.). If the cookie does not exist,
                                         evaluates to "". Cookie names are matched and compared
                                         case-insensitively.
$_path                 string            The HTTP request path, not including the query string.
$_query-param.<name>   list of strings   The query parameters matching the name, as a list of strings.
$_request-url          string            The original request URI, including the path and
                                         query parameters.
$_user[.name]          string            The user name.
<set-database>$_modules-database</set-database>
<set-transaction>$_cookie.TRANSACTION_ID</set-transaction>
There are two types of eval rules: set rules and assign rules.
Set rules create a rewriter command (a request to change the output context in some
way). Assign rules set locally scoped variables but do not produce any rewriter
commands.
Variables and rewriter commands are placed into the current scope.
Element                Description
add-query-param        Adds (appends) a query parameter to the query parameters
set-database           Sets the database
set-error-format       Sets the error format for system-generated errors
set-error-handler      Sets the error handler
set-eval               Sets the evaluation mode (eval or direct)
set-modules-database   Sets the modules database
set-modules-root       Sets the modules root path
set-path               Sets the URI path for the request
set-query-param        Sets (overwrites) a query parameter
set-transaction        Sets the current transaction
set-transaction-mode   Sets the transaction mode for the current transaction
set-var                Sets a variable in the local scope
trace                  Logs a trace message
17.7.1 add-query-param
Adds (appends) a query parameter (name/value) to the query parameters.
Attributes:
@name   string   (required)   The name of the query parameter to add.
Children: An expression that evaluates to the value of the query parameter to append.
An empty element or list will still append a query parameter with an empty value (equivalent to a
URL like http://company.com?a= )
If the expression is a List, then the query parameter is duplicated once for each value in the list.
Example:
<match-path
matches="^/manage/(v2|LATEST)/meters/labels/([^/?&]+)/?$">
<add-query-param name="version">$1</add-query-param>
<add-query-param name="label-id">$2</add-query-param>
</match-path>
17.7.2 set-database
Sets the database.
This changes the context database for the remainder of the request.
Attributes:
@checked   boolean [true,1 | false,0] (default false)   no   If true, the permissions of the user
are checked for the eval-in privilege to verify the change is allowed.
Children: An expression that evaluates to the database name or ID.
It is an immediate error to set the value using an expression evaluating to a list of values.
See Database (Name or ID) for a description of how database references are interpreted.
The @checked flag is interpreted during the rewriter modification result phase; by implication,
this means that only the last set-database that successfully evaluated before a dispatch is used.
If the @checked flag is true AND the database is different from the App Server defined database,
then the user must have the eval-in privilege.
Examples:
<set-database>SpecialDocuments</set-database>
<set-database>$_modules-database</set-database>
17.7.3 set-error-format
Sets the error format used for all system-generated errors. This is the format (content-type) of the
body of error messages for a non-successful HTTP response.
This overwrites the setting from the application server configuration and takes effect immediately
after validation of the rewriter rules has succeeded.
Attributes: None
Children: An expression evaluating to one of the following formats:
• html
• json
• xml
• compatible
The "compatible" format indicates for the system to match as closely as possible the format used
in prior releases for the type of request and error. For example, if dispatch indicates "xdbc" then
"compatible" will produce errors in the HTML format, which is compatible with XCC client
library.
It is an immediate error to set the value using an expression evaluating to a list of values.
Note: This setting does not affect any user defined error handler, which is free to output
any format and body.
Example:
<set-error-format>json</set-error-format>
17.7.4 set-error-handler
Sets the error handler
Attributes: None
Example:
<set-error-handler>/myerror-handler.xqy</set-error-handler>
If an error occurs during the rewriting process, then the error handler associated with the
application server is used for error handling. After a successful rewrite, if set-error-handler
specified a new error handler, then that handler will be used for handling errors.
The modules database and modules root used to locate the error handler are the modules database
and root in effect at the time of the error.
Setting the error handler to the empty string will disable the use of any user-defined error handler
for the remainder of the request.
It is an immediate error to set the value using an expression evaluating to a list of values.
For example, if in addition the set-modules-database rule was used, then the new error handler
will be searched for in the rewritten modules database (and the root set with set-modules-root);
otherwise the error handler will be searched for in the modules database configured in the App
Server.
17.7.5 set-eval
Sets the Evaluation mode (eval or direct).
The Evaluation mode is used in the request handler to determine if a path is to be evaluated
(XQuery or JavaScript) or to be directly accessed (PUT/GET).
In order to be able to read and write to evaluable documents (in the modules database), the
evaluation mode needs to be set to direct and the Database needs to be set to a Modules database.
Attributes: None
Example:
Forces a direct file access instead of an evaluation if the filename ends in .xqy
<match-path matches=".*\.xqy$">
  <set-eval>direct</set-eval>
</match-path>
17.7.6 set-modules-database
Sets the Modules database.
Attributes
@checked   boolean [true,1 | false,0] (default false)   no   If true, the permissions of the user
are checked for the eval-in privilege to verify the change is allowed.
Children:
See Database (Name or ID) for a description of how Database references are interpreted.
It is an immediate error to set the value using an expression evaluating to a list of values.
The @checked flag is interpreted during the rewriter modification result phase; by implication,
this means that only the last set-modules-database that successfully evaluated before a dispatch
is used.
If the @checked flag is true AND the modules database is different from the App Server defined
modules database, then the user must have the eval-in privilege.
Example:
<match-user name="admin">
<set-modules-database>SpecialModules</set-modules-database>
...
</match-user>
17.7.7 set-modules-root
Sets the modules root path
Attributes: None
It is an immediate error to set the value using an expression evaluating to a list of values.
Example:
<set-modules-root>/myapp</set-modules-root>
17.7.8 set-path
Sets the URI path for the request.
Attributes: None
Children:
It is an immediate error to set the value using an expression evaluating to a list of values.
Example:
If the user is admin, set the path to /admin.xqy. Then, if the method is either GET, HEAD, or
OPTIONS, dispatch; otherwise, if the method is POST, set a query parameter "verified" to true
and dispatch.
<match-user name="admin">
<set-path>/admin.xqy</set-path>
<match-method any-of="GET HEAD OPTIONS">
<dispatch/>
</match-method>
<match-method any-of="POST">
<set-query-param name="verified">true</set-query-param>
<dispatch/>
</match-method>
</match-user>
17.7.9 set-query-param
Sets (overwrites) a query parameter. If the query parameter previously existed, all of its values are
replaced with the new value(s).
Attributes:
@name   string   (required)   The name of the query parameter to set.
Children:
An expression which evaluates to the value of the query parameter to be set. If the expression is a
List then the query parameter is duplicated once for each value in the list.
An empty element, empty string value or empty list value will still set a query parameter with an
empty value (equivalent to a URL like http://company.com?a= )
Examples:
If the user is admin then set the query parameter user to be admin, overwriting any previous
values it may have had.
<match-user name="admin">
<set-query-param name="user">admin</set-query-param>
</match-user>
Copy all the values from the query param "ids" to a new query parameter "app-ids" replacing any
values it may have had.
<match-query-param name="ids">
<set-query-param name="app-ids">$*</set-query-param>
</match-query-param>
The following rules copy all query parameters (0 or more) named "special" to the result without
passing through other parameters.
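A sketch (the empty dispatch keeps the current path):

<match-query-param name="special">
  <set-query-param name="special">$*</set-query-param>
  <dispatch include-request-query-params="false"/>
</match-query-param>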
17.7.10 set-transaction
Sets the current transaction. If specified, set-transaction-mode must also be set.
Attributes: None
Example:
<set-transaction>$_cookie.TRANSACTION_ID</set-transaction>
Note: If the expression for set-transaction is empty, such as when the cookie doesn't
exist, then the transaction is unchanged.
It is an immediate error (during rewriter parsing) to set the value using an expression evaluating to
a list of values or to 0.
17.7.11 set-transaction-mode
Sets the transaction mode for the current transaction. If specified, set-transaction must also be set.
Attributes: None
Children: An expression evaluating to a transaction mode specified by exactly one of the strings
"auto", "query", or "update".
Example:
Set the transaction mode to the value of the query param "trmode" if it exists.
<match-query-param name="trmode">
<set-transaction-mode>$0</set-transaction-mode>
</match-query-param>
Note: It is an error if the value for transaction mode is not one of "auto," "query," or
"update." It is also an error to set the value using an expression evaluating to a list
of values.
17.7.12 set-var
Sets a variable in the local scope.
This is an assign rule: it does not produce rewriter commands; instead, it sets a variable.
The assignment only affects the current scope (which is the list of variables pushed by the parent).
The variable is visible to following siblings, as well as to children of following siblings.
User-defined variable names must start with a letter, followed by zero or more letters,
numbers, underscores, or dashes.
This implies that set-var cannot set system-defined variables, property components, or
expression variables.
Attributes
@name string yes Name of the variable to set (without the "$")
Children:
Examples:
Sets the variable $dir1 to the first component of the matching path, and $dir2 to the second
component.
<match-path matches="^/([a-z]+)/([a-z]+)/.*">
  <set-var name="dir1">$1</set-var>
  <set-var name="dir2">$2</set-var>
  ...
</match-path>
If the Modules Database name contains the string "User" then set the variable usedb to the full
name of the Modules DB.
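A sketch, testing the $_modules-database system variable with match-string:

<match-string value="$_modules-database" matches="User">
  <set-var name="usedb">$_modules-database</set-var>
</match-string>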
Matches all of the values of a query parameter named "ids" if any of them is fully numeric.
<match-query-param name="ids">
<match-string value="$*" matches="[0-9]+">
....
</match-string>
</match-query-param>
17.7.13 trace
Logs a trace message.
The trace rule can be used anywhere an eval rule is allowed. It logs a trace message similar to
fn:trace.
The event attribute specifies the Trace Event ID. The body of the trace element is the message to
log.
Attributes:
@event   string   (required)   The Trace Event ID under which the message is logged.
Example:
<match-path prefix="/special">
<trace event="AppEvent1">
The following trace contains the matched path.
</trace>
<trace event="AppEvent2">
$0
</trace>
</match-path>
Element    Description
dispatch   Stop evaluation and dispatch with all rewrite commands
error      Terminate evaluation with an error
17.8.1 dispatch
Stop evaluation and dispatch with all rewrite commands.
The dispatch element is required as the last child of any match rule which contains no match
rules.
Attributes:
@include-request-query-params   boolean (default true)   no   If set to false, the initial
request parameters are not included; only the parameters set or added by any set-query-param
and add-query-param rules are included in the result.
@xdbc   boolean (default false)   no   If true, the built-in XDBC handlers are used for the
request. If XDBC support is enabled, the final path (possibly rewritten) MUST BE one of the
paths supported by the XDBC built-in handlers.
Child Content: Empty or an expression. If the child content is not empty or blank, it is evaluated
and used for the rewrite path.
Examples:
<set-path>/a/path.xqy</set-path>
<dispatch/>
is equivalent to:
<dispatch>/a/path.xqy</dispatch>
Given a request that already carries the query parameter b=b, the following produces only a=a1
in the result:
<set-query-param name="a">a1</set-query-param>
<dispatch include-request-query-params="false">/run.xqy</dispatch>
while the following passes the original parameter through, producing both a=a1 and b=b:
<set-query-param name="a">a1</set-query-param>
<dispatch>run.xqy</dispatch>
17.8.2 error
Terminate evaluation with an error.
The error rule terminates the evaluation of the entire rewriter and returns an error to the request
handler. This error is then handled by the request handler, passing it to the error handler if there is
one.
Attributes:
Child Content:
None
Child Elements:
None
Example:
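A sketch of an error rule guarding a reserved path (the code attribute name and its value are assumptions):

<match-path prefix="/private/">
  <error code="SEC-PRIV"/>
</match-path>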
The following complete examples combine several rules. This rewriter dispatches anything under
/dir to the remainder of the path:
<rewriter xmlns="http://marklogic.com/xdmp/rewriter">
<match-path matches="^/dir(/.+)">
<dispatch>$1</dispatch>
</match-path>
</rewriter>
For GET and PUT requests only: if a query parameter named path is exactly /admin, then
redirect to /private/admin.xqy; otherwise use the value of the parameter for the redirect.
<rewriter xmlns="http://marklogic.com/xdmp/rewriter">
<match-method any-of="GET PUT">
<!-- match by name/value -->
<match-query-param name="path" value="/admin">
<dispatch>/private/admin.xqy</dispatch>
</match-query-param>
<!-- match by name use value -->
<match-query-param name="path">
<dispatch>$0</dispatch>
</match-query-param>
</match-method>
</rewriter>
If a parameter named data is present in the URI then set the database to UserData. If a query
parameter module is present then set the modules database to UserModule. If the path starts with
/users/ and ends with /version<versionID> then extract the next path component ($1), append it
to /app and add a query parameter version with the versionID.
<rewriter xmlns="http://marklogic.com/xdmp/rewriter">
<match-query-param name="data">
<set-database>UserData</set-database>
</match-query-param>
<match-query-param name="module">
<set-modules-database>UserModule</set-modules-database>
</match-query-param>
<match-path matches="^/users/([^/]+)/version(.+)$">
<set-path>/app/$1</set-path>
<add-query-param name="version">$2</add-query-param>
</match-path>
<dispatch/>
</rewriter>
Match users by name and default user and set or overwrite a query parameter.
<rewriter xmlns="http://marklogic.com/xdmp/rewriter">
<set-query-param name="default">
default-user no match
</set-query-param>
<match-user name="admin">
<add-query-param name="user">admin matched</add-query-param>
</match-user>
<match-user name="infostudio-admin">
<add-query-param name="user">
infostudio-admin matched
</add-query-param>
</match-user>
<match-user default-user="true">
<set-query-param name="default">
default-user matched
</set-query-param>
</match-user>
<dispatch>/myapp.xqy</dispatch>
</rewriter>
Matching cookies. This properly parses the cookie HTTP header structure so matches can be
performed reliably. In this example, the SESSIONID cookie is used to conditionally set the current
transaction.
<rewriter xmlns="http://marklogic.com/xdmp/rewriter">
<match-cookie name="SESSIONID">
<set-transaction>$0</set-transaction>
</match-cookie>
</rewriter>
User-defined variables with local scoping. Set an initial value for the user variable "test". If the
path starts with /test/ and contains at least 2 more path components, reset the "test" variable
to the first matching path component, and add a query param "var1" with the second matching
path component. If the role of the user also contains either "admin-builtins" or "app-builder",
rewrite to the path /admin/secret.xqy; otherwise add a query param "var2" with the value of the
"test" user variable and rewrite to /default.xqy.
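A sketch of the rewriter just described (rule placement and the position of the scoped attribute are assumptions):

<rewriter xmlns="http://marklogic.com/xdmp/rewriter">
  <set-var name="test">initial</set-var>
  <match-path matches="^/test/([^/]+)/([^/]+)" scoped="true">
    <set-var name="test">$1</set-var>
    <add-query-param name="var1">$2</add-query-param>
    <match-role any-of="admin-builtins app-builder">
      <dispatch>/admin/secret.xqy</dispatch>
    </match-role>
  </match-path>
  <add-query-param name="var2">$test</add-query-param>
  <dispatch>/default.xqy</dispatch>
</rewriter>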
If you change the scoped attribute from true to false (or remove it), then all the changes within
that condition are discarded if the final dispatch to /admin/secret.xqy is not reached, leaving intact
the initial value of the "test" variable, not adding the "var1" query parameter, and dispatching to
/default.xqy.
Template Driven Extraction (TDE) enables you to define a relational lens over your document
data, so you can query parts of your data using SQL or the Optic API. Templates let you specify
which parts of documents make up rows in a view. You can also use templates to define a
semantic lens, specifying which values from a document make up triples in the triple index.
TDE enables you to generate rows and triples from ingested documents based on predefined
templates. TDE is applied during indexing at ingestion time and serves the following purposes:
• SQL/Relational indexing. TDE allows the user to map parts of an XML or JSON document
into SQL rows. With a TDE template instance, users can create different rows and
describe how each column in a row is constructed using the extracted data from a
document. For details, see Creating Template Views in the SQL Data Modeling Guide.
• Custom Embedded Triple Extraction. TDE enables users to ingest triples that do not
follow the sem:triple schema. A user can define many triple projections in a single
template, where each projection specifies the different parts of a document that are
mapped to subjects, predicates or objects. For details, see Using a Template to Identify Triples
in a Document in the Semantics Developer’s Guide.
• Entity Services Data Models. For details, see Creating and Managing Models in the Entity
Services Developer’s Guide.
TDE data is also used by the Optic API, as described in “Optic API for Multi-Model Data
Access” on page 295.
Note: The tde-admin role is required in order to insert a template into the schema
database.
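For example, a template document can be loaded into the schemas database with the tde:template-insert function (a sketch; the document URI and the $template variable are placeholders):

import module namespace tde = "http://marklogic.com/xdmp/tde"
  at "/MarkLogic/tde.xqy";

tde:template-insert("/templates/orders.xml", $template)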
Two roles control access to TDE:
• The tde-admin role, which is required to access the TDE protected collection.
• The tde-view role, which is required to view documents in the TDE protected collection.
Access to views can be further restricted by setting additional permissions on the template documents that
define the views. Since the same view can be declared in multiple templates loaded with different
permissions, access to views must be controlled at the column level as follows:
Column level read permissions are implicit by default and are derived from the read permissions
set on the template documents. Permissions can also be explicitly set on a column using the
permissions element. Permissions on a column are not required to be identical and are ORed
together. A user with a role that has at least one of the read permissions set on a column will be
able to see the column.
If a user does not have permissions on any of the view’s columns, the view itself is not visible.
• Template document TD1 creates view View 1 with columns C1 and C2. Template document
TD1 was loaded with Read Permission 1.
• Template document TD2 creates view View 1 with columns C1 and C3. Template document
TD2 was loaded with Read Permission 2.
The TDE document contains:
<element1>Foo</element1>
<element2>Bar</element2>
<element3>Baz</element3>
A user with Read Permission 1 sees View 1 with columns C1 and C2, while a user with Read
Permission 2 sees View 1 with columns C1 and C3. In other words:
• Users can see columns referenced in templates they have access to.
• Users cannot see additional columns referenced in templates they do not have access to.
If a document in a TDE protected collection makes use of Element Level Security, both
unprotected and protected elements will be extracted. For details on Element Level Security, see
Element Level Security in the Security Guide.
Note: When creating a JSON template, substitute the dash (-) with an upper-case
character. For example, collections-and becomes collectionsAnd. For the
complete structure of a JSON template, see JSON Template Structure.
The context, vars, and columns identify XQuery elements or JSON properties by means of path
expressions. Path expressions are based on XPath, which is described in XPath Quick Reference in
the XQuery and XSLT Reference Guide and “Traversing JSON Documents Using XPath” on
page 379.
{
"template":{
"description":"test template",
"context":"context1",
"pathNamespace":[
{
"prefix":"sem",
"namespaceUri":"http://semantics"
},
{
"prefix":"tde",
"namespaceUri":"http://tde"
}
],
"collections":[
"colc1",
"colc4",
{ "collectionsAnd":["colc2","colc3"]},
{ "collectionsAnd":["colc5","colc6"]}
],
"directories":["dir1","dir2"],
"vars":[
{
"name":"myvar1",
"val":"someVal"
}
],
"rows":[
{
"schemaName":"schemaA",
"viewName":"viewA",
"viewLayout":"sparse",
"columns":[
{
"name":"A",
"scalarType":"int",
"val":"someVal",
"nullable":false,
"default":"'1'",
"invalidValues":"ignore"
},
{
"name":"B",
"scalarType":"int",
"val":"someVal",
"nullable":true,
"invalidValues":"ignore"
}
]
},
{
"schemaName": ...
...
}
],
"triples":[
{
"subject":{
"val":"someVal",
"invalidValues":"ignore"
},
"predicate":{
"val":"someVal"
},
"object":{
"val":"someVal"
}
},
{
"subject": ...
...
}
],
"templates":[
{
"context":"context2",
"vars":[
{
"name":"myvar2",
"val":"someval"
}
],
"rows":[
{
"schemaName":"schemaA",
"viewName":"viewC",
"viewLayout":"sparse",
"columns":[
{
"name":"A",
"scalarType":"string",
"val":"someVal",
"collation":"http://marklogic.com/collation/fr"
},
{
"name":"B",
"scalarType":"int",
"val":"someVal"
}
]
}
]
}
]
}
}
18.3.1 Collections
A <collections> section defines the scope of the template to be confined only to documents in
specific collections. The <collections> section is a top level OR of a sequence of:
ORed collections:
<collections>
<collection>A</collection>
<collection>B</collection>
</collections>
ANDed collections:
<collections>
<collections-and>
<collection>A</collection>
<collection>B</collection>
</collections-and>
</collections>
OR of ANDed collections:
<collections>
<collection>A</collection>
<collection>B</collection>
<collections-and>
<collection>C</collection>
<collection>D</collection>
</collections-and>
<collections-and>
<collection>E</collection>
<collection>F</collection>
</collections-and>
</collections>
18.3.2 Directories
A <directories> section defines the scope of the template to be confined only to documents in
specific directories. The <directories> section is a top level OR of a sequence of <directory>
elements that scope the template to a specific directory.
18.3.3 path-namespaces
A <path-namespaces> section is a top-level sequence of one or more <path-namespace> elements,
each of which contains a <prefix> and a <namespace-uri> element:
<path-namespaces>
<path-namespace>
<prefix>wb</prefix>
<namespace-uri>http://marklogic.com/wb</namespace-uri>
</path-namespace>
</path-namespaces>
The namespace prefix definitions are stored in the template documents and not in the
configuration of the target database. Otherwise, templates cannot be compiled into code without
knowing the target database configuration that uses them.
18.3.4 Context
The context tag defines the lookup node that is used for template activation and data extraction.
Path expressions occurring inside vars, rows, or triples are relative to the context element of
their parent template. The context defines an anchor in the XML/JSON tree where data is
collected by walking up and down the tree relative to the anchor. Any indexable path expression is
valid in the context element, therefore predicates are allowed. The context element of a
sub-template is relative to the context element of its parent template.
For example:
<context>/Employee</context>
<context>/MedlineCitation/Article</context>
For performance and security reasons, your path expressions are limited to a subset of XPath. For
more details, see Template Driven Extraction (TDE) in the XQuery and XSLT Reference Guide.
You can specify an invalid-values element to control the behavior when the context expression
cannot be evaluated. The possible invalid-values settings are:
• ignore — The extraction will be skipped for the node that resulted in an exception thrown
during the evaluation of the context expression.
• reject — The server will generate an error when the document is inserted and reject the
document. This is the default setting.
It is important to understand that context defines the node from which to extract a single row. If
you want to extract multiple rows from the document, the context must be set to the parent
element of those rows. For example, suppose you have "order" documents structured as
follows:
<order>
<order-num>10663</order-num>
<order-date>2017-01-15</order-date>
<items>
<item>
<product>SpeedPro Ultimate</product>
<price>999</price>
<quantity>1</quantity>
</item>
<item>
<product>Ladies Racer Helmet</product>
<price>115</price>
<quantity>1</quantity>
</item>
</items>
</order>
Each order document contains one or more <item> nodes. You want to create a view template that
extracts the <product>, <price>, and <quantity> values from each <item> node. A context of
/order and column values, such as items/item/product, will trigger a single row extraction for
the entire document, so the only way this will work is if the document has only one <item> node.
To extract the content of all of the <item> nodes as multiple rows, the context must be
/order/items/item. In this case, if you wanted to also extract <order-num>, the column value
would be ../../order-num.
Note: The context can be any path validated by cts:valid-tde-context. It may contain
wildcards, such as ‘*’, but, for performance reasons, do not use wildcards unless
their value outweighs the performance costs. It is best to use collection or directory
scoping when wildcards are used in the context.
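For example, a row template over the order documents above could use /order/items/item as the
context and reach up the tree for the order number. The schema name, view name, and column
types below are illustrative:
<template xmlns="http://marklogic.com/xdmp/tde">
  <context>/order/items/item</context>
  <rows>
    <row>
      <schema-name>Sales</schema-name>
      <view-name>OrderItems</view-name>
      <columns>
        <column>
          <name>OrderNum</name>
          <scalar-type>long</scalar-type>
          <val>../../order-num</val>
        </column>
        <column>
          <name>Product</name>
          <scalar-type>string</scalar-type>
          <val>product</val>
        </column>
        <column>
          <name>Price</name>
          <scalar-type>decimal</scalar-type>
          <val>price</val>
        </column>
        <column>
          <name>Quantity</name>
          <scalar-type>int</scalar-type>
          <val>quantity</val>
        </column>
      </columns>
    </row>
  </rows>
</template>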
The complete grammar for the restricted XPath subset, including all the supported constructs, is
documented in Template Driven Extraction (TDE) in the XQuery and XSLT Reference Guide.
18.3.5 Variables
Variables are intermediate data projections needed for data transformation and are defined under
var elements. A variable can reference other variables inside its transformation section val, for
cases where several intermediate projections or transformations are needed before the final
projection into the column or triple. The expression inside val is relative to the context element
of the template in which the var is defined. See “Template Dialect and Data Transformation
Functions” on page 285 for the types of expressions allowed in a val.
For example:
<context>/northwind/Orders/Order</context>
.......
<vars>
<var>
<name>OrderID</name>
<val>./@OrderID</val>
</var>
</vars>
.......
<column>
<name>OrderID</name>
<scalar-type>long</scalar-type>
<val>$OrderID</val>
</column>
Note: You do not type variable values in the var description. Rather, the variable value is
typed in the column description.
The template dialect supports the following types of expressions, described in the Expressions
section of the XQuery language specification:
• Path Expressions
• Sequence Expressions
• Arithmetic Expressions
• Comparison Expressions
• Logical Expressions
• Conditional Expressions
• Expressions on SequenceTypes
More complex operations like looping, FLWOR statements, iterations, and XML construction are
not supported within the dialect. The property axis property:: is also not supported.
The dialect also supports the following categories of functions:
• String Functions
• Type Casting
• Mathematical Functions
• Miscellaneous Functions
Note: Templates only support XQuery functions. JavaScript functions are not supported.
The supported built-in functions include the following:
fn date and time functions:
• fn:day-from-date
• fn:day-from-dateTime
• fn:days-from-duration
• fn:format-date
• fn:format-dateTime
• fn:format-time
• fn:hours-from-dateTime
• fn:hours-from-duration
• fn:hours-from-time
• fn:minutes-from-dateTime
• fn:minutes-from-duration
• fn:minutes-from-time
• fn:timezone-from-date
• fn:timezone-from-dateTime
• fn:timezone-from-time
• fn:year-from-date
• fn:year-from-dateTime
• fn:years-from-duration
xdmp date functions:
• xdmp:dayname-from-date
• xdmp:parse-dateTime
• xdmp:parse-yymmdd
• xdmp:quarter-from-date
• xdmp:week-from-date
• xdmp:weekday-from-date
• xdmp:yearday-from-date
sql date functions:
• sql:dateadd
• sql:datediff
• sql:datepart
• sql:day
• sql:dayname
• sql:hours
• sql:minutes
• sql:month
• sql:monthname
• sql:quarter
• sql:seconds
• sql:timestampadd
• sql:timestampdiff
• sql:week
• sql:weekday
• sql:year
• sql:yearday
fn string functions:
• fn:substring-before
• fn:tokenize
• fn:translate
• fn:upper-case
Note: The tde-admin role is required in order to insert a template into the schema
database.
Note: The default collation for string values in a TDE template is codepoint. If you are
having problems joining columns that use a different collation, you will need to
change the TDE template to use a matching collation, or change the appropriate
range indexes to use codepoint.
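For example, a string column can pin its collation explicitly with the collation child of column.
A minimal sketch, assuming a LastName element in the source documents:
<column>
  <name>LastName</name>
  <scalar-type>string</scalar-type>
  <val>LastName</val>
  <collation>http://marklogic.com/collation/codepoint</collation>
</column>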
Warning For best performance, it is recommended that you do not configure your content
database to use the default Schemas database and instead create your own schemas
database for your template documents. If you create multiple content databases to
hold documents to be extracted by TDE, each content database must have its own
schema database. Failure to do so may result in unexpected indexing behavior on
the content databases.
Always validate your template before inserting your view into a schema database. To validate
your view, use the tde:validate function as follows:
let $viewTemplate :=
<template xmlns="http://marklogic.com/xdmp/tde">
.....
</template>
return tde:validate($viewTemplate)
If the template is valid, the output looks like the following:
<map:map xmlns:map="http://marklogic.com/xdmp/map"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xs="http://www.w3.org/2001/XMLSchema">
<map:entry key="valid">
<map:value xsi:type="xs:boolean">true</map:value>
</map:entry>
</map:map>
Note: Do not use xdmp:validate to validate your template, as this function may miss
some validation steps.
After you have confirmed that the view template is valid, you can insert your view template into
the schema database used by the content database holding the document data. You can use any
method for inserting documents into the database to insert a view template, but you must insert
the template document into the http://marklogic.com/xdmp/tde collection.
The tde:template-insert function is a convenience that validates the template, inserts the
template document into the tde collection in the schema database (if executed on the content
database) with the default permissions, and triggers a re-index of the database.
Note: When a template is inserted, only those document fragments affected by the
template are re-indexed.
For example, to define and insert a view template, you would enter the following:
let $ClinicalView :=
<template xmlns="http://marklogic.com/xdmp/tde">
<description>populates patients' data</description>
<context>/Citation/Article</context>
<rows>
<row>
<schema-name>Medical2</schema-name>
<view-name>Publications</view-name>
<columns>
<column>
<name>ID</name>
<scalar-type>long</scalar-type>
<val>../ID</val>
</column>
<column>
<name>ISSN</name>
<scalar-type>string</scalar-type>
<val>Journal/ISSN</val>
</column>
</columns>
</row>
</rows>
</template>
return tde:template-insert("/Template.xml", $ClinicalView)
If you use an alternative insert operation, you must explicitly insert the template document into
the http://marklogic.com/xdmp/tde collection of the schema database used by your content
database. For example:
return xdmp:document-insert(
"/Template.xml",
$ClinicalView,
(),
"http://marklogic.com/xdmp/tde")
A document “doesn’t conform” to a template when the template cannot extract a valid row from
it. For example, the template may say you must have a price element at some path, with a column
that is not nullable and has no default value, but the inserted document has no price element at
that path; or perhaps there is a price in the document, but it can’t be cast to the type of the column.
If the document is already in the database and you add the template, you may not want to delete
the non-conforming document, but you do want to be aware of its existence. If you set the log
level to debug, then in the case where you added a template and some existing documents are
non-conforming, you’ll get an error in the error log for each document that doesn’t get indexed.
For details on setting the log level, see Understanding the Log Levels in the Administrator’s Guide.
If the template is already in place and you try to insert a non-conforming document, there are two
possible outcomes, mirroring the invalid-values settings described earlier: with reject (the
default), the server generates an error and rejects the document; with ignore, the document is
inserted but the non-conforming extraction is skipped.
You can also temporarily disable a template. To disable the template, set the <enabled> flag to
false, as follows:
<template xmlns="http://marklogic.com/xdmp/tde">
<context>/foo/bar</context>
<enabled>false</enabled>
...
</template>
The MarkLogic Optic API makes it possible to perform relational operations on indexed values
and documents. The Optic API is not a single API, but rather a set of APIs exposed within the
XQuery, JavaScript, and Java languages.
The Optic API can read any indexed value, whether the value is in a range index, the triple index,
or rows extracted by a template. The extraction templates, such as those used to create template
views described in Creating Template Views in the SQL Data Modeling Guide, are a simple,
powerful way to specify a relational lens over documents, making parts of your document data
accessible via SQL. Optic gives you access to the same relational operations, such as joins and
aggregates, over rows. The Optic API also enables document search to match rows projected from
documents, joined documents as columns within rows, and dynamic document structures, all
performed efficiently within the database and accessed programmatically from your application.
The Optic API allows you to use your data as-is and makes it possible to make use of MarkLogic
document and search features using JavaScript or XQuery syntax, incorporating common SQL
concepts, regardless of the structure of your data. Unlike SQL, Optic is well suited for building
applications and accessing the full range of MarkLogic NoSQL capabilities. Because Optic is
integrated into common application languages, it can perform queries within the context of
broader applications that perform updates to data and process results for presentation to end users.
Typical Optic use cases include:
• Joins: Integrating documents that are frequently updated or that have many relations with
a declarative query instead of with a denormalized write
• Grouping: Summarizing aggregate properties over many documents
• Exact matches over repeated structures in documents
• Joining Triples: Incorporating semantic triples to enrich row data or to link documents and
rows
• Document Joins: Returning the entire source document to provide context to row data
• Document Query: Performing rich full text search to constrain rows in addition to
relational filtering
As in the SQL and SPARQL interfaces, you can use the Optic API to build a query from standard
operations such as where, groupBy, orderBy, union, and join by expressing the operations through
calls to JavaScript and XQuery functions. The Optic API enables you to work in the environment
of the programming language, taking advantage of variables and functions for benefits such as
modularizing plan construction and avoiding the parse errors and injection attacks associated with
assembling a query by concatenating strings.
Note: Unlike in SQL, column order is indeterminate in Optic. The notable exceptions are the
sort keys in orderBy and the grouping keys in groupBy, where order specifies priority.
There is also an Optic Java Client API, which is described in Optic Java API for Relational
Operations in the Developing Applications With the Java Client API guide.
Note: Libraries can be imported as JavaScript MJS modules. This is the preferred import
method.
Warning Resource service extensions, transforms, row mappers and reducers, and other
hooks cannot be implemented as JavaScript MJS modules.
The XQuery Optic API and JavaScript Optic API are functionally equivalent. Each is adapted to
the features and practices of their respective language conventions, but otherwise both are as
consistent as possible and have the same performance profile. Use the language that best suits
your skills and programming environment.
The following table highlights the differences between the JavaScript and XQuery versions of the
Optic API.
Fluent object chaining:
• JavaScript: Methods that return objects.
• XQuery: Functions take a state object as the first parameter and return a state object, enabling
use of the XQuery => chaining operator. These black-box objects hold the state of the plan
being built in the form of a map. Because these state objects might change in a future release,
they must not be modified, serialized, or persisted. Chained functions always create a new map
instead of modifying the existing map.
Result types:
• JavaScript: Returns a sequence of objects, with the option to return a sequence of arrays.
• XQuery: Returns a map of sql:rows, with the option to return an array consisting of a header
and rows.
Optic Objects and Associated Methods
Data access functions (each returns an AccessPlan):
• op.fromView()
• op.fromTriples()
• op.fromLexicons()
• op.fromLiterals()
• op.fromSQL()
• op.fromSPARQL()
AccessPlan and ModifyPlan methods:
• col()
• joinInner() => ModifyPlan
• joinLeftOuter() => ModifyPlan
• joinCrossProduct() => ModifyPlan
• joinDoc() => ModifyPlan
• joinDocUri() => ModifyPlan
• union() => ModifyPlan
• where() => ModifyPlan
• whereDistinct() => ModifyPlan
• orderBy() => ModifyPlan
• groupBy() => ModifyPlan
• select() => ModifyPlan
• offset() => ModifyPlan
• except() => ModifyPlan
• intersect() => ModifyPlan
• limit() => ModifyPlan
• offsetLimit() => ModifyPlan
• prepare() => PreparePlan
PreparePlan and IteratePlan methods:
• map() => IteratePlan
• reduce() => IteratePlan
• result()
• export()
• explain()
An Optic query creates a pipeline that applies a sequence of relational operations to a row set. The
following are the basic characteristics of the functions and methods used in an Optic query:
• All data access functions (any from* function) produce an output row set in the form of an
AccessPlan object.
• All modifier operations, such as ModifyPlan.prototype.where, take an input row set and
produce an output row set in the form of a ModifyPlan object.
• All composer operations, such as ModifyPlan.prototype.joinInner, take two input row
sets and produce one output row set in the form of a ModifyPlan object.
• The last output row set is the result of the plan.
• The order of operations is constrained only in that the pipeline starts with an accessor
operation. For example, you can specify:
• select before a groupBy that applies a formula to two columns to specify the input
for a sum function.
• select after a groupBy that applies a formula on the columns that are the output
from two sum aggregates.
The following is a simple example that selects specific columns from the rows in a view and
outputs them in a particular order. The pipeline created by this query is illustrated below.
const op = require('/MarkLogic/optic');
op.fromView('main', 'employees')
.select(['EmployeeID', 'FirstName', 'LastName'])
.orderBy('EmployeeID')
.result();
1. The op.fromView function outputs an AccessPlan object that can be used by all of the API
methods.
[Pipeline diagram: op.fromView produces an AccessPlan; select and orderBy each produce a
ModifyPlan; result produces the output.]
The following example calculates the total expenses for each employee and returns the results in
order of employee number.
const op = require('/MarkLogic/optic');
const employees = op.fromView('main', 'employees');
const expenses = op.fromView('main', 'expenses');
const Plan =
employees.joinInner(expenses, op.on(employees.col('EmployeeID'),
expenses.col('EmployeeID')))
.groupBy(employees.col('EmployeeID'), ['FirstName','LastName',
op.sum('totalexpenses', expenses.col('Amount'))])
.orderBy('EmployeeID')
Plan.result();
Note: The absence of .select is equivalent to a SELECT * in SQL, retrieving all columns
in a view.
1. The op.fromView functions output AccessPlan objects that are used by the op.on function
and AccessPlan.prototype.col methods to direct the ModifyPlan.prototype.joinInner
method to join the row sets from both views, which then outputs them as a single row set in
the form of a ModifyPlan object.
[Pipeline diagram: two op.fromView accessors (each with a col) produce AccessPlans; joinInner
produces a ModifyPlan; groupBy aggregates the “Amount” columns for each employee; orderBy
orders the output.]
The data access functions and their XQuery equivalents:
• op.fromView / op:from-view
• op.fromTriples / op:from-triples
• op.fromLiterals / op:from-literals
• op.fromLexicons / op:from-lexicons
• op.fromSQL / op:from-sql
• op.fromSPARQL / op:from-sparql
The op.fromView function accesses indexes created by a template view, as described in Creating
Template Views in the SQL Data Modeling Guide.
The op.fromTriples function accesses semantic triple indexes and abstracts them as rows and
columns. Note, however, that the columns of rows from an RDF graph may have varying data
types, which could affect joins.
The op.fromLiterals function constructs a literal row set that is similar to the results from a SQL
VALUES or SPARQL VALUES statement. This allows you to provide alternative columns to join
with an existing view.
The op.fromSQL and op.fromSPARQL functions dynamically construct a row set from a SQL
SELECT query over template views and from a SPARQL SELECT query over triples, respectively.
The following sections provide examples of the different data access functions:
• fromView Examples
• fromTriples Example
• fromLexicons Examples
• fromLiterals Examples
• fromSQL Example
• fromSPARQL Example
For example, the following queries list selected employee columns in order of employee ID.
JavaScript:
const op = require('/MarkLogic/optic');
op.fromView('main', 'employees')
.select(['EmployeeID', 'FirstName', 'LastName'])
.orderBy('EmployeeID')
.result();
XQuery:
op:from-view("main", "employees")
=> op:select(("EmployeeID", "FirstName", "LastName"))
=> op:order-by("EmployeeID")
=> op:result()
You can use Optic to filter rows for specific data of interest. For example, the following query
returns the ID and name for employee 3.
JavaScript:
const op = require('/MarkLogic/optic');
op.fromView('main', 'employees')
.where(op.eq(op.col('EmployeeID'), 3))
.select(['EmployeeID', 'FirstName', 'LastName'])
.orderBy('EmployeeID')
.result();
XQuery:
op:from-view("main", "employees")
=> op:where(op:eq(op:col("EmployeeID"), 3))
=> op:select(("EmployeeID", "FirstName", "LastName"))
=> op:order-by("EmployeeID")
=> op:result()
The following query returns all of the expenses and expense categories for each employee and
returns the results in order of employee number. Because some information is contained only on the
expense reports and some data is only in the employee record, a row join on EmployeeID is used to
pull data from both sets of documents and produce a single, integrated row set.
JavaScript:
const op = require('/MarkLogic/optic');
const employees = op.fromView('main', 'employees');
const expenses = op.fromView('main', 'expenses');
const Plan =
employees.joinInner(expenses, op.on(employees.col('EmployeeID'),
expenses.col('EmployeeID')))
.select([employees.col('EmployeeID'), 'FirstName', 'LastName',
'Category', 'Amount'])
.orderBy(employees.col('EmployeeID'))
Plan.result();
XQuery:
return $employees
=> op:join-inner($expenses, op:on(
op:view-col("employees", "EmployeeID"),
op:view-col("expenses", "EmployeeID")))
=> op:select((op:view-col("employees", "EmployeeID"),
"FirstName", "LastName", "Category", "Amount"))
=> op:order-by(op:view-col("employees", "EmployeeID"))
=> op:result()
Locate employee expenses that exceed the allowed limit. The where operation in this example
demonstrates the nature of the Optic chaining pipeline: each operation applies to the rows
produced by all of the preceding operations.
JavaScript:
const op = require('/MarkLogic/optic');
const Plan =
employees.joinInner(expenses, op.on(employees.col('EmployeeID'),
expenses.col('EmployeeID')))
.joinInner(expenselimit, op.on(expenses.col('Category'),
expenselimit.col('Category')))
.where(op.gt(expenses.col('Amount'), expenselimit.col('Limit')))
.select([employees.col('EmployeeID'), 'FirstName', 'LastName',
expenses.col('Category'), expenses.col('Amount'),
expenselimit.col('Limit') ])
.orderBy(employees.col('EmployeeID'))
Plan.result();
XQuery:
return $employees
=> op:join-inner($expenses, op:on(
op:view-col("employees", "EmployeeID"),
op:view-col("expenses", "EmployeeID")))
=> op:join-inner($expenselimit, op:on(
op:view-col("expenses", "Category"),
op:view-col("expenselimit", "Category")))
=> op:where(op:gt(op:view-col("expenses", "Amount"),
op:view-col("expenselimit", "Limit")))
=> op:select((op:view-col("employees", "EmployeeID"),
"FirstName", "LastName",
op:view-col("expenses", "Category"),
op:view-col("expenses", "Amount"),
op:view-col("expenselimit", "Limit")))
=> op:order-by(op:view-col("employees", "EmployeeID"))
=> op:result()
JavaScript:
const op = require('/MarkLogic/optic');
// prefixer is a factory for sem:iri() constructors in a namespace
const resource = op.prefixer('http://dbpedia.org/resource/');
const foaf = op.prefixer('http://xmlns.com/foaf/0.1/');
const onto = op.prefixer('http://dbpedia.org/ontology/');
const person = op.col('person');
const Plan =
op.fromTriples([
op.pattern(person, onto('birthPlace'), resource('Brooklyn')),
op.pattern(person, foaf("name"), op.col("name"))
])
Plan.result();
XQuery:
return op:from-triples((
op:pattern($person, $onto("birthPlace"), $resource("Brooklyn")),
op:pattern($person, $foaf("name"), op:col("name"))))
=> op:result()
The examples in this section operate on the documents described in Load the Data in the SQL Data
Modeling Guide.
Note: The fromLexicons function queries on range index names, rather than column
names in a view. For example, for the employee documents, rather than query on
EmployeeID, you create a range index, named ID, and query on ID.
First, in the database holding your data, create element range indexes for the following elements:
ID, Position, FirstName, and LastName. For details on how to create range indexes, see Defining
Element Range Indexes in the Administrator’s Guide.
The following example returns the EmployeeID for each employee. The text, myview, is prepended
to each column name.
JavaScript:
const op = require('/MarkLogic/optic');
const Plan =
op.fromLexicons(
{EmployeeID: cts.elementReference(xs.QName('ID'))}, 'myview');
Plan.result();
XQuery:
op:from-lexicons(
map:entry(
"EmployeeID", cts:element-reference(xs:QName("ID"))),
"myview")
=> op:result()
The following example returns the EmployeeID, FirstName, LastName, and the URI of the
document holding the data for each employee.
JavaScript:
const op = require('/MarkLogic/optic');
const Plan =
op.fromLexicons({
EmployeeID: cts.elementReference(xs.QName('ID')),
FirstName: cts.elementReference(xs.QName('FirstName')),
LastName: cts.elementReference(xs.QName('LastName')),
URI: cts.uriReference()});
Plan.result();
XQuery:
op:from-lexicons(
map:entry("EmployeeID", cts:element-reference(xs:QName("ID")))
=> map:with("FirstName", cts:element-reference(xs:QName("FirstName")))
=> map:with("LastName", cts:element-reference(xs:QName("LastName")))
=> map:with("uri", cts:uri-reference()))
=> op:result()
Every view contains a fragment ID. The fragment ID generated from op.fromLexicons can be
used to join with the fragment ID of a view. For example, the following returns the EmployeeID,
FirstName, LastName, Position, and document URI for each employee.
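These queries assume bindings along the following lines; a sketch, using op.fragmentIdCol as in
the joinDocUri example later in this chapter (the fromLexicons qualifier argument is passed as
null here):
const op = require('/MarkLogic/optic');
// Name fragment ID columns so view rows can join to URI lexicon rows.
const empldocid = op.fragmentIdCol('empldocid');
const uridocid = op.fragmentIdCol('uridocid');
const employees = op.fromView('main', 'employees', null, empldocid);
const DFrags = op.fromLexicons({URI: cts.uriReference()}, null, uridocid);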
JavaScript:
const op = require('/MarkLogic/optic');
const Plan =
employees.joinInner(DFrags, op.on(empldocid, uridocid))
.select(['URI', 'EmployeeID', 'FirstName',
'LastName', 'Position']);
Plan.result() ;
XQuery:
return $employees
=> op:join-inner($DFrags, op:on($empldocid, $uridocid))
=> op:select((op:view-col("employees", "EmployeeID"),
("URI", "FirstName", "LastName", "Position")))
=> op:result()
Build a table with two rows and return the row that matches the id column value of 1:
JavaScript:
const op = require('/MarkLogic/optic');
op.fromLiterals([
{id:1, name:'Master 1', date:'2015-12-01'},
{id:2, name:'Master 2', date:'2015-12-02'}
])
.where(op.eq(op.col('id'),1))
.result();
XQuery:
op:from-literals(
map:entry("columnNames",
json:to-array(("id", "name", "date")))
=> map:with("rowValues", (
json:to-array(( 1, "Master 1", "2015-12-01")),
json:to-array(( 2, "Master 2", "2015-12-02")))))
=> op:where(op:eq(op:col("id"), 1))
=> op:result()
Build a table with five rows and return the average values for group 1 and group 2:
JavaScript:
const op = require('/MarkLogic/optic');
op.fromLiterals([
{group:1, val:2},
{group:1, val:4},
{group:2, val:3},
{group:2, val:5},
{group:2, val:7}
])
.groupBy('group', op.avg('valAvg', 'val'))
.orderBy('group')
.result()
XQuery:
op:from-literals((
map:entry("group", 1) => map:with("val", 2),
map:entry("group", 1) => map:with("val", 4),
map:entry("group", 2) => map:with("val", 3),
map:entry("group", 2) => map:with("val", 5),
map:entry("group", 2) => map:with("val", 7)
))
=> op:group-by("group", op:avg("valAvg", "val"))
=> op:order-by("group")
=> op:result()
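The fromSQL and fromSPARQL accessors take a query string. A minimal sketch, assuming the
main.employees view from the earlier examples and foaf name triples like those in the
fromTriples example:
JavaScript:
const op = require('/MarkLogic/optic');
// Build a row set from a SQL SELECT over template views.
op.fromSQL('SELECT EmployeeID, FirstName, LastName FROM main.employees')
  .result();
// Build a row set from a SPARQL SELECT over the triple index.
op.fromSPARQL(`PREFIX foaf: <http://xmlns.com/foaf/0.1/>
  SELECT ?person ?name WHERE { ?person foaf:name ?name }`)
  .result();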
The following sections illustrate common query patterns:
• Basic Queries
• Row Joins
• Document Joins
• Document Queries
For example, the following lists all of the employee IDs and names in order of ID number.
JavaScript:
const op = require('/MarkLogic/optic');
op.fromView('main', 'employees')
.select(['EmployeeID', 'FirstName', 'LastName'])
.orderBy('EmployeeID')
.result();
XQuery:
op:from-view("main", "employees")
=> op:select(("EmployeeID", "FirstName", "LastName"))
=> op:order-by("EmployeeID")
=> op:result()
Grouping in Optic differs from SQL. In SQL, the grouping keys are in the GROUP BY statement
and the aggregates are separately declared in the SELECT. In an Optic group-by operation, the
grouping keys are the first parameter and the aggregates are an optional second parameter. In this
way, Optic enables you to aggregate sequences and arrays in a group-by operation and then call
expression functions that operate on these sequences and arrays. For example, many of the math:*
functions, described in “Expression Functions For Processing Column Values” on page 331, take
a sequence.
In Optic, instead of applying aggregate functions to the group, a simple column can be supplied.
Optic will sample the value of the column for one arbitrary row within the group. This can be
useful when the column has the same value in every row within the group; for example, when
grouping on a department number but sampling on the department name.
JavaScript:
const op = require('/MarkLogic/optic');
op.fromView('main', 'expenses')
.groupBy(null, [
op.count('ExpenseReports', 'EmployeeID'),
op.min('minCharge', 'Amount'),
op.avg('average', 'Amount'),
op.max('maxCharge', 'Amount')
])
.select(['ExpenseReports',
'minCharge',
op.as('avgCharge', op.math.trunc(op.col('average'))),
'maxCharge'])
.result();
XQuery:
return $expenses
=> op:group-by((), (
op:count("ExpenseReports", "EmployeeID"),
op:min("minCharge", "Amount"),
op:avg("average", "Amount"),
op:max("maxCharge", "Amount")
))
=> op:select(("ExpenseReports", "minCharge",
op:as("avgCharge", omath:trunc(op:col("average"))),
"maxCharge"))
=> op:result();
The following methods join row sets:
• joinInner: Creates one output row set that concatenates one left row and one right row for
each match between the keys in the left and right row sets.
• joinLeftOuter: Creates one output row set with all of the rows from the left row set with the
matching rows in the right row set, or NULL when there is no match.
• joinCrossProduct: Creates one output row set that concatenates every left row with every
right row.
The examples in this section join the employees and expenses views to return more information
on employee expenses and their categories than what is available on individual documents.
19.4.3.1 joinInner
The following queries make use of the AccessPlan.prototype.joinInner and op:join-inner
functions to return all of the expenses and expense categories for each employee in order of
employee number. The join will supplement employee data with information stored in separate
expenses documents. The inner join acts as a filter and will only include those employees with
expenses.
JavaScript:
const op = require('/MarkLogic/optic');
const Plan =
employees.joinInner(expenses, op.on(employees.col('EmployeeID'),
expenses.col('EmployeeID')))
.select([employees.col('EmployeeID'), 'FirstName', 'LastName',
expenses.col('Category'), 'Amount'])
.orderBy(employees.col('EmployeeID'))
Plan.result();
XQuery:
return $employees
=> op:join-inner($expenses, op:on(
op:view-col("employees", "EmployeeID"),
op:view-col("expenses", "EmployeeID")))
=> op:select((op:view-col("employees", "EmployeeID"),
"FirstName", "LastName", "Category", "Amount"))
=> op:order-by(op:view-col("employees", "EmployeeID"))
=> op:result()
Use the AccessPlan.prototype.where and op:where functions to locate employee expenses that
exceed the allowed limit. Join the employees, expenses, and category limits to get a 360 degree
view of employee expenses.
JavaScript:
const op = require('/MarkLogic/optic');
const Plan =
employees.joinInner(expenses, op.on(employees.col('EmployeeID'),
expenses.col('EmployeeID')))
.joinInner(expenselimit, op.on(expenses.col('Category'),
expenselimit.col('Category')))
.where(op.gt(expenses.col('Amount'), expenselimit.col('Limit')))
.select([employees.col('EmployeeID'), 'FirstName', 'LastName',
expenses.col('Category'), expenses.col('Amount'),
expenselimit.col('Limit') ])
.orderBy(employees.col('EmployeeID'))
Plan.result();
XQuery:
return $employees
=> op:join-inner($expenses, op:on(
op:view-col("employees", "EmployeeID"),
op:view-col("expenses", "EmployeeID")))
=> op:join-inner($expenselimit, op:on(
op:view-col("expenses", "Category"),
op:view-col("expenselimit", "Category")))
=> op:where(op:gt(op:view-col("expenses", "Amount"),
op:view-col("expenselimit", "Limit")))
=> op:select((op:view-col("employees", "EmployeeID"),
"FirstName", "LastName",
op:view-col("expenses", "Category"),
op:view-col("expenses", "Amount"),
op:view-col("expenselimit", "Limit")))
=> op:order-by(op:view-col("employees", "EmployeeID"))
=> op:result()
19.4.3.2 joinLeftOuter
The following queries make use of the AccessPlan.prototype.joinLeftOuter and
op:join-left-outer functions to return all of the expenses and expense categories for each
employee in order of employee number, or null values for employees without matching expense
records.
JavaScript:
const op = require('/MarkLogic/optic');
const Plan =
employees.joinLeftOuter(expenses, op.on(employees.col('EmployeeID'),
expenses.col('EmployeeID')))
.orderBy(employees.col('EmployeeID'))
Plan.result();
XQuery:
return $employees
=> op:join-left-outer($expenses, op:on(
op:view-col("employees", "EmployeeID"),
op:view-col("expenses", "EmployeeID")))
=> op:order-by(op:view-col("employees", "EmployeeID"))
=> op:result()
19.4.3.3 joinCrossProduct
The following queries make use of the AccessPlan.prototype.joinCrossProduct and
op:join-cross-product functions to return all of the expenses and expense categories for each
employee title (Position) in order of expense Category. If employees with a particular position do
not have any expenses under a category, the reported expense is 0.
JavaScript:
const op = require('/MarkLogic/optic');
expenses.groupBy('Category')
.joinCrossProduct(employees.groupBy('Position'))
.select(null, 'all')
.joinLeftOuter(
expenses.joinInner(employees,
op.on(employees.col('EmployeeID'),
expenses.col('EmployeeID'))
)
.groupBy(['Category', 'Position'],
op.sum('rawExpense', expenses.col('Amount'))
)
.select(null, 'expensed'),
[op.on(op.viewCol('expensed', 'Category'),
op.viewCol('all', 'Category')),
op.on(op.viewCol('expensed', 'Position'),
op.viewCol('all', 'Position'))]
)
.select([op.viewCol('all', 'Category'),
op.viewCol('all', 'Position'),
op.as('expense', op.sem.coalesce(op.col('rawExpense'), 0))
])
.orderBy(['Category', 'Position'])
.result();
XQuery:
return $expenses
=> op:group-by('Category')
=> op:join-cross-product($employees => op:group-by("Position"))
=> op:select((), 'all')
=> op:join-left-outer(
$expenses
=> op:join-inner($employees, op:on(
op:col($employees, "EmployeeID"),
op:col($expenses, "EmployeeID")
))
=> op:group-by(("Category", "Position"),
op:sum("rawExpense", op:col($expenses, "Amount")))
=> op:select((), "expensed"),
(op:on(op:view-col("expensed", "Category"),
op:view-col("all", "Category")),
op:on(op:view-col("expensed", "Position"),
op:view-col("all", "Position")))
)
=> op:select((op:view-col("all", "Category"),
op:view-col("all", "Position"),
op:as("expense",
osem:coalesce((op:col("rawExpense"), 0)))))
=> op:order-by(("Category", "Position"))
=> op:result();
The following method joins documents to rows:
• joinDoc: Joins the source documents for rows (especially when the source documents have
detail that is not projected into rows). In this case, name the fragment ID column and use it
in the join.
Note: Minimize the number of documents retrieved by filtering or limiting rows before
joining documents.
19.4.4.1 joinDoc
In the examples below, the ‘employee’ and ‘expense’ source documents are returned by the
AccessPlan.prototype.joinDoc or op:join-doc function after the row data. The join is done on
the document fragment ids returned by op.fromView.
JavaScript:
const op = require('/MarkLogic/optic');
const empldocid = op.fragmentIdCol('empldocid');
const expdocid = op.fragmentIdCol('expdocid');
const employees = op.fromView('main', 'employees', null, empldocid);
const expenses = op.fromView('main', 'expenses', null, expdocid);
const Plan =
employees.joinInner(expenses, op.on(employees.col('EmployeeID'),
expenses.col('EmployeeID')))
.joinDoc('Employee', empldocid)
.joinDoc('Expenses', expdocid)
.select([employees.col('EmployeeID'),'FirstName',
'LastName', expenses.col('Category'), 'Amount',
'Employee', 'Expenses'])
.orderBy(employees.col('EmployeeID'))
Plan.result();
XQuery:
return $employees
=> op:join-inner($expenses, op:on(
op:view-col("employees", "EmployeeID"),
op:view-col("expenses", "EmployeeID")))
=> op:join-doc("Employee", $empldocid)
=> op:join-doc("Expenses", $expdocid)
=> op:select((op:view-col("employees", "EmployeeID"),
"FirstName", "LastName",
op:view-col("expenses", "Category"),
op:view-col("expenses", "Amount"),
"Employee", "Expenses"))
=> op:order-by(op:view-col("employees", "EmployeeID"))
=> op:result()
19.4.4.2 joinDocUri
The following examples show how the AccessPlan.prototype.joinDocUri or op:join-doc-uri
function can be used to return the document URI along with the row data.
JavaScript:
const op = require('/MarkLogic/optic');
const empldocid = op.fragmentIdCol('empldocid');
const employees = op.fromView('main', 'employees', null, empldocid);
employees.joinDocUri(op.col('uri'), empldocid)
.result();
The following methods combine row sets:
• union: Combines all of the rows from the input row sets. Columns that are present only in
some input row sets effectively have a null value in the rows from the other row sets.
• intersect: Creates one output row set from the rows that have the same columns and values
in both the left and right row sets.
• except: Creates one output row set from the rows that have the same columns in both the
left and right row sets, but the column values in the left row set do not match the column
values in the right row set.
The examples in this section operate on the employees and expenses views to return more
information on employee expenses and their categories than what is available on individual
documents.
19.4.5.1 union
The following queries make use of the AccessPlan.prototype.union and op:union functions to
combine the employee and expense rows into one row set, ordered by employee number.
JavaScript:
const op = require('/MarkLogic/optic');
const Plan =
employees.union(expenses)
.whereDistinct()
.orderBy([employees.col('EmployeeID')])
Plan.result();
XQuery:
return $employees
=> op:union($expenses)
=> op:where-distinct()
=> op:order-by(op:view-col("employees", "EmployeeID"))
=> op:result()
19.4.5.2 intersect
The following queries make use of the AccessPlan.prototype.intersect and op:intersect
functions to return the matching columns and values in the tables, tab1 and tab2.
Note: The op.fromLiterals function is used for this example because the data set does
not contain redundant columns and values.
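A sketch of the tab1 and tab2 row sets assumed by these examples; the rows are illustrative, with
tab2 sharing only some rows with tab1:
const op = require('/MarkLogic/optic');
const tab1 = op.fromLiterals([
  {id: 1, val: 'a'},
  {id: 2, val: 'b'},
  {id: 3, val: 'c'}
]);
const tab2 = op.fromLiterals([
  {id: 1, val: 'a'},
  {id: 3, val: 'x'}
]);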
JavaScript:
const op = require('/MarkLogic/optic');
tab1.intersect(tab2)
.orderBy('id')
.result();
XQuery:
return $tab1
=> op:intersect($tab2)
=> op:order-by("id")
=> op:result()
19.4.5.3 except
The following queries make use of the AccessPlan.prototype.except and op:except functions to
return the columns and values in tab1 that do not match those in tab2.
Note: The op.fromLiterals function is used for this example because the data set does
not contain redundant columns and values.
JavaScript:
const op = require('/MarkLogic/optic');
tab1.except(tab2)
.orderBy('id')
.result();
XQuery:
return $tab1
=> op:except($tab2)
=> op:order-by("id")
=> op:result()
You can constrain rows with a document search by passing a cts query to the where operation. For
example, the following returns the names and positions of employees whose documents contain
the words “Senior” and “Researcher”.
JavaScript:
const op = require('/MarkLogic/optic');
op.fromView('main', 'employees')
.where(cts.andQuery([cts.wordQuery('Senior'),
cts.wordQuery('Researcher')]))
.select(['FirstName', 'LastName', 'Position'])
.result();
XQuery:
return $employees
=> op:where(cts:and-query((cts:word-query("Senior"),
cts:word-query("Researcher"))))
=> op:select(("FirstName", "LastName", "Position"))
=> op:result()
To view the output as a table in Query Console, select HTML from the String as menu.
An expression function is:
• A proxy for a deferred call to a builtin function on the value of a column in each row.
• Nestable for powerful expressions that transform values.
For example, the math.trunc function is expressed by the op.math.trunc expression function in
JavaScript and as omath:trunc in XQuery.
For example, to truncate the decimal portion of the returned 'average' value, do the following:
op.math.trunc(op.col('average')) // JavaScript
omath:trunc(op:col('average')) (: XQuery :)
Expression functions are provided for the cts, fn, json, map, math, rdf, sem, spell, sql, xdmp,
and xs builtin functions; for the complete list, see the Optic API reference documentation. Their
XQuery equivalents are also supported, but you must import the respective module libraries
listed in “XQuery Libraries Required for Expression Functions” on page 335.
Almost every value-processing built-in function you would want to use is available. In the
unlikely event that you want to call a function that is not covered, the Optic API provides a
general-purpose op.call constructor for deferred calls.
Use the op.call function with care, because some builtins could adversely affect performance or
worse. You cannot call JavaScript or XQuery functions using this function. Instead, provide a map
or reduce function to postprocess the results.
Expression functions can be nested for powerful expressions that transform values. For example:
.select(['countUsers', 'minReputation',
op.as('avgReputation', op.math.trunc(op.col('aRep'))), 'maxReputation',
op.as('locationPercent',
op.fn.formatNumber(op.xs.double(
op.divide(op.col('locationCount'),
op.col('countUsers'))),'##%'))
])
The following expression functions implement comparison, logical, conditional, and arithmetic
operators; their SPARQL and SQL equivalents are shown in parentheses where applicable:
• ne(valueExpression, valueExpression) => booleanExpression (SPARQL: !=, SQL: !=)
• or(booleanExpression+) => booleanExpression (SPARQL: ||, SQL: OR)
• not({condition:...}) => booleanExpression
• case(whenExpression+, valueExpression) => valueExpression (SPARQL: IF, SQL: CASE WHEN ELSE)
• case({list:..., otherwise:...}) => valueExpression
• when(booleanExpression, valueExpression) => whenExpression (SQL: WHEN)
• divide(numericExpression, numericExpression) => numericExpression (SPARQL: /, SQL: /)
• modulo(numericExpression, numericExpression) => numericExpression (SQL: %)
• multiply(numericExpression, numericExpression) => numericExpression (SPARQL: *, SQL: *)
• subtract(numericExpression, numericExpression) => numericExpression (SPARQL: -, SQL: -)
Note: Expressions that use rows returned from a subplan (similar to SQL or SPARQL
EXISTS) are not supported in the initial release.
The Optic node constructor functions enable you to:
• Create JSON objects whose properties come from column values or XML elements whose
content or attribute values come from column values.
• Insert documents or nodes extracted via op.xpath into constructed nodes.
• Create JSON arrays from aggregated arrays of nodes or XML elements from aggregated
sequences of nodes.
The table below summarizes the Optic node constructor functions. For details on each function,
see the Optic API reference documentation.
• op.jsonArray: Constructs a JSON array with the specified JSON nodes as items.
• op.jsonBoolean: Constructs a JSON boolean node with a specified value.
• op.jsonDocument: Constructs a JSON document with the root content, which must be
exactly one JSON object or array node.
• op.jsonNull: Constructs a JSON null node.
• op.jsonNumber: Constructs a JSON number node with a specified value.
• op.jsonObject: Constructs a JSON object with the specified properties.
For example, the following query constructs JSON documents, like the one shown below:
const op = require('/MarkLogic/optic');
const employees = op.fromView('main', 'employees');
employees.select(op.as('Employee', op.jsonDocument(
op.jsonObject([op.prop('ID and Name',
op.jsonArray([
op.jsonNumber(op.col('EmployeeID')),
op.jsonString(op.col('FirstName')),
op.jsonString(op.col('LastName'))
])),
op.prop('Position',
op.jsonString(op.col('Position')))
])
)))
.result();
This query will produce output that looks like the following:
{
"Employee": {
"ID and Name": [
42,
"Debbie",
"Goodall"
],
"Position": "Senior Widget Researcher"
}
}
You can use the PreparePlan.prototype.explain function to view or save an execution plan. The
execution plan definition consists of operations on a row set. These operations fall into the
following categories:
• data access – an execution plan can read a row set from a view, graph, or literals where a
view can access the triple index or the cross-product of the co-occurrence of range index
values in documents.
• row set modification – an execution plan can filter with where, order by, group, project
with select, and limit a row set to yield a modified row set.
• row set composition – an execution plan can combine multiple row sets with join, union,
intersect, or except to yield a single row set.
• row result processing – an execution plan can specify operations to perform on the final
row set including mapping or reducing.
When a view is opened as an execution plan, it has a special property that has an object with a
property for each column in the view. The name of the property is the column name and the value
of the property is a name object. To prevent ambiguity for columns with the same name in
different views, the column name for a view column is prefixed with the view name and a
separating period.
The execution plan result can be serialized to CSV, line-oriented XML or JSON, depending on the
output mime type. For details on how to read an execution plan, see Execution Plan in the SQL
Data Modeling Guide.
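For example, the following returns the execution plan for a simple query rather than its results; a
minimal sketch over the employees view used earlier in this chapter:
const op = require('/MarkLogic/optic');
// explain() describes the plan instead of executing it.
op.fromView('main', 'employees')
  .orderBy('EmployeeID')
  .explain();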
Because the plan engine caches plans, parameterizing a plan executed previously is more efficient
than submitting a new plan.
For example, the following query uses a start and length parameter to set the offsetLimit and
an increment parameter to increment the value of EmployeeID.
const op = require('/MarkLogic/optic');
const employees = op.fromView('main', 'employees');
employees.offsetLimit(op.param('start'), op.param('length'))
.select(['EmployeeID',
op.as('incremented', op.add(op.col('EmployeeID'),
op.param('increment')))])
.result(null, {start:1, length:2, increment:1});
You can serialize a plan with export and save it as a document for later execution.
JavaScript:
const op = require('/MarkLogic/optic');
const EmployeePlan =
op.fromView('main', 'employees')
.select(['EmployeeID', 'FirstName', 'LastName'])
.orderBy('EmployeeID')
const planObj = EmployeePlan.export();
xdmp.documentInsert("plan.json", planObj)
To import an Optic query from a file and output the results, do the following:
JavaScript:
const op = require('/MarkLogic/optic');
op.import(cts.doc('plan.json').toObject())
.result();
XQuery:
op:import(fn:doc("plan.json")/node())
=> op:result()
The toSource functions render a saved plan definition as source code.
JavaScript:
const op = require('/MarkLogic/optic');
op.toSource(cts.doc('plan.json'))
XQuery:
op:to-source(fn:doc("plan.json"))
To sample rows, project a random-number column, order on it, and limit the result, as in the
following sketch:
const op = require('/MarkLogic/optic');
op.fromView(...)
.where(...column filters...)
.select([...projected columns...,
op.as('randomNumberCol',op.sql.rand())])
.orderBy('randomNumberCol')
.limit(10)
... optional inner or left joins on other accessors ...
... optional select expressions constructing column values from multiple
accessors ...
... optional grouping on rows from other accessors ...
.result();
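For instance, a concrete version of the sketch over the employees view, with no joins or grouping:
const op = require('/MarkLogic/optic');
// Order on a random column, then keep the first 10 rows.
op.fromView('main', 'employees')
  .select(['EmployeeID', 'FirstName', 'LastName',
           op.as('randomNumberCol', op.sql.rand())])
  .orderBy('randomNumberCol')
  .limit(10)
  .result();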
The technique also works for the op.fromSQL() accessor when each row is produced from a single
document.
The technique also works for the op.fromTriples() or op.fromSPARQL() accessors when each
result is produced from a single document.
Consider the example of image classification. A simple machine learning model can work like
this: convert the image into a matrix of pixel values x, then multiply it by another matrix W. If the
result Wx is larger than a Threshold, it’s a cat; otherwise it’s not. For the model to succeed, it needs
result Wx is larger than a Threshold, it’s a cat, otherwise it’s not. For the model to succeed, it needs
labeled training data of images. The model starts with a totally random matrix W, and produces
output on all training images. It will make lots of mistakes, and for every mistake it makes, it
adjusts W so that the output Wx is closer to the ground truth label. The precise amount of adjustment
of W is determined through a process called error back propagation. In the example described
here, the computation is a single matrix multiplication; however, in real-world applications,
you can have hundreds of layers of computations, with millions of different W parameters.
20.2 Terms
The material in this guide assumes you are familiar with the basic concepts of machine learning.
Some terms have ambiguous popular definitions, so they are described below.
Deep Learning: A subset of machine learning which makes the computation of neural networks
feasible.
Accuracy: A metric by which one can examine how good the machine learning model is. In terms
of a confusion matrix, accuracy is the ratio of correctly predicted classes to the total classes
predicted:
accuracy = (true positives + true negatives) / (total number of predictions)
Autoregression: A time series model that uses observations from previous time steps as input to a
regression equation to predict the value at the next time step. The autoregressive model specifies
that the output variable depends linearly on its own previous values. In this technique, input
variables are taken as observations at previous time steps, called lag variables.
For example, we can predict the value for the next time step (t+1) given the observations at the
last two time steps (t-1 and t-2). As a regression model, this would look as follows:
X(t+1) = b0 + b1*X(t-1) + b2*X(t-2)
Since the regression model uses data from the same input variable at previous time steps, it is
referred to as an autoregression.
Back Propagation: In neural networks, if the estimated output is far away from the actual output
(high error), we update the biases and weights based on the error. This weight and bias updating
process is known as back propagation. Back-propagation (BP) algorithms work by determining
the loss (or error) at the output and then propagating it back into the network. The weights are
updated to minimize the error resulting from each neuron. The first step in minimizing the error
is to determine the gradient (derivatives) of each node with respect to the final output.
Bayes' Theorem: Bayes’ theorem is used to calculate the conditional probability: the probability
of an event ‘B’ occurring given that the related event ‘A’ has already occurred:
P(B|A) = P(A|B) * P(B) / P(A)
For example, a clinic might use it to estimate the probability that a patient visiting the clinic has
cancer, given the patient’s test results.
Datasets: Training data is used to train a model: the ML model sees that data and learns to detect
patterns or determine which features are most important during prediction. Test data is used once
the final model is chosen, to simulate the model’s behavior on completely unseen data, i.e. data
points that weren’t used in building the model or even in deciding which model to choose.
Neural Network: Neural networks are a very wide family of machine learning models. The main
idea behind them is to mimic the behavior of a human brain when processing data. Just like the
networks connecting real neurons in the human brain, artificial neural networks are composed of
layers. Each layer is a set of neurons, all of which are responsible for detecting different things. A
neural network processes data sequentially, which means that only the first layer is directly
connected to the input. All subsequent layers detect features based on the output of a previous
layer, which enables the model to learn more and more complex patterns in data as the number of
layers increases. When the number of layers increases rapidly, the model is often called a deep
learning model. It is difficult to determine a specific number of layers above which a network is
considered deep; 10 years ago it used to be 3, and now it is around 20.
There are many types of neural networks. A list of the most common can be found at
https://en.wikipedia.org/wiki/Types_of_artificial_neural_networks.
Threshold: Most of the time the value of a threshold is obtained through training. The
initial value can be chosen randomly, for example 2.2, then the training
algorithm finds out that most of the predictions are wrong (cats classified
as dogs), then the training algorithm adjusts the value of the threshold, so
that the prediction can be more accurate.
Machine learning approaches fall into three broad categories:
• Supervised Learning
• Unsupervised Learning
• Reinforcement Learning
Embedding ONNX support in MarkLogic offers several benefits:
1. Different development teams throughout your enterprise may each use any Machine
Learning stack of their choice to create their models. They may then export these models to
the ONNX format and use them all within a MarkLogic application.
2. In some cases, they can use their models as they are, because ONNX currently has native
support for PyTorch, CNTK, MXNet, and Caffe2.
4. By using ONNX on MarkLogic, your Machine Learning applications are safe from vendor
lock-in.
ONNX is an open format to represent deep learning models. With ONNX, AI developers can more
easily move models between state-of-the-art tools and choose the combination that is best for
them. ONNX is developed and supported by a community of partners.
It is an open source project with an MIT license, with its development led by Microsoft.
A machine learning model can be represented as a computation network, where nodes in the
network represent mathematical operations (operators). There are many different machine
learning frameworks out there (tensorflow, PyTorch, MXNet, CNTK, etc), all of which have their
own representation of a computation network. You cannot simply load a model trained by
PyTorch into tensorflow to perform inference. This creates barriers in collaboration. ONNX is
designed to solve this problem. Although different frameworks have different representation of a
model, they use a very similar set of operators. After all, they are all based on the same
mathematical concepts. ONNX supports a wide set of operators, and has both official and
unofficial converters for other frameworks. For example, a tensorflow-onnx converter can take a
tensorflow model, traverse its computation network (it's just a graph), and reconstruct the graph,
replacing each operator with its ONNX equivalent. Ideally, if all operators supported by
tensorflow were also supported by ONNX, we could have a perfect converter, able to convert any
tensorflow model to ONNX format. However, this is not the case for most machine learning
frameworks. All these frameworks are constantly adding new operators (with some being highly
specialized), and it's very hard to keep up with all frameworks. ONNX is under active
development, with new operator support added in each release, trying to catch up with the
superset of all operators supported by all frameworks.
ONNX runtime is a high-efficiency inference engine for ONNX models. Per its github page:
ONNX Runtime is a performance-focused complete scoring engine for Open Neural Network Exchange
(ONNX) models, with an open extensible architecture to continually address the latest developments in AI
and Deep Learning. ONNX Runtime stays up to date with the ONNX standard with complete
implementation of all ONNX operators, and supports all ONNX releases (1.2+) with both future and
backwards compatibility.
ONNX runtime's capability can be summarized as: (1) loading an ONNX model, (2) accepting
input values, and (3) performing inference efficiently on the available hardware.
For (1), ONNX runtime supports loading a model from the filesystem or from a byte array in
memory, which is convenient for us. For (2), we need to construct values in CPU memory. For
(3), ONNX runtime automatically uses the accelerators and runtimes available on the host
machine. An abstraction of a runtime is called an execution provider. Current execution providers
include CUDA, TensorRT, Intel MKL, etc. ONNX runtime partitions the computation network
(the model) into subgraphs, and runs each subgraph on the most efficient execution provider. A
default fallback execution provider (the CPU) is guaranteed to be able to run all operators, so that
even if no special accelerator or runtime (GPU, etc.) exists, ONNX runtime can still perform
inference on an ONNX model, albeit at a much slower speed.
Beginning with version 10.0-3, MarkLogic server includes version 1.0.0 of the ONNX runtime.
We chose to expose a very small subset of the C API of onnxruntime representing the core
functionality. The rest of the C APIs are implemented as options passed to those core APIs.
• Security
• Limitations
20.6.3 Security
The onnxruntime does not read or write to the database or the file system.
The ort built-in functions are protected by execute privileges, which are assigned to the ort-user
role. A user must have the ort-user role to execute these functions.
20.6.4 Limitations
We do not support custom operators, due to ONNX runtime listing them as “Experimental APIs”.
There is no distributed inference in the ONNX runtime. This is partly because an inference
session runs relatively fast: the runtime performs just one forward pass of the model, without
auto-differentiation, and with no need for millions of iterations. In addition, multiple inference
sessions can be executed under a single ONNX runtime.
Download a pretrained SqueezeNet model from:
https://s3.amazonaws.com/onnx-model-zoo/squeezenet/squeezenet1.1/squeezenet1.1.onnx
Then use Query Console to load it into the Documents database: select Documents as the
Database and JavaScript as the Query Type, and run the following:
declareUpdate();
xdmp.documentLoad('c:\\temp\\squeezenet1.1.onnx',
{
uri : '/squeezenet.onnx',
permissions : xdmp.defaultPermissions(),
format : 'binary'
});
Using the Query Console, select Documents as the Database and JavaScript as the Query Type
and run the following query to load a model, define some runtime values, and perform an
evaluation:
'use strict';
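// The session setup below is a sketch (the exact ort builtin calls are
// assumed); it opens the stored model and collects its input metadata:
const session = ort.session(cts.doc('/squeezenet.onnx'));
const inputCount = ort.sessionInputCount(session);
const inputNames = [];
const inputTypes = [];
for (let i = 0; i < inputCount; i++) {
  inputNames.push(ort.sessionInputName(session, i));
  inputTypes.push(ort.sessionInputType(session, i));
}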
var inputValues = [];
for (let i = 0; i < inputCount; i++) {
  // Number of elements for this input = product of its shape dimensions.
  let p = 1;
  for (let j = 0; j < inputTypes[i]["shape"].length; j++) {
    p *= inputTypes[i]["shape"][j];
  }
  // Generate arbitrary input data of the right length.
  const data = [];
  for (let j = 0; j < p; j++) {
    data.push(j);
  }
  inputValues.push(ort.value(data, inputTypes[i]["shape"], "float"));
}
var inputMap = {};
for (let i = 0; i < inputCount; i++) {
  inputMap[inputNames[i]] = inputValues[i];
}
ort.run(session, inputMap)
The query returns a map of output values, for example:
{
"softmaxout_1": "OrtValue(Shape:[1, 1000, 1, 1], Type: FLOAT)"
}
In XQuery, the input values can be constructed as follows:
let $input-values :=
for $i in (1 to $input-count)
(: generate some arbitrary input data. :)
let $data := (1 to fn:fold-left(function($a, $b) { $a * $b }, 1,
map:get($input-types, "shape")))
return ort:value($data, map:get($input-types, "shape"),
map:get($input-types, "tensor-type"))
Before reading this section, it is strongly advised that you become familiar with PyTorch and the
official PyTorch documentation on ONNX conversion first: guide 1, guide 2.
Exporting a PyTorch model to ONNX format can fail for a few reasons, including:
• unsupported operators
• control flow
• PyTorch internal bugs
For unsupported operators, you can either wait for them to be added to PyTorch, or you can add
them yourself. In many cases, this is easier than you think. For example, in the following example,
we need the operator bitwise-not, but it's not supported in PyTorch 1.4.0. A simple Google search reveals
that support for this operator is already in the master branch of PyTorch, it just didn't make it to
the latest official release (1.4.0). Simply adding the following code to the file /Library/
Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/onnx/
symbolic_opset9.py (this path is different on different operating systems/python installs):
@parse_args('v')
def bitwise_not(g, inp):
if inp.type().scalarType() != 'Bool':
return _unimplemented("bitwise_not", "non-bool tensor")
return g.op("Not", inp)
For PyTorch internal bugs, you can either fix them yourself or wait for the PyTorch team to fix
them. Fortunately, this case is very rare.
We will look at this example: Text Summarization with Bert. We will convert this particular PyTorch
model to ONNX format, completely from scratch.
PyTorch's default, tracer-based ONNX export records the operations executed on a sample input,
so it cannot capture control flow. To solve this issue, PyTorch has another method, completely
different from the tracer-based method, for exporting models with control flow: a script-based
method. Intuitively, what happens is that the model source code is 'compiled' and analyzed. The
PyTorch 'compiler' will correctly capture any control flow, and correctly export the model to
ONNX format. This sounds like a proper solution to the problem; however, the script-based
method currently has significant limitations on language feature support in the model source
code, meaning that there are certain Python language features (for example, lambda) you cannot
use when defining your model. Unless the model is coded with exporting to ONNX in mind, it is
generally very difficult to rewrite the model source code to comply with the requirements of a
script-based method.
MarkLogic is a document database, so we naturally want to work with models that handle text.
Unfortunately, almost all models that handle text contain control flow (with a small number of
exceptions), because most models construct the output in a recursive/iterative way (for example,
for each word in the input document, generate the next output word). This makes exporting these
PyTorch models to ONNX more challenging.
Fortunately, with a good understanding of the model and the exporting mechanism, some coding,
and ever-growing ONNX operator support, we can convert lots of text-handling models to
ONNX.
Install Python 3 if you don't already have it; it almost certainly comes with pip. Note that macOS
users and some Linux users need to make sure they are using the correct Python, since the
operating system comes with one pre-installed. For this particular task, we need at least Python 3.6.
Clone the git repo for the paper, then install the prerequisites by executing pip install -r
requirements.txt from the repo root.
Although "torch==1.1.0" is specified, we still want to try the latest PyTorch (1.4.0 as of this
writing) first, due to possibly better ONNX operator coverage, and overall improved
functionality. If the newest version of PyTorch failed, we then revert to the version specified in the
requirements. You can install the latest PyTorch here.
Now follow the instructions in the git repo to download the pretrained models and the
training/testing datasets. We will be using CNN/DM BertExtAbs, the abstractive summarization
model based on BERT, trained on the CNN/DM dataset. For datasets, we use the prepared data.
After downloading and decompressing those files, move the model file to the models directory, and
move the datasets to the bert_data directory. After those steps, in addition to the cloned source code,
your models directory should contain a file model_step_148000.pt, and your bert_data directory
should contain lots of files with names similar to cnndm.test.0.bert.pt.
We are now ready to edit the source code to add a function to export the model to ONNX format.
import torch
from models import data_loader, model_builder
from models.data_loader import load_dataset
from models.model_builder import AbsSummarizer

def onnx_export(args):
    device = "cpu"
    # load the checkpoint and re-apply the training-time model flags
    checkpoint = torch.load(
        args.test_from, map_location=lambda storage, loc: storage)
    opt = vars(checkpoint['opt'])
    for k in opt.keys():
        if (k in model_flags):  # model_flags is defined in the original source
            setattr(args, k, opt[k])
    # build the model the same way the summarizer does
    model = AbsSummarizer(args, device, checkpoint)
    model.eval()
    test_iter = data_loader.Dataloader(
        args,
        load_dataset(args, 'test', shuffle=False),
        args.test_batch_size,
        device,
        shuffle=False,
        is_test=True)
    # use the first batch of real input data as the dummy input; the tuple
    # layout is dictated by the AbsSummarizer class's forward function
    for input_data in test_iter:
        dummy_input = (
            input_data.src.index_select(0, torch.tensor([0])),
            input_data.tgt.index_select(0, torch.tensor([0])),
            input_data.segs.index_select(0, torch.tensor([0])),
            input_data.clss.index_select(0, torch.tensor([0])),
            input_data.mask_src.index_select(0, torch.tensor([0])),
            input_data.mask_tgt.index_select(0, torch.tensor([0])),
            input_data.mask_cls.index_select(0, torch.tensor([0]))
        )
        torch.onnx.export(
            model,
            dummy_input,
            "AbsSummarizer.onnx",
            opset_version=11)
        break
The gist of the above code is to load the model just as when doing summarization from raw text
and, using the first batch of input data as dummy input, export the model to ONNX format. The
construction of dummy_input is dictated by the AbsSummarizer class's forward function. Every
PyTorch model has a forward function, whose signature determines the input and output of
the model. We extract the required input data from the first batch, feed it to the ONNX
exporter, and try to export the model as an ONNX model.
Run
This is an easy fix: do as the error message suggests, fix the code, and try again.
Looking at the definition of the AbsSummarizer class in model_builder.py, you will notice that the
model returns two outputs, one of which is None. That's our culprit! Simply delete the None and
try again.
This time it's successful! The command finishes without error, and there is a new 843 MB file,
AbsSummarizer.onnx, in our src directory. However, notice that we do get a couple of warnings:
Warnings like these are pretty self-explanatory: a variable is being treated as a constant. When
you run the exported model with a different set of inputs, the result will not change; it will still be
the result based on the input we used during exporting. Just as in the control-flow case, this
renders the exported model completely useless!
To get around this issue, use torch.index_select instead of converting a torch.tensor to a Python
index. Note that different scenarios require different fixes; index_select is just the one that works
in this case. So the code in question:
becomes
Doing the same for the other warning, we can now export the base AbsSummarizer model to
ONNX format warning-free.
We now know that the base model, without post-processing, can be exported successfully.
However, notice that the definition of the base model only does a single round of computation,
generating one 'word' of the output summarization. In order to generate the full summarization,
we need to imitate the predictor.translate function call, to construct a real working ONNX
summarization model.
Here max_length is the maximum length (in words) of the summarization, and step is the length
of the current work-in-progress summarization. Recall that to export control flow we can use the
script-based exporter, but since this piece of code contains many advanced Python features that
are not supported by the script-based exporter, this option becomes impractical (though still
possible; you can always rewrite the code from scratch).
From here on there is no official way to proceed. In this particular case we choose to export two
models, one representing initialization plus the first loop iteration, the other representing the loop
body. We take the control flow outside of the model, to be handled by application code (in other
words, by XQuery or JavaScript in MarkLogic). In this case, the original application (pseudo)code
transforms from a single ort.run:
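A sketch of the transformation, with illustrative session and helper names:
// before: a single ort.run call produces the complete output
let outputs = ort.run(session, inputs)

// after: run the init model once (initialization plus the first loop
// iteration), then iterate the loop-body model, feeding each iteration's
// outputs back in as its inputs
let stepOutputs = ort.run(initSession, initInputs)
for (let step = 1; step < maxStep; step++) {
  stepOutputs = ort.run(loopBodySession, makeLoopBodyInputs(stepOutputs))
}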
class InitLoopModel(nn.Module):
def __init__(self, args, device, checkpoint):
super(InitLoopModel, self).__init__()
self.args = args
self.device = device
self.bert = Bert(args.large, args.temp_dir, args.finetune_bert)
self.vocab_size = self.bert.model.config.vocab_size
tgt_embeddings = nn.Embedding(
self.vocab_size, self.bert.model.config.hidden_size,
padding_idx=0)
self.decoder = TransformerDecoder(self.args.dec_layers,
self.args.dec_hidden_size,
heads=self.args.dec_heads,
d_ff=self.args.dec_ff_size,
dropout=self.args.dec_dropout,
embeddings=tgt_embeddings)
self.generator = get_generator(
self.vocab_size, self.args.dec_hidden_size, device)
self.generator[0].weight = self.decoder.embeddings.weight
self.load_state_dict(checkpoint['model'], strict=False)
self.to(device)
class LoopBodyModel(nn.Module):
def __init__(self, args, device, checkpoint):
super(LoopBodyModel, self).__init__()
self.args = args
self.device = device
self.bert = Bert(args.large, args.temp_dir, args.finetune_bert)
self.vocab_size = self.bert.model.config.vocab_size
tgt_embeddings = nn.Embedding(
self.vocab_size, self.bert.model.config.hidden_size,
padding_idx=0)
self.decoder = TransformerDecoder(self.args.dec_layers,
self.args.dec_hidden_size,
heads=self.args.dec_heads,
d_ff=self.args.dec_ff_size,
dropout=self.args.dec_dropout,
embeddings=tgt_embeddings)
self.generator = get_generator(
self.vocab_size, self.args.dec_hidden_size, device)
self.generator[0].weight = self.decoder.embeddings.weight
self.load_state_dict(checkpoint['model'], strict=False)
self.to(device)
'use strict';
function whitespace_tokenize(s) {
  // split the input on single spaces
  return s.split(" ");
}
function postprocess(s) {
  // merge BERT wordpieces ("wo ##rd" -> "word"), strip the special
  // tokens used by the model, and collapse extra whitespace
  s = s.replace(/ ##/g, "");
  s = s.replace(/\[unused0\]/g, "");
  s = s.replace(/\[unused1\]/g, "");
  s = s.replace(/\[unused2\]/g, "");
  s = s.replace(/\[unused3\]/g, "");
  s = s.replace(/\[PAD\]/g, "");
  s = s.replace(/ +/g, " ");
  s = s.trim();
  return s;
}
photograph of her was taken during the event and was then circulated
online in Iran. \"They are very sensitive about the hijab when we are
representing Iran in international events and even sometimes they
send a person with the team to control our hijab,\" Bayat told CNN
Sport in a phone interview Tuesday. The headscarf, or the hijab, has
been a mandatory part of women's dress in Iran since the 1979 Islamic
revolution but, in recent years, some women have mounted opposition
and staged protests about headwear rules. Bayat said she had been
wearing a headscarf at the tournament but that certain camera angles
had made it look like she was not. \"If I come back to Iran, I think
there are a few possibilities. It is highly possible that they arrest
me [...] or it is possible that they invalidate my passport,\" added
Bayat. \"I think they want to make an example of me.\" The
photographs were taken at the first stage of the chess championship
in Shanghai, China, but Bayat has since flown to Vladivostok, Russia,
for the second leg between Ju Wenjun and Aleksandra Goryachkina. She
was left \"panicked and shocked\" when she became aware of the
reaction in Iran after checking her phone in the hotel room. The 32-
year-old said she felt helpless as websites reportedly condemned her
for what some described as protesting the country's compulsory law.
Subsequently, Bayat has decided to no longer wear the headscarf.
\"I'm not wearing it anymore because what is the point? I was just
tolerating it, I don't believe in the hijab,\" she added. \"People
must be free to choose to wear what they want, and I was only wearing
the hijab because I live in Iran and I had to wear it. I had no other
choice.\" Bayat says she sought help from the country's chess
federation. She says the federation told her to post an apology on
her social media channels. She agreed under the condition that the
federation would guarantee her safety but she said they refused. \"My
husband is in Iran, my parents are in Iran, all my family members are
in Iran. I don't have anyone else outside of Iran. I don't know what
to say, this is a very hard situation,\" she said. CNN contacted the
Iranian Chess Federation on Tuesday but has yet to receive a
response."
let processed = preprocess(article, vocab)
let src = processed[0]
let segs = processed[1]
let loopBodyInputs = {}
for (let i = 0; i < names.length; i++) {
loopBodyInputs[names[i] + "_in"] = initOutputs[names[i]]
}
let step = 0
let maxStep = 50
let loopBodyOutputs
let result
let minLengthVal = ort.value([20], [1], "INT64")
shohreh bayat says she fears arrest after a photograph of her was
circulated online
21.5 Conclusion
Above is just one example of converting a state-of-the-art PyTorch NLP model to ONNX. It is
true that the conversion is not a one-click solution; it requires a rather good understanding of
PyTorch and the model itself, plus some non-trivial problem-solving through debugging and
coding. However, this should be expected given the complex nature of the model. BERT is a very
significant step forward for NLP and is very widely used; it is actually used in Google search
today. Also, this model was not authored with conversion to ONNX in mind, which makes the job
more difficult. Given the deep integration of PyTorch and ONNX, if the author of a model writes
code with ONNX in mind, the conversion process is much smoother.
Again, the code in this example is far from optimal or even idiomatic. This is just one way to
make it work, as a proof of concept. With a better understanding of PyTorch and the model, there
would definitely be much better solutions.
A summary of the above working code is available as a git patch to the original source code.
This chapter describes how to work with JSON in MarkLogic Server, and includes the following
sections:
• Document Properties
MarkLogic Server supports JSON documents. You can use JSON to store documents or to deliver
results to a client, whether or not the data started out as JSON. The following are some highlights
of the MarkLogic JSON support:
• You can perform document operations and searches on JSON documents within
MarkLogic Server using JavaScript, XQuery, or XSLT. You can perform document
operations and searches on JSON documents from client applications using the Node.js,
Java, and REST Client APIs.
• The client APIs all have options to return data as JSON, making it easy for client-side
application developers to interact with data from MarkLogic.
• The REST Client API and the REST Management API accept both JSON and XML input.
For example, you can specify queries and configuration information in either format.
• The MarkLogic client APIs provide full support for loading and querying JSON
documents. This allows for fine-grained access to the JSON documents, as well as the
ability to search and facet on JSON content.
• You can easily transform data from JSON to XML or from XML to JSON. There is a rich
set of APIs to do these transformations with a large amount of flexibility as to the
specification of the transformed XML and/or the specification of the transformed JSON.
The supporting low-level APIs are built into MarkLogic Server, allowing for extremely
fast transformations.
For a JSON document, the nodes below the document node represent JSON objects, arrays, and
text, number, boolean, and null values. Only JSON documents contain object, array, number,
boolean, and null node types.
For example, the following picture shows a JSON object and its tree representation when stored in
the database as a JSON document. (If the object were an in-memory construct rather than a
document, the root document node would not be present.)
{ "property1" : "value",
  "property2" : [ 1, 2 ],
  "property3" : true,
  "property4" : null
}
[Figure: the corresponding node tree, rooted at a document node whose child is an object node with name: (none) and node kind: object.]
The name of a node is the name of the innermost enclosing JSON property. For example, in the
node tree above, "property2" is the name of both the array node and each of the array member
nodes. Nodes that do not have an enclosing property are unnamed nodes. For example, the
following array node has no name, so neither do its members. Therefore, when you try to get the
name of the node in XQuery using fn:node-name, an empty sequence is returned.
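(: a minimal sketch: the constructed array has no enclosing property :)
let $array := array-node { ("v1", "v2") }
return xdmp:describe(fn:node-name($array))
(: ==> () :)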
• What is XPath?
http://www.w3.org/TR/xpath20/#id-sequence-expressions
For more details, see XPath Quick Reference in the XQuery and XSLT Reference Guide.
If you want to use node traversal to explore what is selected by an XPath expression, you can use
one of the following patterns as a template in Query Console:
(: XQuery -- a minimal sketch of such a template :)
let $node := xdmp:unquote('{"a": 1}')
return xdmp:describe($node/a)

// JavaScript -- a minimal sketch of such a template
const node = fn.head(xdmp.unquote('{"a": 1}'));
xdmp.describe(node.xpath('/a'));
The results are wrapped in a call to xdmp:describe in XQuery and xdmp.describe in JavaScript to
clearly illustrate the result type, independent of Query Console formatting.
Note that in XQuery you can apply an XPath expression directly to a node, but in JavaScript, you
must use the Node.xpath method. For example, $node/a vs. node.xpath('/a/b'). Note also that
the xpath method returns a Sequence in JavaScript, so you may need to iterate over the results
when using this method in your application.
For example, if you want to explore what happens when you apply the XPath expression "/a" to a
JSON node that contains the data {"a": 1}, you can run one of the following examples in Query
Console:
(: XQuery :)
$node/a ==> number-node { 1 }
$node/a/data() ==> 1
// JavaScript
node.xpath('/a') ==> number-node { 1 }
node.xpath('/a/data()') ==> 1
You can use node test operators to limit selected nodes by node type or by node type and name;
for details, see “Node Test Operators” on page 382.
A JSON array is treated like a sequence by default when accessed with XPath. For details, see
“Selecting Arrays and Array Members” on page 384.
{ "a": {
"b": "value",
"c1": 1,
"c2": 2,
"d": null,
"e": {
"f": true,
"g": ["v1", "v2", "v3"]
}
} }
Then the table below demonstrates which node is selected by each of several XPath expressions
applied to the object. You can try these examples in Query Console using the pattern described in
"Exploring the XPath Examples" on page 380.
$node/a/b "value"
number-node{ 1 }
$node/a/c1/data() 1
$node/a/d null-node { }
$node/a/e/f/data() true
$node/a/e/g[2] "v2"
$node/a              { "b": "value", "c1": 1, "c2": 2, ... }
• object-node()
• array-node()
• number-node()
• boolean-node()
• null-node()
• text()
All node test operators accept an optional string parameter for specifying a JSON property name.
For example, the following expression matches any boolean node named “a”:
boolean-node("a")
{ "a": {
"b": "value",
"c1": 1,
"c2": 2,
"d": null,
"e": {
"f": true,
"g": ["v1", "v2", "v3"]
}
} }
Then the following table contains several examples of XPath expressions using node test operators.
You can try these examples in Query Console using the pattern described in "Exploring the XPath
Examples" on page 380.
$node/a/number-node()/data()   (1,2)
$node/a/number-node("c2")      number-node{2}
$node/a/text("b")              "value"
To access an array as an array rather than a sequence, use the array-node() operator. To access an
item in an array rather than the associated node, use the data() operator.
Note: Unlike native JavaScript arrays, sequence (array) indices in XPath expressions
begin with 1, rather than 0. That is, an XPath expression such as /someArray[1]
addresses the first item in a JSON array.
Note that the “descendant-or-self” axis (“//”) can select both the array node and the array items if
you are not explicit about the node type. For example, given a document of the following form:
{ "a" : [ 1, 2] }
The XPath expression //node("a") selects both the array node and two number nodes for the item
values 1 and 2.
{
"a": [ 1, 2 ],
"b": [ 3, 4, [ 5, 6 ] ],
"c": [
{ "c1": "cv1" },
{ "c2": "cv2" }
]
}
Then the following table contains examples of XPath expressions accessing arrays and array
members. You can try these examples in Query Console using the pattern described in "Exploring
the XPath Examples" on page 380.
(number-node{1}, number-node{2})
(1, 2)
[1, 2]
[1, 2]
$node/a[1] number-node{1}
$node/a[1]/data() 1
(3, 4, 5, 6)
[5, 6]
$node/b[3] number-node{5}
{ "c1": "cv1" }
number-node{2}
[1, 2]
Indexing for JSON documents differs from that of XML documents in the following ways:
• JSON string values are represented as text nodes and indexed as text, just like XML text
nodes. However, JSON number, boolean, and null values are indexed separately, rather
than being indexed as text.
• Each JSON array member value is considered a value of the associated property. For
example, a document containing {"a":[1,2]} matches a value query for a property "a"
with a value of 1 and a value query for a property "a" with a value of 2.
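A minimal sketch of this behavior:

xdmp:document-insert("array.json", xdmp:unquote('{"a": [1, 2]}'));
(: both of the following searches match the document just inserted :)
cts:search(fn:doc(), cts:json-property-value-query("a", 1)),
cts:search(fn:doc(), cts:json-property-value-query("a", 2))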
A complex XML node has a string value for indexing purposes that is the concatenation of the
text nodes of all its descendant nodes. There is no equivalent string value for a JSON object node.
For example, in XML, a field value query for “John Smith” matches the following document if
the field is defined on the path /name and excludes “middle”. The value of the field for the
following document is “John Smith” because of the concatenation of the included text nodes.
<name>
<first>John</first>
<middle>NMI</middle>
<last>Smith</last>
</name>
You cannot construct a field that behaves the same way for JSON because there is no
concatenation. The same field over the following JSON document has values “John” and
“Smith”, not “John Smith”.
{ "name": {
"first": "John",
"middle": "NMI",
"last": "Smith"
}
Also, field value and field range queries do not traverse into JSON object nodes. For example, if a
path field named “myField” is defined for the path /a/b, then the following query matches the
document “my.json”:
xdmp:document-insert("my.json",
xdmp:unquote('{"a": {"b": "value"}}'));
cts:search(fn:doc(), cts:field-value-query("myField", "value"));
However, the following query will not match "my.json" because /a/b is an object node
({"c": "value"}), not a string value.
xdmp:document-insert("my.json",
xdmp:unquote('{"a": {"b": {"c": "value"}}}'));
cts:search(fn:doc(), cts:field-value-query("myField", "value"));
To learn more about fields, see Overview of Fields in the Administrator’s Guide.
• Geospatial Data
• Semantic Data
• A JSON property whose value is an array of numbers, where the first 2 members represent
the latitude and longitude (or vice versa) and all other members are ignored. For example,
the value of the coordinates property of the following object:
{"location": {"desc": "somewhere", "coordinates": [37.52, 122.25]}}
• A pair of JSON properties, one whose value represents latitude, and the other whose value
represents the longitude. For example: {"lat": 37.52, "lon": 122.25}
• A string containing two numbers separated by a space. For example, "37.52 122.25".
You can create indexes on geospatial data in JSON documents, and you can search geospatial data
using queries such as cts:json-property-geospatial-query,
cts:json-property-child-geospatial-query, cts:json-property-pair-geospatial-query, and
cts:path-geospatial-query (or their JavaScript equivalents). The Node.js, Java, and REST
Client APIs support similar queries.
Note that GeoJSON regions all have the same structure (a type and a coordinates property). Only
the type property differentiates between kinds of regions, such as points vs. polygons. Therefore,
when defining indexes for GeoJSON data, we recommend you use a geospatial path range index
that includes a predicate on type in the path expression.
For example, to define an index that covers only GeoJSON points ("type": "Point"), you can use
a path expression similar to the following when defining the index. Then, search using
cts:path-geospatial-query or the equivalent structured query (see geo-path-query in the Search
Developer’s Guide).
/whatever/geometry[type="Point"]/array-node("coordinates")
A JSON string value in a recognized date-time format can be used in the same contexts as the
equivalent text in XML. MarkLogic Server recognizes the date and time formats defined by the
XML Schema, based on ISO-8601 conventions. For details, see the following document:
http://www.w3.org/TR/xmlschema-2/#isoformats
To create range indexes on a temporal data type, the data must be stored in your JSON documents
as string values in the ISO-8601 standard XSD date format. For example, if your JSON
documents contain data of the following form:
{ "theDate" : "2014-04-21T13:00:01Z" }
Then you can define an element range index on theDate with dateTime as the “element” type, and
perform queries on the theDate that take advantage of temporal data characteristics, rather than
just treating the data as a string.
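For example, a sketch of such a query, assuming the dateTime range index on theDate described above:

cts:search(fn:doc(),
  cts:json-property-range-query("theDate", ">=",
    xs:dateTime("2014-01-01T00:00:00Z")))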
{ "triple": {
"subject": IRI_STRING,
"predicate": IRI_STRING,
"object": STRING_PRESENTATION_OF_RDF_VALUE
} }
For example:
{
"my" : "data",
"triple" : {
"subject": "http://example.org/ns/dir/js",
"predicate": "http://xmlns.com/foaf/0.1/firstname",
"object": {"value": "John", "datatype": "xs:string"}
}
}
For more details, see Loading Semantic Triples in the Semantics Developer’s Guide.
• 0x0
• 0xD800 - 0xDFFF
• 0xFFFE, 0xFFFF, and characters above 0x10FFFF
When MarkLogic serializes an xs:unsignedLong value that is too large for JSON to represent, the
value is serialized as a string. Otherwise, the value is serialized as a number. This means that the
same operation can result in either a string value or a number, depending on the input.
For example, the following code produces a JSON object with one property value that is a number
and one property value that is a string:
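A sketch of such code; any construction that yields these two values behaves the same way:

xdmp:to-json(
  map:map()
  => map:with("notTooBig", xs:unsignedLong("1111111111111"))
  => map:with("tooBig", xs:unsignedLong("11111111111111111"))
)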
The object node created by this code looks like the following, where "notTooBig" is a number
node and "tooBig" is a text node.
{"notTooBig":1111111111111, "tooBig":"11111111111111111"}
Code that works with serialized JSON data that may contain large numbers must account for this
possibility.
Interfaces are also available to work with JSON documents using Java, JavaScript, and REST. See
the following guides for details:
• object-node
• array-node
• number-node
• boolean-node
• null-node
• text
Each constructor creates a JSON node. Constructors can be nested inside one another to build
arbitrarily complex structures. JSON property names and values can be literals or XQuery
expressions.
The table below provides several examples of JSON constructor expressions, along with the
corresponding serialized JSON.
{ "key" : { object-node {
"child1" : "one", "key" : object-node {
"child2" : "two" "child1" : "one",
} } "child2" : "two"
}
}
You can also create JSON nodes from string using xdmp:unquote. For example, the following
creates a JSON document that contains {"a": "b"}.
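xdmp:unquote('{"a": "b"}')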
You can also create a JSON document node using xdmp:to-json, which accepts as input all the
nodes types you can create with a constructor, as well as a map:map representation of name-value
pairs. For details, see “Building a JSON Object from a Map” on page 393 and “Low-Level JSON
XQuery APIs and Primitive Types” on page 409.
For example, the following code creates a document node that contains a JSON object with one
property with atomic type (“a”), one property with array type (“b”), and one property with
object-node type:
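(: a sketch; map-based input to xdmp:to-json is one of several ways to build this :)
xdmp:to-json(
  map:map()
  => map:with("a", 1)
  => map:with("b", json:to-array((2, 3, 4)))
  => map:with("c", map:map() => map:with("c1", "one")
                             => map:with("c2", "two"))
)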
{ "a":1,
"b":[2, 3, 4],
"c":{"c1":"one", "c2":"two"}
}
A json:object is a special type of map:map that represents a JSON object. You can combine map
operations and json:* functions. The following example uses both json:* functions such as
json:object and json:to-array and map:map operations like map:with.
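(: a minimal sketch combining json:* constructors with map operations;
   the property names are illustrative :)
let $obj := json:object()
  => map:with("name", "example")
  => map:with("values", json:to-array((1, 2, 3)))
return xdmp:to-json($obj)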
To use JSON node constructors instead, see “Constructing JSON Nodes” on page 391.
fn:data( 1
number-node { 1 }
)
fn:data( true
boolean-node { true }
)
fn:data( ()
null-node { }
)
You can probe this behavior using a query similar to the following in Query Console:
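(: a sketch of such a probe :)
for $n in (number-node { 1 },
           boolean-node { fn:true() },
           null-node { },
           array-node { (1, 2) })
return xdmp:describe(fn:data($n))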
In the above example, the fn:data call is wrapped in xdmp:describe to more accurately represent
the in-memory type. If you omit the xdmp:describe wrapper, serialization of the value for display
purposes can obscure the type. For example, the array example returns [1,2] if you remove the
xdmp:describe wrapper, rather than a <json:array/> node.
• xdmp:document-insert
• xdmp:document-load
• xdmp:document-delete
• xdmp:node-replace
• xdmp:node-insert-child
• xdmp:node-insert-before
Use the node constructors to build JSON nodes programmatically; for details, see “Constructing
JSON Nodes” on page 391.
Note: A node to be inserted into an object node must have a name. A node to be inserted
in an array node can be unnamed.
Use xdmp:unquote to convert serialized JSON into a node for insertion into the database. For
example:
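xdmp:document-insert("my.json",
  xdmp:unquote('{"a": {"b": "value"}}'))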
Similar document operations are available through the Java, JavaScript, and REST APIs. You can
also use the mlcp command line tool for loading JSON documents into the database.
Notice that when inserting one object into another, you must pass the named object node to the
node operation. That is, if inserting a node of the form object-node {"c": "NEW"} you cannot pass
that expression directly into an operation like xdmp:node-insert-child. Rather, you must pass in
the associated named node, object-node {"c": "NEW"}/c.
For example, assuming fn:doc("my.json")/a/b targets an object node, the following generates an
XDMP-CHILDUNNAMED error:
xdmp:node-insert-after(
fn:doc("my.json")/a/b,
object-node { "c": "NEW" }
)
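Passing the associated named node instead succeeds:

xdmp:node-insert-after(
  fn:doc("my.json")/a/b,
  object-node { "c": "NEW" }/c
)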
You can also search JSON documents with string query, structured query, and QBE through the
client APIs. For details, see the following references:
• cts:json-property-word-query
• cts:json-property-value-query
• cts:json-property-range-query
• cts:json-property-scope-query
• cts:json-property-geospatial-query
• cts:json-property-child-geospatial-query
• cts:json-property-pair-geospatial-query
• cts:json-property-words
• cts:json-property-word-match
• cts:values
• cts:value-match
Constructors for JSON index references are also available, such as cts:json-property-reference.
The Search API and MarkLogic client APIs (REST, Java, Node.js) also support queries on JSON
documents using string and structured queries and QBE. For details, see the following:
If the parent node is an XML element node, the query is serialized as XML. If the parent node is a
JSON object or array node, the query is serialized as JSON. Otherwise, a query is serialized based
on the calling language. That is, as JSON when called from JavaScript and as XML otherwise.
If the value of a JSON query property is an array and the array is empty, the property is omitted
from the serialized query. If the value of a property is an array containing only one item, it is still
serialized as an array.
MarkLogic provides a toObject method on JSON document nodes for easy conversion from a
JSON node to its natural JavaScript representation. However, you still need to be aware of the
document model described in “How MarkLogic Represents JSON Documents” on page 378.
Using a NodeBuilder is optional when passing a JSON object node or array node into a function
that expects a node because MarkLogic implicitly converts native JavaScript object and array
parameter values into JSON object nodes and array nodes. For example:
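declareUpdate();
// a minimal sketch: the URI is illustrative; the native object is
// implicitly converted to a JSON object node during parameter passing
xdmp.documentInsert('/example.json', {a: [1, 2]});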
For more details on programmatically constructing nodes, see NodeBuilder API in the JavaScript
Reference Guide.
1. Use the toObject method of the document node to convert it into an in-memory JavaScript
representation.
The following example applies the toObject technique to a document with an object node root.
The example inserts, updates, and deletes JSON properties on a mutable object, and then updates
the original document using xdmp.nodeReplace.
declareUpdate();
// a minimal sketch, assuming a document /my.json with an object root:
// obtain a mutable copy, modify it, then write it back
const doc = cts.doc('/my.json');
const obj = doc.toObject();
obj.newProperty = 'new value';  // insert a property
delete obj.oldProperty;         // delete a property (name is illustrative)
xdmp.nodeReplace(doc, obj);
The example uses xdmp.nodeReplace rather than xdmp.documentInsert to update the original
document because xdmp.nodeReplace preserves document metadata such as collections and
permissions. However, you can use whatever update/insert function meets the needs of your
application.
You can use this technique even when the root node of the document is not an object node. The
following example applies the same toObject technique to update a document with an array node
as its root.
declareUpdate();
If you attempt to modify a JSON document node without converting it to its mutable JavaScript
representation using toObject, you will get an error. For example, the following code would
produce an error because it attempts to change the value of a property named “a” on the
immutable document node:
declareUpdate();
This technique applies even if the root node of the document is not an object node. For example,
the following code retrieves the first item from a JSON document whose root node is an array
node:
The following example uses a JSON document whose root node is a number node:
If you cannot read the entire document into memory for some reason, you can also access its
contents through the document node root property. For example:
For more details, see Document Object in the JavaScript Reference Guide.
• xdmp.nodeReplace
• xdmp.nodeInsertChild
• xdmp.nodeInsertBefore
• xdmp.nodeInsertAfter
• xdmp.nodeDelete
You can only use the insert and replace functions in contexts in which you can construct a suitable
node to insert or replace. For example, inserting or updating array items, or updating the value of
an existing JSON property.
You cannot construct a node that represents just a JSON property, so you cannot use
xdmp.nodeInsertAfter, xdmp.nodeInsertChild, or xdmp.nodeInsertBefore to insert a new JSON
property into an object node. Instead, use the technique described in “Updating JSON Documents
from JavaScript” on page 399.
To replace the value of an array node, you must address the array node, not one of the array items.
For example, use a path expression with an array-node or node expression in its leaf step. For
more details, see “Selecting Arrays and Array Members” on page 384.
Keep the following points in mind when passing new or replacement nodes into the update
functions. For more details, see “Constructing JSON Nodes in JavaScript” on page 399.
• You are not required to programmatically construct object and array nodes because
MarkLogic implicitly converts a native JavaScript object or array into its corresponding
JSON node during parameter passing.
• Any other node type must be constructed. For example, use a NodeBuilder to create a
number, boolean, text, or null node.
The following examples illustrate using the node update functions on JSON documents. For more
information on using XPath on JSON documents, see “Traversing JSON Documents Using
XPath” on page 379.
// assumes someDoc is a JSON document node whose root contains an
// array property named "target"
xdmp.nodeInsertAfter(someDoc.xpath('/target[1]'),
  new NodeBuilder().addNumber(10).toNode());
The JSON XQuery library module converts documents to and from JSON and XML. To ensure
fast transformations, it uses the underlying low-level APIs described in “Low-Level JSON
XQuery APIs and Primitive Types” on page 409. This section describes how to use the XQuery
library and includes the following parts:
• Conversion Philosophy
• Make it easy and fast to perform simple conversions using default conversion parameters.
• Make it possible to do custom conversions, allowing custom JSON and/or custom XML as
either output or input.
• Enable both fast key/value lookup and fine-grained search on JSON documents.
• Make it possible to perform semantically lossless conversions.
Because of these goals, the defaults are set up to make conversion both fast and easy. Custom
conversion is possible, but will take a little more effort.
• XQuery: json:transform-from-json
• Server-Side JavaScript: json.transformFromJson
The main function to convert from XML to JSON is:
• XQuery: json:transform-to-json
• Server-Side JavaScript: json.transformToJson
For examples, see the following sections:
• basic
• full
• custom
A strategy is a piece of configuration that tells the JSON conversion library how you want the
conversion to behave. The basic conversion strategy is designed for data that starts out as JSON,
is converted to XML, and is then converted back to JSON again. The full strategy is designed for
data that starts out as XML, is converted to JSON, and is then converted back to XML again. The
custom strategy allows you to customize the JSON and/or XML output.
To use any strategy except the basic strategy, you can set and check the configuration options
using the following functions:
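• XQuery: json:config and json:check-config
• Server-Side JavaScript: json.config and json.checkConfig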
let $x := json:transform-from-json( $j )
let $jx := json:transform-to-json( $x )
return ($x, $jx)
=>
<json type="object" xmlns="http://marklogic.com/xdmp/json/basic">
<blah type="string">first value</blah>
<second_20_Key type="array">
<item type="string">first item</item>
<item type="string">second item</item>
<item type="null"/>
<item type="string">third item</item>
<item type="boolean">false</item>
</second_20_Key>
<thirdKey type="number">3</thirdKey>
<fourthKey type="object">
<subKey type="string">sub value</subKey>
<boolKey type="boolean">true</boolKey>
<empty type="null"/>
</fourthKey>
<fifthKey type="null"/>
<sixthKey type="array"/>
</json>
{"blah":"first value",
"second Key":["first item","second item",null,"third item",false],
"thirdKey":3,
"fourthKey":{"subKey":"sub value", "boolKey":true, "empty":null},
"fifthKey":null, "sixthKey":[]}
Suppose the database contains the following XML document with the URI “booklist.xml”:
<BOOKLIST>
<BOOKS>
<ITEM CAT="MMP">
<TITLE>Pride and Prejudice</TITLE>
<AUTHOR>Jane Austen</AUTHOR>
<PUBLISHER>Modern Library</PUBLISHER>
<PUB-DATE>2002-12-31</PUB-DATE>
<LANGUAGE>English</LANGUAGE>
<PRICE>4.95</PRICE>
<QUANTITY>187</QUANTITY>
<ISBN>0679601686</ISBN>
<PAGES>352</PAGES>
<DIMENSIONS UNIT="in">8.3 5.7 1.1</DIMENSIONS>
<WEIGHT UNIT="oz">6.1</WEIGHT>
</ITEM>
</BOOKS>
</BOOKLIST>
Then the following code converts the contents from XML to JSON and back again.
let $c := json:config("full")
=> map:with("whitespace", "ignore"),
$j := json:transform-to-json(fn:doc("booklist.xml", $c),
$xj := json:transform-from-json($j,$c)
return ($j, $xj)
{"BOOKLIST": { "_children": [
{"BOOKS": { "_children": [ {
"ITEM": {
"_attributes": { "CAT": "MMP" },
"_children": [
{"TITLE": { "_children": [ "Pride and Prejudice" ] } },
The following code is an XQuery example. The equivalent Server-Side JavaScript example
follows.
let $c := json:config("custom")
=> map:with("whitespace", "ignore")
=> map:with("array-element-names",
xs:QName("search:bucket"))
=> map:with("attribute-names",
("facet","type","ge","lt","name","ns" ))
=> map:with("text-value", "label")
=> map:with("camel-case", fn:true())
=> map:with("element-namespace",
"http://marklogic.com/appservices/search")
let $j := json:transform-to-json($doc ,$c)
let $x := json:transform-from-json($j,$c)
return ($j, $x)
'use strict';
const json = require('/MarkLogic/json/json.xqy');
{"options":
{"constraint":
{"name":"decade",
"range":{"facet":true, "type":"xs:gYear",
"bucket":[{"ge":"1970", "lt":"1980", "name":"1970s",
"label":"1970s"},
{"ge":"1980", "lt":"1990", "name":"1980s","label":"1980s"},
{"ge":"1990", "lt":"2000", "name":"1990s", "label":"1990s"},
{"ge":"2000", "name":"2000s", "label":"2000s"}],
"facetOption":"limit=10",
"attribute":{"ns":"", "name":"year"},
"element":{"ns":"https:\/\/2.gy-118.workers.dev/:443\/http\/marklogic.com\/wikipedia",
"name":"nominee"}
}}}}
<options xmlns="http://marklogic.com/appservices/search">
<constraint name="decade">
<range facet="true" type="xs:gYear">
<bucket ge="1970" lt="1980" name="1970s">1970s</bucket>
<bucket ge="1980" lt="1990" name="1980s">1980s</bucket>
<bucket ge="1990" lt="2000" name="1990s">1990s</bucket>
<bucket ge="2000" name="2000s">2000s</bucket>
<facet-option>limit=10</facet-option>
<attribute ns="" name="year"/>
<element ns="http://marklogic.com/wikipedia" name="nominee"/>
</range>
</constraint>
</options>
• xdmp:to-json
• xdmp:from-json
These APIs make the data available to XQuery as a map, and serialize the XML data as a JSON
string. Most XQuery types are serialized to JSON in a way that they can be round-tripped
(serialized to JSON and parsed from JSON back into a series of items in the XQuery data model)
without any loss, but some types will not round-trip without loss. For example, an xs:dateTime
value will serialize to a JSON string, but that same string would have to be cast back into an
xs:dateTime value in XQuery in order for it to be equivalent to its original. The high-level API
can take care of most of those problems.
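For example, a sketch of the dateTime round trip (the value is illustrative):

let $d := xs:dateTime("2014-04-21T13:00:01Z")
let $j := xdmp:to-json($d)  (: serializes as the JSON string "2014-04-21T13:00:01Z" :)
return xs:dateTime(xdmp:from-json($j)) eq $d  (: true only after casting back :)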
There are also a set of low-level APIs that are extensions to the XML data model, allowing
lossless data translations for things such as arrays and sequences of sequences, neither of which
exists in the XML data model. The following functions support these data model translations:
• json:array
• json:array-pop
• json:array-push
• json:array-resize
• json:array-values
• json:object
• json:object-define
• json:set-item-at
• json:subarray
• json:to-array
Additionally, there are primitive XQuery types that extend the XQuery/XML data model to
specify a JSON object (json:object), a JSON array (json:array), and a type to make it easy to
serialize an xs:string to a JSON string when passed to xdmp:to-json (json:unquotedString).
To further improve performance of the transformations to and from JSON, the following built-ins
are used to translate strings to XML NCNames:
• xdmp:decode-from-NCName
• xdmp:encode-for-NCName
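For example, the basic strategy output shown earlier encodes the JSON property name "second Key" as the XML element name second_20_Key:

xdmp:encode-for-NCName("second Key"),     (: ==> "second_20_Key" :)
xdmp:decode-from-NCName("second_20_Key")  (: ==> "second Key" :)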
The low-level JSON APIs, supporting XQuery functions, and primitive types are the building
blocks for making efficient and useful applications that consume and/or produce JSON. While
these APIs are used for JSON translation to and from XML, they are at a lower level and can be
used for any kind of data translation. Most applications will not need the low-level APIs, however;
instead, use the XQuery library API (and the REST and Java Client APIs that are built on top of
it), described in "Converting JSON to XML and XML to JSON" on page 403.
For the signatures and description of each function, see the MarkLogic XQuery and XSLT
Function Reference.
let $json :=
  xdmp:unquote('[{"some-prop":45683}, "this is a string", 123]')
return
  xdmp:from-json($json)
(:
returns:
[{"some-prop":45683}, "this is a string", 123]
:)
For details on maps, see "Using the map Functions to Create Name-Value Maps" on page 157.
json:array(
<json:array xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:json="http://marklogic.com/xdmp/json">
<json:value>
<json:object>
<json:entry key="some-prop">
<json:value xsi:type="xs:integer">45683
</json:value>
</json:entry>
</json:object>
</json:value>
Note that what is shown above is the serialization of the json:array XML element. You can also
use some or all of the items in the XML data model. For example, consider the following, which
adds to the json:object based on the other values (and prints out the resulting JSON string):
(: This query uses the map functions to modify the first json:object
   in the json:array. :)
In the above query, the first item ($items[1]) returned from the xdmp:from-json call is a
json:array; you can use the map functions to modify it, and the query then returns the modified
json:array. You can treat a json:array like a map; the main difference is that the json:array is
ordered and the map:map is not. For details on maps, see "Using the map Functions to Create
Name-Value Maps" on page 157.
For details, see Loading Content Using MarkLogic Content Pump in the Loading Content Into
MarkLogic Server Guide.
You can also use the Java Client API to create JSON documents that represent POJO domain
objects. For details, see POJO Data Binding Interface in the Java Application Developer’s Guide.
For details, see Loading Documents into the Database in the Node.js Application Developer’s Guide.
{
"key1":"value1",
"key2":{
"a":"value2a",
"b":"value2b"
}
}
Run the following curl command to use the documents endpoint to create a JSON document:
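A sketch of such a command; the user, password, port, and document URI are illustrative:

$ curl --anyauth --user user:password -X PUT \
    -H "Content-Type: application/json" -d @my.json \
    'http://localhost:8000/v1/documents?uri=/example.json'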
You can then retrieve the document from the REST Client API as follows:
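$ curl --anyauth --user user:password -X GET \
    'http://localhost:8000/v1/documents?uri=/example.json'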
For details about the REST Client API, see REST Application Developer’s Guide.
MarkLogic Server includes pre-commit and post-commit triggers. This chapter describes how
triggers work in MarkLogic Server and includes the following sections:
• Overview of Triggers
• Trigger Events
• Trigger Scope
Creating a robust trigger framework is complex, especially if your triggers need to maintain state
or recover gracefully from service interruptions. Before creating your own custom triggers,
consider using the Content Processing Framework. CPF provides a rich, reliable framework
which abstracts most of the event management complexity from your application. For more
information, see “Triggers and the Content Processing Framework” on page 417.
Note: Triggers run as the user performing the update transaction that caused the trigger.
The programmer is free to call amped library functions in triggers if the use case
requires certain roles to work correctly. The only exception is the
database-online trigger, because in that case there is no triggering update
transaction, and hence no user. For the database-online trigger, the user is specified
by the trigger itself. Some customization of CPF installation scripts is required in
order to ensure that this event runs as an existing administrative user.
[Figure: trigger configuration. The content database holds the monitored content (<myContent>); its configured triggers database holds the trigger definition (<trgr:trigger>, whose <trgr:module> element references the trigger action); and the module database holds the trigger action module (for example, action.xqy).]
Usually, the content, triggers and module databases are different physical databases, but there is
no requirement that they be separate. A database named Triggers is installed by MarkLogic
Server for your convenience, but any database may serve as the content, trigger, or module
database. The choice is dependent on the needs of your application.
For example, if you want your triggers backed up with the content to which they apply, you might
store trigger definitions and their action modules in your content database. If you want to share a
trigger action module across triggers that apply to multiple content databases, you would use a
separate trigger modules database.
Note: Most trigger API function calls must be evaluated in the context of the triggers
database.
In a pipeline used with the Content Processing Framework, a trigger fires after one stage is
complete (from a document update, for example) and then the XQuery module specified in the
trigger is executed. When it completes, the next trigger in the pipeline fires, and so on. In this way,
you can create complex pipelines to process documents.
The Status Change Handling pipeline, installed when you install Content Processing in a
database, creates and manages all of the triggers needed for your content processing applications,
so it is not necessary to directly create or manage any triggers in your content applications.
When you use the Content Processing Framework instead of writing your own triggers:
• Pre-Commit Triggers
• Post-Commit Triggers
Therefore, pre-commit triggers and the modules from which the triggers are invoked execute in a
single context; if the trigger fails to complete for some reason (if it throws an exception, for
example), then the entire transaction, including the triggering transaction, is rolled back to the
point before the transaction began its evaluation.
This transactional integrity is useful when you are doing something that does not make sense to
break up into multiple asynchronous steps. For example, if you have an application that has a
trigger that fires when a document is created, and the document needs to have an initial property
set on it so that some subsequent processing can know what state the document is in, then it makes
sense that the creation of the document and the setting of the initial property occur as a single
transaction. As a single transaction (using a pre-commit trigger), if something failed while adding
the property, the document creation would fail and the application could deal with that failure. If it
were not a single transaction, then it is possible to get in a situation where the document is
created, but the initial property was never created, leaving the content processing application in a
state where it does not know what to do with the new document.
When a post-commit trigger spawns an XQuery module, it is put in the queue on the task server.
The task server maintains this queue of tasks, and initiates each task in the order it was received.
The task server has multiple threads to service the queue. There is one task server per group, and
you can set task server parameters in the Admin Interface under Groups > group_name > Task
Server.
Because post-commit triggers are asynchronous, the code that calls them must not rely on
something in the trigger module to maintain data consistency. For example, the state transitions in
the Content Processing Framework code use post-commit triggers. The code that initiates the
triggering event updates the property state before calling the trigger, allowing a consistent state in
case the trigger code does not complete for some reason. Asynchronous processing has many
advantages for state processing, as each state might take some time to complete. Asynchronous
processing (using post-commit triggers) allows you to build applications that will not lose all of
the processing that has already occurred if something happens in the middle of processing your
pipeline. When the system is available again, the Content Processing Framework will simply
continue the processing where it left off.
• document create
• document update
• document delete
• any property change (does not include MarkLogic Server-controlled properties such as
last-modified and directory)
• The trigger scope defines the set of documents to which the event applies. Use
trgr:*-scope functions such as trgr:directory-scope to create this piece. For more
information, see “Trigger Scope” on page 420.
• The content condition defines the triggering operation, such as document creation, update
or deletion, or property modification. Use the trgr:*-content functions such as
trgr:document-content to create this piece.
To watch more than one operation, you must use multiple trigger events and define
multiple triggers.
• The timing indicator defines when the trigger action occurs relative to the transaction that
matches the event condition, either pre-commit or post-commit. Use trgr:*-commit
functions such as trgr:post-commit to create this piece. For more information, see
“Pre-Commit Versus Post-Commit Triggers” on page 418.
The content database to which an event applies is not an explicit part of the event or the trigger
definition. Instead, the association is made through the triggers database configured for the
content database.
Whether the module that the trigger invokes commits before or after the module that produced the
triggering event depends upon whether the trigger is a pre-commit or post-commit trigger.
Pre-commit triggers in MarkLogic Server listen for the event and then invoke the trigger module
before the transaction commits, making the entire process a single transaction that either all
completes or all fails (although the module invoked from a pre-commit trigger sees the updates
from the triggering event).
Post-commit triggers in MarkLogic Server initiate after the event is committed, and the module
that the trigger spawns is run in a separate transaction from the one that updated the document.
For example, a trigger on a document update event occurs after the transaction that updates the
document commits to the database.
Because the post-commit trigger module runs in a separate transaction from the one that caused
the trigger to spawn the module (for example, the create or update event), the trigger module
transaction cannot, in the event of a transaction failure, automatically roll back to the original
state of the document (that is, the state before the update that caused the trigger to fire). If this will
leave your document in an inconsistent state, then the application must have logic to handle this
state.
For more information on pre- and post-commit triggers, see “Pre-Commit Versus Post-Commit
Triggers” on page 418.
A document trigger scope specifies a given document URI, and the trigger responds to the
specified trigger events only on that document.
A collection trigger scope specifies a given collection URI, and the trigger responds to the
specified trigger events for any document in the specified collection.
A directory scope represents documents that are in a specified directory, either in the immediate
directory (depth of 1); or in the immediate or any recursive subdirectory of the specified directory.
For example, if you have a directory scope of the URI / (a forward-slash character) with a depth
of infinity, that means that any document in the database with a URI that begins with a
forward-slash character ( / ) will fire a trigger with this scope upon the specified trigger event.
Note that in this directory example, a document called hello.xml is not included in this trigger
scope (because it is not in the / directory), while documents with the URIs /hello.xml or
/mydir/hello.xml are included.
For post-commit triggers, the module is spawned onto the task server when the trigger is fired
(when the event completes). The spawned module is evaluated in an analogous way to calling
xdmp:spawn in an XQuery statement, and the module evaluates asynchronously on the task server.
Once the post-commit trigger module is spawned, it waits in the task server queue until it is
evaluated. When the spawned module evaluates, it is run as its own transaction. Under normal
circumstances the modules in the task server queue will initiate in the order in which they were
added to the queue. Because the task server queue does not persist in the event of a system
shutdown, however, the modules in the task server queue are not guaranteed to run.
• trgr:uri as xs:string
• trgr:trigger as node()
The trgr:uri external variable is the URI of the document which caused the trigger to fire (it is
only available on triggers with data events, not on triggers with database online events). The
trgr:trigger external variable is the trigger XML node, which is stored in the triggers database
with the URI http://marklogic.com/xdmp/triggers/trigger_id, where trigger_id is the ID of the
trigger. You can use these external variables in the trigger module by declaring them in the prolog
as follows:
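xquery version "1.0-ml";
import module namespace trgr="http://marklogic.com/xdmp/triggers"
  at "/MarkLogic/triggers.xqy";

declare variable $trgr:uri as xs:string external;
declare variable $trgr:trigger as node() external;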
For real-world examples of XQuery code that creates triggers, see the
<install_dir>/Modules/MarkLogic/cpf/domains.xqy XQuery module file. The functions in this
module are used to create the needed triggers when you use the Admin Interface to create a
domain. For a sample trigger example, see “Simple Trigger Example” on page 423.
1. Use the Admin Interface to set up the database to use a triggers database. You can specify
any database as the triggers database. The following screenshot shows the database named
Documents as the content database and Triggers as the triggers database.
2. Create a trigger that listens for documents that are created under the directory /myDir/
with the following XQuery code. Note that this code must be evaluated against the triggers
database for the database in which your content is stored.
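The trigger name and description below are illustrative; the module location matches the next step:

xquery version "1.0-ml";
import module namespace trgr="http://marklogic.com/xdmp/triggers"
  at "/MarkLogic/triggers.xqy";
trgr:create-trigger(
  "myTrigger",
  "Simple trigger example",
  trgr:trigger-data-event(
    trgr:directory-scope("/myDir/", "1"),
    trgr:document-content("create"),
    trgr:post-commit()),
  trgr:trigger-module(
    xdmp:database("Documents"),
    "/modules/",
    "log.xqy"),
  fn:true(),
  xdmp:default-permissions())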
This code returns the ID of the trigger. The trigger document you just created is stored in
the document with the URI http://marklogic.com/xdmp/triggers/trigger_id, where
trigger_id is the ID of the trigger you just created.
3. Load a document whose contents are the XQuery module of the trigger action. This is the
module that is spawned when the previously specified create trigger fires. For this example,
the URI of the module must be /modules/log.xqy in the database named Documents (from
the trgr:trigger-module part of the trgr:create-trigger code above).
Note that the document you load, because it is an XQuery document, must be loaded as a
text document and it must have execute permissions. For example, create a trigger module
in the Documents database by evaluating the following XQuery against the modules
database for the App Server in which the triggering actions will be evaluated:
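A sketch of such an insert; the log message is illustrative, and the module document must carry execute permission for the role that fires the trigger:

xdmp:document-insert("/modules/log.xqy", text {
'xquery version "1.0-ml";
import module namespace trgr="http://marklogic.com/xdmp/triggers"
  at "/MarkLogic/triggers.xqy";
declare variable $trgr:uri as xs:string external;
declare variable $trgr:trigger as node() external;
xdmp:log(fn:concat("*** Document ", $trgr:uri, " was created. ***"))'
})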
4. The trigger will now fire when you create documents in the /myDir/ directory of the
database named Documents. For example, the following insert fires the trigger:
xdmp:document-insert("/myDir/test.xml", <test/>)
Note: This example only fires the trigger when the document is created. If you want it to
fire a trigger when the document is updated, you will need a separate trigger with a
trgr:document-content of "modify".
When a pre-commit trigger fires, its actions are part of the same transaction. Therefore, any
updates performed in the trigger will not fire the same trigger again. To do so is to guarantee
trigger storms, which generally result in an XDMP-MAXTRIGGERDEPTH error message.
In the following example, we create a trigger that calls a module when a document in the /storm/
directory is modified in the database. The triggered module attempts to update the document with
a new child node. This triggers another update of the document, which triggers another update,
and so on, ad infinitum. The end result is an XDMP-MAXTRIGGERDEPTH error message and no updates
to the document.
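1. In the Modules database, create the trigger module /triggers/storm.xqy. The module logs
each firing and then updates the triggering document: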
if (xdmp:database() eq xdmp:database("Modules"))
then ()
else error((), 'NOTMODULESDB', xdmp:database()) ,
xdmp:log(text {{
'storm:',
$trgr:uri,
xdmp:describe($trgr:trigger)
}}) ,
2. In the Triggers database, create the following trigger to call the storm.xqy module each
time a document in the /storm/ directory in the database is modified:
if (xdmp:database() eq xdmp:database("Triggers"))
then ()
else error((), 'NOTTRIGGERSDB', xdmp:database()) ,
trgr:create-trigger(
"storm",
"storm",
trgr:trigger-data-event(trgr:directory-scope("/storm/", "1"),
trgr:document-content("modify"),
trgr:pre-commit()),
trgr:trigger-module(
xdmp:database("Modules"),
"/triggers/",
"storm.xqy"),
fn:true(),
xdmp:default-permissions(),
fn:true() )
3. Now insert a document twice into any database that uses Triggers as its triggers database:
xdmp:document-insert('/storm/test', <test/> )
4. The second attempt to insert the document will fire the trigger, which will result in an
XDMP-MAXTRIGGERDEPTH error message and repeated messages in ErrorLog.txt that look
like the following:
If you encounter similar circumstances in your application and it’s not possible to modify your
application logic, you can avoid trigger storms by setting the $recursive parameter in the
trgr:create-trigger function to fn:false(). So your new trigger would look like:
trgr:create-trigger(
"storm",
"storm",
trgr:trigger-data-event(trgr:directory-scope("/storm/", "1"),
trgr:document-content("modify"),
trgr:pre-commit()),
trgr:trigger-module(
xdmp:database("Modules"),
"/triggers/",
"storm.xqy"),
fn:true(),
xdmp:default-permissions(),
fn:false() )
The result will be a single update to the document and no further recursion.
A native plugin is a C++ dynamically loaded library that provides one or more plugin
implementations to MarkLogic. This chapter covers how to create, install, and manage native
plugins.
The UDF interfaces define the extension points that can take advantage of a native plugin.
MarkLogic currently supports the following UDFs:
• AggregateUDF: Define a custom aggregate function. For more details, see “Aggregate
User-Defined Functions”.
• LexerUDF: Define a custom tokenizer for a language. For more details, see Using a
User-Defined Lexer Plugin in the Search Developer’s Guide.
• StemmerUDF: Define a custom stemmer for a language. For more details, see Using a
User-Defined Stemmer Plugin in the Search Developer’s Guide.
The implementation requirements for each UDF vary, but they all use the native plugin
mechanism for packaging, deployment, and versioning.
When you install a native plugin library, MarkLogic Server stores it in the Extensions database. If
the MarkLogic Server instance in which you install the plugin is part of a cluster, your plugin
library is automatically propagated to all the nodes in the cluster.
There can be a short delay between installing a plugin and having the new version available.
MarkLogic Server only checks for changes in plugin state about once per second. Once a change
is detected, the plugin is copied to hosts with an older version.
In addition, each host has a local cache from which to load the native library, and the cache cannot
be updated while a plugin is in use. Once the plugin cache starts refreshing, operations that try to
use a plugin are retried until the cache update completes.
MarkLogic Server loads plugins on-demand. A native plugin library is not dynamically loaded
until the first time an application calls a UDF implemented by the plugin. A plugin can only be
loaded or unloaded when no plugins are in use on a host.
Your plugin library must meet the following requirements:
• Compile your library with a C++ compiler and standard libraries compatible with
MarkLogic Server. This is necessary because C++ is not guaranteed to be binary
compatible across compiler versions.
• Compile your C++ code with the options your platform requires for creating shared
objects. For example, on Linux, compile with the -fPIC option.
• Build a 64-bit library (32-bit on Windows).
The sample plugin in marklogic_dir/Samples/NativePlugins includes a Makefile usable with
GNU make on all supported platforms. Use this makefile as the basis for building your own
plugins as it includes all the required compiler options.
The makefile builds a shared library, generates a manifest, and zips up the library and manifest
into an install package. The makefile is easily customized for your own plugin by changing a few
make variables at the beginning of the file:
PLUGIN_NAME = sampleplugin
PLUGIN_VERSION = 0.1
PLUGIN_PROVIDER = MarkLogic
PLUGIN_SRCS = \
SamplePlugin.cpp
You must build your native plugin with compiler and standard library versions compatible with
those used to build MarkLogic Server on your platform.
A native plugin install package is a zip file that contains:
• The plugin shared library.
• A plugin manifest file called manifest.xml. See “The Plugin Manifest” on page 435.
• Optionally, additional shared libraries required by the plugin implementation.
Including dependent libraries in your plugin zip file gives you explicit control over which library
versions are used by your plugin and ensures the dependent libraries are available to all nodes in
the cluster in which the plugin is installed.
The following example creates the plugin package sampleplugin.zip from the plugin
implementation, libsampleplugin.so, a dependent library, libdep.so, and the plugin manifest.
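$ zip sampleplugin.zip libsampleplugin.so libdep.so manifest.xml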
If the plugin contents are organized into subdirectories, include the subdirectories in the paths in
the manifest. For example, if the plugin components are organized as follows in the zip file:
$ unzip -l sampleplugin.zip
Archive: sampleplugin.zip
Length Date Time Name
-------- ---- ---- ----
28261 06-28-12 12:54 libsampleplugin.so
334 06-28-12 12:54 manifest.xml
0 06-28-12 12:54 deps/
28261 06-28-12 12:54 deps/libdep.so
-------- -------
56856 4 files
Then manifest.xml for this plugin must include deps/ in the dependent library path:
For example, the following code installs a native plugin contained in the file
/space/plugins/sampleplugin.zip. The relative plugin path in the Extensions directory is
“native”.
XQuery:
plugin:install-from-zip("native",
  xdmp:document-get("/space/plugins/sampleplugin.zip")/node())

Server-Side JavaScript:
plugin.installFromZip(
  'native',
  fn.head(
    xdmp.documentGet('/space/plugins/sampleplugin.zip')).root);
If the plugin was already installed on MarkLogic Server, the new version replaces the old.
An installed plugin is identified by its “path”. The path is of the form scope/plugin-id, where
scope is the first parameter to plugin:install-from-zip, and plugin-id is the ID in the <id/>
element of the plugin manifest. For example, if the manifest for the above plugin contains
<id>sampleplugin-id</id>, then the path is native/sampleplugin-id.
The plugin zip file can be anywhere on the filesystem when you install it, as long as the file is
readable by MarkLogic Server. The installation process deploys your plugin to the Extensions
database and creates a local on-disk cache inside your MarkLogic Server directory.
Installing or updating a native plugin on any host in a MarkLogic Server cluster updates the
plugin for the whole cluster. However, the new or updated plugin may not be available
immediately. For details, see “How MarkLogic Server Manages Native Plugins” on page 429.
For example, the following XQuery call uninstalls the plugin installed above:
plugin:uninstall("native", "sampleplugin-id")
The plugin is removed from the Extensions database and unloaded from memory on all nodes in
the cluster. There can be a slight delay before the plugin is uninstalled on all hosts. For details, see
“How MarkLogic Server Manages Native Plugins” on page 429.
Every C++ native plugin library must implement an extern "C" function called marklogicPlugin
to perform this load-time registration. The function interface is:
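A sketch of the declaration (see marklogic_dir/include/MarkLogic.h for the authoritative form):

extern "C" void marklogicPlugin(marklogic::Registry& r);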
When MarkLogic Server loads your plugin library, it calls marklogicPlugin so your plugin can
register itself. The exact requirements for registration depend on the interfaces implemented by
your plugin, but must include at least the following:
• Registering each UDF implemented by the plugin under the name applications will use to
invoke it.
• Setting the plugin version through Registry::version.
For example, the following code registers two AggregateUDF implementations. For a complete
example, see marklogic_dir/Samples/NativePlugins.
#include "MarkLogic.h"
using namespace marklogic;
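// A sketch of the registration function; the aggregate class names and the
// registration names are illustrative.
extern "C" void marklogicPlugin(Registry& r)
{
  r.version();                                   // default build-time version
  r.registerAggregate<FirstAggregate>("first");  // register each UDF by name
  r.registerAggregate<SecondAggregate>("second");
}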
When you deploy a new plugin version, both the old and new versions of the plugin can be
present in the cluster for a short time. If MarkLogic Server detects this state when your plugin is
used, MarkLogic Server reports XDMP-BADPLUGINVERSION and retries the operation until the plugin
versions synchronize.
Calling Registry::version with no arguments uses a default version constructed from the
compilation date and time (__DATE__ and __TIME__). This ensures the version number changes
every time you compile your plugin. The following example uses the default version number:
extern "C" void marklogicPlugin(Registry& r)
{
  r.version();
  ...
}
You can override this behavior by passing an explicit version to Registry::version. The version
must be a numeric value. For example:
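r.version(1);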
Note: Native plugin libraries are demand loaded when an application uses one of the
UDFs implemented by the plugin. Plugins that are installed but not yet loaded will
not appear in the host status.
2. Click the name of the host you want to monitor, either on the tree menu or the summary
page. The host summary page appears.
3. Click the Status tab at the top right. The host status page appears.
To examine loaded plugins programmatically, run a query in Query Console that retrieves the
host status.
You will see output similar to the following if there are plugins loaded. The XQuery code emits
XML. The JavaScript code emits a JavaScript object (pretty-printed as JSON by Query Console).
This output is the result of installing and loading the sample plugin in
MARKLOGIC_DIR/Samples/NativePlugins, which implements several aggregate UDFs (“max”,
“min”, etc.), a lexer UDF, and a stemmer UDF.
Server-Side JavaScript:
[{
  "path":"native/sampleplugin/libsampleplugin.so",
  "version":"356528850",
  "capabilities":[
    "max", "min_point", "min", "variance",
    "median-test", "max_dateTime", "max_string",
    "sample_lexer",
    "sample_stemmer"]
}]
You can use the same manifest on multiple platforms by specifying the native plugin library
without a file extension or, on Unix, the lib prefix. MarkLogic Server then forms the library name
in a platform-specific fashion; for example, a library specified as sampleplugin resolves to
libsampleplugin.so on Linux and sampleplugin.dll on Windows.
If the plugin package includes dependent libraries, list them in the <native> element. For
example:
The plugin-path is the same plugin library path you use when invoking the plugin. For example, if
you install the following plugin and its manifest specifies the plugin path as “sampleplugin”, then
the plugin-specific privilege would be
http://marklogic.com/xdmp/privileges/native-plugin/native/sampleplugin.
plugin:install-from-zip("native",
xdmp:document-get("/space/udf/sampleplugin.zip")/node())
The plugin-specific privilege is not pre-defined for you; you must create it. Once it exists,
MarkLogic Server enforces it.
The sample Makefile will lead you through compiling, linking, and packaging the native plugin.
The README.txt provides instructions for installing and exercising the plugin library.
This chapter describes how to create user-defined aggregate functions.
MarkLogic Server provides a C++ interface for defining your own aggregate functions. You build
your aggregate user-defined functions (UDFs) into a dynamically linked library, package it as a
native plugin, and install the plugin in MarkLogic Server. To learn more about native plugins, see
“Using Native Plugins” on page 428.
This chapter covers how to implement an aggregate UDF. For information on using aggregate
UDFs, see Using Aggregate User-Defined Functions in the Search Developer’s Guide.
• What is MapReduce?
You can explicitly leverage In-Database MapReduce efficiencies by using builtin and
user-defined aggregate functions. For details, see Using Aggregate Functions in the Search
Developer’s Guide.
Map tasks calculate intermediate results by passing the input data through a map function. Then,
the intermediate results are processed by reduce tasks to produce final results.
• In-database MapReduce distributes processing across a MarkLogic cluster when you use
qualifying functions, such as builtin or user-defined aggregate functions. For details, see
“How In-Database MapReduce Works” on page 439.
• External MapReduce distributes work across an Apache Hadoop cluster while using
MarkLogic Server as the data source or result repository. For details, see the MarkLogic
Connector for Hadoop Developer’s Guide.
MarkLogic Server stores data in structures called forests and stands. A large database is usually
stored in multiple forests. The forests can be on multiple hosts in a MarkLogic Server cluster.
Data in a forest can be stored in multiple stands. For more information on how MarkLogic Server
organizes content, see Understanding Forests in the Administrator’s Guide and Clustering in
MarkLogic Server in the Scalability, Availability, and Failover Guide.
1. An application calls an aggregate function, creating a job. The host that receives the
request is the originating e-node for the job.
2. The originating e-node distributes the work required by the job among the local and
remote forests of the target database. Each unit of work is a task in the job.
3. Each participating host runs map tasks in parallel to process data on that host. There is at
least one map task per forest that contains data needed by the job.
4. Each participating host runs reduce tasks to roll up the local per stand map results, then
returns this intermediate result to the originating e-node.
5. The originating e-node runs reduce tasks to roll up the results from each host.
6. The originating e-node runs a “finish” operation to produce the final result.
• Implementing AggregateUDF::map
• Implementing AggregateUDF::reduce
• Implementing AggregateUDF::finish
Note: An aggregate UDF runs in the same memory and process space as MarkLogic
Server, so errors in your plugin can crash MarkLogic Server. Before deploying an
aggregate UDF, read and understand “Using Native Plugins” on page 428.
1. Implement a subclass of marklogic::AggregateUDF.
2. Implement a factory function that creates instances of your aggregate UDF class.
3. Package your implementation into a native plugin. See “Packaging a Native Plugin” on
page 430.
4. Install the plugin. For details, see “Using Native Plugins” on page 428.
The table below summarizes the key methods of marklogic::AggregateUDF that you must
implement:
Method Name    Description
start Initialize the state of a job and process arguments. Called once per job, on the
originating e-node.
map Perform the map calculations. Called once per map task (at least once per stand
of the database containing target content). May be called on local and remote
objects. For example, in a mean aggregate, calculate a sum and count per stand.
reduce Perform reduce calculations, rolling up the map results. Called N-1 times,
where N = # of map tasks. For example, in a mean aggregate, calculate a total
sum and count across the entire input data set.
finish Generate the final results returned to the calling application. Called once per
job, on the originating e-node. For example, in a mean aggregate, calculate the
mean from the sum and count.
clone Create a copy of an aggregate UDF object. Called at least once per map task to
create an object to execute your map and reduce methods.
close Notify your implementation that a cloned object is no longer needed.
encode Serialize your aggregate UDF object so it can be transmitted to a remote host in
the cluster.
decode Deserialize your aggregate UDF object after it has been transmitted to/from a
remote host.
Use the marklogic::TupleIterator to access the input range index values. Store your map results
as members of the object on which map is invoked. Use the marklogic::Reporter for error
reporting and logging; see “Aggregate UDF Error Handling and Logging” on page 449.
The order of values within a tuple corresponds to the order of the range indexes in the invocation
of your aggregate UDF. The first index contributes the first value in each tuple, and so on. Empty
(null) tuple values are possible.
If you try to extract a value from a tuple into a C++ variable of incompatible type, MarkLogic
Server throws an exception. For details, see “Type Conversions in Aggregate UDFs” on page 451.
In the following example, the map method expects to work with 2-way co-occurrences of <name>
(string) and <zipcode> (int). Each tuple is a (name, zipcode) value pair. The name is the 0th item
in each tuple; the zipcode is the 1st item.
#include "MarkLogic.h"
using namespace marklogic;
...
void myAggregateUDF::map(TupleIterator& values, Reporter& r)
{
if (values.width() != 2) {
r.error("Unexpected number of range indexes.");
// does not return
}
for (; !values.done(); values.next()) {
if (!values.null(0) && !values.null(1)) {
String name;
int zipcode;
values.value(0, name);
values.value(1, zipcode);
// work with this tuple...
}
}
}
#include "MarkLogic.h"
using namespace marklogic;
...
RangeIndex::getOrder myAggregateUDF::getOrder() const
{
return RangeIndex::ASCENDING;
}
The reduce method has the following signature. Fold the data from the input AggregateUDF into
the object on which reduce is called. Use the Reporter to report errors and log messages; see
“Aggregate UDF Error Handling and Logging” on page 449.
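A sketch of the signature (see marklogic_dir/include/MarkLogic.h for the authoritative
declaration):

virtual void reduce(const AggregateUDF* otherTask, Reporter& r) = 0;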
MarkLogic Server repeatedly invokes reduce until all the map results are folded together, and
then invokes finish to produce the final result.
For example, consider an aggregate UDF that computes the arithmetic mean of a set of values.
The calculation requires a sum of the values and a count of the number of values. The map tasks
accumulate intermediate sums and counts on subsets of the data. When all reduce tasks complete,
one object on the e-node contains the sum and the count. MarkLogic Server then invokes finish
on this object to compute the mean.
For example, if the input range index contains the values 1-9, then the mean is 5 (45/9). The
following diagram shows the map-reduce-finish cycle if MarkLogic Server distributes the index
values across 3 map tasks as the sequences (1,2,3), (4,5), and (6,7,8,9):
In the flow below, each intermediate result is a (sum, count) pair:

map:    (1,2,3) -> (6,3)   (4,5) -> (9,2)   (6,7,8,9) -> (30,4)
reduce: (6,3) + (9,2) -> (15,5)
reduce: (15,5) + (30,4) -> (45,9)
finish: (45,9) -> 45/9 = 5
The following code snippet is an aggregate UDF that computes the mean of values from a range
index (sum/count). The map method (not shown) computes a sum and a count over a portion of the
range index and stores these values on the aggregate UDF object. The reduce method folds
together the sum and count from a pair of your aggregate UDF objects to eventually arrive at a
sum and count over all the values in the index:
#include "MarkLogic.h"
using namespace marklogic;
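// A sketch; the class name MeanUDF and the data members sum and count
// (for example, double sum; uint64_t count;) are illustrative.
void MeanUDF::reduce(const AggregateUDF* o, Reporter& r)
{
  // Fold the other task's partial results into this object.
  const MeanUDF* other = static_cast<const MeanUDF*>(o);
  sum += other->sum;
  count += other->count;
}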
Use OutputSequence::writeValue to add a value to the output sequence. To add a value that is a
key-value map, bracket paired calls to OutputSequence::writeMapKey and
OutputSequence::writeValue between OutputSequence::startMap and OutputSequence::endMap.
For example:
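A sketch, continuing the mean aggregate (the class and member names are illustrative):

void MeanUDF::finish(OutputSequence& os, Reporter& r)
{
  // Emit a single map value of the form { "mean": sum/count }.
  os.startMap();
  os.writeMapKey("mean");
  os.writeValue(sum / count);
  os.endMap();
}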
For information on how MarkLogic Server converts types between your C++ code and the calling
application, see “Type Conversions in Aggregate UDFs” on page 451.
// From MarkLogic.h
namespace marklogic {
  ...
}
The string passed to Registry::registerAggregate is the name applications use to invoke your
plugin. For example, as the second parameter to cts:aggregate in XQuery:
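A sketch (the element name is illustrative; the plugin path follows the scope/plugin-id form
described earlier):

cts:aggregate("native/sampleplugin-id", "ex1",
  cts:element-reference(xs:QName("myElement")))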
Or, as the value of the aggregate parameter to /values/{name} using the REST Client API:
GET /v1/values/theLexicon?aggregate=ex1&aggregatePath=pluginPath
The following example illustrates using the template function to register MyFirstAggregate with
the name “ex1” and the virtual member function to register a second aggregate that uses an object
factory, under the name “ex2”.
#include "MarkLogic.h"
using namespace marklogic;
...
AggregateUDF* mySecondAggregateFactory() {...}
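// A sketch: the template form registers MyFirstAggregate as "ex1"; the factory
// form (assuming a registerAggregate overload that accepts a factory function,
// per the description above) registers "ex2".
extern "C" void marklogicPlugin(Registry& r)
{
  r.version();
  r.registerAggregate<MyFirstAggregate>("ex1");
  r.registerAggregate("ex2", mySecondAggregateFactory);
}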
When a clone is no longer needed, such as at the end of a task or job, MarkLogic Server releases
it by calling AggregateUDF::close.
The clone and close methods of your aggregate UDF may be called many times per job.
The factory function is called whenever an application invokes your plugin. That is, once per call
to cts:aggregate (or the equivalent). Additional objects needed to execute map and reduce tasks
are created using AggregateUDF::clone.
The factory function must conform to the marklogic::AggregateFunction interface, shown below:
// From MarkLogic.h
namespace marklogic {
  ...
}
#include "MarkLogic.h"
using namespace marklogic;
...
AggregateUDF* myAggregateFactory() { ... }
The object created by your factory function and AggregateUDF::clone must persist until
MarkLogic Server calls your AggregateUDF::close method.
Use the following entry points to control the allocation and deallocation of your aggregate
UDF objects:
class AggregateUDF
{
public:
...
virtual void encode(Encoder&, Reporter&) = 0;
virtual void decode(Decoder&, Reporter&) = 0;
...
};
You must provide implementations of encode and decode that adhere to the following guidelines:
• Encode and decode the data members in exactly the same order.
• Encode all state your map, reduce, and finish methods need when running on a remote host.
The following example demonstrates how to encode/decode an aggregate UDF with 2 data
members, sum and count. Notice that the data members are encoded and decoded in the same
order.
#include "MarkLogic.h"
Report fatal errors using marklogic::Reporter::error. When you call Reporter::error, control
does not return to your code. The reporting task stops immediately, no additional related tasks are
created on that host, and the job stops prematurely. MarkLogic Server returns XDMP-UDFERR to the
application. Your error message is included in the XDMP-UDFERR error.
Note: The job does not halt immediately. The task that reports the error stops, but other
in-progress map and reduce tasks may still run to completion.
Report non-fatal errors and other messages using marklogic::Reporter::log. This method logs a
message to the MarkLogic Server error log, ErrorLog.txt, and returns control to your code. Most
methods of AggregateUDF have a marklogic::Reporter input parameter.
The following example aborts the analysis if the caller does not supply a required parameter and
logs a warning if the caller supplies extra parameters:
#include "MarkLogic.h"
using namespace marklogic;
...
void ExampleUDF::start(Sequence& arg, Reporter& r)
{
if (arg.done()) {
r.error("Required parameter not found.");
}
arg.value(target_);
arg.next();
if (!arg.done()) {
r.log(Reporter::Warning, "Ignoring extra parameters.");
}
}
From XQuery, pass an argument sequence in the 4th parameter of cts:aggregate. The following
example passes two arguments to the “count” aggregate UDF:
cts:aggregate(
  "native/samplePlugin",
  "count",
  cts:element-reference(xs:QName("name")),
  (arg1, arg2))
For a more complete example, see “Example: Passing Arguments to an Aggregate UDF” on
page 451.
class AggregateUDF
{
public:
...
virtual void start(Sequence& arg, Reporter&) = 0;
...
};
The Sequence class has methods for iterating over the argument values (next and done), checking
the type of the current argument (type), and extracting the current argument value as one of
several native types (value).
Type conversions are applied during value extraction. For details, see “Type Conversions in
Aggregate UDFs” on page 451.
If you need to propagate argument data to your map and reduce methods, copy the data to a data
member of the object on which start is invoked. Include the data member in your encode and
decode methods to ensure the data is available to remote map and reduce tasks.
The start method shown earlier extracts the argument value from the input Sequence and stores it
in the data member ExampleUDF::target_. The value is automatically propagated to all tasks in the
job when MarkLogic Server clones the object on which it invokes start.
All these interfaces (Sequence, TupleIterator, OutputSequence) provide methods for either
inserting or extracting values as C++ types. For details, see marklogic_dir/include/MarkLogic.h.
Where the C++ and XQuery types do not match exactly during value extraction, XQuery type
casting rules apply. If no conversion is available between two types, MarkLogic Server reports an
error such as XDMP-UDFBADCAST and aborts the job. For details on XQuery type casting, see:
http://www.w3.org/TR/xpath-functions/#Casting
For example, suppose the calling application passes the string "42" where your UDF expects an
integer. XQuery casting converts the value, so your C++ code can safely extract the argument
directly as an integral value:
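int target;
arg.value(target); // succeeds: "42" casts to 42 (the variable name is illustrative)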
If the application instead passes a non-numeric string such as "dog", the call to Sequence::value
raises an exception and stops the job.
XQuery Type              C++ Type
xs:int                   int
xs:unsignedInt           unsigned
xs:long                  int64_t
xs:unsignedLong          uint64_t
xs:float                 float
xs:double                double
xs:boolean               bool
xs:decimal               marklogic::Decimal
xs:dateTime              marklogic::DateTime
xs:time                  marklogic::Time
xs:date                  marklogic::Date
xs:gYearMonth            marklogic::GYearMonth
xs:gYear                 marklogic::GYear
xs:gMonth                marklogic::GMonth
xs:gDay                  marklogic::GDay
xs:yearMonthDuration     marklogic::YearMonthDuration
xs:dayTimeDuration       marklogic::DayTimeDuration
xs:string                marklogic::String
xs:anyURI                marklogic::String
cts:point                marklogic::Point
map:map                  marklogic::Map
item()*                  marklogic::Sequence
Redaction is the process of eliminating or obscuring portions of a document as you read it from
the database. For example, you can use redaction to eliminate or mask sensitive personal
information such as credit card numbers, phone numbers, or email addresses from documents.
This chapter describes redaction features you can use when reading a document from the
database.
• Introduction to Redaction
• Security Considerations
Term Definition
rule document A document containing exactly one redaction rule. Rule documents must
be installed in the schema database and be part of a collection before you
can use them to redact content. For details, see “Installing Redaction
Rules” on page 477.
rule collection A database collection that only includes rule documents. A rule must be
part of a collection before you can use it to redact documents.
redaction function A function used to modify content during redaction. A redaction rule
must include a redaction function specification. MarkLogic provides
several built-in redaction functions. You can also create user-defined
redaction functions. For details, see “Built-in Redaction Function
Reference” on page 483 and “User-Defined Redaction Functions” on
page 519.
source document A database document to which you apply one or more redaction rules.
Redacting a document creates an in-memory copy. The source document
is unmodified.
random masking A form of redaction in which the original value is replaced by a new,
random value. The same input does not result in the same output every
time. For an example, see “mask-random” on page 488.
redaction dictionary   A specially formatted collection of values that can be used as a source for
dictionary-based masking. Redaction dictionaries must be installed in the
schemas database. You can define a dictionary using XML or JSON. For
details, see “Defining a Redaction Dictionary” on page 534.
• What is Redaction?
Redaction is best suited for granular data hiding when you’re exporting content from the database.
For granular, real-time, in-application information hiding use Element Level Security; for more
details, see Element Level Security in the Security Guide. For document-level access control, use
security features such as document permissions and URI privileges. For more details on these and
other security features in MarkLogic, see the Security Guide.
Warning Redaction does not secure your documents within the database. For example, even
if you redact a document when it is read, applications can still search or modify the
content unless you properly secure the content with features such as document
permissions and Element Level Security.
Several redaction techniques are available. The details of what to redact and which techniques to
apply depend on the requirements of your application. For details, see “Choosing a Redaction
Strategy” on page 468.
MarkLogic supports redaction through the mlcp command line tool and an XQuery library
module in the rdt namespace. You can also use the library module with Server-Side JavaScript.
The redaction feature includes built-in redaction functions for common redaction tasks such as
obscuring social security numbers and telephone numbers. You can also plug in your own
redaction functions.
A key component of a redaction rule is a redaction function specification. This function is what
modifies the input nodes selected by the rule. MarkLogic provides several built-in redaction
functions that you can use in your rules. For example, there are built-in redaction functions for
redacting Social Security numbers, telephone numbers, and email addresses. You can also define
your own redaction functions.
Before you can apply a rule, you must install it in the Schemas database as part of a rule collection.
For details, see “Installing Redaction Rules” on page 477.
The redaction workflow enables you to protect the business logic captured in a redaction rule
independent of the documents to be redacted. For example, the user who generates redacted
documents need not have privileges to modify or create rules, and the user who creates and
administers rules need not have privileges to read or modify the content to be redacted.
In this example, rules are installed and applied using Query Console. For a similar example based
on mlcp, see Example: Using mlcp for Redaction in the mlcp User Guide.
When you complete these steps, your Documents database will contain the following documents.
The documents are also inserted in a collection named “gs-samples” for easy reference.
• /redact-gs/sample1.xml
• /redact-gs/sample2.json
2. Paste the following script into a new query tab in Query Console.
xdmp:document-insert("/redact-gs/sample2.json", ...,
  <options xmlns="xdmp:document-insert">
    <permissions>{xdmp:default-permissions()}</permissions>
    <collections>
      <collection>gs-samples</collection>
    </collections>
  </options>
);
6. Optionally, click the Explore (eyeglass) icon next to the Database dropdown to explore the
database and confirm insertion of the sample documents.
You can install rules using any document insert technique. This example uses XQuery and Query
Console. You do not need to be familiar with XQuery to complete this exercise. For other rule
installation options, see “Installing Redaction Rules” on page 477.
When you complete this exercise, your schemas database will contain one rule defined in XML
and one rule defined in JSON. The rules are inserted in a collection named “gs-rules”. The XML rule
uses the redact-us-phone built-in redaction function. The JSON rule uses the conceal built-in
redaction function.
Follow these steps to install the rules. For an explanation of what the rules do, see “Understanding
the Rules” on page 461.
2. Paste the following script into a new query tab in Query Console.
xdmp:document-insert("/rules/gs/redact-phone.xml",
  <rule xml:lang="zxx" xmlns="http://marklogic.com/xdmp/redaction">
  ...
  </rule>,
  <options xmlns="xdmp:document-insert">
    <permissions>{xdmp:default-permissions()}</permissions>
    <collections>
      <collection>gs-rules</collection>
    </collections>
  </options>
);
5. Click the Run button. The rule documents are installed with the URIs
“/rules/gs/redact-phone.xml” and “/rules/gs/conceal-id.json” and added to the “gs-rules”
collection.
The JSON rule installed in “Installing the Rules” on page 460 has the following form:
{ "rule": {
"description": "Remove customer ids.",
"path": "//id",
"method": { "function": "conceal" }
}}
The user who applies the rules must have read permission on the source documents, the rule
documents, and the rule collection. For more details, see “Security Considerations” on page 464.
2. If you want to use XQuery to apply the rules, perform the following steps:
a. Paste the following script into a new query tab in Query Console:
3. If you want to use Server-Side JavaScript to apply the rules, perform the following steps:
a. Paste the following script into a new query tab in Query Console:
5. Click the Run button. The rules in the “gs-rules” collection are applied to the documents in
the “gs-samples” collection.
The following table shows the result of redacting the XML sample document. Notice that the
telephone number in the summary element has been partially redacted by the redact-us-phone
function. Also, the id element has been completely hidden by the conceal function. The affected
parts of the content are highlighted in the table.
Original document:
<personal>
  <name>Little Bopeep</name>
  <summary>Seeking lost sheep. Please call 123-456-7890.</summary>
  <id>123456</id>
</personal>

Redacted result:
<personal>
  <name>Little Bopeep</name>
  <summary>Seeking lost sheep. Please call ###-###-7890.</summary>
</personal>
The following table shows the result of redacting the JSON sample document. Notice that the
telephone number in the summary property has been partially redacted by the redact-us-phone
function. Also, the id property has been completely hidden by the conceal function. The affected
parts of the content are highlighted in the table.

Original document:
{"personal": {
  "name": "Jack Sprat",
  "summary": "Free nutrition advice! Call (234)567-8901 now!",
  "id": 234567
}}

Redacted result:
{"personal": {
  "name": "Jack Sprat",
  "summary": "Free nutrition advice! Call (###)###-8901 now!"
}}
Rule documents and rule collections are potentially sensitive information. Carefully consider the
access controls and security requirements applicable to your redaction rules and rule collections.
For example, implement security controls that limit exposures such as the following:
• An attacker who can read a rule has access to potentially sensitive business logic. Even if
the attacker lacks read access to your content, read access to rule logic can reveal the
structure of your content.
• An attacker who can modify a rule or change which rules are in a rule collection can affect
the outcome of a redaction operation, exposing data that would otherwise be redacted.
Consider the following actors when designing your security architecture:
• Rule Administrators: Users who need to be able to create, modify, and delete rules;
manage rule collections; and create and modify redaction dictionaries. You might have
multiple such users, with rights to administer different rule collections.
• Rule Users: Users who need to be able to apply rules but not create, modify, or delete rules
or manage rule collections. Different rule users might have access to different rules or rule
collections.
• Other Users: Other users typically will not have access to or control over rule documents,
rule collections, or redaction dictionaries.
The following diagram illustrates high level redaction flow and the separation of responsibilities
between the rule administrator and the rule user:
[Figure: Redaction Workflow. The rule administrator inserts rules into a rule collection in the
Schemas database; the rule user applies those rules to original documents in the content database
to produce redacted documents.]
MarkLogic provides several security features for administering and using redaction rules. These
features are discussed in more detail below.
Document permissions enable you to control who can read, create, or update rule documents and
redaction dictionaries. A rule administrator will usually have read and update permissions on such
documents. Rule users will usually only have read permissions on rule documents and redaction
dictionaries. To learn more about document permissions, see Protecting Documents in the Security
Guide.
Placing rule documents in a protected collection enables you to control who can add documents to
or remove documents from the collection. Rule administrators will usually have update
permissions on a protected rule collection. Rule users will not have any special permissions on a
protected rule collection. A protected collection must be explicitly created before you can add
documents to it. To learn more about protected collections, see Collections and Security in the
Search Developer’s Guide.
Note: A protected collection cannot be used to control who can read or modify the
contents of documents in the collection; you must rely on document permissions
for this control. Protected collections also cannot be used to control who can see
which documents are in the collection.
MarkLogic predefines a redaction-user role. This role (or equivalent privileges) is required to
validate rules and redact documents. That is, you must have this role to use the XQuery functions
rdt:redact and rdt:rule-validate, the JavaScript functions rdt.redact and rdt.ruleValidate,
or the -redaction option of mlcp.
To learn more about security features in MarkLogic, see the Security Guide.
Every redaction rule contains at least the following information:
• An XPath expression defining the document components to which the rule applies. Some
restrictions apply; for details, see “Limitations on XPath Expressions in Redaction Rules”
on page 470.
• A descriptor specifying either a built-in or user-defined redaction function. The function
performs the redaction on the node(s) selected by the path expression.
A rule definition can include additional data, such as a description or options. For details, see
“XML Rule Syntax Reference” on page 473 or “JSON Rule Syntax Reference” on page 475.
• Choose a redaction strategy. For example, decide whether to mask or conceal redacted
values. For details, see “Choosing a Redaction Strategy” on page 468.
• Determine whether to use a built-in or user-defined redaction function. For details, see
“Choosing a Redaction Function” on page 469.
The following example rule specifies that the built-in redaction function redact-us-ssn will be
applied to nodes matching the XPath expression //ssn. The redact-us-ssn function accepts a
level parameter that specifies how much of the SSN to mask (full or partial). Use the options
section of the rule definition to specify the level.
JSON {"rule": {
"description": "Mask SSNs",
"path": "//ssn",
"method": { "function": "redact-us-ssn" },
"options": { "level": "partial" }
}}
If you apply these rules to example documents from “Preparing to Run the Examples” on
page 546, you will see the ssn XML element and JSON property values such as the following:
###-##-7890
###-##-9012
###-##-6789
###-##-8901
You can also create your own XQuery or Server-Side JavaScript redaction functions and define
rules that apply them. A user-defined function is identified in the method XML element or JSON
property by function name, URI of the implementing module, and the module namespace URI (if
your function is implemented in XQuery). For details, see “User-Defined Redaction Functions”
on page 519.
The following example specifies that a user-defined redaction function implemented in the module
/example/redact-name.sjs will be applied to nodes matching the XPath expression //name. For
more details and examples, see
“User-Defined Redaction Functions” on page 519.
JSON {"rule": {
"description": "Mask names",
"path": "//name",
"method": {
"function": "redact",
"module": "/example/redact-name.sjs"
}
}}
Consider which form of redaction best suits each kind of sensitive value:
• Partial masking: Replace only a portion of the redacted value. For example, replace all but
the last 4 digits in a credit card number with the character “#”.
• Full masking: Replace the entire redacted value with a new value. For example, replace all
characters in an account number with a random string of characters.
• Concealment: Completely eliminate the redacted value or node.
Also consider the following questions:
• Do I want the replacement value to always be the same for a given input (deterministic), or
do I want it to be randomized?
Deterministic masking can preserve relationships between values and facilitate searches,
which can be either beneficial or undesirable, depending on the application.
• Do I want the replacement value to be drawn from a known list of values (a dictionary)?
When you do not use a dictionary, the replacement value is either a randomly generated or
repeating set of characters, depending on whether you choose random or deterministic
masking. A redaction dictionary enables you to source replacement values from a
pre-defined set of values instead.
Once you determine the privacy requirements of your application, you can select an appropriate
built-in redaction function or create one of your own.
The following built-in redaction functions are installed with MarkLogic. These functions meet the
needs of most applications. These functions are discussed in detail in “Built-in Redaction
Function Reference” on page 483. Examples are included with each function description.
• mask-deterministic
• mask-random
• conceal
• redact-number
• redact-regex
• redact-us-ssn
• redact-us-phone
• redact-email
• redact-ipv4
• redact-datetime
If the built-in functions do not meet the needs of your application, you can create your own
redaction function using XQuery or Server-Side JavaScript. For example, you might need a
user-defined function to implement conditional redaction such as “redact the name if the customer
is a minor”. For more details, see “User-Defined Redaction Functions” on page 519.
JSON {"rule": {
"path": "//emp:ssn",
"namespaces": [
{"namespace": {
"prefix": "emp"
"namespace-uri": "http://my/employees"
}}, ...
],
"method": { ... }
}}
When a rule is applied to XML documents, its path is restricted to the subset of XPath supported
by XSLT. Redaction rules applied to JSON documents have no such restrictions. However, if you
apply rules to a mix of XML and JSON documents, limit your rules to the supported XPath subset.
Rule validation does not check the rule path for conformance to this limitation because it cannot
know if the rule will ever be applied to an XML document. If you apply a rule to an XML
document with an invalid path, the exception RDT-INVALIDRULEPATH is raised.
The XPath expression in the path XML element or JSON property of a rule is restricted to the
subset of XPath supported by XSLT when the rule is applied to XML documents. Therefore, you
must restrict your rule paths when redacting a mixture of XML and JSON content. For more
details, see “Limitations on XPath Expressions in Redaction Rules” on page 470.
You must understand the interactions between XPath and the document model to ensure proper
selection of nodes by a redaction rule. The XML and JSON document models differ in ways that
can be surprising if you are not familiar with the models. For example, a simple path expression
such as “//id” might match an element in an XML document, but all the items in an array value in
JSON.
The built-in redaction functions compensate for differences in the JSON and XML document
models in most cases, so they behave in a consistent way regardless of document type. If you
write your own redaction functions, you might need to make similar adjustments.
You can write a single XPath expression that selects nodes in both XML and JSON documents,
but if you do not understand the document models thoroughly, it might not select the nodes you
expect. Keep the following tips in mind:
• XML and JSON contain different node types. Only XML documents contain element and
attribute nodes; only JSON documents contain object, text, number, boolean, and null
nodes. Thus, an expression such as “//@color” will never match nodes in a JSON
document, even if the document contains a “color” property.
• There is no “JSON property node”. A JSON document such as {"a": 42} is modeled as an
unnamed root object node with a single number node child. The number node is named
“a” and has the value 42. You can change the value of the number node, but you can only
conceal the property by manipulating the parent object node.
• Each item in a JSON array is a node with the same name. For example, given {"a": [1,2]},
the path expression “//a” selects two number nodes, not the containing array node.
Selecting the array node requires a JSON specific path expression such as
//array-node('a'). Thus, concealing an array-valued property requires a different
strategy than concealing, say, a string-valued property.
• A JSON property node whose name is not a valid XML element local name, such as one
that contains whitespace, can only be selected using a node test operator such as
node(name). For example, given a document such as {"aa bb": "value"}, use the path
expression /node('aa bb') to select the property named “aa bb”.
• The fn:data() function aggregates text children of XML elements, but does not do so for
JSON properties. See the example in the table below.
For more details, see “Working With JSON” on page 377.
Any redaction function that can receive input from both XML and JSON must be prepared to
handle multiple node types. For example, the same XPath expression might select an element
node in XML, but an object node in JSON.
The rest of this section demonstrates some of the XML and JSON document model differences to
be aware of. For a more detailed discussion of XPath over JSON, see “Traversing JSON
Documents Using XPath” on page 379.
Consider the following similar XML and JSON documents:

XML:
<person>
  <name>
    <first>John</first>
    <last>Smith</last>
  </name>
  <id>1234</id>
  <alias>Johnboy</alias>
  <alias>Smitty</alias>
</person>

JSON:
{ "person": {
  "name": {
    "first": "John",
    "last": "Smith"
  },
  "id": 1234,
  "alias": ["Johnboy", "Smitty"],
  "home phone": "123-4567"
}}
The same path expression can then select different kinds of nodes in each document. For example,
//id selects an element node in the XML document but the number node ({"id":1234}) in the
JSON document, and /person/node('home phone') selects the JSON text node ("123-4567").
An XML rule document has the following form:
<rule xml:lang="zxx" xmlns="http://marklogic.com/xdmp/redaction">
<description>any text</description>
<path>XPath expression</path>
<namespaces>
<namespace>
<prefix>namespace prefix</prefix>
<namespace-uri>uri</namespace-uri>
</namespace>
</namespaces>
<method>
<function>redaction function name</function>
<module>user-defined module URI</module>
<module-namespace>user-defined module namespace</module-namespace>
</method>
<options>params as elements</options>
</rule>
Note the presence of rule/@xml:lang. The @lang value “zxx” is not a valid language. Rather,
“zxx” is a special value that tells MarkLogic not to tokenize, stem, and index this element.
Though you are not required to include this setting in your rules, it is strongly recommended that
you do so because rules are configuration information and not meant to be searchable.
The following table provides more detail on the rule child elements.
Element Description
Use this form to apply a built-in redaction function. For details, see “Built-in
Redaction Function Reference” on page 483.
<method>
<function>builtInFuncName</function>
</method>
Use this form to apply a user-defined Server-Side JavaScript redaction function:
<method>
<function>userDefinedFuncName</function>
<module>javascriptModuleURI</module>
</method>
Use this form to apply a user-defined XQuery redaction function:
<method>
<function>userDefinedFuncLocalName</function>
<module>xqueryModuleURI</module>
<module-namespace>moduleNSURI</module-namespace>
</method>
{"rule": {
"description": "any text",
"path": "XPath expression",
"method": {
"function": "redaction function name",
"module": "user-defined module URI",
"moduleNamespace": "user-defined module namespace URI",
},
"namespaces": [
{"namespace": {
"prefix": "namespace prefix",
"namespace-uri": "uri"
}, ...
],
"options": {
"anyPropName": anyValue
}
} }
Property    Description
Use this form to apply a built-in redaction function. For details, see “Built-in
Redaction Function Reference” on page 483.
"method": {
  "function": "builtInFuncName"
}
Use this form to apply a user-defined Server-Side JavaScript redaction function:
"method": {
  "function": "userDefinedFuncName",
  "module": "javascriptModuleURI"
}
Use this form to apply a user-defined XQuery redaction function:
"method": {
  "function": "userDefinedFuncName",
  "module": "xqueryModuleURI",
  "moduleNamespace": "xqueryModuleNSURI"
}
A rule document can only contain one rule and must not contain any non-rule data. A rule
collection can contain multiple rule documents, but must not contain any non-rule documents.
Every rule document must be associated with at least one collection because rules are specified by
collection to redaction operations.
Use any MarkLogic document insertion APIs to insert rules into the schema database, such as the
xdmp:document-insert XQuery function, the xdmp.documentInsert Server-Side JavaScript
function, or the document creation features of the Node.js, Java, or REST Client APIs. You can
assign rules to a collection at insertion time or as a separate operation.
If you run one of the following examples in Query Console using your schema database as the
context database, a rule document is inserted into the database and assigned to two collections,
“pii-rules” and “security-rules”.
Server-Side JavaScript:
declareUpdate();
xdmp.documentInsert(
'/redactionRules/ssn.json',
{ rule: {
description: 'hide SSNs',
path: '//ssn',
method: { function: 'redact-us-ssn' },
options: { level: 'partial' }
}},
{ permissions: xdmp.defaultPermissions(),
collections: ['security-rules','pii-rules']});
Set permissions on your rule documents to constrain who can access or modify the rules. For
more details, see “Security Considerations” on page 464.
• Overview
The mlcp command line tool is the recommended interface because it can efficiently apply
redaction to large numbers of documents when you export them from the database or copy them
between databases. To learn more about mlcp, see the mlcp User Guide.
The rdt:redact and rdt.redact functions are suitable for debugging redaction rules or redacting
small sets of documents.
26.7.1 Overview
Once you install one or more rule documents in the Schemas database and assign them to a
collection, you can redact documents in the following ways:
• Exporting documents from a database using the mlcp command line tool.
• Copying documents between databases using the mlcp command line tool.
• Calling the XQuery function rdt:redact function.
• Calling the Server-Side JavaScript function rdt.redact.
The mlcp command line tool will provide the highest throughput, but you may find rdt:redact or
rdt.redact convenient when developing and debugging rules.
Regardless of the redaction method you use, you select a set of documents to be redacted and one
or more rule collections to apply to those documents.
• You can redact both XML and JSON documents in the same operation.
• You can apply rules defined in XML to JSON documents and vice versa.
• You can only apply redaction rules to XML and JSON documents.
• You cannot redact document metadata such as document properties.
• You cannot rely on the order in which rules are applied. For details, see “No Guaranteed
Ordering of Rules” on page 482.
• You must have read permissions for both the documents to be redacted and the redaction
rules.
• If you apply a rule that uses a user-defined redaction function, you must have execute
permissions for the module that contains the implementation. For details, see “Security
Considerations” on page 464.
Your redaction operation will fail if any of the rule collections contain an invalid rule or no rules.
You can use the rdt:rule-validate XQuery function or the rdt.ruleValidate JavaScript
function to verify your rule collections before applying them. For details, see “Validating
Redaction Rules” on page 482.
The following example command applies the rules in the collections with URIs “pii-rules” and
“hipaa-rules” to documents in the database directory “/employees/” on export.
The following example applies the same rules during an mlcp copy operation:
For more details, see Redacting Content During Export or Copy Operations in the mlcp User Guide.
The following example applies the redaction rules in the collections with URIs “pii-rules” and
“hipaa-rules” to the documents in the collection “personnel”:
at "/MarkLogic/redaction.xqy";
rdt:redact(fn:collection("personnel"), ("pii-rules","hipaa-rules"))
The output is a sequence of document nodes, where each document is the result of applying the
rules in the rule collections. The results includes both documents modified by the redaction rules
and unmodified documents that did not match any rules or were not changed by the redaction
functions.
If any of the rule collections passed to rdt:redact is empty, an RDT-NORULE exception is thrown.
This protects you from accidentally failing to apply any rules, leading to unredacted content.
An exception is also thrown if any of the rule collections contain non-rule documents, if any of
the rules are invalid, or if the path expression for a rule selects something other than a node. You
can use rdt:rule-validate to test the validity of your rules before calling rdt:redact.
You must use a require statement to bring the redaction functions into scope in your application.
These functions are implemented by the XQuery library module /MarkLogic/redaction.xqy. For
example:
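const rdt = require('/MarkLogic/redaction.xqy');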
The following example applies the redaction rules in the collections with URIs “pii-rules” and
“hipaa-rules” to the documents in the collection “personnel”:
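A sketch of the equivalent JavaScript call:

rdt.redact(fn.collection('personnel'), ['pii-rules', 'hipaa-rules'])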
The output is a Sequence of document nodes, where each document is the result of applying the
rules in the rule collections. A Sequence is an Iterable. For example, you can process your results
with a for-of loop similar to the following:
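A sketch (results names the Sequence returned by rdt.redact; the variable names are illustrative):

for (const doc of results) {
  // work with each redacted document node
}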
The results includes both documents modified by the redaction rules and unmodified documents
that did not match any rules or were not changed by the redaction functions.
If any of the rule collections passed to rdt.redact is empty, an RDT-NORULE exception is thrown.
This protects you from accidentally failing to apply any rules, leading to unredacted content. An
exception is also thrown if any of the rule collections contain non-rule documents, if any of the
rules are invalid, or if the path expression for a rule selects something other than a node.
You can use rdt.ruleValidate to test the validity of your rules before calling rdt.redact. For
details, see “Validating Redaction Rules” on page 482.
In addition, the final redacted result for a given node reflects the result of at most one rule. If you have
multiple rules that select the same node, they will all run, but the final document produced by
redaction reflects the result of at most one of these rules.
Therefore, do not have multiple rules in the same redaction operation that redact or examine the
same nodes.
For example, suppose you have two rule collections, A and B, with the following characteristics:
Collection A contains:
ruleA1 using path //id
ruleA2 using path //id
Collection B contains:
ruleB1 using path //id
If you apply both rule collections to a set of documents, you cannot know or rely on the order in
which ruleA1, ruleA2, and ruleB1 are applied to any selected id node. In addition, the output only
reflects the changes to //id made by one of ruleA1, ruleA2, and ruleB1.
Validation confirms that your rule(s) and rule collection(s) conform to the expected structure and
do not rely on any non-existent code, such as an undefined redaction function.
Note that a successfully validated rule can still cause runtime errors. For example, rule validation
does not include dictionary validation if your rule uses dictionary-based masking. Similarly,
validation does not verify that the XPath expression in a rule conforms to the limitations
described in “Limitations on XPath Expressions in Redaction Rules” on page 470.
If all the rules in the input rule collections are valid, the validation function returns the URIs of all
validated rules. Otherwise, an exception is thrown when the first validation error is encountered.
The following example validates the rules in two rule collections with URIs “pii-rules” and
“hipaa-rules”.
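A sketch of the XQuery call, assuming the rule collection URIs are passed as a sequence:

rdt:rule-validate(("pii-rules", "hipaa-rules"))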
XML:
<method>
  <function>builtInName</function>
</method>

JSON:
"method": {
  "function": "builtInFuncName"
}
If the built-in accepts configuration parameters, specify them in the options child XML element
or JSON property of the rule. For syntax, see “Defining Redaction Rules” on page 466. For
parameter specifics and examples, see the reference section for each built-in.
The following table summarizes the built-in redaction functions and expected input parameters.
Refer to the section on each function for more details and examples.
mask-deterministic Replace values with masking text that is deterministic. That is, a
given input generates the same mask value every time it is applied.
You can control features such as the length and type of the generated
value.
mask-random Replace values with random text. The masking value can vary across
repeated application to the same input value. You can control the
length of the generated value and type of replacement text (numbers
or letters).
conceal Remove the value to be masked.
redact-number Replace values with random numbers. You can control the data type,
range, and format of the masking values.
redact-us-ssn Redact data that matches the pattern of a US Social Security Number
(SSN). You can control whether or not to preserve the last 4 digits
and what character to use as a masking character.
redact-us-phone Redact data that matches the pattern of a US telephone number. You
can control whether or not to preserve the last 4 digits and what
character to use as a masking character.
redact-email Redact data that matches the pattern of an email address. You can
control whether to mask the entire address, only the username, or
only the domain name.
redact-ipv4 Redact data that matches the pattern of an IPv4 address. You can
control what character to use as a masking character.
redact-datetime Redact data that matches the pattern of a dateTime value. You can
control the expected input format and the masking dateTime format.
redact-regex Redact data that matches a given regular expression. You must
specify the regular expression and the masking text.
For a complete example of using all the built-in functions, see “Example: Using the Built-In
Redaction Functions” on page 508.
26.9.1 mask-deterministic
Use this built-in to mask a value with a consistent masked value. That is, with deterministic
masking, a given input always produces the same output. The original value is not derivable from
the masked value.
Deterministic masking can be useful for preserving relationships across records. For example,
you could mask the names in a social network, yet still be able to trace relationships between
people (X knows Y, and Z knows Y).
Use the following parameters to configure the behavior of this function. Set parameters in the
options section of a rule.
• length: The length, in characters, of the output value to generate. Optional. Default: 64.
You cannot use this option with the dictionary option.
• character: The class of character(s) to use when constructing the masked value. Allowed
values: any (default), alphanumeric, numeric, alphabetic. You cannot use this option with
the dictionary option.
• dictionary: The URI of a redaction dictionary. Use the dictionary as the source of
replacement values. You cannot use this option with any other options.
• salt: A salt to apply when generating masking values. MarkLogic applies the salt even
when drawing replacement values from a dictionary. The default behavior is no salt.
• extend-salt: Whether/how to extend the salt with runtime information. You can extend
the salt with the rule set collection name or the cluster id. Allowed values: none,
collection, cluster-id (default).
When you use dictionary-based masking, a given input will always map to the same redaction
dictionary entry. If you modify the dictionary, then the dictionary mapping will also change.
The salt and extend-salt options introduce rule and/or cluster-specific randomness to
the generated masking values. Each masking value is still deterministic when salted: The same
input produces the same output. However, the same input with different salts produces different
output. For details, see “Salting Masking Values for Added Security” on page 543.
The following example rule applies deterministic masking to nodes selected by the XPath
expression “//name”. The replacement value will be 10 characters long because of the length
option.
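A minimal sketch of such a rule in JSON form, mirroring the install script later in this chapter
(an XML rule with the same path, function, and options is equivalent):

{ "rule": {
    "path": "//name",
    "method": { "function": "mask-deterministic" },
    "options": { "length": 10 }
}}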
The following table illustrates the effect of applying mask-deterministic to several different
types of nodes. For an end-to-end example, see “Example: Using the Built-In Redaction
Functions” on page 508.
In most cases, the entire value of the node is replaced by the redacted value, even if the original
contents are complex, such as the //address example, above.
However, notice the //alias example above, which selects individual alias array items in the
JSON example, rather than the entire array. If you want to redact the entire array value, you need
a rule with a JSON-specific path selector. For example, a rule path such as
//array-node('alias') selects the entire array in the JSON documents, resulting in a value such
as the following for the “alias” property:
"alias": "6b162c290e"
For more details, see “Defining Rules Usable on Multiple Document Formats” on page 471.
To illustrate the effects of the various character option settings, assume a length option of 10 and
the following input targeted for redaction:
<pii>
<priv>redact me</priv>
<priv>redact me</priv>
<priv>redact me too</priv>
</pii>
Then the following table shows the result of applying each possible value of the character option.
alphanumeric:
<pii>
  <priv>F1Fp64Cnox</priv>
  <priv>F1Fp64Cnox</priv>
  <priv>LiN5mrmG0g</priv>
</pii>

numeric:
<pii>
  <priv>1838664450</priv>
  <priv>1838664450</priv>
  <priv>5771438029</priv>
</pii>

alphabetic:
<pii>
  <priv>PQXWBHfASy</priv>
  <priv>PQXWBHfASy</priv>
  <priv>ZroFQNkNqi</priv>
</pii>
26.9.2 mask-random
Use this built-in to replace a value with a random masking value. A given input produces different
output each time it is applied. The original value is not derivable from the masked value. Random
masking can be useful for obscuring relationships across records.
Use the following parameters to configure the behavior of this function. Set parameters in the
options section of a rule.
• length: The length, in characters, of the output value to generate. Optional. Default: 64.
You cannot use this option with the dictionary option.
• character: The type of character(s) to use when constructing the masked value. Allowed
values: any (default), alphanumeric, numeric, alphabetic. You cannot use this option with
the dictionary option.
• dictionary: The URI of a redaction dictionary. Use the dictionary as the source of
replacement values. You cannot use this option with any other options.
The following example rule applies random masking to nodes selected by the XPath expression
“//name”. The replacement value will be 10 characters long because of the length option.
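A minimal JSON sketch of such a rule (the XML form is analogous):

{ "rule": {
    "path": "//name",
    "method": { "function": "mask-random" },
    "options": { "length": 10 }
}}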
The following table illustrates the effect of applying mask-random to several different types of
nodes. For an end-to-end example, see “Example: Using the Built-In Redaction Functions” on
page 508.
In most cases, the entire value of the node is replaced by the redacted value, even if the original
contents are complex, such as the //address example, above.
However, notice the //alias example above, which selects individual alias array items in the
JSON example, rather than the entire array. If you want to redact the entire array value, you need
a rule with a JSON-specific path selector. For example, a rule path such as
//array-node('alias') selects the entire array in the JSON documents, resulting in a value such
as the following for the “alias” property:
"alias": "6b162c290e"
For more details, see “Defining Rules Usable on Multiple Document Formats” on page 471.
To illustrate the effects of the various character option settings, assume a length option of 10 and
the following input targeted for redaction:
<pii>
<priv>redact me</priv>
<priv>redact me</priv>
<priv>redact me too</priv>
</pii>
Then the following table shows the result of applying each possible value of the character option.
alphanumeric:
<pii>
  <priv>qIEsmeJua6</priv>
  <priv>WfVLAAckzu</priv>
  <priv>P8BGgCdt5s</priv>
</pii>

numeric:
<pii>
  <priv>7902282158</priv>
  <priv>8313199931</priv>
  <priv>2026296703</priv>
</pii>

alphabetic:
<pii>
  <priv>rZimfgZwSG</priv>
  <priv>knqbTrKTdl</priv>
  <priv>wKYeTkVjLC</priv>
</pii>
26.9.3 conceal
Use this built-in to entirely remove a selected value.
The following example rule applies concealment to values selected by the path expression //name.
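Because conceal removes the node entirely, the rule takes no options. A minimal JSON sketch
(the XML form is analogous):

{ "rule": {
    "path": "//name",
    "method": { "function": "conceal" }
}}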
The following table illustrates the effect of applying conceal to several different types of nodes.
For an end-to-end example, see “Example: Using the Built-In Redaction Functions” on page 508.
In most cases, the entire selected node is concealed, even if the original contents are complex,
such as the //address example, above.
However, note that a path such as //alias, above, conceals each array item in the JSON sample,
rather than concealing the entire array. This is because the alias path step matches each array
item individually; for details, see “Defining Rules Usable on Multiple Document Formats” on
page 471 and “Traversing JSON Documents Using XPath” on page 379.
If you want to redact the entire array value, you need a rule with a JSON-specific path selector,
such as //array-node('alias'). For more details, see “Defining Rules Usable on Multiple
Document Formats” on page 471.
26.9.4 redact-number
Use this built-in to mask values with a random number that conforms to a configurable range and
format.
This function differs from the mask-random function in that it provides finer control over the
masking value. Also, mask-random always generates a text node, while redact-number generates
either a number node or a text node, depending on the configuration.
The redact-number function enables you to control the following aspects of the masking value:
• min: The minimum acceptable masking value, inclusive. This function will not generate a
masking value less than the min value. Optional. Default: 0.
• max: The maximum acceptable masking value, inclusive. This function will not generate a
masking value greater than the max value. Optional. Default: 18446744073709551615.
• format: Special formatting to apply to the replacement value. Optional. Default: No
special formatting. The format string must conform to the syntax for an XSLT “picture
string”, as described in the function reference for fn:format-number (XQuery) or
fn.formatNumber ( JavaScript) and in https://www.w3.org/TR/xslt20/#function-format-number.
If you specify a format, the replacement value is a text node in JSON documents instead of
a number node. Note: If you specify a format, then the values in the range defined by min
and max must be convertible to decimal.
• type: The data type of the replacement value. Optional. Allowed values: integer, decimal,
double. Default: integer. The values specified in the min and max options are subject to the
specified type restriction.
The following example rule applies redact-number to values selected by the XPath expression
//balance. The matched values will be replaced by decimal values in the range 0.0 to 100000.00,
with two digits after the decimal point. The rule generates replacement values such as 3.55, 19.79,
82.96.
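A minimal JSON sketch of such a rule, mirroring the redact-balance rule in the install script
later in this chapter (the XML form is analogous):

{ "rule": {
    "path": "//balance",
    "method": { "function": "redact-number" },
    "options": {
      "min": 0,
      "max": 100000,
      "format": "0.00",
      "type": "decimal"
    }
}}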
When applied to a JSON document, the node replaced by redaction can be either a text node or a
number node, depending on whether or not you use the format option. With no explicit
formatting, redaction produces a number node for JSON. With explicit formatting, redaction
produces a text node. For example, redact-number might affect the value of a JSON property
named “key” as follows:
no format option:
"key": 61.4121623617221
The value range defined by a redact-number rule must be valid for the data type. For example, the
following set of options is invalid because the specified range does not express a meaningful
integer range from which to generate values:
min: 0.1
max: 0.9
type: integer
The values of min and max must be castable to the specified type.
The following table illustrates the effect of applying redact-number with various option
combinations. For an end-to-end example, see “Example: Using the Built-In Redaction
Functions” on page 508.
26.9.5 redact-us-ssn
Use this built-in to mask values that match typical representations of a US Social Security
Number (SSN), such as NNN-NN-NNNN, NNN NN NNNN, or NNNNNNNNN, where the character N in
these patterns represents a single digit in the range 0 - 9.
You can use the following parameters to configure the behavior of this function. Set parameters in
the options section of a rule.
• level: How much to redact. Optional. This option can have the following values:
• full: Default. Replace all digits with the character specified by the character
option.
• partial: Retain the last 4 digits; replace all other digits with the character
specified by the character option.
• full-random: Replace all digits with random digits. The character option is
ignored. You will get a different value each time you redact a given value.
• character: The character with which to replace each redacted digit when level is full or
partial. Optional. Default: “#”.
The following example redacts SSNs selected by the path expression //id. The parameters
specify that the last 4 digits of the SSN are preserved and the remaining digits are replaced
with the character “X”.
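A minimal JSON sketch of such a rule (the XML form is analogous):

{ "rule": {
    "path": "//id",
    "method": { "function": "redact-us-ssn" },
    "options": { "level": "partial", "character": "X" }
}}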
The following table illustrates the effect of applying redact-us-ssn with various input values and
configuration parameters. For a complete example, see “Example: Using the Built-In Redaction
Functions” on page 508.
26.9.6 redact-us-phone
Use this built-in to mask values that match typical representations of a US telephone number,
such as NNN-NNN-NNNN or (NNN) NNN-NNNN, where the character N in these patterns represents a
single digit in the range 0 - 9.
You can use the following parameters to configure the behavior of this function. Set parameters in
the options section of a rule.
• level: How much to redact. Optional. This option can have the following values:
• full: Default. Replace all digits with the character specified by the character
option.
• partial: Retain the last 4 digits; replace all other digits with the character
specified by the character option.
• full-random: Replace all digits with random digits. The character option is
ignored. You will get a different random value each time you redact a given input.
• character: The character with which to replace each redacted digit when level is full or
partial. Optional. Default: “#”.
The following example masks telephone numbers selected by the path expression //ph. The
parameters specify that the last 4 digits of the telephone number are preserved and the
remaining digits are replaced with the character “X”.
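A JSON sketch of such a rule (the XML form is analogous):

{ "rule": {
    "path": "//ph",
    "method": { "function": "redact-us-phone" },
    "options": { "level": "partial", "character": "X" }
}}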
The following table illustrates the effect of applying redact-us-phone with various input values
and configuration parameters. For a complete example, see “Example: Using the Built-In
Redaction Functions” on page 508.
26.9.7 redact-email
Use this built-in to mask values that conform to the pattern of an email address. The function
assumes an email has the form name@domain.
Use the following parameters to configure the behavior of this function. Set parameters in the
options section of a rule.
• level: How much of each email address to redact. Allowed values: full, name, domain.
Optional. Default: full.
Redacting the username portion of an email address replaces the username with “NAME”.
Redacting the domain portion of an email address replaces the domain name with “DOMAIN”.
Thus, full redaction on the email address “[email protected]” produces the replacement value
“NAME@DOMAIN”.
The following example rule fully redacts email addresses selected by the path expression
“//email”.
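A JSON sketch of such a rule (the XML form is analogous; full is also the default level):

{ "rule": {
    "path": "//email",
    "method": { "function": "redact-email" },
    "options": { "level": "full" }
}}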
The following table illustrates the effect of applying redact-email with various levels of
redaction. For a complete example, see “Example: Using the Built-In Redaction Functions” on
page 508.
26.9.8 redact-ipv4
Use this built-in to mask values that conform to the pattern of an IP address. This function only
redacts IPv4 addresses. That is, a value is redacted if it conforms to the following pattern, where N
represents a decimal digit (0-9).
• Four blocks of 1-3 decimal digits, separated by periods (“.”). The value of each block of
digits must be less than or equal to 255. For example: 123.201.098.112, 123.45.67.0.
The redacted IP address is normalized to contain characters for the maximum number of digits.
That is, an IP address such as 123.4.56.7 is masked as “###.###.###.###”.
Use the following options to configure the behavior of this function. Set parameters in the options
section of a rule.
• character: The character with which to replace each redacted digit. Optional. Default:
“#”.
The following example rule redacts IP addresses selected by the path expression //ip. The
character parameter specifies the digits of the redacted IP address are replaced with “X”.
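A JSON sketch of such a rule, mirroring the redact-ip rule in the install script later in this
chapter (the XML form is analogous):

{ "rule": {
    "path": "//ip",
    "method": { "function": "redact-ipv4" },
    "options": { "character": "X" }
}}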
The following table illustrates the effect of applying redact-ipv4 with various configuration
options. For a complete example, see “Example: Using the Built-In Redaction Functions” on
page 508.
26.9.9 redact-datetime
Use this built-in to mask values that represent a dateTime value. You can use this function to
mask dateTime value in one of two ways:
• Parse the input dateTime value and replace it with a masking value derived from applying
a dateTime picture string to the input dateTime components. For example, redact the value
“2012-05-23” by obscuring the month and date, producing a masking value such as
“2012-MM-DD”. You can only use this type of dateTime redaction to redact values that
can be parsed by fn:parse-dateTime or fn.parseDateTime.
• Replace any value with a random dateTime value, formatted according to a specified
picture string. You can restrict the value to a particular year range.
You can use the following parameters to configure the behavior of this function. Set parameters in
the options section of a rule.
• level: The type of dateTime redaction. Required. Allowed values: parsed, random.
• format: A dateTime picture string describing how to format the masking value. Required.
• picture: A dateTime picture string describing the required input value format. This option
is required when level is parsed and ignored otherwise. Any input value that does not
conform to the expected format is not redacted.
• range: A comma separated pair of years, used to constrain the masking value range when
level is random. Optional. This option is ignored if level is not random. For example, a
range value of “1900,1999” will only generate masking values for the years 1900 through
1999, inclusive.
Note: When you apply redact-datetime with a picture option, the content selected by
your rule path must serialize to text whose leading characters conform to the
picture string. If there are other leading characters in the serialized content,
redaction fails with an error.
The following example rule redacts dateTime values using the parsed method. The picture option
specifies that only input values of the form YYYY-MM-DD are redacted. The format option
specifies that the masking value is of the form MM-DD-YYYY, with the month and day portions
replaced by the literal text “NN”.
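A JSON sketch of such a rule (the XML form is analogous; the path //birthdate is illustrative,
and the format string is inferred from the result shown below):

{ "rule": {
    "path": "//birthdate",
    "method": { "function": "redact-datetime" },
    "options": {
      "level": "parsed",
      "picture": "[Y0001]-[M01]-[D01]",
      "format": "NN-NN-[Y0001]"
    }
}}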
If you apply this rule to a value such as “2012-11-09”, the redacted value becomes
“NN-NN-2012”.
The following example rule redacts values using the random method. The format option specifies
that the masking value be of the form YYYY-MM-DD, and that the masking values be in the year
range 1900 to 1999, inclusive. The format of the value to be redacted does not matter.
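A JSON sketch of such a rule, mirroring the redact-anniversary rule in the install script later
in this chapter (the XML form is analogous):

{ "rule": {
    "path": "//anniversary",
    "method": { "function": "redact-datetime" },
    "options": {
      "level": "random",
      "format": "[Y0001]-[M01]-[D01]",
      "range": "1900,1999"
    }
}}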
For a complete example, see “Example: Using the Built-In Redaction Functions” on page 508.
26.9.10 redact-regex
Use this built-in to mask values that match a regular expression. The regular expression and the
replacement text are configurable.
• pattern: A regular expression identifying the values to be redacted. Required. Use the
regular expression language syntax defined for XQuery and XPath. For details, see
http://www.w3.org/TR/xpath-functions/#regex-syntax.
• replacement: The text with which to replace each value matching the pattern. Required.
Note that the replacement pattern can contain back references to portions of the matched text. A
back reference enables you to “capture” portions of the matched text and re-use them in the
replacement value. See the example at the end of this section.
Regular expression patterns can contain characters that require escaping in your rule definitions.
The following contains a few examples of problem characters. This is not an exhaustive list.
• Curly braces (“{ }”) in a pattern in an XML rule installed with XQuery must be escaped as
“{{” and “}}” to prevent the XQuery interpreter from treating them as code block
delimiters.
• A left angle bracket (“<”) in an XML rule must be replaced by the entity reference “&lt;”.
• Backslashes (“\”) in a JSON rule definition must be escaped as “\\” because “\” is a special
character in JSON strings.
The following example redacts text that matches the regular expression below. That is, it
matches values such as NN-NNNNNNN, NN.NNNNNNN, and NN NNNNNNN, where N represents a single
digit in the range 0-9.
\d{2}[-.\s]\d{7}
The following rule specifies that values in an id XML element or JSON property that match the
pattern will be replaced with the text “NN-NNNNNNN”. Notice the escaped characters in the
pattern.
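A JSON sketch of such a rule, mirroring the redact-id rule in the install script later in this
chapter; note the doubled backslashes required in JSON strings:

{ "rule": {
    "path": "//id",
    "method": { "function": "redact-regex" },
    "options": {
      "pattern": "\\d{2}[-.\\s]\\d{7}",
      "replacement": "NN-NNNNNNN"
    }
}}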
The table below illustrates the result of applying the rule to documents matching the rule.
The following rule uses a back reference in the pattern to leave the first 2 digits of the id
intact. The pattern in the previous example has been modified to have parentheses around the
sub-expression for the first block of digits (“(\d{2})”). The parentheses “capture” that block
of text in a variable that is referenced in the replacement string as “$1”.
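A JSON sketch of such a rule (the XML form is analogous; the replacement string is inferred
from the result shown below):

{ "rule": {
    "path": "//id",
    "method": { "function": "redact-regex" },
    "options": {
      "pattern": "(\\d{2})[-.\\s]\\d{7}",
      "replacement": "$1-NNNNNNN"
    }
}}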
Applying this rule to the same documents as before results in the following redaction:
12-NNNNNNN
For more details, see the fn:replace XQuery function or the fn.replace Server-Side JavaScript
function.
For a complete example, see “Example: Using the Built-In Redaction Functions” on page 508.
• Install the rules. Choose XML or JSON. Install one set or the other, not both.
• Install the XML Rules
The rules are inserted with a URI of the following form, where name is the XML element local
name or JSON property name of the node selected by the rule. (The URI suffix depends on the
rule format you install.)
/rules/redact-name.{xml|json}
For example, /rules/redact-alias.xml targets the alias XML element or JSON property of the
sample documents.
Every rule is inserted into two collections, an “all” collection and a collection that identifies the
built-in used by the rule. For example, /rules/redact-alias.json, which uses the mask-random
built-in, is inserted in the collections “all” and “random”. This enables you to apply the rules
together or selectively.
Follow these steps to install the example rules in XML format using XQuery. If you prefer to use
JSON rules, see “Install the JSON Rules” on page 513. For a detailed example of installing rules
with Query Console, see “Example: Getting Started With Redaction” on page 458.
Use the following script to install the rules. For a summary of what these rules do, see
“Example Rule Summary” on page 509. After running the script, you can optionally use the
Query Console database explorer to review the rules.
let $rules := (
<rules>
<rule>
<name>redact-name</name>
<collection>deterministic</collection>
<rdt:rule xml:lang="zxx"
xmlns:rdt="http://marklogic.com/xdmp/redaction">
<rdt:path>//name</rdt:path>
<rdt:method>
<rdt:function>mask-deterministic</rdt:function>
</rdt:method>
<rdt:options>
<length>10</length>
</rdt:options>
</rdt:rule>
</rule>
<rule>
<name>redact-alias</name>
<collection>random</collection>
<rdt:rule xml:lang="zxx"
xmlns:rdt="http://marklogic.com/xdmp/redaction">
<rdt:path>//alias</rdt:path>
<rdt:method>
<rdt:function>mask-random</rdt:function>
</rdt:method>
<rdt:options>
<length>10</length>
</rdt:options>
</rdt:rule>
</rule>
<rule>
<name>redact-address</name>
<collection>conceal</collection>
<rdt:rule xml:lang="zxx"
xmlns:rdt="http://marklogic.com/xdmp/redaction">
<rdt:path>//address</rdt:path>
<rdt:method>
<rdt:function>conceal</rdt:function>
</rdt:method>
</rdt:rule>
</rule>
<rule>
<name>redact-balance</name>
<collection>balance</collection>
<rdt:rule xml:lang="zxx"
xmlns:rdt="http://marklogic.com/xdmp/redaction">
<rdt:path>//balance</rdt:path>
<rdt:method>
<rdt:function>redact-number</rdt:function>
</rdt:method>
<rdt:options>
<min>0</min>
<max>100000</max>
<format>0.00</format>
<type>decimal</type>
</rdt:options>
</rdt:rule>
</rule>
<rule>
<name>redact-anniversary</name>
<collection>datetime</collection>
<rdt:rule xml:lang="zxx"
xmlns:rdt="http://marklogic.com/xdmp/redaction">
<rdt:path>//anniversary</rdt:path>
<rdt:method>
<rdt:function>redact-datetime</rdt:function>
</rdt:method>
<rdt:options>
<level>random</level>
<format>[Y0001]-[M01]-[D01]</format>
<range>1900,1999</range>
</rdt:options>
</rdt:rule>
</rule>
<rule>
<name>redact-ssn</name>
<collection>ssn</collection>
<rdt:rule xml:lang="zxx"
xmlns:rdt="http://marklogic.com/xdmp/redaction">
<rdt:path>//ssn</rdt:path>
<rdt:method>
<rdt:function>redact-us-ssn</rdt:function>
</rdt:method>
<rdt:options>
<level>partial</level>
</rdt:options>
</rdt:rule>
</rule>
<rule>
<name>redact-phone</name>
<collection>phone</collection>
<rdt:rule xml:lang="zxx"
xmlns:rdt="http://marklogic.com/xdmp/redaction">
<rdt:path>//phone</rdt:path>
<rdt:method>
<rdt:function>redact-us-phone</rdt:function>
</rdt:method>
<rdt:options>
<level>full</level>
</rdt:options>
</rdt:rule>
</rule>
<rule>
<name>redact-email</name>
<collection>email</collection>
<rdt:rule xml:lang="zxx"
xmlns:rdt="http://marklogic.com/xdmp/redaction">
<rdt:path>//email</rdt:path>
<rdt:method>
<rdt:function>redact-email</rdt:function>
</rdt:method>
<rdt:options>
<level>name</level>
</rdt:options>
</rdt:rule>
</rule>
<rule>
<name>redact-ip</name>
<collection>ip</collection>
<rdt:rule xml:lang="zxx"
xmlns:rdt="http://marklogic.com/xdmp/redaction">
<rdt:path>//ip</rdt:path>
<rdt:method>
<rdt:function>redact-ipv4</rdt:function>
</rdt:method>
<rdt:options>
<character>X</character>
</rdt:options>
</rdt:rule>
</rule>
<rule>
<name>redact-id</name>
<collection>regex</collection>
<rdt:rule xml:lang="zxx"
xmlns:rdt="http://marklogic.com/xdmp/redaction">
<rdt:path>//id</rdt:path>
<rdt:method>
<rdt:function>redact-regex</rdt:function>
</rdt:method>
<rdt:options>
<pattern>\d{{2}}[-.\s]\d{{7}}</pattern>
<replacement>NN-NNNNNNN</replacement>
</rdt:options>
</rdt:rule>
</rule>
</rules>
)
return
for $r in $rules/rule return
let $options :=
<options xmlns="xdmp:document-insert">
<permissions>{xdmp:default-permissions()}</permissions>
<collections>
<collection>all</collection>
<collection>{$r/*:collection/data()}</collection>
</collections>
</options>
return xdmp:document-insert(
fn:concat("/rules/", $r/name, ".xml"),
$r/rdt:rule, $options
)
Use the following script to install the rules. For a summary of what these rules do, see
“Example Rule Summary” on page 509. After running the script, you can optionally use the
Query Console database explorer to review the rules.
declareUpdate();
const rules = [
  { name: 'redact-name',
    content:
      {rule: {
        path: '//name',
        method: {function: 'mask-deterministic'},
        options: {length: 10}
      }},
    collection: 'deterministic'
  },
  { name: 'redact-alias',
    content:
      {rule: {
        path: '//alias',
        method: {function: 'mask-random'},
        options: {length: 10}
      }},
    collection: 'random'
  },
  { name: 'redact-address',
    content:
      {rule: {
        path: '//address',
        method: {function: 'conceal'}
      }},
    collection: 'conceal'
  },
  { name: 'redact-balance',
    content:
      {rule: {
        path: '//balance',
        method: {function: 'redact-number'},
        options: {min: 0, max: 100000, format: '0.00', type: 'decimal'}
      }},
    collection: 'balance'
  },
  { name: 'redact-anniversary',
    content:
      {rule: {
        path: '//anniversary',
        method: {function: 'redact-datetime'},
        options: {
          level: 'random',
          format: '[Y0001]-[M01]-[D01]',
          range: '1900,1999'
        }
      }},
    collection: 'datetime'
  },
  { name: 'redact-ssn',
    content:
      {rule: {
        path: '//ssn',
        method: {function: 'redact-us-ssn'},
        options: {level: 'partial'}
      }},
    collection: 'ssn'
  },
  { name: 'redact-phone',
    content:
      {rule: {
        path: '//phone',
        method: {function: 'redact-us-phone'},
        options: {level: 'full'}
      }},
    collection: 'phone'
  },
{ name: 'redact-email',
content:
{rule: {
path: '//email',
method: {function: 'redact-email'},
options: {level: 'name'}
}},
collection: 'email'
},
{ name: 'redact-ip',
content:
{rule: {
path: '//ip',
method: {function: 'redact-ipv4'},
options: {character: 'X'}
}},
collection: 'ip'
},
{ name: 'redact-id',
content:
{rule: {
path: '//id',
method: {function: 'redact-regex'},
options: {
pattern: '\\d{2}[-.\\s]\\d{7}',
replacement: 'NN-NNNNNNN'
}
}},
collection: 'regex'
}
];
rules.forEach(function (rule, i, a) {
xdmp.documentInsert(
'/rules/' + rule.name + '.json',
rule.content,
{ permissions: xdmp.defaultPermissions(),
collections: ['all', rule.collection] }
);
})
If you have not already done so, install the sample documents from “Preparing to Run the
Examples” on page 546. This example assumes they are installed in the Documents database.
4. Click Run.
The redacted documents will be displayed in Query Console. For a discussion of the expected
results, see “Review the Results” on page 517.
Change the example command line as needed to match your environment. The output directory
(./results) must not already exist.
The redacted documents will be exported to ./results. For a discussion of the expected results,
see “Review the Results” on page 517.
For more details on using mlcp with Redaction, see Redacting Content During Export or Copy
Operations in the mlcp User Guide.
Path selector    Built-in function
//name           mask-deterministic
//alias          mask-random
//address        conceal
//balance        redact-number
//anniversary    redact-datetime
//ssn            redact-us-ssn
//phone          redact-us-phone
//email          redact-email
//ip             redact-ipv4
//id             redact-regex
The following shows the effect on the sample document /redact-ex/person1.xml. The
redacted values you observe will differ from those shown if the rule generates a value, rather
than masking an existing value.
Before redaction:

<person>
  <name>Little Bopeep</name>
  <alias>Peepers</alias>
  <alias>Bo</alias>
  <address>
    <street>100 Nursery Lane</street>
    <city>Hometown</city>
    <country>Neverland</country>
  </address>
  <ssn>123-45-6789</ssn>
  <phone>123-456-7890</phone>
  <email>[email protected]</email>
  <ip>111.222.33.4</ip>
  <id>12-3456789</id>
  <birthdate>2015-01-15</birthdate>
  <anniversary>2017-04-18</anniversary>
  <balance>12.34</balance>
</person>

After redaction:

<person>
  <name>63a63aa762</name>
  <alias>47c1fc8b29</alias>
  <alias>7a314dcf2d</alias>
  <ssn>###-##-6789</ssn>
  <phone>###-###-####</phone>
  <email>[email protected]</email>
  <ip>XXX.XXX.XXX.XXX</ip>
  <id>NN-NNNNNNN</id>
  <birthdate>2015-01-15</birthdate>
  <anniversary>1930-05-13</anniversary>
  <balance>0.67</balance>
</person>
The following table illustrates the effect on the sample document /redact-ex/person3.json.
Note: The results in Query Console will not necessarily be in the order person1, person2,
person3, etc.
1. Write a redaction function in XQuery or Server-Side JavaScript, following the interface
conventions described below.
2. Install the function in the Modules database associated with your App Server. For details,
see “Installing a User-Defined Redaction Function” on page 520.
3. Define a rule that specifies your function. For syntax, see “Defining Redaction Rules” on
page 466.
For a complete example, see “Example: Using Custom Redaction Rules” on page 523.
The input node parameter is the node selected by the XPath expression in a rule using your
function. The options parameter can be used to pass user-defined data from the rule into your
function. Your function will return a node (redacted or not) or nothing.
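A minimal Server-Side JavaScript sketch of this interface (the function and parameter names
are illustrative; an XQuery implementation takes the same logical parameters):

// node:    the node selected by the rule's path expression
// options: user-defined data from the rule's options section
// returns: a replacement node, the original node, or nothing
function yourFunc(node, options) {
  return node; // no-op sketch: leave the value unredacted
}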
Define your function in an XQuery or JavaScript library module. Install the module in the
modules database associated with the App Server through which redaction will be applied. For
details, see “Installing a User-Defined Redaction Function” on page 520.
The following template is a suitable starting point for defining your own conforming
module. For a complete example, see “Example: Custom Redaction Using JavaScript” on
page 523 or “Example: Custom Redaction Using XQuery” on page 529.
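A minimal Server-Side JavaScript module sketch (the function body is illustrative; the
exported name must match the function named in your rule):

'use strict';

function yourFunc(node, options) {
  // Examine the input node; return a redacted replacement node,
  // the original node unchanged, or nothing to remove the node.
  return node;
}

exports.redact = yourFunc;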
The procedure outlined here makes the following assumptions. You will need to modify the
procedure and example code to match your environment and application requirements.
• The default document permissions are suitable for the module permissions.
Use a procedure similar to the following to install your XQuery module in the Modules database.
2. Paste the following script into Query Console. Modify the module URI and the path in the
xdmp:document-get line to match your environment.
(: MODIFY THE FILE SYSTEM PATH AND URI TO MATCH YOUR ENV :)
xquery version "1.0-ml";
xdmp:document-insert(
"/your/module/uri",
xdmp:document-get("/your/module/path/impl.xqy"),
<options xmlns="xdmp:document-insert">
<permissions>{xdmp:default-permissions()}</permissions>
</options>
)
5. Click the Run button. The module is installed in the Modules database.
You can use the Explore feature of Query Console to browse the Modules database and confirm
the installation.
The procedure outlined here makes the following assumptions. You will need to modify the
procedure and example code to match your environment and application requirements.
• The default document permissions are suitable for the module permissions.
Use a procedure similar to the following to install your JavaScript module in the Modules database.
2. Paste the following script into Query Console. Modify the module URI and the path in the
xdmp.documentGet line to match your environment.
// MODIFY THE FILE SYSTEM PATH and URI TO MATCH YOUR ENV
declareUpdate();
xdmp.documentInsert(
'/your/module/uri',
xdmp.documentGet('/your/module/path/impl.sjs'));
5. Click the Run button. The module is installed in the Modules database.
You can use the Explore feature of Query Console to browse the Modules database and confirm
the installation.
• Java: Managing Dependent Libraries and Other Assets in the Java Application Developer’s
Guide
• Node.js: Managing Assets in the Modules Database in the Node.js Application Developer’s
Guide
• REST: Managing Dependent Libraries and Other Assets in the REST Application Developer’s
Guide
Choose one of the following examples to explore using custom redaction rules.
For simplicity, this example only uses JavaScript and JSON. You can also write a custom
function to handle both XML and JSON. For a similar XQuery/XML example, see “Example:
Custom Redaction Using XQuery” on page 529.
Before running the example, install the sample documents from “Preparing to Run the Examples”
on page 546.
• Input Data
26.12.1.1 Input Data
The input documents have the following structure. The birthdate property is used to determine
whether or not to redact the name property.
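A sketch of the JSON shape, mirroring the XML sample used by the XQuery variant of this
example:

{
  "name": "any text",
  ...
  "birthdate": "YYYY-MM-DD"
}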
To install the sample documents, see “Preparing to Run the Examples” on page 546.
exports.redact = redactName;
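A sketch of the redactName function this line exports, assembled from the code fragments
discussed below (the parent-traversal details are assumptions, not the guide's exact code):

'use strict';

function redactName(node, options) {
  // The rule path selects the name property, so walk up to the
  // enclosing object to reach the sibling birthdate property.
  const parentNode = fn.head(node.xpath('parent::node()'));
  if (parentNode === null) {
    return node; // unexpected shape: leave the input unchanged
  }
  const parent = parentNode.toObject();
  if (!parent || !parent.birthdate) {
    return node; // no birthdate sibling: nothing to decide on
  }
  const birthday =
    xdmp.parseDateTime('[Y0001]-[M01]-[D01]', parent.birthdate);
  const age = Math.floor(fn.daysFromDuration(
    fn.currentDateTime().subtract(birthday)) / 365);
  if (age < 18) {
    // Underage, so redact: build a text node from the configured name.
    const builder = new NodeBuilder();
    builder.addText(options.newName);
    return builder.toNode();
  }
  return node; // 18 or older: return the input unredacted
}

exports.redact = redactName;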
3. Paste the following script into Query Console. Modify the path in the xdmp.documentGet
line to match the file location from Step 1.
6. Click the Run button. The module is installed in the Modules database with the URI
“/redaction/redact-json-name.sjs”.
You can use Query Console to explore the Modules database and confirm the installation.
The custom function expects to receive a JSON node corresponding to the node that is a candidate
for redaction. This node must be a child of an object that also has a birthdate property. This code
snippet implements this check:
...
Note that you could theoretically write the function to expect the parent object as input and have
the redaction rule use an XPath expression such as /name/parent::node(). However, such a rule
path is invalid if the rule is ever applied to an XML document, so we traverse up to the parent
node inside the redaction function instead of in the rule. For more details, see “Limitations on
XPath Expressions in Redaction Rules” on page 470.
The redaction function uses the birthdate property to compute the age. If the age is less than 18,
then the text in the name property is redacted. The value of the “newName” property in the options
object is used as the replacement text.
const birthday =
xdmp.parseDateTime('[Y0001]-[M01]-[D01]', parent.birthdate);
const age = Math.floor(fn.daysFromDuration(
fn.currentDateTime().subtract(birthday)) / 365);
if (age < 18) {
// underage, so redact
const builder = new NodeBuilder();
builder.addText(options.newName);
return builder.toNode();
}
Redaction functions must return a node, not a simple value. In this case, we need to return a JSON
text node that will replace the original input node. You cannot construct a text node from a native
JavaScript object, so the function uses a NodeBuilder to construct the return node.
These requirements are not specific to working with the root object node. Any time you have a
node as input and want to modify it as a native JavaScript type, you need to use toObject.
Similarly, you must always return a node, not a native JavaScript value.
These instructions assume you will use the pre-installed App Server on localhost:8000 and the
Documents database, which is configured to use the Schemas database. This example uses
Server-Side JavaScript and Query Console to install the rule, but you can use any document
insertion interface.
2. Paste the following script into a new query tab in Query Console.
declareUpdate();
xdmp.documentInsert('/rules/redact-name.json',
{ rule: {
path: '/name',
method: {
function: 'redact',
module: '/redaction/redact-json-name.sjs'
},
options: { newName: 'Jane Doe' }
}},
{ permissions: xdmp.defaultPermissions(),
collections: ['custom-rules'] }
);
5. Click the Run button. The rule document is installed with the URI
“/rules/redact-name.json” and added to the “custom-rules” collection.
The path expression in the rule selects the name property for redaction. Since the custom function
uses the birthdate sibling property of name to control the redaction, it would be more natural in
some ways to apply the rule to the parent object. However, the parent object is anonymous, so it
cannot be addressed by name in an XPath expression.
An XPath expression such as /name/parent::node() would select the anonymous parent object,
but it will cause an error if the rule is ever applied to an XML document. Since we have a mixed
XML and JSON document set, we choose to write the rule and the custom function to use the name
property as the redaction target.
The custom function is identified in the rule by exported function name and the URI of the
implementation installed in the modules database:
method: {
function: 'redact',
module: '/redaction/redact-json-name.sjs'
}
The options property contains a single child, newName, whose value is used as the replacement
for any redacted name properties, as shown in the rule above.
For a similar XQuery/XML example of defining and installing a rule that uses a custom function,
see “Example: Custom Redaction Using XQuery” on page 529.
2. Paste the following script into a new query tab in Query Console:
const jsearch = require('/MarkLogic/jsearch.sjs');
const rdt = require('/MarkLogic/redaction');

jsearch.collections('personnel').documents()
  .map(function (match) {
    // Redact the enclosing document, then unwrap the result back
    // to the root node expected by the search results.
    match.document = fn.head(
      rdt.redact(fn.root(match.document), 'custom-rules')
    ).root;
    return match;
  }).result();
5. Click the Run button. The rules in the “custom-rules” collection are applied to the
documents in the “personnel” collection.
If you use the sample documents from “Preparing to Run the Examples” on page 546, running the
script will have the following effect on the search result matches:
Note that the node passed to rdt.redact is obtained by applying fn.root to match.document.
rdt.redact(fn.root(match.document), 'custom-rules')
The rdt.redact function expects a document node as input, whereas match.document is the root
node under the document node, such as a JSON object-node or XML element node. In the context
of DocumentsSearch.map, the node in match.document is an in-database node, not an in-memory
construct, so we can access the enclosing document node using fn.root, as shown above.
A similar technique is used, in reverse, to save the redaction result back into the search results:
match.document = fn.head(rdt.redact(...)).root;
This is necessary because the rdt.redact function returns a Sequence of in-memory document nodes.
To save the redacted content in the expected form, we access the first node in the Sequence with
fn.head, and then “dereference” it using the “.root” property so that match.document again
contains the root node under the document node.
For more details, see Redacting Content During Export or Copy Operations in the mlcp User Guide.
If you use the sample documents from “Preparing to Run the Examples” on page 546, running the
script will create 4 files in the directory ./mlcp-output.
These files will reflect the following effects relative to the input documents:
This example only uses XQuery and XML. You can write a custom function to handle both
XML and JSON, but you might find it more convenient to use XQuery for XML and Server-Side
JavaScript for JSON. For an equivalent JavaScript/JSON example, see “Example: Custom
Redaction Using JavaScript” on page 523.
Before running this example, you must install the sample documents from “Preparing to Run the
Examples” on page 546.
• Input Data
26.12.2.1 Input Data
The input documents have the following structure. The birthdate element is used to determine
whether or not to redact the name element.
<person>
<name>any text</name>
...
<birthdate>YYYY-MM-DD</birthdate>
</person>
To install the sample documents, see “Preparing to Run the Examples” on page 546.
3. Paste the following script into Query Console. Modify the path in the xdmp:document-get
line to match the file location from Step 1.
6. Click the Run button. The module is installed in the Modules database with the URI
“/redaction/redact-xml-name.xqy”.
You can use Query Console to explore the Modules database and confirm the installation.
The custom function expects to receive a <person/> node as input and options that include a
“new-name” key specifying the replacement name value.
The function uses the birthdate element to compute the age. If the age is less than 18, then the
text in the name element is redacted.
If the input does not have the expected “shape” or the age is 18 or older, the input node is
returned, unchanged.
For a similar JavaScript-based solution, see “Example: Custom Redaction Using JavaScript” on
page 523.
These instructions assume you will use the pre-installed App Server on localhost:8000 and the
Documents database, which is configured to use the Schemas database. This example uses
XQuery and Query Console to install the rule, but you can use any document insertion interface.
2. Paste the following script into a new query tab in Query Console.
5. Click the Run button. The rule document is installed with URI “/rules/redact-name.xml”
and added to the “custom-rules” collection.
Recall that the sample documents are rooted at a <person/> element, so the rule selects the entire
contents by using “/person” as the path value. This enables the redaction function to easily
examine /person/birthdate, as well as modify /person/name.
The custom function is identified in the rule by function name, module URI, and module
namespace:
<rdt:method>
<rdt:function>redact</rdt:function>
<rdt:module>/redaction/redact-xml-name.xqy</rdt:module>
<rdt:module-namespace>
http://marklogic.com/example/redaction
</rdt:module-namespace>
</rdt:method>
The options element contains a single element, new-name, that is used as the replacement value for
any redacted name elements:
<rdt:options>
<new-name>John Doe</new-name>
</rdt:options>
For a similar JavaScript/JSON example of defining and installing a rule that uses a custom
function, see “Example: Custom Redaction Using JavaScript” on page 523.
2. Paste the following script into a new query tab in Query Console:
5. Click the Run button. The rules in the “custom-rules” collection are applied to the
documents in the “personnel” collection.
If you use the sample documents from “Preparing to Run the Examples” on page 546, running the
script will return the following:
For more details, see Redacting Content During Export or Copy Operations in the mlcp User Guide.
If you use the sample documents from “Preparing to Run the Examples” on page 546, running the
script will create 4 files in the directory ./mlcp-output. These files will reflect the following
effects relative to the input documents:
A redaction dictionary in JSON format has the following form (the XML form is analogous,
expressing the values as entry elements):

{ "dictionary": {
    "entry": [
      value,
      ...
    ]
}}
Certain structural requirements apply to dictionaries. If these requirements are not met, you
will get an RDT-INVALIDDICTIONARY error when you use the dictionary.
The following example is a trivial dictionary containing four entries of various types. For a
complete example, see “Example: Dictionary-Based Masking” on page 536.
{ "dictionary": {
    "entry": [
      "a phrase",
      "a_term",
      1234,
      true
    ]
}}
Install the dictionary using the same techniques discussed in “Installing Redaction Rules” on page 477.
For security purposes, use document permissions to carefully control who can read or modify
your dictionary. For more details, see “Security Considerations” on page 464.
For example, the mask-deterministic and mask-random built-in redaction functions support a
dictionary option, so you can draw values from a dictionary with a rule similar to the following:
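A minimal JSON sketch (the dictionary URI /rules/dict/names.json is hypothetical):

{ "rule": {
    "path": "//name",
    "method": { "function": "mask-deterministic" },
    "options": { "dictionary": "/rules/dict/names.json" }
}}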
For more details, see “Built-in Redaction Function Reference” on page 483. For a complete
example, see “Example: Dictionary-Based Masking” on page 536.
• The mask-deterministic function and a JSON dictionary are applied to the street XML
element or JSON property of the sample data.
• The mask-random function and an XML dictionary are applied to the country XML element or
JSON property of the sample data.
Before running this example, you must install the sample documents from “Preparing to Run the
Examples” on page 546.
4. Click Run. The dictionaries are installed in the Schemas database with the URIs
/rules/dict/countries.xml and /rules/dict/streets.json.
5. Optionally, use the Query Console database explorer to review the dictionaries.
4. Click Run. The rules are installed in the Schemas database with the URIs
/rules/randomize-country.xml and /rules/redact-street.json.
5. Optionally, use the Query Console database explorer to review the rules.
}}'
)
return xdmp:document-insert(
$ruleURI, $rule,
<options xmlns="xdmp:document-insert">
<permissions>{xdmp:default-permissions()}</permissions>
<collections>
<collection>dict</collection>
<collection>dict-deter</collection>
</collections>
</options>
);
4. Click Run. The rules are installed in the Schemas database with the URIs
/rules/randomize-country.xml and /rules/redact-street.json.
5. Optionally, use the Query Console database explorer to review the rules.
declareUpdate();
4. Click Run. The redacted street and country names from each document are displayed.
You will see output similar to the following, though the values may vary.
If you run the script again, the values for the street names will not change because they are
redacted using mask-deterministic. The values for the countries will change with each run since
they are redacted using mask-random.
// Extract the redacted street and country data for display purposes
const displayAccumulator = ['*** STREETS ***'];
for (let doc of results) {
displayAccumulator.push(doc.xpath('//street/data()'));
}
displayAccumulator.push('*** COUNTRIES ***');
for (let doc of results) {
displayAccumulator.push(doc.xpath('//country/data()'));
}
4. Click Run. The redacted street and country names from each document are displayed.
You will see output similar to the following, though the values may vary.
Germany
France
If you run the script again, the values for the street names will not change because they are
redacted using mask-deterministic. The values for the countries will change with each run since
they are redacted using mask-random.
The mask-deterministic function supports applying a salt to masking value generation via the
salt and extend-salt options described earlier in this chapter. You can use them individually
or together.
For example, consider the following rules that apply equivalent redaction logic to two different
paths, using no salt:
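A JSON sketch of what such a pair of salt-less rules might look like (the length value and the
explicit salt settings are illustrative assumptions):

{ "rule": {
    "path": "//pii1",
    "method": { "function": "mask-deterministic" },
    "options": { "length": 20, "salt": "", "extend-salt": "none" }
}}

{ "rule": {
    "path": "//pii2",
    "method": { "function": "mask-deterministic" },
    "options": { "length": 20, "salt": "", "extend-salt": "none" }
}}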
If you apply these rules to the following documents, both produce the same masking value by
default for the input “John Smith”:
Input documents:

<data>
  <pii1>John Smith</pii1>
</data>
<data>
  <pii2>John Smith</pii2>
</data>

Redacted output:

<data>
  <pii1>6c50dad68163a7a079db</pii1>
</data>
<data>
  <pii2>6c50dad68163a7a079db</pii2>
</data>
An attacker could use a similar “salt-less” rule to generate a lookup table that indicates “John
Smith” redacts to “6c50dad68163a7a079db”. That knowledge can be used to reverse engineer
redacted output.
If you then add a salt to one of the rules (for example, to the //pii1 rule), the masking values
generated by the two rules differ as shown below. An attacker cannot deduce the relationship
between the redacted value (“89d7499b154a8b81c17f”) and the input value (“John Smith”) without
also knowing the salt.
Input documents:

<data>
  <pii1>John Smith</pii1>
</data>
<data>
  <pii2>John Smith</pii2>
</data>

Redacted output:

<data>
  <pii1>89d7499b154a8b81c17f</pii1>
</data>
<data>
  <pii2>6c50dad68163a7a079db</pii2>
</data>
By default, the extend-salt option is set to cluster-id and the salt option is empty. This means
that equivalent rules applied on the same cluster will generate the same output, but the same
values would not be generated on a different cluster.
Similarly, setting extend-salt to collection means that an attacker who has access to one rule set
cannot generate a lookup table that can be used to reverse engineer redacted values generated by a
different rule set.
The following outlines the impact of various salt and extend-salt option combinations,
assuming all other options are the same.

salt: empty (default); extend-salt: none
For a given input, all rules with no salt value produce the same output.

salt: any value; extend-salt: none
For a given input, all rules with the same salt value produce the same output.

salt: empty; extend-salt: cluster-id
For a given input, a rule applied in cluster C produces the same output as other rules with
no salt applied in cluster C. Any rule specifying a non-empty salt applied in cluster C
produces different output, as does any rule applied in a different cluster.

salt: any value; extend-salt: cluster-id
For a given input, a rule applied in cluster C only produces the same output as other rules
with the same salt applied in cluster C. Any rule with a different or no salt applied in
cluster C produces different output, as does any rule applied in a different cluster.

salt: empty; extend-salt: collection
For a given input, any rule in rule collection R produces the same output as other rules in R
that do not specify a salt. Rules in another rule collection produce different output.

salt: any value; extend-salt: collection
For a given input, a rule in rule collection R only produces the same output as other rules
in R with the same salt. Rules in another rule collection produce different output, even with
the same salt.
Change the example command line as needed to match your environment. The output directory
(./dict-results) must not already exist.
The redacted documents will be exported to ./dict-results. The //street and //country values
will reflect values from the street and country dictionaries, respectively.
The redacted street values will be the same each time you export the documents because they are
redacted using mask-deterministic. The redacted country values will change each time you
export the documents because they are redacted using mask-random.
For more details on using mlcp with Redaction, see Redacting Content During Export or Copy
Operations in the mlcp User Guide.
The documents are inserted into collections so they can easily be selected for redaction. The
“personnel” collection contains all the samples. The “xml-people” collection includes only the
XML samples. The “json-people” collection includes only the JSON samples.
When you complete the steps in this section, your Documents database will contain the following
documents. The collection names are shown in parentheses after the URI in the following list.
2. Paste the following script into a new query tab in Query Console:
xquery version "1.0-ml";
(: Install the first sample document; the remaining samples
   follow the same pattern. :)
xdmp:document-insert(
  "/redact-ex/person1.xml",
  <person>
<name>Little Bopeep</name>
<alias>Peepers</alias>
<alias>Bo</alias>
<address>
<street>100 Nursery Lane</street>
<city>Hometown</city>
<country>Neverland</country>
</address>
<ssn>123-45-6789</ssn>
<phone>123-456-7890</phone>
<email>[email protected]</email>
<ip>111.222.33.4</ip>
<id>12-3456789</id>
<birthdate>2015-01-15</birthdate>
<anniversary>2017-04-18</anniversary>
<balance>12.34</balance>
</person>,
<options xmlns="xdmp:document-insert">
<permissions>{xdmp:default-permissions()}</permissions>
<collections>
<collection>personnel</collection>
<collection>xml-people</collection>
</collections>
</options>
);
5. Click the Run button. The sample documents are installed in the Documents database.
6. Optionally, click Explore next to the Database dropdown to explore the database and
confirm insertion of the sample documents.
27.0 Copyright
The MarkLogic software is protected by United States and international copyright laws, and
incorporates certain third party libraries and components which are subject to the attributions,
terms, conditions and disclaimers set forth below.
For all copyright notices, including third-party copyright notices, see the Combined Product
Notices for your version of MarkLogic.
MarkLogic provides technical support according to the terms detailed in your Software License
Agreement or End User License Agreement.
Complete product documentation, the latest product release downloads, and other useful
information is available for all developers at http://developer.marklogic.com. For technical
questions, we encourage you to ask your question on Stack Overflow.