ETP Specification v1.2 Doc v1.1
Specification v1.2
Acknowledgements
Energistics would like to thank the Energistics Architecture Team, who drove ETP design, and the
members of the WITSML, RESQML and PRODML Special Interest Groups (SIGs) who participate in the
design, testing, review, and implementation of ETP.
Amendment History
Standard Version Document Version Date Comment
1.2 1.1 Sept 27, 2021 Documentation update (clarification) only. No
schema change.
For channel data protocols:
Content was added to clarify that the primary
index is or must be first, for example, in an array
of channel data.
For range replace messages, content was added
to explicitly state expected data and order (similar
to content for ChannelData messages).
Chapters/protocols impacted:
Chapter 6, ChannelStreaming (Protocol 1)
(Sections: 6.1.1, 6.2.2 (Row 3), 6.3.4)
Chapter 7, ChannelDataFrame (Protocol 2)
(Sections: 7.2.2 (Row 8), 7.3.2, 7.3.6)
Chapter 19, ChannelSubscribe (Protocol 21)
(Sections 19.2.2 (Row 7), 19.3.5, 19.3.7, 19.3.11)
Chapter 20, ChannelDataLoad (Protocol 22)
(Sections 20.2.2 (Row 6), 20.3.6, 20.3.7)
Chapter 23, ETP Datatypes (Section 23.33.7)
1.2 1.0 Sept. 9, 2021 Publication of ETP v1.2.
Version v1.2 is an extensive redesign and expansion
of ETP v1.1. For the summary of changes, see
Section 2.1.
Table of Contents
1 Introduction to ETP
1.1 Working with Different Energistics Data Models
1.2 Support for Multiple Versions of ETP
1.3 Overview of Supported Use Cases
1.4 ETP Design Principles
1.4.1 Design Decisions for ETP v1.2
1.5 Document Details
1.5.1 How to Use This Document (IMPORTANT: Read This!)
1.5.2 Recommendation for Using the PDF
1.5.3 Parts of this Document Are Created from the ETP UML Model
1.5.4 Documentation Conventions
1.6 ETP Resources Available for Download
2 Published ETP Protocols and Summary of Changes
2.1 Summary of Changes from ETP v1.1 to v1.2
2.1.1 The Specification Document has been Reorganized and Improved
2.1.2 Things that Have Been Removed from ETP
2.1.3 Improved/Redesigned ETP Sub-Protocols and New Features
2.1.4 New ETP Sub-Protocols
2.1.5 Error Codes Have Been Significantly Revised
3 Overview of ETP and How it Works (Crucial—read this chapter!)
3.1 ETP Overview: Big Picture
3.1.1 Sub-Protocols defined by ETP
3.1.2 Endpoints and Roles
3.1.3 Session
3.1.4 Data objects, Resources, and Identifiers (UUIDs, URIs, and UIDs)
3.1.5 ETP Messages
3.1.6 Security
3.2 Sessions: HTTP, WebSocket and ETP
3.2.1 Why WebSocket for Transport?
3.3 Capabilities: Endpoint, Protocol, Server and Data Object
3.3.1 How Protocol and Endpoint Capabilities Work
3.3.2 "Global" Capabilities
3.3.3 Support for ETP Optional Functionality
3.3.4 Data Object Capabilities: How They Work
3.3.5 ADVISORY: Implication of Capabilities and Required Behavior for Stores
3.4 ETP Message Approach
3.4.1 Messages are Defined by Avro Schemas
3.4.2 General Message Types and Naming Conventions
3.5 ETP Message Format and Basic Sequence Requirements
3.5.1 Overview of an ETP Message
3.5.2 General Requirements for ETP Message Format
3.5.3 General Sequence for ETP Request/Response Messages
3.5.4 ETP Message Header
3.5.5 ETP Message Body
3.5.6 Mechanisms to Limit Message Size
3.5.7 Message Compression
3.6 ETP Extension Mechanisms
3.6.1 Custom Protocols and Capabilities
3.6.2 MessageHeaderExtension
1 Introduction to ETP
Energistics Transfer Protocol (ETP) is a data transfer specification that enables the efficient transfer of
data between two software applications (endpoints), which includes real-time streaming. ETP has been
specifically envisioned and designed to meet the unique needs of the upstream oil and gas industry and,
more specifically, to facilitate the exchange of data in the Energistics family of data standards, which
includes: WITSML (well/drilling), RESQML (earth/reservoir modeling), PRODML (production), and EML
(the data objects defined in Energistics common, which is shared by the other three domain standards).
Initially designed to be the API for WITSML v2.0, ETP is now part of the Energistics Common Technical
Architecture (CTA).
ETP defines a publish/subscribe mechanism so that data receivers do not have to poll for data and can
receive new data as soon as they are available from a data provider, which reduces data on the wire and
improves data transmission efficiency and latency. Additionally, ETP functionality includes data discovery,
real-time streaming, store (CRUD) operations, and historical data queries, among others.
For the list of protocols published in the current version of ETP and list of changes since the previous
version, see Chapter 2.
For an overview of how ETP works, see Chapter 3.
- Section 3.1 is a big-picture overview that defines the main concepts and constructs in ETP and
how those "pieces" work together; both developers and business people who want a high-level
understanding of "how ETP works" should find it useful.
- The remaining sections in Chapter 3 are details for developers.
These use cases share some common features, which can include:
Potentially long-lived sessions: An ETP session (which represents a single WebSocket
connection) may be expected to last anywhere from minutes to months.
Dynamic data: Over the lifetime of a session, many changes—including deletions, additions,
authorizations and "de-authorizations"—may happen to data available in the session’s endpoints.
The variations in these use cases also fall into broad categories, which also impact the design and
implementation of ETP. These variations include:
End-User Driven: Scenarios where an end user is using an application that is an ETP client
connected to an ETP server.
Machine-to-Machine: Scenarios where a background service is operating an ETP client connected
to an ETP server.
Partner Data Sharing: Scenarios where the ETP clients and servers belong to unrelated companies.
Reverse Data Flows: Scenarios where ETP clients act as data stores and ETP servers act as data
customers.
a. This principle does not extend to bulk data such as channel data, growing object parts, and array data.
ETP provides support for adding, editing, or removing subsets of bulk data in specialized protocols.
1.5.3 Parts of this Document Are Created from the ETP UML Model
ETP has been designed and developed using UML® implemented with Enterprise Architect (EA), a UML
modeling tool from Sparx Systems. The schemas and some of the content in this specification (the
example message schemas for each protocol and Datatypes in Chapter 23) have been generated from
the UML model.
If any discrepancy exists between the schema and the specification, the schema is the primary
source (though all content should be consistent because it is produced from the same source.)
NOTE: Only this document provides definitions of data fields in the schemas.
Content in this specification should be considered "normative" unless otherwise specified.
b. Message names are in bold, italic text (and Pascal case); EXAMPLE: the OpenSession
message.
c. Field names are in italics (and camel case); EXAMPLE: the serverInstanceId.
3. Energistics domain standards—WITSML (well/drilling), RESQML (earth/reservoir modeling),
PRODML (production operations and reporting)—are informally referred to as the "MLs".
a. Name spaces for the MLs include the ML name and version number. EXAMPLE: witsml20.
b. Each domain standard has a package of shared data objects defined in Energistics common
(which, in this document, is always referred to as shown here: Energistics common). Objects
defined in Energistics common have a namespace of eml plus the version of common
EXAMPLE: For Energistics common v2.1 the namespace is eml21.
4. Error codes. For brevity in this specification, when an error condition is described, the text states
"send error Name (N)" where Name and N are actual error code names and numbers, such as "send
error EUNSUPPORTED_PROTOCOL (4)" as defined by this specification (see Chapter 24). The
error code is sent in the Protocol Exception message, which is defined in Core (Protocol 0) but is
used in any protocol when an error occurs. For more information, see Section 3.7.2.1 and Section
5.3.8.
5. Extensive use of numbering. In addition to chapter and section numbering, main steps, paragraphs,
table rows, and key points are numbered in this document.
a. In some cases (EXAMPLE: The task/message sequence section in each protocol-specific
chapter) the numbers are used to show sequence.
b. In other cases, items have been numbered for easy reference (i.e., when discussing with a
colleague or reporting an issue, you can refer to "Section 3.7.3, Paragraph 2.b.ii"). Use of
numbering makes it easier to link to very specific content; when possible, that has been done.
6. Energistics documentation is produced using U.S. English spelling and punctuation conventions.
Document/Resource Description
1. ETP Specification v1.2 (This document): Provides an overview of ETP, its business purpose,
supported use cases, design, etc. It is located in the doc folder of the ETP download package.
Defines key concepts, messages, field definitions, and behaviors of ETP. For full understanding
of ETP, the specification MUST be used in conjunction with the schemas.
NOTE: Only this document provides the definitions of the data fields in the schemas.
2. Schemas: Avro schemas as described in this document. The download package organizes the
ETP schemas into 2 main groups (folders) plus a standalone file:
Protocols: A folder for each ETP sub-protocol, which contains the message schemas for
messages defined in those protocols.
Datatypes: A set of folders for the low-level data structures, which are used to define the ETP
messages. It contains data types defined by both Avro and ETP.
etp.avpr file: A single file that contains all schemas.
3. Proxy classes: The src folder contains proxy classes for the following:
C#
Java
4. ETP DevKit (NOT in the ETP download): A .NET library providing a common foundation and the
basic infrastructure for implementing ETP. For more information and to download a copy, go to
this link at the Energistics website:
https://www.energistics.org/developer-resources/
5. ML-specific implementation specifications (These will be made available when published.): Each
version of each Energistics domain standard (i.e., WITSML, RESQML and PRODML) has or will
have its own implementation specification that provides any ML-specific details required to use a
particular version of ETP with a particular version of an ML/data model.
New Appendixes:
Appendix: Energistics Identifiers documents the specific requirements for identifying
Energistics data objects in the domain standards and in ETP, predominantly by using URIs.
Most URIs use the formats specified in the appendix, which are referred to as the canonical
Energistics URIs. NOTE: For ETP v1.2, this content supersedes the Energistics Identifier
Specification, v4.0.
Appendix: Data Replication and Outage Recovery Workflows describes high-level workflows
for the stated tasks and also provides additional information about why new features have been
added to ETP v1.2 (for example, the storeLastWrite field, which is a key component for these
workflows) and how some of the sub-protocols and new features are intended to work.
- Appendix: Security Requirements and Rationale for the Current Approach
provides a high-level summary of requirements for the new security design and a brief
explanation as to why the other security standards that were considered were not selected.
Core (Protocol 0) Some new data fields have been added to existing messages, for example, timestamps
to support clock-based eventual consistency (data replication) workflows and changed
or new fields to support new security behavior (see Section 2.1.3.5).
o RenewToken message has been renamed to Authorize and a new
AuthorizeResponse message has been added; these messages are used for
initial authorization of an ETP session and for renewal of Bearer Tokens.
New Ping and Pong messages to also support clock-based eventual consistency (data
replication) workflows.
Support for multiple versions of ETP:
o Server can support both ETP v1.1 and ETP v1.2.
o Client can choose which version of ETP it wants to use.
An ETP session CANNOT use sub-protocols from different versions of ETP.
(That is, an ETP session is now with one version of ETP and all sub-protocols in that
version).
Placeholder support for exchanging data objects in JSON or other formats.
More granular object support.
More secure session identifiers.
Message compression support (vs. object compression support in ETP v1.1).
Addition of endpoint and data object capabilities (in addition to protocol capabilities).
o WebSocket limits are exchanged and must be respected.
Error handling: ProtocolException messages now have 2 modes:
o Single error.
o Map of error messages relating back to a map of multiple request items, which
allows some of the requests to pass/fail (instead of the entire request failing).
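The two ProtocolException modes can be illustrated with a small sketch. Python is used here purely for illustration; the real message is defined by an Avro schema, and the class and field names below are simplifications of the general shape described above, not the schema itself.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ErrorInfo:
    message: str
    code: int

@dataclass
class ProtocolException:
    # Mode 1: a single error that applies to the whole request.
    error: Optional[ErrorInfo] = None
    # Mode 2: a map of errors keyed back to the failing request items,
    # which lets the remaining items in the request succeed.
    errors: Dict[str, ErrorInfo] = field(default_factory=dict)

# Mode 1: the entire request fails with one error.
whole_request_failed = ProtocolException(
    error=ErrorInfo("Read-only server; operation not allowed.", 6))

# Mode 2: only the listed request items fail; the others may still succeed.
partial_failure = ProtocolException(errors={
    "request-item-2": ErrorInfo("Data object too large.", 17),
})
```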
ChannelStreaming (Protocol 1) Now for "simple streaming" only, so many messages have been removed and the message
names and behavior have been simplified. The "standard streaming" capabilities have been
moved to two new protocols (ChannelSubscribe (Protocol 21) and ChannelDataLoad
(Protocol 22)), which are explained in Section 2.1.4.
Removed all ‘discovery’ aspects previously in this protocol; all discovery operations are
done using Discovery (Protocol 3).
Removed the notification aspects (channel status changes and notifications of added /
removed channels); all relevant notifications are now done with StoreNotification
(Protocol 5).
Data rate-throttling limits were removed.
Streaming of growing object parts is no longer allowed (as was the case in ETP v1.1).
Discovery (Protocol 3) Design change for the discovery operation to "walk" the data model as a graph.
Can now discover data objects (nodes on the graph) and relationships between them
(edges that connect the nodes).
Changing this protocol was a major redesign to properly support cross-domain
workflows and all Energistics data models.
Added support for discovering deleted objects.
Moved support for dataspace discovery to Dataspaces (Protocol 24) and model
discovery to SupportedTypes (Protocol 25).
Store (Protocol 4) Message names and functionality have been changed significantly. Clear
naming-convention patterns (see Section 3.4.2).
ChannelDataFrame (Protocol 2) Gets channel data from a store in "rows". Supports the log on-disk use case.
GrowingObjectNotification (Protocol 7) Allows store customers to receive notification of changes to parts of growing data
objects in the store in an event-driven manner, from events in Protocol 6
(GrowingObject).
Where applicable, consistent in design with StoreNotification (Protocol 5)
DiscoveryQuery (Protocol 13) Query behavior for discovery operations.
StoreQuery (Protocol 14) Query behavior added for store operations.
GrowingObjectQuery (Protocol 16) Query behavior for parts within a growing data object.
Transaction (Protocol 18) Handles messages associated with software application transactions, for example,
end messages for applications that may have long, complex operations (typically
associated with earth modeling/RESQML).
ChannelSubscribe (Protocol 21) The "read/get" behavior for channel data, this protocol provides standard
publish/subscribe behavior.
In ETP v1.1, some of this behavior was previously in ChannelStreaming
(Protocol 1), which is now only for simple streamers.
Significant redesign to improve efficiency and outage recovery.
Added previously missing functionality (from WITSML v1.x and ETP v1.1),
including synchronization/historical change detection features.
ChannelDataLoad (Protocol 22) The "write/put" behavior for channel data; this protocol allows an endpoint with the
customer role to connect to an endpoint with the store role and push/load data to it.
EMAX_TRANSACTIONS_EXCEEDED (15)
EDATAOBJECTTYPE_NOTSUPPORTED (16)
EMAXSIZE_EXCEEDED (17)
EMULTIPART_CANCELLED (18)
EINVALID_MESSAGE (19)
EINVALID_INDEXKIND (20)
ENOSUPPORTEDFORMATS (21)
EREQUESTUUID_REJECTED (22)
EUPDATEGROWINGOBJECT_DENIED (23)
EBACKPRESSURE_LIMIT_EXCEEDED (24)
EBACKPRESSURE_WARNING (25)
ETIMED_OUT (26)
EAUTHORIZATION_REQUIRED (27)
EAUTHORIZATION_EXPIRING (28)
ENOSUPPORTEDDATAOBJECTTYPES (29)
ERESPONSECOUNT_EXCEEDED (30)
EINVALID_APPEND (31)
EINVALID_OPERATION (32)
ERETENTION_PERIOD_EXCEEDED (5001)
ENOTGROWINGOBJECT (6001)
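For illustration, an implementation might carry these codes as named constants. The following minimal Python sketch uses a handful of the codes listed above (the enum and helper function are hypothetical, not part of the specification):

```python
from enum import IntEnum

class EtpErrorCode(IntEnum):
    """A subset of the ETP v1.2 error codes listed above."""
    EMAX_TRANSACTIONS_EXCEEDED = 15
    EMAXSIZE_EXCEEDED = 17
    EBACKPRESSURE_LIMIT_EXCEEDED = 24
    ETIMED_OUT = 26
    ERETENTION_PERIOD_EXCEEDED = 5001
    ENOTGROWINGOBJECT = 6001

def describe(code: EtpErrorCode) -> str:
    """Format a code in the 'send error Name (N)' convention this
    specification uses when describing error conditions."""
    return f"send error {code.name} ({code.value})"
```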
Like most modern communication protocols, ETP uses a layered approach and sits on top of the existing
Transmission Control Protocol (TCP) layered model (Figure 1, gray boxes) (for a simplified stack
diagram, see Figure 2). Thus, the concept of protocol is used in many contexts throughout this document,
and the notion of a sub-protocol is used to discuss protocols that sit (somewhat un-intuitively) just above
another protocol in the stack.
IMPORTANT! In this document, the terms protocol and sub-protocol are used interchangeably, for any
layer, depending on context.
Figure 2 shows that ETP is itself a sub-protocol of the WebSocket protocol and that ETP also has its own
layers and sub-protocols, each designed to carry specific data that follows a specific pattern (in terms of
size, frequency, and variability of data, as described above). Core (Protocol 0) has a direct connection to
WebSocket and is agnostic to the various kinds of messages that are carried in each of its own sub-
protocols. (For more information, see Section 3.9.1 How ETP is Bound to WebSocket.) This layered
approach allows for separation of concerns between the various parts of the stack and supports the
adoption of future standards that may be developed lower in the stack.
b. Where ETP endpoint, protocol or data object capabilities allow, applications may advertise and
impose certain limits on the functionality they support.
4. If a sending endpoint requests an action for a protocol that the receiving endpoint does not allow, the
receiving endpoint MUST send either the specific error code defined by the relevant part of this
specification or, if no specific error code is defined, EREQUEST_DENIED (6) or an appropriate
custom error code.
RECOMMENDATION: Endpoints SHOULD supply an error message explaining why the request was
denied. For example, for read-only servers (which do not allow Put operations), the explanation could
be "Read-only server; operation not allowed."
NOTE: Error codes are sent in a ProtocolException message, which is defined in Core (Protocol 0),
but is used in the protocol where the error occurred. For more information about ProtocolException
messages and how they work, see Section 3.7.2.1.
3.1.3 Session
ETP includes the notion of a session, which is an established WebSocket connection between a client
and server that is open for a period of time. Each endpoint maintains information for the life of the session
(as explained in other sections of this specification).
When the ETP session is established, the client and server (in their respective ETP protocol-specific
roles) may begin using the sub-protocols and data objects negotiated in the session to perform the
required operations. The operation of each ETP sub-protocol is covered in Chapters 5–22 of this
document.
For more information about ETP sessions, see Section 3.2.
IMPORTANT! ETP endpoints MUST have clocks. Workflows for reconnecting after a dropped connection
and eventual consistency between stores are based on these endpoints being able to assess changes
and retrieve changed (historical) data since a particular time.
For more information about use of time and timestamps in ETP, see Section 3.12.5.
For more information about workflows based on using these timestamps, see Appendix: Data
Replication and Outage Recovery Workflows.
3.1.4 Data objects, Resources, and Identifiers (UUIDs, URIs, and UIDs)
Operations in ETP are performed on data objects that represent real-world business objects, like wells,
horizons, or production volumes. These data objects are defined by the Energistics domain standards,
WITSML, RESQML and PRODML.
However, for efficiency of operations, initial inquiries in ETP often return a resource, which is a lighter
weight meta-object based on the content of the actual instance of a data object.
Energistics specifies and requires these main types of identifiers: UUID, URI, and UID. (For more
information about Energistics identifiers, see Appendix: Energistics Identifiers.)
UUID. In Energistics domain models, an instance of a data object is uniquely identified with a UUID.
In most messages and records, a UUID must be of datatype Uuid (Section 23.6).
URI. In ETP, an instance of a data object MUST be identified with a URI.
Energistics specifies canonical URIs (e.g., for data objects, data spaces, and data object
queries), which MUST be supported.
IMPORTANT! In most cases in this specification, when the customer has to provide a URI (for
example, in a request message) it must be the canonical Energistics URI (Section 25.3.5).
ETP also supports use of alternate URI formats. If an endpoint supports them, and their use is
established in an ETP session, alternate URI formats may be used in subsequent requests in the
ETP session.
- For more information about rules and usage for URIs in ETP, see Section 3.7.4.
UID. In Energistics domain models, some data objects have one or more collections of sub-objects or
parts. A UID uniquely identifies one sub-object or part within its collection. A UID may or may not be
in the form of a UUID. EXAMPLE: A Trajectory has a collection of TrajectoryStations, and each
TrajectoryStation has a UID that is different from the UIDs of all other TrajectoryStations in that
Trajectory.
Some ETP sub-protocols use UIDs to refer directly to a specific part or sub-object within a data
object.
- Other IDs like message IDs, channel IDs and map keys are discussed elsewhere in this
document.
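The UID uniqueness rule above (unique within one collection of parts only, e.g. among the TrajectoryStations of one Trajectory) can be sketched as a simple check. This is illustrative Python, not part of the specification:

```python
from typing import Iterable

def assert_unique_uids(uids: Iterable[str]) -> None:
    """Verify that UIDs are unique within a single collection of parts.
    A UID need only be unique within its own collection; the same UID
    may appear in a different collection."""
    seen = set()
    for uid in uids:
        if uid in seen:
            raise ValueError(f"duplicate UID within collection: {uid}")
        seen.add(uid)
```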
3.1.6 Security
In any communication protocol—especially one carrying sensitive, proprietary data—security is a major
concern. With regard to security:
ETP DOES NOT itself define any new security protocols. Rather, it relies on the available security
mechanisms in its underlying protocols (HTTP, WebSocket, TLS, TCP, etc.).
ETP DOES specify authorization methods (based on adoption and adaption of OAuth 2.0 and
OpenID Connect Discovery v1.0), which MUST be supported by all servers for interoperability.
However, this approach DOES allow implementers to add custom behavior now and for future
extensibility.
ETP focuses only on authorizing the connections between ETP applications (not necessarily a
device).
For more information, see Section 4.1.
IMPORTANT: Not every store will be able to accurately track activeStatus over a long period of time. For
example, if a store application restarts, the store may lose track of this information. The minimum
requirement to enable eventual consistency workflows is this:
If a store loses track of whether a given data object is “active” or “inactive”, the store MUST set the
data object’s activeStatus to true and start the ActiveStatusTimeout.
The store MUST also send any appropriate notifications caused by the change to activeStatus.
SCOPE: This table summarizes protocols and messages that define behaviors related to changing and
setting the activeStatus field, sending notifications about changes in status, and messages that either
display the field or trigger changes to it. For details, see the relevant ETP-sub-protocol-specific chapters.
REQUIRED BEHAVIOR:
1. For growing data objects, when no parts have been added, changed or deleted for the duration of the
store's relevant ActiveTimeoutPeriod value, a store MUST set a growing data object's activeStatus
field (and the data object element it maps to) to "inactive".
2. For Channel data objects, when no data points have been added, changed or deleted for the duration
of the store's relevant ActiveTimeoutPeriod value, a store MUST set a channel's activeStatus field
(and the data object element it maps to) to "inactive".
3. For other data objects (i.e., other than growing and channel data objects), when no updates have
been made that cause the data object’s activeStatus to be set to true for the duration of the store’s
relevant ActiveTimeoutPeriod value, a store MUST set the data object’s activeStatus (and the data
object element it maps to) to “inactive”.
4. The relevant ActiveTimeoutPeriod capability is the data object capability for the type of data object
affected, if set, or, if not set, it is the endpoint capability.
5. When setting a data object’s activeStatus to “inactive”, the store MUST NOT make the change sooner
than the ActiveTimeoutPeriod after the most recent change that activated the data object.
a. The store MUST make the change as soon as is practical after the ActiveTimeoutPeriod has
elapsed. RECOMMENDATION: Change activeStatus within seconds after the
ActiveTimeoutPeriod has elapsed.
6. NOTIFICATION BEHAVIOR: When a data object's activeStatus field changes, a store MUST send an
ObjectActiveStatusChanged notification message for any relevant subscriptions. For more
information, see Chapter 10, StoreNotification (Protocol 5).
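The timeout rules above can be sketched as follows. This is illustrative Python, not part of the specification; the function names are assumptions, and times are taken to be in seconds for the sketch.

```python
def relevant_timeout(data_object_capability, endpoint_capability):
    """Rule 4 above: the data object capability applies if set;
    otherwise the endpoint capability applies."""
    if data_object_capability is not None:
        return data_object_capability
    return endpoint_capability

def should_deactivate(now, last_activating_change, active_timeout_period):
    """Rule 5 above: a store must not set activeStatus to 'inactive'
    sooner than ActiveTimeoutPeriod after the most recent change that
    activated the data object, and should do so as soon as practical
    afterward."""
    return (now - last_activating_change) >= active_timeout_period
```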
2. When several changes to the object happen within this period, the endpoint MAY choose to send only
a single notification for a data object, provided that both of these conditions are met:
a. The endpoint does not exceed this limit.
b. The notification accurately reflects the state of the affected object at the time the notification is
sent, which MUST represent the most recent state of the object.
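The coalescing rule above can be sketched as follows (illustrative Python; the event representation as ordered `(object_uri, state)` pairs is an assumption for the sketch):

```python
def coalesce_notifications(changes):
    """Given an ordered sequence of (object_uri, state) change events,
    keep only the most recent state per object, as the rule above
    permits when several changes to an object fall within the period."""
    latest = {}
    for uri, state in changes:
        latest[uri] = state  # later events overwrite earlier ones
    return latest
```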
REQUIRED BEHAVIOR:
1. After receiving a request from a customer, a store MUST send the standalone response message or
the first message in a multipart response no later than the value for the customer's
ResponseTimeoutPeriod.
a. If the store cannot respond within the customer's ResponseTimeoutPeriod, the store MAY cancel
by sending error ETIMED_OUT (26).
b. If the store's value for ResponseTimeoutPeriod is less than the customer's value, and the store
exceeds its limit, then the store MAY cancel the response by sending error ETIMED_OUT (26).
2. If a customer receives an ETIMED_OUT error, it may indicate that the session has become
congested or the store has encountered other "abnormal circumstances."
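The deadline implied by these rules can be sketched as follows: the store must answer within the customer's ResponseTimeoutPeriod, and if its own period is shorter, that bound is reached first. This is illustrative Python (times in seconds is an assumption), not part of the specification:

```python
def response_deadline(request_time, customer_timeout, store_timeout):
    """Latest time by which the store should send its standalone response
    or the first message of a multipart response. Exceeding either
    endpoint's ResponseTimeoutPeriod allows the store to cancel by
    sending error ETIMED_OUT (26)."""
    return request_time + min(customer_timeout, store_timeout)
```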
SCOPE: The MaxDataObjectSize capability must be used in the protocols and messages/operations listed
in this table. For details, see the relevant ETP-sub-protocol-specific chapters.
Protocols Messages/Operations
Store (Protocol 4) Put and get operations
StoreNotification (Protocol 5) If sending object data with notifications
GrowingObject (Protocol 6) Operations on growing data object
"headers" (i.e., the "non-growing portion";
these are data objects that are informally
called "headers" in relation to their
respective parts)
StoreQuery (Protocol 14) FindDataObjectsResponse
REQUIRED BEHAVIOR:
1. In requests, a customer MUST limit the size of each data object to the value of the store's relevant
MaxDataObjectSize protocol, object or endpoint capability.
a. The limit that applies to a specific data object is the lesser of the global capability limit for that
data object type, if set, and the protocol capability limit.
b. The global capability limit for the object is the data object capability limit for the data object type if
set, or, if not set, the endpoint capability limit.
2. If any data object in the request exceeds its relevant limit, a store MUST deny the entire request by
sending error EMAXSIZE_EXCEEDED (17).
3. A store MUST limit the size of data objects in responses and notifications to the customer's protocol
value for MaxDataObjectSize.
a. If the store is sending the customer a notification about a data object, and including the data
object in the message would exceed the customer's value for MaxDataObjectSize, the store
MUST instead send the notification without the associated data object data.
b. If a data object exceeds the customer's MaxDataObjectSize value (limit), the customer MAY
notify the store by sending error EMAXSIZE_EXCEEDED (17).
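Rules 1.a and 1.b above can be sketched as a small helper (illustrative Python, not part of the specification; `None` stands for an unset capability):

```python
def effective_max_data_object_size(protocol_limit, data_object_limit,
                                   endpoint_limit):
    """The limit that applies to a data object is the lesser of the
    global capability limit (rule 1.a) and the protocol capability
    limit. The global limit is the data object capability for the data
    object type if set, otherwise the endpoint capability (rule 1.b)."""
    global_limit = (data_object_limit if data_object_limit is not None
                    else endpoint_limit)
    if global_limit is None:
        return protocol_limit
    if protocol_limit is None:
        return global_limit
    return min(global_limit, protocol_limit)
```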
SCOPE: The MaxPartSize capability must be used in the protocols and messages/operations listed in this
table. For details, see the relevant ETP-sub-protocol-specific chapters.
Protocols Messages/Operations
Store (Protocol 4) Allowed get and put operations on growing data and its parts.
StoreNotification (Protocol 5) If sending object data with notifications (growing object and its parts)
GrowingObject (Protocol 6) Put and get operations on the parts of a growing data object
GrowingObjectNotification (Protocol 7) If sending parts data with notifications
StoreQuery (Protocol 14) FindDataObjectsResponse (if the result of the query is a growing object and its parts)
GrowingObjectQuery (Protocol 16) FindPartsResponse (if the result of the query is a growing object and its parts)
REQUIRED BEHAVIOR:
1. A customer MUST limit the size of each data object part in requests to the store's relevant
MaxPartSize endpoint capability.
a. If any data object part in the request exceeds its relevant limit, a store MUST deny the entire
request by sending error EMAXSIZE_EXCEEDED (17).
2. A store MUST limit the size of data object parts in responses and notifications to the customer's
endpoint value for MaxPartSize.
a. If a data object part exceeds the customer's MaxPartSize value (limit), the customer MAY notify
the store by sending error EMAXSIZE_EXCEEDED (17).
REQUIRED BEHAVIOR:
1. If a new connection from a particular client may cause a server to exceed its value for the
MaxSessionClientCount endpoint capability, a server MAY refuse the incoming connection.
a. If a server chooses to reject an incoming connection because it would exceed this limit:
i. If it does this during the WebSocket connect or upgrade step, it SHOULD deny the
connection or upgrade with HTTP 429: Too Many Requests.
ii. If it does this on receiving a RequestSession message, it SHOULD deny the request by
sending error ELIMIT_EXCEEDED (12).
REQUIRED BEHAVIOR:
1. If a new connection may cause a server to exceed its value for the MaxSessionGlobalCount endpoint
capability, a server MAY refuse the incoming connection.
a. If a server chooses to reject an incoming connection because it would exceed this limit, it
SHOULD reject the WebSocket request with HTTP 503: Service Unavailable.
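The two admission rules above (per-client and server-wide session limits) can be combined into a single check at the WebSocket upgrade step. This sketch is illustrative; the specification does not dictate the order in which the two limits are tested:

```python
def admit_connection(client_sessions, total_sessions,
                     max_session_client_count, max_session_global_count):
    """Decide how to handle a new WebSocket connection at the upgrade step.

    Returns an HTTP status code: 101 to proceed with the upgrade,
    429 (Too Many Requests) if the per-client MaxSessionClientCount
    would be exceeded, or 503 (Service Unavailable) if the server-wide
    MaxSessionGlobalCount would be exceeded.
    """
    if total_sessions + 1 > max_session_global_count:
        return 503  # Service Unavailable
    if client_sessions + 1 > max_session_client_count:
        return 429  # Too Many Requests
    return 101  # Switching Protocols: accept the upgrade
```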
REQUIRED BEHAVIOR:
1. A client MUST NOT send a WebSocket frame that exceeds either its value or the server's value for
MaxWebSocketFramePayloadSize.
2. A server MUST NOT send any WebSocket frame that exceeds either its value or the client's value for
MaxWebSocketFramePayloadSize.
3. In either case, if the limit is exceeded, ETP behavior is undefined.
a. The likely behavior is that the WebSocket connection will be closed.
REQUIRED BEHAVIOR:
1. A client MUST NOT send a WebSocket message that exceeds either its value or the server's value
for MaxWebSocketMessagePayloadSize.
2. A server MUST NOT send a WebSocket message that exceeds either its value or the client's value for
MaxWebSocketMessagePayloadSize.
3. In either case, if the limit is exceeded, ETP behavior is undefined.
4. If a store response to a customer request would exceed the limit, the store MUST try to send the
response as a multipart message, where each message part does not exceed the limit.
a. If the store cannot do so, it MUST deny the request and send error EMAXSIZE_EXCEEDED.
5. If a store notification to a customer would exceed the limit, the store MUST try to send the notification
as separate, stand-alone notifications.
a. If the store cannot do so, it MUST attempt to remove optional information (such as object data) so
that the notification can be sent without exceeding the limit.
6. ETP behavior is undefined if this limit is exceeded. If an endpoint cannot send a message because
doing so would exceed this limit, the most likely outcome is that the endpoint will drop the connection.
NOTE: One strategy for overcoming WebSocket limits communicated by this capability is use of Chunk
messages; for more information, see Section 3.7.3.2.
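Rule 4 above (splitting a response into multipart messages so that no single WebSocket message exceeds the limit) can be sketched as a greedy packing step. This is an assumption-laden illustration: the `overhead` parameter approximates the encoded MessageHeader size, and the function names are not from the specification:

```python
def partition_response(items, max_payload_size, overhead=32):
    """Greedily pack serialized response items into message parts so no
    part exceeds the payload limit; `overhead` approximates the encoded
    MessageHeader size. Raises if a single item cannot fit even alone,
    in which case the store would deny the request with
    EMAXSIZE_EXCEEDED (17)."""
    parts, current, current_size = [], [], overhead
    for item in items:
        size = len(item)
        if overhead + size > max_payload_size:
            raise ValueError("EMAXSIZE_EXCEEDED (17): item larger than limit")
        if current and current_size + size > max_payload_size:
            parts.append(current)
            current, current_size = [], overhead
        current.append(item)
        current_size += size
    if current:
        parts.append(current)
    return parts
```

As the NOTE above says, Chunk messages are another way to carry payloads that cannot fit in one WebSocket message.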
REQUIRED BEHAVIOR:
1. If a server does not receive a RequestSession message within this period, it MAY send error
ETIMED_OUT (26) and close the WebSocket connection.
2. The server MUST NOT send the CloseSession message because no attempt was made to establish
a session.
For a server:
A valid session is established when it sends an OpenSession
message to the client, which indicates a session has been
successfully established.
The time period starts when it receives the initial
RequestSession message from the client.
For a client:
A valid session is established when it receives an OpenSession
message from the server.
The time period starts when it sends the initial RequestSession
message to the server.
REQUIRED BEHAVIOR:
1. If a session is not successfully established within this period, either endpoint MAY send error
ETIMED_OUT (26) and then close the WebSocket.
2. The CloseSession message MUST NOT be sent because no session was established.
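The session-establishment window described above can be tracked with a simple deadline on either endpoint. A minimal sketch, assuming an injectable clock for testability (names are illustrative):

```python
import time


class SessionTimer:
    """Tracks the session-establishment window on either endpoint.

    The clock starts when the server receives (or the client sends) the
    initial RequestSession message. If OpenSession does not complete the
    handshake within `timeout` seconds, the endpoint may send
    ETIMED_OUT (26) and close the WebSocket. CloseSession is never sent,
    because no session was established.
    """

    def __init__(self, timeout, clock=time.monotonic):
        self._deadline = clock() + timeout
        self._clock = clock
        self.established = False

    def on_open_session(self):
        # OpenSession received/sent: the session is now valid.
        self.established = True

    def expired(self):
        return not self.established and self._clock() >= self._deadline
```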
REQUIRED BEHAVIOR: The required behavior is the same for each of the capabilities listed in the table.
(In the following instruction, <RequestType> may be get, put or delete and <DataObjectCapability> is one
of the corresponding Data Object Capabilities from the table above.)
1. A customer MUST NOT send a <RequestType> request for an object type where the
<DataObjectCapability> value is false. (EXAMPLE: <RequestType>/<DataObjectCapability> = get,
SupportsGet)
2. If a Store's <DataObjectCapability> value is false, the store MUST reject any <RequestType> request
by sending error ENOTSUPPORTED (7). (EXAMPLE: SupportsPut, put)
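The gate described by rules 1 and 2 can be sketched as a lookup against the store's data object capabilities. The dictionary-based capability representation here is an assumption for illustration:

```python
ENOTSUPPORTED = 7  # error code a store sends for an unsupported operation


def check_operation(request_type, capabilities):
    """Gate a get/put/delete request on the store's data object
    capabilities (SupportsGet, SupportsPut, SupportsDelete).

    Returns None if the operation is allowed, otherwise the
    ENOTSUPPORTED error code the store would send.
    """
    flag = {"get": "SupportsGet",
            "put": "SupportsPut",
            "delete": "SupportsDelete"}[request_type]
    return None if capabilities.get(flag, False) else ENOTSUPPORTED
```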
download, the folders for each ETP sub-protocol are alpha ordered, not ordered by ETP sub-protocol
number.)
Figure 3: ETP download folder structure of ETP protocols (left) and message schemas for Core protocol
(right). This specification contains a chapter for each published protocol in the current version of ETP; for
easy reference, each protocol-specific chapter also displays the message schemas for the subject protocol.
3.4.1.1 Messages are Composed of Data Types and Primitives Defined by Avro and ETP
For consistent design, ETP leverages Avro primitive data types (long, float, string, etc.) and defines other
low-level data types (which are specified as Avro records, enumerations, etc.). Figure 4 shows examples
of some frequently used Avro records defined by ETP and the messages that use those records. For the
complete list and definitions of data types, see Chapter 23.
NOTE: The schemas and related documentation reference these data types, and links are provided.
Typically, to completely understand the content of a specific message, you must read the related data
type documentation, especially for records and enumerations.
Figure 4: Examples of ETP-defined Avro records that are used by multiple messages and other records.
The following data types are composed of primitives that are defined by Avro or ETP:
Record: Specifically, this is an Avro record, which is similar to a C or C++ struct or to a JSON object
or JavaScript object. The record stereotype is used to designate low-level data types that are
composed to create messages. EXAMPLE: In the figure above, the SubscriptionInfo record is used
in several notification messages by different protocols, and the SubscriptionInfo record uses the
ContextInfo record.
NOTE: The key components of an ETP message, the header, body and optional header extension,
are also defined as Avro records, each of which are composed of other Avro primitives and records.
Enumeration: Enumerated values are defined in the schemas as a list of literal names and serialized
on the wire as an integer value. Avro schemas do not allow a bespoke integer to be associated with a
given enumeration, and so they are order dependent. NOTE: This order-dependency means that, for
maximum interoperability, the ordering of enumerations must be consistent across ETP versions.
Union: Used to represent a type that can be any one of a selected list of types. Union is similar to
unions in C or C++ and more or less maps to the xsd:choice element in XML schemas.
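The order dependency noted for enumerations above can be demonstrated directly: Avro serializes an enum symbol as its zero-based position in the schema's symbol list, so reordering symbols between versions silently changes wire values. The symbol names below are invented for illustration:

```python
def enum_wire_value(symbols, name):
    """Wire value of an Avro enum symbol: its position in the schema's
    symbol list."""
    return symbols.index(name)


def enum_from_wire(symbols, value):
    """Decode a wire value back to a symbol using the reader's list."""
    return symbols[value]
```

If a later version reorders the symbols, an unchanged wire value decodes to a different symbol, which is why the specification requires consistent ordering across ETP versions.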
NOTE: Additionally, ETP has two messages that are defined in Protocol 0 but may be used in any of the
ETP protocols: the ProtocolException and Acknowledge messages. For more information
on these "universal" messages, see Section 3.7.2.
Figure 5: ETP standard message format. Each message header and body is encoded in a separate Avro
record.
This separate encoding of message header and message body enables the receiver to read and decode
the ETP message header, independent of the ETP protocol or ETP message body itself. (This design is
consistent with and supports software design best practices for modularity and efficiency, e.g., protocol-
specific "handlers".)
NOTE: Unlike earlier Energistics standards based on SOAP and XML (e.g., WITSML v1.x), ETP has no
concept of an ‘envelope’ schema that contains the entire ETP message. However, the WebSocket
payload length field plays the same role in terms of defining the extent of the message content; for more
information, see Section 3.9.
a. The message header for all ETP-defined messages has a standard format defined by the
MessageHeader schema, which MUST be used. For details about the content, use, and rules for
processing the message header, see Section 3.5.4.
b. Each message body has a unique schema (one for each ETP-defined message). For more
information about the message body, see Section 3.5.5.
i. For certain messages, the message body MAY be zero length (which is specified in the
relevant message schemas).
ii. The body of any message—except for those explicitly excluded in Core (Protocol 0)—MAY be
compressed, regardless of role, based on the compression encoding negotiated during
initialization of the ETP session (see Chapter 5). For more information about compression, see
Section 3.5.7.
2. An ETP message—except for those explicitly excluded in Core (Protocol 0)—MAY include an optional
message header extension (MessageHeaderExtension), which allows the sender to add additional
contextual/extension data (e.g., such as information for open tracing) to any message.
a. If used, the message header extension MUST be sent between the message header and the
message body. For more information about using MessageHeaderExtension, see Section 3.6.2.
3. ETP provides several mechanisms to limit the size of messages (to respect WebSocket limits and to
help with throughput and performance); for more information, see Section 3.5.6.
i. A single WebSocket message is considered the lowest common "unit of work" that must be
received to begin processing.
b. After receiving the entire WebSocket message, the receiver MUST first attempt to de-serialize the
MessageHeader (before the message body):
i. If deserialization FAILS: The receiver MUST send error EINVALID_MESSAGE (19). The
MessageHeader of the ProtocolException message (that contains the error code) MUST
have protocol = 0, and correlationId = 0. (Because the receiver could not de-serialize the
header, it does not know the protocol or message ID of the errant message. For information
on the content of and on how to populate the MessageHeader, see Section 3.5.4.)
ii. If the deserialization SUCCEEDS, the receiver MUST process the content of the
MessageHeader, some of which may require the receiver to take action even before
processing the message body. For details on how to process the content of the
MessageHeader, see Section 3.5.4.2.
c. After de-serializing the header, the receiver MUST de-serialize and process the message body
and respond to the request.
i. For requirements unique to Acknowledge and ProtocolException messages, see
Section 3.7.2.
ii. For more information on sequences for more complex ETP message patterns (for example,
plural and multipart messages), see Section 3.7.3.
iii. The key tasks and related message sequences—including message-specific processing,
error scenarios, when to send a ProtocolException message, and which error codes to
use—are explained in the protocol-specific chapters (Chapters 5 through 22).
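The header-first receive sequence above (decode the MessageHeader before the body; on failure, report EINVALID_MESSAGE with protocol = 0 and correlationId = 0) can be sketched as follows. Real ETP uses Avro binary encoding; this sketch substitutes a fixed stand-in layout for the header fields purely for illustration:

```python
import struct

# Stand-in fixed layout for four MessageHeader fields
# (protocol, messageType, correlationId, messageId). Real ETP headers
# are Avro-encoded; this layout is an assumption for illustration.
HEADER = struct.Struct("<iiqq")
EINVALID_MESSAGE = 19


def receive(payload):
    """Decode the header before the body. On failure, report
    EINVALID_MESSAGE with protocol=0 and correlationId=0, since the
    errant message's protocol and messageId are unknown."""
    try:
        protocol, msg_type, correlation_id, message_id = HEADER.unpack_from(payload)
    except struct.error:
        return {"error": EINVALID_MESSAGE, "protocol": 0, "correlationId": 0}
    body = payload[HEADER.size:]  # body is processed only after the header
    return {"protocol": protocol, "messageType": msg_type,
            "correlationId": correlation_id, "messageId": message_id,
            "body": body}
```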
request is composed of 5 ReplaceRange messages, the FIN bit is set on the fifth message).
iii. For information on setting the FIN bit on multipart requests, responses and notifications, see
Section 3.7.3.1.
b. 0x08: Message body (and optional MessageHeaderExtension, if used) is compressed.
c. 0x10: Sender is requesting an Acknowledge message. For more information on the
Acknowledge message, see Section 3.7.2.2.
d. 0x20: Indicates that this message includes an optional message extension. In this version of
ETP, the only message extension mechanism is the MessageHeaderExtension. For more
information on MessageHeaderExtension, see Section 3.6.2.
e. NOTE: 0X01 and 0X04 (which were used in the previous version of ETP) are currently unused.
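The messageFlags values described above act as a bit field; they can be captured as constants and tested with a mask. The sketch includes the FIN bit (0x02) discussed earlier; helper names are illustrative:

```python
# messageFlags bit values from the MessageHeader (see Section 23.25).
FIN = 0x02                   # last message of a request/response/notification
COMPRESSED = 0x08            # body (and header extension, if any) is compressed
ACK_REQUESTED = 0x10         # sender requests an Acknowledge message
HAS_HEADER_EXTENSION = 0x20  # a MessageHeaderExtension follows the header


def has_flag(message_flags, flag):
    """True if the given bit is set in the messageFlags field."""
    return bool(message_flags & flag)
```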
Figure 6: An example ETP message schema (GetDataObjects message from Store (Protocol 4)).
implemented as a series of multiple related messages. Only messages where this value is true
may be implemented as a multipart request or response.
b. The remainder of the WebSocket payload after the MessageHeader is compressed, which
includes:
i. The message body.
ii. If used, the optional MessageHeaderExtension (see Section 3.6.2).
c. The 0x08 flag in the message's MessageHeader MUST be set to true.
3.6.2 MessageHeaderExtension
Use of message header extensions (MessageHeaderExtension) allows additional contextual
information, about either the MessageHeader or the message body, to be sent with a specific message.
It can be used by implementers to send system-wide, custom properties and contextual information that
needs to be passed up and down a call stack. A common use case in cloud-native environments and
other call stacks such as HTTP/Rest and gRPC is the requirement to pass tracing information (such as
open tracing) down, and back up through a call stack.
If used, the sender indicates (using the designated bit in the messageFlags field of the standard
MessageHeader) that a MessageHeaderExtension is being sent, and then sends the
MessageHeaderExtension between the standard MessageHeader and the MessageBody.
WARNING: It is strongly recommended that message header extensions NOT be used with "one-way"
notification messages or other high-throughput or streaming messages (such as ChannelData
messages) due to potentially high overhead if their use is abused.
For the schema for the MessageHeaderExtension, see Section 23.26.
3. The endpoint that receives the MessageHeaderExtension MUST do at least one of the following:
a. If the endpoint supports MessageHeaderExtension, it MUST attempt to process it.
b. If the MessageHeaderExtension contains keys that the receiver does not understand or is not
interested in, it MUST ignore them (no error messages).
c. If the endpoint does NOT support MessageHeaderExtension, it MUST send error
EINVALID_MESSAGE (19).
d. If the MessageHeaderExtension flag is set to true AND the MessageHeaderExtension is
omitted (not just an empty map, but the map is omitted entirely), it MUST send error
EINVALID_MESSAGE (19).
i. NOTE: Conditions 3.c and 3.d and 4.a will cause message de-serialization issues.
4. ETP permits only one MessageHeaderExtension per message.
a. If an endpoint sends more than one, the receiver MUST send error EINVALID_MESSAGE (19).
5. If a message containing a MessageHeaderExtension is compressed, then the
MessageHeaderExtension MUST be compressed with the message body.
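The receiving rules 3.a through 3.d above can be sketched as a single validation step. This is an illustrative sketch; representing the extension as a Python dict (a map of keys to values) is an assumption:

```python
EINVALID_MESSAGE = 19
HAS_HEADER_EXTENSION = 0x20  # messageFlags bit signaling an extension


def validate_extension(message_flags, extension, supports_extensions, known_keys):
    """Apply the receiving rules for MessageHeaderExtension.

    Returns None when no extension was signaled, EINVALID_MESSAGE (19)
    when the endpoint does not support extensions or the flag is set but
    the extension is omitted entirely, and otherwise the subset of keys
    the receiver understands (unknown keys are silently ignored).
    """
    if not message_flags & HAS_HEADER_EXTENSION:
        return None
    if not supports_extensions or extension is None:
        return EINVALID_MESSAGE
    return {k: v for k, v in extension.items() if k in known_keys}
```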
This section defines these constructs and explains how to use them. The section
includes:
Definitions for these terms: universal message, map, Chunk message, multipart requests and
responses, and plural messages (see Section 3.7.1).
Usage rules for:
"Universal" messages, which include ProtocolException and Acknowledge (see Section 3.7.2).
Plural messages, which allow multiple requests and/or responses in a single message (see
Section 3.7.3).
- Multipart requests, responses, and notifications (which can be thought of as one large virtual
request, response or notification that has been implemented as a group of related messages)
(see Section 3.7.3.1).
IMPORTANT! This specification defines which particular messages are plural (indicated by the message
name and data structures in the message body) and which responses, requests, and notifications MAY be
implemented as a set of multiple related messages (indicated by the multipartFlag on the message
schema set to true).
1. Only messages designated as multipart (i.e., in the Avro schema header, multipartFlag: true; see
Section 3.5) may be implemented as multipart. If multipartFlag=false, the endpoint MUST send only
one message of that type per request or response.
2. In the simplest usage, multipart requests, responses, and notifications are composed of 2 or more of
the same type of message (EXAMPLES: For a multipart request, all ReplaceRange messages; for a
multipart response, all GetResourcesResponse messages). For general rules on how to use
multipart messages, see Section 3.7.3.1.
However, multipart requests, responses, and notifications MAY also be composed of multiple TYPES
of messages; for example, they may include:
One or more positive response messages with one or more ProtocolException message(s).
This pattern is used with messages that contain a map data structure; for more information on
how it works, see Section 3.7.3.
One or more types of positive responses, as defined by a specific ETP sub-protocol (EXAMPLE:
In ChannelDataFrame (Protocol 2) the standard positive response behavior to a GetFrame
request message is to return 1 (one) GetFrameResponseHeader message and 1 to n
GetFrameResponseRows messages, where n is the number of messages that are needed to
return all the rows that fulfill the request).
For store-related protocols (Protocols 4, 5, and 14) a positive response may have 1 or more
associated Chunk messages. For rules on how to use Chunk messages, see Section 3.7.3.2.
Details of protocol-specific behavior are captured in the relevant protocol chapter in this specification.
Use the Acknowledge message (defined in Chapter 5) in any protocol where specific acknowledgement
of receipt of a message is needed. That is, an Ack is confirmation that a message was received; it does
NOT indicate that an action was completed.
An Ack is a logically separate response message from any behavioral responses defined in specific ETP
sub-protocols.
In certain cases, use of Acks is prohibited. Rules and requirements are specified in the following list.
NOTE: This section documents behavior specific to Acks, which fits into the larger message sequence
documented in Section 3.5.3.
NOTE: Because Acknowledge is one of the messages defined in Protocol 0 that may
be used in any protocol, protocol should only be set to 0 if you are acknowledging a
Protocol 0 message.
2. The correlationId in the MessageHeader of the Acknowledge message MUST be set to
the ID of the message (messageId field) that requested the acknowledgement.
NOTE: Each message in ETP MUST have a messageId unique to the endpoint in an
ETP session; for ETP message ID numbering requirements, see Section 3.5.4.
3. Set the messageFlags. Observe these details for setting the messageFlags on an
Acknowledge message:
The 0x02 bit (FIN bit) MUST ALWAYS be set to true. That is, an Ack is always only a
single ETP message.
The 0x10 bit MUST NEVER be set to true; an Ack MUST NOT be acked.
4. Cautions for using Acknowledge messages:
a. Consider use of Acks carefully. While ETP has no restrictions on their use, overuse of Acks can
degrade performance. For example, in general it would not be good practice to request Acks for
every ChannelData message in a streaming protocol (such as ChannelSubscribe (Protocol 21)).
However, in some cases (for example, poor phone line connection), use of Acks on every part
could be beneficial.
b. NOTE: In ETP v1.1, the Acknowledge message was also used with the 0x04 bit to indicate a
response of "no data" to a specific request. The "no data" responses have been clarified in ETP
v1.2 and this 0x04 bit is no longer used.
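The header rules for an Acknowledge (correlationId set to the acked message's messageId, FIN always set, 0x10 never set) can be sketched as a small constructor. The messageType value here is an assumption for illustration; the dict-based header representation is likewise illustrative:

```python
FIN = 0x02            # an Ack is always a single ETP message
ACK_REQUESTED = 0x10  # an Ack MUST NOT itself be acked


def make_ack_header(request_header, next_message_id):
    """Build the MessageHeader for an Acknowledge message.

    Uses the same protocol as the message being acknowledged,
    correlationId = that message's messageId, and FIN set with
    ACK_REQUESTED never set.
    """
    return {
        "protocol": request_header["protocol"],
        "messageType": 1001,  # assumed type id for Acknowledge (illustrative)
        "correlationId": request_header["messageId"],
        "messageId": next_message_id,
        "messageFlags": FIN,  # never ACK_REQUESTED
    }
```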
i. The endpoint making the request that contains the map MUST assign the map keys.
ii. If a map is used in a multipart response or request, the keys MUST be unique across all
messages of the multipart response/request. EXAMPLE: The OpenChannelsResponse
message in ChannelDataLoad (Protocol 22) is a multipart message with a map; it is a
response that lists the channels a store can accept data for, and each channel must be
identified by a map key unique across all messages that make up the response.
iii. As previously specified (in Section 3.5.4), the final message in a request or response MUST
have its FIN bit set (messageFlags 0x02 in the MessageHeader). For information on setting
FIN bits on multipart requests, responses, and notifications, see Section 3.7.3.1.
3. A Get or Find request WITHOUT a map MUST have ONE of the following as a response: a) the
ETP-defined response message that contains items that the store could return, b) the ETP-defined
response message with an empty array (no data was found that met the request criteria) or c) a
ProtocolException message (PE) with the error field populated (i.e., a single error for the entire
request) with the appropriate error code.
EXAMPLE: In Discovery (Protocol 3), the possible responses to a GetResources request message
are: a) one or more GetResourcesResponse messages with the array of resources that the store
could return, b) one GetResourcesResponse message with an empty array (the URI is valid but no
data meeting the criteria specified in the request was found) or c) a ProtocolException message,
e.g., if the URI in the request is malformed, the PE would contain in its error field EINVALID_URI (9).
a. If the response is multipart, it is possible that an endpoint might begin sending response
messages and THEN encounter an error (e.g., a server exception occurs). In this case, the server
MUST do all of the following:
i. Send a ProtocolException message with an appropriate error code.
ii. Stop processing the request that caused the error.
iii. Stop sending any additional response messages for that request.
b. As previously specified (in Section 3.5.4), the final message in a request or response MUST have
its FIN bit set (messageFlags 0X02 in the MessageHeader). For more information on setting FIN
bits on multipart requests, responses, and notifications, see Section 3.7.3.1.
4. A map request MUST have as a response: a) zero or more positive map response messages, b)
zero or more map ProtocolException errors, and c) zero or one terminating, non-map
ProtocolException error.
a. If the response is multipart, it is possible that an endpoint might begin sending response
messages and THEN encounter an error (e.g., a server exception occurs). In this case, the server
MUST do all of the following:
i. Send a terminating, non-map ProtocolException message with an appropriate error code.
ii. Stop processing the request that caused the error.
iii. Stop sending any additional response messages for that request.
b. Otherwise, if no terminating errors are sent:
i. Each key from the map in a map request MUST appear either as the key in a positive
response map or as the key in the errors map in a ProtocolException message; that is,
each request in a map was either successfully completed or results in an error, but not both.
ii. If a request message results in both positive responses and errors, the number of returned
positive responses and the number of errors in ProtocolException MUST equal the total
number of request items.
c. Terminating ProtocolException messages are intended for store-wide or request-wide failures
that are unrelated to the success or failure of individual requests within the request message. In
response to these errors, the customer MAY try the request again later or try to split it into smaller
groups of individual requests. These errors will not help the customer correct errors within the
individual requests.
i. An example of a store-wide error could be a store losing its database connection.
ii. Examples of request-wide errors could be an unhandled exception processing the request
that prevents further processing or exceeding the endpoint's value for the
MultipartMessageTimeoutPeriod capability.
d. Map ProtocolException messages are intended to provide specific failure reasons for individual
requests within the request message. In response to these errors, the customer SHOULD attempt
to correct the individual requests based on the specific error received. EXAMPLE: For
EINVALID_OBJECT, the customer SHOULD attempt to fix issues with the object data.
e. Terminating ProtocolException messages are NOT a substitute for map ProtocolException
messages.
i. Customers are likely to simply retry requests that fail with a terminating ProtocolException
message or send different subsets of the request to try to narrow down the potential problem.
f. If a store MUST send a terminating ProtocolException message, it SHOULD attempt to send all
positive responses and all map error responses it is able to before sending the terminating
ProtocolException message.
g. A store MUST NOT use a terminating ProtocolException message as a convenience
mechanism to avoid sending map error responses, even if all map error responses to a request
are the same.
5. The ProtocolException message(s) that contain the error responses to a map request MUST have:
a. protocolId in the MessageHeader set to the protocol number that the request message was
issued from. EXAMPLE: If the ProtocolException is in response to a GetResources message,
the protocolId is "3" (for Discovery (Protocol 3)).
b. correlationId in the MessageHeader set to the request message (messageId) that it is a
response to.
c. An error code for each item in the errors map.
6. As previously specified (in Section 3.5.4), the final message in a request or response MUST have its
FIN bit set (messageFlags 0X02 in the MessageHeader). For more information on setting FIN bits on
multipart requests, responses, and notifications, see Section 3.7.3.1.
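The accounting rule in 4.b above (when no terminating error is sent, every request key appears in exactly one of the positive-response map or the errors map) can be checked as follows. This is an illustrative sketch using Python sets; names are not from the specification:

```python
def check_map_accounting(request_keys, success_keys, error_keys, terminated=False):
    """Verify map-request accounting.

    When no terminating (non-map) ProtocolException is sent, every
    request key must appear in exactly one of the positive-response map
    or the errors map: success and error keys together cover all request
    keys, and no key appears in both.
    """
    if terminated:
        return True  # full accounting is not required after a terminating error
    success, errors = set(success_keys), set(error_keys)
    requested = set(request_keys)
    return (success | errors == requested) and not (success & errors)
```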
Design patterns and rules for multipart request, response, and notification messages:
1. Only ETP messages with multipartFlag = true may be implemented as a series of related messages.
This flag appears in the header information of the Avro schema that defines each ETP message (see
Section 3.4.1) and in each section of this specification that defines a message.
a. How to partition and group data for multipart requests, responses, and notifications is an
implementation issue; the ETP Specification provides no guidance or recommendations on how
to do this. Each implementer determines its own approach.
i. Message sizes MUST honor each endpoint's MaxWebSocketMessagePayloadSize capability
(see Section 3.3.2.9).
b. To protect performance (e.g., throughput), ETP specifies capabilities so that endpoints can
impose limits to size, concurrency, and duration of multipart requests, responses, and
notifications. (For more information about capabilities and how they work, see Section 3.3.) The
capabilities pertaining to multipart messages include those listed here, with links to details of
required behavior when using them:
i. ResponseTimeoutPeriod (endpoint) (see Section 4.b below)
ii. MaxResponseCount (protocol) (see 4.d below)
iii. MaxConcurrentMultipart (endpoint) (see Section 3.7.3.1.1.1 below)
iv. MultipartMessageTimeoutPeriod (endpoint) (see Section 3.7.3.1.1.2 below)
2. By definition, a multipart request, response, or notification MUST be bounded. (That is, a multipart
request, response, or notification should be considered a single virtual request, response, or
notification.)
3. As described in Section 3.5.4, each message in an ETP session MUST be uniquely numbered (using
the messageId field in the MessageHeader)—this rule applies to each message of a multipart
response, request, or notification, related Chunk messages (if used, see Section 3.7.3.2) and
ProtocolException messages (if used, see Section 3.7.2.1).
a. For messageId requirements, see Section 3.5.4.
4. A multipart response:
a. MUST correlate to a specific request message.
i. For usage rules for correlationIds (included in the MessageHeader of each ETP message),
see numbers 6 and 7 below.
b. Data messages (such as ChannelData messages sent in Protocols 1, 21 or 22) ARE NOT
responses and ARE NOT multipart; they are individual messages containing data that are sent as
they become available.
i. For types of messages and naming conventions, see Section 3.4.1.
c. MUST begin within the customer's ResponseTimeoutPeriod endpoint capability. That is, after
receiving a request from a customer, a store MUST send the standalone response message or
the first message in a multipart response no later than the value for the customer's
ResponseTimeoutPeriod.
i. If the store cannot respond within the customer's ResponseTimeoutPeriod, the store MAY
cancel by sending error ETIMED_OUT (26).
ii. If the store's value for ResponseTimeoutPeriod is less than the customer's value, and the
store exceeds its limit, then the store MAY cancel the response by sending error
ETIMED_OUT (26).
iii. If a customer receives an ETIMED_OUT error, it may indicate that the session has become
congested or the store has encountered other "abnormal circumstances."
d. MAY include a combination of valid response messages and errors, which MAY specifically
include:
i. Valid designated response messages as defined by each ETP sub-protocol (EXAMPLES:
in Store (Protocol 4) one or more GetDataObjectsResponse messages may be returned in
response to a GetDataObjects request message; in ChannelDataFrame (Protocol 2), one
GetFrameResponseHeader and one or more GetFrameResponseRows are returned in
response to a GetFrame request message).
ii. Chunk message. This message is used in 3 store-related protocols (Protocols 4, 5 and 14); it
makes it possible for the store to attach potentially large data objects (which may be included
with some request, response, and notification messages) as binary large objects (BLOBs)
partitioned into manageable sized "chunks". For more information on how the Chunk message
works, see Section 3.7.3.2.
iii. ProtocolException message(s). The map construct makes it possible to submit multiple
requests in a single message and have some requests pass and some requests fail (instead
of the entire request failing); in this case, the response is a mix of valid response messages
(for the requests the store could fulfill) and ProtocolException messages (for requests that
resulted in errors).
Errors MUST be sent in one or more ProtocolException messages, which MAY also be
a series of multiple related messages. For more rules related to ProtocolException
messages as part of a multipart response, see numbers 5, 8 and 9 below.
e. A Store MUST limit the total count of response items it returns in response to one non-map
request to the customer's MaxResponseCount value. (The MaxResponseCount is an endpoint
capability; it is the maximum total count of responses allowed in a complete multipart message
response to a single non-map request.) EXAMPLE: A store must not return more than
MaxResponseCount Resource records in response to a GetResources request message.
i. If the store's MaxResponseCount value is smaller than the customer's MaxResponseCount
value, the store MAY send fewer response items.
ii. If the store's response exceeds this limit, the customer MAY notify the store by sending error
ERESPONSECOUNT_EXCEEDED (30).
iii. If a store cannot send all responses to a request because it would exceed the lower of the
customer's or the store's MaxResponseCount value, the store:
1. MUST terminate the multipart response by sending error
ERESPONSECOUNT_EXCEEDED (30).
2. MUST NOT terminate the response until it has sent MaxResponseCount responses.
iv. NOTE: In some protocols, there are additional capabilities that limit the response count to
specific requests. For example, in ChannelSubscribe (Protocol 21), MaxRangeDataItemCount
limits the count of DataItem records sent in response to a GetRanges request. These
capabilities are documented in the relevant protocols, but the behavior for these is as
described here for MaxResponseCount: if the limit would be exceeded, the store MUST send
ERESPONSECOUNT_EXCEEDED (30) and the store MUST NOT send this until it has sent
the maximum number of responses allowed by the capability.
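The MaxResponseCount behavior in rule e above can be sketched as follows. This is an illustrative Python sketch, not part of ETP; the function names and callback shapes are assumptions, and only the error code 30 (ERESPONSECOUNT_EXCEEDED) comes from the specification.

```python
# Illustrative sketch of rule e above: a store limiting the number of
# response items per request. Function names and callback shapes are
# assumptions; only error code 30 (ERESPONSECOUNT_EXCEEDED) comes from
# the ETP specification.

ERESPONSECOUNT_EXCEEDED = 30

def send_limited(responses, store_max, customer_max, send, send_error):
    """Send at most the lower of the two MaxResponseCount values,
    then terminate the multipart response with error 30 if items remain."""
    limit = min(store_max, customer_max)
    sent = 0
    for item in responses:
        if sent == limit:
            # Rule e.iii: the full allowed count has been sent, so the
            # store terminates the response with ERESPONSECOUNT_EXCEEDED.
            send_error(ERESPONSECOUNT_EXCEEDED)
            return sent
        send(item)
        sent += 1
    return sent  # all responses fit within the limit; no error needed
```

Note that, per rule e.iii.2, the error is sent only after the full allowed count of responses has gone out.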
5. Message Flags/FIN bit. For all ETP messages, the MessageHeader contains a messageFlags
attribute. This attribute acts as a bit-field and allows multiple Boolean flags to be set on a message
(for the complete list of flags, see Section 23.25). The 0x02 flag indicates that the message is the last
message for a request, response or notification (a FIN bit). (NOTE: The FIN bit is set for every "action";
that is, for each message or set of messages that serves as a request, response, notification or data. For
example: If a request message is composed of only a single message, its FIN bit MUST be set.)
Follow these rules to set the FIN bit for multipart requests, responses or notifications:
a. For a multipart request, you MUST set the FIN bit on the last message of the request ONLY.
b. For a multipart response, you MUST set the FIN bit on EITHER: the last message of the multipart
response OR on a related ProtocolException message (per number 4 above), depending on
which message is sent last (see also, numbers 8 and 9 below.)
i. The final message in a multipart response MAY be an "empty" response message with the
0x02 flag set.
c. For multipart notifications, you MUST set the FIN bit on the last message of the multipart
notification.
6. CorrelationId: For a multipart REQUEST or NOTIFICATION:
a. The correlationId of the first message MUST be set to 0 and the correlationId of all successive
messages in the same multipart request or notification MUST be set to the messageId of the first
message of the multipart request or notification.
7. CorrelationId: For a multipart RESPONSE:
a. The correlationId of each message that comprises the response MUST be set to the messageId
of the request message.
b. If the request message is itself multipart, the correlationId of each message of the multipart
response MUST be set to the messageId of the FIRST message in the multipart request.
c. When an endpoint receives a response message with the 0x02 flag set (which indicates it is the
last part), the endpoint will NOT receive any additional response messages with the same
correlationId. (i.e., after sending a message with the FIN bit set, the sending endpoint MUST NOT
send any additional messages with the same correlationId).
d. When an endpoint receives a notification message with the 0x02 flag set (which indicates it is the
last part), the endpoint will NOT receive any additional notification messages with the same
correlationId. (i.e., after sending a message with the FIN bit set, the sending endpoint MUST NOT
send any additional messages with the same correlationId).
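Rules 5 and 7 above can be illustrated with a small sketch that stamps the correlationId and the FIN bit onto the parts of a multipart response. The dict-based header layout here is an assumption for illustration only; real ETP message headers are Avro MessageHeader records (see Section 23.25).

```python
# Illustrative sketch of rules 5 and 7 above: stamping correlationId and
# the FIN bit (0x02 in messageFlags) onto the parts of a multipart
# response. The dict layout is an assumption for illustration; real ETP
# message headers are Avro MessageHeader records (see Section 23.25).

FIN = 0x02

def make_response_headers(request_message_id, part_count, next_message_id):
    """Build headers for the parts of one multipart response.

    Every part carries the request's messageId as its correlationId
    (rule 7.a); only the last part has the FIN bit set (rule 5.b)."""
    headers = []
    for i in range(part_count):
        headers.append({
            "messageId": next_message_id + i,
            "correlationId": request_message_id,
            "messageFlags": FIN if i == part_count - 1 else 0,
        })
    return headers
```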
8. ProtocolException messages. If a response includes one or more ProtocolException messages:
a. The correlationId of each such ProtocolException message MUST be set to the messageId of the
request message that caused the error.
b. If the ProtocolException message is the last part in the multipart response, then its 0x02 bit flag
MUST be set (indicating it is the last part).
9. If a catastrophic error occurs in the middle of a multipart response:
a. The sender MUST send a ProtocolException message with a single error
EMULTIPART_CANCELLED (18) (in the error field) and:
i. NO map keys are populated.
ii. The FIN bit is set.
iii. NOTE: This is the only situation in which a ProtocolException message that is part of a
multipart response can have an empty map.
b. The receiver MUST treat this situation as a cancellation of the entire operation (because multipart
messages are treated as an atomic operation).
10. Message content. For a multipart request, response, or notification, only the content of the collection
field (i.e., array or map) may vary between the message parts. The values for all other fields carrying
metadata MUST be identical for all message parts of a multipart message. EXAMPLES:
a. For GrowingObject (Protocol 6), a multipart ReplacePartsByRange request MUST contain the
same values for uri, format, deleteInterval, and includeOverlappingIntervals within all message
parts; the only data that may change among messages is the content of the objectParts collection.
11. RECOMMENDATION: To avoid sending more individual messages than necessary when sending
multipart requests, responses and notifications, group together data, where possible. EXAMPLE:
Group all error responses to a map request into a single ProtocolException message, if doing so
does not exceed MaxWebSocketFramePayloadSize.
12. WARNING: Use of multipart requests, responses, and notifications in data-movement protocols (such
as PutDataObjects in Store (Protocol 4)) may create mutating or race conditions. Currently,
ETP does not attempt to handle these conditions. Identifying and addressing these conditions is up to
the developer/implementer. The safest thing for client applications to do now is to ensure they do not
issue concurrent, competing requests to a store.
(for all 3 data objects) are part of the same multipart request.
c. The data object MUST be partitioned and each Chunk message MUST be sent in order, as
indicated by the messageId (described in Section 3.5.4).
i. The last Chunk message for the data object MUST have the final field set to true. Because a
Chunk message MUST be sent in the context of another request, response or notification
message, which may be multipart itself, the Chunk message has its own final flag field (in the
body of the Chunk message), indicating the last chunk for one data object.
ii. The receiver of the messages uses the blobId, messageId and final fields to re-assemble the
data object in its correct order.
d. Chunk messages for different objects MUST NOT be interleaved within the context of one
multipart message operation.
i. If more than one data object must be sent using Chunk messages, you MUST finish sending
all chunks for each data object before sending the chunks for the next data object.
3. If a Chunk message is the last message in a multipart request, response or notification, the sender
MUST set the FIN bit in the message header. EXAMPLE: In the example in 2.b.i above, the FIN bit
MUST be set on the last Chunk message of the third data object.
3.7.4 How and "Where" URIs are Used in ETP (General Usage Rules)
For information on data objects, resources, and Energistics identifiers, see Appendix: Energistics
Identifiers.
3.7.4.2 Rules for when Alternate URIs MAY Be Used and when Canonical URIs MUST Be Used
Canonical Energistics URIs must be used in some messages even when the use of alternate URIs has
been negotiated for a session. In the following messages, canonical Energistics URIs MUST ALWAYS be
used:
1. All Discovery (Protocol 3) requests (i.e., GetResources and GetDeletedResources).
2. All GetSupportedTypes (Protocol 25) requests (i.e., GetSupportedTypes).
3. All Put and Delete operations in Store (Protocol 4) and Dataspace (Protocol 24) (e.g.,
PutDataObjects in Store (Protocol 4) and DeleteDataspaces in Dataspace (Protocol 24)).
4. All query protocol requests (e.g., FindResources in DiscoveryQuery (Protocol 13)).
5. All response messages. EXCEPTION: Store-supported alternate URIs in the alternateUris field on
Resource records.
6. All notification messages. EXCEPTION: Store-supported alternate URIs in the alternateUris field on
Resource records.
In all other request messages, if use of alternate URIs has been negotiated for the session, then alternate
URIs MAY be used.
For the specific rules for individual messages, see the documentation for each message that uses URIs.
NOTE: Unlike XML, Avro has no concept of a well-formed vs. valid document or a generic document
node model; thus, it is not possible to de-serialize an Avro document without knowledge of the schema of
that document. As such, ETP provides capabilities for a client to discover which versions of ETP a server
supports, and clients request a specific ETP version, with associated schemas, when establishing the
WebSocket connection.
ETP v1.2 is based on Avro v1.10 but remains compatible with Avro v1.8.2.
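The note above, that Avro binary data cannot be decoded without the schema, can be illustrated with a hand-rolled encoder for a tiny two-field record. This is a teaching sketch, not part of ETP: it implements just enough of the Avro binary encoding (zig-zag varints for long, length-prefixed UTF-8 for string) to show that the bytes carry no field names or types, so the reader can decode them only because it already knows the schema.

```python
# Teaching sketch (not part of ETP): just enough hand-rolled Avro binary
# encoding for a record {"name": string, "depth": long} to show that the
# bytes carry no field names or types -- the reader can only decode them
# because it already knows the schema. Real implementations use an Avro
# library with the published ETP schemas.

def _zigzag(n):
    return (n << 1) ^ (n >> 63)   # map signed long to unsigned

def _unzigzag(n):
    return (n >> 1) ^ -(n & 1)

def _write_varint(n):
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append((b | 0x80) if n else b)  # high bit marks continuation
        if not n:
            return bytes(out)

def _read_varint(buf, pos):
    n, shift = 0, 0
    while True:
        b = buf[pos]
        pos += 1
        n |= (b & 0x7F) << shift
        if not b & 0x80:
            return _unzigzag(n), pos
        shift += 7

def encode_record(name, depth):
    # Avro writes fields in schema order with no tags: a string is
    # varint(length) + UTF-8 bytes; a long is a zig-zag varint.
    data = name.encode("utf-8")
    return _write_varint(_zigzag(len(data))) + data + _write_varint(_zigzag(depth))

def decode_record(buf):
    # Decoding works only because the schema says: first a string, then a long.
    length, pos = _read_varint(buf, 0)
    name = buf[pos:pos + length].decode("utf-8")
    depth, _ = _read_varint(buf, pos + length)
    return name, depth
```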
1. ETP is considered a sub-protocol of WebSocket as defined in Sections 1.9 and 11.5 of RFC 6455.
2. An ETP session begins with the WebSocket opening handshake, which may include optional headers.
3. An ETP message (the message header, optional message header extension (if used), and message
body) maps directly to a WebSocket message (Figure 8), which in turn is composed of the
“application data” sections of WebSocket data frames. As shown in the figure, both the header and
body of an ETP message are sent in the same WebSocket message. In most cases, these details are
invisible to developers, because developers use a vendor-supplied library to interface with
WebSocket.
[Figure 8 (not reproduced here): Two ETP messages, each consisting of a header plus a body, map to
two WebSocket messages. Each WebSocket message is carried in one or more data frames: the first
frame has Opcode = 1 or 2, continuation frames have Opcode = 0, and only the final frame has FIN = 1.]
NOTE: Unlike earlier Energistics standards based on SOAP and XML (e.g., WITSML v1.x), ETP has no
concept of an ‘envelope’ schema that contains the entire ETP message (however, the WebSocket
payload length field plays the same role in terms of defining the extent of the message content).
3.11.2 "Relaxed" Change Tracking and Detection Behavior for Some Stores
However, not all stores can track these changes accurately over a long period of time. EXAMPLE: Some
stores are end-user applications without a persistent data store for ETP information, and other stores are
implemented as an API over an existing, legacy data store. To support these types of stores, ETP allows
some change detection behavior to be "relaxed". EXAMPLES: Provided a store meets certain
requirements when doing so, the store may retain changes for shorter periods and/or set change times on
a best-endeavors basis. The specific ways this "relaxed" behavior is allowed are documented in the
relevant sections of the specification.
Figure 9: Examples of simple graphs: left image is an undirected graph and right image is a directed graph.
ETP has been designed to navigate Energistics data models as graphs where:
Nodes represent data objects in a data model (WITSML, RESQML, PRODML or EML (i.e.,
Energistics common)). (For the definition of data object, see Section 25.1.)
Lines (directed links between nodes) represent relationships between those data objects. A data
object can have multiple distinct references to other data objects (as specified in the various domain
models).
For a complete explanation of graphs and how they work in ETP, see Section 8.1.1.
RECOMMENDATION: Read Section 8.1.1 and make sure you understand the related inputs as specified
in the respective messages where they are used.
3.12.5 Time
Time and timestamps are important components of data acquisition and transfer related to oil and gas
operations and in ETP.
Elapsed time MUST be the number of microseconds from 0, serialized in an Avro long (see
ChannelIndexKind in the ChannelMetadata record).
an endpoint's clock time can change. When this happens, it may disrupt ongoing data transfer
operations. ETP does not provide explicit features to recover from this scenario, but it is possible in
some cases to detect when this has happened by using the Ping and Pong messages. Ping and
Pong are defined in Core (Protocol 0) and can be used at any time during an ETP session. For more
information, see 26 Appendix: Data Replication and Outage Recovery Workflows.
3. On reconnect, request changes for a time a bit earlier than the latest known timestamp.
Because ETP is asynchronous and multiple messages can be sent in response to an action or
operation, there is no guarantee to timing on when you will receive a particular response.
RECOMMENDATION: On reconnecting after a session was disconnected, a customer should use
the store’s value for ChangePropagationPeriod endpoint capability (which is in seconds) to request
changes that many seconds before the last store change time the customer knew about before it was
disconnected (e.g., if a store's value for ChangePropagationPeriod is 300 seconds, the customer
should request changes 300 seconds before the last store change time the customer knew about
before it was disconnected).
EXAMPLE: If a container data object is deleted, and it requires pruning of orphan data objects, that
operation might trigger multiple messages with the same change timestamp, which may not all be
sent at exactly the same time. If the session disconnects in the middle of receiving these messages
(e.g., the receiving endpoint sees only the first 2 of 3 generated messages), that endpoint won't be aware
of all of the changes that happened at that timestamp. By requesting changes a bit earlier than the
latest timestamp it has, an endpoint can account for this possibility and ensure greater probability that
it doesn't "miss" any data.
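The reconnect recommendation above can be sketched as a one-line computation. This is an illustrative Python sketch; the function and variable names are assumptions, not part of ETP.

```python
# Sketch of the recommendation above: on reconnect, request changes
# starting ChangePropagationPeriod seconds before the last store change
# time the customer knew about. Names are illustrative.

from datetime import datetime, timedelta, timezone

def change_request_start(last_known_change, change_propagation_period_s):
    """Return the earliest change time to request after reconnecting."""
    return last_known_change - timedelta(seconds=change_propagation_period_s)

# e.g., the store's ChangePropagationPeriod is 300 seconds:
last_known = datetime(2021, 9, 27, 12, 0, 0, tzinfo=timezone.utc)
start = change_request_start(last_known, 300)  # 11:55:00 UTC
```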
4. Not every store will be able to accurately track creation and modification time over a long period of
time. The minimum requirements to enable eventual consistency workflows are that:
a. When a data object, dataspace or data array is created in a store, the store MUST set both
storeCreated and storeLastWrite (for creation) to the same timestamp, which is equal to or more
recent than the actual time the data object, dataspace or data array was created or modified. This
requirement applies even when a data object, dataspace or data array was created or modified
by something other than an ETP store operation.
b. When a data object, dataspace or data array is created or modified in a store, the store MUST set
storeLastWrite equal to or more recent than the actual time the data object, dataspace or data
array was created or modified. This requirement applies even when a data object, dataspace or
data array was created or modified by something other than an ETP store operation.
c. When the creation or modification happens through ETP store operation, the store MUST set
the timestamps equal to the actual creation or modification time as part of the store operation.
d. storeCreated MUST always be equal to or more recent than the actual time the data object,
dataspace or data array was created.
e. storeLastWrite MUST always be equal to or more recent than the actual time the data object,
dataspace or data array was modified.
f. storeLastWrite MUST always be equal to or more recent than storeCreated for any given data
object, dataspace or data array.
With these rules, stores MAY use a more recent time for storeLastWrite and storeCreated if
necessary under certain circumstances. EXAMPLE: If a store application is restarted and it loses
track of previously known storeCreated and storeLastWrite timestamps, it may choose to initialize
all storeCreated and storeLastWrite timestamps to the time at which the store application started.
However, for optimal support of eventual consistency workflows, both storeCreated and
storeLastWrite SHOULD always be equal to the actual creation or modification time. Choosing a
different time may lead customers to request more data than would otherwise be necessary.
IMPORTANT: The store MUST send appropriate ObjectChanged notifications in response to
ANY change to storeLastWrite and storeCreated, including those changes that are not in
response to an ETP store operation.
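Rules d through f above amount to three ordering invariants on any record's timestamps, which can be checked mechanically. This is an illustrative sketch; the function name is an assumption and the timestamps are plain integers (e.g., epoch microseconds) for simplicity.

```python
# Sketch checking invariants d-f above for one record; names are
# illustrative and timestamps are plain integers (e.g., epoch microseconds).

def timestamps_valid(store_created, store_last_write,
                     actual_created, actual_modified):
    """True if the stored timestamps satisfy rules d, e and f."""
    return (store_created >= actual_created           # rule d
            and store_last_write >= actual_modified   # rule e
            and store_last_write >= store_created)    # rule f
```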
EXAMPLE: WITSML v2.0 requires that if a WITSML implementation supports the Channel data object,
that WITSML implementation MUST support the PropertyKind data object (i.e. the contents of the
PropertyKindDictionary). Messages in ETP that handle the Channel data object have a field that MUST
reference a PropertyKind data object, by specifying a URI to the specific property. For WITSML v2.0, the
PropertyKindDictionary is in the Energistics common v2.1 ancillary folder.
For ETP, the Properties in the Property Kind Dictionary are used as follows:
1. Discovery (Protocol 3): Because relevant PropertyKind data objects must be available for an ETP
store that supports Channel objects, endpoints can discover the available PropertyKind data objects,
and then use the discovered/desired values to do discovery or query operations on the ETP store.
EXAMPLE: Give me all the channels with property kind equal to "gamma ray".
2. ChannelSubscribe (Protocol 21): When getting metadata about a channel from a store, the store
MUST populate the channelClassUri field (in the ChannelMetadataRecord; see Section 23.33.7)
with the URI for the appropriate PropertyKind data object.
a. NOTE: In ETP v1.2, the IndexMetadataRecord and AttributeMetadataRecord also provide
fields where the URI of a property kind MAY be entered. The field is optional in this version of
ETP because the current published domain model (WITSML v2.0) does not have this field. In
future versions of ETP (and WITSML) this field may be required.
3. DiscoveryQuery (Protocol 13) and StoreQuery (Protocol 14): see item 1 in this list.
NOTE: PWLS v3.0 was published in March 2021. At the time ETP v1.2 was published, the current
published version of the PropertyKindDictionary in Energistics common was based on earlier drafts of
PWLS v3.0. A PropertyKindDictionary based on the final PWLS v3.0 will be published in the next version
of Energistics common.
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "ChannelMetadataRecord",
"fields":
[
NOTE: Arrays, maps, strings and the bytes data type are NOT nullable unless they are in an Avro union
that includes “null”. When these data types are not in such a union, to send a “null” value for these data
types, you MUST send:
for arrays, a zero length array
for maps, an empty map
for strings, an empty string
for bytes, a zero length array
Binary encoding of nullable types is specified in the Avro Specification chapter “Data Serialization”, sub-
heading “Unions”. For example, the “startIndex” in the example above with the value “16” must be
encoded as 0x02, 0x20 (hex). The value 0x02 is the zig-zag encoded value 1, selecting the “long” type in
the union; it is followed by 0x20, the zig-zag encoded value 16.
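The worked example above can be reproduced in a few lines. This is a teaching sketch, not an ETP implementation: it handles only a ["null", "long"] union, and only values whose zig-zag encoding fits in a single varint byte (longs in -64..63).

```python
# Reproduces the worked example above: the value 16 in a ["null", "long"]
# Avro union encodes as the two bytes 0x02 0x20 -- zig-zag varint 1
# selecting the "long" branch, then zig-zag varint 16. (Single-byte
# varints only hold zig-zag values < 128, i.e., longs in -64..63; this
# teaching sketch does not handle larger values.)

def zigzag(n):
    return (n << 1) ^ (n >> 63)

def encode_nullable_long(value):
    if value is None:
        return bytes([zigzag(0)])             # branch 0 ("null"): no payload
    return bytes([zigzag(1), zigzag(value)])  # branch 1 ("long") + value
```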
3.13 Troubleshooting
Data acquisition and transfer related to oil and gas operations presents many, frequently occurring
challenges. ETP has been designed to help address many of these challenges. This section highlights
commonly occurring problems and ways to deal with them. In some cases, the method to address an
issue is quite specific (EXAMPLE: Section 3.13.4); in other cases, it is more of a heuristic (EXAMPLE:
Section 3.13.3).
The complete list and definitions of endpoint, data object and protocol capabilities are included in
Chapter 23.
Based on requirements, security for ETP v1.2 had these design goals:
Use common mechanisms for HTTP resources and ETP resources.
Some HTTP resources may be protected, for example, the well-known endpoint for pre-session
discovery. We did not want to use 2 different mechanisms.
Adopt and re-use HTTP Auth schemes.
Allow for flexible extensibility as security protocols evolve.
Allow any type of bearer token to be used (NOT just JWT)
To better allow for end-to-end scenarios, allow authorization to happen at the ETP application layer,
not just the HTTP transport layer.
As in WITSML, ETP v1.2 specifies auth schemes that MUST be implemented by all servers for
interoperability, but allows for additional schemes (client certificates, for example) to be used by
agreement between specific parties. This approach allows implementers to develop custom enhanced
functionality now, and allows ETP to expand its functionality and requirements in the future.
Some basic concepts used in this security scheme are introduced here and explained further below in this
section:
A "bearer token" is a general term for a mechanism whereby, if the "bearer" presents the token, it can gain
access to a resource. In the context of IT security there are many types of bearer tokens. The term
"access token" refers specifically to a bearer token issued by an authorization service that authorizes
access to a specific resource. The bearer does not know the contents of the access token, nor does it
need to; the bearer only needs to know how to get the access token and present it to the resource it
needs access to. The required ETP workflow uses access tokens.
This security scheme requires an authorization server, which issues access tokens for an ETP
endpoint. An authorization server may be separate from the ETP endpoint and may have a different
authority, or it may be hosted at a relative URI of the ETP endpoint.
Implementations should be guided in the approach they take by any requirements or limitations of any
middleware they want to participate in handling incoming connections.
b. The access token is an opaque bearer token (i.e., the client does not know its content nor does it
need to).
c. An endpoint MUST acquire the bearer token from one of the peer endpoint’s authorization
servers.
5. An endpoint then presents the bearer token to the peer ETP endpoint as described in Section 4.3.
["Bearer authz_server=\"https://YourAuthServer/Path\""]
["Basic",
"Bearer authz_server=\"https://YourOtherAuthServer/SomePath\"
scope=\"yourRequiredScopes\""]
3. An ETP server MUST have the AuthorizationDetails endpoint capability, which must meet the
requirements of Point 2 above.
4. If an ETP client does NOT need to authorize ETP servers, it MAY omit the AuthorizationDetails.
WebSocket connection request and "prepare" the ETP session request. EXAMPLES:
a. The AuthorizationDetails endpoint capability has the information the client needs to find the
authorization server and get a bearer token. For the high-level workflow for how to get the bearer
token, see Section 4.1.2.
b. For endpoint capabilities MaxWebSocketFramePayloadSize and
MaxWebSocketMessagePayloadSize, the client can compare these values to its own max
payload limits and then use them in the WebSocket connection request as described in step 4.
3. Client connects to the ETP server using WebSocket.
a. ETP servers MUST support HTTP/1.1 and RFC6455 (https://tools.ietf.org/html/rfc6455) and MAY
support HTTP/2 and RFC8441 (https://tools.ietf.org/html/rfc8441).
b. If the server requires transport layer authorization, it MUST use RFC 6750.
4. Establish the WebSocket connection. To do this, the client begins with the standard WebSocket
handshake, and specifies the necessary attributes listed in the table below. This list of attributes
includes both standard and ETP-custom ones.
a. Some of the attributes are REQUIRED (RQD) as indicated in the table.
b. Some of the attributes MAY be specified as EITHER a header property (HP) or a query string
parameter (QSP) but SHOULD NOT be specified as both in a WebSocket request.
i. If the same attribute is specified as both a header property and a query string parameter, the
server MUST process ONLY the header property.
ii. HTML5 Web browser clients cannot currently add custom headers to a WebSocket request,
so they MUST include these options as query string parameters. (For more information, see
Section 4.3.2.)
c. All protocol header names and values MUST be lower case.
5. The server establishes the WebSocket (ws/wss) connection by responding to the client's WebSocket
handshake with the latest version of ETP that it supports (based on the client's preference list).
a. If the sec-websocket-protocol header value is not present or does not match any version that the
server supports, then the server MUST respond with HTTP status code 400 (Bad Request).
b. For the custom header of etp-encoding:
i. If this header is not present, the encoding is assumed to be binary.
ii. For HTML5 Web browser clients that send etp-encoding as a query string parameter, servers
MUST accept and process this value.
c. If the server does not support the requested etp-encoding, it MUST reject the connection
request with HTTP status code 400 (Bad Request). The client can try again (if it wishes) with the
paths:
  /.well-known/etp-server-capabilities:
    parameters:
      in: query
      name: GetVersions
      type: boolean
      required: false
      default: false
A server supporting ETP v1.1 and ETP v1.2 would return this response:
[
"energistics-tp",
"etp12.energistics.org"
]
c. Query parameter for the format of specific capabilities of each version of ETP:
    in: query
    name: $format
    type: string
    required: false
    default: binary
    description: This controls the format used in the response when
      GetVersions=false. The Avro binary encoding of the requested ETP
      version's ServerCapabilities schema is used when $format=binary and an
      Avro JSON encoding is used when $format=json. An ETP server MUST support
      the binary format and MAY support the JSON format. NOTE: The $format
      query parameter does not apply when GetVersion=energistics-tp.
/.well-known/etp-server-capabilities?GetVersions=true
/.well-known/etp-server-capabilities
/.well-known/etp-server-capabilities?GetVersions=false
/.well-known/etp-server-capabilities?GetVersion=energistics-tp
/.well-known/etp-server-capabilities?GetVersions=false&GetVersion=energistics-tp
NOTE: The above for ETP v1.1 are all technically equivalent due to the way the default query
parameter values are defined to be backwards compatible with ETP v1.1. The version with query
string omitted is what ETP v1.1 clients should issue.
Query parameters for ETP v1.2 server capabilities:
/.well-known/etp-server-capabilities?GetVersion=etp12.energistics.org&$format=binary
/.well-known/etp-server-capabilities?GetVersions=false&GetVersion=etp12.energistics.org
NOTE: The above for ETP v1.2 are both technically equivalent due to the default for the
GetVersions query parameter being GetVersions=false and the default for the $format parameter
being $format=binary.
i. The content of this endpoint is likely to be a branded HTML page with marketing or support
information about the server and is intended to be accessible by an end user from a Web
browser.
4.3.2 How Browser-based Clients use Query Parameters Instead of Header Properties
Because the HTML5 WebSocket API definition does not allow access to the request headers, it is not
possible for browser-based clients to add request headers when they make a WebSocket connection
request (as specified in Section 4.3, step 4). Therefore, if a browser-based client wants/needs to use
these header properties, it MUST specify them as query string parameters.
ETP defines these additional rules, which browser-based clients MUST observe and all servers MUST
support:
1. The client MAY provide the header property information (e.g., etp-encoding) in the query string
parameter of the WebSocket upgrade request.
2. All parameters provided on the query string must be URL-encoded.
EXAMPLE: The header property
etp-encoding: binary
becomes the query string parameter
&etp-encoding=binary
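Rules 1 and 2 above can be sketched with the standard library's URL-encoding helper. This is an illustrative sketch; the server URL is hypothetical.

```python
# Sketch of rules 1-2 above: a browser-based client moving a header
# property into a URL-encoded query string parameter on the WebSocket
# upgrade URL. The server URL is hypothetical.

from urllib.parse import urlencode

def websocket_url(base, params):
    """Append URL-encoded query parameters to a WebSocket URL."""
    return base + "?" + urlencode(params)

url = websocket_url("wss://example.org/etp", {"etp-encoding": "binary"})
```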
ETP uniquely identifies each session by assigning a UUID. However, the session identification is only to
help with debugging and troubleshooting. ETP does NOT maintain session state between WebSocket
connections or provide any means to resume a prior session (i.e., there is no session survivability).
For important facts about ETP sessions, see Section 3.2.
Requirements
1. If a server requires authorization, authorization MAY be performed at the transport layer (as part of
establishing the WebSocket connection, which is explained in Section 4.3), at the application layer, or at both layers
(depending on requirements of individual implementations). Instructions for authorizing in the
application layer are included in the procedure below.
2. An ETP session MUST be established within the client's and server's respective values for their
SessionEstablishmentTimeoutPeriod endpoint capability (for definition, see Section 5.2.3).
a. If a session is not successfully established within this period, either endpoint may send error
ETIMED_OUT (26) and then close the WebSocket. The CloseSession message MUST NOT be
sent because no session was established.
b. The SessionEstablishmentTimeoutPeriod begins with the first RequestSession message.
Process steps
1. After the WebSocket connection has been established (as described in Section 4.3), the client MUST
send a RequestSession message (Section 5.3.1) to the store.
a. The client MUST send a RequestSession message within the server's value for the
RequestSessionTimeoutPeriod endpoint capability (for definition, see Section 5.2.3).
i. If a server does not receive a RequestSession message within this period, it MAY send
error ETIMED_OUT (26) and close the WebSocket connection. The CloseSession message
MUST NOT be sent, because no attempt was made to establish a session.
b. The field names on the RequestSession message are listed here for easy reference and context
in this message sequence. For complete definitions, purposes and usage requirements, see
Section 5.3.1.
i. applicationName
ii. applicationVersion
iii. clientInstanceId
iv. requestedProtocols is the list of protocols that the client wants to use in this ETP session. For
each protocol being requested, this field includes the protocol number, version, the role that
the client wants the server to take in the session, and protocol capabilities with the client's
values for them. The roles MUST be consistent. That is, if a client requests one role in one
protocol, it MUST NOT request the other role in another protocol. EXAMPLE: The client may
not request the store role in Store (Protocol 4) and the customer role in StoreNotification
(Protocol 5). If the client requests inconsistent roles, the server MUST reject the request with
EINVALID_OPERATION (32).
NOTE: Core (Protocol 0) MUST NOT be listed in this field.
v. supportedDataObjects, which includes for each data object being requested, its qualifiedType
and dataObjectCapabilities with the client's values for them. For more information about data
object capabilities, see Section 3.3.4.
vi. supportedCompression
vii. supportedFormats
viii. currentDateTime
ix. earliestRetainedChangeTime
x. endpointCapabilities, with the client values specified. NOTE: If the client requires the server to
authorize to it (serverAuthorizationRequired field = true) this field should include the client's
AuthorizationDetails endpoint capability (for definition, see Section 5.2.3).
xi. serverAuthorizationRequired. A flag that, if set to true, indicates that the server MUST authorize
with the client. NOTE: This field is intended for clients that are ETP
stores. Clients MAY use this in other scenarios, but servers are not required to support use of
this field in all cases.
c. This request MUST NOT exceed the server's value for MaxSessionClientCount endpoint
capability.
i. A server SHOULD check for this limit at the time of the WebSocket connection request (see
Section 4.3). However, it's possible that the server may not be able to determine if it must
reject a specific client until the client has been authorized and it receives a RequestSession
message with the client's client instance ID (clientInstanceId).
ii. A server MAY refuse any incoming connections if a new connection from a particular client
may cause it to exceed its value for MaxSessionClientCount endpoint capability.
iii. If a server chooses to reject an incoming connection because it would exceed this limit, it
SHOULD reject the request with ELIMIT_EXCEEDED (12).
2. The server MUST respond with one of the following:
a. If the client has already authorized (when it created the WebSocket connection) and the server
requires no additional authorization, continue with Step 9.
b. If the server requires the client to authorize, then it MUST send error
EAUTHORIZATION_REQUIRED (28).
i. For the client to authorize to the server it MUST send the Authorize message (see Section
5.3.4) with the authorization field populated with an equivalent HTTP Authorization header
value (i.e., bearer token) issued by the server's authorization server.
ii. Steps 3 and 4 explain the possible scenarios (CASE 1 and CASE 2) and steps for the client
to get the bearer token.
3. CASE 1: The client got a token during the WebSocket connection process (described in Section
4.3):
a. The client MUST send the Authorize message with the authorization field populated with the
token.
b. If the server accepts that token, (i.e., the client has satisfied all requirements for authorization),
go to Step 5.
c. If the server DOES NOT accept the token, then it MUST send error
EAUTHORIZATION_REQUIRED (28).
i. The client MUST continue with step 4.
4. CASE 2: The client DOES NOT have a token. In this case, the client and server MUST exchange
Authorize and AuthorizeResponse messages so the client can get the information needed to get a
valid token, which is explained in steps a – d.
a. The client MUST send the Authorize message with the authorization field blank (empty string).
b. The server MUST send the AuthorizeResponse message with the success flag set to false and
the challenges field MUST contain the challenges needed and the metadata with the location of
the authorization server. (NOTE: This MUST be the same information that is specified in an
endpoint's AuthorizationDetails endpoint capability, as described in Section 4.1.3.)
i. Depending on the specific details of the authorization requirements, the client and server
MAY require several exchanges before the client has the information needed to get a valid
authorization.
c. The client uses the information in the AuthorizeResponse message to get a bearer token using
the workflow described in Section 4.1.2.
d. After the client has acquired the bearer token, the client MUST send the Authorize message
with the authorization field populated with a valid equivalent HTTP Authorization header value
(i.e., a bearer token) accepted by the server.
e. If the server receives too many unsuccessful attempts and there is no other valid authorization
for connection, the server MAY send error EAUTHORIZATION_EXPIRED (10) and disconnect
the WebSocket.
5. When the server receives the Authorize message with valid authorization information from the client,
it MUST send the AuthorizeResponse message with the success flag set to true.
6. The client MUST re-send the RequestSession message.
a. The client MUST send the RequestSession message within the server's value for the
SessionEstablishmentTimeoutPeriod endpoint capability.
i. If a server does not receive a RequestSession message within this period, it MAY send
error ETIMED_OUT (26) and close the WebSocket connection. The CloseSession message
MUST NOT be sent, because no attempt was made to establish a session.
b. If the serverAuthorizationRequired flag is set to false, continue with Step 9.
c. If the serverAuthorizationRequired flag is set to true, then the client is requiring that the server
authorize.
i. For the server to authorize to the client, the server MUST send the Authorize message (see
Section 5.3.4) with the authorization field populated with an equivalent HTTP Authorization
header value (i.e., bearer token) issued by the client's authorization server.
1. If BOTH the server and the client require authorization, the client MUST authorize to the
server first, then the server MUST authorize to the client. These MUST be sequential
(NOT concurrent) operations.
ii. The client MUST provide the metadata and challenges that comprise the AuthorizationDetails
(as defined in Section 4.1.3) to the server.
iii. ETP specifies the 2 methods below for the client to provide the metadata and challenges to
the server; endpoints MUST support BOTH methods, but use ONLY ONE method in this
workflow:
1. METHOD A: Populate the AuthorizationDetails endpoint capability (in the
RequestSession message's endpointCapabilities field) with the metadata and
challenges. Continue with Step 7.
2. METHOD B: Use the Authorize and AuthorizeResponse messages to iterate and
provide the metadata and challenges (as described in Step 4 above, where the client is
authorizing to the server). Continue with Step 8.
7. For METHOD A: The server MUST use the metadata and challenges to get a valid HTTP
Authorization header (i.e., a bearer token), as described in Section 4.1.2.
a. After the server has acquired the bearer token, the server MUST send an Authorize message
with the authorization field populated with an equivalent HTTP Authorization header value (i.e.,
bearer token) accepted by the client.
b. The client MUST respond with the AuthorizeResponse message with the success flag set to
true.
c. Continue with Step 9.
8. For METHOD B: The server MUST use the same process described above (in Step 4, for the client)
to exchange Authorize and AuthorizeResponse messages to get the information needed to get a
valid HTTP Authorization header and then use that information to get a bearer token as described in
Section 4.1.2.
a. After the server has acquired the bearer token, the server MUST send an Authorize message
with the authorization field populated with an equivalent HTTP Authorization header value (i.e.,
bearer token) accepted by the client.
b. The client MUST respond with the AuthorizeResponse message, with the success flag set to
true.
c. Continue with Step 9.
9. If the server supports at least one of the requested protocols, then the server MUST respond with the
OpenSession message, indicating which of the requested protocols and roles it can support.
a. When the server has sent the OpenSession message, the session is established and authorized
(per the requirements of the two endpoints in a particular session).
b. The field names on the OpenSession message are listed here for easy reference and context in
this message sequence. For complete definitions, purposes and usage requirements, see Section
0.
i. applicationName
ii. applicationVersion
iii. serverInstanceId
iv. supportedProtocols NOTE: Core (Protocol 0) MUST NOT be listed in this field.
v. supportedDataObjects
vi. supportedCompression
vii. supportedFormats
viii. sessionId
ix. currentDateTime
x. earliestRetainedChangeTime
xi. endpointCapabilities
c. Possible errors:
i. If the server supports NONE of the requested protocols, it MUST send error
ENOSUPPORTEDPROTOCOLS (2) and drop the connection.
ii. If the server supports NONE of the roles for each protocol that the client requested, it MUST
send error ENOROLE (1) and drop the connection.
iii. If the server supports NONE of the formats for data objects that the client requested, the
server MUST send error ENOSUPPORTEDFORMATS (28) and drop the connection.
iv. For additional requirements and information, see Section 5.2.2, Rows 5 and 6 of the table.
10. Based on the information in the OpenSession message, the client "decides" whether to terminate the
session or proceed with operations that it connected to the server to perform.
a. If the client required the server to authorize but the server fails to do so (and sends the
OpenSession message anyway), the client MUST:
i. Send error EAUTHORIZATION_EXPIRED (10).
ii. Send the CloseSession message (Section 5.3.3).
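The session-establishment and authorization workflow above (Steps 2 through 10) can be reduced to a client-side decision function. The following is a non-normative, illustrative sketch: the message and error names follow this specification, but the reduction of the handshake to a single function is this sketch's own simplification.

```python
# Non-normative sketch of the client side of the session-establishment
# handshake. Message and error names follow the workflow above.
def client_next_action(server_response: str, has_token: bool) -> str:
    """Given the server's last response, return the client's next step."""
    if server_response == "OpenSession":
        return "session established"                          # Step 9
    if server_response == "EAUTHORIZATION_REQUIRED":
        # CASE 1: present the token obtained during the WebSocket connect;
        # CASE 2: send an empty authorization field to request challenges.
        return "send Authorize(token)" if has_token else "send Authorize('')"
    if server_response == "AuthorizeResponse(success=false)":
        # Step 4c-d: use the returned challenges/metadata to get a token.
        return "get bearer token from authorization server, then Authorize"
    if server_response == "AuthorizeResponse(success=true)":
        return "re-send RequestSession"                       # Step 6
    raise ValueError("unexpected response: " + server_response)
```

In practice an endpoint implements this as part of its full message loop; the sketch only shows the decision points named in the steps above.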
3. Capabilities-related behavior
1. Relevant ETP-defined endpoint, data object, and/or protocol capabilities
MUST be specified when the ETP session is established (see Chapter 5)
and MUST be used/honored as defined in the relevant ETP sub-protocol.
2. For an explanation of endpoint, data object, and protocol capabilities, see
Section 3.3.
a. For the list of global capabilities and related behavior, see Section
3.3.2.
3. The capabilities listed in Section 5.2.3 MUST BE used in this ETP sub-
protocol. Additional details for how to use the protocol capabilities are
included below in this table and in Section 5.2.1 Core: Message
Sequences.
5. Protocol Negotiation
1. A client and server determine which protocols they will use in an ETP
session using the requestedProtocols and supportedProtocols fields (on the
RequestSession and OpenSession messages, respectively), which for
each protocol includes protocol number, version, client-requested role, and
protocol capabilities.
a. The negotiated protocols are essentially the intersection of
requestedProtocols and supportedProtocols.
b. These fields MUST NOT list Core (Protocol 0).
2. In addition to the rules specified in Section 5.2.1.1, the server in its
OpenSession response message:
a. MUST NOT change the version of a requested protocol. If the server
cannot support the exact version requested, then the server MUST treat
the requested protocol(s) as 'unsupported'.
b. MAY offer to support only some of the requested protocols.
c. MUST NOT offer to support any additional protocols.
d. MUST NOT change the requested role for each protocol and MUST fill
only one role (the one specified by the client).
3. If the server response does not provide adequate functionality, then the
client MAY send the CloseSession message immediately.
4. During the ETP session, endpoints MUST use only the supported protocols
that were negotiated.
a. If a client tries to use a protocol not included in the supportedProtocols
field, the server MUST send error EUNSUPPORTED_PROTOCOL (4).
6. Negotiation of supported data objects and related capabilities
1. A client and server determine which data objects they will use in an ETP
session using the supportedDataObjects fields (same name on the
RequestSession and OpenSession messages). The negotiated supported
data objects are essentially the intersection of the two fields.
2. For each data object in these fields, each endpoint MUST list its supported
data objects, and for each MUST include:
a. A qualifiedType
7. Messages that MUST NEVER be compressed
1. If, in the MessageHeader record, the protocol field = 0, the message MUST
NEVER be compressed.
a. NOTE: ProtocolException and Acknowledge messages are defined
in Core (Protocol 0); however, they may be used in any protocol, so
their protocol field is rarely 0.
2. If an endpoint receives a message with protocol=0 that is compressed, it
MUST send error ECOMPRESSION_NOTSUPPORTED (13).
8. Authorization renewal and expiration
1. An endpoint SHOULD remain authorized with the other endpoint (as required
by the respective endpoints when the ETP session was established) for the
duration of the ETP session.
a. An endpoint MUST re-authorize with the other endpoint BEFORE the
current authorization expires.
i. For the high-level workflow on how an endpoint gets a bearer
token, see Section 4.1.2.
b. As needed, either endpoint CAN send the Authorize message (as
described in Section 5.2.1.1) at any time, to remain authorized for the
duration of the session.
c. After the initial authorization, the authorization method and security
principal MUST NOT change, and the scope MUST NOT be reduced.
d. The authorization for each endpoint may have very different expirations,
so each endpoint may re-authorize to the other at different times.
2. If an endpoint's authorization will expire "soon", the other endpoint MAY send
error EAUTHORIZATION_EXPIRING (28).
a. For more information, see the detailed text on the error code in Section
24.3.
3. During an ETP session, if an endpoint's authorization expires, the other
endpoint MUST:
a. Send error EAUTHORIZATION_EXPIRED (10).
b. Send the CloseSession message.
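The protocol negotiation rules in Row 5 amount to an intersection that preserves the client's requested version and role. The following non-normative sketch illustrates that logic; the dict keys mirror the SupportedProtocol field names, but the flat dict shape is this sketch's assumption.

```python
# Non-normative sketch of protocol negotiation (Row 5 above).
def negotiate_protocols(requested, supported_by_server):
    """Return the subset a server may echo in OpenSession.supportedProtocols.

    Entries are dicts with 'protocol', 'protocolVersion', and 'role' keys.
    Per Row 5: the server MUST NOT change the requested version or role,
    MAY offer only some of the requested protocols, and MUST NOT add any.
    """
    offered = []
    for req in requested:
        for sup in supported_by_server:
            if (req["protocol"] == sup["protocol"]
                    and req["protocolVersion"] == sup["protocolVersion"]):
                # Echo the client's entry exactly as requested.
                offered.append(req)
                break
    return offered
```

A server that supports none of the requested protocols sends error ENOSUPPORTEDPROTOCOLS (2) rather than an empty list.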
For a server:
A valid session is established when it sends an OpenSession
message to the client, which indicates a session has been
successfully established.
The time period starts on receiving the initial RequestSession
message from the client.
For a client:
A valid session is established when it receives an OpenSession
message from the server.
The time period starts when it sends the initial RequestSession
message to the server.
AuthorizationDetails:
1. Contains an ArrayOfString with WWW-Authenticate style
challenges.
2. To support the required authorization workflow (to enable an
endpoint to acquire an access token with the necessary scope
from the designated authorization server), the
AuthorizationDetails endpoint capability MUST include at least
one challenge with the Bearer scheme, which must include the
'authz_server' and 'scope' parameters.
a. The 'authz_server' parameter MUST be a URI for an
authorization server to enable the endpoint to acquire any
other needed metadata about the authorization server using
OpenID Connect Discovery.
3. An ETP server MUST have the AuthorizationDetails endpoint
capability, which must meet the requirements of Point 2 above.
4. If an ETP client does NOT need to authorize ETP servers, it
MAY omit the AuthorizationDetails.
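As a non-normative illustration of consuming an AuthorizationDetails challenge, the sketch below extracts the 'authz_server' and 'scope' parameters from a WWW-Authenticate style Bearer challenge. The challenge string and URI are placeholders, and the parameter parsing is deliberately simplified (comma-separated key="value" pairs only).

```python
import re

# Non-normative sketch: extract parameters from a Bearer challenge as
# carried in the AuthorizationDetails endpoint capability.
def parse_bearer_challenge(challenge: str) -> dict:
    """Return the key="value" parameters of a WWW-Authenticate Bearer challenge."""
    scheme, _, params = challenge.partition(" ")
    if scheme != "Bearer":
        raise ValueError("expected a Bearer scheme challenge")
    # Simplified parsing: quoted values, no escapes or unquoted tokens.
    return dict(re.findall(r'(\w+)="([^"]*)"', params))

# Hypothetical challenge string (the URI and scope value are placeholders):
params = parse_bearer_challenge(
    'Bearer authz_server="https://auth.example.org", scope="etp12"')
```

An endpoint would then use the 'authz_server' URI with OpenID Connect Discovery to obtain the remaining authorization-server metadata, as described in Section 4.1.2.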
Protocol Capabilities
NONE
currentDateTime The current date and time of the endpoint's system clock. long 1 1
When establishing an ETP session, each endpoint
indicates its current date and time.
This field is part of the behavior for eventual
consistency between 2 stores.
It must be a UTC dateTime value, serialized as a long,
using the Avro logical type timestamp-micros
(microseconds from the Unix Epoch, 1 January 1970
00:00:00.000000 UTC).
earliestRetainedChangeTime When the endpoint is a store, the endpoint MUST set this long 1 1
to the earliest timestamp that customers may use to
request retained change information, such as deleted
resources and change annotations. For some stores, if the
store has not yet been running longer than its value for the
ChangeRetentionPeriod capability, the value in this field
MAY be more recent than the value for
ChangeRetentionPeriod. Customers should not request
and stores will not provide retained change information
from before this timestamp.
endpointCapabilities A map of key-value pairs of endpoint-specific capability DataValue 0 *
data (i.e., constraints, limitations). The names, defaults,
optionality, and expected data types are defined by this
specification. These endpoint capabilities are exchanged in
this and the OpenSession message between the 2
endpoints for use in applicable protocols as defined in
relevant chapters in this specification.
Map keys are capability names, which are case-
sensitive strings. For ETP-defined capabilities, the
name must be spelled exactly as listed in
EndpointCapabilityKind.
Map values are of type DataValue.
For more information about capabilities and rules for
using them, see Section 3.3.
serverAuthorizationRequired A flag that if set to true means the client is indicating that boolean 1 1
the server MUST authorize with the client.
NOTE: This field is intended for clients that are ETP
stores. Clients MAY use this in other scenarios, but servers
are not required to support use of this field in all cases.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Core",
"name": "RequestSession",
"protocol": "0",
"messageType": "1",
"senderRole": "client",
"protocolRoles": "client, server",
"multipartFlag": false,
"fields":
[
{ "name": "applicationName", "type": "string" },
{ "name": "applicationVersion", "type": "string" },
{ "name": "clientInstanceId", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{
"name": "requestedProtocols",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.SupportedProtocol" }
},
{
"name": "supportedDataObjects",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.SupportedDataObject" }
},
{
"name": "supportedCompression",
"type": { "type": "array", "items": "string" }, "default": []
},
{
"name": "supportedFormats",
"type": { "type": "array", "items": "string" }, "default": ["xml"]
},
{ "name": "currentDateTime", "type": "long" },
{ "name": "earliestRetainedChangeTime", "type": "long" },
{ "name": "serverAuthorizationRequired", "type": "boolean", "default": false },
{
"name": "endpointCapabilities",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
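A non-normative sketch of a minimal RequestSession payload mirroring the Avro record above, shown as a Python dict rather than Avro-serialized bytes. All values are placeholders, and the exact nested shapes assumed for the SupportedProtocol and Version records are illustrative; it also shows computing currentDateTime as the Avro logical type timestamp-micros.

```python
import uuid
from datetime import datetime, timedelta, timezone

_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_timestamp_micros(dt: datetime) -> int:
    # Exact integer microseconds since the Unix Epoch (Avro timestamp-micros).
    return (dt - _EPOCH) // timedelta(microseconds=1)

# All values below are illustrative placeholders.
request_session = {
    "applicationName": "ExampleClient",
    "applicationVersion": "1.0.0",
    "clientInstanceId": uuid.uuid4().bytes,   # Uuid: 16 bytes
    "requestedProtocols": [
        # Core (Protocol 0) MUST NOT be listed here.
        {
            "protocol": 1,                    # ChannelStreaming
            "protocolVersion": {"major": 1, "minor": 2, "revision": 0, "patch": 0},
            "role": "producer",
            "protocolCapabilities": {},
        },
    ],
    "supportedDataObjects": [],
    "supportedCompression": ["gzip"],
    "supportedFormats": ["xml"],
    "currentDateTime": to_timestamp_micros(datetime.now(timezone.utc)),
    "earliestRetainedChangeTime": 0,
    "serverAuthorizationRequired": False,
    "endpointCapabilities": {},
}
```

A real client would Avro-encode this record per the schema above before sending it over the WebSocket.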
currentDateTime The current date and time of the endpoint's system long 1 1
clock. When establishing an ETP session, each
endpoint indicates its current date and time.
This field is part of the behavior for
eventual consistency between 2 stores.
It must be a UTC dateTime value, serialized as a
long, using the Avro logical type timestamp-micros
(microseconds from the Unix Epoch, 1 January 1970
00:00:00.000000 UTC).
earliestRetainedChangeTime When the endpoint is a store, the endpoint MUST set long 1 1
this to the earliest timestamp that customers may use
to request retained change information, such as
deleted resources and change annotations. For some
stores, if the store has not yet been running longer
than its value for the ChangeRetentionPeriod
capability, the value in this field MAY be more recent
than the value for ChangeRetentionPeriod.
Customers should not request and stores will not
provide retained change information from before this
timestamp.
sessionId An ID (UUID) that the server assigns to uniquely Uuid 1 1
identify an ETP session; it must be of the type Uuid.
The sessionId is only to help with debugging and
troubleshooting. ETP does NOT maintain session
state (i.e., there is no session survivability).
endpointCapabilities A map of key-value pairs of endpoint-specific DataValue 0 *
capability data (i.e., constraints, limitations). The
names, defaults, optionality, and expected data types
are defined by this specification. These endpoint
capabilities are exchanged in this and the
RequestSession message between the 2 endpoints
for use in applicable protocols as defined in relevant
chapters in this specification.
Map keys are capability names, which are case-
sensitive strings. For ETP-defined capabilities,
the name must be spelled exactly as listed in
EndpointCapabilityKind.
Map values are of type DataValue.
For more information about capabilities and rules
for using them, see Section 3.3.
Additionally, the ServerCapabilities may list a server's
endpoint capabilities, though they may vary from the
ones listed here for various reasons.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Core",
"name": "OpenSession",
"protocol": "0",
"messageType": "2",
"senderRole": "server",
"protocolRoles": "client, server",
"multipartFlag": false,
"fields":
[
{ "name": "applicationName", "type": "string" },
{ "name": "applicationVersion", "type": "string" },
{ "name": "serverInstanceId", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{
"name": "supportedProtocols",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.SupportedProtocol" }
},
{
"name": "supportedDataObjects",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.SupportedDataObject" }
},
{ "name": "supportedCompression", "type": "string", "default": "" },
{
"name": "supportedFormats",
"type": { "type": "array", "items": "string" }, "default": ["xml"]
},
{ "name": "currentDateTime", "type": "long" },
{ "name": "earliestRetainedChangeTime", "type": "long" },
{ "name": "sessionId", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{
"name": "endpointCapabilities",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Core",
"name": "CloseSession",
"protocol": "0",
"messageType": "5",
"senderRole": "client,server",
"protocolRoles": "client, server",
"multipartFlag": false,
"fields":
[
{ "name": "reason", "type": "string", "default": "" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Core",
"name": "Authorize",
"protocol": "0",
"messageType": "6",
"senderRole": "client,server",
"protocolRoles": "client, server",
"multipartFlag": false,
"fields":
[
{ "name": "authorization", "type": "string" },
{
"name": "supplementalAuthorization",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Core",
"name": "Ping",
"protocol": "0",
"messageType": "8",
"senderRole": "client,server",
"protocolRoles": "client, server",
"multipartFlag": false,
"fields":
[
{ "name": "currentDateTime", "type": "long" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Core",
"name": "Pong",
"protocol": "0",
"messageType": "9",
"senderRole": "client,server",
"protocolRoles": "client, server",
"multipartFlag": false,
"fields":
[
{ "name": "currentDateTime", "type": "long" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Core",
"name": "AuthorizeResponse",
"protocol": "0",
"messageType": "7",
"senderRole": "client,server",
"protocolRoles": "client, server",
"multipartFlag": false,
"fields":
[
{ "name": "success", "type": "boolean" },
{
"name": "challenges",
"type": { "type": "array", "items": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Core",
"name": "ProtocolException",
"protocol": "0",
"messageType": "1000",
"senderRole": "*",
"protocolRoles": "client, server",
"multipartFlag": true,
"fields":
[
{ "name": "error", "type": ["null", "Energistics.Etp.v12.Datatypes.ErrorInfo"] },
{
"name": "errors",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.ErrorInfo" },
"default": {}
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Core",
"name": "Acknowledge",
"protocol": "0",
"messageType": "1001",
"senderRole": "*",
"protocolRoles": "client, server",
"multipartFlag": false,
"fields":
[
]
}
6 ChannelStreaming (Protocol 1)
ProtocolID: 1
Defined Roles: producer, consumer
Use ChannelStreaming (Protocol 1) to stream channel-oriented data from an endpoint that is a "simple"
producer (i.e., a sensor) to a consumer endpoint. Beginning in ETP v1.2, Protocol 1 is used only for so-
called "simple streamers" (in previous versions of ETP, Protocol 1 included all channel streaming
behavior).
The main use case that this protocol supports is basic "WITS-like", one-way data streaming from a sensor
or other relatively "dumb" device. There is no real "back and forth" between the endpoints—that is, the
consumer cannot discover available channels nor specify which channels it wants; the producer simply
sends any data it has. Also, there is no flow control—except for "stop". In its simplest form, this protocol
supports a workflow of connect and begin receiving data. Reliability (data transmission without loss)
cannot be guaranteed.
Channel metadata is sent first, so the receiving endpoint can use the metadata to "set up" (e.g., to
interpret and understand what channels it will be receiving, relevant units of measure (UOM), etc.); then,
as new data points are produced, the sending endpoint simply streams the new data, which at a minimum
is typically the latest index and value at that index.
The main types of metadata include those listed here:
Channel metadata is exchanged in the ChannelMetadataRecord (see Section 23.33.7), which is a
standard ETP structure (Avro record) that contains the metadata for one channel. Various messages
in the channel streaming protocols—for example: ChannelMetadata message in ChannelStreaming
(Protocol 1); GetChannelMetadataResponse message in ChannelSubscribe (Protocol 21); and the
OpenChannelsResponse message in ChannelDataLoad (Protocol 22)—send one
ChannelMetadataRecord record per channel.
Index metadata is the metadata about the index(es) in one channel, which includes information such
as the index kind (time, depth, scalar or elapsed time) and direction (increasing or decreasing). The
IndexMetadataRecord (Section 23.33.6) is an ETP datatype (Avro record) sent in the indexes field
of the ChannelMetadataRecord.
Attribute metadata. ETP provides an Avro record (DataAttribute, Section 23.23) that allows an
endpoint to pass attributes associated with individual channel data points (sometimes referred to as
"decorating" individual points). These attributes are typically metadata for things such as quality,
confidence, audit information, etc. NOTE: ETP simply provides the structure for passing such data.
ETP does NOT specify the content and usage, which may be specified by individual MLs (in the
relevant implementation specification) or may be custom.
Consistent with the established pattern (for channels and indexes), ETP defines attribute metadata in
the AttributeMetadataRecord record (Section 23.24), which is the information needed to
interpret/understand the data attributes that may be sent in an ETP session.
and the length of each subarray that Avro will encode onto the wire. For more information, see Section
23.33.7.
In some situations, it's possible that individual values within an array of data could be null. ETP offers
these approaches for specifying null values in an array:
Sparse arrays. Using axisVectorLengths, specify the number of 'skips' (which indicate null values) in
addition to the 'start' offsets.
For arrays of Int, Long or Boolean, ETP specifies a corresponding nullable type (i.e.,
ArrayOfNullableInt, ArrayOfNullableLong, ArrayOfNullableBoolean).
For arrays of double or float values, use "NaN" to specify null values.
You must observe these rules when specifying null values in an array:
1. The base type specified in ChannelMetadataRecord for the array type must be the base, non-
nullable array type (see the ArrayOfLong example in the next item).
2. The underlying type must be consistent for the same channel or other usage. That is, if you start
sending ArrayOfLong, you must use only ArrayOfLong, ArrayOfNullableLong, or SparseArray with Long.
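The null-value rules above can be illustrated with a small, non-normative sketch. The array type names are the ETP datatypes; the selection logic itself, and the helper names, are this sketch's assumptions.

```python
import math

# Non-normative sketch of the null-value rules for channel data arrays.
def wire_array_type(base_type: str, values: list) -> str:
    """Pick the type actually sent, given the declared base (non-nullable) type."""
    has_null = any(v is None for v in values)
    if base_type in ("ArrayOfDouble", "ArrayOfFloat"):
        # Floating-point arrays encode nulls as NaN, so the base type is kept.
        return base_type
    if has_null and base_type in ("ArrayOfInt", "ArrayOfLong", "ArrayOfBoolean"):
        # e.g., ArrayOfLong -> ArrayOfNullableLong
        return base_type.replace("ArrayOf", "ArrayOfNullable")
    return base_type

def encode_float_nulls(values: list) -> list:
    # For double/float arrays, null values are sent as NaN.
    return [math.nan if v is None else v for v in values]
```

Note that the type declared in ChannelMetadataRecord stays the base, non-nullable type in all cases; only the per-message wire type changes.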
If you require eventual consistency (that is, reliable data transmission even when the connection
is occasionally dropped), then you MUST use the protocols for standard streaming, which are
ChannelSubscribe (Protocol 21) and ChannelDataLoad (Protocol 22). The simple streaming protocol
(ChannelStreaming (Protocol 1)) does not support eventual consistency.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelStreaming",
"name": "StartStreaming",
"protocol": "1",
"messageType": "3",
"senderRole": "consumer",
"protocolRoles": "producer,consumer",
"multipartFlag": false,
"fields":
[
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelStreaming",
"name": "StopStreaming",
"protocol": "1",
"messageType": "4",
"senderRole": "consumer",
"protocolRoles": "producer,consumer",
"multipartFlag": false,
"fields":
[
]
}
Multi-part: False
Sent by: producer
Field Name Description Data Type Min Max
channels The list of channels with metadata for each; the ChannelMetadataRecord 1 n
fields for each channel are defined in the
ChannelMetadataRecord.
The URIs MUST be canonical Energistics data
object URIs; for more information, see Appendix:
Energistics Identifiers.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelStreaming",
"name": "ChannelMetadata",
"protocol": "1",
"messageType": "1",
"senderRole": "producer",
"protocolRoles": "producer,consumer",
"multipartFlag": false,
"fields":
[
{
"name": "channels",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.ChannelMetadataRecord" }
}
]
}
ii. If an index value is the same as the previous index value in the data array, the index value
MAY be sent as null.
c. EXAMPLE: These index values from adjacent DataItem records in the data array:
[1.0, 1.0, 2.0, 3.0, 3.0]
MAY be sent as:
[1.0, null, 2.0, 3.0, null].
d. When the DataItem records have both primary and secondary index values, these rules apply
separately to each index.
e. EXAMPLE: These primary and secondary index values from adjacent DataItem records in the
data array:
[[1.0, 10.0], [1.0, 11.0], [2.0, 11.0], [3.0, 11.0], [3.0, 12.0]]
MAY be sent as:
[[1.0, 10.0], [null, 11.0], [2.0, null], [3.0, null], [null, 12.0]].
f. If ALL index values for a DataItem record are to be sent as null, the indexes field should be set to
an empty array.
6. For more information about sending channel data, see Section 6.1.3.
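The index-compaction rule above (a repeated index value in adjacent DataItem records MAY be sent as null, applied separately to each index) can be sketched as follows. This is a non-normative illustration; the function names are this sketch's own.

```python
# Non-normative sketch of index compaction for adjacent DataItem records.
def compact_indexes(indexes: list) -> list:
    """Replace an index value with None when it repeats the previous value."""
    out, prev = [], object()   # sentinel never equals a real index value
    for value in indexes:
        out.append(None if value == prev else value)
        prev = value
    return out

def compact_index_tuples(rows: list) -> list:
    """Apply the rule separately to each index position (primary, secondary, ...)."""
    cols = list(zip(*rows))
    compacted = [compact_indexes(list(col)) for col in cols]
    return [list(row) for row in zip(*compacted)]
```

Running these on the examples in items c and e above reproduces the arrays shown there; a DataItem whose index values would all be null instead carries an empty indexes array (item f).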
Message Type ID: 2
Correlation Id Usage: MUST be ignored and SHOULD be set to 0.
Multi-part: False
Sent by: producer
Field Name Description Data Type Min Max
data Contains the data points for channels, which is an DataItem 1 n
array of DataItem records. Note that the value
must be one of the types specified in DataValue
(Section 23.30)—which include options to send a
single data value (of various types such as
integers, longs, doubles, etc.) OR arrays of
values.
For more information, see Section 6.1.3.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelStreaming",
"name": "ChannelData",
"protocol": "1",
"messageType": "2",
"senderRole": "producer",
"protocolRoles": "producer,consumer",
"multipartFlag": false,
"fields":
[
{
"name": "data",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.DataItem" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelStreaming",
"name": "TruncateChannels",
"protocol": "1",
"messageType": "5",
"senderRole": "producer",
"protocolRoles": "producer,consumer",
"multipartFlag": false,
"fields":
[
{
"name": "channels",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.TruncateInfo" }
}
]
}
7 ChannelDataFrame (Protocol 2)
ProtocolID: 2
Defined Roles: store, customer
A customer uses ChannelDataFrame (Protocol 2) to get channel data from a store in a row-oriented
'frame' or 'table' of data. In oil and gas jargon, the general use case that Protocol 2 supports is typically
referred to as getting a "historical log". (In ETP jargon you are actually getting a frame of data from a
ChannelSet data object; for more information, see Section 7.1.1).
With this protocol, a customer endpoint gets rows of data, where one row consists of a primary index
value, all associated secondary index values from the ChannelSet’s secondary indexes, and all
associated data and attribute values from the ChannelSet’s channels. Being able to retrieve data in a
frame simplifies logic for customer role software applications when dealing with data as a "log" rather than
individual channels.
NOTES:
1. Protocol 2 supports get/read functionality only. To put/write channel data for individual channels, use
ChannelDataLoad (Protocol 22) (see Chapter 20); you CANNOT put "rows" in a channel set.
2. This protocol SHOULD NOT be used to poll for realtime data. Instead use ChannelSubscribe
(Protocol 21) (see Chapter 19) or for "simple streamers" use ChannelStreaming (Protocol 1) (see
Chapter 6).
3. ChannelDataFrame (Protocol 2) allows stores to introduce a delay between when they receive new
channel data and when they make the data available for consumption using Protocol 2. This delay is
intended to help ensure customers receive “complete” rows of data from a store because new data
for channels in a channel set may arrive in the store at different times from different sources.
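To illustrate the row orientation described above, the non-normative sketch below assembles frame rows from column-oriented channel data. It assumes all channels share the same primary index values and ignores secondary indexes and data attributes; the function name and data layout are this sketch's assumptions, not a statement about store behavior.

```python
# Non-normative sketch: row-oriented frame data from column-oriented channels.
def rows_from_channels(index_values: list, channel_columns: dict) -> list:
    """index_values: primary index values, one per row.
    channel_columns: channel name -> list of values (same length as indexes).
    Returns rows of [primary index, channel values...] in sorted channel order."""
    names = sorted(channel_columns)
    return [[idx] + [channel_columns[name][i] for name in names]
            for i, idx in enumerate(index_values)]
```

Each returned row corresponds to the content a FrameRow carries in the GetFrameResponseRows message, with the channel order fixed by the GetFrameResponseHeader.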
EXAMPLE: If your primary interval is time, this field could be a depth interval on which filtering is
being requested.
Protocol Capabilities
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataFrame",
"name": "GetFrame",
"protocol": "2",
"messageType": "3",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "includeAllChannelSecondaryIndexes", "type": "boolean", "default": false },
{ "name": "requestedInterval", "type":
"Energistics.Etp.v12.Datatypes.Object.IndexInterval" },
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{
"name": "requestedSecondaryIntervals",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.IndexInterval" }, "default": []
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataFrame",
"name": "GetFrameResponseHeader",
"protocol": "2",
"messageType": "4",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "channelUris",
"type": { "type": "array", "items": "string" }
},
{
"name": "indexes",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.IndexMetadataRecord" }
}
]
}
Correlation Id Usage: MUST be set to the messageId of the GetFrame message that this message is a
response to.
Multi-part: True
Sent by: store
Field Name Description Data Type Min Max
frame An array of rows with each row containing the FrameRow 1 *
content defined in FrameRow.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataFrame",
"name": "GetFrameResponseRows",
"protocol": "2",
"messageType": "6",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "frame",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.FrameRow" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataFrame",
"name": "CancelGetFrame",
"protocol": "2",
"messageType": "5",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataFrame",
"name": "GetFrameMetadata",
"protocol": "2",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "includeAllChannelSecondaryIndexes", "type": "boolean", "default": false }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataFrame",
"name": "GetFrameMetadataResponse",
"protocol": "2",
"messageType": "2",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "uri", "type": "string" },
{
"name": "indexes",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.IndexMetadataRecord" }
},
{
"name": "channels",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.FrameChannelMetadataRecord" }
}
]
}
8 Discovery (Protocol 3)
ProtocolID: 3
Defined Roles: store, customer
Customers of a store use Discovery (Protocol 3) to enumerate and understand the contents of a store.
The store represents a database or storage of data object information; Discovery uses Energistics domain
data models to navigate the store as a graph. (For more information on graph concepts and how this
works in ETP, see Section 8.1).
IMPORTANT! The main benefit of the data model as a graph is the ability to efficiently and precisely
identify—often with a single request—the set of data objects that you are interested in. For more
information on graphs and how ETP design leverages them, see Section 8.1.1.
In Discovery, a customer and a store exchange discrete request and response messages that allow the
customer application to request information and "walk the graph" to discover the store's content, which
includes the data objects (nodes on the graph) and the relationships between them (the edges between
the nodes).
Since the previous version of ETP, Discovery (Protocol 3) has been significantly redesigned, with two
main goals:
A single discovery protocol that works consistently across all Energistics domain models.
The ability to reduce the number of messages required (i.e. reduce the back and forth between
endpoints) to get all data objects of interest, thereby reducing traffic on the wire.
Additionally, Discovery (Protocol 3) contains messages for discovering deleted resources. This and other
behavior defined here is required to support workflows for eventual consistency between two
stores.
Other ETP sub-protocols that may be used with Discovery (Protocol 3):
If more than one dataspace exists on an endpoint and a customer needs to navigate dataspaces to
find a particular store, the customer MUST use Dataspaces (Protocol 24) (see Chapter 21).
- When the customer finds the particular dataspace it wants, then the customer MUST use
Discovery (Protocol 3) to discover and enumerate the content of the dataspace. NOTE: ETP
stores MUST always have a default dataspace; for more information, see Section 8.2.2.
If a customer wants to dynamically discover a store's data model (i.e., understand what object types
are possible in the store at a given location whether or not there is any data in the store), without prior
knowledge of the overall data model and graph connectivity, the customer MUST use
SupportedTypes (Protocol 25) (see Chapter 22).
To filter on property values within a data object, use DiscoveryQuery (Protocol 13) (see Chapter 15).
This includes discovery of planned vs. actual objects; for more information, see Section 8.2.2.
NOTE: For some widely used use cases, the GetResources message in Discovery (Protocol 3)
provides a few filters at the message level; see Section 8.2.1.1.
For information about workflows for eventual consistency between stores, see Appendix: Data
Replication and Outage Recovery Workflows.
- Definitions of the key endpoint and protocol capabilities used in this protocol (see Section 8.2.3).
Sample schemas of the messages defined in this protocol (which are identical to the Avro
schemas published with this version of ETP). However, only the schema content in this specification
includes documentation for each field in a schema (see Section 8.3).
Figure 13: Examples of graphs: left image is an undirected graph and right image is a directed graph.
Figure 14: A set of data objects and the relationships among them form a directed multigraph.
Figure 15: Node C is the "target" of the directed link from A to C; node A is the "source" of the directed link
from A to C.
NOTES:
1. The “source” of a relationship between two data objects may be ML-specific.
2. For more information on these topics, see the relevant ML's ETP implementation specification.
Sources are nodes with directed links to C. Targets are nodes with directed links from C.
Nodes A and G are sources of C. Nodes B, D and F are targets of C.
C is the target of these relationships. C is the source of these relationships
Figure 16: More examples of targets and sources relative to node C.
Message Sequence. Summarizes all messages defined by this protocol, identifies main tasks that
can be done with this protocol and describes the response/request pattern for the messages needed
to perform the tasks, including usage of ETP-defined capabilities, error scenarios, and resulting ETP
error codes.
General Requirements. Identifies high-level (across ETP) and protocol-wide general behavior and
rules that must be observed (in addition to behavior specified in Message Sequence), including usage
of ETP-defined endpoint, data object and protocol capabilities, error scenarios, and resulting error
codes.
Capabilities. Lists and defines the ETP-defined parameters most relevant for this sub-protocol. ETP
defines these parameters to set necessary limits to help prevent aberrant behavior (e.g., sending
oversized messages or sending more messages than an endpoint can handle).
8.2.1.1 To discover data objects in a store and optionally the relationships between them:
1. The customer sends a GetResources message (see Section 8.3.1) to the store.
For details of all fields in this message, see the section referenced above. This table summarizes
the fields in the GetResources message and referenced records, and how they may impact the
Discovery operation and message sequence.
Field Description
context REQUIRED. Context defines the parts of the data model that the customer wants the store to
discover as defined in the ContextInfo record, which includes these fields:
Field Description
uri The URI of the dataspace or data object from which to start
discovery. This MUST be the canonical Energistics URI as defined in
Appendix: Energistics Identifiers.
Often, the first GetResources message that a customer sends
contains a canonical dataspace URI, typically the default
dataspace URI: eml:///
A customer may also begin discovery by specifying the
canonical URI for a specific data object.
For more information about URIs used in Discovery (Protocol 3), see
Section 8.2.2.
depth The depth in the graph (data model) from the "starting" URI.
RECOMMENDATION: For maximum efficiency in discovery and
notification operations, understand how the graph is intended to work
and specify an appropriate value here (i.e., for Discovery (Protocol
3) DO NOT simply set depth = 1 and iterate). For more information,
see Section 8.1.1.
dataObjectTypes The types of data objects that you want to discover. This MUST be
the set or a subset of the supportedDataObjects negotiated for the
current ETP session. For more information, see Chapter 5 and
Section 8.2.2.
navigableEdges Edges in a graph represent the relationships between the nodes
(data objects). This field indicates the type of edges (relationships) to
be navigated during the discovery operation. Choices are Primary,
Secondary or Both. For more information about these types of
relationships, see Section 8.1.1.1.2.
Only edges of the specified type are navigated during discovery. Use
of this field helps to exclude unwanted objects being returned in
Discovery.
includeSecondaryTargets If true, the initial candidate set of nodes is expanded with targets of
secondary relationships of nodes in the initial candidate set of nodes.
The edges for these contextual relationships are also included.
NOTE: This flag and includeSecondarySources MUST be applied
"simultaneously" (not in sequence) so the candidate set is expanded
once, not twice. For more information, see Section 8.2.2, row 10.
includeSecondarySources If true, the initial candidate set of nodes is expanded with sources of
contextual relationships of nodes in the initial candidate set of nodes.
The edges for these contextual relationships are also included. For
more information, see Section 8.2.2, row 10.
scope Scope is specified in reference to the URI entered in context (row above). It indicates which direction
in the graph that the operation should proceed (targets or sources) and whether or not to include the
starting point (self) in the results.
For definitions of sources and targets, see Section 8.1.1.1.1.
NOTE: Specifying an appropriate context and scope can significantly reduce the number of
GetResources messages/back-and-forth between endpoints required to discover particular
resources.
Field Description
countObjects If true, the store provides counts of sources and targets for each type of resource identified by the
discovery operation. Default is false.
storeLastWriteFilter Use this to optionally filter the discovery on a date when a data object was last written in a particular
store. The store returns resources whose storeLastWrite date/time is greater than the date/time
specified in this filter field.
Purpose of this field is part of the behavior for eventual consistency between 2 stores.
activeStatusFilter Use this to optionally filter the discovery for data objects that are currently "active" or "inactive" as
defined in ActiveStatusKind.
includeEdges If true, the store returns "edges" (relationships between the nodes). Default is false.
2. The store MUST respond with one or more of the messages listed below (column 1, Message Name),
based on criteria in the table: column 2 (Positive or Error and Required or Optional) and column 3
(Description of conditions and related behavior).
a. For information on the logic of the discovery operation, see Section 8.1.1.1.4.
3. Based on the response to a GetResources message and the specific data the customer is looking
for, the customer MAY iteratively send additional GetResources messages until it discovers the
desired resource(s).
EXAMPLE: If the customer started with the URI eml:///, it might want to use one of the returned
resources and continue discovering from there, so it would send another GetResources message
using the URI of the desired resource.
d. If the customer and store both support alternate URI formats and the store returned them
in the GetResourcesResponse message, then the customer MAY use alternate URIs to
make subsequent requests. (For more information and rules about using alternate URIs,
see the uri field on the Resources record.)
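The iterative pattern described in step 3 can be sketched as a simple graph walk. This is illustrative only: `send` stands in for a real request/response exchange returning resource records, and a real customer should prefer an appropriate depth over depth-1 iteration, as recommended above.

```python
def discover(send, start_uri):
    # Sketch of iterative discovery: issue one GetResources request per
    # node and follow the returned resource URIs. `send` is a
    # hypothetical callable that performs the exchange and returns a
    # list of resource dicts with a "uri" key.
    to_visit = [start_uri]
    seen = set()
    while to_visit:
        uri = to_visit.pop()
        if uri in seen:
            continue
        seen.add(uri)
        request = {
            "context": {"uri": uri, "depth": 1,
                        "dataObjectTypes": [], "navigableEdges": "Primary"},
            "scope": "targets",
            "countObjects": False,
            "storeLastWriteFilter": None,
            "activeStatusFilter": None,
            "includeEdges": False,
        }
        for resource in send(request):
            to_visit.append(resource["uri"])
    return seen
```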
Field Description
dataspaceUri REQUIRED. The URI of the dataspace where the objects were deleted.
NOTE: Tombstones for deleted objects most likely no longer have sufficient history
to put them in a context/scope of the data model, so the discovery MUST be done at
the dataspace level. For more information, see Appendix: Data Replication and
Outage Recovery Workflows.
deleteTimeFilter Optionally, specify a delete time.
1. A customer MUST NOT request deleted resources with a deleteTimeFilter that
is older than the store's ChangeRetentionPeriod endpoint capability. For more
information, see Section 8.2.3.
2. A store MUST deny any request that exceeds its value for
ChangeRetentionPeriod and send error ERETENTION_PERIOD_EXCEEDED
(5001).
dataObjectTypes Optionally, filter for the types of data objects you want.
2. The store MUST respond with one or more of these messages (column 1, Message Name), based on
criteria in the table: column 2 (Positive or Error and Required or Optional) and column 3 (Description
of conditions):
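The retention check described in the deleteTimeFilter rows above can be sketched as follows. The helper name and clock handling are illustrative; the error name/code and the 86,400-second value come from this chapter.

```python
import time

CHANGE_RETENTION_PERIOD = 86_400  # seconds; the store's endpoint capability

def check_delete_time_filter(delete_time_filter_micros, now_s=None):
    # Sketch of the store-side rule: deny a GetDeletedResources request
    # whose deleteTimeFilter is older than the store's
    # ChangeRetentionPeriod, with ERETENTION_PERIOD_EXCEEDED (5001).
    # The filter is a UTC timestamp in microseconds (timestamp-micros).
    now_s = time.time() if now_s is None else now_s
    oldest_allowed_micros = int((now_s - CHANGE_RETENTION_PERIOD) * 1_000_000)
    if delete_time_filter_micros is not None and \
            delete_time_filter_micros < oldest_allowed_micros:
        return ("ERETENTION_PERIOD_EXCEEDED", 5001)
    return None  # request is acceptable
```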
For general information about the types of capabilities and how they may be used, see Section 3.3.
NOTE: Many endpoint capabilities are "universal", used in all or most of the ETP protocols.
For more information, see Section 3.3.2.
Behavior associated with other endpoint capabilities are defined in relevant chapters.
EXAMPLE: The capabilities defined for limiting ETP sessions between 2 endpoints are discussed in
Section 4.3, How a Client Establishes a WebSocket Connection to an ETP Server.
ChangeRetentionPeriod: The minimum time period in seconds that a store retains the canonical
URI of a deleted data object and any change annotations for channels and growing data objects.
Data Type: Long. Value units: <number of seconds>. Default: 86,400. Min: 86,400.
RECOMMENDATION: This period should be as long as is feasible in an implementation. When the
period is shorter, the risk is that additional data will need to be transmitted to recover from
outages, leading to higher initial load on sessions.
Protocol Capabilities
MaxResponseCount: The maximum total count of responses allowed in a complete multipart
message response to a single request.
Data Type: Long. Value units: <count of responses>. Min: 10,000.
[Figure: UML overview of Discovery (Protocol 3) response messages and related datatypes.]
«enumeration» Object::ActiveStatusKind: values Active, Inactive.
Enumeration of possible channel or growing data object statuses. Statuses are mapped from
domain data objects, such as wellbores, channels, and growing data objects.
«Message» GetResourcesResponse (MessageTypeID = 4, MultiPart = True, SenderRole = store)
+ resources: Resource [0..n] (array) = EmptyArray
A store sends this message to a customer in response to the GetResources message; each
GetResourcesResponse message contains an array of Resource records.
«Message» GetResourcesEdgesResponse
+ edges: Edge [1..n] (array)
If the customer sets the includeEdges flag to true in the GetResources message, the store
returns one or more of these messages, which list the edges in the graph, which represent the
relationships between data objects (nodes). This message is returned in addition to the
GetResourcesResponse message(s).
RECOMMENDATION:
1. First return resources (in the GetResourcesResponse message) in breadth-first search order.
2. Send edges AFTER sending resource records for both ends of an edge.
«record» Object::Edge
+ customData: DataValue [0..*] (map) = EmptyMap
+ relationshipKind: RelationshipKind
+ sourceUri: string
+ targetUri: string
Record that contains the information to define an edge between 2 nodes in a graph data model.
Discovery (Protocol 3) works based on the notion of the data model as a graph. For an explanation of this
concept and related definitions, see Section 8.1.1.
Discovery proceeds in three steps:
1) An initial candidate set of nodes and edges is discovered based on the uri and depth fields specified in
ContextInfo and the scope field.
2) This set is optionally expanded to include the secondary edges and nodes for the initial candidate
nodes (based on other flags also specified in ContextInfo).
3) Nodes with types not specified in the dataObjectTypes field (also in ContextInfo) are removed from the
set. Edges not connected to a node in the final set are removed.
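A minimal in-memory sketch of these three steps (the data structures are illustrative, not part of ETP):

```python
def discovery_result(candidates, edges, secondary, data_object_types):
    # Step 1 is assumed done: `candidates` maps uri -> data object type
    # for the initial candidate set; `edges` is a set of
    # (sourceUri, targetUri) pairs discovered with it.
    nodes = dict(candidates)
    kept_edges = set(edges)
    # Step 2: expand once with secondary nodes/edges. The expansion is
    # computed from the initial set only ("simultaneously", per the
    # spec), so the set grows once, not iteratively.
    for uri in candidates:
        extra_nodes, extra_edges = secondary.get(uri, ({}, set()))
        nodes.update(extra_nodes)
        kept_edges |= extra_edges
    # Step 3: remove nodes whose type was not requested, then remove
    # edges no longer connected to kept nodes at both ends.
    if data_object_types:
        nodes = {u: t for u, t in nodes.items() if t in data_object_types}
    kept_edges = {(s, t) for s, t in kept_edges if s in nodes and t in nodes}
    return nodes, kept_edges
```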
Message Type ID: 1
Correlation Id Usage: MUST be ignored and SHOULD be set to 0.
Multi-part: False
Sent by: customer
Field Name Description Data Type Min Max
context As defined in the ContextInfo record, which ContextInfo 1 1
includes the URI of the dataspace or data object
to begin the discovery, what specific types of data
objects are of interest, and how many "levels" of
relationships in the model to discover, among
others.
The URI MUST be a canonical Energistics data
object or dataspace URI; for more information,
see Appendix: Energistics Identifiers.
scope Scope is specified in reference to the URI (which ContextScopeKind 1 1
is entered in the context field). It indicates which
direction in the graph that the operation should
proceed (targets or sources) and whether or not to
include the starting point (self). The enumerated
values to choose from are specified in
ContextScopeKind.
For definitions of targets and sources, see Section
8.1.1.
NOTE: If scope = "self", then depth (in
ContextInfo) is ignored.
storeLastWriteFilter Use this to optionally filter the discovery on a date long 0 1
when the data object was last written in a
particular store. The store returns resources
whose storeLastWrite date/time is GREATER
than the date/time specified in this filter field.
Purpose of this field is part of the behavior for
eventual consistency between 2 stores.
It must be a UTC dateTime value, serialized as a
long, using the Avro logical type timestamp-micros
(microseconds from the Unix Epoch, 1 January
1970 00:00:00.000000 UTC).
countObjects If true, the store provides counts of sources and boolean 1 1
targets for each resource identified by Discovery.
Default is false.
activeStatusFilter Use this to optionally filter the discovery for data ActiveStatusKind 0 1
objects that are currently "active" or "inactive" as
defined in ActiveStatusKind.
This field is for data objects that have a notion of
being active or inactive. Each ML defines which
data objects this applies to and how it applies to
them. Examples include WITSML channel data
objects and growing data objects, which have a
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Discovery",
"name": "GetResources",
"protocol": "3",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "context", "type": "Energistics.Etp.v12.Datatypes.Object.ContextInfo" },
{ "name": "scope", "type": "Energistics.Etp.v12.Datatypes.Object.ContextScopeKind" },
{ "name": "countObjects", "type": "boolean", "default": false },
{ "name": "storeLastWriteFilter", "type": ["null", "long"] },
{ "name": "activeStatusFilter", "type": ["null",
"Energistics.Etp.v12.Datatypes.Object.ActiveStatusKind"] },
{ "name": "includeEdges", "type": "boolean", "default": false }
]
}
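For illustration, a GetResources body with a storeLastWriteFilter serialized as the Avro logical type timestamp-micros might look like this. The top-level field names come from the schema above; the ContextInfo layout and the enum spellings for scope and navigableEdges are assumptions.

```python
from datetime import datetime, timezone

def to_timestamp_micros(dt):
    # Serialize a UTC datetime as microseconds since the Unix epoch
    # (Avro logical type timestamp-micros), as storeLastWriteFilter requires.
    return int(dt.timestamp() * 1_000_000)

# Illustrative GetResources body.
get_resources = {
    "context": {
        "uri": "eml:///",                # the default dataspace URI
        "depth": 2,
        "dataObjectTypes": [],           # empty = all negotiated types
        "navigableEdges": "Primary",
        "includeSecondaryTargets": False,
        "includeSecondarySources": False,
    },
    "scope": "targets",
    "countObjects": False,
    "storeLastWriteFilter": to_timestamp_micros(
        datetime(2021, 9, 1, tzinfo=timezone.utc)),
    "activeStatusFilter": None,
    "includeEdges": False,
}
```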
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Discovery",
"name": "GetResourcesResponse",
"protocol": "3",
"messageType": "4",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "resources",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.Resource" }, "default": []
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Discovery",
"name": "GetResourcesEdgesResponse",
"protocol": "3",
"messageType": "7",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "edges",
"type": { "type": "array", "items": "Energistics.Etp.v12.Datatypes.Object.Edge" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Discovery",
"name": "GetDeletedResources",
"protocol": "3",
"messageType": "5",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "dataspaceUri", "type": "string" },
{ "name": "deleteTimeFilter", "type": ["null", "long"] },
{
"name": "dataObjectTypes",
"type": { "type": "array", "items": "string" }, "default": []
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Discovery",
"name": "GetDeletedResourcesResponse",
"protocol": "3",
"messageType": "6",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "deletedResources",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.DeletedResource" }, "default": []
}
]
}
9 Store (Protocol 4)
ProtocolID: 4
Defined Roles: store, customer
Use Store (Protocol 4) to get, put and delete ALL data objects defined by Energistics domain data
models—which includes data objects that "contain" other data objects (such as channel sets and logs),
channels, and "growing data objects" (such as WITSML trajectories, mud logs and others). NOTE: The
ability to handle some operations on growing data objects in this protocol is new behavior for ETP v1.2.
For the Energistics' definition of data object, information on the kinds of data objects, and information on
identification, see Appendix: Energistics Identifiers (Section 25.1).
Other ETP sub-protocols that may be used with Store (Protocol 4):
To subscribe to and receive notifications from the store about operations/events in the store resulting
from operations using Store (Protocol 4), use StoreNotification (Protocol 5). That is, this chapter
explains events that trigger notifications in StoreNotification (Protocol 5); however, the store is only
required to send notifications if the customer is subscribed to notifications for the appropriate
context. For more information on Protocol 5, see Chapter 10.
To UPDATE a growing data object—including the "header" or any of the parts—use GrowingObject
(Protocol 6) (Chapter 11).
To query on fields in a data object for Store get operations, use StoreQuery (Protocol 14)
(Chapter 16).
For information on streaming channel data or other operations specific to channels, see:
ChannelStreaming (Protocol 1), Chapter 6
ChannelDataFrame (Protocol 2), Chapter 7
ChannelSubscribe (Protocol 21), Chapter 19
- ChannelDataLoad (Protocol 22), Chapter 20
EXAMPLE:
An Energistics data object MAY be included in one or more container data objects.
One of the best-known examples comes from WITSML where:
One or more Channel data objects can be contained in one or more ChannelSet data objects. In this
example, the Channels are the "contained" data objects and the Channel Set is the "container".
One or more ChannelSet data objects can be contained in one or more Log data objects. In this
example, the ChannelSets are the "contained" data objects and the Log is the "container".
NOTE: Individual container/contained data objects are listed in the relevant ML's ETP implementation
specification (which is a companion document to this ETP Specification). For example, Channel,
Channel Set and other contained data objects defined in WITSML are listed in the ETP v1.2 for
WITSML v2.0 Implementation Specification.
For more details about the relationships between Channel, Channel Set and Log data objects, and
this contained object concept, see
http://docs.energistics.org/#WITSML/WITSML_TOPICS/WITSML-000-050-0-C-sv2000.html.
The inherent design of these container/contained data objects requires some additional handling for store
operations and related notifications. ETP defines a data object capability,
MaxContainedDataObjectCount, which allows an endpoint to limit the number of contained data objects in
a container data object.
9.1.3.2 Pruning
Another important concept is the notion of "pruning," which is deletion of contained data objects when an
operation to the container would result in "orphan" contained data objects (that is, the contained data
object is no longer joined to any container at all).
ETP specifies a data object capability, OrphanedChildrenPrunedOnDelete, which allows an endpoint to
specify for each type of data object, whether or not it allows pruning operations. In addition, Store
(Protocol 4) request messages that might result in orphaned contained data objects have a field named
pruneContainedObjects, which allows the customer to request that orphans be pruned. Both
conditions (the capability and the flag) must be true for pruning to occur.
For more information on capabilities for this protocol, see Section 9.2.3.
For related behavior and all details of operations on container/contained data objects, see
Section 9.2.2.
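The two-condition pruning rule can be stated compactly. This is a sketch only; the parameter names are illustrative, while the capability and flag names come from the text above.

```python
def prune_decision(capability: bool, request_flag: bool,
                   remaining_containers: int) -> bool:
    # A contained data object may be pruned only when it would otherwise
    # be orphaned (no remaining container) AND the store's
    # OrphanedChildrenPrunedOnDelete capability for its type is true AND
    # the customer set pruneContainedObjects on the request.
    orphaned = remaining_containers == 0
    return orphaned and capability and request_flag
```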
Capabilities. Lists and defines the ETP-defined parameters most relevant for this sub-protocol. ETP
defines these parameters to set necessary limits to help prevent aberrant behavior (e.g., sending
oversized messages or sending more messages than an endpoint can handle).
2. For the URIs it successfully returns data objects for, the store MUST send one or more
GetDataObjectsResponse map response messages (Section 9.3.6) where the map values are
DataObject records with the data object URIs and data.
a. For more information on how map response messages work, see Section 3.7.3.
b. The store MUST return all data objects that it can that meet the criteria of the request.
i. For the definition of data object, see Appendix: Energistics Identifiers.
c. The store MUST observe limits specified by its own and the customer's values for the
MaxDataObjectSize capability. For more information about how this capability works and required
behavior, see Section 3.3.2.4.
d. If a data object is too large to fit in a WebSocket message, the store MAY subdivide the object
and send it in "chunks" using the Chunk message. For more information on how to use Chunk
messages, see Section 3.7.3.2.
e. For more information on how GetDataObjects works for container/contained data objects, see
Section 9.2.2, Row 15.
i. For definitions of container/contained objects, see Section 9.1.3.
f. For more information on how GetDataObjects works for channel data objects, see Section 9.2.2,
Row 16.
g. For more information on how GetDataObjects works for growing data objects, see Section 9.2.2,
Row 17.
3. For the URIs it does NOT successfully return data objects for, the store MUST send one or more map
ProtocolException messages where values in the errors field (a map) are appropriate errors, such
as ENOT_FOUND (11).
a. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
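The chunking idea from step 2.d can be sketched as plain byte-splitting; Section 3.7.3.2 defines the actual Chunk message rules, and `max_payload` here stands in for the space available in a WebSocket message.

```python
def chunk_data_object(blob: bytes, max_payload: int):
    # Subdivide an oversized data object's bytes into Chunk-sized
    # payloads. The receiver reassembles by concatenating payloads
    # in order.
    return [blob[i:i + max_payload] for i in range(0, len(blob), max_payload)]

parts = chunk_data_object(b"x" * 300_000, 128 * 1024)
reassembled = b"".join(parts)
```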
2. For the data objects it successfully puts (add to/replace in the store), the store MUST send one or
more PutDataObjectsResponse map response messages (Section 9.3.3).
a. For more information on how map response messages work, see Section 3.7.3.
b. For data objects that already exist in the store, the put operation MUST ALWAYS be a complete
replacement of the object.
i. The store MUST also update the storeLastWrite field, which is only on the Resource for the
data object (not on the data object itself). For more information about the storeLastWrite field,
see Section 3.12.5.2.
ii. The store MUST NOT set or change any elements on the data object that are not “store
managed”. The Creation or LastUpdate elements on data objects are NOT “store managed”.
The store MUST NOT set or change the value of these elements.
c. For data objects that do not yet exist in the store, the store MUST add them.
i. The store MUST also update the storeCreated and the storeLastWrite fields, which are only
on the Resource for the data object (not the data object itself).
1. The store MUST NOT set or change any elements on the data object that are not “store
managed”. The Creation or LastUpdate elements on data objects are NOT “store
managed”. The store MUST NOT set or change the value of these elements.
ii. If the data object being added is a Channel data object, growing data object, or other data
object that can be "active", the store MUST also set its activeStatus flag to "inactive" (the
default when a new data object is added to a store).
iii. For additional information for growing data objects and container/contained data objects, see
Section 9.2.2.
d. A store MAY schema-validate an object, but it is NOT REQUIRED to do so.
e. NOTIFICATION BEHAVIOR: The store MUST send an ObjectChanged notification message
with a type (objectChangeKind) of "insert" or "update".
i. A store MUST send a notification for only the most recent effective state of a data object. So
if multiple insert or update changes happened to a data object since the most recent insert or
update notifications were sent for the data object, the store MAY send only one notification. If
the object was inserted since the most recent insert or update notification was sent, the store
MUST send an insert notification with the timestamp of the most recent insert or update
change. Otherwise, the store MUST send an update notification with the timestamp of the
most recent update.
ii. Notifications are sent in StoreNotification (Protocol 5). For more information on rules for
populating/sending notifications and why notification behavior is specified here, see Section
9.2.2.
f. The store MUST observe limits specified by its values for the MaxDataObjectSize capability. For
more information about how this capability works and required behavior, see Section 3.3.2.4.
g. If the PutDataObjects message includes container objects, the PutDataObjectsResponse
message MUST contain additional information as specified in the PutResponse record (Section
23.34.9) in the message.
i. For more information on how PutDataObjects works for container/contained data objects, see
Section 9.2.2.
h. For more information on how PutDataObjects works for growing data objects, see Section 9.2.2.
3. For the data objects it does NOT successfully put, the store MUST send one or more map
ProtocolException messages where values in the errors field (a map) are appropriate errors, such
as EREQUEST_DENIED (6).
a. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
b. The store MAY schema validate a data object but is not required to.
i. A store MAY reject any document that is not schema valid and send error
EINVALID_OBJECT (14).
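The notification-consolidation rule in step 2.e.i can be sketched as follows (names are illustrative; the objectChangeKind values come from the text above):

```python
def consolidate_notification(inserted_since_last_notification: bool,
                             most_recent_change_time: int):
    # Send at most one notification for the most recent effective state
    # of a data object: "insert" if the object was inserted since the
    # last insert/update notification, otherwise "update". Either way,
    # the notification carries the timestamp of the most recent change.
    kind = "insert" if inserted_since_last_notification else "update"
    return {"objectChangeKind": kind, "changeTime": most_recent_change_time}
```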
as ENOT_FOUND (11).
a. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
2. Capabilities-related behavior
1. Relevant endpoint, data object, and/or protocol capabilities MUST be specified
when the ETP session is established (see Chapter 5) and MUST be used/honored
as defined in the relevant ETP sub-protocol.
2. For an explanation of endpoint, data object, and protocol capabilities, see Section
3.3.
a. For the list of global capabilities and related behavior, see Section 3.3.2.
3. Section 9.2.3 identifies the capabilities most relevant to this ETP sub-protocol. If
one or more of the defined capabilities is presented by an endpoint, the other
endpoint in the ETP session MUST accept it (them) and process the value, and
apply them to the behavior as specified in this document.
a. Additional details for how to use the capabilities are included below in this
table and in Section 9.2.1 Store: Message Sequence.
4. Plural messages (which includes maps)
1. This protocol uses plural messages. For detailed rules on handling plural
messages (including ProtocolException handling), see Section 3.7.3.
5. Requirements for use of PWLS
Practical Well Log Standard (PWLS) is an industry standard stewarded by Energistics.
It provides an industry-agreed list of logging tool classes and a hierarchy of
measurement properties and applies all known mnemonics to them. For more
information, see Section 3.12.7.
1. If an ETP store supports the WITSML Channel data object, then it MUST support
PropertyKind data objects (which are an implementation of PWLS).
2. Endpoints MUST be able to discover property kind data objects (to determine
available property kinds) and use the returned property kinds in relevant
Discovery, Store and Query operations.
6. For data objects that exceed an endpoint's WebSocket message size, use the Chunk message.
1. Some messages in this protocol allow or require a data object to be sent with the message.
If the size of the data object (bytes) is too large for the WebSocket message size (which for
some WebSocket libraries can be quite small, e.g., 128 KB), an endpoint MAY subdivide the
data object and send it in "chunks" using the Chunk message defined in this protocol. For
information on how to handle these binary large objects (BLOBs), see Section 3.7.3.2.
2. NOTE: Use of Chunk messages DOES NOT address an endpoint's
MaxDataObjectSize limit.
3. The specific messages in this protocol that may use Chunk messages are:
a. GetDataObjectsResponse
b. PutDataObjects
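The chunking behavior described in this row can be sketched as follows. This is an illustrative Python sketch only; the dict shape standing in for the Chunk record (blobId, data, final) mirrors the fields of the Chunk schema, but the function names and structure are assumptions, not part of the published Avro schemas.

```python
import uuid

def split_into_chunks(data: bytes, max_chunk_size: int):
    """Split a data object that exceeds the WebSocket message size into
    Chunk message bodies; all chunks share one blobId, last has final=True."""
    blob_id = uuid.uuid4()
    chunks = []
    for offset in range(0, len(data), max_chunk_size):
        piece = data[offset:offset + max_chunk_size]
        chunks.append({
            "blobId": blob_id,                               # same for every chunk
            "data": piece,
            "final": offset + max_chunk_size >= len(data),   # marks the last chunk
        })
    return chunks

def reassemble_chunks(chunks):
    """Receiver side: concatenate chunk data in arrival order."""
    assert all(c["blobId"] == chunks[0]["blobId"] for c in chunks)
    assert chunks[-1]["final"]
    return b"".join(c["data"] for c in chunks)
```

Note that, per the reminder above, chunking only works around the WebSocket message-size limit; the reassembled object must still satisfy MaxDataObjectSize.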
7. Notifications
1. This chapter explains events (operations) in Store (Protocol 4) that trigger the
store to send notifications, which the store sends using StoreNotification (Protocol
5). However, statements of NOTIFICATION BEHAVIOR are here in this chapter,
in the context of the detailed explanation of the behavior that triggers the
notification.
2. Notification behavior is described here using MUST. However, the store MUST
ONLY send notifications IF AND ONLY IF there is a customer subscribed to
notifications for an appropriate context (i.e., a context that includes the data
object) and the store MUST ONLY send notifications to those customers that are
subscribed to appropriate contexts.
a. For more information on data object notifications, see Chapter 10
StoreNotification (Protocol 5).
b. For information on notifications for parts in growing data objects, see
Chapter 12 GrowingObjectNotification (Protocol 7).
8. Store Behavior: Updates to storeCreated and storeLastWrite fields
1. Each Resource in ETP has these two fields: storeCreated and storeLastWrite.
a. These fields appear ONLY on the Resource, NOT on the data object, and are
used in workflows for eventual consistency between 2 stores.
b. For more information about these fields, see Section 3.12.5.1 and their
definitions/required format in Resource (see Section 23.34.11).
2. For operations in Store (Protocol 4) that ADD a new data object (e.g.
PutDataObjects), the store MUST do both of these:
a. Set the storeCreated field to the time that the data object was added in the
store.
b. Set the storeLastWrite to the same time as storeCreated.
3. For operations to data objects that may occur in another protocol that change any
data for the data object (e.g., GrowingObject (Protocol 6), which may result in
changes to the growing data object header or its parts, or ChannelSubscribe
(Protocol 21) where data may be appended to a channel), the store MUST update
the storeLastWrite field with the time of the change in the store.
a. Currently other protocols that trigger updates to these fields include:
i. GrowingObject (Protocol 6); see Chapter 11.
ii. ChannelStreaming (Protocol 1); see Chapter 6.
iii. ChannelDataLoad (Protocol 22); see Chapter 20.
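As an illustration of the rules in this row, a store might maintain these fields as follows. This is a minimal Python sketch; the resource dict and the two handler names are assumptions about an implementation, not spec-defined APIs.

```python
from datetime import datetime, timezone

def on_object_added(resource: dict) -> None:
    """ADD operations (e.g., PutDataObjects creating a new object):
    set storeCreated, and set storeLastWrite to the same time."""
    now = datetime.now(timezone.utc)
    resource["storeCreated"] = now
    resource["storeLastWrite"] = now

def on_object_changed(resource: dict) -> None:
    """Any later change from any protocol (e.g., GrowingObject (Protocol 6),
    ChannelSubscribe (Protocol 21) appends): update storeLastWrite only;
    storeCreated never changes after the add."""
    resource["storeLastWrite"] = datetime.now(timezone.utc)
```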
17. Get data objects: Additional rules for growing data objects
1. You MUST follow the rules specified in Section 9.2.1.1 with these additional requirements for
growing data objects.
2. The store MUST return the full growing data object, including its parts, as defined
by the growing data object schema. (NOTE: The parts of growing data objects are
not themselves Energistics data objects. As such, Store (Protocol 4) does not
operate directly on parts. Store (Protocol 4) only handles parts of growing data
objects when they are included within the body of the growing data object. To
operate directly on parts, use GrowingObject (Protocol 6).)
a. The store MUST observe limits specified by its own and the customer’s values
for the MaxPartSize capability. For more information about how this
capability works and required behavior, see Section 3.3.2.5.
b. For more information about specific growing data objects, consult the
relevant ML documentation and companion ETP implementation
specification.
3. When returning a growing data object, any store-managed elements or attributes
in the growing data object header that are populated with information from the
growing data object’s index metadata MUST be populated consistently with the
index metadata.
a. EXAMPLE: The MdMn and MdMx elements on a WITSML 2.0 Trajectory
MUST have the same unit and depth datum as the Md elements on the
trajectory’s stations.
18. Put (insert or update) data objects: General rules
1. For the general requirements and message sequence for creating/inserting or updating data
objects, see Section 9.2.1.2.
19. Put/update data objects: Additional rules for container/contained data objects
1. You MUST follow the general rules for put operations in Section 9.2.1.2, the general rules for
container/contained objects in Row 12, and the additional requirements listed in this row.
a. If the customer wants to prune orphaned contained data objects, it MUST set
the pruneContainedObjects field in PutDataObjects message to true. For all
details on how prune operations work, see Row 12 above.
b. Rule 2 in Section 9.2.1.2 MUST be applied to the container object only—
NOT to the contained objects. EXAMPLE: If the customer request is to put a
ChannelSet with 6 Channels, the store MUST replace ONLY the ChannelSet
(NOT each of the Channels in the set).
c. The notification behavior in Rule 2.e MUST be applied for the container
object only. NOTE: Additional items below in this row explain additional
notifications that MAY be sent for contained objects.
2. In a put request, a customer MUST limit the count of data objects contained in
each container data object to the store's value for the MaxContainedDataObjectCount
data object capability for that specific container data object type.
a. For any request that exceeds the store's limit, the store MUST deny the
request and send error ELIMIT_EXCEEDED (12).
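A store-side guard for this limit might look like the following sketch. The capability lookup is modeled loosely as a dict keyed by container type; only the error code ELIMIT_EXCEEDED (12) comes from the spec, and the function name is an assumption.

```python
from typing import Optional

ELIMIT_EXCEEDED = 12  # error code defined by ETP (see Chapter 24)

def check_contained_count(container_type: str, contained_count: int,
                          limits: dict) -> Optional[int]:
    """Return ELIMIT_EXCEEDED if the put request exceeds the store's
    MaxContainedDataObjectCount for this container type, else None."""
    limit = limits.get(container_type)
    if limit is not None and contained_count > limit:
        return ELIMIT_EXCEEDED   # store denies the request with this error
    return None                  # request may proceed
```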
22. Put/update data objects: Additional rules for data objects that are BOTH growing data objects AND container data objects
1. For data objects that are both growing data objects AND container data objects (such as the
WITSML v2.0 InterpretedGeology object), the additional rules for both growing data objects
and container data objects apply, but the growing data object rules take precedence.
a. When creating a new growing data object that contains previously existing
parts, the previously existing parts MUST be both linked AND updated.
i. NOTIFICATION BEHAVIOR (reminder: Row 7): In StoreNotification
(Protocol 5), for previously existing parts that are linked, the store
MUST send an ObjectChanged notification message, with an
ObjectChangeKind of "joined".
b. When creating a new growing data object, a customer MUST also limit the
count of parts in the growing data object to the store's relevant value for
MaxContainedDataObjectCount.
c. EXAMPLE: Putting a new InterpretedGeology data object that contains a
combination of new and existing InterpretedGeologyInterval contained data
objects/parts WILL SUCCEED.
i. The new InterpretedGeology data object will be created in the store;
any new InterpretedGeologyInterval data objects will be created in the
store; any previously existing InterpretedGeologyInterval data objects
that are included in the new InterpretedGeology data object will be
updated with the new content; the InterpretedGeology data object will
be linked with all intervals it contains; and an ObjectChanged
notification message, with an ObjectChangeKind of "joined" will be
sent for the previously existing intervals.
d. EXAMPLE: Putting an existing InterpretedGeology data object with a
different set of InterpretedGeologyInterval contained data objects/parts WILL
FAIL because Store (Protocol 4) does not support updates to growing data
objects.
i. Because the put fails, no changes will be made to the
InterpretedGeologyInterval data objects/parts contained in the
InterpretedGeology data object, and no linking or unlinking will happen.
23. Put/update data objects: Additional rules for data objects that are BOTH growing data object parts AND contained data objects
1. Observe these rules for data objects that are both growing data object parts AND contained
data objects (such as the WITSML v2.0 InterpretedGeologyInterval object):
a. You MUST follow the general rules for put operations in Section 9.2.1.2.
b. NOTIFICATION BEHAVIOR: If the data objects are included in any growing
data objects, in GrowingObjectNotification (Protocol 7), the store MUST send
a PartsChanged notification message.
24. Delete data objects: General rules (including growing data objects)
1. For the general requirements and message sequence for deleting one or more data objects,
see Section 9.2.1.3.
25. Delete data objects: Additional rules for container and contained data objects
1. You MUST follow the general rules for delete operations in Section 9.2.1.3, the general rules
for container/contained objects in Row 12, and the additional requirements listed in this row.
a. Step 2 in Section 9.2.1.3 applies to the container object only (not the contained objects).
For contained objects, observe these rules for a delete operation on a container data object:
2. If a customer wants to prune orphan contained data objects, it MUST set the
pruneContainedObjects flag to true in the DeleteDataObjects message.
a. The store MUST delete orphan contained objects as described in Row 12.
NOTIFICATION BEHAVIOR (StoreNotification (Protocol 5) (reminder: Row
ChangeRetentionPeriod: The minimum time period, in seconds, that a store retains the
canonical URI of a deleted data object and any change annotations for channels and
growing data objects.
RECOMMENDATION: This period should be as long as is feasible in an implementation.
When the period is shorter, the risk is that additional data will need to be transmitted to
recover from outages, leading to higher initial load on sessions.
Data type: long. Value units: <number of seconds>. Default: 86,400. Min: 86,400.
MaxPartSize: The maximum size in bytes of each data object part allowed in a
standalone message or a complete multipart message. Size in bytes is the total size in
bytes of the uncompressed string representation of the data object part in the format in
which it is sent or received.
Applies to get and put operations of growing data objects.
Data type: long. Value units: <number of bytes>. Min: 10,000.
Data Object Capabilities
(For definitions of all data object capabilities, see Section 3.3.4)
Protocol Capabilities
MaxDataObjectSize: (This is also an endpoint capability and a data object capability.)
The maximum size in bytes of a data object allowed in a complete multipart message.
Size in bytes is the size in bytes of the uncompressed string representation of the data
object in the format in which it is sent or received.
Data type: long. Value units: <number of bytes>. Min: 100,000.
This capability can be set for an endpoint, a protocol, and/or a data object. If set for all
three, here is how they generally work:
An object-specific value overrides an endpoint-specific value.
A protocol-specific value can further lower (but NOT raise) the limit for the protocol.
EXAMPLE: A store may wish to generally support sending and receiving any data object
that is one megabyte or less, with the exceptions of Wells that are 100 kilobytes or less
and Attachments that are 5 megabytes or less. A store may further wish to limit the size
of any data object sent as part of a notification in StoreNotification (Protocol 5) to 256
kilobytes.
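One possible reading of this precedence can be sketched in a few lines. This is an illustrative Python sketch only; modeling "not set" as None, and the function name, are assumptions.

```python
from typing import Optional

def effective_max_data_object_size(endpoint: Optional[int],
                                   protocol: Optional[int],
                                   data_object: Optional[int]) -> Optional[int]:
    """Resolve the effective MaxDataObjectSize for one data object type
    within one protocol. An object-specific value overrides the endpoint
    value; a protocol-specific value can only lower the result, not raise it."""
    base = data_object if data_object is not None else endpoint
    if protocol is not None:
        base = protocol if base is None else min(base, protocol)
    return base
```

With the EXAMPLE above: a Well (object limit 100,000) under the endpoint limit of 1,000,000 resolves to 100,000, and any object sent in StoreNotification (protocol limit 256,000) is capped at 256,000.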
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Store",
"name": "GetDataObjects",
"protocol": "4",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "uris",
"type": { "type": "map", "values": "string" }
},
{ "name": "format", "type": "string", "default": "xml" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Store",
"name": "PutDataObjects",
"protocol": "4",
"messageType": "2",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "dataObjects",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.DataObject" }
},
{ "name": "pruneContainedObjects", "type": "boolean", "default": false }
]
}
Multi-part: True
Sent by: store
Field: success (Data Type: PutResponse; Min: 1; Max: *)
For non-container data objects, the map value MUST be an empty PutResponse record
(Section 23.34.9), which has all arrays set to empty arrays.
For contained data objects, the map value MUST be a PutResponse record with the
arrays populated appropriately.
For more information about container/contained data objects, see Section 9.2.2, Row 19.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Store",
"name": "PutDataObjectsResponse",
"protocol": "4",
"messageType": "9",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.PutResponse" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Store",
"name": "DeleteDataObjects",
"protocol": "4",
"messageType": "3",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "uris",
"type": { "type": "map", "values": "string" }
},
{ "name": "pruneContainedObjects", "type": "boolean", "default": false }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Store",
"name": "DeleteDataObjectsResponse",
"protocol": "4",
"messageType": "10",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "deletedUris",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.ArrayOfString" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Store",
"name": "GetDataObjectsResponse",
"protocol": "4",
"messageType": "4",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "dataObjects",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.DataObject" }, "default": {}
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Store",
"name": "Chunk",
"protocol": "4",
"messageType": "8",
"senderRole": "store,customer",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "blobId", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{ "name": "data", "type": "bytes" },
{ "name": "final", "type": "boolean" }
]
}
10 StoreNotification (Protocol 5)
ProtocolID: 5
Defined Roles: store, customer
StoreNotification (Protocol 5) allows store customers to subscribe to and receive notifications of changes
to data objects in the store, in an event-driven manner, from events (operations) that occur in Store
(Protocol 4). Customers can choose to receive notifications with the changed data object OR only
notifications of change—then based on the change, can decide whether or not to get the full data object.
Customers subscribe to changes within a given context (defined, in part, by a URI) in the store
(EXAMPLE: A context might be all changes that occur in a specific well). The store provides notifications
to the customer—only while the session is valid—of additions, changes, and deletions in the specified
context. Additionally, this protocol contains a message for so-called "unsolicited" subscriptions
(subscriptions that a store may automatically create for a customer) to support new workflows.
NOTE: Notification messages are a "fire and forget" operation. They are a reliable way for a store to
inform a customer that data has changed, which is useful for typical customer applications, such as
visualizations, calculations and data synchronization tools. However, notification messages are not a
reliable way for the store to ensure the customer successfully receives and persists the changed data. If a
data store needs to ensure that another data store is eventually consistent with it, the preferred workflow
is for the data store to instead act as a store customer using the push workflow to deliver data to the other
data store as described in Appendix: Data Replication and Outage Recovery Workflows (Section
26.4).
Other ETP sub-protocols that may be used with StoreNotification (Protocol 5):
The events that trigger notifications in this protocol happen in Store (Protocol 4). Some of the details
of operations that trigger notifications are explained in Chapter 9.
NOTE: Use of the PutGrowingDataObjectsHeader message in GrowingObject (Protocol 6)
creates or updates the header information of a data object, so operations using that message
trigger notifications in this protocol, StoreNotification (Protocol 5).
To receive notifications for changes to the parts of one growing data object, ETP has similar
protocols: GrowingObject (Protocol 6) where the event/operations occur and
GrowingObjectNotification (Protocol 7), where customers can subscribe to receive notifications about
operations on/to the parts within the context of one growing data object. For information on operations
and notifications related to parts of a growing data object, see Chapters 11 and 12.
For data objects that are both growing data objects AND container data objects (i.e., where the parts
are themselves also data objects), other operations in GrowingObject (Protocol 6) will also trigger
StoreNotification (Protocol 5) messages.
The sample schemas of the messages defined in this protocol are identical to the Avro schemas
published with this version of ETP. However, only the schema content in this specification includes
documentation for each field in a schema (see Section 10.3).
10.1.1 Definitions
This section defines terms for this protocol.
Term Definition
Subscription We're all familiar with the concept of a video or audio streaming subscription
or a magazine subscription, which is the action of making or agreeing to make
an advance payment in order to receive or participate in something.
In the context of ETP, a subscription is an agreement to receive notifications of
events or operations, e.g., adds, deletes or updates of data objects that
happen in Store (Protocol 4). Subscriptions are created and notifications sent
using StoreNotification (Protocol 5).
NOTE: Subscriptions work similarly for parts in growing data objects. That is,
subscriptions are created and notifications about changes to parts in a growing
data object are sent in GrowingObjectNotification (Protocol 7) for changes that
happen as result of actions in GrowingObject (Protocol 6).
Subscriptions can be established in these main ways:
1. A customer can create one or more subscriptions using the
SubscribeNotifications message.
2. A store can automatically create a subscription for a customer. This is
referred to as an “unsolicited subscription”. See the row below.
Unsolicited subscription Subscriptions created by the store on behalf of the customer, usually based on
business agreements or other information exchanged out of band of an ETP
session.
EXAMPLE: In some newer workflows, operators want to automatically create
subscriptions for contracted data providers, based on business agreements
(contracts) executed outside of ETP. When a contracted data provider
connects to the operator's data store, the data provider will automatically be
subscribed to notifications for an appropriate context, e.g., a well, wellbore,
etc. as agreed in a contract. When the data provider connects to the operator's
system, it automatically receives UnsolicitedStoreNotifications messages.
For more information about these workflows in the drilling domain, see the
WITSML v2.0 for ETP v1.2 Implementation Specification.
The main tasks in this protocol are subscribing to the appropriate objects or contexts (sets of related
objects) in a store to receive the desired notifications and canceling/stopping those subscriptions. Once a
subscription has been created, a store MUST send appropriate notifications based on events in Store
(Protocol 4) and put header operations in GrowingObject (Protocol 6).
attempt to take corrective action but the store MUST NOT terminate the associated subscriptions.
1. ETP-wide behavior that MUST be observed in all protocols
1. Requirements for general behavior consistent across all of ETP are defined in Chapter 3.
This behavior includes information such as: all details of message handling (such as
message headers, handling compression, use of message IDs and correlation IDs,
requirements for plural and multipart message patterns), use of acknowledgements,
general rules for sending ProtocolException messages, URI encoding, serialization and more.
RECOMMENDATION: Read Chapter 3 first.
2. For information about Energistics identifiers and prescribed ETP URI
formats, see Appendix: Energistics Identifiers.
a. In MOST cases, endpoints performing operations in this protocol MUST
use the canonical Energistics URI. For more information, see Section
3.7.4.
3. For the complete list of error codes defined by ETP, see Chapter 24.
2. Capabilities-related behavior
1. Relevant endpoint, data object, and/or protocol capabilities MUST be specified when the
ETP session is established (see Chapter 5) and MUST be used/honored as defined in the
relevant ETP sub-protocol.
2. For an explanation of endpoint, data object, and protocol capabilities, see Section 3.3.
a. For the list of global capabilities and related behavior, see Section 3.3.2.
3. Section 10.2.3 identifies the capabilities most relevant to this ETP sub-
protocol. Additional details for how to use the protocol capabilities are
included below in this table and in Section 10.2.1 StoreNotification:
Message Sequence.
3. Message sequence (see Section 10.2.1)
1. The Message Sequence section above (Section 10.2.1) describes requirements for the
main tasks listed there and also defines required behavior.
4. Plural messages (which include maps)
1. This protocol uses plural messages. For detailed rules on handling plural messages
(including ProtocolException handling), see Section 3.7.3.
5. For data objects that exceed an endpoint's WebSocket message size, use the Chunk message.
1. Some messages in this protocol allow or require a data object to be sent with the message.
If the size of the data object (bytes) is too large for the WebSocket message size (which for
some WebSocket libraries can be quite small, e.g., 128 KB), an endpoint MAY subdivide the
data object and send it in "chunks" using the Chunk message defined in this protocol. For
information on how to handle these binary large objects (BLOBs), see Section 3.7.3.2.
2. NOTE: Use of Chunk messages DOES NOT address an endpoint's
MaxDataObjectSize limit.
3. The specific messages in this protocol that may use Chunk messages are:
a. ObjectChanged
6. Customers must be able to receive and consume data objects.
1. All customer role applications MUST implement support for receiving and consuming
notifications that include the data objects (that is, all data for the object in a format (e.g.,
XML or JSON) negotiated when establishing the session).
7. Unsolicited subscriptions
1. The store may automatically configure unsolicited subscriptions to include
the data objects (i.e., the includeObjectData on the unsolicited
SubscriptionInfo record may be true). If the customer application does not
want the data, it can do one of the following:
a. Unsubscribe and stop receiving the notifications.
b. Simply ignore the data payloads and get the data manually.
c. Unsubscribe from the unsolicited subscription and then explicitly create
the subscription (see Section 10.2.1.1) and set includeObjectData to
false.
8. All behaviors defined in this table assume that a valid customer subscription for the correct context has been created.
1. We are aiming to state these requirements and behaviors as clearly and concisely as
possible. All required behaviors ("MUST" statements) described in the rows below assume:
a. A valid subscription has been created as described in Section 10.2.1.1.
b. References to "data object(s)" mean "data object(s) within the context
specified in the subscription".
c. EXAMPLE: Below in this table where it states "When a store performs a
PutDataObjects operation, it MUST send an ObjectChanged
message"; this means, if the customer has a subscription whose scope
and context includes the data object that was put, then the store must
send the ObjectChanged message to the subscribed customer.
2. A valid subscription is one where all of the following conditions are met:
a. SubscriptionInfo.context is a valid:
i. ContextInfo.uri references a data object or dataspace that exists
and is available in the store (i.e., the store will return it if requested
using Store (Protocol 4) or Dataspaces (Protocol 24)).
ii. ContextInfo.dataObjectTypes is empty or only includes data
object types negotiated when establishing the session.
b. SubscriptionInfo.requestUuid is not already in use by another
subscription.
c. SubscriptionInfo.format is a format negotiated when establishing the
session.
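The three validity conditions above can be sketched as a single check. This is an illustrative Python sketch; the store attributes (uri_exists, negotiated_types, negotiated_formats, active_request_uuids) are assumptions about an implementation's internals, not spec-defined names.

```python
def is_valid_subscription(sub: dict, store) -> bool:
    """Check the conditions from Row 8: context URI exists and is available
    (a.i), data object types were negotiated (a.ii), requestUuid is unused (b),
    and format was negotiated (c)."""
    ctx = sub["context"]
    if not store.uri_exists(ctx["uri"]):                          # condition a.i
        return False
    types = ctx.get("dataObjectTypes") or []                      # empty = all
    if types and not set(types) <= set(store.negotiated_types):   # condition a.ii
        return False
    if sub["requestUuid"] in store.active_request_uuids:          # condition b
        return False
    return sub["format"] in store.negotiated_formats              # condition c
```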
9. Notifications are for operations that happen in Store (Protocol 4) and put header operations in GrowingObject (Protocol 6).
1. The notifications sent in this protocol are based on operations that happen in Store
(Protocol 4). As such, detailed behaviors that trigger notifications are described in
Chapter 9 (see Sections 9.2.1 and 9.2.2) and indicated with the text
"NOTIFICATION BEHAVIOR".
a. RECOMMENDATION: For complete understanding of notification
behavior, use both Chapters 9 and 10.
2. Additionally, operations to a growing data object "header" and growing data
object parts that are themselves data objects in GrowingObject (Protocol 6)
may trigger a notification in StoreNotification (Protocol 5); these operations
add and update growing data object headers and add, update, link, unlink
and delete growing data object parts that are data objects. As such, the
notification requirements for these operations are the same as for changes to
data objects as described in Store (Protocol 4) (Chapter 9).
a. For more information about growing data object operations and
notifications, see Chapter 11.
10. No session survivability for subscriptions
1. If the ETP session is closed or the connection drops, then the store MUST cancel
notification subscriptions for the dropped customer endpoint.
2. On reconnect, the customer MUST re-create subscriptions (as explained in
Section 10.2.1.1).
a. For information on resuming operations after a disconnect, see
Appendix: Data Replication and Outage Recovery Workflows.
11. Order of notifications
1. For a given data object, the store MUST send notifications in the same order
that operations are performed in the store.
a. The intent of this rule is that objects are always "correct" (schema
compliant), and never left in an inconsistent state. The rule applies
primarily to contained data objects and growing data objects.
b. In general, global ordering of notifications is NOT required. However,
there are some situations where the order of notifications affecting
multiple objects is important and must be preserved.
12. Objects covered by more than one subscription
A customer can create multiple subscriptions on a store. It is possible that the
same data object is included in more than one subscription.
1. In this case, the store MUST send one notification per relevant subscription.
EXAMPLE: If a customer has subscribed to two different scope/contexts that
include the same data object, then the customer will receive at least 2
notifications, one for each subscription.
a. Each notification message includes the requestUuid that uniquely
identifies each subscription (so a customer can determine which
subscription resulted in each notification message).
b. A store MUST send notifications for only the most recent effective state
of a data object. So if notifications are queued for a data object, and
that data object is subsequently deleted, the store MAY discard any
previous notifications.
3. If the data object being deleted is the primary data object of a subscription,
the store MUST also do the following:
a. MAY send any relevant notifications that may have already been
queued (i.e., for other data objects in the subscription).
b. MUST stop any subscriptions for the deleted object by sending the
SubscriptionEnded message.
c. After sending the SubscriptionEnded message, MUST NOT send any
further notifications for the subscription.
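A rough model of the queueing and termination rules above follows. This Python sketch is illustrative only; SubscriptionQueue and the message dicts are assumptions about an implementation, not spec-defined structures.

```python
class SubscriptionQueue:
    """Per-subscription notification queue, identified by its requestUuid."""

    def __init__(self, request_uuid: str, primary_uri: str):
        self.request_uuid = request_uuid
        self.primary_uri = primary_uri   # the subscription's primary data object
        self.pending = []                # queued notification messages
        self.ended = False

    def enqueue(self, message: dict) -> None:
        # One notification per relevant subscription; nothing after it ends.
        if not self.ended:
            self.pending.append(message)

    def on_primary_deleted(self, send) -> None:
        # MAY send notifications already queued for other objects in scope...
        for message in self.pending:
            send(message)
        # ...MUST end the subscription with SubscriptionEnded...
        send({"type": "SubscriptionEnded", "requestUuid": self.request_uuid})
        # ...and MUST NOT send any further notifications afterward.
        self.ended = True
        self.pending.clear()
```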
16. Data objects that can be "active": changes to activeStatus field
1. REMINDER: Row 8
2. Growing data objects, channel data objects, and other data objects that can be
"active" in ETP have a field named activeStatus, which may have a value
of "inactive" or "active".
a. For information about this field and required behavior for setting it to
"inactive" related to the ActiveTimeoutPeriod capability, see Section
3.3.2.1.
b. Behaviors that cause the field to be set to "active" are described in the
protocols in which they occur and summarized in Section 9.2.2, Row 9.
3. NOTIFICATION BEHAVIOR: When a data object's activeStatus field
changes, a store MUST send an ObjectActiveStatusChanged notification
message.
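For example, a store implementation might guard the notification so it fires only on an actual status change. This is a Python sketch; notify is a stand-in callback and the function name is an assumption, not a spec API.

```python
def set_active_status(resource: dict, new_status: str, notify) -> None:
    """Update activeStatus and send ObjectActiveStatusChanged only when
    the value actually changes (no notification for a no-op write)."""
    assert new_status in ("active", "inactive")
    if resource.get("activeStatus") != new_status:
        resource["activeStatus"] = new_status
        notify("ObjectActiveStatusChanged", resource.get("uri"), new_status)
```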
17. Entitlement changes to data objects
REMINDER: Row 8
Many stores grant entitlements (access to data) at the well, wellbore or log level.
This means: even if a customer-user is subscribed to the correct context, it cannot
receive notification of the new object (e.g., well or wellbore) until the user is
granted permission. In this situation, the store MUST do the following:
1. When the customer is granted access to a data object, the store MUST send
the ObjectChanged notification message with an ObjectChangeKind of
authorized.
Conversely, a customer-user may initially be given access to a data object, only to
have it later revoked. In this situation, the store MUST do the following:
1. When the customer’s access to a data object is revoked, the store MUST
send the ObjectAccessRevoked notification message.
MaxPartSize: The maximum size in bytes of each data object part allowed in a
standalone message or a complete multipart message. Size in bytes is the total size in
bytes of the uncompressed string representation of the data object part in the format in
which it is sent or received.
Data type: long. Value units: <number of bytes>. Min: 10,000.
Data Object Capabilities
Protocol Capabilities
MaxDataObjectSize: (This is also an endpoint capability and a data object capability.)
The maximum size in bytes of a data object allowed in a complete multipart message.
Size in bytes is the size in bytes of the uncompressed string representation of the data
object in the format in which it is sent or received.
Data type: long. Value units: <number of bytes>. Min: 100,000.
This capability can be set for an endpoint, a protocol, and/or a data object. If set for all
three, here is how they generally work:
An object-specific value overrides an endpoint-specific value.
A protocol-specific value can further lower (but NOT raise) the limit for the protocol.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreNotification",
"name": "UnsubscribeNotifications",
"protocol": "5",
"messageType": "4",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreNotification",
"name": "ObjectChanged",
"protocol": "5",
"messageType": "2",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "change", "type": "Energistics.Etp.v12.Datatypes.Object.ObjectChange" },
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
changeTime: The time the change occurred in the store. This is
the value from the deletedTime field on the DeletedResource record.
It must be a UTC dateTime value, serialized as a long, using the Avro
logical type timestamp-micros (microseconds from the Unix Epoch,
1 January 1970 00:00:00.000000 UTC).
Data type: long. Min: 1. Max: 1.
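As a minimal illustration of this serialization (not part of the specification itself), a tz-aware UTC dateTime can be converted to timestamp-micros in Python as:

```python
from datetime import datetime, timezone

def to_timestamp_micros(dt: datetime) -> int:
    """Serialize a tz-aware UTC datetime as Avro timestamp-micros (a long)."""
    return int(dt.timestamp() * 1_000_000)

# 2021-09-09T00:00:00Z -> microseconds since the Unix Epoch
print(to_timestamp_micros(datetime(2021, 9, 9, tzinfo=timezone.utc)))
# 1631145600000000
```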
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreNotification",
"name": "ObjectDeleted",
"protocol": "5",
"messageType": "3",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "changeTime", "type": "long" },
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreNotification",
"name": "ObjectAccessRevoked",
"protocol": "5",
"messageType": "5",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "changeTime", "type": "long" },
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreNotification",
"name": "SubscriptionEnded",
"protocol": "5",
"messageType": "7",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "reason", "type": "string" },
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreNotification",
"name": "SubscribeNotifications",
"protocol": "5",
"messageType": "6",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "request",
"type": "Energistics.Etp.v12.Datatypes.Object.SubscriptionInfo"
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreNotification",
"name": "SubscribeNotificationsResponse",
"protocol": "5",
"messageType": "10",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreNotification",
"name": "ObjectActiveStatusChanged",
"protocol": "5",
"messageType": "11",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "activeStatus", "type":
"Energistics.Etp.v12.Datatypes.Object.ActiveStatusKind" },
{ "name": "changeTime", "type": "long" },
{ "name": "resource", "type": "Energistics.Etp.v12.Datatypes.Object.Resource" },
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
NOTE: The store may configure unsolicited subscriptions to send object data with notifications. The
customer can check the includeObjectData field on the SubscriptionInfo record to determine whether this
is the case. For more information, see Section 10.2.2.
Message Type ID: 8
Correlation Id Usage: MUST be ignored and SHOULD be set to 0.
Multi-part: False
Sent by: store
subscriptions: An array of SubscriptionInfo records, each of which
identifies the details of an unsolicited subscription. Each record
includes information such as the context and scope of the
subscription, and the request UUID that initiated the subscription.
The URI in the ContextInfo record MUST be a canonical Energistics
data object URI; for more information, see Appendix: Energistics
Identifiers.
Data type: SubscriptionInfo. Min: 1. Max: n.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreNotification",
"name": "UnsolicitedStoreNotifications",
"protocol": "5",
"messageType": "8",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "subscriptions",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.SubscriptionInfo" }
}
]
}
4. Use a set of Chunk messages to send small portions of the data object (small enough to fit into the
negotiated WebSocket size limit for the session). Each Chunk message MUST contain its assigned
"parent" BlobId and a portion of the data object.
5. For endpoints that receive these messages, to correctly "reassemble" the data object (BLOB): use
the blobId, and the messageId (which indicates the message sequence, because ETP (via
WebSocket) guarantees messages to be delivered in order), and final (flag that indicates the last
chunk that comprises a particular data object).
6. Chunk messages for different data objects MUST NOT be interleaved within the context of one
multipart message operation. If more than one data object must be sent using Chunk messages, the
sender MUST finish sending each data object before sending the next one. To indicate the last
Chunk message for one data object, the sender MUST set the final flag to true.
For more information on how to use the Chunk message, see Section 3.7.3.2.
Correlation Id Usage: MUST be set to the messageId of the ObjectChanged message that resulted in
this Chunk message being created.
Multi-part: True
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreNotification",
"name": "Chunk",
"protocol": "5",
"messageType": "9",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "blobId", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{ "name": "data", "type": "bytes" },
{ "name": "final", "type": "boolean" }
]
}
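The reassembly rules above (group by blobId, rely on in-order delivery, and stop at the final flag) can be sketched as follows. The function name and chunk tuple shape are illustrative assumptions, not part of ETP:

```python
# Sketch of reassembling data objects sent as Chunk messages, assuming
# chunks arrive in messageId order (WebSocket delivery is ordered) and
# chunks for different blobIds are not interleaved within one operation.

def reassemble(chunks):
    """chunks: iterable of (blob_id, data: bytes, final: bool) in arrival order.
    Returns {blob_id: complete_blob_bytes} for every finished blob."""
    buffers, complete = {}, {}
    for blob_id, data, final in chunks:
        buffers.setdefault(blob_id, bytearray()).extend(data)
        if final:  # final=True marks the last Chunk for this data object
            complete[blob_id] = bytes(buffers.pop(blob_id))
    return complete

blobs = reassemble([
    ("b1", b"<Trajec", False),
    ("b1", b"tory/>", True),
])
print(blobs["b1"])  # b'<Trajectory/>'
```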
11 GrowingObject (Protocol 6)
ProtocolID: 6
Defined Roles: store, customer
GrowingObject (Protocol 6) allows customer applications to operate independently on the two main
elements that comprise a growing data object: its "header" (or parent) data object and its set of parts,
which are index-based (i.e., either time or depth). (For a definition of growing data object, see Section
11.1.1.)
The ability to operate separately on the header and set of parts supports use cases and workflows that
minimize traffic on the wire. For example, an end-user of a customer application can get a list of only
headers to review, and then determine which headers they want to get some or all of the parts for.
GrowingObject (Protocol 6) defines messages that allow a customer to work with growing data object
headers, individual parts, or a range of parts, independently of one another. The combination of these
messages lets customers:
Edit existing growing data objects, by editing the header, the set of parts, or both.
To edit growing data objects a customer MUST use GrowingObject (Protocol 6) but some
operations on growing data objects MAY be done with Store (Protocol 4) (see below on this
page).
Add new growing data objects.
Request metadata about the parts in growing data objects.
Do a full range of operations on the set of parts: including get, put, delete and do similar operations
on a specified range of parts.
Determine what intervals of growing data objects have changed while disconnected, which helps
guide "catch up" operations and minimizes the likelihood of having to get "all data" again.
NOTE: All of these operations work the same for all growing data objects, regardless of their current
design. That is, all growing data objects (e.g., in WITSML v2.0) are NOT currently designed identically,
but all are handled with Protocol 6.
Each Energistics domain standard defines its growing data objects; for the list of growing data objects,
see an ML's ETP implementation specification.
Other ETP sub-protocols that may be used with GrowingObject (Protocol 6):
To subscribe to notifications of changes to growing data object parts that occur in Protocol 6, use
GrowingObjectNotification (Protocol 7) (Chapter 12). (These two protocols work together similarly as
Store (Protocol 4) and StoreNotification (Protocol 5).)
NOTE: Use of the PutGrowingDataObjectsHeader message in this protocol primarily triggers
notifications in StoreNotification (Protocol 5)—not Protocol 7. This difference is because this
message actually creates or updates the growing data object (i.e., the header, which is also
called the parent growing data object), not parts.
Store (Protocol 4) allows some operations on a "complete" growing data object (complete = the
growing data object "header" and all its parts). It is possible to add (insert, but NOT update), get or
delete the "complete" growing data object. See Chapter 9.
To query the parts of a growing data object, see GrowingObjectQuery (Protocol 16) (Chapter 17).
NOTE: Beginning with WITSML v2.0, Logs are no longer categorized as growing data objects (they were
in WITSML v1.4.1.1) but are explicitly defined using the Channel, ChannelSet and Log data objects. To
edit channel data, you MUST use protocols specifically designed for channels (see ChannelSubscribe
(Protocol 21), Chapter 19 and ChannelDataLoad (Protocol 22), Chapter 20).
Key ETP concepts that are important to understanding how this protocol is intended to work (see
Section 11.1).
Required behavior, which includes:
Description of the message sequence for main tasks, along with required behavior, use of
capabilities, and possible errors (see Section 11.2.1).
Other functional requirements (not covered in the message sequence) including use of additional
endpoint, data object, and protocol capabilities for preventing and protecting against aberrant
behavior (see Section 11.2.2).
- Definitions of the endpoint, data object, and protocol capabilities used in this protocol (see
Section 11.2.2.2).
Sample schemas of the messages defined in this protocol (which are identical to the Avro schemas
published with this version of ETP). However, only the schema content in this specification includes
documentation for each field in a schema (see Section 11.3).
11.1.2 Most Actions are on the "Parts" in the Context of One "Parent" Data Object
GrowingObject (Protocol 6) has 3 main kinds of messages, one for each kind of data the protocol
operates on: parts, ranges of parts, and headers. Each message name contains the word "parts", "range"
or "header" depending on the type of data it was designed to handle.
Key message types and related facts include:
Most "part" and "range" messages are operations for the parts or ranges of parts in one growing data
object. (EXCEPTION: GetPartsMetadata returns metadata for a list of growing data objects, not just
one data object.) That is, each part is sent in the context of one "parent" data object and involves
sending/receiving one or more object fragments in a format (e.g., XML or JSON) that comprise the
growing parts of the data object. The parent data object is always referenced by its URI.
Each individual part in a growing data object is identified by a UID that must be unique within the
context of the parent data object. NOTE: The application that first creates a growing data object
assigns its UUID (for more information see Section 25.2); the application that first creates parts of
a growing data object assigns part UIDs.
NOTE: Some parts are also data objects themselves. These parts have both a UID and a UUID.
GrowingObject (Protocol 6) references these parts by their UID, NOT their UUID.
A range of parts is specified with an indexInterval, which is defined in the relevant messages in
this document.
"header" messages are get or put operations for one or more growing data object(s), each one
identified by its URI.
Put header messages MAY include parts when first adding a growing data object to a store.
Put header messages MUST NOT include parts when updating an existing growing data object
header in a store.
Get header messages do NOT return parts.
As stated above, if an application creates a growing data object, that application must assign the
growing data object's UUID.
- If any parts are included when creating the growing data object, the application must also assign
UIDs to the parts.
Term Definition
Adjacent Two ranges are adjacent when the end index of one is equal to the start index of
the other.
Two ChangeAnnotations records are adjacent if the ranges defined by their
interval fields are adjacent.
IMPORTANT: Even though ChangeAnnotation records may be adjacent, store
customers MUST consider the entire interval in a ChangeAnnotation to be affected
by the change, including the end index. When ChangeAnnotation records are
adjacent, store customers MUST consider the changeTime for channel data points or
non-interval parts (e.g., WITSML TrajectoryStations) at the index value shared by
both ChangeAnnotation records to be the most recent changeTime of the two
records.
Append An append is when new data points or parts are added to the “end” of a channel or
growing data object such that:
1. No added data point or part overlaps the existing data range.
2. For increasing data, all added data points and parts have a primary index value
or start index value that is greater than or equal to the end index of the existing
data range.
3. For decreasing data, all added data points and parts have a primary index value
or start index value that is less than or equal to the end index of the existing data
range.
Covering A range covers an index value if the index value is:
For increasing data, greater than or equal to the range’s start index and less than
or equal to the range’s end index.
For decreasing data, less than or equal to the range’s start index and greater than
or equal to the range’s end index.
A range covers another range if it covers both the start and end index of the other
range.
Decreasing data Data for a channel or growing data object is decreasing if the direction field on the
IndexMetadataRecord for the primary index is set to “Decreasing”.
With decreasing data, the end index is less than or equal to the start index for all data
ranges. This includes the data range for the channel or growing data object. This also
includes the interval field on any ChangeAnnotation record for the channel or
growing data object.
Increasing Data Data for a channel or growing data object is increasing if the direction field on the
IndexMetadataRecord for the primary index is set to “Increasing”.
With increasing data, the end index is greater than or equal to the start index for all
data ranges. This includes the data range for the channel or growing data object.
This also includes the interval field on any ChangeAnnotation record for the channel
or growing data object.
Inside An index value is inside a range if it is:
For increasing data, strictly greater than the range’s start index and strictly less
than the range’s end index.
For decreasing data, strictly less than the range’s start index and strictly greater
than the range’s end index.
If range A’s start index and end index are both inside range B, then range A is inside
range B.
Overlapping Range A and Range B overlap when they are NOT adjacent and any index value is
in both Range A and Range B. That is, when any of the following are true:
1. Range A’s start index is the same as range B’s start index.
2. Range A’s end index is the same as range B’s end index.
3. Range A’s start index or end index are inside Range B.
4. Range B’s start index or end index are inside Range A.
Two ChangeAnnotations records overlap if the ranges defined by their interval
fields overlap.
NOTE: Adjacent ranges and adjacent ChangeAnnotation records do NOT overlap
each other.
Prepend A prepend is when new data points or parts are added to the “start” of a channel or
growing data object such that:
1. No added data point or part overlaps the existing data range.
2. For increasing data, all added data points and parts have a primary index value
or end index value that is less than or equal to the start index of the existing data
range.
3. For decreasing data, all added data points and parts have a primary index value
or end index value that is greater than or equal to the start index of the existing
data range.
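The range relationships defined above (adjacent, covering, inside, overlapping) can be sketched as predicates. This illustrative sketch covers increasing data only (for decreasing data, the comparisons are mirrored); a range is a (start, end) tuple with start <= end:

```python
# Predicates for the range terms in the glossary above, increasing data only.

def covers(rng, value):
    start, end = rng
    return start <= value <= end

def covers_range(a, b):
    return covers(a, b[0]) and covers(a, b[1])

def inside(rng, value):
    start, end = rng
    return start < value < end  # strictly between start and end

def adjacent(a, b):
    return a[1] == b[0] or b[1] == a[0]

def overlaps(a, b):
    if adjacent(a, b):
        return False  # adjacent ranges do NOT overlap
    return (a[0] == b[0] or a[1] == b[1]
            or inside(b, a[0]) or inside(b, a[1])
            or inside(a, b[0]) or inside(a, b[1]))

print(adjacent((0, 10), (10, 20)))    # True
print(overlaps((0, 10), (10, 20)))    # False
print(overlaps((0, 10), (5, 20)))     # True
print(covers_range((0, 10), (2, 8)))  # True
```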
11.2.1.1 To get parts metadata for one or more growing data objects:
1. The customer MUST send the store the GetPartsMetadata message (Section 11.3.1), which
contains a map whose values MUST each be the URI of a growing data object that the customer
wants to get parts metadata for.
2. For the growing data objects that the store successfully returns parts metadata for, it MUST send one
or more GetPartsMetadataResponse map response messages (Section 11.3.2), which contains a
map whose values are PartsMetadataInfo records (Section 23.34.17).
a. For more information on how map response messages work, see Section 3.7.3.
3. For the URIs it does NOT successfully return parts metadata for, the store MUST send one or more
map ProtocolException messages, where values in the errors field (a map) are appropriate errors,
such as ENOT_FOUND (11).
a. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
11.2.1.2 To get the headers for one or more growing data objects:
1. The customer MUST send the store the GetGrowingDataObjectsHeader message (Section 11.3.3),
which contains a map whose values MUST be the URI of a growing data object that the customer
wants to get header information for.
2. For the URIs it successfully returns growing data object header information for, the store MUST send
one or more GetGrowingDataObjectsHeaderResponse map response messages (Section 11.3.4)
where the map values are DataObject records (Section 23.34.5) with the growing data object URIs
and header data.
a. For more information on how map response messages work, see Section 3.7.3.
3. For the URIs it does NOT successfully return growing data object header information for, the store
MUST send one or more map ProtocolException messages where values in the errors field (a map)
are appropriate errors, such as ENOT_FOUND (11).
a. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
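The map-request/map-response pattern in this sequence can be sketched as follows; the function, store shape, and URIs are illustrative assumptions, not part of ETP:

```python
# Hypothetical store-side sketch of GetGrowingDataObjectsHeader handling:
# successes are returned keyed by the request map's keys, and failures go
# into a ProtocolException errors map, e.g. ENOT_FOUND (11).

ENOT_FOUND = 11

def get_headers(store, request):
    """request: {key: uri}. Returns (success_map, errors_map)."""
    success, errors = {}, {}
    for key, uri in request.items():
        if uri in store:
            success[key] = store[uri]  # header data for the growing data object
        else:
            errors[key] = ENOT_FOUND
    return success, errors

store = {"eml:///witsml20.Trajectory(a)": "<header a/>"}
ok, bad = get_headers(store, {"0": "eml:///witsml20.Trajectory(a)",
                              "1": "eml:///witsml20.Trajectory(b)"})
print(ok)   # {'0': '<header a/>'}
print(bad)  # {'1': 11}
```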
11.2.1.4 To get a range of parts (interval) for one growing data object:
1. The customer MUST send the store the GetPartsByRange message (Section 11.3.11), which
contains the URI of the parent growing data object, the index interval (which specifies the range of
interest), and a flag to includeOverlappingIntervals.
a. For more information on how overlapping intervals work, see Section 11.2.2.1.
2. If the store successfully returns parts from the request interval, it MUST send one or more
GetPartsByRangeResponse messages (Section 11.3.12), each of which contains an array of UIDs
and data for each part that the store could return.
a. The store MUST limit the total count of parts returned to the customer's value for
MaxResponseCount protocol capability.
b. The customer MAY notify the store of responses that exceed this limit by sending error
ERESPONSECOUNT_EXCEEDED (30).
c. If a store's value for MaxResponseCount protocol capability is smaller than a customer's value, a
store MAY further limit the total count of parts to its value.
d. If a store is unable to return all parts to a request due to exceeding the lower of the customer's or
the store's value for MaxResponseCount protocol capability, the Store MUST terminate the
multipart response by sending error ERESPONSECOUNT_EXCEEDED (30).
i. A store MUST NOT send ERESPONSECOUNT_EXCEEDED until it has sent
MaxResponseCount parts.
3. If the store has no parts in the request interval, it MUST send a GetPartsByRangeResponse
message with the FIN bit set and the parts field set to an empty array.
4. If the store does NOT successfully return parts or a GetPartsByRangeResponse with an empty
parts array, it MUST send a non-map ProtocolException message with an appropriate error, such
as EREQUEST_DENIED (6).
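The MaxResponseCount behavior in step 2 can be sketched store-side as follows; the function name and message tuples are illustrative assumptions:

```python
# Hypothetical sketch of the MaxResponseCount limiting described above:
# the store sends at most min(customer_limit, store_limit) parts and, if
# more parts matched, terminates the multipart response with
# ERESPONSECOUNT_EXCEEDED (30) only after sending that many parts.

ERESPONSECOUNT_EXCEEDED = 30

def respond_to_get_parts_by_range(matching_parts, customer_limit, store_limit):
    limit = min(customer_limit, store_limit)
    messages = [("GetPartsByRangeResponse", part)
                for part in matching_parts[:limit]]
    if len(matching_parts) > limit:
        messages.append(("ProtocolException", ERESPONSECOUNT_EXCEEDED))
    return messages

msgs = respond_to_get_parts_by_range(["p1", "p2", "p3"],
                                     customer_limit=2, store_limit=5)
print(msgs)
# [('GetPartsByRangeResponse', 'p1'), ('GetPartsByRangeResponse', 'p2'),
#  ('ProtocolException', 30)]
```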
11.2.1.5 To add or update the headers for one or more growing data objects:
1. The customer MUST send the store the PutGrowingDataObjectsHeader message (Section 11.3.7),
which contains a map whose values MUST be the URIs of the growing data objects that the customer
wants to add (insert) or update and the data for each.
a. REMINDER: ETP uses "upsert" semantics, so all put operations are a complete replace of any
existing data. For more information, see Section 9.1.1.
b. A customer MUST honor the store's MaxDataObjectSize capability. For more information, see
Section 3.3.2.4.
c. When adding a new growing data object, the growing data object MAY include parts. When
updating an existing growing data object, the growing data object MUST NOT include parts. For
additional details on required behavior when adding parts, see Section 11.2.1.6.
d. When a growing data object includes parts, the customer MUST honor the store’s MaxPartSize
capability. For more information, see Section 3.3.2.5.
2. For growing data object headers it successfully puts (add to/replace in the store), the store MUST
send one or more PutGrowingDataObjectsHeaderResponse map response messages (Section
11.3.8).
a. For more information on how map response messages work, see Section 3.7.3.
b. The store MUST send this message AFTER it performs these operations:
i. If the growing data object does not exist in the store, the store MUST add it. If the growing
data object includes parts, the store MUST follow the same rules defined for Store (Protocol
4) when creating a growing data object that includes parts as described in Section 9.2.2,
Row 21. If the parts are themselves also data objects, the store MUST also follow the rules
described in Section 9.2.2, Row 22.
ii. If the growing data object does exist in the store and the customer included parts in the
update, the store MUST reject the update and send error
EUPDATEGROWINGOBJECT_DENIED (23).
iii. If the growing data object does exist in the store, the store MUST replace the entire existing
header with the information the customer provided in the PutGrowingDataObjectsHeader
message.
iv. Store-managed fields on the Resource only (storeCreated and storeLastWrite) MUST be
updated for these operations; for more information, see Section 11.2.2, Row 8.
c. Successful put header operations MAY trigger notifications in StoreNotification (Protocol 5)
(because putting a header = inserting or updating a data object). For more information, see
Section 10.2.2, Row 9.
d. NOTIFICATION BEHAVIOR: When a put header operation succeeds and includes parts, the
store MUST send PartsChanged notifications as described in Section 11.2.1.6 for the added or
updated parts.
3. For growing data object headers the store does NOT successfully put, it must send a
ProtocolException message with errors field (map) whose values MUST be the URIs of the growing
data objects from the request that could not be added and an appropriate error code for each, for
example, EREQUEST_DENIED (6).
a. For more information about use of ProtocolException messages with plural messages, see
Section 3.7.3.
4. After adding a growing data object header, a customer can use the PutParts, DeleteParts and
ReplacePartsByRange messages to add and edit a growing data object's parts.
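The core validation in this sequence, that a new growing data object MAY include parts while an update MUST NOT, can be sketched as follows. The function, store shape, and URI are illustrative assumptions:

```python
# Hypothetical store-side check for PutGrowingDataObjectsHeader. An update
# to an existing growing data object that includes parts is rejected with
# EUPDATEGROWINGOBJECT_DENIED (23); otherwise "upsert" semantics apply.

EUPDATEGROWINGOBJECT_DENIED = 23

def put_header(store, uri, header, parts=()):
    """store: {uri: {"header": ..., "parts": {uid: data}}}."""
    if uri in store:
        if parts:
            # Updates via PutGrowingDataObjectsHeader MUST NOT include parts.
            return ("ProtocolException", EUPDATEGROWINGOBJECT_DENIED)
        store[uri]["header"] = header  # complete replace of the header only
    else:
        store[uri] = {"header": header, "parts": dict(parts)}
    return ("PutGrowingDataObjectsHeaderResponse", uri)

store = {}
put_header(store, "eml:///witsml20.Trajectory(x)", "h1", [("uid1", "d1")])
print(put_header(store, "eml:///witsml20.Trajectory(x)", "h2", [("uid2", "d2")]))
# ('ProtocolException', 23)
```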
11.2.1.6 To add or update one or more parts for one growing data object:
1. The customer MUST send the store the PutParts message (Section 11.3.5), which contains the URI
of the parent growing data object and a map whose values MUST be the UIDs and data for each part
that the customer wants to add (insert) or update.
a. PutParts represents a set of distinct add or update operations. It does not explicitly operate on a
range of data. To operate on a range of data, use ReplacePartsByRange.
b. REMINDER: ETP uses "upsert" semantics, so all put operations are a complete replace of any
existing data. For more information, see Section 9.1.1.
2. For the parts it successfully puts (add to/replace in the store), the store MUST send one or more
PutPartsResponse map response messages (Section 11.3.6).
a. For more information on how map response messages work, see Section 3.7.3.
b. The store MUST send this message AFTER it performs these operations:
i. If the parts do not exist in the store, the store MUST add them.
1. If the parts are themselves also data objects, adding new parts MUST NOT exceed the
store’s value for MaxContainedDataObjectCount data object capability for the parent
growing data object type. For each part that would exceed this limit, the store MUST NOT
add the part. The store MUST instead send ELIMIT_EXCEEDED (12).
ii. If the parts do exist, the store MUST replace them with the information the customer provided
in the PutParts message.
iii. For BOTH i and ii, the store MUST do the following:
1. If the parts are themselves also data objects, the store MUST also follow these rules for
the parts:
a. The rules for putting data objects into a store defined in Section 9.2.1.2.
b. The store MUST link any parts not previously in the growing data object to the
growing data object.
c. The additional rules for putting parts that are data objects into a store defined in
Section 9.2.2, Row 23.
2. Update the storeLastWrite field on the growing data object's Resource. For more
information, see Section 11.2.2, Row 8.
3. Update the activeStatus field on the growing data object. For more information, see
Section 11.2.2, Row 9.
4. Create appropriate ChangeAnnotation records. For more information, see Section
11.2.2.3.
3. For the parts it does NOT successfully put, the store MUST send one or more map
ProtocolException messages where values in the errors field (a map) are appropriate errors, such
as EREQUEST_DENIED (6).
a. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
4. NOTIFICATION BEHAVIOR: The store MUST send a PartsChanged notification message with a
type (objectChangeKind) of "insert" or "update".
a. If the parts are themselves also data objects, for any parts that were newly linked to the growing
data objects, the store MUST send an ObjectChanged notification with ObjectChangeKind set to
“joined”.
b. When a PutParts message both inserts and updates parts, 2 PartsChanged notifications must
be sent: one for the inserted parts and one for the updated parts.
c. A store MUST send a notification for only the most recent effective state of a part. So if multiple
insert or update changes have been made to a part since notifications were last sent for the part,
the store MAY send only one notification.
i. If the part is in a range that will be included in a ReplacePartsByRange message,
PartsChanged MUST NOT be sent. Instead, the part MUST be included in the
PartsReplacedByRange message.
ii. If the part will NOT be included in a ReplacePartsByRange and it was inserted since the
most recent insert or update notification was sent, the store MUST send an insert notification
with the timestamp of the most recent insert or update change.
iii. Otherwise, the store MUST send an update notification with the timestamp of the most recent
update.
d. Notifications are sent in GrowingObjectNotification (Protocol 7). For more information on rules for
populating/sending notifications and why notification behavior is specified here, see Section
11.2.2, Row 5.
e. When the parts in a PutParts message are themselves also data objects, the store MUST also
send ObjectChanged notification messages in StoreNotification (Protocol 5) as described in
Section 9.2.1.2 and Section 9.2.2.
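The coalescing rules in step 4 (one notification per part, "insert" if the part was inserted since the last notification, otherwise "update", timestamped with the latest change) can be sketched as follows. The function and change tuples are illustrative assumptions, and the sketch ignores the ReplacePartsByRange exception:

```python
# Illustrative coalescing of queued part changes into a single pending
# notification per part, reflecting the most recent effective state.

def coalesce(changes):
    """changes: ordered (uid, kind, change_time) with kind 'insert'/'update'.
    Returns {uid: (kind, change_time)} -- one notification per part."""
    pending = {}
    for uid, kind, t in changes:
        prior_kind = pending.get(uid, (kind,))[0]
        # An insert followed by updates is still reported as an insert,
        # with the timestamp of the most recent change.
        pending[uid] = ("insert" if prior_kind == "insert" else kind, t)
    return pending

print(coalesce([
    ("p1", "insert", 10), ("p1", "update", 20),  # -> insert @ 20
    ("p2", "update", 15),                        # -> update @ 15
]))
# {'p1': ('insert', 20), 'p2': ('update', 15)}
```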
11.2.1.7 To delete one or more parts from one growing data object:
1. The customer MUST send the store the DeleteParts message (Section 11.3.9), which contains the
URI of the parent growing data object and the map whose values MUST be the part UIDs that the
customer wants to delete.
a. When the parts in a DeleteParts message are themselves data objects, the store MUST also
treat DeleteParts as a request to delete (NOT prune or unjoin) the data objects.
2. For the parts it successfully deletes, the store MUST send one or more DeletePartsResponse map
response messages (Section 11.3.10).
a. For more information on how map response messages work, see Section 3.7.3.
b. The store MUST send this message AFTER it performs these operations:
i. Update the storeLastWrite field on the growing data object's Resource. For more
information, see Section 11.2.2, Row 8.
ii. Update the activeStatus field on the growing data object. For more information, see Section
11.2.2, Row 9.
iii. Create appropriate ChangeAnnotation records. For more information, see Section 11.2.2.3.
iv. If the parts are themselves also data objects, the store MUST also follow these rules for the
parts:
1. The rules for deleting data objects from a store defined in Section 9.2.1.3.
2. The additional rules for deleting contained data objects defined in Section 9.2.2, Row 25.
3. The additional rules for deleting parts that are data objects from a store defined in
Section 9.2.2, Row 27.
3. For the parts it does NOT successfully delete, the store MUST send one or more map
ProtocolException messages where values in the errors field (a map) are appropriate errors, such
as ENOT_FOUND (11) or EREQUEST_DENIED (6).
a. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
4. NOTIFICATION BEHAVIOR: The store MUST send a PartsDeleted notification message.
a. A store MUST send a notification for only the most recent effective state of a part. So if
notifications are queued, and the part is subsequently deleted, the store MAY discard any
previous notifications.
b. If the part is in a range that will be included in a ReplacePartsByRange message,
PartsChanged MUST NOT be sent. Instead, the part MUST be included in the
PartsReplacedByRange message.
c. Notifications are sent in GrowingObjectNotification (Protocol 7). For more information on rules for
populating/sending notifications and why notification behavior is specified here, see Section
11.2.2.
d. When the parts in a DeleteParts message are themselves also data objects, the store MUST
also send ObjectDeleted notification messages in StoreNotification (Protocol 5) as described in
Section 9.2.1.3 and Section 9.2.2.
11.2.1.8 To delete a range of parts (interval) and (optionally) replace it with another range of
parts:
1. The customer MUST send the store the ReplacePartsByRange message (Section 11.3.15), which
contains these fields: uri, which MUST be the URI of the parent growing data object from which the
parts are to be deleted; deleteInterval, which MUST specify the index interval for the range of parts to
be deleted; parts, which is an array that MUST identify the UIDs and data for each part that is to be
added (i.e., the new parts that will replace the parts that have been deleted); and the
includeOverlappingIntervals flag.
a. The number of parts deleted DOES NOT have to equal the number of parts added.
b. If the parts field is left empty, then the message is a delete request for the interval specified in
deleteInterval.
c. For information on how overlapping intervals work, see Section 11.2.2.1.
d. When the parts deleted by a ReplacePartsByRange message are themselves data objects, the
store MUST also treat ReplacePartsByRange as a request to delete (NOT prune or unjoin) the
data objects.
2. ReplacePartsByRange is an atomic operation: the entire request either succeeds or fails.
a. The store MUST delete the range of parts specified in deleteInterval and replace it with the parts
specified in parts.
i. If the parts are themselves also data objects, the store MUST also follow these rules for the
deleted parts:
1. The rules for deleting data objects from a store defined in Section 9.2.1.3.
2. The additional rules for deleting contained data objects defined in Section 9.2.2, Row 25.
3. The additional rules for deleting parts that are data objects from a store defined in
Section 9.2.2, Row 27.
ii. If the parts are themselves also data objects, the store MUST also follow these rules for the
replacement parts:
1. The rules for putting data objects into a store defined in Section 9.2.1.2.
2. The store MUST link any parts not previously in the growing data object to the growing
data object.
3. The additional rules for putting parts that are data objects into the store defined in
Section 9.2.2, Row 23.
iii. If the store completes these operations successfully, it MUST send a
ReplacePartsByRangeResponse message (Section 11.3.16), which is a "success only"
message indicating that the store has successfully completed the entire operation as
requested.
b. If any replacement part is NOT covered by the deleteInterval, the store MUST fail the operation
and send EINVALID_OPERATION (32). NOTE: includeOverlappingIntervals DOES NOT allow
replacement parts to overlap the deleteInterval. They MUST always be covered by the
deleteInterval.
c. If the parts are themselves data objects and adding the replacement parts would exceed the
store’s value for the MaxContainedDataObjectCount data object capability for the parent growing
data object type, the store MUST fail the operation and send ELIMIT_EXCEEDED (12).
d. After deleting the range of parts specified in deleteInterval, if any replacement parts would have
the same UID as a part still in the growing data object, the store MUST fail the operation and
send EINVALID_OPERATION (32). That is, replacement parts MUST ONLY replace parts that
are deleted by the message.
e. If the operation fails, the store MUST:
i. Roll back the entire request. That is, the store MUST be in the state it was in before receiving
the ReplacePartsByRange message.
ii. Send a non-map ProtocolException message with an appropriate error code such as
EREQUEST_DENIED (6).
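The failure rules above (2b–2d) amount to a pre-flight validation the store can run before mutating anything; a minimal sketch follows, in which Interval, Part, and the function name are illustrative and are not types defined by the ETP schemas:

```python
# Hedged sketch of the ReplacePartsByRange pre-flight checks from rules
# 2b-2d above. Interval, Part, and validate_replace_parts_by_range are
# illustrative names, not ETP-defined types or operations.
from dataclasses import dataclass

EINVALID_OPERATION = 32  # error codes quoted from the rules above
ELIMIT_EXCEEDED = 12

@dataclass
class Interval:
    start: float
    end: float

    def covers(self, other: "Interval") -> bool:
        # "Covered" means wholly contained in the delete interval.
        return self.start <= other.start and other.end <= self.end

@dataclass
class Part:
    uid: str
    interval: Interval

def validate_replace_parts_by_range(delete_interval, replacement_parts,
                                    surviving_uids, contained_count,
                                    max_contained):
    """Return an error code, or None if the operation may proceed."""
    # Rule 2b: every replacement part MUST be covered by deleteInterval.
    for part in replacement_parts:
        if not delete_interval.covers(part.interval):
            return EINVALID_OPERATION
    # Rule 2c: adding parts must not exceed MaxContainedDataObjectCount.
    if contained_count + len(replacement_parts) > max_contained:
        return ELIMIT_EXCEEDED
    # Rule 2d: replacement UIDs must not collide with parts that survive
    # the delete.
    for part in replacement_parts:
        if part.uid in surviving_uids:
            return EINVALID_OPERATION
    return None
```

Because the operation is atomic, a store would run checks like these before applying the delete-and-replace, so that a failure leaves the growing data object untouched.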
3. NOTIFICATION BEHAVIOR: The store MUST send a PartsReplacedByRange notification
message.
a. A store MUST send a notification for only the most recent effective state of a part. So if
notifications are queued:
i. If the parts affected by a ReplacePartsByRange message were PREVIOUSLY affected by
PutParts or DeleteParts messages before PartsReplacedByRange is sent, the store MAY
discard the previous notifications and only send PartsReplacedByRange.
ii. If the parts affected by a ReplacePartsByRange message were LATER affected by other
PutParts or DeleteParts messages before PartsReplacedByRange is sent, the store MAY
discard the later notifications and only send PartsReplacedByRange with changeTime set to
the most recent change covered by range included in the message.
iii. If a range is affected by more than one ReplacePartsByRange message before
PartsReplacedByRange is sent, a store MAY choose to only send one
PartsReplacedByRange message that covers the combined range of all relevant
ReplacePartsByRange messages with changeTime set to the most recent relevant
timestamp.
iv. When combining multiple notifications into a single PartsReplacedByRange message, the
store MUST set includeOverlappingIntervals to false, set the deletedInterval to the smallest
range that covers all affected parts, and include as replacement parts any existing parts
covered by the message’s deletedInterval.
b. Notifications are sent in GrowingObjectNotification (Protocol 7). For more information on rules for
populating/sending notifications and why notification behavior is specified here, see Section
11.2.2.
c. When the parts deleted by a ReplacePartsByRange message are themselves also data objects,
the store MUST also send ObjectDeleted notification messages in StoreNotification (Protocol 5)
as described in Section 9.2.1.3 and Section 9.2.2.
i. Each ChangeAnnotation record contains a timestamp for when the change occurred in the
store and the interval of the growing data object that changed. (NOTE: Change annotations
keep track ONLY of the interval that changed, NOT the actual data that changed).
d. For information about how the store tracks and manages these change annotations, see
Section 11.2.2, Row 15.
4. For the URIs it does NOT successfully return change annotations for, the store MUST send one or
more map ProtocolException messages where values in the errors field (a map) are appropriate
errors.
a. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
5. Based on information in the GetChangeAnnotationsResponse message, the customer MAY:
a. Use the GetPartsByRange message to retrieve intervals of interest that have changed (as
described in Section 11.2.1.4).
b. Re-establish growing data object or growing data object parts notification subscriptions that were
in place when a session was disconnected (see Sections 10.2.1.1 and 12.2.1.1, respectively).
2. Capabilities-related behavior
   1. Relevant endpoint, data object, and/or protocol capabilities MUST be specified when the ETP
      session is established (see Chapter 5) and MUST be used/honored as defined in the relevant
      ETP sub-protocol.
   2. For an explanation of endpoint, data object, and protocol capabilities, see Section 3.3.
      a. For the list of global capabilities and related behavior, see Section 3.3.2.
   3. Section 11.2.2.2 identifies the capabilities most relevant to this ETP sub-protocol. Additional
      details for how to use the protocol capabilities are included below in this table and in Section
      11.2.1 GrowingObject: Message Sequence.
   4. The endpoint capability MaxPartSize MUST be honored for most requests and responses in this
      protocol. For the general behavior that must be applied, see Section 3.3.2.5.
3. Message Sequence (see Section 11.2.1)
   1. The Message Sequence section above (Section 11.2.1) describes requirements for the main
      tasks listed there and also defines required behavior.
4. Plural messages (which include maps)
   1. This protocol uses plural messages. For detailed rules on handling plural messages (including
      handling of ProtocolException messages), see Section 3.7.3.
5. Notifications
   1. This chapter explains events (operations) in GrowingObject (Protocol 6) that trigger the store to
      send notifications, which the store sends using StoreNotification (Protocol 5) and/or
      GrowingObjectNotification (Protocol 7). However, statements of NOTIFICATION BEHAVIOR are
      here in this chapter, in the context of the detailed explanation of the behavior that triggers the
      notification.
   2. Notification behavior is described here using MUST. However, the store MUST send notifications
      IF AND ONLY IF there is a customer subscribed to notifications for an appropriate context (i.e., a
      context that includes the data object), and the store MUST ONLY send notifications to those
      customers that are subscribed to appropriate contexts.
      a. For more information on data object notifications, see Chapter 10 StoreNotification
         (Protocol 5).
      b. For information on notifications for parts in growing data objects, see Chapter 12
         GrowingObjectNotification (Protocol 7).
6. Growing data object operations that may be performed using Store (Protocol 4)
   1. To perform all operations listed in this row, a customer MUST use messages in Store
      (Protocol 4); for more information, see Chapter 9.
      a. To add (insert) a new growing data object and its parts in one operation, a customer MUST
         use a PutDataObjects message. See Section 9.2.1.2.
         i. Alternatively, a customer MAY add a growing data object using GrowingObject
            (Protocol 6) by first adding the growing data object header and then adding the parts. For
            more information, see Sections 11.2.1.5 and 11.2.1.6.
      b. To get a growing data object and its parts in one operation, a customer MUST use a
         GetDataObjects message. See Section 9.2.1.1.
      c. To delete a growing data object, a customer MUST use a DeleteDataObjects message. See
         Section 9.2.1.3.
         i. A growing data object CANNOT be deleted using GrowingObject (Protocol 6), only Store
            (Protocol 4).
7. Growing data object operations that MUST be performed using GrowingObject (Protocol 6)
   1. To perform all operations listed in this row, a customer MUST use messages in GrowingObject
      (Protocol 6):
      a. All "updates" to growing data objects, for header and parts information.
      b. All operations (additions, edits, deletes) on parts only in the context of one growing data
         object.
      c. For the list of all tasks that can be done in this protocol and how they work, see Section
         11.2.1.
11.2.2.1.1 EXAMPLE
A growing data object has these 3 "range parts":
Range Part 1: 1,000 to 2,000 ft
Range Part 2: 2,000 to 3,000 ft
Range Part 3: 3,000 to 4,000 ft
A "ByRange" request specifies an interval (request interval) of 1,500 to 3,500 ft.
If the includeOverlappingIntervals flag is true, all 3 range parts are included in the operation (because
a portion of each range part overlaps the request interval).
EXAMPLE: In the ReplacePartsByRange message, if the includeOverlappingIntervals flag is true,
the store will delete any range part that overlaps the deleteInterval, so all 3 range parts.
If the includeOverlappingIntervals flag is false, only Range Part 2 is included in the operation (because
the minimum and maximum points that define the range part are wholly contained in the request
interval).
EXAMPLE: In the ReplacePartsByRange message, if the includeOverlappingIntervals flag is false,
the store will delete only range parts that are completely contained in the deleteInterval, so only
Range Part 2.
includeOverlappingIntervals: true
Definition: "Range parts" are affected where ANY part of their interval overlaps with the request interval.
There are 4 cases of how a range part may overlap, or be contained within, the request interval.
The logic below addresses these cases.
A. Range part falls completely within the request interval
B. mdTop is inside the request interval, but mdBase is outside
C. mdTop is outside the request interval, but mdBase is inside
D. both mdTop and mdBase are outside the request interval, but the range part SPANS the request
interval.
includeOverlappingIntervals: false
Range parts are affected only where their interval is wholly contained within the request interval. Any
partially overlapping range parts are ignored. The following logic applies to all ByRange operations (get,
put, and delete):
mdTop >= startIndex && mdTop <= endIndex
&& mdBase >= startIndex && mdBase <= endIndex
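The selection logic can be sketched for both flag values. The false-case predicate transcribes the expression above; the true-case predicate is an assumption inferred from the overlap definition (it is not an expression quoted from the specification), and the sketch assumes increasing index order:

```python
# Sketch of the ByRange selection logic for both values of
# includeOverlappingIntervals, assuming increasing index order.
# The false-case predicate transcribes the expression above; the
# true-case predicate is inferred from the overlap definition.

def is_affected(md_top, md_base, start_index, end_index,
                include_overlapping_intervals):
    if include_overlapping_intervals:
        # Any overlap with the request interval (cases A, B, C, and D).
        return md_top <= end_index and md_base >= start_index
    # Wholly contained in the request interval (case A only).
    return (start_index <= md_top <= end_index
            and start_index <= md_base <= end_index)

# Worked example from Section 11.2.2.1.1: request interval 1,500-3,500 ft.
parts = {1: (1000, 2000), 2: (2000, 3000), 3: (3000, 4000)}
overlapping = [n for n, (t, b) in parts.items()
               if is_affected(t, b, 1500, 3500, True)]   # parts 1, 2, 3
contained = [n for n, (t, b) in parts.items()
             if is_affected(t, b, 1500, 3500, False)]    # part 2 only
```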
11.2.2.2 Rules for Creating Change Annotations for Channel Data Objects
Figure 20 illustrates some common types of changes to channel data and how a store may or must
create or update ChangeAnnotation records in response to them.
Figure 20: Example showing how change annotations (CA) work over time for channels. Blue box = channel
data, white box with red label = change annotation. The size of the white box is intended to show that the
annotation is for the entire corresponding channel (blue box).
The table below describes how stores create ChangeAnnotation records in different scenarios for
channel data objects.
IMPORTANT: ChangeAnnotation records MUST be created based on the type of change that happens
and NOT solely based on the ETP message used. For example, ReplaceRange in ChannelDataLoad
(Protocol 22) may replace data at the start, in the middle, or at the end of a channel’s data range.
IMPORTANT: The table explains how to create ChangeAnnotation records in response to customer
requests. Whenever new records overlap each other or existing records, the store MUST merge the
overlapping records together. In addition, the store MAY merge non-overlapping records. For rules
governing merging ChangeAnnotation records, see Section 11.2.2.4.
Data Prepended: new data prepended; existing data unaffected.
   Index change: For increasing data, start index decreases. For decreasing data, start index increases.
   ChangeAnnotation: Yes (required).
   changedInterval: Range between new and old start index.
   changeTime: Time when store prepended data.
Range Deleted Covering End Index: existing data removed; no data added or changed.
   Index change: For increasing data, end index decreases. For decreasing data, end index increases.
   ChangeAnnotation: Yes (required).
   changedInterval: Range between new and old end index.
   changeTime: Time when store deleted data.
Range Deleted Covering Start Index: existing data removed; no data added or changed.
   Index change: For increasing data, start index increases. For decreasing data, start index decreases.
   ChangeAnnotation: Yes (required).
   changedInterval: Range between new and old start index.
   changeTime: Time when store deleted data.
Range Deleted Inside Existing Data Range: existing data removed; no data added or changed.
   Index change: No.
   ChangeAnnotation: Yes (required).
   changedInterval: changedInterval from request.
   changeTime: Time when store deleted data.
All Data Deleted: existing data removed; no data added or changed.
   Index change: Start and end indexes become null.
   ChangeAnnotation: Yes (required).
   changedInterval: Range between old start and end index.
   changeTime: Time when store deleted data.
Range Replaced Covering End Index: existing data removed; replacement data added.
   Index change: End index may increase or decrease.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering: a) range between new and old end index, and b) added
   data range.
   changeTime: Time when store replaced data.
Range Replaced Covering Start Index: existing data removed; replacement data added.
   Index change: Start index may increase or decrease.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering: a) range between new and old start index, and b) added
   data range.
   changeTime: Time when store replaced data.
Range Replaced Inside Existing Data Range: existing data removed; replacement data added.
   Index change: No.
   ChangeAnnotation: Yes (required).
   changedInterval: changedInterval from request.
   changeTime: Time when store replaced range.
All Data Replaced: existing data removed; replacement data added.
   Index change: Both start and end indexes may increase or decrease.
   ChangeAnnotation: Yes (required).
   changedInterval: Range between old start and end index.
   changeTime: Time when store replaced data.
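As a worked instance of the table, the changedInterval for the Range Replaced Covering End Index scenario is the smallest range covering both the new-to-old end index range and the added data range; a minimal sketch, assuming increasing index order and an illustrative function name:

```python
# Minimal sketch computing the changedInterval for the "Range Replaced
# Covering End Index" scenario above, assuming increasing index order.
# The function name is illustrative, not part of the ETP schemas.

def changed_interval_range_replaced_end(old_end, new_end,
                                        added_start, added_end):
    # Smallest range covering (a) the range between the new and old end
    # index and (b) the added data range.
    return (min(old_end, new_end, added_start),
            max(old_end, new_end, added_end))

# Channel data ends at 5,000; a replace writes data from 4,800 to 5,400,
# moving the end index to 5,400. The annotation covers 4,800 to 5,400.
print(changed_interval_range_replaced_end(5000, 5400, 4800, 5400))
```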
11.2.2.3 Rules for Creating Change Annotations for Growing Data Objects
Figure 21 illustrates some common types of changes to growing data object parts and how a store may
or must create or update ChangeAnnotation records in response to them.
Figure 21: Example showing how change annotations (CA) work over time for growing data objects. Blue
boxes = growing data object parts, white box with red label = change annotations. The size of the white box
is intended to show that change annotations could potentially exist anywhere in the full range of data
covered by the parts, and the red boxes show the actual ranges of data covered by change annotations.
The table below describes how stores create ChangeAnnotation records in different scenarios for
growing data objects.
IMPORTANT: ChangeAnnotation records MUST be created based on the type of change that happens
and NOT solely based on the ETP message used. For example, ReplacePartsByRange in
GrowingObject (Protocol 6) may replace parts at the start, in the middle or at the end of a growing data
object’s data range. The PutParts and DeleteParts messages may cause multiple types of changes that
result in multiple ChangeAnnotation records being created.
IMPORTANT: The table explains how to create ChangeAnnotation records in response to customer
requests. Whenever new records overlap each other or existing records, the store MUST merge the
overlapping records together. In addition, the store MAY merge non-overlapping records. For rules
governing merging ChangeAnnotation records, see Section 11.2.2.4.
Part(s) Prepended: new part(s) prepended; existing parts unaffected.
   Index change: Start index decreases.
   ChangeAnnotation: Yes (required).
   changedInterval: Range between new and old start index.
   changeTime: Time when store prepended parts.
Part(s) Added Covering End Index: new part(s) added; existing parts unaffected.
   Index change: End index may increase.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering start and end index of each added part.
   changeTime: Time when store added parts.
Part(s) Added Covering Start Index: new part(s) added; existing parts unaffected.
   Index change: Start index may decrease.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering start and end index of each added part.
   changeTime: Time when store added parts.
Part(s) Added Inside Existing Data Range: new part(s) added; existing parts unaffected.
   Index change: No.
   ChangeAnnotation: Yes (required, one per part).
   changedInterval: Range of each added part.
   changeTime: Time when store added each part.
Part(s) Updated: existing part(s) updated; no parts added or deleted.
   Index change: Both start and end indexes may increase or decrease.
   ChangeAnnotation: Yes (required, one per part).
   changedInterval: Range of each updated part.
   changeTime: Time when store updated each part.
Part(s) Deleted Covering End Index: existing part(s) deleted; no parts added or updated.
   Index change: End index may decrease.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering: a) range between new and old end index, and b) start and
   end index of each deleted part.
   changeTime: Time when store deleted parts.
Part(s) Deleted Covering Start Index: existing part(s) deleted; no parts added or updated.
   Index change: Start index may increase.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering: a) range between new and old start index, and b) start and
   end index of each deleted part.
   changeTime: Time when store deleted parts.
Part(s) Deleted Inside Existing Data Range: existing part(s) deleted; no parts added or updated.
   Index change: No.
   ChangeAnnotation: Yes (required, one per part).
   changedInterval: Range of each deleted part.
   changeTime: Time when store deleted each part.
Range Deleted Covering End Index: existing parts removed; no parts added or changed.
   Index change: End index decreases.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering: a) start index of deleteInterval from request, b) range
   between new and old end index, and c) start and end index of each deleted part.
   changeTime: Time when store deleted data.
Range Deleted Covering Start Index: existing parts removed; no parts added or changed.
   Index change: Start index increases.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering: a) end index of deleteInterval from request, b) range
   between new and old start index, and c) start and end index of all deleted parts.
   changeTime: Time when store deleted data.
Range Deleted Inside Existing Data Range: existing parts removed; no parts added or changed.
   Index change: No.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering: a) deleteInterval from request, and b) start and end index
   of each deleted part.
   changeTime: Time when store deleted data.
All Parts Deleted: existing parts removed; no parts added or changed.
   Index change: Start and end indexes become null.
   ChangeAnnotation: Yes (required).
   changedInterval: Range between old start and end index.
   changeTime: Time when store deleted data.
Range Replaced Covering End Index: existing parts removed; replacement parts added.
   Index change: End index may increase or decrease.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering: a) start index of deleteInterval from request, b) range
   between new and old end index, c) start and end index of each added part, and d) start and end
   index of each deleted part.
   changeTime: Time when store replaced data.
Range Replaced Covering Start Index: existing parts removed; replacement parts added.
   Index change: Start index may increase or decrease.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering: a) end index of deleteInterval from request, b) range
   between new and old start index, c) start and end index of each added part, and d) start and end
   index of each deleted part.
   changeTime: Time when store replaced data.
Range Replaced Inside Existing Data Range: existing parts removed; replacement parts added.
   Index change: No.
   ChangeAnnotation: Yes (required).
   changedInterval: Range that was deleted.
   changeTime: Time when store replaced range.
All Parts Replaced: existing parts removed; replacement parts added.
   Index change: Both start and end indexes may increase or decrease.
   ChangeAnnotation: Yes (required).
   changedInterval: Smallest range covering: a) range between new and old start index, and b) start and
   end index of each added part.
   changeTime: Time when store replaced data.
Overlapping Annotations
   Merging: Yes (required).
   Merged interval: Smallest range covering the range of each merged annotation.
   changeTime: Most recent timestamp of merged annotations.
Adjacent Annotations
   Merging: Optional.
   Merged interval: Smallest range covering the range of each merged annotation.
   changeTime: Most recent timestamp of merged annotations.
Other Annotations
   Merging: Optional.
   Merged interval: Smallest range covering the range of each merged annotation.
   changeTime: Most recent timestamp of merged annotations.
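A minimal sketch of the required merge for overlapping annotations, modeling each ChangeAnnotation as an illustrative (start, end, changeTime) tuple rather than the ETP record type:

```python
# Hedged sketch of the merging rules above: overlapping ChangeAnnotation
# records are merged into one record whose interval is the smallest range
# covering all merged records and whose changeTime is the most recent
# merged timestamp. Annotations are modeled as (start, end, changeTime)
# tuples for illustration only.

def merge_overlapping(annotations):
    merged = []
    for start, end, change_time in sorted(annotations):
        if merged and start <= merged[-1][1]:  # overlaps (or touches) previous
            p_start, p_end, p_time = merged[-1]
            merged[-1] = (p_start, max(p_end, end), max(p_time, change_time))
        else:
            merged.append((start, end, change_time))
    return merged

# Two overlapping annotations collapse to one covering 100-300 with the
# later timestamp; the disjoint annotation at 500-600 is left alone.
print(merge_overlapping([(100, 250, 10), (200, 300, 20), (500, 600, 5)]))
```

This sketch also merges exactly adjacent annotations, which the table above marks as optional; a store could tighten the `<=` comparison to `<` to merge only truly overlapping records.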
ChangeRetentionPeriod: The minimum time period in seconds that a store retains the canonical URI of
a deleted data object and any change annotations for channels and growing data objects.
   RECOMMENDATION: This period should be as long as is feasible in an implementation. When the
   period is shorter, the risk is that additional data will need to be transmitted to recover from outages,
   leading to higher initial load on sessions.
   Data type: long. Value units: <number of seconds>. Default: 86,400. MIN: 86,400.
MaxPartSize: The maximum size in bytes of each data object part allowed in a standalone message or
a complete multipart message. Size in bytes is the total size in bytes of the uncompressed string
representation of the data object part in the format in which it is sent or received.
   Data type: long. Value units: <number of bytes>. MIN: 10,000.
Data Object Capabilities
(For definitions of each data object capability, see Section 3.3.4.)
Protocol Capabilities
MaxResponseCount: The maximum total count of responses allowed in a complete multipart message
response to a single request.
   Data type: long. Value units: <count of responses>. MIN: 10,000.
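As an illustration of how an endpoint might honor MaxPartSize, the check below measures the uncompressed string representation of a part in bytes; the function name and sample XML are assumptions, not part of the specification:

```python
# Illustrative check of the MaxPartSize capability described above: size is
# the byte length of the uncompressed string representation of the part in
# the format in which it is sent. The function name and sample XML are
# assumptions for illustration.

def part_within_max_size(part_data: str, max_part_size: int) -> bool:
    return len(part_data.encode("utf-8")) <= max_part_size

# A store advertising the minimum allowed value (10,000 bytes) accepts a
# 9,000-byte part but rejects a 12,000-byte part.
print(part_within_max_size("<trajectoryStation/>" * 450, 10_000))  # True
print(part_within_max_size("<trajectoryStation/>" * 600, 10_000))  # False
```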
GetPartsByRangeResponse (MessageTypeID = 10; MultiPart = True; SenderRole = store)
   Fields: uri: string; format: string = xml; parts: ObjectPart (array)
   Notes: Sent from a store to a customer as a response to a GetPartsByRange message.
GetPartsResponse (MessageTypeID = 6; MultiPart = True; SenderRole = store)
   Fields: uri: string; format: string = xml; parts: ObjectPart (map)
   Notes: A store sends to the customer in response to a GetParts message. It is a map of the parts of
   the growing data object that the store could return.
GetPartsMetadata (MessageTypeID = 8; MultiPart = False; SenderRole = customer)
   Fields: uris: string [1..n] (map)
   Notes: A customer sends to a store to request the metadata of one or more growing data objects and
   their respective parts. The response to this message is GetPartsMetadataResponse.
GetGrowingDataObjectsHeaderResponse (MessageTypeID = 15; MultiPart = True; SenderRole = store)
   Fields: dataObjects: DataObject [1..*] (map)
   Notes: A store sends to a customer in response to a GetGrowingDataObjectsHeader. It contains a
   map of the growing data object headers that the store could return.
PutGrowingDataObjectsHeader (SenderRole = customer)
   Notes: A customer sends to a store to add or update the header information for one or more growing
   data objects. The "success only" response to this message is the
   PutGrowingDataObjectsHeaderResponse message.
   NOTE: ETP uses "upsert" semantics, so the "update" operation is always a complete replacement of
   an existing data object. For more information, see Section 9.1.1.
   Use of this message is the only way to UPDATE the header information in a growing data object. A
   customer can use either this message or PutDataObjects in Store (Protocol 4) to add (insert) a
   growing data object and its parts in one operation; however, all updates (to the header or parts)
   MUST be done using the messages in GrowingObject (Protocol 6).
PutGrowingDataObjectsHeaderResponse (SenderRole = store)
   Notes: A store MUST send this "success only" message to a customer as confirmation of a
   successful operation in response to a PutGrowingDataObjectsHeader message. These "success
   only" response messages have been added to ETP to support more efficient operations of customer
   role software. Errors MUST be handled using the ProtocolException message, as defined elsewhere
   in the ETP Specification.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "GetParts",
"protocol": "6",
"messageType": "3",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "format", "type": "string", "default": "xml" },
{
"name": "uids",
"type": { "type": "map", "values": "string" }
}
]
}
Correlation Id Usage: MUST be set to the messageId of the GetParts message that this message is a
response to.
Multi-part: True
Sent by: store
Field Name Description Data Type Min Max
uri The URI of the "parent" growing data object. For string 1 1
example: in WITSML, a Trajectory is a growing
data object and each TrajectoryStation is a part.
This MUST be a canonical Energistics data object
URI; for more information, see Appendix:
Energistics Identifiers.
format Specifies the format (e.g., XML or JSON) of the string 1 1
data for the parts being sent in this message. This
MUST match the format in the GetParts request.
Currently, ETP MAY support "xml" and "json".
Other formats may be supported in the future, and
endpoints may agree to use custom formats.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "GetPartsResponse",
"protocol": "6",
"messageType": "6",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "format", "type": "string", "default": "xml" },
{
"name": "parts",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.ObjectPart" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "GetGrowingDataObjectsHeader",
"protocol": "6",
"messageType": "14",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "uris",
"type": { "type": "map", "values": "string" }
},
{ "name": "format", "type": "string", "default": "xml" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "GetGrowingDataObjectsHeaderResponse",
"protocol": "6",
"messageType": "15",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "dataObjects",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.DataObject" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "PutParts",
"protocol": "6",
"messageType": "5",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "format", "type": "string", "default": "xml" },
{
"name": "parts",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.ObjectPart" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "PutPartsResponse",
"protocol": "6",
"messageType": "13",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "PutGrowingDataObjectsHeader",
"protocol": "6",
"messageType": "16",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "dataObjects",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.DataObject" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "PutGrowingDataObjectsHeaderResponse",
"protocol": "6",
"messageType": "17",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "DeleteParts",
"protocol": "6",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{
"name": "uids",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "DeletePartsResponse",
"protocol": "6",
"messageType": "11",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values": "string" }
}
]
}
Multi-part: False
Sent by: customer
Field Name Description Data Type Min Max
uri MUST be the URI for the parent growing data string 1 1
object for the parts being requested. For example:
in WITSML, a Trajectory is a growing data object
and each TrajectoryStation is a part.
If both endpoints support alternate URIs for the
session, these MAY be alternate data object
URIs. Otherwise, they MUST be canonical
Energistics data object URIs. For more
information, see Appendix: Energistics
Identifiers.
indexInterval The index interval as defined in IndexInterval for IndexInterval 1 1
the list of parts you want to get:
If startIndex is specified as NULL, then
the server MUST assume a value of negative
infinity.
If endIndex is specified as NULL, then
the server MUST assume a value of positive
infinity.
The ending index for the get range MUST be
NULL or >= startIndex, or the store MUST send
error EINVALID_ARGUMENT (5).
format Specifies the format (e.g., XML or JSON) in which string 1 1
you want to receive data for the requested parts.
This MUST be a format that was negotiated when
establishing the session.
Currently, ETP MAY support "xml" and "json".
Other formats may be supported in the future, and
endpoints may agree to use custom formats.
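The NULL-handling and validation rules for indexInterval above can be sketched as follows; normalize_index_interval is an illustrative name, not an ETP-defined operation:

```python
# Sketch of the startIndex/endIndex NULL handling described above: a NULL
# startIndex means negative infinity, a NULL endIndex means positive
# infinity, and endIndex must be NULL or >= startIndex, otherwise the
# store sends EINVALID_ARGUMENT (5). The function name is illustrative.
import math

EINVALID_ARGUMENT = 5

def normalize_index_interval(start_index, end_index):
    """Return the effective (start, end) bounds, or an error code."""
    start = -math.inf if start_index is None else start_index
    end = math.inf if end_index is None else end_index
    if end < start:
        return EINVALID_ARGUMENT
    return (start, end)

print(normalize_index_interval(None, 2000))  # (-inf, 2000)
print(normalize_index_interval(3000, 2000))  # 5 (EINVALID_ARGUMENT)
```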
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "GetPartsByRange",
"protocol": "6",
"messageType": "4",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "format", "type": "string", "default": "xml" },
{ "name": "indexInterval", "type":
"Energistics.Etp.v12.Datatypes.Object.IndexInterval" },
{ "name": "includeOverlappingIntervals", "type": "boolean" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "GetPartsByRangeResponse",
"protocol": "6",
"messageType": "10",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "format", "type": "string", "default": "xml" },
{
"name": "parts",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.ObjectPart" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "GetPartsMetadata",
"protocol": "6",
"messageType": "8",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "uris",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "GetPartsMetadataResponse",
"protocol": "6",
"messageType": "9",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "metadata",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.PartsMetadataInfo" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "ReplacePartsByRange",
"protocol": "6",
"messageType": "7",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "deleteInterval", "type":
"Energistics.Etp.v12.Datatypes.Object.IndexInterval" },
{ "name": "includeOverlappingIntervals", "type": "boolean" },
{ "name": "format", "type": "string", "default": "xml" },
{
"name": "parts",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.ObjectPart" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "ReplacePartsByRangeResponse",
"protocol": "6",
"messageType": "18",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "GetChangeAnnotations",
"protocol": "6",
"messageType": "19",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "sinceChangeTime", "type": "long" },
{
"name": "uris",
"type": { "type": "map", "values": "string" }
},
{ "name": "latestOnly", "type": "boolean", "default": false }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObject",
"name": "GetChangeAnnotationsResponse",
"protocol": "6",
"messageType": "20",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "changes",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.ChangeResponseInfo" }
}
]
}
12 GrowingObjectNotification (Protocol 7)
ProtocolID: 7
Defined Roles: store, customer
GrowingObjectNotification (Protocol 7) allows store customers to subscribe to and receive notifications of
changes to the parts of growing data objects in the store, in an event-driven manner, for events (or
operations) that occur in GrowingObject (Protocol 6). That is, a customer subscribes to one or more
growing data objects using this protocol, and as parts are added, deleted, or changed (operations that
happen using messages in GrowingObject (Protocol 6)), that behavior triggers the store to send
notifications with Protocol 7. The store can also create "unsolicited" subscriptions to part notifications on a
customer's behalf.
Other ETP sub-protocols that may be used with GrowingObjectNotification (Protocol 7):
The events that trigger notifications in this protocol happen using GrowingObject (Protocol 6). For
details of operations that trigger notifications, see Chapter 11.
To receive notifications for changes to data objects, ETP has similar protocols: Store (Protocol 4)
where the event/operations occur and StoreNotification (Protocol 5), where customers can subscribe
to receive notifications about operations on data objects in a specified context (e.g., all the changes
that happen to a well). For information on operations and notifications related to data objects, see
Chapters 9 and 10.
IMPORTANT: To subscribe to changes to a growing data object for changes other than to parts, a
customer MUST subscribe to the growing data object in StoreNotification (Protocol 5). One operation in
Protocol 6 (PutGrowingDataObjectsHeader) may cause StoreNotification (Protocol 5) to send an
ObjectChange notification message (with ObjectChangeKind = insert) because
PutGrowingDataObjectsHeader is adding a new data object. Details are explained below in this chapter
and in Chapter 10.
Message Sequence. Summarizes all messages defined by this protocol, identifies main tasks that
can be done with this protocol and describes the response/request pattern for the messages needed
to perform the tasks, including usage of ETP-defined capabilities, error scenarios, and resulting ETP
error codes.
General Requirements. Identifies high-level (across ETP) and protocol-wide general behavior and
rules that must be observed (in addition to behavior specified in Message Sequence), including usage
of ETP-defined endpoint, data object and protocol capabilities, error scenarios, and resulting error
codes.
Capabilities. Lists and defines the ETP-defined parameters most relevant for this sub-protocol. ETP
defines these parameters to set necessary limits to help prevent aberrant behavior (e.g., sending
oversized messages or sending more messages than an endpoint can handle).
The main tasks in this protocol are subscribing to the appropriate growing data objects in a store to
receive the desired notifications and canceling/stopping those subscriptions. Once a subscription has
been created, a store MUST send appropriate notifications based on events in GrowingObject (Protocol
6).
12.2.1.1 To subscribe to notifications about parts in a growing data object (i.e., create a
subscription):
1. A customer MUST send a store a SubscribePartNotifications message (Section 12.3.1).
a. This message is a map of subscription requests. The details of each subscription request are
specified in a SubscriptionInfo record (Section 23.34.16), each of which uses a ContextInfo
record (see Section 23.34.15).
b. The SubscriptionInfo record is where the customer specifies the details of the part notification
subscription it wants to create. Key fields worth noting here are:
i. requestUuid, which assigns a UUID to uniquely identify each subscription; the UUID can
later be used to cancel the subscription.
ii. includeObjectData, a Boolean flag the customer uses to request that added or updated data
object parts be included with notification messages. By setting this field to true, a customer is
essentially having growing data object parts streamed to it, as new parts are added to a
store.
c. A customer MUST limit the total count of subscriptions in a session to the store's value for the
MaxSubscriptionSessionCount protocol capability.
i. The store MUST deny requests that exceed this limit by sending error
ELIMIT_EXCEEDED (12).
2. For the requests it successfully creates subscriptions for, the store MUST respond with one or more
SubscribePartNotificationsResponse map response messages (Section 12.3.2), which list the
successful subscriptions that the store has created.
a. For more information on how map response messages work, see Section 3.7.3.
b. The store MUST then send notification messages for the subscriptions identified in this response
message (according to criteria specified in the SubscribePartNotifications message) and
according to any rules stated in this specification.
c. For details about general requirements for when to send specific notifications, see Section 12.2.2.
3. For the requests it does NOT successfully create subscriptions for, the store MUST send one or more
map ProtocolException messages where values in the errors field (a map) are appropriate errors,
such as ENOT_FOUND (11) if a request URI could not be resolved.
a. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
4. NOTE: A store can also create "unsolicited" part notification subscriptions on behalf of a customer.
For more information, see Section 12.2.2 (Row 10).
5. If a customer sends a ProtocolException message in response to a PartsChanged, PartsDeleted,
or PartsReplacedByRange message, the store MAY attempt to take corrective action, but the store
MUST NOT terminate the associated subscriptions.
a. The store MUST stop sending any further notifications that were specified in the subscription that
has now been ended. It is possible that the customer COULD receive a few additional notifications
that were in process/queued before the subscription was stopped.
b. A store MUST NOT send any notifications for the subscription after sending
PartSubscriptionEnded.
3. If the store could not successfully cancel the subscription, it MUST send a ProtocolException
message with an appropriate error code (e.g., if the request UUID could not be found by the store,
send ENOT_FOUND (11)).
4. The store MAY also end a subscription without receiving a customer request. If the store does so, it
MUST notify the customer by sending a PartSubscriptionEnded message. EXAMPLE: This
happens if the subscription’s context URI refers to a growing data object that is deleted.
5. Once a customer has canceled a subscription, the store MUST NOT restart it, even if the subscription
was created by the store on behalf of the customer with UnsolicitedPartNotifications messages.
a. If the customer wants to restart the subscription, it MUST instead set up a new subscription by
sending a SubscribePartNotifications message as described in Section 12.2.1.1, using a NEW
requestUuid.
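The subscribe and cancel flows described above can be sketched in Python. This is an illustrative sketch, not a normative implementation: the PartNotificationStore class, its method names, and the in-memory maps are invented for illustration. Only the message and field names (SubscribePartNotifications, requestUuid, includeObjectData) and the error codes ELIMIT_EXCEEDED (12) and ENOT_FOUND (11) come from this specification.

```python
import uuid

ELIMIT_EXCEEDED = 12
ENOT_FOUND = 11

class PartNotificationStore:
    """Toy store tracking part-notification subscriptions (illustrative only)."""
    def __init__(self, max_subscription_session_count=100):
        self.max_count = max_subscription_session_count
        self.subscriptions = {}   # requestUuid -> SubscriptionInfo-like dict
        self.ended = set()        # requestUuids that may never be restarted

    def subscribe(self, requests):
        """Handle a SubscribePartNotifications map; return (success, errors) maps."""
        success, errors = {}, {}
        for key, info in requests.items():
            if len(self.subscriptions) >= self.max_count:
                # Deny requests exceeding MaxSubscriptionSessionCount.
                errors[key] = ELIMIT_EXCEEDED
                continue
            self.subscriptions[info["requestUuid"]] = info
            success[key] = "subscription created"
        return success, errors

    def unsubscribe(self, request_uuid):
        """Handle UnsubscribePartNotification; reply PartSubscriptionEnded or error."""
        if request_uuid not in self.subscriptions:
            return {"error": ENOT_FOUND}
        del self.subscriptions[request_uuid]
        self.ended.add(request_uuid)  # a canceled subscription MUST NOT be restarted
        return {"PartSubscriptionEnded": {"requestUuid": request_uuid,
                                          "reason": "canceled by customer"}}

# A customer builds the request map: each entry is a SubscriptionInfo-like record.
# The URI below is a hypothetical example.
req_uuid = uuid.uuid4()
request = {"0": {"requestUuid": req_uuid,
                 "context": {"uri": "eml:///witsml20.Trajectory(11111111-1111-1111-1111-111111111111)",
                             "dataObjectTypes": []},
                 "includeObjectData": True,
                 "format": "xml"}}
```

To restart a canceled subscription, a customer would send a new SubscribePartNotifications request with a NEW requestUuid, per Section 12.2.1.1.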
3. Message Sequence
See Section 12.2.1.
1. The Message Sequence section above (Section 12.2.1) describes requirements for the main
tasks listed there and also defines required behavior.
4. Plural messages (which includes maps) 1. This protocol uses plural messages. For detailed rules on handling plural
messages (including ProtocolException handling), see Section 3.7.3.
5. Customers must be able to receive and consume data object parts.
1. All customer role applications MUST implement support for receiving and consuming
notifications that include the data object parts (that is, all data for the object parts in a format
(e.g., XML or JSON) negotiated when establishing the session).
6. All behaviors defined in this table assume that a valid customer subscription for the correct
context has been created.
1. We are aiming to state these requirements and behaviors as clearly and concisely as
possible. All behaviors described below assume:
a. A valid subscription has been created as described in Section 12.2.1.1.
b. References to "parts" mean "parts within the context specified in the subscription",
which for this protocol must be one growing data object.
2. A valid parts subscription is one where all of the following conditions are
met:
a. SubscriptionInfo.context is a valid:
i. ContextInfo.uri that references a growing data object that exists
and is available in the store (i.e., the store will return it if
requested using Store (Protocol 4)).
ii. ContextInfo.dataObjectTypes is empty.
b. SubscriptionInfo.requestUuid is not already in use by another
subscription.
c. SubscriptionInfo.format is a format negotiated when establishing
the session.
3. Consistent with the previous paragraph (2), for the store to create an
unsolicited parts subscription, all of the conditions listed there MUST be met.
4. REMINDER: To receive notification of other changes (other than adding, updating, or
deleting parts) to a growing data object, a customer MUST subscribe to changes to the
growing data object in StoreNotification (Protocol 5) (Chapter 10).
7. No Session Survivability 1. If the ETP session is closed or the connection drops, then the store MUST
cancel notification subscriptions for the dropped customer endpoint.
2. On reconnect, the customer MUST re-create subscriptions (see Section
12.2.1.1).
3. For information on resuming operations after a disconnect, see Appendix:
Data Replication and Outage Recovery Workflows.
8. Order of Notifications 1. For a given data object, the store MUST send notifications in the same
order that operations are performed in the store.
a. The intent of this rule is that objects are always "correct" (schema
compliant), and never left in an inconsistent state. The rule applies
primarily to contained data objects and growing data objects.
b. In general, global ordering of notifications is NOT required. However,
there are some situations where the order of notifications affecting
multiple objects is important and must be preserved.
9. Objects covered by more than one subscription
A customer can create multiple subscriptions on a store. It is possible that the same data
object is included in more than one subscription.
1. In this case, the store MUST send one notification per relevant
subscription.
10. Unsolicited subscriptions 1. The store may automatically configure unsolicited part subscriptions to
include the data object parts (i.e., the includeObjectData field on the unsolicited
SubscriptionInfo record may be true). If the customer application does
not want the data, it can do one of the following:
a. Unsubscribe and stop receiving the notifications.
b. Simply ignore the data payloads and get the data manually.
c. Unsubscribe from the unsolicited notification and then explicitly
create the subscription (see Section 12.2.1.1) and set
includeObjectData to false.
11. Sending part notifications: general requirements
1. REMINDER: Row 6
2. Notification messages are those whose name begins with the word "Parts". Each
message's definition/description provides general information for when the store must
send each message. EXAMPLE: When a store deletes parts of a growing data object, the
store MUST send a PartsDeleted message to all subscribed customers.
a. Other rows in this table state additional requirements for specific
operations and requirements for notifications.
3. A store MUST send all appropriate notifications, including PartsChanged,
PartsDeleted, and PartsReplacedByRange, even if the change was not
through an ETP store operation.
4. A store MUST send notifications within its value for
ChangePropagationPeriod endpoint capability, which MUST be less than
or equal to the maximum value stated in this specification (see Section
3.3.2.2).
12. Putting (inserting/adding) and updating parts: Additional notification requirements
1. REMINDER: Row 6 and Row 11.
2. When a store completes a PutParts operation (in GrowingObject
(Protocol 6)), it MUST send a PartsChanged notification message.
a. Because ETP uses upsert semantics, this message includes
information about the type of change, which is specified by the
ObjectChangeKind enumeration.
i. If the store inserted (added) a new part, then it MUST set
ObjectChangeKind to "insert".
ii. If the store updated (replaced) an existing part, then it MUST set
ObjectChangeKind to "update".
iii. If the change was caused by an ETP store operation, the store
MUST differentiate between insert and update.
iv. If the change was NOT caused by an ETP store operation and the
store cannot determine if the operation was an insert or update, it
MUST set ObjectChangeKind to "insert". NOTE: "insert" was
chosen because it is the "pessimistic" choice. That is, customers
using the replication workflow will assume the affected parts have
been completely replaced. While this may cause customers to
query more data than is necessary when the operation is actually
an update, using "insert" and the pessimistic assumptions that go
with it are necessary in some edge cases to achieve eventual
consistency between data stores.
b. If a single PutParts operation resulted in parts being both inserted
and updated, the store MUST send 2 notifications: one for inserted
parts and one for updated parts.
c. The MaxPartSize endpoint capability MUST be observed. See Row
15.
14. For notifications that exceed an endpoint’s WebSocket message size, send smaller
notifications
1. Some notifications in this protocol allow or require data object parts to be sent with the
message.
a. If including all required parts in the notification message causes it to
exceed either endpoint’s value for
MaxWebSocketMessagePayloadSize endpoint capability, the store
MUST first attempt to send the parts without data. That is, the data
field in each ObjectPart record must be an empty array.
b. If the approach described in Paragraph a. still exceeds the
MaxWebSocketMessagePayloadSize, the store MUST break the
notification into several, smaller messages (e.g., each with half of the
parts) and send those as separate notifications.
15. MaxPartSize capability 1. If on a subscription request (SubscriptionInfo record) the
includeObjectData field was true, the store MUST include the parts data in
relevant notification messages.
a. The store MUST limit the size of data object parts in the notifications
to the lesser of the store's and the customer's value for MaxPartSize
endpoint capability.
i. If any part would exceed this limit, the store MUST send the part
without its data. That is, the data field in the ObjectPart record
must be an empty array.
b. If the part size exceeds this limit, the customer MAY notify the store
by sending error EMAXSIZE_EXCEEDED (17).
16. Deleting parts: additional requirements 1. REMINDER: Row 6 and Row 11.
2. When one or more parts are deleted, the store MUST send a
PartsDeleted message.
a. A delete is an atomic operation; the store MUST perform the delete
operation and then send notifications.
17. Ending subscriptions 1. REMINDER: Row 6 and Row 11
2. A store MUST end a customer’s subscription to part notifications when:
a. The customer cancels the subscription by sending an
UnsubscribePartNotifications message.
b. The parent growing data object for the subscription (i.e., the data
object identified by the URI in the context field of the subscription’s
SubscriptionInfo record) is deleted.
c. The customer loses access to the parent growing data object for the
subscription.
3. When ending a subscription:
a. The store MAY discard any queued part notifications for the
subscription.
b. The store MUST send a PartSubscriptionEnded message either as
a response to a customer UnsubscribePartNotifications request or
as a notification.
c. The store MUST include a human readable reason why the
subscription was ended in the PartSubscriptionEnded message.
4. After sending a PartSubscriptionEnded message, the store MUST NOT
send any further part notifications for the subscription.
5. After a subscription has ended, the store MUST NOT restart it, even if the
subscription was created by the store on behalf of the customer with the
UnsolicitedPartNotifications message.
18. Index Metadata 1. A growing data object’s index metadata MUST be consistent:
a. All parts MUST have the same index unit and the same vertical
datum.
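The size-fallback rules in Rows 14 and 15 above can be sketched as follows. This is an illustrative Python sketch under stated assumptions: messages are simplified dicts, and the JSON-serialized length stands in for the real Avro-encoded size; the function name plan_notifications is invented. The fallback order, though, follows the rows above: first try the full notification, then send parts without data (empty data), then break the notification into several smaller messages (e.g., each with half of the parts).

```python
import json

def plan_notifications(message, max_payload_size):
    """Split a PartsChanged-style notification so that every message sent
    stays within MaxWebSocketMessagePayloadSize (sketch of Row 14)."""
    def size(msg):
        # Stand-in for the encoded message size; real endpoints would
        # measure the Avro-encoded WebSocket payload.
        return len(json.dumps(msg))

    if size(message) <= max_payload_size:
        return [message]

    # First fallback: send the parts without data (empty data in each part).
    stripped = dict(message)
    stripped["parts"] = [dict(p, data="") for p in message["parts"]]
    if size(stripped) <= max_payload_size:
        return [stripped]

    # Second fallback: break the notification into smaller messages,
    # e.g., each with half of the parts, recursively.
    parts = message["parts"]
    if len(parts) <= 1:
        raise ValueError("single part exceeds limit even without data")
    mid = len(parts) // 2
    left = dict(message, parts=parts[:mid])
    right = dict(message, parts=parts[mid:])
    return (plan_notifications(left, max_payload_size)
            + plan_notifications(right, max_payload_size))
```

Each resulting message carries the same uri and requestUuid, so a customer can reassemble the change from the separate notifications.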
MaxPartSize: The maximum size in bytes of each data object part allowed in a standalone
message or a complete multipart message. Size in bytes is the total size in bytes of the
uncompressed string representation of the data object part in the format in which it is sent or
received.
Data type: long. Unit: byte (<number of bytes>). Min: 10,000
Data Object Capabilities
(For definitions of each data object capability, see Section 3.3.4.)
Protocol Capabilities
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObjectNotification",
"name": "SubscribePartNotifications",
"protocol": "7",
"messageType": "7",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "request",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.SubscriptionInfo" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObjectNotification",
"name": "SubscribePartNotificationsResponse",
"protocol": "7",
"messageType": "10",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObjectNotification",
"name": "PartsChanged",
"protocol": "7",
"messageType": "2",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{ "name": "changeKind", "type":
"Energistics.Etp.v12.Datatypes.Object.ObjectChangeKind" },
{ "name": "changeTime", "type": "long" },
{ "name": "format", "type": "string", "default": "" },
{
"name": "parts",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.ObjectPart" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObjectNotification",
"name": "PartsDeleted",
"protocol": "7",
"messageType": "3",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{ "name": "changeTime", "type": "long" },
{
"name": "uids",
"type": { "type": "array", "items": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObjectNotification",
"name": "UnsubscribePartNotification",
"protocol": "7",
"messageType": "4",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
Correlation Id Usage: For the first message, MUST be set to 0. If there are multiple messages in this
multipart request, the correlationId of all successive messages that comprise the request MUST be set to
the messageId of the first message of the multipart request.
Multi-part: True
Sent by: store
Field Name Description Data Type Min Max
uri The URI of the "parent" growing data object from string 1 1
which the parts were deleted. For example: in
WITSML, a Trajectory is a growing data object
and each TrajectoryStation is a part.
This MUST be a canonical Energistics data object
URI; for more information, see Appendix:
Energistics Identifiers.
requestUuid Each subscription was assigned a UUID by the Uuid 1 1
customer requesting it, when the subscription was
created (in the SubscriptionInfo record) or was
assigned in an UnsolicitedPartNotifications
message.
Must be of type Uuid (Section 23.6).
changeTime The time the data-change event occurred. This is long 1 1
not the time the notification is sent, but the time
that the change occurred in the store database. This is
the value from storeLastWrite field on the “parent”
growing data object (for more information see
Resource) and the ChangeAnnotation record
created for the change.
It must be a UTC dateTime value, serialized as a
long, using the Avro logical type timestamp-micros
(microseconds from the Unix Epoch, 1 January
1970 00:00:00.000000 UTC).
deletedInterval The index interval for the deleted range as IndexInterval 1 1
specified in IndexInterval. This is NOT the index
range of the parts that replaced the deleted parts.
includeOverlappingIntervals Specifies if the interval is inclusive or exclusive for boolean 1 1
objects that span the interval.
For more information, see the Section 11.2.2.1,
which explains overlapping interval behavior.
If true, then any object with any part of it
crossing the specified range is affected.
If false, then only objects that fall completely
within the range are affected.
format Specifies the format (e.g., XML or JSON) of the string 0 1
data for the replacement parts being sent in this
message. This MUST match the format in the
SubscriptionInfo record for the subscription.
Currently, ETP supports "xml" and "json".
Other formats may be supported in the future, and
endpoints may agree to use custom formats.
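The field table above maps naturally onto a message body. The following Python sketch builds an illustrative PartsReplacedByRange body as a plain dict (not the exact Avro union encoding); the URI, UUID, and interval values are invented, and the to_timestamp_micros helper is an assumption showing how changeTime is serialized as the Avro logical type timestamp-micros.

```python
from datetime import datetime, timezone

def to_timestamp_micros(dt):
    """Serialize a UTC dateTime as the Avro logical type timestamp-micros
    (microseconds since the Unix epoch), as required for changeTime."""
    return int(dt.timestamp() * 1_000_000)

# Sketch of a PartsReplacedByRange notification body as a plain dict.
# All identifier and interval values here are invented for illustration.
parts_replaced = {
    "uri": "eml:///witsml20.Trajectory(11111111-1111-1111-1111-111111111111)",
    "requestUuid": "22222222-2222-2222-2222-222222222222",
    "changeTime": to_timestamp_micros(
        datetime(2021, 9, 27, 12, 0, 0, tzinfo=timezone.utc)),
    "deletedInterval": {            # index range of the DELETED parts,
        "startIndex": 1000.0,       # NOT of the replacement parts
        "endIndex": 2000.0,
        "uom": "m",
        "depthDatum": "KB",
    },
    "includeOverlappingIntervals": True,
    "format": "xml",                # must match the subscription's format
    "parts": [],                    # replacement ObjectPart records go here
}
```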
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObjectNotification",
"name": "PartsReplacedByRange",
"protocol": "7",
"messageType": "6",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{ "name": "changeTime", "type": "long" },
{ "name": "deletedInterval", "type":
"Energistics.Etp.v12.Datatypes.Object.IndexInterval" },
{ "name": "includeOverlappingIntervals", "type": "boolean" },
{ "name": "format", "type": "string", "default": "" },
{
"name": "parts",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.ObjectPart" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObjectNotification",
"name": "PartSubscriptionEnded",
"protocol": "7",
"messageType": "8",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "reason", "type": "string" },
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObjectNotification",
"name": "UnsolicitedPartNotifications",
"protocol": "7",
"messageType": "9",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "subscriptions",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.SubscriptionInfo" }
}
]
}
13 DataArray (Protocol 9)
ProtocolID: 9
Defined Roles: store, customer
Use DataArray (Protocol 9) to transfer large, binary arrays of homogeneous data values. With Energistics
domain standards, this data is often stored as an HDF5 file. However, this protocol can be used for any
array data, even if HDF files are not required or used.
Energistics domain standards have typically stored this type of data using HDF5. For example, RESQML
uses HDF5 to store seismic, interpretation, and modeling data; PRODML-DAS uses HDF5 to store
distributed acoustic sensing data. As such, the arrays that are transferred with the DataArray protocol are
logical versions of the HDF5 data sets.
DataArray (Protocol 9):
Supports arrays of values of different types (bytes, integers, floats, doubles, etc.). In Energistics data
models, this array data is typically associated with a data object (that is, it is the binary array data for
the data object).
Imposes no limits on the number of dimensions of an array. However, a store may limit the size of a
message and therefore the size of the arrays it can handle in a single message. For this reason, this
protocol provides functionality to partition arrays into manageably sized sub-arrays for data transfer.
Was designed to support transfer of the data typically stored in HDF5 files but also can be used to
transfer this type of data when HDF5 files are not required or used.
In most cases, the HDF5 files are stored outside the EPC file. To accurately maintain all relationships, the
package requires use of an external reference to the HDF5 file, which is called an
EpcExternalPartReference.
DataArray (Protocol 9) has been designed to get and put data arrays in this context, and also to handle
this type of array data when no files are required, for example, from endpoint to endpoint. For more
information about these technologies and their use in the Energistics CTA, see Energistics Online:
http://docs.energistics.org/#CTA/CTA_TOPICS/CTA-000-018-0-C-sv2100.html and
http://docs.energistics.org/#ETP/ETP_TOPICS/ETP-000-000-titlepage.html.
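The partitioning of large arrays into sub-arrays mentioned above can be sketched as follows. This is an illustrative Python sketch, not part of the protocol: the function name plan_subarrays and the chunk-along-the-first-dimension strategy are assumptions. It produces (starts, counts) pairs of the kind carried by the GetDataSubarrays and PutDataSubarrays messages, keeping each block under a byte budget.

```python
def plan_subarrays(dimensions, element_size, max_bytes):
    """Partition an N-dimensional array into sub-array blocks along the
    first dimension so that each block's byte size (product of its counts
    multiplied by the element size) stays within max_bytes.
    Returns a list of (starts, counts) pairs."""
    # Bytes in one "slice" across all trailing dimensions.
    row_elems = 1
    for d in dimensions[1:]:
        row_elems *= d
    row_bytes = row_elems * element_size
    if row_bytes > max_bytes:
        raise ValueError("a single slice already exceeds max_bytes; "
                         "chunk along more dimensions")
    rows_per_block = max(1, max_bytes // row_bytes)
    blocks = []
    start = 0
    while start < dimensions[0]:
        count = min(rows_per_block, dimensions[0] - start)
        starts = [start] + [0] * (len(dimensions) - 1)
        counts = [count] + list(dimensions[1:])
        blocks.append((starts, counts))
        start += count
    return blocks
```

For example, a 10 x 1000 array of 8-byte doubles under a 32,000-byte budget yields blocks of 4, 4, and 2 rows.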
13.2.1.3 Transactions
Typically, arrays are not transferred in isolation because they correspond to the binary data associated
with a data object. All array-put transfers corresponding to the same data object must be successful for the
data object to be complete, so it is recommended to include transfer of the data object and its arrays in a
transaction, using Transaction (Protocol 18); see Chapter 18.
2. Capabilities-related behavior 1. Relevant endpoint, data object, and/or protocol capabilities MUST be
specified when the ETP session is established (see Chapter 5) and
MUST be used/honored as defined in the relevant ETP sub-protocol.
2. For an explanation of endpoint, data object, and protocol capabilities, see
Section 3.3.
a. For the list of global capabilities and related behavior, see Section
3.3.2.
3. Section 13.2.3 identifies the capabilities most relevant to this ETP sub-
protocol. Additional details for how to use the protocol capabilities are
included below in this table and in Section 13.2.1 DataArray: Message
Sequence.
3. Message Sequence
See Section 13.2.1.
1. The Message Sequence section above (Section 13.2.1) describes requirements for the main
tasks listed there and also defines required behavior.
4. Plural Messages (which includes maps) 1. This protocol uses plural messages. For detailed rules on handling plural
messages (including ProtocolException handling), see Section 3.7.3.
5. Endpoints MUST honor the MaxDataArraySize protocol capability (for definition, see Section
13.2.3).
1. For any get or put operations in this protocol where array data is being sent, each endpoint
MUST NOT exceed the other's value for the MaxDataArraySize protocol capability.
2. If an endpoint's value is exceeded, it MUST deny the request by sending error
ELIMIT_EXCEEDED (12).
6. Transaction (optional) Because data arrays can be large and complex, operations associated with
them can also be large and complex.
Endpoints can optionally define a transaction using Transaction (Protocol 18).
For more information, see Chapter 18.
7. Store behavior: updates to a data array's storeCreated and storeLastWrite fields
1. Similar to a Resource, data arrays have metadata fields named storeCreated and
storeLastWrite, which are maintained by an ETP store, primarily to support replication
workflows. For more information about these fields, see:
a. Section 23.32.2.
b. Section 3.12.5.1.
2. For operations in this protocol that ADD a new data array (e.g.
PutDataArrays, PutUninitializedDataArrays), the store MUST do both
of these:
a. Set the storeCreated field to the time that the array was added in
the store.
b. Set the storeLastWrite field to the same time as storeCreated.
3. For operations in this protocol that UPDATE a data array (e.g.
PutDataArrays, because ETP uses upsert semantics the same
message is used to add or update the array), the store MUST update the
storeLastWrite field with the time that the update happened in the store.
8. Updates to data in a data array require an update to the data object(s) that reference that array.
1. When a data value in a data array is updated, each data object that references the array
MUST also be updated.
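The storeCreated/storeLastWrite behavior in Row 7 can be sketched as follows. This is an illustrative Python sketch, not a normative implementation: the DataArrayStore class, its injectable clock, and the in-memory map are invented for illustration; only the field names and the add/update rules come from the row above.

```python
from datetime import datetime, timezone

class DataArrayStore:
    """Toy store illustrating Row 7: storeCreated/storeLastWrite maintenance
    under PutDataArrays upsert semantics (illustrative, not normative)."""
    def __init__(self, clock=lambda: datetime.now(timezone.utc)):
        self.arrays = {}   # array identifier -> metadata + data
        self.clock = clock # injectable for testing

    def put_data_array(self, key, data):
        now = self.clock()
        if key not in self.arrays:
            # ADD: set storeCreated and storeLastWrite to the same time.
            self.arrays[key] = {"storeCreated": now,
                                "storeLastWrite": now,
                                "data": data}
        else:
            # UPDATE (upsert: the same message adds or updates an array):
            # only storeLastWrite moves forward.
            self.arrays[key]["storeLastWrite"] = now
            self.arrays[key]["data"] = data
        return self.arrays[key]
```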
13.2.2.1 Allowed Mappings of Logical Array Types and Transport Array Types
In the table below, for each enumeration (specified in AnyLogicalArrayType) for the logicalArrayType field
(left column), the right column specifies the allowed enumerations (specified in AnyArrayType) for the
transportArrayType field.
The types in the left column are all the enumerations listed in the AnyLogicalArrayType record (see
Section 23.1). These types have been specified based on signed/unsigned (U), bit size of the preferred
sub-array dimension (8, 16, 32, 64 bits), and endianness (LE = little, BE = big).
The following usage rules apply:
1. Implementers decide which encoding is best for their data and particular implementation.
2. The "bytes" type is a fixed-size encoding: an 8-bit array uses 1 byte per element, 16-bit uses
2 bytes, 32-bit uses 4 bytes, and 64-bit uses 8 bytes.
3. Types "arrayOfLong" and "arrayOfInt" follow Avro encoding, which is variable length.
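Rule 2 above can be illustrated with a short sketch. This is an assumption-labeled example, not spec text: the function names are invented, and Python's struct module stands in for whatever fixed-size little-endian encoding an implementation uses when carrying, say, a 32-bit LE logical array as "bytes". (The variable-length Avro encodings of rule 3 are not shown.)

```python
import struct

# Rule 2: with a "bytes" transport type, values use a fixed-size encoding,
# so the preferred bit size of the logical type determines bytes per element.
FIXED_BYTES_PER_ELEMENT = {8: 1, 16: 2, 32: 4, 64: 8}

def pack_int32_le(values):
    """Pack 32-bit signed integers as little-endian fixed-size bytes,
    e.g., for a 32-bit LE logical array type carried as "bytes"."""
    return struct.pack("<%di" % len(values), *values)

def unpack_int32_le(buf):
    """Inverse of pack_int32_le: recover the 32-bit integers."""
    return list(struct.unpack("<%di" % (len(buf) // 4), buf))
```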
Protocol Capabilities
MaxDataArraySize: The maximum size in bytes of a data array allowed in a store. Size in bytes is
the product of all array dimensions multiplied by the size in bytes of a single array element.
Data type: long. Unit: byte (<number of bytes>). Min: 100,000
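The size computation in this capability definition, and the denial rule in Row 5 of the general requirements, can be expressed directly. A minimal sketch (function names are illustrative assumptions; the error code ELIMIT_EXCEEDED (12) is from the specification):

```python
ELIMIT_EXCEEDED = 12

def data_array_size_bytes(dimensions, element_size_bytes):
    """MaxDataArraySize measures a data array as the product of all array
    dimensions multiplied by the size in bytes of a single array element."""
    size = element_size_bytes
    for d in dimensions:
        size *= d
    return size

def check_against_capability(dimensions, element_size_bytes, max_data_array_size):
    """Return None if the array fits within the endpoint's MaxDataArraySize,
    else the ETP error code the endpoint would send to deny the request."""
    if data_array_size_bytes(dimensions, element_size_bytes) > max_data_array_size:
        return ELIMIT_EXCEEDED
    return None
```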
(UML class diagram: the customer-sent request messages of DataArray (Protocol 9), with their fields
and message type IDs, corresponding to the Avro sources below. The notes from the diagram are
reproduced here.)
PutDataArrays: A customer sends to a store as a request to put one or more data arrays in the store.
The "success only" response to this message is the PutDataArraysResponse message.
PutUninitializedDataArrays: A customer sends to a store as a request to establish the dimensions of
one or more arrays in a store, before it begins sending the sub-arrays of data (using the
PutDataSubarrays message) to populate these arrays. The "success only" response to this message
is the PutUninitializedDataArraysResponse.
PutDataSubarrays: A customer sends to a store as requests to put portions of data arrays
(sub-arrays), when the entire array is too large for the WebSocket message of an implementation.
The "success only" response to this message is a PutDataSubarraysResponse message.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "GetDataArrays",
"protocol": "9",
"messageType": "2",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "dataArrays",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.DataArrayIdentifier" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "GetDataArraysResponse",
"protocol": "9",
"messageType": "1",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "dataArrays",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.DataArray" }, "default": {}
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "GetDataSubarrays",
"protocol": "9",
"messageType": "3",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "dataSubarrays",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.GetDataSubarraysType" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "GetDataSubarraysResponse",
"protocol": "9",
"messageType": "8",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "dataSubarrays",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.DataArray" }, "default": {}
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "PutDataArrays",
"protocol": "9",
"messageType": "4",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "dataArrays",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.PutDataArraysType" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "PutDataArraysResponse",
"protocol": "9",
"messageType": "10",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "PutDataSubarrays",
"protocol": "9",
"messageType": "5",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "dataSubarrays",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.PutDataSubarraysType" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "PutDataSubarraysResponse",
"protocol": "9",
"messageType": "11",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "PutUninitializedDataArrays",
"protocol": "9",
"messageType": "9",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "dataArrays",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.PutUninitializedDataArrayType" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "PutUninitializedDataArraysResponse",
"protocol": "9",
"messageType": "12",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values": "string" }
}
]
}
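The PutUninitializedDataArrays-then-PutDataSubarrays pattern above requires the sender to choose sub-array extents. This sketch computes slab offsets along the first (slowest-varying) dimension; the slicing strategy and helper name are the author's illustration — ETP defines only the messages, not how a sender slices:

```python
def plan_subarray_slabs(dimensions, element_size, max_payload_bytes):
    """Split an N-D array into slabs along the first dimension so each
    slab's raw size fits within max_payload_bytes.

    Returns a list of (starts, counts) pairs suitable for the starts and
    counts fields of each PutDataSubarrays entry.
    """
    row_bytes = element_size
    for d in dimensions[1:]:
        row_bytes *= d
    rows_per_slab = max(1, max_payload_bytes // row_bytes)
    slabs = []
    start = 0
    while start < dimensions[0]:
        count = min(rows_per_slab, dimensions[0] - start)
        starts = [start] + [0] * (len(dimensions) - 1)
        counts = [count] + list(dimensions[1:])
        slabs.append((starts, counts))
        start += count
    return slabs

# A 1000 x 200 array of 8-byte doubles with a ~400 KB budget per message:
slabs = plan_subarray_slabs([1000, 200], 8, 400_000)
```

The customer would first send PutUninitializedDataArrays for the full dimensions, then one PutDataSubarrays per slab.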
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "GetDataArrayMetadata",
"protocol": "9",
"messageType": "6",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "dataArrays",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.DataArrayIdentifier" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DataArray",
"name": "GetDataArrayMetadataResponse",
"protocol": "9",
"messageType": "7",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "arrayMetadata",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.DataArrayMetadata" }, "default": {}
}
]
}
OData resources:
OASIS URL Conventions: http://docs.oasis-open.org/odata/odata/v4.0/odata-v4.0-part2-url-conventions.html
OData.net libraries: http://www.odata.org/libraries/
LinqToQuerystring features: http://linqtoquerystring.net/features.html
14.1.1 Filtering
The $filter clause allows you to create an expression filtering out objects based on the elements of the
object. For example, the following statement:
eml:///witsml20.Channel?$filter=ChannelClass/Title eq 'Gamma'&$top=300
returns Channel results where the ChannelClass element has a Title with the value 'Gamma'.
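A minimal sketch of assembling such a query URI — the helper is illustrative, and whether a store expects the query string to be percent-encoded is an implementation detail:

```python
def channel_query_uri(data_object_type, filter_expr, top=None):
    """Assemble a canonical ETP query URI with OData query options,
    reproducing the Gamma-channel example from the text."""
    options = [f"$filter={filter_expr}"]
    if top is not None:
        options.append(f"$top={top}")
    return f"eml:///{data_object_type}?" + "&".join(options)

uri = channel_query_uri("witsml20.Channel",
                        "ChannelClass/Title eq 'Gamma'", top=300)
```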
14.1.2 Pagination
Pagination is used to limit the number of elements returned from a query, to prevent exceeding limits
specified by the MaxResponseCount protocol capability; exceeding those limits results in error
ERESPONSECOUNT_EXCEEDED (30). (EXAMPLE: If the customer's MaxResponseCount
protocol capability is 1,000 and the store has 3,000 objects of interest, the customer can use pagination
options to limit returned values to groups of 1,000.)
These pagination query options are supported:
$top limits the number of items returned in the result, for example:
eml:///witsml20.Channel?$top=300
$skip indicates the starting index for the subset.
4. For pagination to work, the store MUST implement a deterministic sort order, and it MUST provide that
sort order to the customer (for example, in the serverSortOrder field of the relevant response message).
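These options can be driven from a simple customer-side loop; in this sketch, `fetch_page` stands in for issuing the query and gathering the multipart response, and is not an ETP-defined function:

```python
def fetch_all(fetch_page, base_uri, page_size):
    """Page through query results with $top/$skip until a short page
    signals the end. Relies on the store using a deterministic sort
    order, as the pagination rules require."""
    results, skip = [], 0
    while True:
        page = fetch_page(f"{base_uri}?$top={page_size}&$skip={skip}")
        results.extend(page)
        if len(page) < page_size:
            return results
        skip += page_size

# Simulate a store holding 2,500 objects, paged 1,000 at a time:
store = list(range(2500))
out = fetch_all(lambda uri: store[int(uri.split("$skip=")[1]):][:1000],
                "eml:///witsml20.Channel", 1000)
```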
2. A "FindXResponse" message, where X is the same "thing" as in the FindX message (e.g., for
DiscoveryQuery (Protocol 13) it is FindResourcesResponse, for StoreQuery (Protocol 14) it is
FindDataObjectsResponse, and for GrowingObjectQuery (Protocol 16) it is FindPartsResponse).
For this ETP Query Sub-Protocol… The store MUST respond with this message…
DiscoveryQuery (Protocol 13) A FindResourcesResponse with an empty array
StoreQuery (Protocol 14) A FindDataObjectsResponse with an empty array
GrowingObjectQuery (Protocol 16) A FindPartsResponse with an empty array
14.3.2 Usage Rules for Query Syntax with ETP Query Sub-Protocols
ETP Implementations MUST observe these rules:
1. OData queries are case sensitive; for case-insensitive matching, you MUST use the tolower and
toupper functions.
2. If a query includes query options that are NOT supported by the store, the store MUST send error
ENOTSUPPORTED (7).
3. Query results are always bound by a user's permissions for access to any endpoint.
4. This ETP Specification provides syntax and transport information only. For specific details (e.g.,
which fields are queryable, sort order, etc.) on how to query objects for a specific data model (e.g.,
WITSML, RESQML or PRODML), see that ML's ETP implementation specification.
5. When a canonical Energistics data object query URI includes a query string, the query string uses
OData query syntax, for example:
eml:///witsml20.Channel?$filter=ChannelClass/Title eq 'Gamma'
a. You can use an instance of a data object as a filter in a query. EXAMPLE: In Well ABC give me
all the channels with activeStatus = true.
b. The mapping from ETP to OData entity sets is that an entity set is all data objects of a particular
type (e.g., well). The types are defined by the underlying ML.
c. OData property paths used in filters are constructed by concatenating the (nested) element
names together for the (sub)element used as a filter. In the filter in the above example,
ChannelClass refers to the ChannelClass element on a Channel, which is a Data Object
Reference, and Title refers to the Title element within the Data Object Reference.
d. When an element is ComplexTypeWithSimpleContent (i.e., it is a value with associated
attributes), then the property path for the value is ElementName/_.
EXAMPLE: To filter a list of WITSML 2.0 Channels by the value of the NominalHoleSize element,
a filter could look like this: NominalHoleSize/_ eq 8.5.
6. Filters are evaluated for the current URI data object type only.
a. Data object references are NOT resolved or expanded.
eml:///witsml20.Channel?$filter=LoggingCompanyName eq 'SLB'
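The property-path rules above (concatenated element names, with /_ appended for a complex type with simple content) amount to simple string building, as this illustrative helper shows — the function is not part of ETP:

```python
def property_path(*elements, simple_content=False):
    """Build an OData property path by concatenating nested element
    names; append '/_' when the final element is a
    ComplexTypeWithSimpleContent (a value with attributes)."""
    path = "/".join(elements)
    return path + "/_" if simple_content else path

# The two examples from the text:
p1 = property_path("ChannelClass", "Title")               # nested DOR element
p2 = property_path("NominalHoleSize", simple_content=True)  # value with uom
```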
For the query eml:///witsml20.Channel?$filter=ChannelClass/Title eq 'Gamma':
DiscoveryQuery returns a multipart response message, each record of which contains the ID (a URI, a UUID, etc.) of a matching channel.
StoreQuery returns a multipart response message, each record of which contains the channel header of a matching channel (i.e., its data).
Example queries:
Find channel sets where channel class is Gamma and hole size is 8.5 inches.
NOTE: In NominalHoleSize/_ the /_ convention is the Energistics proposed JSON convention for a
complex type with simple content.
Find channel sets where channel class is Gamma and run number is 5.
Find channel sets where channel class is Surface and run number is 5.
eml:///witsml20.ChannelSet?$filter=Channel/any(c:c/Mnemonic eq 'BDEP')
Find all rig utilization objects where rig name is Songa Endurance.
Find all logs having an extension name value where name is TestValue:
eml:///witsml20.Log?$filter=ExtensionNameValue/any(e:e/Name eq 'TestValue')
Find all data objects where a particular data assurance rule has not been met.
4. Rules specified in Chapter 14, Overview of Query Behavior:
1. The general rules and requirements specified in Chapter 14 MUST be observed and used with the
additional details specific to DiscoveryQuery (Protocol 13) (which are specified in the next row).
5. Rules specific to DiscoveryQuery (Protocol 13):
1. In the FindResourcesResponse message, the store MUST return resources ONLY for the data
object type specified in the request (i.e., queries in this protocol are limited to one type of data object only).
EXAMPLE: The following URI returns only channels that meet the filter
criteria.
eml:///witsml20.Channel?$filter=ChannelClass/Title eq 'Gamma'
a. Discovery (Protocol 3) returns multiple types of data objects.
Because of how the OData syntax works, the customer MUST
specify only one type of data object in the FindResources request
message.
2. Query protocols specify a pagination option, which makes it possible to
get results in groups and prevent exceeding MaxResponseCount limits.
For more information, see Section 14.1.2.
3. This protocol DOES NOT support querying for parts in a growing data
object; to do that you MUST use GrowingObjectQuery (Protocol 16)
(Chapter 17).
4. When a store navigates the data object graph to
return Resource records in response to a FindResources request, it
MUST respect the navigation direction and the navigable edge types
specified in the scope and context fields in the request.
a. When the URI in the context includes a query URI with a specific
data object followed by a data object type, the navigation direction
and the navigable edge types also apply to the relationships
between the data object and the data object type specified in the
URI.
EXAMPLE: If the query URI is eml:///witsml20.Well(34aa7e1d-
adb6-486b-9100-65412100d24e)/witsml21.Wellbore and the
navigation direction is targets and the navigable edge types is
primary, then the query should navigate all Wellbores that are
sources of primary relationships with witsml20.Well(34aa7e1d-
adb6-486b-9100-65412100d24e) as a target. In this example, the
query should NOT navigate secondary relationships or primary
relationships where the Wellbore is the target of a relationship
where the well is the source.
NOTE: Many endpoint capabilities are "universal", used in all or most of the ETP protocols.
For more information, see Section 3.3.2.
Behavior associated with other endpoint capabilities are defined in relevant chapters.
EXAMPLE: The capabilities defined for limiting ETP sessions between 2 endpoints are discussed in
Section 4.3, How a Client Establishes a WebSocket Connection to an ETP Server.
Data Object Capabilities (See Section 3.3.4)
MaxResponseCount: The maximum total count of responses allowed in a complete multipart message
response to a single request. Type: long. Value units: <count of responses>. MIN: 10,000.
class DiscoveryQuery
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DiscoveryQuery",
"name": "FindResources",
"protocol": "13",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "context", "type": "Energistics.Etp.v12.Datatypes.Object.ContextInfo" },
{ "name": "scope", "type": "Energistics.Etp.v12.Datatypes.Object.ContextScopeKind" },
{ "name": "storeLastWriteFilter", "type": ["null", "long"] },
{ "name": "activeStatusFilter", "type": ["null",
"Energistics.Etp.v12.Datatypes.Object.ActiveStatusKind"] }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.DiscoveryQuery",
"name": "FindResourcesResponse",
"protocol": "13",
"messageType": "2",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "resources",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.Resource" }, "default": []
},
{ "name": "serverSortOrder", "type": "string" }
]
}
2. Capabilities-related behavior:
1. Relevant endpoint, data object, and/or protocol capabilities MUST be specified when the ETP
session is established (see Chapter 5) and MUST be used/honored as defined in the relevant
ETP sub-protocol.
2. For an explanation of endpoint, data object, and protocol capabilities, see Section 3.3.
a. For the list of global capabilities and related behavior, see Section 3.3.2.
3. Section 16.2.3 identifies the capabilities most relevant to this ETP sub-protocol.
3. Plural messages (which include maps):
1. This protocol uses plural messages. For detailed rules on handling plural messages (including
ProtocolException handling), see Section 3.7.3.
4. Rules specified in Chapter 14, Overview of Query Behavior:
1. The general rules and requirements specified in Chapter 14 MUST be observed and used with
the additional details specific to StoreQuery (Protocol 14) (which are specified in the next row).
5. Rules specific to StoreQuery (Protocol 14):
1. In general, the results that are returned MUST follow the rules for Get operations in Store
(Protocol 4). For details, see Section 9.2.2. This includes behavior for:
a. Oversized data objects and use of the Chunk message (which is also defined in this
protocol); for more information see Section 3.7.3.2.
b. Returning growing data objects and their parts.
c. Returning container data objects and their contained data objects.
d. All related capabilities behavior to these operations.
e. EXCEPTION: Query protocols specify a pagination option, which makes it
possible to get results in groups and prevent exceeding
MaxResponseCount limits. For more information, see Section
14.1.2.
2. This protocol DOES NOT support querying for parts in a growing data
object; to do that you MUST use GrowingObjectQuery (Protocol 16)
(Chapter 17).
6. Index Metadata: General rules for channels, channel sets and growing data objects:
1. A growing data object's index metadata MUST be consistent:
a. All parts MUST have the same index unit and the same vertical datum.
b. The index units and vertical datums in the growing data object header MUST match the parts.
2. A channel data object's index metadata MUST be consistent:
a. The index units and vertical datums MUST match the channel's index metadata.
3. A channel set data object's index metadata MUST be consistent:
a. The index units and vertical datums MUST match the channel set's index metadata.
b. The channel set's index metadata MUST match the relevant index metadata in the channels
it contains.
4. When sending messages, both the store AND the customer MUST ensure that all index metadata
and data derived from index metadata are consistent in all fields in the message, including in
XML or JSON object data or part data.
a. EXAMPLE: The uom and depthDatum in an IndexInterval record MUST be consistent with
the data object's index metadata.
b. EXAMPLE: Data object elements related to index values in growing data object headers
(e.g., MdMn and MdMx on a WITSML 2.0 Trajectory) and parts (e.g., Md on a WITSML 2.0
TrajectoryStation) MUST be consistent with the data object's index metadata.
MaxPartSize: The maximum size in bytes of each data object part allowed in a standalone message or a
complete multipart message. Size in bytes is the total size in bytes of the uncompressed string
representation of the data object part in the format in which it is sent or received. Type: long. Value units:
<number of bytes>. MIN: 10,000 bytes.
Data Object Capabilities
SupportsGet
For definitions and usage rules for this data object capability, see
Section 3.3.4.
MaxDataObjectSize: (This is also an endpoint capability and a data object capability.) The maximum size
in bytes of a data object allowed in a complete multipart message. Size in bytes is the size in bytes of
the uncompressed string representation of the data object in the format in which it is sent or received.
Type: long. Value units: <number of bytes>. MIN: 100,000 bytes.
This capability can be set for an endpoint, a protocol, and/or a data object. If set for all three, here is how
they generally work:
An object-specific value overrides an endpoint-specific value.
A protocol-specific value can further lower (but NOT raise) the limit for the protocol.
EXAMPLE: A store may wish to generally support sending and receiving any data object that is one
megabyte or less, with the exceptions of Wells that are 100 kilobytes or less and Attachments that are
5 megabytes or less. A store may further wish to limit the size of any data object sent as part of a
notification in StoreNotification (Protocol 5) to 256 kilobytes.
MaxResponseCount: The maximum total count of responses allowed in a complete multipart message
response to a single request. Type: long. Value units: <count of responses>. MIN: 10,000.
«Message» FindDataObjects
+ activeStatusFilter: ActiveStatusKind [0..1]
+ context: ContextInfo
+ format: string = xml
+ scope: ContextScopeKind
+ storeLastWriteFilter: long [0..1]
Tags: MessageTypeID = 1; MultiPart = False; SenderRole = customer
Notes: A customer sends to a store as a query to find all data objects that match the specified criteria.
The response to this message is the FindDataObjectsResponse message.

«record» Object::DataObject
+ blobId: Uuid [0..1]
+ data: bytes [0..1] = EmptyString
+ format: string [0..1] = xml
+ resource: Resource
Notes: Record that must carry a single data object. This record encapsulates a Resource record, which
contains most of the metadata, and carries the object data as a byte array. To specify the format of the
data (e.g., XML or JSON), use the format field. If the data object is too large (binary large object, BLOB)
for the WebSocket message size, use the blobId field to identify the BLOB and Chunk messages to send
the actual data.

«record» Object::Resource
+ activeStatus: ActiveStatusKind
+ alternateUris: string [0..1] (array) = EmptyArray
+ customData: DataValue [0..n] (map) = EmptyMap
+ lastChanged: long
+ name: string
+ sourceCount: int [0..1] = null
+ storeCreated: long
+ storeLastWrite: long
+ targetCount: int [0..1] = null
+ uri: string
Notes: Record for resource descriptions on a graph. The record is actually a meta-object, not the
resource itself, which in ETP are data objects. This Resource structure is used by:
- Discovery (Protocol 3) and DiscoveryQuery (Protocol 13) to provide information about the contents of
a store.
- Store (Protocol 4), StoreNotification (Protocol 5) and StoreQuery (Protocol 14), where the resource is
encapsulated in dataObject in response messages only.
The use of the "lighter-weight" resources in ETP reduces traffic on the wire for initial inquiries such as
Discovery, which allows customer applications to determine when to do the "heavy lifting" of getting the
full data object and/or all of its associated data.

«Message» FindDataObjectsResponse
+ dataObjects: DataObject [0..n] (array) = EmptyArray
+ serverSortOrder: string
Tags: MessageTypeID = 2; MultiPart = True; SenderRole = store
Notes: A store sends to a consumer in response to the FindDataObjects message; it's the results the
store could return in response to the query.

«Message» Chunk
+ blobId: Uuid
+ data: bytes
+ final: boolean
Tags: MessageTypeID = 3; MultiPart = True; SenderRole = store
Notes: A message used when a data object (being sent in a message from store to customer OR
customer to store) is too large for the negotiated WebSocket message size limit
(MaxWebSocketMessagePayloadSize) for the session (which for some WebSocket libraries can be quite
small, e.g., 128 kb).
This Chunk message:
1. Is used in Store (Protocol 4), StoreNotification (Protocol 5), and StoreQuery (Protocol
14).
2. Can be used in conjunction with any request, response or notification message that
allows or requires a data object to be sent with the message. Such messages contain a
field called dataObjects, which is a map composed of the ETP data type DataObject. If the
data object size (bytes) exceeds the maximum negotiated WebSocket size limit for the
session, and you want to send it with the message, you MUST use Chunk messages.
3. The DataObject type (record) contains an optional Binary Large Object (BLOB) ID
(blobId). If you must divide a data object into multiple chunks, you MUST assign a blobId
and the dataObject field MUST NOT contain any data.
4. Use a set of Chunk messages to send small portions of the data object (small enough
to fit into the negotiated WebSocket size limit for the session). Each Chunk message
MUST contain its assigned "parent" BlobId and a portion of the data object.
5. For endpoints that receive these messages, to correctly "reassemble" the data
object (BLOB): use the blobId, and the messageId (which indicates the message
sequence, because ETP (via WebSocket) guarantees messages to be delivered in order),
and final (flag that indicates the last chunk that comprises a particular data object).
6. Chunk messages for different data objects MUST NOT be interleaved within the
context of one multipart message operation. If more than one data object must be sent
using Chunk messages, the sender MUST finish sending each data object before sending
the next one. To indicate the last Chunk message for one data object, the sender MUST
set the final flag to true.
For more information on how to use the Chunk message, see Section 3.7.3.2.
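Rules 3 through 6 above can be sketched as a small receiver-side reassembler; the dicts stand in for decoded Chunk records, and the function name is illustrative:

```python
def reassemble_chunks(chunks):
    """Reassemble chunked data objects from an ordered stream of Chunk
    messages. ETP delivers messages in order and forbids interleaving
    chunks of different BLOBs within one multipart operation, so a
    single working buffer suffices. Returns {blobId: bytes}."""
    blobs, current_id, buffer = {}, None, bytearray()
    for msg in chunks:
        if current_id is None:
            current_id = msg["blobId"]
        elif msg["blobId"] != current_id:
            raise ValueError("interleaved Chunk messages are not allowed")
        buffer += msg["data"]
        if msg["final"]:  # last chunk of this data object
            blobs[current_id] = bytes(buffer)
            current_id, buffer = None, bytearray()
    return blobs

msgs = [
    {"blobId": "b1", "data": b"hello ", "final": False},
    {"blobId": "b1", "data": b"world", "final": True},
    {"blobId": "b2", "data": b"!", "final": True},
]
blobs = reassemble_chunks(msgs)
```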
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreQuery",
"name": "FindDataObjects",
"protocol": "14",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "context", "type": "Energistics.Etp.v12.Datatypes.Object.ContextInfo" },
{ "name": "scope", "type": "Energistics.Etp.v12.Datatypes.Object.ContextScopeKind" },
{ "name": "storeLastWriteFilter", "type": ["null", "long"] },
{ "name": "activeStatusFilter", "type": ["null",
"Energistics.Etp.v12.Datatypes.Object.ActiveStatusKind"] },
{ "name": "format", "type": "string", "default": "xml" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreQuery",
"name": "FindDataObjectsResponse",
"protocol": "14",
"messageType": "2",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "dataObjects",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.DataObject" }, "default": []
},
{ "name": "serverSortOrder", "type": "string" }
]
}
2. Can be used in conjunction with any request, response or notification message that allows or
requires a data object to be sent with the message. Such messages contain a field called dataObjects,
which is a map composed of the ETP data type DataObject. If the data object size (bytes) exceeds the
maximum negotiated WebSocket size limit for the session, and you want to send it with the message,
you MUST use Chunk messages.
3. The DataObject type (record) contains an optional Binary Large Object (BLOB) ID (blobId). If you
must divide a data object into multiple chunks, you MUST assign a blobId and the dataObject field
MUST NOT contain any data.
4. Use a set of Chunk messages to send small portions of the data object (small enough to fit into the
negotiated WebSocket size limit for the session). Each Chunk message MUST contain its assigned
"parent" BlobId and a portion of the data object.
5. For endpoints that receive these messages, to correctly "reassemble" the data object (BLOB): use
the blobId, and the messageId (which indicates the message sequence, because ETP (via
WebSocket) guarantees messages to be delivered in order), and final (flag that indicates the last
chunk that comprises a particular data object).
6. Chunk messages for different data objects MUST NOT be interleaved within the context of one
multipart message operation. If more than one data object must be sent using Chunk messages, the
sender MUST finish sending each data object before sending the next one. To indicate the last
Chunk message for one data object, the sender MUST set the final flag to true.
For more information on how to use the Chunk message, see Section 3.7.3.2.
Correlation Id Usage: MUST be set to the messageId of the FindDataObjectsResponse message that
resulted in the assignment of a blobId and this Chunk message being created.
Multi-part: True
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.StoreQuery",
"name": "Chunk",
"protocol": "14",
"messageType": "3",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "blobId", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{ "name": "data", "type": "bytes" },
{ "name": "final", "type": "boolean" }
]
}
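Complementing the schema above, here is a sender-side sketch of splitting one BLOB into Chunk message bodies; the sizes and names are illustrative, and real code must respect the session's negotiated MaxWebSocketMessagePayloadSize:

```python
def chunk_blob(blob_id, data, max_chunk_bytes):
    """Split one data object's bytes into Chunk message bodies that fit
    the WebSocket payload budget; only the last chunk has final=True,
    and chunks of different BLOBs must never be interleaved."""
    if not data:
        return [{"blobId": blob_id, "data": b"", "final": True}]
    msgs = []
    for offset in range(0, len(data), max_chunk_bytes):
        part = data[offset:offset + max_chunk_bytes]
        msgs.append({"blobId": blob_id, "data": part, "final": False})
    msgs[-1]["final"] = True  # mark the last chunk of this data object
    return msgs

# A 300 KB object under a 128 KB per-message budget yields three chunks:
msgs = chunk_blob("b1", b"x" * 300_000, 128_000)
```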
2. Capabilities-related behavior:
1. Relevant endpoint, data object, and/or protocol capabilities MUST be specified when the ETP
session is established (see Chapter 5) and MUST be used/honored as defined in the relevant
ETP sub-protocol.
2. For an explanation of endpoint, data object, and protocol capabilities, see Section 3.3.
a. For the list of global capabilities and related behavior, see Section 3.3.2.
4. Rules specified in Chapter 14, Overview of Query Behavior:
1. The general rules and requirements specified in Chapter 14 MUST be observed and used with
the additional details specific to GrowingObjectQuery (Protocol 16) (which are specified in the
next row).
5. Rules specific to GrowingObjectQuery (Protocol 16):
1. The URI in the request message MUST be a canonical Energistics query URI that includes
BOTH of these:
a. A reference to a specific growing data object using the data object’s
qualified type and UUID.
b. The qualified type of the data object’s parts.
c. EXAMPLE: To query a trajectory’s stations, the URI could look like:
eml:///witsml20.Trajectory(63b93219-e507-4934-a1b5-
e7e550701934)/witsml20.TrajectoryStation?<query params>
2. The data object portion of the URI in the request message MUST resolve
to a single growing data object; if not the store MUST send error
ENOTGROWINGOBJECT (6001).
3. In general, the results that are returned MUST follow the rules for Get
operations in GrowingObject (Protocol 6), including honoring limits
specified by capabilities. For details, see Section 11.2.2.
a. EXCEPTION: Query protocols specify a pagination option, which makes it
possible to get results in groups and prevent exceeding
MaxResponseCount limits. For more information, see Section
14.1.2.
MaxPartSize: The maximum size in bytes of each data object part allowed in a standalone message or a
complete multipart message. Size in bytes is the total size in bytes of the uncompressed string
representation of the data object part in the format in which it is sent or received. Type: long. Value units:
<number of bytes>. MIN: 10,000 bytes.
Data Object Capabilities
(For definitions of each data object capability, see Section 3.3.4.)
Protocol Capabilities
MaxResponseCount: The maximum total count of responses allowed in a complete multipart message
response to a single request. Type: long. Value units: <count of responses>. MIN: 10,000.
«Message» FindParts
+ format: string = xml
+ uri: string
Tags: MessageTypeID = 1; MultiPart = False; SenderRole = customer
Notes: A customer sends to a store to query for parts of a growing data object that match the specified
criteria. The response to this message is the FindPartsResponse message.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObjectQuery",
"name": "FindParts",
"protocol": "16",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "format", "type": "string", "default": "xml" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.GrowingObjectQuery",
"name": "FindPartsResponse",
"protocol": "16",
"messageType": "2",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "serverSortOrder", "type": "string" },
{ "name": "format", "type": "string", "default": "xml" },
{
"name": "parts",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.ObjectPart" }, "default": []
}
]
}
This section describes the basic sequence, related key behaviors, ETP-defined protocol capability usage
and possible errors. By definition, Transaction (Protocol 18) involves work being done by other ETP
protocols; for example, Store (Protocol 4) may be used to get or put a data object and DataArray
(Protocol 9) may be used to get or put the related array data. The following Requirements section
provides additional functional requirements and rules for how this protocol is intended to work.
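As an illustrative sketch of the lifecycle just described (start, do work via other protocols, then commit or roll back), assuming a `send` callable that dispatches a message body and returns the decoded response — none of these helpers are defined by ETP:

```python
import contextlib

@contextlib.contextmanager
def etp_transaction(send, read_only=False, dataspace_uris=("",)):
    """Run work inside a Transaction (Protocol 18) start/commit pair,
    rolling back if the body raises. send(name, body) must return the
    decoded body of the matching *Response message."""
    resp = send("StartTransaction", {"readOnly": read_only,
                                     "message": "",
                                     "dataspaceUris": list(dataspace_uris)})
    if not resp.get("successful", False):
        raise RuntimeError(resp.get("failureReason", "start failed"))
    tx = resp["transactionUuid"]
    try:
        yield tx
    except Exception:
        send("RollbackTransaction", {"transactionUuid": tx})
        raise
    else:
        send("CommitTransaction", {"transactionUuid": tx})

log = []  # fake endpoint that records the message sequence
def fake_send(name, body):
    log.append(name)
    return {"successful": True, "transactionUuid": "tx-1"}

with etp_transaction(fake_send) as tx:
    pass  # e.g., put data objects via Store (Protocol 4) here
```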
Protocol Capabilities
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Transaction",
"name": "StartTransaction",
"protocol": "18",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "readOnly", "type": "boolean" },
{ "name": "message", "type": "string", "default": "" },
{
"name": "dataspaceUris",
"type": { "type": "array", "items": "string" }, "default": [""]
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Transaction",
"name": "StartTransactionResponse",
"protocol": "18",
"messageType": "2",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "transactionUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{ "name": "successful", "type": "boolean", "default": true },
{ "name": "failureReason", "type": "string", "default": "" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Transaction",
"name": "CommitTransaction",
"protocol": "18",
"messageType": "3",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "transactionUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Transaction",
"name": "CommitTransactionResponse",
"protocol": "18",
"messageType": "5",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "transactionUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{ "name": "successful", "type": "boolean", "default": true },
{ "name": "failureReason", "type": "string", "default": "" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Transaction",
"name": "RollbackTransaction",
"protocol": "18",
"messageType": "4",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "transactionUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Transaction",
"name": "RollbackTransactionResponse",
"protocol": "18",
"messageType": "6",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "transactionUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{ "name": "successful", "type": "boolean", "default": true },
{ "name": "failureReason", "type": "string", "default": "" }
]
}
Other ETP sub-protocols or information that may be used related to Channel data objects:
Store (Protocol 4). Because a Channel is an Energistics data object, to add, update or delete
Channel data objects from a store, use Store (Protocol 4) (see Chapter 9).
StoreNotification (Protocol 5). To subscribe to notifications about Channel data objects (e.g.,
channels added, deleted, status change, etc.), use StoreNotification (Protocol 5) (see Chapter 10).
For more information about high-level workflows for data replication and outage recovery, see
Appendix: Data Replication and Outage Recovery Workflows.
NOTE: Energistics data models (e.g., WITSML) allow channels to be grouped into channel sets and logs.
However, ETP channel streaming protocols handle individual channels; that is, whether or not the
channel is part of a channel set or log is irrelevant to how it is handled in a channel streaming protocol.
(For more information about channel sets, see Section 7.1.1.)
Description of the message sequence for main tasks, along with required behavior, use of
capabilities, and possible errors (see Section 19.2.1).
Other functional requirements (not covered in the message sequence) including use of endpoint,
data object, and protocol capabilities for preventing and protecting against aberrant behavior (see
Section 19.2.2).
Definitions of the endpoint, data object, and protocol capabilities used in this protocol (see
Section 19.2.3).
Sample schemas of the messages defined in this protocol (which are identical to the Avro schemas
published with this version of ETP). However, only the schema content in this specification includes
documentation for each field in a schema (see Section 19.3).
19.2.1.1 To do the initial setup to subscribe to channels and be streamed data as it is available:
1. The customer MUST send the store a GetChannelMetadata message (Section 19.3.1).
a. The GetChannelMetadata message contains a map whose values MUST each be the URI of a
channel that the customer wants to get channel metadata for.
i. To find a particular set of channels and their respective URIs (e.g., all the channels in a
particular wellbore) the customer MAY use Discovery (Protocol 3). The customer may also
receive these URIs out of band of ETP.
b. Before doing any other operations defined by other messages in this protocol, the customer
MUST first get the metadata for each channel.
2. For the URIs that it successfully returns channel metadata for, the store MUST send one or more
GetChannelMetadataResponse map response messages (Section 19.3.2), which contain a
map whose values are ChannelMetadataRecord records (Section 23.33.7).
a. For more information on how map response messages work, see Section 3.7.3.
b. ChannelMetadataRecord has the necessary contextual information (indexes, units of measure,
etc.) that the customer needs to correctly interpret channel data.
c. The store MUST assign the channel an integer identifier that is unique for the session in this
protocol. This identifier is used instead of the channel URI to identify the channel in subsequent
messages in this protocol for the session. This identifier is set in the id field in the
ChannelMetadataRecord.
RECOMMENDATION: Use the smallest available integer value for a new channel identifier.
IMPORTANT: If the channel is deleted and recreated during a session, it MUST be assigned a
new identifier.
3. For the URIs it does NOT return channel metadata for, the store MUST send one or more map
ProtocolException messages, where values in the errors field (a map) are appropriate errors.
a. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
b. For requested channels that the store does not contain, send error ENOT_FOUND (11).
4. Based on the list in the GetChannelMetadataResponse message, the customer determines which
channels it wants the store to stream data for.
a. NOTE: For each channel, the customer compares its latest channel index with the latest channel
index returned by the store.
b. Before setting up subscriptions, a customer MAY want to do a GetRanges operation (see Section
19.2.1.3) and then set up the subscription to receive future changes, when they occur. OR the
customer can choose to set up a subscription and start streaming from a specified index as
described in this step below.
5. To subscribe to (i.e., to have the store stream data for) one or more of these channels, the customer
MUST send to the store the SubscribeChannels message (Section 19.3.3), which contains a map
whose values MUST each be a ChannelSubscribeInfo record for a channel the customer wants to
be streamed.
a. A customer MUST limit the total count of channels concurrently open for streaming in the current
ETP session to the store's value for MaxStreamingChannelsSessionCount protocol capability.
b. For each channel that would exceed the store’s limit, the store MUST NOT start streaming for the
channel. The store MUST instead send ELIMIT_EXCEEDED (12).
The ChannelSubscribeInfo record (Section 23.33.9) contains these data fields (for each channel):
c. The channelId (this is the id returned on the ChannelMetadataRecord).
d. One of the following fields MAY be populated:
i. For startIndex, the customer MUST specify an index value that it wants the store to start
streaming from. (The customer may determine this based on information in the
GetChannelMetadataResponse message and the index that it currently has.) If
requestLatestIndexCount is null AND startIndex is NOT null:
1. If startIndex is, for increasing data, greater than the channel's end index or, for
decreasing data, less than the channel's end index, the store MUST deny that request
with EINVALID_OPERATION (32).
2. Otherwise: The store MUST start streaming from the first channel index that, for
increasing data, is greater than or equal to the requested start index and, for decreasing
data, is less than or equal to the requested start index.
5. If the store has no data in any of the request intervals, it MUST send a GetRangesResponse
message with the FIN bit set and the data field set to an empty array.
6. If the store does NOT successfully return data or a GetRangesResponse with an empty data array, it
MUST send a non-map ProtocolException message with an appropriate error, such as
EREQUEST_DENIED (6).
7. To cancel a GetRanges operation, the customer MUST send to the store a CancelGetRanges
message (Section 19.3.12), which identifies the UUID of the request to be stopped.
a. If the store has not already finished responding to the request that is being canceled, the store
MUST:
i. Send a final GetRangesResponse message with the FIN bit set; this final message MAY be
empty (no data).
ii. Stop sending GetRangesResponse messages for the specified channels.
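The GetRanges/CancelGetRanges exchange above can be sketched as customer-side bookkeeping keyed by requestUuid. This is an illustrative Python sketch only, not a conformant implementation: ETP messages are Avro records exchanged over WebSocket, and the `RangeRequestTracker` class and its method names are hypothetical.

```python
import uuid

class RangeRequestTracker:
    """Minimal customer-side bookkeeping for GetRanges operations,
    keyed by requestUuid (hypothetical helper)."""

    def __init__(self):
        self._active = {}  # requestUuid -> channelRanges for the request

    def start(self, channel_ranges):
        # Issue a GetRanges request: generate the requestUuid that
        # CancelGetRanges must later reference.
        request_uuid = str(uuid.uuid4())
        self._active[request_uuid] = channel_ranges
        return request_uuid

    def cancel(self, request_uuid):
        # Send CancelGetRanges: the store then sends a final (possibly
        # empty) GetRangesResponse with the FIN bit set.
        return self._active.pop(request_uuid, None) is not None

    def finish(self, request_uuid):
        # Called when the final GetRangesResponse (FIN bit set) arrives.
        self._active.pop(request_uuid, None)

tracker = RangeRequestTracker()
req = tracker.start([{"channelId": 7, "interval": (100.0, 200.0)}])
assert tracker.cancel(req) is True   # in-flight request: cancel succeeds
assert tracker.cancel(req) is False  # already canceled: nothing to do
```

The UUID lets the customer run several concurrent GetRanges operations and cancel exactly one of them, which is why the spec requires CancelGetRanges to identify the request by UUID rather than by channel.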
19.2.1.4 To reconnect and resume streaming when the session has been interrupted (using
ChangeAnnotations):
This process can be used by a customer anytime it connects to a store; that is, when it first connects to a
store and wants to determine the latest changes or after an unintended disconnect when it wants to
determine data it may have missed.
ETP has no session survivability. If the session is interrupted (e.g., a satellite connection drops), using
this process makes it easier for a customer to determine what has changed while disconnected, get any
changed data it requires, and resume operations that were in process when the session dropped, with a
reduced likelihood of having to "resend all data from the beginning" (i.e., all data from
before the session dropped). For more information about related workflows, see Appendix: Data
Replication and Outage Recovery Workflows.
1. The customer MUST reconnect to the store and re-create and re-authorize (if required) the ETP
session (as described in Section 4.3 and 5.2.1.1) and get channel metadata using the process
described in Section 19.2.1.1 (Steps 1 and 2). (REMINDER: The GetChannelMetadataResponse
message is where the store assigns channel IDs; these channel IDs are used in the messages for the
remaining steps below).
2. The customer MUST "re-subscribe" to desired channels as described in Section 19.2.1.1 (Steps 3–5).
a. For each channel, a customer should compare the end index it has for a channel with the end
index it receives in the GetChannelMetadataResponse message to determine the index it wants
the store to start streaming from.
b. Recommended best practice is to re-subscribe to channels first (before getting change
annotations), so you start receiving current data and related change notifications as soon as
possible.
3. To determine what has changed while disconnected, the customer MUST send the store a
GetChangeAnnotations message (Section 19.3.13).
a. This message contains a map whose values MUST each be a ChannelChangeRequestInfo
record (Section 23.33.15) where the customer specifies the list of channels and for each, the
"changes since" time (that is, the customer wants all changes since this time, which should be
based on the time the customer was last sure it received data from the store). In the message,
the customer MUST also indicate if it wants all change annotations or only the latest change
annotation for each channel.
i. The "changes since" time (sinceChangeTime field) MUST BE equal to or more recent than
the store's ChangeRetentionPeriod endpoint capability.
4. For ChannelChangeRequestInfo records it successfully returns change annotations for, the store
MUST respond with one or more GetChangeAnnotationsResponse map response messages
(Section 19.3.14).
a. For more information on how map response messages work, see Section 3.7.3.
b. The map values in each message are ChangeResponseInfo records (Section 23.34.19), which
contains a time stamp for when the response was sent and the ChangeAnnotation records
(Section 23.34.18) for the channels in a ChannelChangeRequestInfo record.
i. Each ChangeAnnotation record contains a timestamp for when the change occurred in the
store and the interval of the channel that changed. (NOTE: Change annotations keep track
ONLY of the interval that changed, NOT the actual data that changed.)
c. For information about how the store tracks and manages these change annotations, see Section
19.2.2, Row 12.
5. For ChannelChangeRequestInfo records it does NOT successfully send change annotations for, the
store MUST send one or more map ProtocolException messages where values in the errors field (a
map) are appropriate errors.
a. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
6. Based on information in the GetChangeAnnotationsResponse message, the customer MAY:
a. Use the GetRanges message to retrieve intervals of interest that have changed (as described in
Section 19.2.1.3).
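The reconnect-and-resume procedure above can be sketched as a small decision helper: compare the end index the customer last received with the end index the store reports in GetChannelMetadataResponse, then resume streaming from the customer's last known index so only missed data is transmitted. This is an illustrative Python sketch under those assumptions; `plan_resubscription` is a hypothetical name, not part of ETP.

```python
def plan_resubscription(local_end_index, store_end_index, increasing=True):
    """Decide where to resume streaming after an outage by comparing the
    last index received locally with the end index reported by the store
    in GetChannelMetadataResponse (hypothetical helper)."""
    if increasing:
        missed = store_end_index > local_end_index
    else:
        missed = store_end_index < local_end_index
    # Ask the store to start streaming from the customer's last known
    # index so only the missed interval is transmitted.
    return {"startIndex": local_end_index, "missedData": missed}

# Customer last saw depth 1500.0; the store is now at 1740.0.
plan = plan_resubscription(local_end_index=1500.0, store_end_index=1740.0)
assert plan["missedData"] is True and plan["startIndex"] == 1500.0
```

Per Step 2 of Section 19.2.1.1, the store streams from the first channel index greater than or equal to startIndex (for increasing data), so the last point the customer already holds may be resent once; a real implementation would de-duplicate on the primary index.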
3. Message Sequence for main tasks in this protocol: See Section 19.2.1.
1. The Message Sequence section above (Section 19.2.1) describes requirements for the main tasks listed there and also defines required behavior.
4. If a customer wants to be able to "reconnect and catch up on what happened while disconnected" after an unintended outage, it MUST track this information during an ETP session
1. ETP supports workflows and provides mechanisms to help customers more easily recover missed data (i.e., easier than "re-stream the entire channel") after reconnecting from an unintended outage.
a. For more information, see Appendix: Data Replication and Outage Recovery Workflows.
2. If a customer wants to use these workflows, the customer role endpoint
MUST do the following:
a. When the customer first subscribes to a channel, it MUST get the
most recent ChangeAnnotation record for each channel (if there
are any).
i. When reconnecting after an outage, the customer MUST get
all ChangeAnnotation records.
b. In the subscription request for each channel (specifically, in the
ChannelSubscribeInfo record), it MUST set the dataChanges field
to true, which means the store MUST send RangeReplaced
messages with data changes.
c. During the session, the customer MUST track the most recently
received index value for each channel it is subscribed to.
d. For more information about change annotations, see Row 12.
5. Plural messages (which include maps)
1. This protocol uses plural messages, which include maps. For detailed rules on handling plural messages (including ProtocolException handling), see Section 3.7.3.
6. To get notifications of changes to the channel itself (not the data in the channel) or new channels
1. A channel is a data object: as such, adding, updating, and deleting channels is done using Store (Protocol 4).
a. NOTE: "Updating" means updates to the channel data object itself—which DOES NOT include the data points in a channel, which is done using ChannelStreaming (Protocol 1) and ChannelDataLoad (Protocol 22). For more information, see Section 6.1.1.
2. To receive notifications of changes, such as new channels added, changes in channel status (active or inactive), or a channel deleted, use StoreNotification (Protocol 5) (see Chapter 10).
7. Store Behavior: Data order for channel subscriptions and range responses
1. Streaming data points (in ChannelData messages) MUST be sent in primary index order for each channel, both within one message and across multiple messages.
2. Data points in GetRangesResponse and RangeReplaced messages
MUST be sent in primary index order for each channel, both within a
single message and across all messages within a multipart message
(response or notification).
3. Primary index order is always as appropriate for the index direction of a
channel (i.e., increasing or decreasing).
4. The index values for each data point are in the same order as their
corresponding IndexMetadataRecord records in the corresponding
channel’s ChannelMetadataRecord record, and the primary index is
always first.
5. The same primary index value MUST NOT appear more than once for
the same channel in any ChannelData message UNLESS the channel
data at that index was affected by a truncate operation during the
session (i.e., a ChannelsTruncated message was received for the
channel with a range that covered the primary index value).
6. The same primary index value MUST NOT appear more than once for
the same channel in the same multipart GetRangesResponse or
RangeReplaced message.
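The data-order rules in this row can be checked mechanically on the receiving side. The sketch below is an illustrative Python validator (the function name and the flat list of primary index values are assumptions, not part of ETP): it verifies that one channel's primary index values are monotonic in the channel's index direction and that a value repeats only when a truncate operation has occurred, per Rule 5.

```python
def check_primary_index_order(primary_indexes, increasing=True, truncated=False):
    """Check the ordering rules above for one channel's data points:
    monotonic in the channel's index direction, with repeated primary
    index values allowed only after a truncate (hypothetical validator)."""
    prev = None
    for value in primary_indexes:
        if prev is not None:
            if value == prev and not truncated:
                return False  # duplicate primary index without a truncate
            if increasing and value < prev:
                return False  # out of order for an increasing channel
            if not increasing and value > prev:
                return False  # out of order for a decreasing channel
        prev = value
    return True

assert check_primary_index_order([100.0, 100.5, 101.0])
assert check_primary_index_order([3.0, 2.0, 1.0], increasing=False)
assert not check_primary_index_order([1.0, 3.0, 2.0])
assert not check_primary_index_order([1.0, 1.0])
assert check_primary_index_order([1.0, 1.0], truncated=True)
```

Note that for GetRangesResponse and RangeReplaced messages (Rule 6), duplicates are never allowed, so such a checker would be called with `truncated=False` regardless of session history.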
8. Secondary indexes in range operations
1. For GetRanges operations, ETP provides support for additional filtering on secondary indexes/intervals.
a. Support of secondary indexes is considered advanced functionality
and is optional.
2. If an endpoint supports filtering on secondary indexes, it MUST set the
SupportsSecondaryIndexFiltering protocol capability to true.
a. If a store's SupportsSecondaryIndexFiltering protocol capability is
false and a customer requests that data be filtered by secondary
index values, then the Store MUST deny the request and send error
ENOTSUPPORTED (7).
b. ETP provides an optional field on the IndexMetadataRecord
(Section 23.33.6) named filterable, which allows a store to specify if
a particular index can be filtered on in various request messages in
some ETP sub-protocols.
3. Results with secondary indexes are highly variable depending on the
specifics of the data and the indexes. EXAMPLE: Secondary index
filtering may result in no data values at some secondary indexes or
multiple data values at other secondary indexes (e.g., a wireline tool
where time is the primary index may result in multiple depth readings).
4. A customer specifies the secondary intervals that it wants to filter on in
the ChannelRangeInfo record, which is used by the GetRanges
message.
9. Notifying that a range of data in a channel has been updated or deleted (when sending a RangeReplaced message to a customer)
1. The behavior described in this row assumes that a customer has subscribed to the channel as described in Section 19.2.1.1.
2. When a range of data in a channel that a customer is subscribed to has been updated or deleted, a store MUST do the following:
a. A store MUST send a RangeReplaced message with details about the change to the customer for the channel.
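On the receiving end, a customer typically reconciles its local cache with a RangeReplaced notification by discarding cached points inside changedInterval and splicing in the data carried by the message. The sketch below is an illustrative Python helper under those assumptions (an increasing primary index, points modeled as (index, value) pairs); the function and parameter names are hypothetical.

```python
def apply_range_replaced(local_points, changed_interval, replacement_points):
    """Customer-side sketch of consuming a RangeReplaced notification:
    drop cached points inside changedInterval, then splice in the data
    carried by the message (increasing primary index assumed)."""
    lo, hi = changed_interval
    kept = [(idx, val) for idx, val in local_points if not (lo <= idx <= hi)]
    return sorted(kept + replacement_points)

local = [(1.0, "a"), (2.0, "b"), (3.0, "c")]
updated = apply_range_replaced(local, changed_interval=(1.5, 2.5),
                               replacement_points=[(2.0, "b2")])
assert updated == [(1.0, "a"), (2.0, "b2"), (3.0, "c")]
```

An empty replacement list handles the deletion case: the interval is cleared and nothing is spliced back in.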
For protocol-specific behavior relating to using these capabilities in this protocol, see Sections 19.2.1
and 19.2.2.
For definitions for endpoint and data object capabilities, see the links in the table.
For general information about the types of capabilities and how they may be used, see Section 3.3.
ChannelSubscribe (Protocol 21): Capabilities
(Each capability below lists: Name and description; data type; units; and default value and/or MIN/MAX.)
Endpoint Capabilities
(For definitions of each endpoint capability, see Section 3.3.)
ChangeRetentionPeriod: The minimum time period in seconds that a store retains the Canonical URI of a deleted data object and any change annotations for channels and growing data objects.
RECOMMENDATION: This period should be as long as is feasible in an implementation. When the period is shorter, the risk is that additional data will need to be transmitted to recover from outages, leading to higher initial load on sessions.
Type: long. Units: second (<number of seconds>). Default: 86,400; MIN: 86,400.
Data Object Capabilities
(For definitions of each data object capability, see Section 3.3.4.)
MaxIndexCount: The maximum index count value allowed for a channel streaming request.
Type: long. Units: count (<count of indexes>). Default: 100; MIN: 1.
MaxRangeDataItemCount: The maximum total count of DataItem records allowed in a complete multipart range message.
Type: long. Units: count (<count of records>). MIN: 1,000,000.
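As an illustration of how a store might honor the MaxRangeDataItemCount capability, the Python sketch below refuses to build a multipart range response whose total DataItem count would exceed the limit. The helper name is hypothetical, and the error choice follows the ELIMIT_EXCEEDED (12) pattern this protocol uses for exceeded limits; it is not mandated wording from the capability definition.

```python
def guard_range_response(data_items, max_range_data_item_count):
    """Store-side sketch of enforcing MaxRangeDataItemCount: the total
    DataItem count across all parts of one multipart range response
    must not exceed the capability (hypothetical helper)."""
    if len(data_items) > max_range_data_item_count:
        # A real store would send a ProtocolException message instead.
        raise ValueError("ELIMIT_EXCEEDED (12)")
    return data_items

assert guard_range_response([1, 2, 3], max_range_data_item_count=10) == [1, 2, 3]
try:
    guard_range_response([1, 2, 3], max_range_data_item_count=2)
    raise AssertionError("limit should have been enforced")
except ValueError:
    pass
```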
1. This message "appends" data to a channel. It does NOT include «Mes s a ge»
changes to existing data in the channel.
RangeReplaced
2. There is no requirement that any given channel appear in an
«Mes s a ge» individual ChannelData message, or that a given channel appear only + cha ngedInterva l : IndexInterva l
UnsubscribeChannels once in ChannelData message (i.e., a range of several index values for + cha ngeTi me: l ong
the same channel may appear in one message). + cha nnel Ids : l ong [1..n] (a rra y)
+ cha nnel Ids : l ong [1..n] (ma p) 3. This is a "fire and forget" message. The sender does NOT receive a + da ta : Da ta Item [1..n] (a rra y)
tags positive confirmation from the receiver that it has successfully received
and processed the message. tags
AvroSrc = <memo> AvroSrc = <memo>
4. For streaming data, ETP does NOT send null data values.
Correl a ti onId = <memo> Correl a ti onId = <memo>
Mes s a geTypeID = 7 EXCEPTION: If channel data values are arrays, then the arrays MAY
contain null values, but at least one array value MUST be non-null and Mes s a geTypeID = 6
Mul ti Pa rt = Fa l s e Mul ti Pa rt = True
SenderRol e = cus tomer the entire array CANNOT be null.
5. To optimize size on-the-wire, redundant index values MAY be sent SenderRol e = s tore
notes as null. The rules for this are as follows: notes
A customer sends to a store to cancel its subscription (unsubscribe) to A store sends to a customer to notify it that a range of data in channels it
one or more channels and to discontinue streaming data for these a. The index value of the first DataItem record in the data array MUST is subscribed to have been updated or deleted.
channels. NOT be sent as null.
The response to this message is the SubscriptionsStopped message. b. For subsequent index values:
i. If an index value differs from the previous index value in the data
array, the index value MUST NOT «Mes bes asent
ge»as null.
ii. If an index value is the SubscriptionsStopped
same as the previous index value in the data
«Mes s a ge» +array, chathe index
nnel Idsvalue
: l ongMAY be(ma
[0..n] sentp)as= null.
EmptyMa p
GetRanges +c. EXAMPLE:
rea s on: These
s tri ngindex
[0..1]values from adjacent DataItem records in the
data array:
+ cha nnel Ra nges : Cha nnel Ra ngeInfo [1..n] (a rra y) [1.0, 1.0, 2.0, 3.0, 3.0] tags
+ reques tUui d: Uui d AvroSrc
MAY be sent = <memo>
as:
Correl a ti onId
[1.0, null, = <memo>
2.0, 3.0, null]. «Mes s a ge»
tags Mes s a geTypeID
d. When the DataItem= 8 records have both primary and secondary index CancelGetRanges
AvroSrc = <memo> Mul ti Pathese
values, rt = True
rules apply separately to each index.
Correl a ti onId = <memo> SenderRol e = s tore + reques tUui d: Uui d
EXAMPLE: These primary and secondary index values from adjacent
Mes s a geTypeID = 9 DataItem records in the data array: tags
Mul ti Pa rt = Fa l s e notes
[[1.0, 10.0], [1.0, 11.0], [2.0, 11.0], [3.0, 11.0], [3.0, 12.0]] AvroSrc = <memo>
SenderRol e = cus tomer The store MUST send to a customer as a confirmation response to the
MAY be sent as: Correl a ti onId = <memo>
customer's
[[1.0, 10.0],UnsubscribeChannels
[null, 11.0], [2.0, null],message.
[3.0, null], [null, 12.0]].
notes Mes s a geTypeID = 11
If the store stops a customer’s subscription on its own without a request
A customer sends to a store to request data over a specific range for one f. If ALL index values for a DataItem record are to be sent as null, the Mul ti Pa rt = Fa l s e
from
indexes thefield
customer
should(e.g.,
be setif to
theanchannel
empty has been deleted), the store
array.
or more channels. The response to this is the GetRangesResponse SenderRol e = cus tomer
MUST send this message to notify the customer that the subscription
message. 6. For more information about sending channel data, see Section 6.1.3.
has been stopped. When sent as a notification, there MUST only be one notes
message in the multi-part notification. A customer sends to a store to stop streaming data for a previous
The store MUST provide a human readable reason why the subscriptions GetRanges request.
were stopped.
«Mes s a ge» «Mes s a ge»
GetChangeAnnotations GetRangesResponse
«Mes s a ge»
+ cha nnel s : Cha nnel Cha ngeReques tInfo [1..*] (ma p) + da ta : Da ta Item [0..n] (a rra y) = EmptyArra y SubscribeChannelsResponse
+ l a tes tOnl y: bool ea n = fa l s e
tags + s ucces s : s tri ng [1..*] (ma p)
tags AvroSrc = <memo>
AvroSrc = <memo> Correl a ti onId = <memo> tags
Correl a ti onId = <memo> Mes s a geTypeID = 10 AvroSrc = <memo>
Mes s a geTypeID = 14 Mul ti Pa rt = True Correl a ti onId = <memo>
Mul ti Pa rt = Fa l s e SenderRol e = s tore Mes s a geTypeID = 12
SenderRol e = cus tomer Mul ti Pa rt = True
notes SenderRol e = s tore
notes A store sends to a customer in response to a GetRanges message. It
A customer sends to a store to get change annotations contains the data for the specified range(s). notes
(ChangeAnnotation record) for the channels listed in this message. A store MUST send this "success only" message to a customer as
A change annotation identifies the interval(s) in a channel that have confirmation of a successful operation in response to a
changed and the time that the change happened in the store. They are SubscribeChannels message.
«Mes s a ge»
used in recovering from unplanned outages (connection drops). For more It confirms the channels for which the store successfully created
GetChangeAnnotationsResponse
information, see Appendix: Data Replication and Outage Recovery streaming subscriptions.
Workflows. + cha nges : Cha ngeRes pons eInfo [1..*] (ma p)
The response to this message is the GetChangeAnnotationsResponse
message. tags
AvroSrc = <memo>
Correl a ti onId = <memo>
Mes s a geTypeID = 15
Mul ti Pa rt = True
SenderRol e = s tore
notes
A store sends to a customer in response to a GetChangeAnnotations
message. It is a map of ChangeResponseInfo data structures which each
contains a change annotation (ChangeAnnotation) for the requested
channel data objects that the store could respond to. The returned
annotations are based on the store's storeLastWrite time for each
channel data object.
The store tracks changes "globally" (NOT per user, customer or
endpoint). Also, a store MAY combine annotations over time, as it sees
fit. For more information on how annotations work, see Section 19.2.1.4.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "GetChannelMetadata",
"protocol": "21",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "uris",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "GetChannelMetadataResponse",
"protocol": "21",
"messageType": "2",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "metadata",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.ChannelData.ChannelMetadataRecord" }, "default": {}
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "SubscribeChannels",
"protocol": "21",
"messageType": "3",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "channels",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.ChannelData.ChannelSubscribeInfo" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "SubscribeChannelsResponse",
"protocol": "21",
"messageType": "12",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values": "string" }
}
]
}
3. This is a "fire and forget" message. The sender does NOT receive a positive confirmation from the
receiver that it has successfully received and processed the message.
4. For streaming data, ETP does NOT send null data values. EXCEPTION: If channel data values are
arrays, then the arrays MAY contain null values, but at least one array value MUST be non-null and
the entire array CANNOT be null.
5. The index values in each DataItem record are in the same order as their corresponding
IndexMetadataRecord records in the corresponding channel’s ChannelMetadataRecord record,
and the primary index is always first.
6. To optimize size on-the-wire, redundant index values MAY be sent as null. The rules for this are as
follows:
a. The index value of the first DataItem record in the data array MUST NOT be sent as null.
b. For subsequent index values:
i. If an index value differs from the previous index value in the data array, the index value MUST
NOT be sent as null.
ii. If an index value is the same as the previous index value in the data array, the index value
MAY be sent as null.
c. EXAMPLE: These index values from adjacent DataItem records in the data array:
[1.0, 1.0, 2.0, 3.0, 3.0]
MAY be sent as:
[1.0, null, 2.0, 3.0, null].
d. When the DataItem records have both primary and secondary index values, these rules apply
separately to each index.
e. EXAMPLE: These primary and secondary index values from adjacent DataItem records in the
data array:
[[1.0, 10.0], [1.0, 11.0], [2.0, 11.0], [3.0, 11.0], [3.0, 12.0]]
MAY be sent as:
[[1.0, 10.0], [null, 11.0], [2.0, null], [3.0, null], [null, 12.0]].
f. If ALL index values for a DataItem record are to be sent as null, the indexes field should be set to
an empty array.
7. For more information about sending channel data, see Section 6.1.3.
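The redundant-index rules above (items a through f) can be sketched as a pair of Python helpers: one for the sender compressing repeated index values to null, and one for the receiver restoring them. This is illustrative only; the function names are hypothetical, and rule f (sending an empty indexes array when every value would be null) is omitted for brevity.

```python
def compress_indexes(index_rows):
    """Sender side: an index value MAY be sent as null when it equals the
    previous DataItem's value for the same index position; the first row
    is always sent in full (illustrative helper)."""
    compressed, prev = [], None
    for row in index_rows:
        if prev is None:
            compressed.append(list(row))
        else:
            compressed.append([None if v == p else v for v, p in zip(row, prev)])
        prev = row
    return compressed

def expand_indexes(compressed):
    """Receiver side: replace each null with the last non-null value seen
    for that index position."""
    expanded, prev = [], None
    for row in compressed:
        row = [p if v is None else v for v, p in zip(row, prev)] if prev else list(row)
        expanded.append(row)
        prev = row
    return expanded

# The spec's own primary/secondary example round-trips exactly:
rows = [[1.0, 10.0], [1.0, 11.0], [2.0, 11.0], [3.0, 11.0], [3.0, 12.0]]
wire = compress_indexes(rows)
assert wire == [[1.0, 10.0], [None, 11.0], [2.0, None], [3.0, None], [None, 12.0]]
assert expand_indexes(wire) == rows
```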
Message Type ID: 4
Correlation Id Usage: MUST be ignored and SHOULD be set to 0.
Multi-part: False
Sent by: store
Field Name: data
Description: Contains the data points for channels, which is an array of DataItem records. Note that the value must be one of the types specified in DataValue (Section 23.30), which includes options to send a single data value (of various types such as integers, longs, doubles, etc.) OR arrays of values. For more information, see Section 6.1.3.
Data Type: DataItem. Min: 1. Max: n.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "ChannelData",
"protocol": "21",
"messageType": "4",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "data",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.DataItem" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "ChannelsTruncated",
"protocol": "21",
"messageType": "13",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "channels",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.TruncateInfo" }
},
{ "name": "changeTime", "type": "long" }
]
}
e. EXAMPLE: These primary and secondary index values from adjacent DataItem records in the
data array:
[[1.0, 10.0], [1.0, 11.0], [2.0, 11.0], [3.0, 11.0], [3.0, 12.0]]
MAY be sent as:
[[1.0, 10.0], [null, 11.0], [2.0, null], [3.0, null], [null, 12.0]].
f. If ALL index values for a DataItem record are to be sent as null, the indexes field should be set to
an empty array.
For more information about sending channel data, see Section 6.1.3.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "RangeReplaced",
"protocol": "21",
"messageType": "6",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "changeTime", "type": "long" },
{
"name": "channelIds",
"type": { "type": "array", "items": "long" }
},
{ "name": "changedInterval", "type":
"Energistics.Etp.v12.Datatypes.Object.IndexInterval" },
{
"name": "data",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.DataItem" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "UnsubscribeChannels",
"protocol": "21",
"messageType": "7",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "channelIds",
"type": { "type": "map", "values": "long" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "SubscriptionsStopped",
"protocol": "21",
"messageType": "8",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "reason", "type": "string" },
{
"name": "channelIds",
"type": { "type": "map", "values": "long" }, "default": {}
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "GetRanges",
"protocol": "21",
"messageType": "9",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{
"name": "channelRanges",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.ChannelRangeInfo" }
}
]
}
f. If ALL index values for a DataItem record are to be sent as null, the indexes field should be set to
an empty array.
For more information about sending channel data, see Section 6.1.3.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "GetRangesResponse",
"protocol": "21",
"messageType": "10",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "data",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.DataItem" }, "default": []
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "CancelGetRanges",
"protocol": "21",
"messageType": "11",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "GetChangeAnnotations",
"protocol": "21",
"messageType": "14",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "channels",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.ChannelData.ChannelChangeRequestInfo" }
},
{ "name": "latestOnly", "type": "boolean", "default": false }
]
}
The store tracks changes "globally" (NOT per user, customer or endpoint). Also, a store MAY combine
annotations over time, as it sees fit. For more information on how annotations work, see Section 19.2.1.4.
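The statement that a store MAY combine annotations over time can be illustrated with a small interval-merging sketch. This Python helper is hypothetical: annotations are modeled as (start, end, changeTime) tuples for an increasing index, and overlapping or touching changed intervals are merged while keeping the most recent change time.

```python
def combine_annotations(annotations):
    """Merge overlapping or adjacent changed intervals, keeping the most
    recent change time for each merged interval (illustrative sketch of
    how a store MAY combine change annotations; increasing index)."""
    merged = []
    for start, end, change_time in sorted(annotations):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it.
            prev_start, prev_end, prev_time = merged[-1]
            merged[-1] = (prev_start, max(prev_end, end),
                          max(prev_time, change_time))
        else:
            merged.append((start, end, change_time))
    return merged

anns = [(100.0, 150.0, 1000), (140.0, 200.0, 1005), (300.0, 310.0, 1002)]
assert combine_annotations(anns) == [(100.0, 200.0, 1005), (300.0, 310.0, 1002)]
```

Because annotations record only the changed interval, not the changed data, merging loses no information a customer could otherwise use; the customer still fetches the merged interval with GetRanges.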
Message Type ID: 15
Correlation Id Usage: MUST be set to the messageId of the GetChangeAnnotations message that this
message is a response to.
Multi-part: True
Sent by: store
Field Name: changes
Description: ETP general map of ChangeResponseInfo records, one for each ChannelChangeRequestInfo the store can respond to, which lists the channels that have changed and the information for each as specified in the ChangeAnnotation record.
Data Type: ChangeResponseInfo. Min: 1. Max: *.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelSubscribe",
"name": "GetChangeAnnotationsResponse",
"protocol": "21",
"messageType": "15",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "changes",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.ChangeResponseInfo" }
}
]
}
NOTE: Energistics data models (e.g., WITSML) allow channels to be grouped into channel sets and logs.
However, ETP channel streaming protocols handle individual channels; that is, whether or not the
channel is part of a channel set or log is irrelevant to how it is handled in a channel streaming protocol.
(For more information about channel sets, see Section 7.1.1.)
This chapter includes main sections for:
- Required behavior, which includes:
  - Description of the message sequence for main tasks, along with required behavior, related capabilities, and possible errors (see Section 20.2.1).
  - Other functional requirements (not covered in the message sequence), including use of endpoint, data object, and protocol capabilities for preventing and protecting against aberrant behavior (see Section 20.2.2).
  - Definitions of the endpoint, data object, and protocol capabilities used in this protocol (see Section 20.2.3).
- Sample schemas of the messages defined in this protocol (which are identical to the Avro schemas published with this version of ETP). However, only the schema content in this specification includes documentation for each field in a schema (see Section 20.3).
ii. preferRealtime, a flag to indicate a preference to receive realtime data first (as a priority),
before historical data.
iii. dataChanges, a flag to indicate whether it wants to receive historical data changes (which are
sent with ReplaceRange messages).
b. For the channels it does NOT successfully open for receiving data, the store MUST send one or
more map ProtocolException messages, where values in the errors field (a map) are
appropriate errors.
i. For more information on how ProtocolException messages work with plural messages, see
Section 3.7.3.
ii. For the channels that do not currently exist on the store, error ENOT_FOUND (11).
iii. For the channels that the customer does not have permission to access/write to, error
EREQUEST_DENIED (6).
3. To send data to the store, the customer MAY do any of the following:
a. To append new data for any channel that the store opened for receiving data, the customer
MUST send ChannelData messages (Section 20.3.6).
i. New data MUST always be sent in primary index order and MUST always be an append (i.e.,
with primary index value greater than, for increasing data, or less than, for decreasing data,
the channel’s end index).
ii. The customer MAY continue to send these messages as new data becomes available or until
all new data for a channel is sent.
iii. NOTIFICATION BEHAVIOR: When the customer streams new data to the store, the store
MUST send ChannelData messages in ChannelSubscribe (Protocol 21).
iv. To ensure the store’s channel data remains in a consistent state, if the store is unable to
successfully process all data received in a ChannelData message:
1. If a store IS NOT able to parse the message and find the set of all channelIds included in
the message (e.g., the message body could not be deserialized):
a. The store MUST send a non-map ProtocolException message in response to the
message.
b. The store MUST close ALL channels currently open for receiving data in the session.
2. If the store IS able to parse the message and find the set of all channelIds included in the
message:
a. For each channelId that represents a valid, open channel, the store MUST process
ALL data for the channel, in primary index order, until it encounters an error for the
channel.
b. For the channels with errors (invalid or unable to process all data), or for all channels
if an error prevents the store from processing any data in the message:
i. The store MUST send a map ProtocolException message where the map keys
are the string version of the channelIds of the affected channels and the values
are an appropriate error for each channel.
1. If data for a channel is NOT an append or is not in primary index order, the
store MUST send error EINVALID_APPEND (31).
ii. The store MUST close the channels that are valid and open for receiving data
and send ChannelsClosed.
b. If the store indicated that it wants to receive data changes, to update or delete an existing range of
data in a channel, the customer MUST send ReplaceRange messages (see Section 20.3.7).
i. For successful range replaces, the store MUST respond with a ReplaceRangeResponse
message.
ii. For more information about how range replacement operations work, see Section 20.2.2,
Row 10.
c. To correct "index jump" errors, the customer MUST send a TruncateChannels message
(Section 20.3.4).
i. For channels that it successfully truncated, the store MUST respond with a
TruncateChannelsResponse message (Section 20.3.5).
ii. For more information about how truncate channels operations work, see Section 20.2.2, Row
9.
d. For channels that are not found on the store, the customer MAY use Store (Protocol 4) to add the
channels to the store.
i. To begin streaming data to these newly added channels, the customer MUST send the
OpenChannels message per Step 1 above.
ii. NOTE: This exception-based workflow (option to add channels that do not exist) is
recommended to reduce load on the store, particularly on the reconnection process (if you've
lost connection to a channel you were previously streaming to).
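The append rules in step 3.a can be sketched as store-side validation logic. The sketch below is illustrative only (the Channel class and the validate_append helper are assumptions, not part of ETP; only the error code comes from this specification), and it assumes a channel with a single primary index:

```python
# Illustrative sketch of the append check: new data MUST arrive in primary
# index order and MUST be a strict append relative to the channel's end index.

EINVALID_APPEND = 31  # error code from this specification

class Channel:
    def __init__(self, increasing, end_index=None):
        self.increasing = increasing  # direction of the primary index
        self.end_index = end_index    # None if the channel has no data yet

def validate_append(channel, index_values):
    """Return None if all values may be appended, else EINVALID_APPEND."""
    prev = channel.end_index
    for value in index_values:
        if prev is not None:
            in_order = value > prev if channel.increasing else value < prev
            if not in_order:
                return EINVALID_APPEND
        prev = value
    return None

assert validate_append(Channel(True, 100.0), [101.0, 102.5]) is None
assert validate_append(Channel(True, 100.0), [99.0]) == EINVALID_APPEND
```

Per step 3.a.iv, a failed check would lead the store to report the affected channels in a map ProtocolException and close them.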
3. Message Sequence (see Section 20.2.1)
   1. The Message Sequence section above (Section 20.2.1) describes requirements for the main tasks listed there and also defines required behavior.
4. Plural messages (which include maps)
   1. This protocol uses ETP-wide message patterns including plural messages and multipart responses. For more information on behaviors related to these messages, see Section 3.7.3.
5. Notifications
   1. This chapter explains events (operations) in ChannelDataLoad (Protocol 22) that trigger the store to send notifications, which the store sends using ChannelSubscribe (Protocol 21). However, statements of NOTIFICATION
MaxRangeDataItemCount: The maximum total count of DataItem records allowed in a complete multipart range message.
Type: long. Units: count, <count of records>. MIN: 1,000,000.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataLoad",
"name": "OpenChannels",
"protocol": "22",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "uris",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataLoad",
"name": "OpenChannelsResponse",
"protocol": "22",
"messageType": "2",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "channels",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.ChannelData.OpenChannelInfo" }, "default": {}
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataLoad",
"name": "CloseChannels",
"protocol": "22",
"messageType": "3",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "id",
"type": { "type": "map", "values": "long" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataLoad",
"name": "TruncateChannels",
"protocol": "22",
"messageType": "9",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "channels",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.ChannelData.TruncateInfo" }
}
]
}
It contains a map indicating which channels were successfully truncated (which end indexes were
successfully updated) and the time at which that change occurred in the store.
Message Type ID: 10
Correlation Id Usage: MUST be set to the messageId of the TruncateChannels message that this
message is a response to.
Multi-part: True
Sent by: store
Field Name: channelsTruncatedTime
Description: A map whose value is the time each channel in the map was truncated/updated in the store. Must be a UTC dateTime value, serialized as a long, using the Avro logical type timestamp-micros (microseconds from the Unix Epoch, 1 January 1970 00:00:00.000000 UTC).
Data Type: long. Min: 1. Max: *
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataLoad",
"name": "TruncateChannelsResponse",
"protocol": "22",
"messageType": "10",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "channelsTruncatedTime",
"type": { "type": "map", "values": "long" }
}
]
}
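For illustration, the timestamp-micros values used by channelsTruncatedTime can be converted to and from Python datetimes with the standard library alone (these helper names are assumptions, not part of ETP):

```python
from datetime import datetime, timezone

def to_timestamp_micros(dt):
    """Serialize a UTC datetime as microseconds since the Unix Epoch."""
    return int(dt.timestamp() * 1_000_000)

def from_timestamp_micros(micros):
    """Deserialize a timestamp-micros long back to a UTC datetime."""
    return datetime.fromtimestamp(micros / 1_000_000, tz=timezone.utc)

t = datetime(2021, 9, 27, 12, 0, 0, tzinfo=timezone.utc)
assert from_timestamp_micros(to_timestamp_micros(t)) == t
```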
5. For streaming data, ETP does NOT send null data values. EXCEPTION: If channel data values are
arrays, then the arrays MAY contain null values, but at least one array value MUST be non-null and
the entire array CANNOT be null.
6. To optimize size on the wire, redundant index values MAY be sent as null. The rules for this are as
follows:
a. The index value of the first DataItem record in the data array MUST NOT be sent as null.
b. For subsequent index values:
i. If an index value differs from the previous index value in the data array, the index value MUST
NOT be sent as null.
ii. If an index value is the same as the previous index value in the data array, the index value
MAY be sent as null.
c. EXAMPLE: These index values from adjacent DataItem records in the data array:
[1.0, 1.0, 2.0, 3.0, 3.0]
MAY be sent as:
[1.0, null, 2.0, 3.0, null].
d. When the DataItem records have both primary and secondary index values, these rules apply
separately to each index.
e. EXAMPLE: These primary and secondary index values from adjacent DataItem records in the
data array:
[[1.0, 10.0], [1.0, 11.0], [2.0, 11.0], [3.0, 11.0], [3.0, 12.0]]
MAY be sent as:
[[1.0, 10.0], [null, 11.0], [2.0, null], [3.0, null], [null, 12.0]].
f. If ALL index values for a DataItem record are to be sent as null, the indexes field should be set to
an empty array.
7. For more information about sending channel data, see Section 6.1.3.
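The null-index rules above can be sketched as a compress/expand pair over the index rows of adjacent DataItem records. This is illustrative only (the function names are assumptions; rule f, sending an empty array when every index would be null, is omitted for brevity):

```python
def compress_indexes(rows):
    """Replace index values that repeat the previous row's value with None."""
    out, prev = [], None
    for row in rows:
        if prev is None:
            out.append(list(row))
        else:
            out.append([None if v == p else v for v, p in zip(row, prev)])
        prev = row
    return out

def expand_indexes(rows):
    """Restore None values from the previous row's restored values."""
    out, prev = [], None
    for row in rows:
        restored = [p if v is None else v for v, p in zip(row, prev)] if prev else list(row)
        out.append(restored)
        prev = restored
    return out

# The primary/secondary example from rule e above:
rows = [[1.0, 10.0], [1.0, 11.0], [2.0, 11.0], [3.0, 11.0], [3.0, 12.0]]
wire = compress_indexes(rows)
assert wire == [[1.0, 10.0], [None, 11.0], [2.0, None], [3.0, None], [None, 12.0]]
assert expand_indexes(wire) == rows
```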
Message Type ID: 4
Correlation Id Usage: MUST be ignored and SHOULD be set to 0.
Multi-part: False
Sent by: customer
Field Name: data
Description: Contains the data points for channels, which is an array of DataItem records. Note that the value must be one of the types specified in DataValue (Section 23.30), which include options to send a single data value (of various types such as integers, longs, doubles, etc.) OR arrays of values. For more information, see Section 6.1.3.
Data Type: DataItem. Min: 1. Max: n
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataLoad",
"name": "ChannelData",
"protocol": "22",
"messageType": "4",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "data",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.DataItem" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataLoad",
"name": "ReplaceRange",
"protocol": "22",
"messageType": "6",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "changedInterval", "type":
"Energistics.Etp.v12.Datatypes.Object.IndexInterval" },
{
"name": "channelIds",
"type": { "type": "array", "items": "long" }
},
{
"name": "data",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.DataItem" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataLoad",
"name": "ReplaceRangeResponse",
"protocol": "22",
"messageType": "8",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "channelChangeTime", "type": "long" }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.ChannelDataLoad",
"name": "ChannelsClosed",
"protocol": "22",
"messageType": "7",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{ "name": "reason", "type": "string" },
{
"name": "id",
"type": { "type": "map", "values": "long" }
}
]
}
General Requirements. Identifies high-level (across ETP) and protocol-wide general behavior and
rules that must be observed (in addition to behavior specified in Message Sequence), including usage
of ETP-defined endpoint, data object and protocol capabilities, error scenarios, and resulting error
codes.
Capabilities. Lists and defines the ETP-defined parameters most relevant for this sub-protocol. ETP
defines these parameters to set necessary limits to help prevent aberrant behavior (e.g., sending
oversized messages or sending more messages than an endpoint can handle).
ii. MUST NOT terminate the response until it has sent MaxResponseCount responses.
3. If the store has no dataspaces that meet the criteria specified in the GetDataspaces message, the
store MUST send a GetDataspacesResponse message with the FIN bit set and the dataspaces field
set to an empty array.
4. If the store does NOT successfully return dataspaces, it MUST send a non-map ProtocolException
message with an appropriate error, such as EREQUEST_DENIED (6).
2. Capabilities-related behavior
   1. Relevant endpoint, data object, and/or protocol capabilities MUST be specified when the ETP session is established (see Chapter 5) and MUST be used/honored as defined in the relevant ETP sub-protocol.
   2. For an explanation of endpoint, data object, and protocol capabilities, see Section 3.3.
      a. For the list of global capabilities and related behavior, see Section 3.3.2.
   3. Section 21.2.3 identifies the capabilities most relevant to this ETP sub-protocol. Additional details for how to use the protocol capabilities are included below in this table and in Section 21.2.1 Dataspace: Message Sequence.
3. Message Sequence (see Section 21.2.1)
   1. The Message Sequence section above (Section 21.2.1) describes requirements for the main tasks listed there and also defines required behavior.
4. Plural messages (which include maps)
   1. This protocol uses plural messages. For detailed rules on handling plural messages (including ProtocolException handling), see Section 3.7.3.
For definitions for endpoint and data object capabilities, see the links in the table.
For general information about the types of capabilities and how they may be used, see Section 3.3.
Dataspace (Protocol 24): Capabilities
(For each capability, the list below gives the name, description, type, units/value units, and defaults and/or MIN/MAX.)
Endpoint Capabilities
(For definitions of each endpoint capability, see Section 3.3.)
Protocol Capabilities
MaxResponseCount: The maximum total count of responses allowed in a complete multipart message response to a single request.
Type: long. Units: count, <count of responses>. MIN: 10,000.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Dataspace",
"name": "GetDataspaces",
"protocol": "24",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "storeLastWriteFilter", "type": ["null", "long"] }
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Dataspace",
"name": "GetDataspacesResponse",
"protocol": "24",
"messageType": "2",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "dataspaces",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.Dataspace" }, "default": []
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Dataspace",
"name": "PutDataspaces",
"protocol": "24",
"messageType": "3",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "dataspaces",
"type": { "type": "map", "values":
"Energistics.Etp.v12.Datatypes.Object.Dataspace" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Dataspace",
"name": "PutDataspacesResponse",
"protocol": "24",
"messageType": "6",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Dataspace",
"name": "DeleteDataspaces",
"protocol": "24",
"messageType": "4",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{
"name": "uris",
"type": { "type": "map", "values": "string" }
}
]
}
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.Dataspace",
"name": "DeleteDataspacesResponse",
"protocol": "24",
"messageType": "5",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "success",
"type": { "type": "map", "values": "string" }
}
]
}
Other ETP sub-protocols that may be used with SupportedTypes (Protocol 25):
- To discover dataspaces in a store that you might want to get supported types for, use Dataspace (Protocol 24); see Chapter 21.
- To discover data objects in a store that you might want to get supported types for, use Discovery (Protocol 3); see Chapter 8.
Capabilities. Lists and defines the ETP-defined parameters most relevant for this sub-protocol. ETP
defines these parameters to set necessary limits to help prevent aberrant behavior (e.g., sending
oversized messages or sending more messages than an endpoint can handle).
22.2.1.1 To discover data object types that are instantiated or supported in the store:
1. The customer sends the GetSupportedTypes message (Section 22.3.1). This request message:
a. MUST specify a URI from which the data object types will be searched.
i. If the URI is a dataspace URI (for example eml:///), then all datatypes supported in the
dataspace are returned.
ii. If the URI is a data object, then only datatypes that may have a link with the URI data object
type are potentially returned.
b. MUST specify the scope (Section 22.3.1).
i. If the URI is a data object, then the scope MUST be either "sources" or "targets". If the scope is
NOT "sources" or "targets", the store MUST reject the request and send error
EINVALID_OPERATION (32).
ii. If the URI is not a data object, then this scope is ignored by the store.
c. Includes an option to count the number of instances of each data object type that matches the
request; to do this, countObjects MUST be set to true.
d. Includes an option to see the entire list of supported types (including types of which the store does
not have any instances); to do this, the returnEmptyTypes flag MUST be set. Otherwise, only
data object types that currently have data are returned.
2. The store MUST respond with a GetSupportedTypesResponse message (Section 22.3.2) or a
ProtocolException message.
a. If the store does return supported types, it MUST send one or more
GetSupportedTypesResponse messages, which are arrays of SupportedType records that the
store supports and an optional count of each type.
i. If the request URI (in the GetSupportedTypes request) was a dataspace, then the
relationshipKind field in the SupportedType record MUST be set to "Primary" for all
datatypes that are returned.
ii. If the request URI (in the GetSupportedTypes request) was a data object, then the
relationshipKind field in the SupportedType record MUST be set to the appropriate value
("Primary" or "Secondary") for all datatypes that are returned.
iii. A store MUST limit the total count of responses to the customer's value for the
MaxResponseCount protocol capability.
iv. If the store exceeds the customer's MaxResponseCount value, the customer MAY send error
ERESPONSECOUNT_EXCEEDED (30).
v. If a store's MaxResponseCount value is less than the customer's MaxResponseCount value,
the store MAY further limit the total count of responses (to its value).
vi. If a store cannot return all responses to a request because it would exceed the lower of the
customer's or the store's value for MaxResponseCount, the store MUST terminate the
multipart message with error ERESPONSECOUNT_EXCEEDED (30).
vii. A store MUST NOT send ERESPONSECOUNT_EXCEEDED (30) until it has sent
MaxResponseCount responses.
b. If no supported types meet the criteria specified in the GetSupportedTypes message:
i. If the dataspace or data object specified by the URI in the context does not exist, the store
MUST send error ENOT_FOUND (11).
ii. If the URI in the context exists, but no supported types could be found matching the request,
the store MUST send the GetSupportedTypesResponse message with the FIN bit set and
the supportedTypes field set to an empty array.
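The MaxResponseCount behavior in items iii through vii above amounts to the store capping a multipart response at the lower of the two endpoints' values, and sending ERESPONSECOUNT_EXCEEDED only after that many responses have gone out. A schematic sketch (the helper and its callbacks are illustrative assumptions, not ETP APIs; only the error code is from this specification):

```python
ERESPONSECOUNT_EXCEEDED = 30  # error code from this specification

def send_multipart(results, customer_max, store_max, send, send_error):
    """Send up to the lower MaxResponseCount results; error only if any remain."""
    limit = min(customer_max, store_max)
    for item in results[:limit]:
        send(item)
    if len(results) > limit:
        # Per item vii, the store terminates with this error only AFTER
        # it has already sent MaxResponseCount responses.
        send_error(ERESPONSECOUNT_EXCEEDED)

sent, errors = [], []
send_multipart(list(range(12)), customer_max=10, store_max=20,
               send=sent.append, send_error=errors.append)
assert len(sent) == 10 and errors == [ERESPONSECOUNT_EXCEEDED]
```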
2. Capabilities-related behavior
   1. Relevant endpoint, data object, and/or protocol capabilities MUST be specified when the ETP session is established (see Chapter 5) and MUST be used/honored as defined in the relevant ETP sub-protocol.
   2. For an explanation of endpoint, data object, and protocol capabilities, see Section 3.3.
      a. For the list of global capabilities and related behavior, see Section 3.3.2.
   3. Section 22.2.3 identifies the capabilities most relevant to this ETP sub-protocol. Additional details for how to use the protocol capabilities are included below in this table and in Section 22.2.1 SupportedTypes: Message Sequence.
3. Message Sequence (see Section 22.2.1)
   1. The Message Sequence section above (Section 22.2.1) describes requirements for the main tasks listed there and also defines required behavior.
4. Maps and plural messages (which include maps)
   1. This protocol uses plural messages. For detailed rules on handling plural messages (including ProtocolException handling), see Section 3.7.3.
5. Session negotiation: specify "all" for supportedDataObjects
   1. For best results using this protocol, in the RequestSession message, in the supportedDataObjects field, the customer SHOULD specify "all" data objects (EXAMPLE: witsml20.*).
      a. For more information, see Row 1, Para 4 above or the RequestSession message, Section 5.3.
Protocol Capabilities
MaxResponseCount: The maximum total count of responses allowed in a complete multipart message response to a single request.
Type: long. Units: count, <count of responses>. MIN: 10,000.
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.SupportedTypes",
"name": "GetSupportedTypes",
"protocol": "25",
"messageType": "1",
"senderRole": "customer",
"protocolRoles": "store,customer",
"multipartFlag": false,
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "scope", "type": "Energistics.Etp.v12.Datatypes.Object.ContextScopeKind" },
{ "name": "returnEmptyTypes", "type": "boolean", "default": false },
{ "name": "countObjects", "type": "boolean", "default": false }
]
}
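For illustration, a GetSupportedTypes request body built as a plain dictionary (the field names come from the Avro schema above; the dict simply stands in for a serialized Avro record):

```python
# Illustrative GetSupportedTypes request body as a plain dict.
# Field names are from the Avro schema; values follow the message sequence rules:
request = {
    "uri": "eml:///",          # a dataspace URI: all supported types are returned
    "scope": "sources",        # ignored by the store when the URI is not a data object
    "countObjects": True,      # ask for an instance count of each matching type
    "returnEmptyTypes": False, # omit types with no current instances
}
assert set(request) == {"uri", "scope", "countObjects", "returnEmptyTypes"}
```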
Avro Source
{
"type": "record",
"namespace": "Energistics.Etp.v12.Protocol.SupportedTypes",
"name": "GetSupportedTypesResponse",
"protocol": "25",
"messageType": "2",
"senderRole": "store",
"protocolRoles": "store,customer",
"multipartFlag": true,
"fields":
[
{
"name": "supportedTypes",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.SupportedType" }, "default": []
}
]
}
23 ETP Datatypes
The datatypes package is intended to hold only low-level types that are broadly re-used in various
protocols. In general, primitive datatypes follow the rules for Avro itself. These are the lower-level
datatypes defined for the protocols. They are only used as fields of messages, not as messages in their
own right.
For more information and definitions, see Section 3.4.1.1.
Figure 33 shows examples of some frequently used datatypes and the messages (and other datatypes)
that use those datatypes.
Figure 33: Examples of ETP-defined datatypes (Avro records) that are used by multiple messages and other
records.
23.1 AnyLogicalArrayType
The enumeration for the logical types of representations.
These types have been specified based on signed/unsigned (U), bit size of the preferred sub-array
dimension (8, 16, 32, 64 bits), and endianness (LE = little, BE = big).
For more information about use of this enumeration, see Section 13.2.2.1.
Enumeration Description
arrayOfBoolean
arrayOfInt8
arrayOfUInt8
arrayOfInt16LE
arrayOfInt32LE
arrayOfInt64LE
arrayOfUInt16LE
arrayOfUInt32LE
arrayOfUInt64LE
arrayOfFloat32LE
arrayOfDouble64LE
arrayOfInt16BE
arrayOfInt32BE
arrayOfInt64BE
arrayOfUInt16BE
arrayOfUInt32BE
arrayOfUInt64BE
arrayOfFloat32BE
arrayOfDouble64BE
arrayOfString
arrayOfCustom
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "AnyLogicalArrayType",
"symbols":
[
"arrayOfBoolean",
"arrayOfInt8",
"arrayOfUInt8",
"arrayOfInt16LE",
"arrayOfInt32LE",
"arrayOfInt64LE",
"arrayOfUInt16LE",
"arrayOfUInt32LE",
"arrayOfUInt64LE",
"arrayOfFloat32LE",
"arrayOfDouble64LE",
"arrayOfInt16BE",
"arrayOfInt32BE",
"arrayOfInt64BE",
"arrayOfUInt16BE",
"arrayOfUInt32BE",
"arrayOfUInt64BE",
"arrayOfFloat32BE",
"arrayOfDouble64BE",
"arrayOfString",
"arrayOfCustom"
]
}
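The naming convention described above (a U prefix for unsigned, the bit size of the preferred sub-array dimension, and an LE/BE endianness suffix) can be decoded mechanically. This parser is an illustrative sketch, not part of ETP:

```python
import re

def parse_logical_array_type(symbol):
    """Decode e.g. 'arrayOfUInt16LE' into (kind, signed, bits, endianness)."""
    m = re.fullmatch(r"arrayOf(U?)(Int|Float|Double)(\d+)(LE|BE)", symbol)
    if not m:
        # Single-byte integers carry no endianness suffix.
        m8 = re.fullmatch(r"arrayOf(U?)Int8", symbol)
        if m8:
            return ("Int", m8.group(1) != "U", 8, None)
        # arrayOfBoolean, arrayOfString, arrayOfCustom, etc.
        return (symbol[len("arrayOf"):], None, None, None)
    unsigned, kind, bits, endian = m.groups()
    signed = (unsigned != "U") if kind == "Int" else None
    return (kind, signed, int(bits), "little" if endian == "LE" else "big")

assert parse_logical_array_type("arrayOfUInt16LE") == ("Int", False, 16, "little")
assert parse_logical_array_type("arrayOfDouble64BE") == ("Double", None, 64, "big")
```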
23.2 AnyArrayType
The enumeration of options for transport representations.
- bytes are fixed sizes.
- arrayOfInt and arrayOfLong follow Avro integer encoding, which is variable length.
Enumeration Description
arrayOfBoolean
arrayOfInt
arrayOfLong
arrayOfFloat
arrayOfDouble
arrayOfString
bytes
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "AnyArrayType",
"symbols":
[
"arrayOfBoolean",
"arrayOfInt",
"arrayOfLong",
"arrayOfFloat",
"arrayOfDouble",
"arrayOfString",
"bytes"
]
}
23.3 DataObjectCapabilityKind
Parameters that allow an endpoint to specify capabilities for types of data objects; EXAMPLE: Data
object capabilities allow an endpoint to specify whether/which specific data objects can be retrieved,
saved or deleted.
For each parameter, the table below lists the parameter keyword (data object capability), description
(which includes, units/unit values, default, as applicable), and data type.
For more information about capabilities and how they work, see Section 3.3.
ActiveTimeoutPeriod
Description: The minimum time period in seconds that a store keeps the active status (activeStatus field in ETP) for a data object as "active" after the most recent update causing the data object's active status to be set to true. For growing data objects, this is any change to its parts. For channels, this is any change to its data points.
This capability can be set for an endpoint and/or for a data object. A data object-specific value overrides an endpoint-specific value.
Units/Value units: seconds, <number of seconds>
Data Type: long
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "DataObjectCapabilityKind",
"symbols":
[
"ActiveTimeoutPeriod",
"MaxContainedDataObjectCount",
"MaxDataObjectSize",
"OrphanedChildrenPrunedOnDelete",
"SupportsGet",
"SupportsPut",
"SupportsDelete",
"MaxSecondaryIndexCount"
]
}
23.4 EndpointCapabilityKind
Parameters that are applicable to an endpoint, in any protocol where it makes sense. EXAMPLES:
MaxWebSocketFramePayloadSize—the maximum size for a WebSocket frame that an endpoint can
handle—applies to all ETP protocols that are implemented by the endpoint.
For each parameter, the table below lists the parameter keyword (endpoint capability), description (which
includes, units/unit values, default, as applicable), and data type.
For more information about capabilities and how they work, see Section 3.3.
ActiveTimeoutPeriod
Description: The minimum time period in seconds that a store keeps the active status (activeStatus field in ETP) for a data object as "active", after the most recent update causing the data object's active status to be set to true. For growing data objects, this is any change to its parts. For channels, this is any change to its data points.
This capability can be set for an endpoint and/or for a data object. A data object-specific value overrides an endpoint-specific value.
Units/Value units: seconds, <number of seconds>
Min: 60 seconds. Default: 3,600 seconds.
Data Type: long

AuthorizationDetails
Description:
1. Contains an ArrayOfString with WWW-Authenticate style challenges.
2. To support the required authorization workflow (to enable an endpoint to acquire an access token with the necessary scope from the designated authorization server), the AuthorizationDetails endpoint capability MUST include at least one challenge with the Bearer scheme, which must include the 'authz_server' and 'scope' parameters.
   a. The 'authz_server' parameter MUST be a URI for an authorization server to enable the endpoint to acquire any other needed metadata about the authorization server using OpenID Connect Discovery.
Data Type: ArrayOfString
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "EndpointCapabilityKind",
"symbols":
[
"ActiveTimeoutPeriod",
"AuthorizationDetails",
"ChangePropagationPeriod",
"ChangeRetentionPeriod",
"MaxConcurrentMultipart",
"MaxDataObjectSize",
"MaxPartSize",
"MaxSessionClientCount",
"MaxSessionGlobalCount",
"MaxWebSocketFramePayloadSize",
"MaxWebSocketMessagePayloadSize",
"MultipartMessageTimeoutPeriod",
"ResponseTimeoutPeriod",
"SupportsAlternateRequestUris",
"SupportsMessageHeaderExtensions",
"RequestSessionTimeoutPeriod",
"SessionEstablishmentTimeoutPeriod"
  ]
}
23.5 ProtocolCapabilityKind
Parameters that are defined by ETP for use by either endpoint (role) for use in individual protocols to help
prevent aberrant behavior (e.g., sending oversized messages or sending more messages than an
endpoint can handle).
For each parameter, the table below lists the parameter keyword (protocol capability), description (which
includes, units/unit values, default, as applicable), and data type.
For more information about capabilities and how they work, see Section 3.3.
FrameChangeDetectionPeriod
Description: The maximum time period in seconds for updates to a channel to be visible in ChannelDataFrame (Protocol 2). Updates to channels are not guaranteed to be visible in responses in less than this period. (EXAMPLE: If your requested range includes rows that just received new data, the store may not return those rows. The store may be allowing time to potentially receive additional values for the rows before including them in responses.)
The intent for this capability is that rows in ChannelDataFrame messages are complete, and not 'partially updated'. ChannelDataFrame (Protocol 2) should not be used to poll for realtime data.
Units/Value Units: seconds, <number of seconds>
Min: 1 second. Max: 600 seconds. Default: 60 seconds.
Data Type: long

MaxDataArraySize
Description: The maximum size in bytes of a data array allowed in a store. Size in bytes is the product of all array dimensions multiplied by the size in bytes of a single array element.
Units/Value Units: bytes, <number of bytes>
Min: 100,000 bytes
Data Type: long

MaxDataObjectSize
Description: The maximum size in bytes of a data object allowed in a complete multipart message. Size in bytes is the size in bytes of the uncompressed string representation of the data object in the format in which it is sent or received.
This capability can be set for an endpoint, a protocol, and/or a data object. If set for all three, here is how they generally work:
- An object-specific value overrides an endpoint-specific value.
- A protocol-specific value can further lower (but NOT raise) the limit for the protocol.
EXAMPLE: A store may wish to generally support sending and receiving any data object that is one megabyte or less, with the exceptions of Wells that are 100 kilobytes or less and Attachments that are 5 megabytes or less. A store may further wish to limit the size of any data object sent as part of a notification in StoreNotification (Protocol 5) to 256 kilobytes.
Units/Value Units: bytes, <number of bytes>
Min: 100,000 bytes
Data Type: long

MaxFrameResponseRowCount
Description: The maximum total count of rows allowed in a complete multipart message response to a single request.
Units/Value Units: count, <count of rows>
Min: 100,000 rows
Data Type: long
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ProtocolCapabilityKind",
"symbols":
[
"FrameChangeDetectionPeriod",
"MaxDataArraySize",
"MaxDataObjectSize",
"MaxFrameResponseRowCount",
"MaxIndexCount",
"MaxRangeChannelCount",
"MaxRangeDataItemCount",
"MaxResponseCount",
"MaxStreamingChannelsSessionCount",
"MaxSubscriptionSessionCount",
"MaxTransactionCount",
"TransactionTimeoutPeriod",
"SupportsSecondaryIndexFiltering"
]
}
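The MaxDataArraySize capability listed above defines array size as the product of all array dimensions multiplied by the size of a single element, which can be computed directly. A sketch; the element sizes below are illustrative assumptions for fixed-size transport types (arrayOfInt and arrayOfLong use variable-length Avro encoding and are excluded):

```python
from math import prod

# Illustrative element sizes in bytes for fixed-size transport array types.
ELEMENT_SIZE = {"arrayOfFloat": 4, "arrayOfDouble": 8, "arrayOfBoolean": 1}

def data_array_size_bytes(dimensions, element_size):
    """Size in bytes: product of all dimensions times one element's size."""
    return prod(dimensions) * element_size

def fits(dimensions, element_size, max_data_array_size):
    """Check an array against a store's MaxDataArraySize value."""
    return data_array_size_bytes(dimensions, element_size) <= max_data_array_size

# A 1000 x 500 array of doubles is 4,000,000 bytes.
assert data_array_size_bytes([1000, 500], ELEMENT_SIZE["arrayOfDouble"]) == 4_000_000
assert not fits([1000, 500], 8, max_data_array_size=1_000_000)
```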
Avro Schema
{
"type": "fixed",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "Uuid",
"size": 16
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ArrayOfBoolean",
"fields":
[
{
"name": "values",
"type": { "type": "array", "items": "boolean" }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ArrayOfNullableBoolean",
"fields":
[
{
"name": "values",
"type": { "type": "array", "items": ["null", "boolean"] }
}
]
}
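The difference between ArrayOfBoolean and ArrayOfNullableBoolean is that the item union ["null", "boolean"] lets an individual slot be absent. An in-memory sketch, with Python's None standing in for Avro null:

```python
# Each slot is either null or a boolean; null marks a missing value
# without shortening the array or inventing a sentinel value.
array_of_nullable_boolean = {"values": [True, None, False]}

present = [v for v in array_of_nullable_boolean["values"] if v is not None]
assert present == [True, False]
assert len(array_of_nullable_boolean["values"]) == 3  # positions are preserved
```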
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ArrayOfInt",
"fields":
[
{
"name": "values",
"type": { "type": "array", "items": "int" }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ArrayOfNullableInt",
"fields":
[
{
"name": "values",
"type": { "type": "array", "items": ["null", "int"] }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ArrayOfLong",
"fields":
[
{
"name": "values",
"type": { "type": "array", "items": "long" }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ArrayOfNullableLong",
"fields":
[
{
"name": "values",
"type": { "type": "array", "items": ["null", "long"] }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ArrayOfFloat",
"fields":
[
{
"name": "values",
"type": { "type": "array", "items": "float" }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ArrayOfDouble",
"fields":
[
{
"name": "values",
"type": { "type": "array", "items": "double" }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ArrayOfString",
"fields":
[
{
"name": "values",
"type": { "type": "array", "items": "string" }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ArrayOfBytes",
"fields":
[
{
"name": "values",
"type": { "type": "array", "items": "bytes" }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "AnySparseArray",
"fields":
[
{
"name": "slices",
"type": { "type": "array", "items": "Energistics.Etp.v12.Datatypes.AnySubarray" }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "AnySubarray",
"fields":
[
{ "name": "start", "type": "long" },
{ "name": "slice", "type": "Energistics.Etp.v12.Datatypes.AnyArray" }
]
}
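AnySparseArray represents an array as a list of AnySubarray slices, each placed at a start offset. Below is a sketch (not part of ETP) of materializing such slices into a flat one-dimensional array; the fill value for unpopulated positions is an assumption:

```python
def materialize(slices, total_length, fill=None):
    """Expand AnySparseArray-style slices into a flat list of length total_length."""
    out = [fill] * total_length
    for sub in slices:
        start, values = sub["start"], sub["slice"]
        out[start:start + len(values)] = values
    return out

result = materialize(
    [{"start": 0, "slice": [1, 2]}, {"start": 4, "slice": [9]}],
    total_length=6,
)
assert result == [1, 2, None, None, 9, None]
```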
This record is NOT part of any ETP message. It is used for pre-ETP-session server discovery. Beginning
with ETP v1.2, servers MUST support this record: if a client requests the ServerCapabilities record, the
server MUST provide it.
For more information about how the ServerCapabilities is exchanged and used, see Section 4.3.
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ServerCapabilities",
"fields":
[
{ "name": "applicationName", "type": "string" },
{ "name": "applicationVersion", "type": "string" },
{ "name": "contactInformation", "type": "Energistics.Etp.v12.Datatypes.Contact" },
{
"name": "supportedCompression",
"type": { "type": "array", "items": "string" }, "default": []
},
{
"name": "supportedEncodings",
"type": { "type": "array", "items": "string" }, "default": ["binary"]
},
{
"name": "supportedFormats",
"type": { "type": "array", "items": "string" }, "default": ["xml"]
},
{
"name": "supportedDataObjects",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.SupportedDataObject" }
},
{
"name": "supportedProtocols",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.SupportedProtocol" }
},
{
"name": "endpointCapabilities",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
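Because ServerCapabilities is exchanged before any ETP session exists (see Section 4.3), a client typically fetches it over plain HTTP(S). A sketch of building such a request URL; the .well-known path and GetVersion query key follow the common ETP v1.2 convention but should be checked against Section 4.3 and your server's documentation:

```python
from urllib.parse import urlencode

def server_capabilities_url(base_url, etp_version="etp12"):
    """Build the pre-session discovery URL for a server's ServerCapabilities."""
    query = urlencode({"GetVersion": etp_version})
    return f"{base_url.rstrip('/')}/.well-known/etp-server-capabilities?{query}"

url = server_capabilities_url("https://example.com")
assert url == "https://example.com/.well-known/etp-server-capabilities?GetVersion=etp12"
```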
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "SupportedDataObject",
"fields":
[
{ "name": "qualifiedType", "type": "string" },
{
"name": "dataObjectCapabilities",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "SupportedProtocol",
"fields":
[
{ "name": "protocol", "type": "int" },
{ "name": "protocolVersion", "type": "Energistics.Etp.v12.Datatypes.Version" },
{ "name": "role", "type": "string" },
{
"name": "protocolCapabilities",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "Version",
"fields":
[
{ "name": "major", "type": "int", "default": 0 },
{ "name": "minor", "type": "int", "default": 0 },
{ "name": "revision", "type": "int", "default": 0 },
{ "name": "patch", "type": "int", "default": 0 }
]
}
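The four integer fields of Version compare naturally field by field, major first. A small illustrative helper (not part of ETP):

```python
def version_key(v):
    """Sort key for an ETP Version record: compare major, then minor, revision, patch."""
    return (v["major"], v["minor"], v["revision"], v["patch"])

v11 = {"major": 1, "minor": 1, "revision": 0, "patch": 0}
v12 = {"major": 1, "minor": 2, "revision": 0, "patch": 0}
assert version_key(v11) < version_key(v12)
```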
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "DataAttribute",
"fields":
[
{ "name": "attributeId", "type": "int" },
{ "name": "attributeValue", "type": "Energistics.Etp.v12.Datatypes.DataValue" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "AttributeMetadataRecord",
"fields":
[
{ "name": "attributeId", "type": "int" },
{ "name": "attributeName", "type": "string" },
{ "name": "dataKind", "type":
"Energistics.Etp.v12.Datatypes.ChannelData.ChannelDataKind" },
{ "name": "uom", "type": "string" },
{ "name": "depthDatum", "type": "string" },
{ "name": "attributePropertyKindUri", "type": "string" },
{
"name": "axisVectorLengths",
"type": { "type": "array", "items": "int" }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "MessageHeader",
"fields":
[
{ "name": "protocol", "type": "int" },
{ "name": "messageType", "type": "int" },
{ "name": "correlationId", "type": "long" },
{ "name": "messageId", "type": "long" },
{ "name": "messageFlags", "type": "int" }
]
}
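In ETP, a response's correlationId carries the messageId of the request it answers, which is how responses (including multipart responses) are matched back to requests. A minimal pairing check; the record values below are invented for illustration:

```python
# A request and the response that answers it; field values are made up.
request = {"protocol": 3, "messageType": 1, "correlationId": 0,
           "messageId": 2, "messageFlags": 0}
response = {"protocol": 3, "messageType": 2, "correlationId": 2,
            "messageId": 3, "messageFlags": 0}

def responds_to(resp, req):
    """True if resp's correlationId points back at req's messageId."""
    return resp["correlationId"] == req["messageId"]

assert responds_to(response, request)
```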
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "MessageHeaderExtension",
"fields":
[
{
"name": "extension",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "Contact",
"fields":
[
{ "name": "organizationName", "type": "string", "default": "" },
{ "name": "contactName", "type": "string", "default": "" },
{ "name": "contactPhone", "type": "string", "default": "" },
{ "name": "contactEmail", "type": "string", "default": "" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes",
"name": "ErrorInfo",
"fields":
[
{ "name": "message", "type": "string" },
{ "name": "code", "type": "int" }
]
}
23.32 DataArrayTypes
This section contains low-level types used for DataArray (Protocol 9).
class DataArrayTypes

«record» DataArray
+ data: AnyArray
+ dimensions: long [1..*] (array)
A record that contains the dimensions of the array and its data.

«record» DataArrayIdentifier
+ uri: string
+ pathInResource: string
A record that contains fields to identify the URI of the resource and the path in that resource, to identify
and find an array.

«record» DataArrayMetadata
+ dimensions: long [1..*] (array)
+ preferredSubarrayDimensions: long [0..*] (array) = EmptyArray
+ transportArrayType: AnyArrayType
+ logicalArrayType: AnyLogicalArrayType
+ storeCreated: long
+ storeLastWrite: long
+ customData: DataValue [0..*] (map) = EmptyMap
A record that contains fields for metadata to help interpret and understand the data in an array
(DataArray).

«record» PutDataArraysType
+ uid: DataArrayIdentifier
+ array: DataArray
+ customData: DataValue [0..*] (map) = EmptyMap
A record that contains the fields required to put data arrays.

«record» GetDataSubarraysType
+ uid: DataArrayIdentifier
+ starts: long [0..*] (array) = EmptyArray
+ counts: long [0..*] (array) = EmptyArray
A record that contains the fields required to get sub-arrays.

«record» PutDataSubarraysType
+ uid: DataArrayIdentifier
+ data: AnyArray
+ starts: long [0..*] (array)
+ counts: long [0..*] (array)
A record that contains the fields of data needed to put a sub-array.

«record» PutUninitializedDataArrayType
+ uid: DataArrayIdentifier
+ metadata: DataArrayMetadata
A record that contains the fields required to put an uninitialized array.
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.DataArrayTypes",
"name": "DataArray",
"fields":
[
{
"name": "dimensions",
"type": { "type": "array", "items": "long" }
},
{ "name": "data", "type": "Energistics.Etp.v12.Datatypes.AnyArray" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.DataArrayTypes",
"name": "DataArrayMetadata",
"fields":
[
{
"name": "dimensions",
"type": { "type": "array", "items": "long" }
},
{
"name": "preferredSubarrayDimensions",
"type": { "type": "array", "items": "long" }, "default": []
},
{ "name": "transportArrayType", "type": "Energistics.Etp.v12.Datatypes.AnyArrayType"
},
{ "name": "logicalArrayType", "type":
"Energistics.Etp.v12.Datatypes.AnyLogicalArrayType" },
{ "name": "storeLastWrite", "type": "long" },
{ "name": "storeCreated", "type": "long" },
{
"name": "customData",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.DataArrayTypes",
"name": "DataArrayIdentifier",
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "pathInResource", "type": "string" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.DataArrayTypes",
"name": "GetDataSubarraysType",
"fields":
[
{ "name": "uid", "type":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.DataArrayIdentifier" },
{
"name": "starts",
"type": { "type": "array", "items": "long" }, "default": []
},
{
"name": "counts",
"type": { "type": "array", "items": "long" }, "default": []
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.DataArrayTypes",
"name": "PutDataArraysType",
"fields":
[
{ "name": "uid", "type":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.DataArrayIdentifier" },
{ "name": "array", "type": "Energistics.Etp.v12.Datatypes.DataArrayTypes.DataArray" },
{
"name": "customData",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.DataArrayTypes",
"name": "PutUninitializedDataArrayType",
"fields":
[
{ "name": "uid", "type":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.DataArrayIdentifier" },
{ "name": "metadata", "type":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.DataArrayMetadata" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.DataArrayTypes",
"name": "PutDataSubarraysType",
"fields":
[
{ "name": "uid", "type":
"Energistics.Etp.v12.Datatypes.DataArrayTypes.DataArrayIdentifier" },
{ "name": "data", "type": "Energistics.Etp.v12.Datatypes.AnyArray" },
{
"name": "starts",
"type": { "type": "array", "items": "long" }
},
{
"name": "counts",
"type": { "type": "array", "items": "long" }
}
]
}
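In both GetDataSubarraysType and PutDataSubarraysType, starts gives the origin of the selected region along each dimension and counts gives its extent. A sketch against a 2-dimensional array (illustrative only; ETP transports array data flattened, with dimensions carried explicitly):

```python
def select_subarray(array2d, starts, counts):
    """Select the region of a 2-D array described by per-dimension starts/counts."""
    r0, c0 = starts
    nr, nc = counts
    return [row[c0:c0 + nc] for row in array2d[r0:r0 + nr]]

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]

# A 2x2 region whose origin is row 1, column 0.
assert select_subarray(grid, starts=[1, 0], counts=[2, 2]) == [[4, 5], [7, 8]]
```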
23.33 ChannelData
This section contains low-level types used for protocols that stream and handle historical channel data,
which include:
ChannelStreaming (Protocol 1)
ChannelDataFrame (Protocol 2)
ChannelSubscribe (Protocol 21)
ChannelDataLoad (Protocol 22)
23.33.1 ChannelDataKind
An enumeration that lists the possible kinds of data in a Channel data object as specified in its
ChannelMetadataRecord. It is a union of relevant logical index kinds (see ChannelIndexKind) and Avro
primitives (i.e., the list from DataValue, excluding arrays).
NOTE: Channel data may also be an ARRAY of the Avro types listed below. If it is an array, the
axisVectorLengths field in the ChannelMetadataRecord must be populated so that the array can be
correctly interpreted.
Channel Data Kind Description Data Type
DateTime Each value for channel data is a timestamp. string
The actual channel data is a UTC dateTime value,
serialized as a long, using the Avro logical type
timestamp-micros (microseconds from the Unix Epoch,
1 January 1970 00:00:00.000000 UTC).
ElapsedTime Each value for channel data is an elapsed time. string
The actual channel data is the number of microseconds
from zero and is an Avro long.
NOTE:
1. This value is NOT related to any time datum.
2. The index UOM MUST be set to "us".
EXAMPLE elapsed time use case: Engine hours for
equipment, which is how long the equipment has been
running.
MeasuredDepth Each value for channel data represents a measured string
depth (MD).
PassIndexedDepth Each value for channel data represents a pass indexed string
depth.
TrueVerticalDepth Each value for channel data represents a true vertical string
depth (TVD).
typeBoolean Each value for channel data is an Avro boolean. string
typeInt Each value for channel data is an Avro int. string
typeLong Each value for channel data is an Avro long. string
typeFloat Each value for channel data is an Avro float. string
typeDouble Each value for channel data is an Avro double. string
typeString Each value for channel data is an Avro string. string
typeBytes Each value for channel data is Avro bytes. string
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "ChannelDataKind",
"symbols":
[
"DateTime",
"ElapsedTime",
"MeasuredDepth",
"PassIndexedDepth",
"TrueVerticalDepth",
"typeBoolean",
"typeInt",
"typeLong",
"typeFloat",
"typeDouble",
"typeString",
"typeBytes"
]
}
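As noted above, DateTime channel values are serialized as Avro timestamp-micros: a long count of microseconds since the Unix epoch, UTC. A sketch of the conversion:

```python
from datetime import datetime, timezone

def to_timestamp_micros(dt):
    """Convert an aware datetime to Avro timestamp-micros (microseconds since the Unix epoch, UTC)."""
    return int(dt.timestamp() * 1_000_000)

epoch_plus_one_second = datetime(1970, 1, 1, 0, 0, 1, tzinfo=timezone.utc)
assert to_timestamp_micros(epoch_plus_one_second) == 1_000_000
```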
23.33.2 ChannelIndexKind
An enumeration that lists the possible kinds of indexes in a Channel as specified in its
ChannelMetadataRecord and IndexMetadataRecord.
It indicates the kind of index, so that the index value can be correctly interpreted/understood.
NOTES:
1. Index units of measure and datum (if used) are also specified in the ChannelMetadataRecord.
2. ChannelIndexKind is also used by GrowingObject (Protocol 7). However, for growing objects,
ChannelIndexKind MUST be "time" or "depth" only.
Channel Index Kind Description Data Type
DateTime The index for the channel is a timestamp.
Each actual index value is a UTC dateTime value, serialized as a
long, using the Avro logical type timestamp-micros (microseconds
from the Unix Epoch, 1 January 1970 00:00:00.000000 UTC).
ElapsedTime The index for the channel is an elapsed time.
Each actual index value is the number of microseconds from zero
and is an Avro long.
NOTES:
1. This value is NOT related to any time datum.
2. The index UOM MUST be set to "us".
EXAMPLE elapsed time use case: Engine hours for equipment,
which is how long the equipment has been running.
MeasuredDepth The index of the channel is a measured depth (MD).
TrueVerticalDepth The index of the channel is a true vertical depth (TVD).
PassIndexedDepth The index of the channel is a pass indexed depth.
Pressure The index of the channel is a pressure.
Temperature The index of a channel is a temperature.
Scalar The index of the channel represents values that are temperature or
pressure. It indicates that the index is of type Avro double.
NOTES:
1. Even if the index values are integer numbers, the index values
MUST be sent as Avro doubles.
2. Optionally, you may specify a datum (in the
ChannelMetadataRecord).
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "ChannelIndexKind",
"symbols":
[
"DateTime",
"ElapsedTime",
"MeasuredDepth",
"TrueVerticalDepth",
"PassIndexedDepth",
"Pressure",
"Temperature",
"Scalar"
]
}
23.33.3 IndexDirection
The possible values for the direction of an index. This field describes the CURRENT sort order of the
indexes; PassDirection describes the absolute order of the indexes.
Field Name Description Data Type Min Max
Increasing The index values increase. 1 1
Decreasing The index values decrease. 1 1
Unordered The index values are unordered. 1 1
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "IndexDirection",
"symbols":
[
"Increasing",
"Decreasing",
"Unordered"
]
}
23.33.4 PassDirection
The possible values for the direction of a pass in a wireline operation. It defines the absolute ordering for
PassIndexedDepth data (compared to IndexDirection which is the current sort order).
Field Name Description Data Type Min Max
Up The wireline tool is moving up in the 1 1
hole/wellbore.
HoldingSteady The wireline tool is not moving in the 1 1
hole/wellbore. NOTE: This MUST NOT be used
for primary indexes. This value ONLY applies to
secondary indexes.
Down The wireline tool is moving down in the 1 1
hole/wellbore.
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "PassDirection",
"symbols":
[
"Up",
"HoldingSteady",
"Down"
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "DataItem",
"fields":
[
{ "name": "channelId", "type": "long" },
{
"name": "indexes",
"type": { "type": "array", "items": "Energistics.Etp.v12.Datatypes.IndexValue" },
"default": []
},
{ "name": "value", "type": "Energistics.Etp.v12.Datatypes.DataValue" },
{
"name": "valueAttributes",
"type": { "type": "array", "items": "Energistics.Etp.v12.Datatypes.DataAttribute" },
"default": []
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "IndexMetadataRecord",
"fields":
[
{ "name": "indexKind", "type":
"Energistics.Etp.v12.Datatypes.ChannelData.ChannelIndexKind", "default": "DateTime" },
{ "name": "interval", "type": "Energistics.Etp.v12.Datatypes.Object.IndexInterval" },
{ "name": "direction", "type":
"Energistics.Etp.v12.Datatypes.ChannelData.IndexDirection", "default": "Increasing" },
{ "name": "name", "type": "string", "default": "" },
{ "name": "uom", "type": "string" },
{ "name": "depthDatum", "type": "string", "default": "" },
{ "name": "indexPropertyKindUri", "type": "string" },
{ "name": "filterable", "type": "boolean", "default": true }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "ChannelMetadataRecord",
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "id", "type": "long" },
{
"name": "indexes",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.IndexMetadataRecord" }
},
{ "name": "channelName", "type": "string" },
{ "name": "dataKind", "type":
"Energistics.Etp.v12.Datatypes.ChannelData.ChannelDataKind" },
{ "name": "uom", "type": "string" },
{ "name": "depthDatum", "type": "string" },
{ "name": "channelClassUri", "type": "string" },
{ "name": "status", "type": "Energistics.Etp.v12.Datatypes.Object.ActiveStatusKind" },
{ "name": "source", "type": "string" },
{
"name": "axisVectorLengths",
"type": { "type": "array", "items": "int" }
},
{
"name": "attributeMetadata",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.AttributeMetadataRecord" }, "default": []
},
{
"name": "customData",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "ChannelRangeInfo",
"fields":
[
{
"name": "channelIds",
"type": { "type": "array", "items": "long" }
},
{ "name": "interval", "type": "Energistics.Etp.v12.Datatypes.Object.IndexInterval" },
{
"name": "secondaryIntervals",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.IndexInterval" }, "default": []
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "ChannelSubscribeInfo",
"fields":
[
{ "name": "channelId", "type": "long" },
{ "name": "startIndex", "type": "Energistics.Etp.v12.Datatypes.IndexValue" },
{ "name": "dataChanges", "type": "boolean", "default": true },
{ "name": "requestLatestIndexCount", "type": ["null", "int"] }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "OpenChannelInfo",
"fields":
[
{ "name": "metadata", "type":
"Energistics.Etp.v12.Datatypes.ChannelData.ChannelMetadataRecord" },
{ "name": "preferRealtime", "type": "boolean", "default": true },
{ "name": "dataChanges", "type": "boolean", "default": true }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "FrameChannelMetadataRecord",
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "channelName", "type": "string" },
{ "name": "dataKind", "type":
"Energistics.Etp.v12.Datatypes.ChannelData.ChannelDataKind" },
{ "name": "uom", "type": "string" },
{ "name": "depthDatum", "type": "string" },
{ "name": "channelPropertyKindUri", "type": "string" },
{ "name": "status", "type": "Energistics.Etp.v12.Datatypes.Object.ActiveStatusKind" },
{ "name": "source", "type": "string" },
{
"name": "axisVectorLengths",
"type": { "type": "array", "items": "int" }
},
{
"name": "attributeMetadata",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.AttributeMetadataRecord" }, "default": []
},
{
"name": "customData",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "FramePoint",
"fields":
[
{ "name": "value", "type": "Energistics.Etp.v12.Datatypes.DataValue" },
{
"name": "valueAttributes",
"type": { "type": "array", "items": "Energistics.Etp.v12.Datatypes.DataAttribute"
}, "default": []
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "FrameRow",
"fields":
[
{
"name": "indexes",
"type": { "type": "array", "items": "Energistics.Etp.v12.Datatypes.IndexValue" }
},
{
"name": "points",
"type": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.ChannelData.FramePoint" }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "TruncateInfo",
"fields":
[
{ "name": "channelId", "type": "long" },
{ "name": "newEndIndex", "type": "Energistics.Etp.v12.Datatypes.IndexValue" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "ChannelChangeRequestInfo",
"fields":
[
{ "name": "sinceChangeTime", "type": "long" },
{
"name": "channelIds",
"type": { "type": "array", "items": "long" }
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.ChannelData",
"name": "PassIndexedDepth",
"fields":
[
{ "name": "pass", "type": "long" },
{ "name": "direction", "type":
"Energistics.Etp.v12.Datatypes.ChannelData.PassDirection" },
{ "name": "depth", "type": "double" }
]
}
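PassIndexedDepth combines a pass number, a PassDirection, and a depth. One way to derive an absolute ordering, sketched under the assumption that values order first by pass number and that depth increases on Down passes and decreases on Up passes (see the PassDirection discussion in Section 23.33.4):

```python
def pass_indexed_depth_key(pid):
    """Absolute sort key: pass number first, then depth in the pass's direction."""
    depth = pid["depth"] if pid["direction"] == "Down" else -pid["depth"]
    return (pid["pass"], depth)

points = [
    {"pass": 1, "direction": "Up", "depth": 500.0},
    {"pass": 1, "direction": "Up", "depth": 900.0},
    {"pass": 2, "direction": "Down", "depth": 400.0},
]
ordered = sorted(points, key=pass_indexed_depth_key)
assert ordered[0]["depth"] == 900.0  # the Up pass starts deep and moves up the hole
```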
23.34 Object
This section contains datatypes for working with data objects. These datatypes are used by Discovery
(Protocol 3), Store (Protocol 4), Store Notification (Protocol 5), GrowingObject (Protocol 6),
GrowingObjectNotification (Protocol 7), StoreQuery (Protocol 14), and GrowingObjectQuery (Protocol 16).
«record» Edge
+ sourceUri: string
+ targetUri: string
+ relationshipKind: RelationshipKind
+ customData: DataValue [0..*] (map) = EmptyMap
Record that contains the information to define an edge between 2 nodes in a graph data model.

«record» PartsMetadataInfo
+ uri: string
+ name: string
+ index: IndexMetadataRecord
+ customData: DataValue [0..*] (map) = EmptyMap
Record to carry metadata about an ObjectPart, which helps to interpret and understand the data in the
ObjectPart of a growing data object.

«record» DeletedResource
+ uri: string
+ deletedTime: long
+ customData: DataValue [0..*] (map) = EmptyMap
Record for data fields retained for deleted data objects (tombstones).
NOTE: The fields on DeletedResource are a subset of the fields on the Resource record and include the
fields most likely to be retained for a deleted object plus customData (which the store may use to send
any custom or additional information).

«enumeration» ContextScopeKind
self = 0
sources = 1
targets = 2
sourcesOrSelf = 3
targetsOrSelf = 4
Energistics data models can be considered directed graphs. (For more information on this concept, see
Section 8.1.1.) For certain ETP operations (such as Discovery (Protocol 3), notifications
(StoreNotification (Protocol 5) and GrowingObjectNotification (Protocol 7)), and others), you must
specify a "context" (ContextInfo), which simplistically is where in the data model (at what node/data
object) you want to start the operation and what direction you want to navigate. ContextScopeKind lets
you specify the "direction" in the graph that you want the operation to navigate.
NOTE: If contextScopeKind = "self" then depth in ContextInfo is ignored.

«enumeration» ActiveStatusKind
Active
Inactive
Enumeration of possible channel or growing data object statuses. Statuses are mapped from domain
data objects, such as wellbores, channels, and growing data objects.
23.34.1 ActiveStatusKind
Enumeration of possible channel or growing data object statuses. Statuses are mapped from domain data
objects, such as wellbores, channels, and growing data objects.
Active Status Description Data Type
Active The data object is currently producing data points. Same as
ObjectGrowing = true in WITSML 1.x.
Inactive The data object is not currently producing data points. Same as
ObjectGrowing = false in WITSML 1.x.
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "ActiveStatusKind",
"symbols":
[
"Active",
"Inactive"
]
}
23.34.2 RelationshipKind
Energistics data models can be considered directed graphs. (For more information on this concept, see
Section 8.1.1).
For discovery and notification operations, a customer can specify the kinds of relationship it wants to be
included.
Relationship Description Data Type
Primary The nature of a Primary relationship has to do with organizing or
grouping data objects, for example organizing Channels into
ChannelSets or organizing ChannelSets into Logs.
Characteristics of a Primary relationship:
One end of the relationship is almost always mandatory; that
is, one object cannot exist (as a data object in the system)
without the other. In the above example: A ChannelSet
cannot exist without at least 1 Channel.
In Energistics data models, a ByValue relationship is
ALWAYS organizational. NOTE: A ByValue relationship is
one where one data object "contains" one or more other data
objects, indicated with the ByValue construct in XML, such as
ChannelSets containing Channels.
Secondary Secondary relationships provide additional contextual information
about a data object, to improve understanding. For example, the
reference from a Channel to a Wellbore.
Characteristics of a Secondary relationship:
Both ends of the relationship are usually optional.
It is always specified using the Energistics Data Object
Reference (DOR) construct (never the ByValue construct).
For more information about DORs, see Energistics Online.
Both Refers to both Primary and Secondary relationships.
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "RelationshipKind",
"symbols":
[
"Primary",
"Secondary",
"Both"
]
}
23.34.3 ContextScopeKind
Energistics data models can be considered directed graphs. (For more information on this concept, see
Section 8.1.1).
For certain ETP operations (such as Discovery (Protocol 3), notifications (StoreNotification (Protocol
5) and GrowingObjectNotification (Protocol 7)), and others), you must specify a "context" (ContextInfo),
which simplistically is where in the data model (at what node/data object) you want to start the operation
and what direction you want to navigate.
ContextScopeKind lets you specify the "direction" in the graph that you want the operation to navigate.
NOTE: If contextScopeKind = "self" then depth in ContextInfo is ignored.
Context Scope Description Data Type
self The data object as specified in the context URI. int
If contextScopeKind = "self", then depth in ContextInfo is ignored.
sources For a complete definition of sources, see Section 8.1.1. int
targets For a complete definition of targets, see Section 8.1.1. int
sourcesOrSelf Those objects in the data model that are sources of self, plus self itself int
(the data object referred to by the URI in ContextInfo).
For a complete definition of sources, see Section 8.1.1.
targetsOrSelf Those objects in the data model that are targets of self, plus self itself int
(the data object referred to by the URI in ContextInfo).
For a complete definition of targets, see Section 8.1.1.
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "ContextScopeKind",
"symbols":
[
"self",
"sources",
"targets",
"sourcesOrSelf",
"targetsOrSelf"
]
}
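Treating the store's data objects as a directed graph, the five scope kinds select different neighborhoods of the context URI. A toy sketch; the graph, its edge direction, and the node names are invented for illustration, and real source/target semantics are defined in Section 8.1.1:

```python
# Toy directed graph: each node maps to the nodes it points at ("targets").
edges = {"well": ["wellbore"], "wellbore": ["log"], "log": []}

def sources_of(node):
    """Nodes that point at the given node."""
    return [n for n, targets in edges.items() if node in targets]

def resolve(uri, scope):
    """Return the data objects selected by a ContextScopeKind value."""
    if scope == "self":
        return [uri]
    if scope == "targets":
        return edges[uri]
    if scope == "sources":
        return sources_of(uri)
    if scope == "targetsOrSelf":
        return [uri] + edges[uri]
    if scope == "sourcesOrSelf":
        return [uri] + sources_of(uri)

assert resolve("wellbore", "targets") == ["log"]
assert resolve("wellbore", "sourcesOrSelf") == ["wellbore", "well"]
```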
23.34.4 ObjectChangeKind
Enumeration of the kinds of change that can be supplied in a notification record. Although the Store
protocol uses upsert semantics for PutObject, a notification record specifies whether an object was
created or replaced, so that a customer can distinguish the actual type of change that occurred in the
store. If a server cannot determine whether a change was an "insert" or an "update", it must use "update".
Object Change Description Data Type
insert Object has been inserted (or added) to a store. int
update The object has been updated in the store or the store cannot int
determine if the object has been inserted or updated.
authorized A user has been authorized (given permissions) to a data object. int
joined A data object now references another data object with a ByValue int
reference. The contained object is said to be "joined" to the
container object. For more information, about containers and
contained objects, see Section 9.1.3.
unjoined A contained data object has been "removed" from its container int
data object. The contained object is said to be "unjoined" from the
container object. For more information, about containers and
contained objects, see Section 9.1.3.
joinedSubscription A data object has been added to the scope and context of a int
StoreNotification (Protocol 5) subscription.
unjoinedSubscription A data object has been removed from the scope and context of a int
StoreNotification (Protocol 5) subscription.
Avro Source
{
"type": "enum",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "ObjectChangeKind",
"symbols":
[
"insert",
"update",
"authorized",
"joined",
"unjoined",
"joinedSubscription",
"unjoinedSubscription"
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "DataObject",
"fields":
[
{ "name": "resource", "type": "Energistics.Etp.v12.Datatypes.Object.Resource" },
{ "name": "format", "type": "string", "default": "xml" },
{ "name": "blobId", "type": ["null", "Energistics.Etp.v12.Datatypes.Uuid"] },
{ "name": "data", "type": "bytes", "default": "" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "ObjectPart",
"fields":
[
{ "name": "uid", "type": "string" },
{ "name": "data", "type": "bytes" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "ObjectChange",
"fields":
[
{ "name": "changeKind", "type":
"Energistics.Etp.v12.Datatypes.Object.ObjectChangeKind" },
{ "name": "changeTime", "type": "long" },
{ "name": "dataObject", "type": "Energistics.Etp.v12.Datatypes.Object.DataObject" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "IndexInterval",
"fields":
[
{ "name": "startIndex", "type": "Energistics.Etp.v12.Datatypes.IndexValue" },
{ "name": "endIndex", "type": "Energistics.Etp.v12.Datatypes.IndexValue" },
{ "name": "uom", "type": "string" },
{ "name": "depthDatum", "type": "string", "default": "" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "PutResponse",
"fields":
[
{
"name": "createdContainedObjectUris",
"type": { "type": "array", "items": "string" }, "default": []
},
{
"name": "deletedContainedObjectUris",
"type": { "type": "array", "items": "string" }, "default": []
},
{
"name": "joinedContainedObjectUris",
"type": { "type": "array", "items": "string" }, "default": []
},
{
"name": "unjoinedContainedObjectUris",
"type": { "type": "array", "items": "string" }, "default": []
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "Dataspace",
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "path", "type": "string", "default": "" },
{ "name": "storeLastWrite", "type": "long" },
{ "name": "storeCreated", "type": "long" },
{
"name": "customData",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "Resource",
"fields":
[
{ "name": "uri", "type": "string" },
{
"name": "alternateUris",
"type": { "type": "array", "items": "string" }, "default": []
},
{ "name": "name", "type": "string" },
{ "name": "sourceCount", "type": ["null", "int"], "default": null },
{ "name": "targetCount", "type": ["null", "int"], "default": null },
{ "name": "lastChanged", "type": "long" },
{ "name": "storeLastWrite", "type": "long" },
{ "name": "storeCreated", "type": "long" },
{ "name": "activeStatus", "type":
"Energistics.Etp.v12.Datatypes.Object.ActiveStatusKind" },
{
"name": "customData",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "DeletedResource",
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "deletedTime", "type": "long" },
{
"name": "customData",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "Edge",
"fields":
[
{ "name": "sourceUri", "type": "string" },
{ "name": "targetUri", "type": "string" },
{ "name": "relationshipKind", "type":
"Energistics.Etp.v12.Datatypes.Object.RelationshipKind" },
{
"name": "customData",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "SupportedType",
"fields":
[
{ "name": "dataObjectType", "type": "string" },
{ "name": "objectCount", "type": ["null", "int"] },
{ "name": "relationshipKind", "type":
"Energistics.Etp.v12.Datatypes.Object.RelationshipKind" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "ContextInfo",
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "depth", "type": "int" },
{
"name": "dataObjectTypes",
"type": { "type": "array", "items": "string" }, "default": []
},
{ "name": "navigableEdges", "type":
"Energistics.Etp.v12.Datatypes.Object.RelationshipKind" },
{ "name": "includeSecondaryTargets", "type": "boolean", "default": false },
{ "name": "includeSecondarySources", "type": "boolean", "default": false }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "SubscriptionInfo",
"fields":
[
{ "name": "context", "type": "Energistics.Etp.v12.Datatypes.Object.ContextInfo" },
{ "name": "scope", "type": "Energistics.Etp.v12.Datatypes.Object.ContextScopeKind" },
{ "name": "requestUuid", "type": "Energistics.Etp.v12.Datatypes.Uuid" },
{ "name": "includeObjectData", "type": "boolean" },
{ "name": "format", "type": "string", "default": "xml" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "PartsMetadataInfo",
"fields":
[
{ "name": "uri", "type": "string" },
{ "name": "name", "type": "string" },
{ "name": "index", "type":
"Energistics.Etp.v12.Datatypes.ChannelData.IndexMetadataRecord" },
{
"name": "customData",
"type": { "type": "map", "values": "Energistics.Etp.v12.Datatypes.DataValue" },
"default": {}
}
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "ChangeAnnotation",
"fields":
[
{ "name": "changeTime", "type": "long" },
{ "name": "interval", "type": "Energistics.Etp.v12.Datatypes.Object.IndexInterval" }
]
}
Avro Schema
{
"type": "record",
"namespace": "Energistics.Etp.v12.Datatypes.Object",
"name": "ChangeResponseInfo",
"fields":
[
{ "name": "responseTimestamp", "type": "long" },
{
"name": "changes",
"type": { "type": "map", "values": { "type": "array", "items":
"Energistics.Etp.v12.Datatypes.Object.ChangeAnnotation" } }
}
]
}
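The changes field maps each growing data object URI to its list of ChangeAnnotation records. A consumer commonly reduces this to the newest changeTime per URI as a per-object high-water mark. A minimal sketch, assuming the records have been deserialized from Avro into plain Python dicts (the helper name is illustrative, not part of ETP):

```python
def latest_change_times(change_response_info):
    """Reduce a ChangeResponseInfo-shaped dict to {uri: newest changeTime}.

    change_response_info["changes"] maps uri -> list of ChangeAnnotation
    dicts, each with a "changeTime" (long) and an "interval".
    """
    return {
        uri: max(annotation["changeTime"] for annotation in annotations)
        for uri, annotations in change_response_info["changes"].items()
    }
```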
29 ENOSUPPORTEDDATAOBJECTTYPES: The server does not support any of the client's supported data object types.
Specification). For example, Channel, Channel Set and other contained data objects defined in
WITSML are listed in the ETP v1.2 for WITSML v2.0 Implementation Specification.
A resource is a meta-object that contains information that identifies an actual data object. A resource
contains a mix of fields: some fields are from the data object and some fields are from the data object
as instantiated on a particular store, for example, the storeLastWrite field. The use of "lighter-weight" resources for some use cases in ETP reduces traffic on the wire for initial inquiries (such as
discovery operations), which allows customer applications to determine when to do the "heavy lifting"
of getting the full data object and/or all of its associated data.
A dataspace is an abstraction representing a distinct collection of data objects, such as a project or a
specific database. (For more information, see Section 21.1.1.)
MUST support eml:///, which is the URI for the default dataspace (which may or may not be empty).
MAY support alternate URI formats, which are explained in Section 25.3.9.
25.3.2 Overview
Energistics URIs provide a flexible way to identify dataspaces and data objects within dataspaces. By
building on OData URI syntax, Energistics URIs can represent:
individual dataspaces and data objects
hierarchical relationships between objects
sub-elements within data objects
queries for collections of objects
ETP v1.2+ uses a subset of Energistics URIs to identify:
Dataspaces
Individual data objects within a dataspace
A query that will match a collection of data objects within a dataspace
When both ETP endpoints in a session can support them, ETP v1.2+ allows optional use of other forms of
Energistics URIs in some protocol messages, which may have application-specific meaning. For more
information on optional use of other URI forms, see Sections 25.3.4 and 25.3.9.
For named dataspaces, the path may be a relative path. For example:
eml:///dataspace('rdms-db')
Observe these rules for dataspace URIs:
In addition to named dataspaces, all ETP stores and producers MUST support the default, nameless
dataspace, which is identified by the empty string.
- While the default dataspace MUST be supported, it MAY be empty; that is, it may not have any
data objects in it.
IMPORTANT: The default dataspace is NOT an alias for a named dataspace. It is a simplification for ETP
stores and producers that do not need to support named dataspaces.
data object types that are defined in Energistics common for use by the domain standards), all in
lower case concatenated with the first 2 digits of its version. For example, RESQML v2.0.1 would use:
resqml20. Supported versions of the other Energistics standards include: witsml20, prodml20,
prodml21, eml20, eml21, and eml22.
2. The data object type name MUST be the schema name of the data object as defined in the
Energistics standard. It IS case sensitive.
25.3.7.2 Rules for Using Dataspaces and Version in Data Object URIs
Observe these rules for using dataspaces and version in data object URIs:
If the dataspace is the default dataspace, then the dataspace segment MUST be omitted from the
canonical URI.
Data objects identified by Energistics URIs are always in a dataspace, so the data object URI is
prefixed with the relevant dataspace.
- If the dataspace is omitted from the URI, then the URI implicitly refers to the default dataspace.
If version is omitted and there are multiple versions of a data object behind an ETP endpoint, then the
URI implicitly refers to the most recent version.
ETP 1.2 does not provide rules that define which of two versions of a data object is the most
recent version. The data object version that is most recent is ETP-endpoint-dependent.
- If the intent is to refer to the most recent version of the data object, then the version segment
SHOULD be omitted from the canonical URI.
The data object URI uses these conventions from OData:
The data object type in a data object URI is semantically equivalent to an OData qualifiedEntityType;
for example: witsml20.Well (as described above).
The specification of the uuid and the optional version are semantically equivalent to keys in OData
collections.
25.3.8.1 Rules for Using Dataspaces and Version in Data Object Query URIs
Observe these rules for using dataspaces and version in data object query URIs:
If the dataspace is the default dataspace, then the dataspace segment MUST be omitted from the
canonical URI.
If the intent is to refer to the most recent version of the data object, then the version segment
SHOULD be omitted from the canonical URI.
A data object query URI MAY specify an OData Entity Collection; that is, a data object type without an
associated uuid or version. This represents a query for objects of the specified type.
When a data object query URI includes a specific data object uuid, the query operates on data
objects that have some relationship to the data object specified by the uuid.
Whether the relationship is primary or secondary or goes from sources to targets or targets to
sources depends on other contextual information where the URI is used. EXAMPLE: In
DiscoveryQuery, the context and scope fields on the FindResources message will provide this
information.
A data object query URI MAY also include a URI query string (for details, see Chapter 14).
- When the URI path ends with an OData Entity Collection, the query string is optional (because
the OData Entity Collection represents an implicit query).
- When the URI path ends with a specific data object, the query string is required.
- An ETP endpoint that wants to use alternate URIs in requests SHOULD assume the other
endpoint in the session supports only alternate URIs it has explicitly received in response to
previous requests.
- Even if an endpoint indicates it supports alternate URIs, it is NOT required or guaranteed that all
possible forms of alternate URIs are supported.
Canonical Data Object Query URIs with Data Object but no OData Entity Collection:
^eml:\/\/\/(?:dataspace\('(?<dataspace>[^']*?(?:''[^']*?)*)'\)\/)?(?<domain>witsml|resqml|prodml|eml)(?<
domainVersion>[1-9]\d)\.(?<objectType>\w+)\((?:(?<uuid>[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-
[0-9a-fA-F]{4}-[0-9a-fA-F]{12})|uuid=(?<uuid2>[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-
F]{4}-[0-9a-fA-F]{12}),version='(?<version>[^']*?(?:''[^']*?)*)')\)\?(?<query>[^#]+)$
EXAMPLES:
eml:///witsml20.Well(ec8c3f16-1454-4f36-ae10-27d2a2680cf2)?query
eml:///witsml20.Well(uuid=ec8c3f16-1454-4f36-ae10-27d2a2680cf2,version='1.0')?query
- eml:///dataspace('/folder-name/project-
name')/resqml20.obj_HorizonInterpretation(uuid=421a7a05-033a-450d-bcef-
051352023578,version='2.0')?query
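The regular expression above can be exercised directly. Below is a sketch of a Python translation: the spec's `(?<name>...)` named groups become Python's `(?P<name>...)`, and the escaped slashes are unnecessary in Python raw strings. The constant names are illustrative, not part of the specification.

```python
import re

# Reusable UUID sub-pattern, exactly as in the canonical pattern above.
UUID = (r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
        r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}")

# Canonical Data Object Query URI pattern, translated to Python syntax.
DATA_OBJECT_QUERY_URI = re.compile(
    r"^eml:///"
    r"(?:dataspace\('(?P<dataspace>[^']*?(?:''[^']*?)*)'\)/)?"
    r"(?P<domain>witsml|resqml|prodml|eml)(?P<domainVersion>[1-9]\d)"
    r"\.(?P<objectType>\w+)"
    r"\((?:(?P<uuid>" + UUID + r")"
    r"|uuid=(?P<uuid2>" + UUID + r"),version='(?P<version>[^']*?(?:''[^']*?)*)')\)"
    r"\?(?P<query>[^#]+)$"
)

# Matching the second EXAMPLE above extracts uuid2 and version.
m = DATA_OBJECT_QUERY_URI.match(
    "eml:///witsml20.Well(uuid=ec8c3f16-1454-4f36-ae10-27d2a2680cf2,"
    "version='1.0')?query"
)
```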
Figure 38: ETP now supports push and pull data replication workflows, as well as "man-in-the-middle"
synchronization applications. The words in parentheses (either customer or store) refer to the ETP role
assigned to that endpoint.
The source is the data store from which the data is being replicated; the destination is the data store to
which the data is to be replicated. The goal of replication is for the destination to be eventually consistent
with the source.
NOTE: The source and destination DO NOT necessarily coincide with the client and server endpoints in
an ETP session. (For more information about clients, servers, and ETP-assigned endpoint roles, see
Section 3.1.2.)
Source and destination are determined by whether the workflow is "push" or "pull". A key factor in how
the workflows function is the ETP-assigned role of each endpoint (in all but one ETP sub-protocol, the two
defined endpoint roles are "customer" and "store").
In the push workflow, the endpoint with the customer role is the source. It controls the workflow
operations by using "put" messages from the various ETP sub-protocols to push data to the store.
In the pull workflow, the endpoint with the customer role is the destination. It controls the operations
by using "subscribe" and "get" messages to pull data from the store (source).
In the man-in-the middle workflow, the synchronization application (sync app) is the customer in both
the push and pull workflows. (NOTE: Though the sync app has the customer role in both the pull and
push workflows, two separate ETP sessions must be created: one for the pull workflow and one for
the push workflow.)
In both push and pull workflows, the ETP customer role is the active participant, which is an informal
general term used in this appendix for the ETP endpoint that is in control of the replication operation.
NOTE: ETP has similar features that can be used to support replication for other objects (such as
dataspaces and data arrays); however, those workflows have not yet been documented.
The main replication tasks in the push workflow are accomplished as follows:
Replication Scope Identification:
The replication scope comes from an external source, for example, a contract that says what data
your endpoint is expected to deliver (EXAMPLE: Your rigsite store as a logging/data acquisition
company must replicate data for Well XYZ to the destination endpoint (e.g., a client oil company's
data store)). That is, you cannot discover the replication scope with ETP functionality.
When a data store itself is the source, the scope details are provided in an externally supplied
configuration, which the data store must use to identify the specific data objects in itself that fall
within the replication scope. The data store must use internal knowledge of itself to track changes
to the set of objects that fall within the scope.
- When a synchronization application (sync app) is the source, the sync app receives the scope
details from an externally supplied configuration and uses the pull workflow from the data store it
is replicating to identify the specific data objects within the replication scope and any changes to
the set of objects that fall within the scope.
Data Object Replication: Once you have identified the data objects within the replication scope, you
must replicate these data objects and any changes for each with these operations:
For creates and updates:
For growing data objects (channels are not growing data objects; they are a different type of data
object), push the header using GrowingObject.PutGrowingDataObjectsHeader. (For details of
the message flows for this protocol, see Chapter 11 GrowingObject (Protocol 6).)
- For all other data objects (including channels), push the data object using Store.PutDataObjects.
(For details of the message flows for this protocol, see Chapter 9 Store (Protocol 4).)
For deletes:
For all data objects, delete the data objects using Store.DeleteDataObjects. (For details of the
message flows for this protocol, see Chapter 9 Store (Protocol 4).)
Growing data object Part Replication:
- Push part creates, changes and deletes using GrowingObject.PutParts,
GrowingObject.DeleteParts and/or GrowingObject.ReplacePartsByRange. The
recommendation is always to work as efficiently as possible; EXAMPLE: If you are deleting 1000
contiguous parts, you can do this using a single ReplacePartsByRange message. But the
combination of messages used to replicate those changes is up to the implementer. (For details
of the message flows for this protocol, see Chapter 11 GrowingObject (Protocol 6).)
Channel Data Replication:
Push existing channel data using either ChannelDataLoad.ChannelData or
ChannelDataLoad.ReplaceRange.
Push new channel data (i.e., appended index/data points) using
ChannelDataLoad.ChannelData. (For details of the message flows for this protocol, see
Chapter 20 ChannelDataLoad (Protocol 22).)
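The create/update/delete rules above amount to a small dispatch table. The sketch below is illustrative only: replication_message is a hypothetical helper (not part of ETP), and the strings are the protocol and message names cited above. Growing data object parts and channel data use the separate messages described above, not this dispatch.

```python
def replication_message(change_kind, object_kind):
    """Pick the ETP (protocol, message) for pushing one data object change.

    change_kind: "create", "update", or "delete".
    object_kind: "growing" (e.g., a trajectory), "channel", or "other".
    """
    if change_kind == "delete":
        # All data objects are deleted with Store.DeleteDataObjects.
        return ("Store", "DeleteDataObjects")
    if object_kind == "growing":
        # For growing data objects, only the header is pushed this way.
        return ("GrowingObject", "PutGrowingDataObjectsHeader")
    # All other data objects, including channels, use Store.PutDataObjects.
    return ("Store", "PutDataObjects")
```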
The main replication tasks in the pull workflow are accomplished as follows:
Replication Scope Identification
The replication scope comes from an external source, as described for the push workflow (see
Section 26.4).
The set of objects in the scope MAY come from an external source when it is a static, explicit list
(e.g. replicate Channel 123 and Channel 456 and nothing else).
When the set of objects in the scope is dynamic (which is the most likely scenario):
• Discover the initial set of objects in the scope using Discovery.GetResources. (For details of
the message flows for this protocol, see Chapter 8 Discovery (Protocol 3).)
• Subscribe to changes to the set of objects in the scope with
StoreNotification.SubscribeNotifications.
• Receive changes to the set of objects in the scope with StoreNotification.ObjectChanged,
StoreNotification.ObjectAccessRevoked and StoreNotification.ObjectDeleted.
EXAMPLE: If data objects are added, removed or deleted from a scope, the destination
endpoint is notified of these changes through these notification messages. (For details of the
message flows for this protocol, see Chapter 10 StoreNotification (Protocol 5).)
Data Object Replication
Discover the initial set of data objects, including growing data objects, using Discovery (Protocol
3) (which tells you their current state and when they last changed) and, based on the results of
discovery, pull the desired objects using Store.GetDataObjects. (For details of the message
flows for this protocol, see Chapter 9 Store (Protocol 4).)
For creates, joins, and updates:
- Receive created, updated, and joined (i.e. existing data objects added to the replication scope)
data objects with StoreNotification.ObjectChanged. In some cases, the notification includes the
actual data object (which reduces traffic on the wire because you don't have to issue a request for
it). However, in some scenarios, you will not get the data object with the notification, so must get
it using Store.GetDataObjects (Chapter 9 Store (Protocol 4)) or
GrowingObject.GetDataObjectsHeader (Chapter 11 GrowingObject (Protocol 6)).
For deletes, unjoins and access revocations:
Receive unjoins (i.e. data objects removed from the replication scope without being deleted) with
StoreNotification.ObjectChanged (see Chapter 10 StoreNotification (Protocol 5)).
Receive access revocations with StoreNotification.ObjectAccessRevoked (see Chapter 10
StoreNotification (Protocol 5)).
Receive deletes with StoreNotification.ObjectDeleted (see Chapter 10 StoreNotification
(Protocol 5)).
Growing data object Part Replication
Pull the initial set of parts together with the data object header using Store.GetDataObjects
(Chapter 9 Store (Protocol 4)).
Subscribe to growing data objects in the replication scope using
GrowingObjectNotification.SubscribePartNotifications (see Chapter 12
GrowingObjectNotification (Protocol 7)) to receive part creates, changes and deletes.
Receive part creates, changes and deletes with GrowingObjectNotification.PartsChanged,
GrowingObjectNotification.PartsDeleted and/or
GrowingObjectNotification.PartsReplacedByRange (see Chapter 12
GrowingObjectNotification (Protocol 7)).
Channel Data Replication
Get metadata for channels in the replication scope using
ChannelSubscribe.GetChannelMetadata.
Pull existing channel data in the data ranges provided in the returned channel metadata using
ChannelSubscribe.GetRanges.
Subscribe to the channels starting from the end of the existing data range provided in the channel
metadata using ChannelSubscribe.SubscribeChannels (see Chapter 19 ChannelSubscribe
(Protocol 21)) to receive new, streaming channel data.
Receive new (i.e. appended) data as it becomes available with
ChannelSubscribe.ChannelData.
Receive data edits and deletes with ChannelSubscribe.ChannelsTruncated and/or
ChannelSubscribe.RangeReplaced.
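The per-message follow-ups in the pull workflow above can also be sketched as a dispatch. This is a sketch under stated assumptions: pull_follow_up and the action labels are illustrative inventions; only the message names come from the protocols cited above.

```python
def pull_follow_up(message, includes_data_object=False):
    """Return the destination's follow-up action for a pull-workflow
    notification (action strings are illustrative labels only)."""
    if message == "StoreNotification.ObjectDeleted":
        # The source deleted the object; mirror that locally to stay
        # eventually consistent.
        return "remove local copy"
    if message == "StoreNotification.ObjectAccessRevoked":
        return "remove local copy"
    if message == "StoreNotification.ObjectChanged":
        # Some notifications carry the data object itself; otherwise the
        # destination must request it with Store.GetDataObjects.
        return None if includes_data_object else "send Store.GetDataObjects"
    raise ValueError(f"unhandled notification: {message}")
```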
additional data will need to be transmitted to recover from outages, leading to higher initial load on
sessions.
In some stores, the retention history may be lost from time to time (e.g., if the store application restarts).
These stores MUST retain change data for at least the CRP as long as at least one session is connected
to the store’s endpoint. If the store DOES lose the retention history, the store MUST send the earliest
timestamp for which it DOES have retained change data in the earliestRetainedChangeTime field in either
the RequestSession or OpenSession messages. From the customer’s perspective, this essentially
serves as a potentially shorter CRP than usual when initially connecting. For the remainder of this
appendix, when ChangeRetentionPeriod or CRP are used, they mean either the store’s advertised
ChangeRetentionPeriod OR the shorter period based on the store’s earliestRetainedChangeTime.
Figure 39: Example showing how timestamps in various messages and change retention period work to
retain a "high-water mark" timestamp, which is crucial to determine what content changed during an outage.
In general, the basic idea of how timestamps and the CRP works is as follows:
1. Before connecting and/or as part of the establishing the ETP session, an endpoint's CRP is
discovered.
2. When a session is established, initial (t0) timestamps (currentDateTime) are exchanged in Core
(Protocol 0) RequestSession and OpenSession messages.
3. While connected, the latest timestamp of changes that have been pushed or pulled can be tracked (t1,
t2); during periods of inactivity, Ping and Pong messages can be used to track updated timestamps
(t3); this latest timestamp of known change(s) is informally referred to as the high-water mark. The
active participant MUST track this high-water mark during the ETP session.
DETAILS from Figure 39: At t1, an object was created, so an ObjectChanged message is sent
saying an object was created at timestamp t1. Timestamp t1 is now the high-water mark, so we know
about any and all changes that may have happened in the window between t0 and t1.
At t2, an object is deleted, and an ObjectDeleted message is sent with the timestamp of t2, so t2
becomes the new high-water mark.
Then the data transmission goes idle for a while. So to establish a new (more recent) high-water mark
within the source store's CRP, the customer endpoint sends the Core.Ping message and receives
the response Core.Pong message with a timestamp of t3. Timestamp t3 is now the high-water mark.
After t3, the connection is inadvertently dropped and the session disconnected. So t3 remains the last
known high-water mark.
4. When reconnecting after the disconnect (t4), the currentDateTime timestamps are exchanged when
establishing the new session (just like when the initial session was established).
5. After reconnecting, the active participant must compare the gap between the timestamp of the new
session start (t4) and the high-water mark from the previous session (t3) to the CRP. Necessary
actions depend on whether the gap is less than/equal to the CRP (i.e., you have reconnected within
the CRP) or greater than the CRP (i.e., you have connected later than the CRP).
For next steps, see Section 26.6.3 Main Resumption Workflow.
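Steps 1 to 5 above condense to a small amount of bookkeeping: track the maximum timestamp seen during the session, then compare the reconnect gap to the CRP. A minimal sketch, with timestamps as datetime values; the class and function names are illustrative, not part of ETP:

```python
from datetime import datetime, timedelta

class HighWaterMark:
    """Track the latest known change timestamp during a session (step 3)."""

    def __init__(self, session_start: datetime):
        # t0: currentDateTime exchanged in RequestSession/OpenSession.
        self.value = session_start

    def observe(self, message_timestamp: datetime) -> None:
        # Timestamps from ObjectChanged, ObjectDeleted, Pong, etc.
        # Equal or out-of-order timestamps never move the mark backward.
        if message_timestamp > self.value:
            self.value = message_timestamp

def resumption_mode(high_water_mark: datetime,
                    new_session_start: datetime,
                    change_retention_period: timedelta) -> str:
    """Step 5: decide how to resume after reconnecting at new_session_start."""
    gap = new_session_start - high_water_mark
    # Within the CRP the store still retains change history, so only the
    # changes since the high-water mark need to be replicated; beyond it,
    # affected objects must be re-replicated from scratch.
    return "incremental" if gap <= change_retention_period else "full-resync"
```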
NOTES:
1. Real-world operations—which may mean hundreds or thousands of messages flowing back and forth
between endpoints simultaneously, some operations resulting in multiple notifications with the same
timestamp, and receipt of multiple messages at a particular timestamp—will make it more challenging
to determine exactly what happened at a particular timestamp. However, if an endpoint receives a
message with a particular timestamp, it can be confident it received all changes before that
message/timestamp. This is an important semantic that stores must understand and support for this
change detection process to work.
2. The process for determining missed data during an outage is more relevant to pull workflows than
push workflows (because an active participant that is pushing data "knows/tracks" what data it was
pushing). However, the process is important to man-in-the-middle applications, which use both
and pull workflows.
3. After the changes have been identified, the active participant must:
a. Start pushing or pulling any changes detected in the new session.
For the push workflow, see Section 26.6.4.
For the pull workflow, see Section 26.6.5.
b. Replicate the changes that happened while disconnected and, when needed, replicate any
objects from scratch.
The source may additionally use aspects of the pull workflow on resumption by issuing queries to
the destination to verify that the destination’s content matches the source’s expectations based
on the information it tracked. This option is not described in detail here, but possible actions when
the source’s expectations are not met include stopping the transfer with an error or triggering a full
resend of data for affected data objects.
2. On reconnect, after the changes that need to be pushed have been identified (step 1), the changes
are pushed using the normal push workflow (described in Section 26.4).
In the push workflow, there is no difference between pushing changes that happened while
disconnected and pushing changes that happen while connected.
2. After the changes have been identified, the destination must pull the changes.
26.6.5.1 Information That Must Be Tracked by the Destination and How to Initialize and Track it
During the replication process, the destination in the pull workflow must track what is being replicated (the
list in Step 1a above and repeated in the table below), and it must track changes to those items
throughout the replication process, so it can keep the required tracked information current.
NOTE: This information is NOT explicitly listed for the push workflow, because in the push workflow, the
source is in control of sending the messages to push the data (from itself!) and must simply track
confirmation that the action specified in the messages was successfully completed (with the positive
responses from the destination). In the pull workflow, the destination must also pull this "tracking data"
from the source.
The following table summarizes what must be tracked (column 1), and for each of those items,
specifically what is tracked (column 2), how the tracked information is initialized (column 3), and updated
(column 4) during the replication process. The text below the table describes the behavior for each "row"
in the table.
(Row 1) On initial connection for replication, the destination initializes the replication scope using the
Discovery.GetResources message; the source replies with the list of resources (one for each data
object in the scope) which contains the URI for the data object (among other data). The destination must
also subscribe to notifications for changes to objects in the replication scope; when changes occur, the
source sends the destination StoreNotification.ObjectChanged,
StoreNotification.ObjectAccessRevoked, and StoreNotification.ObjectDeleted. The destination uses
these notices to update the data object's storeLastWrite time.
(Row 2) For each data object in the replication scope, the destination must track its URI and its
storeLastWrite time.
(Row 3) The high-water mark for edits or deletes to parts in a growing data object is the timestamp on the
most recent ChangeAnnotation for that growing data object; it conveys that no parts were changed in
that growing data object after that timestamp. (For information about how change annotations (CA) work
in this workflow, see Section 0.) The destination initializes this information by sending message
GrowingObject.GetChangeAnnotations (with latestOnly=true). If no CAs are returned, the high-water
mark is the currentDateTime stamp exchanged in Core (Protocol 0) when establishing the session in the
RequestSession and OpenSession messages. Tracked information is updated in the destination during
replication with GrowingObjectNotification.PartsChanged, GrowingObjectNotification.PartsDeleted,
and GrowingObjectNotification.PartsReplacedByRange.
Figure 40: Example replication scope changes while disconnected. Solid colored circles represent data
objects of interest (in the replication scope); dashed-line circles are objects outside the replication scope.
After reconnect (right) in this example, B is now in scope and G is out of scope.
Change Type | How Store Retains It | How Customer Discovers Change on Reconnect | How Customer Requests Change
New data object created | Updates storeLastWrite and storeCreated | Object URI not in previously tracked replication scope | Store.GetDataObjects
Existing data object added to scope | Updates storeLastWrite on one end of the relationship | Object URI not in previously tracked replication scope and storeCreated is older than the high-water mark | Store.GetDataObjects
(Row 1) New objects may be created in the source that fall within the replication scope. Any time a new
object is created, the source store initializes both storeLastWrite and storeCreated to the object’s creation
time. On reconnect, the destination sends Discovery.GetResources to get the updated replication
scope. Any URIs that were not previously known to the destination are newly created data objects if their
storeCreated time is newer than the destination’s high-water mark. The destination requests the new data
object with Store.GetDataObjects.
(Row 2) New relationships may be created in the source between existing data objects that cause the
existing data objects to be included in the replication scope. When this happens, the source store
updates the storeLastWrite of the container data object or the source of the data object reference in the
relationship. On reconnect, the destination sends Discovery.GetResources to get the updated
replication scope. Any URIs that were not previously known to the destination are existing data objects
that have been added to the scope if their storeCreated time is older than the destination’s high-water
mark. The destination requests the new data object with Store.GetDataObjects.
(Row 3) Existing data objects within the replication scope may be deleted. When this happens, the source
creates a DeletedResource for the data object. On reconnect, the destination sends
Discovery.GetResources to get the updated replication scope. If any URIs previously known to the
destination are missing from the response, the destination sends Discovery.GetDeletedResources to
get the list of DeletedResource records. A deleted object will have a corresponding
DeletedResource in the response.
(Row 4) The source may revoke the destination’s access to an object that is within the replication scope.
When this happens, the store does not track this information in a field on an ETP record. On reconnect,
the destination sends Discovery.GetResources to get the updated replication scope. If any URIs
previously known to the destination are missing from the response, the destination sends
Discovery.GetDeletedResources to get the list of DeletedResource records. If any URIs
previously known to the destination do NOT have a corresponding DeletedResource in the response, the
destination sends a Discovery.GetResources for each such URI, scoped only to that URI. If
no Resource is returned, the destination has lost access to the data object.
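Rows 1 through 4 above are a classification of the URI differences found on reconnect, which can be sketched as pure set logic. The helper name and classification labels are illustrative, not part of ETP; revocation must still be confirmed with a per-URI request as described above.

```python
def classify_scope_difference(known_uris, scope_resources, deleted_uris,
                              high_water_mark):
    """Classify replication-scope differences found on reconnect.

    known_uris: set of URIs tracked before the outage.
    scope_resources: {uri: storeCreated} from Discovery.GetResources.
    deleted_uris: URIs returned by Discovery.GetDeletedResources.
    Returns {uri: classification}; unchanged scope members are omitted.
    """
    result = {}
    for uri, store_created in scope_resources.items():
        if uri not in known_uris:
            # Row 1 vs Row 2: created after the high-water mark means a new
            # object; created before it means an existing object was added
            # to the scope.
            result[uri] = ("new" if store_created > high_water_mark
                           else "added to scope")
    for uri in known_uris - set(scope_resources):
        # Row 3 vs Row 4: missing with a DeletedResource means deleted;
        # missing without one means access may have been revoked.
        result[uri] = "deleted" if uri in deleted_uris else "access revoked?"
    return result
```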
(Row 5) Relationships between objects within the replication scope in the source may be modified
causing some of the objects to no longer be in the scope. When this happens, the source store updates
the storeLastWrite of the container data object or the source of the data object reference in the
relationship. On reconnect, the destination sends Discovery.GetResources to get the updated
replication scope. If any URIs previously known to the destination are missing from the response, the
26.6.5.2.2 Objects
While disconnected, data objects in the replication scope may be updated; that is, elements on the data
object have changed.
Figure 41: Example of update to data object. While disconnected, the channel's title has been changed (from
"Hookload" to "HKLD").
Change Type | How Store Retains It | How Customer Discovers Change on Reconnect | How Customer Requests Change
(Row 1) Data objects within the replication scope may be changed in the source. Any time an object is
changed, the source updates storeLastWrite. On reconnect, the destination sends
Discovery.GetResources to get the updated replication scope. If the Resource for a data object has a
newer storeLastWrite than the data object’s last known storeLastWrite AND the Resource has a
storeCreated that is equal to or older than the data object’s last known storeLastWrite, then the data
object was changed while the session was disconnected. To get the latest data for the data object, the
destination sends Store.GetDataObjects or GrowingObject.GetDataObjectsHeader.
(Row 2) Data objects within the replication scope may be deleted and recreated in the source. Any time
this happens, the source updates both storeLastWrite and storeCreated. If the Resource for a data object
has a newer storeCreated than the data object’s last known storeLastWrite, then the data object was
deleted and recreated while the session was disconnected. To get the latest data for the data object, the
destination sends Store.GetDataObjects or GrowingObject.GetDataObjectsHeader. If the data object
is a growing data object or a channel, the destination requests the new data with
GrowingObject.GetPartsByRange or ChannelSubscribe.GetRanges.
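Rows 1 and 2 reduce to a comparison of the two store timestamps. A minimal sketch, assuming integer Unix timestamps and a simplified Resource record (the real ETP Resource carries many more fields):

```python
from dataclasses import dataclass

@dataclass
class Resource:
    """Hypothetical, simplified view of an ETP Resource."""
    uri: str
    store_created: int     # storeCreated timestamp
    store_last_write: int  # storeLastWrite timestamp

def classify_change(resource: Resource, last_known_store_last_write: int) -> str:
    """Classify what happened to a data object while disconnected.

    'recreated' - storeCreated is newer than the last known storeLastWrite:
                  the object was deleted and recreated; refetch it, and for
                  growing objects/channels also refetch data by range.
    'changed'   - only storeLastWrite advanced: refetch with
                  Store.GetDataObjects / GrowingObject.GetDataObjectsHeader.
    'unchanged' - no write since the last known storeLastWrite.
    """
    if resource.store_created > last_known_store_last_write:
        return "recreated"
    if resource.store_last_write > last_known_store_last_write:
        return "changed"
    return "unchanged"
```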
Figure 42: Example: trajectory had 4 trajectory stations; on reconnect there are still 4, but one has been
deleted, a new one added, and one has been updated.
The table below lists the four possible actions, which are explained below the table.
(Row 1) New parts may be added to growing data objects within the replication scope. When this
happens, the source updates the index ranges as necessary and creates a ChangeAnnotation for the
affected data objects. On reconnect, the destination sends GrowingObject.GetChangeAnnotations with
the high-water mark to get any new ChangeAnnotations that may have been created while
disconnected. Steps to take in response to new ChangeAnnotations are described below.
(Row 2) Existing parts may be modified in growing data objects within the replication scope. When this
happens, the source creates a ChangeAnnotation for the affected data objects. On reconnect, the
destination sends GrowingObject.GetChangeAnnotations with the high-water mark to get any new
ChangeAnnotations that may have been created while disconnected. Steps to take in response to new
ChangeAnnotations are described below.
(Row 3) Parts may be deleted from growing data objects within the replication scope. When this happens,
the source updates the index ranges as necessary and creates a ChangeAnnotation for the affected
data objects. On reconnect, the destination sends GrowingObject.GetChangeAnnotations with the
high-water mark to get any new ChangeAnnotations that may have been created while disconnected.
Steps to take in response to new ChangeAnnotations are described below.
(Row 4) Ranges of parts may be deleted from growing data objects and replaced with new parts within
the replication scope. When this happens, the source updates the index ranges as necessary and creates
a ChangeAnnotation for the affected data objects. On reconnect, the destination sends
GrowingObject.GetChangeAnnotations with the high-water mark to get any new ChangeAnnotations
that may have been created while disconnected. Steps to take in response to new ChangeAnnotations
are described below.
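All four rows follow the same high-water-mark pattern. The sketch below simulates it locally; the stand-in records are simplified and the real exchange is the GrowingObject.GetChangeAnnotations protocol message, not a method call:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeAnnotation:
    change_time: int  # store time at which the change was recorded
    interval: tuple   # (start_index, end_index) of the affected parts

@dataclass
class SourceStore:
    """Stand-in for the source store; get_change_annotations mimics the
    high-water-mark filtering of GrowingObject.GetChangeAnnotations."""
    annotations: list = field(default_factory=list)

    def get_change_annotations(self, since: int):
        return [a for a in self.annotations if a.change_time > since]

def on_reconnect(store: SourceStore, high_water_mark: int):
    """Fetch annotations created while disconnected; return them together
    with the advanced high-water mark to use on the next reconnect."""
    new = store.get_change_annotations(high_water_mark)
    new_mark = max((a.change_time for a in new), default=high_water_mark)
    return new, new_mark
```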
Figure 43: Example of change annotations and how they work (see details below).
The example in Figure 43 is channel data with indexes increasing downward. The table below lists the
changes to channel data that can occur, alone or in combination. A customer must be able to detect all
of these changes on reconnect.
New Data Appended | Updates end index field on IndexMetadataRecord in ChannelMetadataRecord | New end index values in the interval | ChannelSubscribe.SubscribeChannels
(Row 1) New data may be appended to channels within the replication scope. When this happens, the
source updates the end indexes for the affected channels. On reconnect, the destination sends
ChannelSubscribe.GetChannelMetadata and compares the previously known end indexes for each
channel against the new ones returned in ChannelMetadataRecord. If the new end indexes are beyond
(where beyond may be greater than or less than depending on direction in IndexMetadataRecord) the
previously known end indexes, new data was appended. The destination sends
ChannelSubscribe.GetRanges to request the new data.
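The direction-aware "beyond" check above can be sketched as follows; the direction strings are illustrative stand-ins for the index direction carried in IndexMetadataRecord:

```python
def new_data_appended(known_end, new_end, direction):
    """Whether the channel's end index moved 'beyond' the previously known
    end index. 'Beyond' means greater than for an increasing index and less
    than for a decreasing one; if it did, the destination should request the
    new interval with ChannelSubscribe.GetRanges."""
    if direction == "Increasing":
        return new_end > known_end
    if direction == "Decreasing":
        return new_end < known_end
    raise ValueError(f"unhandled index direction: {direction}")
```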
(Row 2) A channel within the replication scope may be truncated, which is when the end index is reset to
an earlier value and any data beyond the new end index is deleted. When this happens, the source resets
the channel’s end index, and it creates a ChangeAnnotation covering the truncated interval, merging this
as needed with existing ChangeAnnotation records. On reconnect, the destination sends
ChannelSubscribe.GetChangeAnnotations with the high-water mark to get any new
ChangeAnnotations that may have been created while disconnected. Steps to take in response to new
ChangeAnnotations are described below.
(Row 3) Data within a channel within the replication scope may be changed. When this happens, the
source creates a ChangeAnnotation covering the changed interval, merging this as needed with existing
ChangeAnnotation records. On reconnect, the destination sends
ChannelSubscribe.GetChangeAnnotations with the high-water mark to get any new
ChangeAnnotations that may have been created while disconnected. Steps to take in response to new
ChangeAnnotations are described below.
(Row 4) Data within a channel within the replication scope may be deleted. When this happens, the
source creates a ChangeAnnotation covering the deleted interval, merging this as needed with existing
ChangeAnnotation records. On reconnect, the destination sends
ChannelSubscribe.GetChangeAnnotations with the high-water mark to get any new
ChangeAnnotations that may have been created while disconnected. Steps to take in response to new
ChangeAnnotations are described below.
In this scenario, the destination retrieves all new data beyond the previously known end index with a single
ChannelSubscribe.GetRanges.
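Rows 2 through 4 each note that the source merges a new ChangeAnnotation "as needed with existing ChangeAnnotation records." One way such merging could work is sketched below, assuming an increasing index; a real store must honor the channel's index direction:

```python
def merge_intervals(intervals):
    """Merge overlapping or touching (start, end) index intervals, as a
    source might when folding a new ChangeAnnotation's interval into the
    intervals of existing ChangeAnnotation records."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```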
27.2 Approaches Considered and Why the Current One Was Selected
The current approach was selected and designed for these reasons:
- Extensive investigation showed there is no single standard or simple approach that could simply be
"picked off the shelf".
- All options investigated had limitations; these included HTTPS, cookies, Basic authentication, mutual
TLS, URL query parameters, the Sec-WebSocket-Protocol header, and other headers.
- Many current security systems are designed for user-driven, interactive workflows, which are not
appropriate for most of our device-to-device connectivity scenarios.
Based on the research and the feedback collected from the community, including security experts, the
Architecture Team believes this is the best approach because:
- It best supports our use cases.
- It is a minimal but sufficient method that appears mainstream enough and is implemented in sufficient
packages and languages (i.e., existing tools are available to support it).
- It does not prevent organizations from supporting more advanced, interactive workflows.
- It is believed to be extensible in the future without further schema changes (but, of course, there are
no guarantees; Internet security changes fast).
- It is simple to implement an Auth Server on an ETP server for small, self-contained installs, while
allowing external Auth Servers for larger/corporate configurations.