OpenText™ Documentum™ Foundation Services
Development Guide
EDCPKSVC160700-PGD-EN-01
Rev.: 2019-Sept-13
This documentation has been created for software version 16.7.
It is also valid for subsequent software versions as long as no new document version is shipped with the product or is
published at https://knowledge.opentext.com.
Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544 International: +800-4996-5440
Fax: +1-519-888-0677
Support: https://support.opentext.com
For more information, visit https://www.opentext.com
One or more patents may cover this product. For more information, please visit https://www.opentext.com/patents.
Disclaimer
Every effort has been made to ensure the accuracy of the features and techniques presented in this publication. However,
Open Text Corporation and its affiliates accept no responsibility and offer no warranty, whether expressed or implied, for
the accuracy of this publication.
Preface
This document is a guide to using Documentum Foundation Services (DFS) for the
development of DFS service consumers, and of custom DFS services.
i Intended Audience
This document is intended for developers and architects building consumers of DFS
services, and for service developers seeking to extend DFS services with custom
services. This document will also be of interest to managers and decision makers
seeking to determine whether DFS would offer value to their organization.
ii Revision History
The following changes have been made to this document.
Where appropriate, this document refers you to the Javadoc or .NET HTML help files for additional information, to the sample
code delivered with the DFS SDK, and to resources on OpenText My Support
(https://support.opentext.com).
For public method names C# conventionally uses Pascal case (for example
GetStatus), while Java uses “camel case” (getStatus). The corresponding WSDL
message uses the same naming convention as the Java method. This document will
use the convention followed by Java and the SOAP API.
Java uses getter and setter methods for data encapsulation (properties are an
abstraction) and C# uses properties; these correspond to typed message parts in the
SOAP API. This document will refer to such an entity as a property, using the name
from the SOAP API. For example:
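For example (the following class is purely illustrative and is not part of the DFS API; it simply shows how the same logical property appears in each convention):

// Illustrative only: a data class exposing a "status" property.
public class StatusHolder {
    private String status;

    // Java productivity layer convention: a getter/setter pair (getStatus/setStatus).
    public String getStatus() {
        return status;
    }

    public void setStatus(String status) {
        this.status = status;
    }

    // In C#, the same data would be exposed as a property named Status.
    // In the WSDL, it appears as a typed message part named "status", which is
    // the name this document uses.
}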
Overview
The following table lists some of the technologies that are included in DFS:
Other Documentum products provide services that are compatible with the DFS
framework. The overarching term for the services as a whole is Enterprise Content
Services. Documentum Enterprise Content Services Reference Guide provides a
comprehensive reference to the available services.
A DFS consumer can be a remote consumer, which communicates with DFS as a web service over SOAP
(remote mode), or a local consumer, running in the same JVM as the DFS Java
services.
When programming in DFS, some of the central and familiar concepts from DFC are
no longer a part of the model. Session managers and sessions are not part of the DFS
abstraction for DFS consumers. However, DFC sessions are used by DFS services
that interact with the DFC layer. The DFS consumer sets up identities (repository
names and user credentials) in a service context, which is used to instantiate service
proxies, and with that information DFS services take care of all the details of getting
and disposing of sessions. “DFC sessions in DFS services” on page 137 provides
more details on how sessions are used.

DFS does not have (at the exposed level of the API) an object type corresponding to
a SysObject. Instead it provides a generic DataObject class that can represent any
persistent object, and which is associated with a repository object type using a
property that holds the repository type name (for example “dm_document”). Unlike
DFC, DFS does not generally model the repository type system (that is, provide
classes that map to and represent repository types). Any repository type can be
represented by a DataObject, although some more specialized classes can also
represent repository types (for example Acl or Lifecycle).
In this documentation, we've chosen to call the methods exposed by DFS services
operations, in part because this is what they are called in the WSDLs that represent
the web service APIs. Don't confuse the term with DFC operations—in DFS the term
is used generically for any method exposed by the service.
DFS services generally speaking expose just a few service operations. The
operations generally have simple signatures. For example, the Object service update
operation has this signature:
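In the Java productivity layer the update operation looks essentially like the following (a sketch of the signature; consult the Javadoc on the SDK for the authoritative declaration, including the exceptions thrown):

// Object service update: accepts a DataPackage of modified DataObjects plus
// per-operation options, and returns the DataPackage of updated objects.
DataPackage update(DataPackage dataPackage, OperationOptions operationOptions)
        throws ServiceException;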
Using XML support requires you to provide a controlling XML application. When
you import an XML document, DFC examines the controlling application’s
configuration file and applies any chunking rules that you specify there. If the
application’s configuration file specifies chunking rules, DFC creates a virtual
document from the chunks it creates. It imports other documents that the XML
document refers to as entity references or links and makes them components of the
virtual document. It uses attributes of the containment object associated with a
component to remember whether it came from an entity or a link and to maintain
other necessary information. Assembly objects have the same XML-related attributes
as containment objects do. The processed XML files are imported into Documentum
Server as virtual documents; therefore, to retrieve the XML files, you must use
methods that are applicable for processing virtual documents.
DFC provides substantial support for the Documentum XML capabilities. XML
processing by DFC is largely controlled by configuration files that define XML
applications.
ContentTransferProfile.setXMLApplicationName(String xmlApplicationName);
If no XML application is provided, DFC will use the default XML application for
processing. To disable XML processing, set the application name to Ignore.
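For example, a consumer might select the controlling XML application (or switch XML processing off) through the ContentTransferProfile in the service context, along the following lines (a sketch; the application name is a placeholder, and serviceContext is assumed to be an existing IServiceContext):

// Select the controlling XML application to apply when importing XML content.
ContentTransferProfile contentTransferProfile = new ContentTransferProfile();
contentTransferProfile.setXMLApplicationName("MyXMLApplication");
// Use "Ignore" instead to disable XML processing entirely:
// contentTransferProfile.setXMLApplicationName("Ignore");
serviceContext.setProfile(contentTransferProfile);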
Use the UCF mode for importing XML files with external links and uploading
external files. If you use other content transfer modes, only the XML file will be
imported and the links will not be processed.
DFS services can be implemented as POJOs (Plain Old Java Objects), or as BOF
(Business Object Framework) service-based business objects (SBOs). The service-
generation tools build service artifacts that are archived into a deployable EAR file
for remote execution and into JAR files for local execution using the optional client
runtime. C# client-side proxies are generated using the DFS Proxy Generator utility.
The Documentum Composer User Guide provides detailed information on using the
tools through the Composer interface.
The following table describes the supported Ant tasks that can be used for tools
scripting:
“Custom Service Development with DFS“ on page 135 provides details on building
custom services and the build tools.
This chapter provides information about how to consume DFS services remotely
using web services frameworks and the DFS WSDL interface (or for that matter any
Enterprise Content Services WSDL), without the support of the client productivity
layer. The chapter will present some concrete examples using Axis2 to illustrate a
general approach to DFS service consumption that is applicable to other
frameworks. Documentation of an additional Java sample that uses JAX-WS RI and
demonstrates content transfer using the Object service is available on OpenText My
Support (https://support.opentext.com), as is a .NET consumer sample.
In the latter case any state information passed in the SOAP request header is merged
into the service context on the server.
In general we promote the stateless option, which has the virtue of being simpler and
avoids some of the limitations that are imposed by maintaining state on the server;
however, both options are fully supported.
Whichever of these options you choose, if you are not using the client productivity
layer your consumer code will need to modify the SOAP header. In the case of
stateless consumption, a serviceContext header is constructed based on user
credentials and other data regarding service state and placed in the SOAP envelope.
In the case of a registered service context, the consumer invokes the DFS
ContextRegistryService to obtain a token in exchange for the service context data.
The token must then be included within the SOAP request header within a
wsse:Security element. Both of these techniques are illustrated in the provided Axis2
samples.
As a convenience, an Ant build.xml file is provided for the Axis samples. You can
use this file to compile and run the samples instead of carrying out the tasks in the
following procedure manually.
• javaOutputDirectory – the directory where you want the java client proxies to
be output to
• host:port – the host and port where DFS is located
The classes that are generated from this WSDL are recommended for all DFS
consumers regardless of whether or not you register the service context. The
ContextRegistryService provides convenience classes such as the ServiceContext
class, which makes developing consumers easier.
2. Generate the proxies for each service that you want to consume with the
following command (an example of the command form is shown after this
procedure). For the samples to work correctly, generate proxies for the
SchemaService:
• javaOutputDirectory – the directory where you want the java client proxies to
be output to
• host:port – the host and port where DFS is located
• module – the name of the module that the service is in, such as “core” or
“search”
• ServiceName – the name of the service that you want to consume, such as
“SchemaService” or “SearchService”
3. Add the directory that you specified for javaOutputDirectory as a source folder
in your project. The generated proxies need to be present for your consumer to
compile correctly.
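For reference, the proxy-generation command referred to in step 2 generally takes a form like the following, assuming the Axis2 libraries are on the classpath. The host, port, module, service name, and output directory values are placeholders; the WSDL URL follows the service address pattern described elsewhere in this guide:

java org.apache.axis2.wsdl.WSDL2Java -uri http://<host:port>/services/<module>/<ServiceName>?wsdl -o <javaOutputDirectory> -d adb -s

For the samples, you would run this once for the ContextRegistryService and once for the SchemaService.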
Once you have the proxies generated, you can begin writing your consumer.
ContextRegistryServiceStub stub =
    new ContextRegistryServiceStub(this.contextRegistryURL);
RegisterResponse response = stub.register(register);
return response.getReturn();
}
The following SOAP message shows the request that is sent when a context is
registered:
secext-1.0.xsd","wsse");
OMElement securityElement =
omFactory.createOMElement("Security", wsse);
OMElement tokenElement =
omFactory.createOMElement("BinarySecurityToken",
wsse);
OMNamespace wsu = tokenElement.declareNamespace(
"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-
wssecurity-
utility-1.0.xsd","wsu");
tokenElement.addAttribute("QualificationValueType",
"http://schemas.emc.com/
documentum#ResourceAccessToken",
wsse);
tokenElement.addAttribute("Id", "RAD", wsu);
tokenElement.setText(token);
securityElement.addChild(tokenElement);
return securityElement;
}
The following snippet of XML shows what the security header should look like. The
value of the BinarySecurityToken element is the token that was returned by the
Context Registry Service.
<wsse:Security xmlns:wsse=
    "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
  <wsse:BinarySecurityToken
      QualificationValueType="http://schemas.emc.com/documentum#ResourceAccessToken"
      xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
      wsu:Id="RAD">
    hostname/123.4.56.789-123456789123-45678901234567890-1
  </wsse:BinarySecurityToken>
</wsse:Security>
            (serviceContext.getIdentities().get(0))).getRepositoryName(), null);
    System.out.println("Repository Default Schema Name:" + r.getDefaultSchemaName()
        + "\n" + "Repository Description: " + r.getDescription()
        + "\n" + "Repository Label: " + r.getLabel()
        + "\n" + "Repository Schema Names: " + r.getSchemaNames());
}
catch (Exception e)
{
    e.printStackTrace();
}
}
The following SOAP message is sent as the request when calling the Schema
Service's getRepositoryInfo operation:
secext-1.0.xsd">
<wsse:BinarySecurityToken
QualificationValueType="http://
schemas.emc.com/
documentum#ResourceAccessToken"
xmlns:wsu=
"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-
wssecurity-
utility-1.0.xsd"
wsu:Id="RAD">
hostname/123.4.56.789-123456789123-45678901234567890-1
</wsse:BinarySecurityToken>
</wsse:Security>
</S:Header>
<S:Body>
<ns7:getRepositoryInfo
xmlns:ns2="http://
properties.core.datamodel.fs.documentum.emc.com/"
xmlns:ns3="http://core.datamodel.fs.documentum.emc.com/"
xmlns:ns4="http://content.core.datamodel.fs.documentum.emc.com/"
xmlns:ns5="http://
profiles.core.datamodel.fs.documentum.emc.com/"
xmlns:ns6="http://schema.core.datamodel.fs.documentum.emc.com/"
xmlns:ns7="http://core.services.fs.documentum.emc.com/">
<repositoryName>repository</repositoryName>
</ns7:getRepositoryInfo>
</S:Body>
</S:Envelope>
JAXBContext.newInstance("com.emc.documentum.fs.datamodel.core.
context");
Marshaller marshaller = jaxbContext.createMarshaller();
marshaller.marshal( new JAXBElement(
new QName("http://
context.core.datamodel.fs.documentum.emc.com/",
"ServiceContext"), ServiceContext.class,
serviceContext ), builder);
OMElement header= builder.getRootElement();
header.declareDefaultNamespace("http://
context.core.datamodel.fs.
documentum.emc.com/");
client.addHeader(header);
        serviceContext.getIdentities().get(0)).getRepositoryName());
    GetRepositoryInfoResponse response = stub.getRepositoryInfo(get);
    RepositoryInfo r = response.getReturn();
    System.out.println("Repository Default Schema Name:" + r.getDefaultSchemaName()
        + "\n" + "Repository Description: " + r.getDescription()
        + "\n" + "Repository Label: " + r.getLabel()
        + "\n" + "Repository Schema Names: " + r.getSchemaNames());
}
catch (Exception e)
{
    e.printStackTrace();
}
}
The following SOAP message is sent as the request when calling the Schema
Service's getSchemaInfo operation:
            profiles.core.datamodel.fs.documentum.emc.com/"
        xmlns:ns4="http://context.core.datamodel.fs.documentum.emc.com/"
        xmlns:ns5="http://content.core.datamodel.fs.documentum.emc.com/"
        xmlns:ns6="http://core.datamodel.fs.documentum.emc.com/"
        token="temporary/USXXLYR1L1C-1210201103234">
      <ns4:Identities xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:type="ns4:RepositoryIdentity"
          repositoryName="techpubs"/>
    </ns4:ServiceContext>
  </S:Header>
  <S:Body>
    <ns7:getSchemaInfo
        xmlns:ns2="http://properties.core.datamodel.fs.documentum.emc.com/"
        xmlns:ns3="http://core.datamodel.fs.documentum.emc.com/"
        xmlns:ns4="http://content.core.datamodel.fs.documentum.emc.com/"
        xmlns:ns5="http://profiles.core.datamodel.fs.documentum.emc.com/"
        xmlns:ns6="http://schema.core.datamodel.fs.documentum.emc.com/"
        xmlns:ns7="http://core.services.fs.documentum.emc.com/">
      <repositoryName>techpubs</repositoryName>
      <schemaName>DEFAULT</schemaName>
    </ns7:getSchemaInfo>
  </S:Body>
</S:Envelope>
2. Edit the build.properties file for the appropriate sample and specify values for
axis.home, schema.service.wsdl, and context.registry.service.wsdl.
3. Enter “ant all” on the command line from the location of the build.xml file to
compile and run the samples. This target calls the clean, artifacts, compile,
run.registered.client, and run.unregistered.client targets. You can choose to run
these targets individually to examine the output of each step.
This chapter describes how to run the Java consumers provided with the DFS SDK
that utilize the Java productivity layer. The Java productivity layer is an optional
client library that provides convenience classes to make it easier to consume DFS
services using Java. “Consuming DFS with the Java DFS Productivity Layer“
on page 45 provides detailed information about Java productivity layer consumers.
• A running DFS server. This can be a standalone instance of DFS running on your
local machine or on a remote host. “Verify the DFS server” on page 36 provides
detailed information.
• The DFS server that you are using needs to point to a connection broker through
which it can access a test repository. Your consumer application will need to
know the name of the test repository and the login and password of a repository
user who has Create Cabinet privileges. “Verify repository and login
information” on page 36 provides detailed information.
• Optionally, a second repository can be available for copying objects across
repositories. This repository should be accessible using the same login
information as the primary repository.
• You must have JDK 6 installed on your system, and your JAVA_HOME
environment variable should be set to the JDK location.
• You must have Apache Ant 1.8.x or higher installed and on your path.
• The DFS SDK must be available on the local file system. Its location will be
referred to as %DFS_SDK%. Make sure that there are no spaces in the folder
names on the path to the SDK.
• The sample consumer source files are located in %DFS_SDK%\samples
\DfsJavaSamples. This folder will be referred to as %SAMPLES_LOC%.
Note: The LifecycleService samples require installing sample data on your test
repository. Before running these samples, install the Documentum Composer
project contained in ./csdata/LifecycleProject.zip to your test repository using
Documentum Composer, or install the DAR file contained in the zip archive
using the darinstaller utility that comes with Composer. Documentum Composer
User Guide provides more information on how to use Composer to install a
DAR file on the repository.
• repository name
• user name of a user with Create Cabinet privileges
• user password
The repository name must be the name of a repository accessible to the DFS server.
The list of available repositories is maintained by a connection broker.
Note: DFS knows where the connection broker is, because the IP address or
DNS name of the machine hosting the connection broker is specified in the
dfc.properties file stored in the DFS EAR file. The connection broker host and
port will have been set during DFS installation. If the EAR file was manually
deployed to an application server, the connection broker host and port should
have been set manually as part of the deployment procedure. Documentum
Platform and Platform Extensions Installation Guide provides detailed
information.
This procedure walks you through running the sample using the provided Ant build
script. This script and its properties file are located in %SAMPLES_LOC%. These
require no modification if they are used from their original location in the SDK.
• repository
• userName
• password
• host
/************************************************************
* You must supply valid values for the following fields: */
2. Run the following command to delete any previously compiled classes and
compile the Java samples project:
The TQueryServiceTest program queries the repository and outputs the names
and IDs of the cabinets in the repository.
If you are running the samples in the Eclipse IDE, read the following sections:
If you are running the samples using Ant, read the following sections:
If you want to run a single sample, run “ant run -Dtest.class=<classname>”, where
<classname> is the name of a test class in the package
com.emc.documentum.fs.doc.test.client, such as TObjServiceCreate. If you want to
run all samples, run “ant run -Dtest.class=TDriver”. Note that some tests are
marked as ignored in testConfig.xml because they require extra setup before they
can be run. If you want to run these tests, change the value of the ignore
attribute to false in testConfig.xml.
2. In the New Project dialog box, choose Java Project from Existing Ant Build File,
then click Next.
You should now be able to compile and run the samples in Eclipse after you have
configured the samples correctly, which will be discussed in “Setting hard coded
values in the sample code” on page 40 and “Configuring DFS client properties
(remote mode only)” on page 41.
1. Edit %SAMPLES_LOC%\test\com\emc\documentum\fs\doc\test\client
\SampleContentManager.java and specify the values for the <gifImageFilePath>
and <gifImage1FilePath> variables. The consumer samples use these files to
create test objects in the repository. Two gif images are provided in the
"core">
<ModuleInfo name="core"
protocol="http"
host="YOUR_HOST_NAME"
port="YOUR_PORT"
contextRoot="services">
</ModuleInfo>
<ModuleInfo name="bpm"
protocol="http"
host="YOUR_HOST_NAME"
port="YOUR_PORT"
contextRoot="services">
</ModuleInfo>
<ModuleInfo name="collaboration"
protocol="http"
host="YOUR_HOST_NAME"
port="YOUR_PORT"
contextRoot="services">
</ModuleInfo>
<ModuleInfo name="ci"
protocol="http"
host="YOUR_HOST_NAME"
port="YOUR_PORT"
contextRoot="services">
</ModuleInfo>
</DfsClientConfig>
Note: There is more than one copy of this file on the SDK, so make sure you
edit the one in %SAMPLES_LOC%\etc.
Also, to run the workflow service tests, you must specify correct values for
<dfc.globalregistry.username>, <dfc.globalregistry.password>, and
<dfc.globalregistry.repository>. If you are running the samples in remote mode, you do
not have to edit your local copy of dfc.properties. The dfc.properties file that is on
the server is used, which was configured during installation of DFS.
1. If necessary, modify the path to the DFS SDK root directory in the Ant
%SAMPLES_LOC%/build.properties file. In the provided file, this value is set as
follows—modify it as required:
dfs.sdk.home=c:/<dfs-sdk-version>/
2. To enable you to run the samples without modifying the sample code, this
release provides you with property files which contain configurable parameters
for the samples. If necessary, modify these parameters in the corresponding
property files. For example, the property files under the <dfs-sdk-version>
\<dfs-sdk-version>\samples\AcmeCustomService\ directory contain
configurable parameters for the TAccessControlService sample.
4. Execute the info target for information about running the samples.
ant info
This will print information to the console about available Ant targets and
available samples to run.
Buildfile: build.xml
[echo] DFS SDK home is 'c:/<dfs-sdk-version>/'
[echo] This project home is
'c:\<dfs-sdk-version>\samples\JavaConsumers
\DesktopProductivityLayer'
info:
[echo] Available tasks for the project
[echo] ant clean - to clean the project
[echo] ant compile - to compile the project
[echo] ant run -Dtest.class=<class name> - to run a test
class
[echo] Available test classes for run target:
[echo] TAccessControlService
[echo] TDriver
[echo] TLifecycleService
[echo] TExceptionHandlingD7
[echo] TObjServiceAspect
[echo] TObjServiceCopy
[echo] TObjServiceCreate
[echo] TObjServiceGet
[echo] TObjServiceDelete
[echo] TObjServiceMove
[echo] TObjServiceUpdate
[echo] TQueryServicePassthrough
[echo] TQueryServiceTest
[echo] TSchemaServiceDemo
[echo] TSearchService
[echo] TVersionControlServiceDemo
[echo] TVirtualDocumentService
5. Run any of the classes, listed above, individually using the ant run target as
follows:
The DFS productivity layer contains a set of Java libraries that assist in writing DFS
consumers. Using the DFS productivity layer is the easiest way to begin consuming
DFS.
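As a quick orientation, a minimal remote consumer built with the Java productivity layer looks roughly like the following sketch. The repository name, credentials, and address are placeholders, and the class and method names are those of the productivity layer and core data model delivered with the SDK (verify the exact packages and signatures against the SDK Javadoc):

ContextFactory contextFactory = ContextFactory.getInstance();
IServiceContext serviceContext = contextFactory.newContext();
serviceContext.addIdentity(
    new RepositoryIdentity("myRepository", "myUserName", "myPassword", ""));

ServiceFactory serviceFactory = ServiceFactory.getInstance();
IQueryService queryService = serviceFactory.getRemoteService(
    IQueryService.class, serviceContext, "core", "http://dfsHostName:8080/services");

PassthroughQuery query = new PassthroughQuery();
query.setQueryString("select r_object_id, object_name from dm_cabinet");
query.addRepository("myRepository");

QueryResult queryResult = queryService.execute(
    query, new QueryExecution(), new OperationOptions());
System.out.println("Cabinets found: " + queryResult.getDataObjects().size());

The same consumer could run in local mode by obtaining the service with getLocalService instead of getRemoteService, with no other changes to the calling code.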
A remote DFS client is a web service consumer, using SOAP over HTTP to
communicate with a remote DFS service provider. The service runs in the JEE
container JVM and handles all of the implementation details of invoking DFC and
interacting with Documentum Server.
In a local DFS client, both the consumer and service run on the same Java virtual
machine. DFS uses a local DFC client to interact with Documentum Server.
Consumer code invokes DFS services using the productivity layer, and does not
invoke classes on the DFC layer.
Necessarily, a local DFS consumer differs in some important respects from a remote
consumer. In particular note the following:
• Service context registration (which sets state in the remote DFS service) has no
meaning in a local context, so registering the service context does nothing in a
local consumer.
• Content transfer in a local application is completely different from content
transfer in a remote application. Remote content transfer protocols (MTOM,
Base64, and UCF) are not used by a local consumer. Instead, content is
transferred by the underlying DFC client. “Content types returned by DFS”
on page 189 provides detailed information.
To develop (or deploy) a DFS consumer that can only invoke services remotely,
include the JARs listed in the following table.
To develop (or deploy) a DFS consumer that can invoke services locally, include the
JARs listed in the following table. A local consumer can also invoke services
remotely, so these are the dependencies you will need to develop a consumer that
can be switched between local and remote modes.
• <your-custom>-services.jar
• emc-dfs-services.jar
• <your-custom>-services-remote.jar
• emc-dfs-services-remote.jar
Applications that use core services and the core data model should also include on
their classpath, in addition to the core services and runtime jars:
For remote execution of DFS services, you do not have to configure a local copy of
dfc.properties. DFS uses the DFC client that is bundled in the dfs.ear file that is
deployed on a standalone application server. In these cases, the minimum
dfc.properties settings for the connection broker and global registry are set during
the DFS installation. If you do not use the DFS installation program you will need to
configure dfc.properties in the EAR file. Documentum Platform and Platform Extensions
Installation Guide provides detailed information.
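For reference, the minimum settings in question typically identify the connection broker and the global registry along the following lines. The host, repository, and user values are placeholders (the global registry user is commonly dm_bof_registry), and the global registry password must be stored in encrypted form, as described in the installation documentation:

dfc.docbroker.host[0]=connection_broker_host
dfc.docbroker.port[0]=1489
dfc.globalregistry.repository=global_registry_repository
dfc.globalregistry.username=dm_bof_registry
dfc.globalregistry.password=<encrypted_password>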
To configure dfc.properties:
Note: If you are using explicit addressing all of the time, you do not have to
configure the dfs-client.xml file, because you will be specifying the host
information with each service invocation.
core">
<ModuleInfo name="core"
protocol="http"
host="dfsHostName"
port="8080"
contextRoot="services">
</ModuleInfo>
<ModuleInfo name="bpm"
protocol="http"
host="dfsHostName"
port="8080"
contextRoot="services">
</ModuleInfo>
<ModuleInfo name="collaboration"
protocol="http"
host="dfsHostName"
port="8080"
contextRoot="services">
</ModuleInfo>
<ModuleInfo name="ci"
protocol="http"
host="dfsHostName"
port="8080"
contextRoot="services">
</ModuleInfo>
<ModuleInfo name="my_module"
protocol="http"
host="dfsHostName"
port="8080"
contextRoot="my_services">
</ModuleInfo>
</DfsClientConfig>
<protocol>://<host>:<port>/<contextRoot>/<module>/<serviceName>
For example:
http://dfsHostName:8080/services/core/ObjectService
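The practical difference between implicit addressing (driven by dfs-client.xml) and explicit addressing shows up in which getRemoteService variant you call. The following is a sketch; the module name and address are placeholders, and serviceContext is assumed to be an existing IServiceContext:

// Implicit addressing: module and context root are resolved from dfs-client.xml.
IObjectService objectService = ServiceFactory.getInstance()
    .getRemoteService(IObjectService.class, serviceContext);

// Explicit addressing: the module and context root are supplied per invocation,
// so dfs-client.xml is not consulted for this service.
IObjectService objectServiceExplicit = ServiceFactory.getInstance()
    .getRemoteService(IObjectService.class, serviceContext,
        "core", "http://dfsHostName:8080/services");

Explicit addressing is also what allows you to skip dfs-client.xml configuration entirely, as noted above.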
Please also note that the service context is not thread safe and should not be accessed
by separate threads in a multi-threaded application. If you require multiple threads
your application must provide explicit synchronization.
If the service context is not registered, its state is passed in the SOAP header with
each service invocation. There are advantages and disadvantages to both
approaches (see “Service context registration” on page 56).
Properties and profiles can often be passed to an operation during service operation
invocation through an OperationOptions argument, as an alternative to storing
properties and profiles in the service context, or as a way of overriding settings
stored in the service context.
4.5.2 Identities
A service context contains a collection of identities, which are mappings of
repository names onto sets of user credentials used in service authentication. A
service context is expected to contain only one identity per repository name.
Identities are set in a service context using one of the concrete Identity subclasses:
• BasicIdentity directly extends the Identity parent class, and includes accessors for
user name and password, but not for repository name. This class can be used in
cases where the service is known to access only a single repository, or in cases
where the user credentials in all repositories are known to be identical.
BasicIdentity can also be used to supply fallback credentials in the case where the
user has differing credentials on some repositories, for which RepositoryIdentity
instances will be set, and identical credentials on all other repositories. Because
BasicIdentity does not contain repository information, the user name and
password are authenticated against the global registry. If there is no global registry
defined, authentication fails.
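For example, a consumer whose user has specific credentials on one repository and shared credentials everywhere else might populate the context as follows (a sketch; repository names and credentials are placeholders):

IServiceContext serviceContext = ContextFactory.getInstance().newContext();

// Repository-specific credentials for one repository.
serviceContext.addIdentity(
    new RepositoryIdentity("engineering", "engUser", "engPassword", ""));

// Fallback credentials, authenticated against the global registry, used for
// any repository that has no RepositoryIdentity of its own.
serviceContext.addIdentity(new BasicIdentity("sharedUser", "sharedPassword"));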
4.5.3 Locale
The locale property of an IServiceContext object specifies the language and
optionally country setting to use for locale-sensitive features. The locale is used, for
example, to control which NLS-specific Data Dictionary strings will be provided by
Documentum Server to the DFS layer. The format of the locale string value is based
on Java locale strings, which in turn are based on ISO 639-1 two-character, lowercase
language codes and ISO 3166 country codes. The format of a Java locale string is
<languagecode>[_<countrycode>]; for example, the Java locale string for British
English is “en_GB”.
If the locale is not set in the service context, the DFS server runtime will use the
value set in the DFS server application. Typically this means that a DFS client
(particularly a remote client) should set the locale to the locale expected by the user,
rather than relying on the value set on the server. The locale setting used by the DFS
server can be specified in the dfc.locale property of dfc.properties. If the value is not
set in the service context by the client and not set on the server, the DFS server will
use the locale of the JVM in which it is running.
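For example, a client that wants British English data dictionary strings would set the locale on the context before invoking services (a sketch; serviceContext is assumed to be an existing IServiceContext):

// Request the en_GB locale for locale-sensitive features.
serviceContext.setLocale("en_GB");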
In most cases, when you encapsulate business logic into a custom service with
transactions enabled, multiple DFS service calls are involved. This custom
service can then be consumed remotely or locally as a single transaction.
• Transactions are not supported across multiple remote service calls; therefore, you
can only call DFS services in local mode when implementing the custom service.
• You cannot add other identities when implementing the custom service, meaning
that you can only use the identities passed by the custom service client.
• You cannot use ContextFactory.newContext() in the custom service
implementation. Instead, you have to use ContextFactory.getContext().
"PAYLOAD_CONTINUE_ON_EXCEPTION");
The expected behavior is that the payload policy must be honored first, then the
transaction policy. For example, suppose that we use the Object service to create
objects based on a DataPackage that has two DataObject trees. We use
PAYLOAD_CONTINUE_ON_EXCEPTION with transaction support to create the
objects. At runtime, a leaf in the first DataObject tree fails and all others succeed. In
this case only the objects in the second DataObject tree would be created; the
creation of the first DataObject tree would be rolled back. If no transaction support
were used, some leaves from the first DataObject tree would be created, as well as
the entire second DataObject tree.
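The payload policy is supplied as a runtime property on the service context. A consumer might set up the scenario described above roughly as follows (a sketch; the property name PAYLOAD_PROCESSING_POLICY is an assumption here and should be verified against the SDK documentation):

// Continue processing the remaining DataObjects when one of them fails.
// Combined with transaction support, partial results are still rolled back.
serviceContext.setRuntimeProperty("PAYLOAD_PROCESSING_POLICY",
    "PAYLOAD_CONTINUE_ON_EXCEPTION");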
There are two benefits to registering the service context. The first benefit is that
services can share a registered context. This minimizes over the wire traffic since the
consumer does not have to send service context information to every service it calls.
The second benefit occurs when a consumer calls a service and passes in a delta
modification to the service context. The DFS client runtime figures out the minimal
amount of data to send over the wire (the modifications) and the server runtime
merges the delta modifications into the service context that is stored on the server. If
your application is maintaining a lot of data (such as profiles, properties, and
identities) in the service context, this can significantly reduce how much data is sent
with each service call, because most of the data can be sent just once when the
service context is registered. On the other hand, if your application is storing only a
small amount of data in the service context, there is really not much to be gained by
registering the service context.
You should be aware that there are limitations that result from registration of service
context.
• The service context can be shared only by services that share the same
classloader. Typically this means that the services are deployed in the same EAR
file on the application server. This limitation means that the client must be aware
of the physical location of the services that it is invoking and manage service
context sharing based on shared physical locations.
• Registration of service contexts prevents use of failover in clustered DFS
installations.
• Registration of the service context is not supported with identities that store
Kerberos credentials.
If you are using the DFS client productivity layer, registering a service context is
mostly handled by the runtime, with little work on your part. You start by creating a
service context object, then you call one of the overloaded register methods.
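With the productivity layer, the registration itself is a single call (a sketch; the module name and context root are placeholders, and the credentials are assumed to be valid):

IServiceContext serviceContext = ContextFactory.getInstance().newContext();
serviceContext.addIdentity(
    new RepositoryIdentity("myRepository", "myUserName", "myPassword", ""));

// Register the context with the remote ContextRegistry service. Subsequent
// invocations that use the returned context send only a token (plus any
// delta modifications) instead of the full context state.
IServiceContext registeredContext = ContextFactory.getInstance()
    .register(serviceContext, "core", "http://dfsHostName:8080/services");

Use the returned context when instantiating service proxies so that the token, rather than the full state, is sent with each call.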
If you wish to register the service context and are not using the productivity layer,
you can register the context by invoking the ContextRegistry service directly (see
“Writing a consumer that registers the service context” on page 24).
The register method can only be executed remotely and is meaningless in a local
Java service client. If you are running your client in local mode, the register method
will still result in an attempt at remote invocation of ContextRegistryService. If the
remote invocation fails, an exception will be thrown. If the invocation succeeds
(because there is a remote connection configured and available), there will be a
harmless invocation of the remote service.
• Both DFS checked and runtime exceptions now implement a common interface.
IDfsException is the common interface that is implemented by DfsException and
DfsRuntimeException. DfsException is the common ancestor class for all DFS
checked exceptions. And DfsRuntimeException is the root class for all DFS
runtime exceptions. Consequently, you no longer need to catch either the generic
Exception class or multiple exceptions that are more specific (for example,
ServiceException, ServiceFrameworkException, ServiceRegistryException, and
SerializableException). Instead, you only need to catch the root DfsException or
DfsRuntimeException class.
• Exceptions contain more specific information about the error condition. For
example, IDfsException's CauseCode contains the message ID and the message of
the root exception. Consequently, you no longer need to parse the exception
chain to find the root cause.
• You can respond to exception states appropriately by using
IDfsException.getExceptionGroup. You respond to categories of exceptions by using the
ExceptionGroup enum, which corresponds to error groups. For example,
exceptions related to incorrect input are identified by the ExceptionGroup
enum’s INPUT field. You retrieve the category of an exception by calling the
getExceptionGroup method on DfsException or DfsRuntimeException.
• The following exceptions enable you to respond to more specific errors:
– AdapterInitException
– ContentHandlingException
– GraphParseException
– IllegalInputException
– InvalidObjectIdentityException
– RelationException
• DfsExceptionHolder implements IDfsException.
• Because bare WSDL clients are public contracts, using DfsException or
DfsRuntimeException eliminates compatibility issues.
• com.emc.documentum.fs.rt.IDfsException
The base interface that all DFS exceptions must implement.
• com.emc.documentum.fs.rt.DfsException
The root class for all DFS checked exceptions.
• com.emc.documentum.fs.rt.DfsRuntimeException
The root class for all DFS runtime exceptions.
Note: The following exceptions that were in the previous release do not inherit
from DfsException nor DfsRuntimeException:
• UcfException
• ServiceCreationException
4.7.3 Examples
The following code samples and output are derived from com.emc.documentum.
fs.doc.test.client.TExceptionHandlingD7.
case CONFIGURATION:
    fixConfiguration();
    break;
case AUTHENTICATION:
    fixAuthentication();
    break;
case INPUT:
    fixInput();
    break;
default:
    throw new RuntimeException("Unexpected error");
}
The following code sample shows the different methods that you can call to
show information about any DfsException:
catch (DfsException e)
{
    System.out.println("Iteration = " + i);
    System.out.println("Message = " + e.getMessage());
    System.out.println("CauseCode = " + e.getCauseCode());
    System.out.println("ExceptionGroup = " + e.getExceptionGroup());
The following output shows a sample of the output for the DfsException
fields:
Iteration = 0
Message = Service "com.emc.documentum.fs.services.core.ObjectService" is not
    available at url: "http://localhost:8089/core/ObjectService?WSDL".
    Connection refused: connect
ExceptionGroup = CONFIGURATION
Iteration = 1
ExceptionGroup = AUTHENTICATION
Iteration = 2
ExceptionGroup = INPUT
created object with identity = 090004d38001479d
4.8 OperationOptions
DFS services generally take an OperationOptions object as the final argument when
calling a service operation. OperationOptions contains profiles and properties that
specify behaviors for the operation. The properties have no overlap with properties
set in the service context's RuntimeProperties. The profiles can potentially overlap
with profiles stored in the service context. In the case that they do overlap, the
profiles in OperationOptions always take precedence over profiles stored in the
service context. The profiles stored in the service context take effect when no
matching profile is stored in the OperationOptions for a specific operation. The
override of profiles in the service context takes place on a profile-by-profile basis:
there is no merge of specific settings stored within the profiles.
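For example, a consumer could override content-related behavior for a single Object service get call by passing a ContentProfile in OperationOptions rather than storing it in the service context (a sketch; objectService and objectIdSet are assumed to be an existing service proxy and ObjectIdentitySet):

// A per-operation ContentProfile; for this call only, it takes precedence over
// any ContentProfile stored in the service context.
ContentProfile contentProfile = new ContentProfile();
contentProfile.setFormatFilter(FormatFilter.ANY);

OperationOptions operationOptions = new OperationOptions();
operationOptions.setContentProfile(contentProfile);

DataPackage dataPackage = objectService.get(objectIdSet, operationOptions);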
OperationOptions are discussed in more detail under the documentation for specific
service operations. For more information on core profiles, see “PropertyProfile”
on page 103, “ContentProfile” on page 107, “PermissionProfile” on page 113, and
“RelationshipProfile” on page 125. Other profiles are covered under specific
services in the Documentum Enterprise Content Services Reference Guide.
• The productivity layer API features Java beans with additional convenience
functionality and logic for the data model classes, while the light API only
contains generated beans.
• The productivity layer API supports both local and remote service invocation,
while the light API supports remote service invocation only.
The light API for the services is intended to be used in conjunction with the DFS
productivity layer, so you can still utilize conveniences such as the ContextFactory
and ServiceFactory. The generateRemoteClient task also generates a service model
XML file and a dfs-client.xml file that you can use for implicit addressing of the
services that you want to consume. The “generateRemoteClient task” on page 171
section provides detailed information on the generateRemoteClient task. WSDL first
consumption of DFS is also available through the Composer IDE. Documentum
Composer User Guide provides more information on generating the light API through
Composer.
This chapter has two goals. The first is to show you how to build a basic DFS consumer,
invoke a service, and get some results. This will let you know whether your environment is
set up correctly, and show you the basic steps required to code a DFS consumer.
The second goal is to show you how to set up and run the DFS documentation
samples. You may want to debug the samples in Visual Studio to see exactly how
they work, and you may want to modify the samples or add samples of your own to
the sample project.
• A running DFS server. This can be a standalone instance of DFS running on your
local machine or on a remote host. “Verify the DFS server” on page 36 provides
detailed information.
• The DFS server that you are using needs to be pointed to a Connection Broker
through which it can access a test repository. Your consumer application will
need to know the name of the test repository and the login and password of a
repository user who has Create Cabinet privileges. “Verify repository and login
information” on page 36 provides detailed information.
• Optionally, a second repository can be available for copying objects across
repositories. This repository should be accessible using the same login
information as the primary repository.
• For UCF content transfer, you must have Java Runtime Environment (JRE) 11 installed
on your system, and the JAVA_HOME environment variable should be set to the
Java location.
• The DFS SDK must be available on the local file system.
• The sample consumer source files are located in %DFS_SDK%\samples
\DfsDotNetSamples. This folder will be referred to as %SAMPLES_LOC%.
• repository name
• user name of a user with Create Cabinet privileges
• user password
The repository name must be the name of a repository accessible to the DFS server.
The list of available repositories is maintained by a connection broker (often still
referred to as docbroker).
Note: DFS knows where the connection broker is, because the IP address or
host name of the machine hosting the connection broker is specified in the
dfc.properties file stored in the DFS EAR or WAR file. The connection broker
host and port should have been set manually as part of the deployment
procedure. Documentum Platform and Platform Extensions Installation Guide
provides detailed information.
2. In all three projects in the solution, replace the following references with
references to the corresponding assemblies on the DFS SDK (in <dfs-sdk-
version>\lib\dotnet).
• Emc.Documentum.FS.DataModel.Bpm
• Emc.Documentum.FS.DataModel.CI
• Emc.Documentum.FS.DataModel.Collaboration
• Emc.Documentum.FS.DataModel.Core
• Emc.Documentum.FS.DataModel.Shared
• Emc.Documentum.FS.Runtime
• Emc.Documentum.FS.Services.Bpm
• Emc.Documentum.FS.Services.CI
• Emc.Documentum.FS.Services.Collaboration
• Emc.Documentum.FS.Services.Core
• Emc.Documentum.FS.Services.Search
/*
 * This routine returns a service context
 * which includes the repository name and user credentials
 */
private IServiceContext getSimpleContext()
{
/*
* Get the service context and set the user
* credentials and repository information
*/
ContextFactory contextFactory = ContextFactory.Instance;
IServiceContext serviceContext = contextFactory.NewContext();
RepositoryIdentity repositoryIdentity =
new RepositoryIdentity(repository, userName, password, "");
serviceContext.AddIdentity(repositoryIdentity);
return serviceContext;
}
When the QueryService is invoked, the DFS client-side runtime will serialize data
from the local ServiceContext object and pass it over the wire as a SOAP header like
the one shown here:
<s:Header>
  <ServiceContext token="temporary/127.0.0.1-1205168560578-476512254"
      xmlns="http://context.core.datamodel.fs.documentum.emc.com/">
    <Identities xsi:type="RepositoryIdentity"
        userName="MyUserName"
        password="MyPassword"
        repositoryName="MyRepositoryName"
        domain=""
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
    <RuntimeProperties/>
  </ServiceContext>
</s:Header>
/*
 * Get an instance of the QueryService by passing
 * in the service context to the service factory.
 */
ServiceFactory serviceFactory = ServiceFactory.Instance;
IServiceContext serviceContext = getSimpleContext();
IQueryService querySvc = serviceFactory.GetRemoteService<IQueryService>(
    serviceContext, moduleName, address);
Next, CallQueryService constructs two objects that will be passed to the Execute
method: a PassthroughQuery object that encapsulates a DQL statement string, and a
QueryExecution object, which contains service option settings. Both objects will be
serialized and passed to the remote service in the SOAP body.
/*
 * Construct the query and the QueryExecution options
 */
PassthroughQuery query = new PassthroughQuery();
query.QueryString = "select r_object_id, object_name from dm_cabinet";
query.AddRepository(repository);
QueryExecution queryEx = new QueryExecution();
queryEx.CacheStrategyType = CacheStrategyType.DEFAULT_CACHE_STRATEGY;
CallQueryService then calls the Execute method of the service proxy, which causes
the runtime to serialize the data passed to the proxy Execute method, invoke the
remote service, and receive a response via HTTP.
/*
 * Execute the query passing in operation options.
 * This sends the SOAP message across the wire,
 * receives the SOAP response, and wraps the response in the
 * QueryResult object.
 */
OperationOptions operationOptions = null;
QueryResult queryResult = querySvc.Execute(query, queryEx, operationOptions);
The complete SOAP message passed to the service endpoint is shown here:
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Header>
    <ServiceContext token="temporary/127.0.0.1-1205239338115-25203285"
        xmlns="http://context.core.datamodel.fs.documentum.emc.com/">
      <Identities xsi:type="RepositoryIdentity"
          userName="MyUserName"
          password="MyPassword"
          repositoryName="MyRepositoryName"
          domain=""
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
      <RuntimeProperties/>
    </ServiceContext>
  </s:Header>
  <s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <execute xmlns="http://core.services.fs.documentum.emc.com/">
      <query xsi:type="q1:PassthroughQuery"
          queryString="select r_object_id, object_name from dm_cabinet"
          xmlns=""
          xmlns:q1="http://query.core.datamodel.fs.documentum.emc.com/">
        <q1:repositories>techpubs</q1:repositories>
      </query>
      <execution startingIndex="0"
          maxResultCount="100"
          maxResultPerSource="50"
          cacheStrategyType="DEFAULT_CACHE_STRATEGY"
          xmlns=""/>
    </execute>
  </s:Body>
</s:Envelope>
1. Open QueryServiceTest.cs source file and specify valid hard-coded values for
the following fields.
• repository
• userName
• password
• address
/************************************************************
* You must supply valid values for the following fields: */
/***********************************************************/
To specify the service endpoint address, replace HostName with the IP address
or host name of the machine where DFS is deployed, and replace PortNumber
with the port number where the DFS application is deployed. The port number
depends on the deployment environment and is typically 8080:
http://localhost:8080/services
4. If the sample executes successfully, the output window should show, among
other things, a list of the names and object identities of the cabinets in the test
repository. The first run of the sample will be significantly slower than
subsequent runs.
The documentation samples proper (that is, the ones you will see in this
document) are all in the DfsDotNetSamples project. The DotNetSampleRunner
project provides a way of running the samples, including some support for creating
and deleting sample data on a test repository. Methods in the DotNetSampleRunner
project set up expected repository data, call the sample methods, passing them
appropriate values, then remove the sample data that was initially set up.
For some samples (specifically the LifeCycleService samples) you will need to install
additional objects on the repository using Composer. The objects that you need are
provided on the SDK as a Composer project file in <dfs-sdk-version>\samples
\DfsDotNetSamples\Csdata\LifecycleProject.zip.
To set up and run the samples, follow these steps, which are detailed in the sections
that follow.
registryProviderModuleName="core">
<ModuleInfo name="core"
protocol="http"
host="MY_DFS_HOST"
port="MY_PORT"
contextRoot="services"/>
<ModuleInfo name="search"
protocol="http"
host="MY_DFS_HOST"
port="MY_PORT"
contextRoot="services"/>
<ModuleInfo name="bpm"
protocol="http"
host="MY_DFS_HOST"
port="MY_PORT"
contextRoot="services"/>
<ModuleInfo name="collaboration"
protocol="http"
host="MY_DFS_HOST"
port="MY_PORT"
contextRoot="services" />
</ConfigObject>
.
.
.
• defaultRepository
• userName
• password
If you have a second repository available, you can also set a valid value for
secondaryRepository; otherwise leave it set to null. The secondary repository is used
in only one sample, which demonstrates how to copy an object from one repository
to another. The secondary repository must be accessible by using the same login
credentials that are used for the default repository.
// TODO: You must supply valid values for the following variables
private string defaultRepository = "YOUR_REPOSITORY";
private string userName = "YOUR_USER_NAME";
private string password = "YOUR_USER_PASSWORD";
The sample runner removes any sample data that it created from the repository after
each sample is run. If you want to leave the data there so you can see what
happened on the repository, set isDataCleanedUp to false.
If you do this, you should delete the created sample data yourself after running each
sample to avoid errors related to duplicate object names when running successive
samples.
If more than one client is going to test the samples against the same repository, you
should create a unique name for the test cabinet that gets created on the repository
by the sample runner. Do this by changing the testCabinetPath constant (for
example, by replacing XX with your initials).
The .NET productivity layer is functionally identical to the Java productivity layer,
except that the .NET productivity layer supports only remote service invocation.
Note that while DFS samples are in C#, the .NET library is CLS compliant and can be
used by any CLS-supported language.
DFS consumer projects will require references to the following assemblies from the
DFS SDK:
• Emc.Documentum.FS.DataModel.Core
• Emc.Documentum.FS.DataModel.Shared
• Emc.Documentum.FS.Runtime
In addition, the application may need to reference some of the following assemblies,
depending on the DFS functionality that the application utilizes:
• Emc.Documentum.FS.DataModel.Bpm
• Emc.Documentum.FS.DataModel.CI
• Emc.Documentum.FS.DataModel.Collaboration
• Emc.Documentum.FS.Services.Bpm
• Emc.Documentum.FS.Services.CI
• Emc.Documentum.FS.Services.Collaboration
• Emc.Documentum.FS.Services.Core
• Emc.Documentum.FS.Services.Search
XmlSerializerSectionHandler,
Emc.Documentum.FS.Runtime"/>
</sectionGroup>
</sectionGroup>
</configSections>
<Emc.Documentum>
<FS>
<ConfigObject
    type="Emc.Documentum.FS.Runtime.Impl.Configuration.ConfigObject, Emc.Documentum.FS.Runtime"
    defaultModuleName="core"
    registryProviderModuleName="core"
    requireSignedUcfJars="true">
<ModuleInfo name="core"
protocol="http"
host="MY_DFS_HOST"
port="MY_PORT"
contextRoot="services"/>
<ModuleInfo name="search"
protocol="http"
host="MY_DFS_HOST"
port="MY_PORT"
contextRoot="services"/>
<ModuleInfo name="bpm"
protocol="http"
host="MY_DFS_HOST"
port="MY_PORT"
contextRoot="services"/>
<ModuleInfo name="collaboration"
protocol="http"
host="MY_DFS_HOST"
port="MY_PORT"
contextRoot="services" />
</ConfigObject>
</FS>
</Emc.Documentum>
<system.serviceModel>
<bindings>
<basicHttpBinding>
<binding name="DfsAgentService"
closeTimeout="00:01:00"
openTimeout="00:01:00"
receiveTimeout="00:10:00"
sendTimeout="00:01:00"
allowCookies="false"
bypassProxyOnLocal="false"
hostNameComparisonMode="StrongWildcard"
maxBufferSize="1000000"
maxBufferPoolSize="10000000"
maxReceivedMessageSize="1000000"
messageEncoding="Text"
textEncoding="utf-8"
transferMode="Buffered"
useDefaultWebProxy="true">
<readerQuotas maxDepth="32"
maxStringContentLength="8192"
maxArrayLength="16384"
maxBytesPerRead="4096"
maxNameTableCharCount="16384" />
<security mode="None">
<transport clientCredentialType="None"
proxyCredentialType="None"
realm="" />
<message clientCredentialType="UserName"
algorithmSuite="Default" />
</security>
</binding>
<binding name="DfsContextRegistryService"
closeTimeout="00:01:00"
openTimeout="00:01:00"
receiveTimeout="00:10:00"
sendTimeout="00:01:00"
allowCookies="false"
bypassProxyOnLocal="false"
hostNameComparisonMode="StrongWildcard"
maxBufferSize="1000000"
maxBufferPoolSize="10000000"
maxReceivedMessageSize="1000000"
messageEncoding="Text"
textEncoding="utf-8"
transferMode="Buffered"
useDefaultWebProxy="true">
<readerQuotas maxDepth="32"
maxStringContentLength="8192"
maxArrayLength="16384"
maxBytesPerRead="4096"
maxNameTableCharCount="16384" />
<security mode="None">
<transport clientCredentialType="None"
proxyCredentialType="None"
realm="" />
<message clientCredentialType="UserName"
algorithmSuite="Default" />
</security>
</binding>
<binding name="DfsDefaultService"
closeTimeout="00:01:00"
openTimeout="00:01:00"
receiveTimeout="00:10:00"
sendTimeout="00:01:00"
allowCookies="false"
bypassProxyOnLocal="false"
hostNameComparisonMode="StrongWildcard"
maxBufferSize="1000000"
maxBufferPoolSize="10000000"
maxReceivedMessageSize="1000000"
messageEncoding="Text"
textEncoding="utf-8"
transferMode="Buffered"
useDefaultWebProxy="true">
<readerQuotas maxDepth="32"
maxStringContentLength="8192"
maxArrayLength="16384"
maxBytesPerRead="4096"
maxNameTableCharCount="16384" />
<security mode="None">
<transport clientCredentialType="None"
proxyCredentialType="None"
realm="" />
<message clientCredentialType="UserName"
algorithmSuite="Default" />
</security>
</binding>
</basicHttpBinding>
</bindings>
</system.serviceModel>
</configuration>
The configuration file contains settings that are DFS-specific, as well as settings that
are WCF-specific, but which impact DFS behavior. The DFS-specific settings are
those within the <Emc.Documentum> <FS> tags. The remaining settings (within
<basicHttpBinding>) are specific to Microsoft WCF.
The requireSignedUcfJars setting controls whether the client runtime requires the
UCF JAR files that it downloads from the DFS server to be signed. Normally changing
this is not needed, but it must be set to false if the client runtime is version 6.5 or
higher and the service runtime is version 6 (which does not have signed UCF
JARs).
The ModuleInfo elements have properties that together describe the address of a
module (and of the services at that address), using the following attributes:
<protocol>://<host>:<port>/<contextRoot>/<module>/<serviceName>
For example:
http://dfsHostName:8080/services/core/ObjectService
Beware that the app.config provided with the SDK is oriented toward productivity-
layer consumers. In productivity-layer-oriented app.config, the DfsDefaultService
binding acts as the configuration for all DFS services, except for DFS runtime
services (the AgentService and ContextRegistryService), which have separate,
named bindings declared. The following sample shows the DfsDefaultService
binding as delivered with the SDK:
<binding name="DfsDefaultService"
closeTimeout="00:01:00"
openTimeout="00:01:00"
receiveTimeout="00:10:00"
sendTimeout="00:01:00"
allowCookies="false"
bypassProxyOnLocal="false"
hostNameComparisonMode="StrongWildcard"
maxBufferSize="1000000"
maxBufferPoolSize="10000000"
maxReceivedMessageSize="1000000"
messageEncoding="Text"
textEncoding="utf-8"
transferMode="Buffered"
useDefaultWebProxy="true">
<readerQuotas maxDepth="32"
maxStringContentLength="8192"
maxArrayLength="16384"
maxBytesPerRead="4096"
maxNameTableCharCount="16384" />
<security mode="None">
<transport clientCredentialType="None"
proxyCredentialType="None"
realm="" />
<message clientCredentialType="UserName"
algorithmSuite="Default" />
</security>
</binding>
<binding name="ObjectServicePortBinding"
closeTimeout="00:01:00"
openTimeout="00:01:00"
receiveTimeout="00:10:00"
sendTimeout="00:01:00"
allowCookies="false"
bypassProxyOnLocal="false"
hostNameComparisonMode="StrongWildcard"
maxBufferSize="65536"
maxBufferPoolSize="524288"
maxReceivedMessageSize="65536"
messageEncoding="Text"
textEncoding="utf-8"
transferMode="Buffered"
useDefaultWebProxy="true">
<readerQuotas maxDepth="32"
maxStringContentLength="8192"
maxArrayLength="16384"
maxBytesPerRead="4096"
maxNameTableCharCount="16384" />
<security mode="None">
<transport clientCredentialType="None"
proxyCredentialType="None"
realm="" />
<message clientCredentialType="UserName"
algorithmSuite="Default" />
</security>
</binding>
If you want to prevent users from declaring a value that is too small for such
attributes, programmatically check and override the declared values as follows:
binding.ReaderQuotas.MaxStringContentLength = 1000000;
objectServicePortClient.Endpoint.Binding = binding;
• In the productivity layer:
If you are using the productivity layer and are concerned about preventing users
from declaring too small a value for such attributes, programmatically check and
override the declared values in the same way.
Please also note that the service context is not thread safe and should not be accessed
by separate threads in a multi-threaded application. If you require multiple threads,
your application must provide explicit synchronization.
Properties and profiles can often be passed to an operation during service operation
invocation through an OperationOptions argument, as an alternative to storing
properties and profiles in the service context, or as a way of overriding settings
stored in the service context. OperationOptions settings are passed in the SOAP
body, rather than the SOAP header.
ContentTransferProfile contentTransferProfile = new ContentTransferProfile();
contentTransferProfile.TransferMode = ContentTransferMode.MTOM;
serviceContext.SetProfile(contentTransferProfile);
}
Context registration is an optional technique for optimizing how much data is sent
over the wire by remote DFS consumers. “Service context registration” on page 56
provides more information.
DFS services generally take an OperationOptions object as the final argument when
calling a service operation. “OperationOptions” on page 62 provides more
information.
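For example, a consumer can override a profile stored in the service context for a single call by passing it in OperationOptions. The following Java sketch assumes an existing Object service proxy named objectService and a populated ObjectIdentitySet named objectIdentitySet; the variable names are illustrative.

ContentProfile contentProfile = new ContentProfile();
contentProfile.setFormatFilter(FormatFilter.ANY);

OperationOptions operationOptions = new OperationOptions();
operationOptions.setContentProfile(contentProfile);

// settings passed here apply to this call only, and take precedence over
// corresponding settings stored in the service context
DataPackage resultPackage = objectService.get(objectIdentitySet, operationOptions);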
• Root cause exceptions for the top-level exception are chained correctly.
• Exceptions contain more specific information about the error condition. For
example, the kind of exception, error message, message ID, message arguments,
stack trace, and so forth. Previously, you had to parse the returned XML to
retrieve this important information.
• The SoapFaultHelper and DfsAnalogException classes implement well-defined
and unambiguous functionality.
The DFS SDK .NET Help provides specific information about the SoapFaultHelper
and DfsAnalogException.
6.6.2 Examples
Example 6-2: Printing out a SOAP fault
The following code sample illustrates throwing a SOAP fault exception and
then printing out the exception information. You use the ToString method
to return a well-formatted string that represents the information contained in
the DfsAnalogException instance.
try
{
((RepositoryIdentity)m_service.GetServiceContext().GetIdentity(0)).Password =
    "invalid_password";
object_service.Create(new DataPackage(testObj), null);
}
catch (FaultException ex)
{
DfsAnalogException ex2 = SoapFaultHelper.Translate(ex);
Assert.True(SoapFaultHelper.ContainsExceptionType(ex,
"dfs.authentication.exception"));
Assert.True(SoapFaultHelper.ContainsExceptionClass(ex,
"com.emc.documentum.fs.rt.AuthenticationException"));
Assert.True(SoapFaultHelper.ContainsMessageId(ex,
"E_SERVICE_AUTHORIZATION_FAILED"));
Assert.AreEqual(SoapFaultHelper.GetExceptionGroup(ex),
ExceptionGroup.AUTHENTICATION);
}
The DFS data model comprises the object model for data passed to and returned by
Enterprise Content Services. This chapter covers fundamental aspects of the data
model and important concepts related to it. This chapter is a supplement to the API
documentation, which provides more comprehensive coverage of DFS classes.
7.1 DataPackage
The DataPackage class defines the fundamental unit of information that contains
data passed to and returned by services operating in the DFS framework. A
DataPackage is a collection of DataObject instances, which is typically passed to, and
returned by, Object service operations such as create, get, and update. Object service
operations process all the DataObject instances in the DataPackage sequentially.
Note that this sample populates a DataPackage twice, first using the
addDataObject convenience method, then again by building a list then
setting the DataPackage contents to the list. The result is that the
DataPackage contents are overwritten; but the purpose of this sample is to
simply show two different ways of populating the DataPackage, not to do
anything useful.
//build list and then set the DataPackage contents to the list
ArrayList<DataObject> dataObjectList = new ArrayList<DataObject>();
dataObjectList.add(dataObject);
dataObjectList.add(dataObject1);
dataPackage.setDataObjects(dataObjectList);

for (DataObject dataObject2 : dataPackage.getDataObjects())
{
    System.out.println("Data Object: " + dataObject2);
}
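For reference, a minimal sketch of passing a populated DataPackage to the Object service create operation is shown below; it assumes an Object service proxy named objectService and uses the addDataObject convenience method.

DataPackage dataPackage = new DataPackage();
dataPackage.addDataObject(dataObject);
dataPackage.addDataObject(dataObject1);

// the create operation processes each DataObject in the package sequentially
DataPackage resultPackage = objectService.create(dataPackage, null);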
7.2 DataObject
A DataObject is a representation of an object in an ECM repository. In the context of
Documentum technology, the DataObject functions as a DFS representation of a
persistent repository object, such as a dm_sysobject or dm_user. Enterprise Content
Services (such as the Object service) consistently process DataObject instances as
representations of persistent repository objects.
A DataObject instance is potentially large and complex, and much of the work in
DFS service consumers will be dedicated to constructing the DataObject instances. A
DataObject can potentially contain comprehensive information about the repository
object that it represents, including its identity, properties, content, and its
relationships to other repository objects. In addition, the DataObject instance may
contain settings that instruct the services about how the client wishes parts of the
DataObject to be processed. The complexity of the DataObject and related parts of
the data model, such as Profile classes, are design features that enable and
encourage simplicity of the service interface and the packaging of complex
consumer requests into a minimal number of service interactions.
For the same reason DataObject instances are consistently passed to and returned by
services in simple collections defined by the DataPackage class, permitting
processing of multiple DataObject instances in a single service interaction.
• ObjectIdentity: An ObjectIdentity uniquely identifies the repository object
referenced by the DataObject. A DataObject can have 0 or 1 identities.
“ObjectIdentity” on page 91 provides detailed information.
• PropertySet: A PropertySet is a collection of named properties, which correspond
to the properties of a repository object represented by the DataObject. A DataObject
can have 0 or 1 PropertySet instances. “Property” on page 95 provides detailed
information.
• Content: Content objects contain data about file content associated with the data
object. A DataObject can contain 0 or more Content instances. A DataObject without
content is referred to as a “contentless DataObject.” “Content model and profiles”
on page 104 provides detailed information.
• Permission: A Permission object specifies a specific basic or extended permission,
or a custom permission. A DataObject can contain 0 or more Permission objects.
“Permissions” on page 112 provides detailed information.
• Relationship: A Relationship object defines a relationship between the repository
object represented by the DataObject and another repository object. A DataObject
can contain 0 or more Relationship instances. “Relationship” on page 114 provides
detailed information.
• Aspect: The Aspect class models an aspect that can be attached to, or detached
from, a persistent repository object. “Aspect” on page 133 provides detailed
information.
dataObject.getContents().add(new FileContent("c:/temp/MyImage.gif", "gif"));

dataObject.Contents.Add(new FileContent("c:/temp/MyImage.gif", "gif"));
7.3 ObjectIdentity
The function of the ObjectIdentity class is to uniquely identify a repository object.
An ObjectIdentity instance contains a repository name and an identifier that can take
various forms, described in the following table listing the ValueType enum
constants.
• OBJECT_ID: Identifier value is of type ObjectId, which is a container for the value
of a repository r_object_id attribute, a value generated by Documentum Server to
uniquely identify a specific version of a repository object.
• OBJECT_PATH: Identifier value is of type ObjectPath, which contains a String
expression specifying the path to the object, excluding the repository name. For
example /MyCabinet/MyFolder/MyDocument.
• QUALIFICATION: Identifier value is of type Qualification, which can take the
form of a DQL expression fragment. The Qualification is intended to uniquely
identify a Documentum Server object.
• OBJECT_KEY: Identifier value is of type ObjectKey, which contains a PropertySet,
the properties of which, joined by logical AND, uniquely identify the repository
object.
When constructing a DataObject to pass to the create operation, or in any case when
the DataObject represents a repository object that does not yet exist, the
ObjectIdentity need only be populated with a repository name. If the ObjectIdentity
does contain a unique identifier, it must represent an existing repository object.
Note that the ObjectIdentity class is generic in the Java client library, but non-generic
in the .NET client library.
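The following Java sketch shows three of these identity forms, plus an identity for an object that does not yet exist; the repository name, path, and object ID values are illustrative.

String repositoryName = "MyRepository";

ObjectIdentity<ObjectId> idIdentity =
    new ObjectIdentity<ObjectId>(new ObjectId("090007d280075180"), repositoryName);

ObjectIdentity<ObjectPath> pathIdentity =
    new ObjectIdentity<ObjectPath>(
        new ObjectPath("/MyCabinet/MyFolder/MyDocument"), repositoryName);

ObjectIdentity<Qualification> qualificationIdentity =
    new ObjectIdentity<Qualification>(
        new Qualification("dm_document where object_name = 'dfs_sample_image'"),
        repositoryName);

// for a create operation, a repository name alone is sufficient
ObjectIdentity newObjectIdentity = new ObjectIdentity(repositoryName);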
7.3.1 ObjectId
An ObjectId is a container for the value of a repository r_object_id attribute, which is
a value generated by Documentum Server to uniquely identify a specific version of a
repository object. An ObjectId can therefore represent either a CURRENT or a non-
CURRENT version of a repository object. DFS services exhibit service- and
operation-specific behaviors for handling non-CURRENT versions, which are
documented under individual services and operations.
7.3.2 ObjectPath
An ObjectPath contains a String expression specifying the path to a repository object,
excluding the repository name. For example /MyCabinet/MyFolder/MyDocument.
An ObjectPath can only represent the CURRENT version of a repository object.
Using an ObjectPath does not guarantee the uniqueness of the repository object,
because Documentum Server does permit objects with identical names to reside
within the same folder. If the specified path is unique at request time, the path is
recognized as a valid object identity; otherwise, the DFS runtime will throw an
exception.
7.3.3 Qualification
A Qualification is an object that specifies criteria for selecting a set of repository
objects. Qualifications used in ObjectIdentity instances are intended to specify a
single repository object. The criteria set in the qualification is expressed as a
fragment of a DQL SELECT statement, consisting of the expression string following
“SELECT FROM”, as shown in the following example.
Qualification qualification =
    new Qualification("dm_document where object_name = 'dfs_sample_image'");
DFS services use normal DQL statement processing, which selects the CURRENT
version of an object if the ALL keyword is not used in the DQL WHERE clause. The
preceding example (which assumes for simplicity that the object_name is sufficient
to ensure uniqueness) will select only the CURRENT version of the object named
dfs_sample_image. To select a specific non-CURRENT version, the Qualification
must use the ALL keyword, as well as specific criteria for identifying the version,
such as a symbolic version label:
Qualification qualification =
    new Qualification("dm_document (ALL) where r_object_id = '090007d280075180'");
objectIdentities[2] = new ObjectIdentity<Qualification>(qualification, repName);
7.3.5 ObjectIdentitySet
An ObjectIdentitySet is a collection of ObjectIdentity instances, which can be passed
to an Object service operation so that it can process multiple repository objects in a
single service interaction. An ObjectIdentitySet is analogous to a DataPackage, but is
passed to service operations such as move, copy, and delete that operate only
against existing repository data, and which therefore do not require any data from
the consumer about the repository objects other than their identity.
Qualification qualification =
    new Qualification("dm_document where object_name = 'bl_upwind.gif'");
objIdSet.addIdentity(new ObjectIdentity(qualification, repName));

Qualification qualification
    = new Qualification("dm_document where object_name = 'bl_upwind.gif'");
objIdSet.AddIdentity(new ObjectIdentity(qualification, repName));
IEnumerator<ObjectIdentity> identityEnumerator = objIdSet.Identities.GetEnumerator();
while (identityEnumerator.MoveNext())
{
    Console.WriteLine("Object Identity: " + identityEnumerator.Current);
}
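Putting this together, a minimal sketch of deleting several objects in a single interaction might look like the following; it assumes an Object service proxy named objectService, and the repository name and object IDs are illustrative.

ObjectIdentitySet objIdSet = new ObjectIdentitySet();
objIdSet.addIdentity(new ObjectIdentity(new ObjectId("090007d280075180"), "MyRepository"));
objIdSet.addIdentity(new ObjectIdentity(new ObjectId("090007d280075181"), "MyRepository"));

// a single delete call operates on every identity in the set
objectService.delete(objIdSet, null);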
7.4 Property
A DataObject optionally contains a PropertySet, which is a container for a set of
Property objects. Each Property in normal usage corresponds to a property (also
called attribute) of a repository object represented by the DataObject. A Property
object can represent a single property, or an array of properties of the same data
type. Property arrays are represented by subclasses of ArrayProperty, and
correspond to repeating attributes of repository objects.
Property[] properties =
{
    new StringProperty("subject", "dangers"),
    new StringProperty("title", "Dangers"),
    new NumberProperty("short", (short) 1),
    new DateProperty("my_date", new Date()),
    new BooleanProperty("a_full_text", true),
    new ObjectIdProperty("my_object_id", new ObjectId("090007d280075180")),
    new StringArrayProperty("keywords",
        new String[]{"lions", "tigers", "bears"}),
    new NumberArrayProperty("my_number_array", (short) 1, 10, 100L, 10.10),
    new BooleanArrayProperty("my_boolean_array", true, false, true, false),
    new DateArrayProperty("my_date_array", new Date(), new Date()),
    new ObjectIdArrayProperty("my_obj_id_array",
        new ObjectId("0c0007d280000107"), new ObjectId("090007d280075180")),
};
To indicate that a Property is transient, set the isTransient property of the Property
object to true.
Console.WriteLine(vInfo.DataObject.Properties.Get("my_unique_id"));
}
}
while (items.hasNext())
{
    Property property = (Property) items.next();
    System.out.println(property.getClass().getName() +
        " = " + property.getValueAsString());
}
propertySet.Set("TestDoubleName", 10.10);
<xs:complexType name="NumberProperty">
<xs:complexContent>
<xs:extension base="xscp:Property">
<xs:sequence>
<xs:choice minOccurs="0">
<xs:element name="Short" type="xs:short"/>
<xs:element name="Integer" type="xs:int"/>
<xs:element name="Long" type="xs:long"/>
<xs:element name="Double" type="xs:double"/>
</xs:choice>
</xs:sequence>
</xs:extension>
</xs:complexContent>
</xs:complexType>
7.4.4 ArrayProperty
The subclasses of ArrayProperty each contain an array of Property objects of a
specific subclass corresponding to a data type. For example, the
NumberArrayProperty class contains an array of NumberProperty. The array
corresponds to a repeating attribute (also known as repeating property) of a
repository object.
7.4.4.1 ValueAction
Each ArrayProperty optionally contains an array of ValueAction objects that contain
an ActionType-index pair. These pairs can be interpreted by the service as
instructions for using the data stored in the ArrayProperty to modify the repeating
attribute of the persistent repository object. The ValueAction array is synchronized
to the ArrayProperty array, such that any position p of the ValueAction array
corresponds to position p of the ArrayProperty. The index in each ActionType-index
pair is zero-based and indicates a position in the repeating attribute of the persistent
repository object. ValueActionType specifies how to modify the repeating attribute
list using the data stored in the ArrayProperty.
The following table describes how the ValueActionType values are interpreted by an
update operation.
Note in the preceding description of processing that the INSERT and DELETE
actions will offset index positions to the right of the alteration, as the ValueAction
array is processed from beginning to end. These effects must be accounted for in the
coding of the ValueAction object, such as by ensuring that the repeating properties
list is processed from right to left.
When using a ValueAction to delete a repeating attribute value, the value stored at
position ArrayProperty[p], corresponding to ValueAction[p] is not relevant to the
operation. However, the two arrays must still line up. In this case, you should store
an empty (dummy) value in ArrayProperty[p] (such as the empty string “”), rather
than null.
7.4.5 PropertySet
A PropertySet is a container for named Property objects, which typically (but do not
necessarily) correspond to persistent repository object properties.
You can restrict the size of a PropertySet returned by a service using the filtering
mechanism of the PropertyProfile class (see “PropertyProfile” on page 103).
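A minimal sketch of building and reading a PropertySet follows; the property names and values are illustrative.

PropertySet propertySet = new PropertySet();
propertySet.set("object_name", "MyDocument");
propertySet.set("title", "A sample title");

// look up a single named property and read its value as a string
Property titleProperty = propertySet.get("title");
System.out.println(titleProperty.getValueAsString());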
7.4.6 PropertyProfile
A PropertyProfile defines property filters that limit the properties returned with an
object by a service. This allows you to optimize the service by returning only those
properties that your service consumer requires. PropertyProfile, like other profiles,
is generally set in the OperationOptions passed to a service operation (or it can be
set in the service context).
• NONE: No properties are returned in the PropertySet. Other settings are ignored.
• SPECIFIED_BY_INCLUDE: No properties are returned unless specified in the
includeProperties list.
• SPECIFIED_BY_EXCLUDE: All properties are returned unless specified in the
excludeProperties list.
• ALL_NON_SYSTEM: Returns all properties except system properties.
• ALL: All properties are returned.
When you initially populate the properties of the DataObject (for example, using the
result of an Object service get or create operation), avoid setting the
PropertyFilterMode to ALL, if you plan to pass the result into a checkin or update
operation. Instead, you can set the property filter to ALL_NON_SYSTEM. (The
default is operation-specific, but this is generally the default setting for Object
service get and similar operations.)
If you do need to modify a system property, you should strip other system
properties from the DataObject prior to the update.
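The following Java sketch shows this approach; it assumes an Object service proxy named objectService and an existing ObjectIdentitySet named objectIdentitySet.

PropertyProfile propertyProfile = new PropertyProfile();
propertyProfile.setFilterMode(PropertyFilterMode.ALL_NON_SYSTEM);

OperationOptions operationOptions = new OperationOptions();
operationOptions.setPropertyProfile(propertyProfile);

// the returned DataObject instances can later be passed to update or checkin
// without carrying read-only system properties
DataPackage dataPackage = objectService.get(objectIdentitySet, operationOptions);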
The BinaryContent type includes a Base64–encoded byte array and is typically used
with the Base64 content transfer mode:
<xs:complexType name="BinaryContent">
<xs:complexContent>
<xs:extension base="tns:Content">
<xs:sequence>
<xs:element name="Value" type="xs:base64Binary"/>
</xs:sequence>
</xs:extension>
</xs:complexContent>
</xs:complexType>
<xs:complexType name="DataHandlerContent">
<xs:complexContent>
<xs:extension base="tns:Content">
<xs:sequence>
<xs:element name="Value"
ns1:expectedContentTypes="*/*"
type="xs:base64Binary"
xmlns:ns1="http://www.w3.org/2005/05/xmlmime"/>
</xs:sequence>
</xs:extension>
</xs:complexContent>
</xs:complexType>
<xs:complexType name="UrlContent">
<xs:complexContent>
<xs:extension base="tns:Content">
<xs:sequence/>
<xs:attribute name="url" type="xs:string" use="required"/>
</xs:extension>
</xs:complexContent>
</xs:complexType>
The DFS client productivity layer includes an additional class, FileContent, which is
used as a convenience class for managing content files. FileContent is also the
primary type returned to the productivity layer by services invoked in local mode.
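A minimal sketch of attaching file content to a new DataObject using FileContent follows; the file path, format name, and repository name are illustrative, and objectService is assumed to be an Object service proxy.

DataObject dataObject = new DataObject(new ObjectIdentity("MyRepository"), "dm_document");
PropertySet properties = new PropertySet();
properties.set("object_name", "MyImage");
dataObject.setProperties(properties);

// the second argument is the repository format name for the content
dataObject.getContents().add(new FileContent("c:/temp/MyImage.gif", "gif"));

objectService.create(new DataPackage(dataObject), null);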
7.5.2 ContentProfile
The ContentProfile class enables a client to set filters that control the content
returned by a service. This has important ramifications for service performance,
because it permits fine control over expensive content transfer operations.
Note that you can use the following DQL to get a list of all format names stored in a
repository:
SELECT name FROM dm_format
7.5.2.1 postTransferAction
You can set the postTransferAction property of a ContentProfile instance to open a
document downloaded by UCF for viewing or editing.
• To open the document for edit, ensure the document is checked out before the
UCF content transfer.
• If the document has not been checked out from the repository, you can open the
document for viewing (read-only).
7.5.2.2 contentReturnType
The contentReturnType property of a ContentProfile is a client-side convenience
setting used in the productivity layer. It sets the type of Content returned in the
DataObject instances returned to the productivity layer by converting the type
returned from a remote service (or returned locally if you are using the productivity
layer in local mode). It does not influence the type returned in the SOAP envelope
by a remote service.
7.5.3 ContentTransferProfile
Settings in the ContentTransferProfile class determine the mode of content transfer,
and also specify behaviors related to content transfer in a distributed environment.
Distributed content transfer can take place when DFS delegates the content transfer
to UCF, or when content is downloaded from an ACS server or BOCS cache using a
UrlContent object.
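A minimal Java sketch of selecting a transfer mode through a ContentTransferProfile (UCF is chosen here only as an example) and storing the profile in the service context:

ContentTransferProfile contentTransferProfile = new ContentTransferProfile();
contentTransferProfile.setTransferMode(ContentTransferMode.UCF);

// profiles stored in the context apply to subsequent operations unless
// overridden in OperationOptions
serviceContext.setProfile(contentTransferProfile);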
Notes
• In the View mode, the file is read-only. It is cached in a predefined location, and
will be cleaned up automatically by a housekeeping task. In the Export mode, you
can specify the location on the client machine where content will be copied.
• If the contentRegistryOption field is NULL, the DFS server loads content with the
default behavior of DFS 6.0.
7.6 Permissions
A DataObject contains a list of Permission objects, which together represent the
permissions of the user who has logged into the repository on the repository object
represented by the DataObject. The intent of the Permission list is to provide the
client with read access to the current user's permissions on a repository object. The
client cannot set or update permissions on a repository object by modifying the
Permission list and updating the DataObject. To actually change the permissions, the
client would need to modify or replace the repository object's permission set (also
called an Access Control List, or ACL).
The following table shows the PermissionType enum constants and Permission
constants:
7.6.1 PermissionProfile
The PermissionProfile class enables the client to set filters that control the contents of
the Permission lists in DataObject instances returned by services. By default, services
return an empty Permission list: the client must explicitly request in a
PermissionProfile that permissions be returned.
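For example, the following sketch requests that permissions of any type be returned with each DataObject; it assumes an existing OperationOptions instance named operationOptions.

PermissionProfile permissionProfile = new PermissionProfile();
permissionProfile.setPermissionTypeFilter(PermissionTypeFilter.ANY);
operationOptions.setPermissionProfile(permissionProfile);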
7.6.2 Relationship
Relationships allow the client to construct a single DataObject that specifies all of its
relationships to other objects, existing and new, and to get, update, or create the
entire set of objects and their relationships in a single service interaction.
This document will use the term container DataObject when speaking of the
DataObject that contains a Relationship. It will use the term target object to refer to
the object specified within the Relationship. Each Relationship instance defines a
relationship between a container DataObject and a target object. In the case of the
ReferenceRelationship subclass, the target object is represented by an ObjectIdentity;
in the case of an ObjectRelationship subclass, the target object is represented by a DataObject.
7.6.2.2.2 RelationshipIntentModifier
The following table describes the possible values for the RelationshipIntentModifier.
Relationships are directional, having a notion of source and target. The targetRole of
a Relationship is a string representing the role of the target in a relationship. In the
case of folders and VDMs, the role of a participant in the relationship can be parent
or child. The following table describes the possible values for the Relationship
targetRole.
As an example, consider the case of a document linked into two folders. The
DataObject representing the document would need two ReferenceRelationship
instances representing dm_folder objects in the repository. The relationships to the
references are directional: from parent to child. The folders must exist in the
repository for the references to be valid. The following figure represents an object of
this type.
To create this object with references you could write code that does the following:
In most cases the client would know the ObjectId of each folder, but in some cases
the ObjectIdentity can be provided using a Qualification, which would eliminate a
remote query to look up the folder ID.
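A minimal sketch of such a DataObject follows; the folder paths and repository name are illustrative, and the name and role constants are those defined on the Relationship class.

DataObject document = new DataObject(new ObjectIdentity("MyRepository"), "dm_document");

ObjectIdentity folder1Identity =
    new ObjectIdentity(new ObjectPath("/MyCabinet/Folder1"), "MyRepository");
ReferenceRelationship folder1Relationship = new ReferenceRelationship();
folder1Relationship.setName(Relationship.RELATIONSHIP_FOLDER);
folder1Relationship.setTarget(folder1Identity);
folder1Relationship.setTargetRole(Relationship.ROLE_PARENT);
document.getRelationships().add(folder1Relationship);

ObjectIdentity folder2Identity =
    new ObjectIdentity(new ObjectPath("/MyCabinet/Folder2"), "MyRepository");
ReferenceRelationship folder2Relationship = new ReferenceRelationship();
folder2Relationship.setName(Relationship.RELATIONSHIP_FOLDER);
folder2Relationship.setTarget(folder2Identity);
folder2Relationship.setTargetRole(Relationship.ROLE_PARENT);
document.getRelationships().add(folder2Relationship);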
Let's look at a slightly different example of an object with references. In this case we
want to model a new folder within an existing folder and link an existing document
into the new folder.
To create this DataObject with references you could write code that does the
following:
A typical case for using a compound DataObject would be to replicate a file system's
folder hierarchy in the repository. The following figure represents an object of this
type.
To create this compound DataObject you could write code that does the following:
In this logic there is a new DataObject created for every node and attached to a
containing DataObject using a child ObjectRelationship.
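A minimal sketch of one such parent-child link between two new folders is shown below; the folder names and repository name are illustrative, and a full implementation would repeat the pattern for every node in the hierarchy.

DataObject parentFolder = new DataObject(new ObjectIdentity("MyRepository"), "dm_folder");
PropertySet parentProperties = new PropertySet();
parentProperties.set("object_name", "NewParentFolder");
parentFolder.setProperties(parentProperties);

DataObject childFolder = new DataObject(new ObjectIdentity("MyRepository"), "dm_folder");
PropertySet childProperties = new PropertySet();
childProperties.set("object_name", "NewChildFolder");
childFolder.setProperties(childProperties);

// the child is itself a new object, so it is attached with an ObjectRelationship
ObjectRelationship childRelationship = new ObjectRelationship();
childRelationship.setName(Relationship.RELATIONSHIP_FOLDER);
childRelationship.setTarget(childFolder);
childRelationship.setTargetRole(Relationship.ROLE_CHILD);
parentFolder.getRelationships().add(childRelationship);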
In a normal case of object creation, the new object will be linked into one or more
folders. This means that a compound object will also normally include at least one
ReferenceRelationship. The following figure shows a compound data object
representing a folder structure with a reference to an existing folder into which to
link the new structure.
To create this compound DataObject you could write code that does the following:
The preceding diagram shows that a new PARENT relation to folder 3 is added to
folder 1, and an existing relation with folder 2 is removed. This has the effect of
linking folder1 into folder3 and removing it from folder2. The folder2 object is not
deleted.
7.6.2.5 RelationshipProfile
A RelationshipProfile is a client optimization mechanism that provides fine control
over the size and complexity of DataObject instances returned by services. By
default, the Object service get operation returns DataObject instances containing no
Relationship instances. To alter this behavior, you must provide a
RelationshipProfile that explicitly sets the types of Relationship instances to return.
7.6.2.5.1 ResultDataMode
The filters are ANDed together to specify the conditions for inclusion of a
Relationship instance. For example, if targetRoleFilter is set to
RelationshipProfile.ROLE_CHILD and depthFilter is set to 1, only proximate child
relationships will be returned by the service.
However, relationships more than one step removed from the primary DataObject
(where depth > 1) will be returned in a relationship graph only if they have the same
relationship name and targetRole as the first relationship on the branch. Let's look at
a couple of examples of how this works. In all of the examples we will assume the
following settings in the RelationshipProfile:
resultDataMode = ResultDataMode.OBJECT
targetRoleFilter = TargetRoleFilter.ANY
nameFilter = RelationshipNameFilter.ANY
depthFilter = DepthFilter.UNLIMITED
Let's start with a case where all relationships have the same relationship name
(folder).
The primary object in this case is folder_1.2. As you can see, both of its proximate
relationships are retrieved. On the child branch the deep relationship (to
folder_1.2.1.1) is retrieved, because both the name and targetRole of the deep
relationship is the same as the first relationship on the branch. However, on the
parent branch, the relationship to folder_1.1 is not retrieved, because the targetRole
of the relationship to folder_1.1 (child) is not the same as the targetRole of the first
relationship on the branch (parent).
Let's look at another example where the relationship name changes, rather than the
targetRole. In this example, we want to retrieve the relationships of a folder that has
two child folders. Each child folder contains a document, and one of the documents
is a virtual document that contains the other.
As before, both proximate relationships are retrieved. The deep folder relationships
to the documents are also retrieved. But the virtual_document relationship is not
retrieved, because its relationship name (virtual_document) is not the same as the
name of the first relationship on the branch (folder).
You can reference a custom relationship in the name property of a DFS Relationship
object using a two-part name consisting of the relation type name, a slash, and the
relation name (for example, "acme_geoloc_relation_type/acme_geoloc_relation", as
shown in the sample later in this section).
Let's look at an example of how you might use such an extended relationship.
Suppose you wanted to create a custom object type called acme_geoloc to contain
geographic place names and locations that can be used to display positions in maps.
This geoloc object contains properties such as place name, latitude, and longitude.
You want to be able to associate various documents, such as raster maps, tour
guides, and hotel brochures with an acme_geoloc object. Finally, you also want to be
able to capture metadata about the relationship itself.
To enable this, you could start by making the following modifications in the
repository using Composer:
Once these objects are created in the repository, your application can create
relationships at runtime between document (dm_document) objects and
acme_geoloc objects. By including the relationship in DataObject instances, your
client application can choose to include geolocation information about the document
for display in maps, and also examine custom metadata about the relationship itself.
The following Java sample code creates an acme_geoloc object, a document, and a
relationship of type acme_geoloc_relation_type between the document and the
acme_geoloc.
{
    // define a geoloc object
    DataObject geoLocObject = new DataObject(
        new ObjectIdentity(defaultRepositoryName), "acme_geoloc");
    PropertySet properties = new PropertySet();
    properties.set("name", "TourEiffel");
    properties.set("latitude", "48512957N");
    properties.set("longitude", "02174016E");
    geoLocObject.setProperties(properties);
    // define a document
    DataObject docDataObj = new DataObject(
        new ObjectIdentity(defaultRepositoryName), "dm_document");
    PropertySet docProperties = new PropertySet();
    docProperties.set("object_name", "T-Eiffel");
    docProperties.set("title", "Guide to the Eiffel Tower");
    docDataObj.setProperties(docProperties);

    // define the relationship between the document and the geoloc object
    ObjectRelationship objRelationship = new ObjectRelationship();
    objRelationship.setTarget(geoLocObject);
    // properties captured on the relationship itself (attribute names are
    // defined on the custom dm_relation subtype)
    PropertySet relPropertySet = new PropertySet();

    objRelationship.setName("acme_geoloc_relation_type/acme_geoloc_relation");
    objRelationship.setTargetRole(Relationship.ROLE_CHILD);
    objRelationship.setRelationshipProperties(relPropertySet);
    docDataObj.getRelationships().add(new ObjectRelationship(objRelationship));
PropertyProfile propertyProfile = new PropertyProfile();
propertyProfile.setFilterMode(PropertyFilterMode.ALL_NON_SYSTEM);

RelationshipProfile relationProfile = new RelationshipProfile();
relationProfile.setNameFilter(RelationshipNameFilter.SPECIFIED);
relationProfile.setRelationName("acme_geoloc_relation_type");
relationProfile.setDepthFilter(DepthFilter.SPECIFIED);
relationProfile.setDepth(1);
relationProfile.setPropertyProfile(propertyProfile);
OperationOptions operationOptions = new OperationOptions();
operationOptions.setRelationshipProfile(relationProfile);
• acme_geoloc_relation
• acme_books_geoloc_relation
If there are objects of both of these types in the repository, and they both
reference the same dm_relation_type in their relation_name property, it will
not be possible to indicate in the relationship name filter which of the
relationship names to filter on. To work around this limitation, use a custom
dm_relation_type and make sure that only instances of your custom
dm_relation subtype reference your custom dm_relation_type.
7.7 Aspect
The Aspect class models an aspect, and provides a means of attaching an aspect to a
persistent object, or detaching an aspect from a persistent object during a service
operation.
DFS 7.3 onwards does not support aspect type attributes. Therefore, if an aspect is
associated with a type that contains custom attributes, an error may occur when the
aspect is attached to, or detached from, an object.
Aspects are a BOF type (dmc_aspect_type). Like other BOF types, they have these
characteristics:
This chapter is intended to introduce you to writing custom services in the DFS
framework and how to use the DFS SDK build tools to generate a deployable EAR
file. Sample custom services are also provided to get you started on developing your
own custom services with DFS.
If you have existing SBOs that are used in DFC clients or projected as Documentum
5.3 web services, the optimal route to DFS may be to convert the existing services
into DFS services. However, bear in mind that not all SBOs are suitable for
projection as web services, and those that are technically suitable may still be lacking
an optimal SOA design. As an alternative strategy you could preserve current SBOs
and make their functionality available as a DFS service by creating DFS services as
facades to the existing SBOs.
The SBO approach may also be of value if you wish to design services that are
deployed across multiple repositories and multiple DFC client applications
(including WDK-based applications). An SBO implementation is stored in a single
location, the global registry, from which it is dynamically downloaded to client
applications. If the implementation changes, the changes can be deployed in a single
location. The BOF runtime framework automatically propagates the changed
implementation to all clients. (Note that the SBO interface must be deployed to each
DFC client.)
For example, a service should always return a DFS DataPackage rather than a
specialized object representing a DFC typed object. Services should always be
designed so that no DFC client is required on the service consumer.
• The service should have an appropriate level of granularity. The most general
rule is that the service granularity should be determined by the needs of the
service consumer. However, in practice services are generally more coarse-
grained than methods in tightly bound client/server applications. They should
avoid “chattiness”, be sensitive to round-trip overhead, and anticipate relatively
low bandwidth and high latency.
• As mentioned previously, if the service is intended to be used as an extension of
DFS services, it should use the DFS object model where possible, and conform to
the general design features of the DFS services.
• The service should specify stateless operations that perform a single
unambiguous function that the service consumer requires. The operation should
stand alone and not be overly dependent on consumer calls to auxiliary services.
• The service should specify parameters and return values that are easily bound to
XML, and which are faithfully transformed in interactions between the client and
the service.
Not all intrinsic Java types map into identical XML intrinsic types, and not all
intrinsic type arrays are transformed identically to and from XML. Service
developers should therefore be aware of the following mappings when designing
service interfaces.
immediately after each service request completes so that the session manager is not
cached. DfcSessionManager.getSessionManager retrieves a session manager from
the cache based on the token stored in the serviceContext, and takes care of the
details of populating the session manager with identities stored in the service
context. The service context itself is created based on data passed in SOAP headers
from remote clients, or on data passed by a local client during service instantiation.
From the viewpoint of the custom DFS service, the essential thing is to get the
session manager using DfcSessionManager.getSessionManager, then invoke the
session manager to get a session on a repository. To get a session, the service needs
to pass a string identifying the repository to the IDfSessionManager.getSession
method, so generally a service will need to receive the repository name from the
caller in one of its parameters. Once the service method has the session, it can invoke
DFC methods on the session within a try clause and catch any DfException thrown
by DFC. In the catch clause it should wrap the exception in a custom DFS exception
(“Creating a custom exception” on page 148), or in a generic ServiceException, so
that the DFS framework can handle the exception appropriately and serialize it for
remote consumers. The session must be released in a finally clause to prevent
session leakage. This general pattern is shown in the listing below.
import com.emc.documentum.fs.rt.context.DfcSessionManager;
...
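A minimal sketch of this pattern follows. The method name, repository parameter, and message ID are illustrative, and imports for the DFC types (IDfSessionManager, IDfSession, DfException) and for ServiceException are assumed.

public void doSomething(String repositoryName) throws ServiceException
{
    IDfSessionManager sessionManager = DfcSessionManager.getSessionManager();
    IDfSession session = null;
    try
    {
        session = sessionManager.getSession(repositoryName);
        // invoke DFC methods on the session here
    }
    catch (DfException e)
    {
        // wrap the DFC exception so the DFS framework can serialize it for remote consumers
        throw new ServiceException("E_OPERATION_FAILED", e);
    }
    finally
    {
        // always release the session to prevent session leakage
        if (session != null)
        {
            sessionManager.release(session);
        }
    }
}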
If your DFS application does not include custom services, or if your custom services
do not use DFC, then you need not be too concerned about programmatic
management of sessions. However, it's desirable to understand what DFS is doing
with sessions because some related aspects of the runtime behavior are configurable
using DFS and DFC runtime properties. As stated above, DFS maintains a cache of
session managers. This cache is cleaned up at regular intervals (by default every 20
minutes), and the cached session managers expire at regular intervals (by default
every 60 minutes). The two intervals can be modified in dfs-runtime.properties by
changing dfs.crs.perform_cleanup_every_x_minutes and
dfs.crs.cache_expiration_after_x_minutes. Once the session is obtained, it is
managed by the DFC layer, so configuration settings that influence runtime behavior
in regard to sessions, such as whether the sessions are pooled and how quickly their
connections time out, are in dfc.properties (and named dfc.session.*). These settings
are documented in the dfcfull.properties file, and DFC session management in
general is discussed in the Documentum Foundation Classes Development Guide.
In some DFS custom applications, you may encounter a DFC session exhaustion
issue. This issue mainly occurs when there are a large number of concurrent sessions
or when certain DFC-related properties are not configured properly. When DFC
level 1 pooling is enabled (the default), sessions are owned by the session manager
that creates them for a period of time (by default 5 seconds) before they can be
reused by other session managers. To resolve the session exhaustion issue, you can
increase the DFC concurrent session count and decrease the level 1 pooling interval
by modifying the dfc.session.max_count and dfc.session.pool.expiration_interval
properties respectively.
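For reference, the relevant properties might be overridden as follows; the DFS values shown are the defaults mentioned above, and the DFC values are purely illustrative.

# dfs-runtime.properties (defaults shown)
dfs.crs.perform_cleanup_every_x_minutes=20
dfs.crs.cache_expiration_after_x_minutes=60

# dfc.properties (illustrative values: raise the session ceiling, shorten level 1 pooling)
dfc.session.max_count=2000
dfc.session.pool.expiration_interval=2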
Note that for each request from a service consumer, DFS will use only one
IDfSessionManager instance. All underlying DFC sessions are managed (and may be
cached, depending on whether session pooling is enabled) by this instance. If there
are multiple simultaneous DFS requests, there should theoretically be an equivalent
number of active DFC sessions. However, the number of concurrent sessions may be
limited by configuration settings in dfc.properties, or by external limits imposed by
the OS or network on the number of available TCP/IP connections.
3. Implement your service by using the principles that are described in “The well-
behaved service implementation” on page 136. “DFS exception handling”
on page 148 provides details on creating and handling custom exceptions.
4. Define where you want the service to be addressable, as described in “Defining
the service address” on page 151.
5. Build and package your service with the DFS SDK build tools, as described in
“Building and packaging a service into an EAR file” on page 152.
import com.emc.documentum.fs.rt.annotations.DfsPojoService;
@DfsPojoService()
public class AcmeCustomService implements IAcmeCustomService
{
// service implementation
}
import com.emc.documentum.fs.rt.annotations.DfsBofService;
@DfsBofService()
public class MySBO extends DfService implements IMySBO
{
//SBO service implementation
}
• serviceName: The name of the service. Required to be non-empty.
• targetNamespace: Overrides the default Java-package-to-XML-namespace
conversion algorithm. Optional.
• requiresAuthentication: When set to “false”, specifies that this is an open service,
requiring no user authentication. Default value is “true”.
• useDeprecatedExceptionModel: Set to true if you want to maintain backwards
compatibility with the DFS 6.0 SP1 or earlier exception model. Default value is
“false”.
• implementation: Name of the implementation class. Required to be non-empty if
the annotation applies to an interface declaration.
• targetNamespace: Overrides the default Java-package-to-XML-namespace
conversion algorithm. Optional.
• targetPackage: Overrides the default Java packaging algorithm. Optional.
• requiresAuthentication: When set to “false”, specifies that this is an open service,
requiring no user authentication. Optional; default value is “true”.
• useDeprecatedExceptionModel: Set to true if you want to maintain backwards
compatibility with the DFS 6.0 SP1 or earlier exception model. Default value is
“false”.
package com.acme.services.samples.common;

import javax.xml.bind.annotation.*;
import java.util.List;

public class AcmeServiceInfo
{
    @XmlElement(name = "Repositories")
    private List repositories;
    @XmlAttribute
    private boolean isSessionPoolingActive;
    @XmlAttribute
    private boolean hasActiveSessions;
    @XmlAttribute
    private String defaultSchema;
}
• @XmlType:
@XmlType(name = "AcmeServiceInfo",
namespace = "http://common.samples.services.acme.com/")
@XmlSeeAlso({ReferenceRelationship.class,
ObjectRelationship.class})
String value
List<String> values
• As a basic requirement of Javabeans and general Java convention, a field's
accessors (getters and setters) should incorporate the exact field name. This leads
to desired consistency between the field name, method names, and the XML
element name.
@XmlAttribute
private String defaultSchema;
• Annotate primitive and simple data types (int, boolean, long, String, Date) using
@XmlAttribute.
• Annotate complex data types and lists using @XmlElement, for example:
@XmlElement(name = "Repositories")
private List repositories;
@XmlElement(name = "MyComplexType")
private MyComplexType myComplexTypeInstance;
• Fields should work without initialization.
• The default of boolean members should be false.
The following conditions can also lead to problems either with the WSDL itself, or
with .NET WSDL import utilities.
For your custom service operation to execute as a transaction, the service consumer
must have set the IServiceContext.USER_TRANSACTION_HINT runtime property
equal to IServiceContext.TRANSACTION_REQUIRED.
To see how this works, look at the following method, and assume that it is running as a
custom service operation. We set the IServiceContext.USER_TRANSACTION_HINT
runtime property to IServiceContext.TRANSACTION_REQUIRED before the first
service call. This method invokes the create operation twice, each time creating an
object. If one of the calls fails, the transaction will be rolled back.
throws ServiceException
{
    IServiceContext context = ContextFactory.getInstance().getContext();
    context.setRuntimeProperty(IServiceContext.USER_TRANSACTION_HINT,
                               IServiceContext.TRANSACTION_REQUIRED);
    IObjectService service = ServiceFactory.getInstance()
        .getLocalService(IObjectService.class, context);
    DataPackage dp1 = service.create(new DataPackage(object1), null);
    DataPackage dp2 = service.create(new DataPackage(object2), null);
    ObjectIdentity objectIdentity1 = dp1.getDataObjects().get(0).getIdentity();
    ObjectIdentity objectIdentity2 = dp2.getDataObjects().get(0).getIdentity();
    System.out.println("object created: " + objectIdentity1.getValue().toString());
    System.out.println("object created: " + objectIdentity2.getValue().toString());
}
Thread.currentThread().getContextClassLoader().getResource("some.properties");
If a target namespace is not specified for a service, the default target namespace is
generated by reversing the package name nodes and prepending a ws (to avoid
name conflicts between original and JAX-WS generated classes). For example, if the
service package name is com.acme.services.samples, the DFS SDK build tools
generate the following target namespace for the service:
http://ws.samples.services.acme.com/
You can override this namespace generation by specifying a value for the
targetNamespace attribute for the service annotation that you are using
(@DfsPojoService or @DfsBofService). “Overriding default service namespace
generation” on page 147 provides detailed information on overriding the target
namespace for a service.
To change this behavior, specify a value for the targetNamespace attribute of the
@DfsPojoService or @DfsBofService annotation that is different from the default
target namespace (this approach is used in the AcmeCustomService sample).
package com.acme.services.samples.impl;
import com.emc.documentum.fs.rt.annotations.DfsPojoService;
@DfsPojoService(targetNamespace = "http://samples.services.acme.com")
public class AcmeCustomService
.
.
.
With this input, the DFS SDK build tools generate the service interface and other
DFS artifacts in the com.acme.services.samples.client package. It places the service
implementation and other files generated by JAX-WS in the
com.acme.services.samples package. The service namespace would be “http://
samples.services.acme.com” as specified in the service annotation attribute.
Note: A conflict occurs when you have two services that have the following
namespaces: http://a.b.c.d and http://b.c.d/a. In this case, when JAX-WS tries to
generate the client proxies for these two services, they will be generated in the
same package (d.c.b.a), so you will only be able to call the first service in the
classpath. Avoid assigning namespaces in this way to prevent this situation.
• All instance variables in the exception class must be JAXB serializable; they have
to be part of the java.lang package or properly JAXB annotated.
• All instance variables in the exception class must be properly set on the server
side exception instance, either through explicit setters or through a constructor,
so that they make it into the serialized XML.
• DFS requires the exception to have a constructor accepting the error message as a
String. Optionally, this constructor can have an argument of type Throwable for
chained exceptions. In other words, there must be a constructor present with the
following signature: (String, Throwable) or (String).
• The exception class must have proper getter and setter methods for its instance
variables (except for the error message and cause since these are set in the
constructor).
• The exception class must have a field named exceptionBean of type
List<DfsExceptionHolder> and accessor and mutator methods for this field. The
field is used to encapsulate the exception attributes, which is subsequently sent
over the wire. If this field is not present, the exception attributes will not be
properly serialized and sent over the wire.
• If you do not explicitly declare your custom exception in the throws clause of a
method (a RuntimeException for instance), a ServiceException is sent down the
wire in its place.
When the exception is unmarshalled on the client, the DFS client runtime attempts to
locate the exception class in the classpath and initialize it using a constructor with
the following signature: (String, Throwable) or (String). If that attempt fails, the
client runtime will throw a generic UnrecoverableException that is created with the
following constructor: UnrecoverableException(String, Throwable).
3. Define the fields that you want the exception to contain. Ensure accessor and
mutator methods exist for these fields and that each field is JAXB serializable.
The DFS runtime will receive the DfsExceptionHolder object and re-create and
throw the exception on the client side.
{
super(errorCode, cause);
}
public List<DfsExceptionHolder> getExceptionBean(){
return exceptionBean;
}
public void setExceptionBean(List<DfsExceptionHolder>
exceptionBean){
this.exceptionBean = exceptionBean;
}
public Object[] getArgs ()
{
return args;
}
public void setArgs (Object[] args)
{
this.args = args;
}
private Object[] args;
private List<DfsExceptionHolder> exceptionBean;
}
In dfs-runtime.properties:
resource.bundle = dfs-messages
resource.bundle.1 = dfs-services-messages
resource.bundle.2 = dfs-bpm-services-messages
In local-dfs-runtime.properties:
resource.bundle.3 = my-custom-services-messages
http://127.0.0.1:7001/services/samples/AcmeCustomService?wsdl
When instantiating a service, a Java client application can pass the module name and
the fully-qualified context root to ServiceFactory.getRemoteService, as shown here:
mySvc = serviceFactory.getRemoteService(IAcmeCustomService.class,
                                        context,
                                        "samples",
                                        "http://localhost:7001/services");
<DfsClientConfig defaultModuleName="samples"
registryProviderModuleName="samples">
<ModuleInfo name="samples"
protocol="http"
host="127.0.0.1"
port="7001"
contextRoot="services">
</ModuleInfo>
</DfsClientConfig>
The order of precedence is as follows. The DFS runtime will first use parameters
passed in the getRemoteService method. If these are not provided, it will use the
values provided in the DfsClientConfig configuration file.
1. Call the generateModel task, specifying as input the annotated source. The
“generateModel task” on page 166 section provides detailed information on
calling this task.
2. Call the generateArtifacts task, specifying as input the annotated source and
service model. The “generateArtifacts task” on page 167 section provides
detailed information on calling this task.
3. Call the buildService task to build and package JAR files for your service
implementation classes. The “buildService task” on page 168 section provides
detailed information on calling this task.
Pre-requisites
• Download ant 1.7.0 from the Apache web site, and unzip it on your local
machine.
@DfsPojoService(targetNamespace = "http://example.service.com",
requiresAuthentication = true)
Note that this service would also work if requiresAuthentication were set to
false; we set it to true only to demonstrate the more typical setting in a DFS
service. “Annotating a service” on page 140 provides detailed information on
annotations.
<pathelement location="${dfs.sdk.libs}/dfc/*.jar"/>
cd %SAMPLES_LOC%/Services/HelloWorldService
ant artifacts package
8. Restart the DFS application server. Once the server is restarted, the Hello World
service should be addressable at http://<host>:<port>/services/example/HelloWorldService.
• contextRoot
• moduleName
• repository
• user
• password
cd %SAMPLES_LOC%/HelloWorldService
ant run
The run target runs both the compile and run.client targets. After the run target
completes, you should see the string “response = Hello John”, which indicates a
successful call to the service.
The getAcmeServiceInfo method gets a DFC session manager and populates the
AcmeServiceInfo object with data from the session manager:
The context, explicit service module name (“core”), and context root
(“http://127.0.0.1:8080/services”) are passed to the getRemoteService method to get
the Schema service. (You may need to change the hardcoded address of the remotely
invoked Schema service, depending on your deployment.)
ISchemaService schemaService
= ServiceFactory.getInstance()
.getRemoteService(ISchemaService.class,
context,
"core",
"http://127.0.0.1:8080/services");
Note: It is also possible to invoke DFS services locally rather than remotely in
your custom service, if the service JARs from the SDK have been packaged in
your custom service EAR file. There are a number of potential advantages to
The getSchemaInfo operation of the Schema service is called and information from
this request is printed out:
The testExceptionHandling() method demonstrates how you can create and throw
custom exceptions. The method creates a new instance of CustomException and
throws it. The client side runtime catches the exception and recreates it on the client,
preserving all of the custom attributes. You must follow certain guidelines to create
a valid custom exception that can be thrown over the wire to the client. “DFS
exception handling” on page 148 provides detailed information on how to create a
DFS custom exception. The CustomException class is located in the
%SAMPLES_LOC%/AcmeCustomService/src/service/com/acme/services/samples/
common directory.
package com.acme.services.samples.impl;
import com.acme.services.samples.common.AcmeServiceInfo;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSessionManagerStatistics;
import com.emc.documentum.fs.datamodel.core.OperationOptions;
import com.emc.documentum.fs.datamodel.core.schema.SchemaInfo;
import com.emc.documentum.fs.rt.annotations.DfsPojoService;
import com.emc.documentum.fs.rt.context.ContextFactory;
import com.emc.documentum.fs.rt.context.IServiceContext;
import com.emc.documentum.fs.rt.context.ServiceFactory;
import com.emc.documentum.fs.rt.context.impl.DfcSessionManager;
import com.emc.documentum.fs.services.core.client.ISchemaService;
import com.acme.services.samples.common.CustomException;
import java.util.ArrayList;
import java.util.Iterator;
@DfsPojoService(targetNamespace = "http://samples.services.acme.com")
public class AcmeCustomService
{
return acmeServiceInfo;
}
}
2. Edit the %DFS_SDK%/etc/dfc.properties file and specify the correct values for
dfc.docbroker.host[0] and dfc.docbroker.port[0] at a minimum. “dfc.properties”
on page 159 provides detailed information.
ISchemaService schemaService
= ServiceFactory.getInstance()
.getRemoteService(ISchemaService.class,
context,"core",
"http://localhost:8080/services");
8.14.2.1 build.properties
The build.properties file contains property settings that are required by the Ant
build.xml file. To build AcmeCustomService, there is no need to change any of these
settings, unless you have moved the AcmeCustomService directory to another
location relative to the root of the SDK. In this case, you need to change the
dfs.sdk.home property. If you want AcmeCustomService to be automatically copied
to the deploy directory of the application server when you run the deploy target,
specify the directory in the autodeploy.dir property.
context.root = services
#Debug information
debug=true
keep=true
verbose=false
extension=true
#Deploy params
#The following example assumes that you use JBoss 5.1 as your application
#server, which is installed on drive C.
autodeploy.dir=C:\jboss-eap-5.1\jboss-as\server\default\deploy
8.14.2.2 dfc.properties
The service-generation tools package a copy of dfc.properties within the service EAR
file. The properties defined in this dfc.properties file configure the DFC client
utilized by the DFS service runtime. The copy of dfc.properties is obtained from the
DFS SDK etc directory. The dfc.properties must specify the address of a docbroker
that can provide access to any repositories required by the service and its clients, for
example:
dfc.docbroker.host[0]=10.8.13.190
dfc.docbroker.port[0]=1489
You can also run the targets individually and examine the output of each step.
“build.xml” on page 160 provides detailed information on the targets. The
deploy target copies the EAR file to the directory that you specified in the
build.properties file. JBoss should automatically detect the EAR file and deploy
it. If this does not happen, restart the server.
Note: When the EAR file is being built, log4j may return messages like
ERROR Could not instantiate appender name. These messages do not
affect the building process, and thus can be ignored.
2. When the EAR file is done deploying, request the AcmeCustomService WSDL
by going to http://host:port/services/samples/AcmeCustomService?wsdl. A
return of the WSDL indicates a successful deployment. The default port for the
JBoss application server is 8080.
8.14.3.1 build.xml
The Ant build.xml file drives all stages of generating and deploying the custom
service. It contains the targets shown in the following table, which can be run in
order to generate and deploy the custom service.
The AcmeCustomService build.xml file includes an Ant target that compiles and
runs the Java test service consumer. As delivered, the consumer calls the service
remotely, but it can be altered to call the service locally by commenting out the
serviceFactory.getRemoteService method and uncommenting the
serviceFactory.getLocalService method.
Note: If you are developing consumers in .NET or using some other non-Java
platform, you might want to test the service using the Java client library,
because you can use local invocation and other conveniences to test your
service more quickly. However, it is still advisable to create test consumers on
your target consumer platform to confirm that the JAXB markup has generated
a WSDL from which your tools generate acceptable proxies.
2. Edit the Java or .NET code and specify values for the following code.
3. Run the Java consumer at the command prompt from the %DFS_SDK%/
samples/Services/AcmeCustomService directory:
ant run
8.14.4.1 dfs-client.xml
The dfs-client.xml file contains properties used by the Java client runtime for service
addressing. The AcmeCustomService test consumer provides the service address
explicitly when instantiating the service object, so it does not use these defaults.
However, it's important to know that these defaults are available and where to set
them. The %DFS_SDK%/etc folder must be included in the classpath for clients to
utilize dfs-client.xml. If you want to place dfs-client.xml somewhere else, you must
place it in a directory named config and its parent directory must be in the classpath.
For example, if you place the dfs-client.xml file in the c:/myclasspath/config/dfs-
client.xml directory, add c:/myclasspath to your classpath.
<DfsClientConfig defaultModuleName="samples"
registryProviderModuleName="samples">
<ModuleInfo name="samples"
protocol="http"
host="127.0.0.1"
port="8080" contextRoot="services">
</ModuleInfo>
</DfsClientConfig>
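With these defaults on the classpath, a productivity-layer consumer can omit the module name
and address when obtaining a service. A minimal sketch, assuming an initialized service context:
// Module name and context root are taken from the dfs-client.xml shown above.
IObjectService objectService = ServiceFactory.getInstance()
        .getRemoteService(IObjectService.class, serviceContext);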
The service stub and the data model classes are placed in a directory structure
that is determined by their target namespaces. For example, if the WSDL has a
target namespace of http://samples.services.acme.com, the generated classes are
placed under com/acme/services/samples.
3. Once the service is implemented, you can use the DFS build tools to build and
package the service.
The DFS build tools rely on a set of Ant tasks that can help you create and publish
services and generate client support for a service. When developing your own
services, you might need to extend the classpaths of these tasks to include libraries
that are required by your service. To see how the tasks are used in the context of a
build environment, examine the build.xml file in the AcmeCustomService sample.
<taskdef file="${dfs.sdk.libs}/emc-dfs-tasks.xml"/>
You can then call the individual tasks as described in the sample usage for each task:
GenerateModelTask">
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-tools.jar"/
>
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-rt.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/utils/
aspectjrt.jar"/>
</taskdef>
Argument        Description
contextRoot     Attribute representing the root of the service address. For example, in
                the URL http://127.0.0.1:8080/services/, “services” signifies the
                context root.
moduleName      Attribute representing the name of the service module.
destDir         Attribute representing a path to a destination directory into which to
                place the output service-model XML.
<services>      An element that provides a list (a <fileset>) specifying the annotated
                source artifacts.
<classpath>     An element providing paths to binary dependencies.
<generateModel contextRoot="${context.root}"
moduleName="${module.name}"
destdir="${project.artifacts.folder}/src">
<services>
<fileset dir="${src.dir}">
<include name="**/*.java"/>
</fileset>
</services>
<classpath>
<pathelement location="${dfs.sdk.libs}/dfc/dfc.jar"/>
<path refid="project.classpath"/>
</classpath>
</generateModel>
<taskdef name="generateArtifacts"
classname="com.emc.documentum.fs.tools.build.ant.
GenerateArtifactsTask">
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-rt.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-tools.jar"/
>
<classpath location="${dfs.sdk.home}/lib/java/dfc/aspectjrt.jar"/
>
</taskdef>
Argument Description
serviceModel Attribute representing a path to the service
model XML created by the generateModel
task.
destDir Attribute representing the folder into which
to place the output source code. Client code
is by convention placed in a “client”
subdirectory, and server code in a “ws”
subdirectory.
<src> Element containing location attribute
representing the location of the annotated
source code.
<classpath> An element providing paths to binary
dependencies.
<generateArtifacts
    serviceModel="${project.artifacts.folder}/src/${context.root}-${module.name}-service-model.xml"
    destdir="${project.artifacts.folder}/src"
    api="rich">
    <src location="${src.dir}"/>
    <classpath>
        <path location="${basedir}/${build.folder}/classes"/>
        <path location="${dfs.sdk.home}/lib/emc-dfs-rt.jar"/>
        <path location="${dfs.sdk.home}/lib/emc-dfs-services.jar"/>
        <pathelement location="${dfs.sdk.home}/lib/dfc/dfc.jar"/>
        <fileset dir="${dfs.sdk.home}/lib/ucf">
            <include name="**/*.jar"/>
        </fileset>
        <path location="${dfs.sdk.home}/lib/jaxws/jaxb-api.jar"/>
        <path location="${dfs.sdk.home}/lib/jaxws/jaxws-tools.jar"/>
        <path location="${dfs.sdk.home}/lib/commons/commons-lang-2.1.jar"/>
        <path location="${dfs.sdk.home}/lib/commons/commons-io-1.2.jar"/>
    </classpath>
</generateArtifacts>
Argument Description
serviceName Attribute representing the name of the
service module.
destDir Attribute representing the folder into which
to place the output JAR files.
<src> Element containing location attribute
representing the locations of the input source
code, including the original annotated source
and the source output by generateArtifacts.
<classpath> Element providing paths to binary
dependencies.
<buildService serviceName="${service.name}"
destDir="${basedir}/${build.folder}"
generatedArtifactsDir="${project.resources.folder}">
<src>
<path location="${src.dir}"/>
<path location="${project.artifacts.folder}/src"/>
</src>
<classpath>
<pathelement location="${dfs.sdk.home}/lib/dfc/dfc.jar"/>
<path refid="project.classpath"/>
</classpath>
</buildService>
<taskdef name="packageService"
classname="com.emc.documentum.fs.tools.build.ant.
PackageServiceTask">
<classpath location="${dfs.sdk.home}/lib/
java/ emc-dfs-tools.jar"/>
<classpath location="${dfs.sdk.home}/lib/
java/ emc-dfs-rt.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/dfc/aspectjrt.jar"/
>
</taskdef>
Argument Description
deploymentName Attribute representing the name of the
service module. You can specify a .ear
or .war (for Tomcat deployment) file
extension depending on the type of archive
that you want.
destDir Attribute representing the folder into which
to place the output archives.
generatedArtifactsFolder Path to folder in which the WSDL and
associated files have been generated.
<libraries> Element specifying paths to binary
dependencies.
<resources> Element providing paths to resource files.
<packageService deploymentName="${service.name}"
                destDir="${basedir}/${build.folder}"
                generatedArtifactsDir="${project.resources.folder}">
    <libraries>
        <pathelement location="${basedir}/${build.folder}/${service.name}.jar"/>
        <pathelement location="${dfs.sdk.home}/lib/emc-dfs-rt.jar"/>
        <pathelement location="${dfs.sdk.home}/lib/emc-dfs-services.jar"/>
        <pathelement location="${dfs.sdk.home}/lib/dfc/dfc.jar"/>
    </libraries>
    <resources>
        <path location="${dfs.sdk.home}/etc/dfs.properties"/>
    </resources>
</packageService>
<taskdef name="generateService"
classname="com.emc.documentum.fs.tools.GenerateServiceTask">
<classpath location="${dfs.sdk.home}/lib/java/emc-
dfs- tools.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-
rt.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/
utils/ aspectjrt.jar"/>
</taskdef>
Argument Description
wsdlUri The local (file://) or remote (http://) location
of the WSDL
destDir Attribute representing the folder into which
to place the output source code.
debug The debug mode switch (“on” or “off”)
verbose The verbose mode switch (“on” or “off”)
<generateService
wsdllocation="${wsdl.location}"
destDir="${dest.dir}"
verbose="true"
debug="false"/>
<taskdef name="generateRemoteClient"
classname="com.emc.documentum.fs.tools.GenerateRemoteClientTask">
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-tools.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-rt.jar"/>
<classpath location="${dfs.sdk.home}/lib/java/utils/
aspectjrt.jar"/>
</taskdef>
Argument Description
wsdlUri (required) The local (file://) or remote (http://) location
of the WSDL
destdir (required) Attribute representing the folder into which
to place the output source code.
serviceProtocol Either http or https (default is http)
serviceHost The host where the service is located. This
value defaults to the WSDL host, so if the
WSDL is a local file, specify the host where
the service is located.
servicePort The port of the service host. This value
defaults to the WSDL host port, so if the
WSDL is a local file, specify the port where
the service is located.
serviceContextRoot The context root where the service is
deployed. This value defaults to the WSDL
context root, so if the WSDL is a local file,
specify the context root where the service is
located.
serviceModuleName The name of the service module. This value
defaults to the WSDL service module, so if
the WSDL is a local file, specify the module
where the service is located.
All attributes except for wsdlUri and destdir are used to override values that are
generated from the WSDL by the generateRemoteClient task.
<generateRemoteClient
    wsdlUri="${wsdl.location}"
    destdir="${dest.dir}"
    serviceProtocol="http"
    serviceHost="localhost"
    servicePort="8080"
    serviceContextRoot="services"
    serviceModuleName="core" />
<taskdef name="generatePublishManifest"
classname="com.emc.documentum.fs.tools.registry.ant.GeneratePublishMa
nifestTask">
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-tools.jar" />
<classpath location="${dfs.sdk.home}/lib/java/emc-dfs-rt.jar" />
<classpath location="${dfs.sdk.home}/lib/java/utils/
aspectjrt.jar" />
<classpath location="${dfs.sdk.home}/lib/java/jaxr/jaxr-impl.jar" /
>
Argument Description
file The output service manifest file
organization The organization to publish the services
under
<modules> An element containing the location of the
service model file of the services that you
want to publish. You can have multiple
<modules> elements. Each <modules>
element contains <pathelement> elements
that specify the location of the service model
with the “location” attribute.
<publishset> An element containing the services that you
want to publish and the catalog and
categories that you want the services to be
under.
<target name="generateManifest">
<generatePublishManifest file="example-publish-manifest.xml"
organization="Documentum">
<modules>
<pathelement location="services-example-service-model.xml"/>
</modules>
<publishset>
<service name="MyService1" module="example"/>
<service name="MyService2" module="example"/>
<catalog name="Catalog1"/>
<category name="Category1"/>
<category name="Category2"/>
</publishset>
</generatePublishManifest>
</target>
1. Run the generateModel Ant task for each of the service modules that you want
to create. Ensure that you specify appropriate values for the following
parameters:
• contextRoot – Specify the same value for each service module that you want
to create. A good value to use is “services.”
• moduleName – Specify different values for each service module that you
want to create. This value is unique to each service module and creates
different service URLs for each of your service modules.
• destDir – Specify the same value for each service module that you want to
create. Using the same destination directory ensures that the service
modules get packaged into the same EAR file.
For example, if you want to create service modules with URLs at /services/
core, /services/bpm, and /services/search, your generateModel tasks might look
like the following:
<generateModel contextRoot="services"
moduleName="core"
destdir="build/services">
...
</generateModel>
<generateModel contextRoot="services"
moduleName="bpm"
destdir="build/services">
...
</generateModel>
<generateModel contextRoot="services"
moduleName="search"
destdir="build/services">
...
</generateModel>
2. Run the generateArtifacts Ant task for each service module that you want to
create. For example, given the output generated by the example above, your
generateArtifacts tasks should look like the following:
<generateArtifacts serviceModel="build/services/services-core-
service-model.xml"
destdir="build/services">
...
</generateArtifacts>
<generateArtifacts serviceModel="build/services/services-bpm-
service-model.xml"
destdir="build/services">
...
</generateArtifacts>
<generateArtifacts serviceModel="build/services/services-
search-service-model.xml"
destdir="build/services">
...
</generateArtifacts>
3. Run the buildService Ant task for each service of the service modules that you
want to create. For example, given the output generated by the examples above,
your buildService tasks should look like the following:
<buildService serviceName="core"
destdir="dist/services"
generatedArtifactsDir="build/services">
...
</generateArtifacts>
<buildService serviceName="bpm"
destdir="dist/services"
generatedArtifactsDir="build/services">
...
</generateArtifacts>
<buildService serviceName="search"
destdir="dist/services"
generatedArtifactsDir="build/services">
...
</generateArtifacts>
4. Run the packageService task once to package all of your service modules
together in the same EAR file. For example, given the output generated by the
examples above, your packageService task should look like the following:
<packageService deploymentName="emc-dfs"
destDir="dist/services"
generatedArtifactsDir="build/services">
...
</packageService>
You should now have all of your service modules packaged into one EAR file,
which can be deployed in your application server.
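For example, a deploy step can simply copy the archive to the application server's deploy
directory (a sketch; the directory names follow the build.properties example earlier in this
chapter, and autodeploy.dir is the property defined there):
<copy file="dist/services/emc-dfs.ear" todir="${autodeploy.dir}"/>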
Note: You must run the DfsProxyGen utility locally and not from a network
drive.
To generate C# proxies:
1. In the Shared assemblies field, add any shared assemblies used by the service.
(There are none for AcmeCustomService.) “Creating shared assemblies for data
objects shared by multiple services” on page 177 provides detailed information.
2. In the Service model file field, browse to the service model file created by the
generateArtifacts ant task. For AcmeCustomService this will be <dfs-sdk-
version>\samples\AcmeCustomService\resources\services-samples-service-
model.xml.
3. In the Wsdl uri field, supply the name of the WSDL of the deployed service, for
example http://localhost:7001/services/samples/AcmeCustomService?wsdl.
Only URLs are permitted, not local file paths, so you should use the URL of the
WSDL where the service is deployed.
4. In the Output namespace, supply a namespace for the C# proxy (for example
samples.services.acme).
5. Optionally supply a value in the Output FileName field. If you don't supply a
name, the proxy file name will be the same as the name of the service, for
example AcmeCustomService.cs.
The results of the proxy generation will appear in the Log field. If the process is
successful, the name and location of the result file will be displayed.
1. Run DfsProxyGen against the WSDL and service model file for ServiceA.
This will generate the proxy source code for the service and its data classes
DataClass1 and DataClass2.
2. Create a project and namespace for the shared classes, DataClass1 and
DataClass2, that will be used to build the shared assembly. Cut DataClass1 and
DataClass2 from the proxy source generated for ServiceA, and add them to new
source code file(s) in the new project.
5. Run DfsProxyGen against the WSDL and service model for ServiceB,
referencing the shared assembly created in step 4 in the Shared assemblies
field.
UCF content transfer is covered in a separate chapter (see “Content Transfer with
Unified Client Facilities“ on page 199).
Content transfer is an area where the productivity layer (PL) provides a lot of
functionality, so there are significant differences in client code using the productivity
layer and client code based on the WSDL alone. This chapter provides examples
showing how to do it both ways. The WSDL-based samples in this chapter were
written using JAX-WS RI 2.1.2.
A DFS Base64 message on the wire encodes binary data within the Contents element
of a DataObject. The following is an HTTP POST used to create an object with
content using the DFS object service create method.
wss-wssecurity-secext-1.0.xsd">
<wsse:BinarySecurityToken
QualificationValueType="http://schemas.emc.com/
documentum#ResourceAccessToken"
xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/
oasis-200401- wss-wssecurity-utility-1.0.xsd"
wsu:Id="RAD">USITFERRIJ1L1C/
10.13.33.174-1231455862108-4251902732573817364-2
</wsse:BinarySecurityToken>
</wsse:Security>
</S:Header>
<S:Body>
<ns8:create xmlns:ns2="http://rt.fs.documentum.emc.com/"
xmlns:ns3="http://
core.datamodel.fs.documentum.emc.com/"
xmlns:ns4="http://
properties.core.datamodel.fs.documentum.
emc.com/"
xmlns:ns5="http://content.core.datamodel.fs.documentum.
emc.com/"
xmlns:ns6="http://
profiles.core.datamodel.fs.documentum.
emc.com/"
xmlns:ns7="http://
query.core.datamodel.fs.documentum.emc.com/"
xmlns:ns8="http://
core.services.fs.documentum.emc.com/">
<dataPackage>
<ns3:DataObjects transientId="14615126"
type="dm_document">
<ns3:Identity repositoryName="Techpubs"
valueType="UNDEFINED"/>
<ns3:Properties isInternal="false">
<ns4:Properties xmlns:xsi="http://www.w3.org/2001/
XMLSchema-instance"
xsi:type="ns4:StringProperty"
isTransient="false"
name="object_name">
<ns4:Value>MyImage</ns4:Value>
</ns4:Properties>
<ns4:Properties xmlns:xsi="http://www.w3.org/2001/
XMLSchema-instance"
xsi:type="ns4:StringProperty"
isTransient="false"
name="title">
<ns4:Value>MyImage</ns4:Value>
</ns4:Properties>
<ns4:Properties xmlns:xsi="http://www.w3.org/2001/
XMLSchema-instance"
xsi:type="ns4:StringProperty"
isTransient="false"
name="a_content_type">
<ns4:Value>gif</ns4:Value>
</ns4:Properties>
</ns3:Properties>
<ns3:Contents xmlns:xsi="http://www.w3.org/
2001/ XMLSchema-instance"
xsi:type="ns5:BinaryContent"
pageNumber="0"
"gif">
<ns5:renditionType xsi:nil="true"/>
<ns5:Value>R0lGODlhAAUABIc...[Base64-encoded content]
</ns5:Value>
</ns3:Contents>
</ns3:DataObjects>
</dataPackage>
</ns8:create>
</S:Body>
</S:Envelope>
For most files, MTOM optimization is beneficial; however, for very small files
(typically those under 5K), there is a serious performance penalty for using MTOM,
because the overhead of serializing and deserializing the MTOM multipart message
is greater than the benefit of using the MTOM optimization mechanism.
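For illustration only, a productivity-layer consumer could select the mode per transfer. In
this sketch the 5K figure is a guideline rather than a hard rule, payloadSizeInBytes and the
service context are assumed to exist, and the Base64 enum constant may appear as BASE_64 in
JAXB-generated proxies:
ContentTransferProfile transferProfile = new ContentTransferProfile();
// Very small payloads avoid the MTOM multipart overhead; larger ones benefit from MTOM.
transferProfile.setTransferMode(payloadSizeInBytes < 5 * 1024
        ? ContentTransferMode.BASE64
        : ContentTransferMode.MTOM);
serviceContext.setProfile(transferProfile);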
An MTOM message on the wire consists of a multipart message. The parts of the
message are bounded by a unique string (the boundary). The first part of the
message is the SOAP envelope. Successive parts of the message contain binary
attachments. The following is an HTTP POST used to create an object with content
using the DFS object service create method. Note that the Value element within the
DFS Contents element includes an href pointing to the Content-Id of the attachment.
SOAPAction: ""
Content-Type:
multipart/related;
start="<rootpart*27995ec6-ff6b-438d-
[email protected].
sun.com>";
type="application/xop+xml";
boundary="uuid:27995ec6-ff6b-438d-b32d-0b6b78cc475f";
start-info="text/xml"
Accept: text/xml, multipart/related, text/html, image/gif, image/
jpeg,
sun.com>
Content-Type: application/xop+xml;charset=utf-8;type="text/xml"
Content-Transfer-Encoding: binary
wss-wssecurity-secext-1.0.xsd">
<wsse:BinarySecurityToken
QualificationValueType="http://schemas.emc.com/
documentum#ResourceAccessToken"
xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/
oasis-200401- wss-wssecurity-utility-1.0.xsd"
wsu:Id="RAD">USITFERRIJ1L1C/
10.13.33.174-1231455862108-4251902732573817364-2
</wsse:BinarySecurityToken>
</wsse:Security>
</S:Header>
<S:Body>
<ns8:create xmlns:ns8="http://core.services.fs.documentum.
emc.com/"
xmlns:ns7="http://query.core.datamodel.fs.documentum.
emc.com/"
xmlns:ns6="http://
profiles.core.datamodel.fs.documentum.
emc.com/"
xmlns:ns5="http://
content.core.datamodel.fs.documentum.
emc.com/"
xmlns:ns4="http://
properties.core.datamodel.fs.documentum.
emc.com/"
xmlns:ns3="http://
core.datamodel.fs.documentum.emc.com/"
xmlns:ns2="http://rt.fs.documentum.emc.com/">
<dataPackage>
<ns3:DataObjects transientId="8125444" type="dm_document">
<ns3:Identity repositoryName="Techpubs"
valueType="UNDEFINED">
</ns3:Identity>
<ns3:Properties isInternal="false">
<ns4:Properties xmlns:xsi="http://www.w3.org/2001/
XMLSchema-instance"
xsi:type="ns4:StringProperty"
isTransient="false"
name="object_name">
<ns4:Value>MyImage</ns4:Value>
</ns4:Properties>
<ns4:Properties xmlns:xsi="http://www.w3.org/2001/
XMLSchema-instance"
xsi:type="ns4:StringProperty"
isTransient="false"
name="title">
<ns4:Value>MyImage</ns4:Value>
</ns4:Properties>
<ns4:Properties xmlns:xsi="http://www.w3.org/2001/
XMLSchema-instance"
xsi:type="ns4:StringProperty"
isTransient="false"
name="a_content_type">
<ns4:Value>gif</ns4:Value>
</ns4:Properties>
</ns3:Properties>
<ns3:Contents xmlns:xsi="http://www.w3.org/
2001/ XMLSchema-instance"
xsi:type="ns5:DataHandlerContent"
pageNumber="0"
"gif">
<ns5:renditionType xsi:nil="true"></ns5:renditionType>
<ns5:Value>dc
<Include xmlns="http://www.w3.org/2004/08/xop/include"
href="cid:85f284b5-4f2c-4e68-8d08-
de160a5b47c6@example.
jaxws.sun.com"/>
</ns5:Value>
</ns3:Contents>
</ns3:DataObjects>
</dataPackage>
</ns8:create>
</S:Body>
</S:Envelope>
GIF89a[binary data...]
10.2.1.1 Workarounds
There are several options for working around this limitation:
• First, for content download operations, enable ACS/BOCS and make use of it. To
ensure that the urlContent type is returned by DFS, use the urlReturnPolicy
setting as described under “Content types returned by DFS” on page 189. The
client can use the urlContent returned by DFS to request content transfer from
the ACS server.
• For content upload operations, use UCF as the content transfer mode. UCF will
orchestrate content transfer in both directions between the client and the ACS
server.
• If you don't wish to use either of the preceding workarounds, make sure that
both the DFS .NET client and the JVM that runs the DFS server have enough memory to
buffer the content. However, be aware that in this case the application will be
limited to transfer of content in the range of hundreds of megabytes for a 32-bit
JVM, because on most modern 32-bit Windows systems the maximum heap size
will range from 1.4G to 1.6G (see http://www.oracle.com/technetwork/java/
hotspotfaq-138619.html#gc_heap_32bit). Although this specific limitation will not
apply to 64-bit versions of Windows, the issue will still exist if you do not
ensure that there is sufficient heap space to buffer very large objects in memory.
• You can create a custom service. WCF has a limitation (see http://
msdn.microsoft.com/en-us/library/ms789010.aspx) whereby the streamed data
transfer mode is supported only when there is a single parameter in the web
service method signature. Therefore, in the custom service, all parameters must
be wrapped into a single custom class object containing all input parameters of a
method, as follows:
@DfsPojoService()
public class StreamingService
{
    public DataPackage create(DataRequest request) throws ServiceException
    {
        // DataRequest wraps DataPackage and OperationOptions;
        // the DataPackage might contain large content.
        // do something with the content uploaded
        // ...
    }
}

// The wrapper class that carries all input parameters of the method:
public class DataRequest
{
    private DataPackage dataPackage;
    private OperationOptions options;

    public DataPackage getDataPackage()
    {
        return dataPackage;
    }

    public void setDataPackage(DataPackage dataPackage)
    {
        this.dataPackage = dataPackage;
    }

    public OperationOptions getOptions()
    {
        return options;
    }

    public void setOptions(OperationOptions options)
    {
        this.options = options;
    }
}
Notes
10.2.2 For large files, the last temporary file not deleted
For large content transfers on the Java client side, when the client code opens the
DataHandler's stream and reads the bytes itself, MIME*.tmp files are generated and
the last one is not deleted from the temporary files directory. This is caused by an
issue in mimepull.jar. The following code sample is a workaround for this issue:
• Content.getAsFile()
• Content.getAsFile(destDir, filename, deleteLocalHint)
((StreamingDataHandler)dh).moveTo(new File(path));
// is = dh.getInputStream();
// fos = new FileOutputStream(path);
// int iBufferSize = 1024;
// if (is != null)
// {
// int byteRead;
// while ((byteRead = is.read()) != -1)
// {
// fos.write(byteRead);
// }
// is.close();
// }
// fos.flush();
// fos.close();
10.3 ContentTransferMode
The DFS ContentTransferMode is a setting that is used to influence the transfer
mode used by DFS. This section discusses how this setting applies in various types
of clients. “Content types returned by DFS” on page 189 provides information about
the different types and content transfer mechanisms that can be expected from DFS
based on this setting and other factors.
<xs:simpleType name="ContentTransferMode">
<xs:restriction base="xs:string">
<xs:enumeration value="BASE64"/>
<xs:enumeration value="MTOM"/>
<xs:enumeration value="UCF"/>
</xs:restriction>
</xs:simpleType>
The way the ContentTransferMode setting functions varies, depending on the type
of client.
WSDL-based clients
In WSDL-based clients, ContentTransferMode influences the data type and content
transfer format that DFS uses to marshall content in HTTP responses. In a WSDL-
based client, ContentTransferMode only applies to content download from DFS, and
in this context, only Base64 and MTOM are relevant. To use the UCF content transfer
mechanism, you need to write client-side code that delegates the content transfer to
UCF. WSDL-based clients in particular need to be aware that DFS will use
UrlContent in preference to MTOM or Base64 to transfer content, if ACS is available
(see “Content types returned by DFS” on page 189).
The value passed in OperationOptions will take precedence over the setting in the
service context.
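For example (a minimal productivity-layer sketch, assuming an existing objectService and
objectIdentitySet), a profile passed in OperationOptions overrides the context setting for
that call:
ContentTransferProfile transferProfile = new ContentTransferProfile();
transferProfile.setTransferMode(ContentTransferMode.MTOM);

OperationOptions operationOptions = new OperationOptions();
operationOptions.getProfiles().add(transferProfile);

// This call uses MTOM even if the service context specifies a different mode.
DataPackage result = objectService.get(objectIdentitySet, operationOptions);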
You can gain finer control over this behavior using the urlReturnPolicy property of
ContentProfile. The value of urlReturnPolicy is an enum constant of type
UrlReturnPolicy, as described in the following table:
Value Behavior
ALWAYS Return UrlContent where URL content is
available; fail with exception where URL
content is not available.
NEVER Return actual content; never return
UrlContent.
ONLY Return UrlContent where URL content is
available; return no content in DataObject
where URL content is not available.
PREFER Return UrlContent where URL content is
available; return actual content where URL
content is not available.
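For example, the following productivity-layer sketch requests UrlContent where it is available
and actual content otherwise. It assumes an existing operationOptions instance, and the setter
name follows the usual bean convention for the urlReturnPolicy property:
ContentProfile contentProfile = new ContentProfile();
contentProfile.setUrlReturnPolicy(UrlReturnPolicy.PREFER); // prefer ACS URLs when available
operationOptions.getProfiles().add(contentProfile);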
If you are writing a WSDL-only client that does not use the productivity layer, then
your code needs to be aware at runtime of the content type returned by DFS and
handle it appropriately. If you are using the productivity layer, the PL provides
convenience methods that support handling all Content subtypes in a uniform way,
and transparently handle the streaming of UrlContent to the client. “Downloading
content using Base64 and MTOM” on page 193 and “Downloading UrlContent”
on page 196 provide some sample code comparing these two approaches.
If you are using a WSDL-based client, you will need to use the API provided by
your framework to enable MTOM (or not), and explicitly provide an appropriate
subtype of Content in the DataObject instances passed to a DFS operation. The
following example, from a plain JAX-WS client, passes a DataObject containing
content stored in an existing file to the Object service create method as
BinaryContent.
ObjectIdentity objIdentity = new ObjectIdentity();
objIdentity.setRepositoryName(((RepositoryIdentity)
        (m_serviceContext.getIdentities().get(0))).getRepositoryName());
DataObject dataObject = new DataObject();
dataObject.setIdentity(objIdentity);
dataObject.setType("dm_document");

PropertySet properties = new PropertySet();
dataObject.setProperties(properties);
StringProperty objNameProperty = new StringProperty();
objNameProperty.setName("object_name");
objNameProperty.setValue("MyImage-" + System.currentTimeMillis());
properties.getProperties().add(objNameProperty);

if (m_transferMode == ContentTransferMode.MTOM)
{
    // calls helper method shown below
    dataObject.getContents().add(getDataHandlerContent(byteArray, format));
}
else if (m_transferMode == ContentTransferMode.BASE_64)
{
    // calls helper method shown below
    dataObject.getContents().add(getBinaryContent(byteArray, format));
}
DataPackage dataPackage = new DataPackage();
dataPackage.getDataObjects().add(dataObject);

private DataHandlerContent getDataHandlerContent(byte[] byteArray, String format)
{
    DataSource byteDataSource = new ByteArrayDataSource(byteArray, "gif");
    DataHandler dataHandler = new DataHandler(byteDataSource);
    DataHandlerContent dataHandlerContent = new DataHandlerContent();
    dataHandlerContent.setFormat(format);
    dataHandlerContent.setValue(dataHandler);
    return dataHandlerContent;
}
The transfer mode used to send the content over the wire is determined by the client
framework—in the case of this example by whether MTOM is enabled on the JAX-
WS ServicePort. The following snippet shows one means of enabling MTOM by
passing an instance of javax.xml.ws.soap.MTOMFeature when getting the service
port from the service.
String objectServiceURL = contextRoot + "/core/ObjectService";
ObjectService objectService = new ObjectService(
new URL(objectServiceURL),
new QName("http://core.services.fs.documentum.emc.com/",
"ObjectService"));
servicePort = objectService.getObjectServicePort
(new MTOMFeature());
If you are using the productivity layer, the productivity layer runtime checks the
ContentTransferMode setting and takes care of converting the content type to an
appropriate subtype before invoking the remote service. The transfer mode used for
the upload is determined by the runtime, also based on the ContentTransferMode
setting.
    throws ServiceException
{
    File testFile = new File(filePath);
    if (!testFile.exists())
    {
        throw new IOException("Test file: " + testFile.toString() + " does not exist.");
    }

    ContentTransferProfile contentTransferProfile = new ContentTransferProfile();
    contentTransferProfile.setTransferMode(ContentTransferMode.MTOM);
File targetFile)
throws IOException, SerializableException
{
ObjectIdentitySet objectIdentitySet = new
ObjectIdentitySet();
objectIdentitySet.getIdentities().add(objectIdentity);
ContentTransferProfile contentTransferProfile
= new ContentTransferProfile();
contentTransferProfile.setTransferMode(m_transferMode);
operationOptions.getProfiles().add(contentTransferProfile);
operationOptions.getProfiles().add(contentProfile);
DataPackage dp =
m_servicePort.get(objectIdentitySet,
operationOptions);
Content content =
dp.getDataObjects().get(0).getContents().get(0);
OutputStream os = new FileOutputStream(targetFile);
if (content instanceof UrlContent)
{
//Handle URL content -- see following section
}
else if (content instanceof BinaryContent)
{
BinaryContent binaryContent = (BinaryContent) content;
os.write(binaryContent.getValue());
}
else if (content instanceof DataHandlerContent)
{
DataHandlerContent dataHandlerContent =
(DataHandlerContent) content;
InputStream inputStream =
dataHandlerContent.getValue().getInputStream();
if (inputStream != null)
{
int byteRead;
while ((byteRead = inputStream.read()) != -1)
{
os.write(byteRead);
}
inputStream.close();
}
}
os.close();
return targetFile;
}
The following productivity layer example does something similar; however it can
use the Content#getAsFile convenience method to get the file without knowing the
concrete type of the Content object.
DataPackage dataPackage =
objectService.get(objectIdSet,
operationOptions);
DataObject dataObject =
dataPackage.getDataObjects().get(0);
Content resultContent = dataObject.getContents().get(0);
if (resultContent.canGetAsFile())
{
return resultContent.getAsFile();
}
else
{
return null;
}
}
A client can get UrlContent explicitly using the Object service getContentUrls
operation. UrlContent can also be returned by any operation that returns content if
an ACS server is configured and active on the Documentum Server where the
content is being requested, and if the requested content is available via ACS. Clients
that do not use the productivity layer should detect the type of the content returned
by an operation and handle it appropriately. In addition, the ACS URL must be
resolvable when downloading the UrlContent.
Note: The expiration time for an ACS URL can be configured by setting the
default.validation.delta property in acs.properties. The default value is 6 hours.
Documentum Server Distributed Configuration Guide provides detailed
information.
A client that does not use the productivity layer needs to handle UrlContent that
results from a get operation or a getContentUrls operation by explicitly
downloading it from ACS. The following JAX-WS sample extracts the UrlContent
from the results of a get operation, then passes the URL and a FileOutputStream to a
second sample method, which downloads the ACS content to a byte array which it
streams to the FileOutputStream.
File targetFile)
throws IOException, SerializableException
{
ObjectIdentitySet objectIdentitySet = new
ObjectIdentitySet();
objectIdentitySet.getIdentities().add(objectIdentity);
ContentTransferProfile contentTransferProfile
= new ContentTransferProfile();
contentTransferProfile.setTransferMode(m_transferMode);
operationOptions.getProfiles().add(contentTransferProfile);
operationOptions.getProfiles().add(contentProfile);
DataPackage dp =
m_servicePort.get(objectIdentitySet,
operationOptions);
Content content =
dp.getDataObjects().get(0).getContents().get(0);
OutputStream os = new FileOutputStream(targetFile);
if (content instanceof UrlContent)
{
UrlContent urlContent = (UrlContent) content;
// call private method shown below
downloadContent(urlContent.getUrl(), os);
}
else if (content instanceof BinaryContent)
{
//handle binary content -- see preceding
section
}
else if (content instanceof DataHandlerContent)
{
//handle DataHandlerContent -- see preceding
section
}
os.close();
return targetFile;
}
The following sample method does the work of reading the content from
ACS to a buffer and streaming it to an OutputStream.
private void downloadContent (String url, OutputStream os)
throws IOException
{
InputStream inputStream;
inputStream = new BufferedInputStream(new
URL(url).openConnection().
getInputStream());
int bytesRead;
byte[] buffer = new byte[16384];
while ((bytesRead = inputStream.read(buffer)) > 0)
{
os.write(buffer, 0, bytesRead);
}
}
If on the other hand you are using the productivity layer, the PL runtime does most
of the work behind the scenes. When retrieving content using a get operation, you
can call getAsFile on the resulting content object without knowing its concrete type.
If the type is UrlContent, the runtime will retrieve the content from ACS and write
the result to a file.
The following example gets UrlContent explicitly using the Object service
getContentUrls function and writes the results to a file.
" as file.");
}
}
Unified Client Facilities (UCF) orchestrates direct transfer of content between a client
computer and a Documentum repository. UCF is fully integrated with DFS, and can
be employed as the content transfer mechanism in many types of DFS consumer
application. The DFS SDK provides client libraries to support UCF content transfer
in Java and in .NET. The Java and .NET libraries are integrated into the DFS
productivity layer runtime to simplify usage by productivity layer applications.
Applications that do not use the productivity layer can use the UCF client libraries
directly in their applications outside of the DFS productivity layer runtime. Web
applications can package the UCF client libraries into an applet or an ActiveX object
to enable UCF content transfer between a browser and a Documentum Server.
Clients that use the .NET libraries do not need to have a Java Runtime Environment
installed on their system.
This chapter discusses the use of UCF for content transfer in a DFS context.
You may want to consider a list of its potential benefits when deciding whether and
when to use it rather than the alternative content transfer modes (MTOM and
Base64). Unified Client Facilities:
However, UCF content transfer mode may also be required to work around memory
limitations for .NET clients (see “Memory limitations associated with MTOM
content transfer mode” on page 184).
In DFS 7.0 or later, native .NET UCF client-side components are supported, either as
an ActiveX object (for web applications) or as a .NET assembly (for thick clients),
and no JRE is required on the client machine. Native DFS UCF .NET integration on
the client side will require DFS services version 7.0 or later on the server side.
If you are developing a thick client that uses the productivity layer, the components
are packaged in the DFS client runtime libraries delivered in the SDK. You can set
up a project using the usual DFS client dependencies, as described in
“Configuring .NET consumer project dependencies” on page 75 and “Configuring
Java dependencies for DFS productivity-layer consumers” on page 47. No other
dependencies are required.
If you are not using the productivity layer, and you are developing a thick client,
you will need to reference the UCF client-side libraries in your project, which
enables your application to invoke the UcfConnection class. In a .NET project you
will need to add a reference to Emc.Documentum.Fs.Runtime.Ucf.dll. In Java, you
should place ucf-connection.jar on your project classpath.
Finally, if you are developing a web application, and need to download the UCF
client components to the browser, you will need to develop an applet or an ActiveX
object for this purpose. A sample applet and a sample Activex are included in the
DFS SDK. You will need to package the DFS client runtime dependencies in the
applet or ActiveX object. “Write the applet code for deploying and launching UCF”
on page 215, “Build and bundle the applet” on page 216 (for Java applications), and
“Tutorial: Using UCF .NET in a .NET client” on page 227 provide detailed
information.
# ProxyPass
# enables Apache as forwarding proxy
UCF is a stateful protocol that relies on HTTP sessions to preserve state. DFS requires
more than one round trip to the server side to establish a UCF connection. For this
reason, sticky-session load balancing is required, so that all requests that are part of
the same HTTP session are routed to the same backend node.
Notes
The following example demonstrates how to use the same HTTP session for
different UCF connections.
UCF failover is not supported as a result. In case of a node failure, the whole UCF
transfer process, including establishing a new UCF connection, must be restarted.
Once an HTTP session is established, DFS will reuse it for the same service instance
to avoid any load balancing issues. (More accurately, DFS will reuse the same HTTP
session for the same service, provided that the client application does not update the
ActivityInfo instance in the ContentTransferProfile instance.) DFS will throw the
following exception if the consumer tries to override the existing HTTP JSESSIONID
value with a different one:
Can not execute DFS call: provided HTTP session id "xxx" overrides
• ContentTransferProfile
• ContentProfile
• ContentTransferMode
• ActivityInfo
• UcfContent
The ActivityInfo class permits a developer to control the UCF connection lifecycle
and to provide details for externally initialized UCF connections. Controlling the
Note: The ActivityInfo passed by the client might be updated by the DFS runtime;
you will not be able to retrieve the cookies set in the ActivityInfo.
The UcfContent class is used explicitly to indicate that no further runtime UCF
processing is required on the Content instance. If the files to be transferred are not
located on the same machine as the DFS consumer, as it would be in case of a
browser integration, the application developer should explicitly provide a
UcfContent instance in the DataObject passed to the service operation.
A typical use of DFS-orchestrated UCF would be a thick client invoking the DFS
remote web services API.
The process of establishing a UCF connection consists of a set of steps that must be
taken in a specific order for the procedure to succeed. First, a UCF installer must be
downloaded from the server side. It checks whether a UCF client is already present
in the environment and, if it is, whether it needs to be upgraded. Before the UCF
installer can be executed, it is necessary to confirm its author and verify that its
integrity has not been compromised. This is achieved by digitally signing the
installer and verifying the file signature on the client side. The downloaded UCF
installer is executed only if it is considered trusted. Once running, it installs and
launches the UCF client and, when that succeeds, requests a UCF connection ID
from the UCF server.
To encapsulate this complexity, DFS provides the UcfConnection class. This class
takes the URL of the UCF server as a constructor argument and allows the developer
to obtain a UCF connection ID through a public method call. The provided URL
should point to the location of the UCF installer and ucf.installer.config.xml on the
remote server. This class is available for both Java and .NET consumers. Both Java
and .NET DFS Productivity Layers rely on UcfConnection to establish UCF
connections.
(getParameter("ucf-server")));
uid = c.getUid();
jsessionId = c.getJsessionId();
where ucf-server has is a string representing the DFS service context, such as “http://
host:port/context-root/module”, for example “http://localhost:8080/services/core”.
The values of “uid” and “jsessionId” must be passed on to the browser and
eventually, to the web application initiating the UCF content transfer. One way of
passing on these values to the browser is through the JSObject plugin, which allows
Java to manipulate objects that are defined in JavaScript.
[ComVisible(true)]
public string GetUid(String jsessionId, String url)
{
UcfConnection c = new UcfConnection(new Uri(url), jsessionId,
null);
return c.GetUcfId();
}
As with the Java integration, the UCF connection ID (uid) and “jsessionId” must be
passed to the web application initiating the UCF content transfer.
11.1.7 Authentication
UCF does not have any built-in authentication mechanisms. It is controlled from the
server side by DFC, which begins the content transfer only after authenticating the
user. This leaves the door open for Denial of Service attacks as clients can establish
as many UCF connections as they wish.
To establish a secure UCF connection, you must add the SSO cookie to the
UcfConnection constructor.
targetDeploymentId);
The hostname must be a text string of up to 24 characters drawn from the alphabet
(A-Z), digits (0-9), the minus sign (-), and the period (.). Otherwise, the .NET UCF
client installation fails. For example, the hostname cannot contain the underscore
character (_).
The client runtime provides a constructor that permits the consumer to set
autoCloseConnection only, and the remaining settings are provided by default. With
these settings, the DFS framework will supply standard values for activityId and
sessionId, so that content will be transferred between the standard endpoints: the
UCF server on the DFS host, and the UCF client on the DFS consumer. The following
snippet shows how to set the autoCloseConnection using the Java productivity
layer:
IServiceContext c = ContextFactory.getInstance().newContext();
c.addIdentity(new RepositoryIdentity("…", "…", "…", ""));
ContentTransferProfile p = new ContentTransferProfile();
p.setTransferMode(ContentTransferMode.UCF);
p.setActivityInfo(new ActivityInfo(false));
c.setProfile(p);
IObjectService s = ServiceFactory.getInstance()
.getRemoteService(IObjectService.class,
c,
"core",
"http://localhost:8080/services");
DataPackage result = s.get(new ObjectIdentitySet
(new ObjectIdentity
(new ObjectPath("/Administrator"), "…")),
null);
This optimization removes the overhead of launching the UCF client multiple times.
It is only effective in applications that will perform multiple content transfer
operations between the same endpoints. If possible, this overhead can be more
effectively avoided by packaging multiple objects with content in the DataPackage
passed to the operation.
Notes
Value Description
Null or empty string Take no action.
dfs:view Open the file in view mode using the
application associated with the file type by
the Windows operating system.
dfs:edit Open the file in edit mode using the
application associated with the file type by
the Windows operating system.
dfs:edit?app=_EXE_ Open the file for editing in a specified
application. To specify the application
replace _EXE_ with a fully-qualified path to
the application executable; or with just the
name of the executable. In the latter case the
operating system will need to be able to find
the executable; for example, in Windows, the
executable must be found on the %PATH%
environment variable. Additional parameters
can be passed to the application preceded by
an ampersand (&).
1. %UCF_LAUNCH_CLICK_ONCE_PATH%
2. %USERPROFILE%
3. %HOMEDRIVE%%HOMEPATH%
4. %WINDIR%
If none of the above variables are valid, a UCF exception occurs. You must then set
the %UCF_LAUNCH_CLICK_ONCE_PATH% variable to a folder path on a non-network
drive with WRITE permission.
http://msdn.microsoft.com/en-us/library/t71a733d%28v=vs.80%29.aspx provides
detailed information on ClickOnce.
However, you can switch over to UCF Java after you configure the following:
Value Description
true UCF opens the local copy of a checked out
document on a subsequent checkout
operation.
false (default) An error is returned when UCF tries to check
out a document that already has been
checked out.
You can set this runtime property by using the setRuntimeProperty method of the
service context as shown here:
serviceContext.setRuntimeProperty("RUN_UCF_ACTION_ON_SUBSEQUENT_CHECK
OUT", "true");
11.3.1 Requirements
UCF depends on the availability of JRE 11 on the client machine to which the UCF
jar files are downloaded. It determines the Java location using the JAVA_HOME
environment variable.
For our tests of this scenario, we deployed both the web application and DFS on
Tomcat 6e. The test application shown here also requires the Java Plug-in. The Java
Plug-in is part of the Java Runtime Environment (JRE), which is required on the end-
user machine.
1. The browser sends a request to a JSP page, which downloads an applet. If the
browser is configured to check for RSA certificates, the end user will need to
import the RSA certificate before the applet will run. (The signing of the applet
with the RSA certificate is discussed in “Sign the applet” on page 217.)
2. The applet instantiates a UCF connection, gets back a jsessionId and a uid, then
sends these back to the JSP page by calling a JavaScript function.
3. In the web application, a servlet uses the jsessionId, uid, and a filename
provided by the user to create an ActivityInfo object, which is placed in a
ContentTransferProfile in a service context. This enables DFS to perform content
transfer using the UCF connection established between the UCF server on the
DFS service host and the UCF client on the end-user machine.
The tasks required to build this test application are described in the following
sections:
3. “Code an HTML user interface for serving the applet” on page 212
4. “Write the applet code for deploying and launching UCF” on page 215
7. “Create a servlet for orchestrating the UCF content transfer” on page 217
• An end-user machine, which includes a browser, and which must have a Java
Runtime Environment available in which to run UCF (and the Java Plug-in). The
browser should be configured to use JRE 11.
• A proxy set up using the Apache application server (we tested using version 2.2).
• An application server hosting the web application components, including the
DFS consumer.
• An application server hosting the DFS services and runtime (which include the
required UCF server components). The DFS installation must have its
dfc.properties configured to point to a connection broker through which the
Documentum Server installation can be accessed.
• A Documentum Server installation.
To create a test application, each of these hosts must be on a separate port. They do
not necessarily have to be on separate physical machines. For purposes of this
sample documentation, we assume the following:
# ProxyPass
# enables Apache as forwarding proxy
• http://proxy:80/services/core/runtime/AgentService.rest is forwarded to
  http://dfs-server:8080/services/core/runtime/AgentService.rest.
• The default mapping is to the application server that hosts the UI and the DFS
  consumer, so http://proxy:80/ucfweb/ImportFileServlet is forwarded to
  http://ui-server:8080/ucfweb/ImportFileServlet.
Note: This sample has been implemented with two buttons for demonstration
purposes. A button with the sole function of creating the UCF connection
would probably not be a useful thing to have in a production application.
Make sure not to click this button and then close the browser without performing
the import: doing so will leave the UCF client process running.
var winPop;
function OpenWindow()
{
function validate()
{
if(document.form1.jsessionId.value == "" ||
document.form1.uid.value=="")
{
alert("UCF connection is not ready, please
wait");
return false;
}
else if(document.form1.file.value == "")
{
alert("Please enter a file path");
return false;
}
else
{
return true;
}
}
</script>
</head>
<body>
<h2>DFS Sample</h2>
<form name="form1"
onSubmit="return validate()"
method="post"
action="/ucfweb/ImportFileServlet">
Enter File Path: <input name="file" type="text" size=20><br>
<input name="jsessionId" type="hidden"><br>
<input name="uid" type="hidden"><br>
Note that hidden input fields are provided in the form to store the jsessionId and uid
values that will be obtained by the applet when it instantiates the UcfConnection.
<html>
<head>
<TITLE>Sample Applet PopUp Page</TITLE>
<script type="text/javascript">
function setHtmlFormIdsFromApplet()
{
if (arguments.length > 0)
{
window.opener.document.form1.jsessionId.value = arguments[0];
window.opener.document.form1.uid.value = arguments[1];
}
window.close();
}
</script>
</head>
<body>
<center><h2>Running Applet ........</h2><center>
<center>
<applet
CODE="com.emc.documentum.fs.sample.applet.SampleApplet.class"
CODEBASE="../applet"
archive="ucfApplet.jar,ucf-connection.jar,ucf-installer.jar">
setting values obtained by the applet in dfsSample.html (see Example 11-1, “HTML
for user interface” on page 213). The applet will use the Java Plug-in to call this
JavaScript function.
11.3.2.4 Write the applet code for deploying and launching UCF
The applet must perform the following tasks:
Note that this Java code communicates with the Javascript in the JSP using the Java
Plug-in (JSObject).
package com.emc.documentum.fs.sample.applet;

import com.emc.documentum.fs.rt.ucf.UcfConnection;

import java.applet.*;
import java.net.URL;
import netscape.javascript.JSObject;

public class SampleApplet extends Applet
{
    public void init ()
    {
        try
        {
            // The UCF server address shown here is an assumed value; point it at
            // your DFS context root and module, for example http://host:port/services/core.
            UcfConnection conn = new UcfConnection(
                    new URL("http://localhost:8080/services/core"));
            System.out.println("jsessionId=" + conn.getJsessionId() + ", uid=" + conn.getUid());
            JSObject win = JSObject.getWindow(this);
            win.call("setHtmlFormIdsFromApplet",
                     new Object[] {conn.getJsessionId(), conn.getUid()});
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    public void start ()
    {
    }
}
The applet launches a UCF client process on the end-user machine, which
establishes a connection to the UCF server, obtaining the jsessionId and the uid for
the connection. It uses Java Plug-in JSObject to call the JavaScript function in the
HTML popup, which sets the jsessionId and uid values in the user interface HTML
form, which will pass them back to the servlet.
Method 1
The applet that you construct must contain the SampleApplet class and all classes
from the following archives, provided in the SDK:
• ucf-installer.jar
• ucf-connection.jar
To create the applet, extract the contents of these two jar files and place them in the
same folder with the compiled SampleApplet class, shown in the preceding step.
Bundle all of these classes into a new jar file called dfsApplet.jar.
Method 2
You can package the SampleApplet class into a ucfApplet.jar file, and put this
ucfApplet.jar file and the archives provided in the SDK (ucf-installer.jar,
and ucf-connection.jar) in one directory as demonstrated in the complete code
sample in the SDK.
If you use Method 1 to build and bundle the applet, you have to sign the
dfsApplet.jar file. If you use Method 2, you have to sign all jar files with the same
certificate. “Sign the applet” on page 217 provides detailed information.
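For example, a jar can be signed from the command line with the JDK jarsigner tool (the
keystore file and alias below are placeholders):
jarsigner -keystore mykeystore.jks -storepass <password> dfsApplet.jar mycertalias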
1. Receive the jsessionId and uid from the browser and use this data to configure
an ActivityInfo, ContentTransferProfile, and ServiceContext so that the DFS
service will use the UCF connection established between the UCF client running
on the end-user machine and the UCF server hosted in the DFS server
application.
2. Instantiate the DFS Object service and run a create operation to test content
transfer.
Note: This example uses productivity layer support. “Create the servlet
without the productivity layer” on page 221 provides suggestions on how to
create similar functionality without the productivity layer.
import com.emc.documentum.fs.datamodel.core.content.ActivityInfo;
import com.emc.documentum.fs.datamodel.core.content.ContentTransferMode;
import com.emc.documentum.fs.datamodel.core.content.Content;
import com.emc.documentum.fs.datamodel.core.content.FileContent;
import com.emc.documentum.fs.datamodel.core.context.RepositoryIdentity;
import com.emc.documentum.fs.datamodel.core.profiles.ContentTransferProfile;
import com.emc.documentum.fs.datamodel.core.DataPackage;
import com.emc.documentum.fs.datamodel.core.DataObject;
import com.emc.documentum.fs.datamodel.core.ObjectIdentity;
import com.emc.documentum.fs.rt.context.IServiceContext;
import com.emc.documentum.fs.rt.context.ContextFactory;
import com.emc.documentum.fs.rt.context.ServiceFactory;
import com.emc.documentum.fs.rt.ServiceInvocationException;
import com.emc.documentum.fs.services.core.client.IObjectService;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.ServletException;
import java.io.IOException;
import java.io.PrintWriter;
try
{
    IObjectService service = getObjectService(req);
    DataPackage dp = new DataPackage();
    DataObject vo = new DataObject(new ObjectIdentity(docbase), "dm_document");
    vo.getProperties().set("object_name", "testobject");
    int fileExtIdx = file.lastIndexOf(".");
    // The content format is derived from the file extension; the FileContent
    // constructor shown here is an assumption based on the surrounding sample.
    Content content = new FileContent(file, file.substring(fileExtIdx + 1));
    vo.getContents().add(content);
    dp.addDataObject(vo);
IServiceContext context = ContextFactory.getInstance().newContext();
context.addIdentity(new RepositoryIdentity(docbase, username, password, ""));
IObjectService service = ServiceFactory.getInstance().getRemoteService(
        IObjectService.class, context, "core", serverUrl + "/services");
return service;
}
Note that you will need to provide values for username, password, and docbase
fields to enable DFS to connect to your test repository.
In the sample, the getObjectService method does the work of obtaining the
jsessionId and the uid from the http request.
Notice that in addition to the jsessionId and uid, the ActivityInfo is instantiated with
two other values. The first, which is passed null, is the initiatorSessionId. This is a
DFS internal setting to which the consumer should simply pass null. The second
setting, which is passed true, is autoCloseConnection. Setting this to true (which is also
the default), causes DFS to close the UCF connection after the service operation that
transfers content. “Optimization: controlling UCF connection closure” on page 206
provides detailed information.
Finally, getObjectService instantiates the Object service using the newly created
context.
getRemoteService(
IObjectService.class, context, "core", serverUrl +
"/services");
return service;
The key is that the context has been set up to use the UCF connection to the UCF
client running on the end-user machine, which was obtained by the applet, rather
than the standard connection to the UCF client machine.
The doPost method finishes by using the service to perform a test transfer of content,
using the Object service create method.
You can then instantiate the ObjectService with the ServiceContext factory method.
Applications that do not use the productivity layer must, in addition to setting the
transfer mode and activity info on the service context, provide explicit UcfContent
instances in the DataObject.
The following tasks must be completed to allow users to import through UCF using
a web application. The sample web application, UcfBrowserExtensionSample, is
organized as follows:
• UcfBrowserExtensionSample\BrowserExtensions\Chrome\src: Browser
extension source code for Google Chrome
• UcfBrowserExtensionSample\dfs\include: Java script files that communicate
with browser extension
• UcfBrowserExtensionSample\DFSExtnNative: DFS extension native code that
connects to DFS server to install and launch UCF
• UcfBrowserExtensionSample\pages: Sample html pages
{
  "allowed_origins": [ "chrome-extension://aigoniadnhenbdmnibcmlfndjideciml/" ],
  "description": "com.documentum.dfs.native.1",
  "name": "com.documentum.dfs.native.1",
  "path": "C:\\Users\\Administrator\\AppData\\Local\\OpenText\\ContentXfer\\com.documentum.dfs.native\\1\\run.bat",
  "type": "stdio"
}
extn_installer_url: 'https://chrome.google.com/webstore/detail/opentext-documentum-clien/aigoniadnhenbdmnibcmlfndjideciml'
You need to upload the extension code to the Google Chrome Web Store and update the
ID in manifest.json and clientConfig.json only once.
If you want to test the browser extension before uploading the extension code to the
Google Chrome Web Store, complete the following steps:
2. Enable Developer Mode using the toggle option on the top right hand corner of
the browser.
5. Copy the ID from the DFS OpenText Documentum Client Manager tile.
{
  "allowed_origins": [ "chrome-extension://aigoniadnhenbdmnibcmlfndjideciml/" ],
  "description": "com.documentum.dfs.native.1",
  "name": "com.documentum.dfs.native.1",
  "path": "C:\\Users\\Administrator\\AppData\\Local\\OpenText\\ContentXfer\\com.documentum.dfs.native\\1\\run.bat",
  "type": "stdio"
}
Note: When you load the unpacked extension, Chrome generates a new ID that is
unique to each client machine. You will need to update the ID in manifest.json and
clientConfig.json for each client machine.
You need to build the NativeSetup.exe file only once. The NativeSetup.exe file
sets up the native client on the client machine. To generate the EXE, complete the
following procedure:
2. Select Create new self Extraction Directive file and click Next.
3. Select Extract files and run an installation command and click Next.
11. Enter target path and filename for the package UcfTransfer\UCF.Java\
UcfBrowserExtensionSample\dfs\extension\NativeSetup.EXE.
2. Configure UcfBrowserExtensionSample\WEB-INF\classes\config.
properties with the following details:
4. To test the web application, open the following URL in Google Chrome:
http://<IP_address>:8080/UcfBrowserExtensionSample/dfsSample.html
Note: If the browser extension and the native client are already installed on the
machine, you will not be prompted to install them. You can skip this section and
proceed to import the file of your choice. If either of these components is not
present, you will be prompted to install the relevant component.
Google Chrome
1. When you open the application on a machine for the first time, a yellow ribbon
appears in the browser with the message “Please Install Content Transfer
Extension”. Click Install and add the extension to Google Chrome.
2. Refresh or open a new tab in the browser and open the following application
URL:
http://<IP_address>:8080/UcfBrowserExtensionSample/dfsSample.html
Native client
1. When you open the application on a machine for the first time, a yellow ribbon
appears in the browser with the message “Please Install Native Client”. Click
Install. The NativeSetup.exe file downloads on the machine.
2. Double-click and run the NativeSetup.exe file. The native client will be
installed in %USERPROFILE%\AppData\Local\OpenText\ContentXfer\com.
documentum.dfs.native\1\.
3. Refresh or open a new tab in the browser and open the following application
URL:
http://<IP_address>:8080/UcfBrowserExtensionSample/dfsSample.html
http://<IP_address>:8080/UcfBrowserExtensionSample/dfsSample.html
Specify the complete path of the file you want to import and click Import.
11.5.1 Requirements
UCF .NET depends on the availability of .NET framework 4.0 on the client machine
on which the UCF assembly files are downloaded.
For simplicity, we installed the Apache proxy server and application server on the
same machine.
The tasks required to build this test application are described in the following
sections:
3. “Code an HTML user interface for serving the ActiveX control” on page 229
4. “Create an ASP web page using the DFS Productivity Layer” on page 230
• An end-user machine, which includes a 32-bit Internet Explorer, and has .NET
framework 4.0 installed (we tested using version 8.0).
• A proxy set up using the Apache application server (we tested using version 2.2).
• A .NET web server hosting the web application components, including the DFS
consumer. This can be an IIS web server or Visual Studio Development server.
• An application server hosting the DFS services and runtime (which include the
required UCF server components). The DFS installation must have its
dfc.properties configured to point to a connection broker through which the
Documentum Server installation can be accessed.
To create a test application, each of these hosts must be on a separate port. They do
not necessarily have to be on separate physical machines. For purposes of this
sample documentation, we assume the following:
• The DFS services (and the UCF components, which are included in the DFS ear
file) are at http://localhost:8080/services/core.
# ProxyPass enables Apache to act as a forwarding proxy, mapping
# http://proxy:80/services/core/runtime/AgentService.rest to
# http://dfs-server:8080/services/core/runtime/AgentService.rest
11.5.2.3 Code an HTML user interface for serving the ActiveX control
The sample HTML prompts the user to import a file with UCF .NET. This HTML has
been used for testing the ActiveX component within a CAB file provided by DFS
SDK.
function startUcf()
{
    try {
        var ucfClient = document.getElementById("UcfLauncherCtrl");
        ucfClient.init();
        ucfClient.start();
    }
    catch (e) {
        alert("Failed to start UCF client: " + e.message);
    }
}
</script>
</head>
Although the UcfLauncher.cab file is not packaged in the dfs.ear or dfs.war file, the
DFS SDK provides the UcfLauncher CAB files, as described below.
UCF .NET supports 32-bit and 64-bit browsers. DFS SDK provides two CAB files for
use:
You can locate the CAB files, in the DFS SDK, under <dfs-sdk-version>\lib\java\ucf
\browser.
The web server that hosts the DFS consumer determines which CAB file must be
installed on the client, based on the request.
11.5.2.4 Create an ASP web page using the DFS Productivity Layer
The ASP web server page performs the following tasks:
1. Receive the jsessionId and uid from the browser and instantiate an ActivityInfo,
ContentTransferProfile, and ServiceContext so that the DFS service will use the
UCF connection established between the UCF client running on the end-user
machine and the UCF server hosted in the DFS server application.
2. Instantiate the DFS Object service and run a create operation to test content
transfer.
In the JavaScript, add a new method to retrieve the UCF ID from the ActiveX control.
The Import functionality receives the UCF ID and uses it for the DFS service operation.
<td class="style3">
<input id="ImportPath" type="file" />
</td>
<td>
<asp:Button ID="ImportButton" runat="server"
onclick="ImportButton_Click" Text="Import"
OnClientClick="getUcfId()"/>
</td>
</tr>
</table>
ALL);
context.SetProfile(propProfile);
return context;
}
DFS provides an integration with the Netegrity SiteMinder Policy Server and RSA
ClearTrust Server single sign-on plug-ins, which are available with Documentum
Server.
The productivity layer SSO interface is uniform, whether the client is a .NET remote
client, a Java remote client, or a local Java client. In all these cases the client needs to
create an instance of the SsoIdentity class and populate it with the SSO credentials. If
the SSO credentials are in the form of an incoming HTTP request, the client can
instantiate the SsoIdentity using this constructor in Java:
SsoIdentity(HttpServletRequest request)
Or in .NET:
SsoIdentity(HttpRequest request)
If the client has credentials in the form of a user name and token string, the client can
set the user name and token string in an alternate constructor as shown in the
sample below. The SsoIdentity, like other objects of the Identity data type, is set in
the service context and used in instantiating the service object:
// The SsoIdentity construction is reconstructed here from the surrounding text;
// the alternate constructor takes the user name and the SSO token string.
SsoIdentity identity = new SsoIdentity(userName, token);
identity.setSsoType("dm_rsa");
IServiceContext serviceContext = ContextFactory.getInstance().newContext();
serviceContext.addIdentity(identity);
ISchemaService service = ServiceFactory.getInstance().
    getRemoteService(ISchemaService.class, serviceContext);
RepositoryInfo repoInfo = service.getRepositoryInfo(repository, null);
System.out.println(repoInfo.getName());
}
Note that SsoIdentity, like its parent class BasicIdentity, does not encapsulate a
repository name. SsoIdentity, like BasicIdentity, will be used to login to any
repositories in the service context whose credentials are not specified in a
RepositoryIdentity. You can use SsoIdentity in cases where the login is valid for all
repositories involved in the operation, or use SsoIdentity as a fallback for a subset of
the repositories and supply RepositoryIdentity instances for the remaining
repositories. Also note that because SsoIdentity does not contain repository
information, the user name and password are authenticated against the designated
global registry. If no global registry is defined, authentication fails.
You can provide a new SSO token with each request to handle SSO tokens that
constantly change and whose expiration times are not extended on every request.
Note however, that a ServiceContext object should contain only one SsoIdentity, so
when you add a new SsoIdentity to the ServiceContext, you should discard the old
one.
DFS supports Kerberos authentication in the following configurations:
• In a single domain.
• In one-way and two-way trusts between multiple domains.
• In one-way and two-way trusts across forests.
The DFS web services can be configured to use server-side JAX-WS handlers that
interface with the Documentum Server Kerberos implementation. In addition, the
DFS SDK includes library dependencies for Kerberos multi-domain authentication
support. DFS SOAP clients that do not use the support classes or libraries in the SDK
can authenticate against DFS web services using WS-Security headers that comply
with the Kerberos Token Profile 1.1 specification.
This chapter focuses specifically on the use of the DFS Kerberos API to integrate
DFS-based consumers with local or remote DFS services that interact with Documentum
Server instances that are enabled for Kerberos authentication. General information
about Kerberos, as well as details regarding obtaining service tickets from a
Kerberos Key Distribution Center (KDC) are outside the scope of this
documentation. The following documents may be useful in that they address
matters pertaining to Kerberos that are not addressed here.
http://web.mit.edu/Kerberos/
http://technet.microsoft.com/en-us/library/bb742516.aspx
For information on the Java GSS API refer to the Oracle website.
/**
 * BinaryIdentity is not XML serializable and will not be sent over the wire.
 */
public class BinaryIdentity extends Identity
{
    public BinaryIdentity(Object credential, BinaryIdentity.CredentialType credentialType)
    ...
}
CredentialType.KERBEROS_TGT)));
service.create(...);
The following diagram illustrates a mainstream scenario for using local DFS services
and Kerberos authentication in a web application.
Figure 13-1: Web application using local DFS and Kerberos authentication
In steps 1–4 in the diagram a browser client obtains a service ticket from the KDC
and passes it to web application as a SPNEGO token.
Steps 5–7 are the critical steps from the point of view of DFS support:
• In step 5 the web application calls Kerberos utility static methods to extract the
ST from the SPNEGO token, and in step 6 the web application calls the Kerberos
utility again to accept the ST and get a Ticket Granting Ticket (TGT) as a result.
These steps could be performed with a helper method like the following:
// The method signature below is reconstructed for illustration; the parameter and
// return types in the original sample may differ.
private Object acceptSpnegoToken(String SPNEGO)
    throws GSSException
{
    String dfs_st = KerberosUtility.getSTFromSpenegoToken(SPNEGO);
    if (dfs_st != null)
    {
        return KerberosUtility.accept(m_source_spn, dfs_st);
    }
    return null;
}
• In step 7 the web client instantiates a BinaryIdentity using the result returned by
the Kerberos utility and sets the identity in the serviceContext.
CredentialType.KERBEROS_TGT)));
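A minimal sketch of step 7 follows, assuming the TGT returned by the Kerberos
utility is held in a variable named tgt and that a DataPackage named dataPackage has
already been prepared; the use of the Object service and getLocalService here is
illustrative only.

IServiceContext context = ContextFactory.getInstance().newContext();
context.addIdentity(new BinaryIdentity(tgt, BinaryIdentity.CredentialType.KERBEROS_TGT));
// In this local scenario the service is obtained in-process rather than remotely.
IObjectService service = ServiceFactory.getInstance().getLocalService(IObjectService.class, context);
service.create(dataPackage, null);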
From step 8 onward, DFC uses the TGT to obtain STs from the Kerberos utility for
every repository involved in the operation. These STs have the same login
information as the original ST received from the client, and use Kerberos delegation
(provided by the Kerberos utility) to enable Documentum Server to authenticate the
credentials. (These steps are initiated by the DFS runtime and do not require any
code in your application.)
• The source SPN accepted in the Kerberos utility cannot end with any realm
name.
• The JAAS LoginModule used to accept the SPN must be generated by using the
Quest library. Add the following line, which notifies the Quest library of the
Kerberos name servers, to your DFS local client code before any Kerberos
handling happens:
System.setProperty("jcsi.kerberos.nameservers", "<KDC machine-IP address>");
• You cannot pass the local initialized TGT to BinaryIdentity. The TGT must
be generated by using the KerberosUtility API.
The location of the keytab file on the server is specified in the JAAS configuration
file, which needs to be configured in the application server; the configuration
instructions for JAAS vary, depending on the application server you are deploying
to. DFC also requires a Kerberos configuration file. Documentum Platform and
Platform Extensions Installation Guide provides more details.
In Step 1 in the diagram, the DFS client is already assumed to be in possession of the
ST obtained from the KDC. If the client is using the DFS SDK libraries, the DFS client
sets the ST in a client-side JAX-WS handler or WCF behavior. The JAX-WS handler
or WCF behavior takes care of serializing the Kerberos service ticket in the SOAP
WS-Security header. On the server, server-side JAX-WS handlers take care of
validating the Kerberos service ticket using the Kerberos utility (steps 3 and 4), and
passing the ticket to the DFC layer for authentication on the Documentum Server
(steps 5-9).
Note: Due to the Kerberos V5 anti-replay mechanism, each DFS request has to
carry a unique service ticket.
From a DFS integration perspective, the main responsibility of the DFS consumer is
to provide the ST that it has obtained from the KDC for the DFS service to client-side
JAX-WS handlers (Java) or WCF behaviors (.NET). The following sections describe
the APIs provided in the DFS SDK for Java and .NET consumers for this purpose.
JAX-WS and WCF clients that do not use the productivity layer can make use of the
custom JAX-WS SOAP handler or WCF endpoint behavior provided in the DFS
SDK. Other types of SOAP clients will need to ensure that the Kerberos ticket is
contained in the WS-Security as defined in the Oasis Kerberos Token Profile 1.1
(http://www.oasis-open.org/committees/download.php/16788/wss-v1.1-spec-os-
KerberosTokenProfile.pdf) specification. A SOAP sample excerpted from this
specification is shown in “Kerberos Token 1.1 security header” on page 244.
Note: A JAX-WS client that does not use the full DFS productivity layer could
also use the KerberosTokenHandler to add serialization of the Kerberos token
to JAX-WS SOAP processing, by adding it to the handler chain without using
the getRemoteService productivity-layer method.
handlers);
The GetRemoteService method has been overloaded so that it can pass a list of
custom behaviors that the framework will invoke when creating the SOAP message.
Notes
• A WCF client that does not use the full DFS productivity layer could also use
the KerberosTokenHandler to add serialization of the Kerberos token to
WCF SOAP processing, by adding the custom endpoint behavior without
using the getRemoteService productivity-layer method.
• To generate the service ticket for .NET users, the Kerberos delegation level
needs to be enabled by setting the Impersonation Level to Delegate
(ImpersonationLevel.Delegate).
com.emc.documentum.fs.rt.handlers.KerberosTokenServerHandler
• ServerContextHandler
This handler extracts the following identities in sequence from the SOAP header
or the HTTP header:
Do not set multiple credentials on a client-side service context or its handlers unless
you have to enable multiple authentication schemes. For example, if Kerberos SSO is
the only intended authentication scheme, do not set a RepositoryIdentity on the
ServiceContext.
The location of the keytab file on the server is specified in the JAAS configuration
file, which needs to be configured in the application server; the configuration
instructions for JAAS vary, depending on the application server you are deploying
to.
The Documentum Platform and Platform Extensions Installation Guide provides detailed
information on the Kerberos keytab file, JAAS configuration, and krb5.ini.
Note: The best practices and/or test results are derived or obtained after testing
the product in the testing environment. Every effort is made to simulate
common customer usage scenarios during performance testing, but actual
performance results will vary due to differences in hardware and software
configurations, data, and other variables.
Table 13-1: Response-time test results for single- and multi-domain requests
As many as three requests are sent to KDCs to acquire a service ticket. Although
each request's response time is very fast (less than 4 milliseconds), the delay between
requests is over 200 milliseconds. This delay occurs when Nagle’s algorithm is
triggered to combine small segments into a larger one. QUEST sends TCP requests
with two segments; however when the segment size is less than one Ethernet packet,
Nagle's algorithm is triggered.
To reduce these kinds of delays, set the maxpacketsize parameter, which specifies
the threshold (in bytes) at which QUEST switches from UDP to TCP, as follows:
Single User Test        With QUEST's default settings    With QUEST's tuned settings
Kerberos Delegate       654                              32
DFC getSession          30                               45
This chapter provides a general orientation for users of DFC who are considering
creating DFS client applications. It compares some common DFC interfaces and
patterns to their functional counterparts in DFS.
When programming in DFS, some of the central and familiar concepts from DFC are
no longer a part of the model.
Session managers and sessions are not part of the DFS abstraction for DFS
consumers. However, DFC sessions are used by DFS services that interact with the
DFC layer. The DFS consumer sets up identities (repository names and user
credentials) in a service context, which is used to instantiate service proxies, and with
that information DFS services take care of all the details of getting and disposing of
sessions.
DFS does not have (at the exposed level of the API) an object type corresponding to
a SysObject. Instead it provides a generic DataObject class that can represent any
persistent object, and which is associated with a repository object type using a
property that holds the repository type name (for example “dm_document”). Unlike
DFC, DFS does not generally model the repository type system (that is, provide
classes that map to and represent repository types). Any repository type can be
represented by a DataObject, although some more specialized classes can also
represent repository types (for example an Acl or a Lifecycle).
In DFS, we've chosen to call the methods exposed by services operations, in part
because this is what they are called in the WSDLs that represent the web service
APIs. Don't confuse the term with DFC operations—in DFS the term is used
generically for any method exposed by the service.
Generally speaking, DFS services expose just a few service operations (the
TaskManagement service is a notable exception). The operations generally have
simple signatures. For example, the Object service update operation has this
signature:
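The signature sketched below is based on the published Object service interface; the
parameter names are assumptions:

DataPackage update(DataPackage dataPackage, OperationOptions operationOptions);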
The session manager serves as a factory for generating new IDfSession objects using
the IDfSessionManager.newSession method. Immediately after using the session to
do work in the repository, the application should release the session using the
IDfSessionManager.release() method in a finally clause. The session initially remains
available to be reclaimed by the session manager instance that released it, and
subsequently will be placed in a connection pool where it can be shared.
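A minimal sketch of that acquire/use/release pattern, assuming sessionMgr and
repositoryName are already defined:

IDfSession session = sessionMgr.newSession(repositoryName);
try
{
    // do work in the repository using the session
}
finally
{
    // return the session so that it can be reclaimed or pooled
    sessionMgr.release(session);
}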
/**
 * Creates a simplest-case IDfSessionManager
 * The user in this case is assumed to have the same login
 * credentials in any available repository
 */
public static IDfSessionManager getSessionManager
        (String userName, String password) throws Exception
{
    // create a client object using a factory method in DfClientX
    // (the remainder of this listing is reconstructed from the standard DFC pattern)
    IDfClientX clientx = new DfClientX();
    IDfClient client = clientx.getLocalClient();
    IDfSessionManager sessionMgr = client.newSessionManager();
    IDfLoginInfo loginInfo = clientx.getLoginInfo();
    loginInfo.setUser(userName);
    loginInfo.setPassword(password);
    sessionMgr.setIdentity(IDfSessionManager.ALL_DOCBASES, loginInfo);
    return sessionMgr;
}
If the session manager has multiple identities, you can add these lazily, as sessions
are requested. The following method adds an identity to a session manager, stored
in the session manager referred to by the Java instance variable sessionMgr. If there
is already an identity set for the repository name, setIdentity will throw a
DfServiceException. To allow your method to overwrite existing identities, you can
check for the identity (using hasIdentity) and clear it (using clearIdentity) before
calling setIdentity.
// The method opening is reconstructed; the original sample creates the login info
// from the user name and password passed to the method.
public void addIdentity(String repository, String userName, String password)
{
    IDfLoginInfo loginInfo = new DfLoginInfo();
    loginInfo.setUser(userName);
    loginInfo.setPassword(password);
    if (sessionMgr.hasIdentity(repository))
    {
        sessionMgr.clearIdentity(repository);
    }
    sessionMgr.setIdentity(repository, loginInfo);
}
Note that setIdentity does not validate the repository name nor authenticate the user
credentials. This normally is not done until the application requests a session using
the getSession or newSession method; however, you can authenticate the credentials
stored in the identity without requesting a session using the
IDfSessionManager.authenticate method. The authenticate method, like getSession
and newSession, uses an identity stored in the session manager object, and throws
an exception if the user does not have access to the requested repository.
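For example, a short sketch of validating stored credentials up front, assuming
sessionMgr already holds an identity for repositoryName:

// Throws an exception if the stored credentials cannot access the repository;
// no session is created.
sessionMgr.authenticate(repositoryName);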
In DFS, sessions are handled by the service layer and are not exposed in the DFS
client API. DFS services, however, do and must use managed sessions in their
interactions with the DFC layer. For more information
package com.emc.tutorial;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
public class TutorialMakeDocument
{
    public TutorialMakeDocument()
    {
    }
package com.emc.tutorial;
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfType;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.DfTime;
import com.documentum.fc.common.IDfId;
import com.documentum.fc.common.IDfTime;
String attributeName,
String attributeValue)
{
    IDfSession mySession = null;
    try
    {
        // Query the object to get the correct data type for the attribute.
        int attributeDatatype = sysObj.getAttrDataType(attributeName);
        StringBuffer results = new StringBuffer("");
        // Set the value using the setter that matches the data type. The switch
        // header and the boolean case are reconstructed; the original listing is
        // abridged in this excerpt.
        switch (attributeDatatype)
        {
            case IDfType.DF_BOOLEAN:
                sysObj.setBoolean(attributeName,
                        Boolean.valueOf(attributeValue).booleanValue());
                results.append(sysObj.getValue(attributeName).toString());
                break;
            case IDfType.DF_INTEGER:
                sysObj.setInt(attributeName, Integer.parseInt(attributeValue));
                break;
            case IDfType.DF_STRING:
                sysObj.setString(attributeName, attributeValue);
                break;
            case IDfType.DF_TIME:
                DfTime newTime = new DfTime(attributeValue, IDfTime.DF_TIME_PATTERN2);
                sysObj.setTime(attributeName, newTime);
                break;
            case IDfType.DF_UNDEFINED:
                sysObj.setString(attributeName, attributeValue);
                break;
        }
        // Use the fetch() method to verify that the object has not been modified.
        if (sysObj.fetch(null))
        {
            results = new StringBuffer("Object is no longer current.");
        }
        else
        {
            // (saving the object and building the success message are omitted
            // in this excerpt)
        }
    }
    catch (Exception ex)
    {
        ex.printStackTrace();
        return "Set attribute command failed.";
    }
    finally
    {
Working with properties this way, you deal more directly with the Documentum
Server metadata model than you do when working with encapsulated data in DFC
classes that represent repository types.
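For example, a DFS consumer sets repository attributes as generic name/value pairs
on the DataObject; a minimal sketch (the repository name and property values here
are assumptions):

DataObject dataObject = new DataObject(new ObjectIdentity(repositoryName), "dm_document");
PropertySet properties = dataObject.getProperties();
properties.set("object_name", "MyDocument");
properties.set("title", "MyDocument title");
properties.set("a_content_type", "crtext");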
if (!testFile.exists())
{
throw new RuntimeException("Test file: " +
testFile.toString() +
" does not exist");
}
You can also create relationship between objects (such as the relationship between
an object and a containing folder or cabinet, or virtual document relationships), so
that you actually pass in a data graph to the operation, which determines how to
handle the data based on whether the objects already exist in the repository. For
example, the following creates a new (contentless) document and links it to an
existing folder.
String objectName = "sample-object-" + System.currentTimeMillis(); // name reconstructed for illustration
String repositoryName = defaultRepositoryName;
ObjectIdentity sampleObjId = new ObjectIdentity(repositoryName);
DataObject sampleDataObject = new DataObject(sampleObjId, "dm_document");
sampleDataObject.getProperties().set("object_name", objectName);
// The folder identity and relationship construction below are reconstructed;
// the target folder path is an assumption made for illustration.
ObjectIdentity sampleFolderIdentity =
    new ObjectIdentity(new ObjectPath("/Temp"), repositoryName);
ReferenceRelationship sampleFolderRelationship = new ReferenceRelationship();
sampleFolderRelationship.setName(Relationship.RELATIONSHIP_FOLDER);
sampleFolderRelationship.setTarget(sampleFolderIdentity);
sampleFolderRelationship.setTargetRole(Relationship.ROLE_PARENT);
sampleDataObject.getRelationships().add(sampleFolderRelationship);
return sampleDataObject;
}
14.4 Versioning
This section compares techniques for checkin and checkout of objects in DFC and
DFS.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCheckoutNode;
import com.documentum.operations.IDfCheckoutOperation;
public class TutorialCheckOut
{
    public TutorialCheckOut()
    {
    }
try
{
// Instantiate a client.
IDfClientX clientx = new DfClientX();
" + docId);
}
else
{
result.append("Checkout failed.");
}
return result.toString();
}
catch (Exception ex)
{
ex.printStackTrace();
return "Exception hs been thrown: " + ex;
}
finally
{
sessionManager.release(mySession);
}
}
}
If any node corresponds to a document that is already checked out, the system does
not check it out again. DFC does not treat this as an error. If you cancel the checkout,
however, DFC cancels the checkout of the previously checked out node as well.
To check in a document as the next major version (for example, version 1.2 would
become version 2.0), set the checkin version to NEXT_MAJOR. The default increment
is NEXT_MINOR (for example, version 1.2 would become version 1.3).
package com.emc.tutorial;
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCheckinNode;
import com.documentum.operations.IDfCheckinOperation;
idObj);
// Instantiate a client.
IDfClientX clientx = new DfClientX();
cio.setCheckinVersion(IDfCheckinOperation.NEXT_MAJOR);
// getNewObjectId method.
The following are considerations when you are creating a custom checkin operation.
If you specify a document that is not checked out, DFC does not check it in. DFC
does not treat this as an error.
You can specify checkin version, symbolic label, or alternate content file, and you
can direct DFC to preserve the local file.
If between checkout and checkin you remove a link between documents, DFC adds
the orphaned document to the checkin operation as a root node, but the relationship
between the documents no longer exists in the repository.
Executing a checkin operation normally results in the creation of new objects in the
repository. If opCheckin is the IDfCheckinOperation object, you can obtain a
complete list of the new objects by calling opCheckin.getNewObjects().
The list contains the object IDs of the newly created SysObjects.
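A brief sketch of reading that list after executing the operation (variable names
assumed):

IDfList newObjects = opCheckin.getNewObjects();
for (int i = 0; i < newObjects.getCount(); i++)
{
    IDfId newId = (IDfId) newObjects.get(i);
    System.out.println("Created object: " + newId.getId());
}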
In addition, the IDfCheckinNode objects associated with the operation are still
available after you execute the operation. You can use their methods to find out
many other facts about the new SysObjects associated with those nodes.
Note: In this example and other examples in this document, it is assumed that
the service object (proxy) has already been instantiated and is stored as an
instance variable. “Querying the repository in DFS” on page 268 provides a
more linear example that uses a local variable for the service object.
// The first two lines of this excerpt are reconstructed; they retrieve version
// information for the checked-out object from the VersionControl service.
List<VersionInfo> vInfo = versionControlService.getVersionInfo(objIdSet);
VersionInfo versionInfo = vInfo.get(0);
System.out.println("Identity is " + versionInfo.getIdentity());
System.out.println("isCurrent is " + versionInfo.isCurrent());
System.out.println("Version is " + versionInfo.getVersion());
versionControlService.cancelCheckout(objIdSet);
System.out.println("Checkout cancelled");
return resultDp;
}
" +
"where owner_name=" + ownerName);
IDfCollection co = query.execute(session,
IDfQuery.DF_READ_QUERY );
return co;
}
}
Note: For Oracle on Linux, count(*) in a query returns a double. This issue is
caused by Oracle on Linux.
package com.emc.documentum.fs.doc.samples.client;
import java.util.List;
import com.emc.documentum.fs.datamodel.core.CacheStrategyType;
import com.emc.documentum.fs.datamodel.core.DataObject;
import com.emc.documentum.fs.datamodel.core.DataPackage;
import com.emc.documentum.fs.datamodel.core.OperationOptions;
import
com.emc.documentum.fs.datamodel.core.content.ContentTransferMode;
import
com.emc.documentum.fs.datamodel.core.context.RepositoryIdentity;
import
com.emc.documentum.fs.datamodel.core.profiles.ContentTransferProfile;
import com.emc.documentum.fs.datamodel.core.properties.PropertySet;
import com.emc.documentum.fs.datamodel.core.query.PassthroughQuery;
import com.emc.documentum.fs.datamodel.core.query.QueryExecution;
import com.emc.documentum.fs.datamodel.core.query.QueryResult;
import com.emc.documentum.fs.rt.ServiceException;
import com.emc.documentum.fs.rt.context.ContextFactory;
import com.emc.documentum.fs.rt.context.IServiceContext;
import com.emc.documentum.fs.rt.context.ServiceFactory;
import com.emc.documentum.fs.services.core.client.IQueryService;
/**
 * This class demonstrates how to code a typical request to a DFS core service
 * (in this case QueryService). The code goes through the steps of creating a
 * ServiceContext, which contains repository and credential information, and
 * calling the service with the profile.
 *
 * This sample assumes that you have a working installation
 * of DFS that points to a working Documentum Server.
 *
 */
public class QueryServiceTest
{
/************************************************************
 * You must supply valid values for the following fields:
 * (the field declarations below are reconstructed placeholders) */
private static String repository = "<your repository name>";
private static String userName = "<your user name>";
private static String password = "<your password>";
private static String host = "http://<your host>:<port>/services";
/***********************************************************/
/* The module name for the DFS core services */
private static String moduleName = "core";
private IServiceContext serviceContext;
public QueryServiceTest()
{
}
// The assignments below are partially reconstructed; the service proxy variable
// and the query execution call were garbled in this excerpt.
IQueryService querySvc = serviceFactory.getRemoteService(IQueryService.class,
    serviceContext, moduleName, host);
queryEx.setCacheStrategyType(CacheStrategyType.DEFAULT_CACHE_STRATEGY);
QueryResult queryResult = querySvc.execute(query, queryEx, operationOptions);
System.out.println("QueryId == " + query.getQueryString());
System.out.println("CacheStrategyType == " + queryEx.getCacheStrategyType());
DataPackage resultDp = queryResult.getDataPackage();
List<DataObject> dataObjects = resultDp.getDataObjects();
System.out.println("Total objects returned is: " + dataObjects.size());
for (DataObject dObj : dataObjects)
{
    PropertySet docProperties = dObj.getProperties();
    String objectId = dObj.getIdentity().getValueAsString();
    String docName = docProperties.get("object_name").getValueAsString();
    System.out.println("Document " + objectId + " name is " + docName);
}
}
catch (ServiceException e)
{
e.printStackTrace();
}
}
}
}
The following sample starts at step 3, with the processId obtained from the data
returned by getProcessTemplates.
defaultRepositoryName));
// workflow attachment
info.addWorkflowAttachment("dm_sysobject", wfAttachment);
// packages
List<ProcessPackageInfo> pkgList = info.getPackages();
for (ProcessPackageInfo pkg : pkgList)
{
    pkg.addDocuments(docIds);
    pkg.addNote("note for " + pkg.getPackageName() + " " + noteText, true);
}
// alias
if (info.isAliasAssignmentRequired())
{
List<ProcessAliasAssignmentInfo> aliasList = info.getAliasAssignments();
for (ProcessAliasAssignmentInfo aliasInfo : aliasList)
{
    String aliasName = aliasInfo.getAliasName();
    String aliasDescription = aliasInfo.getAliasDescription();
    int category = aliasInfo.getAliasCategory();
    if (category == 1) // User
    {
        aliasInfo.setAliasValue(userName);
    }
    else if (category == 2 || category == 3) // group, or user or group
    {
        aliasInfo.setAliasValue(groupName);
    }
// Performer.
if (info.isPerformerAssignmentRequired())
{
List<ProcessPerformerAssignmentInfo> perfList = info.getPerformerAssignments();
for (ProcessPerformerAssignmentInfo perfInfo : perfList)
{
    int category = perfInfo.getCategory();
    int perfType = perfInfo.getPerformerType();
    String name = "";
    List<String> nameList = new ArrayList<String>();
    if (category == 0) // User
    {
        name = userName;
    }
    else if (category == 1 || category == 2) // Group, or user or group
    {
        name = groupName;
    }
    else if (category == 4) // work queue
    {
        name = queueName;
    }
    nameList.add(name);
    perfInfo.setPerformers(nameList);
ObjectIdentity wf = workflowService.startProcess(info);
System.out.println("started workflow: " + wf.getValueAsString());
}
Note: DFS does not delete files created as a result of input parameters or as the
returned result of DFS public methods. For example, if a DFS client creates a
document object that is specified by a FileContent parameter or gets a
document object with a content return type of FileContent, then the file
specified by FileContent is not deleted.
For UCF checkins, you use the DFS CheckinProfile to specify whether to delete the
local file after UCF checkins complete. Documentum Enterprise Content Services
Reference Guide provides detailed information.
To retrieve dynamic value assistance attribute values, use these classes and methods:
• com.emc.documentum.fs.services.core.impl.
SchemaService.getValueAssistSnapshot method.
• com.emc.documentum.fs.datamodel.core.schema.ValueAssistRequest<E>
• com.emc.documentum.fs.datamodel.core.schema.ValueAssistRequestType
• com.emc.documentum.fs.datamodel.core.schema.ValueAssistSnapshot
• com.emc.documentum.fs.datamodel.core.schema.ValueAssistTypeIdentifier
DFS Javadocs provides more information and examples about the specific classes
and methods.
If the client stacktrace does not provide you with enough information, you can get
the entire stacktrace from the DFS server.
To enable stacktrace on the DFS server, call the setRuntimeProperty method of the
service context (an instance of ServiceContext) to set the
dfs.exception.include_stack_trace property to a Boolean true:
serviceContext.setRuntimeProperty("dfs.exception.include_stack_trace", true);
• WEB-INF/classes
• APP-INF/classes
In the configuration file, add new entries to specify the package or class where you
want to log activities, the log level, and the log file location.
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p [%t] %c - %m%n
#-------------DFS (added for dfs logs)-------------------------------
log4j.logger.com.emc.documentum.fs=DEBUG, DFS_LOG
log4j.appender.DFS_LOG=org.apache.log4j.RollingFileAppender
#-- The path below should be corrected accordingly for non-Windows platforms
log4j.appender.DFS_LOG.File=C\:/Documentum/logs/dfs.log
log4j.appender.DFS_LOG.MaxFileSize=10MB
log4j.appender.DFS_LOG.MaxBackupIndex=10
log4j.appender.DFS_LOG.layout=org.apache.log4j.PatternLayout
log4j.appender.DFS_LOG.layout.ConversionPattern=%d{ABSOLUTE} %5p [%t] %c - %m%n
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="false">
    <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern"
                   value="[%d{dd/MM/yy hh:mm:ss:sss z}] %5p %c: %m%n"/>
        </layout>
    </appender>
    <appender name="FILE" class="org.apache.log4j.RollingFileAppender">
        <param name="File" value="C\:/Documentum/logs/dfs.log"/>
        <param name="MaxFileSize" value="1MB"/>
        <param name="MaxBackupIndex" value="100"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern"
                   value="[%d{dd/MM/yy hh:mm:ss:sss z}] %5p %c: %m%n"/>
        </layout>
    </appender>
    <appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </appender>
If your deployment of DFS uses the .NET productivity layer, follow the instructions
on this web site to enable tracing.
http://msdn.microsoft.com/en-us/library/ms733025.aspx
DFS uses the System.Diagnostics.Trace class for tracing in UCF .NET. To enable
this, add the following elements to the app.config file:
<system.diagnostics>
<trace autoflush="true">
<listeners>
<add type="System.Diagnostics.TextWriterTraceListener"
name="TextWriter"
initializeData="C:\projects\dfs.ucf.net.trace.log" />
</listeners>
</trace>
</system.diagnostics>
For more information about the Trace class, visit the following web site:
http://msdn.microsoft.com/es-es/library/system.diagnostics.trace.aspx
To dump SOAP messages on the Java client side, set the
com.sun.xml.ws.transport.http.client.HttpTransportPipe.dump system property to true.
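For example, the property can be set programmatically before the first service call
(setting it as a JVM argument with -D works equally well):

System.setProperty("com.sun.xml.ws.transport.http.client.HttpTransportPipe.dump", "true");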