DFC Development Guide
Foundation Classes
Version 6
Development Guide
P/N 300005247
EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748‑9103
1‑508‑435‑1000
www.EMC.com
Copyright ©2000 ‑ 2007 EMC Corporation. All rights reserved.
Published August 2007
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS
OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY
DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up‑to‑date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.
This manual describes EMC Documentum Foundation Classes (DFC). It provides overview and
summary information.
For an introduction to other developer resources, refer to DFC developer support, page 14.
Intended audience
This manual is for programmers who understand how to use Java and are generally familiar with the
principles of object oriented design.
Revision History
The following changes have been made to this document.
What Is DFC?
DFC is a key part of the Documentum software platform. While the main user of DFC is other
Documentum software, you can use DFC in any of the following ways:
• Access Documentum functionality from within one of your company’s enterprise applications.
For example, your corporate purchasing application can retrieve a contract from your
Documentum system.
• Customize or extend products like Documentum Desktop or Webtop.
For example, you can modify Webtop functionality to implement one of your company’s business
rules.
• Write a method or procedure for Content Server to execute as part of a workflow or document
lifecycle.
For example, the procedure that runs when you promote an XML document might apply a
transformation to it and start a workflow to subject the transformed document to a predefined
business process.
You can view Documentum functionality as having the following elements:
• Repositories: One or more places where you keep the content and associated metadata of your
organization's information. The metadata resides in a relational database, and the content resides
in various storage elements.
• Content Server: Software that manages, protects, and imposes an object oriented structure on the
information in repositories. It provides intrinsic tools for managing the lifecycles of that
information and automating processes for manipulating it.
• Client programs: Software that provides interfaces between Content Server and end users. The most
common clients run on application servers (for example, Webtop) or on personal computers
(for example, Desktop).
• End users: People who control, contribute, or use your organization's information. They use a
browser to access client programs running on application servers, or they use the integral user
interface of a client program running on their personal computer.
In this view of Documentum functionality, Documentum Foundation Classes (DFC) lies between
Content Server and clients. Documentum Foundation Services are the primary client interface to the
Documentum platform. Documentum Foundation Classes are used for server‑side business logic and
customization.
DFC is Java based. As a result, client programs that are Java based can interface directly with DFC.
When application developers use DFC, it is usually within the customization model of a Documentum
client, though you can also use DFC to develop the methods associated with intrinsic Content Server
functionality, such as document lifecycles.
In the Java application server environment, Documentum client software rests on the foundation
provided by the Web Development Kit (WDK). In the Microsoft personal computer environment,
many customers use Documentum Desktop, an integration with Windows Explorer. Each of these
clients has a customization model that allows you to modify the user interface and also implement
some business logic. However, the principal tool for adding custom business logic to a Documentum
system is to use the Business Object Framework (BOF).
BOF enables you to embody business rules and patterns in reusable elements, called modules. The
most important modules for application developers are type based objects (TBOs) and service based
objects (SBOs). BOF makes it possible to extend some of DFC’s implementation classes. As a result, you
can introduce new functionality in such a way that unmodified existing programs begin immediately
to deliver the new functionality. Aspect modules are similar to TBOs, but enable you to attach
properties and behavior on an instance‑by‑instance basis, independent of the target object’s type.
The Documentum Content Server Fundamentals manual provides a conceptual explanation of the
capabilities of Content Server and how they work. DFC provides a framework for accessing those
capabilities. Using this framework makes your code much more likely to survive future architectural
changes in the Documentum system.
Where Is DFC?
DFC runs on a Java virtual machine (JVM), which can be on:
• The machine that runs Content Server.
For example, to be called from a method as part of a workflow or document lifecycle.
• A middle‑tier system.
For example, on an application server to support WDK or to execute server methods.
For client machines, Documentum 6 now provides Documentum Foundation Services (DFS) as the
primary support for applications communicating with the Documentum platform.
Note: Refer to the DFC release notes for the supported versions of the JVM. These can change from
one minor release to the next.
The DFC Installation Guide describes the locations of files that DFC installs. The config directory
contains several files that are important to DFC’s operation.
Interfaces
Because DFC is large and complex, and because its underlying implementation is subject to change,
you should use DFC’s public interfaces.
Tip: DFC provides factory methods to instantiate objects that implement specified DFC interfaces. If
you bypass these methods to instantiate implementation classes directly, your programs may fail to
work properly, because the factory methods sometimes do more than simply instantiate the default
implementation class. For most DFC programming, the only implementation classes you should
instantiate directly are DfClientX and the exception classes (DfException and its subclasses).
DFC does not generally support direct access to, replacement of, or extension of its implementation
classes. The principal exception to these rules is the Business Object Framework (BOF). For more
information about BOF, refer to Chapter 5, Using the Business Object Framework (BOF).
Client/Server model
The Documentum architecture generally follows the client/server model. DFC‑based programs are
client programs, even if they run on the same machine as a Documentum server. DFC encapsulates its
client functionality in the IDfClient interface, which serves as the entry point for DFC code. IDfClient
handles basic details of connecting to Documentum servers.
You obtain an IDfClient object by calling the getLocalClient method of a DfClientX object.
An IDfSession object represents a connection with the Documentum server and provides services
related to that session. DFC programmers create new Documentum objects or obtain references to
existing Documentum objects through the methods of IDfSession.
To get a session, first create an IDfSessionManager by calling IDfClient.newSessionManager(). Next,
get the session from the session manager using the procedure described in the sections on sessions
and session managers below. For more information about sessions, refer to the chapter on sessions
and session managers later in this guide.
IDfPersistentObject
An IDfPersistentObject corresponds to a persistent object in a repository. With DFC you usually
don’t create objects directly. Instead, you obtain objects by calling factory methods that have
IDfPersistentObject as their return type.
Caution: If the return value of a factory method has type IDfPersistentObject, you may cast it
to an appropriate interface (for example, IDfDocument if the returned object implements that
interface). Do not cast it to an implementation class (for example, DfDocument). Doing so
produces a ClassCastException.
The following steps show the typical sequence for working with a repository object:
1. Obtain an IDfClientX object by instantiating the DfClientX class. For example, execute the
following Java code:
IDfClientX cx = new DfClientX();
2. Obtain an IDfClient object by calling the getLocalClient method of the IDfClientX object. For
example, execute the following Java code:
IDfClient c = cx.getLocalClient();
The IDfClient object must reside in the same process as the Documentum client library, DMCL.
3. Obtain a session manager by calling the newSessionManager method of the IDfClient object.
For example, execute the following Java code:
IDfSessionManager sm = c.newSessionManager();
4. Use the session manager to obtain a session with the repository, that is, a reference to an object that
implements the IDfSession interface. For example, execute the following Java code, where
repositoryName is the name of the repository:
IDfSession s = sm.getSession(repositoryName);
Refer to the chapter on sessions and session managers for information about the difference between
the getSession and newSession methods of IDfSessionManager.
5. If you do not have a reference to the Documentum object, call an IDfSession method (for example,
newObject or getObjectByQualification) to create an object or to obtain a reference to an existing
object.
6. Use routines of the operations package to manipulate the object, that is, to check it out, check it in,
and so forth. For simplicity, the example below does not use the operations package. (Refer to
Chapter 4, Working with Document Operations for examples that use operations).
7. Release the session.
try {
IDfDocument document =
(IDfDocument) session.newObject( "dm_document" ); //Step 5
document.setObjectName( "Test Document" ); //Step 6
document.save(); //Step 6
}
finally {
sMgr.release( session ); //Step 7
}
Steps 1 through 4 obtain an IDfSession object, which encapsulates a session for this application
program with the specified repository.
Step 5 creates an IDfDocument object. The return type of the newObject method is IDfPersistentObject.
You must cast the returned object to IDfDocument in order to use methods that are specific to
documents.
Step 6 of the example code sets the document object name and saves it.
Note that the return type of the newObject method is IDfPersistentObject. The program explicitly casts
the return value to IDfDocument, then uses the object’s save method, which IDfDocument inherits from
IDfPersistentObject. This is an example of interface inheritance, which is an important part of DFC
programming. The interfaces that correspond to repository types mimic the repository type hierarchy.
Step 7 releases the session, that is, places it back under the control of the session manager, sMgr.
The session manager will most likely return the same session the next time the application calls
sMgr.getSession.
Most DFC methods report errors by throwing a DfException object. Java code like that in the above
example normally appears within a try/catch/finally block, with an error handler in the catch block.
Tip: When writing code that calls DFC, it is a best practice to include a finally block to ensure that you
release storage and sessions.
The dfc.properties file contains the following properties that are mandatory for using a global registry.
• dfc.bof.registry.repository
The name of the repository. The repository must project to a connection broker that DFC has
access to.
• dfc.bof.registry.username
The user name part of the credentials that DFC uses to access the global registry. Refer to Global
registry user, page 92 for information about how to create this user.
• dfc.bof.registry.password
The password part of the credentials that DFC uses to access the global registry. The DFC installer
encrypts the password if you supply it. If you want to encrypt the password yourself, use the
following instruction at a command prompt:
java com.documentum.fc.tools.RegistryPasswordUtils password
The dfc.properties file also provides an optional property to resist attempts to obtain unauthorized
access to the global registry. For example, the entry
dfc.bof.registry.connect.attempt.interval=60
sets the minimum interval between connection attempts to the default value of 60 seconds.
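For example, a minimal dfc.properties fragment for the global registry might look like the following; the repository name, user name, and password value are illustrative, and the password would normally be stored in its encrypted form:
dfc.bof.registry.repository=globalregistry
dfc.bof.registry.username=dm_bof_registry
dfc.bof.registry.password=<encrypted_password>
dfc.bof.registry.connect.attempt.interval=60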
Performance tradeoffs
Based on the needs of your organization, you can use property settings to make choices that affect
performance and reliability. For example, preloading provides protection against a situation in which
the global registry becomes unavailable. On the other hand, preloading increases startup time. If you
want to turn off preloading, you can do so with the following setting in dfc.properties:
dfc.bof.registry.preload.enabled=false
You can also adjust the amount of time DFC relies on cached information before checking for
consistency between the local cache and the global registry. For example, the entry
dfc.bof.cacheconsistency.interval=60
sets that interval to the default value of 60 seconds. Because global registry information tends to be
relatively static, you might be able to check less frequently in a production environment. On the
other hand, you might want to check more frequently in a development environment. The check is
inexpensive. If nothing has changed, the check consists of looking at one vstamp object. Refer to the
Content Server Object Reference for information about vstamp objects.
Diagnostic settings
DFC provides a number of properties that facilitate diagnosing and solving problems.
Diagnostic mode
DFC can run in diagnostic mode. You can cause this to happen by including the following setting
in dfc.properties:
dfc.resources.diagnostics.enabled=true
The set of problems that diagnostic mode can help you correct can change without notice. Here are
some examples of issues detected by diagnostic mode:
• Session leaks
• Collection leaks
DFC catches the leaks at garbage collection time. If it finds an unreleased session or an unclosed
collection, it places an appropriate message in the log.
Configuring docbrokers
You must set the repeating property dfc.docbroker.host, with one entry per docbroker. For example:
dfc.docbroker.host[0]=docbroker1.yourcompany.com
dfc.docbroker.host[1]=docbroker2.yourcompany.com
dfc.data.dir
The dfc.data.dir setting identifies the directory used by DFC to store files. By default, it is a folder
relative to the current working directory of the process running DFC. You can set this to another value.
Tracing options
DFC has extensive tracing support. Trace files can be found in a directory called logs under dfc.data.dir.
For simple tracing, add the following line to dfc.properties:
dfc.tracing.enable=true
That will trace DFC entry calls, return values and parameters.
For more extensive tracing information, add the following lines to dfc.properties.
dfc.tracing.enable=true
dfc.tracing.verbose=true
dfc.tracing.include_rpcs=true
This will include more details and the RPCs sent to the server. It is a good idea to start with the simple
trace, because the verbose trace produces much more output to scan and sort through.
Search options
DFC supports the search capabilities of Enterprise Content Integration Services (ECIS) with a set of
properties. The ECIS installer sets some of these. Most of the others specify diagnostic options or
performance tradeoffs.
Performance tradeoffs
Several properties enable you to make tradeoffs between performance and the frequency with which
DFC executes certain maintenance tasks.
DFC caches the contents of properties files such as dfc.properties or dbor.properties. If you change
the contents of a properties file, the new value does not take effect until DFC rereads that file. The
dfc.config.timeout property specifies the interval between checks. The default value is 1 second.
DFC periodically reclaims unused resources. The dfc.housekeeping.cleanup.interval property
specifies the interval between cleanups. The default value is 7 days.
Some properties described in the BOF and global registry settings, page 18 and Search options, page
20 sections also provide performance tradeoffs.
Registry emulation
DFC uses the dfc.registry.mode property to keep track of whether to use a file, rather than the
Windows registry, to store certain settings.
The DFC installer sets this property to file for Unix systems and registry for Windows systems.
You can set the property to file for a Windows system. This is helpful in environments in which you
want to control access to the Windows registry.
Setting the property to file is incompatible with Documentum Desktop. If you use Documentum
Desktop, do not set dfc.registry.mode to file.
In past releases, DFC used the DMCL library to communicate with the server and provided support
for integrating DMCL logging and DFC logging. Since DMCL is no longer used, the features to
integrate its tracing into the DFC log are no longer needed.
Java
From Java, add dctm.jar to your classpath. This file contains a manifest, listing all files that your
Java execution environment needs access to. The javac compiler does not recognize the contents of
the manifest, so for compilation you must ensure that the compiler has access to dfc.jar. This file
contains most Java classes and interfaces that you need to access directly. In some cases you may
have to give the compiler access to other files described in the manifest. In your Java source code,
import the classes and interfaces you want to use.
Ensure that the classpath points to the config directory.
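For example, assuming your source file and the DFC files are reachable from the current directory, compiling and running might look like the following; the class name is illustrative, and the classpath separator depends on your platform:
javac -classpath dfc.jar MyDfcApp.java
java -classpath dctm.jar:config:. MyDfcApp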
Packages
DFC comprises a number of packages, that is, sets of related classes and interfaces.
• The names of DFC Java classes begin with Df (for example, DfCollectionX).
• Names of interfaces begin with IDf (for example, IDfSessionManager).
Interfaces expose DFC’s public methods. Each interface contains a set of related methods. The
Javadocs describe each package and its purpose.
Note:
• The com.documentum.operations package and the IDfSysObject interface in the
com.documentum.fc.client package have some methods for the same basic tasks (for example,
checkin, checkout). In these cases, the IDfSysObject methods are mostly for internal use
and for supporting legacy applications. The methods in the operations package perform the
corresponding tasks at a higher level. For example, they keep track of client‑side files and
implement Content Server XML functionality.
• IDfClientX is the correct interface for accessing factory methods (all of its getXxx methods,
except for those dealing with the DFC version or trace levels).
The DFC interfaces form a hierarchy; some derive methods and constants from others. Use the Tree
link from the home page of the DFC online reference (see DFC online reference documentation, page
23 ) to examine the interface hierarchy. Click any interface to go to its definition.
Each interface inherits the methods and constants of the interfaces above it in the hierarchy. For
example, IDfPersistentObject has a save method. IDfSysObject is below IDfPersistentObject in the
hierarchy, so it inherits the save method. You can call the save method of an object of type IDfSysObject.
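For example, the following sketch (the object ID and names are illustrative, and session is assumed to be an IDfSession) relies on that inheritance; the save call on the IDfSysObject reference is the method declared by IDfPersistentObject:
// getObject returns IDfPersistentObject; cast it to the more specific interface.
IDfSysObject sysObj =
    (IDfSysObject) session.getObject(new DfId("0900000180000111"));
sysObj.setObjectName("renamed object"); // declared by IDfSysObject
sysObj.save();                          // inherited from IDfPersistentObject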
This chapter describes how to get, use, and release sessions, which enable your application to connect
to a repository and access repository objects.
Note: If you are programming in the WDK environment, be sure to refer to Managing Sessions
in Web Development Kit Development Guide for information on session management techniques and
methods specific to WDK.
This chapter contains the following major sections:
• Sessions, page 25
• Session Managers, page 26
• Getting session managers and sessions, page 26
• Objects disconnected from sessions, page 30
• Related sessions (subconnections), page 31
• Original vs. object sessions, page 31
• Transactions, page 32
• Configuring sessions using IDfSessionManagerConfig, page 32
• Getting sessions using login tickets, page 33
• Principal authentication support, page 35
Sessions
To do any work in a repository, you must first get a session on the repository. A session (IDfSession)
maintains a connection to a repository, and gives access to objects in the repository for a specific
logical user whose credentials are authenticated before the session can connect to the repository.
The IDfSession interface provides a large number of methods for examining and modifying the
session itself, the repository and its objects, as well as for using transactions (refer to IDfSession in
the javadoc for a complete reference).
Session Managers
A session manager (IDfSessionManager) manages sessions for a single user on one or more
repositories. You create a session manager using the IDfClient.newSessionManager factory method.
The session manager serves as a factory for generating new IDfSession objects using the
IDfSessionManager.newSession method. Immediately after using the session to do work in the
repository, you should release the session using the IDfSessionManager.release method in a finally
clause. The session initially remains available to be reclaimed by the session manager instance that
released it, and subsequently will be placed in a connection pool where it can be shared.
The IDfSessionManager.getSession method checks for an available shared session, and if one is
available uses it instead of creating a new session. This makes for efficient use of content server
connections, which are an extremely expensive resource, in a web programming environment where a
large number of sessions are required.
import com.documentum.com.DfClientX;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.common.DfException;
. . .
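A minimal sketch (not part of the original listing) of creating a session manager, setting an identity, and getting and releasing a session; the repository name and credentials are illustrative:
DfClientX clientx = new DfClientX();
IDfSessionManager sessionMgr = clientx.getLocalClient().newSessionManager();
// An identity holds the credentials the session manager uses for one repository.
IDfLoginInfo loginInfo = clientx.getLoginInfo();
loginInfo.setUser("username");
loginInfo.setPassword("password");
sessionMgr.setIdentity("repositoryName", loginInfo);
IDfSession session = sessionMgr.getSession("repositoryName");
try {
    // Work with the repository through the session here.
} finally {
    // Release the session as soon as the work is done.
    sessionMgr.release(session);
}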
If the session manager has multiple identities, you can add these lazily, as sessions are requested. The
following method adds an identity to a session manager, stored in the session manager referred to
by the Java instance variable sessionMgr. If there is already an identity set for the repository name,
setIdentity will throw a DfServiceException. To allow your method to overwrite existing identities, you
can check for the identity (using hasIdentity) and clear it (using clearIdentity) before calling setIdentity.
public void addIdentity
(String repository, String userName, String password) throws DfServiceException
{
// create an IDfLoginInfo object and set its fields
IDfLoginInfo loginInfo = new DfLoginInfo();
loginInfo.setUser(userName);
loginInfo.setPassword(password);
// clear any identity already set for this repository
if (sessionMgr.hasIdentity(repository))
{
sessionMgr.clearIdentity(repository);
}
sessionMgr.setIdentity(repository, loginInfo);
}
Note that setIdentity does not validate the repository name or authenticate the user credentials.
This normally isn’t done until the application requests a session using the getSession or newSession
method; however, you can authenticate the credentials stored in the identity without requesting a
session using the IDfSessionManager.authenticate method. The authenticate method, like getSession
and newSession, uses an identity stored in the session manager object, and throws an exception if the
user does not have access to the requested repository.
You can only release a managed session that was obtained using a factory method of the session
manager; that is IDfSessionManager.getSession or IDfSessionManager.newSession. Getting a session
in this way implies ownership, and confers responsibility for releasing the session.
If you get a reference to an existing session, which might for example be stored as a data member of a
typed object, no ownership is implied, and you cannot release the session. This would be the case if
you obtained the session using IDfTypedObject.getSession.
The following snippet demonstrates the first case. The session is owned because it was obtained from
the session manager, so the caller is responsible for releasing it. A session obtained from an object
reference (for example, through IDfTypedObject.getSession) is not owned and must not be released.
// session is owned
IDfSession session = sessionManager.getSession("docbase");
IDfSysObject object = (IDfSysObject) session.getObject(objectId);
// any session that doSomething obtains from the object itself is not owned
mySbo.doSomething(object);
sessionManager.release(session);
Once a session is released, you cannot release or disconnect it again using the same session reference.
The following code will throw a runtime exception:
IDfSession session = sessionManager.getSession("docbase");
sessionManager.release(session);
sessionManager.release(session); // throws runtime exception
Once you have released a session, you cannot use the session reference again to do anything with
the session (such as getting an object).
IDfSession session = sessionManager.getSession("docbase");
sessionManager.release(session);
session.getObject(objectId); // throws runtime exception
Transactions
DFC supports transactions at the session manager level and at the session level. A transaction
at the session manager level includes operations on any sessions obtained by a thread using
IDfSessionManager.newSession() or IDfSessionManager.getSession after the transaction is started (See
IDfSessionManager.beginTransaction() in the DFC Javadoc) and before it completes the transaction
(see IDfSessionManager.commitTransaction() and IDfSessionManager.abortTransaction()).
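A minimal sketch of a session-manager-level transaction (not from the original guide; the repository name and object name are illustrative, and error handling is simplified). Any session the same thread obtains from sessionMgr between beginTransaction and commitTransaction participates in the transaction:
sessionMgr.beginTransaction();
boolean committed = false;
try {
    IDfSession session = sessionMgr.getSession("repositoryName");
    try {
        IDfSysObject obj = (IDfSysObject) session.newObject("dm_document");
        obj.setObjectName("transaction test");
        obj.save();
    } finally {
        sessionMgr.release(session);
    }
    sessionMgr.commitTransaction();
    committed = true;
} finally {
    // Roll back if the commit was never reached.
    if (!committed)
        sessionMgr.abortTransaction();
}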
A transaction at the session level includes operations on the session that occur after the transaction
begins (see IDfSession.beginTrans()) and occur before it completes (see IDfSession.commitTrans() and
IDfSession.abortTrans()). Previous versions of DFC did not support calling beginTrans() on a session
obtained from a session manager. This restriction has been removed. The code below shows how a
TBO can use a session‑level transaction.
public class MyTBO
{
protected void doSave() throws DfException
{
boolean txStartedHere = false;
if ( !getObjectSession().isTransactionActive() )
{
getObjectSession().beginTrans();
txStartedHere = true;
}
try
{
doSomething(); // Do something that requires transactions
if ( txStartedHere )
getObjectSession().commitTrans();
}
finally
{
if ( txStartedHere && getObjectSession().isTransactionActive())
getObjectSession().abortTrans();
}
}
}
To get a session for the user using this login ticket, you pass the ticket in place of the user’s password
when setting the identity for the user’s session manager. The following sample assumes that you have
already instantiated a session manager for the user.
public void getSessionWithTicket
(String repository, String userName, String ticket) throws DfException
{
// The ticket, obtained using the preceding sample method, is passed
// in place of the user's password.
IDfLoginInfo loginInfo = new DfLoginInfo();
loginInfo.setUser(userName);
loginInfo.setPassword(ticket);
if (userSessionMgr.hasIdentity(repository))
{
userSessionMgr.clearIdentity(repository);
}
userSessionMgr.setIdentity(repository, loginInfo);
IDfSession sess = userSessionMgr.getSession(repository);
try
{
System.out.println("Got session: " + sess.getSessionId());
System.out.println("Username: " + sess.getLoginInfo().getUser());
}
// Release the session in a finally clause.
finally
{
userSessionMgr.release(sess);
}
}
session to the session manager, so that disconnecting the session no longer makes references to the
object invalid.
The primary use of DFC is to add business logic to your applications. The presentation layer is built
using the Web Development Kit, most often via customization of the Webtop interface. Building
a custom UI on top of DFC requires a great deal of work, and will largely recreate effort that has
already been done for you.
While you should not create a completely new interface for your users, it can be helpful to have a small
application that you can use to work with the API directly and see the results. With that in mind, here
is a rudimentary interface class that will enable you to add and test behavior using the Operation API.
These examples were created with Oracle JDeveloper, and feature its idiosyncratic ways of building
the UI. You can use any IDE you prefer, using this example as a guideline.
This chapter contains the following sections:
• The TutorialSessionManager class, page 39
• The DfcTestFrame class, page 41
• The DfcTutorialApplication class, page 44
Figure 2. TutorialSessionManager.java
package dfctestenvironment;
import com.documentum.com.DfClientX;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfLoginInfo;
// Constructor
public TutorialSessionManager(String rep, String user, String pword) {
try {
m_repository = rep;
// Populate a login info object with the user credentials.
DfClientX clientx = new DfClientX();
IDfLoginInfo loginInfo = clientx.getLoginInfo();
loginInfo.setUser(user);
loginInfo.setPassword(pword);
// Create the session manager from a local client.
sMgr = clientx.getLocalClient().newSessionManager();
// Set the identity of the session manager object based on the repository
// name and login information. The session manager object now has the required
// information to connect to the repository, but is not actively connected.
sMgr.setIdentity(m_repository, loginInfo);
} catch (DfException e) {
e.printStackTrace();
}
}
// Return the populated session manager to the calling class.
public IDfSessionManager getSessionManager() {
return sMgr;
}
Figure 3. DfcTestFrame.java
package dfctutorialenvironment;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfTypedObject;
import java.awt.Button;
import java.awt.Frame;
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import java.awt.Insets;
import java.awt.Label;
import java.awt.List;
import java.awt.Rectangle;
import java.awt.SystemColor;
import java.awt.TextField;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.StringTokenizer;
import java.util.Vector;
// Generate UI elements
public DfcTestFrame() {
try {
jbInit();
} catch (Exception e) {
e.printStackTrace();
}
}
// Initialize UI components
textField_arguments.setText("Arguments go here.");
label_results.setText("Messages appear here.");
button_directory.setLabel("Directory");
button_directory.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
button_directory_actionPerformed(e);
}
});
this.add(textField_arguments,
new GridBagConstraints(
0, 0, 3, 1, 1.0, 0.0,
GridBagConstraints.WEST,
GridBagConstraints.HORIZONTAL,
new Insets(5, 10, 0, 15), 527, 0)
);
this.add(list_id,
new GridBagConstraints(
0, 2, 3, 1, 1.0, 1.0,
GridBagConstraints.CENTER,
GridBagConstraints.BOTH,
new Insets(80, 10, 0, 15), 372, 165)
);
this.add(button_directory,
new GridBagConstraints(
0, 1, 1, 1, 0.0, 0.0,
GridBagConstraints.CENTER,
GridBagConstraints.NONE,
new Insets(10, 10, 0, 0), 14, 8)
);
this.add(label_results,
new GridBagConstraints(
0, 3, 2, 1, 0.0, 0.0,
GridBagConstraints.WEST,
GridBagConstraints.NONE,
new Insets(0, 10, 7, 0), 507, 11)
);
}
// Handler for the Directory button. To use the button, enter the arguments
// repository_name, user_name, password, directory_path in the arguments field.
try {
// Cycle through the collection getting the object ID and adding it to the
// m_fileIDs Vector. Get the object name and add it to the file list control.
while (folderList.next())
{
IDfTypedObject doc = folderList.getTypedObject();
docId = doc.getString("r_object_id");
docName = doc.getString("object_name");
list_id.add(docName);
m_fileIDs.addElement(docId);
}
Figure 4. DfcTutorialApplication.java
package dfctutorialenvironment;
import java.awt.Dimension;
import java.awt.Frame;
import java.awt.Toolkit;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import javax.swing.UIManager;
frame.setLocation(
( screenSize.width - frameSize.width ) / 2,
( screenSize.height - frameSize.height ) / 2 );
frame.addWindowListener( new WindowAdapter()
{ public void windowClosing(WindowEvent e) { System.exit(0); } });
frame.setVisible(true);
}
To list the contents of a directory, enter a comma-delimited set of values in the arguments field:
repository_name, user_name, password, directory_path. Then click the Directory button to update the file listing.
This example demonstrates how you can create an application that connects to Content Server
and retrieves information. Appendix A, Sample Test Interface, provides a complete code listing of
an expanded version of the DfcTestFrame class that allows you to try out several of the document
manipulation operations described in Operations for manipulating documents, page 57.
This chapter describes the way to use DFC to perform the most common operations on documents.
Most information about documents also applies to the broader category of repository objects
represented by the IDfSysObject interface.
The chapter contains the following main sections:
• Introduction to documents, page 49
• Introduction to operations, page 51
• Types of operation, page 52
• Basic steps for manipulating documents, page 53
• Operations for manipulating documents, page 57
• Handling document manipulation errors, page 83
• Operations and transactions, page 85
Introduction to documents
The Documentum Content Server Fundamentals manual explains Documentum facilities for managing
documents. This section provides a concise summary of what you need to know to understand
the remainder of this chapter.
Documentum maintains a repository of objects that it classifies according to a type hierarchy. For this
discussion, SysObjects are at the top of the hierarchy. A document is a specific kind of SysObject. Its
primary purpose is to help you manage content.
Documentum maintains more than one version of a document. A version tree is an original document
and all of its versions. Every version of the document has a unique object ID, but every version has the
same chronicle ID, namely, the object ID of the original document.
Virtual documents
A virtual document is a container document that includes one or more objects, called components,
organized in a tree structure. A component can be another virtual document or a simple document. A
virtual document can have any number of components, nested to any level. Documentum imposes no
limit on the depth of nesting in a virtual document.
Documentum uses two sets of terminology for virtual documents. In the first set, a virtual document
that contains a component is called the component’s parent, and the component is called the virtual
document’s child. Children, or children of children to any depth, are called descendants.
Note: Internal variables, Javadoc comments, and registry keys sometimes use the alternate spelling
descendent.
The second set of terminology derives from graph theory, even though a virtual document forms a
tree, and not an arbitrary graph. The virtual document and each of its descendants is called a node. The
directed relationship between a parent node and a child node is called an edge.
In both sets of terminology, the original virtual document is sometimes called the root.
You can associate a particular version of a component with the virtual document (this is called early
binding) or you can associate the component’s entire version tree with the virtual document. The
latter allows you to select which version to include at the time you construct the document (this
is called late binding).
Documentum provides a flexible set of rules for controlling the way it assembles documents. An
assembly is a snapshot of a virtual document. It consists of the set of specific component versions that
result from assembling the virtual document according to a set of binding rules. To preserve it, you
must attach it to a SysObject: usually either the root of the virtual document or a SysObject created to
hold the assembly. A SysObject can have at most one attached assembly.
You can version a virtual document and manage its versions just as you do for a simple document.
Deleting a virtual document version also removes any containment objects or assembly objects
associated with that version.
When you copy a virtual document, the server can make a copy of each component, or it can create an
internal reference or pointer to the source component. It maintains information in the containment
object about which of these possibilities to choose. One option is to require the copy operation
to specify the choice.
Whether it copies a component or creates a reference, Documentum creates a new containment object
corresponding to that component.
Note: DFC allows you to process the root of a virtual document as an ordinary document. For
example, suppose that doc is an object of type IDfDocument and also happens to be the root of a
virtual document. If you tell DFC to check out doc, it does not check out any of the descendants. If
you want DFC to check out the descendants along with the root document, you must first execute an
instruction like
IDfVirtualDocument vDoc =
doc.asVirtualDocument("CURRENT", false);
If you tell DFC to check out vDoc, it processes the current version of doc and each of its descendants.
The DFC Javadocs explain the parameters of the asVirtualDocument method.
Documentum represents the nodes of virtual documents by containment objects and the nodes of
assemblies by assembly objects. An assembly object refers to the SysObject to which the assembly is
attached, and to the virtual document from which the assembly came.
If an object appears more than once as a node in a virtual document or assembly, each node has a
separate associated containment object or assembly object. No object can appear as a descendant of
itself in a virtual document.
XML Documents
Documentum’s XML support has many features. Information about those subjects appears in
Documentum Content Server Fundamentals and in the XML Application Development Guide.
Using XML support requires you to provide a controlling XML application. When you import an XML
document, DFC examines the controlling application’s configuration file and applies any chunking
rules that you specify there.
If the application’s configuration file specifies chunking rules, DFC creates a virtual document from
the chunks it creates. It imports other documents that the XML document refers to as entity references
or links, and makes them components of the virtual document. It uses attributes of the containment
object associated with a component to remember whether it came from an entity or a link and to
maintain other necessary information. Assembly objects have the same XML‑related attributes
as containment objects do.
Introduction to operations
Operations are used to manipulate documents in Documentum. Operations provide interfaces and
a processing environment to ensure that Documentum can handle a variety of documents and
collections of documents in a standard way. You obtain an operation of the appropriate kind, place
one or more documents into it, and execute the operation.
All of the examples in this chapter pertain only to documents, but operations can be used to work with
objects of type IDfSysObject, not just the subtype IDfDocument.
For example, to check out a document, take the following steps:
1. Obtain a checkout operation.
2. Add the document to the operation.
3. Execute the operation.
DFC carries out the behind‑the‑scenes tasks associated with checking out a document. For a
virtual document, for example, DFC adds all of its components to the operation and ensures
that links between them are still valid when it stores the documents into the checkout directory
on the file system. It corrects filename conflicts, and it keeps a local record of which documents it
checks out. This is only a partial description of what DFC does when you check out a document.
Because of the number and complexity of the underlying tasks, DFC wraps seemingly elementary
document‑manipulation tasks in operations.
An IDfClientX object provides factory methods for creating operations. Once you have an IDfClientX
object (say cX) and a SysObject (say doc) representing the document, the code for the checkout looks
like this:
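The full listing is not reproduced here; a minimal sketch of that pattern, with error handling omitted, follows:
IDfCheckoutOperation operation = cX.getCheckoutOperation();
// Add the document; a null return means no node could be created.
IDfCheckoutNode node = (IDfCheckoutNode) operation.add(doc);
// Execute the operation; false indicates that errors occurred.
boolean succeeded = operation.execute();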
In your own applications, you would add code to handle a null returned by the add method or errors
produced by the execute method.
Types of operation
DFC provides operation types and corresponding nodes (to be explained in subsequent sections) for
many tasks you perform on documents or, where appropriate, files or folders. The sections under
Operations for manipulating documents, page 57, describe these in detail.
The add method returns the newly created node, or null if it fails (refer to Handling
document manipulation errors, page 83).
b. Set parameters to change the way the operation handles this item and its descendants.
Each type of operation node has methods for setting parameters that are important for that
type of node. These are generally the same as the methods for the corresponding type of
operation. If you do not set parameters on the node, the operation handles this item according
to the parameters set on the operation by its setXxx methods.
c. Repeat the previous two substeps for all items you add to the operation.
4. Invoke the operation’s inherited execute method to perform the task.
Note that this step may add and process additional nodes. For example, if part of the execution
entails scanning an XML document for links, DFC may add the linked documents to the operation.
The execute method returns a boolean value to indicate its success (true) or failure (false). See
Handling document manipulation errors, page 83, for more information.
5. Process the results.
a. Handle errors.
If it detects errors, the execute method returns the boolean value false. You can use the
operation’s inherited getErrors method to obtain a list of failures.
For details of how to process errors, see Processing the results, page 56.
b. Perform tasks specific to the operation.
For example, after an import operation, you may want to take note of all of the new objects that
the operation created in the repository. You might want to display or modify their properties.
Each operation factory method of IDfClientX instantiates an operation object of the corresponding
type. For example, the getImportOperation factory method instantiates an IDfImportOperation object.
Different operations accept different parameters to control the way they carry out their tasks. Some
parameters are optional, some mandatory.
Note: You must use the setSession method of IDfImportOperation or IDfXMLTransformOperation to
set a repository session before adding nodes to either of these types of operation.
An operation contains a structure of nodes and descendants. When you obtain the operation, it
has no nodes. When you use the operation’s add method to include documents in the operation, it
creates new root nodes. The add method returns the node as an IDfOperationNode object. You must
cast it to the appropriate operation node type to use any methods the type does not inherit from
IDfOperationNode (see Working with nodes, page 56).
Note: If the add method cannot create a node for the specified document, it returns null.
Be sure to test for this case, because the method does not usually throw an exception.
DFC might include additional nodes in the operation. For example, if you add a repository folder, DFC
adds nodes for the documents linked to that folder, as children of the folder’s node in the operation.
Each node can have zero or more child nodes. If you add a virtual document, the add method creates
as many descendant nodes as necessary to create an image of the virtual document’s structure
within the operation.
You can add objects from more than one repository to an operation.
You can use a variety of methods to obtain and step through all nodes of the operation (see Working
with nodes, page 56 ). You might want to set parameters on individual nodes differently from the
way you set them on the operation.
The operations package processes the objects in an operation as a group, possibly invoking many
DFC calls for each object. Operations encapsulate Documentum client conventions for registering,
naming, and managing local content files.
DFC executes the operation in a predefined set of steps, applying each step to all of the documents in
the operation before proceeding to the next step. It processes each document in an operation only
once, even if the document appears at more than one node.
Once DFC has executed a step of the operation on all of the documents in the operation, it cannot
execute that step again. If you want to perform the same task again, you must construct a new
operation to do so.
Normally, you use the operation’s execute method and let DFC proceed through the execution steps.
DFC provides a limited ability for you to execute an operation in steps, so that you can perform special
processing between steps. Documentum does not recommend this approach, because the number and
identity of steps in an operation may change with future versions of DFC. If you have a programming
hurdle that you cannot get over without using steps, work with Documentum Technical Support
or Consulting to design a solution.
If DFC encounters an error while processing one node in an operation, it continues to process the other
nodes. For example, if one object in a checkout operation is locked, the operation checks out the
others. Only fatal conditions cause an operation to throw an exception. DFC catches other exceptions
internally and converts them into IDfOperationError objects. The getErrors method returns an IDfList
object containing those errors, or a null if there are no errors. The calling program can examine
the errors, and decide whether to undo the operation, or to accept the results for those objects that
did not generate errors.
Once you have checked the errors you may want to examine and further process the results of the
operation. The next section, Working with nodes, page 56, shows how to access the objects and
results associated with the nodes of the operation.
This section shows how to access the objects and results associated with the nodes of an operation.
Note: Each operation node type (for example, IDfCheckinNode) inherits most of its methods from
IDfOperationNode.
The getChildren method of an IDfOperationNode object returns the first level of nodes under the given
node. You can use this method recursively to step through all of the descendant nodes. Alternatively,
you can use the operation’s getNodes method to obtain a flat list of descendant nodes, that is, an
IDfList object containing all of its descendant nodes without the structure.
These methods return nodes as objects of type IDfOperationNode, not as the specific node type (for
example, IDfCheckinNode).
The getId method of an IDfOperationNode object returns a unique identifier for the node, not the
object ID of the corresponding document. IDfOperationNode does not have a method for obtaining the
object ID of the corresponding object. Each operation node type (for example, IDfCheckinNode) has
its own getObjectID method. You must cast the IDfOperationNode object to a node of the specific
type before obtaining the object ID.
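For illustration, here is a minimal sketch (variable names are illustrative, and it assumes the getObjectId accessor described above) that walks the flat list returned by getNodes and casts checkout nodes to their specific type to read the object ID:
IDfList nodes = operation.getNodes();
for (int i = 0; i < nodes.getCount(); i++) {
    IDfOperationNode node = (IDfOperationNode) nodes.get(i);
    System.out.println("Node id: " + node.getId());
    if (node instanceof IDfCheckoutNode) {
        // Cast to the specific node type to obtain the corresponding object ID.
        IDfCheckoutNode coNode = (IDfCheckoutNode) node;
        System.out.println("Object id: " + coNode.getObjectId());
    }
}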
Checking out
The execute method of an IDfCheckoutOperation object checks out the documents in the operation.
The checkout operation:
• Locks the document
• Copies the document to your local disk
• Always creates registry entries to enable DFC to manage the files it creates on the file system
Example 4-1. TutorialCheckout.java
package dfctutorialenvironment;
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfId;
import com.documentum.operations.IDfCheckoutNode;
import com.documentum.operations.IDfCheckoutOperation;
// Instantiate a client.
IDfClientX clientx = new DfClientX();
// Create a checkout operation object.
IDfCheckoutOperation coOp = clientx.getCheckoutOperation();
// Set the location where the local copy of the checked out file is stored.
coOp.setDestinationDirectory("C:\\");
// Get the document to check out.
IDfDocument doc = (IDfDocument) mySession.getObject(new DfId(docId));
// Create the checkout node by adding the document to the checkout operation.
IDfCheckoutNode coNode = (IDfCheckoutNode)coOp.add(doc);
// Execute the operation to lock the document and copy it to the local directory.
coOp.execute();
}
}
}
If the operation’s add method receives a virtual document as an argument, it also adds all of the
document’s descendants (determined by applying the applicable binding rules), creating a separate
node for each.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfVirtualDocument;
import com.documentum.fc.common.DfId;
import com.documentum.operations.IDfCheckoutNode;
import com.documentum.operations.IDfCheckoutOperation;
IDfSession mySession,
String docId)
{
try {
String result = "";
// Instantiate a client and create a checkout operation object.
IDfClientX clientx = new DfClientX();
IDfCheckoutOperation coOp = clientx.getCheckoutOperation();
// Set the location where the local copy of the checked out file is stored.
coOp.setDestinationDirectory("C:\\");
// Add the root as a virtual document so that its descendants are checked out too.
IDfVirtualDocument vDoc = ((IDfDocument) mySession.getObject(new DfId(docId)))
.asVirtualDocument("CURRENT", false);
IDfCheckoutNode coNode = (IDfCheckoutNode)coOp.add(vDoc);
coOp.execute();
}
}
}
Checking in
The execute method of an IDfCheckinOperation object checks documents into the repository. It creates
new objects as required, transfers the content to the repository, and removes local files if appropriate.
It checks in existing objects that any of the nodes refer to (for example, through XML links).
package dfctutorialenvironment;
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCheckinNode;
import com.documentum.operations.IDfCheckinOperation;
// Instantiate a client and create a checkin operation object.
IDfClientX clientx = new DfClientX();
IDfCheckinOperation cio = clientx.getCheckinOperation();
// Set the version increment. In this case, the next major version.
cio.setCheckinVersion(IDfCheckinOperation.NEXT_MAJOR);
// Create a document object that represents the document being checked in.
IDfDocument doc =
(IDfDocument) mySession.getObject(new DfId(docId));
// Add the document to the operation, creating a checkin node, then execute.
IDfCheckinNode node = (IDfCheckinNode) cio.add(doc);
cio.execute();
// After the item is created, you can get it immediately using the
// getNewObjectId method.
IDfId newId = node.getNewObjectId();
To check in a document, you pass an object of type IDfSysObject or IDfVirtualDocument, not the file on
the local file system, to the operation’s add method. In the local client file registry, DFC records the path
and filename of the local file that represents the content of an object. If you move or rename the file,
DFC loses track of it and reports an error when you try to check it in.
Setting the content file, as in IDfCheckinNode.setFilePath, overrides DFC’s saved information.
If you specify a document that is not checked out, DFC does not check it in. DFC does not treat
this as an error.
You can specify checkin version, symbolic label, or alternate content file, and you can direct DFC to
preserve the local file.
If between checkout and checkin you remove a link between documents, DFC adds the orphaned
document to the checkin operation as a root node, but the relationship between the documents no
longer exists in the repository.
Executing a checkin operation normally results in the creation of new objects in the repository. If
opCheckin is the IDfCheckinOperation object, you can obtain a complete list of the new objects by
calling
IDfList list = opCheckin.getNewObjects();
The list contains the object IDs of the newly created SysObjects.
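For illustration, a minimal sketch (not part of the original listing) of stepping through that list, assuming its entries can be read as IDfId values with the getId accessor of IDfList:
for (int i = 0; i < list.getCount(); i++) {
    // Each entry is the object ID of a newly created SysObject.
    IDfId newObjectId = list.getId(i);
    System.out.println("Created object: " + newObjectId);
}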
In addition, the IDfCheckinNode objects associated with the operation are still available after you
execute the operation (see Working with nodes, page 56 ). You can use their methods to find out many
other facts about the new SysObjects associated with those nodes.
Cancelling checkout
The execute method of an IDfCancelCheckoutOperation object cancels the checkout of documents by
releasing locks, deleting local files if appropriate, and removing registry entries.
If the operation’s add method receives a virtual document as an argument, it also adds all of the
document’s descendants (determined by applying the applicable binding rules), creating a separate
operation node for each.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;
import com.documentum.operations.IDfCancelCheckoutNode;
import com.documentum.operations.IDfCancelCheckoutOperation;
// Instantiate a client and create a cancel checkout operation object.
IDfClientX clientx = new DfClientX();
IDfCancelCheckoutOperation cco = clientx.getCancelCheckoutOperation();
// Keep the local copy of the file after the checkout is cancelled.
cco.setKeepLocalFile(true);
// Get the document and add it to the operation.
IDfDocument doc = (IDfDocument) mySession.getObject(new DfId(docId));
IDfCancelCheckoutNode node = (IDfCancelCheckoutNode)cco.add(doc);
if (node==null) {return "Node is null";}
if (!cco.execute()){
return "Operation failed";
}
return "Successfully cancelled checkout of file ID: " + docId;
}
catch (Exception e){
e.printStackTrace();
return "Exception thrown.";
}
}
}
If the operation’s add method receives a virtual document as an argument, it also adds all of the
document’s descendants (determined by applying the applicable binding rules), creating a separate
operation node for each.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfVirtualDocument;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;
import com.documentum.operations.IDfCancelCheckoutNode;
import com.documentum.operations.IDfCancelCheckoutOperation;
// Check to see if the node is null; the add method does not throw an error in this case.
if (node==null) {return "Node is null";}
Importing
The execute method of an IDfImportOperation object imports files and directories into the repository.
It creates objects as required, transfers the content to the repository, and removes local files if
appropriate. If any of the nodes of the operation refer to existing local files (for example, through XML
or OLE links), it imports those into the repository too.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfId;
import com.documentum.fc.common.IDfList;
import com.documentum.operations.IDfFile;
import com.documentum.operations.IDfImportNode;
import com.documentum.operations.IDfImportOperation;
Use the object’s setSession method to specify a repository session and the object’s
setDestinationFolderId method to specify the repository cabinet or folder into which the operation
should import documents.
You must set the session before adding files to the operation.
You can set the destination folder, either on the operation or on each node. The node setting overrides
the operation setting. If you set neither, DFC uses its default destination folder.
You can add an IDfFile object or specify a file system path. You can also specify whether to keep the
file on the file system (the default choice) or delete it after the operation is successful.
If you add a file system directory to the operation, DFC imports all files in that directory and proceeds
recursively to add each subdirectory to the operation. The resulting repository folder hierarchy
mirrors the file system directory hierarchy.
You can also control version labels, object names, object types and formats of the imported objects.
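A minimal sketch of an import (not from the original listing); it assumes mySession is an IDfSession, destFolder is the destination IDfFolder, and the file path is illustrative:
// Instantiate a client and create an import operation object.
IDfClientX clientx = new DfClientX();
IDfImportOperation io = clientx.getImportOperation();
// The session must be set before adding files to the operation.
io.setSession(mySession);
io.setDestinationFolderId(destFolder.getObjectId());
// Add a local file to the operation, creating an import node.
IDfFile localFile = clientx.getFile("C:\\temp\\example.txt");
IDfImportNode node = (IDfImportNode) io.add(localFile);
io.execute();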
XML processing
You can import XML files without doing XML processing. If nodeImport is an IDfImportNode object,
you can turn off XML processing on the node and all its descendants by calling
nodeImport.setXMLApplicationName("Ignore");
Turning off this kind of processing can shorten the time it takes DFC to perform the operation.
Executing an import operation results in the creation of new objects in the repository. If opImport is
the IDfImportOperation object, you can obtain a complete list of the new objects by calling
IDfList list = opImport.getNewObjects();
The list contains the object IDs of the newly created SysObjects.
In addition, the IDfImportNode objects associated with the operation are still available after you
execute the operation (see Working with nodes, page 56). You can use their methods to find out many
other facts about the new SysObjects associated with those nodes. For example, you can find out object
IDs, object names, version labels, file paths, and formats.
Exporting
The execute method of an IDfExportOperation object creates copies of documents on the local file
system. If the operation’s add method receives a virtual document as an argument, it also adds all
of the document’s descendants (determined by applying the applicable binding rules), creating a
separate node for each.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;
import com.documentum.operations.IDfExportNode;
import com.documentum.operations.IDfExportOperation;
) throws DfException {
// Instantiate a client and create an export operation object.
IDfClientX clientx = new DfClientX();
IDfExportOperation eo = clientx.getExportOperation();
// Get the document to export.
IDfDocument doc = (IDfDocument) mySession.getObject(new DfId(docId));
// Create an export node, adding the document to the export operation object.
IDfExportNode node = (IDfExportNode)eo.add(doc);
eo.execute();
Copying
The execute method of an IDfCopyOperation object copies the current versions of documents or folders
from one repository location to another.
If the operation’s add method receives a virtual document as an argument, it also adds all of the
document’s descendants (determined by applying the applicable binding rules), creating a separate
node of the operation for each.
If the add method receives a folder, it also adds all documents and folders linked to that folder
(unless you override this default behavior). This continues recursively until the entire hierarchy of
documents and subfolders under the original folder is part of the operation. The execute method
replicates this hierarchy at the target location.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfId;
import com.documentum.operations.IDfCopyNode;
import com.documentum.operations.IDfCopyOperation;
) throws DfException {
// Create a copy node, adding the document to the copy operation object.
IDfCopyNode node = (IDfCopyNode)co.add(doc);
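A minimal sketch of the steps around the fragment above follows; session, strDocId, and idDestFolder
are assumed to be supplied by the caller.
IDfClientX clientx = new DfClientX();
IDfCopyOperation co = clientx.getCopyOperation();
co.setDestinationFolderId(idDestFolder);  // repository location for the copies
IDfDocument doc = (IDfDocument) session.getObject(new DfId(strDocId));
IDfCopyNode node = (IDfCopyNode) co.add(doc);
if (!co.execute())
{
    // examine co.getErrors()
}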
Moving
The execute method of an IDfMoveOperation object moves the current versions of documents or folders
from one repository location to another by unlinking them from the source location and linking them
to the destination. Versions other than the current version remain linked to the original location.
If the operation’s add method receives a virtual document as an argument, it also adds all of the
document’s descendants (determined by applying the applicable binding rules), creating a separate
node for each.
If the add method receives a folder, it adds all documents and folders linked to that folder (unless you
override this default behavior). This continues recursively until the entire hierarchy of documents
and subfolders under the original folder is part of the operation. The execute method links this
hierarchy to the target location.
// Create a move node, adding the document to the move operation object.
IDfMoveNode node = (IDfMoveNode)mo.add(doc);
Follow the steps in Steps for manipulating documents, page 53. Options for moving are essentially the
same as for copying.
If the operation entails moving a checked out document, DFC leaves the document unmodified
and reports an error.
Deleting
The execute method of an IDfDeleteOperation object removes documents and folders from the
repository.
If the operation’s add method receives a virtual document as an argument, it also adds all of
the document’s descendants (determined by applying the applicable binding rules), creating a
separate node for each. You can use the enableDeepDeleteVirtualDocumentsInFolders method of
IDfDeleteOperation to override this behavior.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;
import com.documentum.operations.IDfDeleteNode;
import com.documentum.operations.IDfDeleteOperation;
// Set the deletion policy. You must do this prior to adding nodes to
// the Delete operation. Note that string values must be compared with
// equals, not with ==.
if (currentVersionOnly.equals("true")) {
    delo.setVersionDeletionPolicy(IDfDeleteOperation.SELECTED_VERSIONS);
}
else {
    delo.setVersionDeletionPolicy(IDfDeleteOperation.ALL_VERSIONS);
}
Follow the steps in Steps for manipulating documents, page 53. If the operation entails deleting a
checked out document, DFC leaves the document unmodified and reports an error.
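A minimal sketch that combines these steps follows; session and strDocId are assumed to be supplied
by the caller, and all versions are deleted.
IDfClientX clientx = new DfClientX();
IDfDeleteOperation delo = clientx.getDeleteOperation();
// The deletion policy must be set before any nodes are added.
delo.setVersionDeletionPolicy(IDfDeleteOperation.ALL_VERSIONS);
IDfDocument doc = (IDfDocument) session.getObject(new DfId(strDocId));
IDfDeleteNode node = (IDfDeleteNode) delo.add(doc);
if (!delo.execute())
{
    // examine delo.getErrors()
}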
Predictive caching
Predictive caching can help you to improve the user experience by sending system objects to Branch
Office Caching Services servers before they are requested by users. For example, a company‑wide
report could be sent to all repository caches when it is added to the local repository rather than waiting
for a user request on each server. Another use for this capability would be to cache an object in
response to an advance in a workflow procedure, making the document readily available for the
next user in the flow.
void transformXML2HTMLUsingStylesheetObject(
IDfClientX clientx, // Factory for operations
IDfSession session, // Repository session (required)
IDfDocument docStylesheet ) // XSL stylesheet in repository
throws DfException, IOException
{
// Obtain transformation operation
IDfXMLTransformOperation opTran = clientx.getXMLTransformOperation();
void transformXML2HTMLUsingStylesheetFile(
IDfClientX clientx, // Factory for operations
IDfSession session, // Repository session (required)
IDfId idDestFolder ) // Destination folder
throws DfException
{
// Obtain transformation operation
IDfXMLTransformOperation opTran = clientx.getXMLTransformOperation();
void transformXML2HTMLRendition(
IDfClientX clientx, // Factory for operations
IDfSession session, // Repository session (required)
IDfDocument docXml, // Root of the XML document
IDfDocument docStylesheet ) // XSL stylesheet in repository
throws DfException
{
// Obtain transformation operation
IDfXMLTransformOperation opTran = clientx.getXMLTransformOperation();
opTran.setSession( session );
opTran.setTransformation( docStylesheet );
DFC creates a rendition because the output format differs from the input format and you did not call
opTran.setDestination to specify an output directory.
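A minimal sketch of the remaining steps for the rendition example, continuing from the fragment
above (opTran, session, and docXml come from that fragment), might look like this:
// Add the XML document to be transformed; the stylesheet set with
// setTransformation is applied when the operation executes.
opTran.add(docXml);
if (!opTran.execute())
{
    // examine opTran.getErrors()
}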
After you execute an operation, you can use its getErrors method to retrieve an IDfList object
containing the errors. You must cast each to IDfOperationError to read its error message.
After detecting that the operation’s execute method has returned errors, you can use the operation’s
abort method to undo as much of the operation as it can. You cannot undo XML validation or
transform operations, nor can you restore deleted objects.
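For example, if op is any operation object (the variable name is illustrative):
if (!op.execute())
{
    IDfList errors = op.getErrors();
    for (int i = 0; i < errors.getCount(); i++)
    {
        IDfOperationError err = (IDfOperationError) errors.get(i);
        System.err.println(err.getMessage());
    }
    // Optionally undo whatever the operation managed to do.
    op.abort();
}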
You can monitor an operation for progress and errors. Create a class that implements the
IDfOperationMonitor interface and register it by calling the setOperationMonitor method of
IDfOperation. The operation periodically notifies the operation monitor of its progress or of errors
that it encounters.
During execution, DFC calls the methods of the installed operation monitor to report progress or
errors. You can display this information to an end user. In each case DFC expects a response that tells
it whether or not to continue. You can make this decision in the program or ask an end user to decide.
Your operation monitor class must implement the following methods:
• progressReport
DFC supplies the percentage of completion of the operation and of its current step. DFC expects a
response that tells it whether to continue or to abort the operation.
• reportError
DFC passes an object of type IDfOperationError representing the error it has encountered. It
expects a response that tells it whether to continue or to abort the operation.
• getYesNoAnswer
This is the same as reportError, except that DFC gives you more choices. DFC passes an object
of type IDfOperationError representing the error it has encountered. It expects a response of
yes, no, or abort.
The Javadocs explain these methods and arguments in greater detail.
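A minimal sketch of such a monitor follows. The class name and console output are illustrative, the
return constants CONTINUE, ABORT, and YES are assumed to be defined on IDfOperationMonitor,
and you should confirm the exact method signatures in the Javadocs before relying on this sketch.
import com.documentum.operations.IDfOperation;
import com.documentum.operations.IDfOperationError;
import com.documentum.operations.IDfOperationMonitor;
import com.documentum.operations.IDfOperationNode;
import com.documentum.operations.IDfOperationStep;

public class ConsoleOperationMonitor implements IDfOperationMonitor
{
    // Called periodically with overall and per-step completion percentages.
    public int progressReport(IDfOperation operation, int operationPercentDone,
        IDfOperationStep currentStep, int stepPercentDone, IDfOperationNode currentNode)
    {
        System.out.println("Operation " + operationPercentDone + "% done, current step "
            + stepPercentDone + "% done");
        return IDfOperationMonitor.CONTINUE;
    }

    // Called when DFC encounters an error during execution.
    public int reportError(IDfOperationError error)
    {
        System.err.println(error.getMessage());
        return IDfOperationMonitor.ABORT;
    }

    // Called when DFC needs a yes or no decision about an error condition.
    public int getYesNoAnswer(IDfOperationError question)
    {
        System.err.println(question.getMessage());
        return IDfOperationMonitor.YES;
    }
}
Register the monitor on the operation before executing it, for example with
op.setOperationMonitor(new ConsoleOperationMonitor()).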
You can undo most operations by calling an operation’s abort method. The abort method is specific
to each operation, but generally undoes repository actions and cleans up registry entries and local
content files. Some operations (for example, delete) cannot be undone.
If you know an operation only contains objects from a single repository, and the number of objects
being processed is small enough to ensure sufficient database resources, you can wrap the operation
execution in a session transaction.
You can also include operations in session manager transactions. Session manager transactions can
include operations on objects in different repositories, but you must still pay attention to database
resources. Session manager transactions are not completely atomic, because they do not use a
two‑phase commit. For information about what session transactions can and cannot do, refer to the
discussion of transactions in the chapter on sessions and session managers.
This chapter introduces the Business Object Framework (BOF). It contains the following major sections:
• Overview of BOF, page 87
• BOF infrastructure, page 88
• Service‑based Business Objects (SBOs), page 92
• Type‑based Business Objects (TBOs), page 100
• Calling TBOs and SBOs, page 116
• Aspects, page 124
Overview of BOF
BOF’s main goals are to centralize and standardize the process of customizing Documentum
functionality. BOF centralizes business logic within the framework. Using BOF, you can develop
business logic that
• Always executes, regardless of the client program
• Can extend the implementation of core Documentum functionality
• Runs well in concert with an application server environment
In order to achieve this, the framework leaves customization of the user interface to the clients.
BOF customizations embody business logic and are independent of considerations of presentation
or format.
If you develop BOF customizations and want to access them from the .NET platform, you must take
additional steps. We do not provide tools to assist you in this. You can, however, expose some custom
functionality as web services. You can access web services from a variety of platforms (in particular,
.NET). The Web Services Framework Development Guide provides information about deploying and
using the web services.
BOF infrastructure
This section describes the infrastructure that supports the Business Object Framework.
The class that implements a simple module should implement the marker interface IDfModule. Use
the newModule method of IDfClient to access a simple module.
The hierarchy of folders under /System/Modules/ is the repository’s module registry, or simply its
registry.
Note: Earlier versions of DFC maintained a registry of TBOs and SBOs on each client machine. That
registry is called the Documentum business object registry (DBOR). The DBOR form of registry is
deprecated, but for now you can still use it, even in systems that contain repository based registries.
Where both are present, DFC gives preference to the repository based registry.
Packaging support
Service‑based Business Objects (SBOs), page 92 and Type‑based Business Objects (TBOs), page 100,
describe the mechanics of constructing the classes and interfaces that constitute the most common
types of module. These details are not very different from the way they were in earlier versions.
The key changes are in packaging. This section describes BOF features that help you package your
business logic into modules.
Application Builder (DAB) provides tools to package modules and install them in a repository’s
registry.
To prepare a module for packaging by DAB, you must first prepare a JAR file that contains only the
module’s implementation classes and another JAR file that contains only its (optional) interface classes.
You must also have JAR files containing the interfaces of any modules your module depends on. Then
prepare any Java libraries and documentation that you want to include in the module. DAB can
package items, such as configuration files, that are not in JARs. You can access these from a module
implementation by using the class’s getResourceAsStream method.
Use DAB to package all of these into a module and place the module into a DocApp.
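For example, a module implementation might load a packaged configuration file like this (the
resource name is illustrative):
java.util.Properties props = new java.util.Properties();
java.io.InputStream in = getClass().getResourceAsStream("/mymodule.properties");
if (in != null)
{
    props.load(in);  // throws java.io.IOException
    in.close();
}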
You can use the DocApp Installer (DAI) to install the module into the module registries of the target
repositories. This requires administrator privileges on each repository.
Whether this is the first deployment or an update of your module, the process is the same. For the
first deployment, you must also ensure that the module’s interface JAR is installed on client machines.
For updates, this is not required unless the interface changes. Refer to Deploying module interfaces,
page 91 for more information. Be certain that you have properly configured the global registry for
your DFC instance before attempting to access your custom modules. Refer to Documentum Foundation
Classes Installation Guide for more information.
JAR files
DAB packages JAR files into repository objects of type dmc_jar. The Object Reference Manual describes
the attributes of the dmc_jar type. Those attributes, which DAB sets using information that you
supply, specify the minimum Java version that the classes in the JAR file require. They also specify
whether the JAR contains implementations, interfaces, or both.
DAB links the interface and implementation JARs for your module directly to the module’s top level
folder; that is, to the dmc_module object that defines the module. It links the interface JARs of modules
that your module depends on into the External Interfaces subfolder of the top level folder.
DAB links JARs (in the form of dmc_jar objects) for supporting software into folders of type
dmc_java_library, which are created in the /System/Java Libraries folder. It links the dmc_java_library
folder to the top level folder of each module in which the Java library is included. The Content Server
Object Reference Manual describes the attributes of the dmc_java_library type. The single attribute of
this type specifies whether or not to sandbox the JAR files linked to that folder.
The verb sandbox refers to the practice of loading the given Java library into memory in such a way
that other applications cannot access it. This can have a heavy cost in memory use, but it enables
different applications to use different versions of the same library without conflicts. A module with a
sandboxed Xerces library, for example, uses its own version, even if there is a different version on the
classpath and a third version in use by a different module.
DFC achieves sandboxing by using a shared BOF class loader and separate class loaders for each
module. These class loaders try to load classes first, before delegating to the usual hierarchy of Java
class loaders.
Note: Java libraries can contain interfaces, implementations, or both. Do not include both interfaces
and implementations in your own Java libraries. If the library is a third party software package, you
may have to include both. In this case, do not use interfaces defined in that library in the method
signatures of your classes.
If you prepare a separate JAR for your module’s interfaces but fail to remove those interfaces from the
implementation JAR, you will encounter a ClassCastException when you try to use your module.
You can sandbox libraries that contain only implementations. You can sandbox third party libraries.
Never sandbox a library that contains an interface that is part of your module’s method signature.
DFC automatically sandboxes the implementation JARs of modules.
DFC automatically sandboxes files that are not JARs. You can access them as resources of the
associated class loader.
You must deploy the interface classes of your modules to each client machine, that is, to each machine
running an instance of DFC. Typically, you install the interface classes with the application that uses
them. They do not need to be on the global classpath.
A TBO that provides no methods of its own (for example, if it only overrides methods of DfDocument)
does not need an interface. For a TBO that does not have an interface, there is nothing to install
on the client machines.
In order to use hot deployment of revised implementation classes (see Dynamic delivery mechanism,
page 91), you must not change the module’s interface. You can extend module interfaces without
breaking existing customizations.
DFC caches the modules it downloads on the local file system, under the directory specified in
dfc.data.dir. All applications that use the given DFC installation share the cache.
You can even share the cache among more than one DFC installation.
Global registry
DFC delivers SBOs from a central repository. That repository’s registry is called the global registry.
The global registry user, who has the user name of dm_bof_registry, is the repository user whose
account is used by DFC clients to connect to the repository to access required service‑based objects
or network locations stored in the global registry. This user has Read access to objects in the
/System/Modules, /System/BocsConfig, /dm_bof_registry, and /System/NetworkLocations, and to
no other objects.
The identity of the global registry is a property of the DFC installation. Different DFC installations can
use different global registries, but a single DFC installation can have only one global registry.
In addition to efficiency, local caching provides backup if the global registry repository is unavailable.
By default, DFC preloads all SBO implementation classes from the global registry to the local cache.
That is, DFC downloads these classes, regardless of whether or not any application has tried to
instantiate them. DFC does this only once. Thereafter, it downloads an implementation only if it
changes, presumably an infrequent event. Restarting DFC does not cause it to lose the contents of its
local cache. This provides backup if the application loses its connection to the repository containing
the global registry.
The dfc.properties file contains properties that relate to accessing the global registry. Refer to BOF and
global registry settings, page 18 for information about using these properties.
SBO introduction
A service based object (SBO) is a type of module designed to enable developers to access Documentum
functionality by writing small amounts of relevant code. The underlying framework handles most
of the details of connecting to Documentum repositories. SBOs are similar to session beans in an
Enterprise JavaBean (EJB) environment.
SBOs can operate on multiple object types, retrieve objects unrelated to Documentum objects (for
example, external email messages), and perform processing. You can use SBOs to implement
functionality that applies to more than one repository type. For example, a Documentum Inbox
object is an SBO. It retrieves items from a user’s inbox and performs operations like removing and
forwarding items.
You can use SBOs to implement utility functions to be called by multiple TBOs. A TBO has the
references it needs to instantiate an SBO.
You can implement an SBO so that an application server component can call the SBO, and the SBO
can obtain and release repository sessions dynamically as needed.
SBOs are the basis for the web services framework.
SBO architecture
An SBO associates an interface with an implementation class. Each folder under /System/Modules/SBO
corresponds to an SBO. The name of the folder is the name of the SBO, which by convention is the
name of the interface.
SBOs are not associated with a repository type, nor are they specific to the repository in which they
reside. As a result, each DFC installation uses a global registry (see Global registry, page 92). The
dfc.properties file contains the information necessary to enable DFC to fetch SBO implementation
classes from the global registry.
You instantiate SBOs with the newService method of IDfClient, which requires you to pass it a session
manager. The newService method searches the registry for the SBO and instantiates the associated
Java class. Using its session manager, an SBO can access objects from more than one repository.
You can easily design an SBO to be stateless, except for the reference to its session manager.
Note: DFC does not enforce a naming convention for SBOs, but we recommend that you follow the
naming convention explained in Follow the Naming Convention, page 99.
Implementing SBOs
This section explains how to implement an SBO.
An SBO is defined by its interface. Callers cannot instantiate an SBO’s implementation class directly.
The interface should refer only to the specific functionality that the SBO provides. A separate
interface, IDfService, provides access to functionality common to all SBOs. The SBO’s implementation
class, however, should not extend IDfService. Instead, the SBO’s implementation class must extend
DfService, which implements IDfService. Extending DfService ensures that the SBO provides several
methods for revealing information about itself to DFC and to applications that use the SBO.
To create an SBO, first specify its defining interface. Then create an implementation class that
implements the defining interface and extends DfService. DfService is an abstract class that defines
common methods for SBOs.
Override the following abstract methods of DfService to provide information about your SBO:
• getVersion returns the current version of the service as a string.
The version string must consist of an integer followed by up to three more dot‑separated integers
(for example, 1.0 or 2.1.1.36). The version number is used to determine installation options.
• getVendorString returns the vendor’s copyright statement (for example, “Copyright 1994‑2005
EMC Corporation. All rights reserved.”) as a string.
• isCompatible checks whether the class is compatible with a specified service version.
This allows you to upgrade service implementations without breaking existing code. Java does
not support multiple versions of interfaces.
• supportsFeature checks whether the string passed as an argument matches a feature that the
SBO supports.
The getVersion and isCompatible methods are important tools for managing SBOs in an open
environment. The getVendorString method provides a convenient way for you to include your
copyright information. The supportsFeature method can be useful if you develop conventions for
naming and describing features.
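As an illustration only, a minimal skeleton might look like the following. The interface name echoes
the IContentValidator example used later in this chapter, and the validate method, version values, and
vendor string are assumptions; in practice the interface and the implementation class belong in
separate source files and JARs.
import com.documentum.fc.client.DfService;
import com.documentum.fc.client.IDfService;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfId;

// Hypothetical SBO interface, normally packaged in its own interface JAR.
public interface IContentValidator extends IDfService
{
    boolean validate(String repository, IDfId objectId) throws DfException;
}

// Minimal implementation class showing the four required overrides.
public class ContentValidator extends DfService implements IContentValidator
{
    public String getVersion()
    { return "1.0"; }

    public String getVendorString()
    { return "Copyright 2007 My Firm. All rights reserved."; }

    public boolean isCompatible(String version)
    { return "1.0".equals(version); }

    public boolean supportsFeature(String feature)
    { return false; }

    public boolean validate(String repository, IDfId objectId) throws DfException
    {
        // Business logic goes here; use getSession and releaseSession as
        // described in the sections that follow to access the repository.
        return true;
    }
}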
SBO programming differs little from programming for other environments. The following sections
address the principal additional considerations.
SBOs can maintain state between calls, but they are easier to deploy to multithreaded and other
environments if they do not do so. For example, a checkin service needs parameters like retainLock
and versionLabels. A stateful interface for such a service provides get and set methods for such
parameters. A stateless interface makes you pass the state as calling arguments.
This section presents session manager related considerations for implementing SBOs.
Overview
When implementing an SBO, you normally use the getSession and releaseSession methods of
DfService to obtain a DFC session and return it when finished. Once you have a session, use the
methods of IDfSession and other DFC interfaces to implement the SBO’s functionality.
If you need to access the session manager directly, however, you can do so from any method of a
service, because the session manager object is a member of the DfService class. The getSessionManager
method returns this object. To request a new session, for example, use the session manager’s
newSession method.
Each SBO method that obtains a repository session must release the session when it is finished
accessing the repository. The following example shows how to structure a method to ensure that
it releases its session, even if exceptions occur:
public void doSomething( String strRepository, . . . ) {
IDfSession session = getSession ( strRepository );
try { /* do something */ }
catch( Exception e ) { /* handle error */ }
finally { releaseSession( session ); }
}
To obtain a session, an SBO needs a repository name. To provide the repository name, you can design
your code in any of the following ways:
• Pass the repository name to every service method.
This allows a stateless operation. Use this approach whenever possible.
• Store the repository name in an instance variable of the SBO, and provide a method to set it
(for example, setRepository (strRepository)).
This makes the repository available from all of the SBO’s methods.
• Extract the repository name from an object ID.
A method that takes an object ID as an argument can extract the repository name from the object
ID (use the getDocbaseNameFromId method of IDfClient).
The EMC | Documentum architecture enables SBOs to return persistent objects to the calling program.
Persistent objects normally maintain their state in the associated session object. But an SBO must
release the sessions it uses before returning to the calling program. At any time thereafter, the session
manager might disconnect the session, making the state of the returned objects invalid.
The calling program must ensure that the session manager does not disconnect the session until the
calling program no longer needs the returned objects.
Another reason for preserving state between SBO calls occurs when a program performs a query or
accesses an object. It must obtain a session and apply that session to any subsequent calls requiring
authentication and Content Server operations. For application servers, this means maintaining the
session information between HTTP requests.
The main means of preserving state information are setSessionManager and transactions. Maintaining
state in a session manager, page 37 describes the setSessionManager mechanism and its cost in
resources. Using Transactions With SBOs, page 96 provides details about using transactions with SBOs.
You can also use the DfCollectionEx class to return a collection of typed objects from a service.
DfCollectionEx locks the session until you call its close method.
For testing or performance tuning you can examine such session manager state as reference
counters, the number of sessions, and repositories currently connected. Use the getStatistics method
of IDfSessionManager to retrieve an IDfSessionManagerStatistics object that contains the state
information. The statistics object provides a snapshot of the session manager’s internal data as of the
time you call getStatistics. DFC does not update this object if the session manager’s state subsequently
changes.
The DFC Javadocs describe the available state information.
DFC supports two transaction processing mechanisms: session based and session manager based; the
chapter on sessions and session managers describes the differences between the two. You cannot use
session based transactions within an SBO method. DFC throws an exception if you try to do so.
Use the following guidelines for transactions within an SBO:
• Never begin a transaction if one is already active.
The isTransactionActive method returns true if the session manager has a transaction active.
• If the SBO does not begin the transaction, do not use commitTransaction or abortTransaction
within the SBO’s methods.
If you need to abort a transaction from within an SBO method, use the session manager’s
setTransactionRollbackOnly method instead, as described in the next paragraph.
When you need the flow of a program to continue when transaction errors occur, use the session
manager’s setTransactionRollbackOnly. Thereafter, DFC silently ignores attempts to commit the
transaction. The owner of the transaction does not know that one of its method calls aborted the
transaction unless it calls the getTransactionRollbackOnly method, which returns true if some part of
the program ever called setTransactionRollbackOnly. Note that setTransactionRollbackOnly does not
throw an exception, so the program continues as if the batch process were valid.
The following program illustrates this.
void serviceMethodThatRollsBack( String strRepository, IDfId idDoc )
    throws DfNoTransactionAvailableException, DfException {
    IDfSession session = getSession( strRepository );
    try {
        IDfSysObject obj = (IDfSysObject) session.getObject( idDoc );
        obj.checkout();
        modifyObject( obj );
        obj.save();
    }
    catch( Exception e ) {
        // Mark the enclosing session manager transaction for rollback,
        // then rethrow so the caller knows the update failed.
        setTransactionRollbackOnly();
        throw new DfException();
    }
    finally {
        releaseSession( session );
    }
}
When more than one thread is involved in session manager transactions, calling beginTransaction
from a second thread causes the session manager to create a new session for the new thread.
The session manager supports transaction handling across multiple services. It does not disconnect or
release sessions while transactions are pending.
For example, suppose one service creates folders and a second service stores documents in these
folders. To make sure that you remove the folders if the document creation fails, place the two
service calls into a transaction. The DFC session transaction is bound to one DFC session, so it is
important to use the same DFC session across the two service calls. Each service performs its own
atomic operation. At the start of each operation, they request a DFC session and at the end they
release this session back to the session pool. The session manager holds on to the session as long as
the transaction remains open.
Use the beginTransaction method to start a new transaction. Use the commitTransaction or
abortTransaction method to end it. You must call getSession after you call beginTransaction, or the
session object cannot participate in the transaction.
Use the isTransactionActive method to ask whether the session manager has a transaction active that
you can join. DFC does not allow nested transactions.
The transaction mechanism handles the following issues:
• With multiple threads, transaction handling operates on the current thread only.
For example, if there is an existing session for one thread, DFC creates a new session for the
second thread automatically. This also means that you cannot begin a transaction in one thread
and commit it in a second thread.
• The session manager provides a separate session for each thread that calls beginTransaction.
For threads that already have a session before the transaction begins, DFC creates a new session.
• When a client starts a transaction using the beginTransaction method, the session manager does
not allow any other DFC‑based transactions to occur.
The following example illustrates a client application calling two services that must be inside a
transaction, in which case both calls must succeed, or nothing changes:
sMgr.setIdentity(repo, loginInfo);
IMyService1 s1 = (IMyService1)
client.newService(IMyService1.class.getName(), sMgr);
IMyService2 s2 = (IMyService2)
client.newService(IMyService2.class.getName(), sMgr);
s1.setRepository( strRepository1 );
s2.setRepository( strRepository2 ) ;
sMgr.beginTransaction();
try {
s1.doRepositoryUpdate();
s2.doRepositoryUpdate();
sMgr.commitTransaction();
}
catch (Exception e) {
sMgr.abortTransaction();
}
If either of these service methods throws an exception, the program bypasses commit and executes
abort.
Each of the doRepositoryUpdate methods calls the session manager’s getSession method.
Note that the two services in the example are updating different repositories. Committing or
aborting the managed transaction causes the session manager to commit or abort transactions with
each repository.
Session manager transactions involving more than one repository have an inherent weakness that
arises from their reliance on the separate transaction mechanisms of the databases underlying the
repositories. Refer to the discussion of transactions in the chapter on sessions and session managers
for information about what session manager transactions can and cannot do.
DFC does not enforce a naming convention for SBOs, but we recommend that you give an
SBO the same name as the fully qualified name of the interface it implements. For example, if
you produce an SBO that implements an interface called IContentValidator, you might name it
com.myFirm.services.IContentValidator. If you do this, the call to instantiate an SBO becomes simple.
For example, to instantiate an instance of the SBO that implements the IContentValidator interface,
simply write
IContentValidator cv = (IContentValidator)client.newService(
IContentValidator.class.getName(), sMgr);
The only constraint DFC imposes on SBO names is that names must be unique within a registry.
Instantiate a new SBO each time you need one, rather than reusing one. Refer to Calling SBOs, page
116 for details.
Make SBOs as close to stateless as possible. Refer to Stateful and stateless SBOs, page 94 for details.
DFC caches persistent repository data. There is no convenient way to keep a private cache
synchronized with the DFC cache, so rely on the DFC cache, rather than implementing a separate
cache as part of your service’s implementation.
Creating a TBO
The following sections describe how to create a TBO and provide detailed instructions for building
one. The sample code provided works from the assumption that the TBO is derived from the
DfDocument class, and that its purpose is to extend the behavior of the custom object on checkin
and save.
Using Application Builder, create and configure your custom type. For an example implementation,
see Deploying the SBO and TBO, page 122.
Creating an interface for the TBO is generally recommended, but optional if you do not intend to
extend the parent class of the TBO by adding new methods. If you only intend to override methods
inherited from the parent class, there is no strict need for a TBO interface, but use of such an interface
may make your code more self‑documenting, and make it easier to add new methods to the TBO
should you have a need to add them in the future.
The design of the TBO interface should be determined by which methods you want to expose to client
applications and SBOs. If your TBO needs to expose new public methods, declare their signatures in
the TBO interface. Two other questions to consider are (1) whether to extend the interface of the TBO
superclass (e.g. IDfDocument), and (2) whether to extend IDfBusinessObject.
While the TBO class will need to extend the base DFC class (for example DfDocument), you may want
to make the TBO interface more restricted by redeclaring only those methods of the base class that
your business logic requires you to expose to clients. This avoids polluting the custom interface with
unnecessary methods from higher‑level DFC interfaces. On the other hand, if your TBO needs to
expose a large number of methods from the base DFC class, it may be more natural to have the TBO
interface extend the interface of the superclass. This is a matter of design preference.
Although not a functional requirement of the BOF framework, it is generally accepted practice for the
TBO interface to extend IDfBusinessObject, so that the TBO’s contract covers both its role as a business
object and its role as a persistent object subtype. This enables you to get an instance of the TBO
class and call IDfBusinessObject methods without the complication of a cast to IDfBusinessObject:
IMySop mySop = (IMySop) session.getObject(id);
if (mySop.supportsFeature("some_feature"))
{
mySop.mySopMethod();
}
The following sample TBO interface extends IDfBusinessObject and redeclares a few required methods
of the TBO superclass (rather than extending the interface of the superclass):
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfId;
import com.documentum.fc.client.IDfBusinessObject;
/**
* TBO interface intended to override checkout and save behaviors of
* IDfDocument. IDfDocument is not extended because only a few of its
* methods are required. IDfBusinessObject is extended to permit calling
* its methods without casting the TBO instance to IDfBusinessObject.
*/
public interface IMySop extends IDfBusinessObject
{
public boolean isCheckedOut() throws DfException;
public void checkout() throws DfException;
public IDfId checkin(boolean fRetainLock, String versionLabels)
throws DfException;
public void save() throws DfException;
}
The main class for your TBO is the class that will be associated with a custom repository object type
when deploying the TBO. This class will normally extend the DFC type class associated with the
repository type from which your custom repository type is derived. For example, if your custom
repository type my_sop extends dm_document, extend the DfDocument class. In this case the TBO
class must implement IDfBusinessObject (either directly or by implementing your custom TBO
interface that extends IDfBusinessObject) and it must implement IDfDynamicInheritance.
public class MySop extends DfDocument implements IMySop,
IDfDynamicInheritance
It is also an option to create more hierarchical levels between your main TBO class and the DFC
superclass. For example, you may want to place generic methods used in multiple TBOs in
an abstract class. In this case the higher class will extend the DFC superclass and implement
IDfDynamicInheritance, and the main TBO class would extend the abstract class and implement the
TBO interface. This will result in the correct runtime behavior for dynamic inheritance.
public abstract class MyGenericDoc extends DfDocument
implements IDfDynamicInheritance
public class MySop extends MyGenericDoc implements IMySop
Note that in this situation you would need to package both MyGenericDoc and MySop into the TBO
class jar file, and specify MySop as the main TBO class when deploying the TBO in the repository. For
an example of packaging and deploying business objects see Deploying the SBO and TBO, page 122.
To fulfill its contract as a class of type IDfBusinessObject, the TBO class must implement the following
methods:
• getVersion
• getVendorString
• isCompatible
• supportsFeature
The version support features getVersion and isCompatible must have functioning implementations
(these are required and used by the Business Object Framework) and it is important to keep the TBO
version data up‑to‑date. Functional implementation of the supportsFeature method is optional: you
can provide a dummy implementation that just returns a Boolean value.
For further information see IDfBusinessObject in the Javadocs.
getVersion method
The getVersion method must return a string representing the version of the business object, using the
format <major version>.<minor version> (for example 1.10), which can be extended to include as many
as four total integers, separated by periods (for example 1.10.2.12). Application Builder returns an
error if you try to deploy a TBO that returns an invalid version string.
getVendorString method
The getVendorString method returns a string containing information about the business object vendor,
generally a copyright string.
isCompatible method
The isCompatible method takes a String argument in the format <major version>.<minor version> (for
example 1.10), which can be extended to include as many as four total integers, separated by periods
(for example 1.10.2.12). The isCompatible method, which is intended to be used in conjunction with
getVersion, must return true if the TBO is compatible with the version and false if it is not.
supportsFeature method
The supportsFeature method is passed a string representing an application or service feature, and
returns true if this feature is supported and false otherwise. Its intention is to allow your application
to store lists of features supported by the TBO, perhaps allowing the calling application to switch
off features that are not supported.
Support for features is an optional adjunct to mandatory version compatibility support. Features are a
convenient way of advertising functionality that avoids imposing complicated version checking on
the client. If you choose not to use this method, your TBO can provide a minimal implementation of
supportsFeature that just returns a boolean value.
You can implement business logic in your TBO by adding new methods or by overriding methods of
the class that your TBO class extends. When overriding methods, you will most likely want to add
custom behavior as pre‑ or postprocessing before or after a call to super.<methodName>. The following
sample shows an override of the DfSysObject doCheckin method that writes an entry to the log.
protected IDfId doCheckin(boolean fRetainLock,
String versionLabels,
String oldCompoundArchValue,
String oldSpecialAppValue,
String newCompoundArchValue,
String newSpecialAppValue,
Object[] extendedArgs) throws DfException
{
    Date now = new Date();
    DfLogger.warn(this, now + " doCheckin() called", null, null);
    // your preprocessing logic here
    IDfId newVersionId = super.doCheckin(fRetainLock,
        versionLabels,
        oldCompoundArchValue,
        oldSpecialAppValue,
        newCompoundArchValue,
        newSpecialAppValue,
        extendedArgs);
    // your postprocessing logic here
    return newVersionId;
}
Override only methods beginning with do (doSave, doCheckin, doCheckout, and similar). The
signatures for these methods are documented in the method signature listings later in this guide (see,
for example, Methods of DfSysObject).
catch (Throwable e)
{
fail("Failed with exception " + e);
}
finally
{
if ((sessionManager != null) && (docbaseSession != null))
{
sessionManager.release(docbaseSession);
}
}
}
Dynamic inheritance
Dynamic inheritance is a BOF mechanism that modifies the class inheritance of a TBO dynamically
at runtime, driven by the hierarchical relationship of associated repository objects. This mechanism
enforces consistency between the repository object hierarchy and the associated class hierarchy. It
also allows you to design polymorphic TBOs that inherit from different superclasses depending on
runtime dynamic resolution of the class hierarchy.
For example, suppose you have a TBO design in which the repository object types are related
hierarchically, but the associated TBO classes, MySop and GenericSop, each inherit directly from
DfDocument. If dynamic inheritance is enabled, the class hierarchy is resolved dynamically at runtime
to correspond to the repository object hierarchy, so that the MySop class inherits from GenericSop.
The dynamic inheritance mechanism allows you to design reusable components that exhibit different
behaviors at runtime inherited from their dynamically determined superclass. For example, in the
following design‑time configuration, the MyDoc class is packaged in two TBOs: one in which it is
associated with type my_sop, and one in which it is associated with type my_report.
At runtime, MyDoc will inherit from GenericSop where it is associated with the my_sop repository
object type, and from GenericReport where it is associated with the my_report repository object type.
Methods of DfSysObject
IDfId doAddESignature (String userName, String password, String signatureJustification, String
formatToSign, String hashAlgorithm, String preSignatureHash, String signatureMethodName,
String applicationProperties, String passThroughArgument1, String passThroughArgument2,
Object[] extendedArgs) throws DfException
IDfId doAddReference (IDfId folderId, String bindingCondition, String bindingLabel, Object[]
extendedArgs) throws DfException
void doAddRendition (String fileName, String formatName, int pageNumber, String pageModifier,
String storageName, boolean atomic, boolean keep, boolean batch, String otherFileName, Object[]
extendedArgs) throws DfException
void doAppendFile (String fileName, String otherFileName, Object[] extendedArgs) throws
DfException
IDfCollection doAssemble (IDfId virtualDocumentId, int interruptFrequency, String qualification,
String nodesortList, Object[] extendedArgs) throws DfException
IDfVirtualDocument doAsVirtualDocument (String lateBindingValue, boolean followRootAssembly,
Object[] extendedArgs) throws DfException
void doAttachPolicy (IDfId policyId, String state, String scope, Object[] extendedArgs) throws
DfException
void doBindFile ( int pageNumber, IDfId srcId, int srcPageNumber, Object[] extendedArgs) throws
DfException
IDfId doBranch (String versionLabel, Object[] extendedArgs) throws DfException
void doCancelScheduledDemote (IDfTime scheduleDate, Object[] extendedArgs) throws
DfException
void doCancelScheduledPromote (IDfTime scheduleDate, Object[] extendedArgs) throws
DfException
void doCancelScheduledResume (IDfTime schedule, Object[] extendedArgs) throws DfException
void doCancelScheduledSuspend (IDfTime scheduleDate, Object[] extendedArgs) throws
DfException
void doAppendString (String attrName, String value, Object[] extendedArgs) throws DfException
String doGetString (String attrName, int valueIndex, Object[] extendedArgs) throws DfException
void doInsertString (String attrName, int valueIndex, String value, Object[] extendedArgs) throws
DfException
void doSetString (String attrName, int valueIndex, String value, Object[] extendedArgs) throws
DfException
void doRemove (String attrName, int beginIndex, int endIndex, Object[] extendedArgs) throws
DfException
Methods of DfGroup
boolean doAddGroup (String groupName, Object[] extendedArgs) throws DfException
boolean doAddUser (String userName, Object[] extendedArgs) throws DfException
void doRemoveAllGroups (Object[] extendedArgs) throws DfException
void doRemoveAllUsers (Object[] extendedArgs) throws DfException
boolean doRemoveGroup (String groupName, Object[] extendedArgs) throws DfException
boolean doRemoveUser (String userName, Object[] extendedArgs) throws DfException
void doRenameGroup (String groupName, boolean isImmediate, boolean unlockObjects, boolean
reportOnly, Object[] extendedArgs) throws DfException
Methods of DfUser
void doChangeHomeDocbase (String homeDocbase, boolean isImmediate, Object[] extendedArgs)
throws DfException
void doRenameUser (String userName, boolean isImmediate, boolean unlockObjects, boolean
reportOnly, Object[] extendedArgs) throws DfException
Calling SBOs
This section provides rules and guidelines for instantiating SBOs and calling their methods.
The client application should instantiate a new SBO each time it needs one, rather than reusing one.
For example, to call a service during an HTTP request in a web application, instantiate the service,
execute the appropriate methods, then abandon the service object.
This approach is thread safe, and it is efficient, because it requires little resource overhead. The
required steps to instantiate a service are:
1. Prepare an IDfLoginInfo object containing the necessary login information.
2. Instantiate a session manager object.
3. Call the service factory method.
An SBO client application uses the newService factory method of IDfClient to instantiate a service:
public IDfService newService ( String name, IDfSessionManager sMgr )
throws DfServiceException;
The method takes the service name and a session manager as parameters, and returns the service
interface, which you must cast to the specific service interface. The newService method uses the
service name to look up the Java implementation class in the registry. It stores the session manager as
a member of the service, so that the service implementation can access the session manager when it
needs a DFC session.
Calling TBOs
Client applications and methods of SBOs can use TBOs. Use a factory method of IDfSession to
instantiate a TBO’s class. Release the session when you finish with the object.
Within a method of an SBO, use getSession to obtain a session from the session manager. DFC releases
the session when the service method is finished, making the session object invalid.
Use the setSessionManager method to transfer a TBO to the control of the session manager when
you want to:
• Release the DFC session but keep an instance of the TBO.
• Store the TBO in the SBO state.
Refer to Maintaining state in a session manager, page 37 for information about the substantial costs of
using the setSessionManager method.
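For example, assuming the IMySop interface from the earlier TBO example, a session manager sMgr
that already has an identity set, and an object ID idSop (all of these names are illustrative):
IDfSession session = sMgr.getSession(strRepository);
try
{
    // Because the my_sop type is associated with the TBO in the module
    // registry, getObject returns an instance of the TBO class.
    IMySop sop = (IMySop) session.getObject(idSop);
    sop.save();
}
finally
{
    // Release the session when you finish with the object.
    sMgr.release(session);
}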
ITutorialSBO
Create an interface for the service‑based object. This interface provides the empty setFlavorSBO
method, to be overridden in the implementation class. All SBOs must extend the IDfService interface.
import com.documentum.fc.client.IDfService;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;
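The interface declaration itself has been elided from this listing. A plausible shape, assuming
setFlavorSBO takes the system object to be stamped, is:
public interface ITutorialSBO extends IDfService
{
    void setFlavorSBO(IDfSysObject sysObj) throws DfException;
}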
TutorialSBO
The TutorialSBO class extends the DfService class, which provides fields and methods that supply
common functionality for all services.
import com.documentum.fc.client.DfService;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;
// Custom method. This method sets a string value on the system object.
// You can set any number of values of any type (for example, int, double,
// boolean) using similar methods.
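The method body has been elided from this listing. A minimal sketch of what it might look like
follows; the attribute name flavor and the value are assumptions based on the tutorial_flavor type and
the flavor property mentioned later in this chapter.
public void setFlavorSBO(IDfSysObject sysObj) throws DfException
{
    // Stamp the custom string attribute on the object (the attribute name
    // and value are illustrative).
    sysObj.setString("flavor", "vanilla");
}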
ITutorialTBO
The interface for the TBO is trivial — its only function is to extend the IDfBusinessObject interface,
a requirement for all TBOs.
import com.documentum.fc.client.IDfBusinessObject;
TutorialTBO
The TutorialTBO is the class that pulls the entire example together. This class overrides the doSave(),
doSaveEx() and doCheckin() methods of DfSysObject and uses the setFlavorSBO() method of
TutorialSBO to add a string value to objects of our custom type.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.DfDocument;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfId;
/**
* simple TBO that overrides the behavior of save(), saveLock() and checkinEx()
*/
if( strFeatures.indexOf( s ) == -1 )
return false;
return true;
}
/*
* Overridden IDfSysObject methods. These methods intercept the save(),
* saveLock(), and checkinEx() methods, use the local setFlavor method
* to attach a String value to the current system object, then pass
* control to the parent class to complete the operation.
*/
}
} //end class
d. Choose SBO from the Module type drop‑down box. Leave the Check in... field as Minor
Version.
e. Click Add, next to Interface JAR(s).
f. Navigate to your ITutorialSBO.jar file and add it.
g. Click Add, next to Implementation JAR(s) and add TutorialSBO.jar.
h. From the Class Name drop‑down, choose com.documentum.tutorial.TutorialSBO.
i. Check in the module by right‑clicking ITutorialSBO under the modules folder and selecting
Check in selected object(s). Then click OK.
5. Insert a new module for ITutorialTBO
a. Choose Insert>Module.
b. Double‑click the new module to edit it.
c. Name the module tutorial_flavor.
d. Select TBO as the module type.
e. Click Add, next to Interface JAR(s).
f. Navigate to your ITutorialTBO.jar file and add it.
g. Click Add, next to Implementation JAR(s).
h. Navigate to your TutorialTBO.jar file and add it.
i. From the Class Name drop‑down, choose com.documentum.tutorial.TutorialTBO.
j. Click the Dependencies tab.
k. Click the Add button below Required Modules.
l. Enter com.documentum.tutorial.IMySBO as the name.
m. Click Add and select Copy from Docbase.
n. Navigate into the /System/Modules/SBO folder and double‑click ITutorialSBO.
o. Select ITutorialSBO.jar and click Insert.
p. Click OK.
6. Check in the DocApp and close Application Builder.
To see your modules in action, use Webtop to create an object of the tutorial_flavor type. Check
the item out and save it, then look at the complete document properties to see the flavor property
update. Check in the file to update the value again.
Aspects
Aspects are a mechanism for adding behavior and/or attributes to a Documentum object instance
without changing its type definition. They are similar to TBOs, but they are not associated with any
one document type. Aspects also are late‑bound rather than early‑bound objects, so they can be added
to an object or removed as needed.
Aspects are a BOF type (dmc_aspect_type). Like other BOF types, they have these characteristics:
• Aspects are installed into a repository.
• Aspects are downloaded on demand and cached on the local file system.
• When the code changes in the repository, the change is automatically detected and the new code is
“hot deployed” to the DFC runtime.
Examples of usage
One use for aspects would be to attach behavior and attributes to objects at a particular time in their
lifecycle. For example, you might have objects that represent customer contact records. When a
contact becomes a customer, you could attach an aspect that encapsulates the additional information
required to provide customer support. This way, the system won’t be burdened with maintenance
of empty fields for the much larger set of prospective customers.
If you defined levels of support, you might have an additional level of support for “gold” customers.
You could define another aspect reflecting the additional behavior and fields for the higher level of
support, and attach them as needed.
Another scenario might center around document retention. For example, your company might have
requirements for retaining certain legal documents (contracts, invoices, schematics) for a specific
period of time. You can attach an aspect that will record the date the document was created and the
length of time the document will have to be retained. This way, you are able to attach the retention
aspect to documents regardless of object type, and only to those documents that have retention
requirements.
You will want to use aspects any time you are introducing cross‑type functionality. You can use them
when you are creating elements of a common application infrastructure. You can use them when
upgrading an existing data model and you want to avoid performing a database upgrade. You can use
them any time you are introducing functionality on a per‑instance basis.
Creating an aspect
Aspects are created in a similar fashion to other BOF modules.
1. Decide what your aspect will provide: behavior, attributes, or both.
2. Create the interface and implementation classes. Write any new behavior, override existing
behavior, and provide getters and setters to your aspect attributes.
3. Deploy the aspect module. For details, see the Documentum Foundation Classes Release Notes
Version 6.
As an example, we’ll walk through the steps of implementing a simple aspect. Our aspect is designed
to be attached to a document that stores contact information. The aspect identifies the contact as a
customer and indicates the level of service (three possible values — customer, silver, gold). It will also
track the expiration date of the customer’s subscription.
Define the new behavior for your aspect in an interface. In this case, we’ll add getters and setters for
two attributes: service_level and expiration_date.
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfTime;
Now that we have our interface, we can implement it with a custom class.
import com.documentum.fc.client.DfDocument;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfTime;
import com.documentum.fc.common.IDfTime;
import dfctestenvironment.ICustomerServiceAspect;
import java.util.GregorianCalendar;
For details on deploying aspect modules, please see the Documentum Foundation Classes Release Notes
Version 6.
TestCustomerServiceAspect
Once you have compiled and deployed your aspect classes and defined the aspect on the server, you
can use the class to set and get values in the custom aspect, and to test the behavior for adjusting the
expiration date by month. This example is compatible with the sample environment described in
Chapter 3, Creating a Test Application.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.aspect.IDfAspects;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.DfTime;
import com.documentum.fc.common.IDfTime;
try {
String result = "";
return result;
}
catch (Exception ex) {
ex.printStackTrace();
return "Exception thrown.";
}
}
public String setExpirationDate(
IDfSession mySession,
String docId,
String expDate)
{
// Instantiate a client.
IDfClientX clientx = new DfClientX();
try {
String result = "";
IDfTime expirationDate = clientx.getTime(
expDate,
IDfTime.DF_TIME_PATTERN1
);
// Get the document instance using the document ID.
IDfDocument doc =
(IDfDocument) mySession.getObject(new DfId(docId));
doc.setTime("customer_service_aspect.expiration_date",
    expirationDate);
// Save the change and return. (The save call and return value are
// assumptions; the original listing is truncated at this point.)
doc.save();
return result;
}
catch (Exception ex) {
ex.printStackTrace();
return "Exception thrown.";
}
}
//Override doSave()
protected synchronized void doSave(
    boolean saveLock,
    String v,
    Object[] args) throws DfException
{
if (this.getAspects().findString("my_retention_aspect") < 0) {
MyAttachCallback myCallback = new MyAttachCallback();
this.attachAspect("my_retention_aspect", myCallback);
}
super.doSave(saveLock, v, args);
}
By default, dm_sysobject and its sub‑types are enabled for aspects. This includes any custom object
sub‑types. Any non‑sysobject application type can be enabled for use with aspects using the following
syntax.
ALTER TYPE type_name ALLOW ASPECTS
Default aspects
Type definitions can include a default set of aspects. This allows you to modify the data model and
behavior for future instances. It also ensures that specific aspects are attached to all selected object
instances, no matter which application creates the object. The syntax is
ALTER TYPE type_name [SET | ADD | REMOVE] DEFAULT ASPECTS aspect_list
The aspect_list value is a comma‑separated list of dmc_aspect_type object_name values. No quotes are
necessary, but if you choose to use quotes they must be single quotes and must surround the entire list.
For example, aspect_list could be a single value such as my_retention_aspect, or it could be multiple
values specified as 'my_aspect_name1, my_aspect_name2' or my_aspect_name1, my_aspect_name2.
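For example, to make the retention aspect used earlier in this chapter a default aspect of the my_sop
type (both names come from the earlier examples), you might issue:
ALTER TYPE my_sop ADD DEFAULT ASPECTS my_retention_aspect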
All aspect attributes in a DQL statement must be fully qualified as aspect_name.attribute_name. For
example:
SELECT r_object_id, my_retention_aspect.retained_since
FROM my_sop WHERE my_retention_aspect.years_to_retain = 10
If more than one type is specified in the FROM clause of a DQL statement, aspect attributes should be
further qualified as type_name.aspect_name.attribute_name OR alias_name.aspect_name.attribute_name.
Aspect attributes appear in a DQL statement just like normal attributes, wherever the syntax legally permits them.
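As an illustration only, such a query can be run from DFC through the standard IDfQuery interface. The sketch below reuses the hypothetical my_sop and my_retention_aspect names from the example above; the class and method names are placeholders.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfQuery;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfException;

public class AspectQuerySample
{
    // Run the aspect-qualified query shown above and print each object ID.
    public static void runAspectQuery(IDfSession session) throws DfException
    {
        IDfClientX clientx = new DfClientX();
        IDfQuery query = clientx.getQuery();
        query.setDQL(
            "SELECT r_object_id, my_retention_aspect.retained_since "
            + "FROM my_sop WHERE my_retention_aspect.years_to_retain = 10");
        IDfCollection results = query.execute(session, IDfQuery.DF_READ_QUERY);
        try {
            while (results.next()) {
                System.out.println(results.getString("r_object_id"));
            }
        }
        finally {
            results.close();
        }
    }
}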
Fulltext index
By default, full‑text indexing is turned off for aspects. You can control which aspect attributes have
full‑text indexing using the following DQL syntax.
ALTER ASPECT aspect_name FULLTEXT SUPPORT ADD | DROP a1, a2,...
Object replication
Aspect attributes can be replicated in a second repository just as normal attributes are replicated
(“dump and load” procedures). However, the referenced aspects must be available on the target
repository.
Because DFC is the principal low-level interface to all Documentum functionality, there are many
DFC interfaces that this manual covers only superficially. They provide access to features that other
documentation covers in more detail. For example, the Server Fundamentals manual describes virtual
documents and access control. The DFC Javadocs provide the additional information necessary to
enable you to take advantage of those features. Similarly, the Enterprise Content Integration (ECI)
Services product includes extensive capabilities for searching Documentum repositories. The DFC
Javadocs provide information about how to use DFC to access that functionality.
This chapter introduces some of the Documentum functionality that you can use DFC to access. It
contains the following major sections:
• Security Services, page 135
• XML, page 136
• Virtual Documents, page 136
• Workflows, page 136
• Document Lifecycles, page 137
• Validation Expressions in Java, page 137
• Search Service, page 138
Security Services
Content Server provides a variety of security features. From the DFC standpoint, they fall into the
following categories:
• User authentication
Refer to for more information.
• Object permissions
Refer to the Server Fundamentals manual and the DFC Javadocs for IDfACL, IDfPermit, and other
interfaces for more information.
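As a minimal sketch of the object-permission interfaces mentioned above, the following hypothetical helper reads the basic permit level that the session user holds on an object and compares it with the constants defined on IDfACL; the class and method names are placeholders, and the object ID is assumed to be supplied by the caller.
import com.documentum.fc.client.IDfACL;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;

public class PermitCheckSample
{
    // Returns true if the session user has at least READ permission on the object.
    public static boolean canRead(IDfSession session, String objectId) throws DfException
    {
        IDfSysObject sysObject = (IDfSysObject) session.getObject(new DfId(objectId));
        // getPermit() reports the basic permit level granted to the session user.
        return sysObject.getPermit() >= IDfACL.DF_PERMIT_READ;
    }
}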
DFC also provides a feature related to folder permissions. Users may have permission to view an object
but not have permission to view all of the folders to which it is linked. The IDfObjectPath interface and
the getObjectPaths method of IDfSession provide a powerful and flexible mechanism for finding paths
for which the given user has the necessary permissions. Refer to the Javadocs for more details.
XML
Chapter 4, Working with Document Operations provides some information about working with XML.
DFC provides substantial support for the Documentum XML capabilities. Refer to XML Application
Development Guide for details of how to use these capabilities.
Virtual Documents
Chapter 4, Working with Document Operations provides some information about working with
virtual documents. Refer to Server Fundamentals and the DFC Javadocs for the IDfVirtualDocument
interface for much more detail.
Workflows
The Server Fundamentals manual provides a thorough treatment of the concepts underlying workflows.
DFC provides interfaces to support the construction and use of workflows, but there is almost no
reason to use those interfaces directly. The workflow manager and business process manager software
packages handle all of those details.
Individual workflow tasks can have methods associated with them. You can program these methods
in Java and call DFC from them. These methods run on the method server, an application server that
resides on the Content Server machine and is dedicated to running Content Server methods. The code
for these methods resides in the repository’s registry as modules. Modules and registries, page 88
provides more information about registries and modules.
The com.documentum.fc.lifecycle package provides the following interfaces for use by modules that
implement lifecycle actions:
• IDfLifecycleUserEntryCriteria to implement userEntryCriteria.
• IDfLifecycleUserAction to implement userAction.
• IDfLifecycleUserPostProcessing to implement userPostProcessing.
There is no need to extend DfService, but you can do so. You need only implement IDfModule,
because lifecycles are modules, not SBOs.
Document Lifecycles
The Server Fundamentals manual provides information about lifecycles. There are no DFC interfaces
for constructing document lifecycles. Documentum Application Builder (DAB) includes a lifecycle editor for that
purpose. You can define actions to take place at various stages of a document’s lifecycle. You can code
these in Java to run on the Content Server’s method server. Such Java methods must implement
the appropriate interfaces from the following:
• IDfLifecycleAction.java
• IDfLifecycleUserAction.java
• IDfLifecycleUserEntryCriteria.java
• IDfLifecycleUserPostProcessing.java
• IDfLifecycleValidate.java
The code for these methods resides in the repository’s registry (see Modules and registries, page
88) as modules.
• Translations are available for all Docbasic functions that you are likely to use in validation
expressions.
We do not provide Java translations of operating system calls, file system access, COM and DDE functions, print or user interface functions, financial functions, or other similar functions.
Search Service
The DFC search service replaces prior mechanisms for building and running queries. You can use
the IDfQuery interface, which is not part of the search service, for simple queries. The search service
provides the ability to run searches across multiple Documentum repositories and, in conjunction with
the Enterprise Content Integration (ECI) Services product, external repositories as well.
The Javadocs for the com.documentum.fc.client.search package provide a description of how to use
this capability.
This is a very simple Java application you can use to try out the code samples in Chapter 4, Working
with Document Operations. It was created using AWT controls for ease of use. The IDE used to create
this application was Oracle JDeveloper, and so it has some of that program’s idiosyncrasies. You may
want to create a similar sample application using an IDE with which you’re familiar, and reference the
button handling routines to create your own.
To run the application, write and compile the enhanced DfcTestFrame.java listing in this appendix
along with the sample classes from these sections of this manual:
• The TutorialSessionManager class, page 39
• The DfcTutorialApplication class, page 44
• Cancelling checkout, page 64
• Checking in, page 61
• Checking out, page 58
• Copying, page 72
• Deleting, page 76
• Exporting, page 70
• Importing, page 67
Figure 14. Sample interface with buttons for typical document manipulation commands
The rationale behind this test interface is that, while it is reasonable to expect that you might have to enter file paths to try out the commands, you shouldn't have to enter internal document IDs. The interface looks up the document IDs for you, so you don't have to find and enter them yourself.
This is not in any way intended to be a sample of an interface you might create yourself for your users.
The perfect “sample” UI application is Webtop, which demonstrates how Documentum engineers
feel our content management features should be implemented.
The Directory and Import buttons are enabled when you first open the application. Once you have a
directory listing, the other buttons are enabled.
Enter the arguments in the field at the top of the window, then click the button for the operation
you want to test. The arguments are as follows.
Table 3. Arguments for sample commands used with the DFC test frame
Command          Arguments
Directory        repository, userName, password, directoryPath
Check In         repository, userName, password, directoryPath. Select an item from the
                 file list that has already been checked out and is ready to check in.
Check Out        repository, userName, password, directoryPath. Select an item from the
                 file list that you want to check out.
Cancel Checkout  repository, userName, password, directoryPath. Select an item from the
                 file list that has already been checked out.
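The handler listings that follow omit the step that splits the comma-separated arguments field into individual values. One way to do this, assuming a hypothetical private helper added to the frame class, is with the StringTokenizer the class already imports:
// Hypothetical helper: split the comma-separated arguments field into trimmed tokens,
// for example "myrepo, jsmith, password, /Temp" into {"myrepo", "jsmith", "password", "/Temp"}.
private String[] parseArguments(String argumentText)
{
    StringTokenizer tokenizer = new StringTokenizer(argumentText, ",");
    String[] tokens = new String[tokenizer.countTokens()];
    for (int i = 0; i < tokens.length; i++) {
        tokens[i] = tokenizer.nextToken().trim();
    }
    return tokens;
}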
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfFolder;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfTypedObject;
import java.awt.Button;
import java.awt.Frame;
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import java.awt.Insets;
import java.awt.Label;
import java.awt.List;
import java.awt.Rectangle;
import java.awt.SystemColor;
import java.awt.TextField;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.StringTokenizer;
import java.util.Vector;
// Generate UI elements
public DfcTestFrame() {
try {
jbInit();
} catch (Exception e) {
e.printStackTrace();
}
}
// Initialize UI components
private void jbInit() throws Exception {
textField_arguments.setText("Arguments go here.");
label_results.setText("Messages appear here.");
button_directory.setLabel("Directory");
button_directory.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
button_directory_actionPerformed(e);
}
});
this.add(button_delete,
new GridBagConstraints(
3, 2, 1, 1, 0.0, 0.0,
GridBagConstraints.CENTER,
GridBagConstraints.NONE,
new Insets(15, 0, 0, 0), 7, 8)
);
this.add(button_move,
new GridBagConstraints(
1, 2, 1, 1, 0.0, 0.0,
GridBagConstraints.CENTER,
GridBagConstraints.NONE,
new Insets(15, 10, 0, 0), 22, 8)
);
this.add(button_copy,
new GridBagConstraints(
0, 2, 1, 1, 0.0, 0.0,
GridBagConstraints.CENTER,
GridBagConstraints.NONE,
new Insets(15, 10, 0, 0), 28, 8)
);
this.add(textField_arguments,
new GridBagConstraints(
0, 0, 10, 1, 1.0, 0.0,
GridBagConstraints.WEST,
GridBagConstraints.HORIZONTAL,
new Insets(5, 10, 0, 15), 527, 0)
);
this.add(list_id,
new GridBagConstraints(
0, 3, 10, 1, 1.0, 1.0,
GridBagConstraints.CENTER,
GridBagConstraints.BOTH,
new Insets(40, 10, 0, 15), 372, 160)
);
this.add(button_directory,
new GridBagConstraints(
0, 1, 1, 1, 0.0, 0.0,
GridBagConstraints.CENTER,
GridBagConstraints.NONE,
new Insets(10, 10, 0, 0), 14, 8)
);
this.add(label_results,
new GridBagConstraints(
0, 4, 9, 1, 0.0, 0.0,
GridBagConstraints.WEST,
GridBagConstraints.NONE,
new Insets(0, 10, 7, 0), 507, 11)
);
button_checkOut.setLabel("Check Out");
button_checkOut.setEnabled(false);
button_checkOut.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
button_checkOut_actionPerformed(e);
}
});
button_cancelCheckout.setLabel("Cancel Checkout");
button_cancelCheckout.setEnabled(false);
button_cancelCheckout.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
button_cancelCheckout_actionPerformed(e);
}
});
button_checkIn.setLabel("Check In");
button_checkIn.setEnabled(false);
button_checkIn.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
button_checkIn_actionPerformed(e);
}
});
button_import.setLabel("Import");
button_import.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
button_import_actionPerformed(e);
}
});
button_export.setLabel("Export");
button_export.setEnabled(false);
button_export.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
button_export_actionPerformed(e);
}
});
button_copy.setLabel("Copy");
button_copy.setEnabled(false);
button_copy.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
button_copy_actionPerformed(e);
}
});
button_move.setLabel("Move");
button_move.setEnabled(false);
button_move.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
button_move_actionPerformed(e);
}
});
button_delete.setLabel("Delete");
button_delete.setEnabled(false);
button_delete.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
button_delete_actionPerformed(e);
}
});
this.add(button_checkOut,
new GridBagConstraints(
3, 1, 2, 1, 0.0, 0.0,
GridBagConstraints.CENTER,
GridBagConstraints.NONE,
new Insets(10, 10, 0, 0), 8, 8)
);
this.add(button_export,
new GridBagConstraints(
7, 1, 1, 1, 0.0, 0.0,
GridBagConstraints.CENTER,
GridBagConstraints.NONE,
new Insets(10, 10, 0, 0), 21, 8)
);
this.add(button_import,
new GridBagConstraints(
6, 1, 1, 1, 0.0, 0.0,
GridBagConstraints.CENTER,
GridBagConstraints.NONE,
new Insets(10, 10, 0, 0), 16, 8)
);
this.add(button_checkIn,
new GridBagConstraints(
1, 1, 2, 1, 0.0, 0.0,
GridBagConstraints.CENTER,
GridBagConstraints.NONE,
new Insets(10, 10, 0, 0), 26, 8)
);
this.add(button_cancelCheckout,
new GridBagConstraints(
5, 1, 1, 1, 0.0, 0.0,
GridBagConstraints.CENTER,
GridBagConstraints.NONE,
new Insets(10, 10, 0, 0), 3, 8)
);
}
// Handler for the Directory button. To use the button, enter the arguments
// repository_name, user_name, password, directory_path in the arguments field.
private void button_directory_actionPerformed(ActionEvent e) {
try {
list_id.removeAll();
// Cycle through the collection getting the object ID and adding it to the
// m_fileIDs Vector. Get the object name and add it to the file list control.
while (folderList.next())
{
IDfTypedObject doc = folderList.getTypedObject();
docId = doc.getString("r_object_id");
docName = doc.getString("object_name");
list_id.add(docName);
m_fileIDs.addElement(docId);
}
/*
* Handler for the Checkout button. To use the button, enter valid values for
* repository, userName, password, and directory path in the arguments
* field and click the Directory button.
* Choose one of the document names displayed in the list, and click the
* Checkout button.
*/
try {
// Get the internal document ID from the m_fileIDs member variable, based on
// the selected item's position in the list control.
String docId =
m_fileIDs.elementAt(list_id.getSelectedIndex()).toString();
// Populate the TutorialCheckout object with session info and the internal
// ID of the document to be checked out, perform the checkout operation,
// and display the results.
label_results.setText(
tco.checkoutExample(
mySession,
docId
)
);
}
catch (Exception ex) {
System.out.println("Exception hs been thrown: " + ex);
ex.printStackTrace();
}
finally {
/*
* Handler for the cancel checkout action. To use this command, enter the
* repository name, user name, password, and directory path in the arguments
* field, then click the Directory button. From the file list, choose an object
* that you have checked out, then click the Cancel Checkout button.
*/
private void button_cancelCheckout_actionPerformed(ActionEvent e) {
try {
// Populate the session manager with user and repository info.
mySessMgr =
new TutorialSessionManager(repository, userName, password);
/*
* Handler for the Check In button. To use the button, enter valid values for
* repository, userName, password, and directory path in the arguments
* field and click the Directory button.
* Choose one of the document names displayed in the list that you have
* checked out, and click the Check In button.
*/
private void button_checkIn_actionPerformed(ActionEvent e) {
label_results.setText("Attempting to import....");
try {
/*
* Handler for the Export button. To use the button, enter valid values for
* repository, userName, password, directory path, and the target local
* directory to which you want to export in the arguments field, then click
* the Export button.
*/
label_results.setText("Attempting to export....");
try {
}
catch (Exception ex) {
System.out.println("Exception has been thrown: " + ex);
ex.printStackTrace();
}
finally {
mySessMgr.releaseSession(mySession);
} }
/*
* Handler for the Copy button. To use this button, enter valid values for the
* repository, userName, password, sourceDirectory, and destinationDirectory,
* then click the Directory button. Choose a document from the list, then click
* the Copy button.
*/
private void button_copy_actionPerformed(ActionEvent e) {
label_results.setText("Attempting to copy....");
try {
/*
* Handler for the Move button. To use this button, enter valid values for
* the repository, userName, password, sourceDirectory, and
* destinationDirectory, then click the Directory button. Choose a document
* from the list, then click the Move button.
*/
label_results.setText("Attempting to move....");
/*
* Handler for the Delete button. To use this button, enter valid
* comma-separated values for the repository, userName, password, and directory;
* as the fifth argument, enter true to delete all versions or false to
* delete only the current version of the document.
* Click the Directory button. Choose a document from the list, then
* click the Delete button.
*/
private void button_delete_actionPerformed(ActionEvent e) {
label_results.setText("Attempting to delete...");
try {
Index

L
leaks, 19
local files, 62

M
manifests, 22
modules, 88
Modules folder, 88

N
naming conventions, 22, 93, 99
.NET platform, 22, 87
newObject method, 16
newService method, 93, 116 to 117
newSessionManager method, 16, 116
NEXT_MAJOR field, 61
nodes, see virtual documents, terminology; operations, nodes
null returns, 83

O
OLE (object linking and embedding)
   links, 67
online reference documentation, 23
operation monitors, 85
operations
   aborting, 83
   add method, 55, 83
   cancel checkout, 64
   checkin, 61
   checkout, 58
   delete, 76
   errors, 83

P
packages, 22
parents, see virtual documents, terminology
Predictive caching, 78
progressReport method, 85

R
Reader class, 82
release method, 16
releaseSession method, 95, 117
renditions, 82
reportError method, 85
repositories, 88

S
sandboxing, 90
SBOs, see service based objects
schemas, see XML schemas
service based objects (SBOs), 93
   architecture, 93
   implementing, 94
   instantiating, 99, 116
   returning TBOs, 117
   session manager, 117
   specifying a repository, 95
   threads, 116
   transactions, 96
session leaks, 19
session managers, 15 to 16, 93
   internal statistics, 96
   transactions, 97
setCheckinVersion method, 61
setDestination method, 80 to 82
setDestinationDirectory method, 79

T
threads, 97 to 98
transactions
   nested, 98
type based objects (TBOs)
   returning from SBO, 117

X
Xalan transformation engine, 80
Xerces XML parser, 79
XML schemas, 79
XML support, 51, 79 to 80
XSLT stylesheets, 80 to 82