HDID Manual
Version 6.0
MK-93HDID015-00
October 2017
© 2017 Hitachi, Ltd. All rights reserved.
Hitachi reserves the right to make changes to this Material at any time
without notice and assumes no responsibility for its use. The Materials
contain the most current information available at the time of publication.
Notice: Hitachi products and services can be ordered only under the terms
and conditions of the applicable Hitachi agreements. The use of Hitachi
products is governed by the terms of your agreements with Hitachi Vantara
Corporation.
By using this software, you agree that you are responsible for:
1. Acquiring the relevant consents as may be required under local privacy
laws or otherwise from authorized employees and other individuals to
access relevant data; and
2. Verifying that data continues to be held, retrieved, deleted, or otherwise
processed in accordance with relevant laws.
Hitachi Data Instance Director Quick Start Guide
Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet
Explorer logo, Microsoft, the Microsoft Corporate Logo, MS-DOS, Outlook, PowerPoint,
SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual
Studio, Windows, the Windows logo, Windows Azure, Windows PowerShell, Windows
Server, the Windows start button, and Windows Vista are registered trademarks or
trademarks of Microsoft Corporation. Microsoft product screen shots are reprinted
with permission from Microsoft Corporation.
All other trademarks, service marks, and company names in this document or
website are properties of their respective owners.
Contents
Preface................................................................................................. 7
Software version...................................................................................................... 8
Intended audience................................................................................................... 8
Related documents.................................................................................................. 8
Document conventions............................................................................................. 8
Conventions for storage capacity values.....................................................................9
Accessing product documentation........................................................................... 10
Getting help...........................................................................................................10
Comments............................................................................................................. 10
1 Introduction....................................................................................... 13
About Data Instance Director.................................................................................. 14
Architecture........................................................................................................... 14
Features and benefits............................................................................................. 15
Glossary..............................................................................................41
Preface
This document describes how to install and use Hitachi Data Instance
Director.
□ Software version
□ Intended audience
□ Related documents
□ Document conventions
□ Getting help
□ Comments
Software version
This document revision applies to Data Instance Director version 6.0.
Intended audience
This document is intended for users who want to install Hitachi Data Instance
Director. For guidance on configuring Role Based Access Control, please refer
to the accompanying User Guide.
Related documents
• Hitachi Data Instance Director Software Release Notes, RN-93HDID018
• Hitachi Data Instance Director User’s Guide, MK-93HDID014
• Hitachi Data Instance Director Quick Start Guide, MK-93HDID015
• Hitachi Data Instance Director Microsoft Exchange Server Application
Guide, MK-93HDID012
• Hitachi Data Instance Director Microsoft SQL Server Application Guide,
MK-93HDID011
• Hitachi Data Instance Director Oracle Application Guide, MK-93HDID010
• Hitachi Data Instance Director SAP HANA Application Guide,
MK-93HDID017
Document conventions
This document uses the following typographic conventions:
Convention Description
Bold • Indicates text in a window, including window titles, menus, menu options,
buttons, fields, and labels. Example:
Click OK.
• Indicates emphasized words in list items.
Italic • Indicates a document title or emphasized words in text.
• Indicates a variable, which is a placeholder for actual text provided by the
user or for output by the system. Example:
pairdisplay -g group
(For exceptions to this convention for variables, see the entry for angle
brackets.)
Monospace Indicates text that is displayed on screen or entered by the user. Example:
pairdisplay -g oradb
• Variables in headings. Example:
Status-<report-name><file-version>.csv
[ ] square brackets Indicates optional values. Example: [ a | b ] indicates that you can choose a,
b, or nothing.
{ } braces Indicates required or expected values. Example: { a | b } indicates that you
must choose either a or b.
| vertical bar Indicates that you have a choice between two or more options or arguments.
Examples:
WARNING Warns the user of a hazardous situation which, if not avoided, could
result in death or serious injury.
Conventions for storage capacity values
Logical capacity values (for example, logical device capacity, cache memory
capacity) are calculated based on the following values:
Open-systems:
• OPEN-V: 960 KB
• Others: 720 KB
1 KB = 1,024 (2^10) bytes
1 MB = 1,024 KB or 1,024^2 bytes
1 GB = 1,024 MB or 1,024^3 bytes
1 TB = 1,024 GB or 1,024^4 bytes
1 PB = 1,024 TB or 1,024^5 bytes
1 EB = 1,024 PB or 1,024^6 bytes
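These binary multiples can be checked with ordinary shell arithmetic. A minimal sketch (illustrative only; the variable names are ours):

```shell
# Each unit is 1,024 times the previous one
kb=1024
mb=$((kb * 1024))   # 1,048,576 bytes
gb=$((mb * 1024))   # 1,073,741,824 bytes
tb=$((gb * 1024))   # 1,099,511,627,776 bytes
echo "1 MB = $mb bytes; 1 TB = $tb bytes"
```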
Getting help
Hitachi Vantara Support Connect is the destination for technical support of
products and solutions sold by Hitachi Vantara. To contact technical support,
log on to Hitachi Vantara Support Connect for contact information: https://
support.hitachivantara.com/en_us/contact-us.html.
Comments
Please send us your comments on this document to
[email protected]. Include the document title and number,
including the revision level (for example, -07), and refer to specific sections
and paragraphs whenever possible. All comments become the property of
Hitachi Vantara Corporation.
Thank you!
1
Introduction
This chapter describes the software installation and initial configuration tasks.
Note: Before you install Data Instance Director, confirm that your hardware
and software meet the requirements that are outlined on the Data Instance
Director product website at https://www.hds.com/products/storage-software/
hitachi-data-instance-director.html.
□ Architecture
About Data Instance Director
Data Instance Director provides a modern, holistic approach to data
protection, recovery and retention. It has a unique work flow-based policy
engine, presented in an easy-to-use whiteboard-style user interface that
helps map copy-data management processes to business priorities. A wide
range of fully integrated hardware storage-based and host-based
incremental-forever data capture capabilities are included that can be
combined into complex work flows to automate and simplify copy-data
management. With these you can:
• Choose the right technology for each workload, based on service level
requirements, but manage them from one place.
• Drag-and-drop a range of protection, retention and repurposing
capabilities to easily create and manage complex work flows.
• Automate and orchestrate Hitachi storage-assisted snapshots, clones and
replications to eliminate backup windows.
• Automate the mounting of Hitachi storage based snapshots, clones and
replications for proxy backup and repurposing.
Architecture
At the heart of the data management process is the Master Node, which
provides instruction to all other nodes. Unless specifically instructed to do so,
the Master Node does not take part in the data flow, and nodes pass data among themselves according to rules distributed from the Master Node.
Note: It is recommended that the Master node not be assigned any other roles.
2
Installation and license activation
□ About licenses
1. Front end capacity refers to the total size of the primary data set being
protected by Data Instance Director. For example, if you have 40TB of data
on primary volumes being replicated to a local secondary site and a remote
tertiary site, then the license must cover only the primary data set size
(40TB) even though the total storage capacity is 120TB.
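The arithmetic in this example can be sketched as follows (the figures are taken from the example above; this is illustrative only, not a licensing tool):

```shell
primary_tb=40                      # primary data set: what the license must cover
copies=3                           # primary + local secondary + remote tertiary
total_tb=$((primary_tb * copies))  # total storage consumed across all sites
echo "Licensed front end capacity: ${primary_tb} TB"
echo "Total storage capacity used: ${total_tb} TB"
```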
Note: Overrunning the licensed front end capacity will not stop HDID from protecting your data; however, some features will be limited until the additional capacity is licensed.
4. Encryption refers to technologies that prevent data from being read either during transmission (over-the-wire) or while residing in a repository (data-at-rest).
• General:
○ A machine must be assigned that controls the Block or File storage
device. This node must be a Windows or RedHat Linux machine with the
HDID Client software installed.
○ All primary data (paths or application data) to be snapshotted or
replicated must exist on the same storage device.
• Hitachi Block:
OS Requirements
Caution:
• Refer to the HDID support matrices at https://www.hitachivantara.com/
products/storage-software/hitachi-data-instance-director.html before
attempting installation, to ensure that you understand the infrastructure
requirements, available functionality and upgrade paths for your particular
environment.
• If you intend to use HDID with Hitachi Block or File based storage
hardware then refer to Hitachi block and file prerequisites on page 18
before proceeding.
Note:
• For a new installation, the Master node must be installed before any Client
nodes.
OS Note
Linux and Solaris: Ensure you have execute permissions, and use the command chmod 755 to make the installer executable before running it.
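For example (the build number in this filename is hypothetical, and the touch line merely creates a stand-in file so the snippet is self-contained):

```shell
installer=HDID-R6.0-6.0.0.12345-Linux-x64   # hypothetical installer filename
touch "$installer"                          # stand-in for the downloaded binary
chmod 755 "$installer"                      # grant execute permission
[ -x "$installer" ] && echo "ready to run: ./$installer"
```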
For example, if 100 GB of usable storage is required, then the total disk size
will be 110 GB (100 GB of usable storage and 10 GB of unused storage).
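Assuming the 10% unused-storage overhead implied by this example, the required total disk size can be estimated as:

```shell
usable_gb=100                            # usable storage you need
total_gb=$((usable_gb * 110 / 100))      # add the ~10% unused overhead from the example
echo "Total disk size: ${total_gb} GB"
```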
Solaris: When installing Data Instance Director on a Solaris node, the GCC runtime library (version 4.5 or later) must be installed first. Type the following command:
All nodes that will participate in a backup data flow need to have Data
Instance Director installed. A node used only to access the web based user
interface does not need to have any components installed on it.
Procedure
1. Locate and run the installer appropriate for the target OS and hardware
architecture.
The installer filename has the following format:
HDID-Rm.n-m.n.n.nnnnn-ostype-proctype
where:
• m.n-m.n.n.nnnnn - is the version and build number
• ostype - is the target operating system type:
○ Win
○ Linux
○ Solaris
○ AIX
• proctype - is the target processor type:
○ x32
○ x64
○ SPARC
○ PPC
The Setup wizard will be launched if a GUI shell is available. If not then
the same information will be presented using the text mode shell.
2. If a previous installation of Data Instance Director is found, the installer will prompt you to upgrade or abort the installation.
• Click Yes to upgrade the existing HDID installation on this node.
• Client - Select this option to install all other node types. The specific
roles assumed by Client nodes are defined via the Nodes Inventory
once installation is completed. These roles include:
○ Data Sources (basic hosts, VMs, application servers, etc.)
○ ISMs (for controlling Hitachi Block and File storage hardware, etc.)
○ Repositories (acting as host based backup storage destinations)
○ Replication Destinations (acting as host based mirrors)
7. Specify a node name to be used within Data Instance Director then click
Next.
Note: You do not need to restart the machine. The installer starts
all the necessary Data Instance Director components on the
system.
Note: Initial logon must be done with the username specified for
the User Account during the Master installation. The username
must be qualified with the local domain name master as follows:
<username>@master
or
master/<username>
Procedure
1. Install HDID on the controlling node (see How to install HDID on the
controlling cluster node on page 25).
2. Install HDID on the remaining nodes (see How to install HDID on the
remaining cluster nodes on page 26).
3. Add the cofiohub service (see How to add the hub service to a cluster on
page 26).
4. Add HDID licenses for each node (How to add licenses to a cluster on
page 27).
5. Add an authentication domain for the cluster (How to configure
authentication for a cluster on page 28).
Procedure
4. Click Next.
5. Select the Master installation type, then click Next.
6. Change the default node name, because all cluster nodes must share the same node name.
Use a generic name such as Master. Click Next to start the installation.
7. Click Finish, then shut down the machine to invoke failover.
Be sure to specify the same installation directory, clustered role and node name that were used in How to install HDID on the controlling cluster node on page 25.
For each of the remaining nodes in the cluster perform the following steps:
Procedure
Procedure
Install HDID on each of the nodes in the Windows cluster as described in How
to install HDID on a Windows cluster on page 25.
Because each machine in the cluster has a different machine ID, a separate
license is required for each node:
Procedure
1. Navigate to the HDID web UI using the IP address for the cofiohub
service.
2. Log in using the local machine credentials for master\administrator.
3. Add the license key for this machine following the procedure described in
How to Add a License on page 37.
This node is now licensed, but the other nodes in the cluster are not.
4. Browse to the installation directory that you specified when installing
HDID on the controlling cluster node (How to install HDID on the
controlling cluster node on page 25) then navigate to HDID-Install-Dir
\db\config\
5. Right-click License.xml and open the file with WordPad.
<licenses>
<entry>ADYFKAE9PMM9BM2KXDNWCEO8PSABE244U8GA</entry>
</licenses>
6. Copy line 2 (the license key for the currently active cluster node) and
insert it as a new line below line 2.
7. Change the license key on line 3 to match the one provided for your
second cluster node.
The following XML should be displayed (the license keys will differ from the ones shown here):
<licenses>
<entry>ADYFKAE9PMM9BM2KXDNWCEO8PSABE244U8GA</entry>
<entry>DFGA54AGFDFHDJK675HH86453GHTFGD553DR</entry>
</licenses>
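After hand-editing License.xml, a quick sanity check is to confirm that the file contains one <entry> element per cluster node. A sketch (the heredoc recreates the example file above as a stand-in for the real one under HDID-Install-Dir\db\config\):

```shell
# Recreate the example License.xml as a stand-in
cat > License.xml <<'EOF'
<licenses>
<entry>ADYFKAE9PMM9BM2KXDNWCEO8PSABE244U8GA</entry>
<entry>DFGA54AGFDFHDJK675HH86453GHTFGD553DR</entry>
</licenses>
EOF
grep -c '<entry>' License.xml   # prints 2: one key per cluster node
```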
Install HDID on each of the nodes in the Windows cluster as described in How
to install HDID on a Windows cluster on page 25.
Procedure
1. Navigate to the HDID web UI using the IP address for the cofiohub
service.
2. Log in using the local machine's master\administrator credentials.
3. Add a Domain that will perform authentication for the cluster.
a. Specify the Authentication Space Name.
b. Select the required authentication type (e.g. Active Directory).
c. Select the cluster node as the Proxy.
d. Enter the Active Directory Domain Name.
4. Add an ACP Association that will provide administrator level access to
HDID.
a. Specify the ACP Association Name.
b. Select the User ACP Association type.
c. Browse for the required User Name from the Authentication
Space specified in the previous step.
d. Add the Default Administrator from the Available Profiles listed.
Note:
• The Manage Software Updates RBAC activity must be assigned to users
who perform upgrades. It is recommended that this activity is restricted to
administrative users only.
• Remote upgrades can only be performed for HDID version 6.x. If you are
upgrading from HDID 5.x to 6.x, please refer to How to upgrade to HDID
version 6.0 on page 30.
• Nodes installed on a Microsoft failover cluster need to be upgraded locally.
It's not possible to upgrade remotely because the standby nodes in the
cluster are not available.
If upgrading, obtain the upgrade installer files from your Hitachi Vantara
support representative.
When an upgrade starts, the Data Instance Director services are shut down, causing the upgrading OS Host node, and any nodes for which it serves as a proxy, to go offline in the Nodes Inventory. Any active data flows using those
nodes will be temporarily interrupted. These nodes will come back online
again when the services are automatically restarted on the OS Host node,
after the upgrade process is completed, and the affected data flows will
resume operation.
Procedure
HDID-Rm.n-m.n.n.nnnnn-ostype-proctype
where:
• m.n-m.n.n.nnnnn - is the version and build number
• ostype - is the target operating system type:
○ Win
○ Linux
○ Solaris
○ AIX
• proctype - is the target processor type:
○ x32
○ x64
○ SPARC
○ PPC
2. Copy both the installer and configuration files to the C:\Program Files\Hitachi\HDID\runtime\updater folder on the Master node.
This folder will need to be created manually if this is the first time an
update has been applied.
3. Click Nodes on the Navigation Sidebar to open the Nodes Inventory.
4. Select the nodes to be upgraded (Master first, then Clients), then click
Upgrade Clients to start the upgrade process.
Note: Only OS Host Client and Master node types can be upgraded
remotely.
When upgrading the Master node, the UI will log out when the node's services are stopped by the installer. Wait a few minutes, then log back in and complete the upgrade of the remaining nodes.
When upgrading Client nodes, each one (and any nodes for which that
Client acts as a Proxy) will go offline temporarily while the upgrade is
applied.
When the nodes come back online, the Version shown on the respective
tile will be updated accordingly.
5. All active data flows should now be reactivated and all destination nodes
should be resynchronized with their sources.
The installer will warn about features that are unsupported before upgrading
but will not detect if unsupported features are being used in your existing
environment.
Note:
• Do not perform the upgrade while hardware replications are being paired.
Wait for pairings to complete.
• Ensure that no Hitachi Block or File snapshots are mounted before
upgrading.
• Ensure that you have upgraded your existing HDID environment to version
5.5.2 or later, following the supported upgrade path.
• You must upgrade all available nodes to version 6.0 before starting to use
the 6.0 environment; existing Client nodes will not be recognised by the
upgraded 6.0 Master.
• HDID 6.0 has improved, granular rules activation. To take advantage of
this, large, complex data flows should be refactored. Consider splitting
them into smaller individual flows. This is best done before upgrading.
• After upgrading, any VMware VADP policies will work as before. However
the policy wizard will not show any of the servers selected under Protect
entire Virtual Machine server and the ESX Server field of the table will be
blank. The required selections must be remade.
• After upgrading, any Exchange Server or SQL Server Application policies
will work as before. However, to take advantage of the new features
available in HDID 6.0, you will need to manually add new Application
nodes and amend any data flows and policies that use them. The rules
compiler will generate warnings where data flows and policies need to be
amended.
• After upgrading, Permissions for the following items will be affected:
○ Dataflows
○ Destination Templates
○ Notifications
For normal creation of these items in 6.0, the user creating them is given
Read/Write Access, allowing that user to see and change them. Users with
the RBAC Override Ownership Permissions privilege can also see and edit
them. Nobody else will be able to read or modify these items unless
granted access.
When upgrading to 6.0, no Permissions are set for these items. This
means that ONLY users with the RBAC Override Ownership Permissions
privilege can see and edit them. For other users to see the upgraded items
they will need to be granted access permissions.
Users without the RBAC Override Ownership Permissions privilege are
prevented from removing all permissions; although they can still remove
their own access if required. Only users with the RBAC Override Ownership
Permissions privilege can remove all permissions.
• After upgrading, all log entries generated prior to the upgrade will be
unavailable in 6.0. If you require access to legacy logs then the logs must
be saved before upgrading. Ensure the log filters are set to their defaults.
The last 25,000 log entries are saved.
• After upgrading, all legacy notifications will be removed. Make a note of
the notifications currently configured in your existing environment before
upgrading. These must be manually re-configured after the upgrade has
completed.
• After upgrading, Repositories that were created only for use as Hardware
Storage Device proxies (ISMs) in existing environments will not be
present. The hardware storage device proxy nodes will be present, but the
unused repositories will be missing from the nodes inventory.
In previous versions of HDID there was no distinction between repositories
and hardware storage device proxies. In 6.0 repositories and hardware
storage proxies are treated as separate entities by node management.
• In previous versions it was possible to create a node group with mixed
node types, although this would cause the rules compiler to generate an
error. On upgrade, the first node in a group is used to set the type of the
group; any subsequent nodes that are of a different type are removed
from the group. Warning logs are generated for any node which is
removed from a group because of type mismatch.
• In previous versions a repository store was created for each source node,
mover type and store template. In 6.0 a repository store is also created
for each policy. If you upgrade a legacy data flow where a source node
backs up to a repository using two policies over a single mover (e.g. two
batch backup policies with different RPOs), HDID 6.0 will create a second
store in the repository so that there is one store for each policy.
When the rules are reactivated after upgrading, the store holding the
legacy backups will complete a fast incremental resynchronization, since it
is already populated. The new store will complete a full initial synchronization.
Procedure
Note: The rules compiler will generate warnings where data flows
and policies need to be amended to take advantage of new 6.0
features.
4. Once you have set up RBAC, reconfigure the access permissions for each
data flow, destination template, notification, policy and schedule. They
will not yet be visible to any users other than the one performing the
upgrade.
5. If you have not yet done so, refactor any large and parallel data flows as
described above.
6. Review and amend any Exchange Server and SQL Server Application
policies, since these must be modified to take advantage of new 6.0
features.
Refer to the relevant Application Guides listed in Related documents on page 8 for details of how to implement Application policies and data flows in HDID 6.0.
7. Re-configure all notification events. These will have been deleted during the upgrade process.
Configure the firewall to allow communication between all HDID nodes on one
open port: 30304 (TCP).
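For example, on a Linux node this port could be opened with firewalld or ufw. This is a sketch only — the commands require root, and your site's firewall tooling may differ:

```shell
# firewalld (RHEL/CentOS):
firewall-cmd --permanent --add-port=30304/tcp
firewall-cmd --reload

# ufw (Ubuntu/Debian):
ufw allow 30304/tcp
```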
Procedure
Obtain a valid SSL certificate file (.crt or .cer) and private key file (.key)
for the HDID Web Server from your organization's Certificate Authority.
Procedure
You may need to insert the above XML within the following section of the
config file:
<cofioxmllist>
...
</cofioxmllist>
Note: If you cut and paste these code fragments please ensure
that line ends are correctly positioned.
Procedure
Caution:
• Uninstalling the Data Instance Director Master does not deactivate active
rules on Client nodes. To do this, you must either clear the data flow(s)
and redistribute the rules before uninstalling, or uninstall the Clients
individually.
OS Note
Linux and AIX: Completely removing HDID from a Linux or AIX machine is straightforward. Enter the following commands and follow the on-screen instructions:
/opt/Hitachi/HDID/uninstall
rm -rf /opt/Hitachi/HDID
Procedure
1. Make a note of the path to the install directory and of any repositories or ISMs. These will be required later if you wish to completely remove all traces of the installation and do not want to retain the backup data in the repositories.
2. From the Start menu run Uninstall Hitachi Data Instance Director.
Alternatively, navigate to the installation directory and execute the
command uninstall.exe.
A dialog will be displayed asking you to confirm that you wish to proceed.
Running the command uninstall.exe --mode unattended will cause
the uninstall process to proceed without requiring user interaction;
although popups may briefly appear before being automatically
dismissed.
3. Click Yes to proceed with the uninstall process (or No to abort the
process).
A Setup dialog is displayed containing an Uninstall Status bar and
information about what actions are currently being performed.
• HKEY_LOCAL_MACHINE\SOFTWARE\Cofio Software
• HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Cofio Software
• HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CofioHub
• HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\dcefltr
• HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\sdrefltr
• HKEY_LOCAL_MACHINE\SOFTWARE\Hitachi Data Systems\Hitachi Data Instance Director
Glossary
A
Asynchronous journalling
Transactions are written to disk and also placed in a journal log file, to
protect against data loss in the event of a system failure. Transactions
from the log file are sent to the destination machine.
Asynchronous replication
Transactions are held in memory before being sent over the network. If
the network is unavailable then transactions are written to disk and sent
to the destination machine when the connection is re-established.
Asynchronous replication is best suited to connections whose availability or throughput is intermittent.
B
Backup
A copy that is created for operational and disaster recovery.
Bandwidth throttling
Used to control when and what proportion of available network bandwidth
is used by Data Instance Director for replication.
Batch backup
A process by which the repository is updated periodically using scheduled
resynchronizations. This method involves a scan of the source machine's
file system, but only the changed bytes are transferred and stored. This
method is useful for data that does not change often, such as data
contained on the operating system disk. Linux based source nodes are
only able to perform batch backups. See CDP below.
C
Clone
An operation where a copy of the database is created in another storage
location in a local or remote site.
D
Data flow
Identifies the data sources, movers and destinations participating in a
backup, along with interconnection paths between them. Policies are
assigned to each node to determine what type of data is backed up.
Data source
A machine hosting a file system or application where the HDID client
software is installed.
Deduplication
A method of reducing the amount of storage space that your organization
requires, to archive data, by replacing multiple instances of identical data
with references to a single instance of that data.
Destination node
A machine that is capable of receiving data for the purposes of archiving.
This machine might be the Data Instance Director Repository, a mirror, a
tape library, cloud storage or tiered storage.
G
Groups
Within the Node Manager window, you can assign multiple machines to
one or more groups. Then, within the Data Flow window, you can assign policies to the nodes within these groups en masse.
L
License key
A unique, alphanumeric code that is associated with the unique machine
ID that is generated during the Data Instance Director installation. The
license key must be activated in order to use the software.
Live backup
A backup technique that avoids the need for bulk data transfers by
continuously updating the repository with changes to the source file
system. This is similar to CDP but with longer retention periods and RPOs
being available. Live backups perform byte level change updates whereas
batch backups perform block level change updates.
M
Master node
The machine that controls the actions of other nodes within the Data
Instance Director network.
Mover
Defines the type of data movement operation to be performed between
source and destination nodes, during the creation of a data flow. Batch
movers perform block level data transfers on a scheduled basis, whereas
continuous movers perform byte level data transfers on a near-continuous
basis. Mirror movers are a specific type of continuous mover used to
create uni- and bi-directional mirrors.
P
Policy
A configurable data protection objective that is mapped to machines or
groups, and to the data management agents that implement the policy.
Multiple policies can be assigned to a single node.
R
Recovery Point Objective (RPO)
The frequency at which a backup will occur. This governs the point in time
to which data can be recovered should a restore be needed.
Replication
An operation where a copy of the data is created in another local or
remote location automatically.
Repository
A destination node that stores data from one or more source nodes. The
Data Instance Director Repository supports live backup, batch backup,
archiving, and versioning policies.
S
Snapshot (Thin Image)
A point in time copy of the data that is based on references to the original
data.
Source node
Any node (server, workstation or virtual machine) that hosts data to be
protected by Data Instance Director. The source node has an Active Data
Change Agent, which is responsible for monitoring the host file system
and performing the relevant actions defined by the policies. Nodes need
to be configured as a source node if they need to transfer locally stored
data to a destination node, or implement data tracking, blocking and
auditing functions. A node can be both a source and destination
simultaneously.
Synchronous replication
Transactions are transferred to the remote storage device immediately
and the write operation is signaled as completed only once data is
confirmed as written to both primary and secondary volumes.
Synchronous replication is best suited to high-bandwidth, low-latency connections.
Hitachi Vantara
Corporate Headquarters Regional Contact Information
2845 Lafayette Street Americas: +1 866 374 5822 or [email protected]
Santa Clara, CA 95050-2639 USA Europe, Middle East and Africa: +44 (0) 1753 618000 or [email protected]
www.HitachiVantara.com Asia Pacific: +852 3189 7900 or [email protected]
community.HitachiVantara.com