CDP/NSS
ADMINISTRATION GUIDE
Contents
Introduction
Network Storage Server (NSS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14
Continuous Data Protector (CDP) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20
Web Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25
Getting started with CDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26
Getting started with NSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
Storage Pools
Manage storage pools and the devices within storage pools . . . . . . . . . . . . . . . . . . . . .66
Create storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .67
Set properties for a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .68
Logical Resources
Types of SAN resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .72
Virtual devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .72
Check the status of a thin disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .74
Service-Enabled Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .75
Create SAN resources - Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76
Prepare devices to become SAN resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76
Create a virtual device SAN resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76
Create a Service-Enabled Device SAN resource . . . . . . . . . . . . . . . . . . . . . . . . . .83
Assign a SAN resource to one or more clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .86
After client assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
Windows clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
Solaris clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
Expand a virtual device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92
Service-Enabled Device (SED) expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .94
Grant access to a SAN resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
Unassign a SAN resource from a client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
Delete a SAN resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
CDP/NSS Appliances
Start the CDP/NSS appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96
Stop the CDP/NSS appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96
Log into the CDP/NSS appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98
Telnet access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98
Check the IPStor Server processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .99
Check physical resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .100
Check activity statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101
Remove a physical storage device from a storage server . . . . . . . . . . . . . . . . . . . . . .102
Configure iSCSI storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102
Configuring iSCSI software initiator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102
Configuring iSCSI hardware HBA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
Uninstall a storage server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
iSCSI Clients
Supported platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .106
Requirements for iSCSI clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .106
Configuring iSCSI clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
Enabling iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
Configure your iSCSI initiator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
Add your iSCSI client in the FalconStor Management Console . . . . . . . . . . . . . . . . . .108
Create storage targets for the iSCSI client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .113
Restart the iSCSI initiator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .114
Windows iSCSI clients and failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .114
Disable iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .114
SAN Clients
Add a client from the FalconStor Management Console . . . . . . . . . . . . . . . . . . . . . . .176
Add a client for FalconStor host applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .177
Security
System management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .178
Data access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .178
Account management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .179
Security recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .179
Storage network topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .180
Failover
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .181
Shared storage failover sample configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . .184
Failover requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .185
General failover requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .185
General failover requirements for iSCSI clients . . . . . . . . . . . . . . . . . . . . . . . . . . .186
Shared storage failover requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .186
FC-based Asymmetric failover requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . .187
Pre-flight checklist for failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .188
Connectivity failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .188
Default failover behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .189
Storage device path failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .190
Storage device failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .190
Storage server failure (including storage device failure) . . . . . . . . . . . . . . . . . . . . . . . .191
Failover restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .192
Failover setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .192
Recreate the configuration repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .203
Power Control options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .203
Check Failover status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
Failover Information report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
Failover network failure status report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .207
After failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .207
Manual recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .207
Auto recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .209
Fix a failed server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .209
Recover from a cross-mirror disk failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .210
Re-synchronize Cross mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
Remove Cross mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
Check resources and swap if possible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
Verify and repair a cross mirror configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
Modify failover configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .216
Make changes to the servers in your failover configuration . . . . . . . . . . . . . . . . . .216
Convert a failover configuration into a mutual failover configuration . . . . . . . . . . .217
Exclude physical devices from health checking . . . . . . . . . . . . . . . . . . . . . . . . . . .217
Change your failover intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .218
Verify physical devices match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .218
Start/stop failover or recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .219
Force a takeover by a secondary server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .219
Manually start a server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .219
Manually initiate a recovery to your primary server . . . . . . . . . . . . . . . . . . . . . . . .219
Suspend/resume failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .220
Remove a failover configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .221
Mirroring and Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
TimeMark/CDP and Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
Throttle and Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
Performance
SafeCache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .225
Configure SafeCache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .226
Create a cache resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .226
Global Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .230
SafeCache for groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .231
Check the status of your SafeCache resource . . . . . . . . . . . . . . . . . . . . . . . . . . .231
SafeCache properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .231
Disable your SafeCache resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .231
HotZone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .232
Read Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .232
Prefetch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .232
Configure HotZone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .233
Check the status of HotZone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .237
HotZone Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .239
Disable HotZone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .239
Mirroring
Synchronous mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .240
Asynchronous mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .241
Mirror requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .242
Mirror setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .242
Create cache resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .249
Check mirroring status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .250
Swap the primary disk with the mirrored copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .250
Promote the mirrored copy to become an independent virtual drive . . . . . . . . . . . . . .250
Recover from a mirroring hardware failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .252
Replace a disk that is part of an active mirror configuration . . . . . . . . . . . . . . . . . . . . .252
Replace a failed physical disk without rebooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . .253
Expand the primary disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .254
Manually synchronize a mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .254
Set mirror throttle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .255
Set alternative read mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .256
Set mirror resynchronization priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .256
Rebuild a mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .258
Suspend/resume mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .258
Change your mirroring configuration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .259
Set global mirroring options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .259
Remove a mirror configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .260
Mirroring and failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .260
Snapshot Resource
Create a Snapshot Resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .261
Check status of a Snapshot Resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .268
Protect your Snapshot Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .269
Options for Snapshot Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .269
Snapshot Resource shrink and reclamation policies . . . . . . . . . . . . . . . . . . . . . . . . . .270
Enable Reclamation Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .270
Global reclamation policy and retention schedule . . . . . . . . . . . . . . . . . . . . . . . . .272
Disable Reclamation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .273
Check reclamation status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .274
Shrink Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .274
Shrink a snapshot resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .276
Use Snapshot to copy a SAN resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .276
Check Snapshot Copy status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .280
Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .281
Create a group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .281
Groups with TimeMark/CDP enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .282
Groups with SafeCache enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .282
Groups with replication enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .282
Grant access to a group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .283
Add resources to a group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .283
Remove resources from a group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .285
Replication
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .320
Remote replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .320
Local replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .320
How replication works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .321
Delta replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .321
Continuous replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .321
Replication configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .322
Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .322
Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .322
Create a Continuous Replication Resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .331
Check replication status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .333
Replication tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .333
Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .334
Replication object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .334
Delta Replication Status Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .335
Replication performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
Set global replication options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
Tune replication parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
Assign clients to the replica disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .337
Switch clients to the replica disk when the primary disk fails . . . . . . . . . . . . . . . . . . . .337
Recreate your original replication configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .338
Use TimeMark/TimeView to recover files from your replica . . . . . . . . . . . . . . . . . . . . .339
Change your replication configuration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .339
Suspend/resume replication schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .341
Stop a replication in progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .341
Manually start the replication process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .341
Set the replication throttle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .342
Add a Target Site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .343
Manage Throttle windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .345
Manage Link Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .347
Add link types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .348
Edit link types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .348
Delete link types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .348
Set replication synchronization priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .349
Reverse a replication configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .349
Reverse a replica when the primary is not available . . . . . . . . . . . . . . . . . . . . . . . . . . .350
Forceful role reversal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .350
Repair a replica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .351
Relocate a replica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .351
Remove a replication configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .352
Expand the size of the primary disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .352
Near-line Mirroring
Near-line mirroring requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .355
Near-line mirroring setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .355
Enable Near-line Mirroring on multiple resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . .363
What's next? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .363
Check near-line mirroring status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .364
Near-line recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .365
Recover data from a near-line mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .365
Recover data from a near-line replica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .367
Recover from a near-line replica TimeMark using forceful role reversal . . . . . . . . . . . .370
Swap the primary disk with the near-line mirrored copy . . . . . . . . . . . . . . . . . . . . . . . .373
Manually synchronize a near-line mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .373
Rebuild a near-line mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .373
Expand a near-line mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .374
Expand a service-enabled disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .376
Suspend/resume near-line mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .377
Change your mirroring configuration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .377
Set global mirroring options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .377
Remove a near-line mirror configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .378
Recover from a near-line mirroring hardware failure . . . . . . . . . . . . . . . . . . . . . . . . . .379
Replace a disk that is part of an active near-line mirror . . . . . . . . . . . . . . . . . . . . . . . .380
Replace a failed physical disk without rebooting your storage server . . . . . . . . . . . . .380
Set Recovery Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .381
ZeroImpact Backup
Configure ZeroImpact backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .382
Back up a CDP/NSS logical resource using dd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .385
Restore a volume backed up using ZeroImpact Backup Enabler . . . . . . . . . . . . . . . . .386
Multipathing
Load distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .388
Preferred paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .388
Path management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .389
Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .393
Command Line Interface (CLI) error codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .407
SNMP Integration
SNMP traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .430
Implement SNMP support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .433
Microsoft System Center Operations Manager (SCOM) . . . . . . . . . . . . . . . . . . . . . . . .434
HP Network Node Manager (NNM) i9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .435
HP OpenView Network Node Manager 7.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .436
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .436
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .436
Statistics in NNM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .437
CA Unicenter TNG 2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .438
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .438
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .438
View traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .439
Statistics in TNG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .439
Launch the FalconStor Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . .439
IBM Tivoli NetView 6.0.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .440
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .440
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .440
Statistics in Tivoli . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .441
BMC Patrol 3.4.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .442
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .442
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .442
View traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .443
Statistics in Patrol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .443
Advanced topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .444
The snmpd.conf file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .444
Use an SNMP configuration for multiple storage servers . . . . . . . . . . . . . . . . . . . . . . .444
IPSTOR-MIB tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .445
Email Alerts
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .465
Modifying Email Alerts properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .476
Email format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .477
Limiting repetitive Emails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .477
Script/program trigger information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .477
BootIP
BootIP Set up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .480
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .481
Creating a boot image for a diskless client computer . . . . . . . . . . . . . . . . . . . . . . . . . .482
Initializing the configuration of the storage Server . . . . . . . . . . . . . . . . . . . . . . . . . . . .483
Enabling the BootIP from the FalconStor Management Console . . . . . . . . . . . . . . . . .483
Troubleshooting / FAQs
Frequently Asked Questions (FAQ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .496
NIC Port Bonding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .497
SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .497
Event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .497
Virtual devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .497
Multipathing method: MPIO vs. MC/S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .498
BootIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .500
SCSI adapters and devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .501
Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .503
Fibre Channel Target Mode and storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .503
Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .505
FalconStor Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .506
iSCSI Downstream Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .506
Power control option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .508
Protecting data in a Windows environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .508
Protecting data in a Linux environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .508
Protecting data in an AIX environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .509
Protecting data in an HP-UX environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .509
Logical resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .510
Network connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .510
Jumbo frames support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .512
Diagnosing client connectivity issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .512
Port Usage
SMI-S Integration
SMI-S Terms and concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .607
Using the SMI-S Provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .608
Launch the Command Central Storage console . . . . . . . . . . . . . . . . . . . . . . . . . .608
Add FalconStor Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .608
View FalconStor Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .609
View Storage Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .609
View LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .609
View Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .609
View Masking Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .610
Enable SMI-S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .610
Index
Introduction
As business IT operations grow in size and complexity, many computing
environments are stressed in attempting to keep up with the demand to store and
access data. Information and the effective management of the corresponding
storage infrastructure are critical to a company's success. Reliability, availability, and
disaster recovery capabilities are all key factors in the successful management and
protection of data.
FalconStor Continuous Data Protector (CDP) and Network Storage Server (NSS)
solutions address the growing need for data management, protection, preservation,
and integrity.
Architecture
NSS
CDP
FalconStor CDP can be deployed in several ways to best fit your organization's
needs. FalconStor CDP is available in multiple configurations suitable for remote
offices, branch offices, data centers, and remote DR sites. Appliances with internal
storage for both physical and virtual servers are available in various sizes for easy
deployment to remote sites or offices. Gateway appliances can be connected to any
existing external storage array, allowing you to use and reuse the storage systems
you already have in place.
FalconStor CDP can also be purchased as a software appliance kit to install on
servers or as a virtual appliance that integrates with virtual server technology.
FalconStor CDP can use both a host-based approach and a fabric-based approach
to capture and track data changes. For a host-based model, a FalconStor DiskSafe
Agent runs on the application server to capture block-level changes made to a
system or data disk without impacting application performance. It mirrors the data to
a back-end FalconStor CDP appliance, which handles all of the data protection
operations. All journaling, snapshot processing, mirroring, and replication occur on
the out-of-band FalconStor CDP appliance, so that primary storage I/O remains
unaffected.
Components
The primary components of the CDP/NSS Storage Network are the storage server,
SAN clients, and the FalconStor Management Console. These components all sit on
the same network segment, the storage network.
Server
The storage server is a dedicated network storage server. The storage server is
attached to the physical SCSI and/or Fibre Channel storage devices on one or more
SCSI or Fibre Channel busses.
The job of the storage server is to communicate data requests between the clients
and the logical (SAN) resources (logically mapped storage devices on the storage
network) via Fibre Channel or iSCSI.
SAN Clients
SAN Clients are the actual file and application servers. They are sometimes referred
to as IPStor SAN Clients because they utilize the storage resources via the storage
server.
You can have iSCSI or Fibre Channel SAN Clients on your storage network. SAN
Clients access their storage resources via iSCSI initiators (for iSCSI) or HBAs (for
Fibre Channel or iSCSI). The storage resources appear as locally attached devices
to the SAN Clients' operating systems (Windows, Linux, Solaris, etc.) even though
the SCSI devices are actually located at the storage server.
Console
The FalconStor Management Console is the administration tool for the storage
network. It is a Java application that can be used on a variety of platforms and
allows IPStor administrators to create, configure, manage, and monitor the storage
resources and services on the storage network.
Physical Resource
Physical resources are the actual devices attached to this storage server. These can
be hard disks, tape drives, device libraries, and RAID cabinets.
Logical Resource
All resources defined on the storage server, including SAN Resources (virtual drives and Service-Enabled Devices), Replica Resources, and Snapshot Groups.
Clients do not gain access to physical resources; they only have access to logical
resources. This means that an administrator must configure each physical resource
to one or more logical resources so that they can be assigned to the clients.
Logical Resources consist of sets of storage blocks from one or more physical hard
disk drives. This allows the creation of Logical Resources that contain a portion of a
larger physical disk device or an aggregation of multiple physical disk devices.
Understanding how to create and manage Logical Resources is critical to a
successful storage network. See Logical Resources for more information.
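To picture how a Logical Resource maps onto physical storage, the short sketch below models a logical resource as an ordered list of segments, each covering a range of blocks on a physical disk. This is purely an illustration of the concept described above, not FalconStor code; the names and sizes are invented.

    # Illustrative model (not FalconStor code): a logical resource built from
    # block ranges ("segments") on one or more physical disks.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        physical_disk: str   # identifier of the physical disk
        first_block: int     # starting block on that disk
        block_count: int     # number of blocks in this segment

    @dataclass
    class LogicalResource:
        name: str
        segments: list

        def total_blocks(self):
            return sum(s.block_count for s in self.segments)

    # One logical resource made from part of one disk plus all of another.
    res = LogicalResource("LogicalResource1", [
        Segment("disk-A", first_block=0, block_count=500_000),
        Segment("disk-B", first_block=0, block_count=1_000_000),
    ])
    print(res.total_blocks())   # 1500000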
Acronyms
ACL - Access Control List
ACSL - Adapter, Channel, SCSI, LUN
API - Application Programming Interface
BDC - Backup Domain Controller
BMR - Bare Metal Recovery
CCM - Central Client Manager
CCS - Command Central Storage
CDP - Continuous Data Protector
CDR
CHAP - Challenge Handshake Authentication Protocol
CIFS - Common Internet File System
CLI - Command Line Interface
DAS - Direct Attached Storage
FC - Fibre Channel
FCoE - Fibre Channel over Ethernet
GUI - Graphical User Interface
GUID - Globally Unique Identifier
HBA - Host Bus Adapter
HCA
IMA
I/O - Input / Output
IPMI - Intelligent Platform Management Interface
iSCSI - Internet Small Computer System Interface
JBOD - Just a Bunch of Disks
LAN - Local Area Network
LUN - Logical Unit Number
MIB - Management Information Base
NFS - Network File System
NIC - Network Interface Card
NPIV - N_Port ID Virtualization
NSS - Network Storage Server
NTFS - NT File System
NVRAM - Non-Volatile Random Access Memory
OID - Object Identifier
PDC - Primary Domain Controller
POSIX - Portable Operating System Interface
RAID - Redundant Array of Independent Disks
RAS
RPC - Remote Procedure Call
SAN - Storage Area Network
SCSI - Small Computer System Interface
SDM
SED - Service-Enabled Device
SMI-S - Storage Management Initiative Specification
SNMP - Simple Network Management Protocol
SRA
SSD - Solid State Drive
VAAI - vStorage APIs for Array Integration
VLAN - Virtual Local Area Network
VSS - Volume Shadow Copy Service
WWNN - World Wide Node Name
WWPN - World Wide Port Name
Terminology
Appliance-based protection
Bare Metal Recovery (BMR)
The process of rebuilding a computer after a catastrophic failure. The normal bare
metal restoration process is: install the operating system from the product disks,
install the backup software (so you can restore your data), and then restore your
data.
Central Client Manager (CCM)
CDP Gateway
Another term for a storage server/CDP appliance that provides Continuous Data Protection.
Command Line Interface
The Command Line Interface (CLI) is a simple interface that allows client machines
to perform some of the more common functions currently performed by the
FalconStor Management Console. Administrators can use the CLI to automate
many tasks, as well as integrate CDP/NSS with their existing management tools.
The CLI is installed as part of the CDP/NSS Client installation. Once installed, a path
must be set up for Windows clients in order to be able to use the CLI. Refer to the
Command Line Interface section for details.
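As a rough illustration of the kind of automation the CLI makes possible, the Python sketch below runs one CLI command from a script and checks the result. The command name and options shown are placeholders, not confirmed syntax; take the actual commands from the Command Line Interface section for your release.

    # Run a CDP/NSS CLI command from a script and fail loudly on errors.
    # The command name and options are placeholders; substitute the syntax
    # documented in the Command Line Interface section.
    import subprocess
    import sys

    def run_cli(args):
        """Run a CLI command and return its output; raise if it exits non-zero."""
        result = subprocess.run(args, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError("CLI command failed: " + result.stderr.strip())
        return result.stdout

    if __name__ == "__main__":
        server = sys.argv[1] if len(sys.argv) > 1 else "10.0.0.2"
        # Hypothetical command: list the virtual devices on a storage server.
        print(run_cli(["iscli", "getvdevlist", "-s", server]))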
Cross-Mirror Failover
(For virtual appliances only). A non-shared storage failover option that provides high
availability without the need for shared storage. Used with virtual appliances
containing internal storage. Mirroring is facilitated over a dedicated, direct IP
connection. This option removes the requirements of shared storage between two
partner storage server nodes. For additional information on using this feature for
your virtual appliances, refer to Cross-mirror failover requirements.
DiskSafe Agent
The CDP DiskSafe Agent is host-based replication software that delivers block-level
data protection with centralized management for Microsoft Windows-based servers
as part of the CDP solution. The DiskSafe Agent delivers real-time and periodic
mirroring for both DAS and SAN storage to complement the CDP Journaling feature,
TimeMark Snapshots, and Replication.
DynaPath
E-mail Alerts
Using pre-configured scripts (called triggers), Email Alerts monitors a set of predefined, critical system components (SCSI drive errors, offline device, etc.) so that
system administrators are able to take corrective measures within the shortest
amount of time, ensuring optimum service uptime and IT efficiency. For additional
information, refer to the Email Alerts section.
FalconStor Management Console
FileSafe
FileSafe is a software application that protects your data by backing up files and folders to another location. Data is backed up to a location called a repository. The repository can be local (on your computer or on a USB device), remote (on a shared network server or NAS resource), or on a storage server where the FileSafe Server option is licensed and enabled. For more information, see the FileSafe User Guide.
GUID
The Globally Unique Identifier (GUID) is a unique 128-bit number that is used to identify a particular component, application, file, database entry, and/or user.
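For reference, the 128-bit identifiers described above have the same shape as standard UUIDs; the small sketch below simply generates one with Python's uuid module so the canonical text form is easy to see. It is illustrative only and unrelated to how CDP/NSS assigns GUIDs internally.

    # Generate a random 128-bit identifier and show its canonical text form.
    import uuid

    guid = uuid.uuid4()                    # random 128-bit identifier
    print(guid)                            # e.g. xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    print(guid.int.bit_length() <= 128)    # True: the value fits in 128 bits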
Host Zone
Host-based protection
HotZone
A CDP/NSS option that automatically re-maps data from frequently used areas of
disks to higher performance storage devices in the infrastructure, resulting in
enhanced read performance for the application accessing the storage. This feature
is not available for CDP connector appliances. For additional information, refer to
the HotZone section.
HyperTrac
The HyperTrac Backup Accelerator (HyperTrac) works in conjunction with CDP and
NSS to increase tape backup speed, eliminate backup windows, and offload
processing from application servers.
HyperTrac for VMware enhances the functionality of VMware Consolidated Backup
(VCB) by allowing TimeViews of the production virtual disk to be used as the source of
the VCB snapshot. Unlike the traditional HyperTrac model, the TimeViews are not
mounted directly to the storage server.
HyperTrac for Hyper-V enables mounting production TimeViews for backup via
Microsoft Hyper-V machines. For more information, refer to the HyperTrac User
Guide.
IPMI
iSCSI Client
iSCSI Target
Logical Resources
MIB
A Management Information Base (MIB) is an ASCII text file that describes SNMP
network elements as a list of data objects. It is a database of information, laid out in
a tree structure, with MIB objects as the leaf nodes, that you can query from an
SNMP agent. The purpose of the MIB is to translate numerical strings into human-readable text. When an SNMP device sends a Trap, it identifies each data object in
the message with a number string called an object identifier (OID). Refer to the
SNMP Integration section for additional information.
MicroScan
NPIV
N_Port ID Virtualization (NPIV) allows multiple N_Port IDs to share a single physical N_Port. This makes it possible for an initiator, a target, and a standby to occupy the same physical port. NPIV is not supported when using a non-NPIV driver.
NIC Port Bonding
OID
The Object Identifier (OID) is a unique number written as a sequence of sub-identifiers in decimal notation, for example, 1.3.6.1.4.1.2681.1.2.102. It uniquely
identifies data objects that are the subjects of an SNMP message. When your
SNMP device sends a Trap or a GetResponse, it transmits a series of OIDs, paired
with their current values.
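To show how a MIB object's OID is used in a query, the sketch below calls the standard net-snmp snmpget utility from Python against a single OID. The host address and community string are placeholders, and the OID is simply the example value quoted above; whether that particular object exists on your agent depends on the MIB it serves.

    # Query one OID from an SNMP agent using the net-snmp "snmpget" utility.
    # Host and community string are placeholders; the OID is the example above.
    import subprocess

    HOST = "10.0.0.2"        # storage server running an SNMP agent (placeholder)
    COMMUNITY = "public"     # SNMP v2c community string (placeholder)
    OID = "1.3.6.1.4.1.2681.1.2.102"

    result = subprocess.run(
        ["snmpget", "-v", "2c", "-c", COMMUNITY, HOST, OID],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)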
Prefetch
A feature that enables pre-fetching of data for clients. This allows clients to read
ahead consecutively, which can result in improved performance because the
storage server will have the data ready from the anticipatory read as soon as the
next request is received from the client. This will reduce the latency of the command
and improve the sequential read benchmarks in most cases. For additional
information, refer to the Prefetch section.
Read Cache
RecoverTrac
Replication
The process by which a SAN Resource maintains a copy of itself either locally or at
a remote site. The data is copied, distributed, and then synchronized to ensure
consistency between the redundant resources. The SAN Resource being replicated
is known as the primary disk. The changed data is transmitted from the primary to
the replica disk so that they are synchronized. Under normal operation, clients do
not have access to the replica disk.
The replication option works with both CDP and NSS solutions to replicate data over
any existing infrastructure. In addition, it can be used for site migration, remote site
consolidation for backup, and similar tasks. Using a TOTALLY Open storage-centric approach, replication is configured and managed independently of servers, so it integrates with any operating system or application for cost-effective disaster recovery (DR). For additional information, refer to the Replication section.
Replication Scan
A scan comparing the primary and replica disks for differences. If the primary and replica disks are known to have similar data (bit by bit, not file by file), then a manual scan is recommended. The initial scan is automatically triggered and all subsequent scans must be manually triggered (right-click on a device and select Replication > Scan).
Retention
TimeMark retention allows you to set TimeMark preservation patterns. The TimeMark retention schedule can be set by right-clicking on the server and selecting Properties --> TimeMark Maintenance tab.
SafeCache
SAN Resource
Provides storage for file and application servers (called SAN Clients). When a SAN
Resource is assigned to a SAN client, a virtual adapter is defined for that client. The
SAN Resource is assigned a virtual SCSI ID on the virtual adapter. This mimics the
configuration of actual SCSI storage devices and adapters, allowing the operating
system and applications to treat them like any other SCSI device. For information on
creating a SAN resource, refer to the Create SAN resources - Procedures section.
Service-Enabled Device
Service-Enabled Devices are hard drives or RAID LUNs with existing data that can
be accessed by CDP or NSS to make use of all key CDP/NSS storage services
(mirroring, snapshot, etc.). This can be done without any migration/copying, without
any modification of data, and with minimal downtime. Service-Enabled Devices are
used to migrate existing drives into the SAN.
SMI-S
Snapshot
Snapshot Agent
SNMP
Storage Cluster Interlink Port
A physical connection between two servers. Version 7.0 and later requires a Storage
Cluster Interlink Port for failover setup. For additional information regarding the
Storage Cluster Interlink, refer to the Failover section.
Thin Provisioning
For virtual resources, Thin Provisioning allows you to use your storage space more
efficiently by allocating a minimum amount of space for the virtual resource. Then,
when usage thresholds are met, additional storage is allocated as necessary. Thin
Provisioning may be applied to primary storage, replica storage (at the disaster
recovery [DR] site), and mirrored storage. For additional information, refer to the
Thin Provisioning section.
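To make the threshold-driven behavior concrete, here is a deliberately simplified model of the idea: a resource starts with a minimum allocation and grows it whenever usage crosses a threshold. The class, threshold, and sizes are invented for illustration and do not reflect the product's actual allocation policy.

    # Simplified illustration of threshold-based thin provisioning (not FalconStor
    # code): start small and allocate more space when usage nears the allocation.
    class ThinVolume:
        def __init__(self, virtual_size_gb, initial_alloc_gb,
                     threshold=0.9, increment_gb=10):
            self.virtual_size = virtual_size_gb   # size presented to the client
            self.allocated = initial_alloc_gb     # physical space actually reserved
            self.used = 0.0
            self.threshold = threshold
            self.increment = increment_gb

        def write(self, gb):
            self.used += gb
            # Grow the allocation once usage crosses the threshold, up to the
            # full virtual size.
            while (self.used > self.allocated * self.threshold
                   and self.allocated < self.virtual_size):
                self.allocated = min(self.allocated + self.increment,
                                     self.virtual_size)

    vol = ThinVolume(virtual_size_gb=100, initial_alloc_gb=10)
    vol.write(9.5)
    print(vol.allocated)   # 20: allocation grew once usage crossed 90% of 10 GB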
TimeMark
TimeMark technology works with CDP and NSS to enable you to create scheduled
and on-demand point-in-time delta snapshot copies of data volumes. TimeMark
includes the FalconStor TimeView feature, which creates an accessible, mountable
image of any snapshot. This provides a tool to freely create multiple and
instantaneous virtual copies of an active data set. The TimeView images can be
assigned to multiple application servers with read/write access for concurrent,
independent processing, while the original data set is actively accessed and
updated by the primary application server. For additional information, refer to the
TimeMarks and CDP section.
TimeView
An extension of the TimeMark option that allows you to mount a virtual drive as of a
specific point in time. For additional information, refer to the Recover data using the
TimeView feature section.
Trap
Trigger
VAAI
WWN Zoning
ZeroImpact Backup Enabler
Web Setup
Once you have physically connected the appliance, powered it on, and performed the following steps via the Web Setup installation and server setup, you are ready to begin using your CDP or NSS storage server.
This step may have already been completed for you. Refer to the Software Quick
Start Guide for details regarding each of the following steps:
1. Configure the Appliance
The first time you connect, you will be asked to:
Select a language.
(If the wrong language is selected, click your browser's back button or go to //10.0.0.2/language.php to return to the language selection page.)
Read and agree to the FalconStor End User License Agreement.
(Storage appliances only) Configure your RAID system.
Enter the network configuration for your appliance.
2. Manage License Keys
FalconStor Management Console
The FalconStor Management Console is the administration tool for the storage
network. It is a Java application that can be used on a variety of platforms and
allows administrators to create, configure, manage, and monitor the storage
resources and services on the storage server network as well as run/view reports,
enter licensing information, and add/delete administrators.
The FalconStor Management Console software can be installed on each machine
connected to a storage server. The console is also available via download from your
storage server appliance. If you cannot install the FalconStor Management Console
on every client, you can launch a web-based version of the console from your
browser and enter the IP address of the CDP/NSS server.
Notes:
- If your screen resolution is 640 x 480, the splash screen may be cut off while the console loads.
- The console might not launch on certain systems with display settings configured to use 16 colors.
- The console needs to be run from a directory with write access. Otherwise, the host name information and message log file retrieved from the storage server will not be able to be saved to the local directory. As a result, the console will display event messages as numbers and console options will not be able to be saved.
- You must be signed on as the local administrator of the machine on which you are installing the Windows console package.
To launch a web-based version of the console, open a browser from any machine
and enter the IP address of the CDP/NSS server (for example: http://10.0.0.2) and
the console will launch. If you have Web Setup, select the Go button next to Install
Management Software and Guides and click the Launch Console link.
In the future, to skip going through Web Setup, open a browser from any machine and enter the IP address of the CDP/NSS server followed by :81 (for example: http://10.0.0.2:81/) to launch the console. The computer running the browser must have Java Runtime Environment (JRE) version 1.6.
You will need to restart the server if you change the hostname.
Note: Do not change the hostname if you are using block devices. If you do, all
block devices claimed by CDP/NSS will be marked offline and seen as foreign
devices.
Search for objects in the tree
The console has a search feature that helps you find any physical device, virtual
device, or client on any storage server. To search:
1. Highlight a storage server in the tree.
2. Select Edit menu --> Find.
3. Select the type of object to search for and the search criteria.
Once you select an object type, a list of existing objects appears. If you highlight
one, you will be taken directly to that object in the tree.
Alternatively, you can type the full name, ID, ACSL (adapter, channel, SCSI,
LUN), or GUID (Globally Unique Identifier). Once you click the Search button,
you will be taken directly to that object in the tree.
Storage server status and configuration
The console displays the configuration and status of the storage server.
Configuration information includes the version of the CDP or NSS software and
base operating system, the type and number of processors, amount of physical and
swappable memory, supported protocols, and network adapter information.
The Event Log tab displays system events and errors.
Alerts
The console displays all critical alerts upon login to the server. Select the Display only the new alerts next time option if you only want to see new critical alerts the next time you log in. Selecting this option indicates acknowledgement of the alerts.
If you are not using the Failover option, you will need to contact technical support
when preparing your new storage server.
Auto save configuration
You can set your system to automatically replicate your system configuration to an
FTP server on a regular basis. Auto Save takes a point-in-time snapshot of the
storage server configuration prior to replication. To use Auto Save:
1. Right-click on the server and select Properties.
2. Select the Auto Save Config tab and enter information for automatically saving your storage server system configuration.
For detailed information about this dialog, refer to the Auto Save Config section.
Licensing
To license CDP/NSS and its options, make sure you have obtained your CDP/NSS
keycode(s) from FalconStor or its representatives. Once you have the license
keycodes, follow the steps below:
1. In the console, right-click on the server and select License.
The License Summary window is informational only and displays a list of the
options supported for this server. You can enter keycodes for your purchased
options on the Keycodes Detail window.
2. Press the Add button on the Keycodes Detail window to enter each keycode.
Note: If multiple administrators are logged into a storage server at the same time, license changes made from one console will take effect in the other consoles only when the administrator disconnects and then reconnects to the server.
3. If your licenses have not been registered yet, click the Register button on the
Keycodes Detail window.
You can register online if you have an Internet connection.
To register offline, you must save the registration information to a file on your
hard drive and then email it to FalconStor's registration server. When you
receive a reply, save the attachment to your hard drive and send it to the
registration server to complete the registration.
Note: Registration information file names can only use alphanumeric
characters and must have a .dat extension. You cannot use a single digit as
the name. For example, company1.dat is valid (1.dat is not valid).
The tabs you see will depend upon your storage server configuration.
2. If you have multiple NICs (network interface cards) in your server, enter the IP
addresses using the Server IP Addresses tab.
If the first IP address stops responding, the CDP/NSS clients will attempt to
communicate with the server using the other IP addresses you have entered in
the order they are listed.
3. On the Activity Database Maintenance tab, indicate how often the SAN data
should be purged.
The Activity Log is a database that tracks all system activity, including all data
read, data written, number of read commands, write commands, number of
errors, etc. This information is used to generate SAN information for the CDP/
NSS reports.
5. On the iSCSI tab, set the iSCSI portal that your system should use as default
when creating an iSCSI target.
If you have multiple NICs, when you create an iSCSI target, this IP address will
be selected by default for you.
6. If necessary, change settings for mirror resynchronization and replication on the
Performance tab.
The settings on this tab affect system performance. The defaults should be
optimal for most configurations. You should only need to change the settings for
special situations, such as if your mirror is remotely located.
Mirror Synchronization Throttle - Set the default value for the individual mirror
device to use (since throttle is disabled by default for individual mirror devices).
Each mirror device will be able to synchronize up to the value set here (in KB per
second). If you select 0 (zero), all mirror devices will use their own throttle value
(if set), otherwise there is no limit for the device.
Select the Start initial synchronization when mirror is added option to have
synchronization begin immediately for newly created mirrors. The synchronize
out-of-sync mirror policy does not apply in this case. If the Start initial
synchronization when mirror is added option is not selected, the mirror begins
synchronization based on the policy configured.
Synchronize Out-of-Sync Mirrors - Determine how often the system should
check and attempt to resynchronize active out-of-sync mirrors, how often it
should retry synchronization if it fails to complete, and whether or not to include
replica mirrors. These settings are only used for active mirrors. If a mirror is
suspended because the lag time exceeds the acceptable limit, that
resynchronization policy applies instead. This mirror policy applies to all
individual mirrors and contains the following settings:
Check and synchronize out-of-sync mirrors every [n][unit] - Check the
mirror status at this interval and trigger a mirror synchronization when
the mirror is not synchronized.
Up to [n] mirrors at each interval - Indicate the number of mirrors that
can be synchronized concurrently. This rule does not apply to user-initiated
operations, such as synchronize, resume, and rebuild. This rule
also does not apply when the Start initial synchronization when mirror is
added option is enabled.
Retry synchronization for each resource up to [n] times when
synchronization failed - Indicate the number of times that an out-of-sync
mirror will retry synchronization at the interval set by the Check
and synchronize out-of-sync mirrors every rule. Once the mirror fails to
synchronize the specified number of times, a manual synchronization
is required to initiate mirror synchronization again.
Include replica mirrors in the automatic synchronization process - Enable
this option to include replica mirrors in the automatic
synchronization process. This option is disabled by default, which
means the mirror policy will not apply to any replica device with a mirror on
the server. In this case, a manual synchronization is required to re-sync
the replica mirror. When this option is enabled, the mirror policies
will apply to the replica mirror.
Replication Throttle - Click the Configure Throttle button to launch the Configure
Target Throttle screen, allowing you to set, modify, or delete replication throttle
settings. Refer to Set the replication throttle for additional information.
Enable MicroScan - MicroScan analyzes each replication block on-the-fly during
replication and transmits only the changed sections on the block. This is
beneficial if the network transport speed is slow and the client makes small
random updates to the disk. The global MicroScan option sets a default in all
replication setup wizards. MicroScan can still be enabled/disabled for each
individual replication via the wizard regardless of the global MicroScan setting.
7. Select the Auto Save Config tab and enter the information for automatically saving
your storage server system configuration.
You can set your system to automatically replicate your system configuration to
an FTP server on a regular basis. Auto Save takes a point-in-time snapshot of
the storage server configuration prior to replication.
The target server you specify in the Ftp Server Name field must have an FTP
server installed and enabled.
The Target Directory is the directory on the FTP server where the files will be
stored. The directory name you enter here (such as ipstorconfig) is a directory
on the FTP server (for example ftp\ipstorconfig). You should not enter an
absolute path like c:\ipstorconfig.
The Username is the user that the system will log in as. You must create this
user on the FTP site. This user must have read/write access to the directory
named here.
In the Interval field, determine how often to replicate the configuration.
Depending upon how frequently you make configuration changes to CDP/NSS,
set the interval accordingly. You can always save manually in between if needed.
To do this, highlight your storage server in the tree, select File menu --> Save
Configuration.
In the Number of Copies field, enter the maximum copies to keep. The oldest
copy will be deleted as each new copy is added.
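Before relying on Auto Save, it is a good idea to verify that the FTP account can write to the target directory. A minimal check from any machine with curl, using a hypothetical FTP host, user, and directory:
curl -T testfile.txt ftp://192.0.2.10/ipstorconfig/ --user autosave:password
If the upload succeeds, the same credentials and directory can be entered on the Auto Save Config tab.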
8. On the Location tab, you can enter a specific physical location of the machine.
You can also select an image (smaller than 500 KB) to identify the server
location. Once the location information is saved, the new tab displays in the
FalconStor Management Console for that server.
9. On the TimeMark Maintenance tab, you can set a global reclamation policy.
Manage accounts
Only the root user can manage users and groups or reset passwords. You will need
to add an account for each person who will have administrative rights in CDP/NSS.
You will also need to add a user account for clients that will be accessing storage
resources from a host-based application (such as FalconStor DiskSafe or FileSafe).
To make account management easier, users can be grouped together and handled
simultaneously.
To manage users and groups:
1. Right-click on the server and select Accounts.
All existing users and administrators are listed on the Users tab and all existing
groups are listed on the Groups tab.
To add a user:
1. Click the Add button.
A user quota limits how much space is allocated to this user for auto-expansion.
Resources managed by this user can only auto-expand if the user's quota has
not been reached. The quota also limits how much space a host-based
application, such as DiskSafe, can allocate.
6. Click OK to save the information.
Add a group
To add a group:
1. Select the Groups tab.
2. Click the Add button.
On the Groups tab, you can highlight a group and click the Membership button to
add multiple users to that group.
Set a quota
You can set a quota for a user on the Users tab and you can set a quota for a group
on the Groups tab.
The quota limits how much space is allocated to each user. If a user is in a group,
the group quota will override the user quota.
Reset a password
To change a password, select Reset Password. You will need to enter a new
password and then re-type the password to confirm.
You cannot change the root user's password from this dialog. Use the Change
Password option below.
2. Enter your old password, the new one, and then re-enter it to confirm.
From this screen, you can select an existing user from the list to delete the user
or reset the CHAP secret.
3. Click the Add button to add a new iSCSI user.
To apply a patch:
1. Download the patch onto the computer where the console is installed.
2. Highlight a storage server in the tree.
System maintenance
The FalconStor Management Console gives you a convenient way to perform
system maintenance for your storage server.
Note: The system maintenance options are hardware-dependent. Refer to your
hardware documentation for specific information.
Network configuration
If you need to change storage server IP addresses, you must make these changes
using Network Configuration. Using YaST or other third-party utilities will not update
the information correctly.
1. Right-click on a server and select System Maintenance --> Network
Configuration.
If you select Static, you must add addresses and net masks.
MTU - Set the maximum transfer unit of each IP packet. If your card supports it,
set this value to 9000 for jumbo frames.
Note: If the MTU is changed from 9000 to 1500, a performance drop will occur. If
you then change the MTU back to 9000, the performance will not increase until
the server is restarted.
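To confirm the MTU currently in effect on an interface, you can check it from the server's command line; this assumes a standard Linux ip utility and an example interface name:
ip link show eth0
The output includes the current mtu value (for example, mtu 9000 when jumbo frames are enabled).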
Set hostname
Right-click on a server and select System Maintenance --> Set Hostname to change
your hostname. You must restart the server if you change the hostname.
Note: Do not change the hostname if you are using block devices. If you do, all
block devices claimed by CDP/NSS will be marked offline and seen as foreign
devices.
Restart IPStor
Right-click on a server and select System Maintenance --> Restart IPStor to restart
the Server processes.
Restart network
Right-click on a server and select System Maintenance --> Restart Network to
restart your local network configuration.
Reboot
Right-click on a server and select System Maintenance --> Reboot to reboot your
server.
Halt
Right-click on a server and select System Maintenance --> Halt to turn off the server
without restarting it.
IPMI
Filter - You can filter out components you do not want to monitor. This may be useful
for hardware you do not care about or for spurious errors, such as when you do not
have the hardware that is being monitored. You must enter the Name of the
component being monitored exactly as it appears on the hardware monitor above.
Physical Resources
Physical resources are the actual devices attached to this storage server. SCSI
adapters supported include SAS, FC, FCoE, and iSCSI. The SCSI adapters tab
displays the adapters attached to this server and the SCSI Devices tab displays the
SCSI devices attached to this server. These devices can include hard disks, tape
libraries, and RAID cabinets. For each device, the tab displays the SCSI address
(comprised of adapter number, channel number, SCSI ID, LUN) of the device, along
with the disk size (used and available). If you are using FalconStor's Multipathing,
you will see entries for the alternate paths as well.
The Storage Pools tab displays a list of storage pools that have been defined,
including the total size and number of devices in each storage pool.
The Persistent Binding tab displays the binding of each storage port to its unique
SCSI ID.
When you highlight a physical device, the Category field in the right-hand pane
describes how the device is being used. Possible values are:
Reserved for virtual device - A hard disk that has not yet been assigned to a
SAN resource or Snapshot area.
Used by virtual device(s) - A hard disk that is being used by one or more
SAN resources or Snapshot areas.
Reserved for Service-Enabled Device - A hard disk with existing data that
has not yet been assigned to a SAN resource.
Used by Service-Enabled Device - A hard disk with existing data that has
been assigned to a SAN resource.
Unassigned - A physical resource that has not been reserved yet.
Not available for IPStor - A miscellaneous SCSI device that is not used by
the storage server (such as a scanner or CD-ROM).
System - A hard disk where system partitions exist and are mounted (i.e.
swap file, file system installed, etc.).
Icon descriptions:
The D icon indicates that the port is both an initiator AND a target.
The T icon indicates that this is a target port.
The I icon indicates that this is an initiator port.
The V icon indicates that this disk has been virtualized or is reserved for a virtual disk.
The S icon indicates that this is a Service-Enabled Device or is reserved for a Service-Enabled Device.
The a icon indicates that this device is used in the logical resource that is currently highlighted in the tree.
The D icon also indicates an adapter using NPIV when it is enabled in dual-mode.
The storage server detects new devices when you connect to it. When they
are detected you will see a dialog box notifying you of the new devices. At
this point you can highlight a device and press the Prepare Disk button to
prepare it.
The Physical Devices Preparation Wizard will help you to virtualize, service-enable, unassign, or import physical devices.
At any time, you can prepare a single unassigned device by doing the
following: Highlight the device, right-click, select Properties and select the
device category. (You can find all unassigned devices under the Physical
Resources/Adapters node of the tree view.)
For multiple unassigned devices, highlight Physical Resources, right-click
and select Prepare Disks. This launches a wizard that allows you to
virtualize, unassign, or import multiple devices at the same time.
Rescan adapters
1. To rescan all adapters, right-click on Physical Resources and select Rescan.
If you only want to scan a specific adapter, right-click on that adapter and select
Rescan.
If you want to discover new devices without scanning existing devices, click the
Discover New Devices radio button and then check the Discover new devices
only without scanning existing devices check box. You can then specify
additional scan details.
2. Determine what you want to rescan.
If you are discovering new devices, set the range of adapters, SCSI IDs, and
LUNs that you want to scan.
Use Report LUNs - The system sends a SCSI request to LUN 0 and asks for a
list of LUNs. Note that this SCSI command is not supported by all devices. (If
VSA is enabled and the actual LUN is beyond 256, you will need to use this
option to discover them.)
LUN Range - It is only necessary to use the LUN range if the Use Report LUNs
option does not work for your adapter.
Stop scan when a LUN without a device is encountered - This option (used with
LUN Range) will scan LUNs sequentially and then stop after the last LUN is
found. Use this option only if all of your LUNs are sequential.
Auto detect FC HBA SCSI ID - Select this option to enable auto detection of
SCSI IDs with persistent binding. This will scan QLogic HBAs to discover
devices beyond the scan range specified above.
Read partition from inactive path when all the paths are inactive - Select this
option to force a status check of the partition from a path that is not in use.
Import a disk
You can import a foreign disk into a CDP or NSS appliance. A foreign disk is a
virtualized physical device containing FalconStor logical resources previously set up
on a different storage server. You might need to do this if a storage server is
damaged and you want to import the servers disks to another storage server.
When you right-click on a disk that CDP/NSS recognizes as foreign and select the
Import option, the disk's partition table is scanned and an attempt is made to
reconstruct the virtual drive out of all of the segments.
If the virtual drive was constructed from multiple disks, you can highlight Physical
Resources, right-click and select Prepare Disks. This launches a wizard that allows
you to import multiple disks at the same time.
As each drive is imported, the drive is marked offline because the system has not yet
found all of the segments. Once all of the disks that were part of the virtual drive have
been imported, the virtual drive is re-constructed and is marked online.
Importing a disk preserves the data that was on the disk but does not preserve the
client assignments. Therefore, after importing, you must reassign clients to
the resource.
Notes:
The GUID (Global Unique Identifier) is the permanent identifier for each
virtual device. When you import a disk, the virtual ID, such as SANDisk-00002,
may be different from the original server. Therefore, you should
use the GUID to identify the disk.
If you are importing a disk that can be seen by other storage servers, you
should perform a rescan before importing. Otherwise, you may have to
rescan after performing the import.
Sequential throughput
Random throughput
Sequential I/O rate
Random I/O rate
Latency
SCSI aliasing
SCSI aliasing works with the FalconStor Multipathing option to eliminate a potential
point of failure in your storage network by providing multiple paths to your storage
devices using multiple Fibre Channel switches and/or multiple adapters and/or
storage devices with multiple controllers. In a multiple path configuration, CDP/NSS
automatically detects all paths to the storage devices. If one path fails, CDP/NSS
automatically switches to another.
Refer to the Multipathing chapter for more information.
If all paths are online, the following message will be displayed instead: There
are no physical device paths that can be repaired.
2. Select the path to the device that needs to be repaired.
If the path is still missing after the repair or the entire physical device is gone
from the console, the path could not be repaired. You should investigate the
cause, correct the problem, and then rescan adapters with the Discover New
Devices option.
Logical Resources
Logical resources are all of the resources defined on the storage server, including
SAN resources and groups.
SAN Resources
SAN logical resources consist of sets of storage blocks from one or more physical
hard disk drives. This allows the creation of logical resources that contain a portion
of a larger physical disk device or an aggregation of multiple physical disk devices.
Clients do not gain access to physical resources; they only have access to logical
resources. This means that an administrator must configure each physical resource
to one or more logical resources so that they can be assigned to the clients.
When you highlight a SAN resource, you will see a small icon next to each device
that is being used by the resource. The GUID (Globally Unique Identifier) is the
permanent identifier for this virtual device; the virtual ID, SANDisk-00002, is not.
You should make note of the GUID, because, in the event of a disaster, this identifier
will be important if you need to rebuild your system and import this disk.
Groups
Groups are multiple drives (virtual drives and Service-Enabled drives) that will be
assembled together for SafeCache or snapshot synchronization purposes. For
example, when one drive in the group is to be replicated or backed up, the entire
group will be snapped together to maintain a consistent image.
Write caching
You can leverage a third party disk subsystem's built-in caching mechanism to
improve I/O performance. Write caching allows the third party disk subsystem to
utilize its internal cache to accelerate I/O.
To enable write caching for a resource, right-click on it and select Write Cache --> Enable.
Replication
The Incoming and Outgoing objects under the Replication object display information
about each server that replicates to this server or receives replicated data from this
server. If the server's icon is white, the partner server is "connected" or "logged in". If
the icon is yellow, the partner server is "not connected" or "not logged in".
When you highlight the Replication object, the right-hand pane displays a summary
of replication to/from each server.
For each replica disk, you can promote the replica or reverse the replication. Refer
to the Replication chapter for more information about using replication.
SAN Clients
Storage Area Network (SAN) Clients are the file and application servers that utilize
the storage resources via the storage server. Since SAN resources appear as locally
attached SCSI devices, the applications, such as file services, databases, web and
email servers, do not need to be modified to utilize the storage.
On the other hand, since the storage is not locally attached, there may be some
configuration needed to locate and mount the required storage. The SAN Clients
access their storage resources via iSCSI initiators (for iSCSI) or HBAs (for Fibre
Channel or iSCSI). The storage resources appear as locally attached devices to the
SAN Clients' operating systems (Windows, Linux, Solaris, etc.) even though the
devices are actually located at the storage server site.
When you highlight a specific SAN client, the right-hand pane displays the Client ID,
type, and authentication status, as well as information about the client machine.
The Resources tab displays a list of SAN resources that are allocated to this client.
The adapter, SCSI ID, and LUN are relative to this CDP/NSS SAN client only; other
clients that may have access to the SAN resource may have different adapter, SCSI
ID, and LUN information.
If the client's machine name is not resolvable, you can enter an IP address and
then click Find to discover the machine.
3. Determine if you want to limit the amount of space that can be automatically
assigned to this client.
The quota represents the total allowable space that can be allocated for all of the
resources associated with this client. It is only used to restrict certain types of
resources (such as Snapshot Resource and CDP Resource) that expand
automatically. This prevents them from allocating storage space indefinitely.
Instead, they can only expand if the total size of all the resources associated with
the client does not exceed the pre-defined quota for that client.
4. Indicate if you want to enable persistent reservation.
This option allows clustered SAN Clients to take advantage of Persistent
Reserve/Release to control disk access between various cluster nodes.
Note: If you are using AIX SAN Client cluster nodes, this option should be
cleared.
5. Select the client's protocol(s).
If you select iSCSI, you must indicate if this is a mobile client. You will then be
asked to select the initiator that this client uses and add/select users who can
authenticate for this client. Refer to Add iSCSI clients for more information.
If you select Fibre Channel, you will have to select WWPN initiators. You will
then be asked to select Volume Set Addressing. Refer to Add Fibre Channel
clients for more information.
6. Confirm all information and click Finish to add this client.
Console options
To set options for the console, select Tools --> Console Options. Then make any
necessary changes.
Menu Label - The application title that will be displayed in the Custom menu.
Command - The file (usually an .exe) that launches this application.
Command Argument - An argument that will be passed to the application. If
you are launching an Internet browser, this could be a URL.
Menu Icon - The graphics file that contains the icon for this application. This
will be displayed in the Custom menu.
Storage Pools
A storage pool is a group of one or more physical devices. Creating a storage pool
enables you to provide all of the space needed by your clients in a very efficient
manner. You can create and manage storage pools in a variety of ways.
For example, you can classify your storage by tier (low-cost, high-performance,
high-redundancy, etc.) and assign it based on these classifications. Using this
example, you may want to have your business critical applications use storage from
the high-redundancy or high-performance pools while having your less critical
applications use storage from other pools.
Storage pools work with all automatic allocation mechanisms in CDP/NSS. This
capacity-on-demand functionality automatically allocates storage space from a
specific pool when storage is needed for a specific use.
As your storage needs grow, you can easily extend your storage capacity by adding
more devices to a pool and then creating more logical resources or allocating more
space to your existing resources. The additional space is immediately and
seamlessly available.
User type                 Can create/delete/modify storage pools   Can add/remove storage from pools
Root                      Yes                                      Yes
IPStor Administrator      Yes                                      Yes
IPStor User               No                                       No
Refer to the Account management section for additional information regarding user
access rights.
On the General tab you can change the name of the storage pool and add/delete
devices assigned to this storage pool.
2. Select the Type tab to designate how each storage pool should be allocated.
The type affects how each storage pool should be allocated. When you are in a
CDP/NSS creation wizard, the applicable storage pool(s) will be presented for
selection. However, you can still select from another storage pool type if needed.
All Types can be used for any type of resource.
Storage is the preferred storage pool to create SAN resources and their
corresponding replicas.
Snapshot is the preferred storage pool for snapshot resources.
Cache is the preferred storage pool for SafeCache resources.
HotZone is the preferred storage pool for HotZone resources.
Journal is the preferred storage pool for CDP resources and CDP resource
mirrors.
CDR is the preferred storage pool for continuous data replicas.
VirtualHeader is the preferred storage pool for the virtual header that is
created for a Service-Enabled Device SAN Resource.
Configuration is the preferred storage pool to create the configuration
repository for failover.
TimeView is the preferred storage pool for TimeView images.
ThinProvisioning is the preferred storage pool for thin disks.
Allocation Block Size allows you to specify the minimum size that will be
allocated when a virtual resource is created or expanded.
Using this feature is highly recommended for thin disks (ThinProvisioning
selected as the type for this storage pool) for several reasons.
The maximum number of segments that is supported per virtual device is 1024.
When Allocation Block Size is not enabled, thin disks are expanded in
increments of 10 GB. With frequent expansion, it is easy to reach the maximum
number of segments. Using Allocation Block Size with the largest block size
feasible for your storage can prevent devices from reaching the maximum.
In addition, larger block sizes mean more consecutive space within each block,
limiting disk fragmentation and improving performance for thin disks.
The default for the Allocation Block Size is 16 GB and the possible choices are
1, 2, 4, 8, 16, 32, 64, 128, and 256 GB.
If you enable Allocation Block Size for resources other than thin disks, Service-Enabled
Devices, or any copy of a resource (replica, mirror, snapshot copy,
etc.), you should be aware that the allocation will round up to the next multiple
when you create a resource. For example, if you have the Allocation Block Size
set to 16 GB and you attempt to create a 20 GB virtual device, the system will
create a 32 GB device.
If you do not enable Allocation Block Size, you can specify any size when
creating/expanding devices. You may want to do this for disks that are not thin
disks since they do not expand as often and will rarely reach the maximum
number of segments.
When specifying an Allocation Block Size, your physical disk should be evenly
divisible by the number you specify so that all space can be used. For example,
if you have a 500 GB disk and you select 128 GB as the block size, the system
will only be able to allocate three blocks of 128 GB each (128*3=384) from that
disk because the remaining 116 GB is not enough to allocate. When you look at
the Available Disk Space statistics in the console, this remaining 116 GB will be
excluded.
3. Select the Tag tab to set a tag string to limit client side applications to specific
storage pools.
When an application requests storage with a specific tag string, only the storage
pools with the same tag can be used. You can have your own internal application
that has been programmed to use a tag.
4. Select the Security tab to designate which users and administrators can manage
this storage pool.
Each storage pool can be assigned to one or more User or Group. The assigned
users can create virtual devices and allocate space from the storage pools
assigned to them but they cannot create, delete, or modify storage pools.
Logical Resources
Once you have physically attached your SCSI or Fibre Channel devices to
your storage server, you are ready to create Logical Resources to be used by your
CDP/NSS clients. This configuration can be done entirely from the FalconStor
Management Console.
Logical Resources are logically mapped devices on the storage server. They are
comprised of physical storage devices, known as Physical Resources. Physical
resources are the actual SCSI and/or Fibre Channel devices (such as hard disks,
tape drives, and RAID cabinets) attached to the server.
Clients do not have access to physical resources; they have access only to Logical
Resources. This means that physical resources must be defined as Logical
Resources first, and then assigned to the clients so they can access them.
SAN resources provide storage for file and application servers (called SAN Clients).
When a SAN resource is assigned to a SAN client, a virtual adapter is defined for
that client. The SAN resource is assigned a virtual SCSI ID on the virtual adapter.
This mimics the configuration of actual SCSI storage devices and adapters, allowing
the operating system and applications to treat them like any other SCSI device.
Understanding how to create and manage Logical Resources is critical to a
successful CDP/NSS storage network. Please read this section carefully before
creating and assigning Logical Resources.
Virtual devices
IPStor technology gives CDP and NSS the ability to aggregate multiple physical
storage devices (such as JBODs and RAIDs) of various interface protocols (such as
SCSI or Fibre Channel) into logical storage pools. From these storage pools, virtual
devices can be created and provisioned to application servers and end users. This
is called storage virtualization.
Virtual devices are defined as sets of storage blocks from one or more physical hard
disk drives. This allows the creation of virtual devices that can be a portion of a
larger physical disk drive, or an aggregation of multiple physical disk drives.
Virtual devices offer the added capability of disk expansion. Additional storage
blocks can be appended to the end of existing virtual devices without erasing the
data on the disk.
Virtual devices can only be assembled from hard disk storage; they cannot be
created from CD-ROM, tape, library, or removable media devices.
When a virtual device is allocated to an application server, the server thinks that an
actual SCSI storage device has been physically plugged into it.
Virtual devices are assigned to virtual adapter 0 (zero) when mapped to a client. If
there are more than 15 virtual devices, a new adapter will be defined.
Virtualization examples
The following diagrams show how physical disks can be mapped into virtual
devices.
[Diagram: Two physical devices (Adapter 1, SCSI ID 3, sectors 0-9999 and Adapter 1, SCSI ID 4, sectors 0-9999) are combined into a single virtual device SAN resource (Adapter 0, SCSI ID 1, sectors 0-19999). The SCSI ID can be any value and the adapter numbers do not need to match; sectors from multiple physical disks are mapped into one virtual device.]
The diagram above shows a virtual device being created out of two physical disks.
This allows you to create very large virtual devices for application servers with large
storage requirements. If the storage device needs to grow, additional physical disks
may be added to increase the size of a virtual device. Note that this will require that
the client application server resize the partition and file system on the virtual device.
[Diagram: A single physical device (Adapter 2, SCSI ID 3) is split into two virtual disk SAN resources (Adapter 1, SCSI ID 5, sectors 0-4999 and Adapter 1, SCSI ID 6, sectors 0-4999), with the physical device's sectors 0-4999 mapped to one virtual device and sectors 5000-9999 mapped to the other. The SCSI ID can be any value and the adapter numbers do not need to match.]
The example above shows a single physical disk split into two virtual devices. This is
useful when a single large device exists, such as a RAID, which could be shared
among multiple client application servers.
Virtual devices can be created using various combining and splitting methods,
although you will probably not create them in this manner in the beginning. You may
end up with devices like this after growing virtual devices over time.
Thin Provisioning
Because each client sees the full size of its provisioned disk, Thin Provisioning is the
ideal solution for users of legacy databases and operating systems that cannot
handle dynamic disk expansion.
The mirror of a disk with Thin Provisioning enabled is another disk with Thin
Provisioning enabled. When a thin disk is expanded, the mirror also automatically
expands. If the mirrored disk is offline, storage cannot be added to the thin disk
manually.
If the mirror is offline when the threshold is reached and automatic storage addition
is about to occur, the offline mirror is removed. Storage is automatically added to the
Thin Provisioned disk, but the mirror must be recreated manually.
A replica on a thin disk can use space on other virtualized devices as long as space
is available. If there is no space available for expansion, the thin disk on the primary
will be prevented from expanding and a message will display on the console.
Note: When using Thin Provisioning, it is recommended that you create a disk
with an initial size that is at least 15% of the maximum size of the disk. Some write
operations, such as creating a file system in Linux, may scatter their writes across
the span of a disk.
The usage percentage is displayed in green as long as the available sectors are
greater than 120% of the threshold (in sectors).
It is displayed in blue when the available sectors are less than 120% of the threshold
(in sectors) but still greater than the threshold (in sectors). The usage percentage is
displayed in red when the available sectors are less than the threshold (in sectors).
Note: Do not perform disk defragmentation on a Thin Provisioned disk. Doing so
may cause data from the used sectors of the disk to be moved into unused sectors
and result in an unexpected thin-provisioned disk space increase. In fact, any
disk or filesystem utility that might scan or access any unused sector could also
cause a similar unexpected space usage increase.
Service-Enabled Devices
Service-Enabled Devices are hard drives with existing data that can be accessed by
CDP/NSS to make use of all key CDP/NSS storage services (mirroring, snapshot,
etc.), without any migration/copying, without any modification of data, and with
minimal downtime. Service-Enabled Devices are used to migrate existing drives into
the SAN.
Because Service-Enabled Devices are preserved intact, and existing data is not
moved, the devices are not virtualized and cannot be expanded. Service-Enabled
Devices are all maintained in a one-to-one mapping relationship (one physical disk
equals one logical device). Unlike virtual devices, they cannot be combined or split
into multiple logical devices.
CDP and NSS appliances detect new devices as you connect to them (or
when you execute the Rescan command). When new devices are detected,
a dialog box displays notifying you of the discovered devices. At this point
you can highlight a device and press the Prepare Disk button to prepare it.
At any time, you can prepare a single unassigned device by following the
steps below:
Highlight the device and right-click
Select Properties
Select the device category. (You can find all unassigned devices under
the Physical Resources/Adapters node of the tree view.)
For multiple unassigned devices, highlight Physical Resources, right-click
and select Prepare Disks. This launches a wizard that allows you to
virtualize, unassign, or import multiple devices at the same time.
3. Select the storage pool or physical device(s) from which to create this SAN
resource.
You can create a SAN resource from any single storage pool. Once the resource
is created from a storage pool, additional space (automatic or manual
expansion) can only be allocated from the same storage pool.
You can select List All to see all storage pools, if needed.
Depending upon the resource type, you may have the option to select to Use
Thin Provisioning for more efficient space allocation.
4. Select the Use Thin Provisioning checkbox to allocate a minimum amount of
space for a virtual resource. When usage thresholds are met, additional storage
is allocated as necessary.
Custom lets you select which physical device(s) to use and lets you
designate how much space to allocate from each.
Express lets you designate how much space to allocate and then
automatically creates a virtual device using an available device.
Batch lets you create multiple SAN resources at one time. These SAN
resources will all be the same size.
If you select Batch, you will see a window similar to the following:
7. (Express and Custom only) Enter a name for the new SAN resource.
The Express screen is shown above and the Custom screen is shown below:
8. Confirm that all information is correct and then click Finish to create the virtual
device SAN resource.
9. (Express and Custom only) Indicate if you would like to assign the new SAN
resource to a client.
If you select Yes, the Assign a SAN Resource Wizard will be launched.
Note: After you assign the SAN resource to a client, you may need to restart the
client. You will also need to write a signature, create a partition, and format the
drive so that the client can use it.
4. Select the device that you want to make into a Service-Enabled Device.
A list of the storage pools and physical resources that have been reserved for
this purpose is displayed.
5. (Service-Enabled Devices only) Select the physical device(s) for the Service-Enabled
Device's virtual header.
Even though Service-Enabled Devices are used as is, a virtual header is created
on another physical device to allow CDP/NSS storage services to be supported.
6. Enter a name for the new SAN resource.
For Fibre Channel clients, you will see the following screen:
You must have already created a target for this client. Refer to the Fibre Channel
section for more information.
You can add any application server, even if it is currently offline.
Note: You must enter the client's name, not an IP address.
4. If this is a Fibre Channel client and you are using Multipath software (such as
FalconStor DynaPath), enter the World Wide Port Name (WWPN) mapping.
This WWPN mapping is similar to Fibre Channel zoning and allows you to
provide multiple paths to the storage server to eliminate a potential point of network
failure. You can select how the client will see the virtual device in the following
ways:
One to One - Limits visibility to a single pair of WWPNs. You will need to select
the client's Fibre Channel initiator WWPN and the server's Fibre Channel target
WWPN.
One to All - You will need to select the client's Fibre Channel initiator WWPN.
All to One - You will need to select the server's Fibre Channel target WWPN.
All to All - Creates multiple data paths. If ports are ever added to the client or
server, they will automatically be included in the WWPN mapping.
5. If this is a Fibre Channel client and you selected a One to n option, select which
port to use as an initiator for this client.
6. If this is a Fibre Channel client and you selected an n to One option, select which
port to use as a target for this client.
7. Confirm all of the information and then click Finish to assign the SAN resource to
the client(s).
The SAN resource will now appear under the SAN Client in the configuration
tree view:
Windows clients
If an assigned SAN resource is larger than 3 GB, it will not format properly as a FAT
partition.
Solaris clients
x86 vs SPARC
If you create a virtual device and format it for Solaris x86, the device will fail to mount
if you try to use that same virtual device under Solaris SPARC.
Label devices
When you create a new virtual device, it needs to be labeled (the drive metrics need
to be specified) and a file system has to be put on the virtual device in order to
mount it. Refer to the steps below.
Note: If the drive has already been labeled and you restart the client, you do not
need to run format and label it again.
Labeling a virtual disk for Solaris:
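A minimal sketch of the labeling procedure, assuming the virtual disk appears as c2t0d0 and using the standard Solaris format utility:
format
(select c2t0d0 from the list of disks)
format> label
(answer y to write the label to the disk)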
For further information about the format utility, refer to the man pages.
4. To exit the format utility, type quit at the format prompt.
Creating a file system on a disk managed by the CDP/NSS software:
Warning: Make sure to choose the correct raw device when creating a file system. If
in doubt, check with an administrator.
1. To create a new file system, execute the following command:
newfs /dev/rdsk/c2t0d0s2
where c2t0d0s2 is the name of the raw device.
2. To create a mount point for the new file system, execute the following command:
mkdir /mnt/ipstor1
where /mnt/ipstor1 is the name of the mount point you are creating.
3. To mount the disk managed by the CDP/NSS software, execute the following
command:
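For example, assuming the raw device and mount point used in the previous steps, the corresponding block device is mounted with:
mount /dev/dsk/c2t0d0s2 /mnt/ipstor1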
When assigning a virtual device from a different storage server, the SAN Client
software must be restarted in order to add the virtual device to the client machine.
The reason for this is that when virtual devices are added from other storage
servers, a new virtual SCSI adapter gets created on the client machine. Since
Solaris does not allow new adapters to be added dynamically, the CDP/NSS Client
software needs to be restarted in order for the new adapter and device to be added
to the system.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically
creates a virtual device using an available device.
The Size to Allocate is the maximum space available on all available devices. If
this drive is mirrored, this number will be half the full amount because the
mirrored drive will need an equal amount of space.
If you select Custom, you will see the following windows:
3. Confirm that all information is correct and then click Finish to expand the virtual
device.
Windows Dynamic disks
Expansion of dynamic disks using the Expand SAN Resource Wizard is not
supported for clients using Fibre Channel. Due to the nature of dynamic disks, it is
not safe to alter the size of the virtual device. However, dynamic disks do provide an
alternative method to extend the dynamic volume.
To extend a dynamic volume using SAN resources, use the following steps:
1. Create a new SAN resource and assign it to the CDP/NSS Client. This will
become an additional disk which will be used to extend the dynamic volume.
2. Use Disk Manager to write the disk signature and upgrade the disk to "Dynamic".
3. Use Disk Manager to extend the dynamic volume.
The new SAN resource should be available in the list box of the Dynamic Disk
expansion dialog.
Solaris clients
Windows clients (Fibre Channel)
For Windows 2000 and 2003 clients, after expanding a virtual device, you should
rescan the physical devices from the Computer Manager to see the expanded area.
Linux clients (Fibre Channel)
AIX clients
Expanding a CDP/NSS virtual disk will not change the size of the existing AIX
volume group. To expand the volume group, a new disk has to be assigned and the
extendvg command should be used to enlarge the size of the volume group.
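As a sketch, assuming the newly assigned disk appears on the AIX client as hdisk2 and the volume group is named datavg:
extendvg datavg hdisk2
lsvg datavg
The lsvg output confirms the new total and free physical partitions in the volume group.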
CDP/NSS Appliances
CDP and NSS appliances are also called IPStor servers. Both are storage servers
designed to require little or no maintenance.
All day-to-day CDP/NSS administrative functions can be performed through the
FalconStor Management Console. However, there may be situations when direct
access to the Server is required, particularly during initial setup and configuration of
physical storage devices attached to the Server or for troubleshooting purposes.
If access to the server's operating system is required, it can be done either directly
or remotely from computers on the SAN.
If the server is already started, you can use ./ipstor restart to stop and then start the
processes. When you start the server, you will see the processes start.
[Sample output: each IPStor process is listed with an [OK] status as it starts.]
Warning: Stopping the storage server processes will shut down all access to the
storage resources managed by the Server. This can halt processing on your
application servers, or even cause them to crash, depending upon how they behave
if a disk is unexpectedly shut off or removed. It is recommended that you make sure
your application servers are not accessing the storage resources when you stop the
storage server processes.
Telnet access
By default, IPStor administrators do not have telnet access to the server. The server
is configured to deny all TCP/IP access, including telnet. To enable telnet:
1. Install the following rpm files on the machine:
# rpm -ivh xinetd-<version>.rpm
# rpm -ivh telnet-<version>.rpm
2. Enter the following command (Linux Server only):
# vi /etc/xinetd.d/telnet
Change the user's login entry to:
Username:/homedirectory:/bin/bash
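On appliances that use xinetd, the telnet service is typically enabled by editing the file opened above and restarting xinetd; this reflects general xinetd behavior rather than a CDP/NSS-specific setting:
# in /etc/xinetd.d/telnet, change the disable flag:
#     disable = no
# then restart xinetd so the change takes effect:
service xinetd restart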
These commands display the SCSI devices attached to the IPStor Server. For
example, you will see something similar to the following:
[0:0:0:0]  disk
[0:0:1:0]  disk
[2:0:1:0]  disk
[2:0:2:0]  disk
[2:0:3:0]  disk
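On a typical Linux-based appliance, a listing like this can usually be produced with standard tools such as the following (the exact commands available may differ by appliance build):
lsscsi
cat /proc/scsi/scsi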
[Options table: Page up, Page down, Sort by KB read, Sort by ACSL, Sort by KB written, Sort by WWPN, Start logging, Quit.]
For example, if you are using 4 Ethernet devices for an iSCSI connection, run
the following commands (using the iscsiadm command):
iscsiadm -m iface -I iface-eth0 -o new
iscsiadm -m iface -I iface-eth1 -o new
iscsiadm -m iface -I iface-eth2 -o new
iscsiadm -m iface -I iface-eth3 -o new
2. Persistently bind each Ethernet device to a MAC address to ensure that the
same device is always used for the iSCSI connection. To do this, use the following
command:
iscsiadm -m iface -I iface-eth0 -o update -n
iface.hwaddress -v <MAC address>
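The steps that follow assume the targets have already been discovered through each interface. A typical open-iscsi discovery command, using an example portal address, is:
iscsiadm -m discovery -t sendtargets -p 10.0.0.2:3260 -I iface-eth0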
5. Log the iSCSI initiator to the target using the following command:
iscsiadm -m node -L
6. Confirm configured Ethernet devices are associated with targets by running the
following command:
iscsiadm -m session
7. Perform a rescan using the FalconStor Management Console to see all of the
iSCSI devices.
Note the information displayed in the menu header for the current HBA port. By
default, the configuration for HBA 0, port 0 displays.
2. To configure the selected HBA port, select option 4 - Port Level Info &
Operations.
Make sure to save your changes to the previous port before selecting another port,
otherwise your changes will be lost.
3. To change the IP address of the selected port, select option 2 - Port Network
Setting Menu.
The Port Network Setting Menu interface allows you to configure the IP address
for the selected port.
4. To change target parameters for the selected HBA port, select option 7 - Target
Level Info & Operations.
The HBA Target Menu displays.
5. Discover iSCSI targets by selecting option 10 - Target Discovery Menu.
The HBA Target Discovery Menu displays.
iSCSI Clients
iSCSI clients are the file and application servers that access CDP/NSS SAN
resources using the iSCSI protocol. Just as the CDP/NSS appliance supports
different types of storage devices (such as SCSI, Fibre Channel, and iSCSI), the
CDP/NSS appliance is protocol-independent and supports multiple outbound target
protocols, including iSCSI Target Mode. This chapter provides an overview for
configuring iSCSI clients with CDP or NSS.
iSCSI builds on top of the regular SCSI standard by using the IP network as the
connection link between various entities involved in a configuration. iSCSI inherits
many of the basic concepts of SCSI. For example, just like SCSI, the entity that
makes requests is called an initiator, while the entity that responds to requests is
called a target. Only an initiator can make requests to a target; not the other way
around. Each entity involved, initiator or target, is uniquely identified.
By default, when a client machine is added as an iSCSI client of a CDP or NSS
appliance, it becomes an iSCSI initiator. The initiator name is important because it is
the main identity of an iSCSI initiator.
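On a Linux client that uses the standard open-iscsi initiator, for example, the initiator name can be checked with:
cat /etc/iscsi/initiatorname.iscsi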
Supported platforms
iSCSI target mode is supported for iSCSI initiators on the following platforms:
Windows
VMware
NetWare
Linux
Solaris
HP-UX
AIX
You must install an iSCSI initiator on each of your client machines. iSCSI
software/hardware initiators are available from many sources and need to be
installed and configured on all clients that will access shared storage. Refer
to the FalconStor certification matrix for a list of supported iSCSI initiators.
You should not install any storage server client software on the client unless
you are using a FalconStor snapshot agent.
Enabling iSCSI
In order to add a client using the iSCSI protocol, you must enable iSCSI for your
storage server. To do this, in the FalconStor Management Console, right-click on
your storage server and select Options --> Enable iSCSI.
As soon as iSCSI is enabled, a new SAN client called Everyone_iSCSI is
automatically created on your storage server. This is a special SAN client that does
not correspond to any specific client machine. Using this client, you can create
iSCSI targets that are accessible by any iSCSI client that connects to the storage
server.
Before an iSCSI client can be served by a CDP or NSS appliance, the two entities
need to mutually recognize each other. The following sections take you through this
process.
Note: If you have more than one IP address, a screen will display prompting
you to select the IP address that the iSCSI target will be accessible over.
If the initiator does not appear, you may need to rescan. You can also manually
add it, if necessary.
4. Select the initiator or select the client to have mobile access.
Stationary iSCSI clients correspond to specific iSCSI client initiators and,
consequently, to the client machines that own the specific initiator names. Only a
client machine with a correct initiator name can connect to the storage server to
access the resources assigned to this stationary client.
5. Add/select users who can authenticate for this client. The user name defaults to
the initiator name. You will also need to enter the CHAP secret.
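On a Linux client that uses open-iscsi, for example, the matching CHAP user name and secret are typically configured in /etc/iscsi/iscsid.conf (the values shown are placeholders):
node.session.auth.authmethod = CHAP
node.session.auth.username = <user name>
node.session.auth.password = <CHAP secret>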
6. Enter the name of the client, select the operating system, and indicate whether
or not the client machine is part of a cluster.
Note: It is very important that you enter the correct client name.
3. Select the IP address(es) of the storage server to which this client can connect.
You can select multiple IPs if your iSCSI initiator has multipathing support (such
as the Microsoft initiator version 2.0).
If you specified a default portal (in Server Properties), that IP address will be
selected for you.
4. Select an access mode.
Read/Write - Only one client can access this SAN resource at a time. All others
(including Read Only) will be denied access.
Read/Write Non-Exclusive - Two or more clients can connect at the same time
with both read and write access. You should be careful with this option because
if you have multiple clients writing to a device at the same time, you have the
potential to corrupt data. This option should only be used by clustered servers,
because the cluster itself prevents multiple clients from writing at the same time.
Read Only - This client will have read only access to the SAN resource. This
option is useful for a read-only disk.
Disable iSCSI
To disable iSCSI for a CDP or NSS appliance, right-click on the server node in the
FalconStor Management Console, and select Options --> Disable iSCSI.
Note that before disabling iSCSI, all iSCSI initiators and targets for this CDP or NSS
appliance must be removed.
Event Log
The Event Log details significant occurrences during the operation of the storage
server. The Event Log can be viewed in the FalconStor Management Console when
you highlight a Server in the tree and select the Event Log tab in the right pane.
A sample log display includes the following columns: Date, Time, ID, Event, and
Message. You can select a category of messages to display.
You can refresh the current Event Log display by right-clicking on the Server and
selecting Event Log --> Refresh.
Reports
FalconStor provides reports that offer a wide variety of information:
Individual reports are viewed from the Reports object in the console. Global
replication reports are created from the Servers object.
2. Select a report.
Depending upon which report you select, additional windows appear to allow
you to filter the information for the report. Descriptions of each report appear on
the following pages.
3. Select the reporting schedule.
Depending upon which report you select, you can select to run the report for one
time only, or select a daily, weekly, or monthly date range.
To create a one-time only report, click the For One Time Only radio button
and click Next.
If applicable, specify the date or date range for the report and indicate which
SAN resources and Clients to use in the report.
Selecting Past n days/weeks/months will create reports that generate data
relative to the time of execution.
Include All SAN Resources and Clients - Includes all current and previous
configurations for this server (including SAN resources and clients that you may
have changed or deleted).
Include Current Active SAN Resources and Clients Only - Includes only those
SAN resources and clients that are currently configured for this server.
The Delta Replication Status report has a different dialog that lets you specify a
range by selecting starting and ending dates.
To create a daily report, click the Daily radio button, give the schedule a
name if desired and click Next.
Set the schedule frequency, duration, start time and click Next.
To create a weekly report, click the Weekly radio button.
4. If applicable, select the objects necessary to filter the information in the report.
Depending upon which report you selected, you may be asked to select from a
list of storage servers, SCSI adapters, SCSI devices, SAN clients, SAN
resources, or replica resources.
5. If applicable, select which columns you want to display in the report and in which
sort order.
Depending upon which report you selected, you may be able to select which
column fields to display on the report. All available fields are selected by default.
You can also select whether you want the data sorted in ascending or
descending order.
View a report
When you create a report, it is displayed in the right-hand pane and is added
beneath the Reports object in the configuration tree.
Expand the Reports object to see the existing reports available for this Server.
Schedule a report
Reports can be generated on a regular basis or as needed. Some tips to remember
on scheduling are as follows:
The start and end dates in the report scheduler are inclusive.
When scheduling a monthly report, be sure to select a date that exists in every
month. For example, if you select to run a report on the 31st day, the report will not
be generated on months that do not have 31 days.
When scheduling a report to run every n days in selected months, the first report is
always generated on the first of the month and then every n number of days after.
Therefore if you chose 30 days (n = 30) and there are not 30 days left in the month,
the schedule will jump to the first day of the next month.
Some reports allow you to select a range of dates relative to the day on which you
generate the report, for the past n number of days. If you select the past one day, the
report is generated for one day.
When scheduling a daily report, it is best practice to schedule the report to run at the
end of the day to capture the most data. Daily report data accumulation begins at
12:00 am and ends at the scheduled run time.
Select the E-mail option in the Report Wizard to enter e-mail addresses to have the
report sent. Enter e-mail addresses, separated by semi-colons. You can also have
the report sent to distribution groups, as long as the E-Mail server being used
supports this feature.
Report types
The FalconStor reporting feature includes many useful reports including allocation,
usage, configuration, and throughput reports. A description of each report follows.
The Replication Status Summary tab displays a consolidated summary for multiple
servers.
The Data tab breaks down the disk space information for each physical device. The
Utilization tab breaks down the disk space information for each logical device.
During the creation of the report, you select which SCSI channel to include in the
report.
The following is a sample page from a SAN Client Usage Distribution Report:
The SAN Resource tab displays a graph showing the throughput of the Server. The
horizontal axis displays the time segments. The vertical axis measures the total data
transferred in each time segment for both reads and writes. For example:
The System tab displays the CPU and memory utilization for the same time period
as the main graph:
This helps the administrator identify time periods where the load on the Server is
greatest. Combined with the other reports, the specific device, client, or SAN
resource that contributes to the heavy usage can be identified.
The Data tab shows the tabular data that was used to create the graphs:
The Configuration Information tab shows which SAN Resources and Clients were
included in the report.
Sample report columns: Device Name, SCSI Address, Sectors, Total (MB), Used (MB),
Available (MB).
Sample report columns: ID, Resource Name, Type, Category, Size (MB).
Supported platforms
Fibre Channel target mode is supported on the following platforms:
Windows
VMware
NetWare
Linux
Solaris
HP-UX
AIX
Ports
Your CDP/NSS appliance is equipped with several Fibre Channel ports. The ports
that connect to storage arrays are commonly known as Initiator Ports. The ports that
will interface with the backup servers' FC initiator ports will run in a different mode
known as Target Mode.
VSA
The Volume Set Addressing (VSA) option must be disabled when using a
FalconStor Management Console version later than 6.0 to set up a near-line mirror
on a version 6.0 server. This also applies if you are setting up a near-line mirror from
a version 6.0 server to a later server.
Some storage devices (such as the EMC Symmetrix storage controller and older HP
storage) use VSA (Volume Set Addressing) mode. This addressing method is used
primarily for addressing virtual buses, targets, and LUNs.
CDP/NSS supports up to 4096 LUN assignments per VSA client when VSA is
enabled.
For upstream, you can set VSA for the client at the time of creation or you can
modify the setting after creation by right-clicking on the client.
When VSA is enabled and actual LUNs are beyond 256, use the Report LUN
option to discover them. Use the LUN range option only if Report LUN does not work
for the adapter.
If new devices are assigned (from the storage server) to a VSA-enabled storage
server before loading up the CDP/NSS storage server, the newly assigned devices
will not be discovered during start up. A manual rescan is required.
Zoning
Two types of zoning can be configured on each switch: hard zoning (based on port
#) and soft zoning (based on WWPNs).
Soft zoning is implemented in software and uses WWPNs in the configuration.
Filtering implemented in Fibre Channel switches prevents ports from being seen
from outside their assigned zones. The WWPN remains the same in the
zoning configuration regardless of the port location. If a port fails, you can simply
move the cable from the failed port to another valid port without having to
reconfigure the zoning.
CDP/NSS requires isolated zoning where one initiator is zoned to one target in order
to minimize I/O interruptions by non-related FC activities, such as port login/out,
reset, etc. With isolated zoning, each zone can contain no more than two ports or
two WWPNs. This applies to both initiator zones (storage) and target zones (clients).
For example, for the case of upstream (to client) zoning, if there are two client
initiators and two CDP/NSS targets on the same FC fabric and if it is desirable for all
four path combinations to be established, you should use four specific zones, one
for each path (Client_Init1/IPStor_Tgt1, Client_Init1/IPStor_Tgt2, Client_Init2/
IPStor_Tgt1, and Client_Init2/IPStor_Tgt2). You cannot create a single zone that
includes all four ports. The four-zone method is cleaner because it does not allow
the two client initiators nor the two CDP/NSS target ports to see each other. This
eliminates all of the potential issues such as initiators trying to log in to each other
under certain conditions.
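As a sketch only, assuming a Brocade-style zoning CLI and hypothetical WWPNs (the
zone names follow the four path pairs described above, and "Upstream_cfg" is a
made-up configuration name), the four single-initiator/single-target zones could be
defined along these lines; use your switch vendor's actual commands and the real
WWPNs from your fabric:
zonecreate "Client_Init1_IPStor_Tgt1", "10:00:00:00:c9:11:11:01; 21:00:00:e0:8b:22:22:01"
zonecreate "Client_Init1_IPStor_Tgt2", "10:00:00:00:c9:11:11:01; 21:00:00:e0:8b:22:22:02"
zonecreate "Client_Init2_IPStor_Tgt1", "10:00:00:00:c9:11:11:02; 21:00:00:e0:8b:22:22:01"
zonecreate "Client_Init2_IPStor_Tgt2", "10:00:00:00:c9:11:11:02; 21:00:00:e0:8b:22:22:02"
cfgcreate "Upstream_cfg", "Client_Init1_IPStor_Tgt1; Client_Init1_IPStor_Tgt2; Client_Init2_IPStor_Tgt1; Client_Init2_IPStor_Tgt2"
cfgenable "Upstream_cfg"
Each zone contains exactly two WWPNs, which satisfies the isolated zoning
requirement described above.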
The same should be done for downstream (to storage) zoning. If there are two CDP/
NSS initiators and two storage targets on the same fabric, there should be four
zones (IPStor_Init1/Storage_Tgt1, IPStor_Init1/Storage_Tgt2, IPStor_Init2/
Storage_Tgt1, and IPStor_Init2/Storage_Tgt2).
Make sure that storage devices are not zoned directly to the clients. Instead, since
CDP/NSS will be provisioning the storage to the clients, the target ports of the storage
devices should be zoned to the CDP/NSS initiator ports while the clients are zoned to
the CDP/NSS target ports. Make sure that from the storage unit's management GUI
(such as SANtricity or NaviSphere), the LUNs are reassigned to the storage server as
the host. CDP/NSS will either virtualize these LUNs (if they are newly created without
existing data) or service-enable them (which preserves existing data). CDP/NSS can
then define SAN resources out of these LUNs and further provision them to the clients
as Service-Enabled Devices.
Switches
For the best performance, if you are using 4 or 8 Gig switches, all of your cards
should be 4 or 8 Gig cards; for example, the QLogic 2432 or 2462 4 Gb cards.
Check the certification matrix on the FalconStor website to see a complete list of
certified cards.
NPIV (point-to-point) mode is enabled by default. Therefore, all Fibre Channel
switches must support NPIV.
QLogic HBAs
Target mode
settings
The table below lists the recommended settings for QLogic HBA target mode
(recommended changes from the default appear in the Recommendation column).
These values are set in the fshba.conf file and will override those set through the
BIOS settings of the HBA.
For initiators, please consult the best practice guideline as published by the storage
subsystem vendor. If an initiator is to be used by multiple storage brands, the best
practice is to select a setting that best satisfies both brands. If this is not possible,
consult FalconStor technical support for advice, or separate the conflicting storage
units to their own initiator connections.
Name                          Default             Recommendation
frame_size                    2 (2048 byte)       2 (2048 byte)
loop_reset_delay
adapter_hard_loop_id
connection_option             1 (point to point)
hard_loop_id                  0-124               Set both the primary target adapter and the
                                                  secondary standby adapter (the failover pair)
                                                  to the SAME value.
fibre_channel_tape_support    0 (disable)         0 (disable)
data_rate                     2 (auto)
execution_throttle            255                 255
LUNs_per_target               256                 256
enable_lip_reset              1 (enable)          1 (enable)
enable_lip_full_login         1 (enable)          1 (enable)
enable_target_reset           1 (enable)          1 (enable)
login_retry_count
port_down_retry_count
link_down_timeout             45                  45
extended_error_logging_flag   0 (no logging)      0 (no logging)
interrupt_delay_timer
iocb_allocation               512                 512
enable_64bit_addressing       0 (disable)         0 (disable)
fibrechannelconfirm           0 (disable)         0 (disable)
class2service                 0 (disable)         0 (disable)
acko                          0 (disable)         0 (disable)
responsetimer                 0 (disable)         0 (disable)
fastpost                      0 (disable)         0 (disable)
driverloadrisccode            1 (enable)          1 (enable)
ql2xmaxqdepth                 255
max_srbs                      4096                4096
ql2xfailover
ql2xlogintimeout              20 seconds          20 seconds
ql2xretrycount                20                  20
ql2xsuspendcount              10                  10
ql2xdevflag
ql2xplogiabsentdevice         0 (no PLOGI)        0 (no PLOGI)
busbusytimeout                60 seconds          60 seconds
displayconfig
retry_gnnft                   10                  10
recoverytime                  10 seconds          10 seconds
failbacktime                  5 seconds           5 seconds
bind                          0 (by Port Name)
qfull_retry_count             16                  16
qfull_retry_delay
ql2xloopupwait                10                  10
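For illustration only, a few of the recommended values above might appear in
fshba.conf along these lines; the option names are taken from the table, but the
exact file syntax here is an assumption, so verify it against the fshba.conf installed
on your appliance:
frame_size=2
connection_option=1
execution_throttle=255
LUNs_per_target=256
link_down_timeout=45
hard_loop_id=10
The hard_loop_id value shown is a placeholder; as noted in the table, the primary
target adapter and its standby partner must use the same value.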
You should use persistent binding for all clients to all QLogic targets.
(For all clients except Solaris SPARC clients) When setting up clients on a Fibre
Channel network using a Fabric topology, we recommend that you set the topology
that each HBA will use to log into your switch to Point-to-Point Only.
If you are using a QLogic HBA, the topology is set through the QLogic BIOS:
Configure Settings --> Extended Firmware Settings --> Connection Option: Point-to-Point Only
Note: For QLogic HBAs, it is recommended that you hard-code the link speed of
the HBA to match the switch speed.
NetWare clients
Built into the latest QLogic driver is the ability to handle failover. HBA settings are
configured through nwconfig. Do the following after installing the card:
1. Type nwconfig.
2. Go to Driver Options and select Config disk and Storage device drivers.
3. Select an Additional Driver and type the path for the updated driver (i.e
sys:\qlogic).
4. Set the following parameters:
Scan All Luns = yes
FailBack Enabled = yes
Read configuration = yes
Requires configuration = no
Report all paths = yes
Use Portnames = no
Qualified Inquiry = no
Report Lun Zero = yes
GNFT SNS Query = no
Console Alerts = no
Solaris clients
Use the following procedure for persistent binding on Solaris clients when using a
QLogic QLA HBA driver. (If you are using the Sun QLC driver, no configuration steps
are necessary for persistent binding.)
1. Statically assign the target's WWPN by editing the QLogic driver configuration
file located in /kernel/drv/qla2200.conf or /kernel/drv/qla2300.conf, depending on
the version of the card.
For example, if the target WWPN is 210000e08b04f136, you need to add the
following to /kernel/drv/qla2200.conf:
hba0-SCSI-target-id-0-fibre-channel-name="200000e08b04f136"
2. Edit the SCSI device driver configuration file located in /kernel/drv/sd.conf to add
the LUN number of the device that you are assigning in order for the device to be seen.
For example, if you have a device with LUN 1, you would need to add the
following to /kernel/drv/sd.conf:
name="sd" class="scsi" target=0 lun=1;
If you add the above lines for LUN 1 to 8, devices from LUN 1 to LUN 8 will be
scanned when the card loads and any assigned devices will be found. Each time
you add a new device from LUN 0 to 8, run devfsadm -i sd instead of editing
the file /kernel/drv/sd.conf and the devices will be found.
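For illustration, the full set of /kernel/drv/sd.conf entries for LUNs 1 through 8
described above would look like the following (target 0 is assumed, as in the
earlier example):
name="sd" class="scsi" target=0 lun=1;
name="sd" class="scsi" target=0 lun=2;
name="sd" class="scsi" target=0 lun=3;
name="sd" class="scsi" target=0 lun=4;
name="sd" class="scsi" target=0 lun=5;
name="sd" class="scsi" target=0 lun=6;
name="sd" class="scsi" target=0 lun=7;
name="sd" class="scsi" target=0 lun=8;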
3. Reboot the client machine.
Solaris Internal Fibre
Channel drives
Some newer Sun machines, such as the SunBlade 1000, come with internal Fibre
Channel drives instead of SCSI drives. These drives have a qlc driver that is not
compatible with CDP/NSS. The following instructions explain how to migrate a
system boot disk from the qlc driver to the qla2x00fs driver.
Note: Before attempting to migrate from the qlc driver to the qla2200 driver, make
sure the system disk is archived.
Determine the boot device
You will see information similar to the following appear on your screen:
disk    /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@1,0
disk1   /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@2,0
The lines you are looking for should all contain SUNW,qlc, /fp, and /disk.
2. Select the alias that represents the system boot device (disk by default) and
write down the device path.
3. Boot off of the disk.
4. Determine the essential devices by typing:
# df
You will see output listing each mounted filesystem and the device it is mounted
from; the line for / (the root filesystem) is the one of interest.
5. Note the root device path, which in this example is: /dev/dsk/c3t1d0s0.
Prepare the primary system to boot using the qla2200 driver
1. Check /etc/vfstab and write down the full symbolic links for the root device,
such as:
/dev/dsk/c3t1d0s0 -> /devices/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100002037e2dc62,0:a
/dev/rdsk/c3t1d0s0 -> /devices/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100002037e2dc62,0:a,raw
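One quick way to capture these links (a sketch using the device names from this
example) is to list them and record the link targets:
# ls -l /dev/dsk/c3t1d0s0 /dev/rdsk/c3t1d0s0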
Note: In case of a failure you will need this information to restore the system.
2. Install CDP/NSS with the Fibre Channel option to the system if it is not installed.
3. Enter single user mode via init 1.
4. Copy:
/usr/local/ipstor/drv/qla2200 to /kernel/drv/qla2200
and
/usr/local/ipstor/drv/sparcv9/qla2200 to /kernel/drv/sparcv9/qla2200
The rootdev is the instance you added into the /etc/path_to_inst files in step B7.
10. Obtain the major number for the sd device by viewing /etc/name_to_major.
The sd device is usually major number 32.
11. Figure out the minor number for the root device node by using the formula:
([instance# * 8] + disk slice#)
Where the instance# is the number you picked in step B7 and the slice# is the
s number you got from step A4.
For example, if the instance# is 61 and slice# is 0 (/dev/dsk/c3t0d0s0), the minor
number will be 488. ([61 * 8] + 0)
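As a quick check, the same arithmetic can be done in the shell (using the example
numbers above):
# echo $(( 61 * 8 + 0 ))
488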
12. With the major and minor number, you can now make a device node for the
system. For example:
# cd /devices/pci@8,600000/SUNW,qlc@4/
# mknod sd@1,0:a b 32 488
# mknod sd@1,0:a,raw c 32 488
13. Make links between the device name and the device node.
For example, if your boot device is denoted as /dev/dsk/c3t1d0s0 on the primary
disk, delete the link and re-link it to the newly created device node. Write down
the existing link before you remove it, because you will need to use it to revert
back when failure occurs.
# rm /dev/dsk/c3t1d0s0
# ln -s /devices/pci@8,600000/SUNW,qlc@4/sd@1,0:a /dev/dsk/c3t1d0s0
Do not forget the raw device nodes:
# rm /dev/rdsk/c3t1d0s0
# ln -s /devices/pci@8,600000/SUNW,qlc@4/sd@1,0:a,raw /dev/rdsk/c3t1d0s0
Note: Make sure you cover all of the devices in /etc/vfstab or you will be
forced into maintenance mode.
Windows
HBA Card Type
Disk Timeout
Value
With DynaPath
Without DynaPath
QLogic
Emulex
Node Timeout = 30
Link Timeout = 30
Disk Timeout Value = N/A
Reset FF = 1 (true)
The Disk Timeout Value is a value that needs to be modified at the operating system
level. To enter the disk timeout value:
1. Go to Start --> Run.
The LUNs per target value should now be set to 64. This value can be set to 256
because Report LUN is used upstream; however, the appropriate value depends on
your requirements and the number of LUNs.
Clustering
HBA Card Type    With DynaPath        Without DynaPath
Tachyon          scsi timeout = 30    scsi timeout = 30
In HP-UX, PVLink is used for multipathing. The scsi_timeout (also known as
pv_timeout) value can be modified using the following command:
pvchange -t 180 /dev/dsk/c0t0d0
Note that this command must be executed for each device. The following
procedure should be used to replace PVLink with DynaPath:
1. Unmount the filesystem that the volume group is mounted to:
umount /dev/<VGname>/<logical vol name>
With DynaPath
Without DynaPath
IBM
Retry Timeout = 30
Emulex
Retry Timeout = 30
Cambex
Retry Timeout = 30
There are no BIOS or OS level changes that can be made for AIX. As indicated, AIX
will hold onto the I/O for 30 seconds without DynaPath. With DynaPath, the I/O will
hold for 300 seconds (5 minutes).
In AIX DynaPath, certain configurations do not need to support the special failover
rescan logic. These include using switches that support port swapping (certain Cisco
and Brocade switches), using HBA drivers that support dynamic port tracking (e.g.,
the Emulex LPFC driver or the Cambex QLogic HBA driver), and using versions of
AIX (e.g., 5.2 or 5.3) that support the dynamic port tracking option.
With DynaPath
Without DynaPath
QLogic
Emulex
Node Timeout = 30
Link Timeout = 30
Disk timeout value = N/A
Solaris 9
HBA Card Type
With DynaPath
Without DynaPath
QLogic
Emulex
Node Timeout = 30
Link Timeout = 30
Disk timeout value = N/A
The changes indicated above should be made in the *.conf files for the respective
HBAs.
Note: For Sun (qlc) drivers, the clients will not be able to sustain failover at all if
DynaPath is not installed.
QLogic
With DynaPath
N/A
Without DynaPath
DynaPath is not required with NetWare since NetWare has its own multipathing.
The settings indicated above should be modified at the ql23xx driver line in the
startup.ncf file. The /ALLPATHS and /PORTNAMES options are required if an upper
layer module is going to handle failover (it expects to see all paths).
The Port Down Retry Count and Link Down Retry are configurable in the BIOS,
whereas the /XRetry, /XTimeout, and /PortDown values are configured by the driver.
The Port Down Retry Count and the /PortDown values combined will approximately
equal the total disk timeout.
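As a hypothetical sketch only (the driver file name QL2300.HAM and the slot number
are placeholders for your specific hardware), the ql23xx driver line in startup.ncf
might look like the following; verify the supported switches against the QLogic
NetWare driver documentation:
LOAD QL2300.HAM SLOT=2 /ALLPATHS /PORTNAMES
The /XRetry, /XTimeout, and /PortDown values mentioned above are appended to
this same line when needed.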
To set a port:
1. In the FalconStor Management Console, expand Physical Resources.
2. Right-click on a HBA and select Options --> Enable Target Mode.
You will get a Loop Up message on your storage server if the port has
successfully been placed in target mode.
3. When done, make a note of all of your WWPNs.
It may be convenient for you to highlight your server and take a screenshot of
the Console.
You should not use the NPIV driver if you intend to directly connect a
target port to a client host.
With dual mode, clients will need to be zoned to the alias port (called
Target WWPN). If they are zoned to the base port, clients will not see any
devices.
You will only see the alias port when that port is in target mode.
NPIV allows multiple N_Port IDs to share a single physical N_Port. This
allows us to have an initiator, target and standby occupying the same
physical port. This type of configuration is not supported when not using
NPIV.
As a failover setup best practice, it is recommended that you do not put
more than one standby WWPN on a single physical port.
When setting up Fibre Channel failover using multiple Fibre Channel switches, we
recommend the following:
If the multiple switches are connected via a Fibre Channel port that acts as
a management port for both switches, the primary storage server's Target
Port and the secondary storage server's Standby Port can be on different
switches.
If the switches are not connected, or if they are not "smart" switches that
can be managed, the primary storage server's Target Port and the
secondary storage server's Standby Port must be on the same switch.
Failover limitations
When using failover in Fibre Channel environments, it is recommended that you use
the same type of Fibre Channel HBAs for all CDP/NSS client hosts.
Install and run client software and/or manually add Fibre Channel
clients
Client software is only required for Fibre Channel clients running a FalconStor
Snapshot Agent or for clients using multiple protocols.
If you do not install the Client software, you must manually add the Client in the
Console. To do this:
1. In the Console, right-click on SAN Clients and select Add.
2. Select Fibre Channel as the Client protocol.
3. Select WWPN initiators. See Associate World Wide Port Names (WWPN) with
clients.
4. Select Volume Set Addressing.
Volume Set Addressing is used primarily for addressing virtual buses, targets,
and LUNs. If your storage device uses VSA, you must enable it. Note that
Volume Set Addressing is selected by default for HP-UX clients.
5. Enter a name for the SAN Client, select the operating system, and indicate
whether or not the client machine is part of a cluster.
If the client's machine name is not resolvable, you can enter an IP address and
then click Find to discover the machine.
6. Indicate if you want to enable persistent reservation.
This option allows clustered SAN Clients to take advantage of Persistent
Reserve/Release to control disk access between various cluster nodes.
Note: If you are using AIX SAN Client cluster nodes, this option should be
cleared.
If you are using a switched Fibre Channel environment, CDP/NSS will query
the switch for its Simple Name Server (SNS) database and will display a list
of all available WWPNs. You will still have to identify which WWPN is
associated with each machine.
If you are not using a switched Fibre Channel environment, you can
manually determine the WWPN for each of your ports. There are different
ways to determine it, depending upon the hardware vendor. You may be
able to get the WWPN from the BIOS during bootup or you may have to
read it from the physical card. Check with your hardware vendor for their
preferred method.
To simplify this process, when you enabled Fibre Channel, an Everyone client was
created under SAN Clients. This is a generic client that you can assign to all (or
some) of your SAN resources. It allows any WWPN not already associated with a
Fibre Channel client to have read/write non-exclusive access to any SAN resources
assigned to Everyone.
For security purposes, you may want to assign specific WWPNs to specific clients.
For the rest, you can use the Everyone client.
Do the following for each client for which you want to assign specific virtual devices:
1. Highlight the Fibre Channel Client in the FalconStor Management Console.
2. Right-click on the Client and select Properties.
One to One - Limits visibility to a single pair of WWPNs. You will need to
select the client's Fibre Channel initiator WWPN and the server's Fibre
Channel target WWPN.
One to All - You will need to select the client's Fibre Channel initiator
WWPN.
All to One - You will need to select the server's Fibre Channel target
WWPN.
All to All - Creates multiple data paths. If ports are ever added to the client or
server, they will automatically be included in the WWPN mapping.
NetWare clients
After you assign a WWPN to the client and assign storage, you will be able to
configure it in several ways depending upon your version of NetWare.
NetWare version 5.x
1. Type nwconfig.
This takes you to the configuration screen.
2. Select Disk Options.
3. Scan for additional devices.
4. Modify disk partitions and Hot Fix.
5. Choose the Falcon IPStor Disk.
6. Initialize the partition table.
7. Create a NetWare disk partition.
1. Go to \\public\mgmt\console1\1.2\bin\ConsoleOne.exe.
2. Right-click on the NDS context and select Disk Management.
3. Select to initialize the new storage.
If you don't see the new storage, you may have to type scan all at the
command line.
4. Choose either NSS Logical Volumes or Traditional Volumes.
5. Select New and follow the screens to create a NSS volume and pool or a
Traditional volume.
SAN Clients
Storage Area Network (SAN) Clients are the file and application servers that access
SAN resources. Since SAN resources appear as locally attached SCSI devices, the
applications, such as file services, databases, web and email servers, do not need
to be modified to utilize the storage.
On the other hand, since the storage is not locally attached, there is some
configuration needed to locate and mount the required storage.
Security
CDP/NSS utilizes strict authorization policies to ensure proper access to storage
resources on the FalconStor storage network. Since applications and storage
resources are now separated, and it is possible to transmit storage traffic over a
non-dedicated network, extra measures have been taken to ensure that data is only
accessible to those authorized to use it.
To accomplish this, CDP/NSS safeguards the areas of potential vulnerability:
System management
CDP/NSS protects your system by ensuring that only the proper administrators have
access to the system's configuration. This means that the administrator's user name
and password are always verified against those defined on the storage server
before access to the configuration is granted.
While the server verifies each administrator's login, the root user is the only one who
can add or delete IPStor administrators. The root user can also change other
administrators' passwords and has privileges to the operating system. Therefore,
the server's root user is the key to protecting your server, and the root user password
should be closely guarded. It should never be revealed to other administrators.
As a best practice, IPStor administrator accounts should be limited to trusted
administrators who can safely modify the server configuration. Improper
modifications of the server configuration can result in lost data if SAN resources are
deleted or modified.
Data access
Just as CDP/NSS protects your system configuration by verifying each administrator
at login, CDP/NSS protects storage resources by ensuring that only the proper
computer systems have access to the system's resources.
For access by application servers, two things must happen: authentication and
authorization.
Authentication is the process of establishing the credentials of a Client and creating
a trusted relationship (shared secret) between the client and server. This prevents
other computers from masquerading as the Client and accessing the storage.
Authentication occurs once per Client-to-Server relationship and occurs the first time
a server is successfully added to a client. Subsequent access to a server from a
client uses the authenticated shared secret to verify the client. Credentials do not
need to be re-established unless the software is re-installed. The authentication
process uses the authenticated Diffie-Hellman protocol. The password is never
transmitted through the network, not even in encrypted form, in order to eliminate
security vulnerabilities.
Authorization is the process of granting storage resources to a Client. This is done
through the console by an IPStor administrator or the server's root user. The client
will only be able to access those storage resources that have been assigned to it.
Account management
Only the root user can manage users and groups or reset passwords. You will need
to add an account for each person who will have administrative rights in CDP/NSS.
You will also need to add a user account for clients that will be accessing storage
resources from a host-based application (such as FalconStor DiskSafe or FileSafe).
To make account management easier, users can be grouped together and handled
simultaneously. To manage users and groups, right-click on the server and select
Accounts. A list of all existing users and administrators appears on the Users tab
and a list of all existing groups appears on the Groups tab.
The rights of each account type (Root, IPStor Administrator, and IPStor User) are
compared across the following categories: create/delete pools, add/remove storage
from pools, create/modify/delete logical resources, assign rights to IPStor users, and
assign storage to clients. Note that IPStor Users can only modify or delete logical
resources that they created.
For additional information regarding user access rights, refer to the Manage
accounts section and Manage storage pools and the devices within storage pools.
Security recommendations
In order to maintain a high level of security, a CDP/NSS installation should be
configured and used in the following manner:
Disable ports
Disable all unnecessary ports. The only ports required by CDP/NSS are shown in
Port Usage:
Failover
Overview
To support mission-critical computing, CDP/NSS-enabled technology provides high
availability for the entire storage network, protecting you from a wide variety of
problems, including:
Connectivity failure
Storage device path failure
Storage device failure
Storage server failure (including storage device failure)
The Failover
Option
The FalconStor failover option provides high availability for CDP and NSS
operations by eliminating the down time that can occur should a storage server
(software or hardware) or a storage device fail. There are two modes of failover:
Shared storage failover - Uses a two-node failover pair to provide node level
redundancy. This model requires a shared storage infrastructure and is
typically Fibre Channel based.
Non-shared storage failover (Cross-mirror failover) - Provides high
availability without the need for shared storage. Used with appliances
containing internal storage. Mirroring is facilitated over a dedicated, direct IP
connection. (Available in a Virtual Appliance environment.)
Best Practice
As a failover setup best practice, it is recommended that you do not put more than
one standby WWPN on a single physical port. Both NSS/CDP nodes in a cluster
configuration require the same number of physical Fibre Channel target ports to
achieve best practice failover configurations.
Primary/Secondary Storage Servers
FalconStor's Primary and Secondary servers are separate, independent storage
servers that each have their own assigned clients. The primary storage server is the
server that is being monitored by the secondary storage server. In the event the
primary fails, the secondary takes over. This is referred to as Active-Passive
Failover.
The terms Primary and Secondary are purely from the clients' perspective since
these servers may be configured to monitor each other. This is referred to as Mutual
Failover (Active-Active failover). In that case, each server is primary to its own clients
and secondary to the other's clients. Each server normally services its own clients. In
the event one server fails, the other will take over and serve the failed server's clients.
Failover/
Takeover
Failover/takeover is the process that occurs when the secondary server takes over
the identity of the primary. In the case of cross-mirroring on a virtual appliance,
failover occurs when all disks are swapped to the secondary server. Failover will
occur under the following conditions:
Recovery/
Failback
Recovery/Failback is the process that occurs when the secondary server releases
the identity of the primary to allow the primary to restore its operation. Once control
has returned to the primary server, the secondary server returns to its normal
monitoring mode.
After recovering a virtual appliance cross-mirror failure, the secondary server swaps
disks back to the primary server after the disks are re-synchronized.
Storage Cluster
Interlink
Sync Standby
Devices
This menu option is available from the console (Failover --> Sync Standby Devices)
and is useful when the Storage Cluster Interlink connection in a failover pair is
broken. Select this option to manually synchronize the standby device information
on both servers once the Storage Cluster Interlink is reconnected.
Asymmetric mode
(Fibre Channel only) Asymmetric failover requires standby ports on the secondary
server in case a target port on your primary server fails.
Swap
For virtual appliances: Swap is the process that occurs with cross-mirroring when
data functions are moved from a failed virtual disk on the primary server to the
mirrored virtual disk on the secondary server. The disks are swapped back once the
problem is resolved.
Failover requirements
The following are the requirements for setting up a failover configuration:
General failover
requirements
You must have two storage servers. The failover pair should be installed
with identical Linux operating system versions.
Version 7.0 and later requires a Storage Cluster Interlink Port for failover
setup. This is a physical connection (also used as a hidden heartbeat IP)
between two servers. If you wish to disable the Storage Cluster Interlink
heartbeat functionality, contact Technical Support.
Note: When USEQUORUMHEALTH is disabled and there are no client-associated
network interfaces, all network interfaces - including the Storage Cluster Interlink
Port - must go down before failover can occur. When the Storage Cluster Interlink
heartbeat functionality is disabled, it is no longer treated as a heartbeat IP
connection for failover.
Both servers must reside on the same network segment, because in the
event of a failover, the secondary server must be reachable by the clients of
the primary server. This network segment must have at least one other
device that generates a network ping (such as a router, switch, or server).
This allows the secondary server to detect the network in the event of a
failure.
You need to reserve an IP address for each network adapter in your primary
failover server. The IP address must be on the same subnet as the
secondary server and is used by the secondary server to monitor the
primary server's health. In a mutual failover configuration, these IP
addresses are used by the servers to monitor each other's health. The
health monitoring IP address remains with the server in the event of failure
so that the server's health can be continually monitored. Note: The storage
server clients and the console cannot use the health monitoring IP address
to connect to a server.
You must use static IP addresses for your failover configuration. It is also
recommended that the IP addresses of your servers be defined in a DNS
server so they can be resolved.
If you will be using Fibre Channel target mode or iSCSI target mode, you
must enable it on both the primary and secondary servers before creating
your failover configuration.
The first time you set up a failover configuration, the secondary server must
not have any replica resources.
You must have at least one device reserved for a virtual device on each
primary server with enough space to hold the configuration repository that
will be created. The main repository should be established on a RAID5 or
RAID1 file system for ultimate reliability.
It is strongly recommended that you use some type of power control
option for failover servers.
If you are using an external hardware power controller for your failover pair,
you should set it up before creating your failover configuration. Refer to
Power Control options for more information.
General failover
requirements
for iSCSI clients
(Windows iSCSI clients) The Microsoft iSCSI initiator has a default retry period of 60
seconds. You must change it to 300 seconds in order to sustain the disk for five
minutes during failover so that applications will not be disrupted by temporary
network problems. This setting is changed through the registry.
1. Go to Start --> Run and type regedit.
2. Find the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\4D36E97B-xxxxxxxxx\<iscsi adapter interface>\Parameters\
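As a hedged sketch, the value typically adjusted under that key for the Microsoft
iSCSI initiator is MaxRequestHoldTime (a REG_DWORD whose default is 60 seconds).
Assuming the full class GUID and the adapter interface instance for your system
(both shown as placeholders here), the change could be made from a command
prompt:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-...}\<instance>\Parameters" /v MaxRequestHoldTime /t REG_DWORD /d 300 /f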
Both servers must have at least one Network Interface Card (NIC) each (on
the same subnet). Unlike other clustering software, the heartbeat co-exists
on the same NIC as the storage network. The heartbeat does not require
and should NOT be on a dedicated heartbeat interface and subnet.
The failover pair must have connections to the same common storage; if
storage cannot be seen by both servers, it cannot be accessed from both
servers. However, the storage does not have to be represented the same
way to both servers. Each server needs at least one path to each
commonly-shared physical storage device, but there is no maximum and
they do not need to be equal (i.e., server A has two paths while server B has
four paths). Make sure to properly configure LUN masking on storage arrays
so both storage server nodes can access the same LUNs.
Storage devices must be attached in a multi-host SCSI configuration or
attached on a Fibre loop or switched fabric. In this configuration, both
servers can access the same devices at the same time (both read and
write).
(SCSI only) Termination should be enabled on each adapter, but not on the
device, in a shared bus arrangement.
If you will be using the FalconStor NIC Port Bonding option, you must set it
up before creating a failover configuration. You cannot change or remove
NIC Port Bonding once failover is set up. If you need to change NIC Port
Bonding, you will have to remove failover first.
Cross-mirror
failover
requirements
FC-based
Asymmetric
failover
requirements
Connectivity failure
A connectivity failure can occur due to a NIC, Fibre Channel HBA, cable, or switch/
router failure. You can eliminate potential points of failure by providing multiple paths
to the storage server with multiple NICs, HBAs, cables, switches/routers.
The client always tries to connect to the server with its original IP address (the one
that was originally set in the client when the server was added to the client). You can
redirect traffic to an alternate adapter by specifying alternate IP
addresses for the storage server. This can be done in the console (right-click on the
server and select Properties --> Server IP Addresses tab).
When you set up multiple IP addresses, the clients will attempt to communicate with
the server using an alternate IP address if the original IP address stops responding.
Notes:
In order for failover to occur when there is a failure, the device driver
must promptly report the failure. Make sure you have the latest driver
available from the manufacturer.
In order for the clients to successfully use an alternate IP address, your
subnet must be set properly so that the subnet itself can redirect traffic to
the proper alternate adapter.
The client becomes aware of the multiple IP addresses when it initially
connects to the server. Therefore, if you add additional IP addresses in
the console while the client is running, you must rescan devices
(Windows clients) or restart the client (Unix clients) to make the client
aware of these IP addresses. In addition, if you recover from a network
path failure, you will need to restart the client so that it can use the
original IP address.
Fibre Channel Target failure: If a Fibre Channel target port links down, the
partner server will immediately take over. This is true regardless of the
number of target ports on the NSS server. For example, the server can use
multiple targets to provide virtual devices to the client. If a target loses
connectivity, the client will still have alternate paths to access those devices.
However, the default behavior is to fail over. The default behavior can be
modified by Technical Support.
Network
connection
failure
Because the heartbeat uses the same network path that the server uses to serve its
clients, if the heartbeat cannot be retrieved and there are iSCSI clients associated
with those networks, the secondary server knows that the clients cannot access the
server. This is considered a Catastrophic failure because the server or the network
connectivity is incapacitated. In this case the secondary will immediately initiate a
failover.
Failover restrictions
The following information is important to be aware of when configuring failover:
JBODs are not recommended for failover. If you use a JBOD as the storage
device for a storage server (configured in Fabric Loop), certain downstream
failover scenarios, such as SCSI Aliasing, might not function properly. If a
Fibre connection on the storage server is broken, the JBOD might hang and
not respond to SCSI commands. SCSI Aliasing will attempt to connect using
the other Fibre connection; however, since the JBOD is in an unknown
state, the storage server cannot reconnect to the JBOD, causing CDP/NSS
clients to disconnect from their resources.
In a pure Fibre Channel environment, Network failure will not trigger failover.
Failover setup
You will need to know the IP address(es) of the primary server (and the secondary
server if you are configuring a mutual failover scheme). You will also need the health
monitoring IP address(es). It is a good idea to gather this information and find
available IP addresses before you begin the setup.
1. In the console, right-click on an expanded server and select Failover --> Failover
Setup Wizard.
You will see a screen similar to the following that shows you a status of options
on your server.
4. Select the secondary server and determine if the servers will monitor each other.
Shared storage
failover
Cross mirror failover
(non-shared storage)
5. (Cross-mirror only) Select the disks that will be used for the primary server.
6. (Cross-mirror only) Confirm the disks that will be used for the secondary server.
9. Determine if there are any conflicts with the server you have selected.
You will see mismatched devices listed here. For example, if you have a RAID
array and one server sees all eight devices and the other server sees only four
devices, you will see the devices listed here as mismatched.
You must resolve the mismatch before continuing. For example, if the QLogic
driver did not load on one server, you will have to load it before going on.
Note that you can exclude physical devices from failover consideration, if
desired.
10. Determine if you need to rescan this server's physical adapters.
If you fixed any mismatched devices in the last step, you will need to rescan
before the wizard can continue.
If you are re-running the Failover wizard because you made a change to a
physical device on one of the servers, you should rescan before continuing.
If you had no conflicts and have recently used the Rescan option to rescan the
selected server's physical adapters, you can skip the scanning process.
Note: If this is the first time you are setting up a failover configuration, you will
get a warning message if there are any Replica resources on the secondary
server. You will need to remove them and then restart the failover wizard.
12. Verify the Storage Cluster Interlink Port IP addresses for failover setup.
By re-ordering the subnet list, a failover caused by a failure on eth0 can be avoided.
If you are using the Cross-mirror feature, you will not see the 192.168... cross-mirror
link that you entered earlier listed here.
14. Indicate if you want to use this network adapter.
Select the IP addresses that clients will use to access the storage servers when
using iSCSI, replication and for console communication.
Notes:
15. Enter the health monitoring IP address you reserved for the selected network
adapter.
The health monitoring IP address remains with the server in the event of failure
so that the server's health can be continually monitored. Therefore, it is
recommended that you use static IP addresses.
Select health monitoring heartbeat addresses, which will be used exclusively by
the storage servers to monitor each other's health. These addresses must not
be used for any other purpose.
16. If you want to use additional network adapter cards, repeat the steps above.
17. (Asymmetric mode only) For Fibre Channel failover, select the initiator on the
secondary server that will function as a standby in case the target port on your
primary server fails.
For QLogic HBAs, you will need to select a dedicated standby port for each
target port used by clients. You should confirm that the adapter shown is not the
initiator on your secondary server that is connected to the storage array, and
also that it is not the target adapter on your secondary server. You can only pick
a standby port once. The exception to this rule is when you are using NPIV.
If you are configuring a mutual failover, you will need to set up the standby
adapter for the secondary server as well.
18. Select which Power Control option the primary server is using.
Power Control options force the primary server to release its resources after a
failure. Refer to Power Control options for more information.
HP iLO - This option will power down the primary server in addition to forcing the
release of the server's resources and IP address. In order to use HP iLO,
several packages must be installed on the server and you must have configured
the controller's IP address to be accessible from the storage servers. In this
dialog, enter the HP iLO port's IP address. Refer to HP iLO for more
information.
For Red Hat 5, the following packages are automatically installed on each server
(if you are using the EZStart USB key) in order to use HP iLO power control:
perl-IO-Socket-SSL-1.01-1.fc6.noarch.rpm
perl-Net-SSLeay-1.30-4.fc6.x86_64.rpm
RPC100 - This option will power down the primary server in addition to forcing
the release of the server's resources and IP address. RPC100 is an external
power controller available in both serial and parallel versions. Select the correct
port, depending upon which version you are using. Refer to RPC100 for more
information.
IPMI - This option will reset the power of the primary server, forcing the release
of the server's resources and IP address. In order to use IPMI, you must have
created an administrative user via your IPMI configuration tool. The IP address
cannot be the virtual IP address that was set for failover. Refer to IPMI for more
information.
APC PDU - This option will reset the power of the primary server, forcing the
release of the server's resources and IP address. The APC PDU external
hardware power controller must be set up before you can use it. In this dialog,
enter the IP address of the APC PDU, the community name that was given
Write+ access, and the port(s) that the failover partner is physically plugged into
on the PDU. Use a space to separate multiple ports. Refer to APC PDU for
more information.
For Red Hat 5, you will need to install the following packages on each server in
order to use APC PDU:
lm_sensors-2.10.7-9.el5.x86_64.rpm
net-snmp-5.3.2.2-9.el5_5.1.x86_64.rpm
net-snmp-libs-5.3.2.2-9.el5_5.1.i386.rpm
net-snmp-libs-5.3.2.2-9.el5_5.1.x86_64.rpm
net-snmp-utils-5.3.2.2-9.el5_5.1.x86_64.rpm
19. Select which Power Control option the secondary server is using.
20. Confirm all of the information and then click Finish to create the failover
configuration.
Once your configuration is complete, each time you connect to either server in
the console, you will automatically be connected to the other as well. After
configuring cross-mirror failover, you will see all of the virtual machine disks
listed in the tree, similar to the following:
Notes:
If the setup fails during the setup configuration stage (for example, the
configuration is written to one server but then the second server is
unplugged while the configuration is being written to it), use the Remove
Failover Configuration option to delete the partially saved configuration.
You can then create a new failover configuration.
Do not change the host name of a server that is part of a failover pair.
After a failover occurs, if a client machine is rebooted while either of the failover
servers is powered off, the client must rescan devices once the failover server is
powered back on, but before recovery occurs. If this is not done, the client
machine will need to be rebooted in order to discover the newly restored paths.
This option powers down the primary server in addition to forcing the release of the
server's resources and IP address. HP iLO is available on HP servers with the iLO
(Integrated Lights Out) option. In order to use HP iLO, you must have configured the
controller's IP address to be accessible from the storage servers. The console will
prompt you to enter the server's HP iLO port IP address.
Note: The HP iLO power control option depends on the storage server being able
to access the HP iLO port through its regular network connection. If the HP iLO
port is inaccessible, this option will not function. Each time the power control
dialog screen is launched, the username/password fields will be blank. The fields
are available for update but the current username and password information is
not revealed for security purposes. You can make changes by re-entering your
username and password.
RPC100
This option will power down the primary server in addition to forcing the release of
the server's resources and IP address. RPC100 is an external power controller
available in both serial and parallel versions. The console will prompt you to select
the serial or parallel port, depending upon which version of the RPC100 you are
using. Note that the RPC100 power controller only controls one power connection. If
the storage server has multiple power supplies, a special power cable is needed to
connect them all.
SCSI Reserve/
Release
(Not available in version 7) This option is not an actual Power Control option, but a
storage solution to prevent two storage servers from accessing the same physical
storage device simultaneously. Note that this option is only available on those
storage devices that support SCSI Reserve & Release. This option will not force a
hung storage server to reboot and will not force the hung server to release its IP
addresses or bring down its FC targets. The secondary server will simply reserve
the primary server's physical resources, thereby preventing the possibility of a
double mount. If the primary server is not actually hung and is only temporarily
unable to communicate with the secondary server through normal means, the
triggering of the SCSI Reserve/Release from the secondary server will trigger a
reservation conflict on the primary server. At this point the primary server will release
both its IP addresses and FC targets so the secondary can successfully take over. If
this occurs, the primary server will need to be rebooted before the reservation
conflict can be resolved. The commands ipstor restart and ipstor
restart all will NOT resolve the reservation conflict.
IPMI
This option will reset the power of the primary server, forcing the release of the
server's resources and IP address. Intelligent Platform Management Interface
(IPMI) is a hardware level interface that monitors various hardware functions on a
server. If IPMI is provided by your hardware vendor, you must follow the vendor's
instructions to configure it and you must create an administrative user via your IPMI
configuration tool. The IP address cannot be the virtual IP address that was set for
failover.
If you are using IPMI, you will see several IPMI options (Monitor and Filter) on the
server's System Maintenance menu. Refer to System maintenance for more
information.
You should check the FalconStor certification matrix for a current list of FalconStor
appliances and server hardware that has been certified for use with IPMI.
APC PDU
This option will reset the power of the primary server, forcing the release of the
server's resources and IP address. The APC PDU is an external hardware power
controller that must be set up before you can use it.
To set up the APC PDU power controller:
1. Connect the APC PDU to your network.
2. Via the COM port on the unit, set an IP address that is accessible from the
storage servers.
3. Launch the APC PDU user interface from the COM port or the Web.
4. Enable SNMP on the APC PDU.
This can be found under Network.
5. Add or edit a Community Name and give it Write+ access.
You will use this Community Name as the password for configuration of the
power control option. For example, if you want to use the password apc, you
have to create a Community Name called apc (or change the default Community
Name to apc) and give it Write+ access.
6. Connect the power plugs of your storage servers to the APC PDU.
Be sure to note which outlets are used for each server.
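As an optional verification sketch (using the example community name apc from
step 5 and a placeholder for the PDU's IP address), you can confirm from each
storage server that the PDU answers SNMP queries, for example with the net-snmp
utilities listed earlier:
snmpwalk -v 1 -c apc <APC-PDU-IP-address> system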
The failover status display in the console shows the failover settings, including
which IP addresses are being monitored for failover, and the current status of the
failover configuration:
Red - The server is currently in failover mode and has been taken over by
the secondary server.
Green - The server has taken over the primary server's resources.
Yellow - The user has suspended failover on this server. The current server
will NOT take over the primary server's resources even if it detects an abnormal
condition on the primary server.
Failover events are also written to the primary server's Event Log, so you can check
there for status and operational information, as well as any errors. Be aware that
when a failover occurs, the console shows the failover partner's Event Log for the
server that failed.
For troubleshooting issues pertaining to failover, refer to the Failover
Troubleshooting section.
After failover
When a failed server is restarted, it communicates with the acting primary server
and must receive the okay from the acting primary server in order to recover its role
as the primary server. If there is a communication problem, such as a network error,
and no notification is received, the failed server remains in a 'ready' state but does
not recover its role as the primary server. After the communication problem has
been resolved, the storage server will then be able to recover normally.
If failover is suspended on the secondary server, or if the failover module is stopped,
the primary will not automatically recover until the ipstorsm.sh recovery
command is entered.
If both failover servers go offline and then only one is brought up, type the
ipstorsm.sh recovery command to bring the storage server back online.
Manual recovery
Manual recovery is the process when the secondary server releases the identity of
the primary to allow the primary to restore its operation. Manual recovery can be
triggered by selecting the Stop Takeover option from the FalconStor Management
Console.
If the primary server is not ready to recover, and you can still communicate with the
server, a detailed failover screen displays.
If the primary server is not ready to recover, and you cannot communicate with the
server, a warning message displays.
Auto recovery
You can enable auto recovery by changing the Auto Recovery option. When auto
recovery is enabled, control is automatically returned to the primary server once it
has recovered after a failover. Once control has returned to the primary server, the
secondary server returns to its normal monitoring mode.
"/etc/init.d/iscsi
8. Once you have verified that both servers can access the remote storage, restart
CDP/NSS on both servers. Failure to do so will result in server recovery issues.
9. After CDP/NSS has been restarted, verify that both servers are in a ready state
by using the sms -v command.
Both servers should now be recovered and in a healthy state.
If everything is working correctly, this option will be labeled Resources and will not
be selectable. The option will be labeled Incomplete Resources for the following
scenarios:
The mirror resource was offline when auto expansion (i.e. Snapshot
resource or CDP journal) occurred but the device is now back online.
You need to create a mirror for virtual resources that existed on the primary
server prior to cross mirror configuration.
1. Right-click on the server and select Cross Mirror --> Verify & Repair.
If everything is working correctly, this option will be labeled Remote Storage and will
not be selectable. The option will be labeled Damaged or Missing Remote Storage
when a physical disk being used by cross mirroring on the secondary server has
been replaced.
Note: You must suspend failover before replacing the storage.
1. Right-click the primary server and select Cross Mirror --> Verify & Repair.
If everything is working correctly, this option will be labeled Local Storage and will
not be selectable. The option will be labeled Damaged or Missing Local Storage
when a physical disk being used by cross mirroring is damaged on the primary
server and has been replaced.
Note: You must suspend failover before replacing the storage.
1. Right-click the primary server and select Cross Mirror --> Verify & Repair.
If everything is working correctly, this option will be labeled Storage and Complete
Resources and will not be selectable. The option will be labeled Resources with
Missing segments on both Local and Remote Storage when a virtual device spans
multiple physical devices and one physical device is offline on both the primary and
secondary server. This situation is very rare and this option is informational only.
1. Right-click on the server and select Cross Mirror --> Verify & Repair.
2. Click the Resources with Missing segments on both Local and Remote Storage
button.
You will see a list of failed devices. Because this option is informational only, no
action can be taken here.
If you make a change to a physical device (such as if you add a network card that
will be used for failover), you will need to re-run the Failover wizard. Be sure to scan
both servers during the wizard.
At that point, the secondary server is permitted to have Replica resources. This
makes it easy for you to upgrade your failover configuration.
Change subnet
If you switch IP segments for an existing failover configuration, the following needs
to be done:
1. Remove failover from both storage servers.
2. Delete the current failover servers from the FalconStor Management Console.
3. Make network modifications to the storage servers (i.e. change IP segments).
4. Add the storage servers back to the FalconStor Management Console.
5. Configure failover using the new IP segment.
The Self-checking Interval determines how often the primary server will check itself.
The Heartbeat Interval determines how often the secondary server will check the
heartbeat of the primary server.
If enabled, Auto Recovery determines how long to wait before returning control to
the primary server once the primary server has recovered.
Type YES in the dialog box to bring the server to a ready state and then force the
server up via the monitor IP address.
Suspend/resume failover
Select Failover --> Suspend Failover to stop a server from monitoring its partner server.
In the case of Active-Active failover, you can suspend from either server. However,
the server that you suspend from will stop monitoring its partner and will not take
over for that partner server in the event of failure. It can still fail over itself. For
example, server A and server B are configured for Active-Active failover. If you go to
server B and suspend failover, server A will no longer fail over to server B. However,
server B can still fail over to server A.
Select Failover --> Resume Failover to restart the monitoring.
Notes: If the cross mirror link goes down, failover will be suspended. Use the
Resume Failover option when the cross mirror link comes back up. The disks will
automatically be re-synced at the scheduled interval or you can manually
synchronize using the cross mirror synchronize option.
Once the connection is repaired, the failover status is not cleared until
failover is resumed on both servers.
If everything is checked, this eliminates the failover relationship and removes the
health monitoring IP addresses from the servers and restores the Server IP
addresses. If you uncheck the IP address(es) for a server, the health monitoring
address becomes the Server IP address.
Note: If you are using cross mirror failover, after removal the cross mirror relationship will be gone but the configuration of your iSCSI initiator will remain and
the disks will still be presented to both primary and secondary servers.
The failover pair must have matching target site names. (This does not
apply to the target server name)
The failover pair can have different throttle settings, even if they are
replicating to the same server.
During failover, the throttle values of the two partners combine and are used on the "up" server to maintain throttle settings. In other words, from the software perspective, each server still maintains its own throttle. From the hardware perspective, the "up" server uses the combined throttle level of itself and its partner.
The failover pair's throttle levels may be combined to equal over 100%. Example: 80% + 80% = 160%. Note: This percentage is relative to the link type. This value is the maximum speed allowed, not the instantaneous speed. (A small worked example follows this list.)
If one of the throttle levels is set to no limit, then in the failover state, both servers' throttle levels become no limit.
It is highly recommended that you avoid the use of different link types. Using
different link types may cause unexpected results in network traffic while in
a failover state.
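The following is a small worked example of the combined throttle calculation described above; the percentages are illustrative only and are relative to each server's link type.

    # Illustrative only: the surviving server applies the sum of both throttles.
    throttle_a=80                              # server A throttle (% of its link)
    throttle_b=80                              # server B throttle (% of its link)
    combined=$((throttle_a + throttle_b))      # 160% while in the failover state
    echo "Maximum throttle while failed over: ${combined}%"

Remember that this value is a ceiling on replication speed, not an instantaneous rate.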
For failover with HotZone created on local storage, failover must be set up first. The local storage cannot be created on a standalone server. For additional
information regarding HotZone, refer to HotZone.
4. On the Storage Option screen, select the Allocate from Local Storage option to
allocate space from the high performance disks.
Note: If you need to remove failover setup, it is recommended that you unassign
the physical disks so they can be re-used as virtual devices or SED devices after
failover has been removed.
Performance
FalconStor offers several options that can dramatically increase the performance of
your SAN.
SafeCache
The FalconStor SafeCache option improves the overall performance of CDP/NSS-managed disks (virtual and/or service-enabled) by making use of high-speed storage devices, such as RAM disk, NVRAM, or solid-state disk (SSD), as a persistent (non-volatile) read/write cache.
In a centralized storage environment where a large set of database servers share a smaller set of storage devices, data tends to be randomly accessed. Even with a RAID controller that uses cache memory to increase performance and availability, hard disk storage often cannot keep up with application servers' I/O requests.
SafeCache, working in conjunction with high-speed devices (RAM disk, NVRAM, or SSDs) to front slower real disks, can significantly improve performance. Since these high-speed devices do not suffer random-access penalties, SafeCache can write data blocks sequentially to the cache and then move (flush) them to the data disk (random write) as a separate process once the writes have been acknowledged, effectively accelerating the performance of the slower disks.
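The staging behavior can be pictured with a highly simplified sketch; this is not product code, the file paths are placeholders, and concurrency and error handling are ignored.

    STAGE=/tmp/safecache.stage        # placeholder for a file on a high-speed device
    DATA=/tmp/volume.data             # placeholder for a file on a slower data disk

    # Writes land sequentially in the fast staging area and are then acknowledged.
    echo "block 0001" >> "$STAGE"
    echo "block 0002" >> "$STAGE"

    # A separate background process later flushes the staged blocks to the data disk.
    ( while read -r block; do echo "$block" >> "$DATA"; done < "$STAGE"; : > "$STAGE" ) &
    wait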
The SafeCache default throttle speed is 10,240 KB/s, which can be adjusted
depending on your client IO pattern.
Regardless of the type of high-speed storage device being used as persistent cache
(RAM disk, NVRAM, or SSD), the persistent cache can be mirrored for added
protection using the FalconStor Mirroring option. In addition, SSDs and NVRAM
have a built-in power supply to minimize potential downtime.
SafeCache is fully compatible with the NSS failover option, which allows one server
to automatically fail over to another without any data loss and without any cache
write coherency problems. It is highly recommended that you use a solid-state disk for the SafeCache resource.
Configure SafeCache
To set up SafeCache for a SAN Resource you must create a cache resource. You
can create a cache resource for a single SAN resource or you can use the batch
feature to create cache resources for multiple SAN resources.
To enable SafeCache:
1. Navigate to Logical Resources --> SAN Resources and right-click on a SAN
resource.
2. Select SafeCache --> Enable.
The Create Cache Resource wizard displays to guide you through creating the
cache resource and allocating space for the storage.
Note: If Cache is enabled, up to 256 unflushed TimeMarks are supported.
Once the Cache has 256 unflushed TimeMarks, new TimeMarks cannot be
created.
For multiple SAN resources, right-click on the SAN Resources object and select
SafeCache --> Enable.
2. Select how you want to create the cache resource.
Note that the cache resource cannot be expanded. Therefore, you should
allocate enough space for your cache resource, taking into account future
growth. If you outgrow your cache resource, you will need to disable it and then
recreate it.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the cache resource using the criteria you select:
Select different drive - CDP/NSS will look for space on another hard disk.
Select drives from different adapter/channel - CDP/NSS will look for space
on another hard disk only if it is on a separate adapter/channel.
Select any available drive - CDP/NSS will look for space on any disk,
including the original. This option is useful if you have mapped a device
(such as a RAID device) that appears as a single physical device.
Global Cache
Global SafeCache can be viewed from the FalconStor Management Console by
selecting the Global SafeCache node under Logical Resources.
You can choose to create a global or private cache resource. A global cache allows
you to share the cache with up to 128 resources. To create a global cache, select
Use Global Cache Resource in the Create Cache Resource Wizard.
SafeCache properties
You can update the parameters that control how and when data will get flushed from
the cache resource to the CDP/NSS-managed disk. To update these parameters:
1. Right-click on a SAN resource that has SafeCache enabled and select
SafeCache --> Properties.
2. Type a new value for each parameter you want to change.
Refer to the SafeCache configuration section for more details about these
parameters.
HotZone
The FalconStor HotZone option offers two methods to improve performance, Read
Cache and Prefetch.
Read Cache
Read Cache is an intelligent, policy-driven, disk-based staging mechanism that
automatically remaps "hot" (frequently used) areas of disks to high-speed storage
devices, such as RAM disks, NVRAM, or Solid State Disks (SSDs). This results in
enhanced read performance for the applications accessing the storage. It also
allows you to manage your storage network with a minimal number of high-speed
storage devices by leveraging their performance capabilities.
When you configure the Read Cache method, you must divide your virtual or
Service-Enabled disk into zones of equal size. The HotZone storage is then
automatically created on the specified high-speed disk. This HotZone storage is
divided into zones equal in size to the zones on the virtual or service-enabled disk
(e.g., 32 MB), and is provisioned to the disk.
Reads/writes to each zone are monitored on the virtual or service-enabled disk. Based on the statistics collected, the application determines
the most frequently accessed zones and re-maps the data from these hot disk
segments to the HotZone storage (located on the high-speed disk) resulting in
enhanced read performance for the application accessing the storage. Using the
continually collected statistics, if it is determined that the corresponding hot disk
segment is no longer hot, the data from the high performance disk is moved back
to its original zone on the virtual or service-enabled disk.
Prefetch
Prefetch enables pre-fetching of data for clients. This allows clients to read ahead
consecutively, which can result in improved performance because the data is ready
from the anticipatory read as soon as the next request is received from the client.
This will reduce the latency of the command and improve the sequential read
benchmarks in most cases.
Prefetch may not be helpful if the client is already submitting sequential reads with
multiple outstanding commands. However, the stop-and-wait case (with one read
outstanding) can often be improved dramatically by enabling Prefetch.
Prefetch does not affect writing, or random reading.
Applications that copy large files (i.e. video streaming) and applications that back up
files are examples of applications that read sequentially and might benefit from
Prefetch.
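If you want to quantify the effect, one simple way to measure sequential read throughput from a Linux client is with dd, run once before and once after enabling Prefetch. The device name below is only an example, and the command performs reads only.

    # Sequential read benchmark from the client; compare the reported MB/s values.
    dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct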
Configure HotZone
1. Right-click on a SAN resource and select HotZone --> Enable.
For multiple SAN resources, right-click on the SAN Resources object and select
HotZone --> Enable.
2. Select the HotZone method to use.
These properties control how the prefetching (read ahead) is done. While you
may need to adjust the default settings to enhance performance, FalconStor has
determined that the defaults shown here are best suited for most disks/
applications.
Maximum prefetch chains - Number of locations from the disk to read
from.
Maximum read ahead - The maximum per chain. This can override the
Read ahead option.
Read ahead - How much should be read ahead at a time. No matter how
this is set, you can never read more than the Maximum read ahead setting
allows.
Chain Timeout - Specify how long the system should wait before freeing
up a chain.
4. (Read Cache only) Select the storage pool or physical device(s) from which to
create this HotZone.
5. (Read Cache only) Select how you want to create the HotZone.
Note that the HotZone cannot be expanded. Therefore, you should allocate enough space for your HotZone, taking into account future growth. If you outgrow your HotZone, you will need to disable it and then recreate it.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the HotZone storage using the criteria you select:
Select different drive - CDP/NSS will look for space on another hard disk.
Select drives from different adapter/channel - CDP/NSS will look for space
on another hard disk only if it is on a separate adapter/channel.
Select any available drive - CDP/NSS will look for space on any disk,
including the original. This option is useful if you have mapped a device
(such as a RAID device) that appears as a single physical device.
6. (Read Cache only) Select the disk to use for the HotZone storage.
If you selected Custom, you can piece together space from one or more disks.
Size of each zone - Indicate how large each zone should be. Reads/writes to each zone on the disk are monitored. Based on the statistics collected, the application determines the most frequently accessed zones and re-maps the data from these hot zones to the HotZone storage. You should check with your application server to determine how much data is read/written at one time. The block size used by the application should ideally match the size of each zone.
Minimum stay time - Indicate the minimum amount of time data should
remain in the HotZone before being moved back to its original zone once it
is determined that the zone is no longer hot.
Access type - Indicate whether the zone should be monitored for reads,
writes, or both.
Access intensity - Indicate how to determine whether a zone is hot, based on the number of I/Os performed or the amount of data transferred (read/write) for each zone.
9. Confirm that all information is correct and then click Finish to enable HotZone.
Note that if you manually suspend HotZone from the Console when the device
configured with the HotZone option is running normally, the Suspended field will
display Yes.
You can also see statistics about the zone by checking the HotZone Statistics tab:
The information displayed is initially for the current interval (hour, day, week, or
month). You can go backward (and then forward) to see any particular interval. You
can also view multiple intervals by moving backward to a previous interval and then
clicking the Play button to see everything from that point to the present interval.
Click the Detail View button to see more detail. There you will see the information
presented more granularly, for smaller amounts of the disk.
If HotZone is being used in conjunction with Fibre Channel or iSCSI failover and a
failover has occurred, the HotZone Statistics will not be displayed while in a failover
state. This is because the server that took over does not have the failed server's HotZone statistics. As a result, the Console will
display empty statistics for the primary server while the secondary has taken over.
Once the failed server is restored, the statistics will display properly. This does not
affect the functionality of the HotZone option while in a failover state.
HotZone Properties
You can configure HotZone properties by right-clicking on the storage server and
selecting HotZone. If HotZone has already been enabled, you can select the
properties option to configure the Zone and Access policies if the HotZone was set
up using the Read Cache method. Alternatively, you will be able to set the Prefetch
Properties if your HotZone has been set up using the Prefetch method.
For additional information on these parameters, see Configure HotZone.
Disable HotZone
The HotZone --> Disable option permanently stops HotZone for the specific SAN
resource.
Because there is no dynamic free space expansion when the HotZone is full, you
can use this option to disable your current HotZone and then manually create a
larger one.
If you want to temporarily suspend HotZone, use the HotZone --> Suspend option
instead. You will then need to use the HotZone --> Resume option to begin using
HotZone again.
Mirroring
Mirroring provides high availability by minimizing the down time that can occur if a
physical disk fails. The mirror can be defined with disks that are not necessarily
identical to each other in terms of vendor, type, or even interface (SCSI, FC, iSCSI).
With mirroring, the primary disk is the disk that is used to read/write data for a SAN
Client and the mirrored copy is a copy of the primary. Both disks are attached to a
single storage server and are considered a mirrored pair. If the primary disk fails, the
disks swap roles so that the mirrored copy becomes the primary disk.
There are two Mirroring options, Synchronous Mirroring and Asynchronous
Mirroring.
Synchronous mirroring
FalconStor's Synchronous Mirroring option offers the ability to define a synchronous mirror for any CDP/NSS-managed disk (virtualized or service-enabled).
In the Synchronous Mirroring design, each time data is written to a designated disk,
the same data is simultaneously written to another disk. This disk maintains an exact
copy of the primary disk. In the event that the primary disk is unable to read/write
data when requested to by a SAN Client, CDP/NSS seamlessly swaps data
functions to the mirrored copy disk.
Asynchronous mirroring
FalconStor's Asynchronous Mirroring option offers the ability to define a near real-time mirror for any CDP/NSS-managed disk (virtual or service-enabled) over long distances between data centers.
When you configure an asynchronous mirror, you create a dedicated cache
resource and associate it to a CDP/NSS-managed disk. Once the mirror is created,
the primary and secondary disks are synchronized if the Start initial synchronization
when mirror is added option is enabled in global settings. This process does not
involve the application server. After the synchronization is complete, all write requests from the associated application server are sequentially delivered to the
dedicated cache resource. This data is then committed to both the primary and its
mirror as a separate background process. For added protection, the cache resource
can also be mirrored.
[Figure: Asynchronous mirroring staging area. Data blocks are written sequentially to the cache resource (staging area) to provide enhanced write performance; writes from the primary site are acknowledged and then committed to the primary disk and to the mirror disk at the remote site.]
Mirror requirements
The following are the requirements for setting up a mirroring configuration:
Mirror setup
You can enable mirroring for a single SAN resource or you can use the batch feature
to enable mirroring for multiple SAN resources. You can also enable mirroring for an
existing snapshot resource, cache resource, or incoming replica resource.
Note: For asynchronous mirroring, if you want to preserve the write order of data
that is being mirrored asynchronously, you should create a group for your SAN
resources and enable SafeCache for the group. This is useful for large databases
that span over multiple devices. In such situations, the entire group of devices is
acting as one huge device that contains the database. When changes are made
to the database, it may involve different places on different devices, and the write
order needs to be preserved over the group of devices in order to preserve
database integrity. Refer to Groups for more information about creating a group.
1. For a single SAN resource, right-click on the resource and select Mirror --> Add.
For multiple SAN resources, right-click on the SAN Resources object and select
Mirror --> Add.
For an existing snapshot resource or cache resource, right-click on the SAN
resource and select Snapshot Resource or Cache Resource --> Mirror --> Add.
2. (SAN resources only) Select the type of mirrored copy you are creating.
3. Select the storage pool or physical device(s) from which to create the mirror.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the Mirrored Copy using the criteria you select:
Select different drive - Look for space on another hard disk.
Select drives from different adapter/channel - Look for space on another
hard disk only if it is on a separate adapter/channel.
Select any available drive - Look for space on any disk, including the
original. This option is useful if you have mapped a device (such as a
RAID device) that looks like a single physical device.
If you select to monitor the mirroring process, the I/O performance is evaluated
to decide if I/O to the mirror disk is lagging beyond an acceptable limit. If it is,
mirroring will be suspended so it does not impact the primary storage.
Note: Mirror monitoring settings are retained when a mirror is enabled on the
same device.
Re-synchronization can be started based on time (every n minutes/hours; the default is every five minutes) and/or I/O activity (when I/O is less than n KB/MB).
If you select both, the time will be applied first before the I/O activity level. If you
do not select either, the mirror will stay suspended until you manually
synchronize it.
If you select one or both re-synchronization methods, you must also specify how
many times the system should retry the re-synchronization if it fails to complete.
When the system initiates re-synchronization, it does not check lag time and
mirroring will not be suspended if there is too much lag time.
If you manually resume mirroring, the system will monitor the process during
synchronization and check lag time. Depending upon your monitoring policy,
mirroring will be suspended if the lag time gets above the acceptable limit.
Note: If CDP/NSS is restarted or the server experiences a failover while
attempting to re-synchronize, the mirror will remain suspended.
8. Confirm that all information is correct and then click Finish to create the mirroring
configuration.
Note that the cache resource cannot be expanded. Therefore, you should
allocate enough space for your SAN resource, taking into account future growth.
If you outgrow your cache resource, you will need to disable it and then
recreate it.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the cache resource using the criteria you select:
Select different drive - Look for space on another hard disk.
Select drives from different adapter/channel - Look for space on another
hard disk only if it is on a separate adapter/channel.
Select any available drive - Look for space on any disk, including the
original. This option is useful if you have mapped a device (such as a
RAID device) that looks like a single physical device.
2. Confirm that all information is correct and then click Finish to create the cache
resource.
You can now mirror your cache resource by highlighting the SAN resource and
selecting SafeCache --> Mirror --> Add.
Note: In order to update the mirror synchronization status, refresh the Console
screen (View --> Refresh).
This feature is useful as a safety net when you perform major system maintenance
or upgrades. Simply promote the mirrored copy and you can perform maintenance
on the primary disk without worrying about anything going wrong. If there is a
problem, you can use the newly promoted virtual drive to serve your clients.
If one of the mirrored disks has a minor failure, such as a power loss:
1. Fix the problem (turn the power back on, plug the drive in, etc.).
2. Right-click on the SAN resource and select Mirror --> Synchronize.
This re-synchronizes the disks and restarts the mirroring.
As you expand the primary disk, the wizard only shows half the available
disk space as available because it reserves an equal amount of space for
the mirrored drive.
On a Thin Provisioned disk, if the mirror is offline, it will be removed when
storage is being added automatically. If this occurs, you must recreate
the mirror.
Select the Enable Mirror Throttle checkbox and enter the throughput speed for
mirror synchronization. This option is disabled by default. If this option is disabled for
an individual device, the global settings will be followed. Refer to Set global
mirroring options.
The synchronization speed can go up to the specified value, but the actual
throughput depends upon the storage environment.
Note: The mirror throttle settings are retained when the mirror is enabled on the
same device.
The throughput speed can also be set for multiple devices (in batch mode) by right-clicking on Logical Resources in the console and selecting Set Mirror Throttle.
Rebuild a mirror
The Rebuild option rebuilds a mirror from beginning to end and starts the mirroring
process once it is synchronized. The rebuild feature is useful if the mirror disk you
want to synchronize is from a different storage server.
A rebuild might be necessary if your disaster recovery site has been servicing clients
due to some type of issue, such as a storm or power outage, at your primary data
center. Once the problem is resolved, the mirror is out of sync. Because the mirror
disk is located on a different storage server in a remote location, the local storage
server must rebuild the mirror from beginning to end.
Before you rebuild a mirror, you must stop all client activity. After rebuilding the
mirror, swap the mirror so that the primary data center can service clients again.
To rebuild the mirror, right-click on a resource and select Mirror --> Rebuild.
You can see the current settings by checking the Mirror Synchronization Status field
on the General tab of the resource.
Suspend/resume mirroring
You can suspend mirroring for an individual resource or for multiple resources.
When you manually suspend a mirror, the system will not attempt to re-synchronize,
even if you have a re-synchronization policy. You will have to resume the mirror in
order to synchronize.
When mirroring is resumed, if the mirror is not synchronized, a synchronization will
be triggered immediately. During the synchronization, the system will monitor the
process and check lag time. Depending upon your monitoring policy, mirroring will
be suspended if the lag time gets above the acceptable limit.
To suspend/resume mirroring for an individual resource:
1. Right-click on a resource and select Mirror --> Suspend (or Resume).
You can see the current settings by checking the Mirror Synchronization Status
field on the General tab of the resource.
To suspend/resume mirroring for multiple resources:
1. Right-click on the SAN Resources object and select Mirror --> Suspend (or
Resume).
2. Select the appropriate resources.
3. If the resource is in a group, select the checkbox to include all of the group
members enabled with mirroring.
You can set global mirroring options that affect system performance during
mirroring. While the default settings should be optimal for most configurations, you
can adjust the settings for special situations.
To set global mirroring properties for a server:
1. Right-click on the server and select Properties.
2. Select the Performance tab.
Throttle [n] KB/s (Range 128 - 1048576, 0 to disable) - The throttle
parameter allows you to set the maximum allowable mirror
synchronization speed, thereby minimizing potential impact to
performance for your devices. This option is set at 10 MB per second by
default. If disabled, throughput is unlimited.
Note: Actual throughput depends upon your storage environment.
Select the Start Initial Synchronization when mirror is added check box to have the mirror synchronize when it is added. By default, the mirror will not automatically
synchronize when added. If this option is not selected, the mirror will not
sync until the next synchronization interval or until a manual
synchronization operation is performed. This option is not applicable for
Near-line recovery and thin disk relocation.
Synchronize Out-of-Sync Mirrors - Indicate how often the system should
check and attempt to re-synchronize active out-of-sync mirrors. The
default is every five minutes and up to two mirrors at each interval. These
settings are also used for the initial synchronization during creation or
loading of the mirror. Manual synchronizations can be performed at any
time and are not included in the number of mirrors at each interval set
here.
Enter the retry value to indicate how often synchronization should be
retried if it fails to complete. The default is to retry 20 times. These settings
will only be used for active mirrors. If a mirror is suspended because the
lag time exceeds the acceptable limit, that re-synchronization policy will
apply instead.
Indicate whether or not to include replica mirrors in the re-synchronization
process by selecting the Include replica mirrors in the automatic
synchronization process checkbox. This is unchecked by default.
Snapshot Resource
TimeMark snapshots allow you to create point-in-time delta snapshot copies of
data volumes. The concept of performing a snapshot is similar to taking a picture.
When we take a photograph, we are capturing a moment in time and transferring
this moment in time to a photographic medium, even while changes are occurring to
the object we focused our picture on. Similarly, a snapshot of an entire device allows
us to capture data at any given moment in time and move it to either tape or another
storage medium, while allowing data to be written to the device.
The basic function of the snapshot engine is to allow images to be created of data
volumes (virtual drives) using minimal storage space. The snapshot initially uses no
disk space. As new data is written to the source volume, the old data blocks are
moved to a temporary snapshot storage area. By combining the snapshot storage
with the source volume, the data can be recreated exactly as it appeared at the time
the snapshot was taken. For added protection, a Snapshot Resource can also be
mirrored.
A trigger is an event that notifies the application when it is time to perform a
snapshot of a virtual device. FalconStor's Replication, TimeMark/CDP, Snapshot
Copy, and ZeroImpact Backup options all trigger snapshots.
The initial size of the Snapshot Resource is based on the size of the SAN resource:

Virtual drive size        Space reserved for the Snapshot Resource
Less than 500 MB          100%
500 MB to 2 GB            50%
2 GB or more              20%
Using the table above, if you create a 10 GB SAN resource, your initial Snapshot
Resource will be 2 GB but you can set the Snapshot Resource to expand
automatically, as needed.
If you create a SAN resource that is less than 500 MB, the amount of space
reserved for the Snapshot Resource will be 100% of the virtual drive size. This is
because a smaller-sized volume can overfill quickly, leaving no time for auto-expansion to occur.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically
creates a Snapshot Resource using an available device.
Select different drive - The storage server will look for space on another
hard disk.
Select drives from different adapter/channel - The storage server will look
for space on another hard disk only if it is on a separate adapter/channel.
Select any available drive - The storage server will look for space on any
disk, including the original. This option is useful if you have mapped a
device (such as a RAID device) that looks like a single physical device to
your storage server.
5. Determine whether the storage server should expand your Snapshot Resource if
it runs low and how it should be expanded.
The threshold should be set so that, at the maximum throughput, it will take a minimum of five seconds to fill up the rest of the drive.
Therefore, if the maximum throughput is 50 MB/s, the threshold should be set for
when the space is below 250 MB. Of course if the throughput is lower, the
allowance can be lowered accordingly.
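As a sketch, the suggested threshold can be derived directly from the maximum throughput; the value below matches the example above.

    # Size the threshold so the remaining space lasts at least five seconds at peak load.
    max_throughput_mb=50                       # example maximum throughput, in MB/s
    threshold_mb=$((max_throughput_mb * 5))    # 250 MB
    echo "Expand when free space drops below ${threshold_mb} MB"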
The Maximum size allowed for the Snapshot Resource can be set to limit
automatic expansion. Specify 0 for no limit.
Note: If you do not select automatic expansion, old TimeMarks will be deleted
to prevent the Snapshot Resource from running out of space.
6. Configure what your storage server should do if your Snapshot Resource runs
out of space.
The default is to Always maintain write operations. If you are setting the
Snapshot Resource policy on a near-line mirror or replica, the default is to
Preserve all TimeMarks.
This will only occur if you have reached the maximum allowable size for your
Snapshot Resource or if you have chosen not to expand it. Once the maximum
is reached, the earliest TimeMarks will be deleted.
If a Snapshot Resource is associated with a member of a group enabled with
TimeMark, the earliest TimeMark will be deleted for all of the resources.
If you select Preserve all TimeMarks or Preserve recent TimeMarks, the system
will prevent any new writes from getting to the disk once the Snapshot Resource
runs out of space and it cannot allocate any more. As a result, clients can
experience write errors. If the client is a production machine, this may not be
desirable.
If you select Enable MicroScan, the data block will be analyzed and only the
changed data will be copied.
7. Determine if you want to use Snapshot Notification.
Because Snapshot Resources record block-level changes, not file-level, you may
not see the Usage Percentage decrease when you delete files. This is because
deleted files really still exist on the disk.
The Usage Percentage bar colors indicate usage percentage in relation to the
threshold level:
The usage percentage is displayed in green as long as the available sectors are
greater than 120% of the threshold (in sectors). It is displayed in blue when available
sectors are less than 120% of threshold (in sectors) but still greater than the
threshold (in sectors). The usage percentage is displayed in red when the available
sectors are less than the threshold (in sectors).
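Expressed as a small sketch, the color selection follows this logic; the sector counts are placeholders.

    # Illustration of the usage bar coloring relative to the threshold, in sectors.
    available=5500      # placeholder: available sectors
    threshold=5000      # placeholder: threshold, in sectors

    if   [ "$available" -gt $((threshold * 120 / 100)) ]; then echo "green"
    elif [ "$available" -gt "$threshold" ]; then echo "blue"
    else echo "red"
    fi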
Note that Snapshot Resources will be marked offline if the physical resource from which they were created is disconnected from a single server in a failover set prior to failing over to the secondary server.
Reinitialize allows you to refresh your Snapshot Resource and start over. You will
only need to reinitialize your Snapshot Resource if you are not mirroring it and it has
gone offline but is now back online.
Expand allows you to manually expand the size of your Snapshot Resource.
The Shrink Policy allows you to reduce the size of your Snapshot Resource. This is
useful if your snapshot resource does not need all of the space currently allocated to
it.
Based on current usage, when you select the Shrink option, the system calculates
the maximum amount of space that can be used to shrink the Snapshot Resource.
The amount of disk space saved by this operation is calculated from the last block of
data where data is written. If there are gaps between blocks of data, the gaps are not
included in the amount of space saved.
Note: Be sure to stop all I/O to the source resource before starting this operation.
If you have I/O occurring during the shrinking process, the space used for the
Snapshot Resource may increase and the operation may fail.
Delete allows you to delete the Snapshot Resource for this logical resource.
Properties allows you to change the snapshot resource automatic expansion policy
and snapshot notification policies.
Mirror allows you to protect your Snapshot Resource by creating a mirror of it.
Reclaim
Reclaim allows you to free available space in the snapshot resource. Enable the
reclamation policy to automatically free up space when a TimeMark Snapshot is
deleted. Once the snapshot is deleted, space will be reclaimed at the next
scheduled reclamation.
Highlight the TimeMark(s) to start the reclamation process and click OK.
Notes:
Set the maximum processing time for reclamation - Specify the maximum
time for the reclamation process. Once this threshold is reached, the
reclamation process will stop. Specify 0 to set an unlimited processing
time. It is recommended that you schedule lengthy reclamation processing
during non-peak operation periods.
Note: If reclamation is in progress and failover occurs, the reclamation will fail
gracefully. After failover, the global reclamation policy will use the setting on the
primary server. For example, if the global reclamation schedule has been disabled on the primary server and is enabled on the secondary server (its failover partner), then after failover the global reclamation schedule will not be triggered on the device(s) on the primary server.
Once the reclamation policy has been configured, at-a-glance information regarding
reclamation settings can be obtained from the FalconStor Management Console -->
Snapshot Resource tab.
Disable Reclamation
To disable the reclamation policy, right-click on the SAN resource in the FalconStor
Management Console and select Snapshot Resource --> Reclaim --> Disable.
Note: If the global reclamation schedule is disabled on the primary server and enabled on the secondary server (its failover partner), then after failover no global reclamation schedule will be triggered on the device(s) on the primary server.
Shrink Policy
Just as you can set your snapshot resource to automatically expand when it requires more space, you can also set it to "shrink" when it can reclaim unused space.
Setting the shrink policy for your snapshot resources is another way to conserve
space.
The shrink policy allows you to shrink the size of a Snapshot Resource after each
successful scheduled reclamation. The shrink policy can be set for multiple SAN
resources as well as for individual resources.
In order to set a shrink policy, a global or individual reclamation policy must be
enabled for the SAN resource. Shrinkage amounts depend upon the minimum
amount of disk space you set to trigger the shrink policy. When the shrink policy is
triggered, the system calculates the maximum amount of space that can be used to
shrink the snapshot resource. The amount of disk space saved by this operation is
calculated from the last block of data where data is written. When the specified
amount of space to be gained is equal to, or greater than the number entered,
shrinkage occurs. The snapshot resource can shrink down to the minimum size you
set for the resource.
3. Set the minimum amount of disk space and the minimum snapshot resource size
that will trigger the shrink policy.
When the amount of space to be reclaimed is equal to, or greater than the
minimum disk space specified here and the minimum Snapshot Resource size is
reached, the shrink policy will be triggered. By default, the Enable this Snapshot
Resource to Shrink option is disabled. The minimum Amount of Disk Space to
Trigger Policy is set to 1 GB.
4. Set the minimum Snapshot Resource size. Enter the amount of space to keep.
The Snapshot Resource will remain equal to or greater than this size. The minimum
Snapshot Resource size is 1 GB by default.
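As a simplified reading of the policy in steps 3 and 4, the shrink decision can be sketched as follows; the sizes are placeholders.

    # Shrink only when enough space can be reclaimed and the resource stays at or
    # above its minimum size. All values are examples.
    reclaimable_gb=3       # space that can be reclaimed after the last written block
    current_size_gb=10     # current Snapshot Resource size
    min_trigger_gb=1       # minimum disk space to trigger the policy (default 1 GB)
    min_size_gb=1          # minimum Snapshot Resource size to keep (default 1 GB)

    if [ "$reclaimable_gb" -ge "$min_trigger_gb" ] && \
       [ $((current_size_gb - reclaimable_gb)) -ge "$min_size_gb" ]; then
        echo "Shrink to $((current_size_gb - reclaimable_gb)) GB"
    else
        echo "Shrink policy not triggered"
    fi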
Once the shrink policy has been enabled, at-a-glance information regarding shrink
policy settings can be obtained from the FalconStor Management Console -->
Snapshot Resource tab.
1. Right-click on the SAN resource that you want to copy and select Copy.
Custom lets you select which physical device(s) to use and lets you
designate how much space to allocate from each.
Express automatically creates the target for you from available hard disk
segments.
Select Existing lets you select an existing resource. There are several
restrictions as to what you can select:
- The target must be the same type as the source.
- The target must be the same size as or larger than the source.
Note: All data on the target will be overwritten.
If you selected Select Existing in step 2, you will see the following window from
which you can select an existing resource:
Snapshot Copy events are also written to the server's Event Log, so you can check
there for status information, as well as any errors.
Groups
The Group feature allows virtual drives and service-enabled drives to be grouped
together. Groups can be created for different reasons, for CDP purposes, for
snapshot synchronization, for organizational purposes, or for caching using the
SafeCache option.
Snapshot synchronization builds on FalconStor's snapshot technology, which
ensures point-in-time consistency for data recovery purposes. Snapshots for all
resources in a group are taken at the same time whenever a snapshot is triggered.
Working in conjunction with the database-aware Snapshot Agents, groups ensure
transactional integrity for database or messaging files that reside on multiple disks.
You can create up to 64 groups. When you create a group, you can configure
TimeMark/CDP, Backup, Replication, and SafeCache (and, indirectly, asynchronous
mirroring) for the entire group. All members of the group get configured the same
way.
Create a group
To create a group:
1. In the FalconStor Management Console, right-click on Groups and select New.
Depending upon which options you enable, the subsequent screens will let you
set group policies for those options. Refer to the appropriate section(s)
(Replication, ZeroImpact Backup, TimeMarks and CDP, or SafeCache) for
details on configuration.
Note that you cannot enable CDP and SafeCache for the same group.
2. Indicate if you would like to add SAN resources to this group.
Refer to the following sections for limitations as to which SAN resources can/
cannot join a group.
When you create a group on the primary server, the target server gets a
group also.
When you add resources to a group configured for replication, you can
select any resource that is already configured for replication on the target
server or any resource that does not have replication configured at all. You
cannot select a resource if it is configured for replication to a different server.
If a watermark policy is used for replication, the retry delay value configured
affects each group member individually rather than the group as a whole.
For example, if replication starts for the group and a group member fails
during the replication process, the retry delay value will take effect. In the
meantime, if another resource in the group reaches its watermark, a group
replication will be triggered for all group members and the retry delay will
become irrelevant.
If you are using continuous replication, the group will have only one
Continuous Replication Resource.
If a group is configured for continuous replication, you cannot add a
resource to the group if the resource has continuous replication enabled.
Similarly, continuous replication can only be enabled for an existing group if
members of the group do not have continuous replication enabled.
If you add a resource to a group that is configured for continuous replication,
the system switches to periodic replication mode until the next regularly scheduled replication takes place.
There are several ways to add resources to a group. After you create a group, you
will be prompted to add resources. At any time afterwards, you can:
1. Right-click on any group and select Join.
You can also right-click on any SAN resource and select Group --> Join.
2. Select the type of resources that will join this group.
If this is a group with existing members, you will see a list of members instead.
If you started the wizard from a SAN resource instead of from a group, you will
see the following window and you will select a group, instead of a resource:
When you click Next, you will see the options that must be activated. You will be
taken through the applicable Replication and/or Backup wizard(s) so you can
manually configure each option. (TimeMark is always configured automatically.)
5. Confirm all information and click Finish to add the resource(s) to the group.
Each resource will now have a tab for each configured option except CDP and
SafeCache which share a CDP journal or SafeCache resource as a group.
By default, group members are not automatically assigned to clients. You must
still remember to assign your group members to the appropriate client(s).
For groups enabled with Backup or Replication, leaving the group does not
disable Backup or Replication for the resource.
TimeMarks and CDP
While the TimeMark option allows you to track changes to specific points in time,
with Continuous Data Protection (CDP) you can roll back data to any point-in-time.
TimeMark/CDP guards against soft errors (non-catastrophic data loss), including the accidental deletion of files and software/virus issues leading to data corruption. TimeMark/CDP protects where high availability configurations cannot, since in creating a redundant set of data, high availability configurations also create a duplicate set of soft errors by default. TimeMark/CDP protects data from your slip-ups, from the butterfingers of employees, from unforeseen glitches during backup, and from the malicious intent of viruses.
The TimeMark/CDP option also provides an undo button for data processing.
Traditionally, when an administrator performed operations on a data set, a full
backup was required before each dangerous step, as a safety net. If the step
resulted in undesirable effects, the administrator needed to restore the data set and
start the process all over again. With FalconStor's TimeMark/CDP option, you can
easily roll back (restore) a drive to its original state.
FalconStor's TimeView feature is an extension of the TimeMark/CDP option and
allows you to mount a virtual drive as of a specific point-in-time. Deleted files can be
retrieved from the drive or the drive can be assigned to multiple application servers
for concurrent, independent processing, all while the original data set is still actively
being accessed/updated by the primary application server. This is useful for "what if"
scenarios, such as testing a new payroll application on your actual, but not live,
data.
Configure TimeMark properties by right-clicking on the TimeMark/CDP option and
selecting Properties.
Setup
You will need a Snapshot Resource for the logical resource you are going to
configure. If you do not have one, you will create it through the wizard. Refer to
Create a Snapshot Resource for more information.
1. Right-click on a SAN resource, incoming replica resource, or a Group and select
TimeMark/CDP --> Enable.
For multiple SAN resources, right-click on the SAN Resources object and select
TimeMark/CDP --> Enable.
The Enable TimeMark/CDP Wizard launches.
2. Indicate if you want to enable CDP. Select the checkbox to enable CDP.
CDP enhances the benefits of using TimeMark by recording all changes made to
data, allowing you to recover to any point in time.
Note: If you enable CDP on the replica, it is recommended that you perform
replication synchronization. CDP journaling will not begin until the next
successful replication. You can wait until the next scheduled replication
synchronization or manually trigger synchronization. To manually trigger
replication synchronization, right-click on the primary server and select
Replication --> Synchronization.
3. (CDP only) Select the storage pool or physical device that should be used to
create the CDP journal.
4. (CDP only) Select how you want to create the CDP journal.
The minimum size required for the journal is 1 GB, which is the default size.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically
creates a CDP journal using an available device.
Select different drive - look for space on another hard disk.
Select drives from different adapter/channel - look for space on another
hard disk only if it is on a separate adapter/channel.
Select any available drive - look for space on any disk, including the
original. This option is useful if you have mapped a device (such as a
RAID device) that looks like a single physical device.
Note: The CDP Journal performance level is set to Moderate by default. You
can modify this setting (to aggressive) by right-clicking on the SAN resource
and selecting TimeMark/CDP --> CDP Journal --> Performance.
If TimeMark is enabled for a group, replication must also be enabled at the group
level. You should manually suspend the replication schedule when using this
option to avoid a scheduling conflict.
8. Confirm that all information is correct and then click Finish to enable TimeMark/
CDP.
You now have a TimeMark tab for this resource or group. If you enabled CDP,
you also have a separate CDP tab. If you are using CDP, the TimeMarks will be
points within the CDP journal.
In order for a TimeMark to be created, you must select Create an initial
TimeMark on... policy. Otherwise, you will have enabled TimeMark, but not
created any. You will then need to manually create them using TimeMark/CDP
--> Create.
If you are configuring TimeMark for an incoming replica resource, you cannot
select the Create an initial TimeMark on... policy. Instead, a TimeMark will be
created after each scheduled replication job finishes.
Depending upon the version of your system, the maximum number of
TimeMarks that can be maintained is 1000. The maximum does not include the
snapshot images that are associated with TimeView resources. Once the
maximum is reached, the earliest TimeMarks will be deleted depending upon
priority. Low priority TimeMarks are deleted first, followed by Medium, High, and
then Critical. When a TimeMark is deleted, journal data is merged together with
a previous TimeMark (or a newer TimeMark, if no previous TimeMarks exist).
Note:
The first TimeMark that is created when CDP is used will have a Medium priority.
Subsequent TimeMarks will have a Medium priority by default, but can be
changed manually. Refer to Add a comment or change priority of an existing
TimeMark for more information.
Note:
This might take some time if the client is busy. You can speed up processing by
skipping snapshot notification if you know that the client will not be updating data
when a TimeMark is taken. Use the Trigger snapshot notification for every n
scheduled TimeMark(s) option to select which TimeMarks should use snapshot
notification.
Note: Once you have successfully enabled CDP on the replica, perform
Replication synchronization.
TimeMarks displayed in orange are pending, meaning there is unflushed data in the
CDP journal. Unflushed TimeMarks cannot be selected for rollback or TimeView.
To re-order the list of TimeMarks, click on a column heading to sort the list.
The Quiescent column indicates whether or not snapshot notification occurred when
the TimeMark was created. When a device is assigned to a client, the initial value is
set to No. A Yes in the Quiescent column indicates there is an available agent on the
client to handle the snapshot notification, and the snapshot notification was
successful.
If a device is assigned to multiple clients, such as nodes of a cluster, the Quiescent
column displays Yes only if the snapshot notification is successful on all clients; if
there is a failure on one of the clients, the column displays No.
However, in the case of a VSS cluster, the Quiescent column displays Yes with VSS
when the entire VSS process has successfully completed on the active node and
the snapshot has been created.
If you are looking at this tab for a replica resource, the status will be carried from the
primary resource. For example, if the TimeMark created on the primary virtual
device used snapshot notification, Quiescent will be set to Yes for the replica.
The TimeView Data column indicates whether TimeView data or a TimeView
resource exists on the TimeMark.
The Status column indicates the TimeMark state.
Note: A vdev expanded TimeMark is created automatically when a source
device with CDP is expanded.
Right-click on the virtual drive and select Refresh to update the TimeMark Used Size
and other information on this tab. To see how much space TimeMark is using, check
the Snapshot Resource tab.
Priority affects how TimeMarks will be deleted once the maximum number of
TimeMarks to keep has been reached. Low priority TimeMarks are deleted first,
followed by Medium, High, and then Critical.
Note: Groups with TimeMark/CDP enabled: If a member of a group has its own
TimeMark that needs to be updated, it must leave the group, make the TimeMark
updates individually, and then rejoin the group.
1. Right-click on the TimeMarked SAN resource that you want to update and select
TimeMark/CDP --> Update.
2. If desired, add a comment for the TimeMark that will make it easily identifiable
later if you need to locate it.
3. Set the priority for this TimeMark.
Once the maximum number of TimeMarks to keep has been reached, the
earliest TimeMarks will be deleted depending upon priority. Low priority
TimeMarks are deleted first, followed by Medium, High, and then Critical.
4. Indicate if you want to use Snapshot Notification for this TimeMark.
Snapshot Notification works with FalconStor Snapshot Agents to initiate a
snapshot request to a SAN client. When used, the system notifies the client to
quiet activity on the disk before a snapshot is taken. Using snapshot notification
guarantees that you will get a transactionally consistent image of your data.
This might take some time if the client is busy. You can speed up processing by
skipping snapshot notification if you know that the client will not be updating data
when this TimeMark is taken.
The use of this option overrides the Snapshot Notification setting in the snapshot
policy.
Copy a TimeMark
The Copy feature works similarly to FalconStor's Snapshot Copy option. It allows
you to take a TimeMark image of a drive (for example, how your drive looked at 9:00
this morning) and copy the entire drive image to another virtual drive or SAN
resource. The virtual drive or SAN resource can then be assigned to clients for use
and configured for FalconStor storage services.
1. Right-click on the TimeMarked SAN resource that you want to copy and select
TimeMark/CDP --> Copy.
Note: Do not initiate a TimeMark Copy while replication is in progress. Doing
so will result in the failure of both processes.
To copy the TimeMark and TimeView data, select the Copy the TimeMark and
TimeView data checkbox at the bottom left of the screen.
This option is only available if there is TimeView data available. This option is not
available if the TimeView data is in use/mounted or if there is no TimeView. In
this case, you will only be able to create a copy of the disk image at the time of
the timestamp (without new data that has been written to the TimeView). To
capture the new data in this case, see the example below.
For example, if you have assigned a TimeView to a disaster recovery (DR) host
and have started writing new data to the TimeView, when you use TimeMark
Copy you will have a copy of the point in time without the "new" data that was
written to the TimeView. In order to create a full disk copy to include the data in
the TimeView, you will need to unassign the TimeView from the DR host, delete
the TimeView and select the keep the TimeView data persistent option.
Afterwards, TimeMark Copy will include the new data. You can recreate the
TimeView again with the new data and assign back to the DR host.
To revert back to the original TimeMark, you must delete the TimeView again,
but do not select the keep the TimeView data persistent option. This will remove
the new data from the TimeMark.
3. Select how you want to create the target resource.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the target for you from available hard disk
segments. You will only have to select the storage pool or physical device that
should be used to create the copy.
Select Existing lets you select an existing resource. There are several
restrictions as to what you can select:
Use TimeView if you need to restore individual files from a drive but you do not want
to rollback the entire drive to a previous point in time. Simply use TimeView to mount
the virtual drive and then copy the files you need back to your original virtual drive.
TimeView also enables you to perform "what if" scenarios, such as testing a new
payroll application on your actual, but not live, data. After mounting the virtual drive,
it can be assigned to an application server for independent processing without
affecting the original data set. A TimeView cannot be configured for any of
FalconStor's storage services.
Why should you use TimeView instead of Copy? Unlike Copy, which creates a new
virtual drive and requires disk space equal to or larger than the original disk, a
TimeView requires very little disk space to mount. It is also quicker to create a
TimeView than to copy data to a new virtual drive.
Note: Clients may not be able to access TimeViews during failover.
(Console screenshot: use Zoom In to see greater detail for the selected time period; click to select a CDP journal tag that was manually added.)
If this resource has CDP enabled, the top section contains a graph with marks
that represent TimeMarks.
The graph is a relative reflection of the data changing between TimeMarks within
the available journal range. The vertical y axis represents data usage per
TimeMark; the height of each mark represents the Used Size of each TimeMark.
The horizontal x axis represents time. Each mark on the graph indicates a single
TimeMark. You will not see TimeMarks that have no data.
Because the graph is a relative reflection of data, and the differences in data
usage can be very large, the proportional height of each TimeMark might not be
very obvious.
For example, if you have one TimeMark with a size of 500 MB followed by
several much smaller TimeMarks, the 500 MB TimeMark will be much more
visible.
Similarly, if the maximum number of TimeMarks has been reached and older
TimeMarks have been deleted to make way for newer ones, their journal data is
merged together with a previous TimeMark (or a newer TimeMark, if no earlier one
exists). Therefore, it is possible that you will see one large TimeMark containing
all of the merged data.
Also, since the x axis can represent a range from as little as one hour up to 30
days, the location of an actual data point is approximate. Zooming in and using
the Search button will allow you to get a more accurate location of a particular
data point.
If CDP is enabled, you can use the visual slider to create a TimeView from any
point in the CDP journal or you can create a TimeView from a scheduled
TimeMark.
You can also click the Select Tag button to select a CDP journal tag that was
manually added or was automatically added by CDP after a rollback occurred.
Note that you will only see the tags for which there was subsequent I/O.
If CDP is not enabled, you will only be able to create a TimeView from a
scheduled TimeMark.
2. To create a TimeView from a scheduled TimeMark, select Create TimeView from
TimeMark Snapshots, highlight the correct TimeMark, and click OK.
If this is a replica server, the timestamp of a TimeMark is the timestamp of the
source (not the replica's local time).
3. To create a TimeView from the CDP journal, use the slider or type in an
approximate time.
For example, if you are trying to find a deleted file, select a time prior to when the
file was deleted. If this was an active file, aim for a time just prior to when the file
was deleted so that you can recover the most up-to-date version.
If you are positive that the time you selected is correct, you can click OK to
create a TimeView. If you are unsure of the exact time, you can zoom into an
approximate time period to see greater detail, such as seconds, milliseconds,
and even microseconds.
4. If you need to see greater detail, click Zoom In.
(Screenshot: the zoomed view shows the TimeMark period and a five-minute range within this TimeMark.)
You can see the I/O that occurred during this five minute time frame displayed in
seconds.
If you zoomed in and don't see what you are looking for, you can click the Scroll
button. It will move forward or backward by five minutes within the period of
this TimeMark.
You can also click the Search button to locate data or a period with limited or no
I/O.
At any point, if you know what time you want to select, you can click OK to return
to the main dialog so that you can click OK to create a TimeView. Otherwise, you
can zoom in further to see greater detail, such as milliseconds and
microseconds.
You can then use the slider to select a time just before the file was deleted.
It is best to select a quiet time without I/O to get the most stable version of the
file.
5. After you have selected the correct point in time, click OK to return to the main
dialog and then click OK to create a TimeView.
6. Select the physical resource for SAN TimeView Resource.
Notes:
The TimeView only uses physical space when I/O is written to the TimeView
device. New write I/O may trigger expansion to allocate more physical
space for the TimeView when no more space is available. Read I/O does
not require additional physical space.
The maximum size to which a TimeView device can be allocated is 5%
more than the primary device. For example: maximum TimeView size
= 1.05 x primary device size. The allocated size will be checked for both
policy and user triggers to expand when necessary (a worked sketch
follows these notes).
The formula for allocating the initial size of the physical space for the
TimeView is as follows:
If the primary device size is less than 5GB, the initial TimeView
size = primary size X 1.05 (the maximum TimeView size)
If the primary device size is greater than 5GB, the initial TimeView
size = 5GB
If creating a TimeView from a VSS TimeMark, the initial TimeView
size = 32MB (as shown in the screen above)
For best performance, it is recommended that you do not lower the default
initial size of the TimeView if you intend to write to the TimeView device (i.e.
when using HyperTrac).
Once the TimeView is deleted, the space becomes available. TimeViews
cannot be shrunk once the space is allocated.
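For illustration only, the sizing rules above can be expressed as the following shell sketch. The 1.05 factor, the 5 GB cap, and the 32 MB VSS case come from the notes above; the variable names and sample values are hypothetical.
# Hypothetical sketch of the initial TimeView sizing rules (values in MB)
PRIMARY_MB=20480                               # example: 20 GB primary device
VSS_TIMEMARK=0                                 # set to 1 when creating from a VSS TimeMark
if [ "$VSS_TIMEMARK" -eq 1 ]; then
    INITIAL_MB=32                              # VSS TimeMark case
elif [ "$PRIMARY_MB" -lt 5120 ]; then
    INITIAL_MB=$(( PRIMARY_MB * 105 / 100 ))   # 1.05 x primary (the maximum TimeView size)
else
    INITIAL_MB=5120                            # initial size capped at 5 GB
fi
echo "Initial TimeView size: ${INITIAL_MB} MB; maximum: $(( PRIMARY_MB * 105 / 100 )) MB"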
Remap a TimeView
With TimeViews, you can mount a virtual drive as of a specific point-in-time, based
on your existing TimeMarks or your CDP journal. If you are finished with one
TimeView but need to create another for the same virtual device, you can remap the
TimeView to another point-in-time. When remapping, a new TimeView is created
and all of the client connections are retained. To remap a TimeView, follow the steps
below:
Note: It is recommended that you disable the TimeView from the client (via the
Device Manager on Windows machines) before remapping it.
Delete a TimeView
Deleting a TimeView also involves deleting the SAN resource. To delete a
TimeView:
1. Right-click on the TimeView and select Delete.
2. Select whether you want to Keep the TimeView data to be persistent when recreated with the same TimeMark.
This option allows you to save the TimeView data on the TimeMark and restore
the data when it is recreated with the same TimeMark.
3. Type Yes in the box and click OK to confirm the deletion.
The first option allows you to remove all TimeView data from selected
virtual device(s)
The second option allows you to select specific TimeView data for
deletion.
2. Select the storage policy to be used when space starts to run low.
Specify the threshold as a percentage of the space used (1 - 99%). The
default is the same value as the snapshot resource threshold. Once the
specified threshold is met, automatic expansion is triggered.
Automatically allocate more space for the TimeView device. Check this
option to allow the system to allocate additional space (according to the
following settings) once the threshold is met.
Enter the percentage to Increment space by. The default is the same
value as the snapshot resource threshold.
Enter the maximum size (in MB) allowed for the TimeView device. This
is the maximum size limit used by automatic expansion. The default is
0, which means maximum TimeView size.
After rolling a drive back, TimeMarks made after that point in time will be deleted but
all of the CDP journal data will be available, if CDP is enabled. Therefore it is
possible to perform another rollback and select a journal date ahead of the previous
time, essentially rolling forward.
Group rollback allows you to roll back up to 32 disks (the default) to a TimeMark or
CDP data point. To perform a group rollback, right-click on the group and select
Rollback. TimeMarks that are common to all devices in a group will display in the
wizard.
1. Unassign the Client(s) from the virtual drive before rollback.
For non-Windows Clients, type ./ipstorclient stop from /usr/local/ipstorclient/bin
(see the sketch after the note below).
Note: To avoid the need to reboot a Windows 2003 client, unassign the SAN
resource from the client now and then reassign it just before re-attaching your
client using the FalconStor Management Console.
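For example, on a Linux client the detach might look like the sketch below. The stop command and path are taken from the step above; the start command used to re-attach the client afterwards is an assumption and may differ in your client version.
cd /usr/local/ipstorclient/bin
./ipstorclient stop       # detach the client before the rollback
# ...perform the rollback from the console and reassign the SAN resource...
./ipstorclient start      # assumed counterpart command to re-attach the client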
Do not initiate a TimeMark rollback to a raw device while data is currently being
written to the raw device. The rollback will fail because the device will fail to
open.
If you have already created a TimeView from the CDP journal and want to roll
back your virtual device to that point in time, right-click on the TimeView and
select Rollback to.
3. Select a specific point in time or select the TimeMark to which you want to
rollback.
If CDP is enabled and you have previously rolled back this drive, you can select
a future journal date.
If you selected a TimeView in the previous step, you will not have to select a
point in time or a TimeMark.
4. Confirm that you want to continue.
A TimeMark will be taken automatically at the point of the rollback and a tag will
be added into the journal. The TimeMark will have the description !!XX-- POST
CDP ROLLBACK --XX!! This way, if you later need to create a TimeView, it will
contain data from the new TimeMark forward to the TimeView time. This means
you will see the disk as it looked immediately after rollback plus any data written
to the disk after the rollback occurred until the time of the TimeView.
It is recommended that you remove the POST CDP ROLLBACK TimeMark after a
successful rollback because it counts towards the TimeMark count for that
member.
5. When done, re-attach your Client(s).
Note: If DynaPath is running on a Windows client, reboot the machine after rollback.
To change a policy:
1. Right-click on the virtual drive and select TimeMark/CDP --> TimeMark -->
Properties.
To set the retention policy, right-click on the SAN resource and select TimeMark/
CDP --> Properties. Then select the TimeMark Retention tab.
Keep the maximum number of TimeMarks. This number can vary depending
upon your system resources and license.
Keep the ___ most recent TimeMarks. (The maximum is 1000. The default
is 8.)
Keep TimeMarks based on the following rule:
Keep all TimeMarks for the past [1-168 hours or 1-365 days] hours /
days. (default is 1 day)
Keep hourly TimeMarks for the past [0 - 365] days and use the
TimeMark closest to [0 - 59] as the hourly TimeMark. (default is 1 day
and 0 as the hour)
Keep daily TimeMarks for the past [0 - 365] days and use the TimeMark
closest to [0 - 23] as the daily TimeMark.
Keep weekly TimeMarks for the past [0 - 110] weeks and use the
TimeMark closest to [Monday - Sunday] as the weekly TimeMark.
Keep monthly TimeMarks for the past [0 - 120] months and use the
TimeMark closest to [1 - 31] as the monthly TimeMark.
Specifying the number of TimeMarks to keep for each level allows you to define
snapshot preserving patterns to organize your TimeMarks. By indicating the number
of TimeMarks to keep at each level, you can specify how many TimeMarks to keep
for any or all of the categories. The categories are hourly, daily, weekly, and monthly.
This feature allows you to save a pre-determined number of TimeMarks and delete
the rest. The TimeMarks that are preserved are the result of the pruning process.
This method allows you to keep only meaningful snapshots.
When defining the TimeMark retention policy, you are prompted to specify the offset
of the moment to keep, i.e. Use the TimeMark closest to___. For example, for daily
TimeMarks, you are asked to specify which hour of the day to use for the TimeMark.
For weekly TimeMarks, you are asked which day of the week to keep. If you set an
offset for which there is no TimeMark, the closest one to that time is taken.
The default offset values correspond to typical usage based on the fact that the
older the information, the less valuable it is. For instance, you can take TimeMarks
every 20 minutes, but keep only those snapshots taken at the minute 00 each hour
for the last 24 hours.
Suspend/resume CDP
[For CDP only]
You can suspend/resume CDP for an individual resource. If the resource is in a
group, you can suspend/resume CDP at the group level. Suspending CDP does not
delete the CDP journal and it does not delete any TimeMarks. When CDP is
resumed, data resumes going to the journal.
Delete TimeMarks
The Delete option lets you delete one or more TimeMark images for a virtual drive.
Depending upon which TimeMark(s) you delete, this may or may not free up space
in your Snapshot Resource. A general rule is that you will only free up Snapshot
Resource space if the earliest TimeMark is deleted. If other TimeMarks are deleted,
you will need to run reclamation to free up space. Refer to Snapshot Resource
shrink and reclamation policies.
1. Right-click on the virtual drive and select TimeMark/CDP --> Delete.
2. Highlight one or more TimeMarks and click Delete.
3. Type yes to confirm and click OK to finish.
(Diagram: supported configurations include a 2 port bond, 4 port bond, dual 2 port bond, 8 port bond, and dual 4 port bond.)
You can think of this group as a single virtual adapter that is actually made up of
multiple physical adapters. To the system and the network, it appears as a single
interface with one IP address. However, throughput is increased by a factor equal to
the number of adapters in the group. Also, NIC Port Bonding detects faults
anywhere from the NIC out into the network and provides dynamic failover in the
event of a failure.
You can define a virtual network interface (NIC) which sends and receives traffic to/
from multiple physical NICs. All interfaces that are part of a bond have SLAVE and
MASTER definitions.
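As a rough illustration only (the console creates this configuration for you), the MASTER/SLAVE definitions for a two-port round-robin bond on a typical Linux system resemble the following; the file paths, device names, and addresses are example values, not output from your appliance.
# /etc/sysconfig/network-scripts/ifcfg-bond0  (the virtual "master" interface)
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=balance-rr miimon=100"
# /etc/sysconfig/network-scripts/ifcfg-eth0  (one physical "slave" port; eth1 is defined the same way)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none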
For two teams, enter the IP Address and Netmask of each Master and
click OK.
For one team containing only eth0 and eth1, enter the IP Address and
Netmask of the master and click OK.
NIC Port Bonding can be configured to use round robin load-balancing, so the
first frame is sent on eth0, the second on eth1, the third on eth0 and so on. The
bonding choices are shown below:
Bonding choices:
No Bonding
Eth0/Eth1 (1 group), 2 port
Eth0/Eth1/Eth2/Eth3 (1 group), 4 port
Eth0/Eth1/Eth2/Eth3/Eth4/Eth5/Eth6/Eth7 (1 group), 8 port
Eth0/Eth2, Eth1/Eth3 (2 groups), 4 port
Eth0/Eth2/Eth4/Eth6, Eth1/Eth3/Eth5/Eth7 (2 groups), 8 port
Change IP address
During the bonding process, you will have the option to select a new IP address.
Replication
Overview
Replication is the process by which a SAN resource maintains a copy of itself either
locally or at a remote site. The data is copied, distributed, and then synchronized to
ensure consistency between the redundant resources. The SAN resource being
replicated is known as the primary disk. The changed data is transmitted from the
primary to the replica disk so that they are synchronized. Under normal operation,
clients do not have access to the replica disk.
If a disaster occurs and the replica is needed, the administrator can promote the
replica to become a SAN resource so that clients can access it. Replica disks can be
configured for CDP or NSS storage services, including backup, mirroring, or
TimeMark/CDP, which can be useful for viewing the contents of the disk or
recovering files.
Replication can be set to occur continuously or at set intervals (based on a schedule
or watermark). For performance purposes and added protection, data can be
compressed or encrypted during replication.
Remote replication
Remote replication allows fast data synchronization of storage volumes from one
CDP or NSS appliance to another over the IP network.
With remote replication, the replica disk is located on a separate CDP or NSS
appliance, called the target server.
Local replication
Local replication allows fast data synchronization of storage volumes within one
CDP or NSS appliance. It can be used within metropolitan area Fibre Channel
SANs, or can be used with IP-based Fibre Channel extenders.
With local replication, the replica disk is connected to the CDP or NSS appliance via
a gateway using edge routers or protocol converters. Because there is only one
CDP or NSS appliance, the primary and target servers are the same server.
With standard, delta replication, a snapshot is taken of the primary disk at prescribed
intervals based on the criteria you set (schedule and/or watermark value).
Continuous replication
With FalconStor's Continuous Replication, data from the primary disk is continuously
replicated to a secondary disk unless the system determines it is not practical or
possible, such as when there is insufficient bandwidth. In these types of situations
the system automatically switches to delta replication. After the next regularly-scheduled
replication takes place, the system automatically switches back to
continuous replication.
For continuous replication to occur, a Continuous Replication Resource is used to
stage the data being replicated from the primary disk. Similar to a cache, as soon as
data comes into the Continuous Replication Resource, it is written to the replica
disk. The Continuous Replication Resource is created during the replication
configuration.
There are several events that will cause continuous replication to switch back to
delta replication, including when:
Replication configuration
Requirements
The following are the requirements for setting up a replication configuration:
Setup
You can enable replication for a single SAN resource or you can use the batch
feature to enable replication for multiple SAN resources.
You need Snapshot Resources for the primary and replica disks. If you do not have
them, you can create them through the wizard. Refer to Create a Snapshot
Resource for more information.
1. For a single SAN resource, right-click on the resource and select Replication -->
Enable.
For multiple SAN resources, right-click on the SAN Resources object and select
Replication --> Enable.
The Enable Replication for SAN resources wizard launches. Each primary disk
can only have one replica disk. If you do not have a Snapshot Resource, the
wizard will take you through the process of creating one.
Note: If you are using TimeMark with CDP, you must use Continuous Mode
replication.
Continuous Mode - Select if you want to use FalconStor's Continuous
Replication. After the replication wizard completes, you will be prompted to
create a Continuous Replication Resource for the primary disk.
The TimeMark options listed below for continuous mode are primarily used for
devices assigned to a VSS-enabled client to maintain the TimeMark
synchronization on both the primary and replica disks.
Create Primary TimeMark - This option allows you to create the primary
TimeMark when a replica TimeMark is created by a user or the replication
schedule and the primary TimeMark option is enabled.
Synchronize Replica TimeMark - This option allows you to synchronize
the replica TimeMark when a primary TimeMark is created by a user or
TimeMark schedule.
Delta Mode - Select if you want replication to occur at set intervals (based on
schedule or watermark).
Preserve Replication TimeMark - If you did not select the Use Existing
TimeMark option, a temporary TimeMark is created when replication
begins. This TimeMark is then deleted after the replication has completed.
Select Preserve Replication TimeMark to create a permanent TimeMark
that will not be deleted when replication has completed (if the TimeMark
option is enabled). This is a convenient way to keep all of the replication
TimeMarks without setting up a separate TimeMark schedule.
An initial replication for individual resources begins immediately upon setting the
replication policy. Then replication occurs according to the specified policy.
You must select at least one policy but you can have multiple. You must specify
a policy even if you are using continuous replication so that if the system
switches to delta replication, it can automatically switch back to continuous
replication after the next regularly-scheduled replication takes place.
Any number of continuous replication jobs can run concurrently. However, by
default, 20 delta replication jobs can run, per server, at any given time. If there
are more than 20, the highest priority disks begin replication first while the
remaining disks wait in the queue in the order of their priority. As soon as one of
the jobs finishes, the disk with the next highest priority in the queue begins.
Note: Contact Technical Support for information about changing this value
but note that additional replication jobs will increase the load and bandwidth
usage of your servers and network and may be limited by individual hardware
specifications.
Start replication when the amount of new data reaches - If you enter a
watermark value, when the value is reached, a snapshot will be taken and
replication of that data will begin. If additional data (more than the watermark
value) is written to the disk after the snapshot, that data will not be replicated
until the next replication. If a replication that was triggered by a watermark fails,
the replication will be re-started based on the retry value you enter, assuming the
system detects any write activity to the primary disk at that time. Future
watermark-triggered replications will not start until after a successful replication
occurs.
If you are using continuous replication and have set a watermark value, make
sure that it is a value that can actually be reached; otherwise snapshots will
rarely be taken. Continuous replication does not take snapshots, but you will
need a recent, valid snapshot if you ever need to roll back the replica to an earlier
TimeMark during promotion.
If you are using SafeCache, replication is triggered when the watermark value of
data is moved from the cache resource to the disk.
Start initial replication on mm/dd/yyyy at hh:mm and then every n hours/
minutes thereafter - Indicate when replication should begin and how often it
should be repeated.
If a replication is already occurring when the next time interval is reached, the
new replication request will be ignored.
Note: if you are using the FalconStor Snapshot Agent for Microsoft
Exchange 5.5, the time between each replication should be longer than the
time it takes to stop and then re-start the database.
8. Select how you want to create the replica disk on the target server.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the replica for you from available hard disk
segments. You will only have to select the storage pool or physical device that
should be used to create the replica resource. This is the default setting.
Select Existing lets you select an existing resource. There are several
restrictions as to what you can select:
The target must be the same type as the primary.
The target must be the same size as the primary.
The target can have Clients assigned to it but they cannot be connected
during the replication configuration.
Note: All data on the target will be overwritten.
When will replication begin?
If you have configured replication for an individual resource, the system will begin
synchronizing the disks immediately after the configuration is complete if the disk is
attached to a client and is receiving I/O activity.
Replication for a group
If you have configured replication for a group, synchronization will not start until one
of the replication policies (time or watermark) is triggered. If replication fails for one
group member, it is skipped and replication continues for the rest of the group. After
successful replication, group members will have a TimeMark created on their
replica. In order for the group members that were skipped to have the same
TimeMark on their replicas, you will need to remove them from the group, use the same
TimeMark to replicate again, and then rejoin the group.
If you configured continuous replication
If you are using continuous replication, you will be prompted to create a Continuous
Replication Resource for the primary disk and a Snapshot Resource for the replica
disk. If you are not using continuous replication, the wizard will only ask you to
create a Snapshot Resource on the replica.
Because old data blocks are moved to the Snapshot Resource as new data is
written to the replica, the Snapshot Resource should be large enough to handle the
amount of changed data that will be replicated. Since it is not always possible to
know how much changed data will be replicated, it is a good idea for you to enable
expansion on the target server's Snapshot Resource. You then need to decide what
to do if your Snapshot Resource runs out of space (reaches the maximum allowable
size or does not have expansion enabled). The default is to preserve all TimeMarks.
This option stops writing data to the source SAN resource if there is no more space
available or there is a disk failure in order to preserve all TimeMarks.
Protect your replica resource
For added protection, you can mirror or TimeMark an incoming replica resource by
highlighting the replica resource and right-clicking on it.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically
creates the resource using an available device.
Note: The Continuous Replication Resource maximum size is 1 TB and
cannot be expanded. Therefore, you should allocate enough space for the
resource. By default, the size will be 256 MB or 5% of the size of your primary
disk (or 5% of the total size of all members of this group), whichever is larger.
If the primary disk regularly experiences a large number of writes, or if the
connection to the target server is slow, you may want to increase the size,
because if the Continuous Replication Resource should become full, the
system switches to delta replication mode until the next regularly-scheduled
replication takes place. If you outgrow your resource, you will need to
disable continuous replication and then re-enable it.
3. Verify the physical devices you have selected, confirm that all information is
correct, and then click Finish.
On the Replication tab, you will notice that the Replication Mode is set to Delta.
Replication must be initiated once before it switches to continuous mode. You
can either wait for the first scheduled replication to occur or you can right-click
on your SAN resource and select Replication --> Synchronize to force replication
to occur.
Replication tab
The following are examples of what you will see by checking the Replication tab for
a primary disk:
(Screenshots: the Replication tab with Continuous Replication enabled and with Delta Replication.)
All times shown on the Replication tab are based on the primary server's clock.
Accumulated Delta Data is the amount of changed data. Note that this value will not
display accurate results after a replication has failed. The information will only be
accurate after a successful replication.
Replication Status / Last Successful Sync / Average Throughput - You will only see
these fields if you are connected to the target server.
Transmitted Data Size is based on the actual size transmitted after compression or
with MicroScan performed.
Delta Sent represents the amount of data sent (or processed) based on the
uncompressed size.
If compression and MicroScan are not enabled, the Transmitted Data Size will be
the same as Delta Sent and the Current/Average Transmitted Data Throughput will
be the same as Instantaneous/Average Throughput.
If compression or MicroScan is enabled and the data can be compressed or
unchanged blocks will not be sent, the Transmitted Data Size will differ from
Delta Sent, and the Current/Average Transmitted Data Throughput will be based
on the actual size of data (compressed or MicroScanned) sent over the network.
Event Log
Replication events are also written to the primary server's Event Log, so you can
check there for status and operational information, as well as any errors.
Replication object
The Incoming and Outgoing objects under the Replication object display information
about each server that replicates to this server or receives replicated data from this
server. If the server's icon is white, the partner server is "connected" or "logged in". If
the icon is yellow, the partner server is "not connected" or "not logged in".
Replication performance
Set global replication options
You can set global replication options that affect system performance during
replication. While the default settings should be optimal for most configurations, you
can adjust the settings for special situations.
To set global replication properties for a server:
1. Right-click on the server and select Properties.
2. Select the Performance tab.
Click the Configure Throttle button to limit the maximum replication speed for
specific target sites/servers, minimizing the potential impact on network
traffic.
Enable MicroScan - MicroScan analyzes each replication block on-the-fly during
replication and transmits only the changed sections of the block. This is
beneficial if the network transport speed is slow and the client makes small
random updates to the disk. This global MicroScan option overrides the
MicroScan setting for each individual virtual device.
Tune replication parameters
You can run a test to discover maximum bandwidth and latency for remote
replication within your network.
1. Right-click on a server under Replication --> Outgoing and select Replication
Parameters.
2. Click the Test button to see information regarding the bandwidth and latency of
your network.
While this option allows you to measure the bandwidth and latency of the
network between the two servers (replication source and target), it is not a tool to
test the connectivity of the network. Therefore, if there is a network connection
issue or connection failure, the Test button will not work (and should not be used
for testing the network connection between the servers).
Switch clients to the replica disk when the primary disk fails
Because the replica disk is used for disaster recovery purposes, clients do not have
access to the replica. If a disaster occurs and the replica is needed, the
administrator can promote the replica to become the primary disk so that clients can
access it. The Promote option promotes the replica disk to a usable resource. Doing
so breaks the replication configuration. Once a replica disk is promoted, it cannot
revert back to a replica disk.
You must have a valid replica disk in order to promote it. For example, if a problem
occurred (such as a transmission problem or the replica disk failing) during the first
and only replication, the replicated data would be compromised and therefore could
not be promoted to a primary disk. If a problem occurred during a subsequent
replication, the data from the Snapshot resource will be used to recreate the replica
from its last good state.
To promote a replica:
1. In the Console, right-click on an incoming replica resource under the Replication
object and select Replication --> Promote.
If the primary server is not available, you will be prompted to roll back the replica
to the last good TimeMark, assuming you have TimeMark enabled on the
replica. When this occurs, the wizard will not continue with the promotion and
you will have to check the Event Log to make sure the rollback completes
successfully. Once you have confirmed that it has completed successfully, you
need to re-select Replication --> Promote to continue.
2. Confirm the promotion and click OK.
3. Assign the appropriate clients to this resource.
4. Rescan devices or restart the client to see the promoted resource.
Setting the throttle instructs the application to keep the network speed constant.
Although network traffic bursts may still occur, depending on the environment, the
throttle tries to remain at the set speed.
Throttle configuration settings are retained for each server even after replication has
been disabled. When replication is enabled again, the previous throttle settings will
be present.
Once you have set up replication and/or a target site, you can configure your throttle
settings.
The throttle can be set and edited from various locations in the console as well as
from the command line interface.
To set the throttle, navigate to the Replication node in the console, right-click
on Outgoing and select Throttle --> Configure.
Set the throttle via Server Properties --> Performance tab --> click the
Configure Throttle button.
Highlight the server or target site that you want to edit and click the Edit
button. (Target sites are indicated by a T icon.)
Navigate to the Replication node in the console, right-click on Outgoing and select
Target Site --> Add.
Once a target site has been added, it displays, along with the individual servers, in
the FalconStor Management Console under the Replication --> Outgoing node. You
can right-click on the target site in the console to delete, edit or export it.
To edit throttle window times, navigate to the Replication node in the console, right-click
on Outgoing and select Throttle --> Manage Throttle Windows. Then click the
Edit button.
Add a throttle window
Time is entered as a 24 hour time period. For example, 5:00 p.m. would be entered
as 17:00. Make sure the times do not overlap with an existing window. For example,
if one window has an end time of 12:00, the next window must start at 12:01.
Delete a throttle window
You can also delete any custom (user-created) throttle window to cancel the
schedule. Built-in throttle windows cannot be deleted. To delete a custom throttle
window, navigate to the Replication node in the console, right-click on Outgoing and
select Throttle --> Manage Throttle Windows. Then click the Delete button.
Throttle tab
Right-click on the target server or target site and click the Throttle tab for information
on Link Type, default throttle, and any selected throttle windows.
Throttle speed displays the maximum speed, not necessarily the actual speed. For
example, a throttle speed of 30 Mbps indicates a speed of 30 Mbps or less. The
speed is determined by multiplying the throttle percentage by the link type speed.
For example, a default throttle of 30% of a 100 Mbps Link Type would be (30%) x
(100 Mbps) = 30 Mbps.
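For illustration, the calculation in the example above can be written as a small shell sketch; the variable names are hypothetical.
THROTTLE_PERCENT=30      # default throttle from the example above
LINK_SPEED_MBPS=100      # link type speed in Mbps
echo "Maximum replication speed: $(( THROTTLE_PERCENT * LINK_SPEED_MBPS / 100 )) Mbps"
# prints: Maximum replication speed: 30 Mbps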
Actual speed may or may not be evenly distributed across all Target Sites and
servers. Actual speed depends on many factors, such as disk performance, network
traffic, functions enabled (encryption, compression, MicroScan), and other
processes in progress (TimeMark, Mirror, etc).
If your link type is not listed in the pre-populated/built-in list, you can add a custom
link type by navigating to the Replication node in the console, right-clicking on
Outgoing and selecting Throttle --> Manage Link Types. Then click the Add button.
Then enter the Link Type, a brief description, and the speed in megabits per
second (Mbps).
Edit link types
Custom link types can be modified by clicking the Edit button. However, built-in link
types cannot be edited.
To edit a custom link type, navigate to the Replication node in the console, right-click
on Outgoing and select Throttle --> Manage Link Types. Then click the Edit button
Delete link types
Link Types can be deleted as long as they are not currently in use by any target site
or server. Custom link types can be deleted when no longer needed. Built-in link
types cannot be deleted.
To delete a custom link type, navigate to the Replication node in the console, right-click
on Outgoing and select Throttle --> Manage Link Types. Then click the Delete
button.
This allows you to prioritize the order in which devices/groups will begin replicating if
they are scheduled to start at the same time. This option can be set for a single resource
or a single group via the Replication submenu, or for multiple resources or groups from the
context menu of the Replication Outgoing node.
2. Enter the New Target Server host name or IP address to be used by the new
primary server to connect to the new target server for replication.
The forceful role reversal operation can be performed even if the CDP
journal has unflushed data.
The forceful role reversal operation can be performed even if data is not
synchronized between the primary and replica server.
The snapshot policy, TimeMark/CDP, and throttle control policy settings
are not swapped after the repair operation for replication role reversal.
The replication repair operation must be performed from the NEW primary
server.
Note: If the SAN resource is assigned to a client in the original primary server,
it must be unassigned in order to perform the repair on the new primary.
Repair a replica
When performing a repair, the following status conditions may display:
Valid
Invalid
TimeMark Rollback in Progress
Relocate a replica
The Relocate feature allows replica storage to be moved from the original replica
server to another server while preserving the replication relationship with the
primary server. Relocating reassigns ownership to the new server and continues
replication according to the set policy. Once the replica storage is relocated to the
new server, the replication schedule can be immediately resumed without the need
to rescan the disks.
Before you can relocate the replica, you must import the disk to the new CDP or
NSS appliance. Refer to Import a disk if you need more information.
Once the disk has been imported, open the source server, highlight the virtual
resource that is being replicated, right-click and select Relocate.
Near-line Mirroring
Near-line mirroring allows production data to be synchronously mirrored to a
protected disk that resides on a second storage server. You can enable near-line
mirroring for a single SAN resource or multiple resources.
With near-line mirroring, the primary disk is the disk that is used to read/write data
for a SAN Client and the mirrored copy is a copy of the primary. Each time data is
written to the primary disk, the same data is simultaneously written to the mirror disk.
TimeMark or CDP can be configured on the near-line server to create recovery
points. The near-line mirror can also be replicated for disaster recovery protection.
If the primary disk fails, you can initiate recovery from the near-line server and roll
back to a valid point-in-time.
(Diagram: Application Servers; Production IPStor with Service-Enabled Disk and Synchronous Mirror; Near-line IPStor with Synchronous Mirror.)
When you create a near-line disk, the primary server performs a rescan to
discover new devices. If you are configuring multiple near-line mirrors, the scans
can become time consuming. Instead, you can select to prepare the near-line
disk now and then manually rescan physical resources and discover new
resources on the primary server. Afterwards, you will have to re-run the wizard
and select the existing, prepared disk.
If you are enabling near-line mirroring for multiple disks, the above screen will
not display.
3. Select the storage pool or physical device(s) for the near-line mirror's virtual
header information.
Confirm or specify the IP address the near-line server will use to connect to the
primary server when a TimeMark is created, if snapshot notification is used. If
needed, you can specify a different IP address from what you used when you
added the primary server as a client of the near-line server.
If you select to monitor the mirroring process, the I/O performance will be
checked to decide if I/O to the mirror disk is lagging beyond an acceptable limit.
If it is, mirroring will be suspended so it does not impact the primary storage.
Monitor mirroring process every n seconds - Specify how frequently the system
should check the lag time (delay between I/O to the primary disk and the mirror).
Checking more or less frequently will not impact system performance. On
systems with very low I/O, a higher number may help get a more accurate
representation.
Maximum lag time for mirror I/O - Specify an acceptable lag time (1 - 1000
milliseconds) between I/Os to the primary disk and the mirror.
Suspend mirroring - If the I/O to the mirror disk is lagging beyond the specified
level of acceptance, mirroring will be suspended when the following conditions
are met:
When the failure threshold reaches n% - Specify the percentage of I/O that can
exceed the maximum lag time before mirroring is suspended. For example, you set the
percentage to 10% and the maximum lag time to 15 milliseconds. During the test
period, 100 I/Os occurred and 20 of them took longer than 15 milliseconds to update
the mirror disk. With a 20% failure rate, mirroring would be suspended.
When the outstanding I/Os reach n - Specify the minimum number of I/Os that
can be outstanding before mirroring is suspended. When the number of outstanding
I/Os rises above the specified number, mirroring is suspended.
Note: If a mirror becomes out of sync because of a disk failure or an I/O error
(rather than having too much lag time), the mirror will not be suspended.
Because the mirror is still active, re-synchronization will be attempted based
on the global mirroring properties that are set for the server. Refer to Set
global mirroring options for more information.
Custom lets you select which physical device(s) and which segments to
use and lets you designate how much space to allocate from each.
Express lets you select which physical device(s) to use and automatically
creates the near-line resource from the available hard disk segments.
Select existing lets you select an existing virtual device that is the same
size as the primary or a previously prepared (but not yet created) near-line
mirror resource. (The option to only prepare a near-line disk appeared on
the first Near-line Mirror wizard dialog.)
Note: Do not change the name of the near-line resource if the server is a near-line
mirror or configured with near-line mirroring.
12. Confirm that all information is correct and then click Finish to create the near-line
mirroring configuration.
What's next?
Near-line disks are prepared but not created
If you prepared one or more near-line disks and are ready to create near-line
mirrors, you must manually rescan physical resources and discover new devices on
the primary server. Afterwards, you must re-run the Near-line Mirror wizard for each
primary disk and select the existing, prepared disk. This will create a near-line mirror
without re-scanning the primary server.
Near-line mirror is created
After creating your near-line mirror, you should enable TimeMark or CDP on the
near-line server. This way your data will have periodic snapshots and you will be
able to roll back your data when needed.
For disaster recovery purposes, you can also enable replication for a near-line disk
to replicate the data to another location.
Near-line recovery
The following is required before recovering data:
If you are using the FC protocol, zone the appropriate initiators on your
near-line server with the targets on your primary server.
You must unassign the primary disk from its client(s).
If enabled, disable mirroring for the near-line disk.
If enabled, suspend replication for the near-line disk.
All SAN resources must be online and accessible.
If the near-line mirror is part of a group, the near-line mirror must leave the
group prior to recovery.
TimeMark must be enabled on the near-line resource and the near-line
replica, if one exists.
At least one TimeMark must be available to rollback to during recovery.
If you have been using CDP and want to rollback to a specific point-in-time,
you may want to create a TimeView first and view it to make sure it contains
the appropriate data that you want.
Note: If the near-line recovery fails due to a TimeMark rollback failure, device
discovery failure, etc., you can retry the near-line recovery by selecting Near-line
Mirror Resources --> Retry Recovery on the Near-line Disk.
You can select to roll back to any TimeMark. If this resource has CDP enabled
and you want to select a specific point-in-time, type in the exact time.
Once you click OK, the system will roll back the near-line mirror to the specified
point-in-time and will then synchronize the data back to the primary server.
When the process is completed, your screen will look similar to the following:
5. When the Mirror Synchronization Status shows the status as Synchronized, you
can select Near-line Mirror Resource --> Resume Config to resume the
configuration of the near-line mirror.
This re-sets the original near-line configuration so that the primary server can
begin mirroring to the near-line mirror.
6. Re-assign your primary disk to its client(s).
Recovery is performed via the console from the near-line resource as described
below:
1. Right-click on the near-line resource and select Replication -->
Recovery --> Prepare/Start
4. Select the TimeMark to roll back to in order to restore your drive to a specific point in time.
Once you click OK, the system will roll back the near-line mirror to the specified
point-in-time.
5. Perform Replication synchronization from the REVERSED Near-line Replica
Disk to the near-line disk after successful rollback.
This synchronizes the rollback data from the REVERSED replica to the near-line
disk and the primary disk, since the near-line disk is now the replica and the primary
disk is the mirror of the near-line disk.
To do this:
Right-click on the REVERSED Near-line replica disk.
Select Replication-> Synchronize.
6. Perform Role Reversal to switch Near-line Disk back as Replication Primary
Disk and resume the Near-line Mirroring configuration.
To do this:
Right-click on the REVERSED Near-line Replica Disk.
Select Replication-> Recovery-> Resume Config.
7. Click OK to switch the role of the Near-line disk and the Near-line replica and
resume near-line mirroring.
8. Re-assign your primary disk to its client(s).
Click OK to switch the roles of the replica disk and primary server.
Select the TimeMark you are rolling back to and click OK.
3. Perform Repair Replica from the reversed Near-line Replica after the near-line
server is online.
Note: You must set the near-line disk to Recovery Mode before repairing the
replica.
4. Right-click on the reversed Near-line Replica and select Replication --> Repair
7. When the Mirror Synchronization Status shows the status as Synchronized, you
can select Near-line Mirror Resource --> Resume Config to resume the
configuration of the near-line mirror.
This re-sets the original near-line configuration so that the primary server can
begin mirroring to the near-line mirror.
8. Once near-line mirror configuration has resumed, you can resume the Near-line
Mirror, Replication, and CDP.
9. Re-assign your primary disk to its client(s)
4. Click Finish to confirm the expansion of the near-line mirror and the replica.
You are automatically routed back to the beginning of the Expand SAN
Resource Wizard to expand the primary server.
Note: Thin provisioning is not supported with near-line mirroring.
You can set global mirroring options that affect system performance during all types
of mirroring (near-line, synchronous, or asynchronous). While the default settings
should be optimal for most configurations, you can adjust the settings for special
situations.
To set global mirroring properties for a server:
1. Right-click on the server and select Properties.
2. Select the Performance tab.
Synchronize Out-of-Sync Mirrors - Determine how often the system should
check and attempt to resynchronize active out-of-sync mirrors, how often it
should retry synchronization if it fails to complete, and whether or not to include
replica mirrors. These settings will only be used for active mirrors. If a mirror is
suspended because the lag time exceeds the acceptable limit, that resynchronization policy will apply instead.
The mirrored devices must be the same size. If you want to enlarge the primary
disk, you will need to enlarge the mirrored copy to the same size. When you use
the Expand SAN Resource Wizard, it will automatically lead you through
expanding the near-line mirror disk first.
Change properties for a specific primary resource
You can change the following near-line mirroring configuration for a primary
resource:
For a near-line mirroring resource, you can only change the IP address that is used
by the near-line server to connect to the primary server.
To change the configuration:
1. Right-click on a near-line resource and select Near-line Mirror Resource -->
Properties.
2. Make the appropriate change and click OK.
If a disaster occurs at the site where the primary and near-line server are housed, it
is possible to recover both disks if you had replication configured for the near-line
disk to a remote location.
In this case, after removing the mirroring configuration and physically replacing the
failed disks, you can perform a role reversal to replicate all of the data back to the
near-line disk.
Afterwards, you can recover the data from the near-line mirror back to the primary
disk.
If one of the mirrored disks has a minor failure, such as a power loss:
1. Fix the problem (turn the power back on, plug the drive in, etc.).
2. Right-click on the primary resource and select Near-line Mirror --> Synchronize.
This re-synchronizes the disks and restarts the mirroring.
If the near-line server is set up as a failover pair and is in a failed state
If you are performing a near-line recovery and the near-line server is set up as a
failover pair, always add the first and second nodes of the failover set to the primary
for recovery.
1. Select the proper initiators for recovery.
2. Assign both nodes back to the primary for recovery.
Note: There are cases where the server may not show up in the list because the
machine may be down and the particular port is not logged into the switch. In this
situation, you must know the complete WWPN of your recovery initiator(s). This is
important in cases where you need to manually enter the WWPN into the recovery
wizard to avoid any adverse effects during the recovery process.
echo "1" > /sys/class/scsi_device/1:0:0:0/device/delete
3. Execute the following to re-add the device so that Linux can recognize the drive:
echo "scsi add-single-device x x x x">cat /proc/scsi/scsi.
ZeroImpact Backup
FalconStor's ZeroImpact Backup Enabler allows you to perform a local raw device
tape backup/restore of your virtual drives.
A raw device backup is a low-level backup or full copy request for block information
at the volume level. Linux's dd command generates a low-level request.
Examples of Linux applications that have been tested with the storage server to
perform raw device backups include BakBone's NetVault version 7.42 and
Symantec Veritas NetBackup version 6.0.
Using the FalconStor ZeroImpact Backup Enabler with raw device backup software
eliminates the need for the application server to play a role in backup and restore
operations. Application servers on the SAN benefit from better performance and the
elimination of overhead associated with backup/restore operations because the
command and data paths are rendered exclusively local to the storage server. This
results in the most optimal data transfer between the disks and the tape, and is the
only way to achieve net transfer rates that are limited only by the disks or tapes
engine. The backup process automatically leverages the FalconStor snapshot
engine to guarantee point-in-time consistency.
To ensure full transactional integrity, this feature integrates with FalconStor
Snapshot Agents and the Group Snapshot feature.
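As a rough sketch only, the low-level request issued by a raw device backup application is equivalent to a dd copy such as the following; the raw device name is a placeholder and your backup software normally issues the request for you.
# back up the raw device for the virtual drive to the first tape device
dd if=/dev/<raw-device-name> of=/dev/st0 bs=64k
# restore from tape back to the raw device
dd if=/dev/st0 of=/dev/<raw-device-name> bs=64k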
2. Enter a raw device name for the virtual device that you want to back up.
Use an existing TimeMark snapshot - (This option is only valid if you are using
FalconStor's TimeMark option on this SAN resource.) If a TimeMark exists for
this virtual device, that image will be used for the backup. It may or may not be a
current image at the time backup is initiated. If a TimeMark does not exist, a
snapshot will be taken.
Create a new snapshot - A new snapshot will be created for the backup,
ensuring the backup will be made from the most current image.
You can also back up a logical resource to another logical resource. Prior to
doing so, all target logical resources must be detached from the client
machine(s), and have backup enabled so that the raw device name for the
logical resource can be used instead of specifying st0 for the tape device.
When the backup is finished, you will only see one logical resource listed in the
Console. This is because when you reserve a hard drive for use
as a virtual device, the storage server writes partition information to the header
and the Console uses this information to recognize the hard drive. Since a Linux
dd will do an exact copy of the hard drive, this partition information will exist on
the second hard drive, will be read by the Console, and only one drive will be
shown. If you need to make a usable copy of a virtual drive, you should use
FalconStor's Snapshot Copy option.
Multipathing
The Multipathing option may not be available in all IPStor, CDP, and NSS versions.
Check with your vendor to determine the availability. This option allows the storage
server to intelligently distribute I/O traffic across multiple Fibre Channel (FC) ports to
maximize efficiency and enhance system performance.
Because it uses parallel active storage paths between the storage server and
storage arrays, CDP/NSS can transparently reroute the I/O traffic to an alternate
storage path to ensure business continuity in the event of a storage path failure.
Multipathing is possible due to the existence of multiple HBAs in the storage server
and/or multiple storage controllers in the storage systems that can access the same
physical LUN.
The multiple paths cause the same LUN to have multiple instances in the storage
server.
Load distribution
Automatic load distribution allows for two or more storage paths to be
simultaneously used for read/write operations, enhancing performance by
automatically and equally dispersing data access across all of the available active
paths.
Preferred paths
Some storage systems support the concept of preferred paths, which means the
system determines the preferred paths and provides the means for the storage
server to discover them.
Path management
From the FalconStor Management Console, you can specify a preferred path for
each physical device. Right-click on the device and select Alias.
You can see multipathing information from the console by checking the Alias tab for
a LUN (under Fibre Channel Devices). For each device, you see the following:
Common arguments
The following arguments are used throughout the CLI. For each, a long and short
variation is included. You can use either one. The short arguments ARE case
sensitive. For arguments that are specific to each command, refer to the section for
that command.
Short Argument    Long Argument
-s                --server-name
-u                --server-username
-p                --server-password
-S                --target-name
-U                --target-username
-P                --target-password
-c                --client-name
-v                --vdevid
-v                --source-vdevid
-V                --target-vdevid
-a                --access-mode
-f                --force
-n                --vdevname
-X                --rpc-timeout
Note: You only need to use the --server-username (-u) and --server-password (-p)
arguments when you log into a server. You do not need them for subsequent
commands on the same server during your current session.
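For example, a session might begin as follows; the server address and credentials
are placeholders, and getvdevlist is shown only as a representative follow-up
command:
iscli login -s 10.1.1.10 -u root -p <password>
# Subsequent commands in the same session can omit -u and -p:
iscli getvdevlist -s 10.1.1.10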
Commands
Below is a list of commands you can use to perform CDP/NSS functions from the
command line. You should be aware of the following as you enter commands:
The following table provides a summary of the command line interface options along with a description.
Description
Login/Logout of the storage server
iscli login
This command allows you to log into the specified storage server
with a given username and password.
iscli logout
Client Properties
iscli setfcclientprop
iscli getclientprop
iscli setiscsiclientprop
iSCSI Targets
iscli createiscsitarget
This command creates an iSCSI target. <client-name>, <ip-address>, and
<access-mode> are required. A default iSCSI target name will be generated if
<iscsi-target-name> is not specified.
iscli deleteiscsitarget
This command deletes an iSCSI target. <client-name> and <iscsi-target-name> are required.
iscli assigntoiscsitarget
iscli unassignfromiscsitarget
iscli getiscsitargetinfo
iscli setiscsitargetprop
This command allows you to add a CDP/NSS user. You must log in
to the server as "root" in order to perform this operation.
iscli setuserpassword
Mirroring
iscli createmirror
This command allows you to create a mirror for the specified virtual
device. The virtual device can be a SAN, or Replica resource.
iscli getmirrorstatus
iscli syncmirror
iscli swapmirror
This command reverses the roles of the primary disk and the
mirrored copy.
iscli promotemirror
iscli removemirror
iscli enablealternativereadmirror
iscli disablealternativereadmirror
iscli getalternativereadmirroroption
iscli migrate
iscli getmirrorpolicy
iscli setmirrorpolicy
The Mirror policy is for resources enabled with the mirroring option.
You can set the options to check the mirror health status, suspend,
resume and re-synchronize the mirror when it is necessary.
iscli suspendmirror
iscli resumemirror
iscli getvdevlist
iscli getclientvdevlist
iscli renamevdev
iscli assignvdev
iscli unassignvdev
iscli expandvdev
iscli deletevdev
iscli setassignedvdevprop
iscli addclient
iscli deleteclient
iscli enableclientprotocol
iscli disableclientprotocol
iscli getvidbyserialno
iscli addthindiskstorage
iscli setthindiskproperties
iscli getthindiskproperties
iscli getvdevserial
iscli replacefcclientwwpn
iscli rescanfcclient
Email Alerts
iscli enablecallhome
iscli disablecallhome
Failover
iscli getfailoverstatus
Replication
iscli createreplication
iscli startreplication
iscli stopreplication
iscli suspendreplication
iscli resumereplication
iscli promotereplica
iscli removereplication
iscli getreplicationstatusinfo
iscli setreplicationproperties
This command allows you to set the replication policy for a virtual
device or group configured for replication.
iscli getreplicationproperties
iscli relocate
This command relocates a replica after the replica disk has been
physically moved to a different server.
iscli scanreplica
iscli getreplicationthrottles
iscli setreplicationthrottles
This command allows you to configure the throttle level for target
sites or throttle windows. The command can accept a file; the file path
specified in the command must be a full path.
iscli getthrottlewindows
iscli setthrottlewindows
iscli removethrottlewindows
iscli addthrottlewindows
iscli addlinktypes
iscli gettargetsitesinfo
iscli addtargetservertotargetsite
iscli deletereplicationtargetsite
iscli createreplicationtargetsite
This command creates a target site. You can create a target site with multiple
target servers at once by listing their host names in the command or by using a
file. The format of the file is one server per line; the file path specified in the
command must be a full path.
iscli removetargetserverfromtargetsite
iscli removelinktypes
iscli setlinktypes
iscli getlinktypes
Server configuration
iscli getserverversion
This command allows you to view the storage server version and build number.
iscli saveconfig
iscli restoreconfig
Snapshot Copy
iscli snapcopy
iscli getsnapcopystatus
Physical Device
iscli getpdevinfo
iscli getadapterinfo
iscli rescandevices
iscli importdisk
iscli preparedisk
iscli renamephysicaldevice
iscli deletephysicaldevice
iscli restoresystempreferredpath
This command allows you to restore the system preferred path for
a physical device.
TimeMark/CDP
iscli enabletimemark
iscli createtimemark
iscli disabletimemark
iscli updatetimemarkinfo
This command is only available in version 5.1 or later and lets you
add a comment or change the priority of an existing TimeMark. A
TimeMark timestamp is required to update the TimeMark
information.
iscli deletetimemark
iscli copytimemark
iscli selecttimemark
iscli deselecttimemark
iscli rollbacktimemark
iscli gettimemark
iscli settimemarkproperties
iscli gettimemarkproperties
iscli gettimemarkstatus
iscli createtimeview
iscli remaptimeview
iscli suspendcdpjournal
iscli resumecdpjournal
iscli getcdpjournalstatus
This command gets the current size and status of your CDP
journal, including all policies.
iscli removetimeviewdata
iscli getcdpjournalinfo
iscli createcdpjournaltag
This command lets you manually add a tag to the CDP journal. The
-A (--cdp-journal-tag) tag can be up to 64 characters long and
serves as a bookmark in the CDP journal. Instead of specifying the
timestamp, the tag can be used when creating a TimeView.
iscli getcdpjournaltags
Snapshot Resource
iscli createsnapshotresource
iscli deletesnapshotresource
iscli expandsnapshotresource
iscli setsnapshotpolicy
iscli getsnapshotpolicy
This command allows you to view the snapshot policy settings for
the specified resource.
iscli enablereclamationpolicy
This command allows you to set the reclamation policy settings for
the specified resource.
iscli disablereclamationpolicy
iscli startreclamation
iscli stopreclamation
iscli updatereclaimpolicy
iscli getreclamationstatus
iscli reinitializesnapshotresource
iscli getsnapshotresourcestatus
iscli setreclamationpolicy
iscli setglobalreclamationpolicy
iscli getsnapshotgroups
iscli createsnapshotgroup
iscli deletesnapshotgroup
iscli joinsnapshotgroup
iscli leavesnapshotgroup
iscli enablereplication
iscli disablereplication
Cache resources
iscli createcacheresource
iscli getcacheresourcestatus
iscli setcacheresourceprop
iscli getcacheresourceprop
iscli suspendcacheresource
iscli resumecacheresource
iscli deletecacheresource
Report data
iscli getreportdata
This command allows you to get report data from the specified
server and save the data to an output file in csv or text file format.
Event log
iscli geteventlog
The date range can be specified to get the event log for a specific
range. The default is to get all of the event log messages if a date
range is not specified.
Backup
iscli enablebackup
iscli disablebackup
iscli stopbackup
This command allows you to stop the backup activity for a virtual
device or a group.
If a group is specified and the group is enabled for backup, the
backup activity for all resources in the group is stopped. If the
backup option is not enabled for the group, but some of the
resources in the group are enabled for backup, the backup activity
for the resources in the group is stopped.
iscli setbackupproperties
iscli getbackupproperties
Xray
iscli getxray
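To illustrate how these commands combine with the common arguments, a
hypothetical session might look like the following sketch; the server address and
virtual device ID are placeholders, and the exact arguments each command
requires may vary by command and server version:
iscli login -s 10.1.1.10 -u root -p <password>
iscli getvdevlist -s 10.1.1.10
iscli createtimemark -s 10.1.1.10 -v 100
iscli logout -s 10.1.1.10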
Text
0x90020001
Invalid arguments.
0x90020002
0x90020003
0x90020004
0x90020005
0x90020006
0x90020007
0x90020008
0x90020009
0x9002000a
0x9002000b
0x9002000c
Invalid size.
0x9002000d
0x9002000e
0x9002000f
0x90020010
Invalid client.
Note: Make sure you use the SAN client names that are created on the server. These
names may be different from the actual hostname or the ones in /etc/hosts.
0x90020011
0x90020012
0x90020013
0x90020014
0x90020015
0x90020016
0x90020017
0x90020018
0x90020019
0x9002001a
0x9002001b
0x9002001c
Failed to parse the dynamic configuration file from the target server.
0x9002001d
0x9002001e
0x9002001f
0x90020020
0x90020021
0x90020022
0x90020023
0x90020024
0x90020025
0x90020026
0x90020027
0x90020028
You have to run login command with valid user id and password or
provide server user id and password through the command.
0x90020029
You have to run login command with valid user id and password or
provide target server user id and password through this command.
0x9002002a
0x9002002b
The size of the source disk and target disk does not match.
0x9002002c
0x9002002d
0x9002002e
0x9002002f
0x90020030
0x90020031
0x90021000
?CLI_RPC_FAILED.
0x90021001
?CLI_RPC_COMMAND_FAILED.
0x90022000
0x90022001
0x90022002
0x90022003
0x90022004
Note: For the CLI to work with a server name instead of an IP address, the server name
has to be resolvable on both the client side and the server side. This error can occur,
for example, when the server hostname is not in DNS or in the /etc/hosts file.
0x90022005
Note: Check network interface is not down on the server to make sure RPC calls go
through.
0x90022006
0x90022007
0x90022008
0x90022009
0x9002200a
0x9002200b
0x9002200c
0x9002200d
0x9002200e
0x9002200f
0x90022010
0x90022011
0x90022012
0x90022013
0x90022014
0x90022015
0x90022016
The size of the primary disk does not match the size of the replica
disk. The minimum size for the expansion is %1 MB in order to
synchronize them.
0x90022017
0x90022018
0x90022019
0x9002201a
0x9002201b
0x9002201c
0x9002201d
0x9002201e
0x9002201f
0x90022020
0x90022021
0x90022022
0x90022023
0x90022024
0x90022025
0x90022026
0x90022027
0x90022028
Invalid TimeMark.
0x90022029
0x90022030
0x90022031
0x90022032
Failed to write to the output file: %1. Please check to see if you
have enough space.
0x90022033
0x90022034
0x90022035
0x90022036
0x90022037
0x90022038
0x90022039
0x9002203a
After rollback, some of the TimeMarks will no longer exist, and there
are TimeView resources created from those TimeMarks. Please delete
the TimeView resources first if you want to roll back the TimeMark.
0x9002203b
0x9002203c
0x9002203d
0x9002203e
0x9002203f
0x90022040
0x90022042
0x90022043
0x90022044
0x90022045
0x90022046
0x90022047
0x9002204
0x90022048
0x9002204a
0x9002204b
0x9002204c
0x9002204d
0x9002204e
0x9002204f
0x90022050
0x90022051
0x90022052
0x90022053
0x90022054
0x90022055
0x90022056
0x90022057
0x90022058
0x90022059
0x9002205a
0x9002205b
0x9002205c
0x9002205d
0x9002205e
0x9002205f
0x90022060
0x90022061
0x90022062
?CLI_ERROR_DIR_EXIST
0x90022063
?CLI_PARSE_NAS_USER_CONF_FAILED
0x90022064
0x90022065
0x90022066
0x90022067
0x90022068
The batch mode processing is not completed for all the requested
virtual devices.
0x90022069
0x9002206a
0x9002206b
0x9002206c
0x9002206d
0x9002206e
0x9002206f
0x90022070
0x90022071
0x90022072
0x90022073
0x90022074
0x90022076
0x90022077
0x90022078
0x90022079
0x90022083
?CLI_ERROR_BMR_COMPATIBILITY.
0x90022084
?CLI_ERROR_ISCSI_COMPATIBILITY.
0x90022085
This command "%1" is not supported for this server version: %2.
0x90022086
0x9002208a
0x9002208e
0x9002208f
0x90022090
0x90022091
0x90022092
0x90022093
0x90022094
0x90022095
0x90022096
0x90022097
0x90022098
0x90022099
0x9002209a
0x900220a7
0x900220a8
0x900220a9
0x900220aa
0x900220b0
?CLI_NAS_SMB_HOME.
0x900220b1
?CLI_SNMP_MAX_TRAPSINK.
0x900220b2
?CLI_SNMP_NO_TRAPSINK.
0x900220b3
?CLI_SNMP_OUT_INDEX
0x90023000
0x90023001
0x90023002
0x90023003
0x90023004
IP address is needed.
0x90023005
0x90023006
0x90023007
0x90023008
0x90023009
Duplicated IP address.
0x90023010
0x90023011
0x90023012
0x90023013
0x90023100
0x90023101
0x90023103
0x90023104
Replica size exceeds the licensed Worm Size limit on the target
server: %1 GB
0x90023105
NAS resource will exceed the licensed Worm Size limit: %1 GB.
0x90023102
?CLI_TARGET_SERVER_NOT_WORM_KERNEL.
0x90023106
0x90023107
0x90023108
0x90023109
0x9002310a
0x9002310b
0x9002310c
0x9002310d
0x9002310e
0x9002310f
0x90023110
0x90023111
0x90023112
0x90023113
0x90023114
0x90023115
0x90023116
0x90023117
0x90023118
0x90023119
0x9002311a
0x9002311b
0x9002311c
0x9002311d
0x9002311e
0x9002311f
0x90023120
0x90023121
0x90023122
This virtual device is still valid for cross mirror setup. Manual
swapping is not allowed.
0x90023123
0x90023124
0x90023125
0x90023126
0x90023127
0x90023128
The specified data point is not valid for post rollback TimeView
creation.
0x90023129
0x90023130
This virtual device is still valid for cross mirror setup. Manual
swapping is not allowed.
0x90023131
0x90023132
0x90023133
0x90023134
0x90023135
0x90023136
0x90023137
?CLI_ERROR_MULTI_STAGE_REPL_NOT_SUPPORTED.
0x90023138
0x90023139
0x9002313a
0x9002313b
0x9002313c
0x90024001
0x90024002
0x90024003
?CLI_ERROR_REPL_TRANSMITTED_INFO_NOT_SUPPORTED.
0x90024004
The virtual device info serial number is not supported for this
version of server: %1.
0x90024005
0x90024006
0x90024007
0x90024008
Disable Mirror Swap option is not supported for this server version:
%1.
0x90024101
0x90024102
0x90024103
0x90024104
0x90024105
0x90024106
0x90024107
0x90024108
0x90024109
The servers are not Near-line Mirroring partners for the specified
virtual device.
0x9002410a
0x9002410b
0x9002410c
0x9002410d
0x9002410e
0x90024201
The virtual device has been assigned to a client and the virtual
device's userACL doesn't match the snapshot group's userACL.
0x90024202
0x90024203
0x90024204
0x90024205
0x90024206
0x90024207
0x90024208
0x90024209
0x90024301
0x90024302
0x90024303
0x90024304
No InfiniBand license.
0x90024305
0x90024306
0x90024307
0x90024308
0x90024351
0x90024401
0x90024402
0x90024403
0x90024404
0x90024501
0x90024502
0x90024601
0x90024602
0x90024603
0x90024604
0x90024605
0x90024606
0x90024607
0x90024608
0x90024609
0x9002460a
0x9002460b
0x9002460c
0x9002460d
0x9002460e
0x9002460f
0x90024610
The version of the primary server has to be the same or later than
the version of the Near-line server for Near-line mirroring setup.
0x90024611
The version of the primary server has to be the same or later than
the version of the replica server for replication setup.
0x90024612
0x90024613
0x90024614
0x90024615
0x90024616
0x90024617
0x90024618
Near-line disk is enabled with mirror. Please remove the mirror from
Near-line disk before performing Near-line recovery.
0x90024619
0x9002461a
0x9002461b
0x9002461c
0x9002461d
0x9002461e
0x9002461f
Near-line server failover partner client does not exist on the Primary
server
0x90024620
0x90024621
0x90024622
0x90024623
0x90024624
0x90024625
0x90024626
0x90024627
0x90024628
0x90024629
0x9002462a
0x9002462b
0x9002462c
0x9002462d
The Primary server is configured for failover, but the failover partner
server is not configured properly for Near-line Mirroring.
0x9002462e
0x9002462f
0x90024630
Near-line Disk is not assigned to the Primary server client on the Near-line server.
0x90024631
0x90024632
0x90024633
0x90024634
0x90024635
0x90024636
0x90024637
0x90024638
0x90024639
0x9002463a
0x9002463b
0x9002463c
CDP Journal is enabled and active for replica disk. Please suspend
CDP Journal and wait for the data to be flushed.
0x9002463d
0x9002463e
0x9002463f
0x90024640
0x90024641
0x90024642
0x90024643
HotZone is enabled and active for the primary disk. Please suspend
the HotZone first.
0x90024644
0x90024645
CDR is enabled for the primary disk. Please disable CDR first.
0x90024646
CDR is enabled for the primary group. Please disable CDR first.
0x90024647
0x90024648
0x90024649
0x9002464a
0x9002464b
0x9002464c
0x9002464d
0x9002464e
0x9002464f
0x90024650
CDR is enabled for the original primary disk. Please disable CDR
first.
0x90024651
CDR is enabled for the original primary group. Please disable CDR
first.
0x90024652
?CLI_MICRSCAN_COMPRESSION_CONFLICT_ON_TARGET.
0x90024653
0x90024654
The option for discardable changes for the timeview is not enabled.
0x90024655
0x90024655
?CLI_INVALID_NEARLINE_CONFIG.
0x90024656
?CLI_INVALID_NEARLINE_DISK.
0x90024657
The option for discardable changes is not supported for this type of
resource.
0x90024658
0x90024659
0x9002465a
There is still cached data not being flushed to the timeview. Please
flush the changes first if you do not want to discard the changes
before deleting the timeview.
0x9002465b
The option for discardable changes for the timeview is not enabled.
0x9002465c
0x9002465f
0x90024665
Primary server user id and password are required for the target
server to establish the communication information.
0x90024666
The resource is a Near-line Disk and the Primary Disk is a thin disk.
Expansion is not supported for thin disks.
0x90024667
The options for snapshot resource error handling are not supported.
0x90024668
0x90024669
0x9002466a
0x9002466b
0x9002466c
0x9002466e
0x9002466f
0x90024670
0x90024671
0x90024672
0x90024673
0x90024674
0x90024675
0x90024676
0x90024677
0x90024678
TimeView data exists on the replica TimeMark. Please specify the -vf (--force-to-replicate) option to force the replication.
0x90024679
0x9002467a
0x9002467b
0x9002467c
0x9002467d
0x9002467e
0x9002467f
0x90024680
0x90024681
0x90024682
0x90024683
0x90024684
0x90024685
0x90024686
0x90024687
0x90024688
0x90024689
0x90024690
0x90024691
0x90024692
0x90024693
0x90024694
0x90024695
Sync CDR replica TimeMark setting is not supported for this version
of server: %1.
0x90024696
0x90024697
0x90024698
It appears that the Near-line disk is still available. Please log in to
the Near-line server before removing the configuration.
0x90024699
0x9002469a
0x9002469b
0x9002469c
0x9002469d
0x9002469e
0x90024700
0x9002469f
0x90024701
Read the partition from inactive path option is not supported for this
version of server: %1.
0x90024702
The Use Report LUNs option and the LUN ranges option are mutually exclusive.
0x90024703
0x90024704
0x90024705
Select TimeMark with timeview data is not supported for this version
of server: %1.
0x90024706
0x90024707
0x90024708
0x90024709
0x9002470a
0x9002470b
0x9002470c
0x9002470d
0x90024719
0x9002471a
0x9002471b
0x9002471c
0x9002471d
0x9002471e
0x9002471f
0x90024720
0x90024721
0x90024722
0x90024723
0x90024724
The specified virtual device does not have CDP enabled for the
journal related options.
0x09021000
0x80020500
0x800B0100
0x8023040b
0x80230406
0x80230403
0x80230404
0x80230405
0x80230406
0x80230407
0x80230408
0x80020600
0x8023040c
SNMP Integration
CDP/NSS provides SNMP support to integrate CDP/NSS management into an
existing enterprise management solution such as HP OpenView, HP Network Node
Manager (NNM), Microsoft System Center Operations Manager (SCOM), CA
Unicenter, IBM Tivoli NetView, and BMC Patrol.
For Dell appliances, SNMP integration with Dell OpenManage is supported.
Information can be obtained via your MIB browser (e.g., query Dell's OID with
OpenView) or via the Dell OpenManage software.
For HP appliances, SNMP integration with HP Advanced Server Management
(ASM) is supported. Information can be obtained via your MIB browser or from the
HP Systems Insight Manager (SIM).
CDP/NSS uses the MIB (Management Information Base) to determine what data
can be monitored. The MIB is a database of information that you can query from an
SNMP agent.
A MIB module contains actual specifications and definitions for a MIB. A MIB file is
just a text file that contains one or more MIB modules.
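For example, once the agent is running on a storage server, a standard SNMP tool
can query it. The sketch below uses the net-snmp snmpwalk command; the host
name and community string are placeholders, and the FalconStor enterprise OID is
inferred from the trap object ID shown later in this chapter:
snmpwalk -v 2c -c public storage-server.example.com .1.3.6.1.4.1.7368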
There are three major areas of management:
SNMPTraps
Simple Network Management Protocol (SNMP) is used to monitor systems for fault
conditions, such as disk failures, threshold violations, etc.
Essentially, SNMP agents expose management data on the managed systems as
variables. The variables accessible via SNMP are organized in hierarchies. These
hierarchies, and other metadata (such as type and description of the variable), are
described by Management Information Bases (MIBs).
An SNMP-managed network consists of three key components:
Managed device
Agent software which runs on managed devices
Network management system (NMS) software which runs on the
manager
The following table lists the CDP/NSS modules.
CDP/NSS module
IPStor SNMPD
IPStor Configuration
IPStor Base
IPStor HBA
IPStor Authentication
storage server (Compression)
storage server (FSNBase)
storage server (Upcall)
storage server (Transport)
storage server (Application)
IPStor Advanced Backup
IPStor Target
IPStor iSCSI (Daemon)
IPStor Communication
IPStor Logger
CDP/NSS Event Log messages
CDP/NSS Event Log messages can be sent to your SNMP manager. By default,
Event Log messages (informational, warnings, errors, and critical errors) will not be
sent. From the FalconStor Management Console, you can determine which type of
messages should be sent. To select the Trap level:
1. Right-click on the server and select Properties --> SNMP Maintenance --> Trap
Level.
2. After selecting a Trap Level, click Add to enter the name of the server receiving
the traps (or IP address if the name is not resolvable), and a Community name.
Five levels are available:
None (Default) - No messages will be sent.
Critical - Only critical errors that stop the system from operating properly
will be sent.
Error - Errors (failures, such as a resource not being available or an operation
failing) and critical errors will be sent.
Warning - Warnings (something occurred that may require maintenance
or corrective action), errors, and critical errors will be sent.
Informational - Informational messages, errors, warnings, and critical error
messages will be sent.
To complete the implementation, you must install software on your SNMP manager
machine and then configure the manager to support CDP/NSS.
Since this process is different for each SNMP manager, please refer to the
appropriate section below.
The trap configuration can be set by logging into the NNMi9 console from web and
following the steps below:
1. From the HP Network Node Manager console, navigate to Workspaces -->
Configuration, and select Incident Configuration.
2. Select the New icon under the SNMP Traps tab.
Enter the Basics and then click Save and Close:
Name : IPSTOR-information
SNMP Object ID : .1.3.6.1.4.1.7368.0.9
Category : IPStor
Family : IPStor
Severity : Critical
Message Format: $oid
Navigate to Incident Browsing --> SNMP Traps to see the trap collection
information.
Upload MIB
The MIB browser can be launched from the HP Network Node Manager console by
selecting Tools --> MIB Browser.
1. Upload the MIB file from the HP Network Node Manager console by selecting
Tools --> Upload Local MIB File.
The Upload Local MIB File window launches.
2. Browse to select the MIB file from the CDP/NSS storage server and click Upload
MIB.
The Upload MIB File Data Results screen displays an upload summary.
Configuration
You need to define which hosts will receive traps from your storage server(s) and
determine which CDP/NSS components to monitor. To do this:
1. In the NNM, highlight a storage server and select Tools --> SNMP MIB Browser.
2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer --> trapReg
and highlight trapSinkSettingTable.
The default read-only community is public. The default read-write community is
falcon.
Set the Community name to "falcon" so that you will be allowed to change
the configuration.
Click the Start Query button to query the configuration.
From the MIB values field, select a host to receive traps. You can set up to
five hosts to receive traps. If the value is 0, the host is invalid or not set.
In the SNMP set value field, enter the IP address or machine name of the
host that will receive traps.
Click the Set button to save the configuration in snmpd.conf.
3. In the SNMP MIB Browser, select private --> enterprises --> ipstor -->
ipstorServer --> alarmTable.
Click the Start Query button to query the alarms.
In the MIB values field, select which CDP/NSS components to monitor.
You will be notified any time the component goes down. A description of
each is listed in the SNMPTraps section.
In the SNMP set value field, enter enable or 1 to enable.
Click the Set button to enable the trap you selected.
Statistics in NNM
In addition to monitoring CDP/NSS components and receiving alerts, you can view
CDP/NSS statistics in NNM. There are two ways to do this:
CDP/NSS menu
MIB browser
1. Highlight a storage server or Client and select Tools --> SNMP MIB Browser.
2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer.
If this is a Client, select ipstorClient.
From here you can view information about this storage server. If you run a query
at the ipstorServer level, you will get a superset of all of the information from all
of the sub-categories.
For more specific information, expand the sub-categories.
For more information about each of the statistics, you can click the Describe
button or refer to the IPSTOR-MIB.txt file that is in your \\OpenView\snmp_mibs
directory.
Configuration
You need to define which hosts will receive traps from your storage server(s) and
determine which CDP/NSS components to monitor. To do this:
1. Run Unicenter's Auto Discovery.
If you have a repository with existing machines and then you install the storage
server software, Unicenter will not automatically re-classify the machine and
mark it as a storage server.
2. If you need to re-classify a machine, open the Unicenter TNG map, highlight the
machine, select Reclassify Object, select Host --> IPStor SNMP and then
change the Alarmset Name to IPStorAlarm.
If you want to re-align the objects on the map after re-classification, select
Modes --> Design --> Folder --> Arrange Objects and then the appropriate
network setting.
3. Restart the Unicenter TNG map.
4. To define hosts, right-click on storage server --> Object View.
5. Click Object View, select Configure Toolbar, set the Get Community and Set
Community to falcon, and set the Model to ipstor.mib.
falcon is the default community name (password). If it was changed in the
snmpd.conf file (on the storage server), enter the appropriate community name
here.
8. In the New Value field, enter the IP address or machine name of the host that will
receive traps (such as your Unicenter TNG server).
Your screen will now show that machine.
9. Highlight alarmEntry.
10. Highlight the alarmStatus field for a component, right click and select Attribute
Set.
11. Set the value to enable for on or disable for off.
View traps
1. From your Start --> Programs menu, select Unicenter TNG --> Enterprise
Management --> Enterprise Managers.
2. Double-click on the Unicenter machine.
3. Double-click on Event.
4. Double-click on Console Logs.
Statistics in TNG
You can view statistics about CDP/NSS directly from the ObjectView screen.
To do this, highlight a category in the tree and the CDP/NSS information will be
displayed in the right pane.
Configuration
You need to define which hosts will receive traps from your storage server(s). To do
this:
1. In NetView, highlight a storage server on the map and click the Browse MIBs
button.
2. In the tree, expand enterprises --> ipstor --> ipstorServer --> trapReg -->
trapSinkSettingTable --> trHost.
The default read-only community is public. The default read-write community is
falcon.
3. Set the Community Name so that you will be allowed to change the
configuration.
4. Click the Get Values button.
5. Select a host to receive traps. You can set up to five hosts to receive traps. If the
value is 0, the host is invalid or not set.
6. In the New Value field, enter the IP address or machine name of the Tivoli host
that will receive traps.
7. Click the Set button to save the configuration in snmpd.conf.
Statistics in Tivoli
In addition to monitoring CDP/NSS components and receiving alerts, you can view
CDP/NSS statistics in NetView. There are two ways to do this:
CDP/NSS menu
1. Highlight a storage server or Client and select IPStor from the menu.
2. Select the appropriate menu option.
For a Server, you can view:
Memory used
CPU load
SCSI commands
MB read/written
Read/write errors
For a Client, you can view:
SCSI commands
Error report
These reports are provided by CDP/NSS as a convenient way to view statistical
information without having to go through the MIB browser.
You can add your own reports to the menu by using NetView's MIB builder. Refer
to NetView's documentation for details on using the MIB builder.
MIB browser
1. Highlight a storage server or Client and click Tools --> MIB --> Browser.
2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer.
If this is a Client, select ipstorClient.
3. Select a category.
4. Click the Get Values button.
The information is displayed in the bottom section of the dialog.
Configuration
You need to define which hosts will receive traps from your storage server(s) and
determine which CDP/NSS components to monitor. To do this:
1. In the Patrol Console, on the Desktop tab, right-click the ServerInfo item in the
IPS_Server subtree of one storage server and select KM Commands -->
trapReg --> trapSinkSettingEntry.
The default read-only community is public. The default read-write community is
falcon.
2. Select a host to receive traps. You can set up to five hosts to receive traps. If the
value is '0', the host is invalid or not set.
3. In the Host fields, enter the IP address or machine name of the host that will
receive traps.
4. In the Community fields, enter the community.
5. Click the Set button to save the configuration in snmpd.conf.
6. In the Patrol Console, on the Desktop tab, right-click the ServerInfo item in the
IPS_Server subtree of one storage server and select KM Commands -->
alarmTable --> alarmEntry.
Set the status value to enable(1) for on or disable(0) for off.
View traps
1. In the Patrol Console, on the Desktop tab, right-click the IPS_Trap_Receiver -->
SNMPTrap_Receiver of the Patrol Console machine and select KM Commands -->
Start Trap Receiver to let the Patrol Console machine start receiving traps.
2. After turning the trap receiver on, you can double-click the SNMP_Traps icon in
the SNMPTrap_Receiver subtree of the Patrol Console machine to get the
results of the traps that have been received.
Statistics in Patrol
In addition to monitoring CDP/NSS components and receiving alerts, you can view
storage server statistics in Patrol. There are two ways to do this:
IPStor icon
MIB browser
Advanced topics
This information applies to all SNMP managers.
Note: In order for the configuration to take effect, you must restart the SNMPD
module on each storage server to which you copied these files.
IPSTOR-MIB tree
Once you have loaded the IPSTOR-MIB file, MIB Browser parses it into a tree
hierarchy structure. The table below describes many of the tables and fields. Refer
to the IPSTOR-MIB.txt file that is in your \\OpenView\snmp_mibs directory for a
complete list.
Table / Field descriptions
Server Information
serverName
loginMachineName
serverVersion
osVersion
The operating system version of the host on which the storage server is running.
kernelVersion
processorTable
A table containing the information of all processors in the host on which the
storage server is running.
processorInfo: The specification of a processor type and power.
memory
swap
The swap space of the host on which the storage server is running.
netInterfaceTable
A table containing the information of all network interfaces in the host on which
the storage server is running.
netInterfaceInfo: The specification containing MAC, IP address and MTU of
a network interface.
FailoverInformationTable
serverOption
MTCPVersion
performanceTable
A table containing performance information for the host on which the
storage server is running.
performanceMirrorSyncTh: The Mirror Synchronization Throttle of the
performance table.
performanceSyncMirrorInterval: The Synchronize out-of-sync mirrors interval
of the performance table.
performanceSyncMirrorRetry: The Synchronize out-of-sync mirrors retry
times of the performance table.
performanceSyncMirrorUpnum: The Synchronize out-of-sync mirrors up
numbers at each interval of the performance table.
performanceInitialMirrorSync: The option of starting the initial synchronization
when a mirror is added, of the performance table.
performanceIncludeReplicaMirror: The option of including replica mirrors in
the automatic synchronization process of the performance table.
performanceReplicationMicroScan: Indicates whether the MicroScan option
for Replication is enabled or disabled on the storage server.
serverRole
smioption
ServerIPaliasTable
A table containing the IP alias information for the host on which the storage
server is running.
ServerIPAliasIP: The storage server IP Alias
PhysicalResources
numOfAdapters
numOfDevices
A table containing the information of all the installed SCSI adapters of the
storage server.
adapterNumber: The SCSI adapter number.
adapterInfo: The model name of the SCSI adapter.
scsiDeviceTable
StoragePoolTable
The number of logical resources, including SAN, NAS, and Replica devices,
available in the storage server.
SnapshotReservedArea
SANResourceTable
srCacheFlushSpeed : The flush speed will be sent at one time during the
flush process.
srCacheTotalSizeQuantity : The allocated size quantity when creating the
cache resource.
srCacheTotalSizeUnit : The allocated size unit when creating the cache
resource. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srCacheFreeSizeQuantity : The free resource size quantity before reaching
the maximum resource size.
srCacheFreeSizeUnit : The free resource size unit before reaching the
maximum resource size. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srCacheOwnResourceID : The Cache resource ID assigned by the storage
server.
srCacheTotalSize64 : The allocated size when creating the cache resource.
srCacheFreeSize64 : The free resource size which is representing in
megabyte unit before reaching the maximum resource size.
srCacheStatus : The current SafeCache device status of the SAN resource.
srWriteCacheproperty : Indicates whether the write cache is enabled or disabled
for the SAN resource.
srMirrorTable : Table containing the mirror property created by the SAN
device.
srMirrorResourceID : The SAN resource ID that enables the mirror property.
srMirrorType : The mirror type when a SAN resource enables the mirror
property.
srMirrorSyncPriority : The mirror synchronization priority when a SAN
resource enables the mirror property.
srMirrorSuspended : Whether the mirror is suspended.
srMirrorThrottle : The mirror throttle value for the SAN resource.
srMirrorHealthMonitoringOption : Indicates whether the mirror health
monitoring option is enabled or disabled.
srMirrorHealthCheckInterval : The interval at which to check and report mirror
health status.
srMirrorMaxLagTime : The maximum acceptable lag time for mirror I/O.
srMirrorSuspendThPercent: Suspends mirroring when the threshold of
failure reaches the percentage of the failure conditions.
srMirrorSuspendThIOnum: Suspends mirroring when the number of outstanding
I/Os is greater than or equal to the threshold.
srMirrorRetryPolicy : Indicates whether the mirror synchronization retry
policy is enabled.
srMirrorRetryInterval : The interval at which mirror synchronization is retried.
srMirrorRetryActivity : Mirror synchronization is retried when I/O activity is
at or below the threshold.
srMirrorRetryTimes : The maximum number of mirror synchronization retries.
ReplicaResourceTable
ReplicaPhyAllocLayoutTable
A table containing the physical layout information for the replica resources.
rrpaVirtualID : The replica resource ID assigned by the storage server.
rrpaVirtualName : The replica resource name created by the user.
rrpaName : The physical device name.
rrpaType : Represents the type (Primary or Mirror) of the physical layout.
rrpaSCSIAddress : The SCSI address with <Adapter:Channel:SCSI:LUN>
format of the replica resource.
rrpaFirstSector : The first sector of the physical device which is allocated by the
replica resource.
rrpaLastSector : The last sector of the physical device which is allocated by the
replica resource.
rrpaSize : The amount of the allocated size, in megabytes, within a physical
device.
rrpaSizeQuantity : The amount of the allocated size quantity within a physical
device.
rrpaSizeUnit : The amount of the allocated size unit within a physical device.
The size unit of the device: 0 = KB, 1 = MB, 2 = GB, 3 = TB.
rrpaFirstSector64 : The first sector of the physical device which is allocated by
the replica resource.
rrpaLastSector64 : The last sector of the physical device which is allocated by
the replica resource.
rrpaSize64 : The amount of the allocated size, in megabytes, within a physical
device.
snapshotgroupCDPInfoTable
snapshotgroupSafeCacheInfoTable
snapshotgroupMembers
snapshotgroupMemberTable
Email Alerts
Email Alerts is a unique FalconStor customer support utility that proactively identifies
and diagnoses potential system or component failures and automatically notifies
system administrators via email.
With Email Alerts, the performance and behavior of servers can be monitored so
that system administrators are able to take corrective measures within the shortest
amount of time, ensuring optimum service uptime and IT efficiency.
Using pre-configured scripts (called triggers), Email Alerts monitors a set of predefined, critical system components (SCSI drive errors, offline device, etc.).
With its open architecture, administrators can easily register new elements to be
monitored by these scripts. When an error is triggered, Email Alerts uses the built-in
CDP/NSS X-ray feature to capture the appropriate information. This includes the
CDP/NSS event log, as well as a snapshot of the CDP/NSS appliance's current
configuration and environment. The technical information needed to diagnose the
reported problem is then sent to a system administrator.
Configuration
Email Alerts can be configured to meet your business needs. You can specify who
should be notified about which events. The triggers can be defined to combine any
of the scripts listed below. For example, it can be used to monitor a particular Thin
disk or all Thin disks.
To configure Email Alerts:
1. In the Console, right-click on your storage server and select Options --> Enable
Email Alerts.
SMTP Server - Specify the mail server that Email Alerts should use to
send out notification emails.
SMTP Port - Specify the mail server port that Email Alerts should use.
SMTP Username/Password - Specify the user account that will be used by
Email Alerts to log into the mail server.
User Account - Specify the email account that will be used in the From
field of emails sent by Email Alerts.
Target Email - Specify the email address of the account that will receive
emails from Email Alerts. This will be used in the To field of emails sent
by Email Alerts.
CC Email - Specify any other email accounts that should receive emails
from Email Alerts.
Subject - Specify the text that should appear on the subject line. The
general subject defined during setup will be followed by the trigger specific
subject. If the trigger does not have a subject, the trigger name and
parameters are appended to the general email subject. For the
syslogcheck.pl trigger, the first alert category is appended to the
general email subject. If the email is sent based on event severity, the
event ID will be appended to the general email subject.
Interval - Specify the time period between each activation of Email Alerts.
The Test button allows you to test the configuration by sending a test
email.
3. Enter the contact information that should appear in each Email Alerts email.
4. Set the triggers that will cause Email Alerts to send an email.
Triggers are the scripts/programs that perform various types of error checking
when Email Alerts activates. By default, FalconStor includes scripts/programs
that check for low system memory, changes to the CDP/NSS XML configuration
file, and relevant new entries in the system log.
Note: If the system log is rotated prior to the Email Alerts checking interval
and contains any triggers but the new log does not have any triggers in it,
then no email will be sent. This is because only the current log is checked, not
the previous log.
The following are the some of the default scripts that are provided:
activity.pl - (Activity check) - This script checks to see if an fsstats activity
statistics file exists. If it does, an email alert is sent with the activity file
attached.
cdpuncommiteddatachk.pl -t 90 - This script checks for uncommitted
data on CDP and generates an email alert message if the percentage of
uncommitted data is more than that specified. By default, the trigger gets
activated when the percentage of uncommitted data is 90%.
chkcore.sh 10 (Core file check) - This script checks to see if a new core
file has been created by the operating system in the bin directory of CDP/
NSS. If a core file is found, Email Alerts compresses it, deletes the original,
and sends an email report but does not send the compressed core file
(which can still be large). If there are more than 10 (variable) compressed
core files under the $ISHOME/bin directory, it keeps the latest 10 compressed
core files and deletes the oldest ones.
defaultipchk.sh eth0 10.1.1.1 (NIC IP address check) - This script checks
that the IP address for the specified NIC matches what is specified here. If
it does not, Email Alerts sends an email report. You can add multiple
defaultipcheck.sh triggers for different NICs (for example eth1 could
be used in another trigger). Be sure to specify the correct IP address for
each NIC.
diskusagechk.sh / 95 (Disk usage check) - This script checks the disk
space usage at the root of the file system. If the percentage is over the
specified percentage (default is 95), Email Alerts sends an email report.
You can add multiple diskusagechk.sh triggers for different mount
points (for example, /home could be used in another trigger).
fccchk.pl - (QLogic HBA check) - This script checks each QLogic adapter
initiator port and sends an email alert if there is a status change from
Online to Not Online. The script also checks QLogic link status and sends
an email alert if the status of FC Link Down changes from OK to Not OK.
fmchk.pl and smchk.pl - These scripts (for checking if the fm and
ipstorsm modules are responding) are disabled.
ipstorstatus.sh (IPStor status check) - This script checks if any module of
CDP/NSS has stopped. If so, Email Alerts sends an email report.
The following options are available to customize your X-ray. Regardless of which
option you choose, the bash_history file is created, containing a history of the
shell commands that have been run on the server.
System Information - When this option is selected, the X-ray creates a file
called info which contains information about the entire system,
including: host name, disk usage, operating system version, mounted file
systems, kernel version, CPU, running processes, IOCore information,
uptime, and memory. In addition, if an IPMI device is present in the server,
the X-ray info file will also include the following files for IPMI:
ipmisel - IPMI system event log
ipmisensor - IPMI sensor information
ipmifru - IPMI built-in FRU information
IPStor Configuration - This information is retrieved from the
/usr/local/ipstor/etc/<hostname> directory. All configuration information
(ipstor.conf, ipstor.dat, IPStorSNMP.conf, etc.), except for shared secret
information, is collected.
SCSI Devices - SCSI device information included in the info file.
IPStor Virtual Device - Virtual device information included in the info file.
Fibre Channel - Fibre Channel information.
Log File - The Linux system message file, called messages, is located in
the /var/log directory. All storage server messages, including status and
error messages, are stored in this file.
Loaded Kernel - Loaded kernel modules information is included in the
info file.
Network Configuration - Network configuration information is included in
the info file.
6. Indicate the terms that should be tracked in the system log by Email Alerts.
The system log records important events or errors that occur in the system,
including those generated by CDP/NSS. This dialog allows you to rule out
entries in the system log that have nothing to do with CDP/NSS, and to list the
types of log entries generated by CDP/NSS that Email Alerts needs to examine.
Entries that do not match the entries entered here are ignored, regardless of
whether or not they are relevant to CDP/NSS.
The trigger for monitoring the system log is syslogchk.pl. To inform the
trigger of which specific log entries need to be captured, you can specify the
general types of entries that need to be inspected by Email Alerts. On the next
dialog, you can enter terms to ignore, thereby eliminating entries that match
these general types, yet can still be disregarded. The resulting subset contains
all entries for which Email Alerts needs to send out email reports.
Each line is a regular expression. The regular expression rules follow the pattern
for AWK (a standard Unix utility).
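For example, hypothetical patterns such as the following could be listed so that
matching system log entries are examined (these are illustrative placeholders only,
not defaults):
kernel:.*scsi.*error
ipstor
I/O error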
Note: By default, the system log file is included in the X-ray file which is not
sent with each notification email.
8. Select the severity level of server events for which you want to receive an email
alert.
By default, the alert severity level is set to None. You can select one of the
following severity levels:
Critical - checks only the critical severity level
Error - checks the error and any severity level higher than error.
Warning - checks the warning and any severity level higher than warning.
Informational - checks all severity levels.
9. Confirm all information and then click OK to enable Email Alerts.
The General tab displays server and message configuration and allows you
to send a test email.
The Signature tab allows you to edit the contact information that appears in
each Email Alerts email.
The Trigger tab allows you to set triggers that will cause Email Alerts to send
an email as well as set up an alternate email.
The Attachment tab allows you to select the information (if any) to send with
the email alert. You can send log files or X-Ray files.
The System Log Check tab allows you to add, edit, or delete syntax from the
log entries that need to be captured. You can also specify the general types
of entries that need to be inspected by Email Alerts.
The System Log Ignore tab allows you to select system log entries to ignore,
thereby eliminating entries that will cause Email Alerts to send out email
reports.
Email format
The email body contains the messages returned by the triggers. The alert text starts
with the category, followed by the actual message coming from the system log. The
first 30 lines are displayed. If the email body is more than 16 KB, it will be
compressed and sent as an attachment to the email. The signature defined during
Email Alerts setup appears at the end of the email body.
You can specify an email address to override the default Target Email or a text
subject to override the default Subject. To do this:
1. Right-click on your storage server and select Email Alerts --> Trigger tab.
The alternate email address along with the Subject is saved to the
$ISHOME/etc/callhome/trigger.conf file when you have finished editing.
Note: If you specify an email address, it overrides the return code. Therefore, no
attachment will be sent, regardless of the return code.
New script/program
The trigger can be a shell script or a program (Java, C, etc.). If you create a new
script/program, you must add it to the $ISHOME/etc/callhome/trigger.conf file so
that Email Alerts knows of its existence.
Return codes
Return codes determine what happens as a result of the script/program's
execution. The following return codes are valid:
In order for a trigger to send useful information in the email body, it must redirect its
output to the environment variable $IPSTORCLHMLOG.
The following is the content of the storage server status check trigger,
ipstorstatus.sh:
#!/bin/sh
RET=0
if [ -f /etc/.is.sh ]
then
    . /etc/.is.sh
else
    echo "Installation is not complete. Environment profile is missing in /etc."
    echo
    exit 0 # don't want to report error here so have to exit with error code 0
fi
$ISHOME/bin/ipstor status | grep STOPPED >> $IPSTORCLHMLOG
if [ $? -eq 0 ] ; then
    RET=1
fi
exit $RET
If any CDP/NSS module has stopped, this trigger generates a return code of 2 and
sends an attachment of all files under $ISHOME/etc and $ISHOME/log.
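As an illustration only, a custom trigger following the same pattern might look like
the sketch below; the script name, mount point, and threshold are hypothetical, and
it returns 1 on detection so that Email Alerts reports the condition:
#!/bin/sh
# Hypothetical trigger (homeusagechk.sh): report if /home usage exceeds 90%.
RET=0
USAGE=`df -P /home | awk 'NR==2 {print $5}' | tr -d '%'`
if [ "$USAGE" -gt 90 ] ; then
    echo "/home is ${USAGE}% full" >> $IPSTORCLHMLOG
    RET=1
fi
exit $RET
Remember to add the script to the $ISHOME/etc/callhome/trigger.conf file so that
Email Alerts knows of its existence.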
BootIP
FalconStor's boot over IP services, powered by IPStor, for Windows and Linux-based
storage servers allow you to maximize business continuity and return on
investment. BootIP enables IT managers to provision disk storage and its related
services to achieve maximum return on investment (ROI).
BootIP leverages the proven SAN management infrastructure and storage services
available in FalconStor's network storage infrastructure to ensure business
continuity, high availability, and effective disaster recovery planning.
BootIP setup
Setting up BootIP involves several steps, which are outlined below:
1. Prepare a sample computer with the operating system and all the applications
installed.
2. Install CDP or NSS on a server computer.
3. Install Microsoft iSCSI initiator boot version and DiskSafe on the sample
computer.
The Microsoft iSCSI Software Initiator enables connection of a Windows host to
an external iSCSI storage array using Ethernet NICs. The boot version can be
configured to boot Windows Server 2003/Vista/2008 hosts.
When installing Microsoft iSCSI Software Initiator, check the item Configure
iSCSI Network Boot Support and select the network interface driver for the NIC
that will be used to boot via iSCSI.
4. Install the FalconStor Management Console.
You can also create a boot image for client computers that do not have disks. To do
this, you need to prepare a computer to be used for your boot image.
1. Make sure everything is installed on the computer, including the operating
system and the applications that the client computers will use.
2. Once you have prepared the computer, use DiskSafe to back up the computer to
create a boot image for diskless client computers.
3. After preparing the boot image, create TimeMarks from the boot image, and then
mount the TimeMarks as individual TimeViews and respectively assign them to
the diskless computers.
4. Configure the diskless computers to boot up from the network.
Using DiskSafe can help you to clone a boot image from the sample computer and
put the image on an IPStor-managed virtual disk. You can then set up the BootIP
from the server and use the boot image to boot the diskless client computers.
Prerequisites
A valid Operating System (OS) image must be prepared for iSCSI remote boot. The
conditions of a valid OS image for an iSCSI boot client are listed below:
The OS must be one of the following:
Note: The VID should be the virtual device ID of the iSCSI disk.
Un-authentication mode.
CHAP mode.
Set the authentication and Recovery password from iSCSI client properties
You can also set the authentication and Recovery password from iSCSI client
properties. To do this:
1. Navigate to [Client Host Name] and expand it.
2. Right-Click on iSCSI and select Properties.
An iSCSI Client Properties window displays.
3. Select User Access to set authentication.
4. Optional: Select Allow unauthenticated access. The user does not need to
authenticate for remote boot.
Note: Mutual CHAP secret is not currently supported for iSCSI authentication.
VID is the virtual ID of the mirror disk or TimeView device. You can confirm the VID
on the General tab of the SAN Resource mirror disk or of the TimeView you assigned
for remote boot in the FalconStor Management Console.
If you run the Sysprep.exe file from the %systemdrive%\Sysprep folder, the
Sysprep.exe file removes the folder and the contents of the folder.
6. In the System Preparation Tool, set the shutdown mode to Shutdown and
click Reseal to prepare the computer.
The computer should shut down by itself.
7. Optional: You can use Snapshot Copy or TimeView to assign the images to the
other clients and remote boot to initialize the other Windows 2003 systems.
Using the Setup Manager tool to create the Sysprep.inf answer file
Once you have automated the deployment of Windows 2003, you can use the
sysprep.inf file to customize the initial Windows settings, such as user name,
organization, host name, product key, networking components, workgroup, time zone, etc.
To install the Setup Manager tool and to create an answer file, follow these steps:
1. Navigate to the Deploy.cab file that you replaced and double-click on it to open it.
2. On the Edit menu, click Select All
3. On the Edit menu, click Copy to Folder.
4. Click Make New Folder and enter a name for the Setup Manager folder. For
example, type setup manager, and then press Enter.
5. Click Copy.
6. Open the new folder that you created, and double-click the Setupmgr.exe file.
The Windows Setup Manager Wizard launches.
7. Follow the instructions in the wizard to create a new answer file.
8. Select the Sysprep setup to generate the sysprep.inf
9. Select Yes, fully automate the installation.
Later, you will be prompted to enter the license key code.
10. Select to automatically generate computer name or specify a computer name.
11. Save the sysprep.inf file to the C:\Sysprep\ folder.
12. Click Finish to exit the Setup Manager wizard.
1. On a reference computer, install the operating system and any programs that
you want installed on your destination computers.
2. Use DiskSafe to clone the system disk to the storage server.
3. Boot the mirror disk remotely (setting the related BootIP configuration).
4. Open the Windows System Image Manager (Start --> All Programs --> Microsoft
Windows AIK --> Windows System Image Manager).
5. Copy Install.wim from the product installation package (source) to your disk.
6. Create a catalog on the WAIK.
7. On the File menu, click Select Windows Image.
8. Navigate to the location where you saved install.wim, and then click Open.
You are prompted to select an image.
9. Select the appropriate version of Windows Vista/2008, and then click OK.
10. On the File menu, click New Answer File.
11. If a message displays that a catalog does not exist, click OK to create one.
12. From the Windows image, choose the proper component.
13. From the Answer file, you can set the following options:
Auto-generate a computer name
Add or edit Organization and Owner Information
Set the language and locale
Set the initial tasks screen not to show at logon
Set server manager not to show at logon
Set the Administrator password
Create a second administrative account and set the password
Run a post-image configuration script under the administrator account at
logon
Set automatic updates to not configured (to be configured post-image)
Configure the network location
Configure screen color/resolution settings
Set the time zone
To apply the settings in auditSystem and auditUser, boot to Audit mode by using the
sysprep /audit command. The machine shuts down; you can then use Snapshot
Copy from the FalconStor Management Console to clone the mirror disk and remote
boot the copies to initialize additional Windows Vista/2008 systems.
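A typical invocation is sketched below; the executable path is the standard location on
Windows Vista/2008 and the switches are standard Sysprep options, but confirm both
against your deployment documentation before use.

    rem Boot the reference system into Audit mode and shut it down afterward:
    %WINDIR%\System32\sysprep\sysprep.exe /audit /shutdown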
Creating a TimeMark
Once your boot image has been created and is on the storage server, it can be used
as a base image for your diskless client computers. You will need to create a separate
boot image for each computer that you want to boot up remotely.
In order to create a separate boot image for a computer, you need to create a
TimeMark of the base image first, then create a TimeView from the TimeMark. The
TimeView can be assigned to an individual client computer for remote boot.
To create a TimeMark of the base boot image:
1. Launch the FalconStor Management Console if you have not done so yet.
2. Select your virtual disk under SAN Resources.
3. Right-click the disk and select TimeMark --> Enable.
A message box appears, prompting you to create the SnapShot Resource for
your virtual disk.
4. Click OK and follow the instructions of the Create SnapShot Resource Wizard to
create the SnapShot Resource.
5. Click Finish when you are done with the creation process. The Enable TimeMark
Wizard appears.
6. Click Next and specify the schedule information if you want to create TimeMarks
regularly.
You can skip the next two steps if you have specified the schedule information
as TimeMarks will be created automatically based on your schedule.
7. Click Finish when you are done.
The Wizard closes and you are returned to the main window of FalconStor
Management Console.
8. From the FalconStor Management Console, right-click your virtual disk and
select TimeMark --> Create.
The Create TimeMark dialog box appears.
9. Type a comment for the TimeMark and click OK.
The TimeMark is created.
Creating a TimeView
After creating a TimeMark of your base boot image, you can create a TimeView from
the TimeMark, and then assign the TimeView to a diskless computer for remote
boot.
To create a TimeView from a TimeMark:
1. Start the FalconStor Management Console if it is not already running.
2. Right-click your virtual disk and select TimeMark --> TimeView. The Create
TimeView dialog box appears.
3. Select the TimeMark from which you want to create a TimeView and click OK.
4. Type a name for the TimeView in the TimeView Name box and click OK.
The TimeView is created.
Note: Only one TimeView can be created per TimeMark. If you want to create
multiple TimeViews for multiple diskless computers, you will need to create
multiple TimeMarks from the base boot image first.
The BootIP boots the image that is assigned to the smallest target ID with LUN 0.
6. Check Allow mirror disks with existing partitions to restore to the original disk,
and then click Yes.
7. Select the original primary disk from the eligible mirror disks list and click Next.
The system warns you that the mirror disk is a local disk.
8. Click Yes.
9. Finish the Protect Disk wizard; DiskSafe starts to synchronize the current
data to the local disk.
10. Once synchronization has finished and the restore process succeeds, you can
shut down the server normally.
11. Disable BootIP on the iSCSI client or set it to boot from the local disk.
12. Boot the client locally from the disk you restored.
13. Once the system successfully boots up, open the DiskSafe Management Console
and remove the protection that you just created for recovery.
14. Re-protect the disk.
Note: After the remote boot, verify the status of services and applications to
make sure everything is up and ready after startup.
Make sure your boot disk is the FALCON IPSTOR DISK SCSI Disk Device.
To verify this, open Disk Management and right-click the first disk (Disk 0); it
should be listed as FALCON IPSTOR DISK SCSI Disk Device.
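As an alternative to the Disk Management GUI, a quick command-line check is sketched
below; WMIC is a standard Windows utility, and matching on the model string shown
above is an assumption about how the device is reported on your system.

    rem List disk index and model; Disk 0 should report FALCON IPSTOR DISK SCSI Disk Device
    wmic diskdrive get index,model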
Troubleshooting / FAQs
This section addresses frequently asked questions and issues that may be
encountered when setting up and running the CDP/NSS storage network, including
the following topics:
Logical resources
Storage Server
Failover
Network connectivity
Replication
TimeMark
SafeCache
Service-Enabled Devices
Cross-mirror failover on a virtual appliance
NIC Port Bonding
SNMP
Event log
Virtual devices
Multipathing method: MPIO vs. MC/S
BootIP
SCSI adapters and devices
Fibre Channel Target Mode and storage
Replication
iSCSI Downstream Configuration
Windows client debug information
Storage server X-ray
Error codes
Answer: Changing a storage server IP address using a third-party utility is not
supported. You will need to change storage server IP addresses via the console:
System Maintenance --> Network Configuration.
SNMP
Event log
Virtual devices
BootIP
disk    3ware      Logical Disk 0     1.2
disk    3ware      Logical Disk 1     1.2
disk    IBM-PSG    ST318203FC
disk    IBM-PSG    ST318203FC
disk    IBM-PSG    ST318304FC
Failover
What is VSA?
Is ALUA supported?
Replication
Logical resources
The following table describes the icons that are used to show the status of logical
resources:
Icon
Description
Network connectivity
Storage servers, clients, and consoles are all attached to one another through an
Ethernet network. In order for all of the components to work together properly, their
network connectivity must be configured correctly.
To test connectivity between machines (servers, clients, and consoles), there are
several things you can do. The following example tests connectivity from a client or
console to a server named knox.
To test connectivity from one machine to the storage server, you can execute the
ping utility from a command line prompt. For example, if your storage server is
named knox, execute:
ping knox
If the storage server is running and attached to the network, you should receive a
response like this:
Pinging knox [10.1.1.99] with 32 bytes of data:
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Ping statistics for 10.1.1.99:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
If the Server is not available, you may get a response like this:
Pinging knox [10.1.1.99] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Ping statistics for 10.1.1.99:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
This means that either the machine is not running, or is not properly attached to the
network. If you get a response like this:
Unknown host knox.
This means that your machine cannot find the storage server by name. There could
be two reasons for this. First, it may be that the storage server is not running or
connected to the network, and therefore has not registered itself to the name service
on your network.
Second, it may be that the storage server is running, but is not known by name,
possibly because the name service, such as DNS, is not running, or your machine is
not referring to the proper name service.
Refer to your network's reference material on how to configure your network's name
service.
If your storage server is available, you can execute the following command on the
server to verify that the CDP/NSS ports are both up:
netstat -a | more
Both ports 11576 and 11577 should be listed. In addition, port 11576 should be
listening.
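A quick way to check the ports, sketched here assuming a standard Linux shell on the
storage server (adjust the options if your netstat build differs):

    netstat -an | grep -E '1157[67]'
    # Port 11576 should appear in the LISTEN state; 11577 should also be listed.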
Linux SAN Client
You may see the following message when executing ./IPStorclient start or
./IPStorclient restart if the Linux Client cannot locate the storage server on the
network:
Creating IPStor Client Device [FAILED]
Failed to connect to Storage Server 0, -1
To resolve, restart the services on both the storage server and the Linux Client.
Check the General Info tab for the Client in the Console to see if the Client
has been authenticated. In order for a Client to be able to access storage,
you must establish a trusted relationship between the Client and Server and
you must assign storage resources to the Client.
If you make any Client configuration changes in the Console, you must
restart the Client in order for the changes to take effect.
Clients may not achieve the maximum throughput when writing over gigabit Ethernet.
If you are noticing slower than expected speeds when writing over gigabit, you can
do the following (see the example after the registry settings below for making the
Linux setting persistent):
Turn on TCP window scaling on the storage server:
/proc/sys/net/ipv4/tcp_window_scaling
1 is on; 0 is off.
On Windows, go to Run and type regedit. Add the following:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\
Parameters]
"Tcp1323Opts"=dword:00000001
"GlobalMaxTcpWindowSize"=dword:01d00000
"TcpWindowSize"=dword:01d00000
To see if the storage server client has connectivity to the storage server over the
Ethernet network, refer to Network connectivity.
Windows Client
Problem
Cause/Resolution
2. To filter the events being written to the Event Viewer, select one of the levels in
the Log Level field.
Note that regardless of which level you choose, there are several events that will
always be written to the Event Viewer (driver not loaded, service failed to start,
service started, service stopped).
Five levels are available for use:
Off - No activity will be recorded.
Errors only - Only errors will be recorded.
Brief - Errors and warnings will be recorded.
Detailed (Default) - Errors, warnings, and informational messages will be recorded.
Trace - This is the highest level of activity tracing. Debugging messages will be
written to the trace log. In addition, all errors, warnings, and informational
messages will be recorded in the Event Viewer.
3. If you select the Trace level, specify which portions of the storage server Client
will be traced.
Warning: Adjusting these parameters can impact system performance. They
should not be adjusted unless directed to do so by FalconStor technical support.
ISCMD command log: Run the ISCMD command with option DEBUG=2.
The debugging message will be written to the log file ISCMD.LOG located in
the directory SYS:\SYSTEM.
For example: ISCMD Start Server=serverIPAddress Debug=2
IPStor SAN client trace log: Run the command SANDRV +debug +ip3 on
the NetWare System Console. The trace log will be written to the log file
TRACELOG.XML located in the directory SYS:\SYSTEM.
Storage Server
Storage server X-ray
The X-ray feature is a diagnostic tool used by your Technical Support team to help
solve system problems. Each X-ray contains technical information about your
server, such as server messages and a snapshot of your server's current
configuration and environment. You should not create an X-ray unless you are
requested to do so by your Technical Support representative.
To create the X-ray file for multiple servers:
1. Right click on the Servers node in the console and select X-ray.
A list of all of your storage servers displays.
2. Select the servers for which you would like to create an X-ray and click OK.
If the server is not listed, click the Add button to add the server to the list.
3. Select the X-ray options based upon the discussion with your Technical Support
representative and set the file name.
Failover
Problem: After failover, when you connect to the newly promoted primary server,
the failover status is not correct.
Replication
Problem: Replication fails.
TimeMark
SafeCache
Service-Enabled Devices
Error codes
The following table contains a description of some common error codes.
Type
Text
Probable Cause
Suggested Action
1005
Error
1006
Error
1008
Error
1016
Critical
Physical device
associated with primary
virtual device may have
had a failure.
1017
Critical
1022
Error
1023
Error
Failed to connect to
physical device [Device
number]. Switching to alias
[ACSL].
An adapter/cable might
have a problem.
1030
Error
Try later.
1031
Error
1032
Error
525
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
1033
Error
1034
Error
1035
Error
1036
Error
1037
Error
1038
Error
Memory is low.
1039
Error
Replication failed
because of the
indicated error.
1040
Error
1043
Error
A SCSI command terminated with a non-recoverable error condition that was most
likely caused by a flaw in the medium or an error in the recorded data. Please check
the system log for additional information.
526
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
1044
Error
A SCSI command terminated with a non-recoverable hardware failure (for example,
controller failure, device failure, parity error, etc.). Please check the system log for
additional information.
1046
Error
1047
Error
1048
Error
Network problem.
1049
Error
1050
Error
1051
Error
A merge is occurring on
the replica server.
1052
Error
1053
Error
527
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
1054
Error
Replication cannot
proceed -- a merge is in
progress for virtual device
[Device number].
A merge is occurring on
source.
1055
Error
1056
Error
1059
Error
1060
Error
1061
Critical
1066
Error
Replication cannot
proceed -- snapshot
resource area does not
exist for remote virtual
device [Device ID].
1067
Error
Replication cannot
proceed -- unable to
connect to replica server
[Server name].
1068
Error
Replication cannot
proceed -- group [Group
name] is corrupt.
528
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
1069
Error
Replication cannot
proceed -- virtual device
[Device ID] no longer has
a replica or the replica
device does not exist.
1070
Error
Replication cannot
proceed -- replication is
already in progress for
group [Group name].
1071
Error
Replication cannot
proceed -- Remote vid %1
does not exist or is not a
replica device.
1072
Error
Replication cannot
proceed -- missing a
remote replica device in
group [Group name].
See 1069.
1073
Error
Replication cannot
proceed -- unable to open
configuration file.
529
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
1074
Error
Replication cannot
proceed -- unable to
allocate memory.
1075
Error
Replication cannot
proceed -- unexpected
error %1.
1078
Error
Replication cannot
proceed -- mismatch
between our snapshot
group [Group name] and
replica server.
1079
Error
1080
Error
Replication cannot
proceed -- failed to create
TimeMark on virtual device
[Device ID].
1081
Error
Replication cannot
proceed -- failed to create
common TimeMark for
group [Group name].
See 1080.
1082
Warning
Replication was
stopped by the user.
None.
530
Troubleshooting / FAQs
Type
Text
1083
Warning
Replication was
stopped by the user.
None.
1084
Warning
A SCSI command
terminated with a
recovered error condition.
Please check system log
for additional information.
1085
Error
1087
Error
1090
Warning
Obsolete, replaced by
1096 and 1097
1096
Warning
Retry later.
1097
Warning
1098
Error
Replication cannot proceed -- Failed to get virtual device delta information for
virtual device %1.
531
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
1099
Error
Replication cannot
proceed -- Failed to get
virtual device delta
information for virtual
device %1 due to device
offline.
1100
Error
Replication cannot proceed -- Failed to communicate with replica server to trigger
replication for virtual device %1.
1101
Error
Replication cannot proceed -- Failed to communicate with replica server to trigger
replication for group %1.
1102
Error
Replication cannot proceed -- Failed to update virtual device metadata for virtual
device %1.
1103
Error
Replication cannot
proceed -- Failed to initiate
replication for virtual
device %1 due to server
busy.
Failed to start
replication for virtual
device because the
system was busy.
1104
Error
Replication cannot
proceed -- Failed to initiate
replication for group %1
due to server busy.
Failed to start
replication for group
because the system
was busy.
1201
Warning
1203
Error
532
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
1204
Error
Downstream storage
path failure.
1206
Error
Downstream storage
path failure.
1207
Error
Downstream storage
path failure.
1208
Warning
1210
Warning
Downstream storage
path failure.
1211
Warning
Unexpected path
configuration.
1212
Warning
Storage connectivity
failure.
1214
Error
Downstream storage
path failure.
1215
Warning
Downstream storage
path failure or manual
trespass.
1216
Warning
Downstream storage
path failure or manual
trespass.
1217
Warning
Downstream storage
path failure or manual
trespass.
1218
Warning
Downstream storage
path failure or manual
trespass.
1230
Error
TimeMark [TimeMark]
cannot be created during
disk rollback. Time-stamp:
[Time].
Disk rollback is in
progress.
533
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
3003
Error
Number of CCM
connections has reached
the maximum limit %1.
3009
Error
Check network
communication and client
access from the server; try
to restart the ccm module on
the server.
3010
Error
3014
Error
3016
Error
3017
Warning
3020
Error
3021
Error
3022
Error
3023
Error
3024
Error
534
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
7001
Error
Unexpected loss of
environment variables defined
in /etc/.is.sh on the server.
7002
Error
Patch %1 failed -- it
applies only to build %2.
7003
Error
7004
Warning
Patch %1 installation
failed -- it has already
been applied.
None.
7005
Error
Patch %1 installation
failed -- prerequisite
patch %2 has not been
applied.
7006
Error
Patch %1 installation
failed -- cannot copy new
binaries.
7008
Warning
None.
7009
Error
7010
Error
7011
Error
Patch %1 failed -- it
applies only to kernel
%2.
10001
Error
535
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
10002
Error
10003
Error
Failed to initialize
configuration [File name].
10004
Error
10005
Error
10006
Error
Failed to write
configuration [File name].
10054
Error
10059
Error
536
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
10100
Error
10101
Error
Failed to update
configuration [File name].
10102
Error
See 10004.
10200
Warning
10210
Error
537
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
10211
Warning
10212
Error
10213
Error
10214
Error
10215
Error
10240
Error
10241
Error
Physical Adapter
[Adapter number] could
not be located in /proc/
scsi/.
538
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
10242
Critical
Duplicate Physical
Adapter number [Adapter
number] in /proc/scsi/.
10244
Error
10245
Error
10246
Error
10247
Error
10250
Warning
10251
Warning
10254
Error
10257
Error
539
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11000
Error
11001
Error
See 11000.
11002
Error
See 11000.
11003
Error
See 11000.
11004
Error
See 11000.
11006
Error
The server
communication module
failed to start.
11007
Warning
11030
Error
OS cron package
configuration error.
540
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11031
Error
11032
Error
11033
Error
11034
Error
11035
Error
11036
Error
11101
Error
11103
Error
11104
Error
11106
Error
11107
Error
541
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11112
Error
If there is a valid
configuration file saved, it
can be restored to the
system. Make sure to use
reliable storage devices for
critical system information.
11114
Error
11115
Warning
11222
Error
11201
Error
None.
11202
Error
See 11107.
11203
Error
See 10100.
542
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11204
Error
11211
Error
See 10006.
11212
Error
11216
Error
See 11114.
11219
Error
11220
Error
543
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11221
Error
11233
Error
11234
Error
11237
Error
544
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11240
Error
11242
Error
See 11240.
11244
Error
11245
Error
11247
Error
545
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11249
Error
11251
Error
11253
Error
11257
Error
See 11101.
See 11101.
11259
Error
11261
Error
11262
Error
See 11112.
See 11112.
11263
Error
See 10006.
See 10006.
11266
Error
See 10004.
11268
Error
11270
Error
See 10004.
546
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11272
Error
11274
Error
Retry later.
11276
Error
11278
Error
See 10004.
11280
Error
Secure communication
channel information for a
failover setup, a replication
setup, or a Near-line mirroring
setup could not be created.
Check if specified IP
address can be reached
from the failover secondary
server, replication primary
server, or Near-line server.
11282
Error
See 10004.
11285
Error
11287
Error
See 11240.
11289
Error
11291
Error
See 10004.
11294
Error
See 11000.
See 11000.
547
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11293
Error
11295
Error
See 11112.
See 11112.
11296
Error
11299
Critical
11300
Error
11301
Error
548
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11302
Error
11303
Error
Authentication failed in
stage [%1] for client at IP
address [IP address].
11306
Error
11307
Error
User %1 at IP address
%2 is not a member of
the IPStor Administrator's
group.
11308
Error
11309
Error
User ID %1 at IP address
%2 is invalid.
11310
Error
11408
Error
Synchronizing the
system time with [host
name]. A system reboot
is recommended.
11410
Warning
Enclosure Management:
%1
Check enclosure
configuration.
549
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11411
Error
Enclosure Management:
%1
Check enclosure
configuration.
11506
Error
11508
Error
11510
Error
11511
Error
Check if network
configuration is configured
properly. Also check if
system memory is running
low for allocation.
11512
Error
11514
Error
11516
Error
550
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11518
Error
11520
Error
11522
Error
11524
Error
See 11240.
See 11240.
11530
Error
11532
Error
11534
Error
See 10004.
11535
Error
11537
Error
551
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11539
Error
See 10004.
11541
Error
11542
Error
11544
Error
11546
Error
11548
Error
11553
Error
11554
Error
11556
Error
11560
Error
11561
Error
552
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11563
Error
11565
Error
11567
Error
11568
Error
11569
Error
11571
Error
11572
Error
553
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11574
Error
11575
Error
11577
Error
11578
Error
See 11569.
See 11569.
11581
Error
See 11240.
11583
Error
See 11569.
See 11569.
11585
Error
11587
Error
11590
Error
554
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11591
Error
Failed to expand
snapshot storage for
virtual device [Device ID]:
maximum segment
exceeded (error code
[Return code]).
11594
Error
11598
Error
11599
Error
Currently up to 64 segments
are supported; in order to
prevent this from happening,
create a bigger CDP journal
to avoid frequent
expansions.
11605
Error
11608
Error
11609
Error
11610
Error
555
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11613
Error
11615
Error
Retry later.
11618
Error
Failed to select
TimeMark %1 for virtual
device %2: TimeMark %3
has already been
selected.
11619
Error
Failed to select
TimeMark %1 character
device for virtual device
%2.
11621
Error
Failed to create
TimeMark %1 for virtual
device %2.
11623
Error
Failed to delete
TimeMark %1 for virtual
device %2.
11625
Error
11627
Error
11631
Error
Failed to expand
snapshot storage for
virtual device %1 (error
code %2).
556
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11633
Error
Check failover module status.
Check system disk status.
11637
Error
11638
Error
11639
Error
11640
Error
Failed to expand
snapshot resource for
virtual device %1. The
virtual device is assigned
to user %2. The quota for
this user is %3 MB and
the total size allocated to
this user is %4 MB, which
exceeds the limit.
557
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11641
Error
11643
Error
Retry later.
11644
Error
Take TimeView
[TimeView name] id
[Device ID] offline
because the source
TimeMark has been
deleted.
11645
Error
None.
11649
Error
11655
Error
11656
Warning
11657
Warning
11658
Warning
11659
Warning
558
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11660
Error
Failed to allocate a %1
MB DiskSafe mirror disk
in storage pool %2.
There is only %3 MB free
space left in that storage
pool.
11661
Error
11662
Error
Failed to create a %1 MB
DiskSafe snapshot
resource. There is not
any storage pool with
enough free space.
11665
Error
11667
Error
11668
Error
11672
Error
Check if a snapshot
operation is pending for the
virtual device or the group.
Check disk status and
system status.
11673
Error
Check if a snapshot
operation is pending for the
virtual device or the group.
Check disk status and
system status.
559
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11676
Error
11681
Error
11683
Error
11684
Error
11686
Error
11688
Error
11690
Error
11692
Error
11694
Error
11695
Error
11696
Error
560
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11701
Error
11706
Error
11707
Warning
Deleting TimeMark %1
on virtual device %2 to
maintain snapshot
resource threshold is
initiated.
11708
Error
Retry later.
11711
Error
11713
Error
11715
Error
11716
Error
License registration
information could not be
retrieved.
11717
Error
Check connectivity to
registration server; check file
system is not read-only for
creation of intermediary files.
11730
Error
561
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11738
Error
11740
Warning
A snapshot is going to be created while the mirror is out-of-sync in a Near-line setup.
11741
Warning
11742
Warning
11761
Error
11770
Error
11771
Error
11773
Error
11775
Error
562
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11777
Error
11295
Error
If there is a valid
configuration file saved, it
can be restored to the
system. Make sure to use
reliable storage devices for
the critical system
information.
11296
Error
11299
Critical
11300
Error
563
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11301
Error
11302
Error
11303
Error
Authentication failed in
stage %1 for client at IP
address %2.
11306
Error
11307
Error
User %1 at IP address
%2 is not a member of
the server Administrator's
group.
11309
Error
User ID %1 at IP address
%2 is invalid.
11310
Error
564
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11506
Error
It is possibly due to a
connection error or the system
being busy.
11508
Error
11510
Error
11511
Error
Check if network
configuration is configured
properly. Also check if the
system memory is running
low.
11512
Error
11514
Error
11516
Error
11518
Error
565
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11520
Error
11522
Error
11524
Error
11530
Error
11532
Error
566
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11534
Error
11535
Error
11537
Error
11539
Error
11541
Error
567
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11542
Error
11544
Error
11546
Error
11548
Error
11553
Error
11554
Error
11556
Error
11560
Error
11561
Error
11563
Error
11565
Error
568
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11567
Error
11568
Error
11569
Error
11571
Error
11572
Error
11574
Error
11575
Error
11577
Error
569
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11578
Error
11581
Error
11583
Error
11585
Error
11590
Error
11591
Error
Failed to expand
snapshot storage for
virtual device %1:
maximum segment
exceeded (error code
%2).
570
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11594
Error
11598
Error
11599
Error
Currently up to 64 segments are supported; in order to prevent this from happening,
create a bigger CDP journal to avoid frequent expansions.
11605
Error
11608
Error
11609
Error
11610
Error
Failed to create
TimeView for virtual
device %2 TimeMark %3.
11613
Error
11615
Error
Retry later.
571
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11618
Error
Failed to select
TimeMark %1 for virtual
device %2: TimeMark %3
has already been
selected.
11619
Error
Failed to select
TimeMark %1 character
device for virtual device
%2.
11621
Error
Failed to create
TimeMark %1 for virtual
device %2.
11623
Error
Failed to delete
TimeMark %1 for virtual
device %2.
11625
Error
11627
Error
Failed to rollback
TimeMark to timestamp
%1 for virtual device %2.
11631
Error
Failed to expand
snapshot storage for
virtual device %1 (error
code %2).
11632
Error
572
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11633
Error
11637
Error
11638
Error
11639
Error
11640
Error
Failed to expand
snapshot resource for
virtual device %1. The
virtual device is assigned
to user %2. The quota for
this user is %3 MB and
the total size allocated to
this user is %4 MB, which
exceeds the limit.
11641
Error
573
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11642
Error
Failed to create
temporary TimeView
from TimeMark %1 to
copy TimeView data for
virtual device %2.
11643
Error
Retry later.
11644
Error
Take TimeView %1 id %2
offline because the
source TimeMark has
been deleted.
11645
Error
11649
Error
11655
Error
11656
Error
11657
Error
11658
Error
11659
Error
574
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11660
Error
Failed to allocate a %1
MB DiskSafe mirror disk
in storage pool %2.
There is only %3 MB free
space left in that storage
pool.
11661
Error
11662
Error
Failed to create a %1 MB
DiskSafe snapshot
resource. There is not
any storage pool with
enough free space.
11665
Error
11667
Error
11668
Error
11672
Error
11676
Error
575
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11681
Error
11683
Error
11684
Error
11686
Error
11688
Error
11690
Error
11692
Error
11694
Error
11695
Error
11696
Error
11701
Error
576
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11706
Error
11707
Error
Deleting TimeMark %1
on virtual device %2 to
maintain snapshot
resource threshold is
initiated.
11708
Error
Retry later.
11711
Error
11713
Error
11715
Error
11716
Error
11717
Error
Check connectivity to
registration server; check file
system is not read-only for
creation of intermediary files.
11722
Error
11730
Error
577
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11738
Error
11740
Error
A snapshot is going to be created while the mirror is out-of-sync in a Near-line setup.
11741
Error
11742
Error
11761
Error
11770
Error
11771
Error
11773
Error
11775
Error
578
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
11777
Error
11900
Error
11901
Error
11902
Error
11910
Error
13300
Error
Failed to authenticate to the primary server -- Failover Module stopped.
13301
Error
Failed to authenticate to the local server -- Failover Module stopped.
See 13300.
See 13300.
13302
Error
13303
Error
13307
Error
13308
Error
Invalid failover
configuration detected.
Failover will not occur.
13309
Error
Quorum disk or
communication failure.
579
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
13316
Error
13317
Error
Failed to release IP
address [IP address].
None.
13319
Error
See 11240.
13320
Error
See 13300.
See 13300.
580
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
13700
Error
Failed to allocate
memory -- Self-Monitor
Module stopped.
13701
Error
Failed to release IP
address [IP address].
See 13317.
See 13317.
13702
Error
Check network
configuration.
13703
Error
See 13319.
See 13319.
13704
Error
13710
Critical
Contact FalconStor or a
representative to obtain
proper license.
13711
Critical
Contact FalconStor or a
representative to obtain
proper license.
581
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
13800
Error
None.
13804
Critical
13817
Critical
13818
Critical
13820
Warning
13821
Error
582
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
13822
Critical
13827
Error
13828
Warning
13829
Warning
See 13828.
13830
Error
13832
Error
13833
Error
13834
Error
13835
Error
13836
Error
Failed to get
configuration files from
repository. Check and
correct the configuration
disk.
583
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
13841
Error
13842
Warning
None.
13843
Error
13844
Error
Failed to write to
repository.
13845
Warning
13848
Warning
Failover occurred.
None.
13849
Warning
13850
Error
13851
Error
13853
Error
Secondary notified
primary to go up because
secondary is unable to
take over.
584
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
13856
Error
There is a heartbeat
communication issue between
failover partners.
13858
Critical
13860
Error
Failed to merge
configuration file.
13861
Error
13862
Error
13863
Critical
Primary server is
commanded to resume.
13864
Critical
13877
Error
13878
Error
13879
Critical
Secondary server
detected kernel module
failure; you may need to
reboot server %1.
13880
Critical
13881
Error
13882
Error
585
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
13888
Warning
Secondary server is
temporarily busy.
13895
Critical
15000
Error
15002
Error
15003
Error
Memory is low.
15004
Error
15005
Error
15006
Error
586
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
15008
Error
15016
Error
15018
Error
15019
Error
Memory is low.
15020
Error
15021
Error
15022
Error
587
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
15024
Warning
15032
Error
15034
Error
15035
Error
Stop unnecessary
processes or delete some
TimeMarks and try again. If
this happens frequently,
increase the amount of
physical memory to
adequate level.
15036
Error
15307
Error
15308
Error
588
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
15040
Error
15041
Error
15050
Error
15051
Error
15052
Error
15053
Error
15054
Error
589
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
15055
Error
15056
Error
16002
Error
Failed to create
TimeMark for group %1.
16003
Error
Failed to delete
TimeMarks because they
are in rollback state.
Try again.
16004
Error
Failed to delete
TimeMarks because
TimeMark operation is in
progress to get TimeMark
information.
TimeMark operation is in
progress.
Try again.
16010
Error
Group cache/CDP
journal is enabled for
virtual device %1, vdev
signature is not set for
vss.
16106
Error
16107
Error
16108
Error
16109
Error
590
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
16110
Error
16111
Error
16120
Error
16121
Error
16122
Error
16123
Error
16124
Error
16125
Error
16126
Error
591
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
16200
Error
Check FC WWPNs.
16211
Error
16212
Error
16213
Error
16214
Error
16215
Error
16217
Error
Check parameters.
16219
Error
16220
Error
592
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
16232
Error
16234
Error
16236
Error
16238
Error
Retry later.
16240
Error
16242
Error
Retry later.
16252
Error
Failed to initialize
statistics log scheduler
configuration.
16254
Error
16256
Error
16258
Error
Failed to retrieve
statistics log schedules.
16260
Error
16262
Error
Failed to remove
statistics log schedule(s).
17001
Error
593
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
17002
Error
17003
Error
17004
Error
Replication cannot
proceed due to
replication control area
failure.
17005
Error
Replication cannot
proceed due to
replication control area
failure.
17006
Error
17011
Error
17012
Error
17013
Error
17014
Error
17015
Error
Replication failed
because local snapshot
used up all of the
reserved area.
17016
Error
Replication failed
because the replica
snapshot used up all of
the reserved area.
31003
Error
594
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
31004
Error
31005
Error
Failed to allocate
memory.
Memory is low.
31011
Error
IPSTORUMOUNT: Failed
to unmount %1.
595
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
31013
Error
IPSTORMOUNT: Failed
to mount %1.
31017
Error
31020
Error
31023
Error
IPSTORNASMGTD:
Failed to create file [File
name].
See 31020.
See 31020.
31024
Error
IPSTORNASMGTD:
Failed to lock file [File
name].
31025
Error
IPSTORNASMGTD:
Failed to open file [File
name].
31028
Warning
31029
Error
See 31020.
See 31020.
31030
Error
See 31020.
See 31020.
596
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
31031
Error
Failed to remove
directory [Directory
name].
31032
Error
Failed to execute
program '[Program
name]'.
31034
Warning
31035
Error
31036
Error
31037
Error
No action needed.
31039
Error
31040
Error
OS limit reached.
597
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
31041
Error
Exceed maximum
number of reserved NAS
users.
31042
Error
Exceed maximum
number of reserved NAS
groups.
31043
Error
See 31020.
31044
Error
See 31020.
31045
Error
31046
Error
31047
Error
Synchronization daemon
is not running.
See 11240.
31048
Error
31049
Error
31050
Error
31051
Error
598
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
31054
Error
Failed to get my
hostname
31055
Error
31056
Warning
None.
31058
Warning
31060
Warning
31061
Error
See 31032
31062
Error
See 31017.
See 31017.
31064
Error
31066
Error
31067
Error
See 31017.
599
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
31068
Error
dynamic configuration
does not match %1
31069
Error
31071
Error
Missing file %1
31072
Error
See 31017.
31073
Error
600
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
See 31017.
31074
Error
See 31017.
31075
Error
31076
Critical
31078
Error
See 31017
Run the command "stat /dev/isdev/kisconf" to check the file.
31079
Error
50000
Error
iSCSI: Missing
targetName in login
normal session from
initiator %1
50002
Error
601
Troubleshooting / FAQs
Type
Text
Probable Cause
Suggested Action
50003
Error
51001
Warning
RAID: %1
51002
Error
RAID: %1
51003
Critical
RAID: %1
51004
Warning
Enclosure: %1
Check enclosure
configuration.
For any error not listed in this table, please contact FalconStor Technical Support.
602
Port Usage
This appendix contains information about the ports used by CDP and NSS.
CDP/NSS uses the following ports for incoming requests. Network firewalls should
allow access through these ports for successful communications. In order to
maintain a high level of security, you should disable all unnecessary ports. The ports
are not used unless the associated option is enabled in CDP/NSS. For FalconStor
appliances, the ports marked are enabled by default.
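If an intervening Linux firewall needs specific ports opened, a minimal iptables sketch is
shown below; the two ports are examples taken from the table that follows, and your
firewall tooling may differ.

    # Allow console-to-server RPC and iSCSI client traffic (example ports from the table below):
    iptables -A INPUT -p tcp --dport 11576 -j ACCEPT
    iptables -A INPUT -p tcp --dport 3260 -j ACCEPT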
Protocol  Port    Usage
TCP       20
UDP       20
TCP       21
UDP       21
TCP       22      Standard Secure Shell (SSH) port for remote connection to the server
TCP       23
UDP       23
TCP       25
UDP       25
UDP       67
UDP       68
UDP       69      TFTP (Trivial File Transfer Protocol) port for iSCSI Boot (BootIP) option
HTTP      80      Standard HTTP port to access FalconStor Web Setup; also used for online
                  registration of license key codes.
Note: Port 80 is used to send license material to the FalconStor license server for
registration. The registration reply is then sent back using the HTTP protocol, where a
local random port number is used on the server in the same way as Web-based
pages. The firewall does not block the reply if the 'established bit' is set to let
established traffic in.
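A stateful rule of the kind described in the note might look like the following sketch
(standard iptables state-match syntax; whether it is needed depends on your firewall's
defaults):

    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT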
HTTP      81      Standard HTTP port to access FalconStor Management Console via Web Start
TCP       111
UDP       111
Note: NFS port usage is assigned through the SUNRPC protocol. The ports
vary, so it is not possible or convenient to keep checking them and
reprogramming a firewall. Most firewalls have a setting to "Enable NFS", upon
which they will change the settings if the ports themselves change.
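If your firewall cannot track NFS dynamically, you can list the RPC port assignments
currently in use on the storage server; rpcinfo is a standard utility, and the host name
below is a placeholder.

    rpcinfo -p <storage-server>
    # Lists the ports currently registered for portmapper, nfs, mountd, and related services.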
UDP       123     Standard Network Time Protocol (NTP) transport layer to access external
                  time servers
UDP       137
UDP       138
TCP       139
UDP       161
HTTPS     443
UDP       623
HTTPS     1311
TCP       2009
UDP       2009
TCP       2049
UDP       2049
TCP       3260    Communication port between iSCSI clients and the server. Also used for
                  iSCSI Boot (BootIP) option
UDP       4011
TCP       5001
TCP       8009
TCP       8443
TCP       11576   Secure RPC communication port between FalconStor Management Console
                  and the server
TCP       11577
UDP       11577
TCP       11578
UDP       11578
TCP       11579
UDP       11579
TCP       11580
TCP       11582
TCP       11588
TCP       11762
TCP       18651
SMI-S Integration
Large Storage Systems and Storage Area Networks (SANs) are emerging as a
prominent and independent layer of IT infrastructure in enterprise class and
midrange computing environments. Examples of applications and functions driving
the emergence of new storage technology include:
The FalconStor SMI-S Provider for CDP and NSS storage offers CDP and NSS
users the ability to centrally manage multi-vendor storage networks for more efficient
utilization.
FalconStor CDP and NSS solutions use the SMI-S standard to expose the storage
systems they manage to an SMI-S client. The storage systems supported by FalconStor
include Fibre Channel disk arrays and SCSI disk arrays. A typical SMI-S client can
discover FalconStor devices through this interface, which utilizes CIM-XML, a
WBEM protocol that uses XML over HTTP to exchange Common Information Model
(CIM) information.
The SMI-S server is included in CDP and NSS versions 6.15 Release 2 and later.
openPegasus
The FalconStor SMI-S Provider uses an existing open source CIM Object Manager
(CIMOM) called openPegasus for a portable and modular solution. It is an open-source
implementation of the DMTF CIM and WBEM standards.
openPegasus is packaged in tog-pegasus-[version].rpm with Red Hat Linux and is
automatically installed on CDP and NSS appliances with version 6.15 R2 and later. If it
has not been installed on your appliance, you can install it using the following command:
rpm -ivh --nodeps tog-pegasus*.rpm
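To confirm that the package is present and the CIM server is running, a minimal check is
sketched below; tog-pegasus is the usual Red Hat init-script name, but verify it on your
appliance.

    rpm -q tog-pegasus            # confirm the package is installed
    service tog-pegasus status    # check whether the CIM server is running
    service tog-pegasus start     # start it if necessary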
Command Central Storage (CCS)
View LUNs
To view logical unit numbers (LUNs):
1. Select the LUNs tab in the sub-menu on the top of the main control panel.
A summary of CDP/NSS virtual disks displays. Assigned virtual disks display as
Unknown LUNs [Masked to Unknown Host(s)] or (Un)claimed LUNs, while
unassigned virtual disks display as Unallocated LUNs [Unmasked].
2. Select an individual LUN to view the storage pool it is in, and the physical LUN it
relies upon.
View Disks
1. To view disks, select the Disks tab in the sub-menu.
A summary of physical storage displays, along with the individual disks.
2. Select an individual disk to view which storage pool it is in and which storage
volume it was created from.
Enable SMI-S
To enable SMI-S, right-click on the server in the FalconStor Management Console
and select Properties.
Then highlight the SMI-S tab and select the Enable SMI-S checkbox.
Preconfigured storage
Preconfigured storage enclosures are shipped with a default RAID 6 configuration that
consumes all available resources. In the FalconStor Management console, default
devices that have been mapped to the FalconStor host are visible under Physical
Resources --> Physical Devices --> SCSI Devices.
Mapped LUs
Note: Other devices displayed in this location are not related to storage. PERC 6/i
devices are internal devices on the CDP/NSS appliance; the Universal Xport
device is a system device housing a driver that provides access to storage.
In the RAID Management console, these devices are known as Logical Units (LUs) (refer
to Logical Unit Mapping). The FalconStor RAID Management console lets you
reconfigure these default devices as needed.
When mapped LUs are available in the FalconStor Management console, you can create
SAN resources. The last digit of the SCSI address (A:C:S:L) corresponds to the LUN
number that you choose in the Mapping dialog.
Refer to Logical Resources in the CDP/NSS Administration Guide and FalconStor
Management Console online help for details on configuring these physical devices
as virtual devices and assigning them to clients.
Unconfigured storage
If your storage array has not been preconfigured, you must prepare storage using
functions in the RAID Management console before you can create SAN resources in
the FalconStor Management console:
Discover storage
This procedure locates in-band or out-of-band storage.
1. Click the Discover button (upper-right edge of the display).
2. In the Discover Storage dialog, select the discovery method.
Select Automatic if you do not know the IP address. This option can detect
only in-band storage and will require additional time to search the subnet.
After discovery, each storage array profile is listed in the Discover Storage drop-down list. Select a profile to display components in the RAID Management console.
You can use the keyboard to navigate through the Discover Storage list. Page Up/
Page Down jump between the first and last items in the list; Up and Down cursor
arrows scroll through all items in the list.
Action menu
To discover storage, click Add to display the Discover Storage dialog. Continue as
described above.
To remove a storage profile, click its checkbox and then click Remove. After you do
this, the profile you removed will still exist, but its storage will not be visible from the
host server.
Navigation pane
The navigation pane includes objects for all components in the storage array you
selected in the Discover Storage drop-down list. Double-click an object to expand
and display the objects below it; double-click again to collapse. When you select any
object, related information is displayed in the content pane to the right. Some items
include a right-click menu of management functions, while others are devoted to
displaying status information.
Status bar
The Status Bar at the bottom of the screen identifies - from left to right - the host
machine, the storage array name and its WWID, and the date/time of the last
update to the storage configuration.
Menu bar
Action menu - Click Manage Storage to display a dialog that lets you display a
storage profile and discover new storage (equivalent of Discover Storage).
Tools menu - Click Manage Event Log to view or clear the event log for the selected
storage profile.
Click Exit to close the RAID Management console and return to the FalconStor
Management console.
Tool bar
Click Exit to close the RAID Management console and return to the FalconStor
Management console.
Click About to display product version and copyright information.
Rename storage
You can change the storage name that is displayed for the Storage object in the
navigation pane. To do this:
1. Right-click the Storage object and click Rename.
2. Select the controller from the drop-down list. The dialog displayed from the
object for an individual controller provides settings for that controller only.
3. Set the IP address, subnet mask, and gateway as needed, then click Apply.
Caution: Improper network settings can prevent local or remote clients from
accessing storage.
View enclosures
A storage array includes one storage enclosure head (numbered Enclosure 0) and,
if connected, expansion enclosures (numbered Enclosure 1 to Enclosure x). Select
the Enclosures object to display summary information for components in all
enclosures in the selected storage profile.
Individual enclosures
Select a specific storage enclosure object to display quantity and status information
for its various components, including batteries, power supply/cooling fan modules,
power supplies, fans, and temperature sensors.
Expansion enclosure
Controller is online.
You can configure connection settings for both controllers from this object (refer to
Configure controller connection settings).
RAID controller firmware must be upgraded from time to time (refer to Upgrade
RAID controller firmware).
You can configure connection settings for both controllers from this object (refer to
Configure controller connection settings).
The enclosure image in the content pane provides information about any drive,
regardless of the disk object you have selected in the navigation pane. Enclosure 0
always represents the storage enclosure head. Enclosures 1 through x represent
expansion enclosures. (When an enclosure has 24 drives, drive images are oriented
vertically.) Hover your mouse over a single drive image to display enclosure/slot
information and determine whether the drive is assigned or unassigned. Hovering
adds a yellow outline to the drive. Slot statuses include:
Select the Disk Drives object to display summary and status information for all drives
in all enclosures in the selected profile, including layout, status, disk mode, total
capacity, and usable capacity, as well as interactive enclosure images (refer to
Interactive enclosure images).
You can also configure the selected drive to be a global hot spare.
When the procedure is done, the disk icon is changed to standby mode in all
interactive enclosure displays (refer to Interactive enclosure images).
When the procedure is done, the disk icon image changes to unassigned in all
interactive enclosure displays.
From this object, you can create a RAID array, then create Logical Units (LUs) on
any array and map them to FalconStor hosts (refer to Create a RAID array and
Create a Logical Unit).
3. Select physical disks in the interactive enclosure image. Drive status must be
Optimal, Unassigned (view hover text to determine status). For most effective
use of resources, all disks in a RAID array should have the same capacity. If you
select a disk with a different capacity than the others you have selected, a
warning (Warning: disks differ in capacity) will be displayed.
As you select disks, the Number of Disks in RAID and RAID Capacity values
increase; selected disks show a check mark.
4. Select Create when you are done.
Several messages will be displayed while the RAID is created; a confirmation
message will display when the process is complete. The storage profile is
updated to include the new array.
3. If you began the procedure from the RAID Arrays object, select the RAID array
on which you want to create the LU from the RAID drop-down list, which shows
the current capacity of the selected array.
If you began the procedure from an individual array, the current capacity for that
array is already displayed.
4. Enter a capacity for the LU and select GB, TB, or MB from the drop-down list.
5. The Logical Unit Owner (the enclosure controller) is selected by default; do not
change this selection.
6. You can assign (map) the LU to the FalconStor host at this time. The Map LUN
option is selected by default. You can do this now, or uncheck the option and
map the LU later (refer to Unmapped Logical Units).
7. Select a host from the drop-down list.
8. Choose a LUN designation from the drop-down list of available LUNs.
9. Select Create when you are done.
Select a RAID array object to display summary details and status information about
physical disks assigned to the array, as well as the mapped Logical Units (LUs) that
have been created on the array. When you select an array, the associated disks are
outlined in green in the interactive enclosure image.
When the array has been deleted, the storage profile is updated automatically.
To check current actions, right-click the object for an individual array and select
Check Actions. A message reporting the progress of any pending action will be
displayed.
In the RAID Array area of the console, the array icon shows its status as degraded
and the disk status is displayed as failed.
Right-click the array object in the navigation pane and select Replace Physical Disk.
The Replace Physical Disk dialog shows the failed disk. In the array image in the
dialog, select an unassigned, healthy disk to replace the failed disk. The disk you
select will show a green check mark and the disk ACSL will be displayed in the
dialog.
Click Replace Disk. A rebuild action will start. While this action is in progress, the
icons for the replacement disk and the disk being replaced will change to the
replace icon.
When the action is done, replacement disk status changes to assigned/optimal.
Logical Units
Double-click an Array object to display the objects for mapped Logical Units (LUs)
on the array. Select an LU object to display status, capacity, WWPN, RAID
information, ownership, cache, and other information.
1. Right-click the Logical Unit object in the console and select Define LUN
mapping. (You can also do this from LUs listed under the Unmapped Logical
Units object; refer to Unmapped Logical Units.)
2. Choose a LUN from the drop-down list of available LUNs and select OK.
Several messages will be displayed while the LUN is assigned and a
confirmation message will display when the process is complete. The storage
profile is updated.
After you perform a rescan in the FalconStor Management console, you can prepare
the new device for assignment to clients. In the console, the last digit of the SCSI
address (A:C:S:L) corresponds to the LUN number you selected in the Mapping
dialog.
Rename LU
You can change the name that is displayed for an LU in the navigation pane at any
time. To do this:
1. Right-click the LU object and click Rename.
You can expand this object to display unmapped and mapped LUs.
From this object, you can define LUN mapping for any LU with Optimal status (refer
to Define LUN mapping).
Select an individual unmapped LU to view configuration details.
From this object you can rename the LU (refer to Rename LU) or define LUN
mapping (refer to Define LUN mapping).
This screen includes the mapped Logical Units that are visible in the FalconStor
console, where the last digit of the SCSI address (A:C:S:L) corresponds to the
number in the LUN column of this display - this is the LUN number you selected in
the Mapping dialog.
3. To complete Stage 1, browse to the download location and select the firmware
file.
If you also want to upgrade non-volatile static random access memory
(NVSRAM), browse to the download location again and select the file.
Click Next when you are done.
4. To complete Stage 2, transfer the selected files to a server location specified by
Technical Support.
5. To complete Stage 3, download the firmware to controllers.
6. In Stage 4, activate the firmware.
Event log
To display an event log for the selected storage profile, select Tools --> Manage
Event Log --> View Event Log in the menu bar.
All events are shown by default; three event types are recorded.
- Informational events that normally occur.
- Warnings related to unusual component conditions.
- Critical errors such as device failure or loss of connectivity.
Select an event type in the Events list to display only one event category.
Click a column heading to sort event types, components, locations, or
descriptions.
Select an item in the Check Component list to display events only for the
RAID array, RAID controller modules, physical disks, virtual disks, or
miscellaneous events.
Storage information
To display information about storage, make sure the Check Storage Components
option is checked.
Choose a storage profile from the drop-down list. Click Refresh to update the display
with changes to storage resources that may have been made by another user in the
RAID Management console.
If you uncheck this option, information about storage is removed from the display
immediately.
Server information
To include information about the host server and other devices, make sure the Host
IPMI option is checked. You can display information for as many or as few of the
following categories as you like (a command-line aside follows the list):
- Chassis status
- Management controller (MC) status
- Sensor information
- FRU device information
- LAN Channel information
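For reference, these categories map loosely onto queries made with the standard
ipmitool utility. The sketch below is offered only as an aside, assuming ipmitool is
installed on the host; it is not part of the FalconStor console.

    # Aside: approximate command-line equivalents of the console categories,
    # assuming the standard ipmitool utility is installed on the host.
    import subprocess

    IPMI_QUERIES = {
        "Chassis status": ["ipmitool", "chassis", "status"],
        "Management controller (MC) status": ["ipmitool", "mc", "info"],
        "Sensor information": ["ipmitool", "sensor"],
        "FRU device information": ["ipmitool", "fru"],
        "LAN Channel information": ["ipmitool", "lan", "print"],
    }

    for label, command in IPMI_QUERIES.items():
        print(f"--- {label} ---")
        subprocess.run(command, check=False)  # prints each report to stdout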
Index
A
Access control
Groups 283
SAN Client 63
SAN Resources 95
Storage pools 70
Access rights
Groups 283
IPStor Admins 42
IPStor Users 42
Read Only 86
Read/Write 86
Read/Write Non-Exclusive 86
SAN Client 63
SAN Resources 95
Accounts
Manage 41
ACSL
Change 63
Activity Log 36
Adapters
Rescan 53
Administrator
Management 41
AIX Client 62, 177
Delete SAN Resource 95
Expand virtual device 94
SAN Resource re-assignment 86
Alias 56, 190
APC PDU 201, 205
Appliance
Check physical resources 100
Log into 98
Remove storage device 102
Start 96
Statistics 101
Stop 96
telnet access 98
Uninstall 104
Appliance-based protection 20
Asymmetric Logical Unit Access (ALUA) 505
Authentication 178
Authorization 179
Auto Recovery 219
Auto Save 34, 40
AWK 474
Backup
dd command 385
To tape drive 385
ZeroImpact 382
Block devices 53, 496
Troubleshooting 497
BMC Patrol
SNMP integration 442
Statistics 443
View traps 443
C
CA Unicenter TNG
Launch FalconStor Management Console 439
SNMP integration 438
Statistics 439
View traps 439
Cache resource 226
Create 226
Disable 231
Enlarge 231
Suspend 231
Write 60
capacity-on-demand 66
CCM error codes 534
CCS
Veritas Command Central Storage 607
CDP journal 300
Add tag 296
Mirror 296
Protect 296
Recover data 300
Status 295
Tag 296, 302
Visual slider 300
CDP/NSS
Licensing 34
CDP/NSS Server
Properties 35
Central Client Manager (CCM) 20
CHAP secret 45
Cisco switches 162
CLI
Troubleshooting 523
Client
Add 61, 176
Log Options 64
Logical Resources 58
Options 64
Physical Resources 50
Replication 60
Rescan adapters 53
SAN Clients 61
Save/restore configuration 33
Search 32
Server properties 35
Start 28
System maintenance 47
Troubleshooting 505
User interface 32
Continuous Data Protection (CDP) 287
Continuous replication 321, 330
Enable 324
Resource 331, 332
Create Primary TimeMark 324
Cross mirror
Check resources & swap 211
Configuration 193
Recover from disk failure 210
Requirements 187
Re-synchronize 211
Swap 183
Troubleshooting 521
Verify & repair 211
D
Data access 178
Data migration 75
Data protection 261
Data tab 139
dd command 385
Debugging 513
Delta Mode 324
Delta Replication Status Report 127, 333
Devices
Failover 190
Scan LUNs greater than zero 502
Disaster recovery
Import a disk 55
Replication 23, 320
Save/restore configuration 33
Disk
Foreign 55
IDE 53
Import 55
E
Email Alerts
Configuration 465
Exclude system log entries 474
Include system log entries 473
Modifying properties 476
Signature 467
System log check 473
System log ignore 474
Triggers 467, 477
Custom email destination 477
New script 478
Output 478
Return codes 478
Sample script 478
X-ray 471
EnableNOPOut 114
Encryption
Replication 327
Event Log 32, 115
Command Line Interface 407
Export 117
Filter information 116
Print 117
Refresh 117
Sort information 116
Troubleshooting 510
Expand virtual device 92
Linux clients 94
Solaris clients 94
Troubleshooting 497
Windows 2000 Dynamic disks 94
Windows clients 94
Export data
From reports 123
F
Failover 181, 182, 520
And Mirroring 222, 260
Asymmetric 183
Auto Recovery 207, 218
Auto recovery 209
Check Consistency 218
Command Line Interface 407
Configuration 185
Connect to primary after failover 198
Consistency check 218
Convert to mutual failover 217
Cross mirror
Check resources & swap 211
Configuration 193
Recover from disk failure 210
Re-synchronize 211
Swap 183
Verify & repair 211
Exclude physical devices 217
Fibre Channel Target failure 189
Fix failed server after failover 209
Force a takeover 219
HBAs 168
Heartbeat monitor 191
Intervals 218
Mutual failover 182
Network connection failure 189
Network connectivity failure 182
Physical device change 216
Power control 203
APC PDU 201, 205
HP iLO 201, 204
IPMI 201, 204
RPC100 201, 204
SCSI Reserve/Release 204
Primary/Secondary Servers 182
Recovery 182, 207, 218
Remove configuration 221
Replication note 353
Requirements 185
Asymmetric mode 187
Clients 186
Cross mirror 187
General 185
Shared storage 186
Sample configuration 184
Self-monitor 191
Server changes 216
Server failure 191
Setup 192
Status 206
filesystem utility 75
Filtered Server Throughput Report 143
Foreign disk 55
format utility 91, 94
G
Global Cache 230
Global options 336
Groups 59, 281
Access control 283
Add resources 283
Create 281
Replication 282
GUID 21, 55, 59
H
Halt server 49
health monitoring 199
heartbeat 199
High availability 181
Host-based protection 21
Hostname
Change 31, 48
HotZone 21, 232
Configure 233
Disable 239
Prefetch 232
Read Cache 232
Status 237
Suspend 239
HP iLO 201, 204
HP OpenView
SNMP integration 436
HP-UX 26
HP-UX Client 62, 177
Delete SAN Resource 95
HyperTrac 21
I
IBM Tivoli NetView
SNMP integration 440
IDE drives 53
Import
Disk 55
In-Band Protection 20
Installation
SNMP
BMC Patrol 442
CA Unicenter TNG 438
HP OpenView 436
IBM Tivoli NetView 440
IP address
changing 496
IPBonding
mode options 318
IPMI 49, 201, 204, 472
Filter 50
Monitor 49
IPStor Admins
Access rights 42
IPStor Server
Checking processes 99
IPStor Users
Access rights 42
ipstorconsole.log 64
iSCSI Client 21
Failover 114, 186
Troubleshooting 515
iSCSI Target 22
iSCSI Target Mode 106
Initiators 106
Targets 106
Windows
Add iSCSI client 108
Disable 114
Enable 107
Stationary client 109
ismon
Statistics 101
J
Jumbo frames 48, 512
K
Keycodes 34
kisdev# 385
L
Label devices 90
Licensing 30, 34
Link Aggregation 318
Linux Client 62, 177
Expand virtual device 94
Troubleshooting 516
Local Replication 23, 320
Logical Resources 22, 58
Expand 92
Icons 59, 510
M
MaxRequestHoldTime 114
MCS 498
Menu
Customize Console 65
MIB 22
MIB file 429
loading 430, 497
MIB module 429
Microscan 22, 39, 327, 336
Microsoft iSCSI initiator 114
default retry period 114
Migrate
Drives 75
Mirroring 240
And Failover 222, 260
CDP journal 296
Configuration 242
Configuration repository 195
Expand primary disk 254
Fix minor disk failure 252
Global options 259
Monitor 247
Performance 38, 259
Promote the mirrored copy 250
Properties 259
Rebuild 258
Recover from failure 252
Remove configuration 260
Replace a physical disk 253
Replace disk in active configuration 252
Replace failed disk 252
Replication note 353
Requirements 242
Resume 258
Resynchronization 39, 248, 259
Setup 242
Snapshot resource 269
Status 250
Suspend 258
Swap 250
Synchronize 254
MPIO 498
MTU 48
Multipathing 56, 387
Aliasing 56
Load distribution 388
Path management 389
Mutual CHAP 45, 46
N
Near-line mirroring 354
After configuration 363
Configuration 355
Fix minor disk failure 379
Global options 377
Monitor 358
Overview 354
Performance 377
Properties 378
Rebuild 373
Recover data 365
Recover from failure 379
Remove configuration 378
Replace a physical disk 380
Replace disk in active mirror 380
Replace failed disk 379
Requirements 355
Resume 377
Re-synchronization 359
Rollback 366
Setup 355
Status 364
Suspend 377
Swap 373
Synchronize 373
NetView
SNMP integration 440
Statistics 441
NetWare Client
Assigning resources 172
QLogic
driver 156
Troubleshooting 517
Network configuration 30, 47
Network connectivity 510
Failure 182
NIC Port Bonding 22, 316
O
OID 22, 25
Out of kernel resources error 525
P
Passwords
Add/delete administrator password 41
Change administrator password 41, 44
Patch
Apply 46
Rollback 46
Path failure 190
Performance 225
Mirror 38
Mirroring 259
Near-line mirroring 377
Replication 38, 336
Persistent binding 51, 156, 160, 504
Clients 155
Downstream 151
Troubleshooting 503
Persistent reservation 112
Physical device
Prepare 52
Rename 53
Repair 56
Test throughput 56
Physical Resource Allocation Report 135
Physical resources 50, 71
Check 100
Icons 51
IDE drives 53
Prepare Disks 76
Troubleshooting 497
Physical Resources Allocation Report 134
Physical Resources Configuration Report 133
Ports 180
Power Control options 200, 203
Prefetch 22, 232
Prepare disks 52, 76
pure-ftpd package 48
PVLink 162
Q
Qlc driver 157
QLogic
Configuration 153
HBA 160, 200
iSCSI HBA 103
Ports 166
Target mode settings 153
Queue Depth 516
Quiescent 294
Quota
Group 43, 44
User 43, 44
R
RAID Management
Array 625
Automatic discovery 616
Check actions 635
Console 615
Navigation tree 618
Controller modules 623
Controller settings 620
Discover Storage 615, 617
Automatic 616
Expansion enclosures 617
Manual 615
Disk drive
Assigned 625
Available 625
Empty 625
Failed 625
Hot spare 625
Remove 628
Set 628
Removed 625
Standby 625
Disk drive images 625
Disk drives 625
Interactive images 625
Enclosures 621
Expansion enclosures 621
FalconStor Management console
Discover storage 647
Enclosures tab 647
IPMI information 648
Firmware upgrade 645
S
SafeCache 23, 225, 326
Cache resource 226
Configure 226
Disable 231
Enlarge 231
Properties 231
Status 231
Suspend 231
Troubleshooting 523
SAN Client 61
Access control 63
Add 61, 176
Fibre Channel 169
iSCSI 108
AIX 62, 177
Assign SAN Resources 86
Definition 16
HP-UX 62, 177
iSCSI 106
Linux 62, 177
Solaris 62, 90, 177
Windows 90
SAN Client / Resources Allocation Report 141
SAN Client Usage Distribution Report 140
SAN Resource tab 139
SAN Resource Usage Distribution Report 143
Troubleshooting 524
Service enabled devices
Creating 76
SMI-S 24
Snapshot 261
Agent 24
notification 267, 293
trigger 293
Resource
Check status 268
Delete 269
Expand 269
Mirror 269
offline 496
Options 269
Properties 269
Protect 269
Reinitialize 269
Shrink Policy 269
Troubleshooting 496
Setup 261
Snapshot Copy 276
Status 280
Snapshot Resource
expand 265
SNMP
Advanced topics 444
BMC Patrol 442
CA Unicenter TNG 438
Changing the community name 444
HP OpenView 436
IBM Tivoli NetView 440
Implementing 433
Integration 429
Limit to subnetwork 444
Manager on different network 444
Traps 37, 430
Troubleshooting 523
Using a configuration for multiple Storage Servers 444
snmpd.conf 444
Software updates
Add patch 46
Rollback patch 46
Solaris 157
Internal Fibre Channel drives 157
Solaris Client 62, 177
Expand virtual device 94
Persistent binding 156
Troubleshooting 518
Virtual devices 90
Statistics
ismon 101
Stop Takeover option 208
Storage 24
Remove device 102
Storage Cluster Interlink 183, 185
Port 24, 185, 197
Storage device path failure 190
Storage Pool Configuration Report 146
Storage pools 66
Access control 70
Administrators 66
Allocation Block Size 69
Create 67
Manage 66
Properties 68
Security 70
Set access rights 70
Tag 70
Type 68
Storage quota 43
Storage Server
Authentication 178
Authorization 178
Connect in Console 29
definition 16
Discover 29, 33
Import a disk 55
Network configuration 47
Save/restore configuration 33
Scan LUNs greater than zero 502
Troubleshooting 518
uninstall 496
X-ray 518
Swapping 211
Sync Standby Devices 183, 521
Synchronize Out-of-Sync Mirrors 39, 377
Synchronize Replica TimeMark 324
System
Disk 51
log 473
Management 178
tab 139
System maintenance 47
Halt 49
IPMI 49
Network configuration 47
Reboot 49
Restart network 49
Restart the server 49
Set hostname 48
T
Tachyon HBAs 162
Target mode settings
QLogic 153
Target port binding 151
target server 320
Thin Provisioning 24, 73, 78, 242, 322
Throttle 39
speed 347
tab 346
Throttle window
Add 346
Delete 346
Edit 346
Throughput
Test 56
TimeMark 24
Replication note 353
retention 23, 276
Troubleshooting 523
TimeMark/CDP 287
Add comment 296
Change priority 296
Copy 298
Create manually 296
Delete 315
Disable 315
Failover 222
Free up storage 315
Maximum reached 293
Policies 311, 314
Priority 293, 297
Replication 315
Resume CDP 314
Roll forward 310
Rollback 310
Scheduling 290
Setup 288
Status 294
Suspend CDP 314
TimeView 287, 300
TimeView 25, 287, 300
Recover data 300
Remap 307
Tivoli
U
UEFI 500
USEQUORUMHEALTH 185
User Quota Usage Report 147
V
VAAI 25
Virtual devices 72
Creating 76
Expand 92
expansion FAQ 497, 498
Virtualization 72
Examples 72
Volume set addressing 151, 169, 503
VSA 151, 169, 503
enable for client 503
W
watermark value 326
Windows 2000 Dynamic disks
Expand virtual device 94
Windows Client
Expand virtual device 94
Troubleshooting 513
Virtual devices 90
World Wide Port Names 170
Write caching 60
WWN Zoning 25
WWPN 88, 170
mapping 88
X
X-ray 518
CallHome 471
System Information file 472
Y
YaST 47
Z
ZeroImpact 25
backup 382
Zoning 152
Soft zoning 152