Veritas Cluster Server Release Notes
Linux
Legal Notice
Copyright © 2010 Symantec Corporation. All rights reserved.
Symantec, the Symantec Logo, Veritas Storage Foundation and Veritas are trademarks or
registered trademarks of Symantec Corporation or its affiliates in the U.S. and other
countries. Other names may be trademarks of their respective owners.
The Veritas Cluster Server 5.0 Release Notes can be viewed at the following URL:
http://entsupport.symantec.com/docs/283850
The product described in this document is distributed under licenses restricting its use,
copying, distribution, and decompilation/reverse engineering. No part of this document
may be reproduced in any form by any means without prior written authorization of
Symantec Corporation and its licensors, if any.
THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS,
REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT,
ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO
BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL
OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING,
PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED
IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in
Commercial Computer Software or Commercial Computer Software Documentation", as
applicable, and any successor regulations. Any use, modification, reproduction release,
performance, display or disclosure of the Licensed Software and Documentation by the U.S.
Government shall be solely in accordance with the terms of this Agreement.
Symantec Corporation
350 Ellis Street
Mountain View, CA 94043
http://www.symantec.com
Technical Support
Symantec Technical Support maintains support centers globally. Technical
Support’s primary role is to respond to specific queries about product features
and functionality. The Technical Support group also creates content for our online
Knowledge Base. The Technical Support group works collaboratively with the
other functional areas within Symantec to answer your questions in a timely
fashion. For example, the Technical Support group works with Product Engineering
and Symantec Security Response to provide alerting services and virus definition
updates.
Symantec’s maintenance offerings include the following:
■ A range of support options that give you the flexibility to select the right
amount of service for any size organization
■ Telephone and Web-based support that provides rapid response and
up-to-the-minute information
■ Upgrade assurance that delivers automatic software upgrade protection
■ Global support that is available 24 hours a day, 7 days a week
■ Advanced features, including Account Management Services
For information about Symantec’s Maintenance Programs, you can visit our Web
site at the following URL:
www.symantec.com/techsupp/
Customer service
Customer service information is available at the following URL:
www.symantec.com/techsupp/
Customer Service is available to assist with the following types of issues:
■ Questions regarding product licensing or serialization
■ Product registration updates, such as address or name changes
■ General product information (features, language availability, local dealers)
■ Latest information about product updates and upgrades
■ Information about upgrade assurance and maintenance contracts
■ Information about the Symantec Buying Programs
■ Advice about Symantec's technical support options
■ Nontechnical presales questions
■ Issues that are related to CD-ROMs or manuals
Maintenance agreement resources
If you want to contact Symantec regarding an existing maintenance agreement,
please contact the maintenance agreement administration team for your region
as follows:
Symantec Early Warning Solutions
These solutions provide early warning of cyber attacks, comprehensive threat analysis, and countermeasures to prevent attacks before they occur.
Managed Security Services
These services remove the burden of managing and monitoring security devices and events, ensuring rapid response to real threats.
Consulting Services
Symantec Consulting Services provide on-site technical expertise from Symantec and its trusted partners. Symantec Consulting Services offer a variety of prepackaged and customizable options that include assessment, design, implementation, monitoring, and management capabilities. Each is focused on establishing and maintaining the integrity and availability of your IT resources.
Educational Services
Educational Services provide a full array of technical training, security education, security certification, and awareness communication programs.
To access more information about Enterprise services, please visit our Web site
at the following URL:
www.symantec.com
Select your country or language from the site index.
Release Notes
This document includes the following topics:
■ Introduction
■ No longer supported
■ Operating system fresh installs and upgrades for VCS 5.0 MP4
■ Fixed issues
■ Known issues
■ Software limitations
■ Documentation errata
■ VCS documentation
Introduction
This document provides important information about Veritas Cluster Server (VCS)
version 5.0 MP4 for Linux. Review this entire document before you install or upgrade VCS.
The information in the Release Notes supersedes the information provided in the
product documents for VCS.
For the latest information on updates, patches, and software issues for this release,
use the following TechNote on the Symantec Enterprise Support website:
http://entsupport.symantec.com/docs/281993
You can download the latest version of Veritas Cluster Server Release Notes from
the link that is provided in the TechNote.
VCS also provides agents to manage key enterprise applications. Before configuring
an enterprise agent with VCS, verify that you have a supported version of the
agent.
See the Veritas Cluster Server Agent for Oracle Installation and Configuration
Guide for more information.
Table 1-1 New and modified attributes for VCS 5.0 MP4 agents for upgrades from VCS 5.0

DiskGroup
Modified attributes: PanicSystemOnDGLoss (default: 0)

MultiNICA
New attributes: Mii [boolean] (default: 0)

SambaShare
Modified attributes: ArgList { "SambaServerRes:ConfFile", "SambaServerRes:LockDir", ShareName, ShareOptions, "SambaServerRes:Ports", SambaServerRes }

Share
New attributes
Modified attributes
New and modified attributes for the DB2 agent in VCS 5.0 MP4
The new and modified attributes for the DB2 agent in VCS 5.0 MP4 are as follows:
Table 1-2 New and modified attributes for VCS 5.0 MP4 DB2 agent for upgrades from VCS 5.0

Db2udb
Deleted attributes: ContainerName, ContainerType
New attributes: UseDB2start [boolean] (default: 0)
New and modified attributes for the Sybase agent in VCS 5.0 MP4
The new and modified attributes for the Sybase agent in VCS 5.0 MP4 are as follows:
Table 1-3 New and modified attributes for VCS 5.0 MP4 Sybase agent for upgrades from VCS 5.0

Sybase
New attribute
Modified attribute

SybaseBk
New attribute
Modified attribute
Change in attributes
This release has the following changes for VCS attributes:
■ AYATimeout - VCS heartbeat attribute
The default value of the heartbeat attribute AYATimeout is changed from 300
seconds to 30 seconds. [622413]
■ Preonline - VCS service group attribute
You can now localize the Preonline attribute for the nodes in a cluster. [530440]
■ AutoFailOver - VCS service group attribute
If you have configured system zones in campus clusters, you can manually fail over a service group across system zones.
See the Veritas Cluster Server User’s Guide for more information.
New attributes
This release introduces the following new system attributes:
■ HostMonitor—Monitors the usage of resources on the host.
■ HostUtilization—Indicates the usage percentages of resources on the host.
This release introduces the following new service group attributes:
■ PreSwitch—Indicates whether the VCS engine should switch a service group
in response to a manual group switch.
■ PreSwitching—Indicates that the VCS engine invoked the PreSwitch action
function for the agent; however, the action function is not yet complete.
This release introduces the following new resource type level attribute:
■ OfflineWaitLimit—Indicates the number of monitor intervals to wait for the
resource to go offline after completing the offline procedure. Increase the
value of this attribute if the resource is likely to take a longer time to go offline.
See the Veritas Cluster Server User’s Guide for more information.
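As an illustration, a resource type level attribute such as OfflineWaitLimit is set with the hatype command; the type name and value here are illustrative:
# haconf -makerw
# hatype -modify Mount OfflineWaitLimit 2
# haconf -dump -makero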
Table 1-4 New and modified attributes for VCS 5.0 MP3 agents for upgrades from VCS 5.0

Apache
New attributes: PidFile, IntentionalOffline (default: 0)

DiskGroup
New attributes: UmountVolumes (default: 0)
Modified attributes

DNS
New attributes: ResRecord, CreatePTR, OffDelRR

LVMVolumeGroup
New attributes: SupportedActions { volinuse }

Mount
New attributes: RegList { VxFSMountLock }, VxFSMountLock (default: 0)
Modified attributes: SupportedActions { "mountpoint.vfd", "mounted.vfd", "vxfslic.vfd", "chgmntlock", "mountentry.vfd" }

NFSRestart
New attributes

Share
New attributes: SupportedActions { "direxists.vfd" }
■ Support for manual failover of service groups across system zones in campus
clusters
The AutoFailOver attribute controls the service group behavior in response
to service group and system faults. For campus clusters, you can set the value
of the AutoFailOver attribute as 2 to manually fail over the service group across
the system zones that you defined in the SystemZones attribute.
The manual failover functionality requires that you enable the HA/DR license
and that the service group is a non-hybrid service group.
See the Veritas Cluster Server User’s Guide and the Veritas Cluster Server Bundled
Agents Reference Guide for more information.
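A minimal sketch of such a configuration, assuming a failover group named appgroup and two illustrative system zones:
# haconf -makerw
# hagrp -modify appgroup SystemZones node01 0 node02 0 node03 1 node04 1
# hagrp -modify appgroup AutoFailOver 2
# haconf -dump -makero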
Note: If the NICs are connected to different switches or hubs, you must establish
connection between the switches or hubs.
See the Veritas Cluster Server Installation Guide for instructions to configure
private heartbeats that use aggregated interfaces.
See the Veritas Cluster Server Installation Guide for instructions to configure I/O
fencing using DMP policy.
See the Veritas Cluster Server User’s Guide for more information on the
functionality of this agent.
■ Viewing fire drill logs—If a service group is configured with a physical fire drill
group, a tab labelled Fire Drill Logs appears on the secondary tab bar in the
Group:Summary view. Click this tab to view the VCS log messages about the
fire drill group on the remote cluster and the resources that belong to it.
See the Veritas Cluster Server User's Guide for information about fire drills.
unknown: No fire drill has been run, or the Cluster Management Console has come online after the most recent fire drill.
failed: The fire drill group did not come online on the secondary cluster.
If multiple management servers are connected to the global cluster that contains
the primary global group, the table does not show fire drill status for that group.
5 In the Cluster:Groups view, in the Groups Listing table, click the name of the
primary global group.
6 In the Group:Summary view, in the Remote Operations task panel, click Run
fire drill.
You can view results of the fire drill in the Cluster:Groups view, the
Group:Summary view, and in the Group:Fire Drill Logs view.
DisableClusStop: Do not process the hastop -all command; process all other hastop commands.
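For example, to apply this EngineShutdown value (a sketch; the configuration must be writable):
# haconf -makerw
# haclus -modify EngineShutdown DisableClusStop
# haconf -dump -makero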
VCS version numbers take the form:
major.minor.maintenance_patch_num.point_patch_num
For example:
5.0.30.0
Change in behavior: New option for the hastart and had commands
Use the -v option to retrieve concise information about the VCS version. Use the
-version option to get verbose information.
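For example, either option applies to both commands:
# had -v
# hastart -version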
■ VCS
■ Cluster Management Console
■ Database agents
■ Application agents
■ Virtual fire drill support
Features introduced in VCS 5.0
■ VCS
■ Cluster Management Console
■ Database agents
■ Application agents
■ Replication agents
■ Global clustering
■ Fire drill support
Note: Database agents are included on the VCS 5.0 disc. The replication and application agents are available via the Veritas High Availability Agent Pack.
New attributes
VCS 5.0 introduces the following new attributes. See the Veritas Cluster Server
User’s Guide for more information.
Resource type attributes:
■ AgentFile—Complete name and path of the binary for an agent. Use when the
agent binaries are not installed at their default locations.
■ AgentDirectory—Complete path of the directory in which the agent binary and
scripts are located. Use when the agent binaries are not installed at their default
locations.
Cluster attributes:
■ EngineShutdown—Provides finer control over the hastop command.
■ BackupInterval—Time period in minutes after which VCS backs up
configuration files.
■ OperatorGroups—List of operating system user account groups that have
Operator privileges on the cluster.
■ AdministratorGroups—List of operating system user account groups that have
administrative privileges on the cluster.
■ Guests—List of users that have Guest privileges on the cluster.
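As an illustration, cluster attributes such as BackupInterval are set with the haclus command (the interval value here is arbitrary):
# haconf -makerw
# haclus -modify BackupInterval 5
# haconf -dump -makero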
System attributes:
Removed attributes
VCS 5.0 does not use the following attributes:
■ DiskHbStatus—Deprecated. This release does not support disk heartbeats.
Symantec recommends using I/O fencing.
■ MajorVersion—The EngineVersion attribute provides information about the
VCS version.
■ MinorVersion—The EngineVersion attribute provides information about the
VCS version.
■ You can enable the AgentDebug attribute to get more debugging information
from the agent and the database.
Note: The system from where you install VCS must run the same Linux distribution
as the target systems.
Supported hardware
The compatibility list contains information about supported hardware and is
updated regularly. For the latest information on supported hardware visit the
following URL:
http://entsupport.symantec.com/docs/330441
Before installing or upgrading Veritas Cluster Server, review the current
compatibility list to confirm the compatibility of your hardware and software.
Table 1-7 Supported Linux operating system and kernel versions
RHEL 4 Update 8: 2.6.9-89.EL, 2.6.9-89.ELlargesmp
RHEL 5 Update 4
Note: If your system runs an older version of either Red Hat Enterprise Linux or
SUSE Linux Enterprise Server, you must upgrade the operating system before
you attempt to install the VCS software. Refer to the Oracle, Red Hat, or SUSE
documentation for more information on upgrading your system.
Symantec supports only Oracle, Red Hat, and SUSE distributed kernel binaries.
Symantec products operate on subsequent kernel and patch releases provided
the operating systems maintain kernel ABI (application binary interface)
compatibility.
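For example, you can compare the running kernel on each node against the versions listed in Table 1-7:
# uname -r
2.6.9-89.EL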
Information about the latest supported Red Hat errata and updates and SUSE
service packs is available in the following TechNote. The TechNote also includes
any updates to the supported operating systems and software. Read this TechNote
before you install Symantec products.
http://entsupport.symantec.com/docs/281993
RHEL 4:
compat-libgcc-296-2.96-132.7.2.i386.rpm
compat-libstdc++-296-2.96-132.7.2.i386.rpm
compat-libstdc++-33-3.2.3-47.3.i386.rpm
glibc-2.3.4-2.41.i686.rpm
libgcc-3.4.6-10.i386.rpm
libstdc++-3.4.6-10.i386.rpm
compat-libstdc++-33-3.2.3-47.3.x86_64.rpm
glibc-2.3.4-2.41.x86_64.rpm
glibc-common-2.3.4-2.41.x86_64.rpm
libgcc-3.4.6-10.x86_64.rpm
libstdc++-3.4.6-10.x86_64.rpm
java-1.4.2-gcj-compat-1.4.2.0-27jpp.noarch.rpm
RHEL 5:
compat-libgcc-296-2.96-138.i386.rpm
compat-libstdc++-33-3.2.3-61.i386.rpm
compat-libstdc++-296-2.96-138.i386.rpm
glibc-2.5-24.i686.rpm
libgcc-4.1.2-42.el5.i386.rpm
libstdc++-3.4.6-10.i386.rpm
compat-libstdc++-33-3.2.3-61.x86_64.rpm
glibc-2.5-24.x86_64.rpm
glibc-common-2.5-24.x86_64.rpm
libgcc-4.1.2-42.el5.x86_64.rpm
libstdc++-3.4.6-10.x86_64.rpm
java-1.4.2-gcj-compat-1.4.2.0-40jpp.115.noarch.rpm
SLES 9:
compat-32bit-9-200407011229.x86_64.rpm
glibc-32bit-9-200710191304.x86_64.rpm
compat-2004.7.1-1.2.x86_64.rpm
glibc-2.3.3-98.94.x86_64.rpm
libgcc-3.3.3-43.54.x86_64.rpm
libstdc++-3.3.3-43.54.x86_64.rpm
SLES 10:
compat-32bit-2006.1.25-11.2.x86_64.rpm
glibc-32bit-2.4-31.54.x86_64.rpm
compat-2006.1.25-11.2.x86_64.rpm
compat-libstdc++-5.0.7-22.2.x86_64.rpm
glibc-2.4-31.54.x86_64.rpm
libgcc-4.1.2_20070115-0.21.x86_64.rpm
libstdc++-4.1.2_20070115-0.21.x86_64.rpm
SLES 11:
glibc-2.9-13.2
glibc-32bit-2.9-13.2
libgcc43-4.3.3_20081022-11.18
libgcc43-32bit-4.3.3_20081022-11.18
libstdc++43-4.3.3_20081022-11.18
libstdc++43-32bit-4.3.3_20081022-11.18
SLES 11:
glibc-32bit-2.9-13.2
libgcc43-32bit-4.3.3_20081022-11.18
libgcc43-4.3.3_20081022-11.18
libstdc++33-3.3.3-11.9
libstdc++43-32bit-4.3.3_20081022-11.18
libstdc++43-4.3.3_20081022-11.18
SLES 10:
glibc-2.4-31.74.1
glibc-64bit-2.4-31.74.1
libgcc-4.1.2_20070115-0.29.6
libgcc-64bit-4.1.2_20070115-0.29.6
libstdc++-4.1.2_20070115-0.29.6
libstdc++-64bit-4.1.2_20070115-0.29.6
libstdc++33-3.3.3-7.8.1
libstdc++33-64bit-3.3.3-7.8.1
RHEL 5:
compat-glibc-headers-2.3.4-2.26
compat-libgcc-296-2.96-138
compat-libstdc++-296-2.96-138
compat-libstdc++-33-3.2.3-61
glibc-2.5-34
glibc-common-2.5-34
glibc-headers-2.5-34
libgcc-4.1.2-44.el5
libstdc++-4.1.2-44.el5
Supported software
Veritas Cluster Server supports the previous and next versions of Storage Foundation to facilitate product upgrades, when available.
Refer to the Late Breaking News (LBN) <URL> and the Software Compatibility List (SCL) <URL> for the latest updates on software support.
VCS supports the following volume managers and file systems:
■ ext2, ext3, reiserfs, NFS, and bind on LVM2, Veritas Volume Manager (VxVM) 4.1 and 5.0, and raw disks
■ Veritas Volume Manager (VxVM) with Veritas File System (VxFS)
Note: Veritas Storage Foundation 5.0 and later versions support only 64-bit
architecture on Linux. See Veritas Storage Foundation Release Notes for more
details.
Table 1-9 Supported software for the VCS agents for enterprise applications
DB2 agent 5.0 MP4 with VCS 5.0 and later: DB2 Enterprise Server Edition 8.1, 8.2, 9.1, 9.5 on RHEL4, RHEL5, SLES9, SLES10, OEL4, OEL5

Oracle agent: Oracle 11g R2 on RHEL5, SLES10

Sybase agent 5.0 MP4 with VCS 5.0 and later: Sybase Adaptive Server Enterprise 12.5.x and 15 on RHEL4, RHEL5, SLES9, SLES10, OEL4, OEL5; 15 on SLES10, RHEL5
Note: The VCS agent for Oracle version 5.2 with 5.0 MP4 provides intentional offline functionality for the Oracle agent. If you installed the 5.2 agent with an earlier version of VCS, you must disable the intentional offline functionality of the Oracle agent.
See the Installation and Configuration Guide for the agents for more details.
For a list of the VCS application agents and the software that the agents support, see the Veritas Cluster Server Agents Support Matrix on the Symantec website.
No longer supported
VCS no longer supports the following:
■ CampusCluster agent
■ Apache agent configuration wizard
■ The updated Oracle agent does not support Oracle 8.0.x and Oracle 8.1.x.
About upgrading to 5.0 MP4
Table 1-10 lists the supported upgrade paths for Red Hat Enterprise Linux and
Oracle Enterprise Linux.
VCS upgrade and RHEL upgrade: from VCS 4.1 MP4 on RHEL4 U3, or VCS 5.0 on RHEL4 U3, to VCS 5.0 MP4 on RHEL4 U3 and later
VCS upgrade and OEL upgrade: from VCS 5.0 MP2 on OEL4 U4, or VCS 5.0 MP3 on OEL4 U5, to VCS 5.0 MP4 on OEL4 U4 and later
Table 1-11 lists the supported upgrade paths for SUSE Linux Enterprise Server.
Table 1-11 Supported upgrade paths for SUSE Linux Enterprise Server
VCS upgrade and SLES upgrade:
from VCS 4.1 MP4 on SLES9 SP3, or VCS 5.0 on SLES9 SP3, to VCS 5.0 MP4 on SLES9 SP4
from VCS 5.0 MP3 on SLES10 SP2 to VCS 5.0 MP4 on SLES10 SP2
VCS upgrade:
from VCS 5.0 MP3 on SLES10 SP2 to VCS 5.0 MP4 on SLES10 SP2
from VCS 5.0 MP2 on SLES9 SP3 to VCS 5.0 MP4 on SLES9 SP3
from VCS 5.0 RU3 on SLES10 SP2 to VCS 5.0 MP4 on SLES10 SP2
from VCS 5.0 RU4 on SLES10 SP3 to VCS 5.0 MP4 on SLES10 SP3
from VCS 5.0 RU4 on SLES11 to VCS 5.0 MP4 on SLES11
Note: Not all platforms and products have a full installer for this release. In these
cases, you must install an earlier version of the product and upgrade the product
to the 5.0 MP4 release.
The VCS 5.0 MP4 release supports only fresh installs of VCS using the installer script for SLES 10 SP3 Linux on IBM Power or SLES 11 Linux on IBM Power.
Before upgrading from 4.x using the script-based installer
In this release, VCS supports both fresh installs and upgrades from the SF 5.0 Release Update (RU) 4 release or later on SLES 10 SP3 for x86_64 platforms.
See the SUSE documentation as well as the installation section of this document
for more information on upgrading your system.
2 Stop the application agents that are installed on the VxVM disk (for example, the NBU agent).
Perform the following steps to stop the application agents:
■ Take the resources offline on all systems that you want to upgrade.
■ Stop the application agents that are installed on VxVM disk on all the
systems.
This command does not list any processes in the VxVM installation
directory.
3 Make sure that LLT, GAB, and VCS are running on all of the nodes in the
cluster. The installer program cannot proceed unless these processes are
running.
# lltconfig
LLT is running
# gabconfig -a
=====================================================
Port a gen cc701 membership 01
Port h gen cc704 membership 01
# ./installer
or
# ./installmp
The installer starts the product installation program with a copyright message.
It then specifies where it creates the logs. Note the log's directory and name.
3 If you are using the installer, then from the opening Selection Menu, choose:
I for "Install/Upgrade a Product."
4 Enter the names of the nodes that you want to upgrade. Use spaces to separate
node names. Press the Enter key to proceed.
The installer runs some verification checks on the nodes.
5 When the verification checks are complete, press the Enter key to continue.
The installer lists the RPMs to upgrade.
6 The installer stops the product processes, uninstalls RPMs, and then installs, upgrades, and configures VCS.
7 The installer lists the nodes that Symantec recommends you restart.
If you are upgrading from 4.x, you may need to create new VCS accounts if you
used native OS accounts.
You can fail over all your service groups to the nodes that are up: Downtime equals the time that is taken to offline and online the service groups.
You have a service group that you cannot fail over to a node that runs during the upgrade: Downtime for that service group equals the time that is taken to perform an upgrade and restart the node.
# hagrp -state
2 Offline the parallel service groups (sg1 and sg2) and the VXSS group from
the first subcluster. Switch the failover service groups (sg3 and sg4) from the
first subcluster (node01 and node02) to the nodes on the second subcluster
(node03 and node04).
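A sketch of these operations with the example group and node names:
# hagrp -offline sg1 -sys node01
# hagrp -offline sg1 -sys node02
# hagrp -offline sg2 -sys node01
# hagrp -offline sg2 -sys node02
# hagrp -switch sg3 -to node03
# hagrp -switch sg4 -to node04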
3 On the nodes in the first subcluster, stop all VxVM volumes (for each disk
group) that VCS does not manage.
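For example, assuming a disk group named dg1 that VCS does not manage:
# vxvol -g dg1 stopall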
4 Make the configuration writable on the first subcluster.
# haconf -makerw
7 Verify that the service groups are offline on the first subcluster that you want
to upgrade.
# hagrp -state
Output resembles:
8 Perform this step on the nodes (node01 and node02) in the first subcluster if
the cluster uses I/O Fencing. Use an editor of your choice and change the
following:
■ In the /etc/vxfenmode file, change the value of the vxfen_mode variable from scsi3 to disabled. You want the line in the vxfenmode file to resemble:
vxfen_mode=disabled
■ In the /etc/VRTSvcs/conf/config/main.cf file, change the value of the UseFence attribute from SCSI3 to NONE. You want the line in the main.cf file to resemble:
UseFence = NONE
# cp /etc/llttab /etc/llttab.bkp
# cp /etc/llthosts /etc/llthosts.bkp
# cp /etc/gabtab /etc/gabtab.bkp
# cp /etc/VRTSvcs/conf/config/main.cf \
/etc/VRTSvcs/conf/config/main.cf.bkp
# cp /etc/VRTSvcs/conf/config/types.cf \
/etc/VRTSvcs/conf/config/types.cf.bkp
# /opt/VRTSat/bin/vssat showbackuplist
B|/var/VRTSat/.VRTSat/profile/VRTSatlocal.conf
B|/var/VRTSat/.VRTSat/profile/certstore
B|/var/VRTSat/ABAuthSource
B|/etc/vx/vss/VRTSat.conf
Quiescing ...
Snapshot Directory :/var/VRTSatSnapShot
The program starts with a copyright message and specifies the directory
where it creates the logs.
4 The installer performs a series of checks and tests to ensure communications,
licensing, and compatibility.
5 When you are prompted, reply y to continue with the upgrade.
6 The installer ends for the first subcluster with the following output:
The upgrade is finished on the first subcluster. Do not reboot the nodes in
the first subcluster until you complete the Preparing the second subcluster
procedure.
# hastatus -summ
-- SYSTEM STATE
-- System State Frozen
A node01 EXITED 1
A node02 EXITED 1
A node03 RUNNING 0
A node04 RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
2 Stop all VxVM volumes (for each disk group) that VCS does not manage.
3 Make the configuration writable on the second subcluster.
# haconf -makerw
# hagrp -state
#Group Attribute System Value
SG1 State node01 |OFFLINE|
SG1 State node02 |OFFLINE|
SG1 State node03 |OFFLINE|
SG1 State node04 |OFFLINE|
SG2 State node01 |OFFLINE|
SG2 State node02 |OFFLINE|
SG2 State node03 |OFFLINE|
SG2 State node04 |OFFLINE|
SG3 State node01 |OFFLINE|
SG3 State node02 |OFFLINE|
SG3 State node03 |OFFLINE|
SG3 State node04 |OFFLINE|
VxSS State node01 |OFFLINE|
VxSS State node02 |OFFLINE|
VxSS State node03 |OFFLINE|
VxSS State node04 |OFFLINE|
8 Perform this step on node03 and node04 if the cluster uses I/O Fencing. Use
an editor of your choice and change the following:
■ In the /etc/vxfenmode file, change the value of the vxfen_mode variable from scsi3 to disabled. You want the line in the vxfenmode file to resemble:
vxfen_mode=disabled
■ In the /etc/VRTSvcs/conf/config/main.cf file, change the value of the UseFence attribute from SCSI3 to NONE. You want the line in the main.cf file to resemble:
UseFence = NONE
When you later activate the upgraded subcluster, revert these values to vxfen_mode=scsi3 and UseFence = SCSI3.
# gabconfig -xc
# haconf -makerw
Perform the operating system upgrade. After you finish the operating system upgrade, enable VCS, VXFEN, GAB, and LLT.
To enable VCS, VXFEN, GAB, and LLT
◆ On the second subcluster, run the following commands:
# chkconfig llt on
# chkconfig gab on
# chkconfig vxfen on
# chkconfig vcs on
The program starts with a copyright message and specifies the directory
where it creates the logs.
6 The installer ends for the first subcluster with the following output:
vxfen_mode=scsi3
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen nxxxnn membership 0123
Port b gen nxxxnn membership 0123
Port h gen nxxxnn membership 0123
4 Run an hastatus -sum command to determine the status of the nodes, service
groups, and cluster.
# hastatus -sum
-- SYSTEM STATE
-- System State Frozen
A node01 RUNNING 0
A node02 RUNNING 0
A node03 RUNNING 0
A node04 RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
5 After the upgrade is complete, mount the VxFS file systems and start the
VxVM volumes (for each disk group) that VCS does not manage.
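A sketch, assuming an unmanaged disk group dg1 with a VxFS volume vol1 mounted at /data1:
# vxvol -g dg1 startall
# mount -t vxfs /dev/vx/dsk/dg1/vol1 /data1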
In this example, you have performed a phased upgrade of VCS. The service groups were down from the time you took them offline on node03 and node04 until VCS brought them online on node01 or node02.
The program starts with a copyright message and specifies the directory
where it creates the logs.
4 The installer performs a series of checks and tests to ensure communications,
licensing, and compatibility.
5 When you are prompted, reply y to continue with the upgrade.
6 The installer ends for the first subcluster with the following output:
The upgrade is finished on the first subcluster. Do not reboot the nodes in
the first subcluster until you complete the Preparing the second subcluster
procedure.
Fixed issues
Refer to the following sections depending on the VCS version:
■ See “Issues fixed in VCS 5.0 MP4” on page 62.
■ See “Issues fixed in VCS 5.0 MP3” on page 67.
■ See “Issues fixed in VCS 5.0 MP1” on page 70.
■ See “Issues fixed in VCS 5.0” on page 72.
For a list of additional issues fixed in this release, see the following TechNote:
http://entsupport.symantec.com/docs/285869
1469787 The file descriptor opened by HAD on /dev/llt now closes on exec.
1440459 Modified the init.d script of vxfen to ensure that the vxfen service
starts after the vxvm-boot service, when the system boots up.
1424929 Fixed the race condition while sending the GAB CONNECTS message.
1414709 After all the resources get offline, the IntentOnline attribute of the
service group resets to 0.
1404384 Global groups can now switch over to the target cluster when the PreSwitch attribute value is set to TRUE.
1386527 Removed the buffer overflow that occurred during CPU usage
computation.
1369622 Steward process now starts as a daemon when invoked with 'steward
-start'.
1433143 File not found errors while executing custom SQL script.
1484665 Process agent will not probe if encrypted password has spaces in it.
1509742 The HAD h port is not coming up after upgrading the first node of the
cluster.
1516696 Online local hard and firm dependencies are not honored during HAD
startup (not persistent).
1538207 AGFW does not clear the Monitor Timedout flag on Agent RESTART.
1544263 Oracle agent picks up only the last corresponding action from Oracle
error message ignoring the previous error numbers.
1556549 Parent group not autostarted when some of the resources are online before VCS is started.
1596691 Default domain name is shown as A for a secure cluster when haxxx is run for a remote host.
1630437 NFS clients get permission denied errors after NFS server failover.
1631012 LLT: For UDP4, lltconfig configures a link even if an IP address is not
plumbed on that link.
1633973 Group faulted while offline on node fails-over if that node is rebooted;
for global group with no Authority, policy ignores it.
1634399 Mount agent does not support NFS version 4 in Linux, even though
NFS agent supports it.
1665036 VxFEN startup process on retry can give error RFSM GAB err 16 when
cluster is fencing a node out of the cluster.
1671221 On switch, Partial Group goes in ONLINE state, if group was partial
due to a resource reported IntentionalOffline state on node1.
1703756 Error message prompt while online global parallel group for the first
time.
1705397 The clean function of the IP resource would take the underlying NIC resource offline.
1731157 DNS Agent clean does not complete if the resource is reported offline
outside of the VCS.
1749323 LLT should give error if an attempt is made to configure more than 8
links (LLT_MAX_LINK)
1786224 NIC agent does not detect network cable pull event and it does not
report OFFLINE.
1792984 .nfs_lock_ file not removed after system crash and monitor complains.
1834858 Remote Group faults when set up as monitoronly and local SG is taken
offline.
1836575 SMTP notification email should contain Entity name in subject line.
1839299 GAB (fd 18) is continuously returning EAGAIN for vxconfigd port w.
1852513 DiskGroupSnap - assumes all nodes are part of the campus cluster
configuration.
1898247 Netlsnr offline script does not kill listener process when IP is plumbed
but the underlying MultiNicA resource is faulted.
1922408 vxfentsthdw should detect storage arrays which interpret NULL keys
as valid for registrations/reservations.
1923602 During link failure multinic agent failed to fail over the VIP to other
active devices.
1927676 LLT ports are left in an inconsistent state when multiple GAB clients
unregister.
1945539 Add check for ohasd daemon for 11g R2 in ASMInst agent.
1957467 VCS init scripts should honor the number of systems configured in
main.cf or the GAB configuration while deciding on mode (single node
/ multi node) to start VCS.
1974589 Removing a link from LLT results in PANIC due to an invalid lower
STREAMS queue.
612587 The haclus -wait command does not hang now when cluster name is
not specified.
618961 On SUSE nodes, when the fencing driver calls the kernel panic routine,
this routine could get stuck in sync_sys() call and cause the panic to
hang. This allows both sides of the split-brain to remain alive. This
issue is fixed so that the cluster node panics after split-brain.
797703 The output of the vxfenadm command with -d option had unwanted
"^M" character attached to the RFSM state information of all nodes.
The output of the vxfenadm command now does not display the
unwanted "^M" characters.
805121 Partial groups go online erroneously if you kill and restart the VCS
engine.
861792 Service groups are now listed in the groups table in the Recover Site
wizard.
862507 GAB_F_SEQBUSY is no longer set when the sequence request is not sent.
896781 The issue that caused systems to panic intermittently after you disable
array side switch port is fixed.
900244 When a service group switches over, NFS clean is not called anymore.
926849 Fixed an issue that prevented service groups with IPMultiNIC resources
from failing back.
989935 The clean and monitor functions of the Application agent can now
detect the same process/application run by different users.
1016548 The issue that caused node panic with message “GAB: Port f halting
system due to network failure” is fixed.
1032572 The vxfen-startup script is modified to use the xpg4 awk utility (/usr/xpg4/bin/awk).
1038373 Fixed an issue where LLT was causing panic because it was referencing
freed memory.
1050999 Resolved a HAD issue that caused the ShutdownTimeout value to not
work correctly.
1051193 The vxfen unconfigure command now uses the correct key to
preempt/abort from now faulted paths.
1055379 vxfentsthdw works without error when the raw device file is not present.
1056559 Fixed an issue where the NFSRestart monitor threw a "too many open files" error.
1057418 The I/O fencing component can now retrieve the serial number of
LUNs from a Pillar Data array. I/O fencing can now start without any
issues and port b comes up for PillarData Arrays.
1061056 Child processes of the agents no longer make calls to the logging functions of the VCSAgLog object.
1067667 When the VCS engine is stopping with evacuation, even if resource
faults for a group with OnlineRetryLimit greater than zero, the
evacuation completes.
1099651 Fixed the issue where the Process agent flooded the log and message
files with the V-16-1001-9004 error message.
1102457 The vxfen initialization code has been modified to properly initialize
parameters and avoid dereferencing of null pointer.
1133171 When new resources are added, the service threads in the corresponding agent are created, up to a maximum of NumThreads threads.
1133223 All the thread unsafe calls are replaced with thread safe calls.
1137118 The fencing driver retries startup for some time before deciding on a
pre-existing split brain.
1156189 The Sybase resource can now come online even if -s appears in the path of the server name of the resource attribute definition.
1174520 The VCS engine now does not assert if a service group with a single system in SystemList is unfrozen.
1186414 The hastart command and the triggers now run on the locale specified
by the LANG variable.
1189542 Service Group state now does not turn from PARTIAL to ONLINE after
killing the VCS engine. The issue is fixed to show the correct state of
the service group.
1205904 Fixed the Mount agent's problem monitoring two Mount resources with the same MountPoint but different BlockDevice.
1214464 The VCS agent for Sybase now uses the getent passwd command
to check the user details defined in the Owner attribute.
1217482 Fixed the cluster_connector.sh script to use the correct signal to stop
the Java process when the CMC group goes offline.
1228356 The VCS agent for DB2 now uses getent passwd to check the user
details defined in the Owner attribute.
1230862 The nfs_postoffline trigger can now read the value of the NFS agent’s
Nservers attribute.
1247347 The Mount agent error message now indicates that there are trailing
spaces in the BlockDevice or MountPoint attribute value. The user
must remove these extra spaces for the resource to be correctly
configured.
1259756 When NFSv4Support is set to 0, the nfsd and mountd daemons start
with "--no-nfs-version 4" option.
1280144 Subroutines are added to set and reset locale in the ag_i18n_inc
module.
1285439 The clean script of the VCS agent for Sybase now supports removal
of the IPC resources that the Sybase processes allocate.
1298642 The Mount agent umounts all the bindfs filesystems associated with
the mount point before taking the resource offline.
1379532 The online script of the VCS agent for Oracle is fixed to use the startup
force command when the value of the StartUpOpt attribute is set to
STARTUP_FORCE.
784335 The Oracle agent cannot identify the shell when the /etc/passwd file has multiple occurrences of the $Owner string.
702594 The Oracle agent does not export SHLIB_PATH and other environment variables in CSH.
627647 The Action entry point for Oracle fails because the set_environment() function prototype differs.
625490 For the agent framework module, ag_i18n_inc.sh does not invoke
halog when script entry points use the VCSAG_LOGDBG_MSG API,
even if the debug tag is enabled.
620529 Cluster Management Console does not display localized logs. If you
installed language packs on the management server and on VCS 5.0
cluster nodes, Cluster Management Console did not initially show
localized logs.
615582 The RefreshInfo entry point for Mount agent generates erroneous
messages.
609555 The Remote Group Agent wizard in the Java GUI rejects the connection
information for the remote cluster with the domain type other than
the local cluster. Fix: The RGA Wizard can now connect to all supported
domain types irrespective of the domain type of local cluster.
608926 The template file for the DB2 agent does not contain the complete
information for building a DB2 MPP configuration. The template does
not include a service group required in the configuration.
598476 If you have a service group with the name ClusterService online on
the last running node on the cluster, the hasim -stop command appears
to hang.
570992 Cluster Management Console does not display some icons properly.
545469 The Monitor entry point does not detect an online when the Oracle
instance is not started by the user defined in the Owner attribute.
244988 Very large login name and password takes all the service groups
offline. Fix: For group name, resource name, attribute name, type
name, and VCS username and password, the string size is limited to
1024 characters.
n/a The concurrency violation trigger could not offline a service group if
the group had a parent online on the system with local firm
dependency. The concurrency violation continued until the parent
was manually taken offline.
n/a The configuration page for the Symantec Web server (VRTSWeb) offered two Japanese locale options. Both options had UTF-8 encoding, and there was no functional difference between the two.
n/a The agent for Oracle obtained its initialization parameters from the
pfile. VCS could not monitor Oracle instances created from the spfile.
314206 A known issue in Red Hat Enterprise Linux 4 could cause unmount to
fail. When an NFS client does some heavy I/O, unmounting a resource
in the NFS service group may fail while taking the service group offline.
Refer to bugzilla id 154387 for more information.
252347 Behavior of parent group is incorrect when groups are linked with
online global firm and child group faults.
Known issues
The following issues are open for this release of VCS.
set-flow window:100
$ lltconfig -F window:100
Note: LLT over UDP might cause problems on Red Hat Enterprise Linux. The systems might keep logging warnings, CPU usage might increase, and the systems might hang.
51030 Unable to find a suitable remote failover target for global group %s.
Administrative action is required
Edit the /etc/llttab file to replace the names of NICs with those of aggregated interfaces before you start VCS, when the installer prompts after product configuration.
See the Veritas Cluster Server Installation Guide.
service group comes online on the node but the parent service group does not
come online on that node. [1363506]
The following error is displayed in the VCS Engine logs for resources of the parent
service group: "VCS WARNING V-16-1-10285 Cannot online: resource's group is
frozen waiting for dependency to be satisfied"
Workaround: In such a scenario, while taking the parent service group resources
offline, use the following command for the last resource:
# hagrp -offline service_group -sys system_name -clus cluster_name
Here, service_group is the name of the parent service group, system_name is the
name of the system on which the service group is brought offline, and cluster_name
is the name of the cluster to which the system belongs.
However, VCS kernel modules are built only for the non-Xen kernels:
# cat kvers.lst
2.6.18-8.el5v
2.6.18-8.el5
Workaround: Set up your system for booting into the non-Xen kernels. For
instructions, refer to the OS vendor's documentation.
msg=audit(1189772065.053:232113): avc:
denied { search } for pid=29652
comm="vgdisplay" name="LVMVolumeGroup" ...
All partitions fault even if there are errors on only one partition
with the IndepthMonitor database
This issue occurs in an MPP environment when multiple partitions use the same
database. If the Databasename attribute is changed to an incorrect value, all
partitions using the database fault. [568887]
The Oracle agent uses the reference file oraerror.dat, which consists of a list of Oracle errors and the actions to be taken.
See the Veritas Cluster Server Agent for Oracle Installation and Configuration
Guide for a description of the actions.
Currently, the reference file specifies the NOFAILOVER action when the following
Oracle errors are encountered:
The NOFAILOVER action means that the agent sets the resource’s state to OFFLINE
and freezes the service group. You may stop the agent, edit the oraerror.dat file,
and change the NOFAILOVER action to another action that is appropriate for your
environment. The changes go into effect when you restart the agent.
Health check may not work for Oracle 10g R1 and 10g R2
For Oracle 10g R1 and 10g R2, if you set MonitorOption to 1, health check
monitoring may not function when the following message is displayed [589934]:
/usr/lib/libXp.so.6
/usr/lib/libXp.so.6.2.0
Workaround: You must open port 14150 on all the cluster nodes.
Upgrading from VCS 4.x (with CC5.0) leaves the CMC group
frozen
If a VCS 4.x cluster running Cluster Connector 5.0 is upgraded to VCS 5.0 MP3, the CMC service group and other groups (except for ClusterService) are frozen and cannot come online after upgrading and rebooting the node. [1367670]
Workaround: The CMC group must be brought offline manually before upgrading from VCS 4.x (with CC 5.0) to VCS 5.0 MP3.
# haconf -makerw
Workaround: In the Global Application Group Selection panel, select only service
groups that are in the online or partial state. Do not select service groups that are
in the faulted state.
/opt/VRTSweb/bin/stopApp cmc
loadClusterQueryTimeout=60000
Adjust the value as needed to allow complete initial load of your cluster
information.
3 Start the Cluster Management Server web console:
You must manually remove the CMC_SERVICES domain using the command line.
To manually remove all the peripherals in the CMC_SERVICES domain, enter the
following command:
You can determine the host name using the following command:
vssat showpd
/opt/VRTScssim on Unix
Perform the following steps:
■ Navigate to the conf/config directory under this cluster specific directory.
■ Open the types.cf file in an editor and change all instances of the string
"i18nstr" to "str".
■ Open the SFWTypes.cf file in an editor if it exists in this directory and change
all instances of the string "i18nstr" to "str".
■ Repeat these steps for the following files if they exist: MSSearchTypes.cf,
SQLServer2000Types.cf, ExchTypes.cf, SRDFTypes.cf.
The perform check option in the virtual fire drill wizard does
not work in Japanese locale
Running the perform check command in the virtual fire drill wizard in the Japanese
locale results in the following error message [865446]:
Workaround: Change the locale to English, when you run fire drill wizard with
the perform check option.
Software limitations
The following limitations apply to this release.
VCS deletes user-defined VCS objects that use the HostMonitor object
names
If you had defined the following objects in the main.cf file using the reserved
words for the HostMonitor daemon, then VCS deletes these objects when the VCS
engine starts [1293092]:
■ Any group that you defined as VCShmg along with all its resources.
■ Any resource type that you defined as HostMonitor along with all the resources
of such resource type.
■ Any resource that you defined as VCShm.
Symantec recommends that you use the Gold configuration for the
DiskGroupSnap resource.
VxVM site for the diskgroup remains detached after node reboot in
campus clusters with fire drill
When you bring the DiskGroupSnap resource online, the DiskGroupSnap agent
detaches the site from the target diskgroup defined. The DiskGroupSnap agent
invokes VCS action entry points to run VxVM commands to detach the site. These
commands must be run on the node where the diskgroup is imported, which is at
the primary site.
If you attempt to shut down the node where the fire drill service group or the
diskgroup is online, the node goes to a LEAVING state. The VCS engine attempts
to take all the service groups offline on that node and rejects all action entry point
requests. Therefore, the DiskGroupSnap agent cannot invoke the action to reattach
the fire drill site to the target diskgroup. The agent logs a message that the node
is in a leaving state and then removes the lock file. The agent’s monitor function
declares that the resource is offline. After the node restarts, the diskgroup site
still remains detached. [1272012]
Workaround:
You must take the fire drill service group offline using the hagrp -offline
command before you shut down the node or before you stop VCS locally.
If the node has restarted, you must manually reattach the fire drill site to the
diskgroup that is imported at the primary site.
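A sketch of the manual reattach, with illustrative disk group and site names:
# vxdg -g dg1 reattachsite siteA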
vssat showbackuplist
B| /var/VRTSat/.VRTSat/profile/VRTSatlocal.conf
B| /var/VRTSat/.VRTSat/profile/certstore
B| /var/VRTSat/RBAuthSource
B| /var/VRTSat/ABAuthSource
B| /etc/vx/vss/VRTSat.conf
Quiescing ...
Snapshot Directory :/var/VRTSatSnapShot
cd /var/VRTSatSnapShot/
cp ABAuthSource /var/VRTSat/
cp RBAuthSource /var/VRTSat/
cp VRTSat.conf /etc/vx/vss/
cd /var/VRTSatSnapShot/
cp -rp profile /var/VRTSat/.VRTSat/
NFS locking
Due to RHEL 4 Update 2, Update 3, and SLES 9 SP3 issues, lock recovery is not yet supported. Refer to issue 73985 for RHEL issues and bugzilla id 64901 for SLES 9 issues.
NFS failover
This issue occurs on SLES 9 systems.
If the NFS share is exported to the world (*) and the NFS server fails over, the NFS client displays the following error: "Permission denied".
Workaround: Upgrade nfs-utils to the package version nfs-utils-1.0.6-103.28.
Mount agent
The Mount agent mounts a block device at only one mount point on a system.
After a block device is mounted, the agent cannot mount another device at the
same mount point.
Share agent
To ensure proper monitoring by the Share agent, verify that the /var/lib/nfs/etab
file is clear upon system reboot. Clients in the Share agent must be specified as
fully qualified host names to ensure seamless failover.
has a memory leak that can gradually consume the host system’s swap space. This
leak does not occur on Windows systems.
Documentation errata
Veritas Cluster Server Installation Guide
This section covers the additions or corrections to the Veritas Cluster Server
Installation Guide for 5.0 MP3.
Note: If you want to install the VCS Java Console on a Windows workstation, you
must do it after you install and configure VCS. Refer to the "Installing the Java
Console on a Windows workstation" topic in the guide.
# cd /mnt/cdrom/dist_arch/cluster_server/rpms
Where dist is rhel4, rhel5, sles9, or sles10, and arch is i686 or x86_64 for
RHEL and i586 or x86_64 for SLES.
4 Install the RPM using the rpm -i command.
# rpm -i VRTScscm-5.0.40.00-MP4_GENERIC.noarch.rpm
The system clocks of the root broker and authentication brokers must be in
sync.
■ Topic: Setting up /etc/llttab
Issue: The name of the sample file is incorrect.
Replace the following text:
The order of directives must be the same as in the sample file
/opt/VRTSllt/llttab.
With:
The order of directives must be the same as in the sample file
/opt/VRTSllt/sample-llttab.
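For reference, a minimal sample llttab for Linux, with illustrative node ID, cluster number, and NIC names:
set-node node01
set-cluster 100
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -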
■ Topic: Configuring I/O fencing
Issue: In step 6 of the procedure, the sample command to check the updated
/etc/vxfenmode configuration is incorrect.
Replace the following command:
With:
2 If you plan to upgrade the patch levels on more than one VCS node, repeat
steps 3 to 7 on each of those nodes.
3 Stop VCS.
# hastop -local
# killall CmdServer
# /etc/init.d/vxfen stop
# /etc/init.d/gab stop
# /etc/init.d/llt stop
6 Upgrade the patch level for the operating system to one of the supported
patch levels.
7 Reboot the upgraded node.
8 Switch back the service groups to the upgraded node.
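For example, with illustrative group and node names:
# hagrp -switch appgroup -to node01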
To run haxxx commands non-interactively using the LDAP credentials you stored
1 Set the VCS_HOST environment variable.
2 Unset the VCS_DOMAIN and VCS_DOMAINTYPE environment variables if these are already set. After you unset these variables, the haxxx commands run without prompting for the password.
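A sketch, with an illustrative host name:
# export VCS_HOST=node01
# unset VCS_DOMAIN
# unset VCS_DOMAINTYPE
# hasys -state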
Refer to the corresponding manual page for more information on the commands.
# hastop -all
2 Make sure that the port h is closed on all the nodes. Run the following
command on each node to verify that the port h is closed:
# gabconfig -a
The script cleans up the disks and displays the following status messages.
Warning: The cluster might panic if any node leaves the cluster membership before
the vxfenswap script replaces the set of coordinator disks.
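The import command that the following options describe reads the coordinator disk group name from the /etc/vxfendg file:
# vxdg -tfC import `cat /etc/vxfendg`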
where:
-t specifies that the disk group is imported only until the node restarts.
-f specifies that the import is to be done forcibly, which is necessary if one
or more disks is not accessible.
-C specifies that any import locks are removed.
4 Turn off the coordinator attribute value for the coordinator disk group.
5 To remove disks from the coordinator disk group, use the VxVM disk
administrator utility vxdiskadm.
6 Perform the following steps to add new disks to the coordinator disk group:
■ Add new disks to the node.
■ Initialize the new disks as VxVM disks.
■ Check the disks for I/O fencing compliance.
■ Add the new disks to the coordinator disk group and set the coordinator
attribute value as "on" for the coordinator disk group.
See the Veritas Cluster Server Installation Guide for detailed instructions.
Note that though the disk group content changes, the I/O fencing remains in
the same state.
7 Make sure that the /etc/vxfenmode file is updated to specify the correct disk
policy.
See the Veritas Cluster Server Installation Guide for more information.
8 From one node, start the vxfenswap utility. You must specify the diskgroup
to the utility.
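For example, with the typical coordinator disk group name:
# vxfenswap -g vxfencoorddg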
The utility performs the following tasks:
■ Backs up the existing /etc/vxfentab file.
■ Creates a test file /etc/vxfentab.test for the diskgroup that is modified on
each node.
■ Reads the diskgroup you specified in the vxfenswap command and adds
the diskgroup to the /etc/vxfentab.test file on each node.
■ Verifies that the serial number of the new disks are identical on all the
nodes. The script terminates if the check fails.
■ Verifies that the new disks can support I/O fencing on each node.
9 If the disk verification passes, the utility reports success and asks if you want
to commit the new set of coordinator disks.
10 Review the message that the utility displays and confirm that you want to commit the new set of coordinator disks. Otherwise, skip to step 11.
However, the same error can occur when the private network links are working
and both systems go down, system 1 restarts, and system 2 fails to come back up.
From the view of the cluster from system 1, system 2 may still have the
registrations on the coordinator disks.
To resolve actual and apparent potential split-brain conditions
◆ Depending on the split-brain condition that you encountered, do the following:
3 Restart system 1.
4 Restart system 2.
# hastop -all
Make sure that the port h is closed on all the nodes. Run the following
command to verify that the port h is closed:
# gabconfig -a
3 Import the coordinator disk group. The file /etc/vxfendg includes the name
of the disk group (typically, vxfencoorddg) that contains the coordinator
disks, so use the command:
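# vxdg -tfC import `cat /etc/vxfendg`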
where:
-t specifies that the disk group is imported only until the node restarts.
-f specifies that the import is to be done forcibly, which is necessary if one
or more disks is not accessible.
-C specifies that any import locks are removed.
4 To remove disks from the disk group, use the VxVM disk administrator utility,
vxdiskadm.
You may also destroy the existing coordinator disk group. For example:
■ Verify whether the coordinator attribute is set to on.
■ If the coordinator attribute value is set to on, you must turn off this
attribute for the coordinator disk group.
5 Add the new disk to the node, initialize it as a VxVM disk, and add it to the
vxfencoorddg disk group.
6 Test the recreated disk group for SCSI-3 persistent reservations compliance.
7 After replacing disks in a coordinator disk group, deport the disk group:
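# vxdg deport `cat /etc/vxfendg`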
8 Verify that the I/O fencing module has started and is enabled.
# gabconfig -a
# vxfenadm -d
Make sure that I/O fencing mode is not disabled in the output.
9 If necessary, restart VCS on each node:
# hastart
ProcessOnOnly agent
Replace the agent's description with the following text:
The ProcessOnOnly agent starts and monitors a process that you specify. You can use the agent to make a process highly available or to monitor it. This resource's Operation value is OnOnly. VCS uses this agent internally to monitor security processes in a secure cluster.
Replace the text under the Dependency heading with the following text:
No child dependencies exist for this resource.
Remove or ignore figure 5-4 under the Dependency heading; it is incorrect.
haremajor
haremajor - Change the major numbers for disk partitions or volumes.
SYNOPSIS
haremajor -sd major_number
haremajor -vx major_number_vxio major_number_vxspec
haremajor -atf major-number
AVAILABILITY
VRTSvcs
DESCRIPTION
The haremajor command can be used to reassign major numbers of block devices
such as disk partitions or Veritas Volume Manager volumes. NFS clients know
the major and minor numbers of the block device containing the file system
exported on the NFS server. Therefore, when making the NFS server highly
available, it is important to make sure that all nodes in the cluster that can act as
NFS servers have the same major and minor numbers for the block device. The
haremajor command can be used to change major numbers used by a system when
necessary. Use the -sd option to reassign a major number for a disk partition
managed by the SD driver. Minor numbers will automatically be the same on both
systems. Use the -vx option to reassign the major number for the Volume Manager
volume device driver (vxio) as well as the vxspec device used by the Volume
Manager process vxconfigd. Currently assigned major numbers can be determined
by entering the command:
grep '^vx' /etc/name_to_major
Note that minor numbers for volumes can be changed using the reminor option
of the vxdg command; see the manual page for vxdg(1M). Use the -atf option to
reassign the major number for the ATF (Application Transparent Failover) driver
on a system.
OPTIONS
■ -sd major_number
Reassign the major number used by the SD driver on the system to major_number.
■ -vx major_number_vxio major_number_vxspec
Reassign the major numbers used by the Volume Manager device drivers (vxio and vxspec).
■ -atf major_number
Reassign the major number used by the ATF driver to major_number.
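For example, to reassign the Volume Manager major numbers (the values are illustrative):
# haremajor -vx 32 33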
SEE ALSO
vxdg(1M)
COPYRIGHTS
Copyright © 2008 Symantec.
All rights reserved.
VCS documentation
The software disc contains the documentation for VCS in Portable Document
Format (PDF) in the cluster_server/docs directory.
You can access the VCS 5.0 MP4 documentation online at the following URL:
http://www.symantec.com/business/support/overview.jsp?pid=15107
Table 1-17 lists the documentation for the VCS component - Symantec Product
Authentication Service.
1 Set the LC_ALL environment variable:
export LC_ALL=C
See incident 82099 on the Red Hat Linux support website for more information.
2 Add the following line to /etc/man.config:
MANPATH /opt/VRTS/man
3 In /etc/man.config, change the line:
MANSECT 1:8:2:3:4:5:6:7:9:tcl:n:l:p:o
to:
MANSECT 1:8:2:3:4:5:6:7:9:tcl:n:l:p:o:3n:1m
Documentation feedback
Your feedback on product documentation is important to us. Send suggestions
for improvements and reports on errors or omissions to
[email protected]. Include the title and document version (located
on the second page), and chapter and section titles of the text on which you are
reporting.