Veritas Guide



Entities

Cluster: The highest-level entity, which contains all service groups supported across all related systems that have membership within a specific cluster implementation. The database cluster contains several systems (hosts) used to support the database server in the event of failure of the system running the database. There is a database cluster, and for some implementations there may also be an application server cluster.

Service Group: A logical grouping of resources. There is a service group called oracle-ha-grp within the database cluster, and a service group called APP-Serv within the application server cluster.

System: One node within a service group. A machine is usually a system. In a two-node cluster there are two systems; ideally one should be in an ONLINE state and the other in an OFFLINE state. Failovers are from one system to another: Veritas will detect the failure of one system and bring the other online.

Resource: Any service, program, hardware, volume manager object, or other entity within the computing environment placed under the control of the cluster within a Service Group.
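The hierarchy above (cluster, then service group, then resources) is what VCS records in its configuration file, main.cf. A minimal, hypothetical fragment in that format is sketched below; the group and system names follow this guide's examples, while the Oracle and Netlsnr attribute values are illustrative assumptions, not taken from a real configuration:

```
// Hypothetical main.cf fragment: one service group with two resources.
group oracle-ha-grp (
    SystemList = { pcsvcs-db1 = 0, pcsvcs-db2 = 1 }
    AutoStartList = { pcsvcs-db1 }
    )

    Oracle oracle (
        Sid = ORAPRD
        Owner = oracle
        Home = "/u01/app/oracle/product"
        )

    Netlsnr listener (
        Owner = oracle
        Home = "/u01/app/oracle/product"
        )

    // Dependency: the listener comes online only after the database.
    listener requires oracle
```

The "requires" line is why the listener/oracle offline and online steps later in this guide must be performed in order.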

Veritas Cluster Server Commands (VCS)


Run these as root.

To See Status
hastatus -sum
The state will show FAULTED if the service group did not start properly.
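A quick way to pick the FAULTED groups out of that status report is to filter the group-state lines (the ones beginning with "B"). The sketch below is a hypothetical helper; the sample text is illustrative of hastatus -sum output, not captured from a real cluster:

```shell
# Hypothetical helper: print the names of service groups whose state
# includes FAULTED, given "hastatus -sum" style group-state ("B") lines.
faulted_groups() {
  awk '$1 == "B" && $6 ~ /FAULTED/ { print $2 }'
}

# Illustrative sample of the "B" lines from "hastatus -sum".
sample='B  oracle-ha-grp  pcsvcs-db1  Y  N  OFFLINE|FAULTED
B  APP-Serv       pcsvcs-as1  Y  N  ONLINE'

printf '%s\n' "$sample" | faulted_groups   # prints: oracle-ha-grp
```

On a live system you would pipe the real report in: hastatus -sum | faulted_groups.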

To Start Cluster
hastart -force
This will try to start the VCS cluster on a particular node/system.

To Clear Fault
hagrp -clear <service group name> -sys <system name>
For example, for the database systems:
hagrp -clear oracle-ha-grp -sys pcsvcs-db1
For the app server:
hagrp -clear APP-Serv -sys pcsvcs-as1

To Bring a Service Group Offline and Online
hagrp -offline <service group name> -sys <system name>
hagrp -online <service group name> -sys <system name>
For example, for the database systems:
hagrp -offline oracle-ha-grp -sys pcsvcs-db1
hagrp -online oracle-ha-grp -sys pcsvcs-db1
For the app server:
hagrp -offline APP-Serv -sys pcsvcs-as1
hagrp -online APP-Serv -sys pcsvcs-as1

To Bring a Resource Offline and Online
hares -offline <resource name> -sys <system name>
For example, for the database systems (perform these two steps in this order!!):
hares -offline listener -sys pcsvcs-db1
hares -offline oracle -sys pcsvcs-db1
(Perform these two steps in this order!!)
hares -online oracle -sys pcsvcs-db1
hares -online listener -sys pcsvcs-db1
For the app server:
hares -offline ps_init -sys pcsvcs-as1

To Switch a System
hagrp -switch <service group name> -to <system name>
For example, assume the database has failed over and you want to bring the processing database back to the primary machine:
hagrp -switch oracle-ha-grp -to pcsvcs-db1

To Stop a System
hastop [-all]
"hastop" will shut down a system; "hastop -all" will shut down all systems within a cluster.
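Because a switch moves the whole service group, it is worth double-checking the command before running it. A hypothetical dry-run wrapper (the group and system names are this guide's examples) that prints the hagrp command for review instead of executing it:

```shell
# Hypothetical dry-run wrapper for a failback: echoes the "hagrp -switch"
# command it would run, so it can be reviewed before execution.
switch_group() {
  group=$1
  target=$2
  echo "hagrp -switch $group -to $target"
}

switch_group oracle-ha-grp pcsvcs-db1
# prints: hagrp -switch oracle-ha-grp -to pcsvcs-db1
```

Replacing echo with eval (or removing it) would turn the sketch into a live command.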

VCS Directories
Installation: /opt/VRTSvcs
Logs: /var/VRTSvcs/log (engine_A.log, oracle_A.log)
Configuration files: /etc/VRTSvcs/conf/config (main.cf)

Veritas Volume Manager Operations


Check the status of volume manager objects (volumes, plex, subdisk, disk group)
vxprint -ht | more
(status should be ENABLED for all)

To Start the Volume Manager GUI
/opt/VRTSvmsa/bin/vmsa
Example: set DISPLAY to your IP first
export DISPLAY=10.10.10.10:0.0   (bash or korn shell)
vmsa &

To check disk status
vxdisk list
Sample output:

# vxdisk list
DEVICE       TYPE      DISK          GROUP     STATUS
c1t0d0s2     sliced    rootdisk      rootdg    online
c1t1d0s2     sliced    rootmirror    rootdg    online
c3t0d0s2     sliced    c3t0          radg      online failing
c3t1d0s2     sliced    c3t1          radg      online failing
c3t8d0s2     sliced    c3t8          radg      online
c3t9d0s2     sliced    c3t9          radg      online
c4t0d0s2     sliced    c4t0          radg      online
c4t1d0s2     sliced    c4t1          radg      online
c4t2d0s2     sliced    -             -         online
c4t8d0s2     sliced    c4t8          radg      online
c4t9d0s2     sliced    c4t9          radg      online
c4t10d0s2    sliced    -             -         online

NOTE: The above output shows two disks flagged as "failing". This status indicates that communication with the disk was lost at one point but has since been restored. The status can be cleared and the disks monitored. If it recurs, there is an underlying problem with the disk and it most likely needs to be replaced.

To clear the failing status for a disk
vxedit -g <diskgroup> set failing=off <diskname>
In this case:
vxedit -g radg set failing=off c3t0
vxedit -g radg set failing=off c3t1
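When several disks are flagged, the vxedit commands can be generated from the vxdisk list output rather than typed by hand. A hypothetical dry-run sketch (the sample table below is illustrative; on a real host you would pipe in actual "vxdisk list" output and review the printed commands before running them):

```shell
# Sketch: emit a "vxedit ... set failing=off" command for every disk that
# "vxdisk list" reports as failing. Commands are printed, not executed.
gen_clear_cmds() {
  awk '/failing/ { printf "vxedit -g %s set failing=off %s\n", $4, $3 }'
}

# Illustrative sample in "vxdisk list" column order.
sample='DEVICE       TYPE    DISK    GROUP   STATUS
c3t0d0s2     sliced  c3t0    radg    online failing
c3t1d0s2     sliced  c3t1    radg    online failing
c3t8d0s2     sliced  c3t8    radg    online'

printf '%s\n' "$sample" | gen_clear_cmds
# prints:
# vxedit -g radg set failing=off c3t0
# vxedit -g radg set failing=off c3t1
```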

To check the status of mirrors
vxprint -lp
Each mirrored volume contains at least two plexes, and each should have a state of "ACTIVE", as in the following output:

Plex:   voldata-01
info:   len=10498504 contiglen=10498120
type:   layout=STRIPE columns=4 width=128
state:  state=ACTIVE kernel=ENABLED io=read-write
assoc:  vol=voldata sd=c3t0-15,c3t1-15,c3t8-15,c3t9-15
flags:  busy complete

Plex:   voldata-02
info:   len=10498504 contiglen=10498120
type:   layout=STRIPE columns=4 width=128
state:  state=ACTIVE kernel=ENABLED io=read-write
assoc:  vol=voldata sd=c4t0-15,c4t1-15,c4t8-15,c4t9-15
flags:  busy complete
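With many plexes it is easy to miss a single non-ACTIVE state in that listing. A hypothetical check that scans "state:" lines in vxprint -lp style output and fails if any plex is not ACTIVE (the sample text is abbreviated and illustrative):

```shell
# Sketch: succeed only if every "state:" line reports state=ACTIVE.
check_plexes() {
  awk '/^state:/ && !/state=ACTIVE/ { bad = 1 } END { exit bad }'
}

# Abbreviated, illustrative "vxprint -lp" output.
sample='Plex:   voldata-01
state:  state=ACTIVE kernel=ENABLED io=read-write
Plex:   voldata-02
state:  state=ACTIVE kernel=ENABLED io=read-write'

if printf '%s\n' "$sample" | check_plexes; then
  echo "all plexes ACTIVE"
else
  echo "found a non-ACTIVE plex"
fi
```

On a live system: vxprint -lp | check_plexes.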

To check the overall status
vxprint -ht | more

To resize a volume containing a file system
/usr/lib/vxvm/bin/vxresize -F vxfs -g oradg voldata2 8g

To administer disks under volume manager (add disk, replace disk, etc.)
# vxdiskadm

Volume Manager Support Operations
Menu: VolumeManager/Disk

 1      Add or initialize one or more disks
 2      Encapsulate one or more disks
 3      Remove a disk
 4      Remove a disk for replacement
 5      Replace a failed or removed disk
 6      Mirror volumes on a disk
 7      Move volumes from a disk
 8      Enable access to (import) a disk group
 9      Remove access to (deport) a disk group
 10     Enable (online) a disk device
 11     Disable (offline) a disk device
 12     Mark a disk as a spare for a disk group
 13     Turn off the spare flag on a disk
 14     Unrelocate subdisks back to a disk
 15     Exclude a disk from hot-relocation use
 16     Make a disk available for hot-relocation use
 list   List disk information
 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus

Select an operation to perform:

Change the log size
haclus -display                    (to display the current log size)
haconf -makerw                     (to make the configuration writable)
haclus -modify LogSize 10240000
haconf -dump -makero
haclus -display

Change a resource attribute example (NetworkHosts)
haconf -makerw
hares -modify Multi-eri0-qfe0 NetworkHosts "addressone" "addresstwo" "addressthree"
haconf -dump -makero
hares -display Multi-eri0-qfe0

Change a type attribute
haconf -makerw
hatype -modify MultiNICA MonitorInterval 60
hatype -modify MultiNICA MonitorTimeout 60
haconf -dump -makero
You can view/verify your changes with:
hatype -display MultiNICA

Create a new Volume Manager volume and file system

You must first create a volume (concatenated or striped), then add a file system to it. Large file support should be enabled in the file system, as in the examples.

Remove old first
Remove the volume from the cluster (see "Remove volume resource from cluster" below).
umount /psprd1-rdo                        [umount file system]
vxassist remove volume MNT-psprd1-rdo     [remove old volume]

Create new
vxassist -g oradg make MNT-psprd1-rdo1 200m oradg09   [create volume on a specific disk]

mkfs -F vxfs -o largefiles /dev/vx/rdsk/oradg/MNT-psprd1-rdo1                  [add a file system to the volume]
mount -F vxfs -o largefiles /dev/vx/dsk/oradg/MNT-psprd1-rdo1 /MNT-psprd1-rdo1 [mount file system, enable large files]

Add mirror
vxassist -g oradg mirror psprd1-rdo1 oradg01   [add mirror on specific disk]

Remove volume resource from cluster
haconf -makerw
hares -delete VOL-psprd-rdo
hares -delete MNT-psprd-rdo
haconf -dump -makero

Add new volume and file system to cluster

haconf -makerw
hares -add MNT-psprd1-rdo1 Mount oracle-ha-grp
hares -modify MNT-psprd1-rdo1 Enabled 1
hares -modify MNT-psprd1-rdo1 MountPoint /psprd1-rdo1
hares -modify MNT-psprd1-rdo1 BlockDevice /dev/vx/dsk/oradg/psprd1-rdo1
hares -modify MNT-psprd1-rdo1 FSType vxfs
hares -add VOL-psprd1-rdo1 Volume oracle-ha-grp
hares -modify VOL-psprd1-rdo1 Enabled 1
hares -modify VOL-psprd1-rdo1 Volume psprd1-rdo1
hares -modify VOL-psprd1-rdo1 DiskGroup oradg
haconf -dump -makero
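The sequence above can be bundled into a parameterized dry run so the same steps can be reused for other volumes. This hypothetical sketch prints the haconf/hares commands for review instead of running them; the names (oradg, oracle-ha-grp, psprd1-rdo1) follow this guide's examples:

```shell
# Sketch: print the command sequence to add a Mount and Volume resource
# for a given volume to a service group. Dry run only; nothing is executed.
add_vol_fs_to_cluster() {
  vol=$1 dg=$2 grp=$3 mnt=$4
  cat <<EOF
haconf -makerw
hares -add MNT-$vol Mount $grp
hares -modify MNT-$vol Enabled 1
hares -modify MNT-$vol MountPoint $mnt
hares -modify MNT-$vol BlockDevice /dev/vx/dsk/$dg/$vol
hares -modify MNT-$vol FSType vxfs
hares -add VOL-$vol Volume $grp
hares -modify VOL-$vol Enabled 1
hares -modify VOL-$vol Volume $vol
hares -modify VOL-$vol DiskGroup $dg
haconf -dump -makero
EOF
}

add_vol_fs_to_cluster psprd1-rdo1 oradg oracle-ha-grp /psprd1-rdo1
```

Piping the output to sh (after review) would execute the sequence for real.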
