International Technical Support Organization

Implementing SAP R/3 4.5B Using Microsoft Cluster Server on IBM Netfinity Servers

SG24-5170-01
November 1999
Take Note! Before using this information and the product it supports, be sure to read the general information in Appendix A, Special notices on page 181.
Second Edition (November 1999)

This edition applies to SAP R/3, Release 4.5B for Microsoft Windows NT 4.0.

Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. HZ8, Building 678
P.O. Box 12195
Research Triangle Park, NC 27709-2195

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 1998, 1999. All rights reserved.
Note to U.S. Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
Preface . . . vii

Chapter 1. Introduction to high availability computing . . . 1
1.1 Clustering - a means to an end . . . 1
1.2 What clustering provides . . . 1
1.3 Business data - to replicate or not? . . . 2
1.4 Disk sharing . . . 2
1.5 MSCS-based solutions . . . 3
1.5.1 Shared-disk configurations . . . 3
1.6 What the user sees . . . 4
1.7 SAP R/3 support for Microsoft clustering . . . 4
1.8 IBM Netfinity clustering solutions . . . 4
Chapter 2. SAP R/3 and high availability . . . 5
2.1 SAP R/3 components . . . 5
2.2 Using MSCS with SAP R/3 . . . 7
2.3 Alternate approaches to SAP R/3 high availability . . . 9
2.3.1 Cold standby server . . . 10
2.3.2 Replicated database . . . 11
2.3.3 Replicated DBMS server . . . 14
2.3.4 Multiplatform solutions . . . 16
2.4 Microsoft Cluster Server . . . 16
2.4.1 Resources . . . 18
2.4.2 Resource Monitor . . . 19
2.4.3 Dependencies . . . 19
2.4.4 Resource types . . . 20
2.4.5 Resource states . . . 22
2.4.6 Resource groups . . . 22
2.4.7 Quorum resource . . . 24
2.4.8 TCP/IP . . . 24
2.4.9 Additional comments about networking with MSCS . . . 25
2.4.10 Domains . . . 27
2.4.11 Failover . . . 27
2.4.12 Failback . . . 29
2.4.13 LooksAlive and IsAlive . . . 29
2.5 Microsoft Cluster Service and SAP R/3 . . . 30
2.5.1 SAP R/3 extensions . . . 30
2.5.2 Cluster Group . . . 30
2.5.3 Group SAP-R/3 <SID> . . . 32
2.5.4 Database resource group . . . 32
2.6 Backup and recovery . . . 33
2.6.1 Complete backup . . . 34
2.6.2 Tape drives . . . 34
2.6.3 Unique file identification . . . 35
2.6.4 Backup scheduling . . . 36
2.6.5 Offline backups . . . 37
2.6.6 MSCS-special files . . . 37
Chapter 3. Hardware configuration and MSCS planning . . . 39
3.1 Checklist for SAP MSCS installation . . . 40
3.1.1 Minimum cluster hardware requirements . . . 40
3.1.2 Minimum cluster software requirements . . . 41
3.2 Certification and validation of hardware . . . 42
3.2.1 iXOS . . . 43
3.2.2 Microsoft . . . 44
3.2.3 Certification of SAP R/3 and Windows 2000 . . . 45
3.3 Netfinity sizing . . . 46
3.3.1 Terms and definitions . . . 46
3.3.2 What is to be sized? . . . 47
3.3.3 Sizing methodology . . . 47
3.4 Disk layouts . . . 50
3.4.1 Operating system . . . 50
3.4.2 Shared external drives . . . 55
3.5 ServeRAID SCSI configurations . . . 62
3.5.1 Netfinity EXP15 . . . 62
3.5.2 Netfinity EXP200 . . . 64
3.5.3 Ultra2 and LVDS . . . 64
3.5.4 SAP R/3 disk configurations . . . 65
3.5.5 Cluster configuration . . . 66
3.5.6 SCSI tuning recommendations . . . 67
3.6 Fibre Channel configurations . . . 68
3.6.1 IBM Netfinity Fibre Channel components . . . 68
3.6.2 Cluster configurations with Fibre Channel components . . . 72
3.6.3 Fibre Channel tuning recommendations . . . 76
3.7 Network configurations . . . 76
3.7.1 Components . . . 77
3.7.2 Different name resolution methods . . . 79
3.7.3 Resolve IP address to the correct host name . . . 80
3.7.4 Redundancy in the network path . . . 81
3.7.5 Auto-sensing network adapters . . . 81
3.7.6 Redundant adapters . . . 82
3.7.7 Load balancing . . . 83

Chapter 4. Installation and verification
4.1 General overview of the installation process
4.2 Setting up security
4.2.1 Windows NT security planning
4.2.2 SAP R/3 security planning
4.2.3 DBMS security planning
4.3 Hardware configuration and installation
4.4 Windows NT installation
4.5 MSCS pre-installation testing
4.6 Basic Windows NT tuning
4.6.1 Server service tuning
4.6.2 Page file tuning
4.6.3 4 GB tuning
4.6.4 Remove unnecessary drivers and protocols
4.7 Microsoft Cluster Server installation
4.8 Service pack and post-SP installation steps
4.9 MSCS verification
4.9.1 Checking the mapping of host names
4.9.2 Test the failover process
4.10 Create the installation user account
4.11 SAP and DBMS installation
4.12 SAP verification . . . 110
4.12.1 Connection test . . . 110
4.12.2 SAP system check . . . 110
4.12.3 System log check . . . 110
4.12.4 Profiles check . . . 111
4.12.5 Processes check . . . 112
4.12.6 SAPWNTCHK . . . 113
4.13 DBMS verification . . . 114
4.13.1 Oracle . . . 114
4.13.2 DB2 . . . 114
4.13.3 SQL Server . . . 114
4.14 Backbone configuration . . . 114
4.15 SAP cluster verification . . . 117
4.16 Tuning . . . 119
4.16.1 Advanced Windows NT tuning . . . 119
4.16.2 General SAP tuning . . . 119
4.16.3 SAP tuning . . . 122
4.16.4 SAP tuning documentation and publications . . . 126
4.16.5 DB tuning . . . 127
4.17 Configuring a remote shell . . . 127

Chapter 5. Installation using Oracle . . . 129
5.1 Preliminary work . . . 130
5.2 Oracle installation . . . 130
5.3 Installation of the R/3 setup tool on Node A . . . 133
5.4 Installation of the R/3 Central Instance on Node A . . . 133
5.5 Install the R3Setup files for cluster conversion . . . 135
5.6 SAP cluster conversion . . . 136
5.7 Oracle cluster conversion . . . 139
5.8 Completing the migration to an MSCS . . . 143
Chapter 6. Installation using DB2 . . . 145
6.1 Preliminary work . . . 146
6.2 DB2 installation . . . 146
6.3 Install the R3Setup tool on Node A . . . 149
6.4 Install the Central Instance and Database Instance . . . 151
6.5 DB2 cluster conversion . . . 152
6.6 Install R3SETUP files for cluster conversion . . . 153
6.7 SAP cluster conversion . . . 154
6.8 Complete the Migration to MSCS . . . 156

Chapter 7. Installation using SQL Server . . . 159
7.1 Preliminary work . . . 160
7.2 Install SQL Server and SAP R/3 . . . 160
7.3 Install the R/3 Setup tool . . . 161
7.4 Install the SAP R/3 Central Instance . . . 162
7.5 Convert the database to cluster operation . . . 164
7.6 Install the cluster conversion tool . . . 164
7.7 SAP cluster conversion . . . 165
7.8 Complete the MSCS migration . . . 166
7.9 Removal of unused resources . . . 167
Chapter 8. Verifying the installation . . . 169
8.1 How to troubleshoot the system at the end of the installation . . . 169
8.2 Log files . . . 170
8.2.1 MSCS log file . . . 170
8.2.2 SAP log files . . . 172
8.3 Services . . . 173
8.3.1 Oracle . . . 173
8.3.2 DB2 . . . 174
8.3.3 SQL Server . . . 175
8.4 Accounts and users . . . 175
8.4.1 Oracle . . . 176
8.4.2 DB2 users . . . 177
8.4.3 SQL Server . . . 178
8.5 R3Setup . . . 179
Appendix A. Special notices . . . 181

Appendix B. Related publications . . . 183
B.1 International Technical Support Organization publications . . . 183
B.2 Redbooks on CD-ROMs . . . 183
B.3 Related Web sites . . . 183
B.3.1 Netfinity technology . . . 184
B.3.2 Windows NT . . . 184
B.3.3 Microsoft Cluster Server . . . 184
B.3.4 SAP . . . 184
B.3.5 Oracle . . . 185
B.3.6 DB2 . . . 185
B.3.7 SQL Server . . . 185
B.4 Downloadable documents . . . 185
B.4.1 Microsoft Windows NT . . . 185
B.4.2 Microsoft Cluster . . . 185
B.4.3 SAP R/3 . . . 185
B.5 SAPSERV FTP site . . . 186
B.6 OSS notes . . . 187
B.7 Knowledge Base articles . . . 188
B.8 Other publications . . . 191

How to get ITSO redbooks . . . 193
List of abbreviations . . . 195
Index . . . 197
ITSO redbook evaluation . . . 205
Preface
Many of today's corporations run their businesses on the SAP R/3 application suite. High availability of these applications to the user community is essential: downtime means lost sales, lost profits, and worse. The combination of SAP R/3 and Microsoft Cluster Server is an important element in providing the required access to key business applications.

This book will help you plan and install SAP R/3 4.5B in a Microsoft Cluster Server environment running Microsoft Windows NT 4.0 on Netfinity servers. Installation procedures complement the existing SAP documentation and cover the integration of the three major database management systems: Oracle, DB2 and SQL Server.

The first two chapters introduce the concepts of clustering and high availability in SAP R/3 environments. Chapter 3 describes the planning needed before implementing a clustered SAP R/3 configuration, including certification of hardware and software components, server sizing, disk layout and network configurations. SAP R/3 and Windows 2000 are also discussed. The installation part of the book is a step-by-step set of instructions that leads the reader through the process of installing SAP R/3, Microsoft Cluster Server and the particular database you choose to install: Oracle, DB2 or SQL Server. Finally, Chapter 8 offers tips on how to verify your installation and where to look if you have problems.

This book should be especially helpful to people who are involved with planning or installing SAP R/3 in a Microsoft Cluster Server environment: technical planners and IT managers in user organizations who need to understand the requirements of such an installation, IT professionals or consultants who need to install SAP R/3 in an MSCS environment, and experienced SAP R/3 professionals who need to understand the ramifications of clustering on SAP R/3 and how to implement such a solution.
electronic circuit repair and the other is with a prototype of the IBM TrackPoint strain gauge design. He has a master's degree in Mechanical Engineering from Manhattan College, New York.

Edward Charles is a Senior IT Specialist in IBM Global Services, Network Integration, from Fort Lauderdale in Florida. He has over 10 years of experience designing enterprise networks. He holds a BA in Management Information Systems and an MBA. He is a certified SAP R/3 consultant on UNIX/Oracle Release 4.x. Edward is also a certified Cisco Network Associate, CNE and MCP. For the past five years he has been consulting and teaching networking for IBM.

David Dariouch is a Systems Engineer in France. He has one year of experience within IBM as an Advisory IT Specialist supporting Netfinity technologies and network operating systems implementation on the Netfinity Presales Technical Support team. His areas of expertise include Windows NT-based ERP solutions and Netfinity hardware and software engineering. He is part of the IBM SAP Technical Focus Group in EMEA. David holds an Engineering Degree in Computer Science from the Ecole Speciale de Mecanique et d'Electricite school of engineering in Paris.

Olivier De Lampugnani is an IT Specialist working for IBM Global Services in the Service Delivery EMEA West group, located in La Gaude, France. He has been working with IBM for three years. His areas of expertise include all Microsoft Windows NT products and Netfinity hardware. Olivier is a Microsoft Certified Professional in Windows NT Server and Workstation.

Mauro Gatti is an Advisory IT Specialist in Netfinity Presales Technical Support in Italy. He is a Microsoft Certified Systems Engineer. He is responsible for supporting the main SAP installations on Netfinity servers in Italy. He is part of the IBM SAP Technical Focus Group in EMEA. Before joining IBM a year ago, Mauro was a trainer and consultant for Microsoft, Hewlett-Packard, IBM and other companies. Mauro holds a degree in Physics and a PhD in Theoretical Physics.

Bill Sadek is an Advisory Specialist for SAP R/3 at the International Technical Support Organization, Raleigh Center. He is a certified SAP R/3 Application consultant in Sales and Distribution. He writes extensively on SAP R/3 and Windows NT. Before joining the ITSO, Bill worked in IBM Global Services as an SAP R/3 Architect and SAP R/3 Sales and Distribution Consultant. Bill has 23 years of experience with IBM.
Figure 1. The team (L-R): Matthew, Mauro, David Watts, Bill, Ed, David Dariouch (inset: Olivier)
This book is based on High Availability for SAP R/3 on IBM Netfinity Servers, SG24-5170-00. Thanks to the authors:

Bill Sadek
Peter Dejaegere
Guy Hendrickx
Fabiano Matassa
Torsten Rothenwaldt

Thanks to the following people from the ITSO Center in Raleigh:

Gail Christensen
Tate Renner
Linda Robinson
Shawn Walsh

Thanks to the following people from IBM for their invaluable contributions to the project:

Steve Britner, PC Institute, Raleigh
Andrew Castillo, North America ERP Solution Sales, San Jose
Peter Dejaegere, IBM SAP International Competency Center, Walldorf
Rainer Goetzmann, IBM SAP International Competency Center, Walldorf
Andreas Groth, Second Level EMEA Technical Support, Greenock
Thomas Knueppel, Netfinity SAP Sales, Stuttgart
Gregg McKnight, Netfinity Development Performance, Raleigh
Salvatore Morsello, Netfinity Presales Technical Support, Milan
Kiron Rakkar, MQSeries Early Programs, Raleigh
Torsten Rothenwaldt, Netfinity Technical Support, Frankfurt
Ralf Schmidt-Dannert, Advanced Technical Support for SAP R/3, Foster City
Siegfried Wurst

Thanks to the following people from SAP AG for their invaluable contributions to the project:

Frank Heine
Reiner Hille-Doering
Comments welcome
Your comments are important to us! We want our redbooks to be as helpful as possible. Please send us your comments about this or other redbooks in one of the following ways:

- Fax the evaluation form found in "ITSO redbook evaluation" on page 205 to the fax number shown on the form.
- Use the online evaluation form found at http://www.redbooks.ibm.com/
- Send your comments in an Internet note to [email protected]
more complicated applications (for example, SAP R/3 servers), there must be a certain sequence to the restart. Certain resources, such as shared disks and TCP/IP addresses, must be transferred and started on the surviving server before the application can be restarted. Beyond that, other applications (for example, database servers) must have clustering awareness built into them so that transactions can be rolled back and logs can be parsed to ensure that data integrity is maintained.

Microsoft Cluster Server provides high availability only. The Microsoft solution does not yet address scalability, load balancing of processes, or near-100% uptime. These can currently be achieved only through more mature clustering, such as that implemented in RS/6000 SP systems.

Microsoft also offers its Windows Load Balancing Service, part of Windows NT 4.0 Enterprise Edition. It installs as a standard Windows NT networking driver and runs on an existing LAN. Under normal operations, Windows Load Balancing Service automatically balances the networking traffic between the clustered computers.
The external resources (that is, the drives and the data on the drives) are controlled (or owned) by one of the two servers. Should that server fail, the ownership is transferred to the other server. The external resources can be divided into groups so that each server can own and control a subset of them. The shared-disk configuration is also known as a swing-disk configuration.
This configuration offers the advantages of lower cost and quick recovery. Only one set of disks is required to hold the data used by both servers. The external disk enclosure should be configured for RAID to ensure data redundancy. When a failure does occur, since there is only one copy of the data, the time to recover and bring the failed system back online is minimal.
There are three critical R/3 system components that cannot be made redundant by configuring multiple instances of them on different machines:

- DBMS
- Enqueue service
- Message service

These three components are therefore single points of failure in the SAP R/3 architecture. The R/3 components can thus be divided into two groups, depending on their importance for system availability:

- Services that are R/3 system-wide single points of failure. The enqueue and message processes and the DBMS cannot be configured redundantly and require human intervention for recovery. These services should be centralized to reduce the number of servers requiring additional protection against failure.
- Services that are not single points of failure. These services may be configured redundantly and should be distributed on several servers.

The following table summarizes the elements of an R/3 system and their recovery characteristics:
Table 1. Redundancy and recovery of SAP R/3 services
R/3 service     Recovery
SAPGUI          Manual restart by user; reconnect to previous context may be used
Dispatcher      Manual restart of application service by system administrator
Dialog          Automatic restart by dispatcher
Batch           Automatic restart by dispatcher
Update          Automatic restart by dispatcher
Enqueue         Automatic restart by dispatcher
Message         Manual restart by system administrator
Gateway         Automatic restart by dispatcher
Spool           Automatic restart by dispatcher
saprouter       Manual restart by system administrator
atp server      Automatic restart by dispatcher
Database        Manual restart by system administrator
Note: Automatic restart by the dispatcher assumes that the dispatcher itself is not affected by the failure. It is possible that locks held by one user may be simultaneously granted to another user.
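For planning or monitoring scripts it can help to keep Table 1 in machine-readable form. The short Python rendering below simply transcribes the table above; the dictionary and the filter are illustrative, not part of any SAP tool:

    # Recovery behavior per R/3 service, transcribed from Table 1.
    RECOVERY = {
        "SAPGUI": "Manual restart by user",
        "Dispatcher": "Manual restart of application service by system administrator",
        "Dialog": "Automatic restart by dispatcher",
        "Batch": "Automatic restart by dispatcher",
        "Update": "Automatic restart by dispatcher",
        "Enqueue": "Automatic restart by dispatcher",
        "Message": "Manual restart by system administrator",
        "Gateway": "Automatic restart by dispatcher",
        "Spool": "Automatic restart by dispatcher",
        "saprouter": "Manual restart by system administrator",
        "atp server": "Automatic restart by dispatcher",
        "Database": "Manual restart by system administrator",
    }

    # Services whose failure requires human intervention:
    needs_admin = [s for s, r in RECOVERY.items() if r.startswith("Manual")]
    print(needs_admin)  # ['SAPGUI', 'Dispatcher', 'Message', 'saprouter', 'Database']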
A second group of critical points comes from the network environment. The two file shares SAPLOC and SAPMNT must be available. Some installations use an
Internet Domain Name Service (DNS) or Windows Internet Name Service (WINS) for name resolution. DNS or WINS are also single points of failure. Additionally, the Windows NT domain controller is needed to log on to the R/3 service accounts.

To summarize, the following components must be protected to make the R/3 system highly available:

- DBMS
- R/3 application service on the enqueue/message host (central instance)
- File shares SAPLOC and SAPMNT
- Name resolution service (DNS or WINS)
- Windows NT domain controller
The domain controller is made redundant by configuring multiple domain controllers as primary and backup controllers. In a similar fashion, there are ways to provide redundant DNS or WINS; the details are outside the scope of this redbook. How to protect the R/3 central instance (services and file shares) and the database is discussed in the remainder of this chapter.
This cluster configuration (database server and central instance of the same R/3 system on two cluster nodes with mutual failover) is the only one supported by SAP. There is no support for other possible combinations (for example, failover of a central production system to a test server, failover of the central instance to another application server, or running an additional application server on the DBMS machine). A special cluster combination may work, but the R/3 installation procedure is not prepared to handle it, and there will be no support from SAP for daily operations or release upgrades.

A supported SAP R/3 cluster configuration has to fulfill the following conditions:

- The hardware must be IBM ServerProven. For a complete list of IBM tested Netfinity MSCS solutions, go to the Web site: http://www.pc.ibm.com/us/netfinity/cluster_server.html
- The hardware must be in the Microsoft Hardware Compatibility List. Note that such approval includes both servers, the shared disk subsystem between them (controllers and storage expansions), and the cluster-private network connection (only for long-distance configurations). For a complete list of Microsoft-certified IBM Netfinity MSCS solutions, go to the Web site: http://www.microsoft.com/hwtest/hcl/
- The hardware must be certified for SAP R/3, as for every R/3 system. For a list of certified IBM Netfinity configurations, see the Web site: http://www.r3onnt.de/
Note
See 3.2, Certification and validation of hardware on page 42 for more details on the certification processes.
In addition, the operating system release, the Windows NT Service Pack release, the DBMS release and the SAP R/3 release must all be supported by SAP. To check whether your SAP R/3 version supports MSCS, obtain the latest update of OSS Note 0106275, Availability of R/3 on Microsoft Cluster Server. At the time of writing, the following support was available or planned for Intel-based Windows NT servers:
Table 2. MSCS support for Intel-based Windows NT servers with different SAP R/3 releases
[DBMS versions covered by the matrix: Oracle 7.3, Oracle 8.0.4, Oracle 8.0.5, Microsoft SQL 6.5, Microsoft SQL 7.0, Informix, Adabas, DB2]
Note: Refer to OSS Note 0106275, Availability of R/3 on Microsoft Cluster Server for the latest support matrix.
Note: This is not a complete list. It is not intended to express the significance of the products mentioned, nor their compliance with a particular R/3 release tested by IBM or SAP. Even if a product's description suggests that it works with MSCS, that says nothing about its functionality with SAP R/3.

Failover products are inherently complex. Setting up R/3 and the DBMS to fail over correctly is a tremendous task, including the handling of registry keys, DLLs, environment variables, failover scripts, and other objects that may be undocumented and may change with the next release of the R/3 or DBMS software. Thus you should always ensure that you get appropriate support, comparable with that provided by SAP for MSCS, from the vendor over the full lifetime of your system.
Thus a cold standby server is worth considering if the sizing of your SAP R/3 systems leads to a central installation with production R/3 services and DBMS on the same machine, and if the restrictions about service availability are acceptable in your business.
The idea is to have a second DBMS in standby mode on a different node that continuously receives information about database changes made on the production node. The standby DBMS can then apply (replicate) these changes. If the initial data were identical on both nodes, and if no change information has been lost, the standby DBMS maintains an exact copy of the production database. When the production node fails, the R/3 system can be connected to the standby node.

The following are among the issues that you should consider when selecting a strategy for database replication:

Synchronous versus asynchronous replication. Synchronous replication considers a transaction as completed only when the remote DBMS has also committed successful execution of all changes. Asynchronous replication performs the remote updates at a later time. Thus asynchronous replication schemes may suffer transaction loss in the case of failure, while synchronous replication schemes by definition guarantee no transaction loss.

Log-based versus statement-based replication. Log-based replication evaluates or copies the redo logs of the DBMS to get the change information for the remote node. Log-based implementations tend to be asynchronous, while statement-based implementations are mostly synchronous.
Level of replication. Some solutions provide replication at the schema or table level, while others perform replication at the database level. Schema-level replication is more flexible but requires extra effort in that the system administrator has to define the list of tables, either whole tables or subsets (rows or columns), to be replicated.

BLOB handling. There may be limitations for the replication of binary large objects (BLOBs).

From the solutions mentioned above, Oracle Standby Database, Replicated Standby Database for DB2/UDB (both at the database level), and Microsoft SQL Server Replication (at the schema or table level) implement asynchronous log-based replication. Symmetric Replication from Oracle is statement-based and may be synchronous or asynchronous at the schema or table level. Informix High-availability Data Replication (HDR) offers asynchronous and synchronous log-based replication at the database level.

There are different ways to use data replication for R/3 high availability, including the following:

1. Maintain a complete, hot standby database. The R/3 system switches to the standby database and continues to function without any loss of transactions. Synchronous replication is required. From the products above, only Informix HDR in synchronous mode may be used for this purpose. This can be considered as an alternative to remote mirroring of the database disks.

2. Maintain an incomplete, warm standby database. If it is sufficient to have a warm standby database (loss of some transactions is acceptable) rather than a hot standby, then any of the log-based replication schemes may be used.

As an example of a warm standby database, we outline the configuration of Oracle Standby Database. The hardware architecture and the operating system must be the same on both nodes. The version of the Oracle software on the standby node must be the same as or higher than on the production node.

1. Take a complete backup of the data files of the production server.
2. Create a standby database control file.
3. If the backup was not an offline backup, archive the current online redo log of the production database.
4. Transfer the backed-up data files, the control file, and all archived redo log files to the standby machine.
5. To begin the actual replication, start the DBMS on the standby machine, mount the database in standby mode, and let the DBMS operate in recovery mode.
6. To keep the update gap between the production and the standby database as small as possible, you have to ensure that the archived redo log files from the production system are transferred to the standby system as soon as possible.
[Figure: standby database operation - archiving on the production node, file transfer, and permanent recovery on the standby node]
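Step 6 is typically automated with a small transfer loop; as noted below, Oracle itself provides no tool for this. The Python sketch below shows the asynchronous shipping idea only - the directory paths, file pattern, and polling interval are assumptions invented for the example (a synchronous scheme would instead block each commit until the standby confirmed the change):

    import shutil
    import time
    from pathlib import Path

    ARCHIVE_DIR = Path(r"E:\oracle\SID\saparch")  # hypothetical archive directory
    STANDBY_DIR = Path(r"\\standby\saparch")      # hypothetical share on standby host

    shipped = set()                               # names of logs already transferred

    while True:
        # Ship newly archived redo logs as soon as they appear; anything
        # archived but not yet copied is lost if the production node fails.
        for log in sorted(ARCHIVE_DIR.glob("*.arc")):
            if log.name not in shipped:
                shutil.copy2(log, STANDBY_DIR / log.name)
                shipped.add(log.name)
        time.sleep(30)                            # poll often to keep the gap small

A production version would also need error handling, verification of each transfer, and cleanup of logs already applied on the standby.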
The greatest advantage of this approach, compared with clustering, is the greater separation of the production and standby systems. Even geographic distribution is simple, because only a network connection with small bandwidth is needed for redo log file transfer. There are no cluster-related restrictions on which kind of hardware to use.

The greatest disadvantage is that only the database is protected. Failures of the R/3 central instance services or file shares are not covered. If you plan to prepare a standby central instance on the standby database node as well, keep in mind that changes in non-database files (such as R/3 profiles) are not replicated.

Other disadvantages are:

- The standby database is in recovery mode and always lags slightly behind the production database. The loss of transactions during the failover may include all changes stored in up to three redo log files: one because it was the last active file, a second because of incomplete archiving, and a third because of incomplete transfer to the standby server. After work resumes using the standby database, all users have to be made aware that transactions may be lost and that they have to check whether the last changes they made are still available.
- Oracle does not provide anything to transfer the archived redo logs from the primary node to the standby node. However, Libelle Informatik of Germany provides a third-party tool. See: http://www.libelle.de
- The remote database is mounted in standby mode and cannot be used in any way other than for recovery. An Oracle database mounted in standby mode cannot be opened in the standard way. This prevents an accidental opening, which would invalidate the standby state of the database. But if the production database becomes unavailable, the standby database has to be activated, shut down, and then opened for normal use. This procedure should be performed by an experienced system administrator. Automating this procedure may give unexpected results if, for example, only a network outage or a reboot of the production server suspends a log file transfer.
- It is complicated and time-consuming to fail back to the original production database because of the risk of losing transactions. Basically, the same procedure as for setting up the standby database is needed for failback.
- Structural database changes, such as adding or dropping data files, or modifying parameters stored in the INIT.ORA file, are not propagated to the standby database. If there are failures on the standby system, inconsistencies may result, and tablespaces may be lost. In this situation, the only way forward is to set up the standby database from scratch again.

In addition, you have to consider the effects on the R/3 application servers connected to the database when failover to the standby database is performed. Because the standby node must have its own network addresses for receiving the log files, the R/3 processes cannot reconnect to the address known as the DBMS service. This may be solved by a restart using different R/3 profiles to connect to the standby database, or by using an alias name for the DBMS node and remapping the alias with DNS.

There are third-party products available that resolve problems in the daily operation of a replicated database. For example, the Libelle Database Mirror (for Oracle only, see http://www.libelle.de/) helps create the replica, controls the transfer of the archived redo log files, and detects and replicates structural database changes. Without such DBMS-specific tools, a replicated database should be considered only if the system administrator has detailed knowledge of and experience with the DBMS used. Of course, such tools don't change the basic architectural principles of database replication (long distance, hot standby versus warm standby with time delay and possible loss of transactions), which are the criteria for deciding whether such a solution is appropriate (or may be combined with MSCS).
used to enhance the DBMS performance, using multiple active instances concurrently. OPS is implemented on the shared workload cluster model, in contrast to MSCS, which is based on the partitioned workload cluster model. With OPS, the failure of one database server is only a special case in permanent load balancing between instances.

[Figure: Oracle Parallel Server - Instance A and Instance B, each with its own SGA and LGWR/DBWR processes and redo log files, sharing one control file and database]
Oracle Parallel Server is available on Windows NT beginning with Version 7.3.3, on certified hardware platforms with between two and eight DBMS nodes. For more detailed information about the OPS implementation for Windows NT and the hardware, see: http://www.oracle.com/nt/clusters/

The availability of the R/3 system with OPS depends on the hardware platform and R/3 release. SAP supports OPS only for approved installations. You must ensure that you have this approval before going into production. Only the setup with an idle instance is currently supported: two instances are configured, and all R/3 processes connect to the same instance. The other instance is idle and acts as a hot standby. If the production instance fails, the R/3 processes reconnect to the standby instance.

Compared with a replicated database, this approach has some advantages:

- In the event of failure, no committed transactions are lost.
- No operator intervention is necessary to fail over the DBMS.
- Setup and failback are easier because no backup and restore are necessary.

Some of the disadvantages of OPS when used with SAP R/3 are the same as for replicated databases:

- OPS protects against DBMS failures only. Problems of the SAP R/3 central instance services or file shares are not covered.
- OPS imposes more restrictions on the hardware that may be used. Because all instances must have direct (shared) access to all database disks, a network connection is not sufficient. There must be a shared disk subsystem between all database servers. Thus the options for the separation of the nodes are limited by the shared storage technology.

Another disadvantage is that OPS for Windows NT is incompatible with MSCS: both products cannot be on the same nodes.
For more information, see the following redbooks, available from http://www.redbooks.ibm.com:

- Disaster Recovery with HAGEO: An Installer's Companion, SG24-2018
- Bullet-Proofing Your Oracle Database with HACMP: A Guide to Implementing AIX Databases with HACMP, SG24-4788
- Oracle Cluster POWERsolution Guide, SG24-2019
- High Availability Considerations: SAP R/3 on DB2 for OS/390, SG24-2003

To protect the Windows NT server with the R/3 central instance, an MSCS configuration, a cold standby server, or an alternate failover solution can be used. When using MSCS, keep in mind the restrictions of R/3 support for MSCS (see 2.2, Using MSCS with SAP R/3 on page 7). There are plans for cluster configurations with the central instance on one node and another application server of the same R/3 system (with a different system number) on the other node (with no failover of this second instance), but currently the failover of an R/3 instance to a server with other R/3 services is not supported by SAP.

This approach has advantages for large installations. If a cluster configuration for AIX or mainframe systems already exists, then the R/3 database may be integrated quickly and easily. Additional infrastructure for high availability (network environment, appropriate operating service, disaster recovery planning) is available immediately. Because the DBMS exploits a cluster implementation more mature than MSCS, there are more choices for the configuration, and the cluster can also be used to boost database performance (Oracle Parallel Server for AIX or DB2 Parallel Edition).

If there is no AIX or mainframe cluster available, then a multiplatform approach may be more expensive than a pure Windows NT environment. The costs include hardware, software and training. Because of the variety of platforms, maintenance is more complicated and requires more specialists. Multiplatform solutions are favored primarily for large installations with strong performance requirements.
resource is one way Microsoft is positioning Windows NT as a viable alternative to UNIX in large-scale business and technical environments. MSCS is particularly important as it provides an industry-standard clustering platform for Windows NT, and it is tightly integrated into the base operating system. This provides the benefits of a consistent application programming interface (API) and a software development kit (SDK) that allow application vendors to create cluster-aware applications that are relatively simple to install.

The first release of MSCS, also referred to as Phase 1, links two servers together to allow system redundancy. Even before the release of MSCS, hardware manufacturers such as IBM provided redundancy for many server components, including power supplies, disks, and memory. This, however, protects you only from component failure, not application failure. Providing system redundancy means that a complete server can fail and client access to server resources will remain largely unaffected. MSCS extends this by also allowing for software failures at both the operating system and application levels. If the operating system fails, all applications and services can be restarted on the other server. Failure of a single application is managed by MSCS individually. This, in effect, means that a failure can occur, but the cluster as a whole remains intact, still servicing its users' requests.

MSCS achieves this by continually monitoring services and applications. Any program that crashes or hangs can be immediately restarted on the same server or on the other server in the cluster. If a failure does occur, the process of restarting the application on the other server is called failover. Failover can occur either automatically, such as when an application or a whole server crashes, or manually. By issuing a manual failover, the administrator is able to move all applications and resources onto one server and bring the first server down for maintenance. When the downed server is brought back online, applications can be transferred back to their original server either manually or automatically. Returning resources to their original server is often referred to as failback.

The current MSCS allows only two configured servers, or nodes, to be connected together to form a cluster (Microsoft has stated that future versions will support more nodes). The nodes are made available to client workstations through LAN connections. An additional independent network connection is used for internal housekeeping within the cluster. Both nodes have access to a common disk subsystem. Figure 6 shows the basic hardware configuration to support MSCS:
Figure 6. Basic cluster configuration
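The failover and failback behavior described before Figure 6 amounts to a small state machine. The Python sketch below is purely conceptual; the class names and the restart threshold are invented for the example and are not part of the MSCS API:

    class Node:
        def __init__(self, name):
            self.name = name

    class Group:
        """A unit of failover: in MSCS, whole groups move between nodes."""
        def __init__(self, name, owner):
            self.name = name
            self.owner = owner
            self.restarts = 0

    def handle_failure(group, node_a, node_b, restart_limit=3):
        """Try a local restart first; after repeated failures, fail over."""
        if group.restarts < restart_limit:
            group.restarts += 1
            print(f"restarting {group.name} on {group.owner.name}")
        else:
            group.owner = node_b if group.owner is node_a else node_a
            group.restarts = 0
            print(f"failing over {group.name} to {group.owner.name}")

    def failback(group, home_node):
        """After the failed node returns, the group can move home again."""
        group.owner = home_node
        print(f"failing back {group.name} to {home_node.name}")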
Two common cluster topologies are shared disk and shared nothing. From a purely hardware point of view, MSCS is a shared-disk clustering technology. But this can be misleading, because from a cluster point of view MSCS is a shared-nothing technology: disks are shared only in that they are accessible by both systems at a hardware level. MSCS allocates ownership of each disk to one server or the other. In normal operation, each disk is accessed only by its owning machine. A system can access a disk belonging to the other system only after MSCS has transferred ownership of the disk to it.

In MSCS terminology, the applications, data files, disks, IP addresses, and any other items known to the cluster are called resources. Cluster resources are organized into groups. A group can reside on either node, but on only one node at any time, and it is the smallest unit that MSCS can fail over.
2.4.1 Resources
Resources are the applications, services, or other elements under the control of MSCS. The status of resources is supervised by a Resource Monitor. Communication between the Resource Monitor and the resources is handled by resource dynamic link library (DLL) files. These resource DLLs, or resource modules, detect any change in state of their respective resources and notify the Resource Monitor, which, in turn, provides the information to the Cluster Service. Figure 7 shows this flow of status data between the Cluster Service and a cluster's resources:
Figure 7. Communication between the Cluster Service and the resources
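The supervision loop implied by Figure 7 can be sketched conceptually. MSCS actually performs two health checks through the resource DLL, LooksAlive and IsAlive (see 2.4.13); the class, intervals, and return values below are illustrative assumptions, not the real ClusAPI entry points:

    import time

    class Resource:
        """Stand-in for a resource DLL's health-check entry points."""
        def looks_alive(self):
            return True   # cheap check, run frequently

        def is_alive(self):
            return True   # thorough check, run less often

    def resource_monitor(res, looks_every=5, is_every=60):
        """Poll the resource; a failure is reported up to the Cluster Service."""
        elapsed = 0
        while True:
            time.sleep(looks_every)
            elapsed += looks_every
            if not res.looks_alive():
                return "failed"   # Cluster Service then restarts or fails over
            if elapsed >= is_every:
                elapsed = 0
                if not res.is_alive():
                    return "failed"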
2.4.3 Dependencies
Dependencies are used within Microsoft Cluster Server to define how different resources relate to each other. Resource interdependencies control the sequence in which MSCS brings those resources online and takes them offline. As an example, we will look at a file share for Microsoft Internet Information Server (IIS). A file share resource requires a physical disk drive to accommodate the data available through the share. To bind related resources together, they are placed within an MSCS Group. Before the share can be made available to users, the physical disk must be available. However, physical disks and file shares initially are independent resources within the cluster and would both be brought online simultaneously. To make sure that resources in a group are brought online in the correct sequence, dependencies are assigned as part of the group definition.
Other items are required to make a fully functional file share, such as an IP address and a network name. These are included in the group, and so is an Internet Information Server (IIS) Virtual Root (see 2.4.4, Resource types on page 20 for more information on this resource). The group structure is shown in Figure 8.
[Figure 8: dependency tree for the file share group - the IIS Virtual Root at the top, the File Share and Network Name below it, and the Physical Disk and IP Address resources at the bottom]
This diagram shows the hierarchy of dependencies within the group as a tree structure, where an arrow points from one resource to another resource upon which it depends. We see that the IIS Virtual Root is dependent on two other resources:

- A File Share resource that is itself dependent on a Physical Disk resource
- A Network Name resource that is itself dependent on an IP Address resource

When Microsoft Cluster Server is requested to bring the IIS directory online, it now knows that it must use the following sequence of steps (in effect, a topological sort of the dependency tree, as sketched below):

1. Bring the Physical Disk and the IP Address resources online.
2. When the Physical Disk becomes available, bring the File Share online.
3. Simultaneously, when the IP Address becomes available, bring the Network Name online.
4. When both the File Share and the Network Name become available, bring the IIS Virtual Root online.

As described in 2.4.6, Resource groups on page 22, all dependent resources must be placed together in a single group, and a resource can only belong to one group.
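A short Python sketch of that ordering, using the IIS example (the dictionary maps each resource to the resources it depends on; the helper is illustrative, not part of MSCS):

    DEPENDS_ON = {
        "IIS Virtual Root": ["File Share", "Network Name"],
        "File Share": ["Physical Disk"],
        "Network Name": ["IP Address"],
        "Physical Disk": [],
        "IP Address": [],
    }

    def online_order(deps):
        """Return an order in which every resource follows its dependencies."""
        order, done = [], set()

        def visit(res):
            for dep in deps[res]:
                if dep not in done:
                    visit(dep)
            if res not in done:
                done.add(res)
                order.append(res)

        for res in deps:
            visit(res)
        return order

    print(online_order(DEPENDS_ON))
    # ['Physical Disk', 'File Share', 'IP Address', 'Network Name', 'IIS Virtual Root']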
DHCP Server
Documentation error
Although support for this resource type is discussed in the Microsoft Cluster Server Administrator's Guide, a DHCP server resource is not supported. Refer to Microsoft Knowledge Base article Q178273.

Distributed Transaction Coordinator
This resource type allows you to use Microsoft Distributed Transaction Coordinator (MSDTC) in MSCS. Two dependencies are required for this resource: a Physical Disk resource and a Network Name resource.

File Share
The File Share resource type lets you share a directory on one of the clustered disks in your configuration to give network clients access to that directory. You will be asked to enter the name of the share, the network path, a comment, and the maximum number of users that can connect to the share at the same time. The configuration of a File Share resource type is identical to the configuration of a file share in Windows NT Explorer. File Shares require a Physical Disk resource and a Network Name resource.

Generic Application
The Generic Application resource type allows existing applications that are otherwise not cluster-aware to operate under the control of MSCS. These applications can then fail over and be restarted if a problem occurs. There are no mandatory resource dependencies. Microsoft Cluster Server is often demonstrated using the Windows NT clock program. To do so, the clock.exe program is defined to the cluster as a Generic Application.

Generic Service
This resource type can be used for services running on Windows NT. You must enter the exact name of the service at the creation of the resource. Just as for Generic Applications, the Generic Service resource does not have any resource dependencies.

IIS Virtual Root
The IIS Virtual Root resource type provides failover capabilities for Microsoft Internet Information Server Version 3.0 or later. It has three resource dependencies: an IP Address resource, a Physical Disk resource, and a Network Name resource.

IP Address
An IP Address resource type can be used to assign a static IP address and subnet mask to the network interface selected in the Network to Use option during the definition of the resource. IP Addresses do not have any dependencies.

Microsoft Message Queue Server
This resource type supports clustered installations of Microsoft Message Queue Server (MSMQ) and is dependent on a Distributed Transaction
Coordinator resource, a Physical Disk resource, and a Network Name resource.

Network Name
The Network Name resource type gives an identity to a group, allowing client workstations to see the group as a single server. The only dependency for a Network Name is an IP Address resource. For example, if you create a group with a Network Name resource called FORTRESS1 and you have a File Share resource with the name UTIL, you can access it from a client desktop by entering the path \\FORTRESS1\UTIL. This gives access to the directory on the share regardless of which cluster node actually owns the disk at the time.

Physical Disk
When you first install MSCS on your nodes, you are asked to select the available disks on the common subsystem. Each disk will be configured as a Physical Disk resource. If you find it necessary to add more disks after the installation, you would use the Physical Disk resource. This resource does not have any dependencies.

Print Spooler
The Print Spooler resource type allows you to create a directory on a common storage disk in which print jobs will be spooled. Two resources are needed to create a Print Spooler resource: a Physical Disk resource and a Network Name resource.

Time Service
This is a special resource type that maintains date and time consistency between the two nodes. It does not have any dependencies. The cluster must not have more than one Time Service resource.
Note
Dependent resources must be grouped together. When one resource is listed as a dependency for another resource, then the two resources must be placed in the same group. If all resources are ultimately dependent on the one resource (for example, a single physical disk), then all resources must be in the same group. This means that all cluster resources would have to be on a single node, which is not ideal. Any cluster operation on a group is performed on all resources within that group. For example, if a resource needs to be moved from node A to node B, all other resources defined in the same group will be moved. Figure 9 depicts how MSCS groups might be distributed between the nodes:
Figure 9. Example distribution of MSCS groups between the two nodes (Node A and Node B own Groups 1 through 4, which contain Resources A through G)
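Because every cluster operation on a group affects all of its resources, moving any group in the figure carries its resources along. As a hedged illustration using names from Figure 9 (verify the exact switch syntax in the Administrator's Guide), a group can be moved to the other node from the command line:

   cluster group "Group 1" /moveto:"Node B"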
Group states
A resource group can be in any one of the following states:
- Online - all resources in the group are online.
- Offline - all resources in the group are offline.
- Partially Online - some resources in the group are offline and some are online.

Virtual servers
A virtual server is a cluster group that contains an IP Address resource and a Network Name resource, and optionally a disk and other resources. Groups that contain at least an IP Address resource and a Network Name resource appear on the network as servers. They appear in Network Neighborhood on Windows clients and are indistinguishable from real servers as far as a client is concerned. These groups are, therefore, sometimes referred to as virtual servers. To gain the benefits of clustering, your network clients must connect to virtual servers and not to the physical node servers. For example, if you create a group with a Network Name resource called IIS_Server and then browse your network, you will see an entry (a virtual server) called IIS_Server in the same domain as the physical servers. Although you can browse for the physical server names, you should not use them for connections, as this would circumvent the cluster failover functionality.
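For example, to map a drive to the UTIL share on the virtual server FORTRESS1 defined earlier, a client connects using the virtual name rather than a physical node name (the drive letter here is arbitrary):

   net use U: \\FORTRESS1\UTIL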
2.4.8 TCP/IP
MSCS uses TCP/IP to communicate with network applications and resources. Cluster IP addresses cannot be assigned from a Dynamic Host Configuration Protocol (DHCP) server. These include IP Address resources, the cluster administration address (registered at the installation of MSCS), and the addresses used by the nodes themselves for intracluster communication. Note that each node will usually have at least two network adapter cards installed. Although a single network connection can be used, Microsoft recommends using a private network for cluster traffic. One adapter allows communication over the external network for administration and management of the cluster and for user access to cluster resources. The physical server IP addresses assigned to these adapters could be obtained through DHCP, but it is important that users attach to the clustered addresses. We recommend the use of static IP addresses for all adapters in your cluster; otherwise, if a DHCP-leased address expires and cannot be renewed, the ability to access the cluster may be compromised (see Microsoft Knowledge Base article Q170771). The second adapter in each machine is for intracluster communication and will typically be assigned one of the TCP/IP addresses reserved for private intranets. Table 3 shows the allocated ranges for private IP addresses:
Table 3. Ranges reserved for private IP addresses (RFC 1918)

   IP address range                 Description
   10.0.0.0 - 10.255.255.255        A single Class A network
   172.16.0.0 - 172.31.255.255      16 contiguous Class B networks
   192.168.0.0 - 192.168.255.255    256 contiguous Class C networks
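As an illustrative address plan only (all addresses invented), a two-node cluster needs the four physical and three virtual IP addresses called for in the minimum requirements in Chapter 3, with the private interconnect drawn from one of the ranges above:

   Node_A public adapter:           192.168.1.11  (static)
   Node_B public adapter:           192.168.1.12  (static)
   Node_A interconnect adapter:     10.0.0.1
   Node_B interconnect adapter:     10.0.0.2
   Cluster administration address:  192.168.1.20  (virtual)
   SAP R/3 virtual server:          192.168.1.21  (virtual)
   Database virtual server:         192.168.1.22  (virtual)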
For more information refer to TCP/IP Tutorial and Technical Overview, GG24-3376, and Chapter 3 of Microsoft Cluster Server Administration Guide.
Note
You must have TCP/IP installed on both servers in order to use MSCS. Applications that use only NetBEUI or IPX will not work with the failover ability of MSCS. However, NetBIOS over TCP/IP will work.
ping node_a

The address from which the ping is answered must be the address assigned to the adapter card in the public LAN. The Windows NT utility ipconfig shows all addresses in the order of the gethostbyname() list. If the result you get from any of the above methods is in the wrong order, correct it before installing MSCS. When the network adapters are of the same type, it is sufficient simply to exchange their outgoing cable connections. If you have different adapter types (for example, 10/100 EtherJet for the cluster-private link and redundant FDDI for the public LAN), then you need a way to control the internal IP address order.

Under Windows NT 3.51, the IP address list is in the same order as the TCP/IP network card binding order; therefore, altering the binding order (Control Panel > Network > Bindings) will change the IP address order returned by gethostbyname(). Unfortunately, under Windows NT 4.0, the binding order does not influence the IP address order; using the Move Up or Move Down buttons in the Bindings tab of the Network applet has no effect. This is documented in Microsoft Knowledge Base article Q171320. To solve the problem, you have to manually add a registry value, DependOnService, to change the IP address order.
SP4 fixes the problem
As documented in Microsoft Knowledge Base article Q164023, this problem has been resolved for Windows NT Server 4.0 in Service Pack 4.

Assuming that your two network adapter cards have the driver names Netcard1 and Netcard2 (the IBM Netfinity 10/100 EtherJet Adapter has the driver name IBMFE), ping and ipconfig show you Netcard1 first, but your public LAN is on Netcard2. To change the IP address order so that Netcard2's address is listed first, you must edit the Windows NT Registry as follows (remember that this can have serious consequences if not done carefully):
1. Start regedt32.exe (Regedit.exe will not work, since it cannot add complex value types) and select the following subkey:
   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netcard1
2. Add a new value with the following specifications:
   Value Name: DependOnService
   Value Type: REG_MULTI_SZ
   Data:       Netcard2
3. Exit the Registry Editor and reboot the machine. ping and ipconfig should now show you the addresses in the required order.
Tip
Note that the new registry value, DependOnService, will be deleted whenever Windows NT rebuilds the network bindings. Thus after each modification of network parameters you should verify that the order is still correct. If you change the IP settings frequently, you will save time by exporting the value to a .REG file for convenient registry merging.
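For example, the exported file might look like the following REGEDIT4 fragment (Netcard1 and Netcard2 are the hypothetical driver names from the example above; in the ANSI .REG format, the hex(7) data encodes the string "Netcard2" followed by two terminating nulls):

   REGEDIT4

   [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netcard1]
   "DependOnService"=hex(7):4e,65,74,63,61,72,64,32,00,00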
2.4.10 Domains
The following information specifies the criteria for clustered MSCS servers with regard to domains:
1. The two servers must be members of the same domain.
2. A server can only be a member of one cluster.
3. The only valid domain relationships between cluster nodes are:
   - A primary domain controller and a backup domain controller
   - Two backup domain controllers
   - Two stand-alone servers
Note
For SAP, the servers must be configured as stand-alone. In general, we recommend that the nodes be set up as stand-alone servers. This removes the additional workload generated by the authentication chores and the domain master browser role performed by domain controllers. However, there are situations, such as when the domain is small, in which it may be appropriate for the nodes also to be domain controllers.
2.4.11 Failover
Failover is the relocation of resources from a failed node to the surviving node. The Resource Monitor assigned to a resource is responsible for detecting its failure. When a resource failure occurs, the Resource Monitor notifies the Cluster Service, which then triggers the actions defined in the failover policy for that resource. Although individual resource failures are detected, remember that only whole groups can fail over. Failovers occur in three different circumstances: manually (that is, at the request of an administrator), automatically, or at a specific time as set by IBM Cluster System Manager. Automatic failovers have three phases:
1. Failure detection
2. Resource relocation
3. Application restart (usually the longest part of the failover process)
An automatic failover is triggered when the group failover threshold is reached within the group failover period. These are configuration settings, defined by the administrator.
Group and resource failover properties
Both groups and resources have failover threshold and period properties associated with them. The functions these properties control, however, depend on whether they are associated with a group or a resource.
1. Resource failover settings
Failover threshold is the number of times in the specified period that MSCS allows the resource to be restarted on the same node. If the threshold count is exceeded, the resource and all other resources in that group will fail over to the other node in the cluster.
Failover period is the time (in seconds) during which the specified number of attempts to restart the resource must occur before the group fails over. After exceeding the threshold count of restart attempts, MSCS fails over the group that contains the failing resource, and every resource in that group is brought online according to the startup sequence defined by the dependencies.
2. Group failover settings
Failover threshold is the maximum number of times that the group is allowed to fail over within the specified period. If the group exceeds this number of failovers in the period, MSCS will leave it offline or partially online, depending on the state of the resources in the group.
Failover period is the length of time (in hours) during which the group will be allowed to fail over only the number of times specified in Threshold.
For example, consider an application clock.exe in group CLOCKGROUP. Other resources in the group include a File Share resource and a Physical Disk resource, as shown in Figure 10.
Figure 10. The CLOCKGROUP group failing over from Node A to Node B
The administrator who set up this cluster has assigned a failover threshold of 3 with a failover period of 60 seconds to the clock.exe resource and a failover threshold of 5 with a failover period of 1 hour to the CLOCKGROUP group. Consider now the situation when clock.exe continually fails. The program (a Generic Application resource type) will be restarted on Node A three times. On the fourth failure within one minute, it and its group, CLOCKGROUP, will fail over to Node B. This counts as one CLOCKGROUP failover. When CLOCK fails four times (that is, one more than the resource threshold) on Node B, it will fail over to Node A. This counts as the second CLOCKGROUP failover. After the fifth CLOCKGROUP failover within one hour (Node A->B->A->B->A->B), MSCS will not attempt a restart of CLOCK, nor will it fail over CLOCKGROUP. Instead, it will leave CLOCK in the failed state and CLOCKGROUP will be placed in the partially online state. The other resources in the group will be placed in the
failed state if they are dependent on CLOCK; they will remain online if they are not dependent on CLOCK.
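The thresholds and periods in this example can be set from the command line as well as from Cluster Administrator. The sketch below uses the cluster.exe common property names RestartThreshold, RestartPeriod, FailoverThreshold, and FailoverPeriod; verify the property names and their units for your MSCS level, since the GUI expresses the periods in seconds and hours while cluster.exe may expect other units.

   cluster resource "Clock" /prop RestartThreshold=3
   cluster resource "Clock" /prop RestartPeriod=60
   cluster group "CLOCKGROUP" /prop FailoverThreshold=5
   cluster group "CLOCKGROUP" /prop FailoverPeriod=1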
2.4.12 Failback
Failback is a special case of failover and is the process of moving back some or all groups to their preferred owner after a failover has occurred. A group's preferred owner is the node in the cluster that you have declared as the one on which you prefer the group of resources to run. If the preferred owner fails, all of its clustered resources will be transferred to the surviving node. When the failed node comes back online, groups that have the restored node as their preferred owner will automatically transfer back to it. Groups that have no preferred owner defined will remain where they are. You can use the preferred owner settings to set up a simple load-balancing configuration: when both servers are running with failback enabled, the applications and resources move to their preferred owners, thereby balancing the workload on the cluster according to your specifications. When you create a group, its default failback policy is set to disabled. In other words, when a failover occurs, the resources are transferred to the other node and remain there, regardless of whether the preferred node is online. If you want failback to occur automatically, you have the choice of setting the group to fail back as soon as its preferred node becomes available, or you can set limits so that the failback occurs during a specific period, such as outside of business hours.
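As a sketch of how such a policy might be configured with cluster.exe (the group name comes from the earlier example; FailbackType=1 enables failback, and the window below restricts failback to the hours between 6 p.m. and 6 a.m.; verify these property names for your MSCS level):

   cluster group "CLOCKGROUP" /setowners:"Node A","Node B"
   cluster group "CLOCKGROUP" /prop FailbackType=1
   cluster group "CLOCKGROUP" /prop FailbackWindowStart=18
   cluster group "CLOCKGROUP" /prop FailbackWindowEnd=6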
2.4.13 LooksAlive and IsAlive
MSCS checks the health of each resource by calling two functions in its resource DLL. At short, regular intervals, the LooksAlive function is called to make a superficial check that the resource appears to be functioning. Every 60 seconds, the IsAlive function is called to perform a more rigorous test to check that the clock is operating correctly.
Superficial versus complete checks
Exactly what constitutes a superficial check or a complete check in the descriptions above is determined by the programmer who wrote the resource DLL. For generic applications such as the clock program, the two tests may be identical. More sophisticated resources such as database elements will usually implement a different test for each entry point.
Figure 11 shows an example of the dependencies that relate the resources in the Cluster Group at the end of an SAP R/3 installation with Oracle DBMS:
A slightly different group is created during the DB2/UDB installation as shown in Figure 12:
Figure 13 shows the group created during the SQL Server installation:
MSDTC resource
The MSDTC dependencies must be removed and then the resource must be deleted as described in 7.9, Removal of unused resources on page 167.
Figure 14. SAP-R/3 ITS group dependencies (the SAP-R/3 <SID> group contains the SAPMNT and SAPLOC file shares on <shareddisk>:\usr\sap, the SAP-R/3 Netname ITS.WORLD, the SAP-R/3 IP Address, and disks K:, L:, and M:)
In order to make the DB2/UDB DBMS cluster-aware, IBM has implemented some extensions. One of the main extensions is the DB2WOLF.DLL resource DLL.
Many extensions have been introduced in SQL Server 7.0 Enterprise Edition in order to make the DBMS cluster-aware. The main extensions are the SQAGTRES.DLL and SQLSRVRES.DLL resource DLLs. Figure 17 shows the dependencies in the SQL Server group.
Figure 17. SQL Server group dependencies (the group contains the SQL Server Agent 7.0 resource ITSSQL Server Agent 7.0, using SQAGTRES.DLL; the SQL Server 7.0 resource, using SQLSRVRES.DLL; the Network Name resource ITSSQL Vserver and the IP Address resource ITSSQL IP Address, both using CLUSRES.DLL; a Generic Service ITS SQL resource; and disks K:, L:, M:, N:, and O:)
The MSCS model of partitioned disk access causes some problems for backup and recovery, which we will discuss in this section:
- A backup program running on a cluster node may not be able to access all shared disks.
- Tape devices cannot be shared cluster resources.
- Files on the shared disks need to be identified in a unique fashion.
- Backup scheduling has to take into account the cluster resource situation.
- Bringing databases and applications offline requires special care.
- Additional MSCS files may not be handled by backup software.
Here we give only a short overview of what to consider for cluster backup. The redbook Using Tivoli Storage Management in a Clustered NT Environment, SG24-5742, discusses these cluster problems in detail. A future SAP R/3 redbook will cover implementing backup solutions for SAP R/3 on IBM Netfinity servers in detail.
to perform the backup. This is another reason to attach tape drives to both cluster nodes.
Backup server catalog:
   Available backups of object 1
      1998/08/12  Node_A  H: DIR_X\FILE_Y
      1998/08/10  Node_A  H: DIR_X\FILE_Y
   Available backups of object 2
      1998/08/14  Node_B  H: DIR_X\FILE_Y
      1998/08/13  Node_B  H: DIR_X\FILE_Y
      1998/08/11  Node_B  H: DIR_X\FILE_Y

Figure 18. Same file from shared disk but cataloged differently
There are two approaches to guaranteeing consistency:

Alias definitions at the backup server
This means defining at the server that files from node A's disk H: are to be considered identical with files from node B's disk H: ("A\\H: is an alias for B\\H:"). Aliases have to be defined at the level of individual disks, because a general alias valid for all disks (node A is an alias for node B) would also merge local disk backups. This approach requires support in the backup server software.

Using virtual names for backup clients
With this method, we assume that each shared cluster disk belongs to a resource group with at least one virtual network name in that group. In an R/3 cluster, each disk belongs either to the database group (with the virtual database name) or to the R/3 group (with the virtual R/3 name). The quorum disk may be moved manually into any group (the cluster group with the cluster virtual name is the natural choice). We then configure three backup clients on each cluster node:
- A client using the normal node name to identify itself
- A client using the virtual database name
- A client using the virtual R/3 name
The backup procedures are implemented in such a way that each of these clients backs up only those disks which belong to the corresponding resource group. Because the backup client which saves the file H:\DIR_X\FILE_Y is known to the backup server by a virtual name, all copies of H:\DIR_X\FILE_Y are kept under this name. Thus all files on shared disks are cataloged on the backup server in a unique manner, independent of which cluster node sent the data. Tivoli Storage Manager uses a special option for the backup client to identify cluster disks and then implements the second approach, using the cluster name in the backup file catalog (see the redbook Using Tivoli Storage Management in a Clustered NT Environment, SG24-5742).
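With Tivoli Storage Manager, the second approach comes down to a few entries in the client option file (dsm.opt) of each backup client. The fragment below is only a sketch: the node name and the drive letter are invented, and the exact option set depends on your TSM client level.

   * dsm.opt for the backup client that handles the R/3 group disks
   NODENAME     SAPR3_VIRT
   CLUSTERNODE  YES
   DOMAIN       s: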
In an R/3 cluster, you may use this technique to back up files on the R/3 disk (directory \USR\SAP). But you cannot make an online database backup in the same way, because the database files are open and database operations would interfere with backup access, leading to an inconsistent backup. Offline database backups in a cluster require additional considerations. When using Tivoli Storage Manager for cluster backup, backup clients are added as Generic Services to the R/3 resource group as well as to the database group. To schedule backups from the R/3 CCMS (transaction DB13), you have to set up a remote function call environment. This is described in SAP OSS note 0114287.
R/3 configurations for MSCS require careful planning because of their complexity and cluster-specific restrictions. These restrictions are different for each type of shared storage subsystem and should be checked before the components are ordered. When configuring an R/3 cluster, you should be aware of the following general guidelines for increasing server availability:
- An SAP system may only be installed on certified hardware. The iXOS R/3 NT Competency Center (R/3 NTC) certifies hardware platforms for SAP on Microsoft Windows NT. For a list of certified platforms and the rules applied in the certification process, see: http://www.r3onnt.com
- A Windows NT server may only be installed on certified hardware. Microsoft evaluates hardware compatibility using the Windows NT Hardware Compatibility Test (HCT); hardware that passes testing is included on the Windows NT Hardware Compatibility List (HCL). Each piece of hardware that you use must be included on the HCL, which is available at: http://www.microsoft.com/hcl/ For IBM employees on the internal IBM network, the letters from the Microsoft WHQL to IBM identifying particular Netfinity cluster configurations that have passed the Microsoft Cluster HCT can be found at: http://w3.kirkland.ibm.com/Certification/prevcert.asp
- Install an uninterruptible power supply (UPS) system.
- Add redundant power and cooling options to the Netfinity servers.
- Configure the Advanced System Management adapter or processor in your server.
- Install a second copy of the Windows NT operating system so that you can quickly boot to the second copy if the production copy fails for some reason.
- Get the latest information and drivers from the IBM Web sites at http://www.pc.ibm.com/support
- For ServeRAID installation, review IBM Netfinity High Availability Cluster Solutions for IBM ServeRAID-3H, ServeRAID-3HB and ServeRAID-3L Installation and User's Guide, available from http://www.pc.ibm.com/support:
  1. Select Server from Select a brand.
  2. Select Clustering from Select your family.
  3. Select Online publications.
  4. Select Installation Guides.
- Review the IBM Cluster Checklist, available from http://www.pc.ibm.com/us/searchfiles.html (enter the keywords "cluster checklist").
- Check for the latest drivers and BIOS levels for your hardware.
Minimum requirements

General:
- Two identical Netfinity servers, certified for clustering
- At least 256 MB of RAM
- Virtual memory of 4 x RAM, or 1200 MB, whichever is larger

ServeRAID (SCSI):
- Two IBM ServeRAID-3L/H adapters with the latest BIOS and firmware levels for internal disk connectivity
- Two IBM ServeRAID-3H adapters with the latest BIOS and firmware levels for external disk connectivity
- External disk enclosure

Fibre Channel:
- Two IBM ServeRAID-3L/H adapters with the latest BIOS and firmware levels for internal disk connectivity
- Two Fibre Channel host adapters
- One Fibre Channel RAID controller
- One Fibre Channel hub with at least four GBICs
- External disk enclosure

Network:
- Two network cards for private network communications
- Two network cards for public network communications
- Four physical IP addresses for the network cards
- Three virtual IP addresses for the cluster
If you plan to install Windows NT Service Pack 5 (SP5) without installing Service Pack 4 (SP4) first, you should remember to install the additional SP4 components required for Y2K compliance. These include Microsoft Internet Explorer 4.01 Service Pack 1 and Microsoft Data Access Components 2.0 Service Pack 1. Alternatively, install SP4 first, then SP5. Refer to SAP OSS Note 0030478 for the latest information about Service Packs and SAP R/3. Currently, the situation with MSCS is as follows: for R/3 systems using MSCS, SP4 may only be used if the Microsoft hot fixes are applied according to OSS Note 0144310 (for MS SQL Server 7.0, only the RNR20.DLL hotfix is required). If possible, you should implement Service Pack 5 immediately. For the application of Service Packs in cluster systems, you must read OSS Note 0144310. The use of Service Packs is also described in 4.8, Service pack and post-SP installation steps on page 102. The minimum space requirements on the local disks and the shared disks are summarized in Table 5:
Table 5. Minimum space requirements

Local disks (both nodes):
- 500 MB for Microsoft Windows NT 4.0 Enterprise Edition, Service Pack 4, Microsoft Internet Explorer 4.01 Service Pack 1, and Microsoft Data Access Components 2.0 Service Pack 1
- 3 MB for MSCS
- 10 MB for SAP cluster files
- 1200 MB for the Windows NT page file
- Oracle: 600 MB for Oracle server software RDBMS 8.0.5 and 10 MB for Oracle FailSafe software V2.1.3
- DB2: 100 MB for DB2/CS software V5.2
- SQL: 65 MB for MS SQL Server 7.0 Enterprise Edition program files

Oracle: shared volumes requirements (external enclosure):
- 100 KB for the cluster quorum resource
- 1 GB for SAP R/3 executable files
- 10 GB initially for SAP R/3 data files
- 120 MB for online redo logs, set A
- 120 MB for online redo logs, set B
- 120 MB for mirrored online redo logs, set A
- 120 MB for mirrored online redo logs, set B
- 6 GB for the backup of online redo logs
- 2 GB for SAPDBA directories

DB2: shared volumes requirements (external enclosure):
- 100 KB for the cluster quorum resource
- 1000 MB for SAP R/3 executable files
- 100 MB for DB2 database files
- 10 GB initially for SAP R/3 data files
- 1 GB at least for DB2 log files
- 1 GB at least for DB2 archived log files

Microsoft SQL: shared volumes requirements (external enclosure):
- 100 KB for the cluster quorum resource
- 6 GB initially for R/3 data files
- 45 MB for the SQL Server master DB
- 300 MB for the SQL Server temp DB
- 1 GB minimum for the transaction log files (log device)
3.2.1 iXOS
iXOS Software AG (iXOS), in close cooperation with SAP, has developed a server hardware certification process for the purpose of investigating the performance and stability of SAP R/3 on Windows NT platforms. This certification is essential for all hardware manufacturers. Since 1993, the R/3 NT Competency Center (R/3 NTC) of iXOS has exclusively performed the certification process, as an independent assessor, for a large number of Windows NT platform hardware vendors. Note that SAP supports the operation of certified R/3 systems only.

There are five different hardware certification categories, developed by iXOS in conjunction with SAP. The hardware certification testing is executed by iXOS:
1. Initial certification: The first, and only the first, Windows NT server offering from a hardware vendor must undergo the initial certification. A very detailed test sequence is performed.
2. Ongoing certification: All server offerings for R/3 on Windows NT currently offered to the market are subject to the ongoing certification. Twice a year, the ongoing certification tests are repeated on each server offering. The ongoing certification validates the operation of the system in conjunction with a new release of Windows NT or an upgrade of hardware or firmware by the hardware vendor.
3. Ongoing controller certification: This level of certification allows a reduced set of tests to be performed in order to certify I/O controllers offered by hardware vendors. Once an I/O controller is certified by iXOS in one Windows NT server product offered by a hardware vendor, it is certified for use in all of that vendor's Windows NT server products that have been certified by iXOS.
4. S/390 certification: This certification requires reduced tests to be performed against an already certified NT platform that is to have access to a DB2 database on an S/390. For this certification category, it is the connectivity type that is certified. Each connectivity type (for example, FDDI, Fast Ethernet, ESCON) must be certified once per hardware vendor.
5. Outgoing certification: Hardware platforms no longer being sold for R/3 on Windows NT, but still used by customers in a productive environment, are subject to an outgoing certification.

To enter the SAP R/3 on Windows NT market, a hardware vendor must secure an initial certification for its server platform. The first server offering and all subsequent server offerings a hardware vendor may supply to the market are subjected to the ongoing certification until they are no longer offered for SAP R/3 on Windows NT.

3.2.1.1 Hardware components
According to the iXOS certification process, a hardware platform consists of three different types of components:
1. Critical components
   The chip set: A platform is defined by its chip set and the corresponding chip set extensions that enable data transfer between processor, memory, and I/O. Changing the chip set requires that a Windows NT server system undergo an ongoing certification.
   The I/O controller: A particular I/O controller must be certified once for each hardware vendor. If an I/O controller has been successfully tested with one Windows NT server offering supplied by a hardware vendor, it is certified with all other iXOS-certified Windows NT server offerings supplied by the vendor (if supported by the vendor). The same applies to the certification of S/390 connectivity types.
2. Peripheral components
   The hardware vendor is obligated to provide a list of all peripheral components associated with the server system to be used in support of SAP R/3 on Windows NT. The hardware vendor guarantees function and support of the components listed. If any of the peripheral components are replaced, the list is to be updated by the hardware vendor; no new certification is necessary. Peripheral components are:
   - Hard disks
   - Memory
   - Network adapters
   - Backup devices
3. Non-critical components
   All components that are not defined as critical or peripheral components are non-critical components. Changing non-critical components does not affect the certification of the platform. Non-critical components include, for example, the monitor, graphics adapter, and mouse.
I/O subsystems
SAP R/3 certification rules do not require the certification of I/O subsystems. However, iXOS offers vendors of I/O subsystems special tests that validate the stability and measure the I/O performance of the storage solution.
3.2.2 Microsoft
Microsoft evaluates hardware compatibility using the Windows NT Hardware Compatibility Tests (HCTs). The HCTs are run for the purpose of testing the interaction between device drivers and hardware. These tests issue the full range of commands available to applications and operating systems software, and are designed to stress hardware beyond the level of most real-world situations. At the Windows Hardware Quality Labs (WHQL), Microsoft personnel run the HCTs and report results to the hardware manufacturer. Hardware that passes testing is included on the Windows NT Hardware Compatibility List (HCL). The HCL may be viewed by visiting http://www.microsoft.com/hcl/. A validated cluster configuration can potentially include any server that is on the Microsoft HCL for Windows NT server. For validating hardware in a cluster configuration, the Microsoft Cluster HCT is executed.
The most important criterion for MSCS hardware is that it be included in a validated cluster configuration on the Microsoft HCL, indicating that it has passed the Microsoft Cluster HCT. Microsoft will only support MSCS when it is used on a validated cluster configuration. Only complete configurations are validated, not individual components; the complete cluster configuration consists of two servers and a storage solution. Microsoft allows hardware manufacturers to run the Microsoft Cluster HCT at their own facilities. The result of a successful test is an encrypted file that is returned to Microsoft for validation. Validated cluster configurations may be viewed by selecting the Cluster category at: http://www.microsoft.com/hcl/ IBM currently tests MSCS solutions with the Netfinity 10/100 Ethernet Adapter (34L0901) and a crossover cable as the cluster interconnect. This is the configuration submitted to Microsoft for certification, and it is recommended that you follow this practice. The MSCS certification rules allow replacement of the interconnect cards with another 100% compliant NDIS PCI card listed on the Windows NT HCL (see http://www.microsoft.com/hwtest/sysdocs/wolfpackdoc.htm). Consult the Netfinity ServerProven list to be sure that the alternate card is listed as tested and supported on Netfinity servers.
Cluster component candidates
The Microsoft HCL also has the categories Cluster Fibre Channel Adapter, Cluster/SCSI Adapter, Cluster/RAID, Cluster RAID Controller, and Cluster RAID System. It is important to note that inclusion of these components on the HCL does not qualify a component for MSCS support services unless the component was included in a validated cluster configuration. Make sure you consult the Cluster category on the HCL to view the validated configuration with included storage adapter. These other cluster categories are intended for vendors, system integrators, and test labs that are validating complete cluster configurations.
Additionally, for IBM employees who can access the internal IBM network, information regarding IBM Netfinity cluster configurations, validated by the WHQL, can be viewed at the IBM Center for Microsoft Technologies (CMT) Web site at http://w3.kirkland.ibm.com/Certification/prevcert.asp. The IBM CMT Web site posts letters from the WHQL to IBM identifying particular Netfinity cluster configurations that have passed the Microsoft Cluster HCT.
- Is also listed on the Microsoft Hardware Compatibility List (HCL) for Windows 2000 (on the Microsoft Web page)
- Is officially supported for SAP installations on Windows 2000 by the hardware partner. The hardware partner must test and support all peripherals (including network adapters, tape devices, graphics adapters, and so on) that will be used by SAP customers on Windows 2000.
New hardware must be certified by iXOS as before. Hardware partners must notify SAP and iXOS as to which hardware will be supported on Windows 2000. iXOS will add a Windows 2000 list to the Web page with all qualifying hardware platforms.
(Figure: inputs to performance sizing. From SAP/IBM: R/3 version, modules, DB version, OS version, hardware. From the customer: number of users, reporting, batch, load profile, customizing.)
general information about the numbers of users the IBM SAP R/3 system needs to support. Further along in the R/3 implementation planning, customers will know more about SAP R/3, the R/3 applications they plan to use, and their potential R/3 transaction activity. At that time, another sizing estimate should be requested based on the more detailed information. It is important to understand that the sizing estimate is a presales effort based on benchmark performance data; it should not replace capacity planning for installed systems. The sizing estimate can be used for preinstallation planning. However, during the process of implementing R/3, customers should work with an IBM/SAP capacity planning consultant to monitor and predict the ongoing resource requirements for the production R/3 system.

The IBM/SAP sizing methodology is continually reviewed and revised to provide the best possible estimate of the IBM hardware resources required to run SAP R/3. Guidelines for sizing R/3 come from a number of sources, including SAP, SAP R/3 benchmarks, and customer feedback. Based on information from these sources and the sizing questionnaire completed by the customer, the IBM ERP sales team will analyze the SAP R/3 requirements and recommend an IBM hardware configuration with a target CPU utilization of 65%. See the IBM ERP sales team for the sizing questionnaire.

There are basically two different methods for interactive sizing: user-based sizing and transaction-based sizing. In general, user-based sizing determines the total interactive load of an SAP complex by summing the total number of users, normalized to financial (FI)-type users. The method then makes a processor recommendation based on a 65% loading condition for these normalized users. This allows significant capacity for peak interactive conditions, batch loads, report generation, and other forms of complex loading. The transaction-based sizing method developed and used by IBM sums the total number of normalized financial (FI) transactions an SAP complex will see and then, assuming an average number of dialog steps per transaction, computes a total normalized dialog load. The newer transaction-based method developed by SAP and the hardware partner council is based on measurements taken on a reference machine for every single transaction evaluated at this point in time (one to three transactions per module). Both user-based sizing and transaction-based sizing have relative advantages and disadvantages. In fact, we recommend that you obtain both user data and transaction data from a customer when doing a sizing (per the sizing questionnaire). This allows you to cross-check the user-based sizing against the transaction-based sizing.

The objective of the IBM/SAP sizing methodology is to estimate the hardware resources required to support the peak hour of business processing. Our sizing philosophy is that if we size the hardware to provide acceptable response time for the peak application workload, then all workloads outside of the peak hour should also see acceptable response time. The peak hour is the busiest hour of activity from an information-processing standpoint; it is the hour in which the CPU utilization is the highest. In identifying the peak hour, you should consider how your processing volumes vary throughout the year and select a peak hour during the busiest time of the year. A survey in
the user departments of the various SAP R/3 modules may be helpful. Typically, the peak hour occurs somewhere between 8:00 a.m. and 6:00 p.m., but this can vary. In Figure 20, the thick line shows the transaction volumes for all of the SAP R/3 modules used in one organization, with the peak hour occurring from 10:00 a.m. to 11:00 a.m.
(Figure 20, "Identifying the Peak Hour of Processing and the Potential SAP R/3 Workload", charts transactions per hour from 9:00 to 18:00 for General Ledger, A/P Payments, A/R Invoices, Asset transfers, Inventory, Production Orders, IM, Payroll, and QM, together with the total; the peak hour is marked from 10:00 to 11:00.)
Figure 20. Identifying the peak hour of processing and the potential SAP R/3 workload
The SAP R/3 functions that will be in use during that hour (refer to Figure 20) must also be determined. For user-based sizing, these are the R/3 modules that will be active during the peak hour and the numbers of users of each module. For transaction-based sizing, we break down the modules by transaction type and specify the number of transactions to be processed during the peak hour. For example, in user-based sizing, customers could indicate that twenty Financial Accounting (FI) users will be active; for transaction-based sizing, customers would specify some number of A/P Payments, A/R Invoices, GL Postings, and so on. It is important to understand that not every SAP R/3 module or transaction used by the organization should be included; only those R/3 modules and transactions that will be active during the peak hour should be reported. The batch processing workload should also be identified. For user-based sizing, the batch volume is a percentage of the total system workload; by default, we recommend assuming that 20% of the total workload during the peak hour is batch processing. For transaction-based sizing, it is the batch transaction volumes for the peak hour that are specified. For further information, you can contact the IBM national ERP Solutions team or the IBM/SAP International Competency Center (ISICC) in Walldorf, Germany. For IBM employees, refer to the ISICC intranet Web site: http://w3.isicc.de.ibm.com
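To make the 65% target concrete, here is a sketch of the arithmetic with invented numbers. Suppose the customer reports 100 FI-equivalent users active during the peak hour, and benchmark data rates a candidate server at 150 such users at full load. The projected utilization is 100/150, or about 67%, slightly above the 65% target, so the methodology would recommend the next larger configuration; a server rated at 200 users would run at 50%, leaving headroom for the batch workload (by default 20% of the peak-hour total) and reporting.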
Two electronic sizing tools are available to help IBM sales teams, partners, and customers. The SAP QuickSizer is accessible from the Internet on the SAPNET Web site (http://sapnet.sap.com; a user ID and password are required). The second tool is the ISICC R/3 Sizing Tool, available on the ISICC Lotus Notes server, which is accessible only by IBM employees at the time of writing. To access this server, please review the instructions on the ISICC intranet Web site.
Throughout this book, you will find references to the usual conventions; <SID> denotes the three-character SAP system identification code.
As a minimum, you should use four hard disk drives in two separate RAID-1 arrays:
- Because page files have a write/read ratio of 50% or more, placing them on RAID-5 arrays would decrease performance.
- To ensure fast recovery even in the case of losing more than one hard disk drive, the second copy of Windows NT should be installed on a different RAID array from the production system.
Using only two large hard disk drives, or combining all drives in a single RAID array, would not meet these objectives. For these disks, hardware RAID must be used in both the SCSI clustering configuration and the Fibre Channel configuration.
With the ServeRAID II and ServeRAID 3H/3HB, it is also possible to attach the operating system disks to shared channels (using ServeRAID merge groups). This seems an efficient use of controllers and storage enclosures. However, we do not recommend it, because it increases the complexity of the cluster installation. Some recovery and maintenance tasks become more difficult, because disconnecting a server from the shared bus prevents booting that machine. Given the growth of typical R/3 databases, the savings would only be short-term.
Windows NT page file
SAP recommends that you create a Windows NT page file of four times the RAM for non-cluster configurations, and five times the RAM for cluster configurations. However, a single Windows NT page file has an upper size limit of 4 GB. For performance reasons, it is recommended to place the page files on separate physical disks. Therefore, when installing a large SAP configuration using the Netfinity 7000 M10 or the Netfinity 8500R, adding one or two expansion enclosures for external disks may be necessary. The EXP15 can be split into two SCSI channels, each one used by a node of the cluster and connected to a ServeRAID adapter that is not configured for clustering.
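As a sketch of the arithmetic: a cluster node with 1 GB of RAM needs 5 x 1 GB = 5 GB of page file space, which must be split because of the 4 GB per-file limit; following the disk layout below, page file #1 (4 GB) would go on drive F: and page file #2 (1 GB) on drive E:. A node with 4 GB of RAM needs 5 x 4 GB = 20 GB, that is, five 4 GB page files on five separate arrays.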
According to these considerations, we should have the following disk layout on both cluster nodes (Table 6):
Table 6. Windows NT disk layout recommendations
Windows NT disk 0 - two internal 9.1 GB disk drives in a RAID-1 array with two logical drives:
   C:\ - Windows NT 4.0 EE production operating system; Oracle: complete software in \ORANT; Microsoft SQL Server: client software; DB2 software in \SQLLIB; client for backup software; system management software
   E:\ (4 GB) - page file #2 (up to 4 GB)

Windows NT disk 1 - two internal or external 9.1 GB disk drives in a RAID-1 array with two logical drives:
   D:\ (4.5 GB) - second copy of the operating system; client for backup software
   F:\ (4.5 GB) - page file #1 (up to 4 GB)

Windows NT disks 2 and 3 - 9.1 GB internal or external disk drives in RAID-1 arrays:
   G:\ (4 GB) - page file #3 (up to 4 GB)
   H:\ (4 GB) - page file #4 (up to 4 GB)

Additional disk for page file #5 (up to 4 GB) if you follow the five-times-RAM recommendation.
Hot spare - one 9.1 GB disk drive
Note: As per Knowledge Base article Q114841, the boot partition can be up to 7.8 GB in size.
Not all of the Windows NT disks described in Table 6 have to be created in every case. The configuration depends on the Netfinity server model and the amount of RAM installed: each model has a different number of hot-swap bays for hard disk drives and a different maximum RAM size.
Windows NT disk 0
Partition C: We use this primary NTFS partition for the production Windows NT 4.0 Enterprise Edition operating system (this is called the boot partition).
Partitions
To avoid any confusion with the drive letter assignment for each partition, we recommend you use only primary partitions. The Windows NT 4.0 installation procedure does not allow a second primary partition on Windows NT disk 0. We therefore first create a primary partition (D:) on Windows NT disk 1, install Windows NT 4.0 there, boot from it, and, with the Windows NT Disk Administrator, create the second primary partition on Windows NT disk 0. We then install Windows NT 4.0 on drive C:.
Partition E: This partition contains only the second page file (the first one resides on the F: partition). It is used if the total page file size has to be greater than the 4 GB installed on F:.
Windows NT disk 1
Partition D: We use this drive as an emergency (or backup) Windows NT installation (NTFS, primary partition). Configuring an emergency Windows NT is good installation practice and is recommended by SAP. To eliminate as many problem sources as possible, this system should be configured as a stand-alone server, and it should not contain any software that is not absolutely necessary for basic restore operations.
Partition F: This drive contains a page file of up to 4 GB. If the page file size has to be greater, partition E: has to be used as well, for a total size of up to 8 GB. Beyond this value, more disks in RAID-1 arrays have to be added.
The advantages of using two Windows NT disks are higher availability and faster recovery. If disk 0 fails, we can boot from the already present and configured disk 1 (with a Windows NT formatted diskette and some necessary basic Windows NT files). We are then able to fix driver, registry, and other problems on disk 0, or we can immediately start a full restore to disk 0 from a previous offline backup. For more details on Windows NT availability and recovery, see the redbook Windows NT Backup and Recovery with ADSM, SG24-2231.
Page file size
Windows NT 4.0 Enterprise Edition does not allow you to create a page file greater than 4 GB, and it cannot use more than 4 GB of RAM. Therefore, the maximum total page file size that can be needed never exceeds 4 GB x 5 = 20 GB (according to the SAP recommendation for running MSCS). Intel has developed two extended server memory architectures, known as Physical Address Extension (PAE) and Page Size Extension (PSE). The PSE driver allows Windows NT 4.0 (Enterprise Edition only) to use memory beyond 4 GB. At the time of writing, no information is available regarding the certification of this driver or the ability of the database software to run on such a system. Windows 2000 will use the PAE feature on Intel 32-bit (IA-32) servers to support more than 4 GB of physical memory. PAE allows up to 64 GB of physical memory to be used as regular 4 KB pages, providing better performance than is available through the Intel PSE36 driver; therefore, you will not need the PSE36 driver with Windows 2000. For more information about PSE and PAE, see Chapter 6 of the redbook Netfinity Performance Tuning with Windows NT 4.0, SG24-5287. Nevertheless, the physical memory beyond 4 GB is not pageable. Therefore, the total page file never has to be greater than 20 GB, divided into several 4 GB
max files (or 16 GB if you intend to use only four times the RAM). Here is a sample operating system disk layout for each Netfinity server, with the maximum page file size:

Netfinity 5000
- Maximum RAM: 2 GB
- Maximum page file: 10 GB
- Maximum OS disk configuration (see Table 6):
  - Windows NT disk 0: two internal disks in RAID-1
  - Windows NT disks 1 and 2: three internal disks in RAID-1E
- No internal bay available for a hot-spare disk

Netfinity 5500
- Maximum RAM: 1 GB
- Maximum page file: 5 GB
- Maximum OS disk configuration (see Table 6):
  - Windows NT disk 0: two internal disks in RAID-1 (with no page file)
  - Windows NT disk 1: three internal disks in RAID-1 (page file on drive F:)
- One internal bay available for a hot-spare disk

Netfinity 5500 M10
- Maximum RAM: 2 GB
- Maximum page file: 10 GB
- Maximum OS disk configuration (see Table 6):
  - Windows NT disk 0: two internal disks in RAID-1
  - Windows NT disks 1 and 2: three internal disks in RAID-1E
- One internal bay available for a hot-spare disk

Netfinity 5500 M20
- Maximum RAM: 4 GB
- Maximum page file: 20 GB
- Maximum OS disk configuration (see Table 6):
  - Windows NT disk 0: two internal disks in RAID-1
  - Windows NT disk 1: two internal disks in RAID-1
  - Windows NT disk 2: two internal disks in RAID-1
  - Windows NT disk 3: two external disks in RAID-1 (EXP15 in a non-cluster configuration)
  - Windows NT disk 4: two external disks in RAID-1 (EXP15 in a non-cluster configuration)
- One external bay available for a hot-spare disk (if the EXP15 is divided into two separate SCSI buses, each one used by one node of the cluster)

Netfinity 7000 M10
- Maximum RAM: 8 GB
- Maximum page file: 20 GB
- Maximum OS disk configuration (see Table 6):
  - Windows NT disk 0: two internal disks in RAID-1
  - Windows NT disk 1: two internal disks in RAID-1
  - Windows NT disks 2, 3 and 4: four external disks in RAID-1E (EXP15 in a non-cluster configuration)
- One external bay available for a hot-spare disk (if the EXP15 is divided into two separate SCSI buses, each one used by one node of the cluster)
Netfinity 8500R
- Maximum RAM: 16 GB
- Maximum page file: 20 GB
- Maximum OS disk configuration (see Table 6):
  - Windows NT disk 0: two internal disks in RAID-1
  - Windows NT disks 1, 2, 3 and 4: four external disks in RAID-1 Enhanced (EXP15 in a non-cluster configuration)
- One external bay available for a hot-spare disk (if the EXP15 is divided into two separate SCSI buses, each one used by one node of the cluster)
Drive letters in Windows NT
Whatever the database or disk subsystem technology used, only a limited number of Windows NT partitions can be created: one for each of the 26 letters of the alphabet. This can be a critical limitation for a very large database, as half of the letters are already in use by system files, SAP software, or database logs. Therefore, the Windows NT partitioning and the number of arrays for the data files should be planned carefully, taking into consideration the limitations of both SCSI and Fibre Channel. See 3.5, ServeRAID SCSI configurations on page 62 and 3.6, Fibre Channel configurations on page 68 for detailed explanations of these limitations.
In an MSCS environment, you cannot store any database-related files on this disk, because this disk is required for the SAP application server.

3.4.2.3 Oracle files
The DBMS data and log files must reside on shared Windows NT disks different from the quorum and R/3 disks. There are recommendations for Oracle, SQL Server, and DB2 to distribute the files over several RAID arrays for security and performance reasons; these are discussed in the following sections.

The Oracle software home directory (the default is \ORANT) is installed locally on each cluster node. A natural choice for placing this directory is the production Windows NT operating system partition (in our disk layout, C: on Windows NT disk 0). The redo log files and the Oracle FailSafe repository are on shared disks.

Redo logs are the fundamental Oracle transaction logs. Oracle organizes the logs in redo log groups containing identical copies of the logs. One of the copies is called the original log, while the other is called the mirrored log. The Oracle LGWR (Log Writer) process writes to both files simultaneously, so these files are exact mirror images. The purpose is to have a backup of the log files to be used if there is any problem with the original logs. Since both of these log files are continuously written by the LGWR process, it is essential to put them on different disks.

Once Oracle fills one of the log files, a log switch is performed; that is, the LGWR process starts to write to the log files in another log group. Simultaneously, the ARCH (Archiver) background process starts to copy the logs from the filled redo log group to the archive directory (the SAP archive). The archiving is necessary because, when the LGWR has exhausted all the redo log groups, the next log switch brings it back to the original redo log group, and it then begins to overwrite the original log. Thus, to avoid losing the data in the logs, it is necessary to archive them first.

Since the ARCH process reads the data from the log directories while the LGWR is writing to them, it is important to have more than one redo log group. In this way, while ARCH is reading from one log file on one disk, the LGWR is writing to a second log file on a different disk. For the same reason, it is important to have a dedicated SAP archive disk: the ARCH process can then write data to the archive disk without competing with the LGWR for I/O resources. During the installation, R3Setup creates four redo log groups with the structure shown in Table 7:
Table 7. Redo log groups

   Group   Original log   Mirrored log
   11      ORILOGA        MIRRORLOGA
   12      ORILOGB        MIRRORLOGB
   13      ORILOGA        MIRRORLOGA
   14      ORILOGB        MIRRORLOGB
For the reasons explained above, it is important to have at least groups 11 and 13 on a separate physical disk from the one containing groups 12 and 14.
For further details, see Oracle Architecture by S. Bobrowski, Chapter 10, and Oracle8 Tuning by M. J. Corey et al., Chapter 3.

The Oracle DBWR (Database Writer) process accesses data and indexes simultaneously. Hence, it is essential to create different tablespaces for data and indexes and put them on different disks. Besides this basic tuning rule, a second fundamental rule says that you should try to place the most frequently used tables on different disks. SAP R/3 4.5B tries to satisfy these requirements by creating 27 different tablespaces spread over the six SAPDATA directories. If you have exactly six different disks for the SAPDATA directories, you can be confident of a correct distribution of your data and indexes. If you have still more space, you can try to improve this distribution further. If you have less space, you can try to improve on the R3Setup distribution of tablespaces. If a large volume of customer data is expected, SAP recommends storing at least the four following tablespaces on separate physical disks:
Table 8. Tablespaces
If, for performance reasons, you need a nondefault data file distribution, you can customize the configuration by manually modifying the CENTRALDB.R3S file. For further details, see Oracle8 Tuning by M. J. Corey et al., Chapter 3.
To ensure that no data is lost in a single disk or RAID array failure, the following three groups of Oracle files should be on different shared RAID arrays:
- Data files (tablespace files)
- At least one set (original or mirrored) of online redo log files
- Archived redo log files
We recommend Oracle software mirroring for the database online redo log files, even if hardware mirroring is already provided. There are many possible causes of corrupt online redo log files (disk failures, driver errors, internal Oracle errors), and not all of them are covered by hardware RAID. To avoid controversy over responsibility, and because of support implications, we advise using Oracle mirroring of the online redo log files. The additional level of security outweighs the small loss of disk space and performance.
Another Oracle-specific directory that has to reside on a shared Windows NT disk is the Oracle FailSafe Repository (directory \ORANT\FailSafe). It is used to store information (50-100 MB) about the Oracle databases in the cluster that are configured for failover (in an SAP MSCS cluster, this is only the R/3 database). Because of cluster resource dependencies, the Oracle FailSafe Repository must reside on a disk that belongs to the Oracle cluster resource group. The
OracleAgent80<virtual_DB_name> MSCS resource depends on the Oracle FailSafe Repository as well as on the Oracle <SID> Network Name resource. Thus the Oracle FailSafe Repository disk belongs to the Oracle resource group. You may place this directory on any Windows NT disk used for Oracle data or log files, or configure a separate shared disk for clarity. For more details on the different configurations, see the SAP manuals R/3 Installation on Windows NT - Oracle Database, Release 4.5B (SAP product number 51005499, May 1999) and Conversion to Microsoft Cluster Server: Oracle, Release 4.0B / 4.5A / 4.5B (SAP product number 51005504, May 1999).
Recommended disk configuration
The following disk configuration is not the only possible layout, but it is an optimal one, based on Oracle, SAP, and Microsoft recommendations, for an excellent level of security and performance.
Table 9. Oracle / MSCS recommended disk layout
\MSCS (quorum) - two disk drives in a RAID-1 array with one logical drive
\USR\SAP - two disk drives in a RAID-1 array with one logical drive
\ORACLE\<SID>\orilogA, \ORACLE\<SID>\orilogB, \ORACLE\<SID>\sapbackup, \ORACLE\<SID>\sapcheck - two disk drives in a RAID-1 array with one logical drive
\ORACLE\<SID>\mirrlogA, \ORACLE\<SID>\mirrlogB, \ORACLE\<SID>\sapreorg, \ORACLE\<SID>\saptrace - two disk drives in a RAID-1 array with one logical drive
Windows NT disk 7, L:\ (ArchLog): \ORACLE\<SID>\saparch, \ORANT\FailSafe - two disk drives in a RAID-1 array with one logical drive
\ORACLE\<SID>\sapdata1 through \ORACLE\<SID>\sapdata6 - number of drives depends on the database size: RAID-1, RAID-1E, RAID-5 or RAID-5E
No name - hot-spare disk drive
At least two EXP15 expansion enclosures are necessary for configuring SAP and Oracle in a cluster environment. 10 disk drives are used for installing the MSCS quorum, SAP files and all the online redo logs. Then, data files can be stored on as many drives as necessary, depending on the database size. Five arrays are
necessary for the software installation on the shared disks. For large databases, two Fibre Channel RAID controller units may be useful. For better performance, we recommend that you create only RAID-1 arrays when using ServeRAID. With ServeRAID, you will see a significant performance difference between RAID-1 and RAID-5, because ServeRAID caching must be set to write-through for the shared drives to ensure consistency in the case of a server failure. The Fibre Channel controllers have mirrored caches external to the servers, so cache consistency is ensured independent of server failures, and RAID-5 performance with write-back caching is comparable to RAID-1 (especially when the RAID-5 array is large).
Log files
Database data files and both sets of online redo log files should always be distributed over different physical disks. As the online redo logs are written synchronously, they produce the most I/O activity of all database files. In large configurations, this activity can become critical. The recommendation is then to add two new RAID-1 arrays to physically separate the active online logs A and B, as well as the mirrored logs A and B.

3.4.2.4 IBM DB2 files
DB2 UDB Enterprise Edition has to be installed locally on both nodes. The standard directory is C:\SQLLIB. The conversion into a clustered database is done after the regular R/3 installation. The clustered DB2 instance, stored in the directory \DB2PROFS, and the databases <SID> and <SID>adm need to be installed on a shared drive.

The database data files, the active database logs, and the archived database logs have to be stored on different NT disks so that, in the case of a RAID array failure, no data is lost, or at least the amount of data lost is minimized. The disk holding the active database logs is the disk with the largest number of synchronous disk writes. Try to connect this disk to a channel or adapter with low load. We recommend separating this disk at least from the one storing the archived database logs, because every time the DB2 user exit process is started, a log file is copied from the active log files disk to the archived log files disk, while the database is writing new log file data to the active log disk.

In a cluster configuration, it is not possible to store the archived database logs on the same drive as the R/3 binaries. The DB2 user exit process is always started on the machine running the database instance. If the R/3 server is running on the other node, the DB2 user exit process has no access to the R/3 Files drive. Figure 21 shows this configuration. The DB2 RDBMS runs on Node B and controls the database <SID>, the active database log files, and the invocation of the DB2 user exit (DB2UE). When a database log file becomes inactive, the RDBMS calls the DB2UE (1). The DB2UE in turn copies the database log file from the active logs disk to the archived logs disk (2), and then deletes the original database log file (3).
Figure 21. Database log archives must be separate from R/3 binaries
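For illustration, the log locations and the user exit can be set with DB2 command line processor commands like the following. This is only a sketch: the drive letters and directory names follow the layout in Table 10 below, and <SID> must be replaced by your system ID.

rem Place the active logs on the DBLog disk (K:)
db2 update db cfg for <SID> using NEWLOGPATH K:\DB2\<SID>\log_dir
rem Enable the user exit, so that inactive logs are archived to the ArchLog disk (L:)
db2 update db cfg for <SID> using USEREXIT ON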
For database data, you can use one RAID-5 array or several RAID-1 arrays. If you plan to use the IBM ServeRAID controller, try to use RAID-1 arrays. The reason is that you have to disable the write-back cache for all shared disk drives. Without this cache, write access to RAID-5 arrays is very slow, because every write access requires that, besides the real data, the new parity be written back to disk in synchronous mode. If you use a different controller type in conjunction with RAID-5, make sure that it has a write-back cache and that this cache can be activated in a cluster environment.

The maximum number of RAID-1 arrays to be configured for the database data should not exceed six for a new installation, because R3SETUP offers only the distribution of SAPDATA1-SAPDATA6 to different Windows NT disks. You can add additional drives later as needed.

Recommended disk configuration
For performance reasons, the database data files should be distributed over six Windows NT disks. One possible, optimal distribution of database data on different Windows NT disks is shown in Table 10:
Table 10. DB2 / MSCS recommended disk layout

Disk name and letter | Purpose | Disk type
Windows NT disk 4, I:\ Quorum | \MSCS | Two disk drives in a RAID-1 array with one logical drive
Windows NT disk 5, J:\ SAPExe | \USR\SAP | Two disk drives in a RAID-1 array with one logical drive
Windows NT disk 6, K:\ DBLog | \DB2\<SID>\log_dir | Two disk drives in a RAID-1 array with one logical drive
Windows NT disk 7, L:\ ArchLog | \DB2\<SID>\log_archive, \DB2\<SID>\saprest, \DB2\<SID>\sapreorg, \db2rsd, \DB2PROFS\db2rsd, \DB2\<SID>\saparch | Two disk drives in a RAID-1 array with one logical drive
Windows NT disk 8, etc., M:\, etc., Data | \DB2\<SID>\sapdata1 to \DB2\<SID>\sapdata6 | Number of drives depends on the database size: RAID-1, RAID-1E, RAID-5 or RAID-5E
No Name | Hot Spare |
Four arrays are necessary for the software installation on the shared disks. For large databases, two Fibre Channel RAID controller units may be useful.

3.4.2.5 SQL Server 7.0 files
The DBMS data and log files must reside on shared Windows NT disks different from the quorum and the R/3 disk. The SQL Server database software consists of a server and a client part. The client software must be installed locally on each cluster node (default directory: \MSSQL). A natural choice for placing this directory is the production Windows NT operating system partition (in our disk layout, C: on Windows NT disk 0). The server part of the software (default is another \MSSQL directory), the SQL Server master database, and the container files with application data and logs (called SQL Server devices) are on shared disks. To ensure that no data is lost in a single disk or RAID array failure, the following three types of SQL Server devices should be on different shared RAID disk arrays:
- Data devices
- Log devices
- TEMP device
Table 11. SQL Server / MSCS recommended disk layout

Disk name and letter | Purpose | Disk type
Windows NT disk 4, I:\ Quorum | \MSCS | Two disk drives in a RAID-1 array with one logical drive
Windows NT disk 5, J:\ SAPExe | \usr\sap | Two disk drives in a RAID-1 array with one logical drive
Windows NT disk 6, K:\ SQLSrv | \MSSQL (server part of the software and master database) | Two disk drives in a RAID-1 array with one logical drive
Windows NT disk 7, L:\ SQLLog | Log devices | Two disk drives in a RAID-1 array with one logical drive
Windows NT disk 8, etc., M:\, etc., Data | \<SID>DATA<n>\<SID>DATA<n>.DAT (six data devices) | Number of drives depends on the database size: RAID-1, RAID-1E, RAID-5 or RAID-5E
No Name | Hot Spare |
Four arrays are necessary for the software installation on the shared disks. For large databases, two Fibre Channel RAID controller units may be useful, provided that each controller unit does not handle more than eight logical units.
Windows NT limitation
Do not forget that the ultimate limitation is the 26 drive letters that Windows NT uses for all the disks. Some letters are already in use for the floppy disk drives, the CD-ROM drives, and so on.
The EXP15 supports Ultra SCSI data transfers of up to 40 MBps at distances of up to 12 meters from the server using LVDS cabling.
The EXP15 contains an electronics board (ESM or Environmental Services Monitor board) that interfaces between the external SCSI cables and hot-swap backplanes. The ESM board provides two main functions:
- Status reporting for the subsystem through the SCSI interface
- SCSI connection between the subsystem and the server

The EXP15 has two hot-swap redundant 350W power supplies. Each power supply contains its own power cord. In addition, two hot-swap cooling units containing separate dual fans provide cooling redundancy. If a failure occurs with either of the redundant power supplies or cooling fans, an LED will light to indicate a fault and its location.

The EXP15 has two SCSI connections, both using the VHDCI 0.8 mm 16-bit SCSI connector. (The EXP10 uses standard 68-pin SCSI connectors.) There are three configurations possible with the EXP15: two independent five-drive buses, one 10-drive bus, and a two-node clustering configuration:

Configuration for two independent SCSI buses
To configure both SCSI buses independent of each other, connect one cable from a ServeRAID channel to the bus 1 connector (as shown in Figure 23) and another ServeRAID channel to the bus 2 connector. To separate the two buses, set switch 1 in the switch block to on (up). In this configuration, each bus contains five disks.
Figure 23. EXP15 option switch block and SCSI bus connectors
Configuration for one bus
To configure the Netfinity EXP15 as a single 10-disk SCSI bus, attach one external SCSI cable from the ServeRAID adapter to the bus 1 connector, then
set switch 1 to off (down). There is no need to connect a terminator to the bus 2 connector, as the EXP15 will automatically terminate the bus.

Clustering configuration
To use the EXP15 in a cluster with a maximum of 10 drives on a single SCSI bus, connect the external SCSI cable from the ServeRAID adapter in server 1 to the SCSI bus 1 connector, connect another SCSI cable from a ServeRAID adapter in server 2 to the SCSI bus 2 connector, and then set switch 1 to off (down). This configuration shares the data storage of the EXP15 between two clustered servers.
The EXP200 supports Wide Ultra2 (80 MBps) transfer speeds at up to 20 meters using LVDS SCSI cabling. The EXP200 shares the same drive options as the new Netfinity 8500R and Netfinity 5600 servers.
Figure 25. ServeRAID-3H/HB basic cluster configuration: two servers connected by a crossover heartbeat cable, sharing RAID-1 arrays in two EXP15 enclosures
Figure 25 shows the ServeRAID-3H/HB basic cluster configuration. When running Microsoft Cluster Server software, you no longer need to interconnect the third channel of the controllers to which the quorum is connected (formerly known as the SCSI heartbeat connection). You can now attach internal or external drives to the third channel of your ServeRAID-3HB controller. For the ServeRAID II adapter, you need to form the SCSI heartbeat connection between the Channel 3 connectors on the adapter in each server that has the MSCS quorum disk defined. This requires the Third Channel Cable Kit (part 76H5400) for each of the two adapters. Because both adapters have 0.8 mm VHDCI female connectors as the external interface, there are two possibilities to form the connection itself:
- A Netfinity Ultra2 SCSI cable with 0.8 mm VHDCI male connectors on both ends, using either a 4.3 m cable (part 03K9311) or a 2 m cable (part 03K9310)
- A combination of a 0.8 mm to 68-pin cable (4.3 m, part 01K8029, or 3 m, part 01K8028) with a 68-pin to 0.8 mm adapter (part 01K8017)
ServeRAID order
When you install multiple ServeRAID adapters in the same server, you must install the adapter that will manage the startup drives (also called boot drives) in a PCI slot that is scanned before those of subsequent ServeRAID adapters. These specifications are not the same for all servers. For your server-specific information, please refer to the IBM Shared Disk Clustering Hardware Reference available from:
http://www.pc.ibm.com/netfinity/clustering
There are two network adapters configured in Figure 25, but there can be more, depending on your networking requirements. We discuss this in 3.7, Network configurations on page 76.
Hence, our recommendations are:
Configure two separate ServeRAID adapters:
- One for the local disks containing the operating system and page files
- One for the shared disks containing the database files
Set the stripe sizes as follows:
- 64 KB for the local disks (best performance for the page files)
- 8 KB for the shared disks (best performance for the database)
Set Read Ahead as follows:
- On for the local disks
- Off for the shared disks
Set the cache policy as follows:
- Write-through for all shared disks
- Write-back for all local disks
You should install the battery-backup cache option for the ServeRAID controller connected to the local disks.
Please refer to 4.16.5, DB tuning on page 127 for a discussion of how to tune the RDBMS being used in the installed SAP R/3 environment.
Review 3.2, Certification and validation of hardware on page 42 to check the current certified hardware configurations for MSCS and SAP R/3. The Netfinity Fibre Channel RAID Controller Unit and Netfinity Fibre Channel Hub are supported only in rack installations. You may combine these components with tower model servers, but we recommend that you install the whole cluster as a rack solution to protect the optical cables from damage. We provide only a general overview of the Fibre Channel technology here. For further detailed information and an installation guide, refer to the redbook Implementing Netfinity Disk Subsystems: ServeRAID SCSI, Fibre Channel and SSA, SG24-2098.
The main building blocks of a Netfinity Fibre Channel configuration include:
- Netfinity Fibre Channel RAID Controller Unit with one RAID controller (a second redundant FailSafe RAID controller is optional)
- Netfinity Fibre Channel Hub, Gigabit Interface Converter (GBIC) options (short-wave and long-wave), short-wave optical cables
- EXP15 storage expansion enclosures with up to 10 Ultra SCSI disk drives

3.6.1.1 Fibre Channel PCI Adapter
The IBM Netfinity Fibre Channel PCI Adapter, 01K7297, is a direct memory access bus master, half-length, host adapter. It uses the ISP2100 chip, which combines a RISC processor, a fiber protocol module with gigabit transceivers, and a PCI local bus interface in a single-chip solution. The adapter supports all Fibre Channel peripherals that support Private Loop Direct Attach (PLDA) and Fabric Loop Attach (FLA). The external connector is an SC style that supports short-wave fiber optic cabling with a total cable length of up to 500 meters. The adapter is necessary to connect Netfinity servers to the Fibre Channel network. But in contrast to the ServeRAID SCSI adapters, the Fibre Channel PCI Adapter does not provide any RAID functionality. All RAID functions are performed by the Fibre Channel RAID controller.

3.6.1.2 Fibre Channel RAID Controller Unit
The IBM Netfinity Fibre Channel RAID Controller Unit, 35261RU, is a 19-inch rack-mounted component (four rack units high) which may contain one or two RAID controllers (it is shipped with one controller). The controllers share a 3.3 volt battery backup unit for protecting the cache in the event of a power failure.
Figure 26. Netfinity Fibre Channel RAID Controller Unit
The host connection for each controller is provided by Fibre Channel. To connect the EXP15 storage enclosures, six independent low voltage differential signaling (LVDS) Ultra2 SCSI buses are available (with active termination and 0.8 mm VHDCI connectors). The RAID controller unit supports RAID-1, RAID-1+0
(striped mirror sets), RAID-3, and RAID-5. Data is buffered in a 128 MB cache, which is protected by the internal battery.
The controller unit has 9-pin RS-232 connectors for each installed RAID controller, used for configuration, monitoring, and diagnostics, even when the host is not available. Remote configuration and monitoring across a network is possible through Ethernet connections for each controller. Redundant power supplies, fans, the battery unit, and the controllers are all hot-swappable.

When you install the optional Netfinity Fibre Channel FailSafe RAID Controller (part 01K7296), both controllers can be set up as a redundant pair, either in an active/passive configuration or in an active/active configuration, which is recommended for better performance.

Each RAID array is presented to the servers as one or more logical units (LUN = logical unit number). LUNs are parts of a RAID array. A LUN is, from the operating system's point of view, similar to a logical drive on a ServeRAID array, although the implementation at the SCSI protocol layer is different. Windows NT treats the LUN as one Windows NT disk. But in contrast to ServeRAID, the failover unit in MSCS is the LUN (not the RAID array). Each LUN can fail over independently from other LUNs sharing the same RAID array.

In active/active mode, both controllers may own LUNs and serve them to the hosts. All LUNs are completely independent. Multiple LUNs residing on the same RAID array can be owned by different controllers. This may give performance benefits if one controller is a bottleneck. In active/passive mode, one controller owns all LUNs, while the other operates as a hot standby. For the configuration and management of the Fibre Channel RAID Controller, the SYMplicity Storage Manager software is provided.

3.6.1.3 Netfinity Fibre Channel Hub
The IBM Netfinity Fibre Channel Hub (35231RU) is a 7-port central interconnection for Fibre Channel Arbitrated Loops. LED indicators provide status information to indicate whether a port is active or bypassed. (No remote management options are provided.) Hub cascading is supported to a depth of two, thus up to 37 ports may be connected.
Each port requires a Gigabit Interface Converter (GBIC) to connect it to the attached node. The hub supports any combination of short-wave or long-wave optical GBICs. The GBICs are hot-pluggable into the hub, which means that you can add servers and storage modules to the Fibre Channel Arbitrated Loop dynamically without powering off the hub or any connected devices. If you remove a GBIC from a hub port, that port is automatically bypassed. The remaining hub ports continue to operate normally. Conversely, if you plug a GBIC into a hub port, it will automatically be inserted into the loop and become a Fibre Channel node if valid Fibre Channel data is received from the attached device.

Four short-wave GBICs are shipped with the hub. They support connections to Netfinity Fibre Channel PCI Adapters and Netfinity Fibre Channel RAID Controller Units (using 5 meter or 25 meter short-wave Netfinity Fibre Channel cables, or customer-supplied short-wave cables up to 500 meters). There are also long-wave GBICs available for attaching customer-supplied long-wave optical cables between hubs. Any combination of short-wave and long-wave GBICs is supported. Netfinity Fibre Channel PCI Adapters and Netfinity Fibre Channel RAID Controller Units can only be connected to short-wave GBICs.

A Fibre Channel hub is required when more than two devices (also called nodes) are to be connected as a Fibre Channel Arbitrated Loop (FC-AL). Without a hub, failure of one device would break the loop because there is no redundancy in an FC-AL. The hub ensures that the loop topology is maintained. Each hub port receives serial data from an attached Fibre Channel node and retransmits the data out of the next hub port to the next node attached in the loop. This includes data regeneration (both signal timing and amplitude) supporting full-distance optical links. The hub detects any Fibre Channel loop node that is missing or is inoperative and automatically routes the data to the next operational port and attached node in the loop.
Table 12. Fibre Channel components

Description
IBM Netfinity Fibre Channel PCI Adapter, 01K7297
IBM Netfinity Fibre Channel Hub, 35231RU (Note 1)
IBM Netfinity Fibre Channel Cable 5 m
IBM Netfinity Fibre Channel Cable 25 m
IBM Netfinity Fibre Channel RAID Controller Unit, 35261RU (Notes 2 and 3)
IBM Netfinity Fibre Channel FailSafe RAID Controller, 01K7296 (Note 4)

Notes:
1 This part number does not apply in EMEA; use SFCH1xx instead (xx refers to a country-specific power cord and has to be replaced by the appropriate country code). The hub is shipped with four short-wave GBICs (part 03K9308; the part number for the long-wave GBIC is 03K9307).
2 Includes one Netfinity Fibre Channel RAID controller.
3 This part number does not apply in EMEA; use SFCU1xx instead (xx refers to a country-specific power cord and has to be replaced by the appropriate country code).
4 To be installed in the Fibre Channel RAID Controller Unit.
Figure 30. Fibre Channel cluster configuration with a single hub and a single RAID controller
Note: There are three single points of failure in this configuration:
- The RAID controller
- The hub
- The Fibre Channel cable between the hub and RAID controller

Because all access to the shared disks passes through these components, which exist only once in the cluster, a failure of one of them would cause both cluster servers to lose access to the shared disks. Thus all cluster disk resources would fail, and the DBMS and the R/3 processes would fail as well. The first step to avoid such situations is adding a redundant Fibre Channel FailSafe RAID Controller in the controller unit. Attaching this second controller to the hub would eliminate only the first single point of failure. To avoid having the hub or any FC cable as the critical point, a second hub is needed. With two hubs, each hub forms a separate FC loop. The RAID controllers are connected to different loops, while the servers are connected to both loops. This gives the configuration shown in Figure 31 on page 74.
Figure 31. Fibre Channel cluster configuration with two hubs and redundant RAID controllers
Without special precautions, the operating system would not recognize the disk access paths provided through the two Fibre Channel adapters as redundant paths to the same common set of LUNs. To ensure that the second FC adapter is considered an alternate path to the LUNs accessed by the first adapter (and vice versa), the SYMplicity Storage Manager software adds a redundant disk array controller (RDAC) driver and a resolution daemon. This software supports a fully redundant I/O path design (adapter, cable, and RAID controller) with host-level failure recovery, transparent to applications. The RDAC driver is not a kind of RAID software: it does not mirror between the FC loops. It ensures only Fibre Channel link integrity in the case of failure. The failing path or I/O initiator will be detected, and the backup RDAC will take its place. Because any mirroring is done by the Fibre Channel RAID Controllers between SCSI channels, this is not a method for disaster protection using FC. The maximum distance between disk drives in the same RAID-1 array is limited by the LVDS SCSI cable length.

This configuration combines large disk space, high performance, and high safety. When both RAID controllers in each controller unit are set up as an active/active pair, bottlenecks in the controllers or FC paths are avoided. If a SCSI
channel becomes a bottleneck, the drive buses of the EXP15 may be attached separately (see Figure 31 on page 74). This configuration may be expanded by adding more EXP15 enclosures until all six SCSI channels of the controller unit are used. Then a second controller unit (again with two redundant RAID controllers) can be attached by way of the hubs to the FC PCI adapters, providing another six SCSI channels. For performance reasons, no more than two controllers should be attached to a single FC PCI adapter.
Logical units and MSCS
The concept of LUNs allows very effective usage of disk space. We can combine several smaller Windows NT disks on the same RAID array using different LUNs. Because all LUNs are completely independent, these Windows NT disks may even belong to different cluster resource groups. However, you should not combine, as LUNs of the same RAID array, Windows NT disks that need to be separated for data security. Failure of an array (because of multiple disk failures) causes all LUNs of that array to fail. Thus we never provide LUNs with data files and LUNs with log files from the same array.

The Fibre Channel LUNs will be configured in three different MSCS resource groups, which may fail over independently: the R/3 group (the disk with the \USR\SAP directory tree), the Database group (containing all Windows NT disks with an \ORACLE\<SID> tree or \DB2\<SID> tree or the MSSQL directories, and the Oracle FailSafe Repository), and the Cluster Group (containing the Cluster Name, Cluster IP Address, and the Time Service). The Oracle FailSafe Repository may be located on any Oracle disk or on a separate partition, but should not be installed on the quorum disk, as per OSS note 0112266.

The only FC topology that guarantees that no cluster component constitutes a single point of failure is the one using two independent loops with redundant RAID controllers, as shown in Figure 31 on page 74.
Number of LUNs
There are two limits on the number of logical units you can have:
- Host adapter: Prior to v6.16 of the PCI Fibre Channel host adapter driver, Windows NT limited the maximum number of logical units (LUNs) per RAID controller unit to eight (whether the controller unit has a single controller or redundant controllers). Thus, if you had a RAID controller unit with two active controllers, the total number of LUNs between them could not be more than eight. However, with the release of v6.16, up to 256 LUNs are supported by the driver.
- RAID controller: The Fibre Channel RAID Controller supports up to 32 LUNs. The limit is therefore now 32 LUNs.

The maximum number of disks configurable into a LUN depends on the RAID level defined:
- RAID-0: 20 disks per LUN
- RAID-1: 30 disks per LUN (15 usable)
- RAID-3: 20 disks per LUN (19 usable)
- RAID-5: 20 disks per LUN (19 usable)

If the system runs out of disk space, you have two possibilities:
- You can create additional RAID-1 or RAID-5 arrays (each with one or more LUNs) and place them into the appropriate cluster group. Be aware of the limited number of drive letters available in Windows NT.
- You can expand the existing LUNs by adding more disks. This can be done dynamically with the SYMplicity Storage Manager software. (Note that this does not change the size of the Windows NT partitions. You will need a third-party tool to perform this function.)

At installation, the SYMplicity Storage Manager software is limited to 16 RAID controllers. This limit is determined by the System_MaxControllers parameter setting in the C:\Program Files\SYMSM\RMPARAMS file. If your system has more than 16 RAID controllers, change the parameter in the RMPARAMS file to reflect the actual value. Each hub is shipped with four short-wave GBICs, and can accept up to seven.
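For example, if a system had 20 RAID controllers, the entry in C:\Program Files\SYMSM\RMPARAMS would be changed along the following lines (a sketch; check the file for the exact spelling used by your SYMplicity version):

System_MaxControllers=20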
For an understanding of Microsoft Cluster Server (MSCS) concepts, review the chapter "Microsoft Cluster Server Concepts" in the Microsoft Cluster Server Administrator's Guide, \SUPPORT\BOOKS\MSCSADM.HLP on the Windows NT 4.0 Enterprise Edition CD 1.
Figure 32. Cluster networks: backbone network, public network with clients, and the cluster interconnect between Node A and Node B
3.7.1 Components
The components that make up the network are:

Backbone network
In large R/3 systems the network adapters can cause bottlenecks, so the use of multiple network interface cards is recommended. The term backbone network is used to describe the LAN on which interserver traffic is concentrated. This is a very fast segment of 100 Mbps or more, and it is the preferred path for local traffic between the SAP application servers and the database server. Refer to 4.16.2.9, Network planning on page 121, where a more complete description can be found.

Public network
All servers and clients have a direct connection or route to this network, which may be a routed, switched, subnetted, or bridged environment. The whole enterprise uses this network for everything from file server access to print servers, e-mail, and so on. To maximize the bandwidth available for the R/3 traffic between the SAPGUI and the dialog instances, we must be aware of the total usage and capacity of this network. In a TCP/IP network the ideal way to handle this situation is by creating Virtual LANs (VLANs). See Switched, Fast and Gigabit Ethernet by Robert Breyer & Sean Riley (Macmillan Technical Publishing) for details on how to create a VLAN.

Private network
The term private network is used in many different ways. Here it refers to the crossover-connected (100Base-T) heartbeat (or interconnect) cable. MSCS constantly checks the status of the two nodes. The server that is in passive mode uses this cable to poll the other server for the status of all the resources being managed in the cluster. Provided we have configured the public network to accept all communications traffic (that is, client and interconnect traffic), the cluster will use the public network for polling if the interconnect link goes down. In this way, we have two redundant paths (public and backbone) for the cluster heartbeat. If a resource fails on the active node, it is through this cable that the cluster knows when to fail over.

All Windows NT nodes are members of the same domain, SAPDOM. For the two Windows NT servers forming the MSCS cluster, five network names and seven static IP addresses are needed, as shown in Table 13. We have also listed two more static IP addresses to show the connection to the backbone network:
Table 13. Network parameters for MSCS nodes
Server | NetBIOS name | Description | TCP/IP addresses
SERVERA | servera | MSCS node A | Public network: 192.168.0.11; private network: 10.0.0.11; backbone network: 172.16.0.11
SERVERB | serverb | MSCS node B | Public network: 192.168.0.21; private network: 10.0.0.21; backbone network: 172.16.0.21
SAPCLUS | sapclus | Cluster alias (for MSCS administration) | Public network: 192.168.0.50
ITSSAP | itssap | R/3 alias (for connecting to the R/3 services) | Public network: 192.168.0.51
ITSDBMS | itsdbms | DBMS alias (for connecting to the DBMS services) | Public network: 192.168.0.52
Server names
The MSCS part of the SAP installation manuals uses the names Node A and Node B for the two MSCS nodes. It does not matter which node is called A and which is called B. We used the real computer names SERVERA and SERVERB from our test installation. This convention makes it easier to understand the screen captures throughout this redbook.

The TCP/IP HOSTS file (in directory \WINNT\SYSTEM32\DRIVERS\ETC) was used for name resolution, containing entries for all five IP addresses on the public network. Note that the cluster-private LAN (crossover Ethernet connection) is used for the MSCS communication only. The virtual names ITSSAP and ITSDBMS on the public LAN are used for user access and communication between R/3 work
processes and the database. Thus the public network in our example has to handle the SAPGUI traffic (which is usually of low volume) and the database traffic (from the R/3 application servers and ADSM), which may be of very high volume. A small-bandwidth network will not be appropriate. A common practice in large SAP R/3 environments is to separate the users' network (small-bandwidth public LAN) completely from the network for database access (large-bandwidth SAP backbone). You may configure the SAP backbone as an additional network, thus having three network interface cards in the cluster nodes.

SAP recommends that during installation all settings be restricted so that the entire communication runs via the normal public network. The SAP installation documentation is also formulated under this assumption. After the installation is completed, you can reconfigure the cluster so that the communication to the database is carried out via the backbone network. Proceed as follows:
1. Choose the IP address for the database on the backbone network.
2. Change the name resolution for the virtual database name so that the database name resolves to the new IP address (in all HOSTS files or in the DNS server).
3. Change the IP resource in the database resource group so that it contains the IP address on the backbone network. The network to which this address is bound must also be modified.
4. Stop and start the database and R/3 resource groups.

You can simply create additional IP resources in the R/3 resource group, so that the R/3 system can also be reached via the backbone network (by the ADSM server, for example). Additional steps are not required because R/3 automatically receives the incoming data from all existing IP addresses. You can also include additional IP resources in the database group (for example, to be able to administer the database via the other network). Some databases (for example, Oracle) bind their entry ports during installation to certain addresses. In this case you must make additional configuration entries so that the new IP resource can be used. With Oracle, additional entries are needed in the file LISTENER.ORA.
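As an illustration only, the additional entry could make the listener accept connections on both the public and the backbone address of the virtual database server. The addresses and port in this sketch are assumptions (192.168.0.52 is the public DBMS alias from Table 13, 172.16.0.50 is a hypothetical backbone address, and your listener port may differ):

LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.52)(PORT = 1527))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.0.50)(PORT = 1527))
  )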
Windows environment. The advantage of WINS is its simplicity, robustness, and flexibility. The disadvantage is that WINS is only suited for smaller, flatly structured, homogeneous networks.

Hosts file
The HOSTS file is an ASCII text file that statically maps local and remote host names to IP addresses; it is located in \WINNT\SYSTEM32\DRIVERS\ETC. This file is read from top to bottom, and as soon as a match is found for a host name, the file stops being read. The HOSTS file is a reliable and simple solution, since all the information that is needed to resolve a name to an IP address is stored on the computer. This local administration gives very high performance. The cost of this performance is the need for a lot of maintenance if the environment is constantly changing. An example is:
192.168.0.1 servera node1
where node1 is the name of the alias.

The use of a second (mirrored) WINS or DNS server is strongly recommended.

Hint: When using a DNS server or HOSTS file with fully qualified names (for example, server1.company.com), you should enter the DNS domain in the Domain Suffix Search Order for the TCP/IP protocol in the Network Control Panel applet.
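In our cluster, for example, the HOSTS file on each node could contain entries like the following (the ITSSAP address is an assumption for this sketch; use the addresses you actually assigned to the virtual servers):

192.168.0.11    servera
192.168.0.21    serverb
192.168.0.50    sapclus
192.168.0.51    itssap
192.168.0.52    itsdbms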
Networks that support client-to-cluster communication (either with or without supporting node-to-node communication) are referred to as public networks. We recommend that you configure the public network to carry both the node-to-node and the client-to-cluster communications for added network redundancy. Before you install the MSCS software, you must configure both nodes to use the TCP/IP protocol over all interconnects. Also, each network adapter must have an assigned static IP address that is on the same network as the corresponding network adapter on the other node. Therefore, there can be no routers between two MSCS nodes. However, routers can be placed between the cluster and its clients. If all interconnects must run through a hub, use separate hubs to isolate each interconnect.

Note: MSCS does not support the use of IP addresses assigned from a Dynamic Host Configuration Protocol (DHCP) server for the cluster administration address (which is associated with the cluster name) or any IP Address resources. You should use static IP addresses for the Windows NT network configuration on each node.
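A quick way to check the resulting TCP/IP configuration of all adapters on each node is the standard ipconfig command from a command prompt:

ipconfig /all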
Auto-sensing network adapters negotiate a speed of 10 or 100 Mbps and a mode of half or full duplex. A poor cable installation can cause some ports on a 10/100 switch to run at different speeds. The solution is not to use auto-sensing, and to manually configure the network interface cards to the setting of the network, for example, 100 Mbps/full duplex. The switches and hubs must be able to support the 100 Mbps full duplex type of configuration. See Knowledge Base article Q174812 for more information.
definitions. Again, the exact behavior depends on the adapter's chipset and driver. To avoid any problems, you should not configure redundant adapters for the cluster heartbeat, but fail over to the public LAN instead.
4.9, MSCS verification on page 105
4.10, Create the installation user account on page 107
4.11, SAP and DBMS installation on page 108
4.12, SAP verification on page 110
4.13, DBMS verification on page 114
4.14, Backbone configuration on page 114
4.15, SAP cluster verification on page 117
4.16, Tuning on page 119
4.17, Configuring a remote shell on page 127
Covered in the DBMS chapters
In sections 4.4 to 4.16, the installation process is further divided into 30 steps, as shown in Figure 34:
Step 1: Configure internal disks, page 89
Step 2: Fill in general worksheet, page 90
Step 3: Windows NT installation on Node A, page 90
Step 4: Windows NT installation on Node B, page 92
Step 5: Modify BOOT.INI on Node B, page 94
Step 6: Configure disks, page 94
Step 7: SYMplicity Storage Manager configuration, page 95
Step 8: MSCS pre-install verification, page 96
Step 9: Configure the Server service, page 97
Step 10: Set the page file size, page 98
Step 11: Modify BOOT.INI for 4 GB tuning, page 99
Step 12: Remove drivers and protocols, page 100
Step 13: Install MSCS on Node A, page 101
Step 14: Install MSCS on Node B, page 102
Step 15: Installation of troubleshooting tools, page 102
Step 16: Windows NT Service Pack installation, page 102
Step 17: DNS or HOSTS file and transport directory configuration, page 103
Step 18: Installation of IE, ADSI, and MMC, page 105
Step 19: Install the latest SAP R/3 DLLs, page 105
Step 20: Apply the Post SP4 DLLs, page 105
Step 21: Correction of Network Bindings, page 106
Step 22: MSCS verification, page 107
Step 23: Create the installation user account, page 107
Step 24: Installing SAP and the DBMS, page 108
Step 25: SAP tests, page 110
Step 26: DBMS verification, page 114
Step 27: Backbone configuration, page 115
Step 28: SAP tuning, page 119
Step 29: DB tuning, page 127
Step 30: Configure a remote shell, page 127

Figure 34. More detailed installation process
This section provides general hints and tips about planning the security of an SAP environment.
Figure 35. Windows NT domain planning: the USERS domain (clients, PDC, and BDC) and the SAP domain (DB server, central instance, and application servers) connected by the public and backbone networks
Throughout this chapter and the rest of the redbook, take note of the case of parameters such as accounts, groups, and variables. The way they are written is case sensitive. For example, if <sid> is its, then:
<SID> equates to ITS
<sid> equates to its
Failure to adhere to the case of the parameters will likely result in a broken installation.
Table 14. User accounts and groups

Account | Where | Details
Cluster service account | Domain controller of the SAPDOM domain | User ID name (note 2)
User who administers the R/3 system | Domain controller of the SAPDOM domain | <sid>adm (notes 1, 3)
SAP service account | Domain controller of the SAPDOM domain | SAPService<SID> (notes 1, 4); for DB2 UDB, it is sapse<sid>
Global group of SAP administrators | Domain controller of the SAPDOM domain | SAP_<SID>_GlobalAdmin (note 1)
Local group of SAP administrators | Local account on SAP application servers and DB server | SAP_<SID>_LocalAdmin (note 1)
CI server account | Domain controller of the SAPDOM domain | NetBIOS name unique in the USERS and SAPDOM domains (note 5)
DB server account | Domain controller of the SAPDOM domain | NetBIOS name unique in the USERS and SAPDOM domains (note 5)
Other SAP application servers account | Domain controller of the SAPDOM domain | NetBIOS name unique in the USERS and SAPDOM domains (note 5)
Notes:
1 The case of the parameter must be exactly as specified.
2 The cluster service account must be a domain user and belong to the local Administrators group on both nodes of the cluster. Moreover, the account must have the following rights: Backup files and directories, Increase quotas, Increase scheduling priority, Load and unload device drivers, Lock pages in memory, Log on as a service, and Restore files and directories.
3 The <sid>adm account must be a domain user and belong to the Domain Admins, Domain Users, and SAP_<SID>_GlobalAdmin groups. In addition, the account must have the following rights: Replace a process level token, Increase quotas, and Act as part of the operating system. After the installation it is recommended to remove the <sid>adm account from the Domain Admins group to improve security.
4 The SAP service account must be a domain user and have the following right: Log on as a service.
5 We recommend using only alphabetic and numeric characters and not using more than eight characters.
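If the Windows NT Server Resource Kit is installed, such rights can also be granted from the command line with the ntrights utility instead of User Manager for Domains. The following lines are a sketch only, assuming <SID> = ITS as in our lab:

rem Grant "Log on as a service" to the SAP service account
ntrights +r SeServiceLogonRight -u SAPServiceITS
rem Grant "Act as part of the operating system" to the <sid>adm account
ntrights +r SeTcbPrivilege -u itsadm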
We recommend you also plan the distribution of the adapters in the PCI slots before beginning the installation. This distribution can have a major impact on the performance of your system. For information, see Netfinity Performance Tuning with Windows NT 4.0, SG24-5287.
See Figure 34 on page 86 for a pictorial view of the installation steps described here.

Step 1: Configure internal disks
Configure the internal disks as described in 3.5, ServeRAID SCSI configurations on page 62. Relevant information on how to configure the disks is also contained in 3.4, Disk layouts on page 50.
Step 2: Fill in general worksheet
Before starting the installation it is useful to gather information that could be relevant during the setup. Table 15 should be filled out with the relevant data:
Table 15. General worksheet
Parameter | Lab value | Your value
Windows NT domain for SAP and DB servers | SAPDOM |
PDC of SAP domain NetBIOS name | |
PDC of SAP domain IP address | |
PDC of SAP domain subnet mask | |
Windows NT domain for user accounts | USERS |
PDC of USERS domain NetBIOS name | |
PDC of USERS domain IP address | |
PDC of USERS domain subnet mask | |
Step 3: Windows NT installation on Node A
Installation begins with Node A. Before installing, fill in Table 16. At the end of the installation, install Service Pack 3:
Table 16. Node A worksheet

Parameter | Details | Lab value | Your value
Standby operating system partition | The Standby operating system must be installed before the Active one. | D: |
Active operating system partition | The Active operating system must be installed after the Standby one. | C: |
Standby operating system directory | Change the default value \WINNT only if you are installing the Standby operating system in the same logical drive in which the Active operating system is installed. | \WINNT |
Active operating system directory | Use the default value \WINNT. | \WINNT |
Computer name | The maximum is 8 characters. Only letters, numbers and the "-" symbol are allowed. The last character cannot be a "-" symbol. | SERVERA |
Administrator built-in account password | Follow the general recommendations about passwords in a secure environment. | |
Domain role | The server can not be a PDC or a BDC. | |
Components to install | Leave default. | |
RAS installation | Do not install RAS. | |
Microsoft IIS | Do not install IIS. | |
Network protocols | MSCS only supports TCP/IP. | |
Parameter | Details | Lab value | Your value
Dynamically assigned address | Only static addresses should be used with MSCS. | No |
IP address (private) | Private network must be a separate subnet. The use of the IANA reserved network 10.0.0.0 is good practice. Use a different network only when this network is already used for other purposes. | 10.0.0.11 |
Subnet mask (private) | | 255.255.255.0 |
Default gateway (private) | Must be blank. | Blank |
IP address (public) | | 192.168.0.11 |
Subnet mask (public) | | 255.255.255.0 |
Default gateway (public) | | 192.168.0.100 (see the note below) |
IP address (backbone) | | 172.16.0.11 |
Subnet mask (backbone) | | 255.255.255.0 |
Default gateway (backbone) | Must be blank. | Blank |
DNS domain name | | itso.ral.ibm.com |
DNS Service search order | List of DNS servers. | 192.168.0.101 (see the note below) |
DNS suffix search order | List of DNS suffixes. Should contain your own domain. | None |
Primary WINS server (private network) | Leave empty. | None |
Secondary WINS server (private network) | Leave empty. | None |
Primary WINS server (public network) | IP address of the primary WINS server. | 192.168.0.101 |
Secondary WINS server (public network) | IP address of the secondary WINS server. | None |
Primary WINS server (backbone network) | Leave blank or configure a WINS server for the backbone network. | None |
Secondary WINS server (backbone network) | Leave blank or configure a WINS server for the backbone network. | None |
Enable DNS for Windows resolution | | Checked |
Enable LMHOSTS lookup | If you disable LMHOSTS lookup the HOSTS file becomes useless. | Checked |
Domain name | Name of the SAP domain. | SAPDOM |
SAP Domain administrator user ID | If it is not possible for security reasons to get the administrator account, the Node A computer account must be created in the PDC of the SAP domain. | Administrator |
SAP Domain administrator password | If it is not possible for security reasons to get the administrator account, the Node A computer account must be created in the PDC of the SAP domain. | ibm |
Note: The IP addresses used in our lab are reserved for private use. However, using these particular addresses is not necessary for your installation, except where noted.
At the end of the installation, when the administrator logs on for the first time, the Windows NT Enterprise Edition Installer is automatically started. Do not try to configure the cluster now. The cluster will be installed later.

Step 4: Windows NT installation on Node B
After installing Service Pack 3 on Node A, the installation of Windows NT EE on Node B can begin. Before installing, fill in Table 17. At the end of the installation of Node B, Service Pack 3 must be installed:
Table 17. Node B worksheet

Parameter | Details | Lab value | Your value
Standby operating system partition | The Standby operating system must be installed before the Active one. | D: |
Active operating system partition | The Active operating system must be installed after the Standby one. | C: |
Standby operating system directory | Change the default value \WINNT only if you are installing the Standby operating system in the same logical drive in which the Active operating system is installed. | \WINNT |
Active operating system directory | Use the default value \WINNT. | \WINNT |
Computer name | The maximum is 8 characters. Only letters, numbers and the "-" symbol are allowed. The last character cannot be a "-" symbol. | SERVERB |
Administrator built-in account password | Follow the general recommendations about passwords in a secure environment. | ibm |
Domain role | The server can not be a PDC or a BDC. | |
Components to install | Leave default. | |
RAS installation | Do not install RAS. | |
Microsoft IIS | Do not install IIS. | |
Network protocols | MSCS only supports TCP/IP. | |
Dynamically assigned address | Only static addresses should be used with MSCS. | No |
IP address (private) | Private network must be a separate subnet. The use of the IANA reserved network 10.0.0.0 is good practice. Use a different network only when this network is already used for other purposes. | 10.0.0.21 |
Subnet mask (private) | | 255.255.255.0 |
Default gateway (private) | Not required for the private network. | Blank |
IP address (public) | | 192.168.0.21 |
Subnet mask (public) | | 255.255.255.0 |
Default gateway (public) | | 192.168.0.100 (note 1) |
IP address (backbone) | | 172.16.0.21 |
Subnet mask (backbone) | | 255.255.255.0 |
Default gateway (backbone) | | None |
DNS domain name | Not required if all the application servers are in the same site. | itso.ral.ibm.com |
DNS Service search order | List of DNS servers. | 192.168.0.101 (note 1) |
DNS suffix search order | List of DNS suffixes. | None |
Primary WINS server (private network) | Leave empty. | None |
Secondary WINS server (private network) | Leave empty. | None |
Primary WINS server (public network) | IP address of the primary WINS server. | 192.168.0.101 (note 1) |
Secondary WINS server (public network) | IP address of the secondary WINS server. | None |
Primary WINS server (backbone network) | Leave blank or configure a WINS server for the backbone network. | None |
Secondary WINS server (backbone network) | Leave blank or configure a WINS server for the backbone network. | None |
Enable DNS for Windows resolution | | Checked |
Enable LMHOSTS lookup | | Checked |
Domain name | Name of the SAP domain. | SAPDOM |
SAP Domain administrator user ID | If it is not possible for security reasons to get the administrator account, the Node B computer account must be created in the PDC of the SAP domain. | Administrator |
SAP Domain administrator password | If it is not possible for security reasons to get the administrator account, the Node B computer account must be created in the PDC of the SAP domain. | ibm |
Notes: 1 The IP address used in our lab is reserved for private use. However, this is not necessary for your installation except where noted.
At the end of the installation, when the administrator logs on for the first time, the Windows NT Enterprise Edition Installer is automatically started. Do not try to configure the cluster now. MSCS will be installed in step 14 on page 102.

At the end of the installation remember to configure either the DNS server, the WINS server, or the HOSTS file. If you plan to use the HOSTS file, it is important to remember that the file is case sensitive. See Knowledge Base article Q101746 TCP/IP Hosts File Is Case Sensitive for details. It is also important not to install Service Pack 4 or 5 at this point. SP4 and SP5 are installed after MSCS is installed. You should also reference the following Knowledge Base articles:
- Q195462 "WINS registration and IP address behavior for MSCS" (general information on how to use WINS with MSCS)
- Q193890 "Recommended WINS configuration for MSCS" (disable the WINS client on the private LAN)
- Q217199 "Static WINS entries cause the network name to go offline" (do not use static entries in WINS for clustered computers)

Step 5: Modify BOOT.INI on Node B
Modify the BOOT.INI file of Node B to avoid simultaneous contention of the shared drives. This can be obtained by setting a different boot timeout on the two servers; on Node B only, set the timeout parameter to 5 seconds:

[boot loader]
timeout=5
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Active Operating System"
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Active Operating System" /basevideo /sos
multi(0)disk(1)rdisk(0)partition(1)\WINNT="Standby Operating System"
multi(0)disk(1)rdisk(0)partition(1)\WINNT="Standby Operating System" /basevideo /sos
Figure 36. BOOT.INI file
Step 6: Configure disks
Now you must create your own worksheets describing the disk layout. The distribution of files is DBMS dependent. Examples of how to configure the disks are provided in 3.4, Disk layouts on page 50:
Oracle: Table 9 on page 58
DB2: Table 10 on page 61
SQL Server: Table 11 on page 62

Format the shared drives and assign meaningful disk labels to the local and shared drives by means of the Windows NT Disk Administrator on both nodes.

Step 7: SYMplicity Storage Manager configuration
If the cluster exploits FC-AL technology, configure SYMplicity Storage Manager as described in 3.6, Fibre Channel configurations on page 68. See also 3.4, Disk layouts on page 50 for a detailed, DBMS-dependent description of how to configure the drives. At the end of the installation you may see the following error message in the Windows NT Event Viewer:

Event ID: 8032. Source: Browser. Type: Error. The browser service has failed to retrieve the backup list too many times on transport \Device\NetBT_IBMFEPCI1. The backup browser is stopping.

A simple procedure to correct this problem is to disable the browser service on the private and backbone networks. The selective disabling of the browser service is described in Knowledge Base article Q158487 Browsing Across Subnets w/a Multihomed PDC in Windows NT 4.0. In our lab configuration the CI and DB servers had three identical network adapters. Figure 37 shows the registry key, HKey_Local_Machine\System\CurrentControlSet\Services, containing the configuration of the network adapters:
Figure 37 shows four subkeys: IBMFEPCI, IBMFEPCI1, IBMFEPCI2, and IBMFEPCI3. The first key contains information about the network driver, while each of the other three corresponds to one particular network adapter. The network adapter can be recognized by looking in the \Parameters\Tcpip subkey. Figure 38 shows how we solved the problem by adding a new value in the subkey HKey_Local_Machine\System\CurrentControlSet\Services\Browser\Parameters:
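As a sketch, the value has the form shown below. The value name comes from Knowledge Base article Q158487; the transport names are those of our lab adapters and will differ on your system, so check the article for the exact data format:

Key:   HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Browser\Parameters
Value: UnboundBindings (REG_MULTI_SZ)
Data:  NetBT_IBMFEPCI1
       NetBT_IBMFEPCI2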
Attention: The Cluster Diagnostic Utility will destroy the contents of the shared disks.

To start the Cluster Verification Utility, follow the procedure explained in Help. You need to disconnect one of the nodes from the SCSI channel before beginning the test on the other node. To use the Cluster Diagnostic Utility, follow these instructions:
1. On Node A, execute the command CLUSTSIM /S from the command prompt.
2. Wait until you get the message saying you can start the utility on Node B.
3. On Node B, execute the command CLUSTSIM /N:ServerA from the command prompt, where ServerA is the name of Node A.
4. Examine the log files.
Further information can be found in Windows NT Microsoft Cluster Server by Richard R. Lee.
Configure the page file
4 GB tuning
Remove unnecessary drivers and protocols
Each of these is now discussed.
This parameter can be found also in Windows NT Workstation systems, but there it cannot be changed. Some applications consider writing immediately to disk (write-through) very important. To get this effect, they need to bypass the cache memory system, and so they need to manage the memory directly. For these applications the setting of the Server service has no relevance. An example of an application doing its own memory management, and for which the tuning of this parameter is ignored, is Microsoft SQL Server (see Optimization and Tuning of Windows NT, Version 1.4 by Scott B. Suhy). The effect of the change is to reduce the threshold in systems having large memory. The main reason is that the DBMS and SAP also have their own caches
and the use of many levels of caches decreases performance instead of improving it.
afford having so many disks for the paging activity is to follow these less restrictive rules (Table 18):
Table 18. Page file size recommendations
Memory
Page file
4.6.3 4 GB tuning
Step 11: Modify BOOT.INI for 4 GB tuning
Windows NT is a 32-bit operating system using a linear virtual addressing memory system. The virtual addresses are divided into two 2 GB sections: user mode and system mode, as shown in Figure 40. The user processes can only exploit the 2 GB user-mode section:
Figure 40. Process virtual memory without 4 GB tuning (2 GB application addressable memory, 2 GB system addressable memory) and with 4 GB tuning (3 GB application address memory, 1 GB system address memory)
While this is enough for many systems, it may not be enough in large SAP and DBMS installations. To resolve these situations, Windows NT Enterprise Edition introduced a new feature known as 4 GB tuning. Specifying the /3GB parameter in the BOOT.INI file, as shown in Figure 41, changes how the operating system manages the virtual addressing: 3 GB for user processes and 1 GB for system processes. Figure 40 shows the result.

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(2)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINNT="Windows NT Server Version 4.00" /3GB
multi(0)disk(0)rdisk(0)partition(2)\WINNT="Windows NT Server Version 4.00 [VGA mode]" /basevideo /sos

For 4 GB tuning, you should specify /3GB as shown on the first line of the [operating systems] section.

Figure 41. BOOT.INI file with the /3GB parameter
No new API is necessary to allow applications to exploit this new feature, but executables need to be modified in order to see this extra space (one way to do this is sketched after the list below). See the following references for a detailed description of how to configure the SAP system to exploit this feature:
Knowledge Base article Q171793 Information on Application Use of 4GT RAM Tuning
OSS note 0110172 NT: Transactions with large storage requirements
As described in the Microsoft document FAQ: All You Ever Wanted To Know about Windows NT Server 4.0 Enterprise Edition, to be able to exploit this feature the following requirements have to be satisfied:
The application is memory intensive
The application is able to utilize more than 2 GB of memory
The server has more than 2 GB of RAM
All the other components of the system have enough computing capacity
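As a hedged illustration: one way often used on Windows NT to mark an executable as able to use the enlarged address space is the IMAGECFG utility from the Windows NT Resource Kit. This utility is an assumption here, not part of the SAP procedure, and MYAPP.EXE is a hypothetical executable:

   C:\> imagecfg -l MYAPP.EXE     (sets the large-address-aware flag in the image header)

For the SAP system itself, follow the references above rather than modifying executables by hand.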
Parameters (Lab values / Your values):
Cluster name (NetBIOS name rules apply). Lab value: sapclus.
Cluster Service User ID (this account must be created in the SAP domain before beginning the MSCS setup; see Note 1).
Cluster Service password (password rules apply).
Windows NT SAP domain name. Lab value: SAPDOM.
Shared cluster disks (see disk layout worksheet). Lab value: I, J, K, L, etc.
Quorum drive (see disk layout worksheet). Lab value: I:.
Network name for the private network adapter (select Enable for cluster use and Use for private communications). Lab value: Private.
Network name for the public network adapter (select Enable for cluster use and Use for all communications). Lab value: Public.
Network name for the backbone network adapter (select Enable for cluster use and Use for all communications). Lab value: Backbone.
Networks available for internal cluster communication (arrange the order as shown in the Lab value column). Lab value: Private, Backbone, Public.
Cluster IP address (in the same subnet as the public network adapters). Lab value: 192.168.0.50 (see Note 2).
Cluster Subnet Mask (the same as the public network adapters). Lab value: 255.255.255.0.
Network. Lab value: Public.
Notes: 1 The cluster service account must be a domain user and belong to the local Administrators group on both nodes of the cluster. Moreover, the account must have the following rights: Back up files and directories, Increase quotas, Increase scheduling priority, Load and unload device drivers, Lock pages in memory, Log on as a service, and Restore files and directories. 2 The IP address used in our lab is reserved for private use. However, this is not necessary for your installation except where noted.
Step 13: Install MSCS on Node A
Now, start the installation of MSCS on Node A. At the end of the installation, reboot Node A. Do not try to open the Cluster Administrator utility yet, as it is necessary to update the HOSTS file before doing so. Update the HOSTS file with the following lines:
127.0.0.1      localhost
192.168.0.50   sapclus
Step 14: Install MSCS on Node B
Install the Microsoft Cluster Service on Node B. Before beginning the installation, update the HOSTS file as described in Step 13. Useful documentation during and after the installation for troubleshooting problems includes:
MSCS Administrators Guide (from Microsoft TechNet)
MS Cluster Service Troubleshooting and Maintenance by Martin Lucas (from Microsoft TechNet)
Deploying Microsoft Cluster Server by the Microsoft Enterprise Services Assets Team (from TechNet)
Microsoft Cluster Server Release Notes (from the Windows NT 4.0 Enterprise Edition CD)
Windows NT Microsoft Cluster Server by Richard R. Lee
Step 15: Installation of troubleshooting tools
The most important suggestions we can give for in-depth troubleshooting of cluster problems are to:
Configure the cluster logging feature as described in Knowledge Base article Q168801 How to Enable Cluster Logging in Microsoft Cluster Server (a sketch follows below).
Install a network monitoring tool such as Microsoft Network Monitor to be able to analyze the network traffic between the cluster nodes, other SAP application servers, and clients. Most often cluster problems are actually network problems.
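A minimal sketch of the first item, assuming the mechanism described in Q168801 (a ClusterLog system environment variable naming the log file; verify the exact variable and path against the article for your service pack level):

   ClusterLog=C:\WINNT\Cluster\cluster.log

Set the variable in the System applet of the Control Panel, then restart the Cluster Service (for example with NET STOP CLUSSVC and NET START CLUSSVC) so that it is picked up.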
Step 17: DNS or HOSTS file and transport directory configuration
After completing the service pack installation it is necessary to configure either the DNS server, the WINS server, or the HOSTS file. If the choice is to use HOSTS files, remember that the TCP/IP HOSTS file is case sensitive. See Knowledge Base article Q101746 TCP/IP Hosts file is Case Sensitive for details. You should also reference the following Knowledge Base articles:
Q195462 WINS registration and IP address behavior for MSCS (general information on use)
Q193890 Recommended WINS configuration for MSCS (disable the WINS client on the private LAN)
Q217199 Static WINS entries cause the network name to go offline (do not use static entries in WINS for clustered computers)
Figure 42 shows how to configure the HOSTS file during an Oracle installation. The information necessary for the configuration is taken from:
Table 16 on page 90
Table 17 on page 92
Table 32 on page 136
Table 34 on page 141
4.14, Backbone configuration on page 114
127.0.0.1      localhost
192.168.0.1    servera     SAPTRANSHOST
192.168.0.2    serverb
192.168.0.50   sapclus
192.168.0.51   itssap
192.168.0.52   itsora
10.0.0.1       serverai
10.0.0.2       serverbi
172.16.0.1     serverab
172.16.0.2     serverbb

Figure 42. HOSTS file used in our Oracle installation
Figure 43 shows the HOSTS file used in our DB2 installation. The information necessary for the configuration is taken from:
Table 16 on page 90
Table 17 on page 92
Table 44 on page 154
Table 45 on page 155
4.14, Backbone configuration on page 114
127.0.0.1      localhost
192.168.0.1    servera     SAPTRANSHOST
192.168.0.2    serverb
192.168.0.50   sapclus
192.168.0.51   itssap
192.168.0.52   itsdb2
10.0.0.1       serverai
10.0.0.2       serverbi
172.16.0.1     serverab
172.16.0.2     serverbb

Figure 43. HOSTS file used in our DB2 installation
Figure 44 shows the HOSTS file used in our SQL Server installation. The main difference is the introduction of the msdtc line. The information necessary for the configuration is taken from:
Table 16 on page 90
Table 17 on page 92
Table 51 on page 164
Table 53 on page 165
4.14, Backbone configuration on page 114

127.0.0.1      localhost
192.168.0.1    servera     SAPTRANSHOST
192.168.0.2    serverb
192.168.0.50   sapclus
192.168.0.51   itssap
192.168.0.52   itssql
10.0.0.1       serverai
10.0.0.2       serverbi
172.16.0.1     serverab
172.16.0.2     serverbb

Figure 44. HOSTS file used in our SQL Server installation
Note: itssql and msdtc are specific to the SQL Server database installation.
Note the alias SAPTRANSHOST in the three HOSTS files shown above (Figure 42, Figure 43, and Figure 44). This alias allows R3Setup to recognize servera as the transport host. This line must be changed at the end of the installation: since the transport directory is on the shared drives, using the host name is not correct, and the virtual name of the SAP system (that is, <sid>sap) should be used instead. We are assuming that you are using the shared drives of the cluster for the transport directory. If the transport directory is on an external system, follow these steps:
1. On the transport host, create the directory \USR\SAP\TRANS.
2. On this directory, grant Full Control to the group Everyone.
3. If no instance will be installed on the transport host, share this directory as SAPMNT (see the sketch after this list).
4. Update the HOSTS file (or the DNS server) to make SAPTRANSHOST an alias for the transport host.
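If you want to create the share of step 3 from the command line instead of Explorer, a minimal sketch (assuming the transport directory is on drive C: of the external transport host; adjust the drive and path to your layout) is:

   C:\> net share sapmnt=C:\usr\sap\trans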
Step 18: Installation of IE, ADSI, and MMC
Install Microsoft Internet Explorer 4.01 (or later) on both servers by running the IE4SETUP.EXE program from the \MS_IE4\WIN95_NT\EN subdirectory of the Presentation CD. Do not install the Active Desktop feature.
Install the Active Directory Service Interfaces (ADSI) component on both nodes by running the ADS.EXE program from the \NT\I386\MMC subdirectory of the SAP Kernel CD-ROM.
Install the Microsoft Management Console (MMC) on both nodes by running the IMMC11.EXE program from the \NT\I386\MMC subdirectory of the SAP Kernel CD-ROM.
Step 19: Install the latest SAP R/3 DLLs
Install the latest version of the Dynamic Link Libraries on both nodes by running the R3DLLINS.EXE program from the \NT\I386\NTPATCH subdirectory of the SAP Kernel CD-ROM.
Step 20: Apply the Post-SP4 DLLs
Note: This only applies if you installed SP4. Apply the RNR20.DLL and CLUSRES.DLL Post-SP4 hot fixes. The first fix can be downloaded from:
ftp://ftp.microsoft.com/bussys/winnt/winnt-public/fixes/usa/nt40/hotfixes-postSP4/Rnr-fix
The CLUSRES.DLL can be downloaded directly from the Microsoft site or installed as part of Service Pack 5. See OSS note 0134141 for details.
In many installations, different people are responsible for the Microsoft Cluster installation and others for the SAP installation. To assure everyone that the cluster is working and is correctly configured, it is highly recommended to test the cluster configuration in as much detail as possible. The main recommended tests are:
Check the HOSTS file
Test the failover process
Node   Test
A      PING <NodeA_Name>; PING <NodeB_Name>; PING <Cluster_Name>
B      PING <NodeA_Name>; PING <NodeB_Name>; PING <Cluster_Name>
The reply to the PING commands must come from the public network; be sure that this is the case (see the example below). If this condition is not satisfied when pinging your own host name, the SAP installation cannot begin, and the following simple procedure should be performed. Note: as per Microsoft Knowledge Base article Q164023, this procedure only applies to servers with SP4 or later installed:
1. Open the Network applet in the Control Panel.
2. Click Bindings > Show Bindings for all Protocols.
3. Select the TCP/IP protocol.
4. Change the order in which the cards are listed. They should be in the following order: public network, backbone network, private network.
If the condition is not satisfied when pinging any other host name, the correct answer can be obtained by simply correcting the HOSTS file.
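For example, on Node A pinging your own host name should answer from the public adapter. A sketch with our lab addresses, where servera's public address is 192.168.0.1:

   C:\> ping servera
   Reply from 192.168.0.1: bytes=32 time<10ms TTL=128

If the reply came instead from 10.0.0.1 (the private adapter), correct the binding order as described above.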
The failover tests to run are:
Manual failover (Node A and Node B)
Microsoft CASTEST (Note 1)
Regular shutdown (Node A and Node B)
Blue screen (Note 2) (Node A and Node B)
Power off (Node A and Node B)
Notes: 1 The Microsoft CASTEST is contained in the \MSCS\SUPPORT directory on the second CD of Windows NT EE. Instructions on how to use the test are contained in the CASREAD.TXT file. 2 The blue screen can be achieved by killing the WINLOGON.EXE process by means of the KILL.EXE utility contained in the Windows NT Resource Kit.
If you do not have enough time to complete all the steps at least follow this recommendation:
Basic test
Use the Microsoft CASTEST utility; this should be considered the official MSCS test utility. Carefully examine the log file that is produced, and verify what happens when you power off one of the nodes: specific hardware problems due to the lack of power on one of the I/O adapters are not seen by the Microsoft utility.
Act as Part of the Operating System
Increase Quotas
Log on as a Service
Replace a Process Level Token
2. Fill out the SAP installation worksheets in this chapter and the database chapters. In each of the following sections dedicated to SAP installation, we provide an SAP installation worksheet that has to be filled in (see Table 29 on page 133 for an Oracle example). In this worksheet we collect all the information necessary for the local SAP and DBMS installation. These worksheets are DBMS dependent. They must be filled in using the recommended disk layouts provided in 3.4, Disk layouts on page 50 as reference.
The following Windows NT accounts and groups are created automatically by R3Setup:
The <sid>adm account on the SAPDOM domain controller, with the advanced user rights Log on as a Service, Replace a Process Level Token, Increase Quotas, and Act as Part of the Operating System
The SAPService<SID> account on the SAPDOM domain controller
The SAP_<SID>_GlobalAdmin global group on the SAPDOM domain controller
The SAP_<SID>_LocalAdmin local group on each SAP R/3 application server and DB server
R3Setup also puts the SAP_<SID>_GlobalAdmin global group in the SAP_<SID>_LocalAdmin local group on each SAP R/3 application server and DB server, and assigns the advanced user right Log on as a Service to the user SAPService<SID> on each SAP R/3 application server and DB server.
Once installation is complete you should do the following to improve the security of the system:
Delete the SAP installation account that you created in step 23 (sapinst in our lab)
Figure 46. Database-dependent installation flows for Oracle, DB2, and SQL Server. Each flow runs: install the R/3 setup tool (R3SETUP); install SAP R/3 CI&DI (R3SETUP -f CENTRDB.R3S); install the R/3 files for conversion (R3SETUP -f NTCLUSCD.R3S); SAP cluster conversion (R3SETUP -f NTCMIGNx.R3S); and, for DB2 and SQL Server, complete the MSCS migration (R3SETUP -f UPDINSTV.R3S).
You will note from Figure 46 that the main difference between the installation procedures is the point when the database cluster conversion occurs. These steps are highlighted. See OSS note 0138765 Cluster Migration: Terminology and Procedure for further information. A detailed description of installation is provided in the next three chapters.
Must always be logged on as the installation user
The whole installation must be done while logged on with the same installation user account described in step 23 on page 107 (the only exception is step 24.15 on page 138). This account must be a member of the Domain Admins group. See OSS note 0134135 4.5B R/3 Installation on Windows NT (General) for additional information. Depending on the database you are using, proceed as follows:
If you are installing SAP with Oracle, go to Chapter 5, Installation using Oracle on page 129.
If you are installing SAP with DB2, go to Chapter 6, Installation using DB2 on page 145.
If you are installing SAP with SQL Server, go to Chapter 7, Installation using SQL Server on page 159.
Client   User   Default password
The purpose of the tests is to confirm that it is possible to connect to the SAP server from a client on which the SAP GUI has been installed and to exploit the server's services. If you cannot connect to the SAP application server, you can give up on your SAP system.
A healthy system has many waiting dialog processes, as there are in Figure 49.
4.12.6 SAPWNTCHK
As described in OSS note 0065761 Determine Configuration Problems under Windows NT, it is possible to download from the sapserv<x> FTP servers (see B.3, Related Web sites on page 183 for details) a couple of utilities, SAPNTCHK and SAPWNTCHK, to test the SAP configuration. The only difference between these utilities is in the graphical interface. Detailed information about the utilities is contained in the enclosed readme files. As an example of the result, Figure 50 shows the output of SAPWNTCHK:
4.13.1 Oracle
Oracle verification: You can log on to the DBMS using Oracle Enterprise Manager. To log on you can use the standard Oracle account:
User ID: system
Password: manager
Service: <SID>.world
Connect as: normal
You are then asked if you want to create the Oracle Repository; answer OK.
Oracle Fail Safe verification: You can log on to the Oracle Fail Safe Manager using the following account:
User ID: <sid>adm
Password: ibm
Cluster alias: SAPCLUS
Domain: SAPDOM
Then you can start the Oracle Fail Safe Cluster Verification utility by clicking Troubleshooting > Verify Cluster. Look at the log to see if there is any problem.
4.13.2 DB2
See 5.8 Verification of installation in the redpaper SAP R/3 and DB2 UDB in a Microsoft Cluster Server Environment, available from: http://www.redbooks.ibm.com/redpapers.html
Step 27: Backbone configuration
Modify the IP address resource in the DB group to use the backbone instead of the public network, and modify the HOSTS files of the application and DB servers accordingly. Figure 51 shows how to modify the Oracle cluster group:
Figure 51. Oracle cluster group backbone configuration. The group contains the ITS.WORLD Generic Service resource (CLUSRES.DLL) and disks K:, L:, M:, N:, and O:. Replace the public network address 192.168.0.52 of the IP address resource with the backbone address 172.16.0.52.
These are the changes in the HOSTS file (Figure 52):
127.0.0.1      localhost
192.168.0.1    servera
192.168.0.2    serverb
192.168.0.50   sapclus
192.168.0.51   itssap
#192.168.0.52  itsora
10.0.0.1       serverai
10.0.0.2       serverbi
172.16.0.1     serverab
172.16.0.2     serverbb
172.16.0.52    itsora      #ADDED LINE
Figure 53 shows how to modify the DB2 cluster group:
Figure 53. DB2 cluster group backbone configuration. The group contains the DB2ITS resources and disks K:, L:, and M:. Replace the public network address 192.168.0.52 with the backbone address 172.16.0.52.
Figure 54 shows the changes in the HOSTS file:
127.0.0.1      localhost
192.168.0.1    servera
192.168.0.2    serverb
192.168.0.50   sapclus
192.168.0.51   itssap      SAPTRANSHOST   #MODIFIED LINE
#192.168.0.52  itsdb2
10.0.0.1       serverai
10.0.0.2       serverbi
172.16.0.1     serverab
172.16.0.2     serverbb
172.16.0.52    itsdb2      #ADDED LINE
Figure 55 shows how to modify the SQL Server cluster group:
Figure 55. SQL Server cluster group backbone configuration. The group contains the ITS SQL Generic Service resource, the ITSSQL Network Name resource, and the IP Address resource (all CLUSRES.DLL), plus disks K: and L:. Replace the public network address 192.168.0.52 with the backbone address 172.16.0.52.
These are the changes in the HOSTS file (Figure 56):
127.0.0.1      localhost
192.168.0.1    servera
192.168.0.2    serverb
192.168.0.50   sapclus
192.168.0.51   itssap      SAPTRANSHOST   #MODIFIED LINE
#192.168.0.52  itssql
10.0.0.1       serverai
10.0.0.2       serverbi
172.16.0.1     serverab
172.16.0.2     serverbb
172.16.0.52    itssql      #ADDED LINE
The structure of the database resource dependency trees shown in Figure 51, Figure 53, and Figure 55 is unrelated to the I/O technology (SCSI or FC-AL) used to access the shared drives, but there is an important difference in the labels near and inside the boxes describing the disks. With FC-AL technology, disks are seen as Physical Disk resources and MSCS manages them by means of the Microsoft DLL CLUSRES.DLL. With ServeRAID, the adapters use IPSHA.DLL instead, and disks are seen as IBM IPSHA ServeRAID logical disk resources.
Test   Action
1   Manual failover using the Cluster Administrator: move the Cluster Group from Node A to Node B. Start: Cluster Group on A, SAP Group on A, DB Group on B. Result: Cluster Group on B, SAP Group on A, DB Group on B.
2   Manual failover using the Cluster Administrator: move the SAP Group from Node A to Node B. Start: Cluster Group on A, SAP Group on A, DB Group on B. Result: Cluster Group on A, SAP Group on B, DB Group on B.
3   Manual failover using the Cluster Administrator (see Note 1 below): move the DB Group from Node B to Node A. Start: Cluster Group on A, SAP Group on A, DB Group on B. Result: all groups on Node A.
4   Manual failover using the Cluster Administrator: move the Cluster Group from Node B to Node A. Start: Cluster Group on B, SAP Group on B, DB Group on A.
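The same moves can also be driven from the command line with the CLUSTER.EXE utility installed with MSCS. A hedged sketch using our lab names (cluster sapclus, SAP group SAP-R/3 ITS, target node serverb); verify the option spelling with CLUSTER /? at your service pack level:

   C:\> cluster sapclus group "Cluster Group" /moveto:serverb
   C:\> cluster sapclus group "SAP-R/3 ITS" /moveto:serverb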
Test   Action
5      Manual failover using the Cluster Administrator: move the SAP Group from Node B to Node A.
6      Manual failover using the Cluster Administrator (Note 1): move the DB Group from Node A to Node B.
7-10   Regular shutdown: shut down Node A, then Node B, repeating the pair with the group placements reversed.
11-14  Blue screen (Note 2): run KILL.EXE on Node A, then on Node B, repeating the pair with the group placements reversed.
15-18  Power off: power off Node A, then Node B, repeating the pair with the group placements reversed.
Each test starts alternately with the Cluster and SAP Groups on one node and the DB Group on the other, so that every failure mode is exercised against both nodes.
Notes: 1 DB2 currently does not support the manual failover of the DB Group using the Cluster Administrator if there are active connections open to the database. 2 The blue screen can be obtained by killing the WINLOGON.EXE process by means of the KILL.EXE utility contained in the Windows NT Resource Kit.
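A hedged sketch of Note 2, assuming the KILL.EXE syntax of the Windows NT 4.0 Resource Kit (the /F switch forces the kill; doing this to WINLOGON blue-screens the node, so run it only as part of this test):

   C:\> kill /f winlogon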
4.16 Tuning
Many books would be needed to describe fully how to tune the SAP and DBMS servers. We cannot cover these problems here, but a few recommendations can be given.
To alter the number of each type of work process, follow these steps:
1. Click Tools > CCMS > Configuration > Profile Maintenance.
2. Select Profile > Dyn. Switching > Display parameters.
3. Select the instance profile (in our lab: ITS_DVEBMGS00_ITSSAP).
4. Click Basic Maintenance > Change.
See SAP R/3 System Administration on page 380 for details. General SAP recommendations on how to distribute work processes and how to alter the number of work processes are contained in 4.16.2.2, 4.16.2.3, and 4.16.2.4 in this redbook.

4.16.2.2 Run Dialog and Update work processes on dedicated servers
To avoid resource contention between Update and Dialog work processes, and also to allow specific server tuning, it is recommended to put Dialog work processes and Update work processes on different servers.

4.16.2.3 Run Enqueue work processes on a dedicated server
The Central Instance server is the only SAP server on which Enqueue and Message work processes run. Since this server is a focal point for all the message flow between nodes, and since the overall performance of the system depends on the speed of the locking activity of the Enqueue work process, it is recommended to use a dedicated server for Enqueue and Message work processes.

4.16.2.4 Keep an optimal ratio between work processes
An excessively high or low number of work processes can decrease performance. General recommended ratios between work processes are:
One Update work process of type V1 (UPD) is able to write the data coming from four Dialog work processes to the DB.
One Update work process of type V2 (UP2) is able to write statistical data coming from 12 Dialog work processes.
One Background work process (BTC) is able to serve four Dialog work processes.

4.16.2.5 Distribute the users between the application servers
Users can be automatically distributed between application servers during the logon phase, achieving logon load balancing. This dynamic balancing can be obtained by accessing logon groups as described in Chapter 14 of SAP R/3 System Administration. Besides this, users should use SAPLOGON or SAP Session Manager; the SAPGUI alone does not allow you to exploit the logon groups. Here is a short description of how logon load balancing works:
1. The user logs on to the SAP system by SAPLOGON or SAP Session Manager.
2. The user request is directed to the Message Server on the Central Instance node.
3. The Message Server listens on the TCP port defined in the Services file in the line containing the string sapms<SID>.
4. The Message Server logs the user on to the SAP application server having the lowest load. To determine the load of the server, the Message Server uses two parameters: the response time and the maximum number of users. These parameters can be configured by the SAP administrator (transaction code SMLG). If any R/3 instance belonging to the logon group has exceeded the maximum load limits, the limits are simply ignored.
The distribution of users by means of logon groups can improve performance, but it can also worsen performance if you do not really need this distribution. Indeed, SAP application servers need to have preprocessed ABAP code in their buffers to avoid the preprocessing being done on a per-user basis. Moving users from one server to another can mean a move from a server with optimal buffers to one having nonoptimal buffers. For this reason, it is recommended you create logon groups on a per-module basis. For instance, if you have enough servers you could create one logon group for FI/CO users and a second logon group for SD users.

4.16.2.6 Operation modes
Often, daily and nightly activities are different in an SAP system. The first mainly stresses Dialog work processes, while the second mainly concerns Background work processes. To optimize the system for these different configurations you can use operation modes.
This technique allows you to configure the system in two different ways and schedule the switch between them at predetermined hours. For example, you can have one operation mode with more Dialog processes and another with more Background processes, and switch from the day mode to the night mode at predetermined hours. A detailed description of how to configure operation modes is contained in Chapter 14 of SAP R/3 System Administration.

4.16.2.7 Page file striping
Improving the page file I/O increases overall SAP performance, so it is recommended to stripe the page file as much as possible. You can either create up to 16 page files on different disks or use hardware technology like RAID-1 Enhanced and RAID-10 to obtain the striping. It is also important to create large page files, because this is the basis of zero administration memory management as described in OSS note 0088416.

4.16.2.8 Memory management
As of release 4.0, SAP exploits a new technique to manage memory, known as zero administration memory management. This technique aims to make the tuning of memory parameters automatic. If you need to tune these parameters manually, see OSS note 0088416.

4.16.2.9 Network planning
Splitting the network in two (a public network used by clients to connect to the application servers, and a backbone network used by application servers to communicate with the DB server) can improve performance. As shown in Figure 57, the backbone allows you to separate the dialog traffic between the SAPGUI and the Message, Enqueue, and Dialog work processes from the update traffic between Update work processes and the DB server:
Figure 57. Message flow. SAPLOGON clients in the USERS domain connect through the public network to the SAPDOM domain: the message path leads to the Message work process (MSG) on the Central Instance (CI), the dialog path to the Dialog work processes (DIA) on the application servers APP1 and APP2, and the update path runs from the Update work processes (UPD) over the backbone network to the DB server. The PDCs of both domains are attached to the public network.
With the usual configuration exploiting logon load balancing, the traffic consists of the following main steps:
1. User Mary opens the connection using either SAPLOGON or SAP Session Manager.
2. The connection request is sent along the public network to the Central Instance host.
3. The Message Service on the CI node receives the request and sends the user to the application server having the lowest load (APP1 in Figure 57). An example Services entry follows these steps.
4. The Dispatcher service of APP1 queues up the request until one of the Dialog work processes becomes ready to serve a new request.
5. When a Dialog work process becomes free, the dispatcher retrieves the request from the queue and assigns the request to it.
6. The Dialog work process rolls in the data, completes the request, and then rolls out the data.
7. The next user dialog step is still served by APP1, but not necessarily by the same Dialog work process.
8. When the business transaction is complete, the dialog service transfers control of the transaction to the update service by means of the ABAP statement COMMIT WORK.
9. The Update service selects one Update work process and transfers to it the update record containing the update request.
10. The Update work process connects to the DB using the backbone network and passes the information to the DBMS.
See 3.7, Network configurations on page 76 for further information. The introduction of a backbone network requires further steps after the cluster installation, as described in 4.14, Backbone configuration on page 114. See also the SAP document Network Integration of R/3 Servers (document number 51006371).
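As an illustration of step 3 above, the Services file line would look as follows for our lab system (a sketch assuming SID ITS and the default port 3600 used elsewhere in this chapter; verify the entry in %SystemRoot%\system32\drivers\etc\services on your system):

   sapmsITS    3600/tcp    # SAP Message Server port for system ITS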
4.16.3.1 Workload analysis
Workload analysis can be started from the SAPGUI by clicking Tools > CCMS > Control/Monitoring > Performance Menu > Workload > Analysis. By clicking Oracle (or SQL or DB2, depending on the DBMS used), you can analyze the DBMS. This corresponds to transaction code ST04. Figure 58 appears:
Alternatively, you can click ITSSAP to analyze the SAP application server. This corresponds to transaction code ST03, as shown in Figure 59:
The Av. wait time should be no more than 1 percent of the average total response time; for example, with an average total response time of 500 ms, the Av. wait time should stay below 5 ms. If this parameter is higher, either the number of work processes is inadequate or there is something blocking their activity.
4.16.3.2 Buffer cache quality analysis
Buffer analysis can be started by clicking Tools > CCMS > Control/Monitoring > Performance Menu > Setup/Buffers > Buffers. This corresponds to transaction code ST02 (see Figure 60):
4.16.3.3 Database reorganization
Database reorganization can be started by clicking Tools > CCMS > DB Administration > DBA Planning Calendar, or with transaction code DB13. Then it is necessary to double-click the current day. This produces Figure 61:
You must then select the start time, period, calendar, and type of reorganization.
4.16.5 DB tuning
Step 29: DB tuning
For detailed information on how to tune the DBMS, see:
Oracle Architecture by S. Bobrowski, Chapter 10
Oracle8 Tuning by M. J. Corey et al., Chapter 3
DB2 Universal Database and SAP R/3 Version 4 by Diane Bullock et al., Chapters 8 and 9
SAP R/3 Performance Tuning Guide for Microsoft SQL Server 7.0, from http://support.microsoft.com
For more details about configuring a remote shell for DB2, see 5.7.1 Setting up remote function call of the redpaper SAP R/3 and DB2 UDB in a Microsoft Cluster Server Environment available from http://www.redbooks.ibm.com.
Substeps described in this chapter:
Step 24.1: Fill in the installation worksheets, page 130
Step 24.2: Install Oracle on Node A, page 130
Step 24.3: Install Oracle patch 8.0.5.1.1 on Node A, page 131
Step 24.4: Install Oracle on Node B, page 131
Step 24.5: Install Oracle Patch 8.0.5.1.1 on Node B, page 131
Step 24.6: Install Oracle Fail Safe V2.1.3 on Node A, page 131
Step 24.7: Install Oracle Fail Safe Patch 2.1.3.1 on Node A, page 132
Step 24.8: Install Oracle Fail Safe V2.1.3 on Node B, page 132
Step 24.9: Install Oracle Fail Safe Patch 2.1.3.1 on Node B, page 133
Step 24.12: Install the cluster conversion files on Node A, page 135
Step 24.13: Install the cluster conversion files on Node B, page 136
Step 24.14: Converting Node A for operation in the cluster, page 136
Step 24.15: Converting Node B for operation in the cluster, page 138
Step 24.16: Converting the R/3 database to Fail Safe on A, page 139
Step 24.17: Converting the R/3 database to Fail Safe on B, page 141
Step 24.18: Completing the migration to MSCS on Node A, page 143
Step 24.19: Completing the migration to MSCS on Node B, page 144
Figure 62. Installation process for SAP R/3 on Oracle. The flow consists of: install the R/3 setup tool (R3SETUP); install SAP R/3 CI&DI (R3SETUP -f CENTRDB.R3S); install the R/3 files for conversion (R3SETUP -f NTCLUSCD.R3S); SAP cluster conversion (R3SETUP -f NTCMIGNx.R3S).
An overall description of the installation process is shown in Figure 62. You will note that some steps are to be done on Node A, some on Node B, and some are to be carried out on both nodes. Before beginning the installation we recommend you read the following documents:
The continually updated OSS note 0134135 4.5B R/3 Installation on Windows NT (general)
OSS note 0114287 SAPDBA in a Microsoft Cluster Server environment
The document Checklist - Installation Requirements: Windows NT. The main points of the checklist are discussed in 3.1, Checklist for SAP MSCS installation on page 40.
Conversion to Microsoft Cluster Server: Oracle 4.0B 4.5A 4.5B (doc. 51005504)
The continually updated OSS note 0134070 4.5B Installation on Windows NT: Oracle, containing a description of the known problems
The installation guide R/3 Installation on Windows NT: Oracle Database Release 4.5B (doc. 51004599)
Step 24
The steps in this database chapter are substeps of step 24 on page 108 in Chapter 4.
Throughout the entire installation process, make sure you are always logged on as the installation user (in our lab, sapinst). This user must be a domain administrator as described in step 23.
Parameters (Lab values / Your values):
Company name
Oracle Home: Name
Oracle Home: Location
Oracle Home: Language
Path update
Type of installation
At the end of the installation, reboot the system.
Note: If you realize you have used the Administrator account instead of the sapinst one, you can correct the error with this simple procedure: log on as sapinst and repeat all the installation steps of this subsection.
Step 24.3: Install Oracle patch 8.0.5.1.1 on Node A
To install the Oracle patch on Node A, do the following:
1. Log on to Node A as the installation user (in our lab, sapinst).
2. Move all cluster resources to Node A using Microsoft Cluster Administrator.
3. From the Services applet in the Control Panel, stop all Oracle services (it should only be necessary to stop the OracleTNSListener80 service).
4. Insert the Oracle RDBMS CD-ROM and start the Oracle Installer program by running SETUP.EXE in the \NT\I386\PATCHES\8.0.5.1.1\WIN32\INSTALL directory.
5. You are asked to provide many values, which we have collected in Table 25.
Table 25. Oracle patch installation
Parameters (Lab values / Your values):
Company name. Lab value: IBM.
Oracle Home: Name. Lab value: DEFAULT_HOME.
Oracle Home: Location. Lab value: C:\orant.
Oracle Home: Language. Lab value: English_SAP.
Software Asset Manager. Lab value: select only Oracle8 Server Patch 8.0.5.1.1.
Oracle 8 Server Patch components. Lab value: select all the components.
6. Ignore the request to run the scripts.
7. At the end of the installation, reboot the system.
Step 24.4: Install Oracle on Node B
Repeat step 24.2 on page 130 for Node B.
Step 24.5: Install Oracle Patch 8.0.5.1.1 on Node B
Repeat step 24.3 on page 131 for Node B.
Step 24.6: Install Oracle Fail Safe V2.1.3 on Node A
To install OFS on Node A, do the following:
1. Log on to Node A as the installation user (in our lab, sapinst).
2. Move all cluster resources to Node A using Microsoft Cluster Administrator.
3. From the Services applet in the Control Panel, stop all Oracle services (it should only be necessary to stop the OracleTNSListener80 service).
4. Insert the Oracle Fail Safe CD and start the Oracle Installer program by double-clicking ORAINST.EXE in the \NT\I386\WIN32\INSTALL directory.
5. You are asked to provide many values, which we have collected in Table 26.
Table 26. Oracle Fail Safe installation
Parameters (Lab values / Your values):
Company name. Lab value: IBM.
Oracle Home: Name. Lab value: DEFAULT_HOME.
Oracle Home: Location. Lab value: C:\orant.
Oracle Home: Language. Lab value: English_SAP.
Path update. Lab value: accept path change.
Software Asset Manager. Lab value: select only Oracle Fail Safe Manager 2.1.3.0.0 and Oracle Fail Safe Server 2.1.3.0.0 under Available Products; Select all.
6. If you are installing on Node B, ignore the Oracle Fail Safe discovery error message.
7. Continue on to step 24.7 without rebooting the system.
Step 24.7: Install Oracle Fail Safe Patch 2.1.3.1 on Node A
To install the OFS patch on Node A, do the following:
1. Insert the Oracle Fail Safe CD and start the Oracle Installer program by running ORAINST.EXE in the \NT\I386\2131\WIN32\INSTALL directory.
2. You are asked to provide many values, which we have collected in Table 27.
Table 27. Oracle Fail Safe patch
Parameters (Lab values / Your values):
Company name. Lab value: IBM.
Oracle Home: Name. Lab value: DEFAULT_HOME.
Oracle Home: Location. Lab value: C:\orant.
Oracle Home: Language. Lab value: English_SAP.
Path update. Lab value: leave path unchanged.
Software Asset Manager. Lab value: select only Oracle Fail Safe Server 2.1.3.1.0 from the list of available products.
3. At the end of the installation, exit the Oracle Installer and reboot the node.
Step 24.8: Install Oracle Fail Safe V2.1.3 on Node B
Repeat step 24.6 on page 131 for Node B.
Step 24.9: Install Oracle Fail Safe Patch 2.1.3.1 on Node B
Repeat step 24.7 on page 132 for Node B.
Parameters (Lab values / Your values):
SAP system name (see R/3 Installation on Windows NT: Oracle Database Release 4.5B, page 4-5). Lab value: ITS.
Path to the installation directory (leave default). Lab value: c:\Users\itsadm\install.
Do you want to log off? Lab value: Yes.
Parameters (Lab values / Your values):
SAP system name (SAPSYSTEMNAME) (see R/3 Installation on Windows NT: Oracle Database Release 4.5B, page 4-5; use uppercase characters). Lab value: ITS.
Parameters (Lab values / Your values):
Number of the central system (SAPSYSNR) (any two-digit number between 00 and 97). Lab value: 00.
Drive of the \usr\sap directory (SAPLOC) (on the shared disks; not on the Quorum disk; not on Oracle disks). Lab value: J:.
Windows NT domain name (SAPNTDOMAIN). Lab value: SAPDOM.
Central transport host (SAPTRANSHOST). Lab value: SAPTRANSHOST.
Character set settings (NLS_CHARACTERSET). Lab value: WE8DEC.
Default \Oracle<SID> drive (home directory for the SAPDATA<x> files). Lab value: N:.
SAPDATA_HOME. Lab value: N:.
SAPDATA1 (on the shared disks; not on the Quorum disk; not on the SAPLOC disk; not on the OrigLog disks; not on the MirrorLog disks; not on the Archive disks). Lab value: N:.
SAPDATA2 (same as for SAPDATA1). Lab value: O:.
SAPDATA3 (same as for SAPDATA1). Lab value: P:.
SAPDATA4 (same as for SAPDATA1). Lab value: P:.
SAPDATA5 (same as for SAPDATA1). Lab value: N:.
SAPDATA6 (same as for SAPDATA1). Lab value: O:.
OrigLogA. Lab value: K:.
OrigLogB. Lab value: K:.
MirrorLogA. Lab value: L:.
MirrorLogB. Lab value: L:.
SapArch. Lab value: M:.
SapBackup. Lab value: K:.
SapCheck. Lab value: K:.
SapReorg. Lab value: L:.
SapTrace. Lab value: L:.
sapr3 account password. Lab value: ibm.
RAM that is reserved to the R/3 system (RAM_INSTANCE) (change the default only if you install multiple R/3 Systems on a single host). Lab value: default.
Empty directory to which the Export1 CD is to be copied (leave default).
Parameters (Lab values / Your values):
Empty directory to which the Export2 CD is to be copied.
Port number of the message server (PORT).
<sid>adm password.
SAPService<SID> password.
Do you want to use the SAP Gateway? (R2_CONNECTION).
Number of R3load processes (PROCESSES) (for Oracle, set this to 1 regardless of the number of CPUs). Lab value: 5.
Operating system platform for which the report loads will be imported (must be Windows NT). Lab value: Windows NT.
Parameters (Lab values / Your values):
SAP system name (SAPSYSTEMNAME) (see R/3 Installation on Windows NT: Oracle Database Release 4.5B, page 4-5; use uppercase characters). Lab value: ITS.
Remaining value: leave default.
3. Log on to Node B as the installation user (in our lab, sapinst) 4. Start the cluster conversion program by running NTCLUST.BAT in the \NT\COMMON directory on the CD-ROM drive.
5. You are asked to provide many values, which you can take from Table 31.
Table 31. R/3 setup files for cluster conversion: Node B
Parameters (Lab values / Your values):
SAP system name (SAPSYSTEMNAME) (see R/3 Installation on Windows NT: Oracle Database Release 4.5B, page 4-5; use uppercase characters). Lab value: ITS.
Remaining value: leave default.
6. Continue on to step 24.13 without rebooting Node A.
Step 24.13: Install the cluster conversion files on Node B
Repeat step 24.12 for Node B.
Parameters (Lab values / Your values):
Virtual host name of the R/3 Cluster Group (NETWORKNAME) (if the HOSTS file has been configured, the proposed value is the correct one). Lab value: ITSSAP.
Virtual IP address of the R/3 Cluster Group (IPADDRESS). Lab value: 192.168.0.51.
Subnet mask for the virtual IP address of the R/3 Cluster Group (SUBNETMASK). Lab value: 255.255.255.0.
Name of the public network used for the R/3 Cluster Group (NETWORKTOUSE) (the name of the network to which the virtual IP address belongs, as defined in MSCS). Lab value: Public.
Parameters (Lab values / Your values):
SAP system name (SAPSYSTEMNAME) (see R/3 Installation on Windows NT: Oracle Database Release 4.5B, page 4-5; use uppercase characters). Lab value: ITS.
Number of the central system (SAPSYSNR) (see Note 1). Lab value: 00.
Drive of the \usr\sap directory (SAPLOC) (see Note 1). Lab value: J:.
Windows NT domain name (SAPNTDOMAIN) (see Note 1). Lab value: SAPDOM.
Virtual host name of the R/3 Oracle Group (DBHOSTNAME). Attention! Do not use the virtual host name of the Oracle group here; instead use the local host name of Node A. Lab value: servera.
Character set settings (NLS_CHARACTERSET) (see Note 1). Lab value: WE8DEC.
Default \Oracle<SID> drive (see Note 1). Lab value: N:.
SAPDATA_HOME: N:. SAPDATA1: N:. SAPDATA2: O:. SAPDATA3: P:. SAPDATA4: P:. SAPDATA5: N:. SAPDATA6: O:. OrigLogA: K:. OrigLogB: K:. MirrorLogA: L:. MirrorLogB: L:. SapArch: M:. SapBackup: K:. SapCheck: K:. SapReorg: L:. SapTrace: L:. (All: see Note 1.)
sapr3 account password (see Note 1). Lab value: ibm.
RAM that is reserved to the R/3 system (RAM_INSTANCE) (change the default only if you install multiple R/3 Systems on a single host). Lab value: default.
Port number of the message server (PORT) (leave the default value). Lab value: 3600 (default).
Parameters (Lab values / Your values):
<sid>adm password. Lab value: ibm.
SAPService<SID> password. Lab value: ibm.
Do you want to use the SAP Gateway? (R2_CONNECTION). Lab value: No.
Note 1: Value previously used.
Possible error
The R3Setup program may stop at this point. You may see the error message shown in Figure 78 on page 179. If this happens, proceed as described in 8.5, R3Setup on page 179.
4. Exit from R3Setup.
5. Start R3Setup a second time.
6. Provide the password values as required.
Step 24.15: Converting Node B for operation in the cluster
1. When R3Setup has finished, log off and log on again to Node A as <sid>adm (in our lab, itsadm); do not use sapinst here. (Note: You are logging on to Node A here, not Node B.)
2. Connect to the Oracle Instance Manager by clicking Start > Oracle Enterprise Manager > Instance Manager. Use the following information:
User name: internal
Password: oracle
Service: ITS.WORLD
Connect as: Normal
3. Shut down the instance as shown in Figure 63.
4. At the warning message, click Yes.
5. Select Immediate in the Shutdown Mode window.
6. Move the Cluster Group and the disk groups to Node B using Microsoft Cluster Administrator.
7. Take the SAP-R/3 ITS group offline and move it to Node B.
8. On Node B, bring all the resources in the SAP-R/3 ITS group online except the SAP-R/3 ITS resource. You can do this by starting from the bottom of the dependency tree and following the dependencies up the tree.
9. Log on to Node B as itsadm.
10. Click Start > SAP R/3 Setup for ITS > SAP R3 Setup - Configuring Node B for a MSCS, which corresponds to launching the program:
R3SETUP.EXE -f NTCMIGNB.R3S
11. You are asked to provide many values you must take from Table 32. Do not change any value in the table. In particular, be careful with the DBHOSTNAME: you must use servera again, not serverb.
Step 24.16: Converting the R/3 database to Fail Safe on A
1. Shut down Node A.
2. Shut down Node B.
3. Restart Node A.
4. Restart Node B.
5. Log on to Node A as the installation user (in our lab, sapinst).
6. Restart the Oracle instance by clicking Start > Oracle Enterprise Manager > Instance Manager and connect using these values:
User name: internal
Password: oracle
Service: ITS.WORLD
Connect as: Normal
7. Select Database Open as shown in Figure 64:
8. You are asked to provide the local parameter file as shown in Figure 65. Select initITS.ora.
Step 24.17: Converting the R/3 database to Fail Safe on B
Do the following to convert the database on Node B to a Fail Safe database:
1. Copy the INIT<SID>.ORA file (in our lab, INITITS.ORA) from \<Oracle_Home>\Database (in our lab, c:\orant\database) on Node A to the same directory on Node B.
2. Click Start > Programs > Oracle Enterprise Manager > Oracle Fail Safe Manager and connect using the values in Table 33.
Table 33. Connecting to Fail Safe Manager
Parameters (Meanings / Lab values / Your values):
User ID: the account used for the Cluster Server service.
Password: the password of the account used for the Cluster Server service.
Cluster alias: the Cluster Group network name. Lab value: SAPCLUS.
Domain: the Windows NT domain. Lab value: SAPDOM.
3. Create the Fail Safe group ORACLE<SID> (in our lab, ORACLEITS). In Oracle Fail Safe Manager, click Groups > Create. Provide the information from Table 34.
Table 34. Conversion of Oracle DB to a Fail Safe DB: Table 1
Parameters (Meanings / Lab values / Your values):
Name of the Oracle Cluster Group. Lab value: ORACLEITS.
Network used to connect to the Oracle Cluster Group. Lab value: Public.
Oracle Cluster Group: Network Name. Lab value: itsora.
Oracle Cluster Group: IP Address. Lab value: 192.168.0.52.
Parameters (Meanings / Lab values / Your values):
Oracle Cluster Group: Subnet Mask. Lab value: leave default.
Failover setting: leave default.
Failback setting: prevent failback.
4. In Oracle Fail Safe Manager, click SAPCLUS > Databases > Standalone Databases, right-click ITS.WORLD, and select Add to Group.
5. Provide the following values:
Table 35. Conversion of an Oracle DB to a Fail Safe DB: Table 2
Values (Meanings / Lab values / Your values):
Group Name.
Service Name.
Instance Name.
Database Name. Lab value: <SID>.
Parameter file. Lab value: \<Oracle_Home>\database\init<SID>.ora, where <Oracle_Home> is an environment variable.
Account to access the database. Lab value: Internal.
Password. Lab value: oracle.
Oracle Fail Safe Policy: Pending Timeout. Lab value: leave default.
Oracle Fail Safe Policy: Is Alive interval. Lab value: leave default.
Oracle Fail Safe Policy: the cluster software has to restart the DB in case of failure.
6. Confirm the operation Add Database to Fail Safe Group and click Yes.
7. When the Add Agent to Group OracleITS window appears, select M: (the SAP Archive drive in our lab) as the disk for the Oracle Fail Safe repository.
8. Move the SAP cluster group to Node B using Microsoft Cluster Administrator.
9. Using either SAPPAD.EXE or Notepad, open the file J:\usr\sap\ITS\sys\profile\Default.pfl and modify the SAPDBHOST line as shown in Figure 66: change servera to itsora.
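After the change of step 9, the relevant line of Default.pfl reads as follows (a sketch based on the step above, using our lab's virtual Oracle network name itsora):

   SAPDBHOST = itsora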
For troubleshooting, detailed information can be found in Oracle Fail Safe Concepts and Administration Release 2.0.5 (doc. A57521-01). Particularly relevant are:
The description of the modifications in the TNSNAMES.ORA and LISTENER.ORA files in Section 2.5
The Troubleshooting chapter, Chapter 6
Appendix B, containing the list of Oracle Fail Safe messages
Step 24.18: Completing the migration to MSCS on Node A
4. You are asked to provide some values, which we have collected in Table 36.
Table 36. Completion of the MSCS migration on Node A
Parameters (Meanings / Lab values / Your values):
Network Name for Cluster Resource Central Instance (CIHOSTNAME). Lab value: ITSSAP.
SAP System Name (SAPSYSTEMNAME) (see R/3 Installation on Windows NT: Oracle Database Release 4.5B, page 4-5; use uppercase characters). Lab value: ITS.
Number of the central system (any two-digit number between 00 and 97). Lab value: 00.
5. Reboot Node A.
Step 24.19: Completing the migration to MSCS on Node B
Do the following:
6. Log on to Node B as the installation user (in our lab, sapinst).
7. Move all cluster groups from Node A to Node B using Microsoft Cluster Administrator.
8. Click Start > Programs > SAP R/3 Setup for ITS > SAP R/3 Setup - Completing the Migration to an MSCS (Instvers), which corresponds to using the command:
R3Setup.exe -f UPDINSTV.R3S
9. You are asked to provide some values, which we have collected in Table 37.
Table 37. Completion of the MSCS migration on Node B
Parameters (Meanings / Lab values / Your values):
Network Name for Cluster Resource Central Instance (CIHOSTNAME). Lab value: ITSSAP.
SAP System Name (SAPSYSTEMNAME) (see R/3 Installation on Windows NT: Oracle Database Release 4.5B, page 4-5; use uppercase characters). Lab value: ITS.
Number of the central system (any two-digit number between 00 and 97). Lab value: 00.
Substeps described in this chapter:
Step 24.1: Fill in the installation worksheets, page 146
Step 24.2: Install DB2 on Node A, page 146
Step 24.3: Install DB2 FixPak on Node A, page 147
Step 24.4: Dropping the sample DB and rebooting Node A, page 148
Step 24.5: Install DB2 on Node B, page 148
Step 24.6: Install DB2 FixPak on Node B, page 149
Step 24.7: Dropping the sample DB and rebooting Node B, page 149
Step 24.8: Create a new source for the SAP Kernel CD-ROM, page 149
Step 24.9: Check/Correct Conversion R3S Files, page 150
Step 24.10: Install the R3SETUP tool on Node A, page 150
Step 24.11: Install the CI and DI on Node A, page 151
Step 24.12: Modify DB2MSCS.CFG & run DB2MSCS on Node A, page 152
Step 24.13: Install R3Setup files for cluster conversion on A, page 153
Step 24.14: Install R/3 files for cluster conversion on Node B, page 154
Step 24.15: Converting Node A for operation in a cluster, page 154
Step 24.16: Converting Node B for operation in a cluster, page 155
Step 24.17: Migrating MSCS on Node A, page 156
Step 24.18: Check the Services file on Node A, page 157
Step 24.19: Check the DB2<SID> Service settings on Node A, page 157
Step 24.20: Check the Services file on Node B, page 158
Step 24.21: Check the DB2<SID> Service settings on Node B, page 158
Figure 67. Installation process for SAP R/3 on DB2. The flow consists of: install the R/3 setup tool (R3SETUP); install SAP R/3 CI&DI (R3SETUP -f CENTRDB.R3S); install the R/3 files for conversion (R3SETUP -f NTCLUSCD.R3S); SAP cluster conversion (R3SETUP -f NTCMIGNx.R3S); complete the MSCS migration (R3SETUP -f UPDINSTV.R3S).
An overall description of the installation process is shown in Figure 67. You will note that some steps are to be done on Node A, some on Node B, and some are to be carried out on both nodes. The main references for the installation are:
R/3 Installation on Windows NT: DB2 Common Server Release 4.5B (doc. 51005502)
Conversion to Microsoft Cluster Server: DB2 Universal Database Server 4.5B (doc. 51006418)
The redpaper SAP R/3 and DB2 UDB in a Microsoft Cluster Server Environment, available from http://www.redbooks.ibm.com
DB2 Universal Database and SAP R/3 Version 4, SC09-2801
The continually updated OSS note 0134135 4.5B R/3 Installation on Windows NT (general)
Checklist - Installation Requirements: Windows NT. The main points of the checklist are discussed in 3.1, Checklist for SAP MSCS installation on page 40.
The continually updated OSS note 0134159 4.5B Installation on Windows NT: DB2/CS
Throughout the entire installation process, make sure you are always logged on as the installation user (in our lab, sapinst). This user must be a domain administrator as described in step 23.
Step 24.2: Install DB2 on Node A
Attention: Do not restart the system until after dropping the DB2 database. If you do restart the system, results are unpredictable and you may have to restart the installation process.
1. Check SAPNET for more DB2 information by viewing OSS notes 0134135 and 0134159.
2. Log on as the installation user (in our lab, sapinst).
3. Insert the DB2 RDBMS CD-ROM and start the DB2 Installer by running SETUP.EXE in the \NT_i386\DBSW directory. Figure 68 appears.
4. Follow the installation process to install DB2 on Node A. Use Table 38 to help you enter all the installation parameters. Note: Do not reboot Node A at the end of the installation.
Table 38. DB2 installation values for Node A
Parameters (Lab values / Your values):
Product: DB2 Universal Database Enterprise Edition
Installation type: Typical
Destination drive: leave default; the destination drive needs to be a local, non-shared drive
User name: leave the default name
Password: enter a password
Reboot: No
Step 24.3: Install DB2 FixPak on Node A
1. Insert the DB2 RDBMS CD and start the DB2 Installer by running SETUP.EXE in the \NT_i386\FIXPAK directory. Figure 69 appears.
2. Deselect both values.
3. When prompted, do not reboot the server.
Step 24.4: Dropping the sample DB and rebooting Node A
1. Log off and log on as the installation user (in our lab, sapinst) to enable the changes to the environment variables, such as the search path.
2. Open a command prompt.
3. Remove the default database instance of DB2 using the command:
C:\> db2idrop DB2
4. Reboot Node A.
Step 24.5: Install DB2 on Node B
Attention: Do not restart the system until after dropping the DB2 database. If you do restart the system, results are unpredictable and you may have to restart the installation process.
1. Insert the DB2 RDBMS CD and start the DB2 Installer by running SETUP.EXE in the \NT_i386\DBSW directory.
2. Follow the installation process to install DB2 on Node B. Use Table 39 to help you enter all the installation parameters. Note: Do not reboot Node B at the end of the installation.
Table 39. DB2 installation values for Node B
Parameters | Lab values | Your values
Step 24.6: Install DB2 FixPak on Node B
1. Insert the DB2 RDBMS CD and start the DB2 Installer program by running SETUP.EXE in the \NT_i386\FIXPAK directory. Figure 69 on page 148 appears.
2. Deselect both values.
3. When prompted, do not reboot the server.
Step 24.7: Dropping the sample DB and rebooting Node B
1. Log off and log on as the installation user (in our lab, sapinst) to enable the changes to the environment variables, such as the search path.
2. Open a command prompt.
3. Remove the default database instance of DB2 using the command:
C:\> db2idrop DB2
4. Reboot Node B.
6. Copy the content of the SAPPATCHES directory into C:\SAPKERNEL\NT\COMMON.
7. This is now the location of the new Kernel CD when the R3Setup program asks for the Kernel CD.
Step 24.9: Check/Correct Conversion R3S Files
The conversion of the stand-alone server to a cluster is administered by two files, NTCMIGNA.R3S for Node A and NTCMIGNB.R3S for Node B. These files are located in the C:\SAPKERNEL\NT\COMMON directory. Open each file with C:\SAPKERNEL\NT\I386\SAPPAD.EXE. Verify that the l (lowercase L) is present in the DB2INSTANCE variable as shown in Figure 70:
[DB_ENV]
DB2DBDFT=@SAPSYSTEMNAME@
DB2DB6EKEY=@DB2DB6EKEY@
DB6EKEY=@DB2DB6EKEY@
DSCDB6HOME=@CIHOSTNAME@
DB2INSTANCE=db2l@LOWER_SAPSYSTEMNAME@
DB2CODEPAGE=819
[INST_ENV]
DB2DBDFT=@SAPSYSTEMNAME@
DB2DB6EKEY=@DB2DB6EKEY@
DB6EKEY=@DB2DB6EKEY@
DSCDB6HOME=@CIHOSTNAME@
DB2INSTANCE=db2l@LOWER_SAPSYSTEMNAME@
DB2CODEPAGE=819
Ensure there is a lowercase L in each of these two locations in both files. If the lowercase L is not there, enter one. Note: Later versions of the files may have this corrected.
Step 24.10: Install the R3SETUP tool on Node A
1. Log on as the installation user (in our lab, sapinst).
2. Move all cluster resources to Node A.
3. Explore the SAPKERNEL CD directory and start the R3SETUP program by double-clicking R3SETUP.BAT in the C:\SAPKERNEL\NT\COMMON directory.
4. Proceed with the installation, using Table 40 for the installation parameters.
Table 40. Values for the installation of R3SETUP on Node A
Parameters (Lab values / Your values):
SAP system name (SAPSYSTEMNAME) (see R/3 Installation on Windows NT: DB2 Database Release 4.5B, page 4-5). Lab value: ITS.
Path to installation directory (INSTALL PATH) (leave default). Lab value: default.
Do you want to log off? (CDINSTLOGOFF_NT_IND). Lab value: Yes.
Parameters (Lab values / Your values):
SAP system name (SAPSYSTEMNAME) (see R/3 Installation on Windows NT: DB2 Database Release 4.5B, page 4-5). Lab value: ITS.
Number of the central system (SAPSYSNR) (any two-digit number between 00 and 97).
Drive of the \usr\sap directory (SAPLOC) (on the shared disks; not on the Quorum disk; not on DB2 disks).
Windows NT domain name (SAPNTDOMAIN).
Central transport host (SAPTRANSHOST).
Encryption key (DB2DB6EKEY). Lab value: <SID><servername>.
DB2<sid> password.
<sid>adm password.
SAPService<SID> password.
DB2 instance directory. Lab value: L:.
DFTDBPATH. Lab value: L:.
DIAGPATH.
LOGDIR_DRIVE (LOG_DIR directory on the shared disks). Lab value: K:.
LOGARCHIVEDIR_DRIVE (LOG_ARCHIVE directory on the shared disks). Lab value: L:.
SAPREORGDIR_DRIVE (SAPREORG subdirectory on the same drive as the LOG_ARCHIVE directory, on the shared disks). Lab value: L:.
SAPDATA1_DRIVE (on the shared disks; not on the Quorum disk; not on the SAPLOC disk; not on the LOG_DIR disks; not on the LOG_ARCHIVE disks). Lab value: N:.
SAPDATA2_DRIVE (same as above). Lab value: O:.
SAPDATA3_DRIVE (same as above). Lab value: P:.
Parameters (Lab values / Your values):
SAPDATA4_DRIVE. Lab value: P:.
SAPDATA5_DRIVE. Lab value: N:.
SAPDATA6_DRIVE. Lab value: O:.
sapr3 user password. Lab value: ibm.
RAM that is reserved to the R/3 system (RAM_INSTANCE). Lab value: default.
Kernel CD location (this is the copy of the CD on the local drive).
Port number of the message server (PORT) (leave the default value).
Database Services (PORT) (leave the default value).
R2_Connection. Lab value: No.
SMSTEMP.
Number of R3load processes (PROCESSES) (when RAM is 512 MB, use 2; when RAM is greater than 512 MB, use a value equal to the number of CPUs).
Operating system platform for which the report loads will be imported. Lab value: Windows NT.
DB2MSCS.CFG before modifications:

CLUSTER_NAME=<cluster name>
GROUP_NAME=DB2 <SAPSID> Group
DB2_INSTANCE=DB2<SAPSID>
IP_NAME=DB2 IP <SAPSID>
IP_ADDRESS=<virtual IP address of DB2 group>
IP_SUBNET=<subnet mask of DB2 group>
IP_NETWORK=<network name used to communicate>
NETNAME_NAME=DB2 NetName <SAPSID>
NETNAME_VALUE=<hostname of DB2 group>
DISK_NAME=<resource name of shared disk containing database files>
DISK_NAME=<...more shared disks>
INSTPROF_DISK=<resource name of the shared disk where DFTDBPATH is pointing to>
DB2MSCS.CFG with lab modifications:

CLUSTER_NAME=SAPCLUS
GROUP_NAME=DB2 ITS Group
DB2_INSTANCE=DB2ITS
IP_NAME=DB2 IP ITS
IP_ADDRESS=192.168.0.52
IP_SUBNET=255.255.255.0
IP_NETWORK=PUBLIC
NETNAME_NAME=DB2 NetName ITS
NETNAME_VALUE=ITSDB2
DISK_NAME=DISK K:
DISK_NAME=DISK L:
DISK_NAME=DISK M:
DISK_NAME=DISK N:
DISK_NAME=DISK O:
DISK_NAME=DISK P:
INSTPROF_DISK=DISK L:
4. Stop the service DB2<SID> by issuing DB2STOP at a command prompt.
Note: When executing DB2STOP, you may get this error: SQL1390C The environment variable DB2INSTANCE is not defined or is invalid. If this happens, check the value of the DB2INSTANCE environment variable. If the value is still DB2, do the following:
a. Delete the DB2INSTANCE environment variable from the system environment using the System applet in the Control Panel.
b. Set DB2INSTANCE for your current command prompt session to DB2<SID> using the command: set DB2INSTANCE=DB2<SID>
c. Re-execute the DB2STOP command.
5. Change to the C:\USERS\<SID>ADM\INSTALL directory.
6. Run the utility DB2MSCS.
Note: The values for DISK_NAME and INSTPROF_DISK depend not only on the drive letter, but also on the type of disk subsystem used. In our example we used a Fibre Channel controller, so there was no need for the IPSHA prefix.
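Put together, the workaround and the DB2MSCS run look like the following sketch; DB2ITS and the ITSADM installation directory are our lab values, and DB2MSCS reads its configuration file through the -f: option:

   REM Point the current session at the SAP instance, stop it,
   REM then run DB2MSCS against the prepared configuration file
   set DB2INSTANCE=DB2ITS
   db2stop
   cd C:\USERS\ITSADM\INSTALL
   db2mscs -f:DB2MSCS.CFG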
Step 24.14: Install R/3 files for cluster conversion on Node B
Copy the new SAPKERNEL directory from Node A to Node B. In Explorer, open C:\SAPKERNEL\NT\COMMON and start the conversion program by double-clicking NTCLUST.BAT.
Table 43. Values for the cluster conversion on Node B
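For the directory copy in Step 24.14, a command-line sketch run on Node B is shown below; NODEA stands for the computer name of your Node A, reached through its administrative share C$, and xcopy /E /I copies the whole tree including empty subdirectories:

   xcopy \\NODEA\C$\SAPKERNEL C:\SAPKERNEL /E /I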
Step 24.15: Converting Node A for operation in a cluster
1. When you initially launch the Configuring Node A for MSCS program, you will get the following error message: Error: OSUSERSIDADMRIGHTS_NT_DB6 InstallationDo phase failed. You get this because the R/3 setup program does not wait for the resources to go online. Launch the program a second time.
2. Proceed with the conversion using Table 44 for the parameters.
Table 44. Values for converting Node A to MSCS
- Virtual name for the R/3 Setup group
- SAP R/3 group IP address
- Subnet mask
- Network to use
- SAP system name (SAPSYSTEMNAME): see R/3 Installation on Windows NT: DB2 Database, Release 4.5B, page 4-5
- Number of the central system (SAPSYSNR): any two-digit number between 00 and 97
- Drive of the \usr\sap directory (SAPLOC): on the shared disks; not on the quorum disks; not on the DB2 disks
- Windows NT domain name (SAPNTDOMAIN). Lab value: SAPDOM
- Hostname of the R/3 database server (DBHOSTNAME): see Figure 71 on page 153. Lab value: ITSDB2
- Encryption key (DB2DB6EKEY). Lab value: <SID><servername>
- DB2<sid> password, <sid>adm password, SAPService<SID> password. Lab value: ibm
- RAM that is reserved for the R/3 system (RAM_INSTANCE)
- Gateway (R2_Connection). Lab value: No
Step 24.16: Converting Node B for operation in a cluster
1. Log on to Node B as the installation user (in our lab, sapinst).
2. Take the resource SAP_R/3 <SID> offline. (Take only this resource offline.)
3. Move all resources to Node B and bring them online.
4. Click Start > Programs > SAP R/3 Setup > Configuring Node B for MSCS.
5. Enter the values as per Table 45.
Table 45. Values for converting Node B to MSCS
- Virtual name for the R/3 Setup group (NETWORKNAME). Lab value: ITSSAP
- SAP R/3 group IP address (IPADDRESS)
- Subnet mask
- Network to use (NETWORKTOUSE)
- SAP system name (SAPSYSTEMNAME): see R/3 Installation on Windows NT: DB2 Database, Release 4.5B, page 4-5
- Number of the central system (SAPSYSNR): any two-digit number between 00 and 97. Lab value: 00
- Drive of the \usr\sap directory (SAPLOC): on the shared disks; not on the quorum disks; not on the DB2 disks
- Windows NT domain name (SAPNTDOMAIN)
- Hostname of the R/3 database server (DBHOSTNAME). Lab value: ITSDB2
- Encryption key (DB2DB6EKEY). Lab value: <SID><servername>
- DB2<SID> password
- <sid>adm password
- Port number of the message server (PORT)
- Database Services port
- SAPService<SID> password
- SAPR3 password
[RFCUPDATEINSTVERS_IND_IND]
CLASS=CRfcJob
RFCREPNAME=NO
RFCSTEP=3
CIHOSTNAME=
SAPSYSTEMNAME=
SAPSYSNR=
CONFIRMATION=CIHOSTNAME SAPSYSTEMNAME SAPSYSNR

3. Start the R/3 system, or verify that it is running.
4. Click Start > Programs > SAP R/3 Setup > Completing the Migration to MSCS.
Table 46. Values for completing the migration to MSCS
- Virtual name for the R/3 Setup group (CIHOSTNAME). Lab value: ITSSAP
- SAP system name (SAPSYSTEMNAME): see R/3 Installation on Windows NT: DB2 Database, Release 4.5B, page 4-5. Lab value: ITS
- Number of the central system (SAPSYSNR): any two-digit number between 00 and 97. Lab value: 00
Step 24.18: Check the Services file on Node A
The file C:\WINNT\System32\drivers\etc\Services on both nodes needs to contain the lines below if the defaults were taken for the port numbers:

sapdp00     3200/tcp
sapdp00s    4700/tcp
sapgw00     3300/tcp
sapgw00s    4800/tcp
sapmsITS    3600/tcp
sapdb2ITS   5912/tcp
sapdb2ITSi  5913/tcp

Step 24.19: Check the DB2<SID> Service settings on Node A
The service DB2<SID> must be set to Manual startup and is administered by db2<sid>. This is necessary because the service operates at the cluster level and must not run on both nodes of the same cluster at the same time.
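For Steps 24.18 and 24.20, a quick way to verify the entries on each node is a findstr sketch like the following (findstr is part of Windows NT; /B anchors each search string at the beginning of a line, and the ITS names are our lab values):

   REM Expect all seven lines: sapdp00, sapdp00s, sapgw00, sapgw00s,
   REM sapmsITS, sapdb2ITS and sapdb2ITSi
   findstr /B "sapdp00 sapgw00 sapmsITS sapdb2ITS" C:\WINNT\System32\drivers\etc\Services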
Step 24.20: Check the Services file on Node B
Repeat Step 24.18 on page 157 for Node B.
Step 24.21: Check the DB2<SID> Service settings on Node B
Repeat Step 24.19 on page 157 for Node B.
Substeps described in this chapter:
Step 24.1: Fill in the installation worksheets, page 160
Step 24.2: Install SQL Server 7.0 EE on Node A, page 160
Step 24.3: Install SQL Server 7 SP1 on Node A, page 161
Step 24.4: Install the R/3 Setup tool on Node A, page 161
Step 24.5: Install the R/3 CI and DBMS on Node A, page 162
Step 24.6: Run the Failover Cluster Wizard on Node A, page 164
Step 24.7: Install the R/3 cluster conversion tool on Node A, page 164
Step 24.8: Install the R/3 cluster conversion tool on Node B, page 165
Step 24.9: Converting Node A for operation in a cluster, page 165
Step 24.10: Converting Node B for operation in a cluster, page 166
Step 24.11: Completing the migration to MSCS, page 166
Step 24.12: Removal of unused resources, page 167

The R3Setup invocations used during the process:
Install R/3 Setup tool:            R3SETUP
Install SAP R/3 CI&DI:             R3SETUP -f CENTRDB.R3S
Install R/3 files for conversion:  R3SETUP -f NTCLUSCD.R3S
SAP cluster conversion:            R3SETUP -f NTCMIGNx.R3S
Complete MSCS migration:           R3SETUP -f UPDINSTV.R3S
Figure 73. Installation process for SAP R/3 on SQL Server 7.0
An overall description of the installation process is shown in Figure 73. Note that some steps are performed on Node A, some on Node B, and some on both nodes.
Attention
If you do not install the programs in the following order, the software products can fail and require that the disks be reformatted and the installation restarted.

The main references for the SQL Server installation are:
- R/3 Installation on Windows NT: Microsoft SQL Server, Release 4.5B (document number 51005503)
- Conversion to Microsoft Cluster Server: MS SQL Server, 4.0B 4.5A 4.5B (document number 51005504)
- How to Install SQL Server 7.0, Enterprise Edition on Microsoft Cluster Server: Step by Step Instructions (white paper from http://www.microsoft.com/sql)
From this point, we assume that Windows NT 4.0 Enterprise Edition is installed on both nodes with Service Pack 3, and MSCS is running.
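A quick sketch to confirm from a command prompt that MSCS is actually running before you start (Cluster Server is the display name the service list shows on Windows NT 4.0 EE):

   REM Prints the "Cluster Server" line only if the Cluster Service is started
   net start | find "Cluster Server"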
Step 24
These steps are part of installation step 24 as described on page 108 in Chapter 4.
Throughout the entire installation process, make sure you are always logged on as the installation user (in our lab, sapinst).
- Setup type: select Custom. See 3.4.2.5, SQL Server 7.0 files, on page 61 to determine the target disk drive. Program files: the default location is \MSSQL7. Database files for the Master, Msdb, and Pubs databases: we recommend you install them in a different directory, \MSSQL7DB. Confirm the default settings.
- Select components: leave the default.
- Required values: Char Set: 850 Multilingual; Sort Order: Binary Order; Unicode Collation: Binary Order.
- Leave the default: the components Named Pipes, TCP/IP Sockets, and Multiprotocol must be installed.
- Customize the settings for each service: specify the user names of the two service accounts created prior to the SQL Server installation (repeat this step for both SQL Server and SQL Agent) and do not select AutoStart service. Lab values: SQL Server: sqlsvc / password / SAPDOM; SQL Agent: sqlagent / password / SAPDOM.
- Remote information: specify the cluster service account information (for the cluster administrator account). Lab value: clussvc / password / SAPDOM.
4. After SQL Server is installed, it is normal for the cluster group to be offline. By default, neither SQL Server nor SQL Server Agent is started automatically when the installation is complete.
5. Reboot the server and test the SQL Server installation as follows:
- Start SQL Server on Node A
- Register Server A
- Perform simple queries
- Set up SQLMail if you intend to use it
- Stop SQL Server

Step 24.3: Install SQL Server 7 SP1 on Node A
1. Download Service Pack 1 from the Microsoft FTP server at the following address, or get it from the Microsoft TechNet CDs: ftp://ftp.microsoft.com/bussys/sql/public/fixes/usa/sql70
2. All the cluster resources must be online on Node A.
3. Start the Service Pack installation program, and use the information in Table 48:
Table 48. Service Pack 1 for SQL Server 7
- Connect to Server (remote information): specify the SAP cluster administrator account
2. Check that the TEMP environment variable has been set, using the System applet in the Windows NT Control Panel. TEMP is normally set to C:\TEMP. Make sure that the specified directory really exists in the file system. 3. From the SAP Kernel CD-ROM, start the program \NT\COMMON\R3SETUP.BAT. 4. You are asked to provide values you can take from the following table:
Table 49. Installation of R/3 Setup tool
- SAP system name (SAPSYSTEMNAME)
- Path to installation directory (INSTALL PATH)
- Do you want to log off? (CDINSTLOGOFF_NT_IND)
5. At the end of this step, you will be logged off from your Windows NT session. For the next step, you must log on with the same Windows NT user account, because the installation of the R/3 Setup tool assigns the rights necessary for performing an installation to the user who installs the tool.
- SAP system name (SAPSYSTEMNAME). Lab value: <SID>
- Number of the central system (SAPCISYSNR): any two-digit number between 00 and 97
- SAPLOC, drive of the \usr\sap directory: on the shared disks; not on the quorum disks; not on the SQL Server disks
- Windows NT domain name (SAPNTDOMAIN)
- Central transport host (SAPTRANSHOST)
- RAM that is reserved for the R/3 system (RAM_INSTANCE): leave the default value. Lab value: leave the default (2176 MB in our configuration)
- Port number of the message server (PORT). Lab value: 3600 (default)
- <SID>adm password. Lab value: password
- SAPService<SID> password. Lab value: password
- SAP gateway. Lab value: No
- TEMPDATAFILESIZE. Lab value: 300 (default)
- TEMPDATAFILEDRIVE: see 3.4.2.5, SQL Server 7.0 files, on page 61. Lab value: K:
- Type of installation: choose custom if you intend to optimize the parameters for data and log files; otherwise select No. Lab value: Automatic installation
- DATAFILEDRIVE1: on the shared disks; not on the quorum disks; not on the SAPLOC disk; not on the log disks; not on the archive disks. Lab value: M:
- DATAFILEDRIVE2: same restrictions as DATAFILEDRIVE1. Lab value: N:
- DATAFILEDRIVE3: same restrictions as DATAFILEDRIVE1. Lab value: O:
- LOGFILEDRIVE: on the shared disks; not on the quorum disks; not on the SAPLOC disk; not on the SAPDATA disks; not on the archive disks. Lab value: L:
- Number of CPUs: leave the default value. Lab value: 4
5. When R3Setup has obtained all the information it needs, it automatically begins with installation processing. This phase can take up to several hours depending on the global configuration and the server performance. 6. Reboot Node A.
- By default there is no password (the sa account)
- Type the SQL Server service account
- The virtual IP address for accessing the database on the public network, with the correct subnet mask
- Server Name: the name of the virtual server
4. After the installation process, you have to reboot Node B and Node A.
5. When the system is restarted, log on to Node A as the installation user (in our lab, sapinst).
6. Move all resources back to Node A.
7. Redefine the shares on the \USR\SAP directory. You have to configure two shares manually, SAPLOC and SAPMNT (you must respect these names), which both point to the same directory: \USR\SAP. A command-line sketch follows this list.
8. Restart the Windows NT services SAPOSCOL and SAP<SID>.
9. Restart the database service in the Cluster Administrator.
10. All the resources for SQL Server are now displayed in the Cluster Administrator, and the database can be moved between nodes.
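The two shares can also be defined from the command line; this sketch assumes \USR\SAP resides on shared drive K: in your configuration (net share is built into Windows NT):

   REM Both share names are mandatory and must point to the same directory
   net share SAPLOC=K:\usr\sap
   net share SAPMNT=K:\usr\sap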
3. You are asked to provide the installation directory: enter the same directory used during the installation of the R/3 instance.
Table 52. Installation of the cluster conversion tool
- SAP system name. Lab value: <SID>
4. You are automatically logged off from your Windows NT session. Step 24.8: Install the R/3 cluster conversion tool on Node B Log on to Node B as the installation user (in our lab, sapinst), and run NTCLUST.BAT per step 24.7.
- Virtual name for the R/3 system: the name of the SAP cluster group; do not enter the name of the cluster. Lab value: ITSSAP
- Virtual IP address for the R/3 system on the public network: the IP address for the SAP cluster group; do not enter the IP address of the cluster. Lab value: 192.168.0.51
- Subnet mask
- Network to use (NETWORKTOUSE): specify the name of the public network
- SAP system name (SAPSYSTEMNAME). Lab value: <SID>
- Number of the central system (SAPCISYSNR): any two-digit number between 00 and 97
- SAPLOC, drive of the \USR\SAP directory: the SAP software shared disk
- Windows NT domain name (SAPNTDOMAIN)
- Database virtual name (DBHOSTNAME)
- RAM that is reserved for R/3 (RAM_INSTANCE)
- <sid>adm password
- SAP gateway (R2_CONNECTION)
When all entries have been made, R3SETUP converts the R/3 instance on Node A for operation in a cluster.
5. When the processing is finished, take the SAP R/3 cluster group offline in the Cluster Administrator, and move it to Node B. On Node B, bring all the resources in that group online, except the R/3 resource.
Step 24.10: Converting Node B for operation in a cluster
1. Log on to Node B as the installation user (in our lab, sapinst).
2. Make sure that the cluster resources for the R/3 system (SAP_R/3 <SID>) within MSCS are owned by Node A.
3. Click Start > Programs > SAP R/3 Setup > Configuring Node B for MSCS.
4. You are prompted to enter values for a number of parameters; use Table 53 for assistance. When all entries have been made, R3SETUP converts the R/3 instance on Node B for operation in a cluster.
5. When R3SETUP has finished, start the R/3 cluster resource SAP_R/3 <SID>. You should now have the SQL Server group online on Node A and the SAP R/3 group online on Node B. At this stage, you can swap the two cluster groups between the two nodes manually to make them run where you want them to run.
2. You are prompted to enter values for a number of parameters described in Table 54:
Table 54. Completing the migration to MSCS
- Central instance host name (CIHOSTNAME). Lab value: ITSSAP
- SAP system name (SAPSYSTEMNAME). Lab value: ITS
- Number of the central system (SAPCISYSNR). Lab value: 00
3. Restart the server. The R/3 system has now been fully converted, can operate in the cluster, and can make use of the cluster features. Test whether the cluster failover mechanism is working properly by simulating a failover, first on the R/3 group and then on the database group.
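The failover simulation can also be driven with the cluster.exe command-line tool installed with MSCS; this is a sketch using our lab group names SAP-R/3 ITS and SQL ITS (verify the exact group names in Cluster Administrator, and check cluster /? for the option spelling on your service pack level):

   REM Push each group to the other node, then list the groups and their state
   cluster group "SAP-R/3 ITS" /move
   cluster group "SQL ITS" /move
   cluster group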
Your first step should be to try to isolate which of these components needs a thorough analysis. The following hints can be useful in this phase:
- If you have connectivity problems from a SAPGUI, or you are not able to fail over from one node to the other, this does not necessarily mean you have a problem with the SAP system. Quite often, network switches or routers are the cause of a non-working SAP system. The routing tables on the SAP servers and on the clients must also be examined. Call in a network expert who is able to analyze the network traffic.
- If you need to determine whether you have a cluster problem or an SAP problem, proceed as follows: on both Node A and Node B, create a hardware profile (NoSAPNoDB) in which all the SAP and DBMS services are disabled. You can get the list of services to disable from 8.3, Services, on page 173. Reboot both nodes and test the cluster using the Microsoft CASTEST utility (see 4.9.2, Test the failover process, on page 107 for more details).
If the test is successful, you know that your cluster is working properly, so look for the problem in the SAP or DBMS installation. If the test is not successful, use the CASTEST log and the Microsoft Cluster log to understand what is not working.
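If no cluster log is being written, diagnostic logging can be enabled through the ClusterLog system environment variable, the mechanism described in Knowledge Base article Q168801; the path below is only an example:

   REM Define ClusterLog=C:\WINNT\Cluster\cluster.log as a SYSTEM environment
   REM variable (Control Panel > System), then restart the Cluster Service:
   net stop clussvc
   net start clussvc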
To provide you with some information about the interpretation of this log, we will show some small excerpts of real logs. This first excerpt, Figure 75, describes the start of the Cluster Service on Node servera.
11f::19-17:59:33.967 Cluster Service started - Cluster Version 2.224.
11f::19-17:59:33.967 OS Version 4.0.1381 - Service Pack 4.
11f::19-17:59:33.967 We're initing Ep...
11f::19-17:59:33.967 [DM]: Initialization
11f::19-17:59:33.967 [DM] DmpRestartFlusher: Entry
11f::19-17:59:33.967 [DM] DmpStartFlusher: Entry
11f::19-17:59:33.967 [DM] DmpStartFlusher: thread created
11f::19-17:59:33.967 [NM] Initializing...
11f::19-17:59:33.967 [NM] Local node name = servera.

Figure 75. MSCS log excerpt: Cluster Service start
The first line shows the Cluster Service starting:
- 11f is the ID of the thread issuing the log entry
- 19-17:59:33.967 is the GMT time stamp
- Cluster Service started - Cluster Version 2.224 is the event description
The lines marked [DM] come from the Database Manager, the cluster component through which changes to the cluster configuration are made. Table 55 is a list of typical components found in a cluster log:
Table 55. Components listed in the log
DM    Database Manager
NM    Node Manager
FM    Failover Manager
API   API support
LM    Log Manager
CS    Cluster Service
INIT  State of a node before joining the cluster
JOIN  State of the node when the node tries to join the cluster
EP    Event Processor
RM    Resource Monitor
GUM   Global Update Manager
To understand what the cluster does, it is necessary to understand the internal architecture of MSCS. Good references are: Clustering Architecture (Microsoft white paper) Windows NT Microsoft Cluster Server by Richard R. Lee
A fundamental document containing a complete list of the Event Viewer errors caused by cluster problems, with descriptions, is MS Cluster Server Troubleshooting and Maintenance by Martin Lucas.
The next excerpt from the MSCS log, Figure 76, shows the completion of the start process:

11f::19-17:59:35.857 [FM] FmJoinPhase2 complete, now online!
11f::19-17:59:35.860 [INIT] Cluster Started! Original Min WS is 204800, Max WS is 1413120.
189::19-17:59:35.860 [CPROXY] clussvc initialized
140::19-17:59:42.656 Time Service <Time Service>: Status of Time Service request to sync from node serverb is 0.
Figure 76. MSCS log excerpt: cluster started
The next lines in the log, shown in Figure 77, describe the arbitration process to get access to the quorum:

16a::19-18:00:07.672 [NM] Checking if we own the quorum resource.
180::19-18:00:07.672 Physical Disk <Disk I:>: SCSI, error reserving disk, error 170.
180::19-18:00:36.860 Physical Disk <Disk I:>: Arbitrate returned status 0.
16a::19-18:00:36.860 [FM] Successfully arbitrated quorum resource 9abfb375-540e-11d3-bd6a-00203522d044.
16a::19-18:00:36.860 [FM] FMArbitrateQuoRes: Current State 2 State=2 Owner 2
16a::19-18:00:36.860 [FM] FMArbitrateQuoRes: Group state :Current State 0 State=0 Owner 2
16a::19-18:00:36.860 [NM] We own the quorum resource.
Figure 77. MSCS log excerpt: quorum arbitration
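When you are hunting for a single component's activity, the log can be filtered with findstr; a sketch follows (the /C: option searches for a literal string, and the log path is an example):

   REM Extract all Failover Manager and Node Manager events
   findstr /C:"[FM]" C:\WINNT\Cluster\cluster.log
   findstr /C:"[NM]" C:\WINNT\Cluster\cluster.log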
The SAP installation procedure was changed with the release of R/3 4.5A. As described in OSS note 0138765, Cluster Migration: Terminology and Procedure, the installation is composed of two main phases:
- The ordinary (non-cluster) installation
- The cluster migration
In the first phase, the CENTRDB.TPL template is used, and the relevant information is logged in the CENTRDB.LOG file. In the second phase, the NTCMIGNA.TPL (for Node A) and NTCMIGNB.TPL (for Node B) templates are used; the corresponding logs, NTCMIGNA.LOG and NTCMIGNB.LOG, should be analyzed. During the creation of the cluster group, three main programs are used:
- INSAPRCT: responsible for registering the SAP resource type
- CRCLGRP: responsible for creating the SAP cluster group
- COCLGRP: responsible for creating the R/3 resource
These three programs write errors to the R3CLUS.LOG file. See OSS note 0112266, R/3 + MSCS Cluster Server: Frequent Questions + Tips, for further information. As of Release 4.5A, there is a specific R3Setup step dedicated to the correction of the table INSTVERS, named Completing cluster installation (Instvers) (see OSS note 0112266). These steps are logged in the UPDISTINTV.LOG file. If the installation is complete but the instance does not start, a good source of information is STARTUP.LOG, where the start of the instance is logged. Further information can be found in OSS note 0002033, Startup fails, sapstart.sem, startsap, sapstart.
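A sketch for sweeping these logs for failed steps in one pass; the file names are the ones listed above, C:\USERS\ITSADM\INSTALL is our lab installation directory, and findstr /I makes the search case-insensitive:

   cd C:\USERS\ITSADM\INSTALL
   findstr /I "error failed" CENTRDB.LOG NTCMIGNA.LOG NTCMIGNB.LOG R3CLUS.LOG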
8.3 Services
You can obtain a complete list of the services and their running status using the SCLIST utility from the Windows NT Resource Kit. The following subsections list the services present at the end of the installation, with a few comments about their meaning.
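A sketch of capturing the service status of both nodes for comparison; NODEA and NODEB stand for your node names, and the -r switch, which we assume here restricts the output to running services, should be confirmed with sclist /? on your Resource Kit level:

   sclist -r NODEA > services_nodea.txt
   sclist -r NODEB > services_nodeb.txt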
8.3.1 Oracle
Table 56 contains a complete list of relevant services running on a cluster node just after the completion of the installation.
Table 56. Oracle Services
Service
User account
Startup
Meaning
Cluster Server
ClusSvc
Automatic
Windows NT 4.0 EE service implementing the cluster features such as resource monitoring, failover, and so on. Listens for and responds to job and event requests sent from the OEM console (2). Listens for and responds to job and event requests sent from the OEM console. Oracle version 8 provides a client cache service that allows a client on most platforms to store information retrieved from a Names Server in its local cache.
OracleAgent80
System
Manual
OracleAgent80ITSORA
System
Manual
OracleClientCache80
System
Manual
Service
User account
Startup
Meaning
OracleDataGatherer
System
Manual
Gathers performance statistics for the Oracle Performance Manager. Enables information from database queries to be published to a Web page at specified time intervals (2). Oracle cluster service. Oracle instance ITS service. Listens for and accepts incoming connection requests from client applications (2). SAP instance service. SAP Operating System Collector service.
OracleExtprocAgent
System
Manual
SAPITS_00 SAPOSCOL
SAPServiceITS SAPServiceITS
Manual Automatic
Notes:
(1) For this service, the setting Allow Service to Interact with Desktop must be checked.
(2) From the Oracle book Oracle8: Getting Started.
8.3.2 DB2
Table 57 contains a complete list of relevant DB2 services running on a cluster node just after the completion of the installation:
Table 57. DB2 Services
Service
User account
Startup
Meaning
Cluster Server
ClusSvc
Automatic
Windows NT 4.0 EE service implementing the cluster features such as resource monitoring, failover, and so on. DB2 Administration Server (DAS) instance (1). Local database instance used by work processes to access the R/3 database (2). This service controls application behavior by setting limits and defining actions when the limits are exceeded. DB2 Java Applet server, to support Java applets. DB2 ITS instance. DB2 Security Service (3).
db2admin System
Automatic Automatic
DB2 Governor
db2admin
Manual
DB2 JDBC Applet Server DB2-DB2ITS (or DB2ITS) DB2 Security Server
Service
User account
Startup
Meaning
SAPITS_00 SAPOSCOL
sapseITS sapseITS
Manual Automatic
Notes:
(1) See Chapter 4 in The Universal Guide to DB2 for Windows NT, SC09-2800, for details on the meaning of this instance.
(2) See SAP R/3 and DB2 UDB in a Microsoft Cluster environment, section 3.3.2, for more details.
(3) See Chapter 2 in The Universal Guide to DB2 for Windows NT, where the limited usage of this service in the most recent releases of DB2 is explained.
Service
User account
Startup
Meaning
Cluster Server
ClusSvc
Automatic
Windows NT 4.0 EE service implementing the cluster features such as resource monitoring, failover, and so on. MS SQL Server (instance ITS) (1). MS SQL Server agent (instance ITS), allowing the scheduling of periodic activities (1). Virtual Server service for instance ITS. SAP instance ITS. SAP Operating System Collector.
MSSQLServer$ITSSQL SQLServerAgent$ITSSQL
sqlsvc sqlsvc
VSrvSvc$ITSSQL
SYSTEM
SAPITS_00 SAPOSCOL
SAPServiceITS SAPServiceITS
Notes:
(1) See MS SQL Server Introduction (Microsoft TechNet), Chapter 7, for details.
8.4.1 Oracle
Table 59 shows the accounts stored in the Windows NT account databases on the primary domain controller (PDC) of the SAP domain and in the account database of the cluster nodes:
Table 59. Accounts on the PDC
- Cluster Service account. Lab value: ClusSvc. Belongs to: Domain Users. User rights: Back up files and directories; Increase quotas; Increase scheduling priority; Load and unload device drivers; Lock pages in memory; Log on as a service; Restore files and directories
- <sapsid>adm. Lab value: itsadm. Belongs to: Domain Users, Domain Admins, SAP_ITS_GlobalAdmin. User rights: Act as part of the operating system; Log on as a service; Replace a process-level token
- SAPService<SAPSID>. Lab value: SAPServiceITS. Belongs to: Domain Users, SAP_ITS_GlobalAdmin. User rights: Act as part of the operating system; Increase quotas; Replace a process-level token
- Global group SAP_<SAPSID>_GlobalAdmin. Lab value: SAP_ITS_GlobalAdmin. Contains: itsadm and SAPServiceITS. User rights: Access this computer from the network; Log on as a service
Table 60 shows the accounts configured in the Windows NT account database on both nodes of the cluster:
Table 60. Accounts on the cluster nodes
- Local group ORA_<SAPSID>_DBA (lab value: ORA_ITS_DBA). Contains: SAPDOM\ClusSvc and SAPDOM\itsadm
- Local group ORA_<SAPSID>_OPER (lab value: ORA_ITS_OPER). Contains: SAPDOM\itsadm and SAPDOM\SAPServiceITS
- Local group SAP_<SAPSID>_LocalAdmin (lab value: SAP_ITS_LocalAdmin). Contains: SAPDOM\SAP_ITS_GlobalAdmin
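Membership of these groups is easy to verify from a command prompt; this sketch uses our lab names (net group queries global groups on the domain, net localgroup the local groups of the node you are logged on to):

   net group SAP_ITS_GlobalAdmin /domain
   net localgroup ORA_ITS_DBA
   net localgroup ORA_ITS_OPER
   net localgroup SAP_ITS_LocalAdmin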
User
Granted roles
System privileges
DBSNMP
User
Granted roles
System privileges
- Cluster Service account (lab value: ClusSvc). Belongs to: Administrators, Domain Users. User rights: Back up files and directories; Increase quotas; Increase scheduling priority; Load and unload device drivers; Lock pages in memory; Log on as a service; Restore files and directories
- db2<sapsid> (lab value: db2its). Belongs to: Domain Users, SYSADM. User rights: Access this computer from the network; Act as part of the operating system; Log on as a service; Replace a process-level token
- <sapsid>adm (lab value: itsadm). Belongs to: Domain Admins, Domain Users, SAP_ITS_GlobalAdmin, SYSCTRL. User rights: Access this computer from the network; Act as part of the operating system; Increase quotas; Log on as a service; Replace a process-level token
- sapse<sapsid> (lab value: sapseits). Belongs to: Domain Users, SAP_ITS_GlobalAdmin, SYSCTRL. User rights: Access this computer from the network; Log on as a service
- Global group SAP_<SAPSID>_GlobalAdmin (lab value: SAP_ITS_GlobalAdmin). Contains: itsadm. User rights: not applicable
- Local group SYSADM. Contains: db2its. User rights: not applicable
- Local group SYSCTRL. Contains: itsadm, sapseits. User rights: not applicable
Table 63 shows the accounts stored in Windows NT databases in Server A and Server B.
- db2admin. Belongs to: Administrators. User rights: Act as part of the operating system; Create a token object; Debug programs; Increase quotas; Log on as a service; Replace a process-level token
- sapr3. Belongs to: Users
- Local group SAP_<SAPSID>_LocalAdmin. Contains the global group SAPDOM\SAP_ITS_GlobalAdmin
Users/groups and their DB2 authorities:
- User DB2ITS: All
- User ITSADM: All
- User SAPR3: Connect database; create tables; create packages; create schemas implicitly
- User SAPSEITS: All
- Group PUBLIC: Create schemas implicitly
- sqlsvc: the SQL Server service account created before the SQL Server installation
- sa (systems administrator): standard MS SQL login with administrator privileges. Lab value: sa
- sapr3 (1): the account previously used to connect the work processes to the DBMS; a login stored in the MS SQL Server database. Lab value: sapr3
- SQLAgentCMDExec (2): local account on the MS SQL servers. Lab value: SQLAgentCMDExec
- MTSImpersonators: local account on the MS SQL servers. Lab value: MTSImpersonators
Notes:
(1) The account sapr3 still exists in SAP R/3 Release 4.5x but is no longer used, as described in OSS note 0157372.
(2) See MS SQL Server Transact-SQL and Utilities Reference, Volume 2, in Microsoft TechNet for a description of the meaning of this account.
8.5 R3Setup
Sometimes the SAP installation program R3Setup shows unpredictable behavior, and errors like the one in Figure 78 can appear. The general strategy for facing any R3Setup error is to exit R3Setup and try again. If the problem persists, an error analysis is necessary.
The following terms are trademarks of other companies: C-bus is a trademark of Corollary, Inc. Java and HotJava are trademarks of Sun Microsystems, Incorporated. Microsoft, Windows, Windows NT, and the Windows 95 logo are trademarks or registered trademarks of Microsoft Corporation. PC Direct is a trademark of Ziff Communications Company and is used by IBM Corporation under license. Pentium, MMX, ProShare, LANDesk, and ActionMedia are trademarks or registered trademarks of Intel Corporation in the U.S. and other countries. SET and the SET logo are trademarks owned by SET Secure Electronic Transaction LLC. UNIX is a registered trademark in the United States and other countries licensed exclusively through X/Open Company Limited. Other company, product, and service names may be trademarks or service marks of others.
System/390 Redbooks Collection Networking and Systems Management Redbooks Collection Transaction Processing and Data Management Redbooks Collection Lotus Redbooks Collection Tivoli Redbooks Collection AS/400 Redbooks Collection Netfinity Hardware and Software Redbooks Collection RS/6000 Redbooks Collection (BkMgr Format) RS/6000 Redbooks Collection (PDF Format) Application Development Redbooks Collection IBM Enterprise Storage and Systems Management Solutions
B.3.1 Netfinity technology
Internet sites:
http://www.pc.ibm.com/netfinity
http://www.pc.ibm.com/support
IBM intranet sites (available within IBM only):
http://performance.raleigh.ibm.com/ (Netfinity performance Web site)
http://netfinity.sl.dfw.ibm.com/ (IBM ATS Web site)
http://argus.raleigh.ibm.com/ (Netfinity hardware development)
http://devtlab.greenock.uk.ibm.com/ (Greenock development)

B.3.2 Windows NT
http://www.microsoft.com/security
http://NTSecurity.ntadvice.com
http://www.trustedsystems.com
http://www.microsoft.com/hcl (Microsoft Hardware Compatibility List; select Cluster)

B.3.3 Microsoft Cluster Server
http://www.microsoft.com/ntserver/ntserverenterprise/

B.3.4 SAP
Internet sites:
http://www.sap.com (main SAP AG Web site)
http://www.sapnet.sap.com (technical Web site; an account is required to access this site)
http://www.sapnet.sap.com/r3docu (documentation)
http://www.sapnet.sap.com/technet (TechNet)
http://www.sapnet.sap.com/securityguide (security)
http://www.sapnet.sap.com/notes (OSS notes)
http://www.r3onnt.com/ (IXOS Web site containing certified platforms)
http://www.ibm.com/erp/sap (IBM-SAP alliance page)
http://www.microsoft.com/industry/erp/sap/ (Microsoft-SAP alliance)
http://www.microsoft.com/germany/sap/ (Microsoft-SAP Germany)
http://www.sapfaq.com/ (frequently asked questions Web site)
http://www.sap-professional.org/ (SAP professional organization)
http://www.saptechjournal.com/ (SAP Technical Journal online)
IBM intranet sites (accessible to IBM employees only):
http://w3.isicc.de.ibm.com/ (ISICC Web page, Germany)
http://w3.isicc.ibm.com/ (ISICC Web page)
B.3.5 Oracle
http://www.oracle.com/ (Oracle Web site)
http://technet.oracle.com/ (Oracle TechNet)

B.3.6 DB2
http://www.software.ibm.com/data/db2/udb/udb-nt/ (IBM DB2 on Windows NT)
http://www.software.ibm.com/data/partners/ae1partners/ (IBM DB2 partners)

B.3.7 SQL Server
http://www.microsoft.com/sql/ (Microsoft SQL Server Web site)
- R/3 Installation on Windows NT: DB2 Common Server, Release 3.5B, 51005502
- R/3 Installation on Windows NT: MS SQL Server, Release 3.5B, 51005503
- Conversion to Microsoft Cluster Server: IBM DB2 for NT, Release 3.5B, 51006418
- Conversion to Microsoft Cluster Server: Oracle, 4.0B 4.5A 4.5B, 51005504
- Conversion to Microsoft Cluster Server: MS SQL Server, Release 4.0B, 4.5A, 4.5B, 51005948
Security:
- R/3 Security Guide: Volume I. An Overview of R/3 Security Services
- R/3 Security Guide: Volume II. R/3 Security Services in Detail
- R/3 Security Guide: Volume III. Checklist
Networks:
- Network Integration of R/3 Frontends, 51006473
- Network Integration of R/3 Servers, 51006371
Tuning:
- Tuning SAP R/3 for Intel Pentium Pro Processor-based Servers Running Windows NT Server 3.51 and Oracle 7 Version 7.2, from http://www.intel.com/procs/servers/technical/SAP/281860.pdf
- SAP/Oracle/AIX Performance Tuning Tips, by John Oustalet and Walter Orb
- SAP R/3 Performance Tuning Guide for Microsoft SQL Server 7.0, from http://www.microsoft.com/SQL/productinfo/sapp.htm
1. Update your SOCKS.CNF file with the following entries:

# IBM internal network - without socks server
#-----------------------------------------------
direct 9.0.0.0 255.0.0.0
# SAP's DMZ - socks server siccfw1.isicc.ibm.com
#-----------------------------------------------
sockd @=9.165.214.110 147.204.0.0 255.255.0.0
# Internet - socks server socks.de.ibm.com
#-----------------------------------------------
sockd @=9.165.255.62 0.0.0.0 0.0.0.0

Updates to the SOCKS.CNF file are maintained by the ISICC team in Walldorf, Germany. The information above is current as of August 19, 1999. Send an e-mail to [email protected] if you have questions. The current ISICC SOCKS.CNF information can be found at either of the following URLs:
ftp://sicc980.isicc.de.ibm.com/perm/socks/os2/socks.conf
ftp://9.165.228.33/perm/socks/os2/socks.conf
2. Configure your browser for a manual proxy configuration with SOCKS server 9.165.214.110 and port 1080.
3. Make sure you do not have an FTP proxy server identified.
4. Use ftp://147.204.2.5 as the address to access SAPSERVx.
Note: You should reset your browser's proxy configuration to its original settings when you are finished with access to SAPSERVx.
0124141  Hot Package 40B08 (IPA-TEM)
0126985  Configuration of Ataman Remote Shell for DB2CS/NT
0128167  Service Pack 4 on NT MSCS with Oracle products
0132738  INFORMIX: Using SAPDBA in MSCS or distributed environment
0134073  4.5B R/3 Installation on Windows NT: MS SQL Server
0134135  4.5B R/3 Installation on Windows NT (General)
0134141  Conversion to a Microsoft Cluster Server 4.5B
0134159  4.5B R/3 Installation on Windows NT: DB2/CS
0138765  Migration to a Microsoft Cluster Server 4.5A
0140960  MSCS Installation R/3 3.x on MS SQL Server 7.0
0140990  NT MSCS: How to backup/recover the CLUSDB
0142731  DBCC Checks for SQL Server 7.0
0144310  Installing the NT SP4 on R/3 MSCS clusters
0146751  Converting MS SQL Server 6.5 to 7.0 in cluster
0151508  Resource Requirements for Release 4.6A
0154700  MSCS Cluster Verification Utility
0156363  MSCS: NET8 Configuration for Oracle
0166966  Printing in Microsoft Cluster Environment
- Dependencies unavailable in Properties tab
- Dependencies page empty when running Resource Wizard
- Error: Cluster resource dependency can not be found
- Resource Parameters tab is missing
- Resources go offline and online repeatedly
- Resource failover time
- Information about the cluster group
- Access violation in resource monitor (fixed with SP5)
Time service resource
Q174331  Error when adding second time service
Q174398  How to force time synchronization between MSCS nodes

Quorum resource
Q175664  Error creating dependency for quorum resource
Q172944  How to change quorum disk designation
Q172951  How to recover from a corrupted quorum log
Q225081  Cluster resources quorum size defaults to 64 KB
Q238173  Quorum checkpoint file may be corrupted at shutdown
Cluster disks
Q171052  Software FT sets are not supported in MSCS
Q175278  How to install additional drives on shared SCSI bus
Q175275  How to replace shared SCSI controller with MSCS
Q176970  Chkdsk /f does not run on the shared cluster disk
Q174797  How to run Chkdsk on a shared drive
Q196655  How to set up file auditing on cluster disk
Q189149  Disk counters on clustered disk record zero values
Q172968  Disk subsystem recovery documentation error
Q195636  Fibre Channel system loses SCSI reservation after multiple restarts (fixed with SP4)
Q193779  MSCS drive letters do not update using DiskAdmin (fixed with SP4)
Q215347  Cluster disk with more than 15 logical drives fails to go online (fixed with SP5)

Cluster networks: general, IP protocols
Q101746  TCP/IP Hosts file is case sensitive
Q158487  Browsing across subnets w/ a multihomed PDC in Windows NT 4.0
Q171390  Cluster service doesn't start when no domain controller available
Q171450  Possible RPC errors on cluster startup
Q168567  Clustering information on IP address failover
Q170771  Cluster may fail if IP address used from DHCP server
Q178273  MSCS documentation error: no DHCP server failover support
Q174956  WINS, DHCP, and DNS not supported for failover

Cluster networks: name resolution
Q195462  WINS registration and IP address behavior for MSCS
Q193890  Recommend WINS configuration for MSCS
Q217199  Static WINS entries cause the network name to go offline
Q183832  GetHostName() must support alternate computer names (fixed with SP4)
Q171320  How to change the IP address list order returned
Q164023  Applications calling GetHostByName() for the local host name may see the list of IP addresses in an order that does not match the binding order (fixed with SP4)

Cluster networks: network interfaces
Q174812  Effects of using autodetect setting on cluster NIC
Q201616  Network card detection in MSCS
Q175767  Behavior of multiple adapters on same network
Q176320  Impact of network adapter failure in a cluster
Q175141  Cluster service ignores network cards
Q174945  How to prevent MSCS from using specific networks
Q174794  How to change network priority in a cluster
Applications and services: general
Q171452  Using MSCS to create a virtual server
Q175276  Licensing policy implementation with MSCS
Q174837  Microsoft BackOffice applications supported by MSCS
Q188984  Office 97 not supported in a clustered environment
Q198893  Generic application: Effects of checking Use Network Name for Computer Name in MSCS
Q174070  Registry replication in MSCS
Q181491  MS Foundation Class GenericApp resources fail
Q224595  DCOM client cannot establish CIS session using TCP/IP address (fixed with SP5)
Q188652  Error replicating registry keys (fixed with SP4)
Q184008  SQL Server cluster setup may fail on third-party disk drives
Q176522  IIS Server instance error message with MSCS

Applications and services: Microsoft SQL Server
Q192708  Installation order for MSCS support for SQL Server V6.5 or MS Message Queue Server
Q187708  Cannot connect to SQL Virtual Server via sockets (fixed with SP4)
Q185806  SQL Server service stopped when IsAlive fails to connect (fixed with SQL Server SP5a (U.S.) for V6.5)
Q216674  Automatic SQL cluster failover does not work with WNT 4.0 SP4
Q195761  SQL Server 7.0 frequently asked questions: failover
Q219264  Order of installation for SQL Server 7.0 clustering setup
Q223258  How to install the WinNT Option Pack on MSCS with SQL Server 6.5 or 7.0
Q183672  How to upgrade a clustered MS MessageQueue SQL to SQL Enterprise Edition

Applications and services: Oracle Fail Safe
Q219303  Oracle Fail Safe does not function after SP4 installed (fixed with SP5)

Troubleshooting and debugging
Q168801  How to enable cluster logging in MSCS
Q216237  Cluster server will not start if cluster log directory is not created (fixed with SP5)
Q216240  Cluster log is overwritten when cluster server starts
Q216329  Cluster log filling with erroneous security descriptor information (fixed with SP5)
- How to use the -debug option for cluster service
- ClusterAdmin can connect to all NetBIOS names
- How to keep ClusterAdmin from reconnecting to a cluster
- Cluster node may fail to join cluster
- Restarting cluster service crashes services.exe (fixed with SP4)
- Services continue to run after shutdown initiated (fixed with SP4)
- Cluster server has Clusdb corruption after power outage (fixed with SP5)
Q219309  Disk error pop-up causes cluster service to stop (fixed with SP5)
Q233349  Cluster service issues event 1015 every four hours after applying SP5
This information was current at the time of publication, but is continually subject to change. The latest information may be found at the redbooks Web site.
IBM Intranet for Employees
IBM employees may register for information on workshops, residencies, and redbooks by accessing the IBM Intranet Web site at http://w3.itso.ibm.com/ and clicking the ITSO Mailing List button. Look in the Materials repository for workshops, presentations, papers, and Web pages developed and written by the ITSO technical professionals; click the Additional Materials button. Employees may access MyNews at http://w3.ibm.com/ for redbook, residency, and workshop announcements.
List of abbreviations
ABAP    Advanced Business Application Programming
ADSI    Active Directory Service Interfaces
ADSM    ADSTAR Distributed Storage Manager
API     application programming interface
ARCH    archiver
ASCII   American National Standard Code for Information Interchange
ATS     Advanced Technical Support
BIOS    basic input/output system
BLOB    binary large objects
BTC     batch
CCMS    SAP Computer Center Management System
CD-ROM  compact disk-read only memory
CI      central instance
CMT     IBM Center for Microsoft Technologies
CPU     central processing unit
DAT     digital audio tape
DB      database
DBA     database administrator
DBMS    database management system
DBWR    database writer
DHCP    Dynamic Host Configuration Protocol
DI      database instance
DIA     dialog
DLL     dynamic linked library
DLT     digital linear tape
DNS     domain name server
DTC     Distributed Transaction Coordinator
EDI     electronic data interchange
EE      Enterprise Edition
ENQ     enqueue
ERP     Enterprise Resource Planning
ESCON   enterprise systems connection
ESM     Environmental Services Monitor
FAQ     frequently asked questions
FC      Fibre Channel
FCAL    Fibre Channel Arbitrated Loop
FDDI    fiber distributed data interface
FI      financial accounting
FLA     Fabric Loop Attach
FTP     file transfer protocol
GB      gigabytes
GBIC    Gigabit Interface Converter
GL      general ledger
GMT     Greenwich mean time
GUI     graphical user interface
HACMP   high availability cluster multi-processing
HAGEO   High Availability Geographic Cluster
HCL     hardware compatibility list
HCT     hardware compatibility test
HDR     High-availability Data Replication
IBM     International Business Machines
ICMP    Internet control message protocol
ICSM    IBM Cluster Systems Management
IE      Internet Explorer
IIS     Internet Information Server
ISICC   IBM/SAP International Competency Center
LAN     local area network
LED     light emitting diode
LGWR    log writer
LUN     logical unit number
LVDS    low voltage differential signalling
MMC     Microsoft Management Console
MS      Microsoft
MSCS    Microsoft Cluster Server
MSDTC   Microsoft Distributed Transaction Coordinator
MSMQ    Microsoft Message Queue Server
MSSQL   Microsoft SQL Server
NNTP    NetNews transfer protocol
NTC     NT Competency Center
ODBC    open database connectivity
OEM     other equipment manufacture
OFS     Oracle Fail Safe
OLTP    online transaction processing
OPS     Oracle Parallel Server
OS      operating system
OSS     online service system
PA      Personnel Administration
PAE     Physical Address Extension
PDC     primary domain controller
PLDA    Private Loop Direct Attach
PSE     Page Size Extension
RAID    redundant array of independent disks
RAM     random access memory
RDAC    redundant disk array controller
RDBMS   relational database management system
RISC    reduced instruction set computer
RPM     revolutions per minute
SAPS    SAP Application Performance Standard
SCSI    small computer system interface
SD      sales and distribution
SGA     system global area
SID     system identification
SMS     Systems Management Server
SMTP    simple mail transfer protocol
SNA     systems network architecture
SNMP    simple network management protocol
SP      service pack
SPO     spooler
SQL     structured query language
SSA     serial storage architecture
TCP/IP  transmission control protocol/internet protocol
UDB     Universal Database
UPD     update
UPS     uninterruptable power supply
VHDCI   very high density connector interface
WHQL    Windows Hardware Quality Labs
WINS    Windows Internet Name Service
Index
Numerics
01K7296 70, 72
01K7297 69, 72
01K8017 66
01K8028 66
01K8029 66
03K9305 72
03K9306 72
03K9307 72
03K9308 72
03K9310 66
03K9311 66
35231RU 70, 72
35261RU 69, 72
4 GB tuning 99
76H5400 66
A
accounts 88 <sid>adm 108 DB2 177 Oracle 176 SAPService<SID> 108 SQL Server 178 Active Directory Services Interface 105 ADSM 35 AIX and NT solutions 16 asynchronous replication 11 Ataman 127 auto-sensing network adapters 81
B
backbone configuration 114 backbone network 77, 79 backup 33 alias definitions 35 MSCS open files 37 offline backups 37 scheduling 36 virtual names 36 backup copy of Windows NT 90 batch work processes 120 binding order 80, 100 BLOBs 12 block size 67 BOOT.INI 94, 99 browser service error 95 buffer analysis 125
C
CASTEST 107 CDINSTLOGOFF_NT_IND DB2 150 SQL Server 162, 165 CENTRDB.R3S 133
certification 42 categories 43 hardware components 43 HCL 39 IBM 42 iXOS 39, 42 Microsoft 8, 39, 44 SAP 8 CIHOSTNAME DB2 157 Oracle 144 SQL Server 167 CLUSDB files, backing up 37 cluster cluster conversion files Oracle 135 logging tool 102 Cluster Diagnostic Utility 96 Cluster Verification Utility 96 clustering 1 backups 33 cold standby 10 configurations 3 EXP15, use with 64 Fibre Channel 72 replicated database 11 replicated DBMS server 14 ServeRAID 66 shared disks 3 shared nothing 50 swing-disk 3 ClusterProven 42 cold standby 10 components 171 computer name 90 configuration 39
D
database reorganization 126 DATAFILEDRIVE SQL Server 163 DB13 126 DB2 accounts 177 active database logs 59 CAR files 149 CDINSTLOGOFF_NT_IND 150 CD-ROM copy 149 central instance installation 151 CIHOSTNAME 157 cluster conversion files 153 cluster, convert Node A to 154 database instance 151 DB2<SID> Service 157 DB2DB6EKEY 151, 155 DB2INSTANCE 150
DB2MSCS.CFG 152 DB2UE (user exit) 59 DBHOSTNAME 155 DFTDBPATH 151 disk layout 59 dropping the database 148 fixpack 147, 149 installation 145 INSTVERS 156 IPADDRESS 155 MSCS, migration to 156 NETWORKNAME 155 NETWORKTOUSE 155 Node A installation 146 Node B installation 148 NTCLUST.BAT 153 NTCMIGNA.R3S 150 package files 149 PORT 152 PROCESSES 152 R2_CONNECTION 152 R3Setup install 150 RAID levels 60 RAM_INSTANCE 152, 155 REPORT_NAMES 152 SAPDATA 151 SAPLOC 151, 156 SAPNTDOMAIN 151, 155 SAPSYSNR 151, 154, 157 SAPSYSTEMNAME 150, 154, 157 SAPTRANSHOST 151 security planning 88 services 174 SMSTEMP 152 user exit (DB2UE) 59 verify installation 114 worksheets central instance 151 cluster conversion files 153 convert to cluster 154 MSCS, migrating to 157 Node A installation 147 Node B installation 148 R3Setup on Node A 150 DBHOSTNAME DB2 155 SQL Server 166 DEFAULT.PFL 111 dependencies, implications of (in MSCS) 23 DFTDBPATH DB2 151 DHCP 81, 91 DHCP, use of (in MSCS) 24 dialog traffic 121 dialog work processes 119 disk layout 50, 52 DB2 59 hot spares 65 log files 50, 65 merge groups (ServeRAID) 51
Netfinity servers 54 operating system 50 Oracle 56, 58 Oracle Fail Safe Respository 57 page file 50, 51, 53, 54 performance 65 quorum 50 RAID levels 65 recommendation DB2 60 Oracle 58 SCSI 67 redo logs 59 SAP R/3 files 55 shared disks 55 size of disks 65 SQL Server 61 disk space required 41 disk subsystem quorum resource in MSCS 24 terminology, common or shared disk? DNS 7, 79, 91, 103 domain controller 7 domain name 91 domain requirements for MSCS 27
18
E
enqueue work processes 119 EXP15 62, 69 sample configurations 66, 72
F
failover See also MSCS Failover Cluster Wizard (SQL Server) fault tolerance 1 Fibre Channel 3, 39, 68 active/active mode 70, 76 cache 70 cache policy 59 clustering configurations 72 components 68 configurations 41 Ethernet connectors 70 EXP15 62, 69 failover 70 FailSafe RAID Controller 73 GBIC 71 hardware 68 host adapter 69 hub 70 logical units (LUNs) 70 long-wave 71 LUNs 75 LUNs, number of 75 maximum disks per LUN 75 nodes 71 performance 75 quorum 55 164
198
Implementing SAP R/3 4.5B Using Microsoft Cluster Server on IBM Netfinity Servers
RAID controller 69 RAID levels 75 RDAC driver 74 redundant configuration 74 redundant pair 70 RS-232 70 segment size 76 shared disks 73 short-wave 71 single points of failure 73 SYMplicity Storage Manager 74, 76 topology 68 write cache 76
G
GBIC 71 group See resource
H
HACMP 16 HAGEO 16 hardware configuration 39 HCL (hardware compatibility list) 39 heartbeat redundant network adapters, use of high availability 1 HOSTS file 78, 80, 94, 103
82
I
IBM Cluster System Manager See ICSM ICSM IBM Cluster System Manager failover, scheduled 27 Informix High-availability Data Replication 12 INIT.ORA 14 installation 85 4 GB tuning 99 ADSI 105 backbone configuration 114 BOOT.INI 94, 99 browser service error 95 cluster logging tool 102 cluster verification 117 Cluster Verification Utility 96 computer name 90 database installation 108 database verification 114 DB2 See DB2, installation DHCP 91 disk configuration 94 DNS 91, 103 domain controller 90 domain name 91 drivers, unnecessary 100 failover test 107
hardware verification utility 96 HOSTS file 94 installation check 110 internal disks 89 Internet Explorer 105 IP addresses 91 LMHOSTS 91 MMC 105 MSCS verification 105 Network Monitor 102 Oracle See Oracle, installation overview 85 page file size 98 PDC 90 PING tests 106 private network 91 protocols, unnecessary 100 R3DLLINS 105 sapinst account 107 SAPNTCHK 113 SAPService<SID> 108 SAPWNTCHK 113 security 86 Server service 97 Service Packs 94, 102 fixes 105 services, unnecessary 100 <sid>adm 108 SQL Server See SQL Server SYMplicity Storage Manager 95 tests 110 user account 107 verify the SAP install 110 verifying the installation 169 WINS 91, 103 installation HOSTS file 103 INSTVERS 156 interconnect MSCS 17 redundant network adapters, use of 82 Internet Explorer 105 IP address MSCS, potential problem 25 IP addresses 91 IPADDRESS DB2 155 Oracle 136 SQL Server 165 IPSHA.DLL 117 ISICC 49 iXOS 39, 42
K
KILL.EXE 118 Knowledge Base articles Q101746 94, 103 Q114841 52 Q158487 95
Q164023 Q168801 Q170771 Q171793 Q172944 Q172951 Q174812 Q175767 Q176320 Q193890 Q195462 Q217199
80, 106 102 24 100 37 37 82 82 82 94, 103 94, 103 94, 103
L
LMHOSTS 91 LMHOSTS file 79 load balancing 2, 83 log files 50, 65 MSCS 170 SAP R/3 172 log-based replication 11 LOGFILEDRIVE SQL Server 163 logical drives 65 LUNs adding 76 maximum disks 75 number of 75 LVDS 64, 72
M
memory management 121
message work processes 119
Microsoft Cluster Server 3
   backup of open files 37
   certified hardware 8, 39
   checklist for installation 40
   CLUSRES.DLL 105
   groups
      SAP-R/3 <SAPSID> 32
      SQL <SAPSID> 32
   HCL 39
   log files 170
   Oracle Parallel Server, not compatible with 15
   quorum 55
   Service Packs 41
   verification 105
   worksheet 101
Microsoft HCL 39
Microsoft Management Console 105
Microsoft SQL Server Replication 12
MSCS
   See also Microsoft Cluster Server
   application failure, support for 17
   dependencies between resources 19
   DHCP, use of 24
   domain requirements 27
   failback 17, 29
   failback policy 29
   failover 27
   failover example 28
   failover properties for resources and groups 27
   failover, phases of 27
   failover, smallest unit of 22
   hardware configuration 17
   importance of 17
   IP address, potential problem 25
   IPX, use of 25
   IsAlive 29
   load-balancing 29
   LooksAlive 29
   managing with ICSM 27
   NetBEUI, use of 25
   nodes, number supported in a cluster 17
   Oracle FailSafe 25
   preferred owner 29
   quorum resource 24
   resource group states 23
   resource groups 22
   resource hierarchy 20
   Resource Monitor 19, 29
   resource states 22
   resource types 20
   resources 18
   SAP R/3 25
   TCP/IP, role of 24
   virtual servers 23
multiplatform solutions 16
N
Netfinity Cluster Enabler 68
network configuration 76
   auto-sensing adapters 81
   backbone 77, 79
   binding order 80, 100
   DHCP 81
   DNS 79
   heartbeat 78
   HOSTS file 78, 80
   interconnect 78
   load balancing 83
   multiple adapters 80
   name resolution 79
   PING errors with multiple adapters 80
   private network 78
   public network 77
   redundancy 81
   server names 78
   TCP/IP addresses 78
   VLAN 77
   WINS 79
Network Monitor 102
NETWORKNAME
   DB2 155
   Oracle 136
   SQL Server 165
NETWORKTOUSE
   DB2 155
   Oracle 136
   SQL Server 165
NLS_CHARACTERSET
   Oracle 134, 137
NTCLUSCD.R3S 135
NTCMIGNA.R3S 136
NTCMIGNB.R3S 139
O
offline backups 37
Oracle
   accounts 176
   ARCH 56
   central instance 133
   CIHOSTNAME 144
   cluster conversion files 135
   convert to cluster 136
   DBHOSTNAME 137
   DBWR 57
   installation 129
   Instance Manager 138, 140
   IPADDRESS 136
   LGWR 56
   LISTENER.ORA 143
   migration to MSCS, completing 143
   NETWORKNAME 136
   NETWORKTOUSE 136
   NLS_CHARACTERSET 134, 137
   Node A installation 130
   Node B installation 131
   Oracle Fail Safe 131
      converting to 139
      group ORACLE<SID> 141
      installing 131
      Is Alive interval 142
      OFS Repository 57
      patch 2.1.3.1 132
      pending timeout 142
   OracleTNSListener80 service 132
   OSS notes 129
   patch 8.0.5.1.1 131
   PORT 135, 137
   PROCESSES 135
   R2_CONNECTION 135, 138
   R3Setup installation 133
   RAM_INSTANCE 134, 137
   redo logs 56, 57
   SAPDATA 134, 137
   SAPDATA_HOME 134
   SAPDBHOST 142
   SAPLOC 134, 137
   SAPNTDOMAIN 134, 137
   SAPSYSNR 134, 137, 144
   SAPSYSTEMNAME 133, 144
   SAPTRANSHOST 134
   security planning 88
   services 173
   tablespaces 57
   TNSNAMES.ORA 143
   verify installation 114
   worksheets
      central instance 133
      cluster conversion 136
      cluster conversion files 135, 136
      MSCS, migration to 144
      OFS group 141
      Oracle Fail Safe 132
      Oracle install 130
      Oracle patch 131
      R3Setup 133
Oracle Fail Safe
   converting to 139
   Is Alive interval 142
   OFS Repository 75
   pending timeout 142
   verify installation 114
   worksheet 132
Oracle Parallel Server 14, 68
   MSCS, not compatible with 15
Oracle Standby Database 12
Oracle Symmetric Replication 12
OS/390 and NT solutions 16
P
PAE 53
page file 50, 51, 53, 54, 98, 121
PCI scan order 67
PDC 90
PING errors with multiple adapters 80
PING tests 106
PORT
   DB2 152
   Oracle 135, 137
   SQL Server 163
private network 78, 91
PROCESSES
   DB2 152
   Oracle 135
   SQL Server 163
PSE 53
public network 77
Q
QuickSizer 50
quorum 50, 55, 65
   SCSI heartbeat cable 66
quorum resource
   MSCS 24
R
R2_CONNECTION
   DB2 152
   Oracle 135
   SQL Server 166
R3DLLINS 105
R3Setup
   DB2
      central instance 151
      cluster conversion files 153
      convert to cluster 154
      installation 150
      migration to MSCS 166
   Oracle
      central instance 133
      cluster conversion files 135
      convert to cluster 136
      install 133
      migration to MSCS, completing 143
      worksheet 133
   SQL Server 161
      cluster conversion tool 164
      SAP cluster conversion 165
RAID-1, use of 51
RAM_INSTANCE
   DB2 152, 155
   Oracle 134, 137
   SQL Server 163, 166
ratio of work processes 120
redo logs 13, 56
redundancy
   server components 17
redundant network adapters 82
   use on interconnect 82
redundant network path 81
remote shell 127
replicated database 11
   asynchronous 11
   BLOBs 12
   failover 14
   issues 11, 13
   levels 12
   log-based 11
   products 12
   redo logs 13
   replicated DBMS server, compared with 15
   SAP R/3, use with 12
   standby databases 12
   statement-based 11
   synchronous 11
replicated DBMS server 14
   failover 14
   issues 15
   Oracle Parallel Server 14
   redo logs 14
   replicated database, compared with 15
Replicated Standby Database for DB2/CS 12
REPORT_NAMES
   DB2 152
   SQL Server 163
resource
   See MSCS
RMPARAMS file 76
RZ10 111
S
S/390 and NT solutions 16
SAP QuickSizer 50
SAP R/3
   availability features 5
   buffer analysis 125
   certified hardware 8, 39
   checklist for installation 40
   cluster verification 117
   configurations supported 8
   connection test 110
   database reorganization 126
   dialog traffic 121
   disk space required 41
   features, availability 5
   hardware certification 39
   hardware minimums 40
   installation 85
   installation check 110
   log files 172
   MSCS, use with 7
   multiple instances 5
   network configuration 76
   process restart 5
   profiles 111
   quorum 65
   reconnect 5
   remote shell 127
   restart of processes 5
   security 86
   single points of failure 6
   sizing 46
   system log 110
   tests 110
   tuning 119, 122
      publications 126
      ServeRAID 67
   update traffic 121
   verify cluster operation 117
   verify the installation 110
   work processes 119
SAPCISYSNR
   SQL Server 162, 165
SAPDATA
   DB2 151
   Oracle 134, 137
SAPDATA_HOME
   Oracle 134
SAPLOC
   DB2 151, 156
   Oracle 134
   SQL Server 165
SAPLOGON 120
SAPNTCHK 113
SAPNTDOMAIN
   DB2 151, 155
   Oracle 134, 137
   SQL Server 162, 166
SAP-R/3 <SAPSID> cluster group 32
SAPS 46
SAPService<SID> 108
SAPSYSNR
   DB2 151, 154, 157
   Oracle 134, 144
SAPSYSTEMNAME
   DB2 150, 151, 154, 157
   Oracle 133, 144
   SQL Server 162, 165
SAPTRANSHOST 104
   DB2 151
   Oracle 134
   SQL Server 162
SAPWNTCHK 113
scan order 67
scheduling backups 36
SCSI recommendations 67
security 86
segment size 76
Server service 97
ServeRAID 3, 62
   battery backup cache 67
   cache policy 59, 67
   clustering 66
   configurations 39, 62
   EXP15 62
   failover 65
   hot spares 65
   logical drives 65
   LVDS 64
   merge groups 51
   quorum 55
   RAID levels 59
   read ahead 67
   recommendations 67
   scan order 67
   stripe size 67
   tuning recommendations 67
   Ultra2 SCSI 64
ServerProven 42
Service Packs 41
services
   DB2 174
   Oracle 173
   SQL Server 175
shadow database
   See replicated database
shared disks 55
shared nothing 50
<SID> 50
<sid>adm 108
single points of failure 6
sizing 46
   methodology 47, 48
   Netfinity servers 54
   SAPS 46
   tools 50
   transaction-based sizing 48
   user-based sizing 48
SM21 110
SM28 110
SM51 112
SMSTEMP
   DB2 152
SQL <SAPSID> cluster group 32
SQL Server 159
   accounts 178
   CDINSTLOGOFF_NT_IND 162, 165
   central instance 162
   CIHOSTNAME 167
   client component 61
   cluster conversion tool 164
   DATAFILEDRIVE 163
   DBHOSTNAME 166
   devices 61
   disk layout 61
   Failover Cluster Wizard 164
   installation 159, 160
   IPADDRESS 165
   LOGFILEDRIVE 163
   MSCS, migration to 166
   NETWORKNAME 165
   NETWORKTOUSE 165
   Node A 160
   PORT 163
   PROCESSES 163
   R2_CONNECTION 166
   R3Setup install 161
   RAID, use of 61
   RAM_INSTANCE 163, 166
   REPORT_NAMES 163
   SAP cluster conversion 165
   SAP gateway 163
   SAP service 164
   SAPCISYSNR 162, 165
   SAPLOC 165
   SAPNTDOMAIN 162, 166
   SAPOSCOL service 164
   SAPSYSTEMNAME 162, 165
   SAPTRANSHOST 162
   Service Pack 161
   services 164, 175
   TEMPDATAFILE 163
   verify installation 114
   verify the install 161
   worksheets
      central instance 162
      cluster conversion tool 165
      Failover Cluster Wizard 164
      migration to MSCS 167
      R3Setup install 162
      SAP cluster conversion 165
      SQL Server 7 160
SQL1390C (error message) 153
ST02 125
ST04 123
standby database 12
standby server 16
statement-based replication 11
stripe size
   Fibre Channel 76
   ServeRAID 67
swing-disk 3
SYMplicity Storage Manager 74, 76, 95
   RMPARAMS file 76
synchronous replication 11
T
tape drives 34
TCP/IP
   DHCP in MSCS configurations 24
   MSCS 24
TCP/IP addresses 78
TEMPDATAFILE
   SQL Server 163
transaction codes
   DB13 37, 126
   RZ10 111
   SM21 110
   SM28 110
   SM51 112
   ST02 125
   ST04 123
transport host 104
troubleshooting 169
tuning 67, 119
   block size 67
   database tuning 127
   Fibre Channel 76
   page file size 98
   recommendations 67
   stripe size 67
   Windows NT 96
U
Ultra2 SCSI 64
update traffic 121
update work processes 119
UPDINSTV.R3S 143, 144, 156
UPS 39
V
verification 117
verifying the installation 169
VHDCI cables 66
Vinca StandbyServer 9
virtual names for backup 36
virtual servers
   MSCS 23
VLAN 77
W
what the user sees 4
Windows 2000
   certification 45
   PAE memory addressing 53
Windows NT
   4 GB tuning 99
   accounts 88
   backup copy 53, 90
   drive letters 55
   Event Viewer errors 98
   Network Monitor 102
   NT Resource Kit 96
   page file 50, 51, 53, 54, 98, 121
   security 87
   Server service 97
   Service Packs 41, 94, 102
      fixes 105
   tuning 96, 119
Windows Load Balancing Service 2
Windows NT Server
   UNIX, as an alternative to 17
WINS 7, 79, 91, 103
work processes 119
worksheets
   DB2
      central instance 151
      cluster conversion files 153
      convert to cluster 154
      MSCS, migrating to 157
      Node A installation 147
      Node B installation 148
      R3Setup on Node A 150
   general 90
   Microsoft Cluster Server 101
   Node A 90
   Node B 92
   Oracle
      central instance 133
      cluster conversion 136
      cluster conversion files 135, 136
      installation 130
      MSCS, migration to 144
      Oracle Fail Safe 132
      Oracle Fail Safe group 141
      patch 131
      R3Setup 133
   SQL Server
      central instance 162
      cluster conversion tool 165
      Failover Cluster Wizard 164
      migration to MSCS 167
      R3Setup install 162
      SAP cluster conversion 165
      SQL Server 7 install 160
SG24-5170-01