ibm.com/redbooks
Front cover
DB2 9 for z/OS:
Distributed Functions
Paolo Bruni
Nisanti Mohanraj
Cristian Molaro
Yasuhiro Ohmori
Mark Rader
Rajesh Ramachandran
Establish connectivity to and from DB2 systems
Balance transaction workload across data sharing members
Explore the functions of Data Server Drivers and Clients
International Technical Support Organization
DB2 9 for z/OS: Distributed Functions
July 2009
SG24-6952-01
Copyright International Business Machines Corporation 2009. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
First Edition (July 2009)
This edition applies to Version 9.1 of IBM DB2 for z/OS (program number 5635-DB2).
Note: Before using this information and the product it supports, read the information in Notices on
page xxiii.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
The team that wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxviii
Summary of changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix
July 2009, First Edition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix
September 2009, First Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix
May 2011, Second Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix
Part 1. Distributed database architecture and configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 1. Architecture of DB2 distributed systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 Before you start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Distributed data access topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 Remote request . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 Remote unit of work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 Distributed unit of work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.4 Distributed request . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 DRDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.1 Functions and protocols of DRDA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.2 Conversations between the AR and AS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.3 Building blocks of DRDA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.4 The DRDA process model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Implementation of DRDA in the DB2 family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.1 DB2 for z/OS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.2 DB2 for i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.3 DB2 Server for VSE and VM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.4 IBM Informix Dynamic Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.5 DB2 for Linux, UNIX and Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.6 IBM Data Server Driver for ODBC and CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.7 IBM Data Server Driver for JDBC and SQLJ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.8 IBM Data Server Driver Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4.9 pureQuery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5 Implementation of DRDA by non-IBM products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6 DB2 for z/OS Distributed Data Facility architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.1 What DDF is . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.2 Distributed configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.3 DDF implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.6.4 Network protocols used by DDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.7 Connection pooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.7.1 Inactive connection support in a DB2 for z/OS server . . . . . . . . . . . . . . . . . . . . . 22
1.7.2 Connection pooling using the IBM Data Server Drivers . . . . . . . . . . . . . . . . . . . . 24
1.7.3 Transaction pooling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.8 Federated data support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.8.1 IBM InfoSphere Federation Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.8.2 IBM InfoSphere Classic Federation Server for z/OS. . . . . . . . . . . . . . . . . . . . . . . 29
Chapter 2. Distributed database configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.1 DB2 for z/OS both as a requester and a server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.1.1 Basic configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.1.2 Parallel sysplex environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2 DB2 for LUW and DB2 for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.2.1 DB2 for LUW ESE as requester to DB2 for z/OS server. . . . . . . . . . . . . . . . . . . . 37
2.2.2 DB2 for z/OS as requester to DB2 for LUW as server . . . . . . . . . . . . . . . . . . . . . 37
2.2.3 DB2 for z/OS as an intermediate server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.2.4 DB2 for z/OS as requester to a federation server . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.3 IBM Data Server Drivers and Clients as requesters . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3.1 DB2 distributed clients: Historical view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.3.2 IBM Data Server Drivers and Clients overview. . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.3.3 Connecting to a DB2 data sharing group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.3.4 Choosing the right configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.3.5 Ordering the IBM Data Server Drivers and Clients . . . . . . . . . . . . . . . . . . . . . . . . 50
2.4 DB2 Connect to DB2 for z/OS: Past, present, and future . . . . . . . . . . . . . . . . . . . . . . . 51
2.4.1 DB2 Connect Client as requester . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.4.2 DB2 Connect Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.4.3 Connecting to a DB2 data sharing group from DB2 Connect . . . . . . . . . . . . . . . . 55
2.4.4 DB2 Connect: A case of managing access to DB2 threads . . . . . . . . . . . . . . . . . 56
2.5 DB2 Connect Server on Linux on IBM System z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.5.1 DB2 Connect on Linux on z with HiperSockets. . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.5.2 HiperSockets and DB2 data sharing configurations . . . . . . . . . . . . . . . . . . . . . . . 61
2.6 DB2 for z/OS requester: Any (DB2) DRDA server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.7 XA Support in DB2 for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Part 2. Setup and configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Chapter 3. Installation and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.1 TCP/IP setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.1.1 UNIX System Services setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.1.2 Language Environment considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.1.3 Basic TCP/IP setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.1.4 TCP/IP settings in a data sharing environment. . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.1.5 Sample DB2 data sharing DVIPA and Sysplex Distributor setup . . . . . . . . . . . . . 79
3.1.6 Starting DDF with TCP/IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.2 DB2 system configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.2.1 Defining the shared memory object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.2.2 Configuring the Communications Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.2.3 DB2 installation parameters (DSNZPARM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.2.4 Updating the BSDS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.2.5 DDF address space setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.2.6 Stored procedures and support for JDBC and SQLJ . . . . . . . . . . . . . . . . . . . . . 104
3.3 Workload Manager setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.3.1 Enclaves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.3.2 Managing DDF work with WLM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.4 DB2 for LUW to DB2 for z/OS setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.4.1 IBM Data Server Drivers and Clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.4.2 DB2 Connect. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3.5 DRDA sample setup: From DB2 for z/OS requester to DB2
for LUW on AIX server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
3.6 Character conversion: Unicode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
3.7 Restrictions on the use of local datetime formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
3.8 HiperSockets: Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Chapter 4. Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.1 Guidelines for basic DRDA security setup over TCP/IP . . . . . . . . . . . . . . . . . . . . . . . 130
4.1.1 Security options supported by DRDA access to DB2 for z/OS. . . . . . . . . . . . . . 130
4.1.2 Authorization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.1.3 Important considerations when setting security-related DSNZPARMs . . . . . . . . 132
4.1.4 Recommendation for tightest security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.2 DRDA security requirements for an application server . . . . . . . . . . . . . . . . . . . . . . . . 140
4.2.1 Characteristics of a typical application server security model. . . . . . . . . . . . . . . 140
4.2.2 Considerations for DRDA security behind the application server . . . . . . . . . . . . 141
4.2.3 Identifying a client user coming from the application server . . . . . . . . . . . . . . . . 142
4.2.4 Network trusted context and roles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.3 Encryption options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.3.1 DRDA encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.3.2 IP Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.3.3 Secure Socket Layer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
4.3.4 DataPower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
4.4 Addressing dynamic SQL security concerns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
4.4.1 Using DYNAMICRULES(BIND) to avoid granting table privileges . . . . . . . . . . . 173
4.4.2 Using stored procedures for static SQL security benefits . . . . . . . . . . . . . . . . . . 175
4.4.3 Static SQL options of JDBC to realize static SQL security benefits . . . . . . . . . . 176
4.4.4 Static execution of dynamic SQL to benefit from static SQL security . . . . . . . . . 176
Part 3. Distributed applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Chapter 5. Application programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.1 Accessing data on a DB2 for z/OS server from a DB2 for z/OS requester . . . . . . . . . 186
5.1.1 System-directed versus application-directed access . . . . . . . . . . . . . . . . . . . . . 186
5.1.2 Program preparation when DB2 for z/OS is the AR . . . . . . . . . . . . . . . . . . . . . . 190
5.1.3 Using DB2 for z/OS as a requester going outbound to a non-DB2 for
z/OS server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.2 Migrating from DB2 private protocol to DRDA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
5.2.1 DB2 performance trace to show private protocol use . . . . . . . . . . . . . . . . . . . . . 195
5.2.2 The PRIVATE to DRDA REXX migration tool: DSNTP2DP . . . . . . . . . . . . . . . . 195
5.3 Program preparation steps when using non-DB2 for z/OS Requesters . . . . . . . . . . . 200
5.3.1 Connecting and binding packages from DB2 CLP . . . . . . . . . . . . . . . . . . . . . . . 200
5.3.2 Using the DB2Binder utility to bind packages used by the Data
Server Drivers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5.4 Using the non-Java-based IBM Data Server Drivers . . . . . . . . . . . . . . . . . . . . . . . . . 203
5.4.1 Using the IBM Data Server Driver for ODBC and CLI. . . . . . . . . . . . . . . . . . . . . 203
5.4.2 Using the IBM Data Server Driver Package in a .NET environment . . . . . . . . . . 204
5.4.3 db2cli.ini and db2dsdriver.cfg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.5 Using the IBM Data Server Driver for JDBC and SQLJ . . . . . . . . . . . . . . . . . . . . . . . 207
5.5.1 Connecting to a DB2 for z/OS server using the Type 4 driver . . . . . . . . . . . . . . 208
5.5.2 Coding static applications using SQLJ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.6 Developing static applications using pureQuery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.6.1 When should you use pureQuery? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.6.2 pureQuery programming styles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5.6.3 pureQuery client optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5.7 Remote application development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.7.1 Limited block fetch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.7.2 Multi-row FETCH. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
5.7.3 Understanding the differences between limited block FETCH and
multi-row FETCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.7.4 Fast implicit CLOSE and COMMIT of cursors. . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.7.5 Multi-row INSERT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5.7.6 Multi-row MERGE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.7.7 Heterogeneous batch updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.7.8 Progressive streaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.7.9 SQL Interrupts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5.7.10 Remote external stored procedures and native SQL procedures. . . . . . . . . . . 224
5.8 XA transactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.8.1 Using the Type 4 driver to enable direct XA transactions . . . . . . . . . . . . . . . . . . 226
5.8.2 Using the IBM non-Java-based Data Server Drivers/Clients to enable
direct XA transactions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.9 Remote application recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Chapter 6. Data sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
6.1 High availability aspects of DRDA access to DB2 for z/OS . . . . . . . . . . . . . . . . . . . . 234
6.1.1 Key components for DRDA high availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
6.1.2 z/OS WLM in DRDA workload balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
6.1.3 The sysplex awareness of clients and drivers. . . . . . . . . . . . . . . . . . . . . . . . . . . 239
6.1.4 Network resilience using Virtual IP Addressing. . . . . . . . . . . . . . . . . . . . . . . . . . 242
6.1.5 Advanced high availability for DB2 for z/OS data sharing. . . . . . . . . . . . . . . . . . 243
6.1.6 Scenario with Q-Replication for high availability . . . . . . . . . . . . . . . . . . . . . . . . . 246
6.2 Recommendations for common deployment scenarios . . . . . . . . . . . . . . . . . . . . . . . 247
6.2.1 DB2 data sharing subsetting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
6.2.2 Application Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
6.2.3 Distributed three-tier DRDA clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
6.3 DB2 failover scenario with and without Sysplex Distributor . . . . . . . . . . . . . . . . . . . . 259
6.3.1 Configuration for scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
6.3.2 Application states for scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
6.3.3 Results without Sysplex Distributor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
6.3.4 Results with Sysplex Distributor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
6.4 Migration and coexistence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Part 4. Performance and problem determination. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Chapter 7. Performance analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
7.1 Application flow in distributed environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
7.2 System topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
7.2.1 Database Access Threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
7.2.2 Accumulation of DDF accounting records. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
7.2.3 zIIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7.2.4 Using RMF to monitor distributed data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
7.3 Checking settings in a distributed environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
7.3.1 db2set: DB2 profile registry command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
7.3.2 db2 get dbm cfg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
7.3.3 db2 get cli configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
7.3.4 db2pd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
7.3.5 Getting database connection information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
7.3.6 Getting online help for db2 commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
7.3.7 Other useful sources of information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
7.4 Obtaining information about the host configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . 307
7.4.1 Verification of currently active DSNZPARMs . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
7.4.2 SYSPLAN and SYSPACKAGES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
7.4.3 Resource Limit Facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
7.4.4 GET_CONFIG and GET_SYSTEM_INFO stored procedures . . . . . . . . . . . . . . 312
7.4.5 DB2 commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Chapter 8. Problem determination. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
8.1 Traces at DB2 Client and DB2 Gateway. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
8.1.1 The n-tier message communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
8.1.2 CLI traces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
8.1.3 DRDA traces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
8.1.4 JDBC traces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
8.2 Accounting for distributed data with the EXCSQLSET command. . . . . . . . . . . . . . . . 343
8.2.1 TCP/IP packet tracing on z/OS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
8.3 Network analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
8.4 DB2 for z/OS tracing capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
8.4.1 Collecting traces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
8.4.2 Using OMEGAMON PE for reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
8.4.3 IFCIDs for DRDA problem determination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
8.4.4 WLM classification rules and accounting information . . . . . . . . . . . . . . . . . . . . . 387
8.4.5 Step-by-step performance analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
Part 5. Appendixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Appendix A. DRDA-related maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
A.1 Recent DRDA APARs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
Appendix B. Configurations and workload. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
B.1 Configurations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
B.2 The TRADE workload. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
B.2.1 IBM Trade performance benchmark sample for WebSphere Application
Server V6.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
B.2.2 TRADE installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
B.3 Using Trade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Appendix C. Sample applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
C.1 Sample Java program to call a remote native SQL procedure. . . . . . . . . . . . . . . . . . 420
C.1.1 Using the Type 4 driver to call a native SQL procedure (BSQLAlone.Java) . . . 420
C.2 XA transaction samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
C.2.1 createRegisterXADS.java. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
C.2.2 Test application for XA transaction (XATest.Java) . . . . . . . . . . . . . . . . . . . . . . . 426
C.3 Progressive streaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
C.3.1 Progressive streaming: XMLTest_RedJava . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Appendix D. Sample programs for performance analysis . . . . . . . . . . . . . . . . . . . . . 443
D.1 Stress tests script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
D.2 REXX parser of GTF trace for IFCID 180. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
How to get Redbooks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Figures
1-1 Unit of work concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1-2 Two-phase commit protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1-3 Remote request . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1-4 Remote unit of work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1-5 Distributed unit of work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1-6 Distributed request . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1-7 DRDA network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1-8 Functions and protocols used by DRDA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1-9 DRDA process model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1-10 Connecting to DDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1-11 Address spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1-12 DDF - Using shared private storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1-13 Communication using TCP/IP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1-14 Communication using SNA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1-15 Inactive connection support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1-16 Connection pooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1-17 Transaction pooling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1-18 Sysplex workload balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1-19 InfoSphere Federation Server products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1-20 InfoSphere Federation Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1-21 Classic Federation Server for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1-22 Supported data sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2-1 Connecting from DB2 for z/OS to DB2 for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2-2 Connecting to a DB2 data sharing group from a DB2 for z/OS system . . . . . . . . . . . . 36
2-3 DB2 for LUW ESE requester connecting to DB2 for z/OS. . . . . . . . . . . . . . . . . . . . . . . 37
2-4 DB2 for z/OS requester connecting to DB2 for LUW ESE or DB2 for LUW WSE . . . . 38
2-5 DB2A as an intermediate server between a requester and a server . . . . . . . . . . . . . . 38
2-6 DB2 for z/OS as requester in a federation server, unprotected update scenario . . . . . 39
2-7 IBM Data Server Driver for JDBC and SQLJ connecting directly to DB2 for z/OS . . . . 42
2-8 IBM Data Server Driver for ODBC and CLI connecting directly to DB2 for z/OS . . . . . 43
2-9 IBM Data Server Driver Package connecting directly to DB2 for z/OS. . . . . . . . . . . . . 44
2-10 IBM Data Server Runtime Client connecting directly to DB2 for z/OS . . . . . . . . . . . . 44
2-11 IBM Data Server Client connecting directly to DB2 for z/OS . . . . . . . . . . . . . . . . . . . 45
2-12 IBM Data Server Drivers connecting to a DB2 data sharing group. . . . . . . . . . . . . . . 46
2-13 Current configuration with DB2 Connect Client and Server . . . . . . . . . . . . . . . . . . . . 49
2-14 DB2 Connect Client and Server replaced with IBM Data Server Drivers . . . . . . . . . . 50
2-15 DB2 Connect Client connecting to DB2 for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2-16 DB2 Connect Server providing access to DB2 for z/OS. . . . . . . . . . . . . . . . . . . . . . . 53
2-17 Example of DB2 Connect Server providing connection concentration . . . . . . . . . . . . 54
2-18 DB2 Connect Server in a Web application server environment . . . . . . . . . . . . . . . . . 54
2-19 DB2 Connect Client connecting to a DB2 data sharing group . . . . . . . . . . . . . . . . . . 55
2-20 DB2 Connect Server providing access to a DB2 data sharing group . . . . . . . . . . . . . 56
2-21 Controlling DB2 threads with a DB2 Connect Server . . . . . . . . . . . . . . . . . . . . . . . . . 58
2-22 HiperSocket: Example of multiple LPAR communication . . . . . . . . . . . . . . . . . . . . . . 59
2-23 Server consolidation and HiperSockets and Linux on System z for DB2 Connect . . . 60
2-24 HiperSocket: DB2 Connect using HiperSocket to communicate with DB2 for z/OS. . 61
2-25 HiperSockets in a data sharing environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
2-26 XA transaction support with DB2 Connect or WebSphere Application Server . . . . . . 64
2-27 XA transaction support without DB2 Connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2-28 XA transaction support in a DB2 data sharing environment . . . . . . . . . . . . . . . . . . . . 65
3-1 Output of D9C1DIST started task as seen from SDSF. . . . . . . . . . . . . . . . . . . . . . . . . 70
3-2 Output of LISTUSER STC OMVS command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3-3 JCL of TCP/IP job, from SDSF display, showing high level qualifier for TCP/IP. . . . . . 74
3-4 Starting point for TCP.HOSTS.LOCAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3-5 VipaDynamic statements including Sysplex Distributor definition for SC70 . . . . . . . . . 78
3-6 VipaDynamic statements with backup SD for SC64 . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3-7 VipaDynamic statements with backup SD for SC63 . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3-8 BSDS specification with member-specific DVIPA and group DVIPA. . . . . . . . . . . . . . . 79
3-9 DB2 for z/OS V8: Port statements binding a specific IP address to the DB2 ports. . . . 79
3-10 Output of D TCPIP,,N,CONN command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3-11 Contents of /SC63/etc/hosts including DVIPA addresses. . . . . . . . . . . . . . . . . . . . . . 81
3-12 Output of D TCPIP,,NETSTAT,HOME command . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3-13 Output of D TCPIP,,N,CONN command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3-14 Example of DISPLAY DDF from DB9A standalone DB2 . . . . . . . . . . . . . . . . . . . . . . 84
3-15 Example of DISPLAY DDF from D9C1 data sharing member . . . . . . . . . . . . . . . . . . 84
3-16 Example of DISPLAY DDF from D9C2 data sharing member . . . . . . . . . . . . . . . . . . 84
3-17 Example of DISPLAY DDF from D9C3 data sharing member . . . . . . . . . . . . . . . . . . 85
3-18 Results of DISPLAY VIRTSTOR,HVSHARE showing default definition. . . . . . . . . . . 86
3-19 DSNTIPE panel specifying MAXDBAT and CONDBAT values . . . . . . . . . . . . . . . . . 91
3-20 DSNTIPR panel showing DDF values for subsystem DB9A. . . . . . . . . . . . . . . . . . . . 93
3-21 DSNTIP5 panel showing values for subsystem DB9A . . . . . . . . . . . . . . . . . . . . . . . . 95
3-22 DSNJU003 for DB9A BSDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3-23 DB9A DSNJU004 output with DDF values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3-24 DSNJU003 for D9C1 BSDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3-25 D9C1 DSNJU004 output for member D9C1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3-26 DSNJU003 input to add DVIPA and Group DVIPA to the BSDS for D9C1. . . . . . . . 100
3-27 DSNJU003 input to add DVIPA and Group DVIPA to the BSDS for D9C2. . . . . . . . 100
3-28 DSNJU003 input to add DVIPA and Group DVIPA to the BSDS for D9C3. . . . . . . . 100
3-29 DSNJU004 output for member D9C1 with DVIPA specified . . . . . . . . . . . . . . . . . . . 101
3-30 Output from -D9C1 DISPLAY DDF showing DVIPA specifications. . . . . . . . . . . . . . 101
3-31 Output from -D9C1 DISPLAY DDF showing DNS support for the group . . . . . . . . . 102
3-32 DSNJU003 input to add ALIAS definitions to D9C1 . . . . . . . . . . . . . . . . . . . . . . . . . 102
3-33 DSNJU003 input to add ALIAS definition to D9C2 . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3-34 DSNJU004 output showing LOCATION ALIAS and DVIPA with IPV4 . . . . . . . . . . . 103
3-35 JCL for D9C1DIST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3-36 WLM: Choosing Classification Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3-37 WLM: Subsystem Type Selection: Choosing classification rules for started tasks . . 107
3-38 WLM: STC service classes and report classes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3-39 WLM: Transaction Name Group (TNG) for all DB2s in our sysplex . . . . . . . . . . . . . 108
3-40 WLM: STCHI service class goal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3-41 SDSF display showing service classes for D9C1 address spaces . . . . . . . . . . . . . . 109
3-42 WLM: DDFONL service class goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
3-43 WLM: DDFDEF service class goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
3-44 WLM: DDFBAT service class goals for our default DDF service class. . . . . . . . . . . 111
3-45 WLM: DDFTOT service class definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
3-46 WLM: DDFTST service class definition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
3-47 A subset of WLM classification rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
3-48 Sample db2dsdriver.cfg for our environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
3-49 Sample db2dsdriver.cfg provided with the driver . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3-50 Two-tier configuration to our standalone DB2 for z/OS, DB9A. . . . . . . . . . . . . . . . . 117
3-51 Three-tier configuration to our standalone DB2 for z/OS . . . . . . . . . . . . . . . . . . . . . 118
3-52 Two-tier connection to our DB2 for z/OS data sharing group . . . . . . . . . . . . . . . . . . 120
3-53 Three-tier connection to our DB2 for z/OS data sharing group. . . . . . . . . . . . . . . . . 121
3-54 Steps to configure DB2 for z/OS as DRDA AR to DB2 for LUW as DRDA AS. . . . . 123
3-55 DSNTIPF panel where you specify your system CCSIDs. . . . . . . . . . . . . . . . . . . . . 126
3-56 TCP profile extract with HiperSocket definitions - part 1. . . . . . . . . . . . . . . . . . . . . . 127
3-57 TCP profile extract with HiperSocket definitions - part 2. . . . . . . . . . . . . . . . . . . . . . 128
4-1 DRDA connection flow of DB2 for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4-2 Connecting to DB2 using RACF PassTickets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4-3 Authentication process using Kerberos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4-4 Setting for the DataSource custom properties on WebSphere Application Server . . . 139
4-5 Syntax diagram for the DSNLEUSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4-6 Typical DRDA access Web application server security model . . . . . . . . . . . . . . . . . . 141
4-7 Client information settings through WebSphere Application Server admin console . . 142
4-8 Client information setting example using datasource ODBC settings. . . . . . . . . . . . . 143
4-9 Configure WebSphere Application Server DataSource custom properties to use AES
encryption. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4-10 IPSec overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4-11 The padlock symbol indicates encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
4-12 SSL overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
0-1 BSDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4-13 Define RACF resources for policy agent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4-14 Definition for policy agent started task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4-15 Definition for policy agent environment file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4-16 Definition for policy agent main configuration file . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4-17 Sample server definition for AT-TLS configuration file . . . . . . . . . . . . . . . . . . . . . . . 156
4-18 Add TTLS parameter to TCP/IP stack configuration. . . . . . . . . . . . . . . . . . . . . . . . . 157
4-19 Activate DIGTCERT and DIGTRING class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4-20 Create a self-signed server CA certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4-21 Create private server certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4-22 Create server keyring and add certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4-23 Export server CA certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
4-24 Change BSDS to enable the secured port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
4-25 The IBM Key Management tool window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4-26 Create a new Key Database file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
4-27 Setting password for key database and making stash file . . . . . . . . . . . . . . . . . . . . 164
4-28 Import the DB2 server certificate to Key Database. . . . . . . . . . . . . . . . . . . . . . . . . . 164
4-29 Enter the label for the DB2 server Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
4-30 The key database after imported the DB2 server certificate. . . . . . . . . . . . . . . . . . . 165
4-31 Settings for $DB2INSTPROF and $DB2INSTDEF DB2 profile variables . . . . . . . . . 166
4-32 The network capture of DRDA request (using Wireshark) . . . . . . . . . . . . . . . . . . . . 170
4-33 The network capture of DRDA Data encrypt. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
4-34 The network capture of DRDA with SSL enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
4-35 Sample datasource custom properties settings for WebSphere Application Server. 172
4-36 Overview of preparation steps to execute JDBC application in static mode. . . . . . . 177
5-1 Connection management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5-2 Execution of remote packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5-3 Options for tool DSNTP2DP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5-4 Sample output for packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5-5 Sample output from PLANS data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5-6 .NET application code sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5-7 .NET configuration file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5-8 SQLJ application sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
5-9 Customizing and binding an SQLJ application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5-10 pureQuery runtime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5-11 The internal workings of pureQuery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5-12 Enabling XA transaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5-13 .NET application that uses XA transactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
6-1 DRDA access to DB2 data sharing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
6-2 The logical flow of DRDA AR clients obtaining server information . . . . . . . . . . . . . . . 241
6-3 Sysplex Distributor connection assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
6-4 Client reroute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
6-5 Multi-sysplex configuration scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
6-6 DB2 data sharing subsetting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
6-7 Java application server scenario configuration using WebSphere Application Server 250
6-8 Setting Type 4 driver configuration properties file to WebSphere Application Server. 252
6-9 Non-Java-based application server scenario configuration using .NET . . . . . . . . . . . 254
6-10 Distributed three tier DRDA clients scenario. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
6-11 Failover test scenario with Sysplex Distributor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
7-1 2-tier architecture representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
7-2 3-tier architecture representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
7-3 Effects of ACCUMACC on some of the fields of the accounting records . . . . . . . . . . 278
7-4 Reverting to ACCUMACC=NO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7-5 Accounting report including zIIP CPU usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
7-6 CPU report with zIIP and zAAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
7-7 Calculating zIIP redirect % . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
7-8 RMF Spreadsheet reporter: creating a working set menu . . . . . . . . . . . . . . . . . . . . . 292
7-9 RMF Spreadsheet reporter: creating a working set . . . . . . . . . . . . . . . . . . . . . . . . . . 293
7-10 RMF Spreadsheet reporter: selecting reports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
7-11 RMF report example: Physical Total Dispatch Time %. . . . . . . . . . . . . . . . . . . . . . . 294
7-12 DB2 PE System parameters GUI view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
7-13 OSC Panel DSNZPARMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
7-14 OMEGAMON PE showing the execution of a DISPLAY RLIMIT command . . . . . . . 311
7-15 START RLIMIT example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
8-1 3-tier architecture simplified . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
8-2 2-tier architecture simplified . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
8-3 Traces and tools in an n-tier environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
8-4 CLI settings in the Configuration Assistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
8-5 Add CLI parameters in the Configuration Assistant . . . . . . . . . . . . . . . . . . . . . . . . . . 328
8-6 CLI Settings panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
8-7 CLI/ODBC Setting, Windows Control panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
8-8 Changing the CLI setting using the DB2 Configuration Assistant . . . . . . . . . . . . . . . . 346
8-9 CLI settings: Transaction section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
8-10 Accounting information in OMEGAMON PE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
8-11 Network analyzer options panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
8-12 Network analyzer main panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
8-13 DRDA SECMEC DDM code point monitored by Wireshark . . . . . . . . . . . . . . . . . . . 358
8-14 Apply DRDA protocol filter to Wireshark capture . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
8-15 Filtering packets in Wireshark. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
8-16 Analysis of DRDA packets in a spreadsheet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
8-17 Configuration Assistant server authentication: No encryption. . . . . . . . . . . . . . . . . . 361
8-18 Password and user ID in network analyzer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
8-19 Configuration assistant enable encryption option . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
8-20 Configuration Assistant Server Authentication, Enable encryption option selected . 363
8-21 Creating spreadsheets reports from OMEGAMON Warehouse tables. . . . . . . . . . . 374
8-22 Custom DDF Statistics chart, blocks sent by statistics interval. . . . . . . . . . . . . . . . . 375
8-23 Custom DDF accounting chart, DBAT wait time versus commits per location . . . . . 376
8-24 Analysis of zIIP utilization by workstation name . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
8-25 Mapping the GTF header . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
8-26 WLM workload classification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
8-27 Flowchart describing the problem determination method in an n-tier environment . . 390
B-1 Starting z/OS and AIX configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
B-2 Configuration after implementing DVIPA support, plus Linux on IBM System z . . . . 403
B-3 Location subsets in our DB2 data sharing group . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
B-4 Trade overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
B-5 WebSphere Application Server admin console after installation . . . . . . . . . . . . . . . . 413
B-6 JDBC Provider defined from configuration script . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
B-7 TradeDataSource from WebSphere Application Server admin console. . . . . . . . . . . 414
B-8 Modify DataSource to Type 4 Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
B-9 Restart application server from WebSphere Application Server admin console . . . . 415
B-10 Finish installation by populating Trade Database. . . . . . . . . . . . . . . . . . . . . . . . . . . 415
B-11 Verify your installation by logging into Trade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
B-12 Trade home panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
B-13 Test Trade scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Examples
2-1 Updating tables in order in a single UOW. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3-1 Displaying an OMVS user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3-2 Defining a superuser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3-3 Port reservations for two DB2 subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3-4 Port reservations for three members of a DB2 data sharing group. . . . . . . . . . . . . . . . 77
3-5 Port reservation statements for our three-way data sharing group. . . . . . . . . . . . . . . . 81
3-6 Port reservations for three members including aliases . . . . . . . . . . . . . . . . . . . . . . . . . 82
4-1 Authentication mechanism by catalog database command . . . . . . . . . . . . . . . . . . . . 131
4-2 Authentication rejection with and without extended security. . . . . . . . . . . . . . . . . . . . 134
4-3 Change RACF password on connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4-4 Activates PTKTDATA class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4-5 New profiles to remote DB2 subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4-6 Catalog a database using Kerberos authentication . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4-7 Example for setting authentication mechanism at DB2 Connect . . . . . . . . . . . . . . . . 138
4-8 Example of executing SYSPROC.DSNLEUSR(from DB2 Connect). . . . . . . . . . . . . . 139
4-9 Display of inserted row of SYSPROC.DSNLEUSR. . . . . . . . . . . . . . . . . . . . . . . . . . . 140
4-10 Sample configuration file contents of data server driver . . . . . . . . . . . . . . . . . . . . . . 143
4-11 Sample for setting client information from Java applications . . . . . . . . . . . . . . . . . . 144
4-12 Sample setting client information from WebSphere Application Server applications 144
4-13 Sample setting client information in ODBC/CLI applications . . . . . . . . . . . . . . . . . . 145
4-14 Sample setting client information in ADO.NET application (Visual Basic) . . . . . . . . 145
4-15 SYSLOG output from misconfiguration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4-16 Catalog database with AES option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4-17 Sample data server driver configuration for AES encryption . . . . . . . . . . . . . . . . . . 148
4-18 Using AES from Java application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4-19 Catalog database with DRDA data stream encryption . . . . . . . . . . . . . . . . . . . . . . . 149
4-20 Sample data server driver configuration for data stream encryption. . . . . . . . . . . . . 149
4-21 Using data stream encryption for Java applications . . . . . . . . . . . . . . . . . . . . . . . . . 150
4-22 Generating keystore for Java clients. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
4-23 Import server certificate to your keystore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4-24 List the keystore entry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4-25 Starting the IBM Key Management tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4-26 Contents of SSLClientconfig.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4-27 Sample Java code for SSL connection . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
4-28 Executing Java application using SSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4-29 Sample db2dsdriver.cfg configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4-30 Catalog DB2 server to use SSL connection to DB2 Connect . . . . . . . . . . . . . . . . 169
4-31 Sample SSL connection using DB2 Connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4-32 Sample SYSLOG output for failed SSL connection. . . . . . . . . . . . . . . . . . . . . . . . . . 170
4-33 The db2cli.ini configuration for static SQL profiling capture mode . . . . . . . . . . . . . . 178
4-34 Sample CLI script used to test static profiling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
4-35 Static profiling capture file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
4-36 db2cli.ini configuration for static SQL profiling match mode . . . . . . . . . . . . . . . . . . . 180
4-37 Capture - match log file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
5-1 Three-part table names. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5-2 Aliases for three-part table names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5-3 Incorrect alias definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5-4 Correct remote alias definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5-5 Explicit CONNECT statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5-6 Binding packages at a remote site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5-7 Binding packages into the plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5-8 Binding remote SPUFI packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
5-9 Connecting from DB2 CLP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
5-10 Binding DB2 Connect packages on a remote DB2 for z/OS. . . . . . . . . . . . . . . . . . . 200
5-11 Using command db2bfd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5-12 Executing remote SQL interactively from DB2 CLP . . . . . . . . . . . . . . . . . . . . . . . . . 202
5-13 Creating and calling a native SQL procedure from DB2 CLP . . . . . . . . . . . . . . 202
5-14 Using the DB2Binder utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5-15 Bind options as a property for the DB2Binder class . . . . . . . . . . . . . . . . . . . . . . . . . 203
5-16 Connecting to DB2 for z/OS through CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5-17 The db2dsdriver.cfg file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5-18 Determining the driver version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5-19 Using the getConnection() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5-20 Connecting to DB2 for z/OS through the DataSource interface . . . . . . . . . . . . . . . . 208
5-21 Capturing dynamic SQL statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5-22 Configuring target packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5-23 Using the StaticBinder utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5-24 Running the static application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5-25 Block fetching in a Java program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5-26 Retrieving a rowset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5-27 Implicit multi-row fetching in a Java program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5-28 Setting a property to enable extended diagnostic. . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5-29 Multi-row INSERT in a PL/I program. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5-30 Using addBatch() in a Java program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5-31 Multi-row MERGE in a Java program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5-32 Issuing SQL interrupts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5-33 Creating and calling a native SQL procedure using the Type 4 driver . . . . . . . . . . . 224
5-34 Enabling XA transaction through explicit XA API . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6-1 RMF workload activity report example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6-2 Example of the server list from the DISPLAY DDF DETAIL command. . . . . . . . . . . . 236
6-3 Example trace output from the global transport objects pool . . . . . . . . . . . . . . . . . . . 238
6-4 Display server list information from DB2 Connect Server . . . . . . . . . . . . . . . . . . . . . . 238
6-5 Verify level of Type 4 driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
6-6 Example of configuration properties file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
6-7 Sample setting for db2dsdriver.cfg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
7-1 DIS THD(*) DETAIL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
7-2 OM/PE Statistics Report Short showing DBAT QUEUED-MAXIMUM ACTIVE > 0 . . 272
7-3 OM/PE Statistics Report Long showing DBAT QUEUED-MAXIMUM ACTIVE > 0 . . 273
7-4 Thread cancelled because of idle thread timeout threshold reached . . . . . . . . . . . . . 274
7-5 Activating accounting rollup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
7-6 SET SYSPARM RELOAD command example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
7-7 De-Activating accounting rollup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7-8 Output of /d m=cpu command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
7-9 Extract of hlq.SDSNMACS(DSNDQWAC) macro showing zIIP related fields . . . . . . 285
7-10 OMEGAMON PE RECTRACE command example . . . . . . . . . . . . . . . . . . . . . . . . . 286
7-11 OMEGAMON PE Record Trace extract showing zIIP related fields . . . . . . . . . . . . . 286
7-12 SDSF view of enclaves showing zIIP utilization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
7-13 RMF online monitoring of enclaves. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
7-14 RMF Enclave Classification Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
7-15 RMF batch reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
7-16 db2set -all . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
7-17 db2set -lr command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
7-18 db2 get dbm cfg output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
7-19 Getting the port on which DB2 for LUW listens . . . . . . . . . . . . . . . . . . . . . . . 296
7-20 GET CLI CONFIGURATION syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
7-21 GET CLI CONFIGURATION output example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
7-22 db2pd sysplex syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
7-23 db2pd usage example: Getting the server list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
7-24 Syntax of the list database directory command . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
7-25 db2 list db directory command output example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
7-26 Syntax of the list node directory command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
7-27 list node directory command output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
7-28 Syntax of the list dcs directory command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
7-29 list dcs directory command output example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
7-30 Getting online help using the CLP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
7-31 Using the -h option with a system command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
7-32 Using the command db2licm. . . . . . . . . . . . . . . . . . . . . . . . 303
7-33 Using the command db2level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
7-34 Getting the service associated with the DB2 Server . . . . . . . . . . . . . . . . . . . . . . . . . 304
7-35 Getting the port number from /etc/services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
7-36 Getting the IP address of a DB2 Connect or ESE server from its dns entry. . . . . . . 304
7-37 Getting your IP address on a Windows machine . . . . . . . . . . . . . . . . . . . . . . . . 304
7-38 Getting the IP address of an AIX server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
7-39 DB2 SYSPRINT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
7-40 ping -a example command for resolving a host name . . . . . . . . . . . . . . . . . . . . . . 305
7-41 Updating db2 diaglevel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
7-42 Extract of an OMEGAMON PE System Parameter Report . . . . . . . . . . . . . . . . . . . 307
7-43 STOP RLIMIT example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
7-44 Example of Java program calling SYSPROC.GET_INFO . . . . . . . . . . . . . . . . . . . . 312
7-45 Executing a Java program calling SYSPROC.GET_INFO . . . . . . . . . . . . . . . . . . . . 314
7-46 Extract of xml_output.xml file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
7-47 DIS THD output example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
7-48 Display of INACTIVE THREADS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
7-49 DIS THD(*) LOCATION(::9.12.4.121) command output . . . . . . . . . . . . . . . . . . . . . 316
7-50 DISPLAY LOCATION example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
7-51 DIS DDF syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
7-52 DIS DDF command example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
7-53 DIS DDF DETAIL example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
7-54 DIS THD(*) TYPE(INACTIVE) command output. . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
7-55 STOP DDF command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
7-56 STOP DDF output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
7-57 DISPLAY DFF output (suspending) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
7-58 DISPLAY LOCATION output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
7-59 DISPLAY DDF after RESUME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
8-1 GET CLI CONFIGURATION syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
8-2 GET CLI CONFIGURATION output example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
8-3 Execution of UPDATE CLI CFG commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
8-4 GET CLI CFG command output after changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8-5 How to find the db2cli.ini file in an AIX server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8-6 Using the db2set command to display the variable DB2CLIINIPATH. . . . . . . . . . . . . 332
8-7 COMMON section of db2cli.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
8-8 CLI trace example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
8-9 Locating the db2drdat in DB2 Connect and ESE in AIX . . . . . . . . . . . . . . . . . . . . . . . 334
8-10 db2drdat command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
8-11 db2drdat on. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
8-12 Stopping the db2drdat traces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
8-13 DRDA trace example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
8-14 Enabling traces using the DriverManager interface . . . . . . . . . . . . . . . . . . . . . . . . . 338
8-15 Sample modification to connection string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
8-16 Sample JCC Type 4 trace file (partial output) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
8-17 Sample SQLJ trace file (partial output) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
8-18 Sample JCC Type 4 trace file (partial output) showing SQL Error . . . . . . . . . . . . . . 341
8-19 Starting db2trc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
8-20 db2trc dump command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
8-21 db2trc off command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
8-22 Format db2trc trace file command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
8-23 Sample formatted db2trc report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
8-24 SET CLIENT USERID example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
8-25 SET CLIENT WRKSTNNAME example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
8-26 SET CLIENT APPLNAME example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
8-27 SET CLIENT ACCTNG example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
8-28 Using accounting information for DDF work classification . . . . . . . . . . . . . . . . . . . . 344
8-29 Enclave details showing the Accounting Information field . . . . . . . . . . . . . . . . . . . . 344
8-30 RMF enclave report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
8-31 db2 get cli cfg and accounting CLI settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
8-32 DRDA network trace and CLI accounting settings . . . . . . . . . . . . . . . . . . . . . . . . . . 347
8-33 DIS THD and accounting information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
8-34 Timeout report when not using CLI user information . . . . . . . . . . . . . . . . . . . . . . . . 349
8-35 Timeout report when using CLI custom information . . . . . . . . . . . . . . . . . . . . . . . . . 350
8-36 Sample procedure for capturing CTRACE information . . . . . . . . . . . . . . . . . . . . . . . 350
8-37 Output of display trace command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
8-38 IPCS primary options menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
8-39 IPCS CTRACE option menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
8-40 IPCS CTRACE reporting options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
8-41 Trace initialization message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
8-42 Sample formatted trace output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
8-43 Formatted report from IPCS for packet trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
8-44 Summary of the IP trace CSV format file using the inhouse tool . . . . . . . . . . . . . . . 355
8-45 Analysis of DRDA packets in a flat file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
8-46 DRDA SECCHK Non encrypted example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
8-47 DRDA SECCHK Encrypted enabled example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
8-48 START TRACE command syntax, partial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
8-49 Starting IFCID 180 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
8-50 START TRACE output example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
8-51 DIS TRACE command example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
8-52 STOP TRACE command example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
8-53 Switch SMF command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
8-54 Identify SMF dump data set from SMF Dump job. . . . . . . . . . . . . . . . . . . . . . . . . . . 366
8-55 Extract DB2 SMF records from SMF dump dataset . . . . . . . . . . . . . . . . . . . . . . . . . 366
8-56 SYS1.PROCLIB(GTFDRDA), a GTF proclib example . . . . . . . . . . . . . . . . . . . . . . . 367
8-57 SYS1.PARMLIB(GTFDRDA), GTF option example . . . . . . . . . . . . . . . . . . . . . . . . . 367
8-58 Start GTF system log output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
8-59 Start GTF: Answer to message AHL125A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
8-60 START TRACE with destination GTF. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
8-61 STOP GTF trace. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
8-62 Use of LOCATION for starting tracing for non z/OS clients . . . . . . . . . . . . . . . . . . . 370
8-63 OMEGAMON PE Accounting Trace report short . . . . . . . . . . . . . . . . . . . . 372
8-64 OMEGAMON PE Accounting Trace report long . . . . . . . . . . . . . . . . . . . . . . . . . . 373
8-65 OMEGAMON PE report example: Rows involved in multi rows operations . . . . . . . 378
8-66 Sample PL/I program: multi row insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
8-67 OMEGAMON PE trace report example, IFCID(192). . . . . . . . . . . . . . . . . . . . . . . . . 380
8-68 OMEGAMON PE IFCID 192 report example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
8-69 GTF Trace header extract of hlq.SDSNMACS(DSNDQWGT) . . . . . . . . . . . . . . . . . 381
8-70 GTF trace extract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
8-71 GTF trace header extract of hlq.SDSNMACS(DSNDQW02), IFCID 180 . . . . . . . . . 384
8-72 JCL sample: calling IFCID 180 formatting tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
8-73 IFCID 180 formatting example, DATA=N option. . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
8-74 IFCID 180 formatting example, DATA=Y option . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
B-1 Running the configuration and installation script in AIX . . . . . . . . . . . . . . . . . . . . . . . 407
C-1 SQL procedure CALL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
C-2 Creating and registering an XA datasource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
C-3 Sample JDBC application running an XA transaction using the JDBC XA API . . . . . 426
C-4 XAUtil class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
C-5 Properties file XADataSource1_T4S390 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
C-6 Using Progressive Streaming: XMLTest_RedJava . . . . . . . . . . . . . . . . . . . . . . . . . . 439
D-1 Korn shell script: executing a DB2 query from UNIX . . . . . . . . . . . . . . . . . . . . . . . . . 444
D-2 Korn shell: executing a query script in the background . . . . . . . . . . . . . . . . . . . . . . . 446
D-3 JCL for execution of REXX parser of GTF containing IFCID 180. . . . . . . . . . . . . . . . 446
D-4 REXX code: parsing of GTF trace and IFCID 180 . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Tables
1-1 Terminology table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1-2 Conversation of AR and AS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1-3 Components of the DRDA process model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2-1 Recent history of DB2 client products. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2-2 IBM Data Server Drivers and Clients comparison. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2-3 Replacing DB2 Connect Server with IBM Data Server Drivers or Clients. . . . . . . . . . . 48
2-4 IBM Data Server Client Packages: Latest downloads (V9.7) . . . . . . . . . . . . . . . . . . . . 51
3-1 SYSIBM.LOCATIONS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3-2 SYSIBM.IPNAMES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3-3 SYSIBM.USERNAMES. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3-4 DSNZPARM parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3-5 DDF work classification attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
3-6 Commands to connect 2-tier to DB2 for z/OS and the information required. . . . . . . . 117
3-7 Commands to connect 3-tier to DB2 for z/OS and the information required. . . . . . . . 119
3-8 Commands to connect 2-tier to DB2 data sharing group and the information required 120
3-9 Commands to connect 3-tier to DB2 data sharing group and the information required 122
3-10 Connecting DB2 for z/OS to DB2 for LUW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4-1 Security options for DB2 for z/OS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5-1 Type 1 and Type 2 CONNECT statements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
5-2 db2cli.ini and db2dsdriver.cfg configuration parameters. . . . . . . . . . . . . . . . . . . . . . . 207
5-3 Comparing pureQuery dynamic and static SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5-4 Comparing limited block FETCH and multi-row FETCH. . . . . . . . . . . . . . . . . . . . . . . 219
5-5 Type 4 driver properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5-6 Native versus external procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
5-7 Comparison across requesters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
5-8 Fetch/insert feature support by client/driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
6-1 Server list and calculated ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
6-2 The numbers of active connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
6-3 Recommendation for common deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6-4 Recommended settings for the non-Java-based IBM Data Server Driver . . . . . . . . . 254
6-5 Recommended settings for DB2 Connect server configuration . . . . . . . . . . . . . . . . . 258
6-6 Test results after D9C1 failed without Sysplex Distributor . . . . . . . . . . . . . . . . . . . . . 261
6-7 Test results after D9C1 failed with Sysplex Distributor . . . . . . . . . . . . . . . . . . . . . . . . 264
7-1 Summary of DSNZPARM parameters affecting DBATs . . . . . . . . . . . . . . . . . . . . . . . 271
7-2 Fields affected by roll up for distributed and parallel tasks . . . . . . . . . . . . . . . . . . . . . 275
7-3 ACCUMUID acceptable values. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
7-4 ACCUMACC and effects on some accounting fields . . . . . . . . . . . . . . . . . . . . . . . . . 278
7-5 SYSPLAN columns of interest for distributed workloads . . . . . . . . . . . . . . . . . . . . . . 309
7-6 SYSPACKAGES columns of interest for distributed workload . . . . . . . . . . . . . . . . . . 310
8-1 Traces available on distributed components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
8-2 Some DRDA commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
8-3 Trace level options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
8-4 Allowable constraints for each trace type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
8-5 Most common allowable destinations for each trace type. . . . . . . . . . . . . . . . . . . . . . 372
8-6 Extract of constants definitions, z/Architecture Reference Summary. . . . . . . . . . . . 382
8-7 Extract of work qualifiers and their abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
A-1 DB2 9 current DRDA-related APARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX
CICS
Cognos
DataPower
DB2 Connect
DB2 Universal Database
DB2
developerWorks
Distributed Relational Database
Architecture
DRDA
HiperSockets
i5/OS
IBM
Informix
InfoSphere
iSeries
Language Environment
OMEGAMON
OS/390
Parallel Sysplex
RACF
Redbooks
Redbooks (logo)
RETAIN
SecureWay
System i
System z10
System z9
System z
Tivoli
VTAM
WebSphere
z/Architecture
z/OS
z/VM
z9
zSeries
The following terms are trademarks of other companies:
Cognos, and the Cognos logo are trademarks or registered trademarks of Cognos Incorporated, an IBM
Company, in the United States and/or other countries.
Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or
its affiliates.
Hibernate, Interchange, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in
the U.S. and other countries.
SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other
countries.
J2EE, Java, JDBC, JDK, JRE, JVM, Solaris, and all Java-based trademarks are trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
ESP, Excel, Microsoft, MS, SQL Server, Visual Basic, Visual Studio, Windows, and the Windows logo are
trademarks of Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
Distributed Relational Database Architecture (DRDA) is a set of protocols that permits
multiple local and remote database systems and application programs to work together. Any
combination of relational database management products that use DRDA can be connected
to form a distributed relational database management system. DRDA coordinates
communication between systems by defining what can be exchanged and how it must be
exchanged.
DB2 for z/OS Distributed Data Facility (DDF) is a built-in component that provides the
connectivity to and from other servers or clients over the network. DDF is a full-function
DRDA-compliant transaction monitor which, equipped with thread pooling and connection
management, can support very large networks. Different z/OS workload management
priorities can be assigned to different, user-specified classes of DDF-routed application work.
In this IBM Redbooks publication, we describe how to set up your DDF environment and
how to deploy the DDF capabilities in different configurations, including how to develop
applications that access distributed databases.
We also describe a set of more advanced features, such as thread pooling and high
availability distributed configurations, in a DB2 data sharing environment, as well as the
traces available to do performance monitoring and problem determination.
In summary, we show how a high-volume, highly available transactional application can be
successfully implemented with a DB2 for z/OS data server accessed by all types of
application servers or clients running on the same or different platform.
The team that wrote this book
This book was produced by a team of specialists from around the world working in Silicon
Valley Laboratory, San Jose, California.
Paolo Bruni is an Information Management software Project Leader at the International
Technical Support Organization based in Silicon Valley Lab, San Jose. He has authored
several IBM Redbooks about DB2 for z/OS and related tools, and has conducted DB2
workshops and seminars worldwide. During Paolo's many years with IBM, both in
development and in the field, his work has been mostly related to database systems.
Nisanti Mohanraj is a Software Engineer in the DB2 for z/OS Distributed Development Team
in the IBM Silicon Valley Lab. She has 7 years of experience in distributed database
connectivity and DRDA and 9 years overall in DB2 development. Nisanti holds a Masters
Degree in Computer Science from the University of Virginia.
Cristian Molaro is an independent consultant and DB2 instructor based in Belgium. He is
an IBM Data Champion and an IBM Certified DBA and Application Developer for DB2 for z/OS
V7, V8, and V9. His main activity is linked to DB2 for z/OS administration and performance.
Cristian is co-author of the IBM Redbooks Enterprise Data Warehousing with DB2 9 for z/OS,
SG24-7637 and 50 TB Data Warehouse Benchmark on IBM System z, SG24-7674. He holds
a Chemical Engineer degree and a Master in Management Sciences. He can be reached at
[email protected].
Yasuhiro Ohmori is an IT Specialist with IBM Japan Systems Engineering Co., Ltd. (ISE)
under GTS in Japan. He has more than 7 years of experience in technical support for DB2 for
z/OS. Yasuhiro has worked with several major customers in Japan implementing DB2 for
z/OS and has conducted workshops for IBMers in Japan. His areas of expertise include DB2
for z/OS, DRDA implementation, DB2 Connect, and related topics.
Mark Rader is a Consulting IT Specialist with IBM Advanced Technical Support (ATS),
located in Chicago. He has 25 years of technical experience with IBM, including large
systems, communications, and databases. He has worked primarily with DB2 for MVS,
OS/390, and z/OS for more than 20 years, and has specialized in data sharing,
performance, and related topics for the past 9 years. Mark has been in ATS for the last 8
years.
Rajesh Ramachandran is a Senior Software Engineer in IBM System z e-Business
Services. He currently works in the Design Center in Poughkeepsie as an IT Architect and
DB2 assignee. He has 12 years of experience in application development on various
platforms, which include z/OS, UNIX, and Linux utilizing COBOL, Java, CICS, and
Forte.
The authors in SVL. From left to right: Yasuhiro, Nisanti, Cristian, Paolo, Mark, and Rajesh
Thanks to the following people for their contributions to this project:
Rich Conway
Roy Costa
Bob Haimowitz
Emma Jacobs
International Technical Support Organization
Atkins Chun
Jaijeet Chakravorty
Margaret Dong
Shivram Ganduri
Sherry Guo
Jeff Josten
Keith Howell
Gopal Krishnan
Roger Miller
Todd Munk
Jim Pickel
Manish Sehgal
Hugh Smith
Bart Steegmans
Derek Tempongko
Anil Varkhedi
Daya Vivek
Dan Weis
Sofilina Wilhite
Paul Wilms
IBM Silicon Valley Lab
Kanchana Padmanabhan
IBM Glendale
Toshiaki Sota
IBM Japan Systems Engineering
Gus Kassimis
IBM Software Group, Raleigh
Brent Gross
Anju Kaushik
Melanie Stopfer
IBM Toronto Lab
Rick Butler
Bank of Montreal (BMO) Financial Group, Toronto
Thanks to the authors of the first edition of this book, Distributed Functions of DB2 for z/OS
and OS/390, SG24-6952-00, published in June 2003:
Bart Steegmans
Neale Armstrong
Cemil Cemiloglu
Srirengan Venkatesh Kumar
Satoru Todokoro
Become a published author
Join us for a two- to six-week residency program! Help write a book dealing with specific
products or solutions, while getting hands-on experience with leading-edge technologies. You
will have the opportunity to team with IBM technical professionals, Business Partners, and
Clients.
Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you
will develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Summary of changes
This section describes the technical changes made in this edition of the book and in previous
editions. This edition may also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for SG24-6952-01
for DB2 9 for z/OS: Distributed Functions
as created or updated on May 26, 2011.
July 2009, First Edition
This revision of this First edition, published July 2009, reflects the addition, deletion, or
modification of new and changed information described below. Change bars mark meaningful
changes. Minor typographical corrections might not have change bars.
September 2009, First Update
This revision reflects the addition, deletion, or modification of new and changed information
described below.
New information
Added clarification in 5.8.2, Using the IBM non-Java-based Data Server Drivers/Clients to
enable direct XA transactions on page 227.
Added APARS in Appendix A, DRDA-related maintenance on page 395.
Added Example C-4 on page 428 and Example C-5 on page 439 in Appendix C, Sample
applications on page 419.
Changed information
Updated text in Preface on page xxv to reflect the correct title of the previous edition.
Updated text in 1.7.3, Transaction pooling on page 25.
Updated text in 3.1.3, Basic TCP/IP setup on page 71.
Updated text in Table 5-7 on page 230.
Updated text in three bullets in 5.9, Remote application recommendations on page 230.
Updated Table 6-3 on page 251.
Updated text in 8.2.1, TCP/IP packet tracing on z/OS on page 352.
Updated APARS in Appendix A, DRDA-related maintenance on page 395.
May 2011, Second Update
This revision reflects the addition, deletion, or modification of new and changed information
described below.
Changed information
Corrected text of reference in 5.7.6, Multi-row MERGE on page 221 and text in
Example 5-31 on page 221.
Part 1. Distributed database architecture and configurations
In this part we introduce the concepts and protocols of DRDA and describe the layout and the
components of the possible configurations where DB2 for z/OS can play a client or server
role.
This part contains the following chapters:
Chapter 1, Architecture of DB2 distributed systems on page 3
Chapter 2, Distributed database configurations on page 33
Chapter 1. Architecture of DB2 distributed systems
In this chapter we discuss distributed data topology and provide a brief introduction to the
Distributed Relational Database Architecture (DRDA) and its general implementation in the
family of DB2 products. We provide some details about the Distributed Data Facility (DDF),
the DB2 for z/OS component that deals with data access from remote applications as well as
the various clients that can connect to DB2 for z/OS.
Lastly, we mention (but do not use in this book) the Federated Data Server products that allow
you to perform distributed requests.
This chapter contains the following sections:
Before you start on page 4
Distributed data access topology on page 4
DRDA on page 9
Implementation of DRDA in the DB2 family on page 14
Implementation of DRDA by non-IBM products on page 16
DB2 for z/OS Distributed Data Facility architecture on page 17
Federated data support on page 26
1.1 Before you start
In this IBM Redbooks publication we describe many features and functions of products that
are constantly undergoing changes. At the time of writing, we used DB2 9 for z/OS and DB2
for Linux, UNIX, and Windows 9.5 FixPack 3. For certain configurations, we used the
newest release, DB2 for LUW 9.7, and we explicitly mention where we did so.
The hardware and software configuration used during this project is described in
Appendix B.2.2, TRADE installation on page 405.
Terminology
DB2 for z/OS used in this book generally refers to DB2 9 for z/OS unless DB2 for z/OS V8 is
explicitly mentioned. Table 1-1 lists the abbreviations used throughout this publication.
Table 1-1 Terminology table
Official term                                                       Term used in this book
DB2 for Linux, UNIX and Windows 9.5 FixPack 3                       DB2 for LUW 9.5 FP3
DB2 Version 9.1 for z/OS                                            DB2 9 for z/OS
DB2 for z/OS Version 8                                              DB2 for z/OS V8
IBM Data Server Driver for JDBC and SQLJ, Type 4 Connectivity       Type 4 driver
IBM Data Server Driver for JDBC and SQLJ, Type 2 Connectivity       Type 2 driver
IBM Data Server Driver for ODBC and CLI                             CLI Driver
1.2 Distributed data access topology
Before discussing distributed data topology, we need to introduce two important concepts:
unit of work and two-phase commit.
Unit of work
A unit of work (UOW) is a single logical transaction. It consists of a sequence of SQL
statements that are either all successful, or considered as a whole to be unsuccessful. It
maintains data integrity within a transaction.
Figure 1-1 on page 5 illustrates two sequences of SQL statements that are executed.
In an environment where the UOW concept is not supported (on the left in the figure), if one
SQL statement fails, the data updated by previously executed SQL statements in the
transaction remains. SQL3 fails but data updated by SQL1 and SQL2 remains updated.
On the other hand, in a UOW-supported environment (on the right in the figure), if one
statement fails (SQL3), all the updates are discarded and affected data is reinstated to the
state before the update (affected by SQL1 and SQL2). This way, data integrity at the
transaction level can be maintained.
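To make the unit of work concept concrete, the following fragment is a minimal JDBC sketch; it is not one of the examples referenced in this book, and the connection URL, credentials, and table names are hypothetical. With autocommit disabled, the three updates form one unit of work, and a failure of any statement backs out all of them:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class UnitOfWorkSketch {
        public static void main(String[] args) throws SQLException {
            // Hypothetical URL, credentials, and table names; adjust for your own systems
            try (Connection con = DriverManager.getConnection(
                    "jdbc:db2://host.example.com:446/DB9A", "user", "password")) {
                con.setAutoCommit(false);      // the following statements form one unit of work
                try (Statement stmt = con.createStatement()) {
                    stmt.executeUpdate("UPDATE TABLEA SET COL1 = COL1 + 1");
                    stmt.executeUpdate("UPDATE TABLEB SET COL1 = COL1 + 1");
                    stmt.executeUpdate("UPDATE TABLEC SET COL1 = COL1 + 1");
                    con.commit();              // all three updates succeed together
                } catch (SQLException e) {
                    con.rollback();            // any failure also backs out TABLEA and TABLEB
                    throw e;
                }
            }
        }
    }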
Figure 1-1 Unit of work concept
Two-phase commit protocol
The second key concept to understand is the two-phase commit protocol. Two-phase commit
enables the UOW concept across multiple DBMSs.
Continuing to use the scenario from the previous section, two-phase commit involves a
coordinator and one or more participants. The coordinator coordinates the UOW. The
coordinator itself may also be a participant. Figure 1-2 shows the components and message
flows between the partners when the presumed abort protocol (discussed below) is used.
Figure 1-2 Two-phase commit protocol
At first, all SQL requests are sent to the database (remote or local), and the data is updated in
the database. Now the application decides it is time to commit. The coordinator sends a
PREPARE to COMMIT request to (all) participants. After receiving the request, a participant
prepares for the COMMIT. This includes flushing log records (and logging the commit phase 1
record) and keeping locks. When this is done, the participant replies to the coordinator that it
is ready to commit. (Preparation is also done by the coordinator.) When the coordinator has
received the replies from all participants, the coordinator can go ahead with the commit and
write the commit log record. The coordinator sends the COMMIT request to all participants.
After receiving the request, the participant commits the update (logging a begin phase 2
record), releases the locks, and optionally sends a reply to the coordinator. When the
coordinator receives the replies from all the participants (either through an explicit
confirmation message, or the confirmation is implied with the next message), it considers the
UOW as completed. This way, the coordinator manages UOWs across participants.
The period from when the application starts commit processing until the actual COMMIT request is issued is called phase 1. The period from when the COMMIT request is issued until the end of the transaction is called phase 2. That is why this protocol is called a two-phase commit protocol.
A transaction can be in any of the following states: INFLIGHT, INDOUBT, INCOMMIT, or ABORT, according to the phase to which it belongs (see Figure 1-2 on page 5). When the participant encounters an unexpected problem while it is in doubt (for example, after the system goes down because of a power loss), it can ask the coordinator for the state of the transaction, and recover according to the information provided by the coordinator. Thus, a
sequence of SQL statements across multiple DBMSs can be managed as a single consistent
transaction, which enables distributed units of work in distributed data environments.
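The protocol can also be sketched in code. The following fragment is purely illustrative; the Participant interface and its methods are inventions for this sketch and not a DB2 or DRDA programming interface. It shows a coordinator driving phase 1 (prepare) and phase 2 (commit or rollback):

    import java.util.List;

    // Illustrative only: this is not a DB2 or DRDA programming interface
    interface Participant {
        boolean prepare();   // phase 1: flush log records, keep locks, vote OK (true) or NO (false)
        void commit();       // phase 2: commit the update and release locks
        void rollback();     // phase 2: back out the update and release locks
    }

    class TwoPhaseCommitSketch {
        static void commitUnitOfWork(List<Participant> participants) {
            boolean allPrepared = true;
            for (Participant p : participants) {      // phase 1: PREPARE to COMMIT
                if (!p.prepare()) {
                    allPrepared = false;
                    break;
                }
            }
            if (allPrepared) {
                // the coordinator writes its commit log record here, then drives phase 2
                for (Participant p : participants) {
                    p.commit();
                }
            } else {
                // with presumed abort (described next), the coordinator does not need to log
                // the abort decision; a participant that later asks about an unknown
                // transaction is simply told to abort
                for (Participant p : participants) {
                    p.rollback();
                }
            }
        }
    }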
There are three well-known variants of the two-phase commit protocol: presumed nothing, presumed abort, and presumed commit. Presumed nothing is the basic two-phase commit protocol; it is supported only by SNA and not by TCP/IP, which is the recommended network protocol. Presumed Commit is a variation that is not implemented by DRDA, so we limit our discussion to the Presumed Abort protocol.
Presumed Abort is an enhancement to the basic two-phase commit protocol that is designed
to reduce the number of messages exchanged and the number of times that log records are
written. When using the Presumed Abort variation, a coordinator can forget about the
transaction when it decides to abort. Whenever the participant asks about a transaction for
which the coordinator does not have information, it replies that the transaction is aborted. In
addition, participants do not have to reply to rollback requests from the coordinator. This
variation reduces writing logs by the coordinator, and also reduces the number of messages
exchanged when the transaction is aborted.
Let us now have a closer look at the different types of distributed data access topology.
1.2.1 Remote request
This is the basic remote access. An application (SQL) request accesses a single remote data
management system (DBMS), and each transaction (UOW) consists of only one SQL
statement. An SQL COMMIT statement is invoked either explicitly or implicitly. The remote
request concept and a sample configuration are shown in Figure 1-3 on page 7. Throughout this section, a UOW is represented by a shaded rectangle. For example, one SQL statement and a COMMIT statement are included in each UOW in the figures.
Important: While DB2 for z/OS requesters always use two-phase commit when
communicating with a DB2 for z/OS server, most other requesters default to one-phase
commit, and require the use of global XA transactions to ensure that two-phase commit is
used. We will discuss this in 5.8, XA transactions on page 225.
Note: The term DBMS in this and the following sections refers to an entire DB2 for z/OS
subsystem or data sharing group (in a distributed environment often addressed through its
location name), or DB2 LUW database server.
A remote request has the following characteristics:
One request per unit of work
One DBMS per unit of work
One DBMS per request
An application can access multiple databases. However, only one database is accessed at a
time and only one SQL statement is included in a single UOW. In this case, the result of one
SQL statement will not affect the result of another SQL statement.
Figure 1-3 shows a banking deposit application. The customer can deposit money to either
the checking or the savings account, using an ATM terminal. Whether the deposit into his
checking account is successful does not affect the deposit into the savings account.
Figure 1-3 Remote request
1.2.2 Remote unit of work
A Remote unit of work (RUW) is the next level of distributed data access. A remote unit of
work lets an application program read or update data on one DBMS per unit of work.
A remote unit of work has the following characteristics:
Multiple requests per unit of work
One DBMS per unit of work
One DBMS per request
Application can initiate commit processing
Commit scope is a single DBMS
Figure 1-4 on page 8 shows the concept and an example. Although two databases are
accessible, only one DBMS can be included in one UOW. A UOW can include multiple SQL
statements.
Figure 1-4 on page 8 shows a deposit transfer application where a UOW is represented by a
shaded rectangle. Assume that a customer has two sets of checking and savings accounts in
two branches. A customer can transfer money from his checking account to his savings
account as long as it is within the same branch. The data of both the checking and the
savings account is maintained in a single database within each branch. In this case,
withdrawing from the checking account and depositing into the savings account (of the same
branch office) must be treated as a single event, in other words, a transaction, a single unit of
work. However, the two transactions for two branches do not affect each other.
This type of distributed data access is defined in DRDA level-1.
Figure 1-4 Remote unit of work
1.2.3 Distributed unit of work
A distributed unit of work (DUW) involves more than one DBMS within a single unit of work.
Within one unit of work, an application can direct SQL requests to multiple DBMSs. In case
the application is aware of the database distribution, it indicates the target DBMS when it
executes the SQL statement. In DB2 this is done using the CONNECT TO statement.
However, you can also achieve some level of location transparency by creating an alias. This
means that the application is unaware of where the data actually resides. Note that location
transparency is not a DUW concept. It is a general distributed database concept that applies
to all levels of data distribution. When using a DUW, all objects referenced in a single SQL
statement are constrained to be in a single DBMS.
Distributed unit of work has the following characteristics:
Several DBMSs per unit of work.
Multiple requests per unit of work.
One DBMS per request.
Application can initiate commit processing.
Commit coordination across multiple DBMSs.
Two-phase commit support is essential to this implementation.
Figure 1-5 on page 9 shows the concept and an example. SQL statements are within one
UOW. A UOW is represented by a shaded rectangle. If one of the statements fails, all affected
data (in all DBMSs involved in the UOW) is reset to the state it was before the UOW started.
Figure 1-5 on page 9 shows the case where the customer asks for a transfer of funds from the
account in bank A to the account in bank B. In this case, the withdrawal in bank A and the
deposit in bank B must not be treated as different events. They should be treated as parts of
a single transaction.
By supporting the distributed unit of work concept, we can implement a transaction concept
across multiple DBMSs. This type of transaction is defined in DRDA level-2.
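As a hedged SQL sketch of the transfer in Figure 1-5 (the location names BANK_A and BANK_B and the account tables are hypothetical, and both locations are assumed to be defined to the requester), a distributed unit of work directs statements to both DBMSs and commits them together:

   CONNECT TO BANK_A;
   UPDATE CHECKING SET BAL = BAL - 500 WHERE ACCT_ID = 12345;   -- withdrawal at bank A
   CONNECT TO BANK_B;
   UPDATE SAVINGS  SET BAL = BAL + 500 WHERE ACCT_ID = 67890;   -- deposit at bank B
   COMMIT;                                                      -- two-phase commit covers both DBMSs

An alias such as CREATE ALIAS SAVINGS_B FOR BANK_B.ACCTS.SAVINGS (again a hypothetical name) would let the application reference the remote table without naming its location.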
Figure 1-5 Distributed unit of work
1.2.4 Distributed request
The fourth and last distributed data access is the distributed request. It allows users and
applications to submit SQL statements that reference multiple DBMSs in a single SQL
statement. For example, you can execute a join between a table in system A and another
table in system B. The concepts and an example are shown in Figure 1-6. Note that you need to use the IBM InfoSphere Federation Server products (mentioned in 1.8.1, IBM InfoSphere Federation Server on page 28 and 1.8.2, IBM InfoSphere Classic Federation Server for z/OS on page 29) to support distributed requests.
A distributed request has the following characteristics:
Several DBMSs per unit of work
Multiple requests per unit of work
Multiple DBMSs per request
Application can initiate commit processing
Commit coordination across multiple DBMSs
Figure 1-6 shows that a customer can request the maximum checking balance across all accounts in all banks where he has an account.
Figure 1-6 Distributed request
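The following sketch is only an illustration of the idea: it assumes an InfoSphere Federation Server on which the nicknames BANKA_CHECKING and BANKB_CHECKING have been created for the checking tables at the two banks, so that a single SQL statement spans both DBMSs:

   SELECT MAX(BAL)
   FROM (SELECT BAL FROM BANKA_CHECKING         -- nickname for the table at bank A
         UNION ALL
         SELECT BAL FROM BANKB_CHECKING         -- nickname for the table at bank B
        ) AS ALL_ACCOUNTS;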
1.3 DRDA
A common protocol that is independent of the underlying RDBMS architecture and operating
environments is necessary to access a diverse set of RDBMSs. The IBM DB2 distributed
database functionality is based on DRDA. DRDA is an open, vendor-independent architecture
for providing connectivity between a client and database servers. It was initially developed by IBM and then adopted by The Open Group (http://www.opengroup.org/dbiop/) as an industry standard interoperability protocol.
Connectivity is independent from not only hardware and software architecture but also
vendors and platforms (see Figure 1-7). Using DRDA, an application program can access
various databases that support DRDA, using SQL as a common access method.
Figure 1-7 DRDA network
The major characteristics of DRDA are as follows:
Support for any SQL dialect, static (previously bound) and dynamic.
Automatic data transformation through the receiver-makes-right philosophy.
UOW support including 2-phase commit, recovery, and multi-site updates.
Stored procedure support
Superior performance, scalability, and availability
Support for enhanced security mechanisms including data encryption, trusted context,
and Kerberos
Support for an XA distributed transaction processing (DTP) interface
DRDA is defined by using different levels so that vendors can implement their applications
step by step, depending on the level of DRDA. Some of the functions introduced in DB2 9 and
the latest level of DRDA are as follows:
XML extensions
DRDA supports new XML types. XML data is treated similarly to LOB data from a DRDA perspective and flows through EXTDTAs.
Dynamic data format
DRDA allows the in-lining of small LOBs in QRYDTA and the chunking of large LOBs into
progressive references that can be retrieved through the GETNXTCHK command.
Improved package management
DRDA is enhanced to support remote bind COPY and bind DEPLOY so a package can be copied or deployed (no change in original bind options) from a source server to a target server.
Enhanced security
Support for trusted context and 256-bit Advanced Encryption Standard (AES) encryption.
1.3.1 Functions and protocols of DRDA
For the purposes of this book, we give only a brief introduction to DRDA. DRDA is fully documented in the following volumes, available online from The Open Group:
DRDA V4, Vol. 1: Distributed Relational Database Architecture
http://www.opengroup.org/onlinepubs/9699939399/toc.pdf
DRDA V4, Vol. 2: Formatted Data Object Content Architecture
http://www.opengroup.org/onlinepubs/9699939299/toc.pdf
DRDA V4, Vol. 3: Distributed Data Management Architecture
http://www.opengroup.org/onlinepubs/9699939199/toc.pdf
DRDA is a set of protocols and functions providing connectivity between applications and
database management systems. See Figure 1-8.
Figure 1-8 Functions and protocols used by DRDA
The following function types are provided:
Application requester
Application requester (AR) functions support SQL and program preparation services from
applications. The AR is SQL neutral, so it can accept SQL that is supported on any DBMS.
Application server
Application server (AS) functions support requests that application requesters have sent,
and route requests to database servers by connecting as an application requester.
Database server
Database server (DS) functions support requests from application servers. They support
the propagation of special register settings and can forward SQL requests to other
database servers.
The following protocols are provided:
Application Support Protocol
Application Support Protocol provides connection between application requesters (AR)
and application servers (AS).
Database Support Protocol
Database Support Protocol provides connections between application servers (AS) and
database servers (DS). Prior to executing any SQL statements at a database server,
special register settings set by the application are propagated to the database server.
Application
Process
Application
Requester
DRDA protocol
SQL
DBMS
Application
Server
Client DBMS
Database
Server
DRDA protocol
12 DB2 9 for z/OS: Distributed Functions
1.3.2 Conversations between the AR and AS
Connections are managed by the application. The roles of the two functions during their communication are shown in Table 1-2. Equivalent functionality exists for communications between the application server and the database server.
Table 1-2 Conversation of AR and AS

Application requester (AR)              Application server (AS)
Initiates connection to the AS          Listens for DRDA requests
Generates DRDA request                  Parses DRDA request
Sends requests to the AS                Invokes RDBMS
Receives replies from the AS            Generates DRDA replies
Parses replies from the AS              Sends replies to the AR
Terminates connection                   Connection termination implies rollback
1.3.3 Building blocks of DRDA
DRDA requires the following architectures:
Distributed Data Management (DDM)
The DDM architecture provides the command and reply structure used by the distributed
databases.
Formatted Data Object Content Architecture (FD:OCA)
The FD:OCA provides the data definition architectural base for DRDA.
Character Data Representation Architecture (CDRA)
CDRA provides consistency of character data across multiple platforms.
Communication protocol
SNA and TCP/IP are available.
1.3.4 The DRDA process model
Figure 1-9 on page 13 shows the DRDA process model and the function of each process
(called manager in DDM terminology) and objects.
Figure 1-9 DRDA process model
A brief description of the components is given in Table 1-3.
Table 1-3 Components of the DRDA process model
Manager/process/object Function/description
SQL Application Manager (SQLAM) Represents the application to the remote relational
database manager; handles all DRDA flows.
Directory Maps the names of instances of manager objects to their
locations.
Dictionary A set of named descriptions of objects.
Communications Manager Provides conversational support for the agent in an AR
or AS through SNA or TCP/IP.
Agent In an AR, the agent interfaces with the SQLAM to receive
requests and pass back responses. In an AS, the agent
interfaces with managers in its local server to determine
where to send the command to be processed, to allocate
resources and to enforce security.
Supervisor Manages a collection of managers within a particular
operating environment.
Security Manager Participates in user identification and authentication, and
ensures the access of the requester.
Resynchronization Manager A system component that recovers protected resources
when a commit operation fails.
Syncpoint Manager A system component that coordinates commit and
rollback operations among various protected resources.
Relational Database Manager Controls the storage and integrity of data in a relational
database. DRDA provides no command protocol or
structure for relational database managers.
CCSID Manager Allows the specification of a single-byte character set
CCSID to be associated with character typed
parameters on DDM commands and DDM reply
messages.
XA Manager Provides a data stream architecture that will allow the
application requester and server to perform the
operations involved in protecting a resource.
1.4 Implementation of DRDA in the DB2 family
DRDA is implemented throughout the IBM relational database systems as well as in
relational database management systems provided by companies other than IBM. In this
section we briefly describe how DRDA is implemented and used in the IBM products.
1.4.1 DB2 for z/OS
Both Application Server and Application Requester functions are implemented as standard
functions of DB2 for z/OS.
1.4.2 DB2 for i
Both Application Server and Application Requester functions are implemented as standard
functions of DB2 for i 6.1 (formerly known as DB2 for i5/OS).
1.4.3 DB2 Server for VSE and VM
The DB2 Server for VSE and VM V7.5 implements a DRDA Application Server. With the DB2 Runtime-only Client editions for VM and VSE, features that implement the DRDA Application Requester, you can purchase and use only the DB2 client, without having to pay for the database server.
1.4.4 IBM Informix Dynamic Server
Starting with Version 11.10, the IBM Informix Dynamic Server (IDS) can act as a DRDA Application Server. The IBM Data Server Driver for JDBC and SQLJ (described in 1.4.7, IBM Data Server Driver for JDBC and SQLJ on page 15) as well as the IBM Data Server Driver for .NET (described in 1.4.6, IBM Data Server Driver for ODBC and CLI on page 15) can be used as DRDA Application Requesters when connecting to the Informix Dynamic Server.
1.4.5 DB2 for Linux, UNIX and Windows
On DB2 for Linux, UNIX, and Windows (DB2 for LUW), the DRDA Application Server support
is provided as a standard function within the DB2 server engine. On these platforms, the
DRDA Application Requester function is an optional feature that can be ordered with the
database server, or can also be licensed separately in the form of the DB2 Connect product
or any of the IBM Data Server Drivers or Clients listed below.
Here is the list of the IBM Data Server Clients and Drivers:
IBM Data Server Client
IBM Data Server Runtime Client
IBM Data Server Driver for ODBC and CLI
IBM Data Server Driver for JDBC and SQLJ
IBM Data Server Driver Package
In addition, a separate product, the DB2 Connect Client, includes all the functionality of IBM
Data Server Client plus the capability to connect to midrange and mainframe databases. DB2
Connect capability can be added to any client or driver.
The DB2 Connect Server is considered a DRDA Application Server because it has the ability
to perform Connection Concentration and Sysplex Workload Balancing. Although the terms
Connect Gateway and Connect Server are used interchangeably, Gateway was never an
official term and we will refer to it as DB2 Connect Server. Prior to Version 8.1, DB2 Connect Server performed protocol conversions (from the DB2 for LUW version of private protocol to DRDA) as well as codepage conversions when connecting to the DB2 for z/OS server. Starting with Version 8.1, these conversions no longer need to be performed as long as the DB2 clients are Version 8.1 or later.
The Type 4 driver added sysplex workload balancing for JDBC 2.0 data sources in DB2 for LUW V8 FP10 and for JDBC 1.2 ConnectionManager connections in DB2 for LUW V9. Sysplex workload balancing was added to the non-Java-based clients in DB2 for LUW 9.5 FP3, eliminating the need for DB2 Connect Servers to act as a middle tier between DB2 clients and DB2 for z/OS servers.
You can run DB2 CLI and ODBC applications against a DB2 database server using the IBM
Data Server Client, the IBM Data Server Runtime Client, or the IBM Data Server Driver for
ODBC and CLI. However, to compile DB2 CLI or ODBC applications, you need the IBM Data
Server Client.
See the DB2 Version 9.5 for Linux, UNIX, and Windows Information Center for an overview of
IBM Data Server Clients and Drivers at the following Web page:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.swg.
im.dbclient.install.doc/doc/c0022612.html
1.4.6 IBM Data Server Driver for ODBC and CLI
The IBM Data Server Driver for ODBC and CLI (CLI driver) provides runtime support for the DB2 CLI application programming interface (API), the ODBC API, the XA API, and database connectivity. We refer to these clients and the .NET support drivers as non-Java-based clients.
1.4.7 IBM Data Server Driver for JDBC and SQLJ
JDBC is an application programming interface (API) that Java applications use to access
relational databases. The IBM Data Server Driver for JDBC and SQLJ provides Type 4 and
Type 2 connectivity. To communicate with remote servers using DRDA, the Type 4 driver is
used as a DRDA Application Requester. SQLJ provides support for embedded static SQL in
Java applications. In general, Java applications use JDBC for dynamic SQL and SQLJ for
static SQL.
This driver is also called the Java Common Client (JCC) driver and was formerly known as the IBM DB2 Universal Database Driver. The DB2 JDBC Type 2 driver for LUW, also called the CLI legacy driver, has been deprecated. We refer to these clients as Java-based clients.
See the Java application development for IBM data servers section at the DB2 Version 9.5 for
Linux, UNIX, and Windows Information Center at the following Web page:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.apdv.ja
va.doc/doc/c0024189.html
1.4.8 IBM Data Server Driver Package
In June 2009, IBM announced the IBM Data Server Driver Package, a lightweight deployment
solution providing runtime support for applications using ODBC, CLI, .NET, OLE DB, open
source, or Java APIs. This replaced the separate drivers formerly available for .NET and open source. All of the IBM Data Server Drivers have a small footprint, are designed to be
redistributed by independent software vendors (ISVs), and to be used for application
distribution in mass deployment scenarios typical of large enterprises.
1.4.9 pureQuery
pureQuery is a high-performance data access platform that makes it easier to develop,
optimize, secure, and manage data access. It consists of the following features:
Application programming interfaces that are built for ease of use and for simplifying the
use of best practices
Development tools, which are delivered in Data Studio Developer, for Java and SQL
development
A runtime, which is delivered in Data Studio pureQuery Runtime, for optimizing and
securing database access and simplifying management tasks
IBM Data Studio pureQuery Runtime for z/OS offers a runtime environment that optimizes
data access performance of Java applications and stored procedures deployed on the z/OS
platform. pureQuery uses the IBM Data Server Driver for JDBC and SQLJ type 4 connectivity
to communicate through DRDA to DB2 for z/OS, DB2 LUW and IDS servers. See 5.6,
Developing static applications using pureQuery on page 211 for details on pureQuery.
1.5 Implementation of DRDA by non-IBM products
As mentioned, DRDA is an industry standard initially developed by IBM and adopted by The
Open Group as a database interoperability industry standard. There are a number of non-IBM
products that have implemented DRDA. Refer to The Open Group Web site for details, at the
following Web page:
http://www.opengroup.org/dbiop/
1.6 DB2 for z/OS Distributed Data Facility architecture
In this section, we describe the Distributed Data Facility (DDF) structure. DDF is the DB2 for
z/OS implementation of distributed database access. We describe the network protocols
used by DDF:
TCP/IP
SNA
We talk about inactive connection support provided by DDF and how this facilitates
connection pooling and transaction pooling.
1.6.1 What DDF is
DB2 for z/OS Distributed Data Facility (DDF) is a built-in component of DB2 for z/OS which
provides the connectivity to and from other databases over the network. DDF implements a
full DRDA Application Server and Application Requester.
DDF is DB2's transaction manager for distributed database connections. DDF has developed
mature thread management strategies to handle thousands of connections that can come
from anywhere within the bounds of the network that DB2 is operating in.
DDF runs as an additional address space in the DB2 subsystem. The address space name is
ssidDIST, where ssid is the DB2 subsystem name. DDF is an efficient connection handler. It
uses SRBs instead of TCBs, which reduces CPU time. z/OS enclaves are used in exchanging
data across address spaces. This enables proper management by Workload Manager (WLM)
of the work coming into DB2 through DDF and the possibility of routing DDF work to the zIIP
specialty engine.
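As a quick operational illustration (keywords and message output vary by DB2 level, so treat this as a sketch), the DISPLAY DDF command reports whether DDF is started and shows its location name, IP address, SQL port, and resynchronization port:

   -DISPLAY DDF DETAIL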
1.6.2 Distributed configurations
Before we discuss the DDF implementation, it is important to have an understanding of how
distributed clients can connect to a DB2 for z/OS server.
When DDF was first introduced in DB2 Version 2.2, only another DB2 for z/OS subsystem
could communicate through Private Protocol to a DB2 for z/OS server.
With DB2 Version 2.3 (which introduced DRDA RUW support), Version 3 (which introduced
DUW support) and the standardization of DRDA as a communication protocol, any DRDA
compliant requester could communicate with a DB2 for z/OS server. The popular
configuration was clients connecting to DB2 for z/OS through the various editions of DB2
Connect.
Later, connecting to DB2 for z/OS through the WebSphere Application Server provided
developers and IT Architects with an innovative, performance-based foundation to build,
reuse, run, integrate, and manage service-oriented architecture (SOA) applications and
services.
Today, the most common customer configurations include application servers connecting
through one of the IBM Data Server Drivers (either Java-based or non-Java-based) to a DB2
for z/OS server.
We expect gateway-type access to become less popular as the Data Server Drivers offer equivalent and ever-improving functions.
Figure 1-10 illustrates the most common ways distributed clients connect to a DB2 for z/OS
server.
Figure 1-10 Connecting to DDF
1.6.3 DDF implementation
An overview of the address spaces used in DB2 for z/OS to DB2 for z/OS communication is
shown in Figure 1-11 on page 19, where APPL, DBM1, MSTR, DIST, and IRLM stand for
application, database service, system service, distributed data function, and lock manager
address spaces, respectively. This figure does not show other WLM-managed address
spaces that are used for non-native stored procedures and user-defined functions, nor the
Admin Scheduler address space new with DB2 9 for z/OS.
Figure 1-11 Address spaces
Figure 1-11 shows a simplified view of the message exchanges that occur when an application running on DB2 for z/OS requests access to data on a remote DB2 for z/OS subsystem.
As mentioned earlier, the DDF functions execute in the ssidDIST address space. Processes
running in the DIST address space access the DB2 database services address space
(DBM1) using cross memory services and the Shared Memory facility. Cross Memory
Services allow synchronous use of data and programs in different address spaces. In DB2 9
for z/OS, most of the DDF control blocks are moved above the 2 GB bar. This removes storage
constraints for communications buffers and DRDA intensive usage by vendor applications.
With 64 bit, DB2 DDF uses the z/OS Shared Memory Facility to reduce data moves between
DBM1 and DDF. Shared memory is a type of virtual storage introduced in z/OS 1.5 that
resides above the 2 GB bar and allows multiple authorized address spaces to easily address
storage. 64-bit DDF is a performance enhancement for distributed server processing, but it
also provides virtual storage constraint relief. No cross memory moves are necessary for the
shared blocks, and this storage no longer needs to be allocated in the DBM1 address space
below the 2 GB bar. DDF also exploits shared private storage with TCP/IP and Unix System
Services for network operations.
In a local DB2 subsystem acting as application requester, as part of the program preparation
process, an application program is link edited with a language interface module (for example,
DSNELI for TSO). This enables the program to send SQL statements to the local subsystem,
making calls to the language interface module. When the statement references a remote
location, the request is sent to the remote subsystem through the DDF address space of the
local subsystem.
When the first SQL statement from a remote system is received by the DDF address space of
the DB2 subsystem acting as application server, a thread is created in the DBM1 address
space. This type of DB2 thread is called a Database Access Thread (DBAT). (DB2 uses two types of threads to run SQL work in the DBM1 address space: allied threads, which handle local, same-LPAR requests such as those from TSO, IMS, CICS, CAF, and RRSAF, and DBATs, which handle requests arriving through DDF.) Otherwise, and most likely, if a thread for a remote connection already exists but is idle (pooled DBAT), it is reused for the request. The DBAT is implemented through a z/OS enclave, performing work through preemptive SRBs running in the DDF address space. Subsequent SQL statements
are executed as part of the same enclave's SRBs, and the result set is returned to the
application. The enclave is kept alive until the connection goes inactive and becomes a
pooled thread (at commit time), or until the thread terminates (in case it cannot become a
pooled thread). For information about how enclaves are used by DDF, refer to 7.2.1,
Database Access Threads on page 271.
In DB2 9, DBM1 and DDF address spaces (as well as IRLM) allocate control blocks above the
2 GB bar and run in 64 bit addressing mode. DDF also uses above-the-bar storage in the
z/OS Shared Memory Facility to communicate with DBM1. See Figure 1-12 for an example of
the virtual storage layout.
Figure 1-12 DDF - Using shared private storage
DDF has moved almost all of the TCP/IP communication buffers and associated control
blocks from ECSA into shared memory, freeing systems resource for other users.
See 3.2.1, Defining the shared memory object on page 85 for information about set up and
operations.
1.6.4 Network protocols used by DDF
DDF can use either SNA or TCP/IP as a network protocol.
TCP/IP
Figure 1-13 on page 21 shows a sample configuration. In the figure, the AR is on the left, and
the AS on the right. When using DB2 for z/OS as the AR, you have to configure each remote
DB2 subsystem that you want to access by specifying the combination of an IP address and a
port number. This port number is also called the DRDA SQL port. The information that is
necessary to establish the connection to the target subsystem, including IP address, port
number, and DB2 location name, is stored in the communication database (CDB) of the local
DB2 subsystem. The DB2 location name is a unique name for the accessible server. This
is the name by which the remote server is known to local DB2 SQL applications. The location
name is the primary entry into the CDB. In addition, DB2 requires that each member of a data
sharing group have a resynchronization port number, also known as the resync port, that is
unique within the Parallel Sysplex. In the event of a failure, this unique port number allows a
requester to reconnect to the correct member so that units of work that require two-phase
commit can be resolved. The resync port only needs to be specified at the AS.
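As a hedged sketch of the requester-side definitions (SYSIBM.LOCATIONS and SYSIBM.IPNAMES are the relevant CDB tables, but the location name, link name, host name, and security settings shown here are purely illustrative and depend on your installation), a remote server can be registered with SQL such as the following:

   INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, PORT)
          VALUES ('SERVER1', 'SRV1LNK', '446');                  -- 446 is the standard DRDA SQL port
   INSERT INTO SYSIBM.IPNAMES (LINKNAME, SECURITY_OUT, USERNAMES, IPADDR)
          VALUES ('SRV1LNK', 'A', ' ', 'server1.example.com');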
When using TCP/IP as a network protocol, DDF executes TCP/IP services using the UNIX System Services callable assembler interface.
Starting with DB2 9 for z/OS, if you do not plan to communicate with remote sites with
SNA/APPC and only use TCP/IP, you do not need to define VTAM to DB2 when you update
the BSDS DDF record with an IPNAME value (location, ports, and IP address are all recorded in the BSDS). To use both IPv4 and IPv6 addresses, DB2
requires TCP/IP dual-mode stack support. Dual-mode stack support allows IPv4 address
communication with IPv4 partners, and IPv6 address communication with IPv6 partners.
DB2 9 for z/OS is an IPv6 system. All addresses are displayed in IPv6 format, even if only IPv4 addresses are used, because IP addresses may need to be displayed before DB2 can determine that a dual-mode stack is configured; for consistency, DB2 always displays addresses in IPv6 format.
Figure 1-13 Communication using TCP/IP
SNA
Figure 1-14 on page 22 shows a sample configuration, where DDF uses the Virtual
Telecommunication Access Method (VTAM) to communicate with other DB2 for z/OS
systems. Communication between subsystems uses a set of Systems Network Architecture
(SNA) communication protocols called LU6.2. To talk to a remote DB2 subsystem over SNA,
you need to know the LU name and a location name of the remote system. An LU name is the
name by which VTAM recognizes the subsystem in the network. The LUNAMES table must be defined in the CDB instead of the IPNAMES table that is used for TCP/IP. Application
programs use a location name to indicate which subsystem to access. The local DB2
subsystem also needs to set up its own VTAM ACB to get into the network.
Tip: When using any of the IBM Data Server Drivers as ARs, it is sufficient to provide the
IP address, database name, and port of the DB2 for z/OS server either as part of the client
application or as a separate configuration or datasource properties file to make a TCP/IP
connection. When using DB2 Connect as an AR, you can either use the Client
Configuration Assistant to set up a connection to the DB2 for z/OS server or catalog the database through the command line processor (CLP).
Restriction: SNA may be deprecated in a future release of DB2. The usage of TCP/IP is
highly recommended. Also note that two-phase commit is no longer supported when you
access a DB2 for z/OS server through DB2 Connect Version 8.1 or later release using
SNA.
Figure 1-14 Communication using SNA
1.7 Connection pooling
Connection pooling is the generic term given to techniques that ensure that the connection
resources for distributed database access are pre-loaded, shared, and reused efficiently.
There are many different kinds of connection pooling that can be used with DRDA
connections to the DB2 for z/OS server.
The DB2 for z/OS server provides inactive connection support, a mechanism to pool Database Access Threads (DBATs).
The IBM Data Server Drivers provide a standards-based connection pooling facility that can be exploited by applications using these APIs. They also support connection pooling and transaction pooling (see 1.7.3, Transaction pooling on page 25) when sysplex WLB is enabled.
DB2 Connect Server provides connection pooling and connection concentration functions.
Application Server Environments such as WebSphere provide their own connection
pooling software.
As a general rule of thumb, it is beneficial to exploit all the connection pooling facilities that
can be applied to any given environment. In general, we recommend concentrator pooling
(transaction pooling) over connection pooling. See 1.7.3, Transaction pooling on page 25.
1.7.1 Inactive connection support in a DB2 for z/OS server
Each inbound connection from distributed environments to the DB2 for z/OS server requires a
DDF connection and a DB2 database access thread (DBAT). DB2 for z/OS inactive
connection support (formerly referred to as Type 2 inactive thread support or thread pooling)
is a mechanism to share a few DBATs among many connected applications. See Figure 1-15
on page 23.
Figure 1-15 Inactive connection support
Inactive connection support operates by separating the distributed connections (in DDF) from
the threads (DBATs) that do the work (in DBM1). A pool of DBATs is created for use by
inbound DRDA connections. A connection makes temporary use of a DBAT to execute a
UOW and releases it back to the pool at commit time, for another connection to use. The
result is that a given number of inbound connections require a much smaller number of
DBATs to execute the work within DB2. Each DDF connection only consumes approximately
7.5 K of memory inside the DDF address space, whereas each active DBAT consumes
approximately 200 K of memory at a minimum, depending on SQL activity. The DSNZPARM
CMTSTAT=INACTIVE enables inactive connection support, and starting with DB2 for z/OS
V8, this is the default value for the DSNZPARM.
The main advantage of being able to reuse DBATs is related to a greater capacity to support
DRDA connections due to the following reasons:
CPU savings in DB2, by avoiding repeated creation and destruction of DBATs
Real memory savings in z/OS, by reducing the number of DBATs
Virtual storage savings in DBM1, by reducing the number of DBATs
See 6.2.2, Application Servers on page 249 for examples of deployment of connection
reuse and distribution.
It is important to be aware of what prevents a connection from being inactivated:
Any WITH HOLD cursors not closed
Any declared global temp tables not dropped
Any application using packages bound using the KEEPDYNAMIC option (except for
certain situations where the requester is able to tolerate the loss of KEEPDYNAMIC
environment - See 6.2.2 for details)
Held LOB locators
1.7.2 Connection pooling using the IBM Data Server Drivers
Connection pooling allows a requester to reuse an existing network connection for a different
application once an application disconnects from the connection, either by terminating or by
releasing the connection. See Figure 1-16.
Figure 1-16 Connection pooling
With connection pooling, most application processes do not incur the overhead of creating a
new physical connection because the data source can locate and use an existing connection
from the pool of connections. When the application terminates, the connection is returned to
the connection pool for reuse. After the initial resources are used to produce the connections
in the pool, additional overhead is insignificant because the existing connections are reused.
Benefits of connection pooling
Connection pooling can improve the response time of any application that requires
connections, especially Web-based applications. When a user makes a request over the Web
to a resource, the resource accesses a data source. Because users connect and disconnect
frequently with applications on the Internet, the application requests for data access can
surge to considerable volume. Consequently, the total datastore overhead quickly becomes
high for Web-based applications, and performance deteriorates. When connection pooling
capabilities are used, however, Web applications can realize performance improvements of
up to 20 times the normal results.
When using connection pooling, the connection is only available for reuse after the
application owning the connection issues a disconnect request. In many client-server
applications, users do not disconnect for the duration of the workday. Likewise, most
application servers establish database connections at start up and do not release these
connections until the application server is shut down. In these environments, connection
pooling has little, if any, benefit. However, in Web and client-server environments where the
frequency of connections and disconnections is higher, connection pooling will produce
significant performance benefits.
1.7.3 Transaction pooling
Transaction pooling allows a requester to share a network connection with other applications.
It is also referred to as Sysplex Workload Balancing. See Figure 1-17.
Figure 1-17 Transaction pooling
After a transaction completes, the requester can allow another application to reuse the same
physical connection (transport) to the server. During the commit or rollback process, the
server indicates whether the connection can be reused by another application. If reuse is
allowed, the server returns a group of SQL SET statements to the requester. The requester can then replay these special register settings to the server, to re-establish the application execution environment before the execution of the next transaction for the same application, possibly on another connection. The server always returns the special registers to the client, even if the client did not specify the DRDA RLSCONV (release conversation) instance variable.
In a sysplex, a collection of DB2 subsystems (known as members) forms a data sharing group.
One or more coupling facilities provide high-speed caching and lock processing for the
data-sharing group. The sysplex, together with the WLM dynamic virtual IP address (DVIPA),
and the Sysplex Distributor, allow a client to access a DB2 for z/OS database over TCP/IP
with network resilience, and distribute transactions for an application in a balanced manner
across members within the data-sharing group.
Central to these capabilities is a server list that each member of the DB2 data-sharing group
returns on connection boundaries and optionally on transaction boundaries. This list contains
the IP address and the workload balancing weight for each DB2 member. With this
information, a client can distribute transactions in a balanced manner, or identify the DB2
member to use when there is a communications failure.
The server list is returned on the first successful connection to the DB2 database. Therefore,
the initial database connection should be directed at the group DVIPA owned by the Sysplex
Distributor. If at least one DB2 member is available, the Sysplex Distributor will route the
request to the database. After the client has received the server list, the client directly
accesses a DB2 member based on information in the server list.
The IBM Data Server Driver for JDBC and SQLJ is capable of performing sysplex workload
balancing functions since Version 8.1 Fixpack 10. Because improvements have been made
and seamless client reroute has been added, V9 FixPack 5 and V9.5 FixPack 1 are the
recommended minimum levels. Starting with Version 9.5 FixPack 3, IBM Data Server Clients
and non-Java-based data server drivers that have a DB2 Connect license can also access a
DB2 for z/OS sysplex directly. Licensed clients no longer need to go through a middle-tier
DB2 Connect (gateway) server to use sysplex capabilities.
With workload balancing (see Figure 1-18), DB2 for z/OS and WLM ensure that work is
distributed efficiently among members of the data sharing group and that work is transferred
to another member of a data sharing group if one member has a failure.
When workload balancing is enabled, the driver gets frequent status information about the
members of a data sharing group. The driver uses this information to determine the data
sharing member to which the next transaction should be routed.
The IBM Data Server Driver for JDBC and SQLJ uses transport objects and a global transport
objects pool to support the workload balancing by performing both transaction pooling and
connection concentration. There is one transport object for each physical connection to the
data source. When you enable the connection concentrator and workload balancing, you set
the maximum number of physical connections to the data source at any point in time by
setting the maximum number of transport objects.
To configure non-Java-based client sysplex support, specify settings in the db2dsdriver.cfg
configuration file, explained in .NET Provider/CLI Driver on page 253. The DB2 for z/OS
requester uses the server list for workload balancing on a connection boundary, but it cannot
balance workload on a transaction boundary.
Figure 1-18 Sysplex workload balancing
1.8 Federated data support
Today's business climate demands fast analysis of large amounts of business-critical data from disparate sources. Businesses must access not only traditional application sources such
as relational databases, but also XML documents, text documents, scanned images, video
clips, news feeds, Web content, e-mail, analytical cubes, and special-purpose stores. Data
federation is a way of providing an end-to-end solution for transparently managing the
diversity of data. Data federation can quickly build the framework for such a solution by
allowing applications to meet the need to adapt to business change, while maintaining a
business infrastructure that keeps up with the demands of the marketplace.
DB2's federated database functionalities extend the reach of all units of work, letting you access data in multiple sources in one SQL statement.
IBM developed DB2 Data Joiner several years ago as the first vehicle for federated
technology. Later federated database technology was delivered with DB2 UDB Version 7.1.
This product provided a unified access (information integration) to diverse and distributed
data belonging to the IBM DB2 family as well as Informix IDS. IBM DB2 Information Integrator
V8.1 extended the federated approach by providing the ability to synchronize distributed data
without requiring that it be moved to a central repository. For details, see Data Federation with
IBM DB2 Information Integrator V8.1, SG24-7052, and Publishing IMS and DB2 Data using
WebSphere Information Integrator: Configuration and Monitoring Guide, SG24-7132.
The current products, branded as of September 30, 2008 as members of the IBM InfoSphere family (IBM United States Software Announcement 208-159), are shown in Figure 1-19.
Figure 1-19 InfoSphere Federation Server products
The InfoSphere brand represents integration and openness, which are integral to the
Federation Server software portfolio and its role in providing organizations with real-time,
integrated access to disparate and distributed information across IBM and non-IBM sources.
Find more information at the following Web page:
http://www.ibm.com/software/data/integration/
In this section we briefly introduce the relevant Federation Server products.
1.8.1 IBM InfoSphere Federation Server
IBM InfoSphere Information Server achieves new levels of information integration and
flexibility by delivering industry-leading capabilities for understanding, cleansing,
transforming, and delivering information. Figure 1-20 on page 29 summarizes the functional
areas.
The recently enhanced areas are as follows:
Enhanced deployment
Ensures secure deployment and simplified metadata integration.
Connectivity to industry-leading applications, databases, and file systems
IBM InfoSphere Information Server is designed for automated and optimized access to
data stored behind industry-leading applications, databases, and file systems without the
need for extensive and complex code customization.
Improved information as a service
Extends the service-oriented reach of IBM InfoSphere Information Server to InfoSphere
Master Data Management Server, IBM InfoSphere Classic Federation Server, and
Oracle.
Direct data lineage integration capabilities
Extends IBM Metadata Workbench analysis from within Cognos Framework Manager
8.4.
Data quality enhancements
Along with globalization and enhanced deployment capabilities, InfoSphere QualityStage
has been enhanced with significant match improvements.
Databases include the DB2 product family, Oracle, Microsoft SQL Server, and Sybase.
Figure 1-20 InfoSphere Federation Server
1.8.2 IBM InfoSphere Classic Federation Server for z/OS
IBM InfoSphere Classic Federation Server for z/OS is the current product that provides direct,
real-time SQL access to mainframe databases and files without mainframe programming, as
shown in Figure 1-21. It maps logical relational table structures to existing physical mainframe
databases and files. UNIX, Windows, and Linux tools and applications issue standard SQL
commands to these logical tables.
Figure 1-21 Classic Federation Server for z/OS
InfoSphere Classic Federation Server for z/OS dynamically generates native data access
commands that are optimized for each database and file type. Results are automatically
translated and reformatted into relational rows and columns providing seamless integration of
all mainframe data assets without proprietary programming.
InfoSphere Classic Federation Server for z/OS can also extend IBM InfoSphere Federation
Server's access to non-relational mainframe data sources. InfoSphere Classic Federation
Server lets applications access diverse and distributed data (including multivendor mainframe
and distributed, structured and unstructured, public and private) as though it were a single
database.
Classic Federation Server V9.5 provides extensions to the Classic Data Architect to help
automate the creation of metadata for more users and extend visual metadata management
to more complex scenarios. This improves the usability of the tool while also enabling more
rapid metadata creation and management. In addition, Classic Federation Server for z/OS
V9.5 provides an interactive view of the actual settings as well as a means for temporarily and
permanently changing configuration settings, greatly simplifying the management of the
operational platform and performance optimization processes.
Classic Federation Server for z/OS V9.5 supports new 64-bit clients and JDBC 3.0 SQL.
IBM InfoSphere Classic Federation Server for z/OS V9.5 can access data stored in VSAM,
IMS, CA-IDMS, CA-Datacom, Software AG Adabas, and DB2 for z/OS databases by
dynamically translating JDBC and ODBC SQL statements into native read/write APIs.
Data client software requirements depend on the data sources that are accessed. The client
software must be acquired separately unless specified otherwise. The client software must be
installed on the same system as the InfoSphere Federation or Replication Server.
The relationships between products and their respective functionalities are shown in
Figure 1-22.
Figure 1-22 Supported data sources
OLE DB
Excel
Flat files
Life sciences
Custom-built
InfoSphere BI
Adaptors
SAP
PeopleSoft
Siebel
Partner tools and custom-built connectors extend access to more sources
Web
Other
XML
Web services
Packaged
applications
SQL
InfoSphere Federation Server
DB2 for iSeries
DB2 for z/OS
DB2 for LUW
Informix
Oracle
Sybase
Teradata
Microsoft SQL Server
ODBC
Relational
databases
DB2 for z/OS
Relational
database
VSAM
Sequential
IMS
Adabas
CA-
Datacom
CA-IDMS
Mainframe
files
Mainframe
databases
SQL
InfoSphere Classic Federation
Server for z/OS
Chapter 1. Architecture of DB2 distributed systems 31
Federation functions
Federated access in InfoSphere Federation Server supports all the data sources shown in
Figure 1-22 on page 30.
Federated access in the DB2 homogeneous federation feature supports access to DB2 for
Linux, UNIX, and Windows, DB2 for z/OS, DB2 for System i, and Informix.
Replication and Event Publishing
SQL-based replication supports:
DB2, Informix Dynamic Server, Microsoft SQL Server, Oracle, and Sybase Adaptive
Server Enterprise as sources and targets.
Informix Extended Parallel Server and Teradata as targets.
DB2 for z/OS support requires InfoSphere Replication Server for z/OS.
DB2 for iSeries support requires IBM DB2 DataPropagator for System i.
Queue-based replication supports:
DB2 for Linux, UNIX, and Windows and DB2 for z/OS as sources and targets.
Informix Dynamic Server, Microsoft SQL Server, Oracle, and Sybase Adaptive Server
Enterprise as targets.
DB2 for z/OS support requires InfoSphere Replication Server for z/OS.
Data event publishing supports:
DB2 for Linux, UNIX, and Windows and DB2 for z/OS.
DB2 for z/OS support requires InfoSphere Data Event Publisher for z/OS.
Chapter 2. Distributed database
configurations
In this chapter we discuss different scenarios for implementing DB2 for z/OS in a distributed
database business environment. We show some of the most commonly used configurations.
We mention protocols, installation and setup considerations, and application programming
interfaces that you can use. Implementations using DB2 Private Protocol are not discussed in
this chapter, as DB2 Private Protocol function has been deprecated. We do not discuss
Systems Network Architecture (SNA) support in this chapter, as SNA support for distributed
access has not been enhanced.
As mentioned in 1.4.1, DB2 for z/OS on page 14, DB2 for z/OS implements both the DRDA
AR and DRDA AS functions.
DB2 for Linux, UNIX, and Windows (DB2 for LUW) provides DRDA AS functions as a
standard feature. DB2 for LUW is delivered to the market in a set of editions. The differences between the editions are as follows:
DB2 Express Edition
DB2 Express Edition can be run on servers with a maximum of 2 CPUs and 4 GB of real
memory.
DB2 Workgroup Server Edition
DB2 Workgroup Server Edition (WSE) can be run on servers with a maximum of 4 CPUs
and 16 GB of real memory.
DB2 Enterprise Server Edition
DB2 Enterprise Server Edition (ESE) can run on any size server with any amount of real memory.
Among the DB2 for LUW products, only DB2 for LUW Enterprise Server Edition (DB2 for
LUW ESE) includes the DRDA AR functions. DB2 Connect provides the DRDA AR functions
that allow the other DB2 for LUW products to access DB2 for z/OS data.
Applications that do not require a local DB2 for LUW database can access DB2 for z/OS data
through DB2 Connect or by acquiring a DB2 Connect license and using the DRDA AR
functions in one of the following products:
IBM Data Server Driver for JDBC and SQLJ
IBM Data Server Driver for ODBC and CLI
IBM Data Server Driver Package
IBM Data Server Runtime Client
IBM Data Server Client
We discuss configurations using DB2 for z/OS, either as a requester or a server, or both. We
also discuss configurations for two-tier and three-tier access to DB2 for z/OS, as a server,
from DB2 for LUW, DB2 Connect and the IBM Data Server products listed above. Throughout
this chapter we focus on requesters and servers, rather than AR and AS functions. This is
because DRDA has evolved to include application requester (AR), application server (AS)
and database server (DS) functions, and DB2 for z/OS, as well as other DRDA participants,
may perform different DRDA functions at different times or in different scenarios.
This chapter consists of the following sections:
DB2 for z/OS both as a requester and a server on page 35
DB2 for LUW and DB2 for z/OS on page 37
IBM Data Server Drivers and Clients as requesters on page 40
DB2 Connect to DB2 for z/OS: Past, present, and future on page 51
DB2 Connect Server on Linux on IBM System z on page 58
DB2 for z/OS requester: Any (DB2) DRDA server on page 63
XA Support in DB2 for z/OS on page 63
2.1 DB2 for z/OS both as a requester and a server
In this section, we discuss configurations where both the requester and the server are DB2 for
z/OS subsystems.
2.1.1 Basic configuration
Figure 2-1 shows the DB2 for z/OS configuration. A group of address spaces, indicated as
DB2 in the figure, provides database functions and is called a DB2 subsystem. A DB2
subsystem includes the DIST, DBM1, MSTR, and IRLM address spaces, representing the
Distributed Data Facility (DDF), database services, system services, and locking manager
address spaces, respectively.
Figure 2-1 Connecting from DB2 for z/OS to DB2 for z/OS
To send requests to or receive requests from remote subsystems, you have to start the DDF
address space. You can stop or start DDF independently from other DB2 address spaces.
However, the DDF address space (ssidDIST) itself is always started together with the other
DB2 address spaces when the DDF DSNZPARM is set to COMMAND or AUTO. By stopping
and starting DDF, you only affect the communication with other systems. DDF can act as a
requester, an intermediate server for a remote requester, and as a server. In Figure 2-1, an
application program attached to the DB2 subsystem on the left requests data from the DB2
subsystem on the right.
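For example (a sketch only; console message output is omitted), the following DB2 commands start DDF and stop it in quiesce mode without affecting the rest of the subsystem:

   -START DDF
   -STOP DDF MODE(QUIESCE)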
Configuration overview
TCP/IP is the network protocol. DB2 stores connectivity information in the communication
database (CDB) and the bootstrap data set (BSDS). An application uses the target DB2
LOCATION name to address a DB2 subsystem.
The following TCP/IP information is necessary to identify DB2 for z/OS server subsystems:
IP address or domain name of the system where the DB2 subsystem resides.
Port numbers or service names to specify port numbers. You have to define a server port
(also called the DRDA SQL port) and a resynchronization port. In the event of a failure, this unique resynchronization port number allows a requester to reconnect to the subsystem or member so that units of work that require two-phase commit can be resolved. Remote DRDA partners record DB2's resynchronization port number in their recovery logs.
At the DB2 for z/OS requester in a TCP/IP environment, the combination of IP address and
port number specifies the target subsystem in the network, which is linked with a LOCATION
name in the CDB.
Application programs can access a remote subsystem using the same APIs as are used for
the local subsystem. Both static and dynamic SQL are available.
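For instance (a hedged sketch: the location SERVER1 and the sample table are illustrative and must exist in the requester's CDB and at the server), an application can reference the remote object with a three-part name and let DB2 route the request through DDF:

   SELECT COUNT(*)
     FROM SERVER1.DSN8910.EMP;     -- location.schema.table, resolved through the CDB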
2.1.2 Parallel sysplex environment
Parallel sysplex technology is a solution to provide the highest levels of availability and scalability
to mission-critical systems. DB2 for z/OS can run in a parallel sysplex environment
(Figure 2-2).
Figure 2-2 Connecting to a DB2 data sharing group from a DB2 for z/OS system
In this configuration, a set of DB2 subsystems (called a data sharing group) shares a set of
system and user tables. Each DB2 subsystem that is part of the data sharing group is called a
member. A data sharing group can support more threads than a single DB2 subsystem. The
maximum number of DDF threads or connections in a data sharing group is the sum of the
number of threads or connections in each of the DB2 members in that data sharing group.
It is worth noting that DB2 for LUW Enterprise Server Edition (ESE) requesters (see 2.2,
DB2 for LUW and DB2 for z/OS on page 37), IBM Data Server Drivers and Clients (see 2.3,
IBM Data Server Drivers and Clients as requesters on page 40), and DB2 Connect
requesters and servers (see 2.4, DB2 Connect to DB2 for z/OS: Past, present, and future on
page 51), can also connect to a data sharing group.
The data sharing group has a single-system image for requesting applications. Requesting
applications use the LOCATION NAME of the data sharing group to direct their SQL requests
to that group. There is a single location name that identifies the data sharing group. DB2 for
z/OS also supports LOCATION ALIAS to allow requesting applications to specify a subset of
the members of the data sharing group.
For more information about how to connect to a data sharing group, see Chapter 6, Data
sharing on page 233.
2.2 DB2 for LUW and DB2 for z/OS
DB2 for LUW ESE provides support for local DB2 databases as well as DRDA server and
requester functions. Applications local to DB2 for LUW need no additional products to access
local DB2 data or data from other DRDA servers. Remote applications, including applications
on a System z with DB2 for z/OS, can access DB2 data stored in DB2 for LUW.
2.2.1 DB2 for LUW ESE as requester to DB2 for z/OS server
If you have an application that currently uses data stored in DB2 for LUW ESE, that
application can also access data stored in DB2 for z/OS, even in the same transaction. You
can also deploy applications on this Windows, UNIX, or Linux system that only access data
stored in DB2 for z/OS. Figure 2-3 shows DB2 for LUW as an application requester accessing
data from DB2 for z/OS.
Figure 2-3 DB2 for LUW ESE requester connecting to DB2 for z/OS.
If the application also accesses a local DB2 for LUW database, DB2 for LUW establishes two
processes: one for DRDA to DB2 for z/OS, the other for the local database access.
2.2.2 DB2 for z/OS as requester to DB2 for LUW as server
DB2 for LUW provides DRDA server functionality as a standard function. Applications
attached to DB2 for z/OS can access data stored in DB2 for LUW, with a network connection,
without additional software. DB2 for z/OS requires an entry in its Communication Database
(CDB) to perform DRDA requester functions. Figure 2-4 on page 38 shows DB2 for z/OS
using DDF to access data in a DB2 for LUW database. As in Figure 2-3, an application
attached to DB2 for z/OS can access data stored in DB2 for z/OS and data stored in DB2 for
LUW in the same transaction. In this case the server could be DB2 for LUW ESE or DB2 for
LUW WSE, because the DRDA requester functions are not required.
Figure 2-4 DB2 for z/OS requester connecting to DB2 for LUW ESE or DB2 for LUW WSE
One difference from the previous example is that for a given application DB2 for z/OS uses a
single process to provide access to both DB2 for LUW data and DB2 for z/OS data.
2.2.3 DB2 for z/OS as an intermediate server
DDF can perform as a gateway by acting as both server and requester for the same
connection. In this case we use the terms upstream requester, intermediate server and
downstream server. Figure 2-5 illustrates an upstream requester accessing data in DB2 for
z/OS, then accessing data on a downstream server. DB2A is the intermediate server when
the application accesses data on the downstream server.
In this example, the requester application issues a CONNECT TO DB2A and a SELECT
FROM TABLE1. DB2A's DDF serves as a DRDA AS to satisfy those statements. Next, the
application issues a SELECT FROM DB2B.MYDB.TABLE2. In this case, DB2A's DDF, acting
as an AS for the requesting application, resolves the three-part name and acts as an AR to
direct the request to DB2B, which provides the database server (DS) function.
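In schematic SQL, the statement flow just described looks like the following (table and location names are those of the scenario in Figure 2-5; column lists are omitted):

CONNECT TO DB2A;
SELECT * FROM TABLE1;              -- satisfied by DB2A, the intermediate server
SELECT * FROM DB2B.MYDB.TABLE2;    -- three-part name; DB2A forwards the request to DB2B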
Figure 2-5 DB2A as an intermediate server between a requester and a server
The same concept applies to any DRDA-supporting product acting as a downstream server.
DB2 for z/OS can be an intermediate server from any DRDA upstream requester to any
DRDA downstream server.
2.2.4 DB2 for z/OS as requester to a federation server
There is a special case where DB2 for z/OS acts as a DRDA requester to a federation server
on behalf of locally attached applications. In this case, DB2 for z/OS is not the two-phase
commit (2PC) coordinator. If all the data sources accessed during the unit of work (UOW)
support 2PC, for example in a global XA transaction, there are no issues. But if one of the
data sources does not support 2PC and is updated as part of one-phase commit (1PC)
processing, then there are special DRDA flows between DB2 for z/OS and the federation
server to determine how to handle updates to the 1PC data source in the UOW.
This support was added to DB2 for z/OS as part of DB2 9 for z/OS, and was retrofitted to DB2
for z/OS V8. Refer to Figure 2-6 as we discuss this situation. In this case DB2A acts as a
DRDA requester on behalf of an IMS or CICS transaction. IMS and CICS are non-DRDA 2PC
coordinators with respect to DB2, which is a 2PC participant. In Figure 2-6, an application
attempts to update several data sources, including two through a federation server (DB2C, a
DB2 for LUW system). TABLE1 is a DB2 for z/OS table at a remote site, in DB2B. TABLE2 is
a DB2 for LUW table in a database local to the federation server, DB2C. TABLE3 is a
non-DRDA data source, such as a file, a non-relational database or non-DRDA relational
database, that is associated to the federation server. TABLE4 is a DB2 for z/OS table in the
local DB2A subsystem.
Figure 2-6 DB2 for z/OS as requester in a federation server, unprotected update scenario
The business requirement in our scenario is that the application update all four tables. The
requirement to update all four tables presents a distinct challenge, because the non-DRDA
data source only supports one-phase commit (1PC). Therefore, any update to TABLE3 must
be an unprotected update. That is, it cannot be coordinated with other updates to tables
where the database supports 2PC. We will describe several options the application may
pursue.
First, the application may try to update the tables in order in a single UOW. In summary, the
application would look similar to Example 2-1.
Example 2-1 Updating tables in order in a single UOW
UPDATE TABLE1;   -- 2PC: DB2 for z/OS table at the remote subsystem DB2B
UPDATE TABLE2;   -- 2PC: DB2 for LUW table local to the federation server DB2C
UPDATE TABLE3;   -- 1PC: non-DRDA data source behind the federation server
UPDATE TABLE4;   -- 2PC: DB2 for z/OS table in the local subsystem DB2A
COMMIT;
In this case, the first two updates would be in process when the application attempted to
update TABLE3. Because the update to TABLE3 is an unprotected update, and other 2PC
updates were already in progress, the update to TABLE3 would fail and the whole transaction
would roll back.
Second, the application may try to update TABLE3 first, then update the other tables. In this
case, the update to TABLE3 would succeed, and the federation server would respond back to
DB2A that an unprotected update had been performed. At this point, DB2A would disallow
updates to other sources. Any access other than SELECT for TABLE1, TABLE2, or TABLE4
would result in the application receiving a -919 SQL code.
Third, the application might try to update all the other tables first and to update TABLE3 last.
However, just as in the first scenario, the fact that other 2PC updates had already occurred
would cause DB2A to disallow the unprotected update in this UOW.
Finally, the application would have to approach the 2PC data sources separately from the
1PC data source. That is, the application would either have to update TABLE3, commit, and
perform the other updates dependent upon the result of the first, or update TABLE1, TABLE2,
and TABLE4, issue a commit, and then update TABLE3.
The point to remember is, when unprotected updates are allowed, other updates in the same
UOW are disallowed.
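A minimal sketch of the last approach follows (schematic SQL; a real application would check the outcome of the first unit of work before starting the second):

UPDATE TABLE3;   -- unprotected (1PC) update through the federation server
COMMIT;          -- ends the first unit of work
UPDATE TABLE1;   -- the 2PC updates run in a separate unit of work
UPDATE TABLE2;
UPDATE TABLE4;
COMMIT;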
For more information about federation servers, refer to 1.8, Federated data support on
page 26.
2.3 IBM Data Server Drivers and Clients as requesters
The IBM strategy is to remove the reliance on the DB2 Connect modules and replace DB2
Connect with the IBM Data Server Drivers or Clients. While DB2 Connect licenses (in the
form of DB2 Connect license files) are still required, you can replace DB2 Connect modules
with the IBM Data Server Drivers or Clients and receive equivalent or superior function. In
addition, you can reduce complexity, improve performance, and deploy application solutions
with smaller footprints for your business users.
With DB2 for LUW Version 9.5 FixPack 3 or FixPack 4 you can implement the DRDA AR
functions for your distributed applications with varied degrees of granularity. Instead of the
current function and large footprint of DB2 Connect, you can choose from the IBM Data
Server Drivers, the IBM Data Server Runtime Client, and the IBM Data Server Client. The
IBM Data Server Drivers include:
IBM Data Server Driver for JDBC and SQLJ
IBM Data Server Driver for ODBC and CLI
IBM Data Server Driver Package
In this section we introduce these drivers and clients and discuss situations where you can
take advantage of them to reduce complexity and improve performance and availability for
distributed access to DB2 for z/OS data.
2.3.1 DB2 distributed clients: Historical view
IBM has delivered a variety of client products for applications, application developers, and
database administrators to support distributed access to data stored in DB2 for z/OS. The
names and function of these products in recent DB2 for LUW versions are highlighted in
Table 2-1.
Table 2-1 Recent history of DB2 client products

DB2 8.2 Clients                     DB2 9 Clients                      DB2 9.5 and 9.7 Clients
DB2 Administration Client and       DB2 Client                         IBM Data Server Client
DB2 Application Development Client
DB2 Run-Time Client                 DB2 Runtime Client                 IBM Data Server Runtime Client
Java Common Client                  IBM DB2 Driver for JDBC and SQLJ   IBM Data Server Driver for JDBC and SQLJ
n/a                                 IBM DB2 Driver for ODBC and CLI    IBM Data Server Driver for ODBC and CLI
n/a                                 n/a                                IBM Data Server Driver Package

Each of the DB2 9.5 clients in the right-most column is described in the sections that follow.
2.3.2 IBM Data Server Drivers and Clients overview
The IBM Data Server products include DRDA AR functions in FixPack 3. With DB2 9.5
FixPack 3, installing DB2 Connect modules is no longer required to connect to mid-range or
mainframe databases, although a license for DB2 Connect is still required. You may choose
to use DB2 Connect as a server in some circumstances. This is discussed in detail in 2.5,
DB2 Connect Server on Linux on IBM System z on page 58.
In this section we briefly describe these drivers and clients, including examples showing
distributed applications accessing data stored in DB2 for z/OS without using a DB2 Connect
Server. Following these examples we include a table that compares the driver and client
products. Start with the IBM DB2 9.5 Information Center at the following Web page for more
information about these products:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.swg.
im.dbclient.install.doc/doc/c0022612.html
IBM Data Server Driver for JDBC and SQLJ
This driver is for applications using only Java. This driver provides support for client
applications and applets that are written in Java using JDBC or SQLJ. This is the latest driver
from IBM to support JDBC connectivity. We present a brief introduction to JDBC drivers, then
return to the other IBM Data Server products.
DB2 support for JDBC drivers
The DB2 product includes support for two types of JDBC driver architecture, the Type 2 driver
and the Type 4 driver.
Driver for JDBC Type 2 connectivity (Type 2 driver)
Type 2 drivers are written partly in the Java programming language and partly in native
code. The drivers use a native client library specific to the data source to which they
connect. Because of the native code, their portability is limited. Use of the Type 2 driver to
connect to DB2 for z/OS is recommended for WebSphere Application Server running on
System z.
Driver for JDBC Type 4 connectivity (Type 4 driver)
Type 4 drivers are pure Java and implement the network protocol for a specific data
source. The client connects directly to the data source. The JDBC Type 4 driver is
recommended to connect distributed Java applications to DB2 for z/OS data.
IBM ships two versions of the JDBC Type 4 driver with the IBM Data Server Driver for
JDBC and SQLJ V9.5 FP3 product:
Version 3.5x is JDBC 3.0-compliant. It is packaged as db2jcc.jar and sqlj.zip and
provides JDBC 3.0 and earlier support.
Version 4.x is JDBC 3.0-compliant and supports some JDBC 4.0 functions. It is
packaged as db2jcc4.jar and sqlj4.zip.
The Type 4 driver provides support for distributed transaction management. This support
implements the Java 2 Platform, Enterprise Edition (J2EE), Java Transaction Service
(JTS), and Java Transaction API (JTA) specifications, which conform to the X/Open
standard for distributed transactions (Distributed Transaction Processing: The XA
Specification), available from the following Web page:
http://www.opengroup.org
We include an example of XA support in 2.7, XA Support in DB2 for z/OS on page 63.
Applications that use the IBM Data Server Driver for JDBC and SQLJ to access DB2 for z/OS
data across a network implement the Type 4 driver. In Figure 2-7 an application uses the IBM
Data Server Driver for JDBC and SQLJ to access a standalone DB2.
Figure 2-7 IBM Data Server Driver for JDBC and SQLJ connecting directly to DB2 for z/OS
In the remainder of the discussion and examples of the IBM Data Server Driver for JDBC and
SQLJ, we refer to the Type 4 driver support.
IBM Data Server Driver for ODBC and CLI (CLI driver)
This product is for applications using ODBC or CLI only and provides a lightweight
deployment solution designed for ISV deployments. This driver, also referred to as CLI driver,
provides runtime support for applications using the ODBC API or CLI API, without the need of
installing the IBM Data Server Client or the IBM Data Server Runtime Client.
The CLI driver is conceptually similar to the JDBC Type 4 driver. The CLI driver is packaged in
a small footprint, providing the DRDA AR functions necessary to connect to DB2 for z/OS for
those application scenarios where you do not require robust tools, development or
administration functions.
In Figure 2-8, an application uses the CLI driver to access DB2 for z/OS data directly.
Figure 2-8 IBM Data Server Driver for ODBC and CLI connecting directly to DB2 for z/OS
IBM Data Server Driver Package
IBM Data Server Driver Package provides a lightweight deployment solution providing
runtime support for applications using ODBC, CLI, .NET, OLE DB, open source, or Java APIs
without the need of installing Data Server Runtime Client or Data Server Client. This driver
has a small footprint and is designed to be redistributed by independent software vendors
(ISVs), and to be used for application distribution in mass deployment scenarios typical of
large enterprises.
The IBM Data Server Driver Package capabilities are as follows:
Support for applications that use ODBC, CLI, or open source (PHP or Ruby) to access
databases.
Support for client applications and applets that are written in Java using JDBC, and for
embedded SQL for Java (SQLJ).
IBM Informix Dynamic Server support for .NET, PHP, and Ruby.
Application header files to rebuild the open source drivers.
Support for DB2 Interactive Call Level Interface (db2cli).
On Windows operating systems, IBM Data Server Driver Package also provides support
for applications that use .NET or OLE DB to access databases. In addition, this driver is
available as an installable image, and a merge module is available to allow you to easily
embed the driver in a Windows Installer-based installation.
On Linux and UNIX operating systems, IBM Data Server Driver Package is not available
as an installable image.
In Figure 2-9, an application uses the IBM Data Server Driver Package to access DB2 for
z/OS data directly.
Figure 2-9 IBM Data Server Driver Package connecting directly to DB2 for z/OS.
IBM Data Server Runtime Client
This product allows you to run applications on remote databases. Graphical user interface
(GUI) tools are not included. Capabilities are as follows:
Command line processor (CLP)
Base client support for database connections, SQL statements, XQuery statements and
commands
Support for common database access interfaces (JDBC, SQLJ, ADO.NET, OLE DB,
ODBC, command line interface (CLI), PHP and Ruby), including drivers and ability to
define data sources
Lightweight Directory Access Protocol (LDAP) exploitation
Support for TCP/IP and Named Pipe
Support for multiple concurrent copies and various licensing and packaging options
The IBM Data Server Runtime Client (Runtime Client) has a rich set of SQL APIs for
deployment in more complex application environments. In Figure 2-10 the Runtime Client
accesses DB2 for z/OS data directly, potentially supporting applications using different SQL
APIs.
Figure 2-10 IBM Data Server Runtime Client connecting directly to DB2 for z/OS
IBM Data Server Client
This is the full-function product for application development, database administration, and
client/server configuration. Capabilities are as follows:
Configuration Assistant
Control Center and other graphical tools
First Steps for new users
Visual Studio tools
IBM Data Studio
Application header files
Precompilers for various programming languages
Bind support
All the functions included in the IBM Data Server Runtime Client
Figure 2-11 shows an example of the IBM Data Server Client supporting application
development, database administration and applications with direct access to DB2 for z/OS
data.
Figure 2-11 IBM Data Server Client connecting directly to DB2 for z/OS
Driver and Client comparison
The highlights of the IBM Data Server products are summarized in Table 2-2. Refer to
standard DB2 for LUW product documentation for additional details.
Table 2-2 IBM Data Server Drivers and Clients comparison
Product                                    Smallest   JDBC and  ODBC     OLE DB    Open    CLP   DBA, Dev,
                                           footprint  SQLJ      and CLI  and .NET  Source        GUI tools
IBM Data Server Driver for JDBC and SQLJ   X          X
IBM Data Server Driver for ODBC and CLI    X                    X
IBM Data Server Driver Package                        X         X        X         X
IBM Data Server Runtime Client                        X         X        X         X       X
IBM Data Server Client                                X         X        X         X       X     X
2.3.3 Connecting to a DB2 data sharing group
Any of the IBM Data Server Drivers or Clients can connect your applications directly to a DB2
data sharing group. We show the basic configurations in Figure 2-12. Java-based
applications use the Type 4 driver while non-Java-based applications use the CLI driver.
Figure 2-12 IBM Data Server Drivers connecting to a DB2 data sharing group
Java applications can balance their DB2 for z/OS data accesses across the data sharing
group on transaction boundaries (that is, after commits). Java applications are thread-based.
The Java driver can manage the threads across the driver's multiple connections to the data
sharing members. The Type 4 driver, which provides connection concentration and workload
balancing to distributed WebSphere applications accessing data in a DB2 for z/OS data
sharing group, provides the same support to individual Java applications.
IBM Data Server Drivers and Clients at Version 9.5 FixPack 4 behave the same way, with
sysplex workload balancing (WLB) available on commit boundaries even on a single
connection. Because FixPack 4 provides WLB for a single connection, DB2 Connect server is
not needed for typical CLI/ODBC applications.
With IBM Data Server Drivers and Clients at Version 9.5 FixPack 3, individual ODBC and CLI
applications tend to stay connected to the same member until they end, then drop the
connection. Subsequent connections may be made to either member of the data sharing
group. These applications exhibit this behavior because ODBC and CLI applications tend to
be process-based and do not maintain a connection to the data sharing group once an
individual process ends. While it is possible to write an ODBC process that manages
separate connections to a data sharing group on behalf of multiple applications, this is not the
standard implementation in most customer installations. Until you deploy FixPack 4, one way
to take advantage of DB2 data sharing workload balancing benefits for single-connection
ODBC and CLI applications is to use a DB2 Connect Server, as in Figure 2-20 on page 56.
ODBC and CLI applications that are part of a Web server configuration, for example in a .NET
environment, can take advantage of transaction pooling and workload balancing across data
sharing members, even though the individual applications will not tend to workload balance
on their own behalf. The Web service performs the connection concentration and workload
balancing function.
For more information about distributed access to a DB2 data sharing group, refer to
Chapter 6, Data sharing on page 233.
2.3.4 Choosing the right configuration
For most customer configurations that currently use DB2 Connect Client for DRDA AR
functions, one of the IBM Data Server products can replace the DB2 Connect Client, resulting
in a significantly reduced footprint. A license for DB2 Connect is still required.
There are many cases where customers currently use DB2 Connect Server to provide a
single point of connectivity to numerous workstations supporting a variety of applications. In
these cases, customers can deploy one or more of the IBM Data Server products with a
smaller footprint and achieve overall benefits. For example, while DB2 Connect Servers
provide a single point for managing connectivity, they introduce additional overhead and
elapsed time to the applications accessing DB2 for z/OS data.
In addition, many customers must clone their DB2 Connect Servers to provide a fault tolerant
configuration. Otherwise a server failure could impact a broad spectrum of business
applications. The availability of an application on an individual workstation is not improved by
the addition of a server. Even if the server is cloned, the availability of workstation access to
DB2 for z/OS data does not improve over direct access.
Finally, if an application experiences a performance problem, the presence of the DB2
Connect Server complicates the efforts to identify the source of the problem. The overhead,
elapsed time, cloned server, and performance monitoring challenges may be removed or
reduced by implementing IBM Data Server products.
In the case of DB2 data sharing, the function available in the IBM Data Server Drivers and
Clients is superior to the SYSPLEX support in the DB2 Connect Server. In an IBM Data Server
Driver (or Client) configuration, if you enable sysplex workload balancing (sysplex WLB),
which includes connection concentration, you will receive better availability characteristics
than DB2 Connect Server with the SYSPLEX feature. The IBM Data Server Drivers and
Clients provide seamless rerouting after a network failure or when a DB2 member fails or is stopped.
In the DB2 Connect case, any time a member is shut down, the application gets a
communications failure that it must handle.
In this section we illustrate several scenarios where IBM Data Server products can replace
DB2 Connect Client or DB2 Connect Server. In the next section we describe a general
situation in which a DB2 Connect Server may still be advantageous. Table 2-3 on page 48
provides a summary of the considerations for replacing DB2 Connect with the IBM Data
Server Drivers or Clients.
Table 2-3 Replacing DB2 Connect Server with IBM Data Server Drivers or Clients

Pros:
Improved performance. Performance on the workstation, application servers, or Web
application servers can improve due to reduced network traffic and a reduced code path.
Improved availability. Application access to DB2 for z/OS data is equal or superior to a
three-tier configuration, due to the elimination of a point of failure.
Improved visibility. It is easier to monitor application, workstation, application server, or
Web application server traffic and behavior.
Improved problem determination. It is easier to identify the location of problems and the
customers affected. Tools for analyzing data and network traffic can provide a detailed view.
Improved security. The z/OS Communications Server Intrusion Detection Services (IDS) can
be used to control transports at the server.

Cons:
Some reduction in control of workload priorities: limited ability to constrain low priority work,
and potential impact to high priority distributed or mainframe applications.
DB2 Connect Server is still required for XA transactions using the multi-transport model.
Business scenarios: A variety of requirements for distributed access
Most customers who use DB2 for z/OS to manage enterprise data have a variety of business
applications with varying requirements for workstation clients. In some cases individual
workstations deal directly with DB2 for z/OS data. In other cases business users access
application servers which in turn access DB2 for z/OS data. Most customers will have Web
application servers with Java-based applications. In addition there are the mainframe-based
business applications, which also access DB2 for z/OS data, and which represent a
substantial and vital element of business processing. These mainframe applications must
also be considered in the overall effort to meet business requirements. This point will become
important in a scenario in the next section.
Current configuration with DB2 Connect
We show a typical customer configuration in Figure 2-13 on page 49, which might resemble
your starting point when migrating to IBM Data Servers Drivers and Clients. Here the
individual workstations access DB2 for z/OS directly using DB2 Connect Client installed on
each workstation. (We use DB2 Connect Client to refer to the DB2 Connect product for use by
an individual workstation.) The application servers and Web application servers use DB2
Connect Servers to take advantage of connection concentration and single point of
management.
Figure 2-13 Current configuration with DB2 Connect Client and Server
Replacing DB2 Connect Client and Server with IBM Data Server Drivers
For most workstations running business applications, especially those which require a single
SQL API, the full range of DB2 Connect Client function is not necessary. The IBM Data
Server Driver for ODBC and CLI and the IBM Data Server Driver for JDBC and SQLJ each
provide the DRDA functions and support for the corresponding language APIs.
If you have developers and DBAs using workstations connected directly to DB2 for z/OS, you
can install the IBM Data Server Client to provide the DRDA AR functions plus a rich set of
productivity tools. If you have some workstations that require a CLP, you can install the IBM
Data Server Runtime Client. For workstations that run business applications requiring more
than a single API you can install the IBM Data Server Driver Package. Any of these options
will require a smaller footprint than DB2 Connect Client.
You may also be able to replace the DB2 Connect Server with IBM Data Server Drivers. In
cases where application servers only need to support ODBC and CLI APIs, the IBM Data
Server Driver for ODBC and CLI will be sufficient. In cases where Web application servers
only need to support JDBC and SQLJ APIs, the IBM Data Server Driver for JDBC and SQLJ
will be sufficient. The IBM Data Server Driver Package may be necessary if the application
servers or Web application servers need to support multiple APIs for the applications they
serve.
If you require CLP support, or if your developers or DBAs are operating in these
environments, you will need the IBM Data Server Runtime Client or the IBM Data Server
Client. For example, if you need to compile your applications, you will need the IBM Data
Server Client.
Figure 2-14 illustrates the situation where all the DB2 Connect Clients and Servers are
replaced with one of the IBM Data Server Drivers or Clients.
Figure 2-14 DB2 Connect Client and Server replaced with IBM Data Server Drivers
Each of the IBM Data Server products provides the DRDA AR functions necessary to access
DB2 for z/OS data. In addition, they provide support for parallel sysplex, including workload
balancing, to help you to achieve high application availability and workload balancing.
Refer to 2.7, XA Support in DB2 for z/OS on page 63 for discussion of a situation where
DB2 Connect Server function is still required.
See Chapter 6, Data sharing on page 233 for more information about connecting to a DB2
data sharing group.
2.3.5 Ordering the IBM Data Server Drivers and Clients
You can download the IBM Data Server Drivers and Clients from the IBM download site
(http://www.ibm.com/support/docview.wss?rs=4020&uid=swg21385217) where you will see
Table 2-4 on page 51, which can help in identifying the package you need.
Table 2-4 IBM Data Server Client Packages: Latest downloads (V9.7)

IBM Data Server Driver Package (DS Driver)
  Contains drivers and libraries for various programming language environments. It provides
  support for Java (JDBC and SQLJ), C/C++ (ODBC and CLI), and .NET, as well as database
  drivers for open source languages such as PHP and Ruby. It also includes an interactive
  client tool called CLPPlus that can execute SQL statements and scripts and generate
  custom reports.
IBM Data Server Driver for JDBC and SQLJ (JCC Driver)
  Provides support for JDBC and SQLJ for client applications developed in Java. Supports the
  JDBC 3 and JDBC 4 standards. Also called the JCC driver.
IBM Data Server Driver for ODBC and CLI (CLI Driver)
  The smallest of all the client packages. Provides the Open Database Connectivity (ODBC)
  and Call Level Interface (CLI) libraries for C/C++ client applications.
IBM Data Server Runtime Client
  A superset of the Data Server Driver Package. It includes many DB2-specific utilities and
  libraries, including the DB2 Command Line Processor (CLP) tool.
IBM Data Server Client
  The all-in-one client package, including all the client tools and libraries available. It includes
  DB2 Control Center, a graphical client tool that can be used to manage DB2 servers, and
  add-ins for Visual Studio.
IBM Database Add-Ins for Visual Studio
  Contains the add-ins for Visual Studio for .NET tooling support.
2.4 DB2 Connect to DB2 for z/OS: Past, present, and future
Many customers have large volumes of business-critical data stored in DB2 for z/OS
databases. DB2 for z/OS is an excellent database for business-critical data because of the
outstanding capacity and availability characteristics of the mainframe platform. Historically,
DB2 Connect was the product for Windows, UNIX, and Linux platforms that provided the
DRDA requester functions to allow the strengths of DB2 for z/OS to be combined with the
applications and development frameworks available on distributed platforms. DB2 Connect
provided these DRDA requester functions in two-tier, three-tier (or server), or application
server configurations. Using DB2 Connect, any application on any platform that could
interface with DB2 Connect could communicate with any DRDA AS or DS.
In this section we describe configurations with DB2 Connect as a requester and DB2 for z/OS
as a server. We give examples of DB2 Connect in two-tier, three-tier, and application server
configurations. These examples may be similar to what you have used in the past or what you
are using now to access DB2 for z/OS data. We discuss DB2 Connect Client and DB2
Connect Server functions only. There are a variety of licensing alternatives. Refer to standard
DB2 Connect product documentation for details, starting at the DB2 Connect Web site:
http://www.ibm.com/software/data/db2/db2connect/
The IBM strategy is to remove the reliance on the DB2 Connect modules and replace DB2
Connect with the IBM Data Server Drivers or Clients. While DB2 Connect licenses (in the
form of DB2 Connect license files) are still required, you can replace DB2 Connect modules
with the IBM Data Server Drivers or Clients and receive equivalent or superior function. In
addition, you can reduce complexity, improve performance, and deploy application solutions
with smaller footprints for your business users.
We intend for the descriptions in this section to correspond to the recent past or what you
have currently installed. In the previous section, which represents the future, we provide
descriptions and examples of the IBM Data Server Clients and Drivers. Refer to 2.3, IBM
Data Server Drivers and Clients as requesters on page 40 if you are not familiar with the IBM
Data Server Drivers and Clients.
2.4.1 DB2 Connect Client as requester
DB2 Connect Client provides direct access from workstations to mainframe DB2 servers. This
product is licensed for use by a single user, so it cannot be used for server configurations.
Each workstation has to install the DB2 Connect Client and has to be configured with network
connectivity. DB2 Connect Client is used for database access by applications implementing a
two-tier architecture. DB2 Connect Client at Version 9.5 FixPack 3 is available for Linux,
UNIX, and Windows platforms.
In Figure 2-15 we show that application programs running on a workstation with DB2 Connect
Client can use one of numerous APIs to access data stored in DB2 for z/OS.
Figure 2-15 DB2 Connect Client connecting to DB2 for z/OS
This example suggests the full functionality of DB2 Connect Client, all of which is available to
each workstation that installs it. For many business situations you may prefer a smaller code
footprint that still enables workstation applications to access DB2 for z/OS data. The IBM
Data Server Drivers offer the DRDA AR functions without requiring the installation of DB2
Connect. A DB2 Connect license is still required. Refer to 2.3, IBM Data Server Drivers and
Clients as requesters on page 40 for more information.
2.4.2 DB2 Connect Server
DB2 Connect Server is a server version of DB2 Connect Client. DB2 Connect Server
provides all the functions of DB2 Connect Client, plus connection concentration, enhanced
database support, sysplex workload balancing, and performance, monitoring, and availability
features. DB2 Connect Server supports three-tier and application server configurations.
In a three-tier configuration, where multiple workstations access DB2 for z/OS data using a
DB2 Connect Server, each workstation must have a driver or client installed. You can select
from the following drivers or clients:
IBM Data Server Client
IBM Data Server Runtime Client
IBM Data Server Driver Package
IBM Data Server Driver for ODBC and CLI
IBM Data Server Driver for JDBC and SQLJ
In a three-tier configuration, DB2 Connect Server generally provides the DRDA AS function
on behalf of one or more of the drivers or clients listed above, and DB2 for z/OS provides the
DRDA Data Server (DS) function.
The IBM Data Server Driver or Client (DRDA AR) provides the SQL interfaces and passes the
database request through the DB2 Connect Server (DRDA AS) to the DB2 for z/OS (DRDA
DS). Figure 2-16 shows a DB2 Connect Server providing access to DB2 for z/OS data for a
number of desktop clients. These desktop clients could have a variety of IBM Data Server
Drivers or Clients installed. Refer to 2.3, IBM Data Server Drivers and Clients as requesters
on page 40 for a discussion of these choices.
Figure 2-16 DB2 Connect Server providing access to DB2 for z/OS
Beginning with the Type 4 driver for Java and for ODBC and CLI with the IBM Data Server
Drivers and Clients Version 9.5 FixPack 3, DB2 Connect Server is no longer required
(although a DB2 Connect license file is required). One of the historic advantages of the DB2
Connect Server configuration over the DB2 Connect Client configuration was that the server
could provide the connection concentrator function. The server could manage a relatively
small number of connections (or transports) to DB2 for z/OS on behalf of a large number of
workstation applications. Figure 2-17 on page 54 illustrates an expansion in the number of
desktop clients without a corresponding increase in the number of connections between the
DB2 Connect Server and DB2 for z/OS. Refer to 1.7.3, Transaction pooling on page 25 for
the introduction to connection concentration.
Figure 2-17 Example of DB2 Connect Server providing connection concentration
There may be some situations where a DB2 Connect Server provides you with advantages in
a configuration similar to Figure 2-17. This will be the exception, not the rule. For a discussion
of such a situation, refer to 2.5, DB2 Connect Server on Linux on IBM System z on page 58.
As a rule, you should connect your IBM Data Server Drivers or Clients directly to DB2 for
z/OS.
Many applications are built using a three-tier model. They can be purchased applications like
SAP, Siebel, or PeopleSoft, or home-grown applications using Java-based or .NET
development frameworks. An example is shown in Figure 2-18.
Figure 2-18 DB2 Connect Server in a Web application server environment
In this model, the application server acts as a concentrator for thin clients (typically browsers),
which means there is no need to deploy DB2 clients on the desktops. DB2 Connect Server has provided the
DB2 connectivity from the application server to the database server on DB2 for z/OS. Now
you can use the IBM Data Server Drivers or Clients to connect the Web application server to
DB2 for z/OS.
2.4.3 Connecting to a DB2 data sharing group from DB2 Connect
DB2 data sharing provides capacity, scalability, availability, and workload balancing benefits
to applications that use DB2 for z/OS data. If you use DB2 Connect to access data in a DB2
data sharing group, DB2 Connect should specify the group dynamic virtual IP address (group
DVIPA), the SQL port and the location name of the DB2 for z/OS data sharing group. The
Sysplex Distributor will direct the request to an available DB2 member. After the first
connection, DB2 Connect will receive a server list from DB2 for z/OS that indicates which
members of the data sharing group are available. Subsequent requests can be balanced on
transaction boundaries among the members of the data sharing group.
Refer to Chapter 6, Data sharing on page 233 for more information about Sysplex
Distributor, DVIPA and configuring DB2 data sharing for availability and workload balancing.
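For illustration, the DB2 Connect catalog definitions for a data sharing group resemble the following CLP commands (the node name, group DVIPA host name, port, database names, and location name DB9G are hypothetical):

db2 catalog tcpip node db9gnode remote db9g-dvipa.example.com server 12345
db2 catalog dcs database db9gdcs as DB9G
db2 catalog database db9gdcs as db9g at node db9gnode authentication dcs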
DB2 Connect Client as requester to a data sharing group
In Figure 2-19 we show DB2 Connect Client connecting to the members of a DB2 data
sharing group. Applications connect to the data sharing group (by the group's LOCATION)
and DB2 Connect manages transports to the members of the group and can balance work
between the members across transaction boundaries (after commits).
Figure 2-19 DB2 Connect Client connecting to a DB2 data sharing group
DB2 Connect Server connecting to a data sharing group
In Figure 2-20 on page 56 we show a DB2 Connect Server allowing multiple desktops to
access the members of a DB2 data sharing group. After the first connection from DB2
Connect Server, subsequent connections are balanced across the available members of the
data sharing group. Application requests to DB2 for z/OS can be balanced across the DB2
data sharing members on transaction boundaries (after commits).
DB2 Connect Server provides connection concentration for the desktop clients, optimizing
DB2 resource usage in the parallel sysplex.
You may have this configuration currently. Often the DB2 Connect Server represents a single
point of failure to the desktop clients. By using the IBM Data Server Drivers to connect directly
to the DB2 for z/OS data sharing group, you may be able to improve the overall availability of
your applications.
Figure 2-20 DB2 Connect Server providing access to a DB2 data sharing group
2.4.4 DB2 Connect: A case of managing access to DB2 threads
There are rare situations where you may need to implement controls on distributed access to
DB2 for z/OS. One of the controls you might choose is a DB2 Connect Server to manage the
number of connections for the workstation, application server, or Web application server
traffic to DB2 for z/OS even when the IBM Data Server products provide all the connectivity
functions you require. These rare situations would be those where there are a number of high
priority clients and a smaller number of lower priority clients who might require extended
connectivity to DB2. For example, the high priority business tasks could be efficient
transactions that commit after each DB2 interaction, freeing the DB2 DBAT resources, while
the lower priority business tasks might require several interactions, and network round-trips,
before the task is complete. In such a situation, a sporadically large number of lower priority
tasks might accumulate enough DBATs to constrain some higher priority business tasks from
acquiring a DBAT.
Distributed access requests to DB2 for z/OS are managed, once the thread is established
within DB2, by z/OS Workload Manager (WLM) service classes. Neither DDF nor WLM make
a distinction between the various sources of distributed access requests for connection to
DB2 nor for allocation of active database threads (DBATs). Thus, requests that are low in
priority from a business perspective compete for DB2 connections and DB2 DBATs equally
with high priority business requests. Once a DBAT is running, it will receive access to DB2
resources, but its access to CPU service units will be based on WLM service classifications.
A low priority business transaction could receive a smaller share of service units and
therefore retain its DBAT and other DB2 resources for a longer time than a high priority
business transaction that receives a larger share of service units, completes its processing
quickly, and then releases the DB2 resources it used.
One possible effect is that a large number of low priority requests, especially complex ones,
could reduce the number of or monopolize the available DBATs, effectively delaying high
priority work from entering the system. If the low priority requests are of long duration, they
could shut out some high priority work requests for an extended period. Since there is no way
to specify within DDF which requests are high priority and which are low priority, you must
implement your distributed architecture with due consideration to the relative priorities of the
various requests.
The effect described above can apply to local agents attached to DB2 for z/OS as well. Low
priority requests for distributed access may accumulate and retain DB2 for z/OS resources,
such as locks, EDM Pool storage, or sort work areas, in such a way as to impact CICS, IMS or
WebSphere for z/OS transactions.
IBM's direction is to provide centralized client management or server controls to mitigate the
effect we describe above. The current alternatives are to implement network based controls,
to use Trusted Context, or to use DB2 Connect Server. Network based controls include z/OS
Communications Server functions such as Intrusion Detection Services (IDS), which can use
traffic regulation to control the number of connections allowed from a requester. These
require that network administrators understand the DB2 for z/OS and distributed environment
to mitigate DB2 thread constraints. One limitation of this alternative is that the controls are
based on IP address, and you may have some clients who do not have a dedicated IP
address which you can use to establish the desired control. Trusted Context could be
implemented to address what requests have access, but may not be the best choice for
managing priority. The alternative that may apply in some situations is to use DB2 Connect
Server to manage access to DB2 threads. See Figure 2-21.
Figure 2-21 Controlling DB2 threads with a DB2 Connect Server
You can use DB2 Connect Server to control application access to DB2 for z/OS connection
and DBAT resources, and you can reduce the likelihood that low priority requests will block
access for high priority requests. You can accomplish this by assigning lower priority client
requests to a DB2 Connect Server that has few transports, or connections, defined to DB2 for
z/OS. Thus, applications representing high priority requests could have access to more
connections and DBATs, and low priority requests could be grouped together with fewer
connections and DBATs.
2.5 DB2 Connect Server on Linux on IBM System z
You may have a configuration where a DB2 Connect Server is installed on Linux on IBM
System z (Linux on z) and connects to DB2 for z/OS on another LPAR on the same System z
machine using HiperSockets. There are some advantages to this configuration, based on
the strengths of the System z platform, the security of the connection between DB2 Connect
and DB2 for z/OS, and the performance characteristics of HiperSockets. The System z
platform strengths include environmental, operational, and availability characteristics. The
security and performance benefits are based on the use of HiperSockets instead of external
network connections.
2.5.1 DB2 Connect on Linux on z with HiperSockets
The use of HiperSockets is illustrated in Figure 2-22. In the diagram, the two bottom layers of
the OSI model are replaced by the HiperSocket. It shows two system images in one System z
machine: DB2 Connect Server on Linux on z, and a DB2 for z/OS subsystem. DB2 Connect
and DB2 for z/OS can communicate using TCP/IP over the HiperSockets. As is clear in this
figure, there are no external network cables between DB2 Connect and DB2 for z/OS. The
network traffic is through microcode at internal memory copy speed. This can significantly
reduce application elapsed time for clients connecting to DB2 for z/OS through a DB2
Connect Server. For more information about HiperSockets, see HiperSockets Implementation
Guide, SG24-6816.
Figure 2-22 HiperSocket: Example of multiple LPAR communication
In cases where you choose to implement DB2 Connect Server, you can locate the DB2
Connect Server on the same System z machine with DB2 for z/OS and achieve the following
advantages over DB2 Connect Server installed on a separate machine:
Elimination of middle-tier hardware
Many customers using DB2 Connect Servers have multiple separate systems or machines
to house the servers. By consolidating the servers in Linux on System z, these customers
can reduce the cost of server support, reducing the total cost of ownership (TCO). Refer to
Figure 2-23 on page 60 for an illustration of this scenario.
Fast data transfer between DB2 Connect and DB2 for z/OS through HiperSockets
The network cable between both systems has been removed, reducing network latency
and improving application response time.
Increased security
There are no wires between DB2 Connect and DB2 for z/OS, so one less network
connection is susceptible to physical access.
Figure 2-23 Server consolidation and HiperSockets and Linux on System z for DB2 Connect
In addition, you can create additional system images without bringing any new system
devices into your machine room. This configuration provides flexibility to your system. We
show an example of HiperSockets with DB2 Connect Server and DB2 for z/OS supporting
multiple client types in Figure 2-24 on page 61.
In this example, DB2 for z/OS serves as a robust database server and DB2 Connect acts as
an open server to Web application servers and fat clients. HiperSockets are used as a fast
communication link providing TCP/IP socket communication.
Figure 2-24 HiperSocket: DB2 Connect using HiperSocket to communicate with DB2 for z/OS
2.5.2 HiperSockets and DB2 data sharing configurations
The benefits to using HiperSockets between DB2 Connect Server on Linux on z and DB2 for
z/OS are clear in the case of a single DB2 for z/OS subsystem. Some of those benefits
accrue in a DB2 data sharing environment as well. But there are special considerations with
using HiperSockets in a DB2 data sharing environment. There are different HiperSocket
implications depending on whether you are using DB2 for z/OS V8 or DB2 9 for z/OS. This
section discusses these implications. In either case, workload balancing algorithms do not
take network speed into consideration, which may reduce the overall benefit of HiperSockets
in a data sharing environment.
Among the benefits most customers seek when they implement DB2 data sharing are the
following:
Flexible capacity growth and scalability
High availability of data for business applications
Dynamic workload balancing
Requirements for high availability distributed access to a DB2 data sharing group include
implementing Sysplex Distributor, dynamic virtual IP addressing (DVIPA) for the members of
the data sharing group, and group DVIPA (or distributed DVIPA) for the data sharing group.
Refer to 3.1.5, Sample DB2 data sharing DVIPA and Sysplex Distributor setup on page 79
for information about best practices for distributed access to data sharing groups.
In DB2 for z/OS V8, meeting these requirements means binding the DB2 members to specific
IP addresses in the TCP/IP definitions. Binding members to specific addresses allows
requests to be routed to a DB2 member even if the member is restarted on an LPAR other
than the LPAR on which that member normally runs. On the other hand, binding the DB2
members to specific addresses makes it more difficult to define the connections from the DB2
Connect Server on Linux on z. Because the data sharing member on the same system is
listening to its bound address, DB2 Connect must specify that IP address. A TCP/IP route
table definition is needed to direct the DB2 Connect traffic for the data sharing member's IP
address over the HiperSocket. In the event the DB2 data sharing member is restarted on
another LPAR, the IP route table should be dynamically updated with that information. This
maintains the high availability benefits of data sharing for applications using DB2 Connect
Servers on Linux on z.
In DB2 9 for z/OS, it is not required that DB2 bind to specific addresses. Instead you can
define the IP addresses in the DB2 bootstrap data set (BSDS). In this case, DB2 can accept
connection requests on any IP address. This eases the definition of connections from DB2
Connect Server on Linux on System z to DB2 for z/OS: DB2 Connect no longer has to specify
the specific IP address to which the member is bound in order to reach the DB2 for z/OS
member across the HiperSocket.
The above discussion addresses the high availability benefits of data sharing in a DB2
Connect Server on Linux on z environment. We now turn to workload balancing and capacity
benefits. Most customers who implement HiperSockets between DB2 Connect Server and a
member of a data sharing group would like most of the traffic to use the HiperSocket, to gain
the benefits we described above in 2.5.1, DB2 Connect on Linux on z with HiperSockets on
page 58. However, the workload balancing algorithms in the Sysplex Distributor and DB2 will
not take the speed of the HiperSocket into consideration. Refer to Figure 2-25 for an
illustration of this situation.
Figure 2-25 HiperSockets in a data sharing environment
On the right side of this diagram are three DB2 members in a data sharing group. One of
them resides on the same System z machine with DB2 Connect Server running in a Linux on
z environment. When DB2 Connect Server requests a connection to a member of the DB2
data sharing group, and the data sharing best practices have been followed, the connection
request goes to the Sysplex Distributor application. Sysplex distributor will determine which
member of the data sharing group is available, then route the request to that member.
Sysplex distributor will spread the connection requests across the members without
considering the special characteristics of the HiperSocket. Initial connection requests will be
balanced approximately evenly, depending on the relative state of the various DB2 members.
This means that in a two-member DB2 data sharing group, with one member on the same
System z machine with DB2 Connect Server on Linux on z, only half of the connection
requests will end up taking advantage of the HiperSocket. In a three-way data sharing group,
where only one member is on the same System z machine with DB2 Connect Server, about
one third of the initial requests would end up using the HiperSocket.
After the initial connections are made, DB2 Connect Server can workload balance across the
DB2 members based on the server list returned after each new DB2 for z/OS transaction. The
workload balancing decision does not consider the existence of the HiperSocket. This results
in the same load distribution described above; approximately half of the traffic will use the
HiperSocket in a two-member data sharing group, approximately one third in a three-member
group, and so on. This behavior is the same in DB2 for z/OS V8 and in DB2 9 for z/OS. This
means customers may not achieve the full application elapsed time benefit they anticipated
from HiperSockets.
One possible response to this is to eliminate the DB2 Connect Server from the configuration
by implementing the IBM Data Server products as described in 2.3, IBM Data Server Drivers
and Clients as requesters on page 40. This removes the code path of the DB2 Connect
Server while maintaining the data sharing advantages of high availability, workload balancing,
and flexible capacity and scalability.
Another possible response to this is to continue to use HiperSockets because of the server
consolidation benefits, to realize the data sharing benefits, and to realize only part of the
network latency benefit.
2.6 DB2 for z/OS requester: Any (DB2) DRDA server
DB2 for z/OS can be a DRDA AR to any DRDA-enabled data source. Within the IBM
Information Management family, the following DBMSs are accessible:
z/OS: DB2 for z/OS V8 or DB2 9 for z/OS
AIX, Linux, UNIX and Windows: DB2 for LUW 9.7, 9.5 or 9.1
VSE and VM: DB2 Server for VSE and VM 7.3 or higher
IBM i (or i5/OS): DB2 for i 6.1 (or DB2 for i5/OS 5.4)
AIX, UNIX, Solaris: Informix Dynamic Server 11.50 or 11.10
2.7 XA Support in DB2 for z/OS
In DB2 for z/OS Version 8, DB2 supported XA flows between the JCC Type 4 driver and DB2
for z/OS. A WebSphere Application Server could use the Type 4 driver to include DB2 for
z/OS data in an XA transaction. A non-Java-based application that wanted to include DB2 for
z/OS data in an XA transaction had to use DB2 Connect. DB2 Connect provided a translation
between the XA flows and DRDA flows. Refer to Figure 2-26 on page 64 for an illustration of
this configuration.
Figure 2-26 XA transaction support with DB2 Connect or WebSphere Application Server.
With the recent support provided by the IBM Data Server Driver and Client products at
Version 9.5 FixPack 3, and with the corresponding PTFs for APAR PK69659 installed on DB2
for z/OS, clients using either the Java-based drivers or the non-Java-based drivers can
include DB2 for z/OS data in XA transactions while connecting directly to DB2 for z/OS. We
illustrate this configuration in Figure 2-27.
Figure 2-27 XA transaction support without DB2 Connect
The applications shown in the upper part of the diagram can use any of the drivers that
support ODBC and CLI, depending on their operating environment:
IBM Data Server Driver for ODBC and CLI
IBM Data Server Driver for ODBC, CLI and .NET (Windows), currently replaced by the
IBM Data Server Driver Package.
IBM Data Server Driver for ODBC, CLI and Open Source (AIX, Linux or UNIX), currently
replaced by the IBM Data Server Driver Package.
IBM Data Server Runtime Client
IBM Data Server Client
The lower part of the diagram shows Java-based applications in a WebSphere Application
Server environment. These applications could be written with JDBC or SQLJ and could be
any Java-based application, not only those running in WebSphere Application Server. These
applications could use the IBM Data Server Driver for JDBC and SQLJ, the IBM Data Server
Runtime Client, or the IBM Data Server Client.
Your DB2 for z/OS may be in a data sharing environment. XA transactions are still supported,
but in the case of indoubt resolution, the XA transaction may be routed to a member other
than the one with which the transaction was originally communicating. The member then
retrieves the XID from the Shared Communications Area (SCA) in the coupling facility to route
the resolution to the correct member. We show this configuration in Figure 2-28.
Figure 2-28 XA transaction support in a DB2 data sharing environment
The XA multi-transport model is not supported with direct connect to DB2 for z/OS data
sharing from IBM Data Server Drivers or Clients. In the multi-transport model, a commit may
flow on a separate transport from the data source connection. If you use an XA transaction
manager that uses the multi-transport model, you still need to connect to DB2 for z/OS data
sharing with DB2 Connect Server.
For more information about XA support in DB2 for z/OS, refer to 5.8, XA transactions on
page 225.
Part 2. Setup and configuration
In this part we provide a description of the steps needed for the installation of a distributed
environment.
This part contains the following chapters:
Chapter 3, Installation and configuration on page 69
Chapter 4, Security on page 129
Chapter 3. Installation and configuration
In this chapter we provide information about the steps to enable your DB2 for z/OS for
distributed data access in a TCP/IP environment. We provide general configuration
information about the DB2 system, as well as recommendations for other system components
like TCP/IP, UNIX System Services, WLM, DB2 Connect, and the Data Server Drivers.
This chapter contains the following sections:
TCP/IP setup on page 70
DB2 system configuration on page 85
Workload Manager setup on page 105
DB2 for LUW to DB2 for z/OS setup on page 114
DRDA sample setup: From DB2 for z/OS requester to DB2 for LUW on AIX server on page 122
Character conversion: Unicode on page 125
Restrictions on the use of local datetime formats on page 126
HiperSockets: Definition on page 127
3.1 TCP/IP setup
Before DB2 can participate as a DRDA Application Server (AS) or DRDA Application
Requester (AR) in a TCP/IP environment, you must set up TCP/IP for DB2. In this section we
highlight the steps required for UNIX System Services, Language Environment support and
basic TCP/IP setup. We then describe the process for and give examples of defining TCP/IP
to support data sharing, dynamic virtual IP addressing (DVIPA), and LOCATION ALIAS
support.
Refer to informational APAR II14203 for the latest recommended maintenance for DB2 Version
9.1 for z/OS relating to DDF functions.
3.1.1 UNIX System Services setup
DDF uses the UNIX System Services assembler callable interface for TCP/IP services. Some
of the functions that the DDF address space performs require that the user ID associated with
the ssidDIST address space be a superuser. A superuser has an OMVS user ID value of
UID(0). To see whether your DDF is already defined as a superuser, find the OMVS user ID of
the ssidDIST address space.
From System Display and Search Facility (SDSF), use the Display Active panel to find the
DDF address space output. You should see information similar to what we show in Figure 3-1.
Figure 3-1 Output of D9C1DIST started task as seen from SDSF
IEF695I START D9C1DIST WITH JOBNAME D9C1DIST IS ASSIGNED TO USER STC , GROUP SYS1
The STC after USER indicates that D9C1DIST is assigned an OMVS user ID of STC. You
can then use the RACF command shown in Example 3-1, substituting STC for user ID to
learn whether DDF is a superuser.
Example 3-1 Displaying an OMVS user
LISTUSER userid OMVS
The output of the command is shown in Figure 3-2.
Figure 3-2 Output of LISTUSER STC OMVS command
OMVS INFORMATION
----------------
UID= 0000000000
HOME= /u/stc
PROGRAM= /bin/sh
CPUTIMEMAX= NONE
ASSIZEMAX= NONE
FILEPROCMAX= NONE
PROCUSERMAX= NONE
THREADSMAX= NONE
MMAPAREAMAX= NONE
This display indicates that OMVS user STC has a UID(0), so DDF is a superuser. If you need
to define the ssidDIST user ID (distuid) as a superuser, you can use one of the RACF
commands shown in Example 3-2.
Example 3-2 Defining a superuser
ADDUSER distuid OMVS(UID(0)) (if you are adding the user for the first time)
ALTUSER distuid OMVS(UID(0)) (if you are altering an existing user)
Refer to the DB2 Version 9.1 for z/OS Installation Guide, GC18-9846, and the z/OS V1R10.0
Security Server RACF System Programmer's Guide, SA22-7681, for more information about
this topic.
3.1.2 Language Environment considerations
DDF uses some of the functions provided by Language Environment, so DB2 needs access
to the Language Environment runtime library. There are two ways to achieve this:
Include the Language Environment runtime library in the STEPLIB concatenation of the
DDF address space startup. In this case, the Language Environment runtime library must
be APF authorized. The DB2 installation automatically adds the library to the DDF
STEPLIB concatenation.
Concatenate the Language Environment library to the z/OS link list. In this case, the
library does not need to be APF authorized. If you choose this approach, remove the
Language Environment library concatenation from the DDF JCL procedure.
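To illustrate the first option, the STEPLIB concatenation in the DDF address space startup JCL might look similar to the following sketch. The data set names are illustrative: DB9C9.SDSNEXIT and DB9C9.SDSNLOAD are the DB2 libraries used elsewhere in our examples, and CEE.SCEERUN is the usual default name for the Language Environment runtime library; your installation may use different names.
//STEPLIB  DD DISP=SHR,DSN=DB9C9.SDSNEXIT        DB2 exit library
//         DD DISP=SHR,DSN=DB9C9.SDSNLOAD        DB2 load library
//         DD DISP=SHR,DSN=CEE.SCEERUN           LE runtime (must be APF authorized here)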
3.1.3 Basic TCP/IP setup
In this section we review the TCP/IP concepts and how they relate to DB2 as a DRDA AR and
DRDA AS. We also discuss changing the TCP/IP files to support the requirements of your
DB2 subsystem.
TCP/IP concepts and terminology
DB2 interacts with distributed clients and servers using the distributed data facility (DDF),
which executes in the ssidDIST address space. DDF performs both DRDA AS functions,
when DB2 is the data server, and DRDA AR functions, when DB2 is a requester. It is
important to differentiate these roles when determining what TCP/IP specifications are
necessary to support your requirements. Before we address these roles, we provide the
following brief definitions of TCP/IP terms, which should be useful to you as you read this
section.
IP address
Uniquely identifies a host within the TCP/IP network. This is sometimes called an internet
address. A DB2 subsystem resides on a TCP/IP host.
DB2 for z/OS supports both IP Version 4 addresses (which look like dotted decimal, e.g.
1.2.3.4) and IP Version 6 addresses (which are colon delimited, e.g.
2001:0DB8:0000:0000:0008:0800:200C:417A, which can also be abbreviated as
2001:DB8::8:800:200C:417A).
DB2 9 for z/OS is an IPv6 system and it displays all addresses (IPv4 or IPv6) in IPv6
colon-hexadecimal format.
Virtual IP Address (VIPA)
A VIPA is a generic term that refers to an internet address on a z/OS host that is not
associated with a physical adapter. There are two types of VIPAs, static and dynamic
(DVIPA). A static VIPA cannot be changed except through a VARY TCPIP,,OBEYFILE
operator command.
Dynamic VIPA (DVIPA)
A DVIPA can move to other TCP/IP stack members in a sysplex or it can be activated by
an application program or by a supplied utility. Dynamic VIPAs are used to implement
Sysplex Distributor.
Distributed DVIPA
A distributed DVIPA, which is a special type of DVIPA, can distribute connections within a
sysplex. In DB2 data sharing, the group DVIPA, which is used to represent the group, is a
distributed DVIPA.
Domain name
The fully qualified name that identifies an IP address. This can be used instead of the IP
address. An example of a domain name is stlmvs1.stl.ibm.com.
Domain name server (DNS)
A DNS manages a distributed directory of domain names and related IP addresses.
Domain names can be translated into IP addresses and you can find a domain name
associated with a given IP address.
Port
A port identifies an application executing in a host. For example, a port number identifies a
DB2 subsystem to TCP/IP. There are three basic kinds of TCP/IP ports:
Well-known port
This is a port number between 1 and 1023 that is reserved in the TCP/IP architecture
for a specific TCP/IP application. Some typical well-known port numbers are: FTP (port
number 21), Telnet (port number 23), and DRDA relational database (port number
446).
Ephemeral port
Port numbers that are dynamically assigned to a client process by the client's TCP/IP
instance. DB2 uses an ephemeral port when it is acting as the DRDA application
requester (AR). This ephemeral port is associated with the requester for the life of the
thread or connection.
Server port
Port numbers that are used when a TCP/IP program does not have a well-known port
number, or another instance of the server program is already installed using the
well-known port number. If two different DB2 subsystems reside on the same host,
acting as two different locations (in other words, not members of the same data sharing
group), each DB2 subsystem must have a unique port.
DB2 uses the TCP/IP ports for three purposes:
SQL port
This is the well-known port or the server port and is what most requesters will use to
specify a DB2 subsystem or a DB2 data sharing group. (A data sharing group has a
single SQL port that each member specifies.)
Resync port
The resynchronization port is used by DB2 to handle two-phase commit processing,
including indoubts. In a data sharing group, each member has a unique resync port, or
RESPORT.
Secure port
Optional. The secure port (or SECPORT) is the port DB2 uses for TCP/IP Secure
Socket Layer (SSL) support. There is one SECPORT for a data sharing group.
Service name
Another way to refer to a port number. A network administrator can assign a service name
for a remote location instead of using the port number.
TCP/IP uses domain names (or IP addresses) and port numbers (or service names) to
uniquely identify DB2 subsystems in the TCP/IP network.
DB2 9 for z/OS supports both IPv4 (32-bit) and IPv6 (128-bit) addressing. To enable DB2 to
support IPv6 traffic (either inbound or outbound), you must configure TCP/IP with the
dual-mode stack. You enable TCP/IP dual mode stack by changing the BPXPRMxx PARMLIB
member to add a NETWORK statement that includes DOMAINNAME(AF_INET6). Refer to
z/OS V1R10 Communications Server IP: Configuration Guide, SC31-8775 for more
information about enabling IPv6 addresses with BPXPRMxx.
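As an illustration only, a dual-mode stack definition in BPXPRMxx follows the general pattern shown below; the MAXSOCKETS value is an installation choice, and your member probably already contains the AF_INET statements.
FILESYSTYPE TYPE(INET) ENTRYPOINT(EZBPFINI)
NETWORK DOMAINNAME(AF_INET)  DOMAINNUMBER(2)  MAXSOCKETS(64000) TYPE(INET)
NETWORK DOMAINNAME(AF_INET6) DOMAINNUMBER(19) MAXSOCKETS(64000) TYPE(INET)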
DB2 as a data server
No entry is required in the DB2 Communications Database CDB if your DB2 subsystem acts
only as a data server in a TCP/IP environment.
As a server processing TCP/IP connection requests, DB2 uses the SQL port, either the
well-known port, 446, or a server port. DB2 will use the resynchronization (resync) port for
processing 2-phase commit resync requests. When DDF is started, the DB2 subsystem
identifies itself to TCP/IP using the port specification in its bootstrap data set. The port
specification includes the SQL port number, the resynchronization port number (RESPORT),
and, optionally, the secure port number (SECPORT). Refer to 3.2.4, Updating the BSDS on
page 97 for more information about the port specification.
Optionally, you can protect the port number(s) that DB2 uses from being used by any other
task or job in the subsystem. You can do this with a port reservation entry in the TCP/IP
profile data set, as in Example 3-3 on page 74.
DB2 9 also allows you to specify the IP address in the BSDS. Specifying the IP address in the
BSDS allows DB2 to accept requests to its port on any IP address (INADDR_ANY). This
provides significant advantage in a data sharing environment over the DB2 V8 function. In
DB2 9, if you specify the IP address in the BSDS you do not have to define a domain name to
TCP/IP.
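For example, for a standalone subsystem you could record the address with a change log inventory (DSNJU003) DDF statement similar to the following one-line sketch; the address shown is illustrative, and 3.2.4, Updating the BSDS on page 97 shows the complete job:
DDF IPV4=9.12.6.70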
In DB2 for z/OS V8, the domain name must be defined to the TCP/IP host so that a DB2
subsystem can accept connections from remote locations.
DB2 as a requester
The domain name and the server port number (or service name) of the database server must
be defined in the DB2 Communications Database (CDB) at a requesting DB2. If you use a
port number in the CDB to access a remote DB2 location, the port number must be defined to
TCP/IP. We show an example in 3.2.2, Configuring the Communications Database on
page 86.
TCP/IP will assign the requesting DB2 an ephemeral port for the life of the thread or
connection. You do not need to specify an ephemeral port.
Customizing TCP/IP data sets or files
There are several TCP/IP data sets or files that you may need to customize:
The PROFILE data set
The HOSTS data set
The SERVICES data set
Once these are customized, you may need to update the DB2 bootstrap data set (BSDS) and
configure the rest of the DB2 subsystem for DDF.
The PROFILE data set
If you do not know the high level qualifier (hlq), check the JCL for the TCP/IP address space
using SDSF. Figure 3-3 shows the JCL for our system.
Figure 3-3 JCL of TCP/IP job, from SDSF display, showing high level qualifier for TCP/IP
//TCPIP JOB MSGLEVEL=1 STC29433
//STARTING EXEC TCPIP
XXTCPIP PROC P1='CTRACE(CTIEZB00)',TCPPROF=TCPPROF,TCPDATA=TCPDATA 00010000
XX* 00020000
XXTCPIP EXEC PGM=EZBTCPIP,REGION=0M,TIME=1440, 00030000
XX PARM=&P1 00040000
XX*STEPLIB DD DSN=TCPIP.SEZATCP,DISP=SHR 00050000
IEFC653I SUBSTITUTION JCL - PGM=EZBTCPIP,REGION=0M,TIME=1440,PARM=CTRACE(CTIEZB00)
XXSYSPRINT DD SYSOUT=*,DCB=(RECFM=VB,LRECL=137,BLKSIZE=0) 00060000
XXSYSERR DD SYSOUT=*,DCB=(RECFM=VB,LRECL=137,BLKSIZE=0) 00070000
XXSYSERROR DD SYSOUT=* 00080000
XXALGPRINT DD SYSOUT=*,DCB=(RECFM=VB,LRECL=132,BLKSIZE=136) 00090000
XXCFGPRINT DD SYSOUT=*,DCB=(RECFM=VB,LRECL=132,BLKSIZE=136) 00100000
XXSYSOUT DD SYSOUT=*,DCB=(RECFM=VB,LRECL=132,BLKSIZE=136) 00110000
XXCEEDUMP DD SYSOUT=*,DCB=(RECFM=VB,LRECL=137,BLKSIZE=0) 00120000
XXPROFILE DD DSN=TCP.&SYSNAME..TCPPARMS(&TCPPROF), 00130000
XX DISP=SHR,FREE=CLOSE 00140000
IEFC653I SUBSTITUTION JCL - DSN=TCP.SC63.TCPPARMS(TCPPROF),DISP=SHR,FREE=CLOSE
XXSYSTCPD DD DSN=TCP.&SYSNAME..TCPPARMS(&TCPDATA),DISP=SHR 00150000
IEFC653I SUBSTITUTION JCL - DSN=TCP.SC63.TCPPARMS(TCPDATA),DISP=SHR
XXSYSABEND DD SYSOUT=* 00160000
The PROFILE DD card shows the hlq is TCP. That DD card uses symbolic substitution to
indicate the profile data set our system uses, TCP.SC63.TCPPARMS(TCPPROF). This data
set contains the PORT statement where you can reserve the SQL ports, resync ports, and
secure ports for your DB2 subsystems. Example 3-3 shows a sample entry in a profile
member for an environment with two standalone DB2 subsystems, DB2C and DB2D.
Example 3-3 Port reservations for two DB2 subsystems
PORT 446 TCP DB2CDIST ; DRDA SQL port for DB2C
PORT 5020 TCP DB2CDIST ; Resync port for DB2C
PORT 5021 TCP DB2DDIST ; DRDA SQL port for DB2D
PORT 5022 TCP DB2DDIST ; Resync port for DB2D
Only one DB2 subsystem can use the well-known port, 446. The other uses port 5021. Each
DB2 defines a unique resynchronization port, 5020 and 5022, respectively. In this example
the started procedure names for the DDF address spaces are DB2CDIST and DB2DDIST.
In our environment, no ports had been reserved for DB2. We did not reserve ports until
defining the ports for our data sharing members. Refer to 3.1.4, TCP/IP settings in a data
sharing environment on page 76 and Example 3-4 on page 77 for our port reservation
statements.
We recommend that you reserve the port numbers that DB2 will use in the TCP profile data
set. Although this is not required, it is good practice, and it prevents other applications from
starting to use these ports before DDF is started.
The HOSTS data set
If you are defining TCP/IP support for the first time, or adding new hosts for DB2 access, you
must define the TCP/IP host names that DB2 needs to know. These include the local host
name, which must be defined before DDF is started. All domain names referenced in the table
SYSIBM.IPNAMES must be defined. (Refer to 3.2.2, Configuring the Communications
Database on page 86).
Define the host names by configuring the hlq.HOSTS.LOCAL data set, the /etc/hosts file in
the hierarchical file system (HFS), or the domain name server (DNS). Once these are
configured, execute the MAKESITE utility to generate the hlq.HOSTS.ADDRINFO and the
hlp.HOSTS.SITEINFO data sets. Refer to z/OS V1R10.0 Communications Server: IP System
Administrator's Commands, SC31-8781, for more information about the MAKESITE utility.
Figure 3-4 on page 76 shows the starting point of the TCP.HOSTS.LOCAL data set. The
three LPARs we used for our scenarios are SC63, SC64 and SC70, with host IP addresses of
9.12.6.70, 9.12.6.9 and 9.12.4.202, respectively.
Note: If you are defining TCP/IP entries for a DB2 data sharing group in a parallel sysplex
environment, you should use Dynamic Virtual IP Addresses (DVIPA). Refer to 3.1.4,
TCP/IP settings in a data sharing environment on page 76.
Figure 3-4 Starting point for TCP.HOSTS.LOCAL
BROWSE TCP.HOSTS.LOCAL
Command ===>
*********************************************************** Top of Data *****
HOST : 9.12.6.70 : WTSC63 ::::
HOST : 9.12.6.70 : WTSC63.ITSO.IBM.COM ::::
HOST : 9.12.6.9 : WTSC64 ::::
HOST : 9.12.6.9 : WTSC64.ITSO.IBM.COM ::::
HOST : 9.12.4.48 : WTSC65 ::::
HOST : 9.12.4.48 : WTSC65.ITSO.IBM.COM ::::
HOST : 9.12.4.202 : WTSC70 ::::
HOST : 9.12.4.202 : WTSC70.ITSO.IBM.COM ::::
HOST : 9.12.6.71 : WTSC63OE ::::
HOST : 9.12.6.71 : WTSC63OE.ITSO.IBM.COM ::::
HOST : 9.12.6.31 : WTSC64OE ::::
HOST : 9.12.6.31 : WTSC64OE.ITSO.IBM.COM ::::
HOST : 9.12.4.49 : WTSC65OE ::::
HOST : 9.12.4.49 : WTSC65OE.ITSO.IBM.COM ::::
HOST : 9.12.4.203 : WTSC70OE ::::
HOST : 9.12.4.203 : WTSC70OE.ITSO.IBM.COM ::::
HOST : 9.12.8.106 : TWSCJSC ::::
HOST : 9.12.8.106 : TWSCJSC.ITSO.IBM.COM ::::
Later we added definitions to /SC63/etc/hosts to support DVIPA. Refer to Figure 3-11 on
page 81 for this example of /etc/hosts definitions.
The SERVICES data set
You must define the TCP/IP service names that DB2 needs to know. Configure the
hlq.ETC.SERVICES data set or the /etc/services file in the HFS. If service names are present
in the CDB (in field PORT of table SYSIBM.LOCATIONS), they must be defined in the z/OS
data set or the HFS. An example of an hlq.ETC.SERVICES entry is as follows:
DRDA 446/tcp ; DRDA databases
Update the BSDS
You must update the bootstrap data set (BSDS) to include the TCP/IP port numbers. Refer to
3.2.4, Updating the BSDS on page 97, where we discuss the updates and provide examples
and displays.
If you are not operating in a parallel sysplex and DB2 data sharing environment, you can
proceed to 3.2, DB2 system configuration on page 85. The remainder of this section relates
to TCP/IP definitions to support various aspects of data sharing.
3.1.4 TCP/IP settings in a data sharing environment
Important: For high availability and workload balancing in a DB2 data sharing environment
you must define TCP/IP and DB2 to take advantage of the function of Sysplex Distributor to
make an initial connection to an available member of the data sharing group and to define
the DB2 data sharing group and members with dynamic virtual IP addressing (DVIPA) to
allow connection requests to succeed despite LPAR outages.
In a high availability configuration the DB2 data sharing group has a distributed DVIPA (also
known as the group DVIPA). The requesters in the network specify this group DVIPA to initiate
a connection to the data sharing group. The Sysplex Distributor function of TCP/IP identifies
an available member of the data sharing group. Sysplex distributor generally routes initial
connection requests evenly across the available members of the group. Subsequently the
workload balancing functions available in the DRDA requesters use the server list returned by
DB2 and route transactions or connection requests across the members of the data sharing
group. Refer to Chapter 6, Data sharing on page 233 for details on workload balancing and
the server list.
In DB2 9 for z/OS, there are three sets of required definitions; the first two comprise PORT
and VIPADYNAMIC statements in the TCP/IP PROFILE data set, the third defines DVIPA
addresses in the DB2 BSDS. After we describe these definitions for DB2 9 we describe the
DB2 for z/OS V8 definitions for high availability and workload balancing.
In the examples that follow, we use the DVIPA addresses that we defined during our project.
For a more complete narrative of the steps we took, refer to 3.1.5, Sample DB2 data sharing
DVIPA and Sysplex Distributor setup on page 79 and 3.2.4, Updating the BSDS on
page 97.
PORT statements: DB2 9 for z/OS
If you are using a DB2 data sharing group in a parallel sysplex, each member of the data
sharing group uses the same SQL port to receive incoming requests. However, each member
must have a unique RESYNC port.
TCP/IP normally does not allow multiple applications to use the same port number. In a data
sharing environment, it is possible to start multiple DB2 members on the same z/OS system,
either as normal operation or as a temporary measure after a system failure. If two members
of the same DB2 data sharing group start on the same LPAR, they will try to use the same
SQL port number. To allow multiple applications to share the same port, use the
SHAREPORT keyword in the TCP/IP profile data set. Example 3-4 shows the port
reservation statements for the three members of our data sharing group.
Example 3-4 Port reservations for three members of a DB2 data sharing group
38320 TCP D9C1DIST SHAREPORT ; SQL PORT
38321 TCP D9C1DIST ; Resync PORT
38320 TCP D9C2DIST SHAREPORT ; SQL PORT
38322 TCP D9C2DIST ; Resync PORT
38320 TCP D9C3DIST SHAREPORT ; SQL PORT
38323 TCP D9C3DIST ; Resync PORT
The SHAREPORT keyword allows multiple listeners (in our case D9C1DIST, D9C2DIST and
D9C3DIST) to listen on the same port, 38320. Make sure you make this change on all
members of the sysplex. The resync ports are not shared.
VIPADYNAMIC statements
To enable Sysplex Distributor function to distribute connection requests across the members
of your data sharing group, you must define the distribution as part of the VipaDynamic
section of the TCP PROFILE data set. Include VipaRange keywords in each LPAR's TCP/IP
PROFILE data set. Specify VipaDefine and VipaDistribute Define keywords in the LPAR
where the SD function will reside. Then specify VipaBackup keywords for the other LPARs.
Figure 3-5 shows the VipaDynamic statements that specify the SD, VipaDefine and
VipaDistribute Define, in the TCP PROFILE data set on the SC70 LPAR where our Sysplex
Distributor resided.
Figure 3-5 VipaDynamic statements including Sysplex Distributor definition for SC70
VIPADYNAMIC
VIPARANGE DEFINE 255.255.255.255 9.12.4.103 ; D9C1
VIPARANGE DEFINE 255.255.255.255 9.12.4.104 ; D9C2
VIPARANGE DEFINE 255.255.255.255 9.12.4.105 ; D9C3
VIPADEFINE 255.255.255.255 9.12.4.102 ; Group DVIPA
VIPADISTRIBUTE DEFINE 9.12.4.102 PORT 38320 DESTIP ALL
ENDVIPADYNAMIC
Figure 3-6 shows the VipaDynamic statements for the TCP PROFILE data set for SC64,
which indicates VIPABACKUP to provide backup Sysplex Distributor function. If the TCP on
SC70 failed, the TCP on SC64 could take over the Sysplex Distributor function. The value
after VIPABACKUP, in this case 100, is a relative number.
Figure 3-6 VipaDynamic statements with backup SD for SC64
VIPADYNAMIC
VIPARANGE DEFINE 255.255.255.255 9.12.4.103 ; D9C1
VIPARANGE DEFINE 255.255.255.255 9.12.4.104 ; D9C2
VIPARANGE DEFINE 255.255.255.255 9.12.4.105 ; D9C3
VIPABACKUP 100 9.12.4.102 ; Group DVIPA
ENDVIPADYNAMIC
Figure 3-7 shows the VipaDynamic statements for the TCP PROFILE data set for SC63,
where the VIPABACKUP value is 200. In our configuration, 100 for SC64 is less than the
200 for SC63, so SC64 would be the first backup for Sysplex Distributor function and SC63
would be the second.
Figure 3-7 VipaDynamic statements with backup SD for SC63
VIPADYNAMIC
VIPARANGE DEFINE 255.255.255.255 9.12.4.103 ; D9C1
VIPARANGE DEFINE 255.255.255.255 9.12.4.104 ; D9C2
VIPARANGE DEFINE 255.255.255.255 9.12.4.105 ; D9C3
VIPABACKUP 200 9.12.4.102 ; Group DVIPA
ENDVIPADYNAMIC
BSDS and DVIPA addresses: DB2 9 for z/OS
With DB2 9 for z/OS, you can specify the group DVIPA and member-specific DVIPA in the
DB2 member's BSDS. Figure 3-8 on page 79 shows the DSNJU003 input for member D9C1
of our data sharing group. The group DVIPA, which specifies the data sharing group, is
9.12.4.102. The member-specific DVIPA, which only refers to D9C1, is 9.12.4.103.
Figure 3-8 BSDS specification with member-specific DVIPA and group DVIPA.
//DSNTLOG EXEC PGM=DSNJU003,COND=(4,LT)
//STEPLIB DD DISP=SHR,DSN=DB9C9.SDSNLOAD
// DD DISP=SHR,DSN=DB9C9.SDSNEXIT
//SYSUT1 DD DISP=OLD,DSN=DB9CL.D9C1.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB9CL.D9C1.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
DDF IPV4=9.12.4.103,GRPIPV4=9.12.4.102
Make the corresponding changes for each DB2 member's BSDS. Remember, you must stop
DB2 before running DSNJU003. Refer to 3.2.4, Updating the BSDS on page 97 for a more
complete discussion of our steps in updating the BSDS.
DB2 for z/OS V8, DVIPA and BIND SPECIFIC
For a DB2 for z/OS data sharing group you still need a group DVIPA and a member-specific
DVIPA for each member. The member-specific DVIPA supports workload balancing, allows for
the DB2 members to start on an available LPAR, and allows clients to reach the correct DB2
member in case of resynchronization. In DB2 for z/OS V8 you cannot specify these DVIPAs in
the BSDS. Instead, you must use the port specification section of the PROFILE data set to
bind each member to the group DVIPA and to a member-specific DVIPA. These statements
need to be included in the TCP PROFILE data sets in each LPAR where the DB2 data
sharing members may run.
Figure 3-9 shows an example of binding ports to specific IP addresses. This is what we would
have defined had we been using DB2 for z/OS V8 during our project.
Figure 3-9 DB2 for z/OS V8: Port statements binding a specific IP address to the DB2 ports
38320 TCP D9C1DIST SHAREPORT BIND 9.12.4.102 ; SQL PORT
38321 TCP D9C1DIST BIND 9.12.4.103 ; Resync PORT
38320 TCP D9C2DIST SHAREPORT BIND 9.12.4.102 ; SQL PORT
38322 TCP D9C2DIST BIND 9.12.4.104 ; Resync PORT
38320 TCP D9C3DIST SHAREPORT BIND 9.12.4.102 ; SQL PORT
38323 TCP D9C3DIST BIND 9.12.4.105 ; Resync PORT
Using BIND SPECIFIC for DB2 for z/OS V8 provides the required support for high availability
and workload balancing. The disadvantage of this approach is that DB2 cannot accept
TCP/IP requests using INADDR_ANY. Make sure you use the DB2 9 for z/OS capability to
define the group and member DVIPAs in the BSDS when you migrate to DB2 9 for z/OS.
3.1.5 Sample DB2 data sharing DVIPA and Sysplex Distributor setup
As mentioned earlier, our beginning configuration did not include port reservation statements
and, therefore, did not specify SHAREPORT. With only the SQL port and resync port
specified in the BSDS, and no port reservation statement, the members of our data sharing
group were able to listen on any IP address for requests to their ports.
Figure 3-10 shows the output, on LPAR SC63, of D TCPIP,,N,CONN, a request to TCP to
display connections. D9C1 is the member of the data sharing group on this LPAR. The last
two lines indicate that D9C1 is listening on the SQL port for the data sharing group, 38320,
and on the resync port, 38321, for D9C1, but that there are no IP addresses specified.
Figure 3-10 Output of D TCPIP,,N,CONN command
RESPONSE=SC63
EZZ2500I NETSTAT CS V1R10 TCPIP 125
USER ID CONN LOCAL SOCKET FOREIGN SOCKET STATE
CA9SO2 000118E5 0.0.0.0..6000 0.0.0.0..0 LISTEN
CBDQDISP 0000B7FF 0.0.0.0..51107 0.0.0.0..0 LISTEN
DB8ADIST 00000040 0.0.0.0..12345 0.0.0.0..0 LISTEN
DB8ADIST 0000004A 0.0.0.0..12346 0.0.0.0..0 LISTEN
DB9ADIST 00018D30 0.0.0.0..12347 0.0.0.0..0 LISTEN
DB9ADIST 00018D51 0.0.0.0..12348 0.0.0.0..0 LISTEN
DB9ADIST 0001A09B 9.12.6.70..12347 9.12.6.70..17965 ESTBLSH
DB9ADIST 00018FAB 9.12.6.70..12347 9.12.6.70..17948 ESTBLSH
DB9ADIST 00018D31 0.0.0.0..12349 0.0.0.0..0 LISTEN
DB9BDIST 00019F42 0.0.0.0..12351 0.0.0.0..0 LISTEN
DB9BDIST 00019F3D 0.0.0.0..12350 0.0.0.0..0 LISTEN
DFSKERN 0000002D 0.0.0.0..139 0.0.0.0..0 LISTEN
D8F1DIST 0000004C 0.0.0.0..38051 0.0.0.0..0 LISTEN
D8F1DIST 00000047 0.0.0.0..38050 0.0.0.0..0 LISTEN
D9C1DIST 000106FD 0.0.0.0..38321 0.0.0.0..0 LISTEN
D9C1DIST 000105BF 0.0.0.0..38320 0.0.0.0..0 LISTEN
This definition, with the DRDA and resync port numbers specified in the BSDS but no IP
address, makes it easy to reach any single member of the data sharing group, assuming they
are all active, but it does not meet best practice for high availability. Best practice for high
availability would allow any member to be restarted on another LPAR, in case of an LPAR
outage, and would allow any available member to accept an incoming request. With our
beginning configuration, we would not have been able to start D9C1, for example, on SC64,
because we did not have SHAREPORT specified. And if a request had been made for the
SQL port, 38320, at IP address 9.12.6.70, but SC63 were not running, the request would
simply fail.
When we were ready to implement DVIPA and Sysplex Distributor support in our data sharing
environment we had to take several steps. You can use these steps when you implement
DVIPA and SD support in your data sharing environment.
Identify available IP addresses for the group DVIPA and each member-specific DVIPA
Add port reservations statements specifying SHAREPORT
Add VipaDynamic statements
Add group and member-specific DVIPAs to the etc/hosts file
Update BSDS for each DB2 member
After we successfully accomplished these steps, we added LOCATION ALIAS support to our
environment. In this section we explain the steps we took and provide examples and display
output for the TCP related tasks. We document the BSDS updates in 3.2.4, Updating the
BSDS on page 97.
Note: All of these examples below relate to a DB2 9 for z/OS environment where the IPV4
addresses for group DVIPA and member-specific DVIPA are defined in the BSDS.
Identify IP addresses for group DVIPA and member-specific DVIPA
Contact your TCP/IP network administrator to identify what addresses are available for group
DVIPA and member-specific DVIPAs. If you are planning for a new data sharing group, and if
your existing DB2 requesters use an IP address that is specific to DB2, try to make that IP
address the group DVIPA, as it will ease the migration.
As you may have noticed in the examples and figures in the preceding section, we received
the following IP addresses for our three-way data sharing group:
9.12.4.102 Group DVIPA
9.12.4.103 Member-specific DVIPA for D9C1
9.12.4.104 Member-specific DVIPA for D9C2
9.12.4.105 Member-specific DVIPA for D9C3
Update TCP PROFILE data sets
The next step is to update the TCP PROFILE data sets. If you have not already done so,
reserve your DB2 ports. We repeat our port reservation statements in Example 3-5. Add the
same set of statements to each TCP PROFILE data set for the LPARs where your DB2 data
sharing members may run. We added this to each profile member for SC63, SC64, and
SC70. This example is only valid for DB2 9 for z/OS environments where IPV4 and IPV6
addresses have been added to the BSDS.
Example 3-5 Port reservation statements for our three-way data sharing group
38320 TCP D9C1DIST SHAREPORT ; SQL PORT
38321 TCP D9C1DIST ; Resync PORT
38320 TCP D9C2DIST SHAREPORT ; SQL PORT
38322 TCP D9C2DIST ; Resync PORT
38320 TCP D9C3DIST SHAREPORT ; SQL PORT
38323 TCP D9C3DIST ; Resync PORT
Add VipaDynamic statements in each LPAR. In this case, each is slightly different. The
VIPARANGE DEFINE statements will be identical, but only one profile will specify the
VIPADEFINE statement for the group DVIPA and the VIPADISTRIBUTE statement. Refer to
Figure 3-5 on page 78 for the VIPADEFINE and VIPADISTRIBUTE statements we used.
The others will specify the VIPABACKUP statement with a unique numeric value. In our case,
we used 200 for SC63 and 100 for SC64. Refer to Figure 3-6 on page 78 for our statements
for SC64 and to Figure 3-7 on page 78 for our statements for SC63.
Update etc/hosts file
The next step is to update the etc/hosts file with the domain name that corresponds to the
group DVIPA and the member-specific DVIPAs. Figure 3-11 shows the new entries in the
etc/hosts file for SC63.
Figure 3-11 Contents of /SC63/etc/hosts including DVIPA addresses
9.12.6.70 wtsc63.itso.ibm.com wtsc63
9.12.6.71 wtsc63oe.itso.ibm.com wtsc63oe
127.0.0.1 localhost.localdomain localhost
9.12.4.166 d8fg.itso.ibm.com d8fg
9.12.4.102 d9cg.itso.ibm.com d9cg
9.12.4.103 d9cg.itso.ibm.com d9cg
9.12.4.104 d9cg.itso.ibm.com d9cg
9.12.4.105 d9cg.itso.ibm.com d9cg
We then issued the following command:
D TCPIP,,NETSTAT,HOME
Figure 3-12 shows the output, including the new group DVIPA and member-specific DVIPA for
D9C1. The P flag shows the primary interface for TCP/IP, which is the original IP address for
SC63. The I flag shows the internally generated DVIPA, our group DVIPA.
Figure 3-12 Output of D TCPIP,,NETSTAT,HOME command
RESPONSE=SC63
EZZ2500I NETSTAT CS V1R10 TCPIP 596
HOME ADDRESS LIST:
ADDRESS LINK FLG
9.12.6.70 OSA2000LNK P
9.12.6.71 OSA2020LNK
10.1.1.2 HIPERLF1
10.1.101.63 EZASAMEMVS
10.1.101.63 IQDIOLNK0A01653F
9.12.4.102 VIPL090C0466 I
9.12.4.103 VIPL090C0467
127.0.0.1 LOOPBACK
8 OF 8 RECORDS DISPLAYED
END OF THE REPORT
The last step is to update the BSDS. Refer to 3.2.4, Updating the BSDS on page 97 for our
discussion of this step.
Alias support
Beginning with DB2 for z/OS V8 you can use the LOCATION ALIAS function to specify a
subset of a data sharing group. Use this function to restrict the members of your data sharing
group to which a requester can connect.
We defined two LOCATION ALIASes for our data sharing group. The first one, called
DB9CALIAS, we used to specify member D9C1. The second alias, called DB9CSUBSET, we
used to specify members D9C1 and D9C2.
To specify alias support, update your port reservations statements to reserve the new aliases.
We assigned port 38324 to DB9CALIAS and port 38325 to DB9CSUBSET. Refer to
Example 3-6 for the statements we used. Note that a single port serves for all members that
use a specific alias. SHAREPORT is not required for a single-member alias.
Example 3-6 Port reservations for three members including aliases
38320 TCP D9C1DIST SHAREPORT ; SQL PORT
38321 TCP D9C1DIST ; Resync PORT
38320 TCP D9C2DIST SHAREPORT ; SQL PORT
38322 TCP D9C2DIST ; Resync PORT
38320 TCP D9C3DIST SHAREPORT ; SQL PORT
38323 TCP D9C3DIST ; Resync PORT
38324 TCP D9C1DIST ; Alias PORT for DB9CALIAS
38325 TCP D9C1DIST SHAREPORT ; Alias PORT for DB9CSUBSET
38325 TCP D9C2DIST SHAREPORT ; Alias PORT for DB9CSUBSET
We had to update our BSDS to record the new alias specifications. Refer to 3.2.4, Updating
the BSDS on page 97 for the details. When we completed our ALIAS specifications, we
issued the following command:
D TCPIP,,N,CONN
Figure 3-13 shows the output for SC63. D9C1DIST is listening on the SQL port, 38320, the
resync port, 38321, and the alias ports, 38324 and 38325.
Figure 3-13 Output of D TCPIP,,N,CONN command
RESPONSE=SC63
EZZ2500I NETSTAT CS V1R10 TCPIP 858
USER ID CONN LOCAL SOCKET FOREIGN SOCKET STATE
CA9SO2 0001CB4D 0.0.0.0..6000 0.0.0.0..0 LISTEN
CBDQDISP 0000B7FF 0.0.0.0..51107 0.0.0.0..0 LISTEN
DB8ADIST 00000040 0.0.0.0..12345 0.0.0.0..0 LISTEN
DB8ADIST 0000004A 0.0.0.0..12346 0.0.0.0..0 LISTEN
DB9ADIST 0001D872 9.12.6.70..12347 9.12.6.70..18016 ESTBLSH
DB9ADIST 0001CB3A 0.0.0.0..12347 0.0.0.0..0 LISTEN
DB9ADIST 0001CB4F 0.0.0.0..12348 0.0.0.0..0 LISTEN
DB9ADIST 0001CB3B 0.0.0.0..12349 0.0.0.0..0 LISTEN
DB9BDIST 00019F42 0.0.0.0..12351 0.0.0.0..0 LISTEN
DB9BDIST 00019F3D 0.0.0.0..12350 0.0.0.0..0 LISTEN
DFSKERN 0000002D 0.0.0.0..139 0.0.0.0..0 LISTEN
D8F1DIST 0000004C 0.0.0.0..38051 0.0.0.0..0 LISTEN
D8F1DIST 00000047 0.0.0.0..38050 0.0.0.0..0 LISTEN
D9C1DIST 0001B928 9.12.6.70..38320 9.12.5.149..48835 ESTBLSH
D9C1DIST 0001B922 9.12.6.70..38320 9.12.5.149..48828 ESTBLSH
D9C1DIST 0001B921 0.0.0.0..38321 0.0.0.0..0 LISTEN
D9C1DIST 0001B919 9.12.6.70..38320 9.12.5.149..48822 ESTBLSH
D9C1DIST 0001B910 0.0.0.0..38320 0.0.0.0..0 LISTEN
D9C1DIST 0001B915 0.0.0.0..38325 0.0.0.0..0 LISTEN
D9C1DIST 0001B91F 9.12.4.103..38320 9.12.5.149..48819 ESTBLSH
D9C1DIST 0001B913 0.0.0.0..38324 0.0.0.0..0 LISTEN
D9D1DIST 0000004B 0.0.0.0..38331 0.0.0.0..0 LISTEN
D9D1DIST 00000042 0.0.0.0..38330 0.0.0.0..0 LISTEN
3.1.6 Starting DDF with TCP/IP
Many of the TCP/IP definitions we have described for our DB2 environment are visible in the
DB2 DISPLAY DDF output. Figure 3-14 on page 84 shows the output for DB9A, our
standalone DB2 member, with APAR PK80474 applied.
Note: With APAR PK80474, message DSNL086I is eliminated in non-data sharing and a new
message DSNL089I is added for data sharing.
Figure 3-14 Example of DISPLAY DDF from DB9A standalone DB2
RESPONSE=SC63
DSNL080I -DB9A DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB9A USIBMSC.SCPDB9A -NONE
DSNL084I TCPPORT=12347 SECPORT=12349 RESPORT=12348 IPNAME=-NONE
DSNL085I IPADDR=::9.12.6.70
DSNL086I SQL DOMAIN=wtsc63.itso.ibm.com
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
The LOCATION, DB9A, the SQL port, 12347, the SECPORT, 12349, the RESPORT, 12348,
the host IP address, and the domain names are all clear.
Figure 3-15 shows the display output for member D9C1. This display reflects the beginning
configuration. The IPADDR and the MEMBER IPADDR are the same, and no ALIAS is
defined.
Figure 3-15 Example of DISPLAY DDF from D9C1 data sharing member
RESPONSE=SC63
DSNL080I -D9C1 DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB9C USIBMSC.SCPD9C1 -NONE
DSNL084I TCPPORT=38320 SECPORT=0 RESPORT=38321 IPNAME=-NONE
DSNL085I IPADDR=::9.12.6.70
DSNL086I SQL DOMAIN=wtsc63.itso.ibm.com
DSNL086I RESYNC DOMAIN=wtsc63.itso.ibm.com
DSNL089I MEMBER IPADDR=::9.12.6.70
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
Figure 3-16 shows the display output for member D9C2, on SC64.
Figure 3-16 Example of DISPLAY DDF from D9C2 data sharing member
RESPONSE=SC64
DSNL080I -D9C2 DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB9C USIBMSC.SCPD9C2 -NONE
DSNL084I TCPPORT=38320 SECPORT=0 RESPORT=38322 IPNAME=-NONE
DSNL085I IPADDR=::9.12.4.102
DSNL086I SQL DOMAIN=-NONE
DSNL086I RESYNC DOMAIN=-NONE
DSNL089I MEMBER IPADDR=::9.12.4.104
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
Figure 3-17 shows the display output for member D9C3 on SC70.
These displays reflect the changes to support DVIPA. Notice that the IPADDR in each case is
the group DVIPA, 9.12.4.102. Each display shows a unique value in MEMBER IPADDR.
Figure 3-17 Example of DISPLAY DDF from D9C3 data sharing member
RESPONSE=SC70
DSNL080I -D9C3 DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB9C USIBMSC.SCPD9C3 -NONE
DSNL084I TCPPORT=38320 SECPORT=0 RESPORT=38323 IPNAME=-NONE
DSNL085I IPADDR=::9.12.4.102
DSNL086I SQL DOMAIN=-NONE
DSNL086I RESYNC DOMAIN=-NONE
DSNL089I MEMBER IPADDR=::9.12.4.105
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
Each of the displays above includes the VTAM LUNAME. Even though the VTAM support was
not required for our scenarios, it had been previously defined and we left it there.
3.2 DB2 system configuration
If your TCP/IP, UNIX System Services, and Language Environment environments are already
defined, just define the DB2 resources to support distributed access. These resources are as
follows:
Shared memory object
Communications database (CDB)
DSNZPARMs
Bootstrap data set (BSDS)
DDF address space
Support for ODBC, JDBC, stored procedures, and so forth
3.2.1 Defining the shared memory object
Beginning with DB2 9 for z/OS, DB2 DDF takes advantage of shared memory to pass SQL
and data rows between the DIST address space and the DBM1 address space. Shared
memory is a type of virtual storage, introduced in z/OS 1.5, that allows multiple
address spaces to address common storage. This memory resides above the 2 GB bar. The
shared memory object is created at DB2 startup and the DB2 address spaces for the
subsystem (ssidDIST, ssidDBM1, ssidMSTR, and Utilities) are registered with z/OS to access
the shared memory object.
To define the size of the shared memory, use the HVSHARE parameter of the IEASYSxx
member in the parmlib concatenation. Ensure that you have defined a high enough value for
HVSHARE to satisfy all component requests for shared memory within the z/OS image. If you
do not specify HVSHARE in your IEASYSxx member, the shared memory object will be
created with the default value. The default value is 510 TB.
Use the following z/OS command to see what the current definition is and what the current
allocation is: DISPLAY VIRTSTOR,HVSHARE.
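If you decide to code HVSHARE explicitly instead of taking the default, it is a single IEASYSxx parameter; for example (an illustrative value, equivalent to the default):
HVSHARE=510T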
Figure 3-18 shows that HVSHARE is defined at the default value of 510 TB on our system.
Figure 3-18 Results of DISPLAY VIRTSTOR,HVSHARE showing default definition
2009110 19:32:30.08 PAOLOR6 00000210 DISPLAY VIRTSTOR,HVSHARE
2009110 19:32:30.10 PAOLOR6 00000010 IAR019I 19.32.30 DISPLAY VIRTSTOR 369
369 00000010 SOURCE = DEFAULT
369 00000010 TOTAL SHARED = 522240G
369 00000010 SHARED RANGE = 2048G-524288G
369 00000010 SHARED ALLOCATED = 393217M
For more information about shared memory, refer to z/OS V1R10.0 MVS Initialization and
Tuning Reference, SA22-7592.
3.2.2 Configuring the Communications Database
The Communications Database (CDB) is part of the DB2 Catalog and consists of a set of
tables that DDF uses to establish communications with remote databases. DDF uses the
CDB when providing DRDA AR functions. DDF does not use the CDB when performing
DRDA AS functions in a TCP/IP environment.
If your DB2 for z/OS provides DRDA AR functions to request data from other systems, you
must have one row in the SYSIBM.LOCATIONS table for each remote system you want to
access. You also need a row in the SYSIBM.IPNAMES table for each remote system.
The following section provides a brief description of what each table contains and an example
of the CDB tables as they were defined in our system.
SYSIBM.LOCATIONS table
DDF checks the SYSIBM.LOCATIONS table first when you issue a request to another
system. DDF uses the LOCATIONS table to determine the port number or service name used
to connect to the remote location. The column LINKNAME maps to the corresponding row in
table IPNAMES.
The following LOCATIONS columns apply to requests to remote systems using TCP/IP:
LOCATION
This column identifies the remote system. You must provide a value for each system from
which you intend to request data. This value is used on the SQL CONNECT TO
statement, or as the location name in remote binds or three-part names.
LINKNAME
This column identifies the TCP/IP attributes for the location. For each link name, you must
have a corresponding value in IPNAMES.
PORT
This column determines the port number to be used for the remote location. If blank, the
default port number 446 is used. If the value is not blank, it is either the SQL port number
of the remote system, or a TCP/IP service name, which can be converted to a TCP/IP
port.
DBALIAS
Database alias. The name associated with the remote server. If DBALIAS is blank, the
location name is used to access the remote database server. If DBALIAS is not blank and
the name of any database object contains the location qualifier (in other words it is a
three-part name), this column does not change that name when the SQL is sent to the
remote site.
TRUSTED
This column indicates whether the connection to the remote server can be trusted.
SECURE
This column indicates whether a secure connection using the Secure Socket Layer (SSL)
protocol is required for outbound DRDA connections
Table 3-1 shows the values in SYSIBM.LOCATIONS during our project. In our environment
the last three columns were blank, so we do not include them in the table.
Table 3-1 SYSIBM.LOCATIONS
LOCATION  LINKNAME  PORT
DB8A      DB8A      12345
DB9C      DB9C      38320
DB9C2     DB9C2     38320
KODIAK    KODIAK    50002
SAMPLE    MYUDBLNK  50002
SYSIBM.IPNAMES table
The SYSIBM.IPNAMES table defines the remote DRDA servers DB2 can access using
TCP/IP. The relevant columns are:
LINKNAME
This column corresponds to the LINKNAME column of the LOCATIONS table. The
LINKNAME column value must be unique within the IPNAMES table.
SECURITY_OUT
This column defines the DRDA security option that is used when local DB2 SQL
applications connect to any remote server associated with the TCP/IP host specified by
this LINKNAME:
A
already verified, the default. Outbound connection requests contain an unencrypted
authorization ID and no password. The authorization ID is either the DB2 user's
authorization ID or a translated ID, depending upon the value of the USERNAMES
column.
D
user ID and security-sensitive data encryption. Outbound connection requests
contain an authorization ID and no password. The authorization ID is either the DB2
user's authorization ID or a translated ID, depending upon the value of the
USERNAMES column.
E
userid, password, and security-sensitive data encryption. Outbound connection
requests contain an authorization ID and a password. The password is obtained from
the SYSIBM.USERNAMES table. The USERNAMES column must specify O.
P
password. Outbound connection requests contain an authorization ID and a
password. The password is obtained from the SYSIBM.USERNAMES table. The
USERNAMES column must specify O. This option indicates that the user ID and the
password are to be encrypted if cryptographic services are available at the requester
and if the server supports encryption. Otherwise, the user ID and the password are
sent to the partner in clear text.
R
RACF PassTicket. Outbound connection requests contain a user ID and a RACF
PassTicket. The value specified in the LINKNAME column is used as the RACF
PassTicket application name for the remote server. The authorization ID used for an
outbound request is either the DB2 user's authorization ID or a translated ID,
depending upon the value of the USERNAMES column. The authorization ID is not
encrypted when it is sent to the partner.
USERNAMES
This column controls outbound authorization ID translation. In this column, you can
specify O, S or blank. If either O or S is specified, you must populate the
SYSIBM.USERNAMES table.
O
Specify O if you want outbound translation. If the value in the SECURITY_OUT column is
P, then this column must be O.
S
S indicates that a row in the SYSIBM.USERNAMES table is used to obtain the system
AUTHID used to establish a trusted connection.
blank
No translation occurs. Only outbound translation is supported with TCP/IP.
IPADDR
This column contains an IPv4 or IPv6 address, or domain name of a remote TCP/IP host.
Table 3-2 shows the values in SYSIBM.IPNAMES during our project.
Table 3-2 SYSIBM.IPNAMES
LINKNAME  SECURITY_OUT  USERNAMES  IPADDR
DB8A      P             O          WTSC63.ITSO.IBM.COM
DB9C      A                        WTSC64.ITSO.IBM.COM
DB9C2     P             O          WTSC63.ITSO.IBM.COM
KODIAK    P             O          KODIAK.ITSO.IBM.COM
MYUDBLNK  P             O          KODIAK.ITSO.IBM.COM
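For example, the DB9C entries shown in Table 3-1 and Table 3-2 could be created with SQL similar to the following sketch; your location name, port, and host name will differ, and the columns not listed are left to take their defaults, as we did in our environment:
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, PORT)
       VALUES ('DB9C', 'DB9C', '38320');
INSERT INTO SYSIBM.IPNAMES (LINKNAME, SECURITY_OUT, IPADDR)
       VALUES ('DB9C', 'A', 'WTSC64.ITSO.IBM.COM');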
SYSIBM.USERNAMES table
The USERNAMES table is used for outbound ID translation (TCP/IP and SNA) and inbound
ID translation and 'come from' checking (SNA only).
TYPE
Values indicate how DDF should use this row:
I (SNA only)
Inbound ID translation and 'come from' checking.
O
For outbound authorization ID translation.
S
For outbound system AUTHID to establish a trusted connection.
AUTHID
Authorization ID to be translated. If blank, the translation applies to every authorization ID.
LINKNAME
Relates the row in IPNAMES with the same LINKNAME to the values in this row to
determine authorization ID translation.
NEWAUTHID
Translated value of AUTHID. Blank specifies no translation. NEWAUTHID can be stored as
encrypted data by calling the DSNLEUSR stored procedure. To send the encrypted value
of AUTHID across a network, one of the encryption security options in the
SYSIBM.IPNAMES table should be specified.
PASSWORD
Password to accompany an outbound request, if passwords are not encrypted by RACF. If
passwords are encrypted, or the row is for inbound requests, the column is not used.
PASSWORD can be stored as encrypted data by calling the DSNLEUSR stored
procedure. To send the encrypted value of PASSWORD across a network, one of the
encryption security options in the SYSIBM.IPNAMES table should be specified.
Table 3-3 shows the values in SYSIBM.USERNAMES during our project.
Table 3-3 SYSIBM.USERNAMES
TYPE  AUTHID  LINKNAME  NEWAUTHID  PASSWORD
O             DB8A      PAOLOR5    PUP4SALE
O             DB9C      PAOLOR4    123ABC
O             KODIAK    db2inst3   db2inst3
O             MYUDBLNK  DB2INST3   DB2INST3
SYSIBM.IPLIST table
The IPLIST table allows multiple IP addresses to be specified for a given LOCATION. The
same value for the IPADDR column cannot appear in both the IPNAMES table and the IPLIST
table. Use of this table allows DDF to select a member of a data sharing group from this list for
initial connection requests.
TYPE AUTHID LINKNAME NEWAUTHID PASSWORD
O DB8A PAOLOR5 PUP4SALE
O DB9C PAOLOR4 123ABC
O KODIAK db2inst3 db2inst3
O MYUDBLNK DB2INST3 DB2INST3
We do not recommend use of this table. Rather, indicate the IPADDR in the IPNAMES table
and implement Sysplex Distributor support to allow any available member of a data sharing
group to support the initial connection request. If you intend to limit the members of a data
sharing group to which a DB2 can connect, use LOCATION ALIAS support. Refer to Alias
support on page 82 and 3.2.4, Updating the BSDS on page 97 for examples of how we
specified LOCATION ALIAS in our environment. For further discussion of using LOCATION
ALIAS, refer to 6.2.1, DB2 data sharing subsetting on page 248.
We did not populate this table for our project.
Changing the CDB
You can make changes to the CDB while DDF is active. Depending on the type of changes
you make, these changes take effect at different times:
Changes to USERNAMES take effect at the next thread access.
If DDF has not yet started communicating to a particular location, IPNAMES and
LOCATIONS take effect when DDF attempts to communicate with that location.
If DDF has already started communication, changes to IPNAMES and LOCATIONS take
effect the next time DDF is started.
3.2.3 DB2 installation parameters (DSNZPARM)
In this section we describe the DSNZPARM values that apply to DDF and distributed traffic. In
addition we describe fields on the DB2 installation panels that are part of defining DDF but
are not DSNZPARM entries. For further information about any of these parameters or fields,
refer to DB2 Version 9.1 for z/OS Installation Guide, GC18-9846.
Table 3-4 provides a reference for the parameters, the panels on which they appear and their
possible and default values. We discuss these parameters in the order we list them here.
Table 3-4 DSNZPARM parameters
PARAMETER PANELID Possible values Default
MAXDBAT DSNTIPE 0-1999 200
CONDBAT DSNTIPE 0-150,000 10,000
DDF DSNTIPR NO, AUTO, COMMAND NO
RLFERRD DSNTIPR NOLIMIT, NORUN, 1 to 5,000,000 NOLIMIT
RESYNC DSNTIPR 1 to 99 2
CMTSTAT DSNTIPR ACTIVE, INACTIVE INACTIVE
MAXTYPE1 DSNTIPR 0 to value of CONDBAT 0
IDTHTOIN DSNTIPR 0 to 9999 120
EXTSEC DSNTIPR YES, NO YES
TCPALVER DSNTIP5 YES, NO NO
EXTRAREQ DSNTIP5 0 to 100 100
EXTRASRV DSNTIP5 0 to 100 100
HOPAUTH DSNTIP5 BOTH or RUNNER BOTH
TCPKPALV DSNTIP5 ENABLE, DISABLE, or 1 to 65534 120
POOLINAC DSNTIP5 0 to 9999 120
ACCUMACC DSNTIPN NO, 2-65535 10
ACCUMUID DSNTIPN 0-17 0
PRGSTRIN none - PK46079 ENABLE, DISABLE ENABLE
SQLINTRP none - PK59385 ENABLE, DISABLE ENABLE
The first installation panel that affects DDF is DSNTIPE, shown in Figure 3-19. This is where
you specify how many concurrent distributed threads and distributed connections DDF can
support.
Figure 3-19 DSNTIPE panel specifying MAXDBAT and CONDBAT values
DSNTIPE INSTALL DB2 - THREAD MANAGEMENT
===>
Check numbers and reenter to change:
1 DATABASES ===> 100 Concurrently in use
2 MAX USERS ===> 200 Concurrently running in DB2
3 MAX REMOTE ACTIVE ===> 200 Maximum number of active
database access threads
4 MAX REMOTE CONNECTED ===> 10000 Maximum number of remote DDF
connections that are supported
5 MAX TSO CONNECT ===> 50 Users on QMF or in DSN command
6 MAX BATCH CONNECT ===> 50 Users in DSN command or utilities
7 SEQUENTIAL CACHE ===> BYPASS 3990 storage for sequential IO.
Values are SEQ or BYPASS.
8 MAX KEPT DYN STMTS ===> 5000 Maximum number of prepared dynamic
statements saved past commit points
9 CONTRACT THREAD STG ===> NO Periodically free unused thread stg
10 MANAGE THREAD STORAGE ===> YES Manage thread stg to minimize size
11 LONG-RUNNING READER ===> 0 Minutes before read claim warning
12 PAD INDEXES BY DEFAULT===> NO Pad new indexes by default
13 MAX OPEN FILE REFS ===> 100 Maximum concurrent open data sets
PRESS: ENTER to continue RETURN to exit HELP for more information
Max remote active (MAXDBAT)
Use this field to specify the maximum number of database access threads (DBATs) that
can be active concurrently. The default is 200, and you can specify up to 1999, with the
condition that the sum of MAXDBAT and CTHREAD (Field #2, Max Users, on DSNTIPE)
cannot exceed 2000.
If requests for DBATs exceed MAXDBAT the allocation is allowed, but subsequent
processing depends on whether you specified ACTIVE or INACTIVE in the DDF threads
field (CMTSTAT) on panel DSNTIPR.
If you specified INACTIVE (the default) the request will be processed when DB2 can
assign an unused DBAT to the connection.
If you specified ACTIVE, further processing is queued waiting for an active DBAT to
terminate.
We recommend INACTIVE for most environments.
If you set MAXDBAT to zero, you prevent DB2 from accepting new distributed requests.
Max remote connected (CONDBAT)
Use this field to specify the maximum number of concurrent remote connections. This
value must be greater than or equal to MAXDBAT. If a request to allocate a new
connection to DB2 is received and you have already reached CONDBAT, the connection
request is rejected.
Set CONDBAT to be significantly higher than MAXDBAT. In general, a connection request
should not be rejected. The cost of maintaining a connection while waiting for a thread is low.
Figure 3-20 on page 93 shows panel DSNTIPR, the first of the two panels for defining DDF.
We discuss each field briefly before proceeding to DSNTIP5, the second DDF panel. The
fields shown represent our initial input settings, not necessarily the defaults.
Note: The number of allied threads (CTHREAD) and DBATs that your DB2 system can
handle concurrently is workload dependent. If you specify too high a sum (CTHREAD +
MAXDBAT) and you experience a workload spike, you may exhaust the available virtual
storage in the DBM1 address space. This could lead to a dramatic slowdown in
processing as DB2 performs a system contraction. You may also cause DB2 to abend if
you specify too large a number of concurrent threads. Evaluate your workload
requirements for thread storage and set MAXDBAT and CTHREAD conservatively to
ensure you do not encounter either of these conditions.
Figure 3-20 DSNTIPR panel showing DDF values for subsystem DB9A
For each of the fields in this panel and the next panel, we repeat the field name, then indicate
the DSNZPARM parameter name in parentheses. (none) means the field does not
correspond to a DSNZPARM parameter, but is instead specified in the bootstrap data set
(BSDS), which we discuss in 3.2.4, Updating the BSDS on page 97.
DDF startup option (DDF)
Specify AUTO to have DDF initialized and started when DB2 starts. COMMAND also
initializes DDF, but you must then enter the START DDF command. NO indicates you do
not want the DDF active in this DB2.
DB2 LOCATION name (none)
This is the unique name that identifies this DB2, or the data sharing group of which this
DB2 is a member. Any requester specifies the LOCATION name and either an IP address
or a service name to reach this DB2. Default is LOC1.
In a data sharing environment requesters can specify a LOCATION ALIAS. For a
description of how to define a LOCATION ALIAS for DB2, refer to Alias support on
page 82. For a discussion of using LOCATION ALIAS, refer to 6.2.1, DB2 data sharing
subsetting on page 248.
DB2 network LUNAME (none)
This name uniquely identifies DB2 to VTAM. This field is required for this panel even if you
do not use VTAM in your environment. Default is LU1.
DB2 network password (none)
This optional field specifies the password that VTAM uses to recognize this DB2
subsystem. We recommend you do not use this field, even if you use VTAM.
DSNTIPR INSTALL DB2 - DISTRIBUTED DATA FACILITY
===>
Enter data below:
1 DDF STARTUP OPTION ===> AUTO NO, AUTO, or COMMAND
2 DB2 LOCATION NAME ===> DB9A The name other DB2s use to
refer to this DB2
3 DB2 NETWORK LUNAME ===> SCPDB9A The name VTAM uses to refer to this DB2
4 DB2 NETWORK PASSWORD ===> Password for DB2's VTAM application
5 RLST ACCESS ERROR ===> NOLIMIT NOLIMIT, NORUN, or 1-5000000
6 RESYNC INTERVAL ===> 2 Minutes between resynchronization period
7 DDF THREADS ===> INACTIVE Status of a qualifying database access
thread after commit. ACTIVE or INACTIVE.
8 MAX INACTIVE DBATS ===> 0 Max inactive database activity threads
9 DB2 GENERIC LUNAME ===> Generic VTAM LU name for this DB2
subsystem or data sharing group
10 IDLE THREAD TIMEOUT ===> 120 0 or seconds until dormant server ACTIVE
thread will be terminated (0-9999).
11 EXTENDED SECURITY ===> YES Allow change password and descriptive
security error codes. YES or NO.
PRESS: ENTER to continue RETURN to exit HELP for more information
RLST access error (RLFERRD)
For dynamic SQL, use this to specify what action DB2 takes if the governor cannot access
the resource limit specification table, or if DB2 cannot find a row in the table for the query
user.
NOLIMIT means the dynamic SQL runs without limit.
NORUN terminates dynamic SQL immediately with an SQL error code.
A number from 1 to 5,000,000 indicates the default limit; if the limit is exceeded the
dynamic SQL statement is terminated.
Refer to Example 7.4.3 on page 310 for further discussion of this topic.
Resync interval (RESYNC)
This is the interval, in minutes, between resynchronization periods. DB2 processes
indoubt logical units of work during these periods.
DDF threads (CMTSTAT)
Use this field to indicate how DB2 treats a DBAT after a successful commit or rollback
operation, when the DBAT holds no cursors.
If you specify ACTIVE, the DBAT remains active and continues to consume system
resources. This restricts the number of connections you can support. If you must support a
large number of connections, specify INACTIVE, which is the default.
If you specify INACTIVE, DB2 will use one of two concepts:
For most cases, if the DBAT holds no cursors, has no declared global temporary tables
defined, and has not specified KEEPDYNAMIC YES, then DB2 will disassociate the
DBAT from the connection, mark the connection inactive, and return the DBAT to a pool
for use by another connection. This concept is also called inactive connection.
Because the DBAT is pooled, inactive connection support is more efficient in
supporting a large number of connections.
The second concept typically involves private protocol threads. In this case the thread
remains associated with the connection, but thread storage is reduced. This concept is
also called inactive DBAT, and was previously called a type 1 inactive thread. See the
following bullet, Max inactive DBATs (MAXTYPE1).
When a thread issues a COMMIT or ROLLBACK, DB2 tries to make it an inactive
connection, and if that is not possible, DB2 tries to make it an inactive DBAT. If neither is
possible, the thread remains active, potentially consuming valuable resources.
Max inactive DBATs (MAXTYPE1)
This field limits the number of inactive DBATs in your system. If you choose the default,
zero (0), then inactive DBATS are not allowed. If a thread would otherwise meet the
requirements to become an inactive DBAT, it remains active. If you choose a value greater
than zero, that becomes the high water mark for inactive DBATs. If a thread meets the
requirements of an inactive DBAT, but max inactive DBATs is reached, the remote
connection is terminated.
DB2 generic LUNAME (none)
This applies only to a DB2 data sharing member using SNA to accept distributed traffic.
Idle thread timeout (IDTHTOIN)
This specifies the approximate time, in seconds, that an active server thread should be
allowed to remain idle before it is canceled. After the timeout expires, the thread is
canceled and its locks and cursors are released. Inactive and indoubt threads are not
subject to IDTHTOIN.
If you specify 0, you disable time-out processing. In this case, idle threads remain in the
system and continue to hold resources. We recommend you choose a non-zero value.
Extended security (EXTSEC)
This field specifies two security options with one variable: whether detailed reason codes
are returned when a DDF connection request fails due to a security error, and whether
RACF users can change their passwords. YES is the default and is recommended. NO
returns generic codes and prevents RACF users from changing their passwords.
Figure 3-21 shows the second DDF panel, DSNTIP5. The fields shown represent our initial
input settings, not necessarily the defaults.
Figure 3-21 DSNTIP5 panel showing values for subsystem DB9A
DRDA port (none)
Specify the TCP/IP port number used to accept TCP/IP requests from remote DRDA
clients. This field is input to the DSNJU003 job that DB2 generates.
Remember to specify the same DRDA port on all members of a DB2 data sharing group.
Secure port (none)
Specify this field if you intend to accept secure TCP/IP connection requests from remote
DRDA clients. You must specify a value if you plan to use TCP/IP with Secure Socket
Layer (SSL). This field is input to the DSNJU003 job that DB2 generates. Remember to
specify the same secure port on all members of a DB2 data sharing group.
Refer to 4.3.3, Secure Socket Layer on page 151 for more information about SSL.
Resync port (none)
This field is the TCP/IP port number that is used to process requests for two-phase
commit resynchronization. This value must be different than the value that is specified for
DRDA port.
In a data sharing environment, each member must have a unique resync port.
DSNTIP5 INSTALL DB2 - DISTRIBUTED DATA FACILITY PANEL 2
===>
Enter data below:
1 DRDA PORT ===> 12347 TCP/IP port number for DRDA clients.
1-65534 (446 is reserved for DRDA)
2 SECURE PORT ===> 12349 TCP/IP port number for secure DRDA
clients. 1-65534 (448 is reserved
for DRDA using SSL)
3 RESYNC PORT ===> 12348 TCP/IP port for 2-phase commit. 1-65534
4 TCP/IP ALREADY VERIFIED ===> NO Accept requests containing only a
userid (no password)? YES or NO
5 EXTRA BLOCKS REQ ===> 100 Maximum extra query blocks when DB2 acts
as a requester. 0-100
6 EXTRA BLOCKS SRV ===> 100 Maximum extra query blocks when DB2 acts
as a server. 0-100
7 AUTH AT HOP SITE ===> BOTH Authorization at hop site. BOTH or RUNNER
8 TCP/IP KEEPALIVE ===> 120 ENABLE, DISABLE, or 1-65534
9 POOL THREAD TIMEOUT ===> 120 0-9999 seconds
PRESS: ENTER to continue RETURN to exit HELP for more information
TCP/IP already verified (TCPALVER)
Use this field to specify whether DB2 will accept TCP/IP connection requests that contain
only a user ID, but no password, RACF PassTicket or Kerberos ticket. This value must be
the same for all members of a data sharing group. This option applies to all incoming
requests that use TCP/IP regardless of the requesting location.
We recommend the default, NO, which requires all requesting locations to use passwords,
RACF PassTickets or Kerberos tickets when they send user IDs.
Extra blocks req (EXTRAREQ)
This field specifies an upper limit on the number of extra DRDA query blocks DB2 requests
from a remote DRDA server. This does not limit the size of the SQL query answer set; it
simply controls the total amount of data that can be transmitted on any given network
exchange. We recommend the default.
Extra blocks srv (EXTRASRV)
This field specifies an upper limit on the number of extra DRDA query blocks that DB2
returns to a DRDA client. This does not limit the size of the SQL query answer set; it
simply controls the total amount of data that can be transmitted on any given network
exchange. We recommend taking the default.
Auth at hop site (HOPAUTH)
This field indicates whose authorization is to be checked at a second server (also called a
hop site) when the request is from a requester that is not DB2 for z/OS. This option applies
only when private protocol access is used for the hop from the second to third site.
TCP/IP keepalive (TCPKPALV)
You can use this field as an override in cases where the TCP/IP KeepAlive value in the
TCP/IP configuration is not appropriate for the DB2 subsystem. The settings have the
following meanings:
ENABLE: Do not override the TCP/IP KeepAlive configuration value.
DISABLE: Disable KeepAlive probing for this subsystem.
1 to 65534: Override the TCP/IP KeepAlive configuration value with the entered
number of seconds. You should set this value close to the value in IDTHTOIN or the
IRLM resource timeout value (IRLMRWT).
We recommend the default value.
Pool thread timeout (POOLINAC)
This field specifies the approximate time, in seconds, that a DBAT can remain idle in the
pool before it is terminated. A DBAT in the pool counts as an active thread against
MAXDBAT and can hold locks, but does not have any cursors.
Specifying 0 causes a DBAT to terminate rather than go into the pool if the pool has a
sufficient number of threads to process the number of inactive connections that currently
exist. Specify 0 only if you are constrained on virtual storage below 2 GB in the DBM1
address space, as specifying 0 increases the likelihood of thread creation and
corresponding overhead.
Two other fields of interest to DDF appear on DSNTIPN, the Tracing Parameters panel.
DDF/RRSAF Accum (ACCUMACC)
The value you specify here determines whether DB2 will accumulate, or rollup, accounting
records for DDF or RRSAF threads.
If you specify NO, DB2 writes an accounting record when a DDF thread is made inactive
or when signon occurs for an RRSAF thread.
If you specify a number, DB2 writes an accounting record in the specified interval for a
given user, based on the aggregation fields specified in ACCUMUID.
Aggregation fields (ACCUMUID)
The value you specify here determines which combination of user ID, transaction name,
application name, and workstation name is used for the aggregation specified in
ACCUMACC. This value also specifies whether strings of hex zeros (X'00') or blanks (X'40')
are considered for rollup.
Refer to Table 7-4 on page 278 and related text for a discussion of accounting accumulation.
Refer to DB2 Version 9.1 for z/OS Installation Guide, GC18-9846 for the details of setting
ACCUMUID.
The following parameters are not in the install panels but do relate to DDF requests.
PRGSTRIN
This parameter relates to progressive streaming for LOBs or XML and allows customers to
disable progressive streaming behavior on DB2 for z/OS server where necessary.
PK46079 added this parameter to the DSNZPARM macro DSN6FAC for DB2 9 for z/OS
only. Refer to 5.7.8, Progressive streaming on page 222 for a discussion of this behavior.
SQLINTRP
This parameter relates to the SQL interrupt function that was added to DB2 for z/OS V8.
This parameter allows customers to disable the SQL interrupt function. PK59385 (DB2 9
for z/OS) and PK41661 (DB2 for z/OS V8) added this parameter to the DSNZPARM macro
DSN6FAC. Refer to 5.7.9, SQL Interrupts on page 223 for a discussion of this behavior.
3.2.4 Updating the BSDS
Each of the parameters in the previous section that did not correspond to a DSNZPARM entry
can be specified in the BSDS. These include LOCATION name, LUNAME, network
PASSWORD, GENERIC LU name, DRDA port, secure port (SECPORT) and resync port
(RESPORT).
Beginning with DB2 9 for z/OS, you do not have to define DDF to VTAM, so LUNAME and
GENERIC LU are not required. PASSWORD only applies to VTAM, and we do not
recommend its use.
To define the BSDS when installing a DB2 subsystem, or to change the BSDS to reflect
changes in your DDF configuration, you must run the change log inventory utility. We show
the change log inventory job used for our standalone DB2 subsystem, DB9A, in Figure 3-22
on page 98. The RESPORT, PORT and SECPORT must all be different.
Remember, DSNJU003 can only be executed when the target DB2 subsystem is stopped.
Figure 3-22 DSNJU003 for DB9A BSDS
You can see your current settings for these DDF values from the Communication Record
produced with the Print Log Map utility, DSNJU004. We show the output for DB9A in
Figure 3-23. In DSNJU004 output, the resync port is RPORT and the secure port is SPORT.
Figure 3-23 DB9A DSNJU004 output with DDF values
Updating the BSDS for a data sharing environment
In a data sharing environment the BSDS includes specifications for the data sharing group as
well as the DDF specifications. When we began our project, the DSNJU003 job for the D9C1
member of our data sharing group looked like the example in Figure 3-24 on page 99. Note
that we started with a minimal definition.
//*********************************************************************
//* CHANGE LOG INVENTORY:
//* UPDATE BSDS
//*********************************************************************
//DSNTLOG EXEC PGM=DSNJU003,COND=(4,LT)
//STEPLIB DD DISP=SHR,DSN=DB9A9.SDSNLOAD
//SYSUT1 DD DISP=OLD,DSN=DB9AU.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB9AU.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
DDF LOCATION=DB9A,LUNAME=SCPDB9A,
RESPORT=12348,PORT=12347,SECPORT=12349
//*
**** DISTRIBUTED DATA FACILITY ****
COMMUNICATION RECORD
22:04:33 APRIL 22, 2009
LOCATION=DB9A IPNAME=(NULL) PORT=12347 SPORT=12349 RPORT=12348
ALIAS=(NULL)
IPV4=NULL IPV6=NULL
GRPIPV4=NULL GRPIPV6=NULL
LUNAME=SCPDB9A PASSWORD=(NULL) GENERICLU=(NULL)
Figure 3-24 DSNJU003 for D9C1 BSDS
The output from DSNJU004 in Figure 3-25 shows the Communication Record for the D9C1
member. There is no indication from the Communication Record that D9C1 is a member of a
data sharing group other than the fact that the LOCATION name is one we know to be the
LOCATION name of the group.
Figure 3-25 D9C1 DSNJU004 output for member D9C1
After performing some of our test scenarios against the starting configuration, we enabled the
Sysplex Distributor function of TCP/IP and defined dynamic virtual IP addresses (DVIPAs) for
the members of the data sharing group. Refer to 3.1.4, TCP/IP settings in a data sharing
environment on page 76 and subsequent sections to see the TCP/IP examples. We took
advantage of the DB2 9 for z/OS function to specify the member DVIPA and group DVIPA in
the BSDS.
We defined the group DVIPA, also known as the distributed DVIPA, as 9.12.4.102. We defined
the member-specific DVIPAs as follows:
D9C1 9.12.4.103
D9C2 9.12.4.104
D9C3 9.12.4.105
//*********************************************************************
//* CHANGE LOG INVENTORY:
//* UPDATE BSDS
//*********************************************************************
//DSNTLOG EXEC PGM=DSNJU003,COND=(4,LT)
//STEPLIB DD DISP=SHR,DSN=DB9C9.SDSNLOAD
//SYSUT1 DD DISP=OLD,DSN=DB9CL.D9C1.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB9CL.D9C1.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
DDF LOCATION=DB9C,LUNAME=SCPD9C1,
RESPORT=38321,PORT=38320,SECPORT=0
DATASHR ENABLE
*
* WARNING! DO NOT CHANGE ANY PARAMETERS IN THE GROUP STATEMENT BELOW!
GROUP GROUPNAM=DB9CG,GROUPMEM=D9C1,MEMBERID=1
//*
**** DISTRIBUTED DATA FACILITY ****
COMMUNICATION RECORD
21:55:59 APRIL 22, 2009
LOCATION=DB9C IPNAME=(NULL) PORT=38320 SPORT=NULL RPORT=38321
ALIAS=(NULL)
IPV4=NULL IPV6=NULL
GRPIPV4=NULL GRPIPV6=NULL
LUNAME=SCPD9C1 PASSWORD=(NULL) GENERICLU=(NULL)
Figure 3-26 shows the DSNJU003 input for D9C1 to change the DDF record to include the
member DVIPA and the group DVIPA.
Figure 3-26 DSNJU003 input to add DVIPA and Group DVIPA to the BSDS for D9C1
Figure 3-27 shows the corresponding input for member D9C2.
Figure 3-27 DSNJU003 input to add DVIPA and Group DVIPA to the BSDS for D9C2
Figure 3-28 shows the corresponding input for member D9C3.
Figure 3-28 DSNJU003 input to add DVIPA and Group DVIPA to the BSDS for D9C3
//DSNTLOG EXEC PGM=DSNJU003,COND=(4,LT)
//STEPLIB DD DISP=SHR,DSN=DB9C9.SDSNLOAD
// DD DISP=SHR,DSN=DB9C9.SDSNEXIT
//SYSUT1 DD DISP=OLD,DSN=DB9CL.D9C1.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB9CL.D9C1.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
DDF IPV4=9.12.4.103,GRPIPV4=9.12.4.102
//DSNTLOG EXEC PGM=DSNJU003,COND=(4,LT)
//STEPLIB DD DISP=SHR,DSN=DB9C9.SDSNLOAD
// DD DISP=SHR,DSN=DB9C9.SDSNEXIT
//SYSUT1 DD DISP=OLD,DSN=DB9CL.D9C2.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB9CL.D9C2.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
DDF IPV4=9.12.4.104,GRPIPV4=9.12.4.102
//*
//DSNTLOG EXEC PGM=DSNJU003,COND=(4,LT)
//STEPLIB DD DISP=SHR,DSN=DB9C9.SDSNLOAD
// DD DISP=SHR,DSN=DB9C9.SDSNEXIT
//SYSUT1 DD DISP=OLD,DSN=DB9CL.D9C3.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB9CL.D9C3.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
DDF IPV4=9.12.4.105,GRPIPV4=9.12.4.102
//*
After running the DSNJU003 jobs, and restarting the corresponding DB2 members, the
Communication Record from DSNJU004 shows the member DVIPA and the group DVIPA.
See Figure 3-29 for the contents of the new Communication Record for member D9C1.
Figure 3-29 DSNJU004 output for member D9C1 with DVIPA specified
We also issued the following command:
-D9C1 DISPLAY DDF
The output, in Figure 3-30, shows the group DVIPA and the member DVIPA. The SQL and
RESYNC DOMAIN entries show -NONE.
Figure 3-30 Output from -D9C1 DISPLAY DDF showing DVIPA specifications
**** DISTRIBUTED DATA FACILITY ****
COMMUNICATION RECORD
21:39:14 APRIL 24, 2009
LOCATION=DB9C IPNAME=(NULL) PORT=38320 SPORT=NULL RPORT=38321
ALIAS=(NULL)
IPV4=9.12.4.103 IPV6=NULL
GRPIPV4=9.12.4.102 GRPIPV6=NULL
LUNAME=SCPD9C1 PASSWORD=(NULL) GENERICLU=(NULL)
RESPONSE=SC63
DSNL080I -D9C1 DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB9C USIBMSC.SCPD9C1 -NONE
DSNL084I TCPPORT=38320 SECPORT=0 RESPORT=38321 IPNAME=-NONE
DSNL085I IPADDR=::9.12.4.102
DSNL086I SQL DOMAIN=-NONE
DSNL086I RESYNC DOMAIN=-NONE
DSNL089I MEMBER IPADDR=::9.12.4.103
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
We added the domain name server (DNS) entries for the data sharing group, as described in
3.1.4, TCP/IP settings in a data sharing environment on page 76 and Figure 3-11 on
page 81. Then we repeated the DDF display command. The output is shown in Figure 3-31.
Figure 3-31 Output from -D9C1 DISPLAY DDF showing DNS support for the group
This view of the output includes the message DSNL519I, which indicates DB2 is ready to
accept connections on any IP address supported by the TCP/IP stack. This is the benefit of
using DB2 9 for z/OS support to specify the group and member DVIPAs in the BSDS.
Updating the BSDS for LOCATION ALIAS support
After we made the TCP/IP changes described in Alias support on page 82, we had to add
the LOCATION ALIAS entries to our BSDS. Figure 3-32 shows the input for DSNJU003 to
add two LOCATION ALIAS specifications to data sharing member D9C1. DB9CALIAS is a
LOCATION ALIAS that only includes D9C1. DB9CSUBSET includes both D9C1 and D9C2.
Figure 3-32 DSNJU003 input to add ALIAS definitions to D9C1
DSNL519I -D9C1 DSNLILNR TCP/IP SERVICES AVAILABLE 484
FOR DOMAIN d9cg.itso.ibm.com AND PORT 38320
-D9C1 DISPLAY DDF
DSNL080I -D9C1 DSNLTDDF DISPLAY DDF REPORT FOLLOWS: 486
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB9C USIBMSC.SCPD9C1 -NONE
DSNL084I TCPPORT=38320 SECPORT=0 RESPORT=38321 IPNAME=-NONE
DSNL085I IPADDR=::9.12.4.102
DSNL086I SQL DOMAIN=d9cg.itso.ibm.com
DSNL086I RESYNC DOMAIN=d9cg.itso.ibm.com
DSNL089I MEMBER IPADDR=::9.12.4.103
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
//DSNTLOG EXEC PGM=DSNJU003,COND=(4,LT)
//STEPLIB DD DISP=SHR,DSN=DB9C9.SDSNLOAD
// DD DISP=SHR,DSN=DB9C9.SDSNEXIT
//SYSUT1 DD DISP=OLD,DSN=DB9CL.D9C1.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB9CL.D9C1.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
DDF ALIAS=DB9CALIAS:38324,DB9CSUBSET:38325
Figure 3-33 shows the input for DSNJU003 to add LOCATION ALIAS, DB9CSUBSET, to data
sharing member D9C2.
Figure 3-33 DSNJU003 input to add ALIAS definition to D9C2
After we restarted each of these two members, we ran DSNJU004 to verify the BSDS
changes. Figure 3-34 shows the addition of two ALIAS specifications for D9C1.
Figure 3-34 DSNJU004 output showing LOCATION ALIAS and DVIPA with IPV4
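From a requester's point of view, a LOCATION ALIAS behaves like an additional location
listening on its own port. As a sketch only (the alias name, port, and group domain name are
the ones defined above, while the credentials and the exact URL form accepted by your driver
level should be verified against the driver documentation), a JDBC Type 4 client could restrict
itself to the D9C1/D9C2 subset like this:

import java.sql.Connection;
import java.sql.DriverManager;

// Illustration only: connecting to the DB9CSUBSET location alias (port 38325)
// instead of the group location DB9C on the group DRDA port 38320.
public class ConnectToAlias {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://d9cg.itso.ibm.com:38325/DB9CSUBSET";
        try (Connection con = DriverManager.getConnection(url, "paolor7", "secret")) {
            System.out.println("Connected via alias: " + con.getMetaData().getURL());
        }
    }
}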
3.2.5 DDF address space setup
The DB2 distributed services address space (ssidDIST) startup JCL is generated by the
installation job DSNTIJMV (if the DDF startup option is set to AUTO or COMMAND, and a
location name is specified on the DSNTIPR panel). The DSNTIJMV job can be found in the
hlq.SDSNSAMP data set. Figure 3-35 on page 104 shows the JCL for D9C1DIST, one of the
members of our data sharing group.
//DSNTLOG EXEC PGM=DSNJU003,COND=(4,LT)
//STEPLIB DD DISP=SHR,DSN=DB9C9.SDSNLOAD
// DD DISP=SHR,DSN=DB9C9.SDSNEXIT
//SYSUT1 DD DISP=OLD,DSN=DB9CL.D9C2.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB9CL.D9C2.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
DDF ALIAS=DB9CSUBSET:38325
**** DISTRIBUTED DATA FACILITY ****
COMMUNICATION RECORD
00:24:32 APRIL 28, 2009
LOCATION=DB9C IPNAME=(NULL) PORT=38320 SPORT=NULL RPORT=38321
ALIAS=DB9CALIAS:38324,DB9CSUBSET:38325
IPV4=9.12.4.103 IPV6=NULL
GRPIPV4=9.12.4.102 GRPIPV6=NULL
LUNAME=SCPD9C1 PASSWORD=(NULL) GENERICLU=(NULL)
Figure 3-35 JCL for D9C1DIST
//D9C1DIST JOB MSGLEVEL=1
//STARTING EXEC D9C1DIST
XX*************************************************
XX* JCL PROCEDURE FOR THE STARTUP OF THE
XX* DISTRIBUTED DATA FACILITY ADDRESS SPACE
XX*
XX*************************************************
XXD9C1DIST PROC RGN=0K,
XX LIB='DB9C9.SDSNEXIT'
XXIEFPROC EXEC PGM=DSNYASCP,REGION=&RGN
IEFC653I SUBSTITUTION JCL - PGM=DSNYASCP,REGION=0K
XXSTEPLIB DD DISP=SHR,DSN=&LIB
IEFC653I SUBSTITUTION JCL - DISP=SHR,DSN=DB9C9.SDSNEXIT
XX DD DISP=SHR,DSN=CEE.SCEERUN
XX DD DISP=SHR,DSN=DB9C9.SDSNLOAD
3.2.6 Stored procedures and support for JDBC and SQLJ
If you need to provide support for JDBC and SQLJ requesters for the first time, there are
several steps you must complete. Most of these steps are for DB2 for z/OS as the DRDA AS,
but some relate to the requesters. The steps to install support for JDBC and SQLJ are as
follows:
1. Allocate and load IBM Data Server Driver for JDBC and SQLJ libraries. Perform this step
on the clients.
2. On DB2 for z/OS, set the DESCSTAT parameter to YES (DESCRIBE FOR STATIC on the
DSNTIPF installation panel). This is necessary for SQLJ support.
3. In z/OS UNIX System Services, edit the .profile file to customize environment variable
settings.
4. (Optional) Customize IBM Data Server Driver for JDBC and SQLJ configuration
properties. This refers to the properties on the clients.
5. On DB2 for z/OS, enable the DB2-supplied stored procedures and define the tables that
are used by the IBM Data Server Driver for JDBC and SQLJ.
6. In z/OS UNIX System Services, run the DB2Binder utility to bind the packages for the IBM
Data Server Driver for JDBC and SQLJ.
7. Install z/OS Application Connectivity to DB2 for z/OS feature. This applies if you have a
JDBC or SQLJ application on z/OS in an LPAR where you do not have a DB2 for z/OS
subsystem. This feature is effectively the Type 4 driver for z/OS and provides the DRDA
AR function without a local DB2 (and DDF) to communicate with a DB2 for z/OS server
elsewhere in the network.
Refer to DB2 Version 9.1 for z/OS Installation Guide, GC18-9846 for more information about
these steps.
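Once these steps are complete, a minimal Type 4 connection test is a convenient way to
confirm that the driver, the JCC packages, and DDF are all working together. The sketch below
assumes our standalone subsystem DB9A (location DB9A, DRDA port 12347 on
wtsc63.itso.ibm.com); the user ID and password are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal connectivity check with the IBM Data Server Driver for JDBC and SQLJ
// (Type 4). URL format: jdbc:db2://<host>:<DRDA port>/<LOCATION name>.
public class TestType4Connection {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A";
        try (Connection con = DriverManager.getConnection(url, "paolor1", "secret");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT CURRENT SERVER, CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")) {
            while (rs.next()) {
                System.out.println("Connected to " + rs.getString(1) + " at " + rs.getString(2));
            }
        }
    }
}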
3.3 Workload Manager setup
Workload Manager (WLM) is the operating system component responsible for managing all
the work in your z/OS environment. WLM uses workload classification, service classes, and
performance objectives to manage the resources of the z/OS system to meet your business
objectives. From a DB2 perspective, WLM manages the priority of the DB2 address spaces,
including DDF, IRLM, and stored procedure address spaces.
WLM plays an important role in managing DDF work. We refer to work that enters the system
through DDF as DDF transactions. WLM manages the priority of DDF transactions separately
from the priority of the DDF address space. Define your DDF transactions to WLM correctly to
ensure that the DDF transactions receive an appropriate level of system resources and can
achieve the performance objectives that meet your business requirements.
In this section we discuss how WLM uses enclaves to manage your DDF transactions, and
we show how to define performance objectives for your DDF transaction workload.
3.3.1 Enclaves
An enclave is an independently dispatchable unit of work, or a business transaction, that can
span multiple address spaces and can include multiple SRBs and TCBs. DDF transactions
execute as enclaves in DB2. (DB2 also uses enclaves for other purposes, such as parallel
queries or native SQL procedures.)
The DDF address space owns all the enclaves created by the distributed data facility.
However, there is no special connection between the enclaves and their owner address
space. Each enclave is managed separately by the MVS System Resource Manager (SRM)
according to its performance objectives.
DDF and the life of an enclave
When a connection request comes to DDF, the connection must be associated with a DBAT
before the DDF transaction can execute SQL. When the DDF transaction processes its first
SQL statement, DDF calls WLM to create an enclave. WLM then manages the enclave based
on the workload characteristics assigned to that enclave.
The enclave is the basis for assigning system resources to the DDF transaction running on
that enclave. The enclave is also the basis for reporting thread performance. Refer to 7.2.1,
Database Access Threads on page 271 for details on DDF thread performance.
When the enclave is deleted depends on whether the DBAT can become pooled. If the DBAT
becomes pooled, the enclave is deleted. If the DBAT cannot become pooled, the enclave is
only deleted at thread termination time. (If a DBAT becomes type 1 inactive, the enclave is
deleted. Type 1 inactive DBATs generally only apply to DB2 Private Protocol connections.)
3.3.2 Managing DDF work with WLM
DDF transactions are classified and defined to a WLM service class in the active WLM policy.
Use the WLM administrative panels to define service classes, report classes, and
classification rules for DDF transactions.
To define DDF transactions to WLM, perform the following tasks:
Define service classes, performance objectives for these service classes, and optional
report classes.
Define classification rules to assign incoming DDF transactions to the appropriate service
class. DDF transactions are classified using the subsystem type of DDF, or SUBSYS =
DDF.
Activate the WLM policy.
The DIST address space is not managed based on the same classification rules for DDF
transactions. DIST performance objectives are defined with SUBSYS = STC. Generally DIST
should be defined to run with the same objectives as MSTR and DBM1, all of which should be
higher than the performance objectives of the DDF enclaves.
Performance objectives for DDF transactions
For a DDF transaction, WLM assigns the performance goal to the enclave, and it is the
lifetime of the enclave that WLM takes as the duration of the work. Therefore, when you run
with CMTSTAT=INACTIVE, DDF creates one enclave per transaction, and response time
goals and multiple period objectives can be used. However, if you have CMTSTAT=ACTIVE
DDF creates one enclave for the life of the thread, and response time goals and multiple
periods should not be used.
If you specify CMTSTAT=INACTIVE but the DBAT cannot be pooled at commit, then the
enclave is not deleted and may end up running against lower period objectives. You should
ensure this happens only infrequently. See also 7.2.1, Database Access Threads on
page 271.
Modifying WLM definitions for DB2 and DDF
The remainder of this section describes the changes we made to the WLM definitions for our
DB2 address spaces and the specifications for DDF transactions in our environment.
For details on WLM, refer to System Programmer's Guide to: Workload Manager, SG24-6472.
We started by modifying the classification rules for DB2. By default, DB2 address spaces run
in SYSSTC. Like many customers, we prefer running our DB2 address spaces in importance
1 with a reasonable velocity goal. We chose option 6, Classification Rules, shown in Figure 3-36
on page 107, to modify the rules for subsystem type STC.
Attention: If you do not define classification rules for SUBSYS = DDF, all your DDF
transactions will run in the SYSOTHER service class, which has a discretionary goal. This
means your DDF transactions will only execute after all the other service class goals are
met. In a busy system, this could mean that your DDF transactions get little service.
Figure 3-36 WLM: Choosing Classification Rules
Refer to Figure 3-37, where you can see that the default service class for subsystem type
STC, Started Tasks, is STC and that a report class of RSYSDFLT is defined.
Figure 3-37 WLM: Subsystem Type Selection: Choosing classification rules for started tasks
--------------------------------------------------------------------------
Functionality LEVEL019 Definition Menu WLM Appl LEVEL021
Command ===> ______________________________________________________________
Definition data set . . : none
Definition name . . . . . soaredb (Required)
Description . . . . . . . ________________________________
Select one of the
following options. . . . . 6__ 1. Policies
2. Workloads
3. Resource Groups
4. Service Classes
5. Classification Groups
6. Classification Rules
7. Report Classes
8. Service Coefficients/Options
9. Application Environments
10. Scheduling Environments
--------------------------------------------------------------------------
Subsystem Type Selection List for Rules Row 1 to 14 of 14
Command ===> ______________________________________________________________
Action Codes: 1=Create, 2=Copy, 3=Modify, 4=Browse, 5=Print, 6=Delete,
/=Menu Bar
------Class-------
Action Type Description Service Report
__ ASCH APPC Transaction Programs ASCH
__ CB WebSphere/Component Broker DDFDEF RCB
__ CICS CICS Transactions CICS
__ DB2 DB2 Sysplex Queries DB2QUERY
__ DDF DDF Work Requests DDFBAT
__ EWLM EWLM Subsystem for ESC/ETC SC_EWLM REWLMDEF
__ IMS IMS Transactions IMS
__ IWEB Web Work Requests RIWEB
__ JES Batch Jobs BATCHLOW BATCHDEF
__ MQ MQSeries Workflow RMQ
__ OMVS UNIX System Services SYSSTC1
3_ STC Started Tasks STC RSYSDFLT
__ SYSH linux
__ TSO TSO Commands TSO TSO
Figure 3-38 shows the changes we made to subsystem type STC. We added a transaction
name group (TNG) for the DB2 address spaces, which we called DB2AS, with service class
STCHI and report class RDB2AS. We placed the definition for this TNG before the SPM
definitions, otherwise the SPM would override our DB2AS TNG specifications. We also
modified the existing transaction (TN) definition for the IRLM address spaces. We made sure
the IRLMs had a service class of SYSSTC, because they should be equal to or higher than
the DB2 address spaces.
Figure 3-38 WLM: STC service classes and report classes
Figure 3-39 shows what we included in our DB2AS TNG. All of the DB2 subsystem or
member address spaces run in service class STCHI.
Figure 3-39 WLM: Transaction Name Group (TNG) for all DB2s in our sysplex
--------------------------------------------------------------------------
Modify Rules for the Subsystem Type Row 1 to 8 of 17
Command ===> ___________________________________________ Scroll ===> CSR
Subsystem Type . : STC Fold qualifier names? Y (Y or N)
Description . . . Started Tasks
Action codes: A=After C=Copy M=Move I=Insert rule
B=Before D=Delete row R=Repeat IS=Insert Sub-rule
More ===>
--------Qualifier-------- -------Class--------
Action Type Name Start Service Report
DEFAULTS: STC RSYSDFLT
____ 1 TNG DB2AS ___ STCHI RDB2AS
____ 1 TN %%%%IRLM ___ SYSSTC RIRLM
____ 1 SPM SYSTEM ___ SYSTEM RSYSTEM
____ 1 SPM SYSSTC ___ SYSSTC RSYSSTC
____ 1 TN DFS ___ SYSSTC2 RINETD1
____ 1 TN DFSKERN ___ SYSSTC3 RINETD1
____ 1 TN INETD1 ___ SYSSTC4 RINETD1
____ 1 TN PMAP ___ SYSSTC5 RPMAP
* Transaction Name Group DB2AS - DB2 system address spaces
Created by user PAOLOR6 on 2009/04/15 at 13:17:45
Last updated by user PAOLOR6 on 2009/04/15 at 14:28:09
Qualifier
name Description
--------- --------------------------------
%%%%MSTR System services
%%%%DBM1 Database services
%%%%DIST DDF
Figure 3-40 shows the performance objective for service class STCHI.
Figure 3-40 WLM: STCHI service class goal
STCHI runs with importance 1 and velocity goal of 60. This is an appropriate goal for DB2
address spaces. All the DDF transactions we define below run below importance 1. It is not a
good idea for transactions to run at the same importance as the subsystem address spaces. The IRLM
is running in SYSSTC, which is above importance 1.
Figure 3-41 shows a portion of the System Display and Search Facility (SDSF) Display Active
(DA) panel for prefix D9C1. You can see that the IRLM is in service class SYSSTC and the
other DB2 address spaces are in service class STCHI.
Figure 3-41 SDSF display showing service classes for D9C1 address spaces
Defining service classes for DDF transactions
Choose option 4 from the main WLM menu to define Service Classes. In our environment we
started with three service classes for DDF transactions, DDFONL for high priority
transactions, DDFDEF, which at one time had been the default, and DDFBAT, which was the
default service class when we started. Figure 3-42 on page 110 shows the service class
definition for DDFONL.
* Service Class STCHI - Started Tasks
Created by user FISCHER on 2006/12/11 at 18:20:50
Base last updated by user PAOLOR6 on 2009/04/15 at 13:03:42
Base goal:
CPU Critical flag: NO
# Duration Imp Goal description
- --------- - ----------------------------------------
1 1 Execution velocity of 60
------------------------------------------------------------------------------
SDSF DA SC63 SC63 PAG 0 CPU/L/Z 7/ 7/ 0 LINE 1-4 (4)
COMMAND INPUT ===> SCROLL ===> CSR
NP JOBNAME Workload SrvClass SP ResGroup Server Quiesce ECPU-Time ECPU%
D9C1DBM1 STCTASKS STCHI 1 NO 21.35 0.00
D9C1DIST STCTASKS STCHI 1 NO 2.90 0.00
D9C1IRLM SYSTEM SYSSTC 1 NO 352.58 0.00
D9C1MSTR STCTASKS STCHI 1 NO 176.93 0.00
Figure 3-42 WLM: DDFONL service class goals
Note that DDF transactions that run in DDFONL service class can consume 500 service units
in first period, during which time they run at a WLM importance of 2. The goal is that 90% of
DDF transactions in this service class will complete within one second. Any DDF transaction
that consumes more than 500 service units will run at a WLM importance of 3 with a velocity
goal of 40.
Figure 3-43 shows the DDFDEF service class, which has lower performance objectives. First
period transactions run at importance 3 and we expect 80% to complete within half a second.
After 500 service units they run in second period at importance 4 with a velocity goal of 20.
Figure 3-43 WLM: DDFDEF service class goals
* Service Class DDFONL - DDF High priority
Created by user BARTR2 on 2003/02/18 at 17:09:58
Base last updated by user BART on 2003/04/11 at 17:42:53
Base goal:
CPU Critical flag: NO
# Duration Imp Goal description
- --------- - ----------------------------------------
1 500 2 90% complete within 00:00:01.000
2 3 Execution velocity of 40
Browse Line 00000000 Col 001 072
Command ===> SCROLL ===> CSR
**************************** Top of Data ******************************
* Service Class DDFDEF - DDF Requests
Created by user BART on 2003/04/11 at 20:27:56
Base last updated by user BART on 2003/04/11 at 20:27:56
Base goal:
CPU Critical flag: NO
# Duration Imp Goal description
- --------- - ----------------------------------------
1 500 3 80% complete within 00:00:00.500
2 4 Execution velocity of 20
*************************** Bottom of Data ****************************
Figure 3-44 shows the DDFBAT service class which is the default service class for workload
of subsystem type DDF in our environment.
Figure 3-44 WLM: DDFBAT service class goals for our default DDF service class
Remember to define a default service class for subsystem type DDF in case DDF
transactions that you do not otherwise define come into the system. You would not want these
to end up in service class SYSOTHER.
We added two service classes for specific scenarios: DDFTOT, for transactions identified by
the characters ToTo, and DDFTST.
Figure 3-45 shows the service class definition for DDFTOT.
Figure 3-45 WLM: DDFTOT service class definition
* Service Class DDFBAT - DDF low priority
Class assigned to resource group DDF
Created by user BARTR2 on 2003/02/18 at 17:12:04
Base last updated by user PAOLOR6 on 2009/04/15 at 20:12:47
Base goal:
CPU Critical flag: NO
# Duration Imp Goal description
- --------- - ----------------------------------------
1 1000 3 Average response time of 00:30:00.000
2 4 70% complete within 01:00:00.000
* Service Class DDFTOT - DDF TOTO for RB
Created by user PAOLOR6 on 2009/04/22 at 20:05:12
Base last updated by user PAOLOR6 on 2009/04/22 at 20:05:12
Base goal:
CPU Critical flag: NO
# Duration Imp Goal description
- --------- - ----------------------------------------
1 500 2 80% complete within 00:00:00.500
2 3 Execution velocity of 20
Figure 3-46 shows the service class definition for DDFTST. We had the same performance
objectives but defined a separate service class to support separate scenarios.
Figure 3-46 WLM: DDFTST service class definition
* Service Class DDFTST - DDF Test for RB
Created by user PAOLOR6 on 2009/04/22 at 20:01:31
Base last updated by user PAOLOR6 on 2009/04/22 at 20:03:05
Base goal:
CPU Critical flag: NO
# Duration Imp Goal description
- --------- - ----------------------------------------
1 500 2 80% complete within 00:00:00.500
2 3 Execution velocity of 20
Defining classification rules
Once you have defined the service classes, and optional report classes, that you require, you
can set up the classification rules that assign the incoming DDF transactions to the
appropriate service classes.
There are many attributes or qualifiers you can use to classify DDF transactions and assign
them to a service class. Table 3-5 provides a list of the attributes or qualifiers available to
classify DDF transactions. The attributes with an asterisk, (*), identify those fields that can be
set by client information APIs, for example SQLSET commands or Java
SET_CONNECTION_ATTRIBUTES.
Important: To classify work (transactions) coming in through DDF, use the DDF
subsystem type. The DB2 subsystem type is only for DB2 sysplex query parallelism. In
addition, to assign performance goals for the DB2 address spaces (including ssidDIST),
you use the STC subsystem type, as we describe in Figure 3-38 on page 108.
Table 3-5 DDF work classification attributes
Attribute Type Description
Accounting Information* AI Can be passed from an application through Client Information
APIs
Correlation Information* CI Application program by default, but application can set through
Client Information APIs
Collection Name CN Collection name of the first SQL package accessed by the
requester in the unit of work
Connection Type CT Always DIST for DDF server threads
Package Name PK Name of the first DB2 package accessed by the DRDA requester
in the unit of work
Plan Name* PN Always DISTSERV for DDF server threads accessed through
DRDA requesters
Procedure Name PR Name of the procedure called as the first request of the unit of
work
Process Name* PC Client application name by default, but can be set through Client
Information APIs
Subsystem Collection Name SSC Usually the DB2 data sharing group name
Subsystem Instance SI DB2 server's MVS subsystem name
Sysplex Name PX Name assigned to the parallel sysplex at IPL
Userid UI DDF server thread's primary AUTHID
Subsystem Parameter* SPM Concatenation of client user ID and workstation
* The attribute can be set by Client Information APIs.
Figure 3-47 shows a subset of the classification rules that apply to our standalone subsystem,
DB9A, and to the members of our data sharing group.
Figure 3-47 A subset of WLM classification rules
* Subsystem Type DDF - DDF Work Requests
Last updated by user YIM on 2009/05/21 at 15:23:39
Classification:
Default service class is DDFBAT
There is no default report class.
Qualifier Qualifier Starting Service Report
# type name position Class Class
- ---------- -------------- --------- -------- --------
1 SI DB9A DDFDEF RDB9ADEF
2 . PC . TRX* DDFONL RSSL
2 . AI . TotoA* 56 DDFTOT
2 . UI . PAOLOR3 RNISANTI
1 SI D9C* DDFDEF RD9CG
2 . UI . PAOLOR3 RNMSHARE
2 . UI . PAOLOR7 DDFTST
2 . PC . db2j* DDFONL
We had two levels of classification rules for DB9A and two levels for the members of our data
sharing group.
Any DDF transactions that requested connections from DB9ADIST were qualified based on
the first four lines.
DDF transactions with an application name (PC) beginning with the characters TRX ran
in service class DDFONL and we could report on them separately with report class RSSL.
Requesters that passed accounting information (AI) with the characters ToToA beginning
in position 56 of the accounting string ran in service class DDFTOT. We did not need a
separate report class in this case, because we knew only those transactions would be in
DDFTOT service class.
Any DDF transactions with a primary AUTHID (UI) of PAOLOR3 ran in the default service
class for DDF (DDFBAT, not shown on this panel) with a separate report class.
Any other DDF transactions that came to DB9ADIST (SI = DB9A) ran in service class
DDFDEF with report class RDB9ADEF
Any DDF transactions that requested connections to any of the data sharing group members
were qualified based on the last four lines,
Any DDF transactions with a primary AUTHID (UI) of PAOLOR3 ran in the default service
class for DDF (DDFBAT, not shown on this panel) with a separate report class.
Any DDF transactions with a primary AUTHID (UI) of PAOLOR7 ran in the DDFTST
service class.
DDF transactions with an application name (PC) beginning with the characters db2j ran
in service class DDFONL.
Any other DDF transactions that came to a data sharing group member (SI beginning with
D9C) ran in service class DDFDEF with report class RD9CG.
This set of classification rules gave us the ability to distinguish between specific workloads
and monitor DDF transactions either with service class, for example from the SDSF DA panel,
or with report classes.
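The client-settable attributes in Table 3-5 are what make rules such as the PC and AI qualifiers
above usable. As a hedged sketch, the following Java fragment uses the client information
setters of the IBM Data Server Driver for JDBC and SQLJ as we understand them (verify the
method names against your driver level) to tag a unit of work so that our classification rules
would route it to DDFONL:

import java.sql.Connection;
import java.sql.DriverManager;
import com.ibm.db2.jcc.DB2Connection;

// Hedged sketch: setting client information so that WLM classification rules
// on the PC (process name) and AI (accounting information) qualifiers apply.
public class TagDdfWork {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A";
        try (Connection con = DriverManager.getConnection(url, "paolor3", "secret")) {
            DB2Connection db2con = con.unwrap(DB2Connection.class);
            // Process name (PC): matches the TRX* rule and runs in DDFONL
            db2con.setDB2ClientApplicationInformation("TRX0001");
            // Accounting information (AI): our rule matches ToToA starting in
            // position 56 of the full accounting string; where the client-set
            // portion lands depends on the driver-built prefix
            db2con.setDB2ClientAccountingInformation("ToToA-online-billing");
            // ... execute the transaction's SQL here; the enclave created for
            // this unit of work is classified using the attributes set above
        }
    }
}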
Note: The classification rules are evaluated in the order they are displayed by the WLM
ISPF interface. For good performance, especially if you set up many classification rules,
specify the ones that are either the most important or the most likely to be used, at the
beginning of the list.
Tip: It is worth noting that you can start the matching for a certain qualifier in any position
of the string you are matching against. You indicate the starting position in the Start
position column. The default is to start matching from the beginning of the string. If your
organization has good naming standards in place, this can be a powerful way to distinguish
between different types of work.
It is also worth noting that the values are case sensitive. Be careful when editing the WLM
panels that attributes you intend to be lower case are not folded to upper case.
3.4 DB2 for LUW to DB2 for z/OS setup
In this section we briefly describe the steps to connect a DB2 for LUW DRDA requester to
DB2 for z/OS. We describe how to configure the IBM Data Server Clients or Drivers to
connect directly to DB2 for z/OS. We then briefly describe the DB2 Connect configuration
commands in case you still support DB2 Connect in your environment.
3.4.1 IBM Data Server Drivers and Clients
If you use the IBM Data Server Client, we recommend you use the Client Configuration
Assistant (CCA) if possible, as this reduces the likelihood of errors. The CCA is a GUI tool
that guides you through the configuration process.
If you use the IBM Data Server Runtime Client or one of the non-Java IBM Data Server
Drivers, you must create and populate the db2dsdriver.cfg file.
If you are migrating from a DB2 Connect environment you can issue the db2dsdcfgfill
command, which will create the db2dsdriver.cfg file with most of the information you need.
This file will be in the cfg directory in your DB2 instance home. If you do not have a cfg
directory under your instance home, the db2dsdcfgfill command will fail. Create the directory
before executing the command. Refer to 6.2.2, Application Servers on page 249 for a
description of the use of this command in a data sharing environment.
If you are not migrating from a DB2 Connect environment, you can use the sample
db2dsdriver.cfg provided with the driver. We include an example from our environment in
Figure 3-48.
Figure 3-48 Sample db2dsdriver.cfg for our environment
In Figure 3-49 on page 116 we show the sample db2dsdriver.cfg file provided with the drivers.
Key information you will need to customize the <DSN_Collection> section for your environment is as
follows:
name = LOCATION name of the DB2 subsystem or data sharing group
host = IP address (group DVIPA in data sharing) or domain name
port = SQL port
<configuration>
<DSN_Collection>
<dsn alias="DB9C_DIR" name="DB9C" host="wtsc63.itso.ibm.com"
port="38320"/>
<!-- Long aliases are supported -->
</DSN_Collection>
<databases>
<database name="DB9C" host="wtsc63.itso.ibm.com" port="38320">
<WLB>
<parameter name="enableWLB" value="true"/>
<parameter name="maxTransports" value="100"/>
</WLB>
<ACR>
<parameter name="enableACR" value="true"/>
</ACR>
</database>
</databases>
<parameters>
<parameter name="enableDirectXA" value="true"/>
</parameters>
</configuration>
Figure 3-49 Sample db2dsdriver.cfg provided with the driver
For the IBM Data Server Driver for JDBC and SQLJ, you can specify the DB2 for z/OS server
using either the DriverManager or DataSource interface.
Refer to Chapter 5, Application programming on page 185 for more information about
defining the IBM Data Server Drivers to connect to DB2 for z/OS.
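For instance, a DataSource definition for our data sharing group might look like the following
sketch. The property setters are those of the JCC DB2SimpleDataSource; the sysplex workload
balancing property is shown only to illustrate the kind of driver property you may want to
enable, and its name and default should be confirmed for your driver level. The credentials are
hypothetical.

import java.sql.Connection;
import javax.sql.DataSource;
import com.ibm.db2.jcc.DB2SimpleDataSource;

// Hedged sketch of the DataSource interface with the IBM Data Server Driver
// for JDBC and SQLJ, pointing at our data sharing group through its group
// domain name (group DVIPA) and group DRDA port.
public class GroupDataSource {
    public static DataSource create() {
        DB2SimpleDataSource ds = new DB2SimpleDataSource();
        ds.setDriverType(4);                    // Type 4: DRDA over TCP/IP
        ds.setServerName("d9cg.itso.ibm.com");  // group domain name
        ds.setPortNumber(38320);                // group DRDA port
        ds.setDatabaseName("DB9C");             // DB2 LOCATION name
        ds.setUser("paolor1");                  // hypothetical credentials
        ds.setPassword("secret");
        ds.setEnableSysplexWLB(true);           // sysplex workload balancing (verify property name)
        return ds;
    }

    public static void main(String[] args) throws Exception {
        try (Connection con = create().getConnection()) {
            System.out.println("Connected to " + con.getMetaData().getURL());
        }
    }
}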
3.4.2 DB2 Connect
If you are supporting DB2 Connect and have not yet migrated to the IBM Data Server Drivers
(or Clients) and you have the DB2 Connect modules installed on the workstation, you can
enter the commands described below with the Command Line Processor (CLP), or you can
use the Client Configuration Assistant (CCA).
Two-tier connection to DB2 for z/OS
In a two-tier configuration, the application runs on a workstation and communicates directly to
DB2 for z/OS. Other than network routers, no hardware is between the workstation and DB2
for z/OS. In the following discussion we assume a workstation is connecting to LOCATION
DB9A, our standalone DB9A subsystem.
<configuration>
<DSN_Collection>
<dsn alias="alias1" name="name1" host="server1.net1.com" port="50001"/>
<!-- Long aliases are supported -->
<dsn alias="longaliasname2" name="name2" host="server2.net1.com"
port="55551">
<parameter name="Authentication" value="Client"/>
</dsn>
</DSN_Collection>
<databases>
<database name="name1" host="server1.net1.com" port="50001">
<parameter name="CurrentSchema" value="OWNER1"/>
<WLB>
<parameter name="enableWLB" value="true"/>
<parameter name="maxTransports" value="50"/>
</WLB>
<ACR>
<parameter name="enableACR" value="true"/>
</ACR>
</database>
<!-- Local IPC connection -->
<database name="name3" host="localhost" port="0">
<parameter name="IPCInstance" value="DB2"/>
<parameter name="CommProtocol" value="IPC"/>
</database>
</databases>
<parameters>
<parameter name="GlobalParam" value="Value"/>
</parameters>
</configuration>
Figure 3-50 shows a diagram of a two-tier configuration in our environment with the
commands required to define the connection.
Figure 3-50 Two-tier configuration to our standalone DB2 for z/OS, DB9A
Table 3-6 shows the commands on the left, and the information about the DB2 for z/OS
environment required on the right.
Table 3-6 Commands to connect 2-tier to DB2 for z/OS and the information required
db2 catalog tcpip node stand remote wtsc63.itso.ibm.com server 12347
   stand is an arbitrary name for this node (standalone DB2 subsystem). It links the tcpip
   node to the database in the next command. The IP address could be used instead of the
   host name. Specify the DRDA port number, or service name, after the server keyword.
   From DB2 for z/OS: IP host name (or IP address) = wtsc63.itso.ibm.com; DRDA port number
   for DB2 for z/OS = 12347.

db2 catalog db DB9ADB at node stand authentication dcs
   Other security options are available besides authentication dcs. DB9ADB is an arbitrary
   name you will use to connect to DB2 for z/OS.

db2 catalog dcs db DB9ADB as DB9A
   From DB2 for z/OS: LOCATION name for DB2 for z/OS = DB9A.

db2 connect to DB9ADB user yyyyyy using xxxxxx
   From DB2 for z/OS: user ID = yyyyyy; password = xxxxxx.
In Figure 3-50, a workstation running DB2 Connect on Windows, Linux, or UNIX issues the four
DB2 CLP commands listed above against DB2 for z/OS subsystem DB9A (Location name DB9A, host
name wtsc63.itso.ibm.com, IP address 9.12.6.70, port 12347) in a 2-tier configuration.
For a description of other security options, refer to Chapter 4, Security on page 129.
Three-tier connection to DB2 for z/OS
In a three-tier configuration, the application runs on a workstation that communicates with a
server or gateway that runs DB2 Connect server. The DB2 Connect server communicates to
DB2 for z/OS. The workstation can have one of the IBM Data Server Drivers or Clients
installed. In the following discussion we assume a workstation is connecting to LOCATION
DB9A, our standalone DB9A subsystem.
Figure 3-51 shows a diagram of a three-tier configuration in our environment with the
commands required to define the connection from the DB2 Connect server to DB2 for z/OS
and from the workstation to the DB2 Connect server.
Figure 3-51 Three-tier configuration to our standalone DB2 for z/OS
Table 3-7 on page 119 shows the commands, together with the information required from the
DB2 Connect server environment. The information about DB2 for z/OS does
not change, so the commands entered at the DB2 Connect server for the three-tier
configuration are the same as those entered in the two-tier configuration in Table 3-6 on
page 117.
In Figure 3-51, a workstation with an IBM Data Server Driver or Client connects in a 3-tier
configuration through a DB2 Connect server (IP address 9.12.5.149, port 50002) to DB2 for
z/OS subsystem DB9A (Location name DB9A, host name wtsc63.itso.ibm.com, IP address 9.12.6.70,
port 12347). The DB2 Connect server issues db2 catalog tcpip node stand remote
wtsc63.itso.ibm.com server 12347, db2 catalog db DB9ADB at node stand authentication dcs, and
db2 catalog dcs db DB9ADB as DB9A; the workstation issues db2 catalog tcpip node gway remote
9.12.5.149 server 50002, db2 catalog db DB9ADB at node gway authentication dcs, and
db2 connect to DB9ADB user yyyyyy using xxxxxx.
Table 3-7 Commands to connect 3-tier to DB2 for z/OS and the information required

db2 catalog tcpip node gway remote 9.12.5.149 server 50002
   gway is an arbitrary name for the DB2 Connect server node (gateway). We show the IP
   address; the host name could be used instead. Specify the DRDA port number, or service
   name, after the server keyword.
   From the DB2 Connect server: IP address = 9.12.5.149; DRDA port number for the DB2
   Connect server = 50002.

db2 catalog db DB9ADB at node gway authentication dcs
   Other security options are available besides authentication dcs. DB9ADB is an arbitrary
   name you will use to connect to DB2 for z/OS.

db2 catalog dcs db DB9ADB as DB9A
   From DB2 for z/OS: LOCATION name for DB2 for z/OS = DB9A.

db2 connect to DB9ADB user yyyyyy using xxxxxx
   From DB2 for z/OS: user ID = yyyyyy; password = xxxxxx.

Connection to a data sharing group
For the most part, connecting to a DB2 data sharing group requires the same commands. The
information will be different. Best practices include specifying the group DVIPA or group
domain name and allowing the Sysplex Distributor to route the connection request to an
available member of the data sharing group. The diagrams and tables assume that the client
application is connecting to a member of DB2 data sharing group D9CG, which has a
LOCATION of DB9C and members D9C1, D9C2, and D9C3.
Figure 3-52 on page 120 shows the two-tier configuration with commands to connect to a
member of our data sharing group.
Figure 3-52 Two-tier connection to our DB2 for z/OS data sharing group
Table 3-8 shows the commands, together with the information required to connect to the
DB2 for z/OS data sharing environment.
Table 3-8 Commands to connect 2-tier to DB2 data sharing group and the information required
db2 catalog tcpip node share remote d9cg.itso.ibm.com server 38320
   share is an arbitrary name for this node (data sharing group). It links the tcpip node to
   the database in the next command. The IP address (group DVIPA) could be used instead of
   the group host name. Specify the DRDA port number, or service name, after the server
   keyword.
   From DB2 for z/OS: IP host name (or IP address) = d9cg.itso.ibm.com, the domain name for
   the data sharing group; DRDA port number for the DB2 for z/OS data sharing group = 38320.

db2 catalog db DB9CDB at node share authentication dcs
   Other security options are available besides authentication dcs. DB9CDB is an arbitrary
   name to connect to the DB2 for z/OS data sharing group.

db2 catalog dcs db DB9CDB as DB9C parms ',,,,,sysplex'
   parms ',,,,,sysplex' specifies sysplex support, including workload balancing.
   From DB2 for z/OS: LOCATION name for the DB2 for z/OS data sharing group = DB9C.

db2 connect to DB9CDB user yyyyyy using xxxxxx
   From DB2 for z/OS: user ID = yyyyyy; password = xxxxxx.
In Figure 3-52, a workstation running DB2 Connect on Windows, Linux, or UNIX issues the four
DB2 CLP commands listed above (including db2 catalog dcs db DB9CDB as DB9C parms
',,,,,sysplex') against data sharing group D9CG (Location name DB9C, group host name
d9cg.itso.ibm.com, group IP address 9.12.4.102, port 38320) in a 2-tier configuration.
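If the client is a Java application using the IBM Data Server Driver for JDBC and SQLJ, the same data sharing group can be reached directly through its group domain name, with the driver performing the balancing. The following minimal sketch is an assumption, not from the original text; it relies on the enableSysplexWLB and maxTransportObjects driver properties, and the user ID and password are placeholders.

import java.sql.Connection;

import com.ibm.db2.jcc.DB2SimpleDataSource;

public class ConnectToD9cgGroup {
    public static void main(String[] args) throws Exception {
        DB2SimpleDataSource ds = new DB2SimpleDataSource();
        ds.setDriverType(4);
        ds.setServerName("d9cg.itso.ibm.com");  // group domain name (group DVIPA)
        ds.setPortNumber(38320);                // group DRDA port
        ds.setDatabaseName("DB9C");             // LOCATION of the data sharing group
        ds.setEnableSysplexWLB(true);           // transaction-level workload balancing
        ds.setMaxTransportObjects(100);         // limit on transports to the group
        Connection con = ds.getConnection("yyyyyy", "xxxxxx");
        con.close();
    }
}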
If you use LOCATION ALIAS support for your data sharing group, you can choose to catalog
some of your clients to the group DRDA port number and LOCATION name and others to the
subset or alias port number and LOCATION ALIAS. In the above diagram and table, that
would mean one of the following changes:
For the single-member ALIAS, change the LOCATION from DB9C to DB9CALIAS and the
port from 38320 to 38324.
For the two-member ALIAS, change the LOCATION from DB9C to DB9CSUBSET and the
port from 38320 to 38325.
Figure 3-53 shows a diagram of a three-tier configuration in our environment with the
commands required to define the connection from the DB2 Connect server to our DB2 for
z/OS data sharing group and from the workstation to the DB2 Connect server.
Figure 3-53 Three-tier connection to our DB2 for z/OS data sharing group
Table 3-9 on page 122 shows the commands, together with the information required from the
DB2 Connect server environment. The information about the DB2 for z/OS data
sharing group does not change, so the commands entered at the DB2 Connect server for the
three-tier data sharing configuration are the same as those entered in the two-tier
configuration in Table 3-8 on page 120.
In Figure 3-53, a workstation with an IBM Data Server Driver or Client connects in a 3-tier
configuration through a DB2 Connect server (IP address 9.12.5.149, port 50002) to data
sharing group D9CG (Location name DB9C, group host name d9cg.itso.ibm.com, group IP address
9.12.4.102, port 38320). The DB2 Connect server issues db2 catalog tcpip node share remote
d9cg.itso.ibm.com server 38320, db2 catalog db DB9CDB at node share authentication dcs, and
db2 catalog dcs db DB9CDB as DB9C parms ',,,,,sysplex'; the workstation issues db2 catalog
tcpip node gway remote 9.12.5.149 server 50002, db2 catalog db DB9CDB at node gway
authentication dcs, and db2 connect to DB9CDB user yyyyyy using xxxxxx.
Table 3-9 Commands to connect 3-tier to DB2 data sharing group and the information required

db2 catalog tcpip node gway remote 9.12.5.149 server 50002
   gway is an arbitrary name for the DB2 Connect server node (gateway). We show the IP
   address; the host name could be used instead. Specify the DRDA port number, or service
   name, after the server keyword.
   From the DB2 Connect server: IP address = 9.12.5.149; DRDA port number for the DB2
   Connect server = 50002.

db2 catalog db DB9CDB at node gway authentication dcs
   Other security options are available besides authentication dcs. DB9CDB is an arbitrary
   name you will use to connect to the DB2 for z/OS data sharing group.

db2 catalog dcs db DB9CDB as DB9C parms ',,,,,sysplex'
   parms ',,,,,sysplex' specifies sysplex support, including workload balancing.
   From DB2 for z/OS: LOCATION name for the DB2 for z/OS data sharing group = DB9C.

db2 connect to DB9CDB user yyyyyy using xxxxxx
   From the DB2 Connect server: user ID = yyyyyy; password = xxxxxx.

3.5 DRDA sample setup: From DB2 for z/OS requester to DB2
for LUW on AIX server
This section provides the basic steps to connect DB2 for z/OS as a DRDA AR to a DB2 for
LUW DRDA AS. The diagram and table that follow use the definitions, names, and CDB values
that you have seen earlier in this chapter.
Figure 3-54 on page 123 shows the steps and most of the information required to connect
DB9A to the SAMPLE database in DB2 for LUW. This figure and Table 3-10 on page 123
assume that the DB2 for LUW database has been populated with at least the system and
sample tables.
Important: DRDA requires that a package be bound on the server. This is true no matter
what platform is the requester nor what platform is the server.
Figure 3-54 Steps to configure DB2 for z/OS as DRDA AR to DB2 for LUW as DRDA AS
In Table 3-10 we take each step and provide additional details, explanation and examples
based on the configuration we used.
Refer to 3.2.2, Configuring the Communications Database on page 86 to see the
corresponding CDB contents.
Table 3-10 Connecting DB2 for z/OS to DB2 for LUW
Steps to run on DB9A, with the information needed from the DB2 for LUW environment:

Step 1: Configure the Communications Database (CDB)

Insert into SYSIBM.LOCATIONS (location, linkname, port)
  values ('SAMPLE', 'MYUDBLNK', '50002');
   SAMPLE is the name of the DB2 for LUW database to which we want to connect.

Insert into SYSIBM.IPNAMES (linkname, security_out, usernames, ipaddr)
  values ('MYUDBLNK', 'P', 'O', '9.12.5.149');
   In our environment the IP address for DB2 for LUW is 9.12.5.149 and the port number is
   50002. The DNS name kodiak.itso.ibm.com could have been used, too.
In Figure 3-54, DB2 for z/OS DB9A (host name wtsc63.itso.ibm.com, IP address 9.12.6.70) is
configured as the DRDA AR and the DB2 for LUW SAMPLE database (host name kodiak.itso.ibm.com,
IP address 9.12.5.149, port 50002) is the DRDA AS. On DB9A, Step 1: a) insert a row into
SYSIBM.LOCATIONS, b) insert a row into SYSIBM.IPNAMES, c) (security option) insert a row into
SYSIBM.USERNAMES, d) stop and start DDF. Step 2: a) BIND PACKAGES for SPUFI on SAMPLE,
b) BIND PLANS for SPUFI on SAMPLE.
Insert into SYSIBM.USERNAMES (type, authid, linkname, newauthid, password)
  values ('O', ' ', 'MYUDBLNK', 'DB2INST3', 'DB2INST3');
   MYUDBLNK is an arbitrary name that links SYSIBM.LOCATIONS, SYSIBM.IPNAMES, and
   SYSIBM.USERNAMES. The value P for the SECURITY_OUT column means we must supply a password
   with a user ID. In our configuration we had a blank for the AUTHID column. We do not
   recommend leaving AUTHID blank in a production environment.
   The dbm cfg AUTHENTICATION should be set to SERVER, because we pass a user ID and
   password and expect DB2 for LUW to perform the authentication. As our CDB shows, we used
   the same user ID and password, DB2INST3.

Stop and start DDF (-db9a stop ddf; -db9a start ddf) to ensure the changes take effect.

Step 2: Bind SPUFI

BIND PACKAGE (SAMPLE.DSNESPCS) MEMBER(DSNESM68) LIBRARY(hlq.SDSNDBRM)
  ACTION(REPLACE) ISOLATION(CS) SQLERROR(NOPACKAGE) VALIDATE(BIND)
   SAMPLE is the DB2 for LUW database to which we want to connect. This is the Cursor
   Stability SPUFI package. The user ID performing the bind should have been granted the
   appropriate privileges.

BIND PACKAGE (SAMPLE.DSNESPRR) MEMBER(DSNESM68) LIBRARY(hlq.SDSNDBRM)
  ACTION(REPLACE) ISOLATION(RR) SQLERROR(NOPACKAGE) VALIDATE(BIND)
   This is the Repeatable Read package.

BIND PLAN (DSNESPCS) PKLIST(*.DSNESPCS.DSNESM68) ISOLATION(CS) ACTION(REPLACE)
   Using an asterisk (*) for the location in the PKLIST ensures the plan is bound in all
   locations.

BIND PLAN (DSNESPRR) PKLIST(*.DSNESPRR.DSNESM68) ISOLATION(RR) ACTION(REPLACE)

These steps should be enough to use SPUFI to test the connection by selecting from catalog
tables on DB2 for LUW.
Repeat the package and plan binds for your application programs. Make sure both are bound
at DB2 for z/OS and at DB2 for LUW. Ensure that appropriate security is in place before
allowing access to production tables.
3.6 Character conversion: Unicode
When you transmit character data from one DBMS to another the data may need to be
converted to a different coded character set. In different database management systems
(DBMSs), character data can be represented by different encoding schemes. Within an
encoding scheme, there are multiple coded character set identifiers (CCSIDs). EBCDIC,
ASCII, and Unicode are ways of encoding character data. The Unicode character encoding
standard is a character encoding scheme that includes characters from almost all languages
of the world, including Latin. DB2 supports two implementations of the Unicode encoding
scheme: UTF-8 (a mixed-byte form) and UTF-16 (a double-byte form).
All character data has a CCSID. Character conversion is described in terms of CCSIDs of the
source and of the target. When you install DB2, you must specify a CCSID for DB2 character
data in either of the following situations:
You specify AUTO or COMMAND for the DDF STARTUP OPTION field on panel
DSNTIPR.
Your system will have any ASCII data, Unicode data, EBCDIC mixed character data, or
EBCDIC graphic data. In this case, you must specify YES in the MIXED DATA field of
panel DSNTIPF, and the CCSID that you specify is the mixed data CCSID for the encoding
scheme.
The CCSID that you specify depends on the national language that you use.
DB2 performs most character conversion automatically, based on system CCSIDs, when data
is sent to DB2 or when data is stored in DB2. If character conversion must occur, DB2 uses
the following methods:
DB2 searches the catalog table SYSIBM.SYSSTRINGS.
DB2 uses z/OS Unicode Conversion Services.
If DB2 or z/OS Unicode Conversion Services does not provide a conversion for a certain
combination of source and target CCSIDs, you receive an error message. If the conversion is
incorrect, you might get an error message or unexpected output. In that case you may have to
populate SYSIBM.SYSSTRINGS to specify the conversion you require.
Figure 3-55 on page 126 shows DB2 install panel DSNTIPF where you specify the CCSIDs
for your system. Most data coming into your system through DRDA will be converted
automatically by DB2 according to the principle Receiver makes right, which means the data
recipient is responsible for the conversion.
Figure 3-55 DSNTIPF panel where you specify your system CCSIDs
One potential source of conversion errors is if a system on which you have bound a package
has changed its system CCSID. Another conversion concern is adding support for the euro
symbol. If you encounter conversion errors or need to support the euro symbol, refer to the
DB2 Version 9.1 for z/OS Installation Guide, GC18-9846.
DSNTIPF INSTALL DB2 - APPLICATION PROGRAMMING DEFAULTS PANEL 1
===>
Enter data below:
1 LANGUAGE DEFAULT ===> IBMCOB ASM,C,CPP,IBMCOB,FORTRAN,PLI
2 DECIMAL POINT IS ===> . . or ,
3 STRING DELIMITER ===> DEFAULT DEFAULT, " or ' (IBMCOB only)
4 SQL STRING DELIMITER ===> DEFAULT DEFAULT, " or '
5 DIST SQL STR DELIMTR ===> ' ' or "
6 MIXED DATA ===> NO NO or YES for mixed DBCS data
7 EBCDIC CCSID ===> CCSID of SBCS or mixed data. 1-65533.
8 ASCII CCSID ===> CCSID of SBCS or mixed data. 1-65533.
9 UNICODE CCSID ===> 1208 CCSID of UNICODE UTF-8 data
10 DEF ENCODING SCHEME ===> EBCDIC EBCDIC, ASCII, or UNICODE
11 APPLICATION ENCODING ===> EBCDIC EBCDIC, ASCII, UNICODE, ccsid (1-65533)
12 LOCALE LC_CTYPE ===>
13 DECFLOAT ROUNDING MODE===> ROUND_HALF_EVEN
Attention: If you need to change the current CCSIDs for your system, contact the IBM
Support Center. Such a change is complex, and you should follow the detailed plan that the
IBM Support Center has developed for this change.
3.7 Restrictions on the use of local datetime formats
The following rules apply to the character string representation of dates and times:
For input
In distributed operations, DB2 as a server uses its local date or time routine to evaluate
host variables and literals. This means that the character string representation of dates and
times can be as follows:
One of the standard formats.
A format recognized by the server's local datetime exit.
For output
With DRDA access, DB2 as a server returns date and time host variables in the format
defined at the server.
For BIND PACKAGE COPY
When binding a package using the COPY option, DB2 uses the ISO format for output
values unless the SQL statement explicitly specifies a different format. Input values can be
specified in the formats described above under For input.
3.8 HiperSockets: Definition
In this section we describe our HiperSocket definition. Configurations and considerations for
the use of HiperSockets are described in 2.5, DB2 Connect Server on Linux on IBM System
z on page 58.
HiperSockets represent the implementation of a virtual TCP/IP network path between LPARs.
The links and addresses must be defined in the TCP profile member with DEVICE, LINK,
HOME, ROUTE, and START entries.
Figure 3-56 shows the DEVICE, LINK and HOME entries. HiperSockets use TCP queued I/O
support provided through IPAQIDIO. Our link is called HIPERLF1. The IP address for the
HiperSocket on SC63 is 10.1.1.2.
Figure 3-56 TCP profile extract with HiperSocket definitions - part 1
BROWSE TCP.SC63.TCPPARMS(TCPPROF) - 01.56
Command ===>
DEVICE OSA2000 MPCIPA
LINK OSA2000LNK IPAQENET OSA2000
DEVICE OSA2020 MPCIPA
LINK OSA2020LNK IPAQENET OSA2020
DEVICE IUTIQDF1 MPCIPA
LINK HIPERLF1 IPAQIDIO IUTIQDF1
HOME
9.12.6.70 OSA2000LNK
9.12.6.71 OSA2020LNK
10.1.1.2 HIPERLF1
Figure 3-57 shows the remainder of the TCP profile member with the ROUTE entry for
HIPERLF1 and the START entry to start the IUTIQDF1 adapter identified in the DEVICE and
LINK statement.
Figure 3-57 TCP profile extract with HiperSocket definitions - part 2
Applications on the Linux on System z partition, including DB2 for LUW, reach SC63 by
specifying IP address 10.1.1.2.
BROWSE TCP.SC63.TCPPARMS(TCPPROF) - 01.56
Command ===>
HOME
9.12.6.70 OSA2000LNK
9.12.6.71 OSA2020LNK
10.1.1.2 HIPERLF1
ITRACE OFF
BEGINROUTES
ROUTE 9.12.4.0 255.255.252.0 = OSA2000LNK MTU 1500
ROUTE DEFAULT 9.12.4.1 OSA2000LNK MTU 1500
ROUTE 9.12.4.0 255.255.252.0 = OSA2020LNK MTU 1500
ROUTE DEFAULT 9.12.4.1 OSA2020LNK MTU 1500
ROUTE 10.1.1.0 255.255.255.0 = HIPERLF1 MTU 8192
ENDROUTES
START OSA2000
START OSA2020
START IUTIQDF1
Chapter 4. Security
The development of the Internet has dramatically changed the DRDA security landscape. The
major environmental changes that have an impact on DRDA security are as follows:
Application architectures have evolved, driven by the Internet. This has not only increased
the need for security, but has also changed the nature of the security requirements for
database access.
SNA has largely been abandoned in favor of TCP/IP. This has removed a layer of security
that was used extensively by DRDA configurations to provide security for links between
DRDA client LUs and the APPLs that represent DB2 for z/OS.
Many applications today use dynamic SQL. This generally requires explicit data access
authorization, instead of just package/plan execution authorization.
The combined risks of TCP/IP sniffers, the growing use of dynamic SQL for ODBC and JDBC
applications, and the growing use of group authorization IDs with substantial data access
authorization, make protecting the connection authentication dialog more important than ever.
In this chapter we provide a brief summary of the basic security DRDA configuration over
TCP/IP, explore the application server security options (including a mention of trusted context
and roles), add considerations on encryption, and discuss the security issues with dynamic
SQL.
This chapter contains the following sections:
Guidelines for basic DRDA security setup over TCP/IP on page 130
DRDA security requirements for an application server on page 140
Encryption options on page 147
Addressing dynamic SQL security concerns on page 173
4.1 Guidelines for basic DRDA security setup over TCP/IP
This section provides a brief overview of DRDA security over TCP/IP as a base for discussing
the other security topics in the chapter. The basic facilities of DRDA security are
authentication and authorization.
For details, refer to the following publications:
DB2 9 for z/OS Administration Guide, SC26-9931
DB2 Connect Version 9 Quick Beginnings for DB2 Connect Servers, GC10-4243
DB2 Connect Version 9 Users Guide, SC10-4229
4.1.1 Security options supported by DRDA access to DB2 for z/OS
DRDA defines the information flows that allow a DRDA AR connection request to be
authenticated at the DRDA AS.
The authentication mechanism to be used is specified on both the DRDA AR platform and the
DRDA AS platform. Naturally, the DRDA AR platform cannot override or bypass the
authentication standard demanded by the DRDA AS.
The security options supported by DB2 for z/OS and DRDA clients are listed in Table 4-1.
Table 4-1 Security options for DB2 for z/OS
For each option, the table gives the DB2 server support, the DB2 Connect authentication
setting, the Type 4 driver security mechanism, the CLI driver authentication setting, and the
AES support.

User ID, password: DB2 server = Y; DB2 Connect = SERVER; Type 4 driver = CLEAR_TEXT_PASSWORD_SECURITY; CLI driver = SERVER; AES = N
User ID only: DB2 server = Y; DB2 Connect = CLIENT; Type 4 driver = USER_ONLY_SECURITY; CLI driver = N; AES = N
Change password: DB2 server = Y; DB2 Connect = N; Type 4 driver = N; CLI driver = N; AES = N
User ID, password substitute: DB2 server = N; DB2 Connect = N; Type 4 driver = N; CLI driver = N; AES = N
User ID, encrypted password: DB2 server = Y; DB2 Connect = N; Type 4 driver = ENCRYPTED_PASSWORD_SECURITY; CLI driver = N; AES = Y
Encrypted user ID and password: DB2 server = Y; DB2 Connect = SERVER_ENCRYPT/SERVER_ENCRYPT_AES; Type 4 driver = ENCRYPTED_USER_AND_PASSWORD_SECURITY; CLI driver = SERVER_ENCRYPT/SERVER_ENCRYPT_AES; AES = Y
Encrypted change password: DB2 server = Y; DB2 Connect = N; Type 4 driver = N; CLI driver = N; AES = Y
Kerberos: DB2 server = Y; DB2 Connect = KERBEROS; Type 4 driver = KERBEROS_SECURITY; CLI driver = KERBEROS; AES = N
Encrypted user ID and data: DB2 server = Y; DB2 Connect = N; Type 4 driver = ENCRYPTED_USER_AND_DATA_SECURITY; CLI driver = N; AES = N
Encrypted user ID, password, and data: DB2 server = Y; DB2 Connect = DATA_ENCRYPT; Type 4 driver = ENCRYPTED_USER_PASSWORD_AND_DATA_SECURITY; CLI driver = DATA_ENCRYPT; AES = N
Encrypted user ID, password, new password, and data: DB2 server = Y; DB2 Connect = N; Type 4 driver = N; CLI driver = N; AES = N
Encrypted user ID only: DB2 server = Y; DB2 Connect = N; Type 4 driver = ENCRYPTED_USER_ONLY_SECURITY; CLI driver = N; AES = Y
SSL: DB2 server = Y; DB2 Connect = Y; Type 4 driver = Y; CLI driver = Y; AES = N/A
For example, the authentication mechanism to be used by DB2 Connect is specified on the
catalog database command, as shown in Example 4-1. Similar settings also apply to the Type 4
driver and the CLI driver; some examples are given later in the chapter.
Example 4-1 Authentication mechanism by catalog database command
$ db2 catalog db DB9C at node WTSC63 authentication SERVER_ENCRYPT_AES
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
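For the Type 4 driver, the equivalent choice is made with the securityMechanism connection property. The following minimal Java sketch is an illustration only, not taken from the original examples; the user ID and password are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

import com.ibm.db2.jcc.DB2BaseDataSource;

public class EncryptedAuthentication {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "paolor7");
        props.put("password", "xxxxxx");
        // Request encrypted user ID and password authentication (see Table 4-1)
        props.put("securityMechanism",
                String.valueOf(DB2BaseDataSource.ENCRYPTED_USER_AND_PASSWORD_SECURITY));
        Connection con = DriverManager.getConnection(
                "jdbc:db2://wtsc63.itso.ibm.com:38320/DB9C", props);
        con.close();
    }
}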
The processing flow of inbound DRDA connections over TCP/IP to DB2 for z/OS is shown in
Figure 4-1 on page 132. The process can be described as follows:
1. Is the authentication information present?
The only case where authentication information is not required is when TCPALVER=YES
is specified for the subsystem. Setting TCPALVER=YES is not a recommended option.
Review the information in 4.1.3, Important considerations when setting security related
DSNZPARMs on page 132, before you set your parameter.
2. Check the identity of the ID.
A RACF ID (or equivalent security software product) must exist in all cases. The RACF ID
is either a RACF USER ID or a RACF ID that is derived from the Kerberos principal
identity.
3. Check that the ID is allowed to connect to DB2 for z/OS through the DDF address space.
Verify that the RACF ID is permitted to access the ssid.DIST profile in the DSNR resource
class for the target DB2 subsystem (ssid).
To permit access to D9C1 through DRDA to user ID DB2USER1, issue the following
command:
PERMIT D9C1.DIST CLASS(DSNR) ID(DB2USER1) ACCESS(READ)
4. Run the DB2 connection exit.
The connection exit routine (DSN3@ATH) can be used to perform a variety of functions. It
is usually used to assign secondary authorization IDs based on the list of RACF groups a
user belongs to. TCP/IP requests do not use the sign-on exit routine (DSN3@SGN).
5. Local authorization checking.
Validate that the primary authorization ID (PAI) or any secondary authorization ID (SAI)
has the authorization to execute the specified package and access to the DB2 resources
referenced in the SQL statement.
Figure 4-1 DRDA connection flow of DB2 for z/OS
4.1.2 Authorization
After a connection has gone through DRDA authentication processing, authorization
verification is handled in the same way as for local processing.
4.1.3 Important considerations when setting security related DSNZPARMs
Considering the importance of your enterprise data, you must evaluate the following two
DSNZPARMs before allowing use of DB2 data through DRDA.
TCPALVER
EXTSEC
Setting the TCPALVER parameter
TCPALVER=NO (on DSNZPARM panel DSNTIP5) is strongly recommended for most TCP/IP
networks because of the potential security exposure from unplanned DRDA AR connections.
In Figure 4-1, the DRDA request is checked for authentication information under the TCPALVER
option, the ID and password are verified by RACF, the ID or group is checked against the DSNR
class, the DSN3@ATH connection exit assigns the primary and secondary IDs, and DB2 access
control then performs the authorization check; a failure at any step rejects the connection.
If you specify TCPALVER=YES, RACF will not perform password checking unless the
connection request sends the password. DRDA ARs can force a password to be sent by
specifying one of the server authentication attributes.
If you want to use TCPALVER=YES (to avoid RACF password checks, since the user ID is still
verified against RACF), be aware of the security risks. Evaluate the following issues when
setting TCPALVER.
Is the privilege to set the authentication attribute of the DRDA AR restricted?
If a person has authority to change the authentication attribute of DRDA AR clients, this
person can change the attribute to CLIENT authentication and allow access to DB2 for
z/OS without passwords.
Is the workstation restricted from using external media?
If a user can install any DRDA AR client (like DB2 Connect Personal Edition), this user
may catalog a link to DB2 for z/OS with authentication CLIENT to access DB2 for z/OS
without passwords.
Is the network physically separated and secured?
If a person can bring a notebook computer to simply plug into a network and have access
to DB2 for z/OS. Having a DRDA AR client (such as DB2 Connect Personal Edition)
installed in the notebook computer likely provides the privilege to use or change the DRDA
AR with authentication CLIENT.
Enhancements to TCPALVER parameter
The DRDA technology opened doors to accessing mainframe data from the intranet or
Internet. This also opens doors to ill-intentioned users and creates vulnerabilities. Many
countries have passed laws related to IT security, and companies are implementing tighter
corporate security policies to comply with them.
For DRDA, most of the authentication mechanisms can be chosen at the DRDA AR sites,
where many users have the authority to access those settings. It is not hard to change those
settings and create a security vulnerability, and it is hard to check whether all clients
connecting to DB2 for z/OS fully comply with the policies when hundreds of clients connect
to the server through the network.
With APAR PK76143, DB2 for z/OS introduced the new security validation option
SERVER_ENCRYPT for the TCPALVER parameter to solve this issue. The previous options
were YES, to indicate that remote clients can access DB2 for z/OS without a password, and NO
(the default), which indicates that DB2 for z/OS requires password authentication, whether
encrypted or not.
The new SERVER_ENCRYPT value forces clients to use encrypted authentication. All
DRDA AR clients need to authenticate with DB2 for z/OS using one of the following
authentication mechanisms:
Password encryption (AES encryption only)
SSL connection (also AT-TLS)
IP Security (IPSec)
Kerberos
Attention: This function was not tested since APAR PK76143 was still open at the time of
writing.
After specifying SERVER_ENCRYPT, if a DRDA AR client attempts to connect using an
unsupported security mechanism, DB2 for z/OS will reject the connection and issue the
DSNL030I message with reason code 00D31050.
In addition, the specification of options SERVER and CLIENT has been added for the
TCPALVER DSNZPARM for reasons of DB2 family compatibility. The SERVER value is an
alternative to the NO option and the CLIENT value is an alternative value to the YES option.
This makes it easier to match server with client settings.
Extended security
The extended security option in DSNZPARM panel DSNTIPR is a must-have option. It
provides two key functions:
When a user or program is rejected by the authentication processing, the specific reason
for rejection is returned in the SQLCA. Example 4-2 shows the results of a connection
request with an invalid password, without extended security, and then with the extended
security. With extended security, the user can easily determine the reasons for failure.
Example 4-2 Authentication rejection with and without extended security
Connection request with invalid password, with EXTSEC=NO
$ db2 connect to db9a user paolor7 using password
SQL30082N Security processing failed with reason "15" ("PROCESSING FAILURE").
SQLSTATE=08001
Connection request with invalid password, with EXTSEC=YES
$ db2 connect to db9a user paolor7 using password
SQL30082N Security processing failed with reason "24" ("USERNAME AND/OR
PASSWORD INVALID"). SQLSTATE=08001
In both cases, SYSLOG output explains that the user has given invalid password
ICH408I USER(PAOLOR7 ) GROUP(SYS1 ) NAME(P. ) 663
LOGON/JOB INITIATION - INVALID PASSWORD
IRR013I VERIFICATION FAILED. INVALID PASSWORD GIVEN.
DSNL030I -DB9A DSNLTSEC DDF PROCESSING FAILURE FOR 665
LUWID=G90C0595.B75C.C400E544C97B
AUTHID=paolor7, REASON=00F30085
Note: Although DB2 for z/OS still supports 56-bit DES encryption, it is now considered an
insecure form of authentication. The new feature will not support 56-bit DES encryption as
an acceptable security mechanism.
RACF PassTickets is not included in this support, since the password is only encoded and
not as strong as with DES- or AES-based encryption. In addition, users with a high
transaction rate typically define RACF PassTickets as replay capable, and this weakens
the security protection further. PassTickets will only be accepted if the connection itself is
protected (by AT-TLS or IPSec).
Note: When using AT-TLS or IPSec, DB2 will validate the presence of an IPSec encryption
tunnel. Because of this, you need to pay the expense of this validation to ensure that you are
accessing DB2 for z/OS in clear text but over IPSec. The IPSec validation only occurs for
initial requests.
The second function of extended security is the change password API. This allows the
RACF password to be changed as part of a DRDA AR (like DB2 Connect) request, as
shown in Example 4-3.
Example 4-3 Change RACF password on connect
$ db2 connect to db9a user paolor7 using passwd new newpswd confirm newpswd
Database Connection Information
Database server = DB2 z/OS 9.1.5
SQL authorization ID = PAOLOR7
Local database alias = DB9A
With the increasing deployment of distributed applications, where users do not necessarily
log on to TSO, or even have TSO access disabled from their RACF profile, the ability to
change passwords from the client environment, or from within a program, can be useful. The
change password API is available in all the programming languages supported by the DB2
client.
4.1.4 Recommendation for tightest security
For the tightest security, do not send a clear text password through the network. Instead,
consider using one of the following security options:
RACF PassTicket
Kerberos ticket
DRDA encrypted passwords
RACF PassTicket
A RACF PassTicket is a cryptographically generated, short-lifespan alternative to the RACF
password that can be used to authenticate to another DB2 for z/OS, as illustrated in
Figure 4-2 on page 136. This means the DB2 for z/OS DRDA AR does not have to send RACF
passwords in clear text through a network.
A RACF PassTicket is not a replacement for the regular RACF password, which remains usable.
DB2 for z/OS DRDA AR users can use a RACF PassTicket as a substitute for a RACF password.
This avoids the need to code passwords in the CDB (SYSIBM.USERNAMES) and update
them each time a password expires. Unlike the RACF password, a RACF PassTicket
applies to only one application and remains valid for a period of about 10 minutes, which
makes it easier to manage and keeps it secure.
Note: As described in the previous section, RACF PassTickets are not allowed with the new
SERVER_ENCRYPT specification for TCPALVER, which restricts the security mechanisms
accepted by the server.
Note: You can also use RACF PassTickets on distributed platforms using Tivoli Federated
Identity Manager (TFIM), sending requests to DB2 for z/OS without the need to code a
password on your clients. Refer to the TFIM product documentation for details on using RACF
PassTickets on distributed platforms.
Figure 4-2 Connecting to DB2 using RACF PassTickets
RACF PassTicket is generated from the following attributes:
The user ID of the client
The application ID (CICS applid, IMS id, ...)
The LINKNAME in the requester CDB, and the server LUNAME (non-data sharing), or
generic LUNAME (data sharing), or IPNAME (TCP/IP with VTAM independence)
A secured sign-on application key, known to both sides of the connection
A time and date stamp
Implementing RACF PassTicket
Perform the following steps to implement RACF PassTicket.
1. Set the SECURITY_OUT column of SYSIBM.IPNAMES to R.
2. Define RACF resource at requesting system
Activate the RACF PTKTDATA class by using the RACF command shown in Example 4-4.
Example 4-4 Activates PTKTDATA class
SETROPTS CLASSACT(PTKTDATA)
SETROPTS RACLIST(PTKTDATA)
Define a profile for each DB2 to which the application is connecting. This means that
profiles need to be defined for each LINKNAME column value in SYSIBM.LOCATIONS.
The KEY_MASKED_VALUE is security-sensitive information given when profiles are defined.
When the DB2 subsystem you are connecting to uses a different RACF database,
classes and profiles must be defined with the same KEY_MASKED_VALUE.
For example, to connect to DB9A using a KEY_MASKED_VALUE of
E001193519561977, use the command shown in Example 4-5.
Example 4-5 New profiles to remote DB2 subsystem
RDEFINE PTKTDATA DB9A SSIGNON(KEYMASKED(E001193519561977))
SETROPTS RACLIST(PTKTDATA) REFRESH
Restriction: A RACF PassTicket is valid only once, and the same application generates a
new PassTicket only every few seconds. If requests are made in rapid succession, the same
PassTicket can be generated again, which causes denied access. In such an environment, the NO
REPLAY PROTECTION text string in the APPLDATA field of the PTKTDATA RACF general
resource class profile will bypass PassTicket replay protection. Before using this option,
ensure it is only used in secured environments, such as limited access or physically
separated network environments.
In Figure 4-2, two z/OS systems each run DB2 for z/OS DDF and RACF; the requesting DB2 sends
the user ID and PassTicket to the serving DB2.
3. Define RACF resources at the remote system.
As mentioned, if the server RACF database is different from that of the requesting DB2, you
must define classes and profiles using the same KEY_MASKED_VALUE.
Kerberos ticket
One of the most secure forms of authentication encryption for DRDA connections to DB2 for
z/OS is to use the Kerberos authentication mechanism. Kerberos support was not tested
during the writing of this publication, so this section is limited to a theoretical discussion of
what should be done.
Kerberos derives its name from Cerberus, the mythological three-headed dog that guarded
the entrance to the underworld. Kerberos was developed by MIT as a distributed
authentication service for use over an untrusted network. Kerberos acts as a trusted third
party, responsible for issuing user credentials and tickets.
Users and servers are required to have keys registered with the Kerberos authentication
server. When a user wants to access a server, the Key Distribution Centre (KDC) issues a
ticket for access to the server. Tickets contain a client's identity, a dynamically created session
key, a time stamp, a ticket lifetime, and a service name.
The KDC consists of the following elements:
A Kerberos authentication server (KAS)
A ticket granting server (TGS)
A Kerberos database (KDB)
Kerberos is implemented in the following components of z/OS SecureWay Security Server:
z/OS SecureWay Security Server Network Authentication and Privacy Service
RACF or equivalent security product
For detailed information about setting up a Kerberos authentication service on z/OS, refer to
z/OS V1R10.0 Network Authentication Service Administration, SC24-5926.
Regarding the DRDA usage of Kerberos, the process is similar to other forms of
authentication. How to catalog a database with Kerberos principal server authentication to
DB2 Connect is shown in Example 4-6. Other DRDA AR Clients have options to choose
Kerberos authentication.
Example 4-6 Catalog a database using Kerberos authentication
catalog database <dbname> as <dbalias> at node <node> authentication kerberos
target principal <principal-name>
Note: DB2 for z/OS (as a DRDA AS) can only verify a Kerberos ticket when it is being
passed. DB2 for z/OS (as a DRDA AR) does not request authentication tickets from a
Kerberos server.
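For the Type 4 driver, Kerberos can be requested through connection properties. The following minimal Java sketch is an assumption (Kerberos was not tested during this project): it relies on the securityMechanism and kerberosServerPrincipal driver properties, and the principal name shown is hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

import com.ibm.db2.jcc.DB2BaseDataSource;

public class KerberosAuthentication {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Request the Kerberos security mechanism (see Table 4-1, KERBEROS_SECURITY)
        props.put("securityMechanism",
                String.valueOf(DB2BaseDataSource.KERBEROS_SECURITY));
        // Hypothetical Kerberos principal for the DB2 for z/OS server
        props.put("kerberosServerPrincipal", "DB9A/[email protected]");
        // With a valid ticket in the local credential cache, no password is sent
        Connection con = DriverManager.getConnection(
                "jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A", props);
        con.close();
    }
}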
The authentication flow using Kerberos connecting to DB2 for z/OS is illustrated in Figure 4-3.
Figure 4-3 Authentication process using Kerberos
DRDA encrypted passwords
Depending on the DRDA level, the DRDA AR can encrypt one of the following when sending
them to the DB2 for z/OS server:
Password
User ID and password
User ID, password and security sensitive data
In this section we show examples of setting encrypted passwords. In DB2 9 for z/OS,
advanced options such as AES encryption or SSL are provided to offer tighter security; those
options are discussed in 4.3, Encryption options on page 147.
Setting distributed clients to use encrypted passwords
As described in 4.1.1, Security options supported by DRDA access to DB2 for z/OS on
page 130, you can set the supported authentication mechanism to encrypt your password
flow through the network. If one of the previously described options is not being used, we
recommend using one of these mechanisms to encrypt your password.
Example 4-7 shows the setting to use encrypted user ID and password using DB2 Connect.
Example 4-7 Example for setting authentication mechanism at DB2 Connect
$ db2 catalog db DB9A at node SC63TS authentication SERVER_ENCRYPT
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
In WebSphere Application Server environments, you can change the connection properties to
use different security options. For example, if you want to use encrypted user ID and
password, you can set your properties as shown in Figure 4-4 on page 139.
In Figure 4-3, JDBC applications (through WebSphere Application Server and DB2 Connect on
LUW) and CLI applications present a user ID and ticket to DB2 for z/OS, which authenticates
with RACF: 1. the client requests a ticket, 2. the Kerberos server returns the ticket,
3. DB2 for z/OS requests ticket validation, and 4. authentication completes.
Figure 4-4 Setting for the DataSource custom properties on WebSphere Application Server
Setting DB2 for z/OS DRDA AR to use encrypted passwords
If you are connecting from DB2 for z/OS (as a DRDA AR), you need to populate a valid user
ID and password in the communication database (CDB) tables to connect to a remote DB2 for
z/OS server. If you are using INSERT statements to populate the CDB tables, you need to set
your user IDs and passwords in clear text. If the network is connected to the Internet or a
wide range of intranets, this can be a security exposure.
DB2 for z/OS provides the DSNLEUSR stored procedure to let users store an encrypted
translated authorization ID (NEWAUTHID) and password (PASSWORD) in the
SYSIBM.USERNAMES table.
The DB2 manual, DB2 9 for z/OS Administration Guide, SC18-9840, provides a good
example of executing the DSNLEUSR stored procedure from a COBOL program. From an
administration and security point of view, you should restrict access to the DSNLEUSR stored
procedure to security or database administrators.
Figure 4-5 shows the syntax for the DSNLEUSR stored procedure.
Figure 4-5 Syntax diagram for the DSNLEUSR
Example 4-8 shows how to invoke the stored procedure from DB2 Connect. The procedure
asks for five input parameters and two output parameters (to return the SQLCODE and a
message).
Example 4-8 Example of executing SYSPROC.DSNLEUSR(from DB2 Connect)
$ db2 "call SYSPROC.DSNLEUSR('O','PAOLOR7','DB9A','PAOLOR7','NEWPSWD',?,?)"
RETURNCODE: 0
MSGAREA: INSERTED INTO SYSIBM.USERNAMES SUCCESSFULLY.
"DSNLEUSR" RETURN_STATUS: 0
Note: To use the DSNLEUSR stored procedures, the following conditions must be met:
DB2 for z/OS V8 or later.
A WLM-established stored procedure address space.
z/OS Integrated Cryptographic Service Facility (ICSF) must be installed, configured,
and active.
CALL DSNLEUSR (Type, AuthID, LinkName, NewAuthID, Password, ReturnCode, MsgArea)
The AuthID, LinkName, NewAuthID, and Password parameters can each be specified as NULL.
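The stored procedure can also be invoked from a Java client. The following minimal sketch is an assumption, not from the original text: the parameter values mirror Example 4-8, the output parameter types are assumed to be an integer return code and a character message area, and the Connection object con is expected to already exist.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Types;

public class CallDsnleusr {
    static void storeEncryptedRow(Connection con) throws Exception {
        CallableStatement cs =
                con.prepareCall("CALL SYSPROC.DSNLEUSR(?,?,?,?,?,?,?)");
        cs.setString(1, "O");         // Type
        cs.setString(2, "PAOLOR7");   // AuthID
        cs.setString(3, "DB9A");      // LinkName
        cs.setString(4, "PAOLOR7");   // NewAuthID
        cs.setString(5, "NEWPSWD");   // Password
        cs.registerOutParameter(6, Types.INTEGER);  // ReturnCode
        cs.registerOutParameter(7, Types.VARCHAR);  // MsgArea
        cs.execute();
        System.out.println("RETURNCODE=" + cs.getInt(6) + " MSGAREA=" + cs.getString(7));
        cs.close();
    }
}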
Example 4-9 shows the inserted translated authorization ID and password from the
SYSIBM.USERNAMES CDB table. Unlike with distributed DRDA AR clients, when an encrypted
authorization ID and password are inserted in the CDB table, DB2 for z/OS automatically
chooses the security mechanism, so you only need to set the type to O.
Example 4-9 Display of inserted row of SYSPROC.DSNLEUSR
---------+---------+---------+---------+---------+---------+---------
LINKNAME NEWAUTHID PASSWORD
---------+---------+---------+---------+---------+---------+---------
DB9A ..ba0c88441ef3b6e9 ..939ce233c947f266
DSNE610I NUMBER OF ROWS DISPLAYED IS 1
4.2 DRDA security requirements for an application server
With the growing use of the application server model, it is generally accepted that the security
challenges, authentication, and policy enforcement are increasingly coordinated from
within the application server environment. DB2 accepts DRDA requests from application
server threads, which act on behalf of the real users.
4.2.1 Characteristics of a typical application server security model
The security controls in an application server environment are typically enforced when a user
signs in or requests access to a function that requires authentication. The user will be
authenticated with a user ID that is probably not defined to RACF. The database access is
usually performed under the generic user ID of the application server.
There are a number of reasons for this approach:
It enables application connection pooling to be used.
It does not externalize user IDs with database privileges to the Internet.
It makes the user ID administration easy to manage.
Figure 4-6 on page 141 shows a typical Web application server security model, where clients
are authenticated at the Web application server and a generic user ID is used to connect to
DB2.
Note: There is no way to tell the value of encrypted translated authorization IDs or
encrypted passwords once they are inserted in the CDB table. If you need to update a
password, you need to DELETE the current row before executing the DSNLEUSR stored
procedure again.
Figure 4-6 Typical DRDA access Web application server security model
The diagram does not attempt to illustrate how firewalls would be set up to establish a
demilitarized zone (DMZ)1, since that is outside the scope of this publication. Suffice it to say
that the firewall configuration would permit the TCP/IP connections illustrated in the diagram,
and nothing else.
The fact that the user authentication challenge is handled before a DB2 request is made does
not relieve DB2 of security considerations. It just changes the security exposures that
DB2 needs to be concerned with.
4.2.2 Considerations for DRDA security behind the application server
Given the typical application server security model, the major considerations for DB2 security
in this environment are:
Firewalls
To keep the bad guys out (assuming that they are outside of the DMZ).
Encryption
Encryption of authentication flows to provide protection against any bad guys who are
already inside the DMZ.
Static SQL security
Whenever possible, granting access to an executable package is preferable to granting
access to the underlying table or view.
TCP/IP security is outside the scope of this publication, but the topics of encryption and SQL
security are discussed.
1 Servers in the DMZ provide services to both the internal and external network, while an intervening firewall controls
the traffic between the DMZ servers and the internal network clients.
In Figure 4-6, clients (Web browsers) reach a Web application server (WAS with the T4 JDBC
driver) over the network, with user identification held in Tivoli Identity Manager or a similar
product rather than in DB2 or RACF. The application server connects through the TCP/IP stack
and network interfaces to DB2 for z/OS DDF under a generic user ID (Connect to DB9A user
generic_user_id using passwd). Authorization on the database server: for static SQL, GRANT
EXECUTE ON PACKAGE Package_name TO generic_user_id and GRANT S, I, U, D TO Package_owner;
for dynamic SQL, GRANT S, I, U, D ON TABLE table_name TO generic_user_id.
4.2.3 Identifying a client user coming from the application server
A common requirement for identifying a user is to determine who is having or causing a
problem, for accounting or for auditing. As explained in 4.2.1, Characteristics of a typical
application server security model on page 140, there are some good reasons to use a generic
user ID for the application server. However, a generic user ID makes it difficult to determine
who is having problems when using tools like IBM Tivoli OMEGAMON XE for DB2 Performance
Expert (OMEGAMON PE)2. If the accounting and audit logs are populated with a generic user ID
and Java process, it is impossible to determine who is doing what.
DB2 provides special registers in which to put client information, and DRDA provides a solution
by providing a way to pass client information in the DRDA data flow. Depending on the
available API, you use one of two different methods to set these fields:
The data server driver (ODBC/CLI/.NET)
The T4 driver (JDBC)
Set client information at the application server level
The client information can be populated by setting properties of the application server or of
the data source, which are then passed to DB2 for z/OS as connection attributes. If you have
several application servers, each hosting different applications, you can change the setting
for each application server, or give the applications different data sources so that they can
be distinguished. Figure 4-7 shows the possible client settings. For more granular
identification or auditing, you need to set client information for each application.
Figure 4-7 Client information settings through the WebSphere Application Server admin console
If you are using ODBC/CLI/.NET with the CLI driver or the .NET driver, you can also set
client information in either the db2cli.ini or the db2dsdriver.cfg configuration file to set a
value at the driver installation level, in addition to connection attributes. Example 4-10 on
page 143 shows a sample configuration file that sets ClientWorkstationName to the value
my workstation name.
2 IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS, V4.2 has been recently announced; see:
http://www.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=iSource&supplier=897&letternum=ENUS209-077&open&cm_mmc=4895-_-n-_-vrm_newsletter-_-10207_114312&cmibm_em=dm:0:16594050
The following list shows the IBM Data Server Driver configuration keywords:
ClientUserID
ClientApplicationName
ClientWorkstationName
ClientAccountingString
Example 4-10 Sample configuration file contents of data server driver
<configuration>
<DSN_Collection>
<dsn alias="DB9C" name="DB9C" host="wtsc63.itso.ibm.com" port="38320">
<parameter name="Authentication" value="Server_encrypt"/>
</dsn>
</DSN_Collection>
<databases>
<database name="DB9C" host="wtsc63.itso.ibm.com" port="38320">
<parameter name="ClientWorkstationName" value="my workstation name"/>
</databases>
<parameters>
<parameter name="CommProtocol" value="TCPIP"/>
</parameters>
</configuration>
If you are using the ODBC driver in a Windows environment, Figure 4-8 shows a sample
configuration to set your client information from the Data Source ODBC control panel
window. The panel populates your db2cli.ini configuration file.
Figure 4-8 Client information setting example using datasource ODBC settings
Note: If you have installed DB2 Connect, one of the Clients, or the runtime client,
db2dsdriver.cfg will not be used by the CLI driver. In most cases, your application server
should be able to set connection attributes for you, but if that is not the case, use db2cli.ini.
Set client information for applications
The application server may use a generic user ID to connect to DB2 for z/OS on behalf of
client users. In this case, if you need to audit data access, you need to set the client
information through properties or an API, after the user has been authenticated at the
application server and before the DB2 transaction starts. DRDA AR clients provide sets of
APIs to set client information from each application.
Here we provide examples of code for various applications to show how client information can
be populated from the applications.
Example 4-11 shows a Java application example of setting client information.
Example 4-11 Sample for setting client information from Java applications
Client information set from connection properties.
java.util.Properties properties = new java.util.Properties();
properties.put("clientAccountingInformation", "PAYROLL");
c = java.sql.DriverManager.getConnection (url, properties);
Client information set from methods provided by the T4 Driver
((com.ibm.db2.jcc.DB2Connection) db2conn).setDB2ClientUser(cl_user_name);
((com.ibm.db2.jcc.DB2Connection) db2conn).setDB2ClientWorkstation(cl_ws_name);
((com.ibm.db2.jcc.DB2Connection) db2conn).setDB2ClientApplicationInformation(cl_app);
((com.ibm.db2.jcc.DB2Connection) db2conn).setDB2ClientAccountingInformation("PAYROLL");
Example 4-12 shows an example of setting client information in a WebSphere Application
Server application using the WSConnection class.
Example 4-12 Sample setting client information from WebSphere Application Server applications
WSConnection conn = (WSConnection) ds.getConnection();
Properties props = new Properties();
props.setProperty(WSConnection.CLIENT_ID, cl_user_name);
props.setProperty(WSConnection.CLIENT_LOCATION ,cl_ws_name);
props.setProperty(WSConnection.CLIENT_APPLICATION_NAME,cl_app);
...
conn.setClientInformation(props);
Tip: There are several ways of adding entries into your db2cli.ini configuration file, such as
datasource ODBC settings, CLI panel, or even a text editor.
Tip: When setting client information in the application, be sure to include them in your
application standards.
Example 4-13 shows an example of setting client information from ODBC/CLI applications.
Example 4-13 Sample setting client information in ODBC/CLI applications
Information can be set as connection attribute using clientid char variable
RETCODE = SQLSetConnectAttr(hDbc, SQL_ATTR_INFO_USERID,
(SQLPOINTER)clientid,SQL_NTS);
if(RETCODE != SQL_SUCCESS){
printf("allocate conn attr unsuccessful. \n");
return(-1);
}
Information can be set within the application using an API
sqleseti(dbAliasLen, dbAlias, 1, &clientAppInfo[0], &sqlca);
If you are writing a Visual Basic application, you can use the DB2 .NET Data Provider. It
provides a set of APIs and, as with the other drivers, you can set properties of the
DB2Connection class, as shown in Example 4-14.
Example 4-14 Sample setting client information in ADO.NET application (Visual Basic)
con = New DB2Connection(myConnString)
con.ClientUser = cl_user_name
con.Open()
4.2.4 Network trusted context and roles
DB2 9 introduced new options for tighter security allowing more granularity and additional
flexibility. These options are implemented to DB2 for z/OS by the following two new entities:
Trusted context
Role
A trusted context establishes a trusted relationship between DB2 and an external entity, such
as a middleware server or another DB2 subsystem. At connect time, sets of trusted attributes
are evaluated to determine if a specific context can be trusted. After the trusted connection is
established, sets of privileges and roles will be assigned to give access to DB2 data. Roles
are not available outside of the trusted connection.
When defined, connections from specific users through source servers allow trusted
connections to DB2. The users in this context can also be defined to obtain a database role.
A role is a database entity that groups together one or more privileges and can be assigned to
users. A role can provide privileges that are in addition to the current set of privileges granted
to the user's primary and secondary authorization IDs.
Users must be allowed to use a trusted context. A trusted context can exist without a role. A
role is usable only within an established trusted connection. A default role can be assigned to
a trusted context.
Within a trusted connection, DB2 allows one and only one role to be associated with a thread
at any point in time.
Note: If you are setting client information from your application, you should not set any
client information from any configuration file.
For detailed explanation and examples using trusted context and roles, see Securing DB2
and Implementing MLS on z/OS, SG24-6480 and the article End-to-end federated trusted
contexts in WebSphere Federation Server V9.5, available from the following Web page:
http://www.ibm.com/developerworks/db2/library/techarticle/dm-0712baliga/index.html
Additionally, we briefly discuss here new functions introduced by DB2 9 APAR PK44617. The
functions added in this APAR are as follows:
EXTERNAL SECURITY PROFILE profile-name
Attribute JOBNAME with wildcard character *
The ROLE AS OBJECT OWNER clause is modified as ROLE AS OBJECT OWNER AND
QUALIFIER to indicate that the role value is used as the default for CURRENT SCHEMA
and CURRENT PATH special registers, as well as the default qualifier for the EXPLAIN
tables.
Before this change, the users allowed to switch within a trusted context could not be defined
in RACF. This made it difficult for security (RACF) administrators to manage trusted context
users in DB2. With EXTERNAL SECURITY PROFILE, profile-name was added to the user
clause to allow trusted context users to be managed by RACF.
When an unqualified name is used in your application, the new syntax ROLE AS
OBJECT OWNER AND QUALIFIER causes the role value to be used as the default for the
CURRENT SCHEMA and CURRENT PATH special registers, as well as the default qualifier
for the EXPLAIN tables.
Some value changes have been introduced as a result of this support:
The initial value of the special registers CURRENT SCHEMA and CURRENT PATH for
trusted connections with the ROLE AS OBJECT OWNER AND QUALIFIER clause in
effect has changed:
CURRENT SCHEMA
The initial value of the special register is the value of role name associated with the
user in the trusted context, if the trusted connection is established with the ROLE AS
OBJECT OWNER AND QUALIFIER clause in effect.
CURRENT PATH
The initial value of the special register is "SYSIBM", "SYSFUN", "SYSPROC", "value of
role name associated with the user in the trusted context" if the PATH bind option or
SQL PATH option is not specified and if the trusted connection with the ROLE AS
OBJECT OWNER AND QUALIFIER clause in effect.
The default schema of EXPLAIN tables for trusted connections with the ROLE AS
OBJECT OWNER AND QUALIFIER clause in effect has changed.
The default schema is the role associated with the process if the EXPLAIN statement is
executed in a trusted connection with the ROLE AS OBJECT OWNER AND
QUALIFIER clause in effect.
Note: The message texts for the following SQLCODEs were changed due to APAR
PK44617.
-20373
-20374
-20422
4.3 Encryption options
In addition to the basic encryption options described in DRDA encrypted passwords on
page 138, you can choose several options for stronger security. In this section we show
different ways to set and use strong encryption:
DRDA encryption
IP Security
Secure Socket Layer (AT-TLS)
DataPower
4.3.1 DRDA encryption
Traditionally, DB2 used the Data Encryption Standard (DES, FIPS 46-3). With APAR
PK56287 installed, DRDA access to DB2 V8 for z/OS or later supports Advanced Encryption
Standard (AES) encryption of the user ID and password when establishing a connection to
DB2 for z/OS, for tighter security.
DB2 for z/OS now supports two levels of encryption. Even though DES is a weak cryptographic
function, DB2 supports DES encryption and decryption for compatibility with downlevel
systems.
DES
56-bit single DES encryption of the password and the Diffie-Hellman algorithm to generate
a key for the encryption algorithm at connect time.
AES
256-bit AES encryption and the Diffie-Hellman algorithm to generate a key for the
encryption algorithm.
Using AES encryption to connect to DB2 for z/OS
DB2 for z/OS requires ICSF on z/OS to use AES encryption. If ICSF is not enabled,
you will see the message in Example 4-15.
Example 4-15 SYSLOG output from misconfiguration
DSNL046I -D9C3 DSNLTSEC ICSF NOT ENABLED
Note: With DB2 for LUW Version 8, FixPack 16, the IBM Data Server Driver for JDBC and SQLJ
provides support for AES encryption for connecting to DB2 for z/OS V8 or later with the
appropriate PTF.
AES encryption applies to IBM Data Server Driver for JDBC and SQLJ type 4 connectivity.
You request AES encryption by setting the IBM DB2 Driver for JDBC and SQLJ property
encryptionAlgorithm.
DB2 Connect V9.1 FP5, V9.5 FP3, or later, and the equivalent level of data server drivers,
provide AES encryption.
Note: To enable AES encryption, you need to enable ICSF with 256-bit AES encryption. In
the course of our installation, we found we needed a new microcode level for our
z9 cryptographic coprocessor to add AES support, and the z/OS V1R10 ICSF APAR
OA27145 (PTF UA45350).
After you have configured the server to enable AES encryption, you choose the option to use
AES encryption from the database directory, using the command shown in Example 4-16.
Example 4-16 Catalog database with AES option
$ db2 catalog db DB9C at node WTSC63 authentication SERVER_ENCRYPT_AES
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
If you are using one of the data server drivers to connect to DB2 for z/OS using AES
encryption, you can pass your option using the Authentication option of the IBM Data Server
Driver configuration keywords. Example 4-17 shows a configuration using AES encryption for
the data server driver. You can also choose the option through connection attributes.
Example 4-17 Sample data server driver configuration for AES encryption
<configuration>
<DSN_Collection>
<dsn alias="DB9A" name="DB9A" host="wtsc63.itso.ibm.com" port="12347">
<parameter name="Authentication" value="Server_encrypt_aes"/>
</dsn>
<dsn alias="DB9C" name="DB9C" host="wtsc63.itso.ibm.com" port="38320">
<parameter name="Authentication" value="Server_encrypt"/>
</dsn>
</DSN_Collection>
...
</configuration>
For Java applications, the encryptionAlgorithm driver property provides the option to choose
between 56-bit DES encryption (encryptionAlgorithm value of 1) and 256-bit AES
encryption (encryptionAlgorithm value of 2). Example 4-18 shows how to set the property
through the DB2DataSource class methods.
Example 4-18 Using AES from Java application
// Set the following property to use AES (value 2)
((com.ibm.db2.jcc.DB2DataSource) db2ds).setEncryptionAlgorithm(2);
// One of the following two security options supports AES encryption
db2ds.setSecurityMechanism(com.ibm.db2.jcc.DB2BaseDataSource.ENCRYPTED_PASSWORD_SECURITY);
db2ds.setSecurityMechanism(com.ibm.db2.jcc.DB2BaseDataSource.ENCRYPTED_USER_AND_PASSWORD_SECURITY);
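If your application obtains connections through DriverManager rather than through a DataSource, the same request can be expressed with connection properties. The following fragment is only a sketch, not taken from our test systems: it assumes that the JCC type 4 driver accepts the encryptionAlgorithm and securityMechanism settings described above as connection properties, and it uses placeholder credentials with our host name and port.
java.util.Properties props = new java.util.Properties();
props.put("user", "paolor7");       // placeholder user ID
props.put("password", "mypasswd");  // placeholder password
// Request 256-bit AES encryption of the user ID and password (value 2 = AES)
props.put("encryptionAlgorithm", "2");
// A security mechanism that encrypts the user ID and password is also required
props.put("securityMechanism",
    String.valueOf(com.ibm.db2.jcc.DB2BaseDataSource.ENCRYPTED_USER_AND_PASSWORD_SECURITY));
java.sql.Connection con =
    java.sql.DriverManager.getConnection("jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A", props);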
If you are configuring WebSphere Application Server to use AES configuration, you can add
the DataSource property from the DataSource custom properties panel of WebSphere
Application Server admin console, as shown in Figure 4-9 on page 149.
Important: To use AES encryption in a Java environment, you need to obtain the
unrestricted policy file for JCE, as documented in the manual. It is available at the following
Web page:
https://www.software.ibm.com/webapp/iwm/web/preLogin.do?source=jcesdk
Figure 4-9 Configure WebSphere Application Server DataSource custom properties to use AES
encryption
Data stream encryption options
DB2 provides a DRDA data stream encryption option; however, DRDA data encryption
supports only 56-bit DES to encrypt and decrypt data. The future direction for data stream
encryption is to use the SSL support provided by DB2 9 for z/OS. See 4.3.3, Secure Socket Layer on
page 151, for details on SSL connections. Using data stream encryption requires the Integrated
Cryptographic Service Facility (ICSF) to be enabled on z/OS.
When using DB2 Connect, you specify data stream encryption through the database catalog
command, as shown in Example 4-19.
Example 4-19 Catalog database with DRDA data stream encryption
$ db2 catalog db DB9A at node WTSC63 authentication DATA_ENCRYPT
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
Example 4-20 shows the setting for data stream encryption for an ODBC/CLI/.NET
application with data server drivers.
Example 4-20 Sample data server driver configuration for data stream encryption
<configuration>
<DSN_Collection>
<dsn alias="DB9A" name="DB9A" host="wtsc63.itso.ibm.com" port="12347">
<parameter name="Authentication" value="Data_encrypt"/>
</dsn>
<dsn alias="DB9C" name="DB9C" host="wtsc63.itso.ibm.com" port="38320">
<parameter name="Authentication" value="Server_encrypt"/>
</dsn>
</DSN_Collection>
...
</configuration>
For Java applications, the T4 driver provides two options for data stream encryption under the
securityMechanism driver property. See Example 4-21 on page 150. You can set the data
stream encryption from your Java application or WebSphere Application Server admin
console.
Note: Enable the IBM Java Cryptography Extension (JCE) on your client. The IBM JCE is
part of the IBM SDK for Java, Version 1.4.2 or later. For earlier versions of the SDK for
Java, you need to install the ibmjceprovider.jar package.
Example 4-21 Using data stream encryption for Java applications
// An encryptionAlgorithm of 1 (DES) is the only supported value (optional)
((com.ibm.db2.jcc.DB2DataSource) db2ds).setEncryptionAlgorithm(1);
// One of the following two security options supports data stream encryption
db2ds.setSecurityMechanism(com.ibm.db2.jcc.DB2BaseDataSource.ENCRYPTED_USER_AND_DATA_SECURITY);
db2ds.setSecurityMechanism(com.ibm.db2.jcc.DB2BaseDataSource.ENCRYPTED_USER_PASSWORD_AND_DATA_SECURITY);
4.3.2 IP Security
IP Security (IPSec) is a set of protocols and standards defined by the Internet Engineering
Task Force (IETF), which provides open architecture for security at the IP networking layer of
TCP/IP. IPSec has been deployed widely to implement Virtual Private Network (VPN).
Because IPSec works on the IP networking layer, IPSec can be used to provide security for
any TCP/IP applications without modifications, including DB2 for z/OS. If necessary,
applications can have their own security mechanisms on top of IPSec. Figure 4-10 illustrates
how IPSec works with the DB2 for z/OS server and DRDA AR clients. The Transport Layer
protocols of the Internet Protocol Suite, such as Transmission Control Protocol (TCP) and
User Datagram Protocol (UDP), use ports as communications endpoints. A specific port is
identified by its number, commonly known as the port number, the IP address it is associated
with, and the protocol used for communication.
UDP provides a simple message service for transaction-oriented services. Each UDP header
carries both a source port identifier and destination port identifier, allowing high-level
protocols to target specific applications and services among hosts. Internet Control Message
Protocol (ICMP) messages contain information about routing with IP datagrams or simple
exchanges such as time-stamp or echo transactions.
Figure 4-10 IPSec overview
IPSec works in the IP layer, where network traffic is encrypted; data is passed to the IP
layer in clear text. Using IPSec does not require any changes to existing applications.
IPSec will utilize the System z cryptographic hardware, if the hardware is enabled and the
required cryptographic algorithm is supported by the hardware. In addition, IPSec is zIIP
eligible: the zIIP IPSECURITY (zIIP assist for IPSec) function can reduce IPSec processing
load on General Processors well beyond what is achievable using cryptographic hardware.
There are no DB2 for z/OS settings related to IPSec. For details, refer to the following
publications:
Communications Server for z/OS V1R9 TCP/IP Implementation Volume 4: Security and
Policy-Based Networking, SG24-7535
IBM z/OS V1R10 Communications Server TCP/IP Implementation Volume 4: Security and
Policy-Based Networking, SG24-7699
4.3.3 Secure Socket Layer
Secure Socket Layer (SSL) is a protocol designed and implemented by Netscape in
response to growing concerns over security on the Internet. SSL was first implemented to
secure network traffic between a browser and a server. For example, when you are using Internet
banking, you see a little padlock symbol at the bottom of your browser (see Figure 4-11), which
tells you that you are using SSL. Other applications such as Telnet and FTP started using SSL, and it
has become a general solution for network security. SSL performs encryption between two
applications, whereas IPSec encrypts the whole network between hosts.
Figure 4-11 The padlock symbol indicates encryption
DB2 9 for z/OS supports SSL connections from DRDA AR clients that also support SSL. SSL
provides encryption of data on the wire. On System z, DB2 for z/OS uses the z/OS
Communication Server (z/OS CS) IP Application Transparent Transport Layer service
(AT-TLS). z/OS V1R7 CS IP introduced a new AT-TLS function in the TCP/IP stack to provide
TLS for TCP/IP sockets applications that require secure connections. AT-TLS performs TLS
on behalf of the application by invoking the z/OS System SSL in the TCP layer of the stack.
The z/OS CS Policy Agent is a started task that is used to manage the policies that are
defined by the network administrators for their users. An AT-TLS policy is a file that defines
the SSL characteristics of a connection that the AT-TLS can understand to invoke the z/OS
system SSL. Figure 4-12 on page 152 illustrates how the SSL connection works between
DB2 for z/OS server and DRDA AR clients.
z/OS System SSL provides support for SSL 2.0, SSL 3.0 and TLS 1.0.
Figure 4-12 SSL overview
SSL consists of the record protocol and the handshake protocol. The record protocol controls
the flow of the data between the two endpoints of an SSL session. The handshake protocol
authenticates one or both ends of the SSL session and establishes a unique symmetric key
used to generate keys to encrypt and decrypt data for that SSL session. SSL uses
asymmetric cryptography, digital certificates, and SSL handshake flows to authenticate one or
both ends of the SSL session. A digital certificate can be assigned to the application using
SSL on each end of the connection. The digital certificate is composed of a public key and
some identifying information that has been digitally signed. Each public key has an
associated private key, which is not stored with or as part of the certificate. The application
that is being authenticated must prove that it has access to the private key associated with
the public key contained within the digital certificate.
There are three ways to prepare a certificate for use in SSL/TLS connections:
Request a well-known Certification Authority (CA) to sign your certificate
Generate a certificate yourself
Create a self-signed CA certificate and function as local CA
The following example shows how to configure the DB2 9 for z/OS server system using the
internal CA signed site certificate with RACDCERT command. For other options, see IBM
z/OS V1R10 Communications Server TCP/IP Implementation Volume 4: Security and
Policy-Based Networking, SG24-7699.
In the following sections we show you how to perform the following tasks:
Prepare to use SSL at DB2 9 for z/OS server
Preparing to use SSL for Java clients
Preparing to use SSL for non-Java-based Clients
Prepare to use SSL at DB2 9 for z/OS server
Preparing to use SSL includes defining the policy agent and the TCP/IP stack configuration.
Related RACF resource definitions include access to the policy agent, RACF keyrings and SSL
certificates, and the DB2 for z/OS configuration. The sample resource definitions are in no
specific order; you can change the order as needed. We do not go into details for each
parameter. Refer to z/OS V1R10.0 Communications Server: IP Configuration Reference,
SC31-8776.
Setup on z/OS includes the following resources:
RACF setup
Define and permit RACF resource for Policy Agent
Define RACF keyrings and SSL certificates
TCP/IP setup
PROFILE.TCPIP configuration
TCP/IP stack initialization access control
AT-TLS policy configuration
DB2 for z/OS setup
BSDS
Figure 4-13 on page 154 shows the sets of RACF commands that define the RACF resources
needed for SSL access and their access permits. Definitions include the creation of user IDs for
the policy agent and the syslog daemon, and granting access to the TCP/IP stack and related commands.
The RACF resource EZB.INITSTACK.sysname.tcpname in the SERVAUTH class is used to
block stack access, except for the user IDs permitted to access the resource.
Important: We recommend applying the PTF corresponding to PK81175 when setting up
the DB2 for z/OS server to use SSL connections.
Figure 4-13 Define RACF resources for policy agent
You need to create a started task procedure for the policy agent, named PAGENT, in your installation
PROCLIB library. Figure 4-14 on page 155 shows a sample definition for the PAGENT procedure.
Define user ID assign to policy agent and syslog daemon
ADDUSER PAGENT DFLTGRP(OMVSGRP) OMVS(UID(0) HOME('/'))
ADDUSER SYSLOGD DFLTGRP(OMVSGRP) OMVS(UID(0) HOME('/'))
RDEFINE STARTED PAGENT.* STDATA(USER(PAGENT))
RDEFINE STARTED SYSLOGD.* STDATA(USER(SYSLOGD))
SETROPTS RACLIST(STARTED) REFRESH
SETROPTS GENERIC(STARTED) REFRESH
Restricting access to operator commands
SETROPTS RACLIST (OPERCMDS)
RDEFINE OPERCMDS (MVS.SERVMGR.PAGENT) UACC(NONE)
PERMIT MVS.SERVMGR.PAGENT CLASS(OPERCMDS) ACCESS(CONTROL) ID(PAGENT)
SETROPTS RACLIST(OPERCMDS) REFRESH
Use to block stack access except permitted user
SETROPTS RACLIST (SERVAUTH)
SETROPTS GENERIC (SERVAUTH)
RDEFINE SERVAUTH EZB.INITSTACK.SC63.TCPIP UACC(NONE)
PERMIT EZB.INITSTACK.SC63.TCPIP -
CLASS(SERVAUTH) ID(OMVSKERN) ACCESS(READ)
PERMIT EZB.INITSTACK.SC63.TCPIP -
CLASS(SERVAUTH) ID(PAGENT) ACCESS(READ)
PERMIT EZB.INITSTACK.SC63.TCPIP -
CLASS(SERVAUTH) ID(SYSLOGD) ACCESS(READ)
SETROPTS GENERIC(SERVAUTH) REFRESH
SETROPTS RACLIST(SERVAUTH) REFRESH
Allow access to the pasearch command
RDEFINE SERVAUTH EZB.PAGENT.SC63.TCPIP.* UACC(NONE)
PERMIT EZB.PAGENT.SC63.TCPIP.* -
CLASS(SERVAUTH) ID(SYSADM) ACCESS(READ)
SETROPTS GENERIC(SERVAUTH) REFRESH
SETROPTS RACLIST(SERVAUTH) REFRESH
Note: In our test environment, the configuration files are defined as UNIX System Services
files. Alternatively, you may choose to prepare your configuration files as MVS data sets. If
that is the case, STDENV DD can be defined as below. The same applies to the other
configuration files.
//STDENV DD DSN=SYS1.TCPPARMS(PAENV),DISP=SHR
Figure 4-14 Definition for policy agent started task
You define the policy agent environment file specified in your started task, as shown in
Figure 4-15. A brief explanation of each specified parameter follows:
PAGENT_CONFIG_FILE: Points to the policy agent main configuration file.
PAGENT_LOG_FILE: Specifies the destination of the log file.
TZ: Set to local time; our installation points to the Eastern time zone in the United States.
Figure 4-15 Definition for policy agent environment file
Define the policy agent main configuration file, which points to the AT-TLS configuration file, as
shown in Figure 4-16. A brief explanation of each specified parameter follows:
LogLevel
Specify the level of tracing for the Policy Agent. For example, if you specify 15, syserr,
objerr, proterr, and warning messages are written.
TcpImage
Specify a TCP/IP image and its associated image configuration file to be installed to that
image.
TTLSconfig
Specify the path of a local AT-TLS policy file that contains stack-specific AT-TLS policy
statements.
Figure 4-16 Definition for policy agent main configuration file
//PAGENT PROC
//*
//* SecureWay Communications Server IP
//* SMP/E distribution name: EZAPAGSP
//*
//* 5647-A01 (C) Copyright IBM Corp. 1999.
//* Licensed Materials - Property of IBM
//*
//STEP0 EXEC PGM=BPXTCAFF,PARM=TCPIP
//PAGENT EXEC PGM=PAGENT,REGION=0K,TIME=NOLIMIT,
// PARM='POSIX(ON) ALL31(ON) ENVAR("_CEE_ENVFILE=DD:STDENV")/'
//STDENV DD PATH='/u/stc/tls/pagent.sc63.soap.env',
// PATHOPTS=(ORDONLY)
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//CEEDUMP DD SYSOUT=*,DCB=(RECFM=FB,LRECL=132,BLKSIZE=132)
Policy agent environment file, /u/stc/tls/pagent.sc63.soap.env
PAGENT_CONFIG_FILE=/u/stc/tls/pagent.sc63.soap.conf
PAGENT_LOG_FILE=/tmp/pagent.sc63.soap.log
TZ=EST5EDT
Policy agent main configuration file, /u/stc/tls/pagent.sc63.soap.conf
LogLevel 15
TcpImage TCPIP flush purge 1
TTLSconfig /u/stc/tls/pagent.sc63.soap.policy flush purge
Define DB2 9 for z/OS as an SSL server in the AT-TLS configuration file as shown in
Figure 4-17. In our installation, the security port is 12349 and the IP address used to access the
specific server is 9.12.6.70. The keyring name is case sensitive.
TTLSRule
Define an AT-TLS rule. Specify the DB2 for z/OS secure port number in LocalPortRange, and the
IP address used to access DB2 for z/OS in LocalAddr. Jobname is the name of the application,
which is the DDF address space, ssidDIST.
TTLSGroupAction
Specify parameters for a Language Environment process required to support secure
connections. TTLSEnabled On means that AT-TLS security is active and data might be
encrypted, based on other policy statements.
TTLSEnvironmentAction
Specify the attributes for an AT-TLS environment. HandshakeRole specifies the SSL
handshake role to be taken.
Figure 4-17 Sample server definition for AT-TLS configuration file
configuration for AT-TLS configuration file, /u/stc/tls/pagent.sc63.soap.policy
## ----------------------------------------------------------------
# Server Rule DB9ADIST on SC63, SSL-port 12349, 9.12.6.70
## ----------------------------------------------------------------
TTLSRule DB9ADIST_12349
{
LocalPortRange 12349
LocalAddr 9.12.6.70
Jobname DB9ADIST
Userid STC
Direction INBOUND
TTLSGroupActionRef DB9ADISTGrpAct
TTLSEnvironmentActionRef DB9ADISTEnvAct
}
## ----------------------------------------------------------------
# DB9ADIST Server Group action
## ----------------------------------------------------------------
TTLSGroupAction DB9ADISTGrpAct
{
TTLSEnabled On
Trace 15
}
## ----------------------------------------------------------------
# DB9ADIST Server Environment action
## ----------------------------------------------------------------
TTLSEnvironmentAction DB9ADISTEnvAct
{
TTLSKeyRingParms
{
Keyring DB9AKEYRING
}
HandShakeRole Server
}
Modify TCPIP.PROFILE with TCPCONFIG TTLS as shown in Figure 4-18. When the TTLS
subparameter is specified, the procedure starts after the policy agent has successfully
installed the AT-TLS policy in the TCP/IP stack and AT-TLS services are available.
Figure 4-18 Add TTLS parameter to TCP/IP stack configuration
Before you create keyrings and certificates, activate the DIGTCERT and DIGTRING generic
classes. Define and permit access to resources using the command in Figure 4-19.
Figure 4-19 Activate DIGTCERT and DIGTRING class
Create a self-signed server CA certificate, using the RACDCERT RACF command.
Figure 4-20 shows the creation of a sample self-signed certificate called DB9ASSLCA.
Figure 4-20 Create a self-signed server CA certificate
Similarly, create a private server certificate using the RACDCERT RACF command.
Figure 4-21 on page 158 shows the sample private server certificate called DB9ASSLCERT
for user ID STC.
TCP/IP configuration file, TCPIP.PROFILE
TCPCONFIG RESTRICTLOWPORTS
TTLS
SETR CLASSACT(DIGTCERT DIGTRING)
RDEF FACILITY IRR.DIGTCERT.LISTRING UACC(NONE)
RDEF FACILITY IRR.DIGTCERT.LIST UACC(NONE)
PE IRR.DIGTCERT.LIST CLASS(FACILITY) ID(SYSDSP) ACCESS(CONTROL)
PE IRR.DIGTCERT.LISTRING CLASS(FACILITY) ID(SYSDSP) ACCESS(READ)
SETR RACLIST (DIGTRING DIGTCERT) REFRESH
RACDCERT CERTAUTH GENCERT +
SUBJECTSDN( +
CN('wtsc63.itso.ibm.com') +
OU('UTEC620') +
O('IBM') +
L('SVL') +
SP('CA') +
C('USA') ) +
SIZE(1024) +
NOTBEFORE(DATE(2008-08-14)) +
NOTAFTER(DATE(2030-12-31)) +
WITHLABEL('DB9ASSLCA') +
KEYUSAGE(CERTSIGN) +
ALTNAME(DOMAIN('wtsc63.itso.ibm.com') )
Figure 4-21 Create private server certificate
Create a server keyring and add the created certificates. Figure 4-22 shows the creation of
keyring DB9AKEYRING and the addition of DB9ASSLCA and DB9ASSLCERT to the keyring.
Figure 4-22 Create server keyring and add certificate
After the keyring has been created and the certificates added, export the server CA certificate to a data set
using the command shown in Figure 4-23 on page 159. The exported certificate will be given
to the DRDA AR clients.
RACDCERT ID(STC) GENCERT +
SUBJECTSDN( +
CN('wtsc63.itso.ibm.com') +
OU('UTEC620') +
O('IBM') +
L('SVL') +
SP('CA') +
C('USA') ) +
SIZE(1024) +
NOTBEFORE(DATE(2008-08-14)) +
NOTAFTER(DATE(2030-12-31)) +
WITHLABEL('DB9ASSLCERT') +
ALTNAME(DOMAIN('wtsc63.itso.ibm.com') ) +
SIGNWITH(CERTAUTH LABEL('DB9ASSLCA'))
Create a keyring and add certificates to the created keyring.
RACDCERT ID(STC) ADDRING(DB9AKEYRING)
RACDCERT ID(STC) +
CONNECT(CERTAUTH LABEL('DB9ASSLCA') +
RING(DB9AKEYRING) )
RACDCERT ID(STC) +
CONNECT(LABEL('DB9ASSLCERT') +
RING(DB9AKEYRING) +
DEFAULT)
To verify your definition, issue the following command, which is followed by sample output.
RACDCERT ID(STC) LISTRING(DB9AKEYRING)
Digital ring information for user STC:
Ring:
>DB9AKEYRING<
Certificate Label Name Cert Owner USAGE DEFAULT
-------------------------------- ------------ -------- -------
DB9ASSLCA CERTAUTH CERTAUTH NO
DB9ASSLCERT ID(STC) PERSONAL YES
Figure 4-23 Export server CA certificate
Set up your DB2 9 for z/OS to open the secure port. Update your BSDS using the Change
Log Inventory utility (DSNJU003). Figure 4-24 shows the setting of secure port 12349.
For details of each command and parameter, refer to following documents:
z/OS V1R10 Communications Server IP: Configuration Guide, SC31-8775-14
z/OS V1R10 Communications Server IP: Configuration Reference, SC31-8776-15
z/OS V1R10 Communications Server IP: System Administrators Commands,
SC31-8781-08
z/OS V1R10 Security Server RACF Command Language Reference, SA22-7687-12
DB2 Version 9.1 for z/OS Utility Guide and Reference, SC18-9855
Preparing to use SSL for Java clients
To use an SSL connection from your Java clients, you need to have the DB2 server certificate
imported using the keytool command.
The sequence for creating and importing your DB2 server certificate is as follows:
1. Download the DB2 server certificate from the server.
RACF creates the DB2 server certificate in EBCDIC. Be sure to FTP the DB2 server
certificate in ASCII mode from the server. In our example, we named the downloaded
certificate cert.arm in the C:\temp\residency directory.
RACDCERT CERTAUTH +
EXPORT(LABEL('DB9ASSLCA')) +
DSN('PAOLOR7.DB9ASSLC.B64') +
FORMAT(CERTB64)
//DSNJU003 JOB ('DSNJU003'),'CHANGE LOG INV',CLASS=A,MSGLEVEL=(1,1)
//STEP010 EXEC PGM=DSNJU003
//STEPLIB DD DSN=DB9A9.SDSNEXIT,DISP=SHR
// DD DSN=DB9A9.SDSNLOAD,DISP=SHR
//SYSUT1 DD DSN=DB9AU.BSDS01,DISP=OLD
//SYSUT2 DD DSN=DB9AU.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DDF LOCATION=DB9A,SECPORT=12349
/*
Note: Consider the following rules when configuring SECPORT to DB2 for z/OS:
If the value of SECPORT is disabled, the client can still use the DRDA PORT and use
SSL on it, but DB2 for z/OS does not validate whether the connection uses SSL
protocol.
DB2 9 for z/OS will not allow SSL to work with the TCP/IP BIND specific statements.
Use DDF IPV4 or IPV6 and GRPIPV4 or GRPIPV6 statements instead to use a specific
IP address with DB2 for z/OS.
2. (Optional) Create the keystore using the keytool command.
To create a new keystore, use the -genkey option. In our case (see Example 4-22), the name
keystore is used for the keystore.
Example 4-22 Generating keystore for Java clients
Create a keystore for your application(or use your environment keystore)
$ ./keytool -genkey -keystore keystore
Enter keystore password: passwd123
What is your first and last name?
[Unknown]: PAOLOR7
What is the name of your organizational unit?
[Unknown]: ITSO
What is the name of your organization?
[Unknown]: IBM
What is the name of your City or Locality?
[Unknown]: SVL
What is the name of your State or Province?
[Unknown]: CA
What is the two-letter country code for this unit?
[Unknown]: US
Is CN=PAOLOR7, OU=ITSO, O=IBM, L=SVL, ST=CA, C=US correct? (type "yes" or "no")
[no]: yes
Enter key password for <mykey>:
(RETURN if same as keystore password):
3. Import the DB2 server certificate to a keystore.
Import the DB2 server certificate using the -import option. See Example 4-23 on
page 161.
We also use the following options to import the DB2 server certificate:
-alias option
This option provides the label name for the DB2 server certificate.
-file option
This option provides the name of the DB2 server certificate downloaded in step 1, Download the
DB2 server certificate from the server, on page 159. In the example, cert.arm is used.
-keystore option
This option provides the name or full path of the keystore. In the example, keystore is
used as the keystore name and the keytool command was executed in the directory where the keystore
resides.
You will be prompted to enter the keystore password.
Note: Example 4-22 omits some information for the keystore and just shows the steps. We
recommend that you provide appropriate information when creating your keystore.
Example 4-23 Import server certificate to your keystore
Importing certificate FTPed from z/OS to keystore
$ keytool -import -alias DB9ASSLCA -file cert.arm -keystore keystore
Enter keystore password: passwd123
Owner: OU=DB9ASSL, O=IBM, L=SVL, ST=CA, C=USA
Issuer: OU=DB9ASSL, O=IBM, L=SVL, ST=CA, C=USA
Serial number: 0
Valid from: 8/13/08 11:00 PM until: 12/31/30 10:59 PM
Certificate fingerprints:
MD5: 93:AF:C4:9B:96:E4:B2:A0:9C:7F:B3:EC:8E:6C:CD:8A
SHA1: D5:15:65:E0:A8:49:2C:46:7A:50:00:38:1A:B9:05:1A:9E:F3:1A:33
Trust this certificate? [no]: yes
Certificate was added to keystore
You can now use the -list option to verify the import of the DB2 server certificate into the
keystore. Example 4-24 shows the output from the keytool command with the db9asslca entry.
Example 4-24 List the keystore entry
$ keytool -keystore keystore -list
Enter keystore password: passwd123
Keystore type: jks
Keystore provider: IBMJCE
Your keystore contains 2 entries
db9asslca, Apr 8, 2009, trustedCertEntry,
Certificate fingerprint (MD5): 93:AF:C4:9B:96:E4:B2:A0:9C:7F:B3:EC:8E:6C:CD:8A
mykey, Apr 8, 2009, keyEntry,
Certificate fingerprint (MD5): 15:D4:87:3F:1B:AC:EF:03:AA:67:E0:07:34:4E:69:8B
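If you prefer to verify the import programmatically instead of with the -list option, a few lines of standard Java can open the keystore and list its entries. This is only a sketch that assumes the keystore name and password from Example 4-22; adjust the path and password for your environment.
import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Enumeration;

public class ListKeystore {
    public static void main(String[] args) throws Exception {
        // Keystore file name and password from Example 4-22 (adjust as needed)
        KeyStore ks = KeyStore.getInstance("JKS");
        FileInputStream in = new FileInputStream("keystore");
        ks.load(in, "passwd123".toCharArray());
        in.close();
        // Print every alias; the imported DB2 server certificate should appear
        // under the alias given on the -alias option (db9asslca)
        Enumeration<String> aliases = ks.aliases();
        while (aliases.hasMoreElements()) {
            System.out.println(aliases.nextElement());
        }
    }
}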
Preparing to use SSL for non-Java-based Clients
To use an SSL connection from non-Java-based clients, you need to import the DB2 server
certificate using the IBM Key Management tool, which is part of the DB2 Global Security Kit (GSKit).
If GSKit is not available on the system, instructions are given in Configuring Secure
Sockets Layer (SSL) support in the DB2 client, available from the following Web page:
http://publib.boulder.ibm.com/infocenter/db2luw/v9/topic/com.ibm.db2.udb.uprun.doc
/doc/t0053518.htm
Create and import your DB2 server certificate as follows:
1. Download the DB2 server certificate from the server.
RACF creates the DB2 server certificate in EBCDIC. Be sure to FTP the DB2 server
certificate in ASCII mode from the server. In our example, we named it cert.arm in the
C:\temp\residency directory.
2. Start the IBM Key Management tool.
GSKit requires you to set the JAVA_HOME environment variable before you start the IBM
Key Management tool. The result of starting the IBM Key Management tool without
setting JAVA_HOME is shown in Example 4-25 on page 162.
Example 4-25 Starting the IBM Key Management tool
$ export JAVA_HOME=/usr/java5
$ gsk7ikm_64
Probable error message without setting JAVA_HOME environment variable
$ gsk7ikm_64
which: 0652-141 There is no java in /usr/bin /etc /usr/sbin /usr/ucb
/home/db2inst1/bin /usr/bin/X11 /sbin . /home/db2inst1/sqllib/bin
/home/db2inst1/sqllib/adm /home/db2inst1/sqllib/misc.
java is not found
please set JAVA_HOME or add java to path
For example, you can set JAVA_HOME in a korn shell by the
following command
export JAVA_HOME=/usr/jdk1.4
where JDK is installed in directory /usr/jdk1.4
After issuing the command, the IBM Key Management tool window appears. See
Figure 4-25.
Figure 4-25 The IBM Key Management tool window
Tip: On the Windows platform, you can set your JAVA_HOME environment variable using the
following command:
set JAVA_HOME=C:\Program Files\IBM\Java50
3. (Optional) Create a new Key Database file.
The Key database type field should be CMS. To create a new Key Database file, click
Key Database File, then click New in the IBM Key Management tool. The dialog shown in
Figure 4-26 appears, where you select and input three fields.
Key database type: You must select CMS.
File name: Specify the name of the keystore. In the example, key.kdb is used.
Location: Specify the directory of the keystore. In the example, C:\Program
File\IBM\gsk7\bin\test is used.
Figure 4-26 Create a new Key Database file
Click OK. The Password Prompt dialog box (Figure 4-27 on page 164) is displayed.
Provide the password for the keystore.
Select the Stash the password to a file? check box when you set the keystore password.
This creates the stash file with the keystore. The stash file stores the password to
access the keystore file in encrypted format, giving an extra level of security in the client
environment.
Note: Be sure to place the keystore in a secure place and to ensure that the clients can access
the keystore.
Figure 4-27 Setting password for key database and making stash file
4. Import the DB2 server certificate to a keystore.
After creating or opening your keystore in the IBM Key Management tool, click the Add
button (Figure 4-28). Select the DB2 server certificate.
Data type: Select Base64-encoded ASCII data.
Certificate file name: Specify the name of the DB2 server certificate. In our example, it is
cert.arm, downloaded in step 1, Download the DB2 server certificate from the server, on
page 161.
Location: Specify the directory of the DB2 server certificate.
Figure 4-28 Import the DB2 server certificate to Key Database
Click OK. The Enter a Label dialog box (Figure 4-29) displays. Type in a label for the
certificate to complete the import. In our example, DB9ASSLCA is used.
Figure 4-29 Enter the label for the DB2 server Certificate
After completing the import of the DB2 server certificate, your keystore should look like
Figure 4-30, where the imported certificate is shown in Key database content. The
keystore is now ready for use by your DB2 CLI applications.
Figure 4-30 The key database after importing the DB2 server certificate
Tip: Alternatively, you can create the keystore using the gsk7cmd command from your
command line. In the example below, name key.kdb, password pwd123k, type CMS
and -stash option were specified.
$ gsk7cmd -keydb -create -db key.kdb -pw pwd123k -type cms -stash
$ ls
key.crl key.kdb key.rdb key.sth
5. Create the SSL configuration file.
After importing the DB2 server certificates, you need to create the SSL configuration file in
order for the DB2 CLI applications to use them. You should create the SSL configuration
file named SSLClientconfig.ini in $INSTHOME/cfg. In our example, the SSL
configuration file is placed in /home/db2inst1/sqllib/cfg. The contents of the SSL
configuration file are listed in Example 4-26.
DB2_SSL_KEYSTORE_FILE
Fully qualified file name of the KeyStore that stores the Server Certificate.
DB2_SSL_KEYRING_STASH_FILE
Fully qualified file name to the stash file that stores the password to access the
keystore file in encrypted format. This provides an extra level of security in the client
scenario.
Example 4-26 Contents of SSLClientconfig.ini
DB2_SSL_KEYSTORE_FILE=/home/db2inst1/key.kdb
DB2_SSL_KEYRING_STASH_FILE=/home/db2inst1/key.sth
You are required to put the SSL configuration file in a specific directory, depending on the platform
you are using:
UNIX: $INSTHOME/cfg
Windows: $INSTHOME/
An example of $INSTHOME for Windows environment follows:
\Documents and Settings\All Users\Application Data\IBM\DB2\DB2COPY1\DB2
The settings for the DB2INSTPROF and DB2INSTDEF DB2 profile variables are as shown in
Figure 4-31.
Figure 4-31 Settings for the DB2INSTPROF and DB2INSTDEF DB2 profile variables
C:\Program Files\IBM\SQLLIB\BIN>db2set -all
[e] DB2PATH=C:\Program Files\IBM\SQLLIB
[i] DB2INSTPROF=C:\Documents and Settings\All Users\Application
Data\IBM\DB2\DB2COPY1
[i] DB2COMM=TCPIP
[g] DB2_EXTSECURITY=NO
[g] DB2SYSTEM=LENOVO-B6AFDE0A
[g] DB2PATH=C:\Program Files\IBM\SQLLIB
[g] DB2INSTDEF=DB2
[g] DB2ADMINSERVER=DB2DAS02
Using SSL from Java Application
To use an SSL connection from a Java application, set the sslConnection property of the Common
IBM Data Server Driver for JDBC and SQLJ properties to true, and connect to the secure
port, as shown in Example 4-27.
Example 4-27 sample Java code for SSL connection
public static void main (String[] args)
{
String ServerName = "wtsc63.itso.ibm.com";
int PortNumber = 12349;
String DatabaseName = "DB9A";
java.util.Properties properties = new java.util.Properties();
properties.put("user", userid);         // userid and mypasswd are set elsewhere in the application
properties.put("password", mypasswd);
properties.put("sslConnection", "true");
String url = "jdbc:db2://" + ServerName + ":" + PortNumber + "/" + DatabaseName;
java.sql.Connection con = null;
try
{
Class.forName("com.ibm.db2.jcc.DB2Driver").newInstance();
}
catch ( Exception e )
{
System.out.println("Error: failed to load Db2 jcc driver.");
}
try
{
System.out.println("url: " + url);
con = java.sql.DriverManager.getConnection(url, properties);
// application logic follows...
Note: In DB2 Connect V9.7, you no longer need to use a separate configuration file,
SSLClientconfig.ini, to set up SSL support. The keystore file and stash file are instead
specified by connection string keywords:
CLI/ODBC driver:
ssl_client_keystoredb: Specify the fully qualified key database file name.
ssl_client_keystash: Specify the fully qualified stash file name.
DB2 .NET Data Provider:
SSLClientKeystoredb: Specify the fully qualified key database file name.
SSLClientKeystash: Specify the fully qualified stash file name.
Security - Set security to SSL.
CLP and embedded SQL clients:
The CATALOG TCPIP NODE command with the SECURITY SSL parameter.
The client-side database manager configuration parameters ssl_clnt_keydb and
ssl_clnt_stash to connect to a database using SSL.
For Java applications, you can pass the keystore name either through JVM properties or through
Common IBM Data Server Driver for JDBC and SQLJ properties. Example 4-28 shows a way
to pass the keystore name keystore and password passwd123 as JVM properties. In
Using SSL from WebSphere Application Server on page 171, we show how to pass that
information using the Common IBM Data Server Driver for JDBC and SQLJ properties as
datasource settings.
Example 4-28 Executing Java application using SSL
$ java -Djavax.net.ssl.trustStore=keystore
-Djavax.net.ssl.trustStorePassword=passwd123 SSLTest
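Alternatively, the truststore information can be supplied through the same sslTrustStoreLocation and sslTrustStorePassword properties that are used for the WebSphere Application Server DataSource in Using SSL from WebSphere Application Server on page 171. The following fragment is a sketch only: it extends Example 4-27 and assumes that the driver accepts these two properties as connection properties, with the keystore and password from Example 4-22.
java.util.Properties properties = new java.util.Properties();
properties.put("user", userid);       // userid and mypasswd as in Example 4-27
properties.put("password", mypasswd);
properties.put("sslConnection", "true");
// Supply the truststore through driver properties instead of -D JVM options
properties.put("sslTrustStoreLocation", "/home/db2inst1/keystore"); // full path to the keystore
properties.put("sslTrustStorePassword", "passwd123");               // keystore password
java.sql.Connection con = java.sql.DriverManager.getConnection(
    "jdbc:db2://wtsc63.itso.ibm.com:12349/DB9A", properties);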
Using SSL from non-Java-based Applications using IBM DS Driver
If you are running your .NET applications on an application server, you can choose to install the
IBM Data Server Driver package and connect to DB2 for z/OS. The thin client package does
not have a directory, so you need to configure a configuration file for the non-Java-based IBM
Data Server Driver. Example 4-29 gives a sample configuration from our test environment.
The dsn alias DB9AS is a connection using SSL, which connects to the secure port of our DB2, and
the dsn alias DB9A shows the settings for a regular DRDA connection.
Example 4-29 Sample db2dsdriver.cfg configuration
<configuration>
<DSN_Collection>
<dsn alias="DB9AS" name="DB9A" host="wtsc63.itso.ibm.com" port="12349">
<parameter name="Authentication" value="Server_encrypt_aes"/>
<parameter name="SecurityTransportMode" value="SSL"/>
</dsn>
<dsn alias="DB9A" name="DB9A" host="wtsc63.itso.ibm.com" port="12347">
<parameter name="Authentication" value="Server_encrypt"/>
</dsn>
<databases>
<database name="DB9A" host="wtsc63.itso.ibm.com" port="12349">
<parameter name="ProgramName" value="PID"/>
<parameter name="ConnectionLevelLoadBalancing" value="true"/>
<parameter name="ClientUserID" value="clientuserid"/>
<parameter name="ClientApplicationName" value="my application name"/>
<parameter name="ClientWorkstationName" value="my workstation name"/>
</database>
</databases>
<parameters>
<parameter name="CommProtocol" value="TCPIP"/>
</parameters>
</configuration>
Using SSL from DB2 Connect client
In our example, DB2 Connect was used to connect to DB2 for z/OS using SSL. You will need
to catalog your node directory specifying, for the server, the SECPORT of DB2 for z/OS and
security SSL, as shown in Example 4-30 on page 169. Catalog the database directory and the DCS
database directory the same way as for your regular connection. All clients who use DB2
Connect to execute applications need to follow the same procedure to get the applications to
use the SSL connection.
Note: Our example configures AES encryption for the user ID and password on top of SSL,
which is a valid configuration, but there is no point in setting encryption on top of SSL.
Example 4-30 Catalog the DB2 server to use an SSL connection from DB2 Connect
$ db2 catalog tcpip node sslnode remote wtsc63.itso.ibm.com server 12349 security SSL
DB20000I The CATALOG TCPIP NODE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is refreshed.
$ db2 catalog db db9as at node sslnode authentication server_encrypt
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is refreshed.
$ db2 catalog dcs db db9as as db9a
DB20000I The CATALOG DCS DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is refreshed.
After cataloging the directories, verify your configuration by connecting to DB2
using the DB2 Connect Command Line Processor. You will not see any difference from your
non-SSL connections, but verify that your connection uses the secure port of DB2 for z/OS. Our
example shows the output from NETSTAT on z/OS. You should see that the connection was
established to the DB2 security port, as shown in Example 4-31.
Example 4-31 Sample SSL connection using DB2 Connect
$ db2 connect to db9as user paolor7
Enter current password for paolor7:
Database Connection Information
Database server = DB2 z/OS 9.1.5
SQL authorization ID = PAOLOR7
Local database alias = DB9AS
Netstat output shows Connection was established to DB2 Security Port
-DB9A DIS DDF
DSNL080I -DB9A DSNLTDDF DISPLAY DDF REPORT FOLLOWS: 278
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB9A USIBMSC.SCPDB9A -NONE
DSNL084I TCPPORT=12347 SECPORT=12349 RESPORT=12348 IPNAME=-NONE
DSNL085I IPADDR=::9.12.6.70
DSNL086I SQL DOMAIN=wtsc63.itso.ibm.com
DSNL086I RESYNC DOMAIN=wtsc63.itso.ibm.com
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
D TCPIP,,N,CONN
EZZ2500I NETSTAT CS V1R10 TCPIP 291
USER ID CONN LOCAL SOCKET FOREIGN SOCKET STATE
DB9ADIST 000037A6 0.0.0.0..12349 0.0.0.0..0 LISTEN
DB9ADIST 000037A5 0.0.0.0..12347 0.0.0.0..0 LISTEN
DB9ADIST 00005D30 9.12.6.70..12347 9.12.5.149..63362 ESTBLSH
DB9ADIST 00003970 9.12.6.70..12347 9.12.6.70..1072 ESTBLSH
DB9ADIST 00005D42 9.12.6.70..12347 9.12.4.121..44633 ESTBLSH
DB9ADIST 00004D38 9.12.6.70..12347 9.12.6.70..1073 ESTBLSH
DB9ADIST 000037E0 9.12.6.70..12347 9.30.28.163..4450 ESTBLSH
DB9ADIST 000037AF 0.0.0.0..12348 0.0.0.0..0 LISTEN
DB9ADIST 00005DAC 9.12.6.70..12349 9.12.5.149..63597 ESTBLSH
If you had connected to your DB2 secure port without properly setting all your configurations,
you would have received an error message (as shown in Example 4-32 on page 170). In this
case we have a connection failure because a non-SSL request was attempted to the DB2
security port.
Example 4-32 Sample SYSLOG output for failed SSL connection.
Connecting to DB2 Security port without security SSL option in node directory
$ db2 connect to db9as3 user paolor7
Enter current password for paolor7:
SQL30081N A communication error has been detected. Communication protocol being used:
"TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected:
"9.12.6.70". Communication function detecting the error: "recv". Protocol specific error
code(s): "73", "*", "0".
SQLSTATE=08001
Sample output from SYSLOG, which tells connection failure
EZD1287I TTLS Error RC: 5003 Data Decryption 307
JOBNAME: DB9ADIST RULE: DB9ADIST_12349
USERID: STC GRPID: 0000000C ENVID: 00000000 CONNID: 00008187
DSNL511I -DB9A DSNLIENO TCP/IP CONVERSATION FAILED 308
TO LOCATION ::9.12.5.149
IPADDR=::9.12.5.149 PORT=48584
SOCKET=RECV RETURN CODE=1121 REASON CODE=77B17343
If you use a network analyzer tool to capture the packets of the DRDA applications, you will
clearly see the difference between having and not having SSL encryption active.
Figure 4-32 shows the network capture of a non-encrypted application. You can see that all
the DRDA commands have been analyzed and the data is shown as you execute the
application. Anyone with minimal skill who can access the network can capture the data flowing
across it.
Figure 4-32 The network capture of DRDA request (using Wireshark)
Figure 4-33 on page 171 shows an example of a network capture of an application with DRDA
data encryption. The DRDA data encryption is done within DRDA, so the analyzer tool can
still show the DRDA commands, but you will not be able to see the data within the DRDA commands.
Figure 4-33 The network capture of DRDA Data encrypt
Figure 4-34 shows an example of network capture of DRDA application with SSL enabled. As
you can see, everything under TCP/IP is encrypted, so you are not able to see any DRDA
commands with the network analyzer tool.
Figure 4-34 The network capture of DRDA with SSL enabled
Using SSL from WebSphere Application Server
To establish SSL connections from WebSphere Application Server, the applications do not
need to be changed to get SSL enabled. All you need to do is prepare the keystore using the
keytool command and set the related properties using the custom properties settings of
DataSource.
Figure 4-35 shows an example of the settings for the WebSphere Application Server DataSource:
sslConnection: true
sslTrustStoreLocation: Specify the keystore location. In the example,
/home/db2inst1/keystore is the full path to the keystore named keystore.
sslTrustStorePassword: Specify the keystore password. In the example, passwd123 was
given for our settings.
Figure 4-35 Sample datasource custom properties settings for WebSphere Application Server
Using SSL in DB2 for z/OS DRDA Requester
Here we briefly discuss using SSL for the DB2 for z/OS DRDA requester. We do not discuss the
policy definitions needed for the Policy Agent, but you will need to add your definitions for RACF and
the Policy Agent as discussed in Prepare to use SSL at DB2 9 for z/OS server on page 152.
In addition to defining the AT-TLS policy, you must update the CDB table SYSIBM.LOCATIONS
as follows:
Populate the SECURE column with Y
Populate the PORT column with the secure port of the DB2 for z/OS server
4.3.4 DataPower
The Extensible Markup Language (XML) has become the pervasive mechanism of choice for
representing data among today's enterprise networks. Applications in an SOA
implementation use messages with self-describing XML content to exchange information and
to coordinate distributed events.
Recently, the IT industry sought to increase the ability of network devices to understand and
operate upon application data.
Indeed, as XML adoption within enterprises increases, the growing number of
slow-performing applications demonstrates that existing overtaxed software systems cannot
support next-generation applications.
Enterprises need a platform that addresses XML's performance, security, and management
requirements head-on. WebSphere DataPower Integration appliance XI50 is a complete,
purpose-built hardware platform for delivering manageable, secured, and scalable SOA
solutions.
The IBM WebSphere DataPower SOA appliance portfolio includes the following elements:
IBM WebSphere DataPower XML Accelerator XA35
IBM WebSphere DataPower XML Security Gateway XS40
IBM WebSphere DataPower Integration appliance XI50
Note: For an SSL connection, the PORT column of the SYSIBM.LOCATIONS table must be
populated with the secure port of the DB2 for z/OS server, but if the SECURE column is Y
and the PORT column is blank, then DB2 uses the reserved secure port number of 448 as
the default.
For detailed explanations and scenarios, refer to DB2 9 for z/OS: Deploying SOA Solutions,
SG24-7663.
4.4 Addressing dynamic SQL security concerns
Many DB2 for z/OS sites have traditionally restricted the amount of dynamic SQL that can be
used, for reasons of performance and security. However, an inescapable aspect of the
growing tide of application servers is that they use dynamic SQL extensively. Consequently, it
is getting more difficult to keep dynamic SQL out of DB2 for z/OS.
Squeezing the Most out of Dynamic SQL with DB2 for z/OS and OS/390, SG24-6418, shows
many techniques to manage dynamic SQL with DB2 for z/OS. Refer to that book for guidance
on the wider issues of using dynamic SQL with DB2 for z/OS. It includes a section on the
security implications of dynamic SQL.
Besides considerations on performance, discussed in 5.6, Developing static applications
using pureQuery on page 211, the main security concerns with dynamic SQL are
summarized here:
It usually requires granting direct table access to primary authorization IDs or secondary
authorization IDs. This exposes the risk that users have the authorization to manipulate
the data directly. By contrast, with the static SQL security model, users are granted
execute authority on specific packages, which means that they can only manipulate the
data using the applications that they are supposed to be using.
The administrative burden of granting access to data objects can be much higher than
granting execute authority on packages.
The static SQL security model offers major advantages to security and administration effort.
What is really needed are techniques to apply the static SQL security model to dynamic SQL
applications. The techniques in this section do precisely that.
In this section we explore the following techniques to resolve dynamic SQL security issues:
Using DYNAMICRULES(BIND) to avoid granting table privileges
Using stored procedures for static SQL security benefits
Static SQL options of JDBC to realize static SQL security benefits
Static execution of dynamic SQL to benefit from static SQL security
4.4.1 Using DYNAMICRULES(BIND) to avoid granting table privileges
DYNAMICRULES(BIND) provides a technique that allows dynamic SQL to be executed
dynamically, but to be authorized based on the static SQL security model.
What it does
The DYNAMICRULES(BIND) bind option for dynamic SQL packages forces the authorization
checks to be performed against the package owner, rather than the user who is executing the
dynamic SQL.
Note: Keep in mind that network trusted context and roles also provide further levels of
security.
Benefits
The benefit is that users can execute dynamic SQL applications without needing table access
authorities.
How it works
All SQL, including dynamic SQL, is executed through a package in DB2. For example, if you
look into the SQL contents of the package for the DB2 Command Line Processor, you can
see a number of generic PREPARE, DECLARE, OPEN, FETCH, and CLOSE
statements.
The DYNAMICRULES(BIND) option is an instruction to DB2 to perform authorization
checking against the owner of the package, rather than the authorization ID that executes the
SQL statement. It works as follows:
The dynamic SQL package PACKX is bound using owner AUTHID1 with option
DYNAMICRULES(BIND).
The user ENDUSER1 is granted EXECUTE authority on package PACKX. GRANT
EXECUTE ON PACKAGE PACKX TO ENDUSER1
AUTHID1 has SELECT authority against TABLEX.
At runtime, the application prepares and executes a dynamic SELECT statement against
TABLEX.
DB2 checks that ENDUSER1 has execute privilege on the package.
DB2 checks that AUTHID1 currently has SELECT authority for TABLEX.
Implementation considerations
DYNAMICRULES(BIND) is a valuable technique when the dynamic SQL application does not
provide any free-form SQL processing facilities (such as SPUFI on z/OS or DB2 CLP on DB2
Connect). If the application contains a free-form SQL processor, then anybody who has
execute authorization on the package, also inherits all the table authorization privileges held
by the package owner.
DYNAMICRULES(BIND) can be used with dynamic SQL applications that provide free-form
SQL processing facilities if you ensure that the authorization limits of the package owner are
acceptable from a security viewpoint. For example, if the package owner only has select
privilege on a defined set of tables, it can be acceptable to use DYNAMICRULES(BIND) and
accept the possibility that users might compose their own dynamic SQL queries against that
set of tables.
Multiple versions of a dynamic SQL package can be created in different collections. This
would allow multiple groups of users to execute the same dynamic SQL program through
different instances of the same package. For example, users from group A could be directed
to the package in collection A by setting the DB2CLI.INI variable CURRENTPACKAGESET to
A. By combining this technique with the DYNAMICRULES(BIND) BIND option, you can have
each group running the same dynamic SQL program, with the table privileges held by the
package owner of their particular instance of the package.
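For Java clients that use the IBM Data Server Driver for JDBC and SQLJ, a comparable effect can be obtained with the currentPackageSet connection property, which directs the connection to the package instances bound in a particular collection. The following fragment is a sketch only, using the collection name A from the example above and placeholder connection details.
java.util.Properties props = new java.util.Properties();
props.put("user", "enduser1");       // placeholder user ID
props.put("password", "password1");  // placeholder password
// Direct this connection to the package instances bound in collection A
props.put("currentPackageSet", "A");
java.sql.Connection con = java.sql.DriverManager.getConnection(
    "jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A", props);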
Tip: You can examine the SQL contents of DB2 Connect bind files using the bind file
descriptor utility, as follows:
db2bfd -s db2clpcs.bnd
DYNAMICRULES(BIND) does not work well for ODBC and JDBC applications, because the
packages used for binding ODBC and JDBC interfaces are generic and can be used by any
application using these APIs. The result would be that any user with EXECUTE privilege on
the ODBC and JDBC packages would have all the privileges of the package owner.
In short, it is probably best to stick with DYNAMICRULES(RUN) for the ODBC and JDBC
packages, and grant table authority to secondary authorization IDs for those users who need
ODBC and JDBC access to tables.
4.4.2 Using stored procedures for static SQL security benefits
Stored procedures can be called by dynamic SQL programs, and can execute their work
using static SQL within the procedures, and so derive the security strengths of the static SQL
model.
Stored procedures are also a good performance option for DRDA applications because they
can eliminate multiple network messages by executing all the SQL within the database server
environment.
Stored procedures with static SQL are also good for security reasons, since you can simply
grant execute privilege on the procedure, rather than access privileges on the tables that are
accessed in the procedures. An ODBC or JDBC application can issue a dynamic CALL
statement and invoke a static stored procedure to run under the authority of the package
owner for that stored procedure.
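For example, a JDBC client can invoke such a procedure with a dynamically prepared CALL statement. The procedure name and parameters below are hypothetical and only sketch the pattern; the caller needs only EXECUTE authority on the procedure and its package, not table privileges.
// con is an existing java.sql.Connection
// MYSCHEMA.UPDATE_SALARY is a hypothetical stored procedure that contains static SQL
java.sql.CallableStatement cs = con.prepareCall("CALL MYSCHEMA.UPDATE_SALARY(?, ?)");
cs.setInt(1, 12345);                                       // hypothetical employee number
cs.setBigDecimal(2, new java.math.BigDecimal("1000.00"));  // hypothetical raise amount
cs.execute();
cs.close();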
Stored procedures with dynamic SQL are also good for security reasons, but they require a
bit more effort to plan the security configuration.
If you bind the package for the stored procedure with DYNAMICRULES(BIND) then the
dynamic SQL in the stored procedure will also be authorized against the package owner
for the dynamic SQL program.
There are five other possible values for the DYNAMICRULES parameter (RUN,
DEFINEBIND, DEFINERUN, INVOKEBIND, and INVOKERUN). Each of these other
values will result in the authorization for dynamic SQL in a stored procedure being
checked against an authorization ID other than the package owner. Squeezing the Most
out of Dynamic SQL with DB2 for z/OS and OS/390, SG24-6418, contains a table that
clarifies which authorization ID is used for stored procedures with dynamic SQL. Chapter
7 provides a detailed examination of the meaning of DYNAMICRULES bind parameter
values. Refer to this publication for in-depth guidance on the range of possibilities that
exist.
Note: This technique has an exposure if the user has the ability to edit the DB2CLI.INI file
on her own workstation. There is no authorization required in DB2 for z/OS to issue SET
CURRENTPACKAGESET. Hence, a user could choose to modify the DB2CLI.INI to
another collection, and take advantage of the table privileges held by any other
authorization ID that has bound a version of the same package.
Note: DB2 for LUW and DB2 Connect have implemented the CALL statement as a fully
compiled statement. It can be dynamically prepared in CLI, ODBC, JDBC, SQLJ, and
embedded SQL.
4.4.3 Static SQL options of JDBC to realize static SQL security benefits
The pureQuery technology and SQLJ are database interfaces for Java that derive the
security strengths of the static SQL model.
The primary database API for the Java application server world is JDBC. JDBC is a form of
dynamic SQL, and suffers from the same security concerns as other dynamic SQL vehicles.
pureQuery and SQLJ are closely related to JDBC and enable embedding SQL in Java
methods. They offer performance and security benefits for Java applications by virtue of
the DB2 implementation of static SQL.
DB2 developer domain offers many fine articles covering the benefits of pureQuery and
SQLJ.
There are several articles on pureQuery, but Understanding pureQuery, Part 1: pureQuery: The IBM paradigm for writing Java database applications, by Azadeh Ahadian, gives a quick introduction to the technology. It can be found at the following Web page:
http://www.ibm.com/developerworks/data/library/techarticle/dm-0708ahadian/
The article Considering SQLJ for Your DB2 V8 Java Applications by Connie Tsui does an
excellent job of explaining the strengths of SQLJ in the areas of performance, security, and
development simplicity. It can be found at the following Web page:
http://www7b.boulder.ibm.com/dmdd/library/techarticle/0302tsui/0302tsui.html
4.4.4 Static execution of dynamic SQL to benefit from static SQL security
We examine two situations:
Using pureQuery technology to benefit from static SQL security on page 176
ODBC/CLI static SQL profiling to benefit from static SQL security on page 178
We discuss a comparison at Usage considerations on page 181.
Using pureQuery technology to benefit from static SQL security
As part of IBM Data Studio pureQuery technology, you have the capability to run a JDBC or .NET-based dynamic SQL application as a static application. As already addressed, you get a security benefit because data access is granted on the packages rather than on the underlying DB2 objects. Depending on your installation, you can get other benefits from making the application execute statically, but they are not addressed here.
We briefly explain how to run the SQL statements that are in a JDBC or .NET application statically. The steps to execute a dynamic SQL application statically using pureQuery are illustrated in Figure 4-36 on page 177. Similar steps apply to .NET applications.
Note: .NET static execution capability was added as a part of IBM Data Studio pureQuery
Runtime V2.1.
Figure 4-36 Overview of preparation steps to execute JDBC application in static mode
The basic steps to run statically are as follows:
1. Capture the SQL statements that you want to run statically.
For Java applications, set pdqProperties with captureMode(ON) and pureQueryXml(capture.pdqxml) to start capturing while the application executes (a JDBC sketch follows these steps).
For .NET applications, you can pass the options through the connection string, for example captureMode=ON, collection=COL1, rootPackageName=APPL1, and pureQueryXML=capture.pdqxml.
2. Specify options for configuring the DB2 packages that you will create in the next step from the captured SQL statements.
For Java applications, you need to add the options to the pureQuery XML file as follows:
java com.ibm.pdq.tools.Configure -pureQueryXml capture.pdqxml -rootPkgName APPL1 -collection COL1
For .NET applications, these options were already specified in step 1, so you do not need to go through this step.
3. Create and bind the DB2 packages that contain the SQL statements.
For Java applications, use the StaticBinder utility to bind the captured SQL into packages:
java com.ibm.pdq.tools.StaticBinder -url jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A -username PAOLOR7 -password newpswd -bindOptions "QUALIFIER PAOLOR7" -pureQueryXml C:\TEMP\capture.pdqxml
For .NET applications, the bind command is:
db2cap bind C:\TEMP\capture.pdqxml -d DB9A -u PAOLOR7 -p newpswd
4. Run the application in static mode, so that the captured SQL statements run statically.
For Java applications, set pdqProperties with executionMode(STATIC) and pureQueryXml(capture.pdqxml) to execute your application in static mode.
For .NET applications, set your options through the connection string, using executionMode=STATIC and pureQueryXML=capture.pdqxml.
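The following minimal JDBC sketch shows where pdqProperties fits for steps 1 and 4. It reuses the location, user ID, and capture file names from the examples above; the exact way you supply pdqProperties (URL suffix, Properties object, or data source property) can vary by driver and pureQuery Runtime level, so treat this as an illustration rather than the only valid form.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class PureQueryCaptureSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("com.ibm.db2.jcc.DB2Driver");

        Properties props = new Properties();
        props.setProperty("user", "PAOLOR7");      // placeholder credentials
        props.setProperty("password", "newpswd");
        // Step 1: capture the dynamic SQL executed by the application.
        // For step 4, change this value to
        // "executionMode(STATIC),pureQueryXml(capture.pdqxml)".
        props.setProperty("pdqProperties",
                "captureMode(ON),pureQueryXml(capture.pdqxml)");

        Connection con = DriverManager.getConnection(
                "jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A", props);
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM SYSIBM.SYSTABLES");
        while (rs.next()) {
            System.out.println("SYSTABLES rows: " + rs.getInt(1));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}
After the capture run, the Configure and StaticBinder steps shown above turn capture.pdqxml into DB2 packages; only then does executionMode(STATIC) have bound statements to route the SQL to.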
ODBC/CLI static SQL profiling to benefit from static SQL security
Static SQL profiling is a technique that allows dynamic SQL to be captured from CLI/ODBC
applications executed dynamically, and bound into static SQL packages. At runtime, the
dynamic SQL statements are substituted at the DB2 client with calls to a static SQL package,
and so derive the security strengths of the static SQL model.
Static SQL profiling is subject to the following limitations and dependencies:
Static SQL profiling works with any application that uses the DB2 CLI. Hence, CLI and
ODBC applications are eligible.
Static SQL profiling depends on the SQL statement text being an identical match. It does
not parse the SQL statements at runtime to match them to bound SQL statements.
Static SQL profiling does support variable predicates by using parameter markers in the
SQL statements.
If an SQL statement is not matched, it continues to execute dynamically.
To use static SQL profiling, you follow similar steps to pureQuery, after the program
development has been completed and all the dynamic SQL statements are fixed.
1. Run the application and capture the dynamic SQL statements.
2. Bind the captured dynamic SQL statements to DB2.
3. Enable SQL statement matching mode.
Step 1: Run the application and capture the dynamic SQL statements
You must edit the db2cli.ini settings to enable static SQL capture mode. You can do this
using the DB2 configuration GUI, or manually edit the db2cli.ini file. Example 4-33 shows a
sample db2cli.ini file that contains the four required parameter settings for capture mode.
Example 4-33 The db2cli.ini configuration for static SQL profiling capture mode
C:\>more "\Documents and Settings\All Users\Application
Data\IBM\DB2\DB2COPY1\db2cli.ini"
[DB9A]
STATICMODE=CAPTURE
STATICPACKAGE=PAOLOR7.CAPTURE
STATICCAPFILE=C:\Temp\residency\CAPTURE1.SQL
STATICLOGFILE=C:\Temp\residency\CAPTURE1.LOG
autocommit=0
DBALIAS=DB9A
Pay close attention to the value of STATICPACKAGE.
When you run the dynamic SQL program, the SQL statements will be captured and written to
file C:\Temp\residency\CAPTURE1.SQL. An audit trail of actions is written to
STATICLOGFILE=C:\Temp\residency\CAPTURE1.LOG.
Note: You can also use IBM Data Studio Developer to run your application statically. The Java application steps above are explained using a stand-alone application run from the command line. For WebSphere Application Server applications that use JPA, the static generator tool wsdb2gen.bat is provided to perform similar steps.
We tested the feature with the db2cli.exe program, which executes SQL as a script. The script contains one SQL statement, as shown in Example 4-34.
Example 4-34 Sample CLI script used to test static profiling
opt calldiag on
opt echo on
sqlallocenv 1
SQLAllocConnect 1 1
SQLConnect 1 DB9A -3 PAOLOR7 -3 NEWPSWD -3
SQLAllocStmt 1
SQLExecDirect 1 "SELECT COUNT(*) FROM SYSIBM.SYSTABLES" -3
fetchall 1
SQLDisconnect 1
SQLTransact 1 1 SQL_ROLLBACK
SQLDisconnect 1
SQLFreeHandle SQL_HANDLE_DBC 1
SQLFreeHandle SQL_HANDLE_ENV 1
After the program has run, it produces an SQL capture file, STATICCAPFILE=C:\Temp\residency\CAPTURE1.SQL, as shown in Example 4-35.
Example 4-35 Static profiling capture file
; Captured on 2009-04-27 12.12.10.
[COMMON]
CREATOR=
CLIVERSION=09.02.0000
CONTOKENUR=
CONTOKENCS=
CONTOKENRS=
CONTOKENRR=
CONTOKENNC=
[BINDOPTIONS]
COLLECTION=PAOLOR7
PACKAGE=CAPTURE
DEGREE=
FUNCPATH=
GENERIC=
OWNER=PAOLOR7
QUALIFIER=PAOLOR7
QUERYOPT=
TEXT=
[STATEMENT1]
SECTNO=
ISOLATION=CS
STMTTEXT=SELECT COUNT(*) FROM SYSIBM.SYSTABLES FOR FETCH ONLY
STMTTYPE=SELECT_CURSOR_WITHHOLD
CURSOR=SQLCURCAPCS0
OUTVAR1=INTEGER,,,,TRUE,,SQL_NAMED
Tip: The db2cli.exe program is shipped with DB2 Connect or with a data server driver copy. For example, in DB2 Connect PE/EE for Windows, you can find db2cli.exe as:
\Program Files\IBM\SQLLIB\samples\cli\db2cli.exe
You may want to edit the captured SQL file to change the OWNER field under the
[BINDOPTIONS] section of the captured SQL file. The captured value will be the primary
authorization ID that was used to run the program when the tracing was performed. The
owner should be changed to the authorization ID that will be used for authorization checking
at runtime.
You may also want to pay close attention to the QUALIFIER value. If your application uses
unqualified SQL, then this will be used as the qualifier by the package at bind time.
Step 2: Bind the captured dynamic SQL statements to DB2
To bind the captured SQL, simply run the db2cap utility provided with the DB2 client. The
syntax of the command follows:
db2cap [-h | -?] bind capture-file -d db-alias [-u userid [-p password]]
The command used in this test follows:
db2cap bind C:\Temp\residency\CAPTURE1.SQL -d DB9A -u paolor7 -p ********
It produces a package in DB2 for z/OS named CAPTURE, in collection PAOLOR7 (as specified in the db2cli.ini parameter STATICPACKAGE=PAOLOR7.CAPTURE).
The db2cap utility actually updates the consistency tokens in the SQL capture file to ensure a
matching consistency token with DB2 for z/OS.
After binding the package you need to perform three more steps:
1. Grant execute on <collid.package> to <end_user_authid or group_authid>.
2. Copy the captured SQL file to all client workstations where matching is required.
3. Edit the db2cli.ini file on those workstations to enable matching (as shown in step 3).
Step 3: Enable SQL statement matching mode
To enable matching mode, the db2cli.ini file just needs to be updated to specify a
STATICMODE of MATCH, as shown in Example 4-36.
Example 4-36 db2cli.ini configuration for static SQL profiling match mode
STATICMODE=MATCH
STATICPACKAGE=PAOLOR7.CAPTURE
STATICCAPFILE=C:\Temp\residency\CAPTURE1.SQL
STATICLOGFILE=C:\Temp\residency\CAPTURE1.LOG
autocommit=0
DBALIAS=DB9A
If the static SQL matching is successful, it is visible through tracing and is reported in the match mode log file. The capture log file was active during both the capture and match phases of this test, and is shown in Example 4-37 on page 180.
Example 4-37 Capture - match log file
Capture mode started. DSN=DB9A, COLLECTION=PAOLOR7, PACKAGE=CAPTURE, authorization
ID=PAOLOR7, CURRENTPACKAGESET=NULLID , CURRENTSQLID=PAOLOR7, capture
file=C:\Temp\residency\CAPTURE1.SQL.
Number of statements captured is 1.
CLI0008I Capture mode terminated.
Match mode started. DSN=DB9A, COLLECTION=PAOLOR7, PACKAGE=CAPTURE, authorization
ID=PAOLOR7, QUALIFIER=PAOLOR7, OWNER=PAOLOR7.
Number of successful distinct statement matches: 1.
Match mode successfully completed.
Usage considerations
The value of the static execution can range from being magic to being a waste of effort,
depending on the ability of the application to take advantage of it.
If you are using pureQuery or static SQL profiling as a security technique, then you will want to ensure that the end users do not have privileges to access the underlying tables. In this circumstance, if SQL statements are not matched, the user receives a security SQLCODE when dynamic execution is attempted. Therefore, it is important to trace all possible SQL paths in the application.
If you are using these techniques for performance reasons, you might also want to grant access to the tables to the end users, so that statements still complete successfully even if they are not matched and have to be executed dynamically.
JDBC application scenarios that meet the following criteria are good candidates:
Theoretically, all JDBC applications are eligible. pureQuery 1.2 has the following prerequisites (a driver-level check sketch follows this list):
JRE 1.5 or higher.
IBM Data Server Driver for JDBC and SQLJ, release 3.52 or higher.
Be sure that application management changes will take effect:
Security changes.
Access path management. You will have to consider how many SQL statements you capture per file, because each capture file will likely become one package.
You should be aware of all the SQL that you are going to capture. Currently you are not able to see whether all the SQL in the application logic has been captured, because the technology captures only the SQL that is executed.
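As a quick way to confirm the driver prerequisite, a small check through standard JDBC metadata reports the driver level an application actually loads. This is only a sketch; the connection values are the placeholders used elsewhere in this chapter.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public class DriverLevelCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("com.ibm.db2.jcc.DB2Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A", "PAOLOR7", "newpswd");
        DatabaseMetaData md = con.getMetaData();
        // For the IBM Data Server Driver for JDBC and SQLJ, the reported driver
        // version should be 3.52 or higher to satisfy the pureQuery prerequisite.
        System.out.println(md.getDriverName() + " " + md.getDriverVersion());
        con.close();
    }
}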
ODBC/CLI application scenarios that meet the following criteria are good candidates:
Dynamic SQL with parameter markers executed through the ODBC/CLI driver.
A relatively small number of discrete SQL statements to be matched at runtime.
High volume applications (for CPU benefits).
An established mechanism in place to easily distribute the captured SQL file and the db2cli.ini file to all clients (or use the thin DB2 client so that only one copy of the capture file and db2cli.ini is needed).
Part 3 Distributed applications
This part contains the following chapters:
Chapter 5, Application programming on page 185
Chapter 6, Data sharing on page 233
Chapter 5. Application programming
When dealing with distributed applications on various platforms, the bind options, configuration parameters, and data source properties configured on the requester can affect your application's performance on the DB2 for z/OS server.
In this chapter we focus on remote application best practices from various requesters
connecting to a DB2 for z/OS server.
This chapter contains the following sections:
Accessing data on a DB2 for z/OS server from a DB2 for z/OS requester on page 186
Migrating from DB2 private protocol to DRDA on page 194
Program preparation steps when using non-DB2 for z/OS Requesters on page 200
Using the non-Java-based IBM Data Server Drivers on page 203
Using the IBM Data Server Driver for JDBC and SQLJ on page 207
Developing static applications using pureQuery on page 211
Remote application development on page 216
XA transactions on page 225
Remote application recommendations on page 229
5.1 Accessing data on a DB2 for z/OS server from a DB2 for
z/OS requester
In this section we discuss how to connect to a DB2 for z/OS server from a DB2 for z/OS
requester and recommended bind options for remote access. It is assumed that you have
already set up the Communications Database (CDB) on the DB2 for z/OS requester as
explained in 3.2.2, Configuring the Communications Database on page 86.
5.1.1 System-directed versus application-directed access
There are two main ways to code for distributed data from your application:
Using explicit SQL CONNECT statements (application-directed remote access)
The SQL CONNECT statement requires that the application be aware of where the data
resides, because it first has to connect to that remote database before issuing the SQL
statement.
Using three-part names/implicit DRDA (system-directed remote access)
With implicit DRDA and three-part names, you can use an alias to hide the actual location of the data from the application, making the location of the data transparent to the application.
Before we go into the details of how and when to use SQL CONNECT and three-part names
statements, it is important to have a basic understanding of how connections to remote
databases work from an application point of view, especially when more than one connection
is involved.
Connection management during distributed data access
Coding for distributed data requires some information about the connecting states of your
application.
Figure 5-1 on page 187 shows the details about the application and SQL connection states.
An application process can be in a connected or an unconnected state, and have zero or
more SQL connections. A single SQL connection can be in one of the following four states:
Current and held
Current and release pending
Dormant and held
Dormant and release pending
An application process is initially in the connected state and its SQL connection is current and
held. Issuing a successful CONNECT to another location (or implicitly through a three-part
name), or a SET CONNECTION statement, makes that current SQL connection dormant and
held. Issuing a RELEASE puts the connection in the release pending state. Changing a
connection's status from held to release pending does not have any effect on its status as current or dormant.
Figure 5-1 Connection management
Three-part names
The easiest way to understand three-part table names is by looking at the syntax. We give an
example in Example 5-1.
Example 5-1 Three-part table names
SELECT * FROM DB9A.SYSIBM.SYSTABLES;
The first qualifier of the three-part name is the location name of the server where the DML is
going to be executed. The location name must be defined in the CDB. The second part is the
table/view qualifier, and the last part is the table or view name. Qualifier and object name are
the same as you normally use for local access.
For convenience, you can also define aliases instead of using full three-part table names. An
example is shown in Example 5-2.
Example 5-2 Aliases for three-part table names
CREATE ALIAS DB9ATABLES FOR DB9A.SYSIBM.SYSTABLES;
SELECT * FROM DB9ATABLES;
Using aliases provides some form of location transparency because it allows you to move the
data to another DBMS location without having to change your applications. When you move
the data to a different DBMS in another location, you only have to drop and recreate the
aliases pointing them to the new location. When you use explicit CONNECT statements, you
have to change the location name in the application in all the CONNECT TO statements.
When you use three-part names, connections are implicitly established and released as you
execute your SQL statements referencing them. The default protocol used to establish a
connection when three-part names are used is DRDA. Starting with DB2 9 for z/OS, if you
choose to override it with the DBPROTOCOL PRIVATE bind option, you receive a warning
message on BIND.
When an application uses three-part names/aliases for remote objects and DRDA access, the
application program must be bound at each location that is specified in the three-part names.
Also, you need to define the alias at the remote site as well as at the local site for all remote
objects that are accessed through aliases. For example, if your current SQL ID or
authorization ID is ADMF001 and you issue the CREATE ALIAS statement in Example 5-3, it
does not resolve correctly.
Example 5-3 Incorrect alias definition
CREATE ALIAS MYALIAS1 FOR DB9A.AUTHID2.EMPTABLE
ADMF001.MYALIAS1 is not found at DB9A because the alias refers to AUTHID2.EMPTABLE.
You need to create the alias shown in Example 5-4 at the remote location DB9A.
Example 5-4 Correct remote alias definition
CREATE ALIAS ADMF001.MYALIAS1 FOR AUTHID2.EMPTABLE
Explicit CONNECT statements
Another way to access distributed data is using explicit SQL CONNECT statements. In
contrast with three-part names, the CONNECT statement is only supported in the DRDA
protocol. If you used explicit CONNECT statements in your application program but bound
your package with DBPROTOCOL(PRIVATE), DRDA is used to access the remote server.
Example 5-5 gives a basic example of using an explicit CONNECT statement.
Example 5-5 Explicit CONNECT statement
CONNECT TO DB9A;
SELECT * FROM SYSIBM.SYSTABLES;
When you execute the CONNECT statement, the application connects to a (remote) server.
You can hard code the location name or use a host variable. The CONNECT changes the
CURRENT SERVER special register to reflect the location of the new server. After the
CONNECT statement was successfully executed, any subsequent statement runs against the
new location. You can return to the local DB2 subsystem by issuing the CONNECT RESET
statement. It is not possible to perform a join of tables residing in two different DB2
subsystems (distributed request) unless you use the InfoSphere Classic Federation Server for
z/OS.
DB2 for z/OS also supports the syntax:
EXEC SQL CONNECT TO :LOC USER :AUTHID USING :PASSWORD;
When you issue explicit CONNECT statements to remote locations, you can release these
connections explicitly by issuing SQL RELEASE statements. This way, you have more
flexibility over the duration of your connections. When you execute a RELEASE statement in
your program, the statement does not immediately release the connection. The connection is
labeled as release-pending, and is released at the next commit point.
The explicit RELEASE statement follows:
RELEASE DB9A;
An application can actually use two CONNECT types: Type(1) and Type(2). The two types of
CONNECT statements have the same syntax, but different semantics. At precompile time, the
CONNECT(1/2) option determines the type of connection that will be used for the application.
CONNECT Type(1)
CONNECT(1) basically provides remote unit of work support. When you connect to a second
system using a CONNECT(1) connection, the new CONNECT ends any existing
connections of the application process, and closes any open cursors.
CONNECT Type(2)
CONNECT(2) connections basically support distributed units of work. Type(2)
connections do not end any existing connections or close any cursors. Connecting to
another system does not do any implicit COMMIT. You must issue your COMMIT
statements explicitly according to your application flow.
For detailed information about CONNECT Type(1), Type(2), and RELEASE statements, refer
to DB2 Version 9.1 for z/OS SQL Reference, SC18-9854.
Table 5-1 summarizes the differences between the CONNECT Type(1) and CONNECT
Type(2) statements.
Table 5-1 Type 1 and Type 2 CONNECT statements
Type 1: CONNECT statements can be executed only when the application process is in the connectable state. Only one CONNECT statement can be executed within the same unit of work.
Type 2: More than one CONNECT can be executed within a UOW. There are no rules about the connectable state.
Type 1: When a CONNECT statement fails because the application is not in the connectable state, the connection state is unchanged. If the SQL connection fails for any other reason, the application process is placed in an unconnected state.
Type 2: If a CONNECT statement fails, the current SQL connection is unchanged, and any subsequent SQL statements are executed at the current server, unless a failure prevents the execution of other statements by that server.
Type 1: CONNECT ends any existing connections of the application process, and it closes any open cursors. No implicit commit issued.
Type 2: No cursors closed, no connections ended.
Type 1: A CONNECT to the current server is treated like any CONNECT(1) statement.
Type 2: If the SQLRULES(STD) bind option is in effect, a CONNECT to an existing SQL connection of the application process is an error. Thus, a CONNECT to the current server is an error. For example, an error occurs if the first CONNECT is a CONNECT TO x where x is the local DB2. If the SQLRULES(DB2) bind option is in effect, a CONNECT to an existing SQL connection is not an error. Thus, if x is an existing SQL connection of the application process, CONNECT TO x makes x its current connection. If x is already the current connection, CONNECT TO x has no effect on the state of any connections.
5.1.2 Program preparation when DB2 for z/OS is the AR
This section covers program preparation steps including precompiler and bind options when
both the requester and server are DB2 for z/OS subsystems.
Refer to DB2 Version 9.1 for z/OS Application Programming and SQL Guide, SC18-9842 for
details on program preparation and DB2 Version 9.1 for z/OS Command Reference,
SC18-9844 for details on BIND options.
Precompiler options
CONNECT
If you are accessing more than one server in a unit of work (DUW), you must use the
CONNECT(2) precompiler option. Using CONNECT(1) explicitly only allows your
programs to connect to one server at a time. The default and recommended option is
CONNECT(2).
SQL
If you are accessing a non-DB2 for z/OS server, use the SQL(ALL) option. This option
allows you to use any statement that is in the DRDA standard. If you are only accessing
DB2 for z/OS, you can use SQL(DB2). In that case, the precompiler only accepts
statements that are used by DB2 for z/OS.
Binding your applications
When you want to access remote objects using DRDA, you must use packages at the remote
site.
Example 5-6 illustrates different ways to bind, copy and deploy your remote package.
In our environment, DB9A is the local site and DB9C is the remote site.
Example 5-6 Binding packages at a remote site
-- Bind package at the remote site after shipping the DBRM
-- This is identical to a local bind
BIND PACKAGE(TESTCOL) MEMBER(TES2DCON)
-- Remote package bind running on your local system, binding DBRM TES2DCON
-- in local DBRM-lib on remote location DB9C into collection TESTCOL
BIND PACKAGE(DB9C.TESTCOL) MEMBER(TES2DCON)
-- Remote package bind running on your local system, copying package TES2DCON
-- in collection TESTCOLL on local DB2, and binding on remote location DB9C into
-- collection TESTCOL
BIND PACKAGE(DB9C.TESTCOL) COPY(TESTCOLL.TES2DCON) COPYVER(V1)
--Remote package bind running on your local system deploying 2 different versions
--of native SQL procedure package SP2DCON to remote server DB9C.
BIND PACKAGE(DB9C.TESTSPCOL) DEPLOY(PAOLOR3.SP2DCON) -
COPYVER(V1_return_4_rows) ACTION(ADD)
BIND PACKAGE(DB9C.TESTSPCOL) DEPLOY(PAOLOR3.SP2DCON) -
COPYVER(V2_return_3_rows) ACTION(ADD)
Note: Programs containing CONNECT statements that are precompiled with different CONNECT precompiler options cannot execute as part of the same application process. An error occurs when an attempt is made to execute the invalid CONNECT statement.
In DB2 9 for z/OS, BIND DEPLOY can be used to deploy a native SQL procedure to another
DB2 for z/OS server. When using the DEPLOY option, only the collection ID, QUALIFIER,
ACTION, COPYVER, and OWNER options are allowed. This allows a functionally equivalent
procedure to be copied to another development environment in a local or remote DB2
subsystem, but does not allow other changes. Note that the access path is not copied when
you use the COPY or DEPLOY options and the package will be re-optimized.
For more details on how to deploy a native SQL procedure to other DB2 for z/OS servers,
refer to DB2 Version 9.1 for z/OS Application Programming and SQL Guide, SC18-9841.
For details on package management, refer to DB2 9 for z/OS: Packages Revisited,
SG24-7688.
Example 5-7 illustrates three different ways to bind your plan. Using wildcards gives you the
flexibility to add new packages without having to rebind the plan.
Example 5-7 Binding packages into the plan
-- BIND PLAN using specific list of local and remote packages
-- Only package PKNAME in collection COLLNAME on the local system
-- and LOCNAME remote system can be executed
BIND PLAN PKLIST (COLLNAME.PKNAME,LOCNAME.COLLNAME.PKNAME)
-- BIND PLAN allowing all packages in a collection to be executed
-- All packages in collection COLLNAME on the local system
-- and LOCNAME remote system can be executed
BIND PLAN PKLIST (COLLNAME.*,LOCNAME.COLLNAME.*)
-- BIND PLAN allowing all packages in a collection to be executed at all locations
-- All packages in collection COLLNAME can be executed on local and all remote systems
BIND PLAN PKLIST (*.COLLNAME.*)
BIND PACKAGE options
Let us now have a brief look at the bind options that are specific to distributed database
access.
The options are:
SQLERROR
Use SQLERROR(CONTINUE) so that when there are statements that are not valid at the
current server, the package is still created. Also, when binding the same package at
multiple servers some references to objects may not be valid on all of the servers,
because the objects do not exist there. Using this option solves this problem.
CURRENTDATA
Use CURRENTDATA(NO) (default value in DB2 9 for z/OS) to force block fetch for
ambiguous cursors whenever possible.
ISOLATION LEVEL
This can have an impact on block fetch depending on the CURRENTDATA value. Isolation
Level CS (Cursor Stability) is the normal and default setting - avoid using RR to allow the
server to use block fetch. Refer to the table in the DB2 Version 9.1 for z/OS Performance
Monitoring and Tuning Guide, SC18-9851.
KEEPDYNAMIC
KEEPDYNAMIC(YES) is used to cache the SQL statements information inside the DBM1
address space, at the thread level. This option should not be confused with the dynamic
statement cache, or global cache feature in DB2. When dynamic statement cache (global
cache) and KEEPDYNAMIC(YES) is turned on, then DB2 keeps the prepared statements
and the statement string. When only KEEPDYNAMIC(YES) is used, and the global cache
is not active, then only the statement string is kept inside the DBM1 address space. Note
that the use of KEEPDYNAMIC(YES) may prevent a distributed connection from being
inactivated.
SQLRULES
Use SQLRULES(DB2) for more flexibility in coding your applications, particularly for LOB data, and to improve performance.
DEFER
Use DEFER(PREPARE) to improve performance for dynamic SQL so that the PREPARE
and EXECUTE are chained and sent together to the server. Note that this means that the
application receives a SQLCODE 0 on PREPARE, even though the statement may contain
invalid text or objects, and is not validated until the EXECUTE or DESCRIBE statement is
executed by the application. This means that the application program has to handle SQL
codes that are normally returned at PREPARE time, but can now occur at
EXECUTE/DESCRIBE/OPEN time.
RELEASE
RELEASE(COMMIT) is the recommended option. Even if packages were bound with
RELEASE(DEALLOCATE) at a DB2 for z/OS server on behalf of a DRDA client
connection, DB2 for z/OS would treat them as though they were bound with
RELEASE(COMMIT) during execution time.
The following options are specific to BIND PLAN and distributed access.
DISCONNECT
The default and most flexible option is EXPLICIT. This releases all connections in release
pending state at commit time. This means that you must use RELEASE statements in your
applications to end the connections. Other values are AUTOMATIC and CONDITIONAL.
AUTOMATIC releases all connections at commit, even those that were not released by the
application. CONDITIONAL ends remote connections at commit, except when open,
with-hold cursors are in use by the connection.
CURRENTSERVER
Determines the location to connect to before running the plan. The column
CURRENTSERVER in SYSIBM.SYSPLAN records this value, and the special register CURRENT SERVER receives this value when the plan is allocated. Use this value when
you want an application to use the data at another DB2 server without changing the
application. Avoid using this keyword when there are explicit CONNECT statements in
your application. An implicit type 1 connection is used with this option and causes any
explicit CONNECT statements in the application to be treated as type 1, even if the
application was precompiled with option type 2.
Figure 5-2 on page 193 illustrates the use of packages and their execution within a local and
remote DB2 subsystem when accessing distributed data.
Figure 5-2 Execution of remote packages
The flow in Figure 5-2 is as follows:
1. When the first SQL statement is executed, the local plan (TESTPLAN) is used.
2. Package (TESTCOL.TES2DCON) is allocated on the local system (DB9A).
3. The local update to the EMPLOCAL table is executed.
4. Connect to the remote DB2 system DB9C.
5. When executing at a remote system using DRDA, all packages run under the same plan
called DISTSERV. The remote package TESTCOL.TES2DCON is looked up and loaded
from the directory of the remote system (DB9C).
6. The update on the remote table (EMPRMT) is executed as a true static SQL statement on
DB9C.
5.1.3 Using DB2 for z/OS as a requester going outbound to a non-DB2 for
z/OS server
When using a DB2 for z/OS requester and going outbound to a non-DB2 for z/OS server such as DB2 for LUW or DB2 for i, you need to set up the CDB tables on the DB2 for z/OS requester so that you can access the remote server through TCP/IP, as documented in Chapter 3, Installation and configuration on page 69.
This configuration is described in 2.2, DB2 for LUW and DB2 for z/OS on page 37.
In addition to binding the packages against the local DB2 for z/OS, you need to bind against
the remote server. In this case SAMPLE is the LUW database to which we are connecting. In
your application program, you can either use explicit CONNECT to SAMPLE or a three-part
name statement that makes a connection through implicit DRDA. Remember that when CDB
outbound translation is in effect, you need to ensure that the translated ID corresponding to the BIND OWNER (not QUALIFIER) has sufficient authority to bind and execute packages on
the remote server.
Example 5-8 shows a sample program that can be used to remote bind SPUFI packages
against the DB2 LUW server SAMPLE. For details, refer to the following Web page:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=/com.ibm.db29.doc.inst/db2z_runspufiatremotesite.htm
Example 5-8 Binding remote SPUFI packages
//PAOLO44B JOB (999,POK),REGION=5M,MSGCLASS=X,CLASS=A,
// MSGLEVEL=(1,1),NOTIFY=&SYSUID
/*JOBPARM S=SC63
//*------------------------------------------------------------------
//* DRDA REDBOOK --> BIND REMOTE SPUFI PACKAGES IN AIX BOX
//*------------------------------------------------------------------
//JOBLIB DD DISP=SHR,DSN=DB9C9.SDSNEXIT
// DD DISP=SHR,DSN=DB9C9.SDSNLOAD
//DSNTIRU EXEC PGM=IKJEFT01,DYNAMNBR=20,COND=(4,LT)
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DB9A)
BIND PACKAGE(SAMPLE.REMAIXCS) MEMBER(DSNESM68) -
ACTION(ADD) ISO(CS) CURRENTDATA(YES) -
LIBRARY('DB9A9.SDSNDBRM')
BIND PACKAGE(SAMPLE.REMAIXRR) MEMBER(DSNESM68) -
ACTION(ADD) ISOLATION(RR) -
LIBRARY('DB9A9.SDSNDBRM')
BIND PACKAGE(SAMPLE.REMAIXPUR) MEMBER(DSNESM68) -
ACTION(ADD) ISOLATION(UR) -
LIBRARY('DB9A9.SDSNDBRM')
END
5.2 Migrating from DB2 private protocol to DRDA
Private protocol has been deprecated in DB2 9 for z/OS and will be removed in a future
release of DB2 due to the following reasons:
Private protocol is only used by DB2 for z/OS.
Networks are rarely homogeneous.
Private protocol has not been functionally enhanced since DB2 Version 5.
DRDA's support for data blocking and its improved performance make it the preferred
vehicle for remote data access.
Restriction: It is not possible to use SNA to connect to a non-DB2 for z/OS server; you must use TCP/IP.
DB2 V6 enhanced the DRDA protocol with the support of three-part names and aliases. This function allowed you to set the default of the DBPROTOCOL BIND option to PRIVATE protocol when binding packages or plans of applications utilizing three-part name references. Starting
with DB2 9, it is no longer possible to change the default to private protocol. However, it is still
possible to specify DBPROTOCOL(PRIVATE) when performing a BIND, and the request
completes with a warning message DSNT226I indicating that this option is no longer
recommended.
Because you should be prepared for the eventual removal of private protocol support, DB2 for
z/OS provides a catalog analysis tool (DSNTP2DP) to be used on DB2 9 catalogs. This tool is
provided with the DB2 installation JCL in the form of a REXX program. This new program
searches the system's DB2 catalog tables to determine all private protocol dependencies
known to DB2 in existing bound applications. From this search a series of bind jobs that
specify DBPROTOCOL(DRDA) are automatically created.
5.2.1 DB2 performance trace to show private protocol use
By running a specific performance trace (IFCID 157) during execution windows of opportunity,
you get a record of the packages and DBRMs that are executing private protocol statements.
The trace records produced would have information such as plan name, package name,
DBRM name, section number, remote location name, statement type, and SQL statement
before and after alias resolution. With this information, you can create lists of packages that
should be bound in preparation for DB2's elimination of Private Protocol. This is not a new
trace, but rather is a performance trace of specific IFCIDs.
Because DRDA requires packages at remote sites, all programs currently using private
protocol must have packages bound to the remote sites referenced in those applications. One
way to determine the remote sites that are referenced in currently running programs is to run
the performance trace for IFCIDs 157 to obtain this information. A START TRACE command
to start these traces is shown here:
START TRACE(PERFM) CLASS(30) IFCID(157) DEST(GTF)
You can use IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS (OMEGAMON PE), which is enhanced to dump these trace records for DB2 Version 8 as well as for DB2 9.
For DB2 V8, you can download from the DB2 for z/OS Exchange section of the IBM
developerWorks Web site an unsupported version of DSNTP2DP that can be used against
DB2 V8 catalogs.
5.2.2 The PRIVATE to DRDA REXX migration tool: DSNTP2DP
To help you convert your plans and packages from using private protocol to DRDA protocol,
DB2 provides the private to DRDA protocol REXX tool, DSNTP2DP, which scans your catalog
and generates the necessary commands to convert all objects that have a remote location
private protocol dependency to DRDA. We recommend that you run the DBRM conversion
before running the private protocol tool, to lessen the chance of overlaying existing packages
or deleting DBRMs. Refer to DB2 9 for z/OS: Packages Revisited, SG24-7688, for details on
DBRM conversion.
You can tailor the generated output from the tool and run it at your discretion. Use job
DSNTIJPD, which is customized during migration, to invoke DSNTP2DP.
A package or plan has a remote location private protocol dependency only when the tool can
extract from the catalog remote location dependency information that is related to a plan or
package. Just having the DBPROTOCOL column of the catalog tables that manage plans and
packages set to a value of 'P' (Private) does not mean that the plan or package has a remote
location private protocol dependency. However, packages and plans that were bound with the
DBPROTOCOL PRIVATE option will not be allowed in a future release of DB2.
The syntax and options for the tool DSNTP2DP are shown in Figure 5-3.
Figure 5-3 Options for tool DSNTP2DP
The tool has the following three parameters:
Subsystem ID of DB2 to be examined (mandatory parameter) specified as SSID=ssid
This first parameter is mandatory so that DSNTP2DP can connect to the DB2 subsystem.
Default collection name (optional parameter) specified as DEFCOLLID=collectionid
The second parameter is the default collection name to be used in the generated BIND
commands where a collection name cannot be determined. If this parameter is not
specified, then the tool assumes a default collection name of DSNCOLLID.
Run options (optional parameters) ALIASES=Y/N, PACKAGES=Y/N, PLANS=Y/N
The run parameters trigger which processing the tool performs. If none of the run option
parameters are specified, then all catalog objects are examined by the tool (that is, a value
of Y is assumed for processing the plans, packages, and aliases in the catalog). If any of
the run option parameters are specified and the value assigned to a particular run option
is an N, then the processing for those catalog objects controlled by that run option is not
performed.
There are two catalog tables that have a database protocol (DBPROTOCOL) flag, SYSPLAN
and SYSPACKAGE. The catalog analysis tool uses this flag to extract the potential packages,
DBRMs, or plans to be converted to DRDA from private protocol. Commands are only
generated for those packages, DBRMs, or plans, that have a remote location dependency.
The output of the DSNTP2DP REXX exec with ALIASES=Y contains SQL statements to
create aliases for the remote locations. You can execute these statements using DSNTEP2 or
DSNTEP4, or any application that can process SQL statements, including CREATE ALIAS,
CONNECT TO, RELEASE, and COMMIT.
Important: Use the option ACTION(REPLACE) RETAIN for binding plans if using
DB2-based security. This option preserves EXECUTE privileges when you replace the
plan. If ownership of the plan changes, the new owner grants the privileges BIND and
EXECUTE to the previous owner. The RETAIN option is not the default. If you do not
specify RETAIN, everyone but the plan owner loses the EXECUTE privilege (but not the
BIND privilege). If plan ownership changes, the new owner grants the BIND privilege to the
previous owner.
Determine whether there are existing packages that are bound with DBPROTOCOL(PRIVATE).
Bind those packages with DBPROTOCOL(DRDA) at the correct locations. Running the
DSNTP2DP exec with PACKAGES=Y, which is run as part of job DSNTIJPD, provides output
that assists with this task.
Aliases
If the ALIASES run option is Y, then all aliases are examined. For each alias that references a
remote object and the fully qualified name of the remote object (for example,
AUTHID.OBJECT) does not match the fully qualified name of the alias (for example,
CREATOR.NAME) a command is generated to connect to the remote system and create the
necessary two-part name alias at the remote location.
Packages
If the PACKAGES run option is Y, then all the packages in the catalog are examined next.
For each local package that has a DBPROTOCOL set to P and a dependency on a remote
location, a command will be generated to REBIND the local PACKAGE with the
DBPROTOCOL(DRDA) option. This PACKAGE will be used as the source for the next
generated BIND PACKAGE COPY commands against each server location where the
PACKAGE has a remote dependency. When binding any of the packages to remote location
servers, SQLERROR CONTINUE will also be specified to ensure that the package is bound
to the remote locations, because not all of the statements within the package may actually
reference objects in the remote servers.
Figure 5-4 shows the commands generated to create the local package and to copy the local
package to the remote systems.
Figure 5-4 Sample output for packages
DSN SYSTEM(DB9A)
* This file can be used as input to TSO batch.
* Note: for each target location referenced in remote
* bind requests, a different userid (other than the one
* running the TSO batch job) may be used to access the
* target location depending on the configuration within
* this subsystem's CDB. That userid must either have
* SYSADM authority on the target syssubsytem or it must
* have suitable privileges and authorities to bind the
* package into the collections of the target location.
REBIND PACKAGE(LI682COLLID.LI682C) -
DBPROTOCOL(DRDA)
BIND PACKAGE(DB9A.LI682COLLID) COPY(LI682COLLID.LI682C) -
OPTIONS(COMPOSITE) -
OWNER(PAOLOR3) QUALIFIER(PAOLOR3) -
DBPROTOCOL(DRDA) SQLERROR(CONTINUE)
REBIND PACKAGE(LI682COLLID.LI682D.(V1R1M0)) -
DBPROTOCOL(DRDA)
BIND PACKAGE(DB9C.LI682COLLID) COPY(LI682COLLID.LI682D) -
COPYVER(V1R1M0) OPTIONS(COMPOSITE) -
OWNER(PAOLOR3) QUALIFIER(PAOLOR3) -
DBPROTOCOL(DRDA) SQLERROR(CONTINUE)
Plans
If the PLANS run option is set to Y, then the tool examines all the plans in the catalog.
Regardless of the setting of the PACKAGES run option, no further packages in the catalog
are examined as part of this phase of the tool's processing. Thus, this phase of the tool only
generates actions to convert all the DBRMs that are bound directly into plans that have a
remote location dependency.
For each DBRM that is bound directly into a plan that has a remote dependency, a BIND
PACKAGE command will be generated that will bind the DBRM as a PACKAGE locally within
a specified collection (or default collection DSNCOLLID) using BIND PACKAGE parameters
that can be extrapolated from the current PLAN parameters and any parameter values from
the corresponding SYSDBRM row, while ensuring that DBPROTOCOL(DRDA) is specified. The
source for the DBRM will be obtained from the PDSNAME column of the corresponding
SYSDBRM row. The next set of generated commands will be BIND PACKAGE COPYs with
the specified collection (or default collection DSNCOLLID) specified against each server
location to be accessed by the DBRM package, and the source will be the package just
created. In addition, the binding of these packages to remote location servers will also have
the SQLERROR CONTINUE option specified.
As a final step of this phase, a BIND PLAN is generated to replace the existing PLAN with a
new PKLIST if none was previously present, or an updated PKLIST if one was previously
present and DBPROTOCOL(DRDA) is specified. The BIND PLAN command's PKLIST
parameter must now include the local and remote collections.
The tool generates the commands that should be performed to make a plan or package use
DRDA protocols when accessing remote locations. These commands with the appropriate
JCL are stored in a file that can then be tailored for the environment.
Figure 5-5 on page 199 shows the commands generated to create the local packages from
DBRM-based plans and to BIND the local plans with the remote packages.
Important: If you run this tool against your DBRM-based plans using private protocol, you
create statements that will create packages, both locally and remotely. When the BIND
PLAN statement is generated, it will not have any DBRMs bound directly into the plan, and
all access will be through the packages listed in the PKLIST.
Complete the conversion
After you have run the DSNTP2DP REXX exec and have the SQL created for the aliases, and
the bind commands necessary for both packages and plans, execute these commands. You
can use DSNTEP2, DSNTEP4 or your favorite SQL processor to create the aliases. Run the
BIND commands for the packages and plans using JCL you would use to run any other bind
statements.
You might also want to replicate plans and packages that need to be converted into a
development environment, run the tool there, exercise the new binds, test the code and then
deliver tested scripts to production.
Figure 5-5 Sample output from PLANS data set
DSN SYSTEM(DB9A)
* This file can be used as input to TSO batch.
* Note: for each target location referenced in remote
* bind requests, a different userid (other than the one
* running the TSO batch job) may be used to access the
* target location depending on the configuration within
* this subsystem's CDB. That userid must either have
* SYSADM authority on the target syssubsytem or it must
* have suitable privileges and authorities to bind the
* package into the collections of the target location.
BIND PACKAGE(DSNCOLLID) MEMBER(PGM04) -
LIBRARY('PAOLOR3.TEMP.DBRM1') -
OWNER(PAOLOR3) QUALIFIER(PAOLOR3) -
...
DBPROTOCOL(DRDA) SQLERROR(CONTINUE)
BIND PACKAGE(DB9C.DSNCOLLID) COPY(DSNCOLLID.PGM04) -
OPTIONS(COMPOSITE) -
OWNER(PAOLOR3) QUALIFIER(PAOLOR3) -
DBPROTOCOL(DRDA) SQLERROR(CONTINUE)
BIND PLAN(FINPRIV1) ACTION(REPLACE) RETAIN -
ACQUIRE(USE) CACHESIZE(256) DISCONNECT(EXPLICIT) -
OWNER(PAOLOR3) QUALIFIER(PAOLOR3) -
VALIDATE(RUN) -
ISOLATION(CS) -
RELEASE(COMMIT) -
CURRENTDATA(NO) -
NODEFER(PREPARE) -
DEGREE(1) -
DYNAMICRULES(RUN) -
REOPT(NONE) -
KEEPDYNAMIC(NO) -
ENCODING(37) -
IMMEDWRITE(NO) -
SQLRULES(DB2) -
PKLIST(DSNCOLLID.*,DB9C.DSNCOLLID.*) -
DBPROTOCOL(DRDA)
Attention: For DB2 9 for z/OS, obtain the PTF for APAR PK78553 which provides a more
complete version of DSNTP2DP. For DB2 for z/OS V8, a version of DSNTP2DP is available
through developerWorks:
http://www.ibm.com/developerworks/exchange/dw_entryView.jspa?externalID=213&categoryID=32
5.3 Program preparation steps when using non-DB2 for z/OS
Requesters
In this section, we discuss the program preparation steps when using DB2 Connect or any of
the IBM Data Server Drivers/Clients as Application Requesters. Remember that Command
Line Processor (CLP) is bundled with DB2 Connect and the IBM Data Server Clients/Runtime
clients but not available when using the thin Data Server Drivers. When CLP is not available
you can use the DB2Binder utility (described in 5.3.2, Using the DB2Binder utility to bind
packages used by the Data Server Drivers on page 203) to bind packages required by the
data server drivers on the DB2 for z/OS Server.
5.3.1 Connecting and binding packages from DB2 CLP
Provided that you have already cataloged your database either using CCA (Client
Configuration Assistant) or from the command line, you can connect to a DB2 for z/OS server
from DB2 CLP using the CONNECT...USER...USING syntax, or as shown in Example 5-9
where you are prompted for your password.
Example 5-9 Connecting from DB2 CLP
C:\Program Files\IBM\SQLLIB\BIN>db2 connect to DB9A user paolor3
Enter current password for paolor3:
Database Connection Information
Database server = DB2 z/OS 9.1.5
SQL authorization ID = PAOLOR3
Local database alias = DB9A
You can specify any generic bind option following the generic keyword. You need to be in the
BND directory to perform the BIND. You need to switch back to the BIN directory to execute
remote SQL.
For details, refer to the following Web page:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.apdv.cl
i.doc/doc/t0006343.html
Example 5-10 shows how to bind packages used by DB2 Connect at a remote DB2 for z/OS server.
Tip: We recommend that you explicitly bind the packages specified in DDCSMVS.LST with BLOCKING ALL and SQLERROR CONTINUE, using a user ID that has BINDADD authority in the NULLID collection. If you do not bind explicitly, another user connected to the target DB2 for z/OS subsystem with the right authority running a query will cause an implicit rebind. The DDCSMVS.LST packages will then be implicitly bound without BLOCKING ALL, and all subsequent users of the gateway will experience poor query performance because rows will be returned on a one row per block basis.
Example 5-10 Binding DB2 Connect packages on a remote DB2 for z/OS
C:\Program Files\IBM\SQLLIB\bnd>db2 bind @ddcsmvs.lst blocking all sqlerror cont
inue
LINE MESSAGES FOR ddcsmvs.lst
------ --------------------------------------------------------------------
SQL0061W The binder is in progress.
LINE MESSAGES FOR db2clist.bnd
------ --------------------------------------------------------------------
SQL0038W The bind option SQLERROR CONTINUE has been
activated since it is required when binding this DB2-supplied
list file to DB2/MVS, SQL/DS, or OS/400.
SQL0038W The bind option SQLERROR CONTINUE has been
activated since it is required when binding this DB2-supplied
list file to DB2/MVS, SQL/DS, or OS/400.
LINE MESSAGES FOR db2clpcs.bnd
------ --------------------------------------------------------------------
2958 SQL0408N A value is not compatible with the data type of its
assignment target. Target name is "XMLVAL". SQLSTATE=42821
LINE MESSAGES FOR db2clprr.bnd
------ --------------------------------------------------------------------
2958 SQL0408N A value is not compatible with the data type of its
assignment target. Target name is "XMLVAL". SQLSTATE=42821
LINE MESSAGES FOR db2clpur.bnd
------ --------------------------------------------------------------------
2958 SQL0408N A value is not compatible with the data type of its
assignment target. Target name is "XMLVAL". SQLSTATE=42821
LINE MESSAGES FOR db2clprs.bnd
------ --------------------------------------------------------------------
2958 SQL0408N A value is not compatible with the data type of its
assignment target. Target name is "XMLVAL". SQLSTATE=42821
LINE MESSAGES FOR ddcsmvs.lst
------ --------------------------------------------------------------------
SQL0091N Binding was ended with "0" errors and "6" warnings.
C:\Program Files\IBM\SQLLIB\bnd>
Example 5-11 shows the use of command db2bfd (bind file display) that you can use to
confirm what defaults were used when binding a particular package.
The options for db2bfd are as follows:
-b = display bind parameters
-s = display statements in the .bnd file
-v = display information about the host variables
Example 5-11 Using command db2bfd
C:\Program Files\IBM\SQLLIB\bnd>db2bfd -b -s -v db2clist.bnd
db2clist.bnd: Header Contents
Header Fields:
Field Value
----- -----
releaseNum 0x800
Endian 0x4c
numHvars 25
maxSect 37
numStmt 39
optInternalCnt 4
optCount 10
Name Value
------------------ -----
Isolation Level Uncommitted Read
Creator "NULLID "
App Name "SYSSTAT "
Timestamp "SYSLVL01:User defined timestamp"
Cnulreqd Yes
Sql Error Continue
Block Block All
Date ISO
Time ISO
Validate Bind
*** All other options are using default settings as specified by the server ***
db2clist.bnd: SQL Statements = 39
<<SQL statements and host variables not shown>>
Once you have connected and bound the packages, you can issue remote SQL interactively
from DB2 CLP as shown in Example 5-12.
Example 5-12 Executing remote SQL interactively from DB2 CLP
C:\Program Files\IBM\SQLLIB\BIN>db2 select count(*) from sysibm.systables
-----------
5904
1 record(s) selected.
You can also add a set of SQL statements to a file and use the db2 -f <inputfilename> facility
to execute them from CLP. You can create a native SQL procedure and call the procedure
using literals from DB2 CLP as shown in Example 5-13. If you need to use host variables in
your CALL, then you have to use CLI.
Example 5-13 Creating and calling a native SQL procedure from DB2 CLP
C:\Program Files\IBM\SQLLIB\BIN>db2 call nishu.write_to_log22(1,'parm2','parm3',
'parm4')
RT: IT WAS OK
"WRITE_TO_LOG22" RETURN_STATUS: 0
5.3.2 Using the DB2Binder utility to bind packages used by the Data
Server Drivers
The DB2Binder utility is shipped with the IBM Data Server Driver for JDBC and SQLJ and is
used to bind the packages used at the DB2 for z/OS server by the driver. It also grants EXECUTE
authority on the packages to PUBLIC. You can also use this utility to rebind DB2 packages
that are not part of the IBM Data Server Driver for JDBC and SQLJ. See Example 5-14.
Example 5-14 Using the DB2Binder utility
C:\DDF\test\javatests>java com.ibm.db2.jcc.DB2Binder -url
jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A -user paolor3 -password yyyy
Binder performing action "add" to "jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A" under
collection "NULLID":
<Other messages from Binder not shown>
The default collection used is NULLID, but if you wanted to bind your packages with
non-default bind options, you can explicitly specify the collection and bind options you want to
use. You can then use the setCurrentPackagePath() property to specify which package you
want to use at execution time. You can also specify bind options as a property for the
DB2Binder class, as shown in Example 5-15, where the defaultQualifierName is set for a native SQL procedure package that will be deployed to a remote site. A further sketch follows Example 5-15.
Example 5-15 Bind options as a property for the DB2Binder class
Properties bndOpts = new Properties();
connection = (com.ibm.db2.jcc.DB2Connection) DB2simpleDatasource.getConnection ();
System.out.println ("Connection successful");
bndOpts.setProperty("defaultQualifierName", "PAOLOR3");
com.ibm.db2.jcc.DB2Binder binder = new com.ibm.db2.jcc.DB2Binder ();
binder.deployPackage ("DB9A", "PAOLOR4", "PAOLOR4", "PSM756", "V1", bndOpts,
connection);
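As a further sketch, you could bind the driver packages into a collection of your own and then point an application at that collection through the currentPackagePath property. The collection name MYCOLL is a placeholder, and the property is passed here as a plain connection property; verify the accepted property names and DB2Binder options against your driver level.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PackagePathSketch {
    public static void main(String[] args) throws Exception {
        // Assumes the driver packages were bound once into collection MYCOLL,
        // for example with:
        //   java com.ibm.db2.jcc.DB2Binder -url jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A
        //        -user paolor3 -password yyyy -collection MYCOLL
        Class.forName("com.ibm.db2.jcc.DB2Driver");

        Properties props = new Properties();
        props.setProperty("user", "paolor3");   // placeholder credentials
        props.setProperty("password", "yyyy");
        // Search MYCOLL first, then fall back to the default NULLID collection.
        props.setProperty("currentPackagePath", "MYCOLL,NULLID");

        Connection con = DriverManager.getConnection(
                "jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A", props);
        System.out.println("Connected; packages resolved through MYCOLL,NULLID");
        con.close();
    }
}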
5.4 Using the non-Java-based IBM Data Server Drivers
As described in 2.3, IBM Data Server Drivers and Clients as requesters on page 40, you
can use one of several data server drivers, clients, or runtime clients to access the DB2 for
z/OS server.
5.4.1 Using the IBM Data Server Driver for ODBC and CLI
In this section we briefly discuss recommended configuration parameters when using the IBM
Data Server Driver for ODBC and CLI.
Example 5-16 shows a C program snippet that creates a connection to the DB2 for z/OS
server and sets the AutoCommit connection attribute to false.
Example 5-16 Connecting to DB2 for z/OS through CLI
rc=SQLAllocHandle(SQL_HANDLE_DBC, henv1, &hdbc1);
if (rc != SQL_SUCCESS) goto dberror;
// location/uid/pwd are passed in as parms
rc=SQLConnect(hdbc1, loc1, SQL_NTS, loc1uid, SQL_NTS, loc1pwd, SQL_NTS);
if (rc != SQL_SUCCESS) goto dberror;
// sample code to toggle autocommit
// autocommit=yes is cli default, so set=off for this connection.
// add explicit commit/rollback if you turn autocommit off.
rc=SQLSetConnectAttr( hdbc1,SQL_ATTR_AUTOCOMMIT,(void*) SQL_AUTOCOMMIT_OFF,
SQL_NTS);
if (rc != SQL_SUCCESS) goto dberror;
Example 5-17 shows the db2dsdriver.cfg file that can be used at the client to enable sysplex
workload balancing with automatic client reroute (ACR), as well as enabling direct XA
transactions (XA transactions come from the X/Open group specification on distributed,
global transactions)
Example 5-17 The db2dsdriver.cfg file
<configuration>
<DSN_Collection>
<dsn alias="DB9C_DIR" name="DB9C" host="wtsc63.itso.ibm.com" port="38320"/>
<!-- Long aliases are supported -->
</dsn>
</DSN_Collection>
<databases>
<database name="DB9C" host="wtsc63.itso.ibm.com" port="38320">
<WLB>
<parameter name="enableWLB" value="true"/>
<parameter name="maxTransports" value="100"/>
</WLB>
<ACR>
<parameter name="enableACR" value="true"/>
</ACR>
</database>
</databases>
<parameters>
<parameter name="enableDirectXA" value="true"/>
</parameters>
</configuration>
Tip: When you use the data server drivers, you can add the remote server details directly
to this configuration file without having to catalog your database.
5.4.2 Using the IBM Data Server Driver Package in a .NET environment
In June 2009, IBM announced the IBM Data Server Driver Package, which includes drivers for
the .NET and open source environments. In this section we focus on the usage of the
Data Server Driver Package in a .NET environment, which extends DB2 data server support
for the ADO.NET interface. The IBM Database Development Add-Ins enable you to develop
.NET applications for IBM data servers using Microsoft Visual Studio. You can also use the
Add-Ins to create database objects, such as indexes and tables, and develop server-side
objects, such as stored procedures and user-defined functions.
.NET applications can be developed in Visual Basic or C#. To connect to the DB2 for z/OS
server, you need to provide the IP address, port, user ID, and password in a
configuration file.
Figure 5-6 shows a sample .NET application developed using Microsoft Visual Studio.
Figure 5-6 .NET application code sample
Imports System
Imports System.Data
Imports System.Web.Services
Imports System.Web.Services.Protocols
Imports System.Web
Imports IBM.Data.DB2
Imports System.Data.Common
Imports System.Configuration
Namespace BART_GETCARPOOLERS1
<WebService(), _
WebServiceBinding()> _
Public Class BART_GETCARPOOLERS1
<WebMethod()> _
Public Overridable Function [Select](ByVal MYCITY_param As String) As DataSet
Dim db2Connection1 As IBM.Data.DB2.DB2Connection = New
IBM.Data.DB2.DB2Connection
Dim db2DataAdapter1 As IBM.Data.DB2.DB2DataAdapter = New
IBM.Data.DB2.DB2DataAdapter
Dim db2SelectCommand1 As IBM.Data.DB2.DB2Command = New IBM.Data.DB2.DB2Command
'db2Connection1.ConnectionString =
'"database=DB9A;userid=rajesh;server=wtsc63.itso.ibm.com:12347;password=rathnam2"
db2Connection1.ConnectionString =
ConfigurationManager.ConnectionStrings("TEST").ConnectionString
db2Connection1.ClientUser = ConfigurationManager.AppSettings("TEST1")
db2DataAdapter1.SelectCommand = db2SelectCommand1
db2DataAdapter1.SelectCommand.CommandType = CommandType.StoredProcedure
db2DataAdapter1.SelectCommand.CommandText = "BART.GETCARPOOLERS1"
db2DataAdapter1.SelectCommand.Parameters.Add("MYCITY",
DB2Type.LongVarChar).Value = MYCITY_param
db2DataAdapter1.SelectCommand.Connection = db2Connection1
Dim ds As System.Data.DataSet = New System.Data.DataSet
Try
db2DataAdapter1.Fill(ds)
Catch ex As DB2Exception
Throw ex
Finally
db2Connection1.Close
End Try
Return ds
End Function
End Class
End Namespace
Figure 5-7 shows the associated configuration file.
Figure 5-7 .NET configuration file
<?xml version="1.0"?>
<configuration>
<appSettings>
<add key="TEST1" value="ABCD"/>
</appSettings>
<connectionStrings>
<add name="TEST"
connectionString="database=DB9A;userid=rajesh;server=wtsc63.itso.ibm.com:12347;password=rathnam2"
providerName="IBM.Data.DB2"/>
</connectionStrings>
<system.web>
<!-- Set compilation debug="true" to insert debugging symbols into the compiled page.
Because this affects performance, set this value to true only during development. -->
<compilation debug="true">
<assemblies>
<add assembly="IBM.Data.DB2, Version=9.0.0.2, Culture=neutral,
PublicKeyToken=7C307B91AA13D208"/>
</assemblies>
</compilation>
<!-- The <authentication> section enables configuration of the security authentication mode
used by ASP.NET to identify an incoming user. -->
<authentication mode="Windows"/>
<!-- The <customErrors> section enables configuration of what to do if/when an unhandled
error occurs during the execution of a request. Specifically, it enables developers to configure
html error pages to be displayed in place of a error stack trace.<customErrors mode="RemoteOnly"
defaultRedirect="GenericErrorPage.htm"><error statusCode="403" redirect="NoAccess.htm" /><error
statusCode="404" redirect="FileNotFound.htm" /></customErrors>-->
</system.web>
</configuration>
For more information, refer to the following Web page:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.apdv.ms
.doc/doc/c0010960.html
5.4.3 db2cli.ini and db2dsdriver.cfg
When you are using the IBM Data Server Drivers, you need to configure the db2dsdriver.cfg
file. When using DB2 Connect, the Data Server Client, or Data Server Runtime Client, you
still use db2cli.ini. When using CLI, you can also specify Statement or Connection attributes
instead of the db2cli.ini keyword. Typically, this is the CLI keyword prefixed with SQL_ATTR.
For example, DeferredPrepare corresponds to SQL_ATTR_DEFERRED_PREPARE.
If the CLI/ODBC configuration keywords set in the db2cli.ini file conflict with keywords in the
SQLDriverConnect() connection string, the SQLDriverConnect() keywords take precedence.
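As a point of comparison with the db2dsdriver.cfg file in Example 5-17, the following is a minimal, hypothetical db2cli.ini stanza; the data source name and values are illustrative only, and DeferredPrepare and CursorHold are the CLI keywords listed in Table 5-2.
[DB9A]
Database=DB9A
Protocol=TCPIP
Hostname=wtsc63.itso.ibm.com
Port=38320
DeferredPrepare=1
CursorHold=0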
Tip: Syntax errors in db2dsdriver.cfg are silently ignored. If you want to make sure your
settings are getting picked up, update your database configuration using diaglevel 4
through CLP (or manually update db2cli.ini if using the thin data server drivers where CLP
is not available) and check db2diag.log for error messages.
Table 5-2 lists the db2cli.ini/db2dsdriver.cfg configuration parameters that are relevant from a
distributed perspective. Notice that the names of certain keywords are different between
db2dsdriver.cfg and db2cli.ini and some do not have a direct equivalent (N/A).
For recommended parameters when using Workload Balancing, refer to Table 6-4 on
page 254 (CLI Driver) and Table 6-5 on page 258 (DB2 Connect Server).
Table 5-2 db2cli.ini and db2dsdriver.cfg configuration parameters
db2dsdriver.cfg parameter | Equivalent CLI keyword | Description | Default/Recommended value (1 is ON, 0 is OFF)
AllowDeferredPrepare | DeferredPrepare | Combines the PREPARE and EXECUTE requests | 1/1
N/A | OptimizeForNRows | Appends the OPTIMIZE FOR N ROWS clause to each SQL statement | Not appended/Use when fetching large amounts of data to enable extra blocks
DisableAutoCommit | AutoCommit | Commits every statement by default | AutoCommit is 1 by default (DisableAutoCommit is 0 by default)/Turn AutoCommit OFF and use explicit COMMITs
DisableCursorHold | CursorHold | Cursors persist past COMMIT | CursorHold is 1 by default (DisableCursorHold is 0 by default)/When possible, set CursorHold to 0 to allow the connection to go inactive
NumRowsOnFetch | BlockForNRows | Specifies the number of rows returned per block | Default is to return as many rows as can fit in a block, which is also recommended
EnableLobBlockingOnFetch | BlockLobs | Specifies if LOBs should be blocked | 0/1
N/A | StreamGetData | Enables progressive streaming for LOBs (Statement or connection property only) | Default varies based on which driver or client is used/1
LOBCacheSize | LOBCacheSize | Specifies the threshold up to which LOBs can be inlined | None/12K
enableDirectXA | N/A | Allows direct XA connection from the data server drivers | 0/1 when you need two-phase commit
QueryTimeoutInterval | QueryTimeoutInterval | Indicates how long the driver should wait between checks to see if the query has completed | 5 seconds/Set to a value that is larger than SQL_ATTR_QUERY_TIMEOUT to prevent timeouts from occurring at the specified interval
Important: The precedence rule is in the following order: application, db2cli.ini,
db2dsdriver.cfg.
5.5 Using the IBM Data Server Driver for JDBC and SQLJ
As described in Chapter 1, Architecture of DB2 distributed systems on page 3, the IBM
Data Server Driver for JDBC and SQLJ provides Type 4 and Type 2 connectivity. We focus on
the Type 4 driver in this section because it can be used to connect to a DB2 for z/OS server
through DRDA. The Type 4 driver is now delivered in the jcc3 and jcc4 streams. The jcc4
stream requires JDK 1.6 to be installed. You also need to customize and run the
DSNTIJMS job, which creates the stored procedures and tables required by the Type 4 driver.
Refer to the following Web pages for information:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.apdv.ja
va.doc/doc/t0024156.html
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.apdv.ja
va.doc/doc/c0052041.html
To know which version of the driver you are using, use the command in Example 5-18.
Example 5-18 Determining the driver version
C:\DDF\test>java com.ibm.db2.jcc.DB2Jcc -version
IBM Data Server Driver for JDBC and SQLJ 4.8.23
5.5.1 Connecting to a DB2 for z/OS server using the Type 4 driver
You can use either the DriverManager or the DataSource interface to obtain a connection to
the database server. Here is an example using the DriverManager.getConnection() interface
where the connection properties are embedded in the application program. Example 5-19
shows the usage of the getConnection() interface to obtain a connection to the DB2 for z/OS
server.
Example 5-19 Using the getConnection()
String url = "jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A
java.util.Properties prop = new java.util.Properties();
prop.put("user", user);
prop.put("password", password);
prop.put("driverType", "4");
conn1 = DriverManager.getConnection(url,prop);
It is possible to create and use a DataSource object in the same application, similar to the
DriverManager interface, but this method does not provide portability. The recommended way to
use a DataSource object is for your system administrator to create and manage it separately,
using WebSphere Application Server or some other tool. Your system administrator can then
modify the data source attributes without requiring changes to your application program.
Example 5-20 shows the use of the DataSource interface to obtain a connection to the DB2 for
z/OS server.
Example 5-20 Connecting to DB2 for z/OS through the DataSource interface
DB2SimpleDataSource dataSource = new com.ibm.db2.jcc.DB2SimpleDataSource();
dataSource.setServerName (servername);
dataSource.setPortNumber (Integer.parseInt(port));
dataSource.setDatabaseName (databasename);
dataSource.setUser (user);
dataSource.setPassword (password);
dataSource.setDriverType (4);
conn1 = dataSource.getConnection();
For details on connections using the DriverManager or DataSource interfaces, refer to the
following Web page:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.apdv.ja
va.doc/doc/cjvjdcon.html
To learn more about using WebSphere to deploy DataSource objects, refer to the following
Web page:
http://www.ibm.com/software/webservers/appserv/
In general, the default properties enabled for the Type 4 driver are designed to maximize
distributed performance. If you want to change either the configuration properties that have
driver-wide scope, or the Connection/Datasource properties that are application-specific,
refer to the following Web page:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.apdv.ja
va.doc/doc/rjvdsprp.html
5.5.2 Coding static applications using SQLJ
The main difference between using JDBC and using SQLJ to access DB2 is that JDBC
always uses dynamic SQL. SQLJ can use dynamic SQL, but through the customization
process can be set up to use static SQL instead. This normally means better run-time
performance. The customization process (using db2sqljcustomize) can also be used to check
the syntax of the SQL statements used in the application, which reduces the possibility of
runtime errors.
Refer to the following Web page for a complete list of configuration properties:
http://publib.boulder.ibm.com/infocenter/db2luw/v9/topic/com.ibm.db2.udb.apdv.java
.doc/doc/cjvcfgpr.htm
Tip: Refer to Table 6-4 on page 254 for recommended settings to enable Sysplex
Workload Balancing.
Tip: If you want to make sure you are getting static execution for your packages, set the
db2.jcc.sqljUncustomizedWarningorException property to 1 (warning) or 2 (exception).
The default is 0, which means the driver switches to dynamic SQL if any errors were
encountered during the customization process.
Figure 5-8 shows a simple application that accesses the DB2 for z/OS server using SQLJ. It
was developed using IBM Data Studio Developer.
Figure 5-8 SQLJ application sample
Figure 5-9 shows the db2sqljcustomize step that is necessary for static execution of SQLJ
packages.
Figure 5-9 Customizing and binding an SQLJ application
5.6 Developing static applications using pureQuery
pureQuery is a high-performance Java data access platform focused on simplifying the tasks
of developing and managing applications that access data. It consists of tools, APIs, and a
runtime engine. pureQuery allows you to capture the SQL from existing (dynamic) JDBC
applications and to switch from dynamic to static SQL execution, which allows for
improved performance, predictability, and security. The switch is also transparent, because
there is no need to modify, or even have access to, the application source code. As shown in
Figure 5-10 on page 212, when using pureQuery you still need to use the Type 4 driver as the
DRDA AR to connect to the DB2 for z/OS server.
For security considerations with pureQuery, see Using pureQuery technology to benefit from
static SQL security on page 176.
For more information, refer to the following Web page:
http://publib.boulder.ibm.com/infocenter/idm/v2r1/index.jsp
Figure 5-10 pureQuery runtime
5.6.1 When should you use pureQuery?
You might find the functions of pureQuery beneficial for developing or modifying applications.
Developing new Java applications
You will benefit from static SQL performance and security, developer tooling, and enhanced
diagnostic capabilities.
Modifying existing JDBC applications and frameworks
If your dynamic SQL cache hit ratio is below 80%, consider using client optimization to test
the advantages of static SQL.
Modifying existing SQLJ applications
If you are using SQLJ you are already getting the performance benefits of static SQL. But you
could still benefit from new API features provided by pureQuery such as heterogeneous batch
updates that allow you to combine several SQL statements into a single network call. Also it is
simpler to bind and deploy with pureQuery than with SQLJ.
For more information, refer to the following Web pages:
Data Studio Information Center (part of Integrated Data Management)
http://publib.boulder.ibm.com/infocenter/idm/v2r1/index.jsp
No Excuses Database Programming for Java
http://www.ibm.com/developerworks/data/library/dmmag/DBMag_2008_Issue2/NoExcuse
sDB/index.html
5.6.2 pureQuery programming styles
When you use pureQuery, you can choose between the inline-method style and the
annotated-method programming style. The inline style is similar to the JDBC programming
style except for being simpler and faster to code. It was designed to reduce the repeated
programming tasks familiar to the JDBC programmer, as well as to provide an API that tools
could easily use to tie in data access development with Java development. The coding is
referred to as "inline" because of the way SQL statements are defined in the application.
The annotated method coding style has the additional goal of maximizing configurability and
security for the resulting pureQuery application. It was developed in response to customer
demand for a named query programming interface for data access that was similar to Java
Persistence API (JPA), only simpler, quicker to code, and capable of supporting static
execution when required.
It is outside the scope of this book to go into the details of these two styles. Refer to the
articles at the following Web pages for more information about pureQuery programming
techniques:
http://www.ibm.com/developerworks/data/library/techarticle/dm-0804lamb/
http://www.ibm.com/developerworks/data/library/techarticle/dm-0804vivek/
http://www.ibm.com/developerworks/data/library/techarticle/dm-0808rodrigues/
5.6.3 pureQuery client optimization
Client optimization, a key enhancement in the 1.2 release, makes it possible to take
advantage of static SQL for existing JDBC applications, without modifying, or even having
access to the source code. This capability also works on Java applications that use a
framework for persistence, such as Hibernate. With client optimization, you execute the
application while the pureQuery runtime is in capture mode, typically as part of a use case
test validation that exercises various program execution paths. The DBA can then bind the
captured SQL into packages at the database server. Subsequent executions of the
application allow the pureQuery runtime to execute the captured SQL statically. In this way,
client optimization gives applications the performance and security benefits of static SQL,
without the need to modify or recompile the application.
To convert existing dynamic SQL applications to static, you need to follow this 4-step process:
1. Capture. Traps SQL statements from existing dynamic applications and records them into
a capture file as illustrated by Example 5-21.
Example 5-21 Capturing dynamic SQL statements
prop.setProperty ("pdqProperties",
"captureMode ON, executionMode DYNAMIC ,allowDynamicSQL true, pureQueryXml
abc_pdq.xml");
2. Configure. Specifies the collection, package name, and version of the target package and
records them in the capture file, as shown in Example 5-22.
Example 5-22 Configuring target packages
C:\DDF\test\javatests>java com.ibm.pdq.tools.Configure -pureQueryXml c:\ddf\test
\javatests\abc_pdq.xml -rootPkgName CHECK -collection NULLID
3. Bind. StaticBinder reads the capture file configured in the previous step and binds
packages to the target DB2.
Example 5-23 shows some of the enhanced error reporting features available in the
StaticBinder utility in the 2.1 release, such as -showDetails and -grant.
Example 5-23 Using the StaticBinder utility
C:\DDF\test>java com.ibm.pdq.tools.StaticBinder -url jdbc:db2://wtsc63.itso.ibm
.com:12347/DB9A;emulateParameterMetaDataForZCalls=1; -grant "grantees(GAURAV,
MANOJ, JAIJEET)" -showdetails true -user paolor3 -password yyyy -bindOptions
"VALIDATE RUN" -pureQueryXml C:/ddf/test/abc_pdq.xml
The StaticBinder utility is beginning to bind the pureQueryXml file 'C:/ddf/test
/abc_pdq.xml'.
Following Exception(s)/Warning(s) were reported for package : PKGA1 at BIND STEP
Error Code : 204, SQLSTATE : 01532
Message : PAOLOR3.TABLE33 IS AN UNDEFINED NAME. SQLCODE=204, SQLSTATE=01532, DRI
VER=4.8.23
SQL Statement: drop table table33
SQL Locator : 1
The StaticBinder utility successfully bound the package 'PKGA1' for the isolatio
n level UR.
Executed successfully : GRANT EXECUTE ON PACKAGE "COl51"."PKGA1" TO GAURAV, MANO
J, JAIJEET
<Similar messages for isolation levels CS, RR and RS not shown>
Displaying the -showDetails results:
Number of packages input :'2'
Number of statements input : '5'
Number of DDL statements : '2'
Number of packages for which isBindable is false : '0'
Number of statements for which isBindable is false : '0'
Number of root packages bound : '2'
Number of statements bound: '5' (for each isolation level specified)
The number of statements in each package are as follows:
Package : 'PKGA', statements= '2'
Package : 'PKGB', statements= '3'
4. Run. Run your applications statically with the capture file when execution mode is static.
See Example 5-24.
Example 5-24 Running the static application
prop.setProperty ("pdqProperties", "captureMode OFF, executionMode STATIC,
pureQueryXml abc_pdq.xml");
Figure 5-11 illustrates this process.
Figure 5-11 The internal workings of pureQuery
Table 5-3 highlights the differences between static and dynamic SQL applications.
Table 5-3 Comparing pureQuery dynamic and static SQL
Feature | Dynamic SQL (pureQuery, JDBC) | Static SQL (pureQuery, SQLJ)
Performance | Can approach static SQL performance with help from the dynamic SQL cache. Cache misses are costly. | All SQL parsing and catalog access done at BIND time. Fully optimized during execution.
Access path reliability | Unpredictable: any prepare can get a new access path as statistics or host variables change. | Guaranteed: locked in at BIND time. All SQL available ahead of time for analysis by EXPLAIN.
Authorization | Privileges handled at object level. All users or groups must have direct table privileges: security exposure and administrative burden. | Privileges are package based. Only the administrator needs table access. Users/groups have execute authority. Prevents non-authorized SQL execution.
Monitoring, problem determination | Database view is of the JDBC or CLI package: no easy distinction of where any SQL statement came from. | Package view of applications makes it simple to track back to the SQL statement location in the application.
Capacity planning, forecasting | Difficult to summarize performance data at program level. | Package-level accounting gives a program view of workload to aid accurate forecasting.
Tracking dependent objects | No record of which objects are referenced by a compiled SQL statement. | Object dependencies registered in the database catalog.
5.7 Remote application development
In this section, we discuss some of the distributed performance topics from an application
programming perspective. We have primarily used the Type 4 driver to illustrate examples but
have mentioned the CLI driver as well as applications connecting from a DB2 for z/OS
requester where there are significant differences.
5.7.1 Limited block fetch
With limited block fetch, the DB2 for z/OS server attempts to fit as many rows as possible in a
query block. DB2 transmits the block of rows over the network. Data can also be pre-fetched
when the cursor is opened without needing to wait for an explicit fetch request from the
requester. As a result, using limited block fetch can significantly decrease the number of
network transmissions.
Example 5-25 shows a Java application code snippet where block fetch is used to retrieve
data from SYSIBM.SYSTABLES.
Example 5-25 Block fetching in a Java program
void test () throws SQLException
{
Statement stmt1 = conn1.createStatement();
SQLWarning warning = stmt1.getWarnings();
if ( null != warning )
System.out.println("Warning: "+warning.toString());
ResultSet rs1 = stmt1.executeQuery("SELECT DBID, OBID FROM SYSIBM.SYSTABLES");
int row = 0;
System.out.println("Running next()........");
while (rs1.next()) {
row = rs1.getRow();
rs1.getInt(1);
rs1.getInt(2);
}
rs1.close();
stmt1.close();
}
Conditions for block fetch to occur
In order for the DB2 for z/OS server to use block fetch, you need to use a read-only,
unambiguous cursor.
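For example, adding the FOR READ ONLY (or FOR FETCH ONLY) clause makes the cursor unambiguously read-only. The following Java sketch is only an illustration of where the clause goes; the query itself is arbitrary.
void readOnlyQuery(java.sql.Connection conn) throws java.sql.SQLException {
    java.sql.Statement stmt = conn.createStatement();
    // FOR READ ONLY makes the cursor unambiguously read-only,
    // so the DB2 for z/OS server can use limited block fetch
    java.sql.ResultSet rs = stmt.executeQuery(
        "SELECT NAME, CREATOR FROM SYSIBM.SYSTABLES FOR READ ONLY");
    while (rs.next()) {
        rs.getString(1);
        rs.getString(2);
    }
    rs.close();
    stmt.close();
}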
Attention: DRDA supports the concept of exact blocking versus flexible blocking. All
DRDA requesters at SQLAM 7 and higher can handle flexible blocks, which means a block
can be expanded to include a complete row or SQL rowset (when multi-row fetch is used)
and partial rows/rowsets are not returned to the requester. We limit our discussion to
flexible blocking. Although the default block size is 32 K, due to this potential expansion of
blocks, the DB2 9 for z/OS server may send a query block up to 10 MB in size when
rowsets are involved.
Attention: Prior to DB2 for z/OS V8, you will see one IFCID 59 record cut for every row
fetched. Due to performance enhancements in V8, you will now only see one IFCID 59
record per block fetched.
For detailed information about when DB2 for z/OS server decides to use block fetch, refer to
the following Web page:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/topic/com.ibm.db29.doc.perf
/db2z_ensureblockfetch.htm.
Extra blocks
DRDA supports the concept of extra blocks, where multiple query blocks can be returned to
the requester in response to a single FETCH request. The number of extra blocks returned is
computed as the minimum of what the requester asked for and what the DB2 for z/OS server
will allow to be returned (EXTRASRV). The default value and maximum value for EXTRASRV
is 100. In the case of a z/OS requester, the maximum number of extra blocks requested is
controlled by EXTRAREQ. The default and maximum value for EXTRAREQ is 100. The
actual number of blocks returned depends on the data and on either the OPTIMIZE FOR m
ROWS (m) value or the QRYROWSET value.
OPTIMIZE for n ROWS and FETCH FIRST n ROWS
To trigger extra blocks to be returned, you need to use either of the above clauses. Both
clauses may also be used to influence access path selection.
The FETCH FIRST n ROWS clause can be used to limit the number of rows returned to the
application. Once n ROWS are fetched and end of data is reached, the cursor qualifies for fast
implicit close.
The OPTIMIZE for m ROWS clause is used to control both access path selection and DRDA
blocking.
OPTIMIZE for 1 ROW has special meaning to the optimizer as it indicates that sort should be
avoided if possible when choosing an access path. But to avoid sending too many blocks to a
requester, when m is less than 4, DB2 returns 16 rows per block but uses the original m for
access path selection.
The size of the row data and the size of the query block will still determine how many rows will
fit in a block. The OPTIMIZE FOR m ROWS and FETCH FIRST n ROWS values can
influence the maximum number of rows that the server might return for a single CNTQRY.
These rows could be spread out over several query blocks. If you used a small value for m,
that can potentially allow for a smaller number of rows to be fetched, hence making for a
faster turnaround if the application only wants a small number of rows returned at a time. A
large m value can potentially allow for extra query blocks to flow, decreasing the number of
network transmissions.
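As a minimal illustration (the query and the value of 1000 are arbitrary), the clause is simply appended to the SELECT statement that the application issues:
void fetchWithExtraBlocks(java.sql.Connection conn) throws java.sql.SQLException {
    java.sql.Statement stmt = conn.createStatement();
    // A large m value allows the server to return extra query blocks per network request
    java.sql.ResultSet rs = stmt.executeQuery(
        "SELECT DBID, OBID FROM SYSIBM.SYSTABLES OPTIMIZE FOR 1000 ROWS");
    while (rs.next()) {
        rs.getInt(1);
        rs.getInt(2);
    }
    rs.close();
    stmt.close();
}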
5.7.2 Multi-row FETCH
Multi-row FETCH was introduced in DB2 for z/OS V8 and the performance advantages of
using multi-row FETCH over single row FETCH have been clearly established. DSNTEP4
uses multi-row FETCH by default.
Important: FETCH FIRST n ROWS and OPTIMIZE for m ROWS are not used for DRDA
blocking when the cursor is a rowset or scrollable cursor. For scrollable cursors and rowset
cursors DB2 uses the DRDA QRYROWSET parameter to determine the number of rows
fetched in a block.
Example 5-26 shows a PL/I application code snippet that retrieves a rowset of 64 rows.
Example 5-26 Retrieving a rowset
EXEC SQL DECLARE SPC2 INSENSITIVE SCROLL CURSOR
WITH ROWSET POSITIONING
WITH RETURN
FOR
SELECT NAME, CREATOR FROM DB9A.SYSIBM.SYSTABLES;
EXEC SQL FETCH NEXT ROWSET FROM SPC2 FOR 64 ROWS
INTO :OUTVAR1, :OUTVAR2;
The Type 4 driver (version 3.57 and higher) automatically uses multi-row fetch for scrollable
cursors. This means you can get the performance advantages of multi-row fetch without
making any application changes. The useRowsetCursor property is set to true by default and
the DB2BaseDataSource.enableRowsetSupport property, if set, can override the value of the
useRowsetCursor property.
Example 5-27 shows a Java application where multi-row fetch is used implicitly.
Example 5-27 Implicit multi-row fetching in a Java program
void test () throws SQLException
{
Statement stmt1 = conn1.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
ResultSet.CONCUR_READ_ONLY);
stmt1.setFetchSize (64);
ResultSet rs1 = stmt1.executeQuery("SELECT DBID, OBID FROM "
+ "SYSIBM.SYSTABLES");
int row = 0;
while (rs1.next()) {
row = rs1.getRow();
rs1.getInt(1);
rs1.getInt(2);
}
rs1.close();
stmt1.close();
}
A single FETCH statement from a rowset cursor might encounter zero, one, or more
conditions. If the current cursor position is not valid for the fetch orientation, a warning occurs
and the statement terminates. If a warning or non-terminating error (such as a bind out error)
occurs during the fetch of a row, processing continues. In this case, a summary message is
returned for the FETCH statement, and additional information about each fetched row is
available with the GET DIAGNOSTICS statement. The SQLERRD3 that is part of the SQLCA
only contains the number of rows returned when multi-row fetch is used.
Tip: setFetchSize() can be used to specify the number of rows per block for scrollable
rowset cursors. If a forward-only cursor is used, setFetchSize() is ignored by the Type 4
driver.
When using the Type 4 driver, set the property shown in Example 5-28 to obtain extended
diagnostic information from the DB2 for z/OS server.
Example 5-28 Setting a property to enable extended diagnostic
prop.put("extendedDiagnosticLevel",241);
5.7.3 Understanding the differences between limited block FETCH and
multi-row FETCH
Table 5-4 summarizes the differences between limited block FETCH and multi-row FETCH
from a DB2 9 for z/OS server perspective.
Table 5-4 Comparing limited block FETCH and multi-row FETCH
Limited block FETCH | Multi-row FETCH
Supported only for distributed applications through DRDA. | Supported for local as well as distributed applications through DRDA.
To limit the number of rows returned to the application, use FETCH FIRST n ROWS, otherwise a block is filled to capacity or till end of result table is reached. | Number of rows fetched is limited using explicit FOR n ROWS syntax or through the setFetchSize() JDBC API.
Non-atomic | May be atomic or non-atomic.
It is possible to send extra blocks when using FETCH FIRST n ROWS or OPTIMIZE FOR n ROWS. | The QRYROWSET parameter is used for rowset cursors.
Prefetching is possible at OPEN time without waiting for an explicit FETCH request. | Prefetching is never done because the rowset size is only specified at FETCH time by the requester.
5.7.4 Fast implicit CLOSE and COMMIT of cursors
The DB2 for z/OS server attempts to close cursors implicitly whenever possible to avoid
additional network flows that are required when the requester has to initiate a close.
DB2 uses fast implicit close when the following conditions are true:
The query retrieves no LOBs.
The query retrieves no XML data.
The cursor is not a scrollable cursor.
The cursor is declared with the FETCH FIRST n ROWS clause and n rows have been fetched.
One of the following conditions is true:
- The cursor is declared WITH HOLD, and the package or plan that contains the cursor is bound with the KEEPDYNAMIC(YES) option.
- The cursor is declared WITH HOLD and the DRDA client passes the QRYCLSIMP parameter set to SERVER MUST CLOSE, SERVER MUST CLOSE AND MAY COMMIT, or SERVER DECIDES.
- The cursor is not defined WITH HOLD.
Tip: You will not see an IFCID 66 trace record cut when the cursor is implicitly closed by
the server.
Implicit COMMIT enhancement when using the Type 4 driver
Although we recommend that AutoCommit be turned off for distributed applications, there are
certain cases where it is necessary to turn it on. The additional commit request and reply
flows used to honor AutoCommit result in degraded performance as compared to when
AutoCommit is not enabled. With the JCC Type 4 driver level 3.57.82/4.7.85 shipped with DB2
for LUW 9.7 and the DB2 for z/OS APAR PK68746, the server may commit after implicitly
closing the cursor upon SQLSTATE 02000 (SQLCODE +100). The JCC fix is included in the
following releases:
In V9.7 (Cobra) GA
As a special build on top of V9.1 fixpack 4 (3.6.98)
Planned to be in the official release of V9.1 fixpack 8
The JCC property queryCloseImplicit can be used to enable or disable this feature.
The possible values for queryCloseImplicit follow:
0: NOT_SET to JCC. The jcc driver will choose a default depending on the driver and
server level (DB2 for z/OS requester uses 0 as the default, which means server decides)
1: Implicit close only
2: Disable implicit close
3: Implicit close + commit. The 9.7 jcc driver will choose this value as the default value only
when targeting z/OS server version 1.10 or DB2 for z/OS server version 9 NFM with APAR
PK68746 applied.
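As a minimal sketch, and assuming the property can be passed like any other connection property, an application could request implicit close plus commit (value 3 from the list above) as follows; the URL and credentials are reused from the earlier examples.
java.util.Properties prop = new java.util.Properties();
prop.put("user", user);
prop.put("password", password);
// 3 = implicit close + commit (see the value list above)
prop.put("queryCloseImplicit", "3");
java.sql.Connection con = java.sql.DriverManager.getConnection(
    "jdbc:db2://wtsc63.itso.ibm.com:12347/DB9A", prop);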
5.7.5 Multi-row INSERT
Similar to multi-row fetch, multi-row INSERT is supported for both local applications and
distributed applications over DRDA. Example 5-29 shows a sample PL/I application that
performs multi-row insert using a dynamic cursor.
Example 5-29 Multi-row INSERT in a PL/I program
DCL MYSTR1 CHAR(900) VARYING;
DCL MYSTR2 CHAR(100) VARYING;
DCL MYSTR3 CHAR(900) VARYING;
DCL HCHAR1(3) CHAR(3);
DCL HCHAR2(3) CHAR(1);
DCL HCHAR3(3) CHAR(10);
DCL INT1 BIN FIXED(15);
HCHAR1(1) = 'ABC';
HCHAR1(2) = 'ABC';
HCHAR1(3) = 'ABC';
HCHAR2(1) = 'A';
HCHAR2(2) = 'A';
HCHAR2(3) = 'A';
HCHAR3(1) = 'ABCDEFGHIJ';
HCHAR3(2) = 'abcdefghij';
HCHAR3(3) = 'abcdefghij';
INT1 = 3;
MYSTR2 = 'FOR MULTIPLE ROWS';
MYSTR1 = 'INSERT INTO MYTABLE VALUES(?,?,?)';
EXEC SQL PREPARE STMT1 ATTRIBUTES :MYSTR2 FROM :MYSTR1;
EXEC SQL EXECUTE STMT1 USING :HCHAR1, :HCHAR2, :HCHAR3 FOR
:INT1 ROWS;
When you execute multiple INSERT statements in a batch, and the data source supports
multi-row INSERT, the IBM Data Server Driver for JDBC and SQLJ uses multi-row INSERT to
insert the rows. Multi-row INSERT can provide better performance than individual INSERT
statements.
Example 5-30 shows a Java application program using addBatch().
Example 5-30 Using addBatch() in a Java program
int[] testValuesInt = new int[] {1, 2, 3, 4};
String[] testValuesString = new String[] {"row1", "row2", "row3", "row4"};
PreparedStatement pStmt1 = con.prepareStatement ("INSERT INTO " +
tableName1 + " VALUES "
+ " (?,?) ");
for (int i = 0; i < 4; i++) {
pStmt1.setInt (1, testValuesInt[i]);
pStmt1.setString (2, testValuesString[i]);
pStmt1.addBatch ();
}
pStmt1.executeBatch ();
5.7.6 Multi-row MERGE
The SQL MERGE statement introduced in DB2 9 for z/OS allows rows to be inserted into a
table if they did not exist and updated when they do exist. It is also possible to specify the
FOR N ROWS syntax to merge multiple rows.
For details on the MERGE statement, see the DB2 Version 9.1 for z/OS SQL Reference,
SC18-9854.
Example 5-31 is an example of a MERGE statement in a Java application.
Example 5-31 Multi-row MERGE in a Java program
String mergeStmt = "MERGE INTO CMF001.T1 USING +
(VALUES (?,?,?,?)) AS TMP1 (C1, C2, C3, C4) ON TMP1.C1 = AGPK " +
"WHEN NOT MATCHED THEN INSERT (C1, C2, C3, C4) +
VALUES (TMP1.C1, TMP1.C2, TMP1.C3, TMP1.C4)" );// +
java.sql.Statement s1 = c.createStatement ();
s1.executeUpdate (mergeStmt);
5.7.7 Heterogeneous batch updates
With pureQuery, you can batch INSERT, UPDATE, and DELETE statements that refer to
different tables. These heterogeneous batch updates allow all associated tables to be
updated in one network round trip to the server. With this means of heterogeneous batch
updates, you call a method to indicate to pureQuery that you are starting a batch update.
Batch heterogeneous updates with parameters are described at the following Web page:
http://publib.boulder.ibm.com/infocenter/idm/v2r1/topic/com.ibm.datatools.javatool
.runtime.doc/topics/cpdqrunhetbthupdmlttblwth.html
5.7.8 Progressive streaming
Progressive streaming, also referred to as Dynamic Data Format, was introduced in the DB2
9 for z/OS server to optimize the retrieval of small and large LOB and XML data. It can also
improve resource utilization on the DB2 for z/OS server in terms of freeing storage associated
with LOB or XML data in a cursor-based scope (as opposed to a transaction-based scope). It
is no longer necessary to use LOB locators. The server dynamically determines the
mechanism to be used to return LOBs. Small LOBs (generally less than 12 K in size) may be
inlined in the DRDA QRYDTA object. Larger LOBs (typically greater than 32 K in size) can be
retrieved in chunks through progressive references (DRDA GETNXTCHK).
When using the JCC Type 4 driver, the following parameter values affect the use of
progressive streaming:
DB2BaseDataSource.progressiveStreaming = NOT_SET (default value)
DB2BaseDataSource.progressiveStreaming = YES
DB2BaseDataSource.progressiveStreaming = NO
A value of YES (1) indicates that progressive streaming should be used if the data source
supports it. A value of NO (2) indicates that progressive streaming should not be used. The
Type 4 driver automatically uses a value of YES (1) when retrieving LOB or XML data that is
not related to a stored procedure result set cursor.
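As a minimal sketch, and assuming the standard setter naming for the DB2BaseDataSource.progressiveStreaming property mentioned above, an application could force the NO (2) setting on a data source as follows; the connection values are reused from the earlier examples.
com.ibm.db2.jcc.DB2SimpleDataSource ds = new com.ibm.db2.jcc.DB2SimpleDataSource();
ds.setServerName("wtsc63.itso.ibm.com");
ds.setPortNumber(12347);
ds.setDatabaseName("DB9A");
ds.setDriverType(4);
// 2 = NO: do not use progressive streaming for connections from this data source
ds.setProgressiveStreaming(2);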
No application changes are necessary to benefit from this performance enhancement and
progressive references are handled under the covers by the drivers. Some customers were
unable to disable the progressiveStreaming property and needed a server-side switch to
disable this feature. APAR PK46079 adds the ability to disable Progressive Streaming if the
following elements are present:
The JCC property DB2BaseDataSource.progressiveStreaming = NOT_SET
DB2 for z/OS DSNZPARM PRGSTRIN=DISABLE
Refer to the IBM Data Server Driver for JDBC and SQLJ APAR IY99846 and DB2 for z/OS
APAR PK46079 for details.
For a sample program that reads an input file and uses progressive references to retrieve
chunks of XML data from the server, refer to Appendix C.3, Progressive streaming on
page 439.
Table 5-5 on page 222 lists the Type 4 driver properties that are relevant when using dynamic
data format.
Table 5-5 Type 4 driver properties
Driver property | Description | Default
progressiveStreaming | Enables progressive streaming of LOB and XML data | Enabled by default for non-stored procedure result set cursors
streamBufferSize | Threshold that limits the size of LOB data that can be inlined | Generally 12 K
fullyMaterializeLOBData | Avoids the use of LOB locators | 0 for servers that support dynamic data format
Restriction: Dynamic Data Format is not supported when DB2 9 for z/OS is acting as a
DRDA Application Requester.
For more information, refer to LOBs with DB2 for z/OS: Stronger and Faster, SG24-7270.
5.7.9 SQL Interrupts
Support for SQL interrupts was introduced in the DB2 for z/OS V8 server as a mechanism
that allows long-running SQL statements to be interrupted instead of cancelling the server
thread and causing a rollback. Because SQL interrupts do not interrupt threads that are held on
locks, their applicability is not general; you may still need to cancel the thread when the
interrupt is issued.
You need to establish an additional remote connection from your application to the DB2 for
z/OS server to issue the CLI/ODBC function SQLCancel() or to invoke the JDBC cancel()
method. This is shown in Example 5-32.
When you cancel an SQL statement from a client application, you do not eliminate the original
connection to the remote server. The original connection remains active to process additional
SQL requests. Any cursor that is associated with the canceled statement is closed, and the
DB2 server returns an SQLCODE of -952 to the client application when you cancel a
statement by using this method. From a client application, you can cancel only dynamic SQL
statements, excluding transaction-level statements (CONNECT, COMMIT, and ROLLBACK).
Example 5-32 Issuing SQL interrupts
public void run() {
try {
int i = 0;
try {
while (!parent.stmtStarted) {
i++;
Thread.sleep(20000); //Wait 20 seconds
if (i > 2) {
break;
}
}
} catch (Exception e) {
;
}
System.out.println("CancelThread: Issuing cancel");
stmt.cancel();
System.out.println("CancelThread: Issued without any exception");
} catch (SQLException e) {
System.out.println("CancelThread Exception: " + getRealMessage(e));
e.printStackTrace();
}
System.out.println("CancelThread Exit");
}
Some customers may experience application failures when migrating from a DB2 for z/OS V7
server to a DB2 for z/OS V8 server if their applications have not been coded to tolerate the new
error symptoms that can occur as a result of the SQL interrupt support. SQLINTRP was
introduced by APAR PK41661 as a temporary measure to help customers with their migration
process while they make the necessary application changes. The valid settings are ENABLE
(the default) or DISABLE.
Tip: You can use the setQueryTimeout() method to limit the number of seconds that SQL
operations that use the given execution context object can execute. If an SQL operation
exceeds the limit, an SQLException is thrown.
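Illustrating the tip above, a minimal Java sketch (the query is an arbitrary long-running example) might look as follows:
java.sql.Statement stmt = conn.createStatement();
// Give up after 30 seconds; the driver raises an SQLException if the limit is exceeded
stmt.setQueryTimeout(30);
java.sql.ResultSet rs = stmt.executeQuery(
    "SELECT COUNT(*) FROM SYSIBM.SYSTABLES A, SYSIBM.SYSTABLES B");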
5.7.10 Remote external stored procedures and native SQL procedures
Table 5-6 can be used to compare and contrast external and native SQL procedures.
Table 5-6 Native versus external procedures
Remote external stored procedures | Remote native SQL procedures
Supported since DB2 Version 5 | Introduced in DB2 9 for z/OS
Need to be parsed and translated to C | No need for generated C code and compilation.
Multiple versions of the procedure not supported. | Extensive support for versioning.
Managed by WLM | Broken down into runtime structures like any SQL statement, not managed by WLM
SQL processing is run under a TCB, hence not zIIP-eligible. | Run under a DBM1 enclave SRB, thus becoming zIIP-eligible when called from a DRDA client.
Performs well when your stored procedure has a lot of application logic and uses math functions/string manipulation. | Performs well when your stored procedure is database-intensive and contains a lot of SQL but minimal logic.
Example 5-33 shows the creation and calling of a native SQL procedure using the Type 4
driver. Refer to Appendix C.1.1, Using the Type 4 driver to call a native SQL procedure
(BSQLAlone.Java) on page 420 for the complete program.
Example 5-33 Creating and calling a native SQL procedure using the Type 4 driver
// Creating the native SQL procedure
stmt.executeUpdate("CREATE PROCEDURE BSQL_ALONE ( IN VAR01 BINARY(5),"
+ " IN VAR02 VARBINARY(5),"
+ " INOUT VAR03 BINARY(5),"
+ " INOUT VAR04 VARBINARY(5),"
+ " OUT VAR05 BINARY(5),"
+ " OUT VAR06 VARBINARY(5) )"
+ "VERSION VERSION1 "
+ "ISOLATION LEVEL CS "
+ "RESULT SETS 1 "
+ "LANGUAGE SQL "
+ " P1: BEGIN "
+ " DECLARE cursor1 CURSOR FOR "
+ "SELECT COLUMN4, COLUMN5 "
+ " FROM CDS_1 "
+ " WHERE COLUMN5 LIKE VAR03 AND COLUMN4 NOT LIKE VAR04; "
+ " SET VAR05 = VAR03; "
+ " SET VAR06 = VAR04; "
+ " OPEN cursor1; "
+ " END P1");
//<Several lines removed>
// Calling the native SQL procedure
cstmt = con
.prepareCall("{CALL PAOLOR3.BSQL_ALONE(?,?,?,?,?,?)}");
byte[] inputByteArray1 = { 1,1,1};
cstmt.setBytes(1, inputByteArray1);
byte[] inputByteArray2 = { 2,2,2};
cstmt.setBytes(2, inputByteArray2);
byte[] inputByteArray3 = { 3,3,3};
cstmt.setBytes(3, inputByteArray3);
byte[] inputByteArray4 = { 4,4,4};
cstmt.setBytes(4, inputByteArray4);
cstmt.registerOutParameter(3, Types.BINARY);// IO
cstmt.registerOutParameter(4, Types.VARBINARY);
cstmt.registerOutParameter(5, Types.BINARY);// out as null
cstmt.registerOutParameter(6, Types.VARBINARY); // out as null
cstmt.execute();
Refer to the following resources for information:
Using the IBM Data Studio Developer to create, test and deploy native SQL procedure
http://publib.boulder.ibm.com/infocenter/dstudio/v1r1m0/index.jsp?topic=/com.ib
m.datatools.dwb.tutorial.doc/topics/dwb_abstract.html
Debugging stored procedures on DB2 for z/OS using Data Studio (Part 1)
http://www.ibm.com/developerworks/data/library/techarticle/dm-0811zhang/?S_TACT
=105AGX11&S_CMP=ART
DB2 for z/OS Stored Procedures: Through the CALL and Beyond, SG24-7083
5.8 XA transactions
When using the DB2 for z/OS requester to connect to a DB2 for z/OS server, two-phase
commit (2PC) is used by default. However, when the various data server drivers are used as
requesters, the default commit protocol is one-phase commit (1PC) and you need to use XA
transactions to use 2PC.
A global (XA) transaction is controlled and coordinated by an external transaction manager
(external coordinator) to a resource manager. The transaction normally requires coordination
across multiple resource managers that may reside on different platforms. To access an
enterprise information system, the external coordinator sends an XID, which is defined by the
X/Open XA standard, to a resource adapter. In addition to the length and FormatID fields, an
XID has two other parts: the global transaction identifier (GTRID) and the branch qualifier
(BQUAL).
5.8.1 Using the Type 4 driver to enable direct XA transactions
You would first need to create and register a data source and obtain an XA connection similar
to what is shown in Example 5-34. In most scenarios, you will be using an Application Server
such as WebSphere Application Server to develop your applications, so you would not have a
need to use the explicit XA APIs as shown in Example 5-34.
If you are not using WebSphere, you need to download jndi.jar, providerutil.jar, and
fscontext.jar from java.sun.com and add them to your CLASSPATH. You first need to create
an XA datasource and register it with JNDI (Java Naming and Directory Interface). JNDI is a
Java API that provides a standard way to access objects in a registry. The most basic
implementation uses a file-based system lookup. For the complete programs to create and
register the XA datasource and enable two-phase commit, see Appendix C.2, XA transaction
samples on page 423.
Example 5-34 Enabling XA transaction through explicit XA API
xaDS = (XADataSource)context.lookup (xau.getDataSourceName(serverType,
driverType,1));
xaconn = xau.getXAConnection(xaDS);
System.out.println ("XA Connection Obtained Successfully.....");
conn = xaconn.getConnection();
System.out.println ("Underlying Physical Connection conn: " +
conn.getClass().getName());
//cleanup(conn);
// Get the XA Resources
xares = xaconn.getXAResource();
Xid xid = xau.createRandomXid();
Statement stmt1 = conn.createStatement();
xares.start (xid, XAResource.TMNOFLAGS);
xau.addToXidList(xid);
int count1 = stmt1.executeUpdate ("INSERT INTO H_POSUPDATE "
+ "VALUES (330,340)");
xares.end(xid, XAResource.TMSUCCESS);
System.out.println( "call xares.rollback(xid)" );
xares.rollback(xid);
System.out.println( "Closing XA connection" );
xaconn.close();
xaconn = null;
}
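Example 5-34 ends by rolling the transaction back. For completeness, the corresponding commit path drives both phases explicitly through the standard javax.transaction.xa.XAResource interface; this is a sketch only, reusing the xares and xid variables from the example above.
// Two-phase commit path: prepare the branch, then commit it
int vote = xares.prepare(xid);
if (vote == javax.transaction.xa.XAResource.XA_OK) {
    xares.commit(xid, false);   // false = two-phase commit of a prepared branch
} else if (vote == javax.transaction.xa.XAResource.XA_RDONLY) {
    // The branch was read-only; there is nothing to commit
}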
5.8.2 Using the IBM non-Java-based Data Server Drivers/Clients to enable
direct XA transactions
Starting with Version 9.5 FixPack 3, IBM Data Server Clients and non-Java-based Data
Server Drivers that have a DB2 Connect license can directly access a DB2 for z/OS Sysplex
and use native XA support without going through a middle-tier DB2 Connect server.
This type of client-side XA support is only available for transaction managers that use a
single-transport processing model. In a single-transport model, a transaction, over a single
transport (physical connection), is tied to a member from xa_start to xa_end. The transaction
end is followed immediately by xa_prepare(readonly), xa_prepare plus xa_commit or
xa_rollback, or xa_rollback. All of this must occur within a single application process.
Examples of transaction managers that use this model include IBM TXSeries CICS, IBM
WebSphere Application Server, and Microsoft Distributed Transaction Coordinator. Enable
XA support by using the SINGLE_PROCESS parameter in the xa_open string, or by
specifying enableDirectXA = true in the db2dsdriver configuration file. For more information,
refer to the following Web page:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.qb.dbconn.doc/doc/t0054754.html
The ADO .NET provider for DB2 provides distributed transaction support. The sample
provided is a Windows Forms application written in Visual Basic that demonstrates this
support. The application opens two connections to the DB2 subsystem on z/OS and then
inserts rows into two tables, one from each connection, to demonstrate a distributed
transaction.
Important: DB2 for z/OS (V8 and 9) APAR PK69659 must be installed for direct XA
support (needed for transaction managers such as Microsoft Distributed Transaction
Coordinator).
Tip: When using the non-Java-based Data Server Client or Runtime Client, you need to
set enableDirectXA=true in the db2dsdriver.cfg file or specify SINGLE_PROCESS in the
CLI script. When using the non-Java-based data server drivers, this support is implicitly
provided and is the default behavior.
Figure 5-12 indicates that the Enable XA Transactions box must be checked before
attempting XA connections using the .NET driver.
Figure 5-12 Enabling XA transaction
Figure 5-13 shows a sample application developed using .NET. The associated
db2dsdriver.cfg file is shown in Example 5-17 on page 204.
Figure 5-13 .NET application that uses XA transactions
5.9 Remote application recommendations
Here is a quick summary of recommendations when developing distributed applications:
Use the WHERE, GROUP BY, and HAVING clauses to limit the size of your result set.
Use OPTIMIZE for n ROWS and FETCH FIRST n ROWS whenever possible to get both
access path selection and DRDA blocking benefits including extra blocks.
Declare your cursor with FOR FETCH ONLY, or FOR READ ONLY, and INSENSITIVE
STATIC to allow the server to use block fetch.
Use CURRENTDATA(NO) and ISOLATION(CS) when possible. Avoid ISOLATION(RR).
Use stored procedure result sets.
Use COMMIT on RETURN clause for stored procedures that do not return result sets.
Use dynamic data format for LOBs, multi-row fetch and multi-row insert where supported
by the drivers.
COMMIT on business transaction boundaries but do not use autocommit (default for CLI
applications).
Avoid using WITH HOLD cursor.
Free resources you no longer need, that is, explicitly close cursors after you have fetched
all data, declare DGTTs with ON COMMIT DROP TABLE, free LOB locators.
Use KEEPDYNAMIC(YES) for applications that use very few SQL statements very
frequently to avoid excessive prepares and keep in mind that it prevents the connection
from being inactivated.
Use connection pooling and connection concentration to speed up connection processing.
Table 5-7 compares and contrasts the various requesters.
Table 5-7 Comparison across requesters
Requester | Ease of installation/size of footprint | Ease of coding applications (GUI tools like Data Studio Developer) | Performance | Support for Sysplex WLB, seamless failover/ACR (data sharing)
DB2 9 for z/OS requester | Most complex installation | Use when you have existing applications. | Supports both static and dynamic SQL. | Not supported
Type 4 driver | Easy to install/small footprint | For existing Java-based dynamic applications. | Dynamic SQL only, performs well when cache hit ratio is high. | Supported
SQLJ | Easy to install | For existing Java-based static applications. | Supports both static and dynamic SQL. | Supported
pureQuery using Type 4 driver | Easy to install | Easiest to code. Recommended for NEW Java-based static applications. | Supports both static and dynamic SQL. | Supported
Data server drivers in ODBC/CLI environment | Easy to install/small footprint | For C/C++ applications. | Dynamic SQL only, performs well when cache hit ratio is high. | Supported directly as well as through DB2 Connect server
Data server drivers in .NET environment | Easy to install/small footprint | For C# and Visual Basic applications. | Dynamic SQL only, performs well when cache hit ratio is high. | Supported
Table 5-8 lists the fetch/insert features supported by the DB2 9 for z/OS requester and various
client drivers against the DB2 9 for z/OS server.
Table 5-8 Fetch/insert feature support by client/driver
Requester/Feature | Limited block FETCH | Progressive streaming for LOB and XML data (Dynamic Data Format) | Multi-row FETCH | Multi-row INSERT
DB2 9 for z/OS requester | Supported for blocking cursors | Not supported | Supported when explicit rowset syntax is used or implicitly when using DSNTEP4 | Supported when explicit FOR MULTIPLE ROWS syntax is used
JCC Type 4 driver/pureQuery | Supported for blocking cursors | Supported by default for both LOB and XML data. Can be disabled. | Supported by default for scrollable cursors. | Heterogeneous batch updates (INSERT/MERGE) supported through pureQuery APIs.
Data server driver in ODBC/CLI environment | Supported for blocking cursors | Supported by default only for LOB data. Can be disabled. | Supported through SQLBulkOperations() | Supported through array input chaining
Data server driver in .NET environment | Supported for blocking cursors | Supported by default only for LOB data. Cannot be disabled. | Not supported | Supported through array input chaining and DB2BulkCopy
Chapter 6. Data sharing
In this chapter we discuss considerations on setting up a data sharing group in a distributed
environment where availability, workload balancing, and failover management are the primary
objectives.
This chapter contains the following sections:
High availability aspects of DRDA access to DB2 for z/OS on page 234
Recommendations for common deployment scenarios on page 247
DB2 failover scenario with and without Sysplex Distributor on page 259
Migration and coexistence on page 265
6.1 High availability aspects of DRDA access to DB2 for z/OS
In this section we focus on DRDA access to a DB2 data sharing group to ensure that remote
applications get the highest possible level of availability accessing DB2 for z/OS.
The principal vehicle for high availability of DB2 for z/OS is the exploitation of System z
parallel sysplex technology. Consequently, we focus on continuous availability of a DB2 for
z/OS server by exploring the seamless integration of DRDA connections with a DB2 data
sharing group.
We use DB2 Connect V9.5 FP3 or later and equivalent DRDA AR clients for distributed
platforms supporting sysplex workload balancing (sysplex WLB). Supported DRDA AR clients
include: DB2 Connect server/client, Type 4 driver, and the non-Java-based IBM Data Server
Driver (CLI Driver and .NET Provider).
The test for deployment verification was done using the trade workload described in
Appendix B, Configurations and workload on page 401.
6.1.1 Key components for DRDA high availability
To understand DRDA integration with the high availability capability of DB2 for z/OS, it is
important to know the functions of some key components of a parallel sysplex and of a DB2
for z/OS data sharing group:
z/OS Workload Manager (WLM)
WLM provides a server list to the DRDA AR. The server list indicates the available DB2 for
z/OS data sharing members and their relative weights.
DB2 Connect server/client, the Type 4 driver, and the non-Java-based IBM Data Server
Driver
These products include sysplex awareness capability that provides transparent
transaction routing based on the weights of the available members of the DB2 for z/OS
data sharing group.
TCP/IP stack
Virtual IP Addressing (VIPA) isolates the physical network path from IP addressing.
Dynamic VIPA (DVIPA) provides network resiliency against TCP/IP address space or
z/OS outages.
Distributed DVIPA offers the capability to connect to a single DVIPA to distribute service
provided by several stacks, providing isolation from an outage of a DB2 for z/OS
member.
6.1.2 z/OS WLM in DRDA workload balancing
WLM plays an important role in sysplex workload balancing (WLB). Sysplex WLB is based on a DRDA server list, which consists of the IP address, port number, and WLM server weight (weight) of each DB2 data sharing member. The server list is created and maintained by WLM and is passed to the DRDA AR client in the DRDA flow during connect processing. When transaction pooling is used, DRDA AR clients use the updated server list on each transaction to distribute the workload across the DB2 data sharing members. DRDA AR clients also receive refreshed server lists as connections are reused.
Note: The DB2 for z/OS requester does not have the same level of capability for transaction pooling, but it does provide connection-level load balancing.
There are no configuration parameters related to enabling sysplex WLB on the DB2 for z/OS server. When DDF is started, it registers itself with WLM, unless MAXDBAT is set to 0 or DDF is stopped. In either of these exception cases, that member's DDF does not appear in the server list. If you stop DDF, DDF de-registers from WLM and the member is removed from the server list.
There are several factors that WLM takes into account when creating and updating the weights:
Displaceable capacity of the systems (base information)
Enclave service class achievement (Performance Index, or PI)
The WLM goal should be attainable when the system is not under stress, which results in a PI < 1. If a service class has a PI > 1, it can affect workload balancing (as of z/OS V1R7 or higher, and PK03045 for DB2 for z/OS V8).
Enclave service class request queuing
Availability of general CPs and zIIPs (z/OS V1R9 or higher)
The weight in the server list reflects the combined available capacity of general purpose processors and zIIPs.
DB2 9 for z/OS health
DB2 reports a health factor of 0 to 100 to WLM based on the current storage consumption within the DBM1 and DIST address spaces.
You need to know how your WLM policies are defined for your DRDA workload and how your DRDA workloads meet the goals you defined, using RMF reports and the server list information.
Example 6-1 shows an RMF workload activity report from our test. The PI should be good (less than 1) when the system is not under stress. A PI greater than 1 can mean that sysplex WLB is affected.
Example 6-1 RMF workload activity report example
1 W O R K L O A D A C T I V I T Y
PAGE 18
z/OS V1R10 SYSPLEX SANDBOX DATE 04/23/2009 INTERVAL 09.59.748 MODE = GOAL
RPT VERSION V1R10 RMF TIME 17.39.37
POLICY ACTIVATION DATE/TIME 04/22/2009 20.38.49
------------------------------------------------------------------------------------------------------------ SERVICE CLASS PERIODS
REPORT BY: POLICY=OVER WORKLOAD=DATABASE SERVICE CLASS=DDFTST RESOURCE GROUP=*NONE PERIOD=1 IMPORTANCE=2
CRITICAL =NONE
-TRANSACTIONS- TRANS-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE--- SERVICE TIME ---APPL %--- --PROMOTED-- ----STORAGE----
AVG 0.00 ACTUAL 23 SSCHRT 0.0 IOC 0 CPU 0.004 CP 0.00 BLK 0.000 AVG 0.00
MPL 0.00 EXECUTION 23 RESP 0.2 CPU 124 SRB 0.000 AAPCP 0.00 ENQ 0.000 TOTAL 0.00
ENDED 1 QUEUED 0 CONN 0.1 MSO 0 RCT 0.000 IIPCP 0.00 CRM 0.000 SHARED 0.00
END/S 0.00 R/S AFFIN 0 DISC 0.0 SRB 0 IIT 0.000
#SWAPS 0 INELIGIBLE 0 Q+PEND 0.1 TOT 124 HST 0.000 AAP 0.00 -PAGE-IN RATES-
EXCTD 0 CONVERSION 0 IOSQ 0.0 /SEC 0 AAP 0.000 IIP 0.00 SINGLE 0.0
AVG ENC 0.00 STD DEV 0 IIP 0.002 BLOCK 0.0
REM ENC 0.00 ABSRPTN 5391 SHARED 0.0
MS ENC 0.00 TRX SERV 5391 HSP 0.0
GOAL: RESPONSE TIME 000.00.00.500 FOR 80%
RESPONSE TIME EX PERF AVG --EXEC USING%-- -------------- EXEC DELAYS % ----------- -USING%- --- DELAY % --- %
SYSTEM ACTUAL% VEL% INDX ADRSP CPU AAP IIP I/O TOT CRY CNT UNK IDL CRY CNT QUI
SC63 100 N/A 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Note: zIIP awareness was introduced in APAR PK38867 for DB2 V8 and 9. Prior to z/OS V1R9, the WLM sysplex routing service returned only the weight of a server based on the available capacity of general processors, adjusted by the health of the server and queuing at the server. With this change, z/OS V1R9 WLM returns the weight of the available capacity of a server across general CP, zIIP, and zAAP processors. The combined weight depends on all servers running z/OS V1R9 or later.
Monitoring and tuning sysplex workload balancing
You can monitor the weights of each DB2 data sharing member using the DISPLAY DDF
DETAIL command. The command has been enhanced in APAR PK80474 for DB2 V8 and 9.
Example 6-2 displays the location server list of our 3-member DB2 data sharing group: D9C1
(9.12.4.103), D9C2 (9.12.4.104), and D9C3 (9.12.4.105). D9C2 has the smallest weight (18)
because the other two members have each installed two zIIPs.
Example 6-2 Example of the server list from the DISPLAY DDF DETAIL command
-D9C1 DIS DDF DET
DSNL080I -D9C1 DSNLTDDF DISPLAY DDF REPORT FOLLOWS: 335
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB9C USIBMSC.SCPD9C1 -NONE
DSNL084I TCPPORT=38320 SECPORT=0 RESPORT=38321 IPNAME=-NONE
DSNL085I IPADDR=::9.12.4.102
DSNL086I SQL DOMAIN=d9cg.itso.ibm.com
DSNL086I RESYNC DOMAIN=d9cg.itso.ibm.com
DSNL087I ALIAS PORT SECPORT
DSNL088I DB9CALIAS 38324 0
DSNL088I DB9CSUBSET 38325 0
DSNL089I MEMBER IPADDR=::9.12.4.103
DSNL090I DT=I CONDBAT= 300 MDBAT= 100
DSNL092I ADBAT= 8 QUEDBAT= 0 INADBAT= 0 CONQUED= 0
DSNL093I DSCDBAT= 3 INACONN= 19
DSNL100I LOCATION SERVER LIST:
DSNL101I WT IPADDR IPADDR
DSNL102I 45 ::9.12.4.105
DSNL102I 42 ::9.12.4.103
DSNL102I 18 ::9.12.4.104
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
If you are using the Type 4 driver to connect to DB2 for z/OS, you can get a trace from the
global transport object pool to monitor connection concentrator and sysplex workload
balancing as shown in Example 6-3 on page 238.
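One way to turn this trace on (the same configuration properties are explained in 6.2.2, Application Servers) is to point the JVM at a driver configuration file; this is only a sketch, and the property file path and application class shown here are placeholders, not values from our test environment:

db2.jcc.dumpPoolStatisticsOnSchedule=60
db2.jcc.dumpPoolStatisticsOnScheduleFile=/tmp/poolstats

$ java -Ddb2.jcc.propertiesFile=/tmp/jcc.properties MyApplication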
Note: APAR PK80474 adds several enhancements to the DISPLAY DDF command, including displaying the server list, the group IP address, the member-specific IP address, and the defined location aliases. For DB2 for z/OS V8, it also adds the capability to display location aliases, which was already supported in DB2 9.
In this example, the counters are as follows:
npr
The total number of requests that the Type 4 driver has made to the pool since the pool
was created.
nsr
The number of successful requests that the Type 4 driver has made to the pool since the
pool was created. A successful request means that the pool returned an object.
lwroc
The number of objects that were reused but were not in the pool. This can happen if a
Connection object releases a transport object at a transaction boundary. If the Connection
object needs a transport object later, and the original transport object has not been used
by any other Connection object, the Connection object can use that transport object.
hwroc
The number of objects that were reused from the pool.
coc
The number of objects that the Type 4 driver created since the pool was created.
aooc
The number of objects that exceeded the idle time that was specified by
db2.jcc.maxTransportObjectIdleTime and were deleted from the pool.
rmoc
The number of objects that have been deleted from the pool since the pool was created.
nbr
The number of requests that the Type 4 driver made to the pool that the pool blocked because the pool had reached its maximum capacity. A blocked request can still succeed if an object is returned to the pool before the db2.jcc.maxTransportObjectWaitTime is exceeded; otherwise, an exception is thrown.
tbt
The total time in milliseconds for requests that were blocked by the pool. This time can be
much larger than the elapsed execution time of the application if the application uses
multiple threads.
sbt
The shortest time in milliseconds that a thread waited to get a transport object from the
pool. If the time is under one millisecond, the value in this field is zero.
lbt
The longest time in milliseconds that a thread waited to get a transport object from the
pool.
abt
The average amount of time in milliseconds that threads waited to get a transport object
from the pool. This value is tbt/nbr.
tpo
The number of objects that are currently in the pool.
Example 6-3 Example trace output from the global transport objects pool
2009-04-29-16:14:46.111 ageServerLists, 8000 2000 4
2009-04-29-16:14:54.113 ageServerLists, 8000 2000 4
2009-04-29-16:14:55.712| Scheduled PoolStatistics npr:6347 nsr:6353 lwroc:6294
hwroc:0 coc:49 aooc:0 rmoc:0 crr:0 nbr:0 nbdsm:0 nbpm:0 sbt:0 abt:0 lbt:0 tbt:0
tpo:49
2009-04-29-16:15:02.121 ageServerLists, 8000 2000 4
2009-04-29-16:15:10.131 ageServerLists, 8000 2000 4
2009-04-29-16:15:24.261 ageServerLists, 8000 2000 4
2009-04-29-16:15:32.360 ageServerLists, 8000 2000 4
2009-04-29-16:15:40.369 ageServerLists, 8000 2000 4
2009-04-29-16:15:48.436 ageServerLists, 8000 2000 4
2009-04-29-16:15:55.716| Scheduled PoolStatistics npr:9721 nsr:9727 lwroc:9655
hwroc:0 coc:52 aooc:1 rmoc:1 crr:0 nbr:0 nbdsm:0 nbpm:0 sbt:0 abt:0 lbt:0 tbt:0
tpo:51
2009-04-29-16:15:56.440 ageServerLists, 8000 2000 4
2009-04-29-16:16:04.448 ageServerLists, 8000 2000 4
2009-04-29-16:16:12.454 ageServerLists, 8000 2000 4
2009-04-29-16:16:20.621 ageServerLists, 8000 2000 4
2009-04-29-16:16:28.699 ageServerLists, 8000 2000 4
2009-04-29-16:16:36.709 ageServerLists, 8000 2000 4
2009-04-29-16:16:44.715 ageServerLists, 8000 2000 4
2009-04-29-16:16:52.721 ageServerLists, 8000 2000 4
2009-04-29-16:16:55.717| Scheduled PoolStatistics npr:11749 nsr:11755 lwroc:11681
hwroc:0 coc:52 aooc:2 rmoc:2 crr:0 nbr:0 nbdsm:0 nbpm:0 sbt:0 abt:0 lbt:0 tbt:0
tpo:50
2009-04-29-16:17:00.724 ageServerLists, 8000 2000 4
If you are using DB2 Connect server, you can also display your server list by using the db2pd
command with -sysplex option. You can also see the number of connections made to a
specific member. See Example 6-4.
Example 6-4 Display server list information from DB2 Connect Server
$ db2pd -sysplex
Database Partition 0 -- Active -- Up 0 days 00:06:06
Sysplex List:
Alias: DB9C
Location Name: DB9C
Count: 3
IP Address Port Priority Connections Status PRDID
9.12.4.105 38320 53 1 0 DSN09015
9.12.4.103 38320 48 0 0
9.12.4.104 38320 21 0 0
Tip: Use one or a combination of these reports together with the RMF workload activity
report to make sure the system is not under stress.
6.1.3 The sysplex awareness of clients and drivers
DB2's primary approach to scalability and high availability is to exploit the clustering capabilities of the System z Parallel Sysplex. In a DB2 data sharing environment, multiple DB2 data sharing members can all access the same databases. If one member fails or cannot be reached, the workload can be dynamically rerouted to other members of the data sharing group, and work continues.
DRDA connections can exploit the sysplex as follows:
DRDA AR can connect applications to a DB2 data sharing group as though it were a
single database server, and spread the workload among the different members, based on
server lists dynamically provided by WLM.
DRDA AR can recognize when a member of a DB2 data sharing group fails and can
automatically route new connections to other members.
Initially, the sysplex awareness capability (sysplex WLB) was provided by the DB2 Connect
server (known as Gateway). Today, the T4 Driver and the non-Java-based IBM Data Server
Driver and Client products have been enhanced to provide sysplex WLB.
Figure 6-1 on page 240 illustrates how DRDA AR clients on distributed platforms provided by IBM can be configured to connect to the DB2 data sharing group. A client can connect to any of the members. From the point of view of the application on the remote platform, the DB2 data sharing group, DB9C, appears as a single database server, as shown in Figure 6-1 on page 240.
Note: The sysplex WLB capability of the CLI Driver and .NET Provider was introduced in DB2 Connect V9.5 FP3 or later and any equivalent level of the non-Java-based IBM Data Server Driver or Client.
Note: To take advantage of sysplex WLB with one of the DRDA clients, you need connection pooling capability, where remote applications can reuse physical connections between DB2 and the applications. DB2 V9.5 FP4 introduced sysplex WLB without connection pooling.
Figure 6-1 DRDA access to DB2 data sharing
For the DRDA AR clients, connections to a DB2 data sharing group are established in exactly the same way as connections to a non-data sharing DB2 subsystem. The information for the chosen DB2 data sharing member must be catalogued in the appropriate configuration. After an initial successful connect, the DRDA AR is automatically able to establish connections to any member using the server list.
The logic flow of DRDA AR clients for obtaining and using a server list is illustrated in Figure 6-2 on page 241. In this example, the DRDA AR client has catalogued the DB2 data sharing group as a single DB2 subsystem: DB9C at 9.12.4.102, which represents the distributed DVIPA explained in 6.1.4, Network resilience using Virtual IP Addressing on page 242. The DRDA AR client can be any one of the following: the Type 4 driver, the non-Java-based IBM Data Server Driver or Client, or a DB2 Connect server.
[Figure 6-1 shows DRDA clients (WebSphere Application Server with the Type 4 driver, application servers with the .NET Provider or ODBC driver, and applications using a DB2 Connect server) connecting over the network to the DB2 data sharing group DB9C, whose members D9C1 and D9C2 each run DDF on their own z/OS image and TCP/IP stack, with WLM and the CF inside the sysplex.]
Figure 6-2 The logical flow of DRDA AR clients obtaining server information
The sequence of actions in the diagram is as follows:
1. The first connect request is presented by the DRDA AR client. It is a connect to database DB9C, which is correctly cataloged at TCP/IP address 9.12.4.102, with sysplex WLB turned on, so the DRDA AR client knows that the target is a DB2 data sharing group. The DRDA AR client flows the connection to member D9C1, to which it was routed by Sysplex Distributor. Because this is the first connection, the DRDA AR client is unaware of the number of members available in the data sharing environment, and Sysplex Distributor is responsible for routing the first connection to DB2 for z/OS.
2. Once the connection to DB9C is established, a server list, which comprises the available data sharing members and the WLM weight for the relative load of each member, is returned to the DRDA AR client. The DRDA AR client caches this information for use with subsequent connections.
3. The DRDA AR client continues to route all SQL traffic for this first transaction to member D9C1, to which it was directed by Sysplex Distributor.
4. A second transaction request is initiated by the DRDA AR client. In the application, this request is also directed to database D9C1.
5. The DRDA AR client uses the server list information to decide to which member to route the second transaction. This decision is based on the connectivity information and WLM information received on the first connect.
6. On this occasion, the DRDA AR client chooses to route this connection request to member D9C2.
7. After successful connection establishment (transparent to the application), the DRDA AR client receives an updated server list with member availability and WLM data for use during the next sysplex connection decision.
The example in Figure 6-2 shows the IP configuration at the beginning of our project.
[Figure 6-2 diagrams the seven-step flow described above: the DRDA AR client (Type 4 Driver, CLI Driver, .NET Provider, or DB2 Connect server) connects to member D9C1 (9.12.6.70, port 38320, RESPORT 38321) on SC63, receives SQLCODE 0 and a server list (9.12.6.70:38320 and 9.12.6.9:38320 with weights), routes SQL for the first transaction to D9C1, then makes a routing decision for the next transaction and routes it to D9C2 (9.12.6.9, port 38320, RESPORT 38322) on SC64, again receiving SQLCODE 0 and an updated server list.]
How sysplex workload balancing works
Take Table 6-1 as an example of a server list taken from our test to explain how sysplex WLB works. Each weight is calculated as a ratio of the total weight: about 42% of the workload will be routed to D9C1, about 17% to D9C2, and about 42% to D9C3. (This example shows the member DVIPA addresses.)

Table 6-1 Server list and calculated ratio
Member              Weight   Ratio (weight/total weight)
D9C1 (9.12.4.103)   53       0.41732284
D9C2 (9.12.4.104)   21       0.16535433
D9C3 (9.12.4.105)   53       0.41732284

Assume the Type 4 driver (or any other DRDA AR client that supports sysplex WLB) has assigned connections to transports (connections hold transports) according to the numbers of active connections shown in Table 6-2.

Table 6-2 The numbers of active connections
Member              Active connections   Ratio (active/total)
D9C1 (9.12.4.103)   19                   0.452381
D9C2 (9.12.4.104)   7                    0.166667
D9C3 (9.12.4.105)   16                   0.380952

When a new transaction arrives, the weight ratios are recalculated from the latest server list, and the active connection ratios are evaluated against them, checking the lowest weighted member first and then the higher weighted members. Assume, in this example, that the server list weights did not change. Given the numbers in Table 6-2, D9C3 gets the next new transaction, because D9C1 and D9C2 have active connection ratios that are higher than their associated weight ratios. A minimal illustration of this selection follows.
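The following small Java sketch illustrates the selection just described, using the numbers from Table 6-1 and Table 6-2. It is only an illustration of the idea, not the actual algorithm implemented in the drivers:

import java.util.Arrays;

public class WlbSketch {
    public static void main(String[] args) {
        // Members ordered lowest weight first, as described above (values from Tables 6-1 and 6-2).
        String[] member = {"D9C2", "D9C3", "D9C1"};
        int[]    weight = {21, 53, 53};
        int[]    active = {7, 16, 19};

        int totalWeight = Arrays.stream(weight).sum();   // 127
        int totalActive = Arrays.stream(active).sum();   // 42

        String next = member[0];                          // fallback if every member is at or above its share
        for (int i = 0; i < member.length; i++) {
            double weightRatio = (double) weight[i] / totalWeight;
            double activeRatio = (double) active[i] / totalActive;
            if (activeRatio < weightRatio) {              // member is under-used relative to its weight share
                next = member[i];
                break;
            }
        }
        System.out.println("Route next transaction to " + next);   // prints D9C3 for these numbers
    }
}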
Restriction: When you are using a DB2 Connect server, the connection concentrator needs to be configured for the behavior described above. Otherwise, the server list is updated only when each physical connection is made. For all other DRDA AR clients, transaction pooling is automatically turned on when you turn on sysplex WLB.

Note: The described algorithm is based on DB2 Connect V9.5 FP3 or later and an equivalent level of the T4 Driver and the non-Java-based IBM Data Server Driver or Client with sysplexWLB enabled.

6.1.4 Network resilience using Virtual IP Addressing
Traditionally, an IP address is associated with each physical link and is unique across the entire visible network. Within such an IP routing network, the failure of any intermediate link or adapter means failure of the application's network service unless there is an alternate routing path. Routers can route network traffic around failed intermediate links, but if the link associated with the destination fails, the IP routing network has no way to provide an alternate path to the application. The virtual IP address (VIPA) support of z/OS Communications Server
removes this limitation by disassociating the IP address from a physical link and associating it with the TCP/IP address space, or stack. This type of VIPA is called a static VIPA. Because a stack is just another address space in z/OS, protection against the failure of a physical interface extends to the failure or planned outage of a stack, or the failure of the entire z/OS image. Moving the VIPA across stacks is needed in case of a stack outage; this process is called VIPA Takeover. Dynamic VIPA (DVIPA) automates VIPA Takeover to a backup stack. Distributed DVIPA enables the z/OS Sysplex Distributor function by allowing requests to a single DVIPA to be served by applications on several stacks listed in the configuration. This limits application exposure to a stack failure, while providing the additional benefit of connection-level workload balancing.
Static VIPA
A static VIPA is a z/OS TCP/IP facility that provides alternate network routing to the same DB2 for z/OS subsystem or member in the event of a physical network failure. A static VIPA cannot reroute connections to a different DB2 for z/OS system.
DVIPA
Provides network resilience by providing VIPA Takeover to a backup stack.
Distributed DVIPA
The DVIPA that represents the z/OS Sysplex Distributor, which provides application resilience by rerouting DRDA TCP/IP traffic to a different member of a DB2 for z/OS data sharing group. DVIPAs should be used in conjunction with the z/OS Sysplex Distributor.
In a non-data sharing DRDA connection environment, a static VIPA or DVIPA is recommended for network resilience. In case of a network interface outage, existing connections are automatically rerouted to the same DB2 for z/OS subsystem, transparently to the connected applications (assuming an alternate network route exists).
Distributed DVIPA and the sysplex WLB provided by DRDA AR clients are recommended for DRDA connections to a DB2 for z/OS data sharing environment.
6.1.5 Advanced high availability for DB2 for z/OS data sharing
The combination of distributed DVIPA (the z/OS Sysplex Distributor) and client capabilities provides applications with further levels of resilience:
Advanced member routing. DRDA AR clients rely on the server lists for the DB2 data sharing group, returned from the server, to distribute the workload across the group. Without Sysplex Distributor, the first connection from an application relies on the availability of the catalogued DB2 data sharing member. Sysplex Distributor provides the capability to connect to any available DB2 data sharing member without re-cataloging in the DRDA AR clients.
Automatic client reroute to a different member of a data sharing group, in the event of the failure of one member.
Distributed DVIPA and Sysplex Distributor
Figure 6-3 on page 244 illustrates the concept of DRDA connections to the DB2 data sharing group through Sysplex Distributor. The DRDA AR client system simply catalogs the DB2 data sharing group at the dynamic VIPA of the Sysplex Distributor (9.12.4.102), as though it were connecting to a single DB2 server. When a new connection request is made, it is routed to the most appropriate member, based on availability and WLM information delivered from DB2 for z/OS to the DRDA AR clients.
Figure 6-3 Sysplex Distributor connection assignment
Enabling sysplex WLB is essential to ensure that the workload distribution is managed by the DRDA AR clients, based on DB2 WLM information. You enable it in the Type 4 driver or the non-Java-based IBM Data Server Driver or Client by parameter (examples are shown in the following sections), or in the DB2 Connect server by specifying the SYSPLEX parameter on the DCS database directory entry. Sysplex Distributor performs network load balancing only for the initial connection. After an initial connection is made and a server list is sent to the DRDA AR clients, the clients use member-specific IP addresses to connect to the appropriate DB2 data sharing members.
The following explanation gives the Sysplex Distributor-related definitions on the TCP/IP stack. When you define Sysplex Distributor, you need to define two different roles for the TCP/IP stacks participating in the Sysplex Distributor. One is the distribution stack, which owns the distributed DVIPA and represents the Sysplex Distributor. The other is the target stack, which provides the service, such as DB2 for z/OS. You can give both roles to one stack, but you will need to define a backup for the distribution stack. In Figure 6-3, one of the two participating TCP/IP stacks, represented on SC63 and SC64, becomes the primary distribution stack and the other becomes the backup stack. Both stacks take on the role of target stack.
Set the following options on the IPCONFIG statement in all of your participating stacks:
DATAGRamfwd
Enables the transfer of data between networks.
DYNAMICXCF
Indicates that XCF dynamic support is enabled.
SYSPLEXRouting
Specifies that this TCP/IP host is part of a sysplex domain and should communicate interface changes to WLM.
[Figure 6-3 shows a DRDA AR application connecting to the distributed DVIPA 9.12.4.102, port 38320, which Sysplex Distributor maps to members D9C1 (VIPA 9.12.4.103, RESPORT 38321) and D9C2 (VIPA 9.12.4.104, RESPORT 38322) running DDF on SC63 and SC64, with WLM and the CF inside the sysplex.]
Define the distributed DVIPA in the distribution stack using the following substatements of the VIPADYNAMIC statement:
VIPADEFINE
Defines a distributed DVIPA.
VIPADISTRIBUTE
Enables (using DEFINE) the Sysplex Distributor function for a dynamic VIPA (defined on the same stack by VIPADEFINE or VIPABACKUP) so that new connection requests can be distributed to other stacks in the sysplex. Specify the service port number (for DB2 for z/OS, the DRDA port specified in the BSDS) and the dynamic XCF addresses of the target stacks to which the distributing DVIPA will forward requests.
Define a backup stack for the distributed DVIPA in one or more sysplex members, in case of failure. Define it in a substatement of the VIPADYNAMIC statement on a stack other than your distribution stack:
VIPABACKUP
Defines a backup DVIPA. An illustrative profile fragment follows this list.
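The fragment below is only a sketch of what such TCP/IP profile definitions can look like, using the distributed DVIPA (9.12.4.102) and DRDA port (38320) discussed in this chapter; the subnet mask and backup rank are illustrative assumptions, and it is not taken from our test environment (the actual definitions are covered in Chapter 3):

; On the distribution stack
VIPADYNAMIC
  VIPADEFINE MOVEABLE IMMEDIATE 255.255.255.0 9.12.4.102
  VIPADISTRIBUTE DEFINE 9.12.4.102 PORT 38320 DESTIP ALL
ENDVIPADYNAMIC

; On the backup stack
VIPADYNAMIC
  VIPABACKUP 1 9.12.4.102
ENDVIPADYNAMIC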
In Figure 6-3, the Sysplex Distributor is drawn spread across the sysplex to show that it is accessible from any system of the sysplex group; it is a DVIPA that represents the group of members. Sample definitions for our test environment are covered in Chapter 3, Installation and configuration on page 69.
The automatic client reroute operation
Transaction pooling and automatic client reroute (ACR), as well as seamless automatic client
reroute (seamlessACR), are enabled by default when sysplex WLB is enabled. This
combination allows you to take advantage of the function to reroute a live connection to a
different member of a data sharing group.
Connection rerouting is possible because transaction pooling disassociates the connection from the transport within the DRDA AR client. If one member of the DB2 data sharing group fails, the DRDA AR client knows it, and simply dispatches the next transaction request from each connection to a transport that is connected to an available member of the data sharing group.
The Type 4 driver and the non-Java-based Data Server Driver provide a further capability called seamlessACR, where, when possible, the driver retries the transaction on the new member without notifying the application. If the DB2 server fails while the first SQL statement of a transaction is executing, seamlessACR retries that SQL statement on another available DB2 data sharing member. When sysplex WLB is enabled, seamlessACR is enabled by default for both the Type 4 driver and the non-Java-based Data Server Driver. You can disable seamlessACR by explicitly setting its parameter off.
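When a failover cannot be completed seamlessly, the in-flight work is rolled back and the application must replay it: non-Java-based clients receive SQL30108N, and a Java application using the Type 4 driver typically receives an SQLException with error code -4498, indicating that the connection was re-established to another member. The following minimal Java sketch, with hypothetical class and method names, shows one way an application might replay the unit of work in that case; it is an illustration, not code from our test workload:

import java.sql.Connection;
import java.sql.SQLException;

public class RerouteRetry {

    /** Unit of work to be replayed as a whole if the connection is rerouted. */
    public interface UnitOfWork {
        void run(Connection con) throws SQLException;
    }

    /** Runs the unit of work, replaying it once if client reroute re-established
     *  the connection to another data sharing member (JCC error code -4498). */
    public static void execute(Connection con, UnitOfWork work) throws SQLException {
        try {
            work.run(con);
            con.commit();
        } catch (SQLException e) {
            if (e.getErrorCode() == -4498) {   // connection failed but was re-established;
                work.run(con);                 // the transaction was rolled back, so replay it
                con.commit();
            } else {
                throw e;
            }
        }
    }
}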
Note: You may need a dynamic routing feature, such as OMPROUTE or RIP, enabled to get the full capability of network failover. We included all the IP addresses available in our test system in the same subnet to avoid configuring dynamic routing. Be sure you configure your Sysplex Distributor and network according to your environment's needs.
Restriction: The descriptions in this section are based on V9.5 FP3 or later.
The resilience of the DRDA connection is illustrated in Figure 6-4, which shows member
D9C1 failing, and the connected workload switching automatically to member D9C2. A
detailed examination of the possible application failover scenarios is covered in 6.3, DB2
failover scenario with and without Sysplex Distributor on page 259.
Figure 6-4 Client reroute
6.1.6 Scenario with Q-Replication for high availability
Some customers may choose to have multiple DB2 data sharing groups configured for high
availability and disaster recovery purposes where the data is copied across using a
replication solution as shown in Figure 6-5 on page 247. The IBM Data Server drivers can
only be used for workload balancing among members of a DB2 data sharing group. It is not
possible to route the workload across DB2 data sharing groups seamlessly just by using the
drivers. If an asynchronous replication solution, such as Q-Replication, is used to copy data
between two sysplexes, the status of data being replicated is not known or guaranteed and it
is recommended to do a user directed switch between the DB2 data sharing groups.
If you have configured multiple copies of application servers, where each server directs its workload to a different DB2 data sharing group, you can use an external workload router to balance the workload across the DB2 data sharing groups.
Note: A seamless failover, as described in the Java application development for IBM data
servers documentation at the following Web page is equivalent to seamlessACR described
here:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.d
b2.luw.apdv.java.doc/doc/c0024189.html
For the DB2 Connect server, ACR is performed. When the DB2 Connect server is at the same level as the client or higher, the client can perform seamless failover. Otherwise, the client does not perform seamless failover, and the error SQL30108N is returned to the applications.
[Figure 6-4 shows a DB2 Connect server, with its logical agents (LA) and coordinator agents (CA), holding a server list (9.12.6.70 and 9.12.6.9, port 38320, with weights) for members D9C1 (9.12.6.70, RESPORT 38321) and D9C2 (9.12.6.9, RESPORT 38322) on SC63 and SC64; when D9C1 fails, the connected workload switches to D9C2.]
For example, when the application servers are configured to use the sysplex on the left of Figure 6-5, the workload is balanced within that data sharing group, as shown by the long dashed (red) arrows. The same is true for the application servers configured to use the sysplex on the right, drawn as bold blue arrows. In case of a whole sysplex failure, users need to change their settings manually to direct the workload to the other group, as shown by the dotted blue arrows.
Figure 6-5 Multi-sysplex configuration scenario
6.2 Recommendations for common deployment scenarios
The considerations for a resilient infrastructure for distributed access to DB2 for z/OS fall into
two categories:
DB2 for z/OS server resilience
The most resilient DB2 for z/OS servers exploit DB2 data sharing, parallel sysplex,
Sysplex Distributor, and DVIPA. The tests in this chapter have shown that such a
configuration provides resilience for all scenarios except an in-flight unit of work.
Optionally, we will show a DB2 data sharing subsetting scenario.
The clients and drivers resilience
Depending on the nature of the distributed applications, we consider the scenarios which
utilize connection pooling by application servers.
There are several differences in terms of configuring for a resilient infrastructure using DRDA
to access DB2 for z/OS. This section shows different ways of configuring each DRDA AR
client.
[Figure 6-5 shows two DB2 data sharing groups, each in its own sysplex with its own CF, TCP/IP stacks, DDF members, and WLM, kept in step by replication; WebSphere Application Server instances with the Type 4 driver and other DRDA clients (IBM Data Server Drivers or a DB2 Connect server) reach them over the network, with an external workload router in front of the application servers.]
6.2.1 DB2 data sharing subsetting
Suppose you want to configure a three member sysplex but you want only two out of those
three DB2 data sharing members used for DRDA access. Using a location alias, you can
create a subset of DB2 data sharing members. Load distribution for both transactions and
connections is performed across the subset. If all members of the subset are down,
connection failures will occur even if other members not in the subset are started and capable
of processing work. By designating subsets of members, you can perform the following tasks:
Limit the members to which DRDA AR clients can connect. System and database
administrators might find this useful for any number of purposes.
Ensure that initial connections are established only with members that belong to the
specified subset. Without subsets, requesters can make initial and subsequent
connections to any member of the data sharing group.
Provide requesters with information about only those members in the subset.
Figure 6-6 shows the configuration used in our test environment. Configuring DB2 data sharing subsetting is straightforward: use the Change Log Inventory utility (DSNJU003) to set the DDF ALIAS in the BSDS of the subset members.
Figure 6-6 DB2 data sharing subsetting
Our configuration has location name DB9C (which consists of all three DB2 data sharing
members), location alias DB9CSUBSET (which consists of D9C1 and D9C2), and location
alias DB9CALIAS (which consists of D9C1). Each location alias listens to different port
numbers.
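For example, a Type 4 driver application can be pointed at the subset simply by using the alias location name and its port. The URL below is only an illustration built from the alias and port shown in Example 6-2, not a definition taken from our configuration files:

jdbc:db2://9.12.4.102:38325/DB9CSUBSET:enableSysplexWLB=true;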
[Figure 6-6 shows the data sharing group DB9C with members D9C1, D9C2, and D9C3, each running DDF on its own z/OS image and TCP/IP stack: the location alias DB9CSUBSET covers D9C1 and D9C2, the member alias DB9CALIAS covers D9C1 only, and different clients (WebSphere Application Server with the Type 4 driver, an application server with the .NET Provider) connect to the group, the subset, or the single-member alias.]
Assume the DRDA AR clients are all configured for sysplex WLB, and each application server connects to a different location name and port number. As Figure 6-6 on page 248 shows, each DRDA AR client distributes its workload only to the applicable members.
APAR PK80474, also mentioned in 6.1.2, z/OS WLM in DRDA workload balancing on page 234, enhances DB2 for z/OS V8 to list all location alias names and port numbers, a capability first made available in DB2 9 for z/OS.
You can use this function to define one subset of a data sharing group to support distributed
access and another subset to support batch workloads, for example.
6.2.2 Application Servers
This section gives practical deployment scenarios, with recommendations, for a Java-based application server using the Type 4 driver and for a non-Java-based application server using the non-Java-based IBM Data Server Driver or Client. Sysplex Distributor should always be used in conjunction with sysplex WLB.
DB2 for z/OS server resilience
Regarding the DB2 for z/OS server, use a DB2 data sharing group with Sysplex Distributor and DVIPA to provide maximum resilience of the DB2 for z/OS server.
Network resilience
Regarding the TCP/IP network outside the sysplex, alternate routing should be configured in
the TCP/IP network to provide alternate routes to the DB2 server.
WebSphere Application Server connectivity
When using a Java-based application server, such as WebSphere Application Server, to connect to the DB2 data sharing group, you can take advantage of the DB2 data sharing group by using the Type 4 driver without going through a DB2 Connect server.
The minimum required level of the Type 4 driver is 2.7.xx or later. Example 6-5 shows the command output from a system with DB2 Connect V9.5 FP3. If it is not already defined to the system, the classpath to the Type 4 driver module needs to be defined, or given to the Java Virtual Machine (JVM) through a Java option, when issuing commands.
Example 6-5 Verify level of Type 4 driver
$java com.ibm.db2.jcc.DB2Jcc -version
IBM DB2 JDBC Universal Driver Architecture 3.53.70
Important: We do not recommend defining a DB2 data sharing subset that consists of only
one member, since that would be the same as connecting to a specific IP address and port
number without gaining the availability benefits of DB2 data sharing. To have a resilient
infrastructure, we recommend defining a data sharing subset with at least two members.
The only reason to have a single-member subset is if your requirement to restrict workload from one or more members exceeds your requirements for availability.
Note: Our example shows IBM DB2 JDBC Universal Driver, which was the older name for the Type 4 driver. You will see the new name in the message when the driver version is 4.0 or later:
C:\DDF\test\javatests>java com.ibm.db2.jcc.DB2Jcc -version
IBM Data Server Driver for JDBC and SQLJ 4.8.23
Figure 6-7 Java application server scenario configuration using WebSphere Application Server
To set up sysplex WLB with the Type 4 driver, there are two places to configure your options. One is the data source properties, where you enable sysplex WLB and limit your number of transports; the other is the Type 4 driver configuration properties, where you set the global limit of transports and set up your monitoring.
The following are the properties used to enable the function and set the limits:
IBM Data Server Driver for JDBC and SQLJ data source properties
enableSysplexWLB
enableConnectionConcentrator
maxTransportObjects
IBM Data Server Driver for JDBC and SQLJ configuration properties
db2.jcc.minTransportObjects
db2.jcc.maxTransportObjects
db2.jcc.maxTransportObjectIdleTime
db2.jcc.maxTransportObjectWaitTime
db2.jcc.dumpPool
db2.jcc.dumpPoolStatisticsOnSchedule
db2.jcc.dumpPoolStatisticsOnScheduleFile
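As a minimal sketch of how the data source properties above can be used outside an application server, the following standalone Java program passes enableSysplexWLB and maxTransportObjects as connection properties on the JCC URL. The DVIPA, port, and location name are the ones used in this chapter, while the credentials and the value 80 are placeholders; in WebSphere Application Server you would instead set these as custom properties of the data source:

import java.sql.Connection;
import java.sql.DriverManager;

public class SysplexWlbConnect {
    public static void main(String[] args) throws Exception {
        // With driver levels before JDBC 4.0 auto-loading, also call
        // Class.forName("com.ibm.db2.jcc.DB2Driver") before getConnection().
        String url = "jdbc:db2://9.12.4.102:38320/DB9C"   // group DVIPA and DRDA port
                   + ":enableSysplexWLB=true;"            // let the driver balance transactions
                   + "maxTransportObjects=80;";           // per-DataSource transport limit (placeholder value)
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected to " + con.getMetaData().getDatabaseProductVersion());
        }
    }
}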
Table 6-3 on page 251 shows recommended settings for common Java application environments for high availability. Start from these recommendations and tune each property as you monitor.
[Figure 6-7 shows a WebSphere Application Server using the Type 4 driver: applications obtain logical connections from the data source's physical connection pool, the driver maps them to transport objects connected to members D9C1 (9.12.4.103) and D9C2 (9.12.4.104) behind the distributed DVIPA 9.12.4.102 on port 38320, and a jcc.properties file sets db2.jcc.minTransportObjects=50, db2.jcc.maxTransportObjects=100, db2.jcc.maxTransportObjectWaitTime=30, db2.jcc.dumpPool=0, db2.jcc.dumpPoolStatisticsOnSchedule=60, and db2.jcc.dumpPoolStatisticsOnScheduleFile=../logs/poolstats, while enableSysplexWLB and maxTransportObjects are set on the data source.]
Table 6-3 Recommendation for common deployment
enableSysplexWLB — Enables sysplex WLB. Default: false. Recommended: true (to enable WLB).
enableConnectionConcentrator — Enables the connection concentrator. Default: true when enableSysplexWLB is true. Recommended: true.
maxTransportObjects — Specifies the maximum number of transports per DataSource object. Default: -1 (limited by db2.jcc.maxTransportObjects). Recommended: 50-100 range.
db2.jcc.minTransportObjects — Specifies the minimum number of transports in the global transport pool. Default: 0 (no global transport pool). Recommended: 25-50 range (about half of maxTransportObjects).
db2.jcc.maxTransportObjects — Specifies the maximum number of transports in the global transport pool. Default: -1 (implies unlimited). Recommended: 1000.
db2.jcc.maxTransportObjectIdleTime — Specifies the time in seconds that an unused transport object stays in the global transport object pool before it can be deleted from the pool. Transport objects are used for workload balancing. Default: 60. Recommended: greater than 0.
db2.jcc.maxTransportObjectWaitTime — Specifies the maximum amount of time in seconds that an application waits for a transport object if the db2.jcc.maxTransportObjects value has been reached. Default: -1 (implies unlimited). Recommended: 30.
enableSeamlessFailover — Enables seamless automatic client reroute with failover. Default: when DB2 for z/OS is the server and enableSysplexWLB is true, enableSeamlessFailover is automatically enabled. Recommended: DB2BaseDataSource.YES (optional).
Note: Recommended values may vary depending on your installation and how your
applications are implemented.
Setting the Type 4 driver configuration properties
Prepare the configuration file using your preferred editor. Example 6-6 shows the contents of
the configuration file.
Example 6-6 Example of configuration properties file
db2.jcc.minTransportObjects=50
db2.jcc.maxTransportObjects=1000
db2.jcc.maxTransportObjectWaitTime=30
db2.jcc.dumpPool=0
db2.jcc.dumpPoolStatisticsOnSchedule=60
db2.jcc.dumpPoolStatisticsOnScheduleFile=/home/db2/logs/poolstats
Set your configuration file through JVM properties. For WebSphere Application Server environments, add a new JVM property by navigating to Application servers -> <your application server> -> Process Definition -> Java Virtual Machine -> Custom Properties. Figure 6-8 shows an example that sets a configuration file named jcc.properties.
Figure 6-8 Setting Type 4 driver configuration properties file to WebSphere Application Server
Note: If you are configuring the global transport object pool monitor, be sure you have sufficient authority to access the file and directory. With the settings in this example, the log is written every minute. Make sure you have sufficient space, and clean up frequently.
.NET Provider/CLI Driver
For non-Java-based application servers, there are two possibilities. For a .NET or CLI environment, you can choose either of the following two packages with a DB2 Connect license:
DB2 Connect client/server
The non-Java-based IBM Data Server Driver or Client
Figure 6-9 on page 254 shows a sample configuration for a .NET Provider environment. The logical configuration is similar to the Java environment, except that sysplex WLB and other configuration options, such as the number of transports, are set in the data server driver configuration file.
Sysplex WLB happens within a process, where the application's logical connections and the transports are separated. The application server configuration for the .NET Provider is similar to WebSphere Application Server environments, and you can take full advantage of sysplex WLB.
Note: APAR PK41236 added sysplex WLB support for the Type 4 driver with the KEEPDYNAMIC(YES) specification to DB2 V8 and 9. Before this change, KEEPDYNAMIC(YES) prevented a thread from becoming inactive, which prevented DRDA AR clients from reusing connections.
To implement this change, the server returns information about transaction boundaries to tell the Type 4 driver that, if a disconnect happens after COMMIT/ROLLBACK, the client will not be able to reconnect to the original server. If there are no held cursors, declared global temporary tables, and so on, and the only thing preventing clients from reusing connections was KEEPDYNAMIC(YES), no errors are returned and the SQL statement flows to a different DB2 data sharing member, even if some statements need to be re-prepared to execute. In the case where the client does not get a disconnect failure, the behavior does not change, so users keep the benefit of KEEPDYNAMIC(YES).
The Type 4 driver (version 3.51.x/4.1.x or later) supports enabling the data source properties "keepDynamic" and "enableSysplexWLB" for connections to a DB2 for z/OS data sharing group. When both properties are enabled, the changes described here take effect.
Note: If you installed the non-Java-based Data Server Driver, you do not have a database directory, so configure the dsn alias in your data server driver configuration file, db2dsdriver.cfg. For clients with a database directory, such as DB2 Connect server/client, the db2dsdriver.cfg file is not used for alias lookup. Optionally, you can also give server information to your application through connection attributes, similar to the Type 4 driver configuration.
Figure 6-9 Non-Java-based application server scenario configuration using .NET
Transaction pooling and seamlessACR should always be enabled when using sysplex WLB. Table 6-4 lists the recommended settings for the highest availability. These parameters should be passed to the non-Java-based Data Server Driver through the configuration file db2dsdriver.cfg.
Table 6-4 Recommended settings for the non-Java-based IBM Data Server Driver
[Figure 6-9 shows a non-Java-based application server (IIS) using the .NET Provider: application threads obtain logical connections from a connection pool inside the application process, transport objects connect to members D9C1 (9.12.4.103) and D9C2 (9.12.4.104) behind the distributed DVIPA 9.12.4.102 on port 38320, and a db2dsdriver.cfg file defines the DB9C dsn alias with Authentication=Server_encrypt_aes, enableWLB=true, maxTransports=5, and enableACR=true.]
enableWLB — Enables sysplex WLB. Default: false. Recommended: Set to true to enable WLB.
maxTransports — Specifies the maximum number of transports in the transport pool. Default: -1 (implies unlimited). Recommended: Start from the 50-100 range.
maxTransportIdleTime — Specifies the maximum elapsed time in seconds before an idle transport is dropped. Default: 600. Recommended: 600.
maxTransportWaitTime — Specifies the number of seconds that the client waits for a transport to become available. Default: -1 (unlimited). Recommended: 30.
maxRefreshInterval — Specifies the maximum elapsed time in seconds before the server list is refreshed. Default: 30. Recommended: 30.
enableACR — Enables automatic client reroute. Default: true when enableWLB is true. Recommended: true (optional).
enableSeamlessACR — Enables seamless automatic client reroute with failover. Default: when DB2 for z/OS is the server and enableACR is true, enableSeamlessACR is automatically enabled. Recommended: true (optional).
Example 6-7 shows a configuration file for the non-Java-based IBM Data Server Driver.
Example 6-7 Sample setting for db2dsdriver.cfg
<configuration>
<DSN_Collection>
<dsn alias="DB9C" name="DB9C" host="wtsc63.itso.ibm.com" port="38320">
<parameter name="Authentication" value="Server_encrypt_aes"/>
</dsn>
</DSN_Collection>
<databases>
<database name="DB9C" host="wtsc63.itso.ibm.com" port="38320">
<parameter name="CurrentSchema" value="PAOLOR7"/>
<parameter name="DisableAutoCommit" value="1"/>
<parameter name="ProgramName" value="PID"/>
<WLB>
<parameter name="enableWLB" value="true"/>
<parameter name="maxTransports" value="100"/>
<parameter name="maxTransportWaitTime" value="30"/>
<parameter name="maxTransportIdleTime" value="600"/>
<parameter name="maxRefreshInterval" value="30"/>
</WLB>
<ACR>
<parameter name="enableACR" value="true"/>
</ACR>
</database>
</databases>
<parameters>
<parameter name="CommProtocol" value="TCPIP"/>
</parameters>
</configuration>
The following applies to clients with a database directory, such as DB2 Connect server/client
connecting to DB2 for z/OS:
The db2dsdriver.cfg file will not be used for alias lookup. You should catalog your
database using the catalog command before making a connection or should provide the
connection attributes in the applications.
To enable sysplex WLB, you need to have ,,,,,SYSPLEX parameter specified in the
Database Connection Services directory.
6.2.3 Distributed three-tier DRDA clients
This section gives practical recommendations for a distributed three-tier DRDA configuration.
DB2 for z/OS server resilience
Regarding the DB2 for z/OS server, use a DB2 data sharing group with Sysplex Distributor
and DVIPA, to provide maximum resilience of the DB2 for z/OS server.
Additionally, implement the connection concentrator (transaction pooling) in DB2 Connect.
Network resilience
Regarding the TCP/IP network outside the sysplex, alternate routing should be configured in
the TCP/IP network to provide alternate routes to the DB2 server.
Tip: If you are migrating from DB2 Connect server/client to the non-Java-based IBM Data
Server Driver or Client, you can use the following command to read the current database
directory information to create the basic configuration file. You will need to customize the
configuration file to enable the workload balancing functions.
$ db2dsdcfgfill
SQL01535I The db2dsdcfgfill utility successfully created the db2dsdriver.cfg
configuration file.
,,,,,SYSPLEX keyword is equivalent to the enableWLB parameter that should be set
through the db2dsdriver.cfg file. This parameter will not be set automatically through the
db2dsdcfgfill command. To enable Sysplex Workload balancing, you need to set both the
enableWLB and enableACR parameters, which is equivalent to enabling the connection
concentrator function in DB2 Connect server.
For Windows platform users, the configuration file is created in the cfg directory under the instance home.
When your DB2 profile registry looks as follows:
C:\>db2set -all
[e] DB2PATH=C:\Program Files\IBM\SQLLIB
[i] DB2INSTPROF=C:\Documents and Settings\All Users\Application
Data\IBM\DB2\DB2COPY3
[i] DB2COMM=TCPIP
[g] DB2_EXTSECURITY=NO
[g] DB2SYSTEM=LENOVO-B6AFDE0A
[g] DB2PATH=C:\Program Files\IBM\SQLLIB
[g] DB2INSTDEF=DB2
[g] DB2ADMINSERVER=DB2DAS03
The configuration file is created at the following location:
C:\Documents and Settings\All Users\Application Data\IBM\DB2\DB2COPY3\DB2\cfg
If you do not have the cfg directory under your instance home, the db2dsdcfgfill command
will fail. Create the directory before executing the command.
DB2 Connect server (gateway configuration)
When you are using application servers, there should be no need to add a DB2 Connect server to your installation, because it just introduces an additional single point of failure in your system.
When client applications (such as MS Excel, and so forth) come in from each user's notebook computer, a DB2 Connect server is still an option for controlling traffic, limiting the number of actual connections to DB2 for z/OS in conjunction with the connection concentrator for transaction pooling.
Figure 6-10 shows how DB2 Connect server acts as a gateway when applications connect to
DB2 for z/OS data sharing group.
Figure 6-10 Distributed three tier DRDA clients scenario
When using a DB2 Connect server, you need to catalog the database, node, and DCS directory. For example, catalog the database using the distributed DVIPA (9.12.4.102) to connect to the DB2 data sharing group. The initial connection is made as long as one DB2 data sharing member is available.
Catalog a connection to the DB2 data sharing group as follows:
catalog tcpip node ZOSNODE remote 9.12.4.102 server 38320
catalog database DB9C as DB9C at node ZOSNODE authentication server_encrypt
Cataloging the DCS directory is optional, because Sysplex support is set by default. If you cataloged your database directory using the location name, you do not need to catalog the DCS directory. If you used a location name longer than 8 bytes, you need to catalog the DCS directory with the ,,,,,SYSPLEX option:
catalog DCS database DB9C as DB9C parms ,,,,,SYSPLEX
Table 6-5 on page 258 lists recommended settings for common environments. You can start
from the recommendation and tune your settings as you monitor your environment.
[Figure 6-10 shows the three-tier configuration: applications connect through a DB2 Connect server, where logical agents (LA) and coordinator agents (CA) provide connection concentration, to data sharing members D9C1 (9.12.4.103, RESPORT 38321) and D9C2 (9.12.4.104, RESPORT 38322) on SC63 and SC64 behind the distributed DVIPA 9.12.4.102 on port 38320, with the server list (member addresses, port, and weights) returned to the DB2 Connect server.]
Table 6-5 Recommended settings for DB2 Connect server configuration
parms ,,,,,SYSPLEX — Enables sysplex WLB. Default: true (if the DCS directory entry is not defined). Recommended: Set parms in the sixth field to enable sysplex WLB.
max_coordagents — Specifies the maximum number of agents (transports) in the agent pool. Default: AUTOMATIC. Recommended: 100.
max_connections — Specifies the maximum number of connections allowed to connect to the DB2 Connect server. Default: AUTOMATIC. Recommended: 101 (max_coordagents + 1), to turn on the connection concentrator.
num_poolagents — Specifies the maximum size of the idle agent pool. Default: 100, AUTOMATIC. Recommended: 50.
num_initagents — Specifies the initial number of idle agents that are created in the agent pool at DB2START time. Default: 0. Recommended: 50.
DB2_MAX_CLIENT_CONNRETRIES — The maximum number of connection retries attempted by automatic client reroute. Default: 30 (DB2_CONNRETRIES_INTERVAL not set), 10 (DB2_CONNRETRIES_INTERVAL set). Recommended: not set.
DB2_CONNRETRIES_INTERVAL — The sleep time between consecutive connection retries, in seconds. Default: 30. Recommended: not set.
DB2TCP_CLIENT_CONTIMEOUT — Specifies the number of seconds a client waits for the completion of a TCP/IP connect operation. Recommended: 10.
Note: Settings are based on DB2 Connect V9.5 FP3.

Sample settings for the connection concentrator and ACR, using the recommendations shown in Table 6-5, are as follows:
$ db2 update dbm cfg using MAX_CONNECTIONS 101 MAX_COORDAGENTS 100 MAXAGENTS 100
NUM_POOLAGENTS 50 NUM_INITAGENTS 50
$ db2set DB2TCP_CLIENT_CONTIMEOUT=10

Tip: If you are running an application on the same host as the DB2 Connect server, and if you want the application to go through DB2 Connect, you need to set the DB2 registry variable:
db2set DB2CONNECT_IN_APP_PROCESS=NO

DB2 Connect server failover (DRDA three-tier client resilience)
Whenever a DB2 Connect server crashes, all clients connected through that DB2 Connect server to DB2 for z/OS receive a communications error, which terminates the connection, resulting in an application error. In cases where availability is important, you should have implemented either a resilient setup or the ability to fail the server over to a standby or
backup node. In either case, the DB2 client code attempts to re-establish the connection to
the original server, which might be running on a failover node, or to a new server.
You can set the alternate server for a database on the DB2 Connect server by using the UPDATE ALTERNATE SERVER FOR DATABASE command as follows:
update alternate server for database altserver using hostname host2 port 54320
The following environment variables can be used to influence how client reroute behaves for
client applications.
DB2TCP_CLIENT_RCVTIMEOUT
DB2_MAX_CLIENT_CONNRETRIES
Refer to DB2 9.1 Administration Guide: Implementation, SC10-4221 for a description of these
environment variables.
6.3 DB2 failover scenario with and without Sysplex Distributor
The role of z/OS Sysplex Distributor is to increase the resilience and recovery of applications
in System z parallel sysplex. DB2 for z/OS takes advantage of the Sysplex Distributor, as
discussed earlier in this chapter.
In this section we show some practical scenarios of the failover resilience of DRDA
connectivity to a DB2 for z/OS data sharing group, with or without Sysplex Distributor. The
intent is to demonstrate the effectiveness of Sysplex Distributor.
6.3.1 Configuration for scenarios
The configuration used for testing failover scenarios with Sysplex Distributor is represented in Figure 6-11 on page 260. Tests were done with and without Sysplex Distributor. The case without Sysplex Distributor was performed by specifying the member-specific VIPA address at the DRDA AR client.
DB2 for z/OS data sharing group is configured with/without Sysplex Distributor.
Sysplex Distributor: Configured with Distributed DVIPA with 9.12.4.102
Two members: D9C1 and D9C2 at 9.12.4.103 and 9.12.4.104, respectively
Location name: DB9C
Four DRDA Application Requestors are configured.
DB2CCL: DB2 Connect Personal Edition V9.5 FP3 for Windows (no connection
concentrator)
DB2CSV: DB2 Connect Enterprise Edition V9.5 FP3 for AIX (direct connection with CLI
driver)
JCC: The Type 4 driver, sysplex WLB
DB2zOS driver: DB2 9 for z/OS
Restriction: If you configured two-phase commit processing using the SPM log of the DB2
Connect server, failover nodes need to take over the failed SPM log to recover all the
transactions. We recommend not using the SPM log (two-phase commit processing) when
DB2 Connect server client reroute is configured.
Figure 6-11 Failover test scenario with Sysplex Distributor
The failure that is tested is an outage of D9C1, which is one member of the data sharing
group. The outage is achieved by cancelling the IRLM with the following SDSF command, or by
stopping DDF with MODE(FORCE) at the DRDA AS site:
/F D9C1IRLM,ABEND,NODUMP
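The DDF alternative can be achieved with the STOP DDF command, for example (the -D9C1
command prefix for member D9C1 is an assumption of this sketch):
-D9C1 STOP DDF MODE(FORCE)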
When D9C1 is cancelled or DDF stopped, any connection to D9C2 continues unaffected. The
purpose of these tests is to confirm what happens to an application that is connected (or
trying to connect) to D9C1.
6.3.2 Application states for scenarios
In order to test failover resilience, it is necessary to build application test cases that represent
all the possible states that an application can be in when a failure occurs. There are five
possible application connection states that can be in effect when D9C1 fails. The application
states (AS1 to AS5) used are:
AS1
The application is not connected to DB2 when member D9C1 fails. The requester has not
established a connection to D9C1, and hence does not have member information. The
application then issues CONNECT TO DB9C.
Note: Test scenarios are all configured with the connection concentrator, except for the
DB2CCL scenario. The product recommendation is to configure the connection concentrator if
you are using sysplex awareness; the Type 4 driver and the CLI driver enable the connection
concentrator by default when you configure sysplex awareness.
You get sysplex WLB capability by default without the need to catalog a DCS directory. If you
do catalog the DCS directory, you need to configure sysplex WLB in it.
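As a sketch, a requester that catalogs the DCS directory could enable sysplex WLB as follows
(node and alias names are illustrative; the address and port are the group DVIPA and DRDA
port used in this scenario):
db2 catalog tcpip node db9cnode remote 9.12.4.102 server 38320
db2 catalog dcs db db9c as db9c parms ',,,,,SYSPLEX'
db2 catalog db db9c as db9c at node db9cnode authentication dcs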
[Figure 6-11 shows the DB2 data sharing group: member D9C1 (DDF on SC63, VIPA 9.12.4.103,
PORT 38320, RESPORT 38321) and member D9C2 (DDF on SC64, VIPA 9.12.4.104, PORT 38320,
RESPORT 38322), coupled through the CF and managed by WLM, reachable through the Sysplex
Distributor DVIPA 9.12.4.102 (PORT 38320); the application connects through the Type 4 driver,
DS driver, or DB2 Connect server over TCP/IP.]
AS2
The application is not connected to DB2 when D9C1 fails. A DRDA client has previously
established a connection to D9C1, which has returned the member information to a client.
The client then issues CONNECT TO DB9C.
AS3
The DRDA client is connected to D9C1, but no unit of work is in progress, when D9C1
fails. The application then issues another SQL statement, PREPARE (followed by
DECLARE, OPEN, FETCH, CLOSE, COMMIT).
AS4
The DRDA client is connected to D9C1, and has just issued a COMMIT, but has a
CURSOR WITH HOLD open when D9C1 fails. The application then issues another SQL
call against the held cursor:
FETCH
CLOSE
COMMIT
AS5
The client is connected to D9C1 and has an in-flight unit of work (an insert, plus an open
cursor with one row fetched) when D9C1 fails. The application then issues further SQL
statements:
INSERT
COMMIT
For the application that runs with DB2 for z/OS as the DRDA AR, the in-flight unit of work is
an open WITH HOLD cursor with one row fetched when D9C1 fails. The application then issues
other SQL statements:
FETCH
CLOSE
COMMIT
PREPARE
...
6.3.3 Results without Sysplex Distributor
Table 6-6 shows a matrix of the test results without Sysplex Distributor. The table lists the
SQLCODES for the SQL statements attempted after member D9C1 failed.
Table 6-6 Test results after D9C1 failed without Sysplex Distributor
Application state   DB2CCL           DB2CSV           JCC              DB2zOS
AS1                 SQLCODE -30081   SQLCODE -30081   SQLCODE -4499    SQLCODE -30081
AS2                 SQLCODE 0        SQLCODE 0        SQLCODE 0        SQLCODE 0
AS3                 SQLCODE -30081   SQLCODE 0        SQLCODE 0        SQLCODE -30081
AS4                 Fetch: 0         Fetch: 0         Fetch: 0         Fetch: 0
                    Close: -1024     Close: -30108    Close: -30108    Close: -918
AS5                 Insert: -30081   Insert: -30108   Insert: -30108   Fetch: 0
                    Commit: -1024    Commit: 0        Commit: 0        Close: -918
Application state 1
The application is not connected when member D9C1 fails. The DRDA AR client has not
established a connection to D9C1, and hence does not have member information. The
application then issues CONNECT TO DB9C.
For all DRDA client scenarios
Without Sysplex Distributor, a DB2 Connect configuration must have been started, and
must have made at least one successful connection to the DB2 data sharing group, to
have the member list information that allows connections to be routed to alternate
members of the data sharing group.
Application state 1 defines the case where the DRDA client had not been started before the
D9C1 failure; hence, any SQL CONNECT from a (cold-started) DRDA client fails with the
generic connection failure SQLCODE -30081.
For DB2 for z/OS
DB2 for z/OS, acting as a DRDA AR, also makes use of the member list information. As
with DRDA client in application state 1, and without Sysplex Distributor, DB9A (DRDA AR)
has not retrieved the member list information, and any SQL CONNECT fails with
SQLCODE -30081.
Application state 2
The application is not connected to DB2 when D9C1 fails. DB2 Connect has previously
established a connection to D9C1, which has returned the member information to the DRDA AR
client. The application then issues CONNECT TO DB9C.
For all DRDA client scenarios
Application state 2 defines the case where DRDA AR client had been previously started,
and connected to D9C1 before the D9C1 failure. In this case, DRDA AR client will have
retrieved the member list information, and can reroute the connection request to D9C2
with SQLCODE 0.
For DB2 for z/OS
Application state 2 defines the case where DB9A client had been previously started and
connected to D9C1 before the D9C1 failure. In this case, DB9A will have retrieved the
member list information, and reroutes the connection request to D9C2 with SQLCODE 0.
Tip: SQLCODE -30108 means that the statement failed because the member to which you were
connected is no longer available, and sysplex awareness performed a client reroute. If you
receive -30108, the executing unit of work has been rolled back, but you can rerun it without
reconnecting, unless locks from the previous execution remain as retained locks.
In our test scenario with seamless failover, no error is returned to the application.
Application state 3
The application is connected to D9C1, but has no unit of work in progress when D9C1 fails.
The application then issues PREPARE (followed by DECLARE, OPEN, FETCH, CLOSE,
COMMIT).
For DB2 Connect client (no connection concentrator)
Once connected to member D9C1, even if no unit of work is in-flight, the connection is
exclusively tied to member D9C1. If member D9C1 fails, then the connection is lost. The
application does not get any notification of the lost connection until the next time it tries to
issue an SQL statement. In this test, it attempts an SQL SELECT, and receives an
SQLCODE -30081.
For all other DRDA AR client scenarios
With transaction pooling, the connection is transferred automatically to D9C2 because the
connection is disassociated from transports, and is not exclusively tied to any individual
member. The new unit of work is started on a DBAT in D9C2, totally transparent to the
program.
For DB2 for z/OS
DB2 for z/OS as a DRDA AR does not have the transaction pooling functionality that
allows a second unit of work to be rerouted to an alternate DB2 for z/OS sysplex member.
DB2 returns an SQLCODE -30081.
Application state 4
The application is connected to DB2, and has just issued a COMMIT, but has a CURSOR
WITH HOLD open when D9C1 fails. The application then issues the following commands:
FETCH
CLOSE
COMMIT
PREPARE (followed by DECLARE, OPEN, FETCH, CLOSE, COMMIT)
For DB2 Connect client (no connection concentrator)
The FETCH succeeds because the blocked cursor is cached in the DB2 Connect block
cache (as set by RQRIOBLK).
The CLOSE and COMMIT both fail with SQLCODE -1224. This SQLCODE is not as
helpful as it could be. It is chosen by DB2 Connect because the connection agent is
already established and working with the cursor, but can no longer access the cursor.
The PREPARE is not related to the existing cursor, and DB2 Connect returns a more
helpful -1024 (a database connection does not exist).
For all other DRDA AR client scenarios
The FETCH succeeds because the blocked cursor is cached in the DB2 Connect block
cache (as set by RQRIOBLK).
The CLOSE fails with SQLCODE -904, resource unavailable.
The COMMIT and PREPARE succeed because the automatic transfer to D9C2 has
completed successfully, and further SQL statements can now be executed.
For DB2 for z/OS
The FETCH succeeds because the blocked cursor is cached by DB2 for z/OS DRDA
blocking.
The PREPARE fails with SQLCODE -918. The SQL statement cannot be executed
because a connection has been lost.
Application state 5
The application is connected to DB2, and has an in-flight unit of work (select and update
scenarios are both tested), when D9C1 fails. The application then issues the following
commands:
INSERT
COMMIT
For DB2 Connect client (no connection concentrator)
The INSERT fails with SQLCODE -30081.
The COMMIT returns -1024. A database connection does not exist.
For all other DRDA client scenarios
The INSERT fails with SQLCODE -30108: the connection failed, but the client re-established it.
The COMMIT succeeds because the connection was automatically rerouted to D9C2 and
completed successfully, and further SQL statements can now be executed.
For DB2 for z/OS
The FETCH succeeds, because the blocked cursor is cached by DB2 for z/OS DRDA
blocking.
The PREPARE fails with SQLCODE -918. The SQL statement cannot be executed
because a connection has been lost.
6.3.4 Results with Sysplex Distributor
The execution results with Sysplex Distributor are summarized in a matrix of SQLCODEs, as
shown in Table 6-7.
Table 6-7 Test results after D9C1 failed with Sysplex Distributor
Application state   DB2CCL           DB2CSV           JCC              DB2zOS
AS1                 SQLCODE 0        SQLCODE 0        SQLCODE 0        SQLCODE 0
AS2                 SQLCODE 0        SQLCODE 0        SQLCODE 0        SQLCODE 0
AS3                 SQLCODE -30081   SQLCODE 0        SQLCODE 0        SQLCODE -30081
AS4                 Fetch: 0         Fetch: 0         Fetch: 0         Fetch: 0
                    Close: -1024     Close: -30108    Close: -30108    Close: -918
AS5                 Insert: -30081   Insert: -30108   Insert: -30108   Fetch: 0
                    Commit: 0        Commit: 0        Commit: 0        Close: -918

The table clearly shows one difference with respect to resilience: if DB2 Connect has not
been started, connections that are cataloged against the failed data sharing group member
now succeed.
6.4 Migration and coexistence
DB2 Connect server, the Type 4 driver, and the CLI driver or client all provide sysplex WLB,
which supports infrastructure fault tolerance. As described throughout this chapter, DB2
Connect server spreads the workload among the DB2 for z/OS subsystems that are members of the
DB2 data sharing group.
However, in a DB2 for z/OS V7 and V8 coexistence environment, the Type 4 driver and DB2
Connect server connection concentrator connections are downgraded, as described in APAR
PK05198. For example, when DB2 Connect server makes a connection to a DB2 for z/OS V8 member,
connections are only balanced across the DB2 for z/OS V8 members; the DB2 for z/OS V7 members
are not used.
A similar case was reported for migration to New Function Mode in APAR PK06697: connections
cannot be routed to members that are in a different mode from the one on which the original
connection to the DB2 data sharing group was created. This support improves DB2 data sharing
migration.
In conjunction with changes made in DB2 Connect server FixPack 10, DB2 for z/OS V8 provides a
more effective approach. If DB2 Connect server opens a connection to a DB2 for z/OS V8 member
in Compatibility Mode, the member now returns connection information indicating only V7
functionality.
Note: This support requires APARs PK05198 and PK06697, installed in conjunction with
DB2 Connect V8 FixPack 10 or later.
DB2 Connect V8.2 is no longer in service as of April 30, 2009. All other descriptions in this
book are related to DB2 Connect V9.5 FixPack 3 or later.
Part 4 Performance and
problem determination
This part contains the following chapters:
Chapter 7, Performance analysis on page 269
Chapter 8, Problem determination on page 323
Chapter 7. Performance analysis
In a distributed architecture environment, having knowledge of the factors affecting the
performance of the application is crucial. By identifying these factors, you can plan to tune
them during the system and application design.
Performance tuning is coordinated work among many functional teams in an organization (such
as the database administration group, system programmers, network engineers, and
application teams). Because the main purpose of this chapter is to highlight the factors that
affect performance, we have attempted to clarify these functional areas so that the teams can
work in parallel on performance issues.
Traces are important for performance analysis as they can provide the information that is
required for understanding and creating a performance profile of a given application or
infrastructure. Traces are described in Chapter 8, Problem determination on page 323.
This chapter contains the following sections:
Application flow in distributed environment on page 270
System topics on page 271
Checking settings in a distributed environment on page 294
Obtaining information about the host configuration on page 307
7.1 Application flow in distributed environment
Organizations today serve customers in different parts of the world, and even a single
organization can have branches in several cities and countries. The applications built to
support their business need to perform to specification and scale to a large number of users.
In Chapter 2, Distributed database configurations on page 33, we described the most
commonly used configurations for implementing DB2 for z/OS in a distributed database
business environment. In this chapter we consider 2- and 3-tier configurations.
In configurations where applications connect directly to the server, such as clients with Data
Server Drivers, a gateway is not required between the DRDA requester and the server. This
constitutes a 2-tier configuration. See Figure 7-1.
Figure 7-1 2 tier architecture representation
Figure 7-2 illustrates a 3-tier configuration. The clients can be thin or fat. The clients connect
to middle-tier servers, namely, Web servers and application servers. The function of the Web
servers is to handle the Hypertext Transfer Protocol (HTTP) requests and redirect them to
application servers. The application servers handle the business process logic.
In a DRDA environment, to get the data from the host, often communication gateways
coordinate with the host for the retrieval and update of information. In this scenario, the
middle-tier communication server often used is DB2 Connect Enterprise Edition.
Figure 7-2 3-tier architecture representation
7.2 System topics
In this section we discuss the following topics:
Database Access Threads
Accumulation of DDF accounting records
zIIP
Using RMF to monitor distributed data
7.2.1 Database Access Threads
Because of the impact of Database Access Threads (DBATs)¹ on performance and accounting, the
general recommendation is to use INACTIVE MODE threads instead of ACTIVE MODE threads.
Table 7-1 lists the most relevant DSNZPARM parameters affecting DBATs. Refer to
Chapter 3, Installation and configuration on page 69 for more information about
DSNZPARMs.
Table 7-1 Summary of DSNZPARM parameters affecting DBATs
The MAX USERS field on panel DSNTIPE represents the maximum number of allied threads,
and the MAX REMOTE ACTIVE field on panel DSNTIPE represents the maximum number of
database access threads. Together, the values you specify for these fields cannot exceed
1999.
In the MAX REMOTE CONNECTED field of panel DSNTIPE, you can specify up to 150,000
as the maximum number of remote connections that can concurrently exist within
DB2. This upper limit is only obtained if you specify the recommended value INACTIVE for the
DDF THREADS field of installation panel DSNTIPR.
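For reference, these limits are DSNZPARMs set in job DSNTIJUZ; a fragment might look like the
following sketch (the values match the test scenario shown in Example 7-1, and the
parameter-to-macro placement should be verified against your own DSNTIJUZ):
DSN6SYSP CONDBAT=300,                                                  X
         MAXDBAT=100,                                                  X
         ...
DSN6FAC  CMTSTAT=INACTIVE,                                             X
         ...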
Consider a scenario where the MAXDBAT is reached and DBATs start to queue. This can be
simulated, for example, by starting many requesters in a remote server against a DB2 for
z/OS where MAXDBAT was set to a convenient value of 100.
Example 7-1 on page 272 displays the output of the -DISPLAY DDF DETAIL command in such a
scenario.
¹ DBATs are introduced in 1.7, Connection pooling on page 22.
Parameter   Description
CMTSTAT     ACTIVE or INACTIVE. Governs whether DBATs/connections remain active across commits.
MAXDBAT     Maximum number of concurrent DBATs (<=1999), or of connections if CMTSTAT=ACTIVE.
CONDBAT     Maximum number of concurrent connections (<=150000).
POOLINAC    POOL THREAD TIMEOUT: the approximate time, in seconds, that a database access
            thread (DBAT) can remain idle in the pool before it is terminated.
IDTHTOIN    Idle thread timeout interval.
Example 7-1 -DISPLAY DDF DETAIL
DSNL080I -DB9A DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB9A USIBMSC.SCPDB9A -NONE
DSNL084I TCPPORT=12347 SECPORT=12349 RESPORT=12348 IPNAME=-NONE
DSNL085I IPADDR=::9.12.6.70
DSNL086I SQL DOMAIN=wtsc63.itso.ibm.com
DSNL086I RESYNC DOMAIN=wtsc63.itso.ibm.com
DSNL090I DT=I CONDBAT= 300 MDBAT= 100
DSNL092I ADBAT= 100 QUEDBAT= 695 INADBAT= 0 CONQUED= 28
DSNL093I DSCDBAT= 0 INACONN= 3
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
Note the following fields in Example 7-1:
MDBAT
Maximum number of database access threads as determined by the MAXDBAT
DSNZPARM parameter. This effectively determines the maximum number of concurrent
active database access threads that could potentially be executing SQL.
QUEDBAT
This value reflects a cumulative counter that is always incremented when the MDBAT limit,
described above, has been reached. The QUEDBAT value is equal to the cumulative
number of newly attached connections, or new work on inactive connections (see
DSNL093I INACONN), or new work on inactive DBATs (see INADBAT) that had to wait for
a DBAT to become available to service the new work. This value is identical to the
QDSTQDBT statistical value and a non-zero value suggests that performance and
throughput may have been affected. See ADBAT and DSNL090I MDBAT for additional
information. Also note that the QUEDBAT counter is only reset at restart for this DB2
subsystem.
So MAXDBAT has been reached and DBATs start to queue. Example 7-2 shows an
OMEGAMON XE for DB2 Performance Expert on z/OS (OMEGAMON PE) Statistics Trace
report short created using the SMF records cuts during this scenario. Note the field DBAT
QUEUED. If the limit set by MAXDBAT parameter is reached, remote SQL requests are
queued until a DBAT can be created. The number of times queuing occurred is shown by the
field DBAT QUEUED.
Example 7-2 OM/PE Statistics Report Short showing DBAT QUEUED-MAXIMUM ACTIVE > 0
1 LOCATION: DB9A OMEGAMON XE FOR DB2 PERFORMANCE EXPERT (V4) PAGE: 3-62
GROUP: N/P STATISTICS TRACE - SHORT REQUESTED FROM: NOT SPECIFIED
MEMBER: N/P TO: NOT SPECIFIED
SUBSYSTEM: DB9A ACTUAL FROM: 04/23/09 21:09:30.76
DB2 VERSION: V9
---- HIGHLIGHTS --------------------------------------------------------------------------------------------------------------------
BEGIN RECORD: 04/23/09 21:59:30.76 TOTAL THREADS : 0 AUTH SUCC.W/OUT CATALOG: 0 DBAT QUEUED: 649
END RECORD : 04/23/09 22:04:30.76 TOTAL COMMITS : 620 BUFF.UPDT/PAGES WRITTEN: 6.51 DB2 COMMAND: 0
ELAPSED TIME: 5:00.000030 INCREMENTAL BINDS: 0 PAGES WRITTEN/WRITE I/O: 1.58 TOTAL API : 48
Example 7-3 on page 273 shows an OMEGAMON PE Statistics Report Long. This report
provides more detailed information.
Example 7-3 OM/PE Statistics Report Long showing DBAT QUEUED-MAXIMUM ACTIVE > 0
1 LOCATION: DB9A OMEGAMON XE FOR DB2 PERFORMANCE EXPERT (V4) PAGE: 2-9
GROUP: N/P STATISTICS REPORT - LONG REQUESTED FROM: ALL 00:00:01.
MEMBER: N/P TO: DATES 23:55:23.
SUBSYSTEM: DB9A INTERVAL FROM: 04/27/09 14:34:28.
DB2 VERSION: V9 SCOPE: MEMBER TO: 04/27/09 14:39:28.
---- HIGHLIGHTS ----------------------------------------------------------------------------------------------------
INTERVAL START : 04/27/09 14:34:28.60 SAMPLING START: 04/27/09 14:34:28.60 TOTAL THREADS : 0.00
INTERVAL END : 04/27/09 14:39:28.60 SAMPLING END : 04/27/09 14:39:28.60 TOTAL COMMITS : 620.00
INTERVAL ELAPSED: 5:00.000008 OUTAGE ELAPSED: 0.000000 DATA SHARING MEMBER: N/A
GLOBAL DDF ACTIVITY QUANTITY /SECOND /THREAD /COMMIT QUERY PARALLELISM QUANTITY /SECOND /THREAD /COMM
--------------------------- -------- ------- ------- ------- --------------------------- -------- ------- ------- -----
DBAT QUEUED-MAXIMUM ACTIVE 638.00 2.13 N/C N/A MAX.DEGREE OF PARALLELISM 0.00 N/A N/A N
CONV.DEALLOC-MAX.CONNECTED 0.00 0.00 N/C N/A PARALLEL GROUPS EXECUTED 0.00 0.00 N/C 0.
COLD START CONNECTIONS 0.00 0.00 N/C 0.00 RAN AS PLANNED 0.00 0.00 N/C 0.
WARM START CONNECTIONS 0.00 0.00 N/C 0.00 RAN REDUCED 0.00 0.00 N/C 0.
RESYNCHRONIZATION ATTEMPTED 0.00 0.00 N/C 0.00 SEQUENTIAL-CURSOR 0.00 0.00 N/C 0.
RESYNCHRONIZATION SUCCEEDED 0.00 0.00 N/C 0.00 SEQUENTIAL-NO ESA 0.00 0.00 N/C 0.
CUR TYPE 1 INACTIVE DBATS 0.00 N/A N/A N/A SEQUENTIAL-NO BUFFER 0.00 0.00 N/C 0.
TYPE 1 INACTIVE DBATS HWM 1.00 N/A N/A N/A SEQUENTIAL-ENCLAVE SER. 0.00 0.00 N/C 0.
TYPE 1 CONNECTIONS TERMINAT 0.00 0.00 N/A N/A ONE DB2 - COORDINATOR = NO 0.00 0.00 N/C 0.
CUR TYPE 2 INACTIVE DBATS 53.00 N/A N/A N/A ONE DB2 - ISOLATION LEVEL 0.00 0.00 N/C 0.
TYPE 2 INACTIVE DBATS HWM 60.00 N/A N/A N/A ONE DB2 - DCL TTABLE 0.00 0.00 N/C 0.
ACC QUEUED TYPE 2 INACT THR 648.00 2.16 N/A N/A MEMBER SKIPPED (%) N/C
CUR QUEUED TYPE 2 INACT THR 50.00 N/A N/A N/A REFORM PARAL-CONFIG CHANGED 0.00 0.00 N/C 0.
QUEUED TYPE 2 INACT THR HWM 52.00 N/A N/A N/A REFORM PARAL-NO BUFFER 0.00 0.00 N/C 0.
CURRENT ACTIVE DBATS 100.00 N/A N/A N/A
ACTIVE DBATS HWM 100.00 N/A N/A N/A
TOTAL DBATS HWM 153.00 N/A N/A N/A
CURRENT DBATS NOT IN USE 0.00 N/A N/A N/A
DBATS NOT IN USE HWM 26.00 N/A N/A N/A
DBATS CREATED 0.00 N/A N/A N/A
POOL DBATS REUSED 659.00 N/A N/A N/A
The Global DDF Activity section of the OMEGAMON PE Statistics Report contains, among
others, the following fields of interest:
DBAT QUEUED-MAXIMUM ACTIVE
The number of times a DBAT was queued because it reached the DSNZPARM maximum
for active remote threads, MAXDBAT.
CONV.DEALLOC-MAX.CONNECTED
Number of conversations that were deallocated because the DSNZPARM limit was
reached for maximum remote connected threads, CONDBAT.
CUR TYPE 2 INACTIVE DBATS
The current number of inactive type 2 DBATs
TYPE 2 INACTIVE DBATS HWM
The maximum number of inactive type 2 DBATs - high-water mark.
ACC QUEUED TYPE 2 INACT THR
The queued receive requests for a type 2 inactive thread and the number of requests for
new connections received after the maximum number of remote DBATs was reached.
QUEUED TYPE 2 INACT THR HWM
The maximum number of queued RECEIVE requests for a type 2 inactive thread and the
number of requests for new connections received after the maximum number of remote
DBATs was reached.
ACTIVE DBATS HWM
The maximum number of active database access threads. This is a high-water mark.
TOTAL DBATS HWM
The maximum number of active and inactive database access threads.
This report contains many more fields of interest; you should get to know and monitor them.
For instance, if you detect more than 1% of DBAT queuing, you may consider increasing the
value of the MAXDBAT DSNZPARM parameter. However, for this particular parameter and
other examples, you need to consider the global system impact of changing the value. For
instance, when increasing a frequently hit MAXDBAT, you need to evaluate the impact on
storage and CPU utilization.
The IDTHTOIN (idle thread timeout) parameter represents the approximate time, in seconds,
that an active server thread should be allowed to remain idle before it is cancelled. After this
time is exhausted, the server thread is terminated to release resources that might affect other
threads.
This usually occurs for one of following reasons:
CMTSTAT=ACTIVE and a requester application or its user did not make a request to the
DB2 server for an extended period. This can happen, for example, during a lengthy user
absence. As a result, the server thread becomes susceptible to being cancelled because
of the timeout value.
CMTSTAT=INACTIVE and a requester application or its user did one of the following:
Failed to commit before an extended dormant period (such as user absence)
Committed before an extended dormant period (such as user absence), but database
resources are still held because of other existing conditions.
As a result, the server thread cannot be moved to the inactive state and becomes
susceptible to being cancelled because of the timeout value.
IDTHTOIN applies when CMTSTAT is set to INACTIVE. If an in-flight DBAT has not received a
message within the interval, it is cancelled. If a DBAT has been pooled, it is not idle and
the timer does not apply.
Example 7-4 shows the information received in the system log when a thread is cancelled
because of one of the reasons described above.
Example 7-4 Thread cancelled because of idle thread timeout threshold reached
DSNL027I -D9C2 SERVER DISTRIBUTED AGENT WITH 287
LUWID=USIBMSC.SCPDB9A.C418F1FEAD95=41
THREAD-INFO=PAOLOR4:*:*:*
RECEIVED ABEND=04E
FOR REASON=00D3003B
DSNL028I -D9C2 USIBMSC.SCPDB9A.C418F1FEAD95=41 288
ACCESSING DATA FOR
LOCATION ::9.12.6.70
IPADDR ::9.12.6.70
7.2.2 Accumulation of DDF accounting records
The DB2 subsystem parameter ACCUMACC controls whether and when DB2 accounting
data is accumulated by the user for DDF and RRSAF threads. Given its impact on the way
you consider and analyze accounting records, this parameter has special importance.
When roll up occurs, the values of some fields shown in accounting reports and traces lose
their meanings because of the accumulation. Thus, these fields are marked either N/P or N/C
in reports generated by OMEGAMON PE. Table 7-2 on page 275 shows the fields impacted
by roll up.
Table 7-2 Fields affected by roll up for distributed and parallel tasks
Field name Field short description
QPACAANM ACTIVITY NAME
QPACAANM_VAR ACTIVITY NAME
QPACARNA DB2 ENTRY/EXIT - AVG.DB2 ENTRY/EXIT
QPACASCH SCHEMA NAME
QPACASCH_VAR SCHEMA NAME
QPACCANM STORED PROCEDURE EVENTS
QPACCAST SCHED.PROCEDURE SUSP TIME
QPACCONT CONSISTENCY TOKEN
QPACEJST ENDING TCB CPU TIME
QPACSCB BEGINNING STORE CLOCK TIME
QPACSCE ENDING STORE CLOCK TIME
QPACSPNS STORED PROCEDURE EXECUTED
QPACSQLC SQL STATEMENTS
QPACUDNU UDF EVENTS
QPACUDST SCHED.UDF SUSP TIME
QTXAFLG1 RES LIMIT TYPE
QTXARLID RLF TABLE ID
QWACARNA DB2 ENTRY/EXIT EVENTS
QWACNID NETWORK ID VALUE
QWACSPCP STORED PROCEDURE TCB TIME
QWACTREE TRIG ELAP TIME UNDER ENCLAVE
QWACTRTE TRIG TCB TIME UNDER ENCLAVE
QXMIAP RID LIST SUCCESSFUL
QXNSMIAP RID LIST NOT USED-NO STORAGE
The acceptable values for ACCUMACC are as follows:
NO
DB2 writes an accounting record when a DDF thread ends, is made inactive, or does not go
inactive because it used a KEEPDYNAMIC(YES) package, or when a sign-on occurs for an RRSAF
thread.
n (a value from 2 to 65535)
DB2 writes an accounting record every n accounting intervals for a given user, where n is
the number that you specify for ACCUMACC. The default value is 10.
A parameter value of 2 or higher causes accounting records to roll up into a single record
every n occurrences of the user on the thread. Even if you specify a value between 2 and
65535, an accounting record might be written prior to the n-th accounting interval for a
given user in the following cases:
An internal storage threshold is reached for the accounting rollup blocks.
The staleness threshold is reached. The user has not rolled data into an internal block
in approximately 10 minutes and DB2 considers the data stale.
A user is identified by a concatenation of values, which is specified in the AGGREGATION
FIELDS field ACCUMUID. The acceptable values for ACCUMUID range from 0 to 17. The
acceptable values are listed in Table 7-3.
Table 7-3 ACCUMUID acceptable values
Value   Rollup criteria                                        Are X'00' or X'40' values considered for rollup?
0 End user ID, transaction name, and workstation name Yes
1 End user ID No
2 End user transaction name No
3 End user workstation name No
4 End user ID and transaction name Yes
5 End user ID and workstation name Yes
6 End user transaction name and workstation name Yes
7 End user ID, transaction name, and workstation name No
8 End user ID and transaction name No
9 End user ID and workstation name No
10 End user transaction name and workstation name No
11 End user ID, application name, and workstation name Yes
12 End user ID Yes
13 End user application name Yes
14 End user workstation name Yes
15 End user ID and application name Yes
16 End user ID and workstation name Yes
17 End user application name and workstation name Yes
DB2 writes individual accounting records for threads that do not meet the criteria for rollup.
These values can be set by DDF threads through Server Connect and Set Client calls, and by
RRSAF threads through the RRSAF SIGNON, AUTH SIGNON, and CONTEXT SIGNON functions.
For example, our test system was configured with the following parameters:
ACCUMUID=1
ACCUMACC=NO
CMTSTAT = INACTIVE
As ACCUMACC was set to NO, no grouping of accounting records takes place. Threads become
inactive after a successful commit or rollback. DB2 trace begins collecting this data at
successful thread allocation to DB2, and writes a completed record when the thread
terminates, when the thread becomes inactive, or when the authorization ID changes.
This scenario can cause the creation of a large number of SMF records, and you could decide
to implement the following parameters to reduce the stress on your SMF collection and
management system:
ACCUMUID=1
ACCUMACC=10
CMTSTAT = INACTIVE
These settings will start the rollup of accounting information every 10 transactions, where
each transaction is defined, for example, by a COMMIT. Because of ACCUMUID=1, the
records will be grouped by End User ID in this example, but as shown in Table 7-3 on
page 276, the grouping can also be made more granular. This table is only an extract of
grouping possibilities. For full capabilities, refer to DB2 Version 9.1 for z/OS Installation Guide,
GC18-9846. If the application has not set the 'End User ID' information, then rollup does not
occur.
As these two DSNZPARMs can be changed online, changes can be introduced using the
tracing parameters installation panel DSNTIPN or by editing a current copy of the installation
job DSNTIJUZ, without a DB2 outage.
Example 7-5 shows how we changed the DSN6SYSP section of our current DSNTIJUZ job to
activate this function.
Example 7-5 Activating accounting rollup
DSN6SYSP ACCUMACC=10, X
ACCUMUID=1, X
We activated the changes by issuing the SET SYSPARM RELOAD command as shown in
Example 7-6. The changes are effective immediately and you can verify the new
DSNZPARMs values using one of the methods described in 7.4.1, Verification of currently
active DSNZPARMs on page 307.
Example 7-6 SET SYSPARM RELOAD command example
-DB9A SET SYSPARM RELOAD
DSNZ006I -DB9A DSNZCMD1 SUBSYS DB9A SYSTEM 147
PARAMETERS LOAD MODULE NAME DSNZPARM IS BEING LOADED
DSN9022I -DB9A DSNZCMD0 'SET SYSPARM' NORMAL COMPLETION
DSNG002I -DB9A EDM RDS BELOW HAS AN 148
INITIAL SIZE 18575360
REQUESTED SIZE 18577408
AND AN ALLOCATED SIZE 23695360
DSNZ007I -DB9A DSNZCMD1 SUBSYS DB9A SYSTEM 149
PARAMETERS LOAD MODULE NAME DSNZPARM LOAD COMPLETE
See Figure 7-3 for an example of the effects of ACCUMACC on accounting records.
Figure 7-3 Effects of ACCUMACC on some of the fields of the accounting records
Table 7-4 shows some observed changes on fields of the table DB2PMFACCT_DDF.
Table 7-4 ACCUMACC and effects on some accounting fields
Field                           ACCUMACC=NO            ACCUMACC=n
CLIENT_ENDUSER (QWHCEUID)       Toto                   Toto
CLIENT_TRANSACTION (QWHCEUTX)   TotoTestApplication    ..........
PLAN_NAME (QWHCPLAN)            TotoTest               DISTSERV
MAINPACK (ADMAINPK)             SYSSH200               *ROLLUP*

As a consequence of rollup when ACCUMUID=1, you lose information that is more granular than
the user ID. This effect depends on the level of aggregation that is defined by the parameter
ACCUMUID.
For problem determination, it can be useful to deactivate the accounting rollup for a period
long enough to repeat the problem and collect more granular information.
As an example, consider the accounting graph shown in Figure 7-4 on page 279. As
documented in Table 7-4, the information that can help to detect the origin of the first CPU
usage spike is not retained when using accounting rollup.
[Chart for Figure 7-3: "Impacts of ACCUMACC on accounting", plotting Sum of CLASS2_IIP_CPU,
Sum of CLASS2_CPU_TOTAL (Total CPU axis), and # of COMMITs per record for the accounting
records collected during the test.]
Figure 7-4 Reverting to ACCUMACC=NO
You can achieve this by changing, assembling and activating the current DSNZPARM.
Example 7-7 shows how we changed the ACCUMACC parameter back to NO in our
DSNTIJUZ job. The changes are made active by the execution of a SET SYSPARM RELOAD
command.
Example 7-7 De-Activating accounting rollup
DSN6SYSP ACCUMACC=NO, X
ACCUMUID=1, X
Summary
The parameter ACCUMACC indicates if rollup of accounting traces takes place and indicates
the frequency of grouping records. The parameter ACCUMUID defines the aggregation fields.
Both can be updated online.
Rollup of accounting information can be useful for reducing the amount of SMF data created.
However, the summarized information may not be adequate for problem investigation, because it
can hide the effects of a badly behaving thread in the rollup traces. This section showed how
you can turn off accounting rollup during a problem investigation exercise. Note that if
rollup is in effect, package accounting information in accounting classes 7, 8, and 10 is not
collected for the involved threads.
7.2.3 zIIP
The System z9 and System z10 Integrated Information Processor (IBM zIIP) is a specialty
engine that runs eligible database workloads. The IBM zIIP is designed to help free-up
general computing capacity and lower software costs for select DB2 workloads such as
business intelligence (BI), enterprise resource planning (ERP), and customer relationship
management (CRM) on the mainframe.
Portions of DDF server thread processing, utility processing, and complex query parallel child
processing can be directed to an IBM zIIP. The amount of general-purpose processor savings
will vary based on the amount of workload executed by the zIIP, among other factors.
[Chart for Figure 7-4: "Impacts of ACCUMACC on accounting" after reverting to ACCUMACC=NO,
plotting Sum of CLASS2_IIP_CPU, Sum of CLASS2_CPU_TOTAL (Total CPU axis), and Sum of COMMIT
per record for the accounting records collected during the test.]
Refer to the IBM documentation for software and hardware requisites for zIIP at IBM System
z Integrated Information Processor (zIIP):
http://www.ibm.com/systems/z/advantages/ziip/index.html
The following processes can take advantage of IBM zIIP:
DDF server threads that process SQL requests from applications that access DB2 using
DRDA with TCP/IP.
Parallel child processes. A portion of each child process executes under a dependent
enclave SRB if it processes on behalf of an application that originated from an allied
address space, or under an independent enclave SRB if the processing is performed on
behalf of a remote application that accesses DB2 by TCP/IP. The enclave priority is
inherited from the invoking allied address space for a dependent enclave or from the main
DDF server thread enclave classification for an independent enclave.
Utility index build and maintenance processes for the LOAD, REORG, and REBUILD
INDEX utilities. A portion of the index build/maintenance runs under a dependent enclave
SRB.
DB2 native SQL Stored Procedures when called remotely (through DDF)
In addition to DB2, the following components can take advantage of the zIIP:
z/OS Communications Server exploits the zIIP for eligible IPSec network encryption
workloads.
z/OS XML System Services is enabled to take additional advantage of the zIIP for eligible
XML workloads.
z/OS Global Mirror (zGM, formerly XRC Extended Remote Copy) enables DFSMS System
Data Mover (SDM) processing associated with zGM/XRC to be eligible for the zIIP.
z/OS Communications Server exploits the zIIP for select HiperSockets large message
traffic.
IBM Global Business Services can enable the Scalable Architecture for Financial
Reporting (SAFR) enterprise business intelligence reporting solution for zIIP.
The following processes cannot use the zIIP:
External stored procedures
Triggers or functions that join the enclave SRB using a TCB.
DDF server threads that use SNA to connect to DB2.
DB2 for z/OS and sysplex routing services
Prior to z/OS V1R9.0, Work Load Manager (WLM) sysplex routing services returned the
weight of a server based on the available capacity of only the regular CPU processors of a
server's system adjusted by the health of the server and any work request queuing at the
server.
z/OS V1R9.0 WLM Sysplex Routing services was enhanced to include an awareness of the
available capacity of a server running with zIIP specialty engines. The overall weight returned
for a server will now be a combined weight of the regular CPU, zIIP, and IBM System z
Application Assist Processor (zAAP) capacity. This combined weight is dependent on all
servers in the sysplex running z/OS V1R9.0 or later. WLM will now also return server system
weights of the regular CPU, zIIP, and zAAP available capacities.

Important: Other than the software and hardware prerequisites, no special enablement
procedure is necessary for DB2 to utilize the zIIP. If a zIIP is installed and online, z/OS
automatically manages the processing of each enclave and redirects a portion to the zIIP.
With the exception of calling Java stored procedures, the work requested by DRDA TCP/IP
clients benefits from running on server systems that have a roughly equal available capacity
of regular CPU and zIIP specialty engines. Therefore, even with the z/OS V1R9.0 sysplex
routing services zIIP awareness enhancements, the weighted list of servers was not
optimized to favor work requests being routed to the zIIP enabled members of the DB2 data
sharing group.
APAR PK38867 (DB2 V8 and 9 support for z/OS V1R9.0 sysplex routing services ZIIP
awareness enhancements) improves the way the server list is built: DB2 for z/OS sysplex
routing server list processing has been changed to take advantage of the regular CPU and
zIIP weights now being returned by z/OS V1R9.0 WLM sysplex routing services. When all
members of the DB2 data sharing group are running on z/OS V1R9.0, then DB2 will create a
new weight for each member, which is a sum of that member's regular CPU and zIIP weights.
DB2 will then re-sort the server list in descending order of these new calculated weights.
APAR PK38867 also enhances DB2 support for sysplex routing services by providing DRDA
TCP/IP clients with a weighted list of servers which favors servers in the following order of
precedence:
Servers with the highest combined regular CPU and zIIP available capacity
Servers with some combination of regular CPU and zIIP available capacity
Servers with primarily regular CPU and minimal to no zIIP available capacity
We also recommend applying PK41236, which ensures that the DB2 for z/OS sysplex routing
server list always considers CPU and zIIP weights if DB2 is running on z/OS V1R9.0 or later.
zIIP performance considerations
You can verify whether zIIP processors are available to an LPAR by executing the z/OS system
command /d m=cpu.
Example 7-8 shows an output example of this command. This kind of information can be
found listed in the system log.
Example 7-8 Output of /d m=cpu command
RESPONSE=SC63
IEE174I 12.39.42 DISPLAY M 313
PROCESSOR STATUS
ID CPU SERIAL
00 + 04991E2094
01 + 04991E2094
02 +A 04991E2094
03 +A 04991E2094
04 +I 04991E2094
05 +I 04991E2094
CPC ND = 002094.S18.IBM.02.00000002991E
CPC SI = 2094.710.IBM.02.000000000002991E
Model: S18
...
A APPLICATION ASSIST PROCESSOR (zAAP)
I INTEGRATED INFORMATION PROCESSOR (zIIP)
...
If you do not specify any processor identifiers, the system displays the online or offline status
of all processors attached to the system. Whether you specify a processor identifier or not, the
system displays N when a processor is neither online nor offline, but is recognized by the
machine.
The information shown in Example 7-8 on page 281 indicates that this command was
executed in a processor model 2094-710 for the IBM System z9 EC server family.
For information about the capacity of this and other System z configurations, see the following
IBM Web site:
http://www.ibm.com/systems/z/advantages/management/srm/
The maximum number of processor units, including zIIPs, depends on the server model. A
general purpose CP is required for each specialty engine. For details, refer to the specific
System z IBM Web site, which for IBM System z9 Enterprise Class is as follows:
http://www.ibm.com/systems/z/hardware/z9ec/specifications.html
During the design, consider that the maximum number of zIIPs that can be made available in
a server, and their processing speed, depend on the server model. A zIIP processor can be
shared among LPARs of the same box. The combined number of zAAPs and zIIPs cannot be more
than twice the number of CPs.
zIIP related system parameters
This section describes the IEAOPTxx system parameters that are related to zIIP processors.
You may need to contact your local z/OS system engineers for more details about the current
settings of these parameters.
PROJECTCPU
The projected usage function (PROJECTCPU) is intended to gather information about how
much CPU time is spent executing code which could potentially execute on zIIPs. This
information can be gathered from production workloads or from representative workloads to
understand the potential for zIIP execution with current applications even before actually
owning a zIIP processor.
Setting the IEAOPTxx parmlib member option PROJECTCPU=YES directs z/OS to record
the amount of work eligible for zIIP, and zAAP, processors. SMF Record Type 72 subtype 3 is
input to the RMF post processor. The Workload Activity Report lists workloads by WLM
service class. In this report, the field APPL% IPPCP indicates which percentage of a
processor is zIIP eligible, and the field APPL% AAPCP indicates which percentage of a
processor is zAAP eligible. SMF Record Type 30 provides more detail on specific address
spaces.
For customers that have installed zIIPs, the reporting functions can provide current zIIP
execution information that can be used for optimizing current configurations or for helping to
predict potential future usage.
The acceptable values for this parameter are NO and YES. The default value is NO.
Tip: For details about this and other commands, refer to the following Web page:
http://publib.boulder.ibm.com/infocenter/zos/v1r10/index.jsp
Note: A zIIP engine always delivers its full hardware capacity even if the general purpose
processors in the box are capped to run at less than full speed.
IIPHONORPRIORITY
This parameter defines whether zIIP eligible workload can be executed on a general purpose
processor when the zIIPs are busy. The acceptable values for this parameter are YES and NO;
the default is YES.
If the IIPHONORPRIORITY parameter is set to YES, it specifies that standard processors run
both zIIP processor eligible and non-zIIP processor eligible work in priority order when the
zIIP processors indicate the need for help from standard processors. The need for help is
determined by the alternate wait management (AWM) function of SRM for both standard and
zIIP processors. Standard processors help each other and standard processors can also help
zIIP processors if YES is in effect, which is the default. Specifying YES does not mean the
priorities will always be honored because the system manages dispatching priorities based
on the goals provided in the WLM service definition.
If zIIP processors are defined to the LPAR but are not online, the zIIP processor eligible work
units are processed by standard processors in priority order. The system ignores the
IIPHONORPRIORITY parameter in this case and handles the work as though it had no
eligibility to zIIP processors. The zIIP processor eligible processor times are reported in RMF
and SMF for planning purposes.
IBM suggests that you specify or default to IIPHONORPRIORITY=YES.
If you set this parameter to NO, standard processors will not examine zIIP processor eligible
work regardless of the demand for zIIP processors as long as there is standard processor
eligible work available.
ZIIPAWMT
Specifies an AWM value for zIIPs to minimize SRM and LPAR low utilization effects and
overhead.
In an LPAR, some n-way environments with a small workload may appear to have little
capacity remaining because of the time spent waking up idle zIIPs to compete for individual
pieces of work. The ZIIPAWMT parameter allows you to reduce this time so that capacity
planning is more accurate and CPU overhead is reduced, even though it might take longer
until arriving work gets dispatched.
The ZIIPAWMT parameter internally affects the frequency with which the specialty engine
checks the need for help. If help is required, the zIIP processor signals a waiting zAAP or
zIIP to help. When all zAAP or zIIP processors are busy and IIPHONORPRIORITY=YES, the
zIIP processor asks for help from the standard processors. All available speciality engines
must be busy before help is asked of the standard processors.
Attention: Standard processors can also run zIIP processor eligible work (even if
IIPHONORPRIORITY is set to NO), if necessary to resolve contention for resources with
non zIIP processor eligible work. It means that work that could have been executed in a
zIIP processor would be executed in a standard processor if a transaction delayed by zIIPs
being busy is keeping resources locked.
Important: If you work with IIPHONORPRIORITY=NO and the zIIP processors available in
the system are busy, a DB2 request has to wait and you may see an increase in NOT
ACCOUNTED time in DB2 accounting, caused by waiting for zIIP CPU. However, on busy
sub-capacity machine configurations, and because zIIPs are not capped processors, zIIP
eligible workload may execute faster on a zIIP than on a general purpose processor.
Reducing the value specified for ZIIPAWMT causes the specialty engines to request help after
being busy for a shorter period of time. If IIPHONORPRIORITY is set to YES, help is provided
to one CP at a time, in the priority order of zIIP processor eligible work, non-zIIP processor
eligible work. Reducing the ZIIPAWMT value too low can cause the standard processors to
run an excessive amount of zIIP processor eligible workload, which might result in lower
priority non-zIIP processor eligible work to be delayed. Conversely, increasing the value
specified for ZIIPAWMT causes the specialty engines to request help only after being busy for
a longer period of time, which might delay the standard processors from providing help when
it is necessary.
The acceptable values for this parameter range from 1 to 499999 microseconds; the default
value is 12000 (12 milliseconds).
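As an illustration, the three parameters could appear in an IEAOPTxx parmlib member as
follows (a sketch using the defaults discussed above, except PROJECTCPU; a changed member can
be activated with the SET OPT=xx console command):
PROJECTCPU=YES
IIPHONORPRIORITY=YES
ZIIPAWMT=12000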
Impacts on DB2 accounting
The DB2 accounting trace records provide information related to application programs
including processor resources consumed by the application.
Figure 7-5 shows an example of accounting report including the zIIP processor usage. This
graph includes the zIIP CPU time as reported by the field AWACC2Z, IIP CPU TIME of
OMEGAMON PE. You can expect, given the required technical conditions, about 50% of the
CPU time of a DDF workload to be eligible and executed on a zIIP processor. Failing to
update your reporting tools to account for the new zIIP related CPU utilization reporting fields
may introduce errors in your capacity planning and cost distribution policies.
Figure 7-5 Accounting report including zIIP CPU usage
New accounting fields are defined to let users know how much time is spent on IBM zIIPs, as
well as how much time zIIP eligible work overflowed to standard processors. CPU time
eligible for offload to a zIIP processor is provided in the accounting records even if a zIIP is
not online or DB2 is not running on a z9 mainframe. z/OS and DB2 maintenance
requirements apply.
Important: Accumulated zIIP CPU time is not accumulated in the existing class 1, class 2,
and class 7 accounting fields.
[Chart for Figure 7-5: zIIP CPU and GP CPU in CPU seconds, together with the ratio zIIP/Total
CPU (0% to 80%) and its linear trend.]
Changes to IFCIDs 0003, 0231, 0239, 0147, and 0148 as mapped by the following macros:
DSNDQPAC - PACKAGE/DBRM ACCOUNTING DATA MAPPING MACRO
DSNDQWAC - ACCOUNTING DATA MAPPING MACRO
DSNDQWHU - IFC CPU HEADER MAPPING MACRO
DSNDQW01 - IFCID HEADER MAPPING MACRO
DSNDQW02 - IFCID HEADER MAPPING MACRO
DSNDQW03 - PARALLEL GROUP TASK TIME TRACE MAPPING MACRO
Example 7-9 shows some of the zIIP related fields. The field QWACZIIP_ELIGIBLE, reporting
zIIP eligible work executed on general processors, is no longer populated after DB2 9 for z/OS.
Example 7-9 Extract of hlq.SDSNMACS(DSNDQWAC) macro showing zIIP related fields
QWACZIIP EQU *
QWACCLS1_zIIP DS CL8 /* Accumulated CPU time consumed while */
* /* executing on an IBM specialty engine in*/
* /* all environments. */
QWACCLS2_zIIP DS CL8 /* Accumulated CPU time consumed while */
* /* executing in DB2 on an IBM specialty */
* /* engine. */
QWACTRTT_zIIP DS CL8 /* Accumulated CPU time consumed executing*/
* /* triggers on the main application */
* /* execution unit on an IBM specialty */
* /* engine. */
QWACZIIP_ELIGIBLE DS CL8 /* Accumulated CPU executed on a standard */
* /* CP for zIIP-eligible work. */
* /* This field will no longer be populated */
* /* after DB2 version 9. */
QWACSPNF_zIIP DS CL8 /* Accumulated CPU time consumed executing*/
* /* stored procedure requests on the main */
* /* application execution unit on an IBM */
* /* specialty engine. Since these SPs run */
* /* entirely within DB2, this time */
* /* represents class 1 and class 2 time. */
QWACUDFNF_zIIP DS CL8 /* ** RESERVED FOR FUTURE FUNCTION** */
Refer to APAR PK18454 for DB2 for z/OS V8 exploitation of the IBM System z9 Integrated
Information Processor for DRDA threads.
Important: Any monitor or capacity planning program or process that you may be using
needs to reflect zIIP CPU time when zIIP workload starts to be executed. It is important to
account for all the CPU used by a DB2 thread when part of its processing is routed to a
zIIP engine. Otherwise you may observe a drop in the total CPU usage by applications that
are offloading CPU time to zIIPs.
For OMEGAMON PE V310, verify APAR PK25395: zIIP support for OMEGAMON XE for DB2
Performance Expert on z/OS V310. This APAR provides monitoring support for zIIPs
processors as follows:
Support in Reporting (Accounting, Record Trace)
Support in all relevant user interfaces' "Thread Details" panels, for example, TEP,
Classic/VTAM, and PE Client
Performance Warehouse Support
APAR PK51045 incorporates specialty engines (zIIP and zAAP) support needed for XML
data.
As an example, an OMEGAMON PE report was executed using the command shown in
Example 7-10. This report shows the activity of a DRDA request executed using the CLI
driver.
Example 7-10 OMEGAMON PE RECTRACE command example
RECTRACE
TRACE LEVEL(LONG)
INCLUDE( IFCID(003))
Example 7-11 shows an extract of the RECORD TRACE report.
Example 7-11 OMEGAMON PE Record Trace extract showing zIIP related fields
LOCATION: DB9A OMEGAMON XE FOR DB2 PERFORMANCE EXPERT (V4) PAGE: 1-7172
GROUP: N/P RECORD TRACE - LONG REQUESTED FROM:
....
|-----------------------------------------------------------------------------------------------------------------------
| INSTRUMENTATION ACCOUNTING DATA
|CLASS 1 BEGINNING STORE CLOCK TIME 04/16/09 22:25:22.213037 ENDING STORE CLOCK TIME 04/16/09 22:25:27.925297
| ELAPSED TIME 5.712260 MVS TCB TIME 0.05
| BEGINNING MVS TCB TIME 0.000011 ENDING MVS TCB TIME 0.051343
| STORED PROC ELAPSED TIME 0.000000 CONVERSION FACTOR 564
| STORED PROCEDURE TCB TIME 0.000000 PAR.TASKS: 0 PAR.TOKEN: X'00000000'
| UDF ELAPSED TIME 0.000000 COMMITS : 1 SVPT REQ.: 0
| UDF TCB TIME 0.000000 ROLLBACKS: 0 SVPT RLB.: 0
| NETWORK ID VALUE 'BLANK' PROGRAMS : 1 SVPT REL.: 0
| REASON ACCT INVOKED: 'DDF TYPE 2 INACTIV IS BECOMING ACTIVE'
| IIP CPU TIME 0.062312
|CLASS 1/2 STORED PROC ZIIP TCB TIME 0.000000
| STORED PROC ELAPSED TIME 0.000000
| STORED PROC CP ELAPSED TIME 0.000000
|CLASS 2 DB2 ELAPSED TIME 0.558715 DB2 ENTRY/EXIT EVENTS 106
| TCB TIME 0.047076 NON-ZERO CLASS 2 YES
| STORED PROC ELAPSED TIME 0.000000 CLASS 2 DATA COLLECTED YES
| STORED PROCEDURE TCB TIME 0.000000 STORED PROC. ENTRY/EXITS 0
| UDF ELAPSED TIME 0.000000 UDF SQL ENTRY/EXITS EVENTS 0
| UDF TCB TIME 0.000000 IIP CPU TIME 0.057921
| TRIG ELAP TIME UNDER ENCLAVE 0.000000 IIP ELIGIBLE CP CPU TIME 0.000000
| TRIG TCB TIME UNDER ENCLAVE 0.000000 QWACTRTT_ZIIP 0.000000
| TRIG ELAP TIME NOT UNDER ENCLAVE 0.000000
...
The following zIIP related information is indicated in bold in Example 7-11:
CLASS 1/2 STORED PROC ZIIP TCB TIME
The accumulated CPU time that is consumed while running stored procedure requests on
the main application execution unit on an IBM zIIP. As these stored procedures run entirely
within DB2, this time represents class 1 and class 2 time
IIP CPU TIME
The total CPU time for all executions of this package or DBRM that was consumed on an
IBM zIIP
IIP ELIGIBLE CP CPU TIME
The accumulated CPU time that ran on a standard CP for zIIP-eligible work
QWACTRTT_ZIIP
The accumulated CPU time consumed on an IBM specialty engine while running triggers
on a nested task or on the main application execution unit.
Monitoring zIIP utilization
To monitor the IBM specialty engine usage, you can use the following tools:
DB2 Traces
The DB2 accounting trace records provide information related to application programs
including processor resources consumed by the application. As described in this section,
accumulated specialty engine CPU time is not accumulated in the existing class 1, class 2,
and class 7 accounting fields.
RMF
The Resource Measurement Facility provides information about specialty engine usage.
The SMF Type 72 records contain information about specialty engine usage. Fields in
SMF Type 30 records let you know how much time is spent on specialty engines, as well
as how much time was spent executing specialty engine eligible work on standard
processors.
Example 7-12 on page 288 shows a view of SDSF showing enclaves. You may notice the
following columns which are of interest when looking at zIIP utilization:
zIIP-Time
Cumulative zIIP time consumed by dispatchable units running in the enclave on the local
system.
zICP-Time
Cumulative zIIP-eligible CPU time consumed on standard processors by dispatchable units
running in the enclave on the local system. This CPU time could have been run on a zIIP, but
was executed on a standard processor.
zIIP-NTime
Normalized zIIP service time, in seconds.
These columns show time consumed by dispatchable units running in the enclave on the local
system. For a multisystem enclave, time consumed on other systems is not included. Refer to
the IBM publication SDSF Operation and Customization, SA22-7670-11 for details.
288 DB2 9 for z/OS: Distributed Functions
Example 7-12 SDSF view of enclaves showing zIIP utilization
Display Filter View Print Options Help
-----------------------------------------------------------------------------------------------------------------------------------
SDSF ENCLAVE DISPLAY SC63 ALL LINE 1-20 (29)
COMMAND INPUT ===> SCROLL ===> CSR
PREFIX=* DEST=(ALL) OWNER=* SYSNAME=
NP NAME vel SysName SubSys zAAP-Time zACP-Time zIIP-Time zICP-Time Promoted zAAP-NTime zIIP-NTime
340005705E 01.10.00 HBB7750 SC63 DB9A 0.00 0.00 1.11 0.03 NO 0.00 1.11
3C00057066 01.10.00 HBB7750 SC63 DB9A 0.00 0.00 1.09 0.03 NO 0.00 1.09
400005705F 01.10.00 HBB7750 SC63 DB9A 0.00 0.00 1.03 0.03 NO 0.00 1.03
480005706F 01.10.00 HBB7750 SC63 DB9A 0.00 0.00 0.84 0.02 NO 0.00 0.84
5000057073 01.10.00 HBB7750 SC63 DB9A 0.00 0.00 0.83 0.03 NO 0.00 0.83
5C0005707D 01.10.00 HBB7750 SC63 DB9A 0.00 0.00 0.09 0.01 NO 0.00 0.09
6000057070 01.10.00 HBB7750 SC63 DB9A 0.00 0.00 0.45 0.01 NO 0.00 0.45
6800057076 01.10.00 HBB7750 SC63 DB9A 0.00 0.00 0.75 0.03 NO 0.00 0.75
6C0005707A 01.10.00 HBB7750 SC63 DB9A 0.00 0.00 0.13 0.01 NO 0.00 0.13
7800057043 01.10.00 HBB7750 SC63 DB9A 0.00 0.00 1.35 0.03 NO 0.00 1.35
7C0005705A 01.10.00 HBB7750 SC63 DB9A 0.00 0.00 0.96 0.02 NO 0.00 0.96
7.2.4 Using RMF to monitor distributed data
CPU time for enclaves generated by DDF is reported in the SMF type 30 and 72 records.
SMF type 30 records contain the service consumed by all completed or in-flight enclaves since
the previous SMF type 30 record was cut. They provide resource consumption information at the
address space level. An SMF type 30 record is not granular enough to measure CPU usage by
enclave. DB2 produces SMF type 101 records to provide that information. SMF type 101 is
used for IFCID 3, the DB2 accounting record.
SMF Type 72 records provide data collected by RMF monitor 1. There is one type 72 record
for each WLM Service Class Period and WLM Report Class. It can provide you with
information about the goals set for your enclaves and their actual measured values for their
particular service class.
While using RMF to monitor enclaves, keep in mind the conditions under which an enclave is
created or deleted, in conjunction with the DB2 always active or sometimes active thread behavior.
You can use the RMF online monitor to look at enclaves, or you can use the RMF Post
Processor to generate reports on SMF data for enclaves. The RMF online monitor output for
enclaves is shown in Example 7-13. You can reach this panel by selecting option 3 (Monitor III),
then 1 (OVERVIEW), and then 6 (ENCLAVE).
Example 7-13 RMF online monitoring of enclaves
RMF V1R10 Enclave Report Line 1 of 23
Command ===> Scroll ===> CSR
Samples: 22 System: SC63 Date: 04/23/09 Time: 14.39.39 Range: 21 Sec
Current options: Subsystem Type: ALL -- CPU Util --
Enclave Owner: Appl% EAppl%
Class/Group: 14.5 98.4
Enclave Attribute CLS/GRP P Goal % D X EAppl% TCPU USG DLY IDL
*SUMMARY 26.68
ENC00008 DDFDEF 2 20 1.806 2.456 33 62 0.0
ENC00016 DDFDEF 2 20 1.770 2.261 17 67 0.0
ENC00021 DDFDEF 2 20 1.756 2.317 33 58 0.0
....
Chapter 7. Performance analysis 289
A detailed view of an enclave can be obtained by placing the cursor on the enclave and
pressing Enter. The resulting panel is shown in Example 7-14.
Example 7-14 RMF Enclave Classification Data
RMF Enclave Classification Data
Details for enclave ENC00016 with token 000000BC 00056D52
Press Enter to return to the Report panel.
- CPU Time - -zAAP Time-- -zIIP Time--
Total 2.261 Total 0.000 Total 1.221
Delta 2.231 Delta 0.000 Delta 1.214
State ---- Using ---- ---------- Delay ---------- IDL UNK
Samples CPU AAP IIP I/O CPU AAP IIP I/O STO CAP QUE
18 0.0 0.0 5.6 11 39 0.0 28 0.0 0.0 0.0 0.0 0.0 17
Classification Attributes:
More: +
Subsystem Type: DDF Owner: DB9ADIST System: SC63
Accounting Information . . :
SQL09053AIX 64BIT db2bp paolor4
Collection Name . . . . . : NULLID
Connection Type . . . . . : SERVER
Correlation Information . : db2bp
LU Name . . . . . . . . . :
Netid . . . . . . . . . . :
Package Name . . . . . . . : SQLC2G15
Plan Name . . . . . . . : DISTSERV
Procedure Name . . . . . . :
Process Name . . . . . . . : db2bp
Transaction Class/Job Class:
Transaction Name/Job Name :
Userid . . . . . . . . . . : PAOLOR4
Scheduling Environment . . :
Priority . . . . . . . . . :
Subsystem Collection Name :
Subsystem Instance . . . . : DB9A
Subsystem Parameter . . . :
paolor4 kodiak.itso.ibm.
Example 7-15 on page 290 shows how, after sorting the SMF data, you can generate an RMF
CPU Activity report in batch. The options for batch reports are numerous. Refer to the RMF
documentation for information about the report options that are appropriate for your needs.
Tip: For details about RMF, refer to Effective zSeries Performance Monitoring Using
Resource Measurement Facility, SG24-6645, and the IBM RMF Web site:
http://www.ibm.com/servers/eserver/zseries/zos/rmf/Library/
290 DB2 9 for z/OS: Distributed Functions
Example 7-15 RMF batch reporting
//PAOLO44B JOB (999,POK),REGION=5M,MSGCLASS=X,CLASS=A,
// MSGLEVEL=(1,1),NOTIFY=&SYSUID
//*
/*JOBPARM S=SC63
//*----------------------------------------------------
//* DRDA REDBOOK --> BASIC RMF REPORT EXAMPLE
//*----------------------------------------------------
//POSTMFR EXEC PGM=ERBRMFPP,REGION=0M
//STEPLIB DD DSN=SYS1.LINKLIB,DISP=SHR
// DD DSN=CEE.SCEERUN,DISP=SHR
//SYSOUT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//MFPMSGDS DD SYSOUT=*
//MFPINPUT DD DISP=SHR,DSN=SMFDATA.ALLRECS.G0539V00
//SYSIN DD *
REPORTS(CPU)
/*
The chart in Figure 7-6 shows an excerpt of the RMF CPU Activity Report where the
important values are highlighted.
Figure 7-6 CPU report with zIIP and zAAP
The chart shows the CPU utilization of the different processors in the LPAR, as generated by
the RMF batch CPU report.
The RMF CPU activity report shows 2 CPs, 1 zAAP engine and 1 zIIP engine.
In this case the report is for a distributed ODBC/CLI workload showing 42% zIIP redirect at
the LPAR level.
Monitoring System level zIIP and zAAP Redirect
with zIIP and zAAP installed
RMF CPU Report for CLI DRDA Workload
C P U A C T I V I T Y
z/OS V1R10 SYSTEM ID SC63
RPT VERSION V1R10 RMF
CPU 2094 MODEL 724 H/W MODEL S28
---CPU--- ONLINE TIME LPAR BUSY MVS BUSY
NUM TYPE PERCENTAGE TIME PERC TIME PERC
0 CP 100.00 22.49 22.49
1 CP 100.00 21.72 21.72
CP TOTAL/AVERAGE 22.11 22.11
2 AAP 100.00 0.10 0.10
AAP AVERAGE 0.10 0.10
3 IIP 100.00 32.47 32.47
IIP AVERAGE 32.47 32.47 zIIP CPU %
zAAP CPU %
CP CPU %
zIIP Redirect % at the LPAR level = 42%
Chapter 7. Performance analysis 291
This is derived by using the formula:
zIIP CPU % / Total CPU % = 32.47 / (22.49 + 21.72 + 0.10 + 32.47) = 42%
The chart in Figure 7-7 shows the most important parts of a DRDA-related Workload Activity
report obtained with the control card:
SYSRPTS(WLMGL(SCLASS,RCLASS,POLICY,SYSNAM(xxxx)))
In the RMF Workload Activity report look at Service Policy to find the zIIP redirect at the
WLM Policy level. You can use the SYSNAM(XXXX) in the SYSRPTS control card to get the
zIIP redirect for a specific LPAR.
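For example, you can produce this Workload Activity report with the same JCL shown in
Example 7-15 on page 290 by replacing the SYSIN control statement (a sketch; SC63 is the
LPAR name used in our examples):
//SYSIN    DD *
  SYSRPTS(WLMGL(SCLASS,RCLASS,POLICY,SYSNAM(SC63)))
/*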
We have highlighted the most important values for zIIP redirect evaluation.
Appl% CP
Percentage of CPU time used by transactions running on standard CPs in the service or
report class period.
Appl% IIPCP
Percentage of CPU time used by zIIP eligible transactions running on standard CPs. This
is a subset of APPL% CP.
Appl% IIP
Percentage of CPU time used by transactions executed on zIIPs in the service or report
class period.
Figure 7-7 Calculating zIIP redirect %
This chart shows a redirect percentage of 55% at the DRDA workload level, using the
APPL% formula.
RMF Workload Activity Report Showing CLI SQL DRDA zIIP Redirect
APPL % is % of a single engine.
APPL% IIP = Service Time IIP / Report Interval
APPL% CP = (Service Time CPU+SRB+RCT+IIT-AAPIIP) / Report Interval
Using WLM Subsystem DDF, Service Class DDFWORK
Redirect % = Service Time IIP / Service Time CPU
= APPL% IIP / (APPL% CP+APPL% IIP)
= 55% for this DRDA workload
REPORT BY: POLICY=DRDAIC1 WORKLOAD=DB2 SERVICE CLASS=DDFWORK RESOURCE GROUP=*NONE
TRANSACTIONS TRANS-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE---- SERVICE TIMES ---APPL %---
AVG 2.90 ACTUAL 14 SSCHRT 507.2 IOC 0 CPU 29.3 CP 24.02
MPL 2.90 EXECUTION 13 RESP 0.3 CPU 831425 SRB 0.0 AAPCP 0.00
ENDED 11384 QUEUED 0 CONN 0.2 MSO 0 RCT 0.0 IIPCP 0.00
END/S 207.84 R/S AFFIN 0 DISC 0.0 SRB 0 IIT 0.0
#SWAPS 0 INELIGIBLE 0 Q+PEND 0.1 TOT 831425 HST 0.0 AAP 0.00
EXCTD 0 CONVERSION 0 IOSQ 0.0 /SEC 15179 AAP 0.0 IIP 29.49
AVG ENC 2.90 STD DEV 15 IIP 16.2
REM ENC 0.00 ABSRPTN 5243
MS ENC 0.00 TRX SERV 5243
Service Times : CPU time includes IIP and AAP time
zIIP Redirect % at the LPAR level = 42%
INTERVAL: 54 Sec
292 DB2 9 for z/OS: Distributed Functions
The DRDA redirect percentage can also be calculated using the service times:
Service Time IIP / Service Time CPU (16.2 / 29.3, or about 55%, in Figure 7-7)
The effective redirect percentage for this workload at the LPAR level is 42%, as shown in
Figure 7-6 on page 290. It is lower at the LPAR level because of the CPU consumed by
components other than the DRDA workload, such as other DB2 address spaces, TCP/IP, and so forth.
RMF spreadsheet reporter
The RMF Spreadsheet Reporter serves as a front-end to the RMF postprocessor on your
z/OS system. With its graphics capabilities, RMF Spreadsheet Reporter allows you to analyze
z/OS performance data through powerful graphical charts right from your workstation.
The main advantage of this product is ease of use. With it, you can get a clear view of the
system behavior in a short period of time. This tool was extensively used during the examples
in this chapter. You can get this software free of charge from the following Web page:
http://www.ibm.com/servers/eserver/zseries/zos/rmf/tools/#spr_win
Save the RMF batch report output, created for instance with JCL similar to that shown in
Example 7-15 on page 290, and transfer it to your workstation.
Start the software and select Working Set from the Create menu as shown in Figure 7-8. You
need to select the Report Listing folder to make this option available.
Figure 7-8 RMF Spreadsheet reporter: creating a working set menu
You will get the Create Working Set panel shown in Figure 7-9 on page 293. You can
provide a meaningful name for the working set in the Name field. Click Run to start the
working set creation. This process can take some time if the report is large.
Chapter 7. Performance analysis 293
Figure 7-9 RMF Spreadsheet reporter: creating a working set
Once you get confirmation that the working set has been created, you can go to the Spreadsheet
folder and select any of the available spreadsheet reports, as shown in Figure 7-10.
Figure 7-10 RMF Spreadsheet reporter: selecting reports
294 DB2 9 for z/OS: Distributed Functions
As an example of reporting, consider Figure 7-11. This report shows the physical total
dispatch time percentage, by processor type (CP, zAAP, ICF, IFL, and zIIP), for our test
LPAR (partition A04) during an intensive test. Note the high zIIP utilization.
Figure 7-11 RMF report example: Physical Total Dispatch Time %
The workload reported in this example was created using the script described in
Appendix D.1, Stress tests script on page 444.
7.3 Checking settings in a distributed environment
In this section we describe the following tools that are available for the distributed components
of an n-tier architecture to obtain configuration-related information:
db2set: DB2 profile registry command
db2 get dbm cfg
db2 get cli configuration
db2pd
Getting database connection information
Getting online help for db2 commands
Other useful sources of information
7.3.1 db2set: DB2 profile registry command
This command displays, sets, or removes DB2 for LUW profile variables. It can be issued at
the DB2 client and DB2 Connect Server to obtain information related to the environment
setup.
Chapter 7. Performance analysis 295
If no variable name is specified, the values of all defined variables are displayed. If a variable
name is specified, only the value of that variable is displayed. To display all the defined values
of a variable, specify variable -all. To display all the defined variables in all registries, specify
registry -all.
To modify the value of a variable, specify variable=, followed by its new value. To set the value
of a variable to NULL, specify variable = -null. Changes to settings take effect after the
instance has been restarted. To delete a variable, specify variable=, followed by no value.
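For instance, the following commands set, display, and then delete an instance-level variable
(a sketch; substitute your own variable and value, and remember that changes take effect only
after the instance is restarted):
$ db2set DB2COMM=tcpip        # set the variable
$ db2set DB2COMM              # display its current value
$ db2set DB2COMM=             # delete the variable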
Example 7-16 shows the execution of this command using the -all option.
Example 7-16 db2set -all
$ db2set -all
[i] DB2CONNECT_IN_APP_PROCESS=NO
[i] DB2COMM=tcpip
[g] DB2SYSTEM=kodiak.itso.ibm.com
[g] DB2INSTDEF=db2inst1
[g] DB2ADMINSERVER=dasusr1
$
Example 7-17 shows how you can use the command db2set -lr to list all the registry variables
known to DB2. This command can help you identify the correct spelling of a variable name.
Example 7-17 db2set -lr command
$ db2set -lr
DB2_OVERRIDE_BPF
DB2_PARALLEL_IO
DB2ACCOUNT
DB2ADMINSERVER
DB2BQTIME
....
DB2_FORCE_NLS_CACHE
DB2YIELD
DB2_AVOID_PREFETCH
DB2_COLLECT_TS_REC_INFO
DB2_GRP_LOOKUP
DB2_INDEX_FREE
DB2_MMAP_READ
...
7.3.2 db2 get dbm cfg
The command db2 get dbm cfg returns the values of the individual entries in the database
manager configuration file. Example 7-18 shows partial output of this command executed on
a Linux on System z server.
Example 7-18 db2 get dbm cfg output
db2inst1@linux11:~> db2 get dbm cfg
Database Manager Configuration
Node type = Enterprise Server Edition with local and remote clients
....
296 DB2 9 for z/OS: Distributed Functions
Database manager authentication (AUTHENTICATION) = SERVER
Cataloging allowed without authority (CATALOG_NOAUTH) = NO
Trust all clients (TRUST_ALLCLNTS) = YES
Trusted client authentication (TRUST_CLNTAUTH) = CLIENT
Bypass federated authentication (FED_NOAUTH) = NO
....
TCP/IP Service name (SVCENAME) = 5000
Discovery mode (DISCOVER) = SEARCH
Discover server instance (DISCOVER_INST) = ENABLE
...
No. of int. communication buffers(4KB)(FCM_NUM_BUFFERS) = AUTOMATIC
No. of int. communication channels (FCM_NUM_CHANNELS) = AUTOMATIC
Node connection elapse time (sec) (CONN_ELAPSE) = 10
Max number of node connection retries (MAX_CONNRETRIES) = 5
Max time difference between nodes (min) (MAX_TIME_DIFF) = 60
db2start/db2stop timeout (min) (START_STOP_TIME) = 10
Refer to the DB2 for LUW documentation for more details on the information provided by this
command.
Example 7-19 shows how you can combine this command with the information in the UNIX
/etc/services file to obtain the port on which a DB2 instance is listening for connections.
Example 7-19 Getting the port on which DB2 for LUW listens
$ db2 get dbm cfg | grep -i service
TCP/IP Service name (SVCENAME) = db2c_db2inst3
$ cat /etc/services | grep db2c_db2inst3
db2c_db2inst3 50002/tcp
$
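If you need to change the service name or port on which the instance listens, a minimal sketch
(reusing the SVCENAME value shown above) is to update the database manager configuration and
then recycle the instance so the change takes effect:
$ db2 update dbm cfg using SVCENAME db2c_db2inst3
$ db2stop
$ db2start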
7.3.3 db2 get cli configuration
The GET CLI CONFIGURATION command lists the contents of the db2cli.ini file. This command
can list the entire file, or a specified section. Example 7-20 shows the syntax of this
command.
Example 7-20 GET CLI CONFIGURATION syntax
>>-GET CLI--+-CONFIGURATION-+--+-----------------+-------------->
+-CONFIG--------+ '-AT GLOBAL LEVEL-'
'-CFG-----------'
>--+---------------------------+-------------------------------><
'-FOR SECTION--section-name-'
Example 7-21 on page 297 shows an example of execution output.
Chapter 7. Performance analysis 297
Example 7-21 GET CLI CONFIGURATION output example
$ db2 get cli configuration
Section: tstcli1x
-------------------------------------------------
uid=userid
pwd=*****
autocommit=0
TableType='TABLE','VIEW','SYSTEM TABLE'
Section: tstcli2x
-------------------------------------------------
SchemaList='OWNER1','OWNER2',CURRENT SQLID
Section: MyVeryLongDBALIASName
-------------------------------------------------
dbalias=dbalias3
SysSchema=MYSCHEMA
Refer to Chapter 8, Problem determination on page 323 for details about this command and
how to update the CLI configuration using commands.
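As a quick preview (a minimal sketch; the section name tstcli1x and the autocommit keyword are
taken from Example 7-21), you can change and verify a db2cli.ini entry from the CLP as follows:
$ db2 update cli cfg for section tstcli1x using autocommit 1
$ db2 get cli cfg for section tstcli1x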
7.3.4 db2pd
db2pd is a DB2 database command available for monitoring and troubleshooting DB2 for
LUW. (This command is not available for DB2 for z/OS). It is often used for troubleshooting
because it can return immediate information from the DB2 memory sets.
db2pd provides more than twenty options to display information about database transactions,
table spaces, table statistics, dynamic SQL, database configurations, and many other
database details. For details, see the following Web page:
http://www.ibm.com/developerworks/db2/library/techarticle/dm-0504poon2/
Example 7-22 is an extract of the db2pd syntax showing the -sysplex option. This option
returns information about the list of servers associated with the database alias indicated by
the db parameter. If the -database parameter is not specified, information is returned for all
databases.
Example 7-22 db2pd sysplex syntax
>>-db2pd--+--------+--+--------+--+-----------+----------------->
'- -inst-' '- -help-' '- -version-'
>--+-----------------------------------------------+------------>
'- -sysplex--+-------------+--+---------------+-'
'-db=database-' '-file=filename-'
Example 7-23 on page 298 shows the output of the command db2pd -sysplex. Without the
db2pd -sysplex command, the only way to report the sysplex list is through a DB2 trace. In
our working environment, DB9A is a single system and DB9C is a Data Sharing Group.
298 DB2 9 for z/OS: Distributed Functions
Example 7-23 db2pd usage example: Getting the server list
$ db2pd -sysplex db=DB9A
Database Partition 0 -- Active -- Up 14 days 02:19:11
Sysplex List:
Alias: DB9A
Location Name: DB9A
Count: 1
IP Address Port Priority Connections Status PRDID
9.12.6.70 12347 65535 2 0 DSN09015
$ db2pd -sysplex db=DB9C
Database Partition 0 -- Active -- Up 14 days 02:19:23
Sysplex List:
Alias: DB9C
Location Name: DB9C
Count: 3
IP Address Port Priority Connections Status PRDID
9.12.6.70 38320 53 0 0
9.12.4.202 38320 53 0 0
9.12.6.9 38320 21 0 0
7.3.5 Getting database connection information
DB2 Connect uses the following directories to manage database connection information (the
catalog commands that populate them are sketched at the end of this section):
The system database directory, which contains name, node, and authentication
information for every database that DB2 Connect accesses
Use the list db directory command to view the information associated with the
databases cataloged in your system. Example 7-24 shows the syntax of this command.
Example 7-24 Syntax of the list database directory command
>>-LIST--+-DATABASE-+--DIRECTORY--+---------------+------------><
'-DB-------' '-ON--+-path--+-'
'-drive-'
Example 7-25 shows a portion of its execution output.
Example 7-25 db2 list db directory command output example
$ db2 list db directory
System Database Directory
Number of entries in the directory = 7
Database 1 entry:
Database alias = DB9AS
Database name = DB9AS
Node name = SSLNODE
Database release level = c.00
Comment =
Chapter 7. Performance analysis 299
Directory entry type = Remote
Authentication = SERVER
Catalog database partition number = -1
Alternate server hostname =
Alternate server port number =
Database 2 entry:
Database alias = DB9AS2
Database name = DB9AS2
Node name = SSLNODE
Database release level = c.00
Comment =
Directory entry type = Remote
Authentication = SERVER_ENCRYPT
Catalog database partition number = -1
Alternate server hostname =
Alternate server port number =
Database 3 entry:
....
The most important fields are:
Database alias
The value of the alias parameter when the database was created or cataloged. If an
alias was not entered when the database was cataloged, the database manager uses
the value of the database name parameter when the database was cataloged. This is
the value used in an application when connecting to a database.
Database name
The value of the database name parameter when the database was cataloged. This
name is usually the name under which the database was created.
Node name
The name of the remote node. This name corresponds to the value entered for the
nodename parameter when the database and the node were cataloged.
Directory entry type
This value indicates the location of the database. A Remote entry describes a
database that resides on another node.
Authentication
The authentication type cataloged at the client
The node directory, which contains network address and communication protocol
information for every host or System z database server that DB2 Connect accesses.
The node directory is created and maintained on each database client. The directory
contains an entry for each remote workstation having one or more databases that the
client can access. The DB2 client uses the communication endpoint information in the
node directory whenever a database connection or instance attachment is requested.
The entries in the directory also contain information about the type of communication
protocol to be used to communicate from the client to the remote database partition.
Use the list node directory command for listing the contents of the node directory. The
syntax of this command is shown in Example 7-26 on page 300.
300 DB2 9 for z/OS: Distributed Functions
Example 7-26 Syntax of the list node directory command
>>-LIST--+-------+--NODE DIRECTORY--+-------------+------------><
'-ADMIN-' '-SHOW DETAIL-'
Example 7-27 shows a partial output of this command.
Example 7-27 list node directory command output
$ db2 list node directory
Node Directory
Number of entries in the directory = 4
Node 1 entry:
Node name = SC63TS
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = wtsc63.itso.ibm.com
Service name = 12347
Node 2 entry:
Node name = SSLNODE
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = wtsc63.itso.ibm.com
Service name = 12349
Security type = SSL
Node 3 entry:
...
Some important fields:
Node name
The name of the remote node. This corresponds to the name entered for the
nodename parameter when the node was cataloged.
Directory entry type
LOCAL means the entry is found in the local node directory file. LDAP means the entry
is found on the LDAP server or in the LDAP cache.
Protocol
The communications protocol cataloged for the node.
The database connection services (DCS) directory, which contains information specific to
host or System i database server databases.
Use the list dcs directory command to list the contents of the DCS. Example 7-28
shows the syntax of this command.
Example 7-28 Syntax of the list dcs directory command
>>-LIST DCS DIRECTORY------------------------------------------><
Chapter 7. Performance analysis 301
Example 7-29 shows partial output of this command.
Example 7-29 list dcs directory command output example
$ db2 list dcs directory
Database Connection Services (DCS) Directory
Number of entries in the directory = 6
DCS 1 entry:
Local database name = DB9A
Target database name =
Application requestor name =
DCS parameters =
Comment =
DCS directory release level = 0x0100
DCS 2 entry:
Local database name = DB9AS
Target database name = DB9A
Application requestor name =
DCS parameters =
Comment =
DCS directory release level = 0x0100
DCS 3 entry:
....
Some important fields:
Local database name
Specifies the local alias of the target host database. This corresponds to the database
name parameter entered when the host database was cataloged in the DCS directory.
Target database name
Specifies the name of the host database. This corresponds to the target database
name parameter entered when the host database was cataloged in the DCS directory.
DCS parameters
String that contains the connection and operating environment parameters to use with
the application requester. Corresponds to the parameter string entered when the host
database was cataloged. This is where the sysplex support is configured. The string
must be enclosed by double quotation marks, and the parameters must be separated
by commas.
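To illustrate how these directories are populated (a sketch only; the node name, host name, port,
and database name are taken from the earlier examples and should be adapted to your environment),
the following CLP commands catalog a TCP/IP node, a DCS entry, and a database alias, and then
refresh the directory cache:
$ db2 catalog tcpip node SC63TS remote wtsc63.itso.ibm.com server 12347
$ db2 catalog dcs database DB9A as DB9A
$ db2 catalog database DB9A at node SC63TS authentication server
$ db2 terminate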
7.3.6 Getting online help for db2 commands
If you do not remember all the valid options of a particular DB2 for LUW command, you can
invoke a help panel for any Command Line Processor (CLP) command at the CLP prompt by
preceding the command keyword with a question mark. For many of the system commands, a
summary help panel can be displayed by issuing the command keyword followed by a
help option.
302 DB2 9 for z/OS: Distributed Functions
To display a CLP command help panel, preface the command keyword with a question mark
at the db2 interactive mode prompt. Example 7-30 shows an extract of the output of the
command db2 ? get executed in the CLP.
Example 7-30 Getting online help using the CLP
C:\Program Files\IBM\SQLLIB\BIN>db2 ? get
GET ADMIN CONFIGURATION [FOR NODE node-name [USER username USING password]]
...
GET DATABASE CONFIGURATION [FOR database-alias] [SHOW DETAIL]
GET DATABASE MANAGER CONFIGURATION [SHOW DETAIL]
GET DATABASE MANAGER MONITOR SWITCHES
[AT DBPARTITIONNUM db-partition-number | GLOBAL]
...
Most of the system commands can display a command help panel when you enter the system
command keyword followed by a help option. Many system commands use a common help
option, while others use different or additional help options. If you do not remember the
help option for a command, try the following common options first; they are likely to
invoke the command help panel:
-h
-?
-help
nothing entered after the command keyword (note that this may cause some commands to
actually execute).
Example 7-31 shows an output extract of using the -h option with the command db2pd.
Example 7-31 Using the -h option with a system command
C:\Program Files\IBM\SQLLIB\BIN>db2pd -h
Usage:
-h | -help [file=<filename>]
Help
-v | -version [file=<filename>]
Version
-osinfo [disk] [file=<filename>]
Operating System Information
-dbpartitionnum <num>[,<num>]
Database Partition Number(s)
-alldbpartitionnums
All partition numbers
-database | -db <database>[,<database>]
Database(s)
-alldatabases | -alldbs
All Active Databases
-inst
Instance scope output
-file <filename>
All Output to Filename
-command <filename>
Read in predefined options
Chapter 7. Performance analysis 303
-interactive
Interactive
-full
Expand output to full length
-repeat [num sec] [count]
Repeat every num seconds (default 5) count times
-everything
All options on all database partitions
You can always refer to the IBM DB2 Database for Linux, UNIX, and Windows Information
Center for more detailed information. It can be found, for Version 9.5, at the following Web
page:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp
7.3.7 Other useful sources of information
Example 7-32 shows how you can use the db2licm command to get license information from
a DB2 Connect or DB2 Enterprise Server Edition (ESE) server. You need to be logged in as
the instance owner or to have sourced the db2profile script.
Example 7-32 Using the command db2licm
$ db2licm -l
Product name: "DB2 Enterprise Server Edition"
License type: "Trial"
Expiry date: "06/21/2009"
Product identifier: "db2ese"
Version information: "9.5"
Product name: "DB2 Connect Server"
License type: "Trial"
Expiry date: "06/21/2009"
Product identifier: "db2consv"
Version information: "9.5"
Example 7-33 shows how you can get software level information, including the product
version and the FixPack level applied.
Example 7-33 Using the command db2level
$ db2level
DB21085I Instance "db2inst3" uses "64" bits and DB2 code release "SQL09053"
with level identifier "06040107".
Informational tokens are "DB2 v9.5.0.3", "s081118", "U818975", and FixPack
"3".
Product is installed at "/opt/IBM/db2/V9.5_ESE".
To get the port on which the DB2 server is listening, refer to Example 7-34 on page 304. This
returns the service name that is defined in the /etc/services file (UNIX).
304 DB2 9 for z/OS: Distributed Functions
Example 7-34 Getting the service associated with the DB2 Server
$ db2 get dbm cfg | grep -i service
TCP/IP Service name (SVCENAME) = db2c_db2inst3
$
Example 7-35 shows how to get the port information from the /etc/services file.
Example 7-35 Getting the port number from /etc/services
$ cat /etc/services | grep -i db2c_db2inst3
db2c_db2inst3 50002/tcp
To get the IP address of a target server when you know its host name (DNS entry), see
Example 7-36.
Example 7-36 Getting the IP address of a DB2 Connect or ESE server from its dns entry
C:\>ping kodiak.itso.ibm.com
Pinging kodiak.itso.ibm.com [9.12.5.149] with 32 bytes of data:
Reply from 9.12.5.149: bytes=32 time=89ms TTL=128
Reply from 9.12.5.149: bytes=32 time=88ms TTL=128
Reply from 9.12.5.149: bytes=32 time=90ms TTL=128
Reply from 9.12.5.149: bytes=32 time=86ms TTL=128
Ping statistics for 9.12.5.149:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 86ms, Maximum = 90ms, Average = 88ms
To get your own IP address, see Example 7-37.
Example 7-37 Getting your IP address on a Windows machine
C:\>ipconfig
Windows IP Configuration
Ethernet adapter Local Area Connection 2:
Connection-specific DNS Suffix . : localdomain
IP Address. . . . . . . . . . . . : 172.16.209.128
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 172.16.209.2
Chapter 7. Performance analysis 305
See Example 7-38 if you work with an AIX server.
Example 7-38 Getting the IP address of an AIX server
$ ifconfig -l
en0 lo0
$ ifconfig en0
en0:
flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,
CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
inet 9.12.5.149 netmask 0xfffffc00 broadcast 9.12.7.255
tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
You may need to get the DNS entry for an IP address. For instance, you might find the message
shown in Example 7-39 in the DB2 SYSPRINT.
Example 7-39 DB2 SYSPRINT
14.06.14 STC27893 DSNL032I -DB9A DSNLIRTR DRDA EXCEPTION CONDITION IN 382
382 REQUEST FROM REQUESTOR LOCATION=::9.30.28.156 FOR THREAD WITH
382 LUWID=G90C0646.J03D.C3F9275F8EA7
382 REASON=00D3101A
382 ERROR ID=DSNLIRTR0003
382 IFCID=0192
382 SEE TRACE RECORD WITH IFCID SEQUENCE NUMBER=0000000E
You may need the server name related to the IP address for problem determination. On a
Windows system, you can use the command ping -a followed by the IP address to resolve its
host name, as shown in Example 7-40.
Example 7-40 ping -a example command for resolving a host name
C:\Documents and Settings\VRes01>ping -a 9.30.28.156
Pinging LENOVO-B6AFDE0A-009030028156.svl.ibm.com [9.30.28.156] with 32 bytes of
data:
Reply from 9.30.28.156: bytes=32 time=3ms TTL=128
Reply from 9.30.28.156: bytes=32 time<1ms TTL=128
Reply from 9.30.28.156: bytes=32 time<1ms TTL=128
Reply from 9.30.28.156: bytes=32 time=1ms TTL=128
Ping statistics for 9.30.28.156:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 3ms, Average = 1ms
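On UNIX and Linux systems, you can attempt the same reverse lookup with the nslookup or host
commands (a sketch; output not shown):
$ nslookup 9.30.28.156
$ host 9.30.28.156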
When you encounter a problem, you can use the following diagnostic tools and data sources to
collect relevant information for problem analysis:
All diagnostic data, including dump files, trap files, error logs, notification files, and alert
logs, is found in the path specified by the diagnostic data directory path (diagpath)
database manager configuration parameter.
If the value of this configuration parameter is null, the diagnostic data is written to one of
the following directories or folders:
306 DB2 9 for z/OS: Distributed Functions
For Linux and UNIX environments: INSTHOME/sqllib/db2dump, where INSTHOME is
the home directory of the instance.
For supported Windows environments:
If the DB2INSTPROF environment variable is not set, then
x:\SQLLIB\DB2INSTANCE is used, where x:\SQLLIB is the drive and directory
specified in the DB2PATH registry variable and DB2INSTANCE is the name of the
instance.
If the DB2INSTPROF environment variable is set, then
x:\DB2INSTPROF\DB2INSTANCE is used, where DB2INSTPROF is the name of
the instance profile directory and DB2INSTANCE is the name of the instance (by
default, the value of DB2INSTDEF on Windows 32-bit operating systems).
For Windows operating systems, you can use the Event Viewer to view the administration
notification log.
The available diagnostic tools include db2trc, db2pd, and db2support; a minimal db2support
invocation is sketched after this list.
You can always refer to the IBM DB2 Database for Linux, UNIX, and Windows Information
Center for more detailed information. It can be found, for version 9.5, at the following Web
page:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp
For Linux and UNIX operating systems, you can use the ps command, which returns
process status information about active processes to standard output.
For UNIX operating systems, use the core file that is created in the current directory when
severe errors occur. It contains a memory image of the terminated process, and can be
used to determine what function caused the error.
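As a sketch of gathering this diagnostic data in one step (the output directory /tmp is only an
example, and db2support options vary slightly by version), you can check where the diagnostic
data is written and then run the db2support tool, which packages db2diag.log and related files:
$ db2 get dbm cfg | grep -i diag      # shows DIAGLEVEL and DIAGPATH
$ db2support /tmp                     # collect diagnostic data into an archive in /tmp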
The db2 database manager configuration parameter diaglevel specifies the type of diagnostic
errors that will be recorded in the db2diag.log file. This online configurable parameter applies
to the following areas:
Database server with local and remote clients
Client
Database server with local clients
Partitioned database server with local and remote clients
Valid values for this parameter are as follows:
0: No diagnostic data captured
1: Severe errors only
2: All errors
3: All errors and warnings
4: All errors, warnings and informational messages
The default value is 3: all errors and warnings are reported. You may need to change this
parameter to 4 during problem determination. The parameter can be changed using the
command shown in Example 7-41.
Example 7-41 Updating db2 diaglevel
db2 update dbm cfg using diaglevel 4
Chapter 7. Performance analysis 307
7.4 Obtaining information about the host configuration
There are several places where you can find host configuration information that affects
distributed workloads or that shows the current status. The most relevant sources of
information are described briefly in this section:
Verification of currently active DSNZPARMs
SYSPLAN and SYSPACKAGES
Example 7.4.3 on page 310
GET_CONFIG and GET_SYSTEM_INFO stored procedures
DB2 commands
7.4.1 Verification of currently active DSNZPARMs
You can use, for example, the following methods to get the current DSNZPARM values
in effect:
Optimization Service Center subsystem parameter panel
The IBM provided DSNWZP stored procedure
OMEGAMON PE batch reporting
You can specify the ddname SYSPRMDD to get a System Parameter report. This ddname
is optional and applies to all the OMEGAMON PE reports.
Example 7-42 shows an extract of a sample report showing some of the DSNZPARM
parameters of interest for this chapter.
Example 7-42 Extract of an OMEGAMON PE System Parameter Report
LOCATION: DB9A OMEGAMON XE FOR DB2 PERFORMANCE EXPERT (V4)
GROUP: N/P SYSTEM PARAMETERS REPORT
STORAGE SIZES INSTALLATION PARAMETERS (DSNTIPC,DSNTIPE) IRLM START PROCEDURE NAME (IRLMPRC)....................DB9AIRLM
------------------------------------------------------- SECONDS DB2 WILL WAIT FOR IRLM START (IRLMSWT)..............300
MAX NO OF USERS CONCURRENTLY RUNNING IN DB2 (CTHREAD).......200 U LOCK FOR REPEATABLE READ OR READ STABILITY (RRULOCK).......NO
MAX NO OF TSO CONNECTIONS (IDFORE)...........................50 X LOCK FOR SEARCHED UPDATE/DELETE (XLKUPDLT).................NO
MAX NO OF BATCH CONNECTIONS (IDBACK).........................50 IMS/BMP TIMEOUT FACTOR (BMPTOUT)..............................4
MAX NO OF REMOTE CONNECTIONS (CONDBAT)......................300 IMS/DLI TIMEOUT FACTOR (DLITOUT)..............................6
MAX NO OF CONCURRENT REMOTE ACTIVE CONNECTIONS (MAXDBAT)....100 WAIT FOR RETAINED LOCKS (RETLWAIT)............................0
TRACING, CHECKPOINT & PSEUDO-CLOSE PARAMETERS (DSNTIPN) COPY2 ARCHIVE LOG DEVICE TYPE (UNIT2)...................'BLANK'
------------------------------------------------------- SPACE ALLOCATION METHOD (ALCUNIT).........................BLOCK