DB2 Replication V8
Lijun (June) Gu, Lloyd Budd, Aysegul Cayci, Colin Hendricks, Micks Purnell, Carol Rigdon
ibm.com/redbooks
International Technical Support Organization

A Practical Guide to DB2 UDB Data Replication V8

December 2002
SG24-6828-00
Note: Before using this information and the product it supports, read the information in Notices on page xi.
First Edition (December 2002) This edition applies to DB2 Universal Database Enterprise Server Edition for Windows and UNIX Version 8, DataPropagator for z/OS and OS/390 V8, and DataPropagator for iSeries V8.
Copyright International Business Machines Corporation 2002. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices
Trademarks
Preface
The team that wrote this redbook
Become a published author
Comments welcome

Chapter 1. Introduction to DB2 Replication V8
1.1 Overview of the IBM replication solution
1.2 Why use replication?
1.2.1 Distribution of data to other locations
1.2.2 Consolidation of data from remote systems
1.2.3 Bidirectional exchange of data
1.2.4 Other requirements
1.3 DB2 V8 replication from 30,000 feet
1.3.1 Administration
1.3.2 Capture
1.3.3 Apply
1.3.4 Alert Monitor
1.4 Putting the pieces together
1.4.1 Administration for all scenarios
1.4.2 Data distribution and data consolidation
1.4.3 Bidirectional with a master (update anywhere)
1.4.4 Bidirectional with no master (peer-to-peer)
1.4.5 Alert Monitor configuration
1.5 DB2 Replication V8 close up
1.5.1 Administration: defining a replication scenario
1.5.2 Operations: DB2 Capture and Apply
1.5.3 Operations: Informix Capture and Apply
1.5.4 Administration and operations: Alert Monitor
1.6 What's new in DB2 Replication V8
1.6.1 Administration
1.6.2 Capture
1.6.3 Apply
1.6.4 Monitor
1.6.5 Troubleshooting
1.7 The redbook environment

Chapter 2. Getting started with Replication Center
2.1 DB2 Replication Center's architecture
2.2 Technical requirements for DB2 Replication Center
2.2.1 Hardware requirements
2.2.2 Software requirements
2.2.3 Networking requirements
2.2.4 Requirements at replication servers
2.2.5 Requirements for replication to/from non-DB2 servers
2.3 DB2 products needed to use Replication Center
2.4 How to get Replication Center
2.4.1 How to get DB2 Connect Personal Edition
2.5 Installing DB2 Replication Center
2.5.1 System kernel parameters on Solaris, HP-UX, and Linux
2.5.2 Installing DB2 Administration Client with Replication Center
2.6 Configuring DB2 Connectivity for Replication Center
2.7 Replication Center and file directories
2.8 Desktop environment for Replication Center
2.9 Opening DB2 Replication Center
2.10 A Quick Tour of DB2 Replication Center
2.11 Managing your DB2 Replication Center profile
2.12 Replication Center dialog windows
2.13 Run Now or Save SQL
2.13.1 Running Saved SQL files later
2.14 Creating Control Tables - using Quick or Custom
2.15 Adding Capture and Apply Control Servers
2.15.1 Removing Capture/Apply Control Servers
2.16 Replication Center objects for non-DB2 servers
2.17 Creating registrations and subscriptions
2.18 Replication Center Launchpad
2.19 Trying replication with the DB2 SAMPLE database
2.20 More Replication Center tips

Chapter 3. Replication control tables
3.1 Introduction to replication control tables
3.2 Setting up capture control tables
3.2.1 Create capture control tables
3.2.2 Platform specific issues, capture control tables
3.3 Setting up apply control tables
3.3.1 Creating apply control tables
3.3.2 Platform specific issues, apply control tables
3.4 Advanced considerations
3.4.1 Creating control tables at a command prompt
3.4.2 Capture control tables - advanced considerations
3.4.3 Apply control tables - advanced considerations
3.4.4 Sizing tablespaces for control tables
3.4.5 Control tables described

Chapter 4. Replication sources
4.1 What is a replication source?
4.2 Define a replication source from Replication Center
4.2.1 Registering the replication sources
4.2.2 Selecting the replication sources
4.2.3 Defining the registration options
4.2.4 iSeries replication sources
4.2.5 CD Table
4.2.6 Non-DB2 sources - CCD tables
4.2.7 Non-DB2 sources - Capture triggers and procedures
4.3 Views as replication sources
4.3.1 Views over one table
4.3.2 Views over multiple tables
4.3.3 Restrictions on views

Chapter 5. Subscription set
5.1 Subscription set and subscription set members
5.1.1 Subscription attributes
5.2 Subscription set and member planning
5.2.1 Member definitions to non-DB2 target servers
5.2.2 Subscription set and apply qualifiers planning
5.3 Define subscriptions using the Replication Center
5.3.1 Create subscription sets with members
5.3.2 Create subscription set without members
5.3.3 Subscription sets from non-DB2 servers
5.3.4 Subscription sets to non-DB2 servers
5.3.5 Adding subscription members to existing subscription sets
5.3.6 Subscription sets and member notebook
5.4 Target types descriptions
5.4.1 User copy
5.4.2 Point-in-time
5.4.3 Aggregate tables
5.4.4 CCD (consistent change data)
5.4.5 Replica
5.5 Data blocking
5.6 Scheduling replication
5.7 SQL script description
5.8 Create subscriptions using iSeries CL commands
5.8.1 Add subscription set - ADDDPRSUB
5.8.2 Add subscription members - ADDDPRSUBM

Chapter 6. Operating Capture and Apply
6.1 Basic operations on Capture and Apply
6.1.1 Basic operations from the Replication Center
6.1.2 Basic operations from the command prompt
6.1.3 Considerations for DB2 UDB for UNIX and Windows
6.1.4 Considerations for DB2 UDB for z/OS
6.1.5 Considerations for DB2 UDB for iSeries
6.1.6 Troubleshooting the operations
6.2 Capture and Apply parameters
6.2.1 Change Capture parameters
6.2.2 Capture parameters
6.2.3 Apply parameters
6.3 Other operations
6.3.1 Pruning control tables
6.3.2 Reinitializing Capture
6.3.3 Suspend and resume Capture
6.4 Using ASNLOAD for the initial load
6.4.1 Using ASNLOAD on DB2 UDB for UNIX and Windows
6.4.2 Using ASNLOAD on DB2 UDB for z/OS
6.4.3 Using ASNLOAD on the iSeries

Chapter 7. Monitoring and troubleshooting
7.1 Capture and Apply status
7.2 Replication alert monitoring
7.2.1 Creating monitoring control tables
7.2.2 Create contacts
7.2.3 Alert Conditions
7.2.4 Replication monitoring and non-DB2 sources
7.2.5 Replication monitoring and non-DB2 targets
7.2.6 Replication monitor program operations
7.2.7 Using JCL to start monitoring on z/OS
7.2.8 Receiving an alert
7.2.9 Replication monitoring example
7.3 Other monitoring
7.3.1 Examining historic data
7.3.2 Health center
7.3.3 System monitoring
7.4 Troubleshooting
7.4.1 DB2 Administration Client
7.4.2 Files generated
7.4.3 Replication Analyzer (asnanalyze and ANZDPR)
7.4.4 Replication Trace (asntrc and WRKDPRTRC)
7.4.5 DB2 Trace
7.4.6 db2support
7.4.7 How to get assistance
7.4.8 Platform specific troubleshooting
7.5 Advanced troubleshooting
7.5.1 asnanalyze and ANZDPR
7.5.2 DB2 replication trace

Chapter 8. Maintaining your replication environment
8.1 Maintaining registrations
8.1.1 Adding new registrations
8.1.2 Deactivating and activating registrations
8.1.3 Removing registrations
8.1.4 Changing capture schemas
8.1.5 Changing registration attributes for registered tables
8.2 Maintaining subscriptions
8.2.1 Adding new subscription sets
8.2.2 Deactivating and activating subscriptions
8.2.3 Changing subscription sets
8.2.4 Removing subscription sets
8.2.5 Adding members to existing subscription sets
8.2.6 Changing attributes of subscription sets
8.2.7 Adding a new column to a source and target table
8.3 Promote function
8.3.1 Promoting registered tables
8.3.2 Promoting registered views
8.3.3 Promoting subscription sets
8.4 Maintaining capture and apply control servers
8.4.1 Manually pruning replication control tables
8.4.2 RUNSTATS for replication tables
8.4.3 REORG for replication tables
8.4.4 Rebinding replication packages and plans
8.4.5 Recovering source tables, replication tables, or target tables
8.4.6 Managing DB2 logs and journals used by Capture
8.5 Full refresh procedures
8.5.1 Automatic full refresh
8.5.2 Manual full refresh
8.5.3 Bypassing the full refresh

Chapter 9. Advanced replication topics
9.1 Replication filtering
9.1.1 Replicating column subsets
9.1.2 Replicating row subsets
9.2 Replication transformations
9.2.1 Capture transformations
9.2.2 Source table views
9.2.3 Apply transformations
9.2.4 Before and after SQL statements
9.3 Replication of large objects
9.3.1 DB2 LOB replication
9.3.2 Informix LOB replication
9.4 Replication of DB2 Spatial Extender data
9.5 Update anywhere replication
9.5.1 Administration: defining update anywhere replication
9.5.2 Operations: Capture and Apply
9.6 DB2 peer to peer replication
9.6.1 Administration and operations for peer to peer replication
9.6.2 Adding another peer
9.6.3 Conflict detection using triggers

Chapter 10. Performance
10.1 End-to-end system design for replication
10.1.1 Pull replication system design
10.1.2 Push replication system design
10.1.3 iSeries-to-iSeries replication with remote journalling
10.1.4 Replicating to non-DB2 servers
10.1.5 Replicating from non-DB2
10.2 Capture performance
10.2.1 Reading the DB2 Log
10.2.2 Reading iSeries Journals
10.2.3 Collecting transaction information in memory
10.2.4 Capture Insert into CD and UOW tables
10.2.5 Capture pruning
10.2.6 non-DB2 source servers
10.2.7 Capture's latency
10.2.8 Capture's throughput
10.3 CD tables and the IBMSNAP_UOW table
10.4 Apply performance
10.4.1 Fetching changes from the source server
10.4.2 Caching Apply's dynamic SQL at source servers
10.4.3 Specifying more memory for caching dynamic SQL packages
10.4.4 Transporting changes across the network
10.4.5 Apply spill files
10.4.6 Apply updating the target tables
10.4.7 Target table and DB2 log characteristics at the target server
10.4.8 Caching Apply's dynamic SQL at the target server
10.4.9 Querying target tables while Apply is replicating
10.4.10 Apply operations and Subscription set parameters
10.4.11 End-to-end latency of replication
10.4.12 Apply throughput
10.4.13 Time spent on each of Apply's suboperations
10.5 Configuring for low-latency replication
10.6 Development benchmarks
10.6.1 Benchmark system configuration
10.6.2 Benchmark results

Appendix A. DB2 Admin Client and DB2 Connect PE install

Appendix B. Configuring connections for Replication Center
B.1 Connecting directly to DB2 for z/OS and OS/390
B.2 Connecting to DB2 for z/OS via DB2 Connect
B.3 Connecting directly to iSeries (AS/400)
B.4 Connecting to iSeries via DB2 Connect server
B.5 Connecting to DB2 UNIX or Linux server
B.6 Connecting to DB2 on Windows
B.7 Testing Connections and Binding Packages

Appendix C. Configuring federated access to Informix
C.1 Technical requirements for federated access
C.2 Information from the Informix server
C.3 Software installation on the DB2 system
C.4 Configuring Informix sqlhosts
C.5 Preparing the DB2 instance for federated
C.6 Configuring federated objects in a DB2 database
C.6.1 Creating federated objects using SQL statements
C.6.2 Creating federated objects using the Control Center

Related publications
IBM Redbooks
Other resources
Referenced Web sites
How to get IBM Redbooks
IBM Redbooks collections

Index
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product, and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious, and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing, or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX, AIX/L, AS/400, DataJoiner, DataPropagator, DB2, DB2 Connect, DB2 Universal Database, Distributed Relational Database Architecture, DRDA, GDDM, IBM, IBM.COM, IBM eServer, IMS, Informix, iSeries, Language Environment, MVS, OS/390, OS/400, Perform, pSeries, RACF, Redbooks (logo), RETAIN, RISC System/6000, S/390, SecureWay, SP, TME, z/OS, zSeries
The following terms are trademarks of International Business Machines Corporation and Lotus Development Corporation in the United States, other countries, or both: Lotus Notes
The following terms are trademarks of other companies: ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC. Other company, product, and service names may be trademarks or service marks of others.
Preface
IBM DB2 Replication (called DataPropagator on some platforms) is a powerful, flexible facility for copying DB2 and Informix data from one place to another. The IBM replication solution includes transformation, joining, and filtering of data. You can move data between different platforms. You can distribute data to many different places, or consolidate data in one place from many different places. You can exchange data between systems.

The objective of this IBM Redbook is to provide you with detailed information that you can use to install, configure, and implement replication among the DB2 database family and Informix.

The redbook is organized so that each chapter builds upon information from the previous chapter. Chapter 1 is a high-level overview of IBM DB2 Replication. It introduces the replication components, product requirements, sample configurations, and the new functions introduced in Version 8. Chapters 2 through 5 give details on installing, configuring, and using the new DB2 V8 Replication Center to define replication scenarios. Chapters 6 through 8 cover the details of operating the replication programs to capture and apply changes, including monitoring, troubleshooting, and maintaining the replication environment. Chapter 9 covers advanced topics, such as bidirectional replication, replication of large objects, and replication of spatial data. Chapter 10 describes the DB2 V8 replication performance benchmarks run at the IBM Silicon Valley Lab.
From right to left: Lijun Gu, Lloyd Budd, Aysegul Cayci, Colin Hendricks, Micks Purnell, Carol Rigdon
Lijun (June) Gu is a Project Leader at the International Technical Support Organization (ITSO), San Jose Center, California, where she conducts projects on all aspects of DB2 Universal Database (DB2 UDB). She is an IBM Certified Solutions Expert in DB2 UDB Database Administration and an IBM Certified Specialist in DB2 UDB. She has extensive experience in DB2 UDB, UNIX/AIX/Linux, and ADSM administration, as well as database design and modeling. She holds three master's degrees: an MS in Computer Science, an MS in Analytical Chemistry, and an MS in Soil Science.

Lloyd Budd is a DB2 Support Analyst at the IBM Toronto Lab in Markham, Canada. He has two years of experience with DB2 and is an IBM Certified Solutions Expert in DB2 UDB Version 5, 6, and 7 Database Administration. He holds a bachelor's degree in Computer Science, specializing in Software Engineering, from the University of Victoria, Canada. His areas of expertise include DB2 multi-platform logging, backup, and recovery. He is an active member of the DB2 user, documentation, and development communities, and is also involved in the open source community.

Aysegul Cayci is a DB2 Instructor and Consultant on DB2 for z/OS and for UNIX and Windows platforms in Turkey. She has 13 years of experience with DB2 and has been working at Polaris Computer Systems for seven years. She holds a BSc in Computer Science from Middle East Technical University, Ankara, and an MSc in Computer Engineering from Bosphorus University, Istanbul. She is an IBM Certified Solutions Expert in DB2 UDB Version 7.1 Database Administration, DB2 UDB Version 7.1 Family Application Development, and DB2 UDB V7.1 for
OS/390 Database Administration. Her areas of expertise include DB2 performance management and database administration.

Colin Hendricks is a Certified IBM I/T Specialist, currently providing pre-sales support for BI/CRM solutions and Data Management products on the iSeries platform, specializing in iSeries data replication technical support and implementation. He has worked 13 years with IBM and has over 20 years of experience in the I/T field. Throughout his I/T career he has worked primarily on IBM midrange platforms, performing various technical support roles, I/T operations management, project management, and application design and development.

Micks Purnell is a Technical Specialist in IBM's Advanced Technical Support for DataJoiner, DB2 Replication, and the DB2 Federated Server. He has been with IBM for over 20 years. Throughout his career, he has been a Networking Technical Specialist with expertise in SNA software and hardware; a Product Manager for IBM's TCP/IP and OSI networking software products; a Client/Server Technical Specialist focused on distributed database and client/server system development issues; and Project Manager for a successful customer proof-of-concept using IBM's DB2 Replication Version 1 and DataJoiner Version 1. Micks has continued to support DataJoiner and DB2 replication in various positions.

Carol Rigdon is an IBM I/T Specialist, providing pre-sales technical support for DB2 in the Americas. She has 20 years of experience in data management and has been working with replication for the last seven years. She holds a degree in Computer Science from the University of North Florida. Her areas of expertise include homogeneous and heterogeneous database replication, cross-platform database connectivity, and database federation.
We especially want to thank the following people for their contributions in supporting this residency and/or for their technical review: Beth Hamel Patrick See Anthony J Ciccone Alice Leung IBM Silicon Valley Lab, San Jose, CA, USA Thanks to the following people for their contributions to this project by providing technical help and/or technical review: Jaime Anaya Jayanti Mahapatra Ken Chia Ravichandran Subramaniam Chuck Vanhavermaet
Kathy Kwong Michael Morrison David Yang Jing-Song Jang Benedikt Berger Karan Karu Lakshmi Palaniappan Miranda Kwong Kris Tachibana Nhut Bui Nguyen Dao Tom Wabinski Dan Lubow Lorry Paulsen Gil Lee Jason Chen IBM Silicon Valley Lab, San Jose, CA, USA Thanks to the following people for their help or support: Emma Jacobs, Yvonne Lyon, Maritza Dubec, Journel Saniel, William Carney, Bart Steegmants, Ueli Wahli, Osamu Takagiwa, Patrick Vabre, Gabrielle Velez International Technical Support Organization, San Jose Center
Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at:
ibm.com/redbooks
Mail your comments to: IBM Corporation, International Technical Support Organization Dept. QXXE Building 80-E2 650 Harry Road San Jose, California 95120-6099
Chapter 1. Introduction to DB2 Replication V8
The two target servers may be copying different subsets or transformations of the data on different schedules.
The data at the source servers has the same structure. In SQL terms, the target is a UNION of the sources.
Another type of bidirectional replication does not have a designated master location. Each location copies changes from all other locations directly. This is often called multi-master or peer-to-peer replication. Peer-to-peer replication can be used to maintain disaster recovery sites, to provide fail-over systems for high availability, and to balance query workload across multiple locations. Figure 1-4 shows a peer-to-peer configuration.
1.3.1 Administration
The Replication Center is a graphical user interface used to define replication sources and map sources to targets. It is also used to manage and monitor the Capture and Apply processes on local and remote systems. The Replication Center runs on Windows and UNIX/Linux systems and must have connectivity to both the source and target servers. The DB2 V8 Administration Client for Windows and UNIX includes the Replication Center. Figure 1-5 shows a replication administrator using the Replication Center to manage the other three replication components.
[Figure 1-5: the Replication Center administering the Capture, Apply, and Alert Monitor components, together with their control tables and the source, change data, unit-of-work, and target tables.]
1.3.2 Capture
Changes to DB2 source tables are captured by a Capture program running at the source server. The DB2 source server can be DB2 for z/OS and OS/390 Versions 6, 7, and 8, DB2 for iSeries on OS/400 V5R2, or DB2 for Windows and UNIX Version 8. Changes to Informix source tables are captured by triggers created automatically when the replication source is defined. Data can be filtered by column during the Capture process. The captured changes are stored in a table local to the source table and are automatically deleted after they have been applied.
DB2 Capture
Figure 1-6 shows the data flow for DB2 Capture.

[Figure 1-6: Capture reads the DB2 log at the source server and inserts captured changes into the Change Data and Unit of Work tables, guided by its control tables.]
When changes are made to the source table, DB2 writes log (journal) records. These log records are used for database recovery and for replication. The Capture program uses database interfaces to access the log records:

- DB2 for z/OS and OS/390: the Instrumentation Facility Interface (IFI) 306
- DB2 for Windows and UNIX: the asynchronous log read API, db2ReadLog
- DB2 for iSeries: the RCVJRNE command

Each source table has a corresponding Change Data (CD) table where the captured changes are stored. The CD table is created by the Replication Center when you define a replication source table. You can choose to capture a subset of the source table columns. You can also capture the values before the change is made (called before-image columns) along with the values after the change is made (called after-image columns). The log record sequence number (LSN) of a change uniquely identifies that change.

DB2 Capture holds the changes in memory until a COMMIT is issued for those changes. When a COMMIT is issued for a transaction that involves replication source tables, Capture inserts the captured changes into the appropriate CD tables and stores the COMMIT information in the Unit of Work (UOW) control table. When Capture detects a ROLLBACK, it removes the associated changes from memory.

You can run multiple Capture programs on a source server to improve throughput. Each Capture has its own schema for control tables and its own set of CD tables. The schema is defined using the Replication Center when you create the capture control tables. You specify the schema when you define your replication sources and targets and when you start the Capture program. The default capture schema is ASN. The CD tables and the UOW table are always located on the server where the source table is located. The only exception is when iSeries remote journaling is used. Figure 1-7 shows iSeries replication with remote journaling.
[Figure 1-7: changes to the source table are journaled locally and sent to a journal on a remote iSeries system, where Capture populates the Unit of Work and Change Data tables.]
You can set up remote journaling on the iSeries with the ADDRMTJRN command. Changes to the source table are written to the local log (journal) and also sent to another iSeries system synchronously or asynchronously. Capture can run on the remote iSeries system and replicate changes from the journal on that system.
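As a sketch of what a registration produces, a hypothetical source table DEPT with one captured before-image column might get a CD table like the following. The IBMSNAP_* control columns follow the conventions described above; the table name, source columns, and exact data types are illustrative, and the real DDL is generated by the Replication Center:

```sql
-- Illustrative CD table for a hypothetical source table SRCUSER.DEPT
-- (capture schema ASN, before-image prefix X assumed).
CREATE TABLE ASN.CD_DEPT (
  IBMSNAP_COMMITSEQ CHAR(10) FOR BIT DATA NOT NULL, -- LSN of the COMMIT
  IBMSNAP_INTENTSEQ CHAR(10) FOR BIT DATA NOT NULL, -- LSN of the change
  IBMSNAP_OPERATION CHAR(1) NOT NULL,               -- I, U, or D
  DEPTNO    CHAR(3) NOT NULL,       -- after-image column
  DEPTNAME  VARCHAR(36) NOT NULL,   -- after-image column
  XDEPTNAME VARCHAR(36)             -- before-image column (prefix X)
);
```

The two sequence columns carry the LSN of the change and of its unit of work's commit, which is what lets Apply replay changes in source order.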
server to process changes in both directions. Peer-to-peer replication has a Capture and an Apply program running at each server.
Informix Capture
Figure 1-8 shows the data flow for Informix capture triggers.

[Figure 1-8: triggers on the Informix source table capture changes into a change data table, guided by control tables in the same Informix database.]
Remember that DB2 V8 for Windows and UNIX or DB2 Connect V8 is required when defining replication from or to an Informix database. DB2 Replication is written using DB2 syntax and data types, so the DB2 V8 federated capability is required to translate from DB2 to Informix. If you use one DB2 V8 federated database to access multiple Informix source databases, then each Informix source database must have its own unique capture schema. There will be one capture control table defined in the DB2 V8 federated database, ASN.IBMSNAP_CAPSCHEMAS. When you define an Informix replication source table, the Replication Center creates insert, update, and delete triggers for the source table. Informix triggers are limited to 256 characters in length. This is not enough for the replication logic, so three Informix stored procedures are also created. The Replication Center also creates a consistent change data (CCD) table to hold the changes captured by the triggers. There is one CCD table for each Informix replication source. All these objects are located in the same Informix database as the source.
When a change is made to the Informix source table, the appropriate trigger is executed with the before- and after-images of the changed row. The trigger calls the associated stored procedure and that procedure inserts a row into the CCD table, along with control information. Triggers do not have access to transaction information, so there is no UOW table.
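The trigger-to-procedure pattern can be sketched roughly as follows. The object and column names are hypothetical; the actual triggers and procedures are generated by the Replication Center:

```sql
-- Sketch: an insert trigger on a hypothetical Informix source table
-- "dept" hands the after-image to a stored procedure, which records
-- the row (plus control information) in the CCD table.
CREATE TRIGGER dept_ins INSERT ON dept
  REFERENCING NEW AS post
  FOR EACH ROW (
    EXECUTE PROCEDURE dept_ins_proc(post.deptno, post.deptname)
  );
```

The procedure body holds the replication logic that will not fit within Informix's 256-character trigger limit.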
1.3.3 Apply
Captured changes are applied to target tables by Apply programs. The Apply program can run on any server and must have connectivity to both the source and the target servers. Data can be filtered by column, filtered by row, joined with other data (using views), and transformed with SQL expressions during the Apply process. Bidirectional replication, including peer-to-peer, is supported only for the DB2 family.
[Diagram: Apply reads the captured changes from the Unit of Work and Change Data tables at the DB2 source server and applies them to tables at the DB2 target server.]
The Replication Center is used to map a source table or view to a target table. You define a subscription set, which is a group of one or more target tables (called subscription members) that will be processed as a unit by Apply. The changes
from the CD tables are applied for each table separately, in the same order they occurred at the source. A single COMMIT is issued after the last member in the set is processed. You can specify transactional replication for subscription members that are user copies (read only), point in time (PIT), or replicas; changes are then applied in the same order that they occurred at the source, across all the members in the set. If your target tables have DB2 referential constraints or other relationships that must be preserved, then you should choose transactional replication when defining the subscription set.

Apply selects from the source tables for the first initialization of the target tables, using any transformations or filtering you defined. This is called full refresh. Optionally, you can do this step yourself. The Replication Center provides Manual Full Refresh to update the apply control tables when you do the initial population of the target tables outside of replication. After the full refresh, Apply selects changes from the CD tables and applies those changes to the target tables. Some target table types may require a join of the CD and the UOW table during the Apply process. The join is not required for user copy targets (used for data distribution and data consolidation) whose predicates do not reference any column in the UOW table. The join is required for bidirectional copies.

Apply can be run as a batch process or as a task that runs continuously. You specify the schedule for replication when you define the subscription set. The schedule can be time-based, on an interval from zero seconds to one year. You can also schedule Apply using an event. You name the event, and then insert a row into the Apply events control table, ASN.IBMSNAP_SUBS_EVENT, whenever you want Apply to start copying.
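Posting an event is a single insert into the events control table. For example, with a hypothetical event name END_OF_DAY (the column list here is a sketch; verify against your release's control table layout):

```sql
-- Apply starts copying any subscription set tied to this event name
-- once the posted event time has passed.
INSERT INTO ASN.IBMSNAP_SUBS_EVENT (EVENT_NAME, EVENT_TIME)
  VALUES ('END_OF_DAY', CURRENT TIMESTAMP);
```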
[Diagram: Apply reads captured changes at the DB2 source server and applies them, through nicknames in a DB2 federated database, to Informix target tables.]
This is the same process that is used for DB2 to DB2 replication. The only difference is that there must be a DB2 V8 federated database with nicknames for the Informix target tables. The Apply program connects to the DB2 federated database and issues DB2 inserts, updates, deletes, and commits against the nicknames. DB2 V8 translates the DB2 SQL syntax and data types to Informix syntax and data types. The DB2 V8 federated server can be installed on the same system as Informix or on a different server. The Informix SDK Client must be installed on the federated server. If your source server is DB2 V8 for Windows or UNIX, then your source server can also act as the federated server.
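Setting up the federated objects for an Informix target can be sketched as follows. The server, node, schema, and table names are hypothetical, and the wrapper and server definitions are normally created once by an administrator:

```sql
-- One-time federated setup on the DB2 V8 federated database (sketch).
CREATE WRAPPER INFORMIX;                 -- uses the Informix client libraries
CREATE SERVER IFXSRV TYPE INFORMIX VERSION '9' WRAPPER INFORMIX
  OPTIONS (NODE 'ifxnode', DBNAME 'targetdb');
CREATE USER MAPPING FOR USER SERVER IFXSRV
  OPTIONS (REMOTE_AUTHID 'ifxuser', REMOTE_PASSWORD 'ifxpwd');

-- The nickname through which Apply issues its inserts, updates,
-- and deletes against the Informix target table.
CREATE NICKNAME REPL.TGT_DEPT FOR IFXSRV."ifxowner"."dept";
```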
[Diagram: Apply selects trigger-captured changes from the Informix source through nicknames in a DB2 federated database and applies them to DB2 target tables.]
This is the same process that is used for DB2 to DB2 replication. The difference is that there must be a DB2 V8 federated database with nicknames for the Informix source, control, and change data tables. The Apply program connects to the DB2 federated database and issues DB2 selects against the nicknames. DB2 V8 translates the Informix syntax and data types to DB2 SQL syntax and data types so that Apply can process the data against the DB2 target. The DB2 V8 federated server can be installed on the same system as Informix or on a different server. The Informix SDK Client must be installed on the federated server. If your target server is DB2 V8 for Windows or UNIX, then your target server can also act as the federated server.
Capture and Apply servers. You can also define users or groups of users to receive e-mail notification when an alert occurs. The server where the monitor runs is called the Monitor Server. A monitor server can monitor one or more local and/or remote servers. The Alert Monitor does not monitor the Capture triggers on non-DB2 source servers. Figure 1-12 shows an Alert Monitor monitoring two different Capture and Apply servers.
[Figure 1-12: one Alert Monitor server monitoring two Capture and Apply servers through their control tables and DAS, recording alerts in its own control tables and sending e-mail notifications.]
The Alert Monitor program collects information from the capture and apply control tables. It also uses the Database Administration Server (DAS) installed on the Capture and Apply servers to receive remote commands and supply system information. DAS is installed when you install DB2 for Windows or UNIX.
The DB2 Administration Server for z/OS will be included with DB2 for z/OS and OS/390 V8. DAS is installed in the UNIX System Services (USS) shell. DAS V8 will support DB2 for z/OS and OS/390 Versions 7 and 8. If you will be using the Capture-Status-Down or Apply-Status-Down alert condition, then DAS is required at the Monitor server and at the Capture and/or Apply server. DAS is not required for the Monitor to check other alert conditions, and it is not required on any iSeries Capture and Apply servers that will be monitored.

You can run the Alert Monitor from the Replication Center or by issuing a command. You specify how often the Alert Monitor checks the events and thresholds using the monitor interval parameter. When a monitored event such as an error message occurs, or a monitor threshold is exceeded, the Alert Monitor inserts an alert into the ASN.IBMSNAP_ALERTS table and sends e-mail notification to the contacts you have defined. Notifications are sent using an SMTP server. Examples of SMTP servers are Lotus Notes, Microsoft Outlook, and the sendmail program in UNIX operating systems.
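Recorded alerts can also be inspected directly with a query. This is a sketch: only the table name ASN.IBMSNAP_ALERTS comes from the text above, and the column names and the 24-hour filter are assumptions to be checked against the monitor control table layout in your release:

```sql
-- Sketch: list alerts recorded by the Alert Monitor in the last day
-- (column names are assumptions; see the control table reference).
SELECT ALERT_TIME, COMPONENT, SERVER_NAME, CONDITION_NAME
  FROM ASN.IBMSNAP_ALERTS
  WHERE ALERT_TIME > CURRENT TIMESTAMP - 24 HOURS
  ORDER BY ALERT_TIME DESC;
```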
DB2 Administration Server for z/OS V8: DAS must be installed on any z/OS Capture or Apply control server if you use the Replication Center for operations on those control servers. Operations include starting and stopping processes and monitoring processes.
[Table: supported SOURCE servers — DB2 for z/OS and OS/390, DB2 for iSeries, DB2 for Windows & UNIX, Informix Dynamic Server; supported TARGET servers — DB2 for z/OS and OS/390, DB2 for iSeries, DB2 for Windows & UNIX, Informix Dynamic Server.]
to Table 1-1 to determine which products you need in your environment to provide the Capture program. In an update anywhere scenario, Apply should be running at each replica to process the changes that are coming from the master and the changes that need to be sent to the master. Use Table 1-1 to determine what products are needed to apply changes in an update anywhere scenario. For example, assume that the master site is DB2 for z/OS and OS/390, and that there are 20 replicas, all running DB2 for Windows and UNIX V8. You would need to install DB2 DataPropagator for z/OS and OS/390 at the master site. The replicas already have the Capture and Apply programs, since they are included in DB2 for Windows and UNIX. Remember that update anywhere is only valid for the DB2 database family; it is not supported for Informix sources or targets.
monitor replication on both the new version of DB2 and the previous version of DB2 (DB2 for z/OS and OS/390 V7). The products required for the Alert Monitor Server are listed in Table 1-3.
Table 1-3 Alert Monitor Server requirements

DB2 for Windows and UNIX: DB2 Universal Database for Windows and UNIX V8
DB2 for z/OS and OS/390: DB2 DataPropagator for z/OS and OS/390 V8 and DB2 Administration Server for z/OS V8

The products required on a Capture or Apply server to be monitored for alerts are listed in Table 1-4.

Table 1-4 Monitored server requirements

DB2 for Windows and UNIX: None; DAS is included with the product
DB2 for z/OS and OS/390: DB2 DataPropagator for z/OS and OS/390 V8 and DB2 Administration Server for z/OS V8
DB2 for iSeries: None; DAS is not used for iSeries monitoring
the apply control server. Apply control tables always have the schema ASN. There are 10 apply control tables. A database that will be used to monitor replication is a monitor control server. The monitor control tables are defined in the monitor server database using the Replication Center. The monitor server can be located anywhere in your enterprise and must have access to the capture and apply control servers that you want to monitor. There are nine monitor control tables, and they always have the schema ASN. All the control tables are described in Chapter 3, Replication control tables, on page 105 of this book, and in detail in the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
New registrations are always inactive until a subscription is processed against them. If the source table is in an iSeries database, then the name of the journal for that source table is inserted into the capschema.IBMSNAP_REG_EXT table at the capture control server. Also kept there is the name of the source system if a remote journal configuration is used. On the iSeries, you can also use the ADDDPRREG command to define a replication source.

These actions occur when you register an Informix source table:

- A consistent change data (CCD) table is created to hold the captured changes. The CCD table has the columns you selected when registering the table. The CCD table columns for after-image values have the same names and data types as the source table columns. If you choose to capture the before-image values, the CCD table columns have the column names of the source prefixed with a character you choose; the default before-image prefix is X. The CCD table also has an operation column to identify the type of change and a sequence column.
- Three triggers and three associated stored procedures are created to capture the changes. There is one trigger/procedure pair each for inserts, updates, and deletes to the source table. The stored procedures generate a unique sequence number for each change and insert the changed values into the CCD table.
- A trigger and stored procedure are created on the capture control table, capschema.IBMSNAP_PRUNCNTL. If the trigger and stored procedure already exist, they are modified to include the new registration. For DB2 sources, the Capture program deletes (prunes) CD table rows after they have been applied; for non-DB2 sources, pruning is done via these triggers. Informix triggers are limited in size to 256 characters, so stored procedures are defined and called by the triggers.
- Non-condensed: one CCD row for each change to a source table row. You can also choose to store control information, such as the authorization ID that made the change, in the CCD table.
- Replica: an updateable copy of all or a portion of a source table. Replicas are used for update anywhere replication. Only DB2 tables can be replicas; this target type cannot be chosen for Informix target tables.
- Base aggregate: an aggregate of a source table based on SQL column functions and GROUP BY filters that you define. Apply issues a select against the source table each time it processes a base aggregate target table and inserts new rows into the target table.
- Change aggregate: an aggregate of the changes to a source table based on SQL column functions and GROUP BY filters that you define. Apply issues a select against the CD table each time it processes a change aggregate table and inserts new rows into the target table.

This is what takes place when you define a subscription member:

- If this is the first member added to a set, a row is inserted into the capschema.IBMSNAP_PRUNE_SET table at the capture control server with set information. This information is used when pruning change data after it has been applied.
- One row is inserted into the capschema.IBMSNAP_PRUNCNTL table at the capture control server for each subscription member. Each member is assigned a MAP_ID number, which uniquely identifies that subscription member. The MAP_ID starts at 0 and increases by 1 for every subscription to this capture control server. It is stored in the MAP_ID column of capschema.IBMSNAP_PRUNCNTL.
- One row is inserted into the ASN.IBMSNAP_SUBS_MEMBR table at the apply control server. This row has the apply qualifier, set name, and source and target table names; it is the source table to target table mapping. The MEMBER_STATE for each member is N for new. The PREDICATES value is the row filter that you specified for this mapping.
- If this is the first member added to a set, and the capture control server is an iSeries, the information regarding the source table's journal is also saved in ASN.IBMSNAP_SUBS_SET. This information is used later to make sure the source tables of the same set use the same journal. If both the capture control server and the target server are iSeries, you can also use the command ADDDPRSUBM to define a subscription set member.
Note: The PREDICATES row filter is used when Apply initializes the target table (select from the source table) and when Apply processes changes (select from the CD table). You can specify a predicate to be used only when Apply processes changes by updating the UOW_CD_PREDICATES column of ASN.IBMSNAP_SUBS_MEMBR. You would do this if you wanted to filter changes based on a value in the CD or UOW table. For instance, if you wanted to block the replication of deletes, you would specify IBMSNAP_OPERATION <> D for the UOW_CD_PREDICATES.
UOW_CD_PREDICATES cannot be set from the Replication Center. You must update the ASN.IBMSNAP_SUBS_MEMBR table manually while connected to the apply control server.

- One row for each target table column is inserted into the ASN.IBMSNAP_SUBS_COLS table at the apply control server. Each row defines a mapping from a source column or SQL expression to a target table column, and includes a flag to identify target table columns that are part of the target table's primary key or unique index. This is the source table column to target table column mapping.
- If the source server is an Informix database, then special SQL statements are inserted into the ASN.IBMSNAP_SUBS_STMTS table to control Apply processing. If you defined SQL statements that you want to run before or after each Apply processing cycle, they are also inserted into ASN.IBMSNAP_SUBS_STMTS.
- If the target table does not already exist, it is created, along with a unique index. If the target server is an Informix database, a nickname is created for the target table.
- If the target table is a replica (update anywhere), then there is extra processing to define the replication from the replica back to the source server: a second set is created in ASN.IBMSNAP_SUBS_SET, rows are inserted into ASN.IBMSNAP_SUBS_MEMBR and ASN.IBMSNAP_SUBS_COLS, and the replica is registered as a replication source at the apply control server.
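The manual UOW_CD_PREDICATES update described above might look like this sketch, which blocks replication of deletes for one member. The apply qualifier, set name, and target table name are hypothetical:

```sql
-- Blocking replication of deletes for one subscription member,
-- per the UOW_CD_PREDICATES note (identifying values are examples).
UPDATE ASN.IBMSNAP_SUBS_MEMBR
  SET UOW_CD_PREDICATES = 'IBMSNAP_OPERATION <> ''D'''
  WHERE APPLY_QUAL   = 'MYQUAL'
    AND SET_NAME     = 'SET1'
    AND TARGET_TABLE = 'TGTDEPT';
```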
Capture has five threads:

- INIT: sets up the environment and starts the other threads.
- ADMIN: manages administrative tasks, including error messages and traces, and stores monitoring statistics at the capture control server.
- HOLDL: holds an exclusive lock on capschema.IBMSNAP_CAPENQ to prevent a second instance of Capture with the same capture schema and capture control server.
- WORKER: requests log records from DB2, manages log information, and inserts rows into the CD tables and the UOW table. This thread also updates the capture control tables.
- PRUNE: deletes (prunes) rows from the CD tables and the UOW table after they have been applied. This thread also prunes capschema.IBMSNAP_TRACE, capschema.IBMSNAP_CAPMON, and capschema.IBMSNAP_SIGNAL, based on limits you set at Capture startup.

Apply has three threads:

- ADMIN: manages administrative tasks.
- HOLDL: holds an exclusive lock on the ASN.IBMSNAP_APPENQ row with a matching apply qualifier to prevent a second instance of Apply with the same apply control server and apply qualifier.
- WORKER: processes subscription sets based on the apply control server and apply qualifier specified when Apply is started. This thread accesses capture control tables, source tables, CD tables, UOW tables, apply control tables, and target tables.

You can use the Query Status option in the Replication Center to show the status of each of the Capture and Apply threads.

The cornerstone of DB2 replication is the log record sequence number (LSN). The LSN for each change is stored in the CD table row for that change, along with the LSN of the commit statement for that change's unit of work. These values are used to ensure that changes are replicated in the order they occurred at the source. Capture and apply control tables have columns named SYNCHPOINT, which are used as progress indicators to control replication and restart processing.
Recovery logging must be enabled for a DB2 for Windows and UNIX capture control server before Capture is started the first time.
it inserts a row into capschema.IBMSNAP_REGISTER with the GLOBAL_RECORD column set to Y and the SYNCHPOINT/SYNCHTIME columns set to the current point in the DB2 log. Capture reads the capschema.IBMSNAP_REGISTER table to find registrations. A registration is not active until Apply has signalled that a full refresh has been done.
Note: You must start Capture at least once to initialize the global record values in the ASN.IBMSNAP_REGISTER table. Once that is done, all Capture and Apply communications are done through the ASN.IBMSNAP_SIGNAL table. Capture does not need to be running when a subscription is defined or when Apply is started.
Also, you can add new registrations without re-initializing Capture. When Apply runs for the first time using the new registration, Capture will dynamically load the registration information into memory.
There is one spill file for each member in the subscription set. Apply issues a DELETE against the target table to delete existing rows and then INSERTs into the target table using the data from the spill file. If you start Apply with the parameter LOADXIT set to Y, then Apply calls the ASNLOAD user exit. Apply passes control information to the exit, including the select with the transformations and row filters. A sample ASNLOAD exit is shipped with Apply. ASNLOAD can call native unload and load utilities to improve the performance of the full refresh. Another alternative is to do the full refresh yourself, but you must use the Replication Center Full Refresh -> Manual option to ensure that the capture and apply control tables are updated correctly. Apply then updates the LASTSUCCESS and SYNCHTIME columns for this set in ASN.IBMSNAP_SUBS_SET and changes the MEMBER_STATE for each member in ASN.IBMSNAP_SUBS_MEMBR to L to indicate that the target tables have been loaded.
When Capture reads the commit of a unit of work involving a registered table, it inserts the change information into the appropriate CD tables and inserts the commit information into the UOW table. If a rollback is received, Capture removes the associated changes from memory. Capture also updates the capture control tables to indicate progress and to save restart information. Capture commits this information based on the COMMIT_INTERVAL parameter specified when Capture is started. These tables are updated:
- capschema.IBMSNAP_RESTART holds the LSNs for a Capture restart point in the DB2 log.
- capschema.IBMSNAP_REGISTER rows are updated with the LSN of the most recently inserted UOW table row (CD_NEW_SYNCHPOINT) for each source table that had change activity.
- The capschema.IBMSNAP_REGISTER global record row is updated with the LSN and timestamp of the most recently inserted UOW table row.
Note: If the target table type is not user copy, then this select is a join of the CD table and the UOW table. You can force the join for a user copy by setting the JOIN_UOW_CD column in ASN.IBMSNAP_SUBS_MEMBR to Y. You do this if you want to copy one or more of the values (authid is an example) from the UOW table to your target table. You would also do this if you have a row filter which refers to a column in the UOW table.
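Since JOIN_UOW_CD is set with SQL directly against the apply control server, a minimal sketch follows. The apply qualifier, set name, and the TARGET_TABLE predicate column are examples/assumptions; adjust them to match your own subscription member:

```sql
-- Force the CD/UOW join for one user-copy subscription member.
UPDATE ASN.IBMSNAP_SUBS_MEMBR
   SET JOIN_UOW_CD = 'Y'
 WHERE APPLY_QUAL = 'MYQUAL'
   AND SET_NAME   = 'SET1'
   AND TARGET_TABLE = 'TGTTABLE';
```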
JOIN_UOW_CD cannot be set using the Replication Center; you must issue the SQL directly while connected to the apply control server.

Apply stores the result set in a spill file on the server where Apply was started. There is one spill file for each subscription member. Apply then reads the spill files and issues inserts, updates, and deletes against the target tables.

If the target table type is a replica or a user copy and you chose transactional processing, then Apply reads all of the spill files and applies each unit of work in the same order in which it occurred at the source server. Apply issues a commit after processing each x units of work, where x is the number you specified when defining the subscription set. If the target table type is not replica or user copy, or if it is user copy and you did not choose transactional processing when defining the subscription set, then Apply processes each spill file separately and issues one commit after all the spill files for the current processing cycle are processed.

Changes may need to be reworked when issuing inserts, updates, and deletes. This is sometimes called upsert. Rework modifies the operation of a change to ensure that it can be processed. Table 1-5 lists the rework rules:
Table 1-5 Apply rework rules
  Source change    Change reworked to
  insert           update
  update           insert
  delete           change is ignored
After processing the spill files, Apply:
- Executes any SQL statements from ASN.IBMSNAP_SUBS_STMTS which are marked to be run AFTER Apply processing.
- Updates the ASN.IBMSNAP_SUBS_SET SYNCHPOINT and SYNCHTIME columns for this set at the apply control server with the LSN and timestamp of the upper bound. The MEMBER_STATE in ASN.IBMSNAP_SUBS_MEMBR for all members of the set is set to S.
- Updates the capschema.IBMSNAP_PRUNE_SET SYNCHPOINT column for this set at the capture control server with the upper bound LSN.
- Inserts an audit row into ASN.IBMSNAP_APPLYTRAIL at the apply control server.
Apply sets the SYNCHPOINT in remoteschema.IBMSNAP_PRUNCNTL to hex zeroes to indicate that a full refresh is being started. It then:
- Executes the SQL statement from ASN.IBMSNAP_SUBS_STMTS that was inserted when the subscription was defined. The statement is an update to the remoteschema.IBMSNAP_REG_SYNCH table. This table has a trigger/stored procedure which selects the most current sequence value from remoteschema.IBMSNAP_SEQTABLE and updates the remoteschema.IBMSNAP_REGISTER SYNCHPOINT and SYNCHTIME columns. This is the beginning sequence number for Apply.
- Selects data from the source table based on your source-to-target mapping.
- Deletes all rows from the target table and inserts the new rows.
- Updates ASN.IBMSNAP_SUBS_SET.
Apply then:
- Reads the spill files and issues inserts, updates, and deletes against the target tables. Changes may need to be reworked when issuing inserts, updates, and deletes.
- Executes any SQL statements from ASN.IBMSNAP_SUBS_STMTS which are marked to be run AFTER Apply processing.
- Updates the ASN.IBMSNAP_SUBS_SET SYNCHPOINT and SYNCHTIME columns for this set at the apply control server with the LSN and timestamp of the upper bound. The MEMBER_STATE in ASN.IBMSNAP_SUBS_MEMBR for all members of the set is set to S.
- Updates the capschema.IBMSNAP_PRUNE_SET SYNCHPOINT column for this set at the capture control server with the upper bound LSN.
- Inserts an audit row into ASN.IBMSNAP_APPLYTRAIL at the apply control server.
Administration
You identify a monitor server by creating the monitor control tables in that server using the Replication Center. Each instance of the Alert Monitor program is started with a monitor qualifier that you define. Under the monitor qualifier, you define alert conditions for the capture and apply control servers you want to monitor. Alert conditions are defined for capture schemas, apply qualifiers, and subscription sets. For each alert condition, you specify a contact or contact group that should be notified if the condition is met. A contact is an e-mail address, which can be the address of a user or of a pager. A contact group is just that: a set of contacts. The monitor qualifier, alert conditions, and contact information are created using the Replication Center and stored in the monitor control tables.

The Alert Monitor can check for:
- Status of the Capture and Apply programs
- Error and warning messages
- Latency thresholds
- Memory usage
- Subscription set failures or full refreshes
- Transactions rejected due to update-anywhere conflicts
- Transactions reworked by Apply
Operations
You can have several Alert Monitor programs running on the same or different systems, each monitoring a different set of Capture and Apply servers. The Alert Monitor program must be able to connect to the monitor control server and to all monitored capture/apply control servers. The monitor_interval, specified when the Alert Monitor is started, is the number of seconds in a monitor cycle.

The Alert Monitor checks for alert conditions by selecting values from the capture and apply control tables and by issuing system commands to the DAS running on the capture or apply server. If the Alert Monitor detects an alert condition, an e-mail message describing the condition is sent to the contact or contact group defined in the monitor control tables, and the alert is inserted into the ASN.IBMSNAP_ALERTS table at the monitor control server.

The MAX_NOTIFICATIONS_PER_ALERT startup parameter can be used to prevent flooding the contacts with alerts for the same problem. The MAX_NOTIFICATIONS_MINUTES startup parameter controls the number of minutes between notifications for the same alert. The ASN.IBMSNAP_ALERTS table is pruned by the Alert Monitor at startup, based on the ALERT_PRUNE_LIMIT startup parameter.
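Recorded alerts can also be reviewed directly at the monitor control server. A hedged sketch; the MONITOR_QUAL predicate column and the qualifier value MONQUAL are assumptions for illustration:

```sql
-- Review alerts stored by the Alert Monitor at the monitor
-- control server.
SELECT *
  FROM ASN.IBMSNAP_ALERTS
 WHERE MONITOR_QUAL = 'MONQUAL';
```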
1.6.1 Administration
The improvements in administration are:
- The new Replication Center combines and extends the administrative functions found in the previous tools: DataJoiner Replication Administration (DJRA) and the replication functions in the Control Center. It is a graphical user interface for defining replication scenarios and for managing and monitoring replication processes. Appendix A has comparisons between the Replication Center and DJRA and between the Replication Center and the Control Center.
- Source and target table names can be up to 128 characters and column names up to 30 characters, subject to any limits imposed by the source, target, or control server database.
- Replication definitions can be added without re-initializing the Capture program.
- New columns can be added to replication sources while Capture is running. (Not supported on iSeries.)
- A migration utility is included to convert existing DB2 Replication V5, V6, and V7 environments to DB2 Replication V8.
- All stored passwords are encrypted.
- The Replication Center is supported on 64-bit Windows and UNIX systems with a 64-bit Java Virtual Machine.
1.6.2 Capture
Capture enhancements are:
- The new IBMSNAP_SIGNAL table is used for improved communications between Capture and Apply. Once Capture has been started successfully the first time, there is no longer a requirement to always start Capture before starting Apply, or for Capture to be running before Apply processes a new subscription.
- Changes are not inserted into the change data (CD) and UOW tables until they have been committed. A join of the CD and UOW tables is no longer required for user copies. Changes which are rolled back are no longer put in the CD table.
Greater control over the changes captured has been added:
- If you choose only a subset of source columns when defining a replication source, you can specify that no changes should be captured for that source unless the change affects your selected columns. This option was available in V7 as a Capture start-up option.
- When defining a replica, you can specify that the changes processed by Apply from the master site should not be recaptured at the replica site.

Multiple Capture programs can run on the same DB2 database, subsystem, or data sharing group:
- Each Capture has its own schema, control tables, and change data tables. A Capture program is uniquely identified by the combination of source server and capture schema.
- You can use multiple Captures to improve throughput. This also allows you to define multiple non-DB2 replication sources in a single federated database.

Other Capture enhancements:
- Capture prunes changes concurrently with the capture of changes on DB2 for Windows, UNIX, and z/OS, so pruning no longer affects replication latency. Pruning no longer requires a join of each change data table with the UOW table, and is cursor-based with interim commits.
- The new IBMSNAP_SIGNAL table, created with DATA CAPTURE CHANGES, provides a way to communicate with Capture through log records. Apply inserts records into this table to signal that capturing should start for a table. It is also used to signal update-anywhere replication and to provide precise end points for Apply events.
- Capture start-up parameters can be modified while Capture is running.
- New options have been added for warm start.
- One Windows NT or 2000 service can be defined for each Capture program. The services are defined through the Replication Center or with commands. You can stop and start the services from the Windows Services window.
- Capture is supported on 64-bit platforms: Windows, UNIX, z/OS.
- Capture is enabled for the MVS Automatic Restart Manager (ARM) on z/OS. If Capture is registered to ARM, then ARM will automatically restart Capture in the event of a Capture or system failure.
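Running multiple Capture programs against the same source database comes down to giving each instance its own capture schema. A hedged sketch using the asncap command; the database and schema names are examples:

```shell
# Two Capture instances on the same source server, distinguished
# by capture schema (each has its own control and CD tables).
asncap capture_server=SAMPLE capture_schema=ASN1 &
asncap capture_server=SAMPLE capture_schema=ASN2 &
```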
1.6.3 Apply
Apply enhancements include:
- Joins of the change data (CD) table and the UOW table are no longer required when applying changes to user copies. All the information needed for such copies is contained in the CD table. If information from the UOW table is needed for a subscription predicate or target table column, you can set a flag to tell Apply to do the join.
- Changes to target table primary key values can now be handled without converting all captured updates to delete/insert pairs. You must capture the before-image of the source table columns used for the target table primary key. Apply can use the before-image values to locate the target table row for an update. The previous alternative, where Capture converts all updates to delete/insert pairs, is still available and should be used if partitioning key values are subject to change.
- Faster full refreshes of target tables can be done using the load improvements in DB2 for Windows and UNIX V8 and DB2 for z/OS and OS/390 V7 or later. The ASNLOAD Apply exit sample program has been updated to take advantage of the utility options offered on each platform.
- Apply password files are now encrypted. The new asnpwd command is used to create and maintain the password file. No passwords are stored in readable text. (Not applicable on iSeries.)
- One Windows NT or 2000 service can be defined for each Apply program. The services are defined through the Replication Center or with commands. You can stop and start the services from the Windows Services window.
- Apply is supported on 64-bit platforms: Windows, UNIX, z/OS.
- Apply is enabled for the MVS Automatic Restart Manager (ARM) on z/OS. If Apply is registered to ARM, then ARM will automatically restart Apply in the event of an Apply or system failure.
- The ASNDLCOPYD daemon is no longer required (except on the iSeries) to replicate files stored in DB2 Data Links Manager Version 8.1. Data Links provides a replication daemon for retrieving and storing external files managed by Data Links. If the Data Links reference to an external file is defined with RECOVERY YES, then DB2 replication can ensure that the reference and the external file are consistent when replicated.
1.6.4 Monitor
Extensive monitoring functions have been added to DB2 Replication:
- Replication Center monitoring. You can check the status of replication processes and display historical information, including:
  - Capture messages
  - Capture throughput analysis
  - Capture latency
  - Apply messages
  - Apply throughput analysis
  - End-to-end latency
- New commands with parameters to show the status of the Capture and Apply processes:
  - Capture: asnccmd with the status parameter
  - Apply: asnacmd with the status parameter
  These commands are not applicable to iSeries; on the iSeries, users can use the WRKSBSJOB QZSNDPR command as always.
- Replication Alert Monitor. The Replication Alert Monitor is a separate program which can monitor one or many Capture/Apply processes. You use the Replication Center to define alert conditions, such as error messages, program termination, and exceeding limits on memory or latency. The Alert Monitor stores alerts in a control table that can be viewed from the Replication Center. You define users or groups of users that will receive e-mail notification when an alert occurs.
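The new status commands can be issued from a command prompt on the system where Capture or Apply runs. A hedged sketch; server names and the apply qualifier are examples:

```shell
# Show the state of each Capture thread for one capture schema.
asnccmd capture_server=SAMPLE capture_schema=ASN status

# Show the state of each Apply thread for one apply qualifier.
asnacmd apply_qual=MYQUAL control_server=TGTDB status
```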
1.6.5 Troubleshooting
Serviceability improvements include:
- New trace facilities based on the db2trc model have been added for Capture and Apply. The asntrc command starts and stops a trace while the Capture and Apply programs are running. The new trace is available on Windows, UNIX, and z/OS.
- New trace points have been added to DataPropagator for iSeries to provide more debugging information.
- The Replication Analyzer program has been updated to work with DB2 Replication V8 control tables.
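A hedged sketch of the asntrc facility for tracing a running Capture; the exact flag spelling may differ by platform and release, and the database and schema names are examples:

```shell
# Start, format, and stop a trace of a running Capture program.
asntrc on  -db SAMPLE -schema ASN
asntrc fmt -db SAMPLE -schema ASN > capture_trace.fmt
asntrc off -db SAMPLE -schema ASN
```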
[Figure: the test environment used for this redbook — AIX 4.3.3 running Informix Dynamic Server 9.3, Windows 2000 running DB2 V8, OS/400 V5R2 running DataPropagator V8, Red Hat Linux running DB2 V8, and a Windows 2000 workstation with the DB2 V8 Admin Client, all connected over TCP/IP.]
We used the SAMPLE database on DB2 for Windows and Unix, the SALES demo database on Informix, the SAMPLE tables on DB2 for z/OS and OS/390, and created equivalent sample tables on DB2 for iSeries.
Chapter 2.
Replication Center can set up and operate replication to:
- DB2 for OS/390 Version 6, DB2 for z/OS Version 7, and (future) DB2 for z/OS Version 8
- DB2 for iSeries Version 5 Release 2
- DB2 Universal Database Version 8 for Linux, UNIX, and Windows
- Informix IDS and XPS

DataJoiner Replication Administration (DJRA) and the DB2 Control Center that comes with DB2 Administration Client Version 5, 6, or 7 cannot be used to administer DB2 Replication Version 8. Also, the DB2 UDB Version 8 Replication Center cannot be used to administer prior versions of DB2 Replication.
User interface
Replication Center, though it is delivered with the DB2 Administration Client, has a separate user interface. Replication Center is opened on the desktop separately from the other tools, such as the DB2 Control Center, that come with the DB2 Administration Client. Replication Center can be opened from within these other tools, and these other tools can be opened from within Replication Center. Also, some functions are shared among the tools; for instance, the dialog windows for filtering the names of the tables, views, or nicknames that should be displayed. Figure 2-1 shows an example of Replication Center's user interface.
User-interaction methodology
Replication Center's dialogs for defining and operating replication interact between you and the various replication control servers to obtain the information needed to generate the SQL or commands that perform a task, such as creating the Capture control tables at a replication source. When generating SQL, Replication Center also gets input from its own customizable profiles for replication control tables, replication source objects, and replication target objects. Replication Center has default assumptions for these profiles, but you can modify them to change the assumptions used when Replication Center generates the SQL for a particular task. Before the generated SQL or command for a particular task is executed, it is displayed to you. You can edit the SQL or command in Replication Center's Run now or Save SQL dialog window. You can then run the SQL or command immediately, or save it to a file to be run later.
Functions under Definitions where SQL scripts are generated:
- Create Capture/Apply Control Tables
- Drop Capture/Apply Control Tables
- Register Tables/Views/Nicknames
- Create Subscription Set (Properties on Subscription Set)
- Delete Registrations/Subscription Sets/Subscription Set Members
- Add/Remove Statements
- Promote

Static reporting functions under Operations:
- Show Capture Messages
- Show Capture Throughput Analysis
- Show Capture Latency
- Show Apply Report
- Show Apply Throughput Analysis
- Show End-to-End Latency

Functions that retrieve a result set to populate the contents pane:
- Common functions like Refresh, Filter, Show Related
- Activate/Deactivate Subscription Sets
- Row Count and View Selected Contents on Dependent Targets
- Manage Values in CAPPARMS
DB2 Connectivity
To obtain information to fill in the graphical user interface, and for simple tasks such as updating the status of a subscription, Replication Center uses SELECT and UPDATE statements delivered directly from the Replication Center GUI to the server via JDBC and DB2 connectivity. For more complex replication definition tasks, Replication Center uses its own APIs. Each API is a separate ASNCLP command. These APIs are within the Replication Center software. The APIs in turn generate the SQL that executes the replication definition task, and Replication Center uses JDBC and DB2 connectivity to execute that SQL. Replication Center's use of its own APIs is seamless and not apparent to the user. That is, the user only sees the SQL that is generated by the Replication Center and the Replication Center APIs working together, and does not see the format and content of the Replication API request made by the Replication Center GUI.
Sorry, but we didn't cover the Replication Center APIs (ASNCLP) in this redbook. And while these APIs are not documented in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121, DB2 Replication product development may make information about ASNCLP available via the product Web site.

DB2 client-to-server connectivity to DB2 Universal Database servers on Linux, UNIX, and Windows depends on the DB2 Runtime Client that comes with the DB2 Administration Client being configured to communicate with DB2 on the server. Communications can be over TCP/IP, NetBIOS, Named Pipes, or APPC. DB2 client-to-server connectivity to DB2 for z/OS and OS/390 servers depends on either DB2 Connect Personal Edition on the Replication Center workstation, or on the DB2 Runtime Client in the DB2 Administration Client being configured to connect to DB2 for z/OS or OS/390 via a DB2 Connect Enterprise Edition or DB2 ESE server.
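Configuring the DB2 Runtime Client to reach a remote server is a matter of cataloging the node and database. A minimal sketch using DB2 CLP commands; the node, host, port, database, and user names are examples only:

```shell
# Catalog a remote DB2 server over TCP/IP, then test the connection.
db2 catalog tcpip node srvnode remote dbhost.example.com server 50000
db2 catalog database sample as srcdb at node srvnode
db2 connect to srcdb user repladmin
```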
Operations involving servers on all supported platforms other than iSeries:
- Start/Stop Capture/Apply
- Suspend/Resume/Reinitialize Capture
- Prune Capture Control Tables
- Change Operational Parameters
- Query Status on Capture or Apply program
[Figure: the client machine runs the DAS client; DAS connectivity carries requests to the DAS server on the target server (z/OS, Linux, UNIX, Windows).]
Functions that use the IBM Toolbox for Java:
- Register Tables/Views on the iSeries platform
- Operations involving servers on the iSeries platform:
  - Start/Stop Capture/Apply
  - Reinitialize Capture
  - Prune Capture Control Tables
  - Change Operational Parameters

Figure 2-4 Use of IBM Toolbox for Java APIs to iSeries servers
[Figure: the Replication Center GUI on the client machine uses the IBM Toolbox for Java (Java Toolbox/400) over TCP/IP to reach iSeries servers.]
A subset of the IBM Toolbox for Java APIs is included within Replication Center. The APIs that are included permit Replication Center to send to iSeries servers requests to run the CL commands needed for replication, such as ADDDPRREG to define a replication source and STRDPRAPY to start Apply.
Functions that only involve the client where the Replication Center GUI runs:
- Manage Passwords
- Manage Profiles (Control Tables, Source Objects, Target Objects)
- Add Capture/Apply Control Servers
- Remove Capture/Apply/Monitor Control Servers
- Save SQL
Functions under Definitions where SQL scripts are generated:
- Create Capture/Apply Control Tables
- Register Tables/Views/Nicknames
- Create Subscription Set (Properties on Subscription Set)

Static reporting functions under Operations:
- Show Apply Report
- Show Apply Throughput Analysis
- Show End-to-End Latency

Retrieval of the result set to populate the contents pane:
- Row Count and View Selected Contents on Dependent Targets
Figure 2-6 Non-DB2 sources and targets
[Figure: over DB2 connectivity, a DB2 federated server holds the system catalog and the Capture, Apply, and Monitor control servers with their control tables; the Informix server is reached through the federated server.]
For replication from Informix, Replication Center needs to:
- Create Capture control tables in Informix
- Create staging tables in Informix
- Create triggers and procedures in Informix
- Create nicknames and tables in the DB2 federated database that contains the federated server definition for the Informix data source

To create the Capture control tables, staging tables, triggers, and procedures in the Informix replication source, Replication Center uses the DB2 federated server Set Passthru capability to the Informix server. For replication to non-DB2 targets, Replication Center can create the target tables if they don't exist; it uses Set Passthru to Informix to create them. If the target tables already exist in Informix, Replication Center needs to be able to connect to a DB2 federated database that contains nicknames for the target tables.
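Set Passthru sends statements directly to the remote data source through the federated server. A minimal sketch; the server name IDS93 and the table definition are hypothetical:

```sql
-- Open a passthru session to the Informix server, create an object
-- there, then close the session.
SET PASSTHRU IDS93;
CREATE TABLE tgttable (id INTEGER, name VARCHAR(30));
SET PASSTHRU RESET;
```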
The operating systems supported by the DB2 Administration Client include:
- Microsoft Windows 2000
- Microsoft Windows NT Version 4 with Service Pack 6a or later
- Microsoft Windows 98
- Microsoft Windows ME
- Microsoft Windows XP (32-bit or 64-bit)
- Microsoft Windows .NET servers (32-bit or 64-bit)
- AIX 4.3.3.78 or later
- Sun Solaris 2.7 (32-bit or 64-bit) or later. Check DB2 Client Quick Beginnings for required patch levels.
- Hewlett-Packard HP-UX 11.0 (32-bit or 64-bit) or HP-UX 11i (32-bit or 64-bit). Check DB2 Client Quick Beginnings for release bundle details.
- Linux for Intel:
  - For 32-bit Intel systems: kernel level 2.4.9 or higher, glibc 2.2.4, RPM 3
  - For 64-bit Intel systems: Red Hat Linux 7.2, or SuSE Linux SLES-7
- Linux for zSeries (390): Red Hat Linux 7.2, or SuSE Linux SLES-7

Other software requirements:
- On Windows:
  - Java Runtime Environment 1.3.1. The JRE is included with DB2 Clients for Windows.
  - TCP/IP, Named Pipes, or NetBIOS, which are included with the Windows operating system. Note: DB2 Admin Server client-to-server communications only supports TCP/IP.
  - For LDAP support, the Microsoft LDAP client or IBM SecureWay LDAP Client V3.1.1. The Microsoft LDAP client is included with Windows ME, XP, 2000, and .NET.
- On AIX:
  - Java Runtime Environment 1.3.1 or later. Included with DB2 Clients for AIX.
  - TCP/IP, included with AIX.
  - For LDAP support, IBM SecureWay Directory Client V3.1.1 is required.
- On HP-UX:
  - Java Runtime Environment 1.3.1 or later. Included with DB2 Clients for HP-UX.
  - TCP/IP, included with HP-UX.
- On Solaris:
  - Java Runtime Environment 1.3.0 for Solaris 32-bit, or Java Runtime Environment 1.4.0 for Solaris 64-bit. The JRE is not included with DB2 Clients for Solaris.
  - TCP/IP, included with Solaris.
- On Linux systems:
  - Java Runtime Environment 1.3.1. The JRE is not provided with DB2 Clients for Linux.
  - TCP/IP, included with Linux.

Additional pre-installation tasks on Solaris, HP-UX, and Linux: system kernel parameters in /etc/system must be set for DB2 by the Linux/UNIX administrator (root) and the system rebooted before the DB2 Client is installed. See Modifying kernel parameters under the Solaris, HP-UX, and Linux installation instructions in Quick Beginnings for DB2 Servers.
If there are Informix replication sources or targets, the Replication Center workstation needs to be able to access a system running either DB2 ESE or DB2 Connect EE Version 8. That system in turn needs to be able to access the Informix server. DB2 Connect or DB2 ESE Version 8 could be installed on the Informix server itself. Replication control tables can be created, and replication source definitions and replication subscriptions can be defined, over the various networking protocols supported by the DB2 server. DB2 UDB servers on Windows support TCP/IP, NetBIOS, and Named Pipes for communications from clients. But for managing replication operations (such as starting/stopping Capture, Apply, and Monitor), which depends on DB2 Administration Server client-to-server communications, Replication Center needs TCP/IP connectivity to the DB2 servers. It is best to test networking connectivity from the Replication Center system to each of the DB2 or DB2 federated servers that Replication Center will need to access. For instance, if TCP/IP will be used, ping each of the servers by their hostnames or IP addresses.
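A quick connectivity check can be run from the Replication Center workstation before any replication definitions are attempted. The hostnames and database name below are examples:

```shell
# Verify TCP/IP reachability of each server, then try a DB2 connection.
ping db2srv.example.com
ping ifmxsrv.example.com
db2 connect to srcdb user repladmin
```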
The Informix server needs to be able to accept connections from the Informix Client SDK on a DB2 federated server. Typically this means that the Informix server's onconfig and sqlhosts information must indicate that the server's listener is running for the Informix protocol onsoctcp.

You need a userid at the Informix data source:
- If replicating to Informix and the target tables do not already exist, the userid needs to be able to read the Informix system catalog and create the target tables.
- If replicating to Informix and the target tables already exist, the userid needs to be able to read the Informix system catalog and insert/update/delete into the target tables.
- If replicating from Informix, the userid needs to be able to read the Informix system catalog, create tables, and create triggers and procedures on the source tables.

The Informix Client SDK needs to be installed on the DB2 federated server system and configured to connect to the Informix server. The DB2 ESE Version 8 system that is providing federated server access to Informix needs to be configured to receive connections from the DB2 Client on the Replication Center workstation. The Replication Center user needs a userid on the federated server system that can create the Capture, Apply, and Monitor control tables in the federated database and can also create nicknames for the source or target tables that are in the Informix server.
[Figure: federated access to Informix. The client machine's DAS client and DB2 connectivity reach the DASe server and a DB2 ESE V8 database running Apply and the Monitor. The DB2 ESE V8 database defines an INFORMIX wrapper, a server (IDS93, TYPE=INFORMIX, VERSION=9.3, WRAPPER=INFORMIX, OPTIONS NODE 'ifmx93', DBNAME 'ifxdb1'), and a user mapping (AUTHID=repladmin, SERVER=IDS93, OPTIONS REMOTE_AUTHID='infxusr1', REMOTE_PASSWORD='infmxpw'); the Informix sqlhosts file has an entry for dbserver ifmx93.]
Here is a summary of the steps required to set up federated access from DB2 ESE Version 8 to an Informix data source:
1. The Informix Client SDK must be installed and available on the DB2 ESE Version 8 system, and it must be configured to connect to the non-DB2 data source. Typically this means an entry for the Informix server in the sqlhosts file on UNIX, or in the SQLHOSTS information in the Windows Registry on Windows.
2. DB2 ESE Version 8 or DB2 Connect EE Version 8 needs to be installed and the latest fixpack applied. If DB2 ESE Version 8 is a source or target, a separate installation of DB2 ESE Version 8 or DB2 Connect Version 8 is not required for the federated access to Informix.
3. On Linux and UNIX systems, djxlink needs to be run to create a wrapper library that is linked with the data source client software. This step is not required on Windows systems.
4. An INFORMIX wrapper definition must be created in the DB2 ESE V8 database.
5. Server and user mapping definitions for the Informix server must be created in the DB2 ESE V8 database. It is recommended that Set Passthru and then Create Nickname be used to test the server/user mapping definitions.
6. The server option IUD_APP_SVPT_ENFORCE should be specified with a setting of 'N'. If this option is not specified, insert/update/delete operations, which are required for replication both to and from Informix, won't be enabled.

More detail on the technical requirements and steps to configure federated access from DB2 ESE or DB2 Connect Version 8 to Informix is covered in Appendix C, Configuring federated access to Informix on page 511.
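The wrapper, server, and user mapping definitions can be created with SQL in the federated database. A hedged sketch; the names (IDS93, ifmx93, ifxdb1, repladmin, infxusr1) are examples only:

```sql
-- Wrapper for the Informix data source client.
CREATE WRAPPER INFORMIX;

-- Server definition, including the IUD_APP_SVPT_ENFORCE 'N' option
-- needed for insert/update/delete through the federated server.
CREATE SERVER IDS93 TYPE INFORMIX VERSION '9.3' WRAPPER INFORMIX
  OPTIONS (NODE 'ifmx93', DBNAME 'ifxdb1', IUD_APP_SVPT_ENFORCE 'N');

-- Map the local DB2 authid to the Informix userid.
CREATE USER MAPPING FOR repladmin SERVER IDS93
  OPTIONS (REMOTE_AUTHID 'infxusr1', REMOTE_PASSWORD 'infmxpw');
```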
An alternative to installing DB2 Connect Personal Edition on the Replication Center workstation is to install only the DB2 Administration Client (with the DB2 Runtime Client included) on the Replication Center workstation and to connect to the DB2 for z/OS or DB2/400 source/target system via DB2 Connect Enterprise Edition or DB2 ESE on a Linux, UNIX, or Windows server. For Replication Center to connect to a DB2 ESE or DB2 Connect server that has federated access to Informix, the minimum DB2 product you require on the Replication Center workstation is the DB2 Administration Client.
Note: Replication Center cannot use AS/400 Client Access to connect to an iSeries system.
The DB2 Administration Client can be installed from the Client Installation CDs included with a DB2 Connect or DB2 Server product, or it can be downloaded from IBM. Once the first DB2 UDB Version 8 fixpack becomes available, a month or two after availability of DB2 UDB Version 8, it is recommended to download the DB2 Administration Client from IBM rather than to install from the DB2 Client installation CDs. At the download site, IBM provides complete DB2 Client installation packages which include all the updates of the latest available fixpack. Thus, once the first DB2 UDB Version 8 fixpack becomes available, the DB2 Client, with the latest fixpack included, can be installed in one step by downloading it from the IBM download site, avoiding the two-step process of first installing the client from the product CDs and then applying the fixpack. To get the latest DB2 UDB Version 8 Administration Client, you can go to the DB2 Universal Database Web site with a browser, select Downloads, and answer the questions appropriately to navigate to the download server, or go directly to the IBM download FTP server from a command prompt or a browser. The DB2 Universal Database Web site, as of the time of writing, is:
http://www.ibm.com/software/data/db2/udb/
The DB2 UDB fixpacks and most current client software can be found under:
ftp://ftp.software.ibm.com/ps/products/db2/fixes/language/platform-release/client/admin
For example, for the DB2 UDB Version 7 Administrative Client for Windows 2000, the full URL is:
ftp://ftp.software.ibm.com/ps/products/db2/fixes/english-us/db2ntv7/client/admin/
We use this example because, as of the writing of this redbook, DB2 UDB Version 8 wasn't available, so there were no DB2 UDB Version 8 Client downloads available, with or without fixpack included, at the IBM download site. When DB2 UDB Version 8 does become generally available, we might expect the URL to the DB2 UDB Version 8 Administration Client for Windows NT and 2000 to look something like this:
ftp://ftp.software.ibm.com/ps/products/db2/fixes/english-us/db2ntv8/client/admin/
The DB2 Client download file is usually a zip file for Windows platforms and a compressed tar file (.tar.Z) for UNIX and Linux platforms.
Select DB2 Universal Database. The recommended settings for the Solaris and HP-UX kernel parameters are listed in Additional Reference Topics in DB2 UDB Version 8 Quick Beginnings for DB2 Servers, GC09-4836. In that chapter there are tables:
- Recommended HP-UX kernel configuration parameters
- Recommended Solaris kernel configuration parameters
Instructions for setting the HP-UX and Solaris kernel parameters are found in Modifying kernel parameters (HP-UX) and Modifying kernel parameters (Solaris) in the chapter about installing DB2 Servers on UNIX in DB2 UDB Version 8 Quick Beginnings for DB2 Servers, GC09-4836. The recommended settings for the Linux kernel parameters are found in Preparing for Installation (Linux) - Modifying kernel parameters in the same chapter.
Select DB2 Universal Database. If you have the installation media for a DB2 Server or DB2 Connect product, it is fine to install that on your workstation; be sure to indicate that the Administration Client be included among the components to be installed. In fact, if installing DB2 ESE Version 8 is an option for you, we recommend you do so, because it will allow you to create a couple of DB2 databases on your workstation for becoming familiar with DB2 Replication without involving DB2 on a server, mainframe, or iSeries. Other options for having a replication sandbox on your own workstation are:
- DB2 Personal Edition Version 8, if you will be administering replication only between DB2 on Linux, UNIX, and Windows systems. DB2 Personal Edition also includes the DB2 Administration Client and DB2 Run-Time Client.
- DB2 Personal Edition Version 8 plus DB2 Connect Personal Edition Version 8, if you will be administering replication on z/OS, OS/390, and iSeries as well as on Linux, UNIX, and Windows systems.
Installing on UNIX
See Installing DB2 Clients on UNIX in the chapter about installing a DB2 client in DB2 UDB Version 8 Quick Beginnings for DB2 Clients, GC09-4832. Root needs to do the install. If you have the installation CD for DB2 Clients, or for a DB2 Server or DB2 Connect product, you can place it in a CD-ROM drive accessible from the system, find db2setup on the CD, and run it (./db2setup) to bring up the DB2 Setup Wizard. If you downloaded the DB2 Administration Client from IBM, root will need to first uncompress and untar the file, then find db2setup and run it (./db2setup) to bring up the DB2 Setup Wizard. Once in the DB2 Setup Wizard, choose Install Products and be sure to include the DB2 Administration Client or DB2 Administration Tools among the components installed. This will install the DB2 Control Center, DB2 Replication Center, and other DB2 administration tools on the workstation. The DB2 software will be installed in:
- /usr/opt/db2_08_01 on AIX
- /opt/IBM/db2/V8.1 on other UNIX and Linux systems
Installing on Windows
See Installing DB2 Clients on Windows in the chapter about installing a DB2 client in DB2 UDB Version 8 Quick Beginnings for DB2 Clients, GC09-4832. If you are installing a DB2 Server or DB2 Connect product, then see the appropriate chapters in DB2 UDB Version 8 Quick Beginnings for DB2 Servers, GC09-4836, or the applicable DB2 Connect Version 8 Quick Beginnings (GC09-4833 and GC09-4834). You will need an account on the workstation that has the appropriate authority. If you are only installing the DB2 Administration Client, the minimum is:
- On Windows 98 and ME: any valid Windows 98 user account
- On Windows 2000, Windows NT, Windows .NET, and Windows XP: a user account with more authority than the Guests group, such as an account in the Users or Administrators group. If the account is in the Users group on Windows 2000 or Windows .NET, the registry permissions have to be modified to allow Users write access to the Windows Registry branch HKEY_LOCAL_MACHINE\Software.
The above rules apply to installing the DB2 Administration Client. If you are installing a DB2 Server or DB2 Connect product, use an account in the Administrators group. In the project that worked with DB2 Replication Version 8 to create this redbook, we installed DB2 ESE Version 8, including the Administration Client, using a local user account that was in the local Administrators group.
- In Configuration Assistant, on the Node Options tab
- In DB2 CLP commands, on the catalog tcpip node command
See Appendix B for examples. After configuring a connection to a DB2 data source, you can test the connection either in DB2 Configuration Assistant or using the DB2 Command Line Processor; Appendix B also includes examples of how to test your connection. Though Replication Center, Capture, Apply, and Monitor appear to bind any packages they need at any DB2 server they access, there may be occasions when you need to explicitly bind packages needed by Replication Center, Capture, Apply, or Monitor. The appendix also covers how to bind packages.
In iSeries (AS/400):
- A Client Access session from your workstation using Operations Navigator
- A 5250 terminal emulator (IBM Personal Communications) using CL commands such as DSPFD and DSPFFD
- A DB2 Command Line Processor session from your workstation
In iSeries (AS/400):
- A Client Access session from your workstation using Operations Navigator
- A 5250 terminal emulation session using CL commands such as DSPPFM (Display Physical File Member) and RUNQRY (display contents with column names)
- A DB2 Command Line Processor session from your workstation
Replication Center also has an option for seeing the contents of source and target tables. Under Replication Definitions -> Apply Control Servers, select a particular server, and then Dependent Targets under that server. In the dependent targets displayed in the right window, highlight a particular target and either right-click or pick Selected from the menu bar, and then select View Selected Contents from among the options. The View Selected Contents window will let you:
- Limit the number of rows displayed.
- Select the target or the source table. To see the contents of the source table, use the check box in the middle of the screen.
- Specify a predicate (WHERE clause) to limit the rows displayed.
Start -> Programs -> DB2 -> General Administration Tools -> Replication Center
At a command prompt: db2rc. On Windows, this will work in a DOS prompt or in a DB2 Command Window. On UNIX and Linux, if you are not the DB2 instance owner, then before you enter db2rc, your PATH variable must have the DB2 directories added to it and the DB2INSTANCE variable must be set. An easy way to do this is to add the DB2 instance owner's db2profile to your .profile.
For instance, if the DB2 instance owner on your system is db2inst1 with home directory /home/db2inst1, then the full path to its db2profile is /home/db2inst1/sqllib/db2profile. You can edit your .profile and add the line:
. /home/db2inst1/sqllib/db2profile
For your PATH to be updated and DB2INSTANCE added to your environment, you will need to either log out and log in to the workstation again, or execute your .profile. From within DB2 Control Center, Command Center, or another DB2 administration tool: from the menu bar, select Tools -> Replication Center. See Figure 2-8.
Figure 2-8 Opening Replication Center from DB2 Admin Tool menu bar
From within DB2 Control Center, Command Center or other Administration Tool, select the Replication Center icon. See Figure 2-9.
The Replication Center should open on your desktop. The first time you open the Replication Center, two windows should open. The Replication Center Launchpad will be on top, and behind it will be the Replication Center itself. See Figure 2-10.
If you do not want the Launchpad to open every time you start the Replication Center, check the box Do not show the Launchpad again when Replication Center opens; this check box is in the lower left corner of the Launchpad. Your Replication Center profile will be updated so that the Launchpad won't open with the Replication Center. You will still be able to open the Launchpad if you need it by going to the Replication Center's menu bar and selecting Replication Center -> Launchpad. If you are not familiar with DB2 Replication, the Launchpad may be useful to you, since it guides you through the major steps of setting up and starting replication.
But if this is your first time opening Replication Center on your workstation, we recommend you close the Launchpad to get to the Replication Center itself so you can start updating your Replication Center profiles.
The Replication Center has a look and feel common to the DB2 UDB Administration Tools.
To see actions that can be performed on an object in the right window, highlight the object and right-click to see the options. Among the options on many objects is the Show Related option, which will display a list of various types of other objects that could be related. In Figure 2-12, we have expanded the hierarchical tree to show Capture Control servers. There are no objects in the right window because we have not created the Capture Control tables in any servers yet.
Replication Center
Under Replication Center are options for the whole Replication Center, including managing the system userids and passwords that Replication Center will use when accessing various servers.
Selected
Under Selected are options available for a specific object that has been highlighted in the left or right windows below.
Edit
The options under Edit can be used to find or select items in the Replication Center windows.
View
There are several options under View:
- Create or change the filter for objects to be displayed in the windows below.
- Sort the contents of the windows or customize the column headings.
- Refresh the view, causing Replication Center to check for recent additions to, and deletions from, the objects displayed.
- See the Legend panel that shows the meaning of the various Replication Center icons.
Tools
Open another one of the DB2 administration tools (Control Center, Command Center, and so on) or change the overall Tools Settings. If the administration tool you want is already open on your workstation, you can select its window or use Alt+Tab to get to it.
Help
Help includes several options:
- DB2 Information Center
- Online DB2 tutorials
- About, which displays details of the version, release, and maintenance level of the DB2 product code on the system
- Other helpful resources
Tool bar
The Replication Center's tool bar also provides a row of icons for opening other DB2 tools. See Figure 2-13.
The icons have infopops; hover your pointer over an icon for two seconds and an infopop will appear indicating what the icon is for. For most users of Replication Center, the most interesting will likely be:
- Control Center, the 1st icon from the left
- Command Center, the 3rd icon from the left
- Tools Settings, the 3rd icon from the right
- DB2 Information Center, the i icon
After putting a few replication definitions in place, the Replication Center might look as in Figure 2-14. In the tree in the left window, under the Capture Control Server SAMPLE, the Registered Sources icon has been selected. In the right window only one registered table, DB2DRS4.DEPARTMENT, is shown; the small number of objects shown in the right window could be either because there are in fact only a small number of tables registered at SAMPLE, or because a filter was used to restrict the list of objects displayed in the right window. In the example, the object has been highlighted and right-clicked to show a list of options. The same list could have been displayed by highlighting the object and clicking Selected in the menu bar at the top of the Replication Center.
Selecting Properties would open the Registered Table Properties window, which looks like the Register Tables window but is populated with the attributes of the registration. Selecting Show Related will show other types of items that are related; for instance, whether there are any subscriptions defined from this registered table.
On UNIX, look for it in the home directory, or a sub-directory, of the DB2 instance owner. Replication Center maintains a backup, db2repl.prf.bkp, in the same directory as the current db2repl.prf. If for some reason you lose both db2repl.prf and db2repl.prf.bkp, Replication Center will create a new one when you open and save from any of the Replication Center's profile windows. Though there is in fact only one Replication Center profile (db2repl.prf), there are several dialog windows within Replication Center that can enter and change data in the profile:
On the Database tab, add any DB2 servers you can access from the DB2 Client or DB2 Connect Personal Edition that will have replication sources, replication targets, or control tables for Capture, Apply, or Monitor. If you will be replicating to or from Informix servers, add the DB2 for Linux, UNIX, or Windows database that will contain the federated Server definition for the Informix server. For Userid and Password, put the userid and password by which Replication Center will access the DB2 server. The userid should have adequate authority and privileges to read the catalog at the DB2 server and to create Capture, Apply, or Monitor control tables, staging tables, and/or target tables.

On the System tab, add systems running the DB2 Administration Server (DAS) that Replication Center will access to start Capture, Apply, or the Replication Monitor. This could include Linux, UNIX, or Windows systems running DB2 UDB. It could also include z/OS systems running DB2 for z/OS with the DB2 Administration Server (DAS) running. The DB2 Administration Server for z/OS will be made available for DB2 for z/OS Version 6 and Version 7 via 390 Enablement packages JDB661D and JDB771D respectively; DB2 for z/OS PTFs will be required to work with DAS on z/OS and OS/390. If a server doesn't have DAS, you will still be able to start Capture, Apply, or Monitor by logging directly into the system (that is, not via Replication Center) and using the Capture, Apply, and Monitor commands to start and stop them. After you have added a Database or System, you can later return to Manage Replication Center Passwords and change the password. The userids and passwords entered via Manage Replication Center Passwords are stored in encrypted form in db2repl.prf.
Once you have selected the type of platform, you will notice the options in the dialogue window change as appropriate for the platform. For instance, for DB2 on UNIX and Windows you can specify containers for tablespaces; for DB2 on z/OS and OS/390 you can specify databases and storage groups for tablespaces. Note that the default behavior is to create one tablespace to hold all the Capture Control tables except IBMSNAP_UOW, a separate tablespace for IBMSNAP_UOW, and one tablespace to hold all the Apply Control tables. After filling in the window, click Apply in the lower-right corner to update the profile, and then Close to close the window. You can come back and update the Control Tables profile later, such as right before creating the control tables at a particular server.
Opened from the Replication Definitions icon with a right-click, or from Selected on the top menu bar. You will first be presented a window for selecting a particular Target Server. If target tables for a subscription member don't already exist, Replication Center will use information in this profile for the definition of the target tables. The profile contains fields for specifying the algorithm that Replication Center will use for target table names, indexes on target tables, and the name and characteristics of tablespaces for target tables. When you have filled in or changed the fields, click OK to store the information into the Replication Center profile.
In this window, the tablespace information and index-naming information (on the back tab) were filled in from the Control Tables profile. If the Control Tables profile has not yet been set up on this workstation, then the tablespace and index-naming information is based on the default settings that come with Replication Center. You can either change the values in the dialogue, or Cancel and go back and update the Control Tables profile to change the values that will be in the dialogue the next time you open it. Clicking OK will cause Replication Center to generate the SQL that will execute this definition or operations step, but the SQL will not be executed; it will be displayed in a Run now or Save SQL window that follows. Actually, in the foreground will be a message box with any messages associated with generating the SQL. The message box will look like Figure 2-18.
You can read the messages and press Close to get to the Run now or Save SQL window. The first Run now or Save SQL window you see may look like the one in Figure 2-33, which was for creating the Capture Control tables.
We want to point out some things about this window and how to use it:
- The SQL (or command) that was generated is in the SQL Script or Command field in the lower part of the window. To see more of the SQL at once, you can resize the window so the display field is larger.
- You can edit the text in the field to change what will be run or saved, or you can press Cancel, go back to the dialogue that generated the SQL, and change the values in the input fields so that different SQL will be generated. You could also go back and make changes to the Control Tables profile (or Source Object profile or Target Object profile) to change the input to the definition dialogue that generates the SQL, and go through the definition dialogue again to generate different SQL.
- Run now, in the upper left corner of the window, is the default selection. If you press OK (in the lower right corner), the generated SQL will be executed and, if it runs successfully, the Run now or Save SQL window will close. If you press Apply, the generated SQL will be executed and the Run now or Save SQL window will remain open; you could then press Cancel to close the window. If you were to press OK instead of Apply, you will get a message box indicating success or failure; if the SQL was successful, the Run now or Save SQL window itself should close.
- If you select Save to file in the upper left corner, then pressing Apply will save the SQL to a file (which file and where, we'll discuss below) and leave the window open. You should see a message box indicating whether the file was successfully created. You could then select Run now in the upper left corner and press OK or Apply; in this way you can save the SQL to a file for record or future review, but execute the definition step right here in this window.
Attention: If you select Save to file and click OK, the SQL or command will be saved in a file, but the Run Now or Save SQL window will close and you will have to find another way, outside Replication Center, to run the SQL or command.
If you want to save Replication Center-generated SQL or a command to a file and run it now:
1. Select Save to file.
2. Fill out the Save specifications section of the Run now or Save SQL window.
3. Click Apply on the Run now or Save SQL window. This will save the SQL or command to a file and leave the Run now or Save SQL window open.
4. Select Run now.
5. Fill out the Run specifications section of the Run now or Save SQL window.
6. Click OK on the Run now or Save SQL window. This will run the SQL or command, and close the Run now or Save SQL window.
When you try to save the SQL to a file, you may get a message box indicating that there was an I/O error. There may be different causes for this:
- A file with the same name already exists. Replication Center cannot overwrite or replace existing files.
- Replication Center did not have authority to write in the directory that was chosen.
- The requested directory couldn't be found.
We recommend using a file manager (Windows Explorer) or command prompt (DOS prompt) to go to the directory where you wanted to save the file and determine the possible cause of the I/O error message, by examining the files already in the directory and/or by trying to create a small file in the directory, such as with Notepad.

Please note the Save multiple scripts in one file check box immediately under Save to file. This is relevant when creating subscription sets with source-to-target members included, or when adding members to an existing subscription set. A member definition involves SQL steps at two or three servers:
- At the Apply Control Server, records are inserted into the control tables that Apply reads to find out about the source-to-target member definition.
- At the Source Server, records are inserted into the IBMSNAP_PRUNCNTL table that tells Capture about targets defined from a registered source table.
- At the Target Server, if different from the Apply or Capture Control Server, the target table is created if it doesn't already exist.
If you leave Save multiple scripts in one file unchecked, Replication Center will generate a different SQL file for each of the different servers involved. Replication Center will take the output file name you specify and add 01, 02, and so on to the file name for each of the files created. Each of these files will contain a CONNECT TO statement for the indicated server. If you check Save multiple scripts in one file, Replication Center will create only one file containing all the SQL. Within the file, there will be a CONNECT TO statement before each group of statements to be executed at a particular server. The Run specifications fields on the middle-left side of the window indicate the server Replication Center will CONNECT TO to execute the SQL and the userid Replication Center will use on the connection. Replication Center obtained this information from the Database tab of the Manage Passwords for Replication Center window.
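Assuming the member's target table does not yet exist, a saved one-file script might be sketched roughly as follows. This is only an illustration: the server names APPLYDB and TARGETDB, the ASN schema, and the abbreviated column lists are assumptions, not actual Replication Center output.

```sql
-- Hypothetical sketch of a script saved with "Save multiple scripts in one file" checked
-- At the Apply Control Server: describe the new member to Apply (columns abbreviated)
CONNECT TO APPLYDB USER XXX USING XXX;
INSERT INTO ASN.IBMSNAP_SUBS_MEMBR (...) VALUES (...);
COMMIT;
-- At the Source (Capture Control) Server: tell Capture about the new target
CONNECT TO SAMPLE USER XXX USING XXX;
INSERT INTO ASN.IBMSNAP_PRUNCNTL (...) VALUES (...);
COMMIT;
-- At the Target Server: create the target table, since it does not already exist
CONNECT TO TARGETDB USER XXX USING XXX;
CREATE TABLE TGT.DEPARTMENT (...);
COMMIT;
```

Note the COMMIT at each server before the CONNECT to the next one.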
The Save specifications box on the middle-right side of the window indicates the system on which Replication Center will create the file containing the saved SQL, and the userid and password that Replication Center will use. The userid and password information comes from the System tab of the Manage Passwords for Replication Center window. Typically we would expect you to save the file on your own workstation (or to a network drive accessed by your workstation), but there is an option to specify another system, and the DB2 Administration Server (DAS) at that system could attempt to save the file there.
To pick the name of the file and the directory where the SQL is to be saved, click the box with the three dots (...) next to the File Name field. This is the last field under Save specifications. You will be presented with the File Browser window as in Figure 2-20.
Figure 2-20 File Browser for specifying file name and path
In the Replication File Browser window:
- Select the appropriate disk drive in the lower right corner.
- The current directory will be displayed in the upper right corner. Sub-directories will be displayed in the Directories window below.
- If you want to save into a sub-directory of the current directory, double-click the sub-directory in the Directories field.
- If you want to save into a sub-directory that is not under the current directory, click the double dots (..) at the top of the Directories list; the Current directory value will change to a higher-level directory and the Directories field will present a list of sub-directories under the new current directory.
- When the Current directory field displays the path to the directory where you want to create the file: if there are already any files in that directory, their names will appear in the Files field on the left.
Note: Replication Center cannot over-write or replace existing files. If you want to replace a file, go to Windows Explorer or a command prompt and erase or rename the existing file.
Type the name of the file you want to create into the Path field. When saving SQL to a file, you may want to have a naming convention for the file names, for instance something to suggest:
- The server where the SQL would be executed
- The type of definition activity
- The name of the source table, subscription set, or target table
For an example of how we filled in the Replication Center File Browser window to name the file SAMP_CapCtl_MIXCAP.ddl and save it into directory d:\DB2Repl\ReplCtr, see Figure 2-21.
DB2 Command Center: a GUI tool with a facility for entering new queries or other SQL statements, or for importing scripts.
The CONNECT TO statement indicates that the statements that follow are to be executed at DB2 server SAMPLE. The DB2 CONNECT statement is described in the DB2 UDB Version 8 SQL Reference, SC09-4845. The USER value indicates the userid to be sent with the CONNECT statement when it is executed, and the USING value the password that is to be used; Replication Center has substituted XXX for both the USER and USING values in this statement, so it cannot be used as is. Your options with this CONNECT statement are:
- Add two dashes (--) before CONNECT, so that the line becomes a comment and is not executed, and connect to SAMPLE by some other means before executing the contents of the file. Or,
- Put a valid userid and password in place of the two XXXs so that the CONNECT statement will execute successfully when the contents of the file are executed. The userid you use here should be one that has authority to execute the statements at the server indicated in the CONNECT statement.
The CREATE TABLE statement spans many lines; we have not shown them all. You'll notice the semi-colon after IN USERSPACE1; this semi-colon marks the end of the CREATE TABLE statement. CREATE UNIQUE INDEX is the beginning of the next SQL statement in the file; we have not shown all of that statement either. An SQL file created by Replication Center will likely have multiple CREATE TABLE, INSERT, UPDATE, and/or DELETE statements in it. Following all the statements to be executed at the same DB2 server will be a simple COMMIT statement. For SQL files that create control tables or that register tables, all the statements will likely be executed at one server and there will be one COMMIT near the end of the file. A file created by the Replication Center's Subscription Set dialog window, if it adds members to a subscription set, will also include connections to multiple different DB2 servers if you checked the option Save multiple scripts in one file on the Replication Center's Run now or Save SQL window.
This is because creating a new member in a subscription set involves inserting records into the Apply Control tables, inserting records into the Capture Control tables, and, if the target table doesn't exist yet, creating the target table at the target server. The SQL file will have a COMMIT at each DB2 server before the CONNECT to the next DB2 server. When creating control tables at Informix servers, registering tables in Informix, and creating new target tables in Informix, Replication Center will use DB2's federated server SET PASSTHRU facility to create tables, triggers, and procedures at the Informix server. In that case, the SQL generated by Replication Center will contain:
- A CONNECT TO statement for the DB2 Linux, UNIX, or Windows database that has the Server definition for the Informix server
- SET PASSTHRU for the federated database Server name for the Informix server
- SQL statements to be performed at the Informix server; for instance CREATE TABLE, CREATE UNIQUE INDEX, CREATE TRIGGER, and CREATE PROCEDURE statements
- At the end of the statements to be executed at the Informix server, a COMMIT and a SET PASSTHRU RESET
- If any tables were created at the Informix server in SET PASSTHRU mode, then after SET PASSTHRU RESET there will probably be CREATE NICKNAME statements to put nicknames in place for the tables that were created. Following the CREATE NICKNAME statements may be one or more ALTER NICKNAME statements to change the local data type of some of the nickname's columns; this is done if the DB2 federated server's default type mappings caused the local type of a nickname column to be inappropriate for the data values that will be replicated through that column of the nickname.
- A COMMIT at the federated server database that contains the new nicknames
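Putting those pieces together, a generated script for an Informix target might be sketched like this, reusing the FED_DB database and IDS_A23BK51 server names from the example later in this chapter. The table, index, and column details are hypothetical, not actual Replication Center output.

```sql
-- Hypothetical sketch of federated passthru SQL for an Informix target
CONNECT TO FED_DB USER XXX USING XXX;
SET PASSTHRU IDS_A23BK51;
-- These statements run at the Informix server:
CREATE TABLE tgtdept (...);
CREATE UNIQUE INDEX ixtgtdept ON tgtdept (...);
COMMIT;
SET PASSTHRU RESET;
-- Back at the federated database: a nickname for the new Informix table
CREATE NICKNAME TGT.DEPARTMENT FOR IDS_A23BK51."informix".tgtdept;
-- Adjust a local type if the default mapping is inappropriate (illustrative)
ALTER NICKNAME TGT.DEPARTMENT ALTER COLUMN MGRNO LOCAL TYPE CHAR(6);
COMMIT;
```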
If the CONNECT TO statements in the file were not preceded by dashes (--), then they will be executed and will establish the connection to SAMPLE, where the rest of the statements in the file will be executed; in this case, the CONNECT TO statements need valid userids and passwords after the USER and USING keywords respectively. For SQL generated by the Replication Center Create Subscription with Members or Add Members dialogue, if you specified that multiple scripts be saved in one file, find all the CONNECT statements in the file, add userids and passwords that are valid for each of the DB2 servers referenced, and execute all the statements in the file, at all the DB2 servers, in one use of db2 -vtf filename.
If the CONNECT TO statements in the file were preceded by dashes, we would first need to connect to SAMPLE, and then execute the file with db2 -vtf filename. For instance, cd to the directory (d:\DB2Repl\ReplCtr\) containing the file and enter:
db2 connect to sample user db2drs4 using db2drpwd
The server responds with the connection information seen in Example 2-2.
Example 2-2 Connection Information
Database Connection Information

Database server       = DB2/NT 8.1.0
SQL authorization ID  = DB2DRS4
Local database alias  = SAMPLE
And the un-commented statements in the file will be executed at SAMPLE. The DB2 Command Window can be used to connect to and execute SQL statements at any DB2 server:
- DB2 Universal Database on Linux, UNIX, or Windows
- DB2 Universal Database for z/OS and OS/390
- DB2 Universal Database for iSeries
to execute. Select this icon and you will be presented with a dialogue window for entering the userid and password to be used on the connection. Once the connection is established, select the Command Center's Script tab. From the menu bar at the top of the window, select Script -> Import; you will be presented with the Import file window. In the System Name field near the top-left, select the system (probably your own workstation) where the file is. Select the appropriate disk drive in the Drives field at the lower right, and the appropriate directory from the Directories window on the right; you can navigate to higher directories by selecting .. from the directory list. Files in a selected directory will appear in the Files field on the left. Select the file you want to import so that its name appears in the Path field on the left, and press OK. You will be returned to the Command Center's Script tab with the contents of the file in the Scripts field. If you want, you can edit the script before running it; for instance, find CONNECT statements and precede each of them with two dashes (--) so that they won't execute. You can then execute the script by selecting the gears icon in the upper left corner of the window, or Script -> Execute from the top menu bar, or by pressing Ctrl+Enter. DB2 Command Center can be used to execute Replication Center Save SQL files meant for execution with DB2 UDB on Linux, UNIX, and Windows and with DB2 UDB for iSeries. The Command Center should also work with DB2 for z/OS or OS/390.
After you fill in the Create Capture or Apply Control Server dialogues, click OK; then, on the Run now or Save SQL window, select Run now (upper left corner) and click either Apply or OK (lower right corner). The Capture or Apply Control tables will be created, and an icon with the server name will then appear under the Capture Control Servers or Apply Control Servers icon in the tree on the left side of the Replication Center main window.
The Add Capture or Apply Control Server window should open. See Figure 2-23.
In our example we want to add SAMPLE as a Capture Control Server, since we know that it has the Capture Control Tables. We check the box for SAMPLE under the Capture Control Server column. If our Replication Center Passwords profile has an entry for this server, the Userid from that entry is filled in when we click the Capture Control Server check box. Not shown in the figure are the OK, Cancel, and Help buttons in the lower right corner of this window. As with the other Replication Center dialogs, the OK button will not be available until the required fields, outlined in red, are filled in. When the required information has been entered, select OK to close the window and add the new server name to the main Replication Center window.
Note: The Add server function won't work unless the appropriate Capture or Apply Control tables exist at the server.
Capture or Apply Control Server by using the Add function discussed in Adding Capture and Apply Control Servers above.
In the example, the text for the specific Capture Control Server's icon includes:
The Server name for the Informix server in the DB2 ESE/Connect database. (In this example, it is IDS_A23BK51.)
The name of the DB2 ESE or DB2 Connect database that contains the Server definition for the Informix server. (In this example, it is FED_DB.)
We will see similar indicators elsewhere in the Replication Center if a Target server object includes federated access to target tables at an Informix server.
right-click to see options, and select Create Capture Control Tables -> Custom or Create Apply Control Tables -> Custom. All the functions available through the Replication Center Launchpad, and more, are available through the main Replication Center window.
Wait a minute and then check the target table to verify that Apply has replicated the update. You could also look at Capture's and Apply's various operational reporting facilities through the Replication Center. You can also create the Replication Monitor control tables and start the Replication Monitor to see what information it provides. The DB2 UDB Sample database can be used as your source database. It can be created by opening DB2 UDB's First Steps window with Start->Programs->DB2->Setup Tools->First Steps. When you create the Sample database, several tables are created in SAMPLE, and several records are loaded into each of these tables. Tables that are useful for replication familiarization are the DEPARTMENT, EMPLOYEE, and PROJECT tables. Keep in mind that the tables created with the Sample database do not have any primary keys or unique indexes. DB2 Replication requires that target tables have either primary keys or unique indexes; you will see in the SQL generated by Replication Center's Create Subscription dialogue windows that target tables are created with unique indexes. Apply uses the unique index/primary key values in updates and deletes to determine which record to update or delete in the target table. If you don't alter the tables of the Sample database to add primary keys, then when you set up a subscription from one of these Sample tables, you will need to explicitly declare which column should be the primary key in the target table. This is not a hard decision to make, since the appropriate column names are obvious: DEPTNO, EMPNO, PROJNO. Or, if you want, you can add primary keys to the source tables before you define replication; this is easy to do in the DB2 Control Center. Highlight the table and right-click to see available options; select Alter Table, then the Keys tab, and then the Add Primary Key button to be presented with a list of columns from which one or more can be made part of a primary key.
If you do this, when you create a subscription for this table, you will see that the primary key for the target is already designated in the Create Subscription dialogue.
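The same alteration can be done with plain SQL instead of the Control Center. A minimal sketch for the three Sample tables mentioned above, assuming the key columns are already defined NOT NULL (as they are in the Sample database):

```sql
CONNECT TO SAMPLE;
-- Add a primary key to each table before defining registrations
ALTER TABLE DEPARTMENT ADD PRIMARY KEY (DEPTNO);
ALTER TABLE EMPLOYEE   ADD PRIMARY KEY (EMPNO);
ALTER TABLE PROJECT    ADD PRIMARY KEY (PROJNO);
CONNECT RESET;
```

With the primary keys in place, the Create Subscription dialogue will pick them up automatically for the target tables.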
local database directory made by using Catalog Database commands or by using the Configuration Assistant. You will need to close and re-open Replication Center to make it aware of any new DB2 servers that were configured while Replication Center was open.
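For example, a remote server could be cataloged from a DB2 command window before opening Replication Center; the node name, host name, and port below are placeholders for your own values:

```
db2 CATALOG TCPIP NODE srcnode REMOTE srchost.example.com SERVER 50000
db2 CATALOG DATABASE sample AT NODE srcnode
db2 TERMINATE
```

The TERMINATE resets the CLP back-end so that the new directory entries are picked up.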
starting Replication Center from a command prompt (that is, a DOS prompt) using the db2rc command and specifying -tf filename. Also, some Replication Center activity can be captured in the DB2 trace (db2trc) on the Replication Center workstation. To trace Replication Center's activities using db2trc: Add DB2 Tools tracing to DB2 trace. To do this, on the Replication Center main window (or the main window of Control Center, Command Center, etc.), from the menu bar at the top, select Tools -> Tools Settings. In the dialogue notebook that opens, on the General tab, check Add Tools Tracing to DB2 Trace. Open Replication Center and perform your activities just up to the point before the error; you only want the trace to contain records for the specific activity that had the error. Open a DB2 Command Window and cd to a directory where you want the trace output files. If you are not yet familiar with db2trc, look at its online help.
db2trc -h
Shows the general format of db2trc command, and input parameters. It lists the major commands of db2trc: on, off, clear, dump, format, etc.
db2trc command -u
Shows more details about the usage of a specific db2trc command. Next, turn on db2trc. For example, to turn on db2trc with a memory buffer of 8 megabytes:
db2trc on -l 8388608
-l tells db2trc that, if it runs out of memory in the trace buffer, it should overwrite the first records captured. -i instead would tell db2trc to stop capturing records when the buffer is full and keep the first records captured. In Replication Center, perform the activity you want to trace. Then dump the contents of the db2trc buffer to a file. For example:
db2trc dump > RC_trc.dmp
We'll point out here that one single activity in Replication Center can generate over a hundred thousand db2trc records, with a trace dump file size of over 8,000,000 bytes. If, when you format the trace, you get the indicator:

Trace wrapped: YES
This means that db2trc, while tracing your activity, ran out of trace-buffer memory and wrote over the first trace records it captured. This could occur if db2trc was started with parameter -l.
Trace truncated: YES
This would indicate that db2trc ran out of memory in the trace buffer and stopped capturing trace records. This could occur if db2trc was started with parameter -i. If you want to look at the trace yourself to see if you can find the cause of a problem, you will probably find that the trace has many records. You could search the trace for a word associated with the error to find records with useful information about the cause of the problem.
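Putting the tracing steps together, a session might look like the sketch below. The file names are arbitrary, and we show the generic format subcommand; check db2trc -h for the exact subcommands your level supports:

```
db2trc on -l 8388608
   (in Replication Center, perform the activity you want to trace)
db2trc dump RC_trc.dmp
db2trc off
db2trc format RC_trc.dmp RC_trc.fmt
```

Turning the trace off before formatting avoids capturing the formatting activity itself.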
Chapter 3.
Tip: Replication Center includes excellent help. Clicking the Help button on most windows will display detailed information about the window. The help often lists additional references.
Assumptions are made in the following sections:
Database Administrator (DBA) knowledge of table space creation and management. Basic configurations will be provided.
All databases involved have been cataloged. See 2.6, Configuring DB2 Connectivity for Replication Center on page 66 for additional assistance.
You have appropriate authority to connect to the computers, access the databases, and create table spaces and tables.
Capture control tables must be created in the source database. Version 8 adds the ability to have multiple captures running for a single DB2 for Linux, UNIX, Windows database, or DB2 for z/OS sub-system or sharing group, as well as for a single iSeries system; see 3.4.2, Capture control tables - advanced considerations on page 123.
As documented in detail in 2.2, Technical requirements for DB2 Replication Center on page 54, the computer where RC is run can otherwise be uninvolved in replication.
Attention: Control tables setup on iSeries cannot be done from the RC; see 3.2.2, Platform specific issues, capture control tables on page 114.
Within RC, to open the Create Capture Control Tables window:
1. Double-click on Replication Definitions so that the folders for Capture Control Servers and Apply Control Servers are shown.
2. Select Capture Control Servers.
3. From the menu bar at the top of the window choose Selected -> Create Capture Control Tables -> Custom....
Tip: There are numerous ways to make the selects in RC described in this chapter. Often when an object is highlighted, right-clicking will list the option you want.
4. Select the database where you want to capture data, and then select OK.
Note: If you have not correctly set up RC to manage your passwords, you may be prompted for an id and password. See 2.11, Managing your DB2 Replication Center profile on page 76 for details on RC managing passwords.
Capture Control Tables except the IBMSNAP_UOW, and to create another tablespace to contain the IBMSNAP_UOW table. If you want Replication Center to create any new tablespaces, you can select one of the tables (e.g. IBMSNAP_REGISTER), check the Create tablespace check-box, and key in the tablespace name and characteristics in the fields provided under Specification for tablespace. Once the tablespace name and characteristics have been specified in the Tablespace properties panel for one of the tables, you can select the same tablespace for another table by checking the Use a tablespace already defined in this session check-box and selecting the tablespace name from the pulldown menu. If you want to change the specifications for that tablespace, you can do so in the Tablespace properties panel for any of the tables going into that tablespace. If you need yet another tablespace created for one or more of the remaining control tables, on the Tablespace properties panel for the first of the tables you want to go into the new tablespace, select the Create tablespace check box and key in the specifications for the tablespace. For the next table in the list, in the Use a tablespace already defined in this session pulldown on the Tablespace properties panel, you will notice the new tablespace name is available there for selection. The capture control tables are briefly described in Table 3-1. The underlined tables are new in V8, and tables with stars (*) have changed since V7.
Table 3-1 List of the Capture Control Tables
Table name: Description
IBMSNAP_CAPENQ: Ensures only one Capture is running for a particular Capture Schema.
IBMSNAP_CAPMON: The data collected when monitoring Capture.
IBMSNAP_REGISTER: Registration information.
IBMSNAP_RESTART: Capture warm start configuration and data. Replaces V7's IBMSNAP_WARMSTART.
IBMSNAP_SIGNAL: Signals used to control the Capture program.
IBMSNAP_UOW: Unit of work details.
IBMSNAP_REG_EXT: iSeries registration extension.
These tables are described in more detail in 3.4.5, Control tables described on page 129.
Note: When you create the Capture Control Tables, no changed data (CD) tables will be created. There is a one-for-one relationship between CD tables and registered source tables. A CD table will be created when each source table is registered. See Chapter 4, Replication sources on page 133.
If you want to create the apply control tables in the same database, the option Use this server as both a Capture and Apply Control Server near the top of the window should be selected. When selected, the apply control tables will also be listed in the Control Tables section of the window. The first apply table, IBMSNAP_SUBS_MEMBR, is used to define the table space for the apply control tables group. The creation of the apply control tables is described in 3.3, Setting up apply control tables on page 118. We discuss in 10.1, End-to-end system design for replication on page 418 why you may want to create the apply tables on the same server. At this point, we will only create the capture tables. For this example we leave the parameters at their defaults. DB2 Multi-platform Version 8 has complete functionality to modify table spaces. In previous versions, depending on the needs for simple management, we may have recommended putting the more dynamic tables in system managed spaces (SMS). For additional information on maintenance of DB2 Linux, UNIX, and Windows table spaces, use DB2 Control Center: select a table space, then from the menu bar at the top of the window choose Selected -> Manage Storage....
Note: For detailed information on DB2 on Linux, UNIX, and Windows table spaces, refer to:
Table space design in DB2 UDB Administration Guide: Planning, SC09-4822
LIST TABLESPACES command in the DB2 UDB Command Reference, SC09-4828
ALTER TABLESPACE command in the DB2 UDB SQL Reference Volume 2, SC09-4845

The most active control table, other than the CD tables which are described in detail in Chapter 4, Replication sources on page 133, is the unit of work table (IBMSNAP_UOW). It grows with the number of committed transactions on replicated tables and shrinks when Capture prunes. A row is inserted in the UOW table for each committed transaction that includes an insert, delete, or update operation on a registered source table. You should initially over-estimate the space required by the table and monitor the space actually used to determine if any space can be recovered. The size of a row in the IBMSNAP_UOW is about 0.11 KB. It can only be approximated because there are some variable-length fields; we use the upper bound, since a high estimate is safer than a low one. After creating the capture control tables, you could follow the steps in Example 3-1 to verify this length.
Example 3-1 Size of row in UOW
CONNECT TO databaseName
DESCRIBE TABLE ASN.IBMSNAP_UOW

This will identify each column and its size.
size = (10+10+1+18) character + 10 timestamp + (30+30) varchar
Each of these types is 1 byte, therefore:
size >= 51 bytes
size <= 109 bytes
You should delay consideration of sizing for the UOW table until after you have set up your registrations; see Chapter 4, Replication sources on page 133. IBMSNAP_CAPMON and IBMSNAP_CAPTRACE can also fluctuate in size, but their growth can be limited by how the monitoring and tracing is used. You may want to re-examine the locations of these tables after reading Chapter 7, Monitoring and troubleshooting on page 309. The UOW table, these two tables, and IBMSNAP_SIGNAL are pruned; see Capture prunes applied changes on page 34.
The Replication Center is very good at assisting in tablespace creation; still, you may want to refer to DB2 UDB SQL Reference Volume 2 for the complete CREATE TABLESPACE syntax. The Replication Center can only create database managed spaces (DMS). If, for ease of maintenance, you decide to put some or all of the capture control tables in SMS, you will have to do it outside of Replication Center. SMS generally provides poorer performance. If you do want to use SMS, you would do something similar to Example 3-2 using the DB2 Command Center.
Example 3-2 Create SMS
CONNECT TO databaseName
CREATE TABLESPACE TSASNUOW MANAGED BY SYSTEM USING ('TSASNUOW')
If you create table spaces outside of Replication Center, from the DB2 Control Center or using another interface, select Use an existing table space for each of the tables for which you have prepared such table spaces. See Figure 3-2.
If you are using the defaults, or have completed your customizations, select OK.
Note: If the table space, buffer pools, or other related database objects do not exist, then the replication scripts will not be created successfully. The message will identify what is in error, and closing the message will return you to the Create Capture Control Table window with field-values as they were before you attempted to generate the SQL and received the error message.
2. Read the information in the message box, and then Close the message box.
3. On top is now the Run Now or Save SQL window; see Figure 2-19 on page 83.
4. We prefer to save a copy of the scripts before running. Saving the scripts is useful if you have a problem with RC. The scripts are SQL, and are a great reference. They can also be modified and used to set up capture on another system of the same platform family. Select Save to file in the Processing option section.
5. In the Save specifications section you can choose any of the systems cataloged. We save it to the machine that we are setting up capture on.
6. The User ID and Password fields are self-explanatory.
7. Give a complete, existing path ending with a new filename to be created, for example: /home/prodDB2/repDDL/capCtrlCrt.
Note: The CONNECT TO database statement in the script does not include your user identification and password.
8. Be careful to choose Apply, not OK. If you choose OK, the saved-SQL file will be created, but the Create Control Tables window will close and you will be back at the main RC window. You could then use the Command Center to import the saved SQL and, after including your connectivity and authentication information, run the script.
9. Select Run Now in the Processing option section.
10. Confirm that the Run Specifications section is filled in.
11. Select OK. A message box will inform you if successful. If successful, clicking Close in the message box will result in the RC window being on top.
Note: If you Save the SQL for creating the Capture Control tables and run this SQL outside Replication Center, Replication Center will not automatically add the new Capture Control Server to the Replication Center left-window tree. If the Capture Control Tables have been created at a server, you can add the new Capture Control Server to the Replication Center left-window by using the Add server function. This is covered in 2.15, Adding Capture and Apply Control Servers on page 93.
Within RC, to remove previously created capture control tables:
1. Double-click on Replication Definitions so that the folders for Capture Control Servers and Apply Control Servers are shown.
2. Select Capture Control Servers.
3. Select the database where you want to remove the control tables.
4. From the menu bar at the top of the window choose Selected -> Drop Capture Control Tables.
5. Confirm that the correct capture schema is shown in the drop-down Capture schema field.
6. If you want RC to try to drop the table spaces as well, select Drop table spaces used only by these control tables.
7. Selecting OK will generate the SQL scripts.
8. A message box will inform you if successful. If successful, clicking Close in the message box will result in the Run Now or Save SQL window.
You can also use this method to drop them at a later time. You will be warned if the tables contain data from replication transactions. Be sure to stop all replication operations that involve these control tables before taking this action. Also, remember that you cannot easily recover a dropped table space.
Description
Stores the IBMSNAP_UOW table.
Stores tables with ROW lock size.
Stores tables with PAGE lock size.
The following can be specified for the tablespace:
Number of pages per segment: Segment size can be optimized based on the sizes of the tables in the tablespace. Refer to Sizing for capture control tables on page 126 to estimate the table size.
Locksize: It is recommended to accept the default locksize displayed for that tablespace.
Encoding schema: If the encoding schema is not provided, it defaults to the encoding schema of the database. If you are using V7 unicode, refer to Appendix B, UNICODE and ASCII encoding schemas on z/OS, in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121, for the usage and the restrictions.
Buffer pool: You may prefer to specify a different buffer pool for performance reasons. The performance implications of specifying buffer pools are discussed in Chapter 10, Performance on page 417.
Storage group: This parameter determines the physical volumes of the tablespace. If it is not specified, it defaults to the storage group of the database.
Minimum primary and secondary space allocation: You can alter the defaults based on your estimations. Refer to Sizing for capture control tables on page 126 to estimate the tablespace sizes.
See 10.1, End-to-end system design for replication on page 418 and particularly, 10.1.5, Replicating from non-DB2 on page 424. You must use the RC to create the control tables. If you are accessing more than one Informix source database from a single DB2 V8 federated database, then the nicknames for the Capture Control Tables in each Informix source database must have their own unique capture schema. The Capture Schema for an Informix server will be the schema of the nicknames for the Capture Control Tables at that Informix server. See Creating multiple sets of capture control tables on page 124 for details.
Note: Change tables will also be created at the Informix server; this is done when a nickname for an Informix table is registered as a replication source, which is covered in Chapter 4, Replication sources on page 133. The schema of the change tables, and the schema of the nicknames for the change tables, can be the same as or different from the schema for the Capture Control Tables and for their nicknames.
One of the Capture Control Tables, the ASN.IBMSNAP_CAPSCHEMAS table, will be created in the federated database itself. CAPSCHEMAS contains a record for each Capture Schema. Thus, for each Informix server containing Capture Control Tables, there will be a record in CAPSCHEMAS. This record indicates the schema of the nicknames for the Capture Control Tables at that Informix server. To create the Capture Control Tables at an Informix server, and nicknames in the federated server for these control tables, on the Create Capture Control Tables dialog window, select Use this DB2 federated server to capture changes on a non-DB2 server. In the non-DB2 server field, select the name of the federated Server definition for the Informix server. This will create control tables on the source, and nicknames on the DB2 federated database. See Table 3-3 for the control tables that are created.
Table 3-3 Listing of capture control tables for non-DB2 relational sources
Table name: Description
IBMSNAP_PRUNCNTL: Contains prune control information.
IBMSNAP_PRUNE_SET: Contains prune set information.
IBMSNAP_REG_SYNCH: Contains registration synchronization information.
IBMSNAP_REGISTER: Contains registration information.
IBMSNAP_SEQTABLE: Sequencing table. Only in Informix; no nickname is created for this table in the federated database.
IBMSNAP_SIGNAL: Contains signals used to control the Capture program.
These tables are described in more detail in 3.4.5, Control tables described on page 129. It is recommended that, for an Informix server, the remote schema for the Capture Control tables be in lower case. If you update the Manage Control Tables Profile before opening the Create Control Tables window, select Platform Informix, then enter the remote schema in lower case and surrounded by double quotes. For example:
Capture Schema = VIPER_IDS
Remote schema name = "db2repl"
Replication Center will use the DB2 Federated Server's Set Passthru capability to create the control tables in Informix, together with the procedures and triggers that also need to be created there. A procedure, and a trigger that calls this procedure, are created automatically on the register synchronization table (IBMSNAP_REG_SYNCH). The IBMSNAP_SEQTABLE is used in Informix in the process of generating sequence numbers for the change records that will be inserted into staging tables. The IBMSNAP_REG_SYNCH table is used in the process of updating the new-change signals (specifically, the SYNCHPOINT and SYNCHTIME columns) in the IBMSNAP_REGISTER table in Informix before Apply reads this table to see if there are new changes available in the staging tables. All the control tables created in Informix, except the IBMSNAP_SEQTABLE, have an associated nickname that is created in the DB2 database that has the Server definition to Informix. After you create the Capture Control Tables in Informix, and nicknames for them in a DB2 ESE or DB2 Connect EE database, you will notice in Replication Center's Replication Definitions for Capture Control Servers a separate object for the Informix server. The object will indicate the federated Server name for the Informix server and the name of the DB2 database that contains this Server definition.
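The pass-through capability mentioned above can also be exercised manually from the federated database. A minimal sketch, using the example Server definition IDS_A23BK51 from earlier in this chapter:

```sql
-- Start a pass-through session to the Informix server
SET PASSTHRU IDS_A23BK51;
-- Statements entered here are sent directly to Informix;
-- this is how RC creates its control tables and triggers there
-- End the pass-through session
SET PASSTHRU RESET;
```

This can be handy for verifying, directly at Informix, that the control tables and triggers were created.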
Creating apply control tables involves many of the same concepts and considerations as creating capture control tables. As in 3.2, Setting up capture control tables on page 107, we will not be using the Launchpad. Instead, we will navigate the Replication Center (RC). Again, this more involved method will assist you in developing the skills to customize replication to meet any of your needs.
Attention: Control tables setup on iSeries cannot be done from the RC. See 3.3.2, Platform specific issues, apply control tables on page 121.
Unlike the capture control tables, the apply control tables can be on any database server. For now we will create the apply control tables on the target server. We will visit the issue of creating apply control tables on different database servers in 3.4.3, Apply control tables - advanced considerations on page 125. Within RC, to open the Create Apply Control Tables window:
1. Double-click on Replication Definitions so that the folders for Capture Control Servers and Apply Control Servers are shown.
2. Select Apply Control Servers.
3. From the menu bar choose Selected -> Create Apply Control Tables -> Custom....
4. Select the database where you want to replicate to, and then OK.
Note: If you have not correctly set up RC to manage your passwords, you may be prompted for an id and password. See 2.11, Managing your DB2 Replication Center profile on page 76.
Table name: Description
IBMSNAP_APPENQ: Ensures only one Apply is running for an apply qualifier.
IBMSNAP_APPLYTRACE: Contains the data collected when monitoring Apply.
IBMSNAP_APPLYTRAIL: Contains audit information about all subscription set cycles performed by Apply.
IBMSNAP_APPPARMS: Contains apply parameters.
IBMSNAP_COMPENSATE: A temporary table for compensation processing for subscription sets with more than 150 members.
IBMSNAP_SUBS_COLS: Contains subscription columns.
IBMSNAP_SUBS_EVENT: Contains subscription events for all subscription sets.
IBMSNAP_SUBS_MEMBR: Contains subscription members for all subscription sets.
IBMSNAP_SUBS_SET: Contains the subscription set identifiers.
IBMSNAP_SUBS_STMTS: Contains before or after SQL statements or stored procedure calls that you define for a subscription set.
Attention: The IBMSNAP_APPPARMS table is not currently listed in the Create Apply Control Tables window. However, it is created by this dialog. You can verify this, or modify the properties of its table space, in the SQL Script field of the Run now or save SQL window. Unlike the IBMSNAP_CAPPARMS table, for which RC has the Manage Values in CAPPARMS window, the values in the records of the APPPARMS table are not yet modifiable using the RC. However, they can be modified using SQL.
These tables are described in more detail in 3.4.5, Control tables described on page 129. Only IBMSNAP_APPLYTRACE and IBMSNAP_APPLYTRAIL are likely to require much space. You may want to manually prune these tables from time to time. The use of these tables is discussed in detail in Chapter 7, Monitoring and troubleshooting on page 309.
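Since these two tables are only pruned manually, a periodic cleanup can be run with ordinary SQL. A hedged sketch, assuming the default ASN schema; the 30-day retention period and the choice of timestamp columns are our assumptions for the illustration, not the book's recommendation:

```sql
CONNECT TO tgtdb;
-- Remove Apply audit rows older than 30 days
DELETE FROM ASN.IBMSNAP_APPLYTRAIL
  WHERE LASTRUN < (CURRENT TIMESTAMP - 30 DAYS);
-- Remove old Apply messages the same way
DELETE FROM ASN.IBMSNAP_APPLYTRACE
  WHERE TRACE_TIME < (CURRENT TIMESTAMP - 30 DAYS);
COMMIT;
```

Run such pruning when Apply is quiesced, or in small batches, to avoid lock contention with a running Apply.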
Note: As with Capture Control Servers, if you save the SQL to create the Apply Control Tables to a file and run this SQL outside Replication Center, the new Apply Control Server will not automatically appear in the Replication Center's left-window tree. You can use Replication Center's Add server function to add the new Apply Control Server into Replication Center. See 2.15, Adding Capture and Apply Control Servers on page 93.
Tables are created in two tablespaces. The tables are distributed to the tablespaces based on their locksize. Tablespace specifications are the same as for the capture control server. Refer to 3.2.2, Platform specific issues, capture control tables on page 114.
Note: After you create the control tables, you need to grant authority to specific user profiles to access them. See Chapter 2 in the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121 for details on using the GRTDPRAUT command to authorize a user to access these tables.
replication control tables go in which tablespaces and the placement of those tablespaces on disk drives. Also, putting the UOW table in its own bufferpool can help replication performance.
only be created with the first set of Capture Control Tables created at a server. The schema of this table is fixed and only one copy of this table is needed at each Capture Control Server.
Note: After you create additional capture schemas, you need to grant authority to specific user profiles to access the new capture control tables. See Chapter 2 in the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121 for details on using the GRTDPRAUT command to authorize a user to access these tables.
preferred to have the apply tables in the target database on each target server. This keeps things simple, and generally gives the best performance as it minimizes the network communication that Apply has to do. Also, if network problems occur, Apply is still able to read its control tables and to write records, including error information, into its control tables. There are scenarios where you may want to trade this simplicity for better performance, security, or manageability. See 10.4, Apply performance on page 443.
Note: In the row length column, the number on the right indicates the iSeries row length.
Table 3-5 Tablespace sizing for capture control tables

IBMSNAP_CAPSCHEMAS: row length 30 / 33; number of rows: the number of capture schemas.
IBMSNAP_CAPENQ: no rows.
IBMSNAP_CAPMON: 1 row inserted every Capture MONITOR_INTERVAL (on the iSeries, 1 row inserted for each active journal job). Rows are eligible for pruning after MONITOR_LIMIT is reached.
IBMSNAP_CAPPARMS: 1 row.
IBMSNAP_CAPTRACE: 1 row for each Capture message. Rows are eligible for pruning after TRACE_LIMIT is reached.
IBMSNAP_PRUNCNTL: 1 row for each subscription member (target table) that copies from a table registered in this capture schema.
IBMSNAP_PRUNE_LOCK (DB2): no rows; row length 9.
IBMSNAP_PRUNE_SET (DB2 and Informix): 1 row for each subscription set that is listed in IBMSNAP_PRUNCNTL; row length 94 / 142.
IBMSNAP_REGISTER (DB2 and Informix): 1 row for each registered table in this capture schema; row length 588 / 618.
IBMSNAP_RESTART (DB2): 1 row (for iSeries, 1 row for each journal job).
IBMSNAP_SIGNAL (DB2 and Informix): variable, depending on Apply and user activities. Rows with SIGNAL_STATE = C (completed) are eligible for pruning; row length 74 / 90.
IBMSNAP_REG_EXT (iSeries): 1 row for each registered table in this capture schema; row length n/a / 256.
The unit of work (UOW) table is volatile. Capture inserts a row into this table whenever a commit is issued for a transaction that involves replication sources. After the captured changes for the transaction have been applied to all replication targets, Capture deletes the associated rows from the UOW table.
The UOW table exists only on DB2 source servers. Table 3-6 is a worksheet to help you estimate the space needed for the UOW table.
Table 3-6 Sizing worksheet for IBMSNAP_UOW

Calculations:
Time between prunes, in minutes
Number of transactions per minute
UOW row length (109) * UOW row rate
UOW row length (129) * UOW row rate
Contingency factor (should be 2 or more)
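As an illustration of how the worksheet comes together, suppose Capture prunes every 60 minutes and about 500 replicated transactions are committed per minute; both workload numbers are invented for this example:

```
rows between prunes = 60 minutes * 500 transactions/minute = 30 000 rows
space (109-byte row) = 30 000 rows * 109 bytes = 3 270 000 bytes (about 3.3 MB)
with a contingency factor of 2 = about 6.5 MB
```

The contingency factor absorbs pruning delays, for example when Apply falls behind or the network to a target is down.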
Table sizing for the apply control tables (row length shown as DB2 / iSeries):

IBMSNAP_APPENQ: row length 18; 1 row for each Apply started on this server.
IBMSNAP_APPLYTRACE: row length 1060 / 1078; 1 row for each Apply message. Rows must be pruned manually.
IBMSNAP_APPLYTRAIL: row length 1403 / 1583; 1 row each time Apply processes a subscription set. Rows must be pruned manually.
IBMSNAP_APPPARMS: row length 1087 / 1091; 1 row for each apply qualifier.
IBMSNAP_COMPENSATE: row length 31; variable number of rows, used for conflict compensation in update anywhere replication.
IBMSNAP_SUBS_COLS: 1 row for each column in each target table.
IBMSNAP_SUBS_EVENT: 1 row for each posted event. Rows must be pruned manually.
IBMSNAP_SUBS_MEMBR: 1 row for each target table.
IBMSNAP_SUBS_SET: 1 row for each subscription set.
IBMSNAP_SUBS_STMTS: 1 row for each SQL statement or stored procedure call defined for a subscription set; 2 rows for a replica.
IBMSNAP_APPPARMS
The Apply parameter table overrides the environment defaults for the Apply program. It is created on the apply control server and is read when Apply starts. Example 3-3 shows the Apply parameter table's data definition language (DDL). You can have only one row in this table for each apply qualifier. The table columns have the same properties as the Apply parameters of the same names; Table 3-8 describes them. Any value in these columns is overridden by values that you supply to Apply when it is started. The Apply parameters are demonstrated in Chapter 6, Operating Capture and Apply on page 233. Detailed information on the parameters is available in the DB2 UDB Replication Guide and Reference in the chapter titled Operating the Apply program. iSeries Apply does not support IBMSNAP_APPPARMS. Currently you have to run SQL manually to change the values in this table. Example 3-4 shows how to generate a row for an apply qualifier and update one of its columns; this affects the behavior of Apply for that apply qualifier on subsequent starts. Until the functionality is added to the Replication Center, these parameters are not shown when you start Apply.
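The precedence just described — start-time values override the IBMSNAP_APPPARMS row, which in turn overrides the built-in defaults — can be sketched as follows. The `resolve` helper and the sample default values are illustrative assumptions, not the actual Apply implementation.

```python
# Sketch of Apply parameter precedence: command-line values override
# the IBMSNAP_APPPARMS row, which overrides built-in defaults.
# The resolve helper and sample values are illustrative only.

DEFAULTS = {"DELAY": 6, "ERRWAIT": 300, "COPYONCE": "N"}

def resolve(appparms_row, startup_args):
    params = dict(DEFAULTS)
    # columns left NULL in the table fall back to the defaults
    params.update({k: v for k, v in appparms_row.items() if v is not None})
    params.update(startup_args)     # start-time values win
    return params

row = {"DELAY": 30, "ERRWAIT": None}    # one row per apply qualifier
p = resolve(row, {"DELAY": 10})
print(p["DELAY"], p["ERRWAIT"], p["COPYONCE"])  # 10 300 N
```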
Table 3-8 IBMSNAP_APPPARMS columns
- APPLY_QUAL: The Apply qualifier that identifies which subscription sets Apply runs.
- APPLY_PATH: Used for all Apply file I/O. The value, if set, must be a valid path on the apply control server. Apply generates its operations log file in this directory, which is also used for spill files unless the SPILLFILE column is set or the path is overridden at Apply start.
- COPYONCE: Set to Y if you want Apply to stop after running each subscription set once.
- DELAY: The time in seconds to wait between Apply cycles. The default is 6; the maximum valid value is 10000, equivalent to about 2.8 hours. The delay between runs is also influenced by the sleep value of your subscription sets.
- ERRWAIT: How long, in seconds, Apply waits before retrying after an error. The default is 300 seconds (5 minutes). It can be set to zero. We do not have information on a maximum value.
- INAMSG: Set to N if you do not want Apply to issue a message when it becomes inactive.
- LOADXIT: Set to Y if you want Apply to invoke the ASNLOAD exit routine to refresh the target tables.
- LOGREUSE: Set to Y if you want Apply to overwrite its operations log file when Apply is started.
- LOGSTDOUT: Set to Y if you want Apply to log its operations to standard output as well as the operations log file.
- NOTIFY: Set to Y if you want Apply to call (notify) the ASNDONE exit routine when it completes processing a subscription set.
- OPT4ONE: Set to Y if you want Apply to cache member and column data. Member or column changes then require a stop/restart of Apply, so this is not feasible if there are multiple subscription sets.
- SLEEP: Set to N if you want Apply to terminate, instead of sleeping, when it finishes processing subscription sets.
- SPILLFILE: The only valid value on Linux, UNIX, or Windows is DISK. On z/OS the default is VIO; if you specify DISK, the Apply program uses the specifications on the ASNAPLDD card to allocate spill files.
- SQLERRCONTINUE: Set to Y if you want Apply to continue when it encounters an SQL error. This is not recommended for production environments.
- TERM: Set to N if you do not want Apply to terminate when DB2 terminates on the apply control server.
- TRLREUSE: Set to Y if you want Apply to empty the IBMSNAP_APPLYTRAIL table when Apply starts. See Chapter 7, Monitoring and troubleshooting on page 309 for additional information about the apply trail.
IBMSNAP_COMPENSATE
The compensate table was introduced in a Version 7 FixPak. It is created on the apply control server and is used for update-anywhere replication with conflict detection. It is used automatically when there are more than 150 subscription set members in a subscription set.
Example 3-5 IBMSNAP_COMPENSATE DDL
CREATE TABLE ASN.IBMSNAP_COMPENSATE (
  APPLY_QUAL CHAR(18) NOT NULL,
  MEMBER SMALLINT NOT NULL,
  INTENTSEQ CHAR(10) FOR BIT DATA NOT NULL,
  OPERATION CHAR(1) NOT NULL)
IN TSASNAA;
CREATE UNIQUE INDEX ASN.IBMSNAP_COMPENSATX
  ON ASN.IBMSNAP_COMPENSATE (APPLY_QUAL ASC, MEMBER ASC);
Chapter 4.
Replication sources
This chapter discusses the following topics:
- What is a replication source?
- Define a replication source
- Views as replication sources
If the data source is non-DB2, select the Capture Control Server icon that indicates the federated server definition name for the non-DB2 source and the name of the DB2 ESE database that contains this server definition. For example, if the Capture Control Server name is IDS93_ITSO / FED_DB: IDS93_ITSO is the server definition name in a DB2 ESE database, and FED_DB is the name of the DB2 ESE database containing the server definition.
4. If you have created more than one set of capture control tables for this capture control server, expand the capture schema you want to register under from the list of capture schemas. 5. From the list, select either Register Tables or Register Views; if the source server is non-DB2, select Register Nicknames. For the differences related to registration of views, refer to 4.3, Views as replication sources on page 162. An alternative path for registering sources from the Replication Center is the Launchpad.
You select the tables you want to register as replication sources from the Add Registerable Tables window. If you use the Retrieve All button, your list will include all the registerable tables of the capture control server; this button ignores any search criteria you specified. To filter, provide search criteria and use the Retrieve button on this window. The search criteria columns are based on the columns that exist in the DB2 catalog and are specific to the platform of the capture control server. The most commonly used ones for DB2 UDB for UNIX and Windows are the name and schema. On the iSeries, schemas equate to libraries. If you are unsure how many table descriptions may be selected, the Count button gives the number of rows satisfying the selection, which is much more efficient than returning the whole list. You may find filtering even more useful if the replication source server is DB2 UDB for z/OS, since the list will include all eligible tables from the subsystem catalog; there, selection can be based on many different criteria, among which database and creator are the most commonly used. It is not possible to register Change Data (CD) tables. DB2 UDB for z/OS tables with an EDITPROC or VALIDPROC defined cannot be replication sources and therefore do not appear in the registerable tables list.
Important:
This list does not include the sources that are already registered for this capture schema of the capture control server. If you want to capture a source multiple times, you need to define multiple capture schemas.
There are a number of options you can specify when registering a source.
Note: If you are reading this book just to do a hands-on example with the Replication Center, you can accept the defaults on this screen and continue with 4.3, Views as replication sources on page 162.
Continue reading this section if you want details on the registration options and CD table properties.
Figure 4-3 shows how Apply replicates the updates. There are differences in how it executes depending on whether you update the target key and whether or not you capture the before-image values. In each of the examples, MGRNO of department A00 is updated from 000010 to 000020. The target key in the first example is DEPTNO and is not updated. The Apply program searches the target based on DEPTNO and replicates the update correctly without any need of before-image values. In the second example, MGRNO is specified as the target key, which may be preferred by the DBA to take advantage of an index created for ad-hoc queries against the target table. No before-image values are captured for this example. The manager number (000010) of the department with department code A00 is updated to 000020 on the source table. The Apply program uses the changed value (MGRNO=000020) to find the qualifying rows, which do not exist in the target table. Apply then inserts a new row with MGRNO=000020 into the target table. Since Apply does not have the before-image value, the row with MGRNO=000020 is replicated to the target table but the old row (MGRNO=000010) is not deleted. The target key is also updated in the third example, but here the Apply program has the before-image values to search the target table. The update on the source table is correctly replicated to the target because Apply finds the qualifying rows on the target based on the before-image value of the target key.
Figure 4-3 How Apply replicates the updates to the target key
(Three panels, each applying the update of MGRNO from 000010 to 000020 in department A00: with target key DEPTNO and no before-image, the row is updated correctly; with target key MGRNO and no before-image, both the old and new rows remain in the target; with target key MGRNO and before-image values, the row is updated correctly.)
Important: During subscription definition, the Let the Apply program use before-image values to update target key columns option must be checked, and the before-image values of the target key columns must be captured, for replication to be correct when target keys are updated.
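The three cases in Figure 4-3 can be mimicked with a toy Apply that locates target rows by a chosen key. The `apply_update` helper, the dictionaries, and the column values are illustrative assumptions, not DB2 code.

```python
# Toy illustration of Figure 4-3: applying a source UPDATE to a target
# keyed on a chosen target key, with or without the before-image value.
# All names and the apply_update helper are illustrative only.

def apply_update(target, key_col, before, after, use_before_image):
    """target: dict mapping key value -> row dict. Mimics Apply's search."""
    # Apply searches the target using the before-image of the key when
    # available, otherwise the (already changed) after-image value.
    search_key = before[key_col] if use_before_image else after[key_col]
    if search_key in target:
        del target[search_key]          # found: replace the old row
        target[after[key_col]] = after
    else:
        target[after[key_col]] = after  # not found: insert (old row stays)

before = {"DEPTNO": "A00", "MGRNO": "000010"}
after  = {"DEPTNO": "A00", "MGRNO": "000020"}

# Case 1: target key DEPTNO (not updated) -- correct without before-image
t1 = {"A00": dict(before)}
apply_update(t1, "DEPTNO", before, after, use_before_image=False)

# Case 2: target key MGRNO, no before-image -- old row is left behind
t2 = {"000010": dict(before)}
apply_update(t2, "MGRNO", before, after, use_before_image=False)

# Case 3: target key MGRNO, with before-image -- replicated correctly
t3 = {"000010": dict(before)}
apply_update(t3, "MGRNO", before, after, use_before_image=True)

print(len(t1), len(t2), len(t3))  # 1 2 1
```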
Restriction: Update anywhere is not supported with non-DB2 source tables or target tables. DB2 Replication's update-anywhere implementation requires that Capture run at both the source table and the target, and this is not possible when either the source or the target is in a non-DB2 database.
Since replication is an asynchronous operation, there is a probability that the same row is changed on both sides before being replicated to the other side. This is called an update conflict. If one master and many replicas are defined (update anywhere), this conflict can be detected and resolved by accepting the change of the master and undoing the update of the replica. Apply needs the before-image values of the replica to undo its changes in case of conflicts. In the peer-to-peer model there is no master, so conflict detection is not possible and capturing before-image values of the replica is not required. See 9.6, DB2 peer to peer replication on page 405 for peer-to-peer setup. Consider the replication environment in Figure 4-4. There is an HCUSTOMER table at the headquarters. It is replicated to branch offices BRANCH_1 and BRANCH_2 regularly. The HCUSTOMER table is updated at the headquarters. There are also applications updating the replica tables (BCUSTOMER) at branch offices independently. Both headquarters and branch offices are capture control servers, and the CD tables created for the captured changes are called CD_HCUST and CD_BCUST respectively. Assume the following sequence of events, which describes an update conflict between the master (headquarters) and a replica (one of the branches):
1. At time T1, a customer record (say C1) is changed and committed at HCUSTOMER and this change is stored in CD_HCUST.
2. At time T2, C1 is changed and committed at BCUSTOMER and captured in CD_BCUST.
3. At time T3, during the Apply cycle, C1 is read from CD_HCUST for replication to one of the branches.
4. Apply searches for C1 in CD_BCUST. A match indicates that an update conflict has occurred.
5. The change at BRANCH_1 is undone by using the before-image values in CD_BCUST.
In update-anywhere configurations, before applying a change, the Apply program searches the CD table on the target for matching key values. If a match is found, the change of the replica is undone by using the before-image values, and the master's change stays.
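The five-step sequence above amounts to: look for the replica's key in its CD table; on a match, undo the replica's change with the before-image and let the master's change win. A minimal sketch, in which all data structures and the helper name are illustrative assumptions rather than the real CD table layout:

```python
# Minimal sketch of standard update-anywhere conflict detection:
# before applying a master change, Apply looks for the same key in the
# replica's CD table; on a match it undoes the replica change using the
# before-image. Structures are illustrative, not the real CD layout.

def apply_master_change(replica, replica_cd, key, master_row):
    conflict = None
    for cd_row in replica_cd:
        if cd_row["key"] == key:          # step 4: match => conflict
            conflict = cd_row
            break
    if conflict is not None:
        # step 5: undo the replica change with its before-image values
        replica[key] = dict(conflict["before"])
    replica[key] = dict(master_row)       # the master's change stays
    return conflict is not None

replica = {"C1": {"name": "Smith", "balance": 70}}
replica_cd = [{"key": "C1",
               "before": {"name": "Smith", "balance": 50},
               "after": {"name": "Smith", "balance": 70}}]
master_row = {"name": "Smith", "balance": 60}

had_conflict = apply_master_change(replica, replica_cd, "C1", master_row)
print(had_conflict, replica["C1"]["balance"])  # True 60
```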
Figure 4-4 Update-anywhere replication: the head office table HCUSTOMER (with CD table CD_HCUST) is replicated by Apply to and from the BCUSTOMER replica tables (with CD tables CD_BCUST) at BRANCH_1 and BRANCH_2.
Important: Select the before-image values for the replicas in the update-anywhere model. Before-image values are not used in the peer-to-peer model.
If before-image values are stored in the CD table as well as after-image values, the increase in CD table size should be considered.
If you want Capture to capture a change only when a registered column is changed, select Capture changes to registered columns only from the pull-down menu for the Row-capture rule. Capturing changes to all columns is the default. This option used to be a Capture start-up parameter (CHGONLY), which applied to all replication sources that the Capture program was capturing data for. Since Version 8, the row-capture rule can be set for individual registered sources, so the selection applies only to the sources you are currently registering. The option is no longer available as a startup parameter.
Figure 4-5 Full refresh only replication: Apply reads the control information at the apply control server and the source table directly, and replicates to the target at the target server.
In full refresh only replication (see Figure 4-5), Apply reads the registration information from the capture control tables, accesses the replication source, and replicates to the target. Since the data on the replication source is not captured and Apply reads the replication source directly, full refresh only replication is possible without starting the Capture program. If the replication source is a small table, you may prefer full refresh only to differential (change capture) replication. Full refresh only replication avoids some of the overhead of change capture replication, such as updating the CD tables and generating log records for CD table updates. Not running a Capture program can also be regarded as a performance benefit.
Full refresh only is also the only way to replicate DB2 UDB for UNIX and Windows catalog tables. It is not possible to replicate DB2 UDB for z/OS catalog tables.
Stop on error
This option determines whether the Capture program terminates on every error, or terminates only when it must. The default is yes for DB2 UDB for UNIX, Windows, and z/OS; the default is no for DB2 UDB for iSeries. If Stop on error is yes, the Capture program terminates on every error that occurs. But there are errors that do not require Capture to terminate. If you set Stop on error to no, Capture does not terminate on every error; instead it stops processing the affected registration, either by deactivating it or, if the error occurred on the registration's first capture cycle, by not activating it at all. Do not stop on error keeps Capture from terminating in the following situations (IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121):
- The registration is not defined correctly.
- The Capture program did not find the CD table when it tried to insert rows of changed data.
- The Capture program started or reinitialized when the DATA CAPTURE CHANGES option on the (non-OS/400) source table was set to OFF.
If one of these errors occurs and do not stop on error is your option, the STATE column of the IBMSNAP_REGISTER table is set to S (stopped) and the error message number associated with the failure is stored in the STATE_INFO column for this registration. This allows you to take corrective action; you must then set the STATE column back to I (inactive), see 8.1.2, Deactivating and activating registrations on page 352. If the replication source is on DB2 UDB for z/OS and the tablespace is compressed, loss of the compression dictionary causes Capture to set the STATE to I for this replication source, and Apply can immediately capstart the registration. To keep Apply from issuing a capstart, use USER or CMD/STOP signals to coordinate compression dictionary changes. If you are using DB2 compression utilities, you should coordinate these utilities with the Capture program. You can find how to do so in Chapter 13, Maintaining your replication environment, in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121. Some errors, such as a worker thread encountering a media-failure condition, cause Capture to terminate even if you specified do not stop on error.
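The decision logic above can be sketched as follows: with Stop on error set to yes, Capture terminates on any error; with no, the three recoverable situations mark just that registration stopped (STATE = S, message number in STATE_INFO), while unrecoverable errors still terminate Capture. The class, the error labels, and the helper are illustrative assumptions, not Capture internals.

```python
# Sketch of Capture's Stop-on-error behavior for registration errors.
# RECOVERABLE mirrors the three situations listed above; the
# Registration class and handle_error helper are illustrative only.

RECOVERABLE = {"bad_registration", "cd_table_missing",
               "data_capture_changes_off"}

class Registration:
    def __init__(self):
        self.state = "A"        # A = active
        self.state_info = None

def handle_error(reg, error, msg_no, stop_on_error):
    """Return 'terminate' or 'continue' for the Capture program."""
    if stop_on_error:
        return "terminate"              # terminate on every error
    if error in RECOVERABLE:
        reg.state = "S"                 # stop just this registration
        reg.state_info = msg_no         # record the error message number
        return "continue"
    return "terminate"                  # e.g. media failure: no choice

reg = Registration()
action = handle_error(reg, "cd_table_missing", "ASN0005E",
                      stop_on_error=False)
print(action, reg.state)  # continue S
```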
Figure 4-6 Row filtering: the CUSTOMER table is replicated to a premium customers target (where acct_balance > 50000) and a regular customers target; an update of acct_balance from 14000 to 64000 arrives at the premium target as an insert of a new row.
Consider the example in Figure 4-6, where the CUSTOMER table is replicated to two targets. The first target is for premium customers, whose account balances are more than 50000. Customers whose account balances are less than or equal to 50000 are regular customers, and they are replicated to a different target. To replicate CUSTOMER to different targets, two subscription sets (say S1 and S2) are defined. The premium customers predicate is defined as the row filter for S1, and the regular customers predicate for S2. Before applying a change for either S1 or S2, Apply checks acct_balance and skips the change if it does not qualify. Now an update changes a customer's balance from 14000 to 64000, moving his category from regular to premium. If the update is stored as an update in the CD table, then during the subscription cycle of S1, Apply checks acct_balance (64000), finds that the row qualifies for S1, and inserts it into premium customers after receiving a not-found condition. Apply simply skips this row during the subscription cycle of S2 because it does not qualify for S2, even though it should be deleted from regular customers. If, instead, the update is stored as a delete and an insert in the CD table, the delete (with acct_balance=14000) qualifies for S2 and is applied to regular customers, and the insert (with acct_balance=64000) qualifies for S1 and is applied to premium customers. If you use this option, two rows instead of one are stored in your CD table for each update. You should consider this when estimating the size of the CD table, especially if updates are frequent in your environment.
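The scenario can be replayed with a toy filter: when the balance change crosses the predicate boundary, an update captured as a single UPDATE leaves a stale row in the regular target, while capturing it as DELETE+INSERT keeps both targets correct. Everything here is illustrative, not Apply code.

```python
# Toy replay of Figure 4-6: one source update (14000 -> 64000) applied
# to two predicate-filtered targets. Illustrative code, not Apply.

def apply_changes(changes, target, predicate):
    for op, key, row in changes:
        if op in ("I", "U"):
            if predicate(row):
                target[key] = row        # insert/update if it qualifies
            # non-qualifying rows are simply skipped
        elif op == "D":
            if predicate(row):
                target.pop(key, None)    # delete only qualifying rows

premium = lambda r: r["acct_balance"] > 50000
regular = lambda r: r["acct_balance"] <= 50000
old = {"acct_balance": 14000}
new = {"acct_balance": 64000}

# Captured as one UPDATE: the stale row stays in the regular target.
p1, r1 = {}, {"c1": old}
for tgt, pred in ((p1, premium), (r1, regular)):
    apply_changes([("U", "c1", new)], tgt, pred)

# Captured as DELETE + INSERT: both targets end up correct.
p2, r2 = {}, {"c1": old}
for tgt, pred in ((p2, premium), (r2, regular)):
    apply_changes([("D", "c1", old), ("I", "c1", new)], tgt, pred)

print(len(p1), len(r1), len(p2), len(r2))  # 1 1 1 0
```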
and others as replicas. You must consider whether or not to re-capture when registering both the master and the replica target tables. In peer-to-peer replication, where there are no master sites and all sites are replicas, capturing changes from the replica tables should be avoided. See 9.6, DB2 peer to peer replication on page 405. The purpose of re-capturing is to propagate a change received from one site on to others. If you have one master and one replica, as in the first example of Figure 4-7, where neither the master nor the replica has any other site to replicate received changes to, you simply uncheck this option for both the master and the replica.
Figure 4-7 Re-capture decisions: (1) one master M and one replica R, neither re-captures; (2) master M with replicas R1 and R2, M re-captures while R1 and R2 do not; (3) M with R1 and R2 partitioned by key range, no site re-captures; (4) M, R1, and R2 chained, R1 re-captures while M and R2 do not.
If you replicate from the master to more than one replica, as in the second example, the changes originating from replicas R1 and R2 have to be propagated by the master to R2 and R1 respectively, so you need to re-capture changes at the master. Re-capturing the changes originating from the master at the replicas is not necessary, since they do not propagate data to any other site. If you have distributed your data to the replicas by partitioning on a key, as in the third example, then the master need not re-capture the changes from the replicas, because each row belongs to only one replica: a change originating from R1 will never be replicated to R2, because it is not in the range of rows replicated to R2.
If you are replicating from replicas to other replicas, you need to re-capture at any replica that replicates to others. In the fourth example, R1 must re-capture to propagate changes originating from M on to R2, and changes originating from R2 on to M. When a change is re-captured, the APPLY_QUAL column in the IBMSNAP_UOW table, which identifies the Apply program that applied the change, prevents the change from being propagated back to its originating site.
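The APPLY_QUAL check can be sketched in a simplified form: re-captured changes carry the qualifier of the Apply program that delivered them, and an Apply program skips rows carrying its own qualifier. The structures, qualifier names, and helper below are illustrative assumptions, not the real CD/UOW join.

```python
# Sketch of how APPLY_QUAL in IBMSNAP_UOW breaks replication loops:
# an Apply program skips re-captured changes that it applied itself.
# Structures and names are illustrative only.

def changes_to_propagate(cd_rows, my_apply_qual):
    """Select re-captured changes that did NOT originate from this Apply."""
    return [r for r in cd_rows if r["apply_qual"] != my_apply_qual]

# Changes re-captured at R1: one arrived from M (via APPLY_M_TO_R1),
# one arrived from R2 (via APPLY_R2_TO_R1), one is a local change.
cd_at_r1 = [
    {"key": "k1", "apply_qual": "APPLY_M_TO_R1"},
    {"key": "k2", "apply_qual": "APPLY_R2_TO_R1"},
    {"key": "k3", "apply_qual": None},
]

# The Apply that copies from R1 back to M skips what it brought from M.
to_m = changes_to_propagate(cd_at_r1, "APPLY_M_TO_R1")
print([r["key"] for r in to_m])  # ['k2', 'k3']
```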
Attention: If Capture changes from replica tables is checked on your registration even though you do not need re-capturing, unnecessary updates are made to the CD and capture control tables. Capture changes from replica tables is checked by default; uncheck it if re-capturing is unnecessary.
If you need more thorough checking, you may prefer enhanced conflict detection, but its requirements may be hard to satisfy in every environment. Enhanced conflict detection aims to make a complete check of the changes made at the target. For this reason, Apply waits for Capture to capture all the log records. Because this could be a never-ending process, Apply locks the target to stop new changes. After all log records are read and captured to the CD table, conflict checking is performed, covering all the changes made at the target. Although enhanced conflict detection can detect conflicts that standard detection cannot, it is almost impossible to implement in production environments where replication is continuous and tables are accessed by applications almost all the time. This method may be suitable for mobile users who connect to the server occasionally for replication and do not run applications until the replication ends.
Local Journal
Before registering your source tables to capture changed data, you must start journaling those tables. If you specify the option for full refresh only, you do not need to journal the source tables. The following highlights the steps to create journal receivers and journals and to start journaling. Refer to Chapter 2 in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121 for details on creating and managing journals. First you need to create a journal receiver using the CRTJRNRCV command (Example 4-2):
Example 4-2 Create Journal receiver
CRTJRNRCV JRNRCV(Journalreceiverlibrary/Journalreceivername) THRESHOLD(100000) TEXT('DataPropagator Journal Receiver')
After you create a journal receiver, you reference it when you create the journal using the CRTJRN command (Example 4-3):
To start journaling for a source table, use the STRJRNPF CL command. You can specify multiple source tables at one time (Example 4-4).
Example 4-4 Start Journal
STRJRNPF FILE(Sourcetablelibrary/Sourcetable) JRN(Journallibrary/Journalname) OMTJRNE(*OPNCLO) IMAGES(*BOTH)
Note: The Capture program requires *BOTH for the IMAGES parameter. Whenever you end journaling for a source table using the ENDJRNPF CL command, a full refresh is triggered for the target table.
Remote Journal
A remote journal is a copy of the source journal that resides on a target iSeries server. Remote journals provide an efficient option for replicating journal entries to one or more remote systems. Remote journal system management uses the following communications protocols for replicating the journal entries to the remote target servers:
- OptiConnect for OS/400
- Systems Network Architecture (SNA)
- Transmission Control Protocol/Internet Protocol (TCP/IP)
The remote journal function replicates journal entries to the remote system at the Licensed Internal Code layer. Moving the replication to this lower layer provides the following benefits:
- The remote system handles more of the replication overhead.
- Overall system performance and journal entry replication performance are improved.
- Replication to the remote journal can (optionally) occur synchronously.
- Journal receiver save operations can be moved to the remote system.
To create a remote journal at the capture control server, enter the ADDRMTJRN command at the source server where the local journal is located (Example 4-5).
Example 4-5 Create the remote journal
ADDRMTJRN RDB(RemoteDBname) SRCJRN(Journallibrary/Journalname) TEXT('Remote journal from source system')
When you initially create a remote journal, its delivery state is inactive. To start remote journal activity, activate it by performing the following steps at the source server:
- Enter the WRKJRNA JRN(Journallibrary/Journalname) CL command to display the Work with Journal Attributes screen.
- Press F16 to display the Work with Remote Journal Information screen.
- Type 13 in the option field, then press F4 to display the CHGRMTJRN (Change Remote Journal) prompt screen.
- For the delivery parameter, enter *ASYNC for asynchronous or *SYNC for synchronous journal entry delivery.
Run CRTSQLPKG where the Capture program is running, pointing to the iSeries server where the source table resides. Proceed to Chapter 2 in the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121 for the rest of the procedure to set up replication on the iSeries.
We will describe the remote journaling and RRN options, which are unique to the iSeries. All the other replication source functions are described in the sections mentioned in the preceding paragraph.
If you want to use RRN as described in the preceding section, Relative Record Number on page 153, just select the check box Use Relative Record Number (RRN) as primary key.
Add DPR Registration (ADDDPRREG) prompt screen, showing the parameters Source table and Library (required), Source table type (*USERTABLE, *POINTINTIME, ...), Allow full refresh (*YES, *NO), Text 'description', Capture columns, and Capture relative record number (*YES, *NO), with function keys F3=Exit, F4=Prompt, F5=Refresh, F12=Cancel, and F13=How to use this display.
After you enter required values for Source table and Library, press F10 for additional parameters and then Page Down to display the screen in Figure 4-10.
Add DPR Registration (ADDDPRREG) additional parameters screen, showing Record images . . . *AFTER (*AFTER, *BOTH).
For detailed information on the parameter values, use the field-level help by moving the cursor to the parameter and pressing F1, or refer to Chapter 18 in the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
4.2.5 CD Table
As a result of registration, a CD table is created on the capture control server where the replication source resides.
In Figure 4-11, you see the replication source EMPLOYEE and the CD table created for EMPLOYEE. During registration, after-images of all the columns, and before-images for EMPNO, WORKDEPT, JOB, SALARY, BONUS, and COMM, are selected. The before-image prefix is X, which is the default; it is concatenated to the names of the columns that store the before-images. All the columns except the first three store captured data. The first two columns, IBMSNAP_COMMITSEQ and IBMSNAP_INTENTSEQ, are the log sequence numbers of the commit log record and the change log record respectively. IBMSNAP_OPERATION is the operation type: I, U, or D.
Important: Capture inserts only the changes of the committed transactions to the CD table.
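The CD-table column layout just described can be sketched as a simple list construction: the IBMSNAP columns first, then the after-image columns, then the before-image columns carrying the prefix. The helper and the exact column ordering are illustrative assumptions.

```python
# Sketch of the CD-table column list described above: the IBMSNAP
# columns, after-image columns for the registered columns, and
# prefixed before-image columns for the selected ones (prefix 'X').
# The helper and exact ordering are illustrative only.

def cd_columns(registered_cols, before_image_cols, prefix="X"):
    cols = ["IBMSNAP_COMMITSEQ", "IBMSNAP_INTENTSEQ", "IBMSNAP_OPERATION"]
    cols += registered_cols                          # after-images
    cols += [prefix + c for c in before_image_cols]  # before-images
    return cols

cols = cd_columns(["EMPNO", "SALARY"], ["EMPNO", "SALARY"])
print(cols)
# ['IBMSNAP_COMMITSEQ', 'IBMSNAP_INTENTSEQ', 'IBMSNAP_OPERATION',
#  'EMPNO', 'SALARY', 'XEMPNO', 'XSALARY']
```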
The data in the CD table may be pruned periodically by the Capture program, depending on the value of the auto-prune parameter when Capture starts. If the CD table size is a concern, you can decrease the data stored in it. The Capture changes to registered columns only option available during registration allows rows to be inserted into the CD table only if one of the registered columns has changed. Another method of suppressing unnecessary rows is to define triggers on the CD table. See 9.1.2, Replicating row subsets on page 386.
CD tablespace
From the CD table tab, you can change the attributes of the tablespaces of the CD table as well as the CD name and schema. The fields on the CD Table and CD-Table Index screens are filled from your source object profile, which you customize from Manage Source Object Profiles. There you have the option to use the source name or a timestamp, concatenated with a prefix and/or suffix, as the naming convention for the CD table. The convention you can choose for the CD table schema is either the source's schema or a specific one. This convention is used for all the CD table definitions of a capture control server. If you use the source name but not the timestamp as the CD name convention and register a replication source more than once with different capture schemas, the same name for the CD table is produced on the second registration. You must either change the CD name for this replication source or use the timestamp as the CD naming convention.
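The naming collision described above is easy to see with a toy name generator. The prefix, timestamp format, and generator logic are illustrative assumptions, not the Replication Center's actual profile logic.

```python
# Toy CD-table name generator showing the collision described above:
# with a source-name convention (no timestamp), registering the same
# source under two capture schemas yields the same CD name.
# Illustrative only, not Replication Center's profile logic.

from datetime import datetime

def cd_name(source, prefix="CD", use_timestamp=False, now=None):
    core = (now or datetime.now()).strftime("%y%m%d%H%M%S") \
           if use_timestamp else source
    return prefix + core

n1 = cd_name("EMPLOYEE")           # registration under schema ASN1
n2 = cd_name("EMPLOYEE")           # same source under schema ASN2
print(n1 == n2)  # True: you must rename or switch to timestamps
```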
You must enter a database name for the tablespace. This database must already exist, because the script generated for registration creates only the tablespace. You can change the lock size and buffer pool specific to this tablespace. The storage group, minimum primary space allocation, and minimum secondary space allocation attributes are used for allocation of the dataset. CD tables are volatile in size. Capture inserts rows into this table whenever a COMMIT is issued for a transaction that involves the associated replication source. After the captured changes for the transaction have been applied to all replication targets, the applied CD rows are eligible for pruning. The CD table exists only on DB2 source servers. Table 4-1 is a worksheet to help you estimate the space needed for CD tables:
Table 4-1 Sizing worksheet for CD tables
You gather this information — Calculations:
- CD row life = max(Apply interval, prune interval), in minutes. The Apply interval is the frequency you run Apply to process changes; the prune interval is the frequency you prune changes.
- CD row rate = number of successful SQL inserts, updates, and deletes issued against the replication source table during the CD row life.
- CD row length.
- CD minimum size = CD row length * CD row rate.
- CD exception factor = multiplier to account for delays or problems (like a network outage) that might prevent changes from being applied. You should always start with an exception factor of at least 2 (should be 2 or more).
- CD adjusted size = CD minimum size * CD exception factor.
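As with the UOW worksheet, the calculation can be sketched briefly. The minimum exception factor of 2 comes from the worksheet above; the function name, the 100-byte row length, and the sample interval and rate inputs are illustrative assumptions.

```python
# Sketch of the CD-table sizing worksheet (Table 4-1). The >=2
# exception factor comes from the worksheet; the sample inputs and
# the 100-byte row length are illustrative only.

def cd_adjusted_size(apply_interval_min, prune_interval_min,
                     changes_per_min, cd_row_length,
                     exception_factor=2):
    """Estimate CD table space in bytes."""
    row_life = max(apply_interval_min, prune_interval_min)  # minutes
    row_rate = changes_per_min * row_life    # inserts/updates/deletes
    minimum_size = cd_row_length * row_rate
    return minimum_size * exception_factor

# Example: Apply every 20 min, prune every 60 min, 50 changes/min,
# 100-byte CD rows: 100 * (50 * 60) * 2 = 600000 bytes
print(cd_adjusted_size(20, 60, 50, 100))
```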
CD-Table index
An index is automatically defined on the two log sequence columns of the CD table. The index schema, name, PCTFREE, and MINPCTFREE can be customized from the CD-Table Index screen.
Server definition. The Set Passthru Reset is followed by the Create Nickname statement to create a nickname in the DB2 federated database for the CCD table being created in the non-DB2 server. In the SQL generated by the Replication Center to register a nickname, the Create Nickname statement for the CCD may be followed by Alter Nickname statements. These Alter Nickname statements override the federated server's default data type mappings for some of the columns of the CCD nickname. The federated server's default type mappings take care of the usual query access to the data in remote tables; for replication, the values from some columns of the remote CCD table may need different data-type characteristics than the default type mappings would provide. The Alter Nickname statements for CCDs change the DB2 local type of a CCD nickname column so that the federated server provides the values from these columns in the data-type format needed by replication.
involves the IBMSNAP_SEQTABLE that was created with the capture control tables. With Informix as a replication source, Capture procedures are also created, because Informix allows only a limited number of characters in the logic of a trigger. To get around this restriction, the Replication Center first creates a procedure containing the logic to insert a record in the CCD, and then creates a trigger on the source table that calls the procedure. You will see in the generated SQL that the three Capture triggers (or, in the case of Informix, three procedures each followed by a trigger) are created in the same Set Passthru session that creates the CCD table. Following the Create Procedure and Create Trigger statements, a commit commits the creation of the CCD table and the procedures and triggers; this commit is followed by Set Passthru Reset and then the Create Nickname statement for the CCD table. The Register Nicknames generated SQL also creates, or drops and re-creates, another trigger in the non-DB2 source server (in the case of Informix, a procedure and a trigger). This is the pruning trigger that deletes replicated records from the CCDs when Apply updates the IBMSNAP_PRUNCNTL records after it successfully replicates changes to target tables. For the first table registered at a non-DB2 server, the Replication Center creates the trigger (or procedure and trigger) to check for and delete replicated changes from the CCD table of that one registered table. When a second, third, and so on table is registered, the trigger/procedure is dropped and re-created, adding the additional CCD tables to the list that is checked and pruned.
To register a view, follow the steps explained in Registering the replication sources. When you select Register Views, the Add Registrable Views window launches automatically; its usage is very similar to Add Registrable Tables. At the bottom of Register Views there is a Create view button, which you can use to create your views before registering them. You are not required to specify any option on Register Views. Fields such as the View Schema and View Name of the CD views can be altered.
Restriction: Views in non-DB2 servers (e.g. Informix) are not supported as replication sources. Also, DB2 views that include nicknames cannot be registered for replication.
When you register a table, a CD table is created. When you register a view, a CD view is created. The underlying tables of the CD view differ depending on whether the underlying tables of the registered view are registered at all and, if they are, whether they are defined for full-refresh or differential replication.
As you will notice from the DDL created for this registration, a view over the CD table of EMPLOYEE is created. No additional data is captured as a result of view
registration. Apply will access the CD table of EMPLOYEE and populate the target based on the definition of the view VEMPLOYEE.
[Table: registration combinations for the underlying tables of a registered view, where CC = change capture, FR = full refresh, NR = not registered]
In our example, the project number, name, department to which the project is assigned, department name, and manager of the projects whose duration is less than a year will be replicated to the target. The requirement is to replicate columns that are spread across two tables into a single table at the target. The VPROJ_DEPT view in Example 4-7 is created for this purpose. The DEPARTMENT table is a stationary table that is not updated at the source; in contrast, the PROJECT table can change at the source at any time. The PROJECT table is registered for differential refresh to propagate the changes. The DEPARTMENT table is not registered.
Example 4-7 View on two tables, one differential refresh, the other not registered
CREATE VIEW VPROJ_DEPT AS
(SELECT P.PROJNO,P.PROJNAME,P.DEPTNO,D.DEPTNAME,D.MGRNO FROM DB2ADMIN.PROJECT P, DB2ADMIN.DEPARTMENT D WHERE P.DEPTNO = D.DEPTNO AND DAYS(PRENDATE) - DAYS(PRSTDATE) < 365)
To try this example, follow these steps:
1. Register the PROJECT table, accepting all the defaults on Register Tables. (It must be registered for change capture, and the columns that appear in the view must be registered.)
2. Create the view VPROJ_DEPT as in Example 4-7.
3. Register the view, accepting all the defaults on Register Views.
After a full refresh, this registration enables any changes to the PROJECT table to be propagated to the target, joined with the values in the DEPARTMENT table. This is change-capture replication. The CD view is defined over DEPARTMENT and the CD table of PROJECT. In the following example, the details of employee activity on the projects are replicated to the target together with the project name and the ID of the employee who is responsible for the project. The view VEMP_ACT1 in Example 4-8 is defined for this replication. Both tables, EMP_ACT and PROJECT, are updated at the source. Since the updates on both tables need to be propagated, both are registered for differential refresh.
Example 4-8 View on two tables, both of them differential refresh
CREATE VIEW VEMP_ACT1 AS (SELECT A.EMPNO,A.PROJNO,A.EMSTDATE,A.EMENDATE,P.PROJNAME,P.RESPEMP FROM DB2ADMIN.PROJECT P, DB2ADMIN.EMP_ACT A WHERE P.PROJNO = A.PROJNO)
You can implement this example by following these steps:
1. Register the PROJECT table, accepting all the defaults on Register Tables. (It must be registered for change capture, and the columns that appear in the view must be registered.)
2. Register the EMP_ACT table, accepting all the defaults on Register Tables. (It must also be registered for change capture, with the columns that appear in the view registered.)
3. Create the view VEMP_ACT1 as in Example 4-8.
4. Register the view, accepting all the defaults on Register Views.
This registration results in the creation of two views: one joining the PROJECT table with the CD table of EMP_ACT, and the other joining the EMP_ACT table with the
CD table of PROJECT. Any change occurring on either base table is propagated to the target with a join on the other base table.
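As a sketch of what those two generated views might look like (the view names and CD table names below are invented for illustration; the Replication Center derives its own names and adds its own control columns):

```sql
-- Hypothetical sketch only: names are assumptions, not generated output.
-- View 1: PROJECT base table joined to the CD table of EMP_ACT
CREATE VIEW DB2ADMIN.VEMP_ACT1A AS
  (SELECT A.EMPNO, A.PROJNO, A.EMSTDATE, A.EMENDATE,
          P.PROJNAME, P.RESPEMP
     FROM DB2ADMIN.PROJECT P, DB2ADMIN.CDEMP_ACT A
    WHERE P.PROJNO = A.PROJNO);

-- View 2: EMP_ACT base table joined to the CD table of PROJECT
CREATE VIEW DB2ADMIN.VEMP_ACT1B AS
  (SELECT A.EMPNO, A.PROJNO, A.EMSTDATE, A.EMENDATE,
          P.PROJNAME, P.RESPEMP
     FROM DB2ADMIN.CDPROJECT P, DB2ADMIN.EMP_ACT A
    WHERE P.PROJNO = A.PROJNO);
```

Together the two views cover changes captured on either base table, which is why updates on both tables reach the target.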
17 This source view is a duplicate for this session.
18 The view definition cannot be supported.
19 The view has an asterisk (*) instead of a specific column name in the view definition.
20 The view contains the join of a CCD and a non-CCD table.
Chapter 5.
Subscription set
In this chapter we describe:
- Subscription sets and subscription set members
- Planning the grouping of subscription sets and members
- Creating a subscription set and members using the Replication Center as the administration tool
- The subscription set and member attributes
- iSeries subscription set and member commands
Subscription set
When you create a subscription set, the following are some of the attributes defined in the IBMSNAP_SUBS_SET table:
- A name for the subscription set
- The source and target server names
- The Apply qualifier
- When to start replication, how often to replicate, and whether to use interval timing, event timing, or both
- Data blocking, if you expect large volumes of changes
If the subscription set is for replication from or to a non-DB2 server, such as Informix, the source or target server name will be the name of the DB2 ESE or DB2 Connect EE database containing the Server definition for the non-DB2 source or target server. The IBMSNAP_SUBS_SET table has two additional attributes, which are filled in as appropriate for a subscription set replicating from or to a non-DB2 server:
- Federated Server name of a non-DB2 source server
- Federated Server name of a non-DB2 target server
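As a quick way to see what was defined, you can query the set table directly. A minimal sketch, assuming the default ASN schema and V8 column names (the exact column names are assumptions if your control-table layout differs):

```sql
-- List subscription sets with their servers and timing information
SELECT APPLY_QUAL, SET_NAME, SOURCE_SERVER, TARGET_SERVER,
       SLEEP_MINUTES, EVENT_NAME
  FROM ASN.IBMSNAP_SUBS_SET
 ORDER BY APPLY_QUAL, SET_NAME;
```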
- A name for the subscription set
- The Apply qualifier
- The source table or view and a target table or view
- Source and target schema
- The structure of the target table or view
- The rows that you want replicated (SQL predicates)
When replicating from a non-DB2 server, the source table name and schema will be those of the nickname for the source table. When replicating to a non-DB2 server, the target table name and schema will be those of the nickname for the target table.
Subscription columns
A subscription columns table contains the target table or view column definitions. When a subscription column is created, the following are some of the attributes defined in the IBMSNAP_SUBS_COLS table:
- A name for the subscription set
- The Apply qualifier
- The target column name
- The target column type
- An SQL column expression for data transformation
If replication is to a non-DB2 target server, the target column name is for the column of the nickname for the target table. If replication is from a non-DB2 source server, the SQL column expression references the columns of the nickname for the source table.
Subscription statement
A subscription statement table contains an SQL statement or procedure that is executed before or after the Apply program runs. This is an optional table for your subscriptions. When a subscription statement is created, the following are some of the attributes defined in the IBMSNAP_SUBS_STMTS table:
- A name for the subscription set
- The Apply qualifier
- An execute before or after indicator
- The SQL statements to execute, or an eight-byte name of an SQL procedure to be executed
Subscription event
A subscription event table is an optional subscription table for defining an event name and the time to start replicating. When a subscription event is created, the following are some of the attributes defined in the IBMSNAP_SUBS_EVENT table:
- Event name
- Event starting timestamp
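For example, when a subscription set uses event timing, replication is triggered by posting a row for the event. A minimal sketch, assuming the default ASN schema and the column names EVENT_NAME and EVENT_TIME:

```sql
-- Post the event END_OF_DAY; Apply starts the sets waiting on this event
INSERT INTO ASN.IBMSNAP_SUBS_EVENT (EVENT_NAME, EVENT_TIME)
VALUES ('END_OF_DAY', CURRENT TIMESTAMP);
```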
ESE/Connect, there needs to be a Server definition to the non-DB2 server. See Appendix C for requirements and instructions for configuring federated access to Informix. The database containing the Server definition to Informix could also be the Apply Control Server.
complete the cycle successfully unless all members do. The Apply program rolls back a failed subscription set to the last successful commit point, which could be within the current Apply cycle if you specified the commit_count keyword when you started the Apply program. See Figure 5-2 on page 181.
-408 A value is not compatible with the data type of its assignment target.
-180 The string representation of a date, time, or timestamp value does not conform to the syntax for the specified or implied data type.
might want to replicate a large number of subscription sets with one Apply qualifier. This could be the best option if you wait until after business hours before replicating. The Apply program processes the subscription sets sequentially. Therefore, your overall replication latency could increase. If you have specific requirements for certain subscription sets, you can combine these two options. For example, you could have one Apply program process most of your subscription sets and thus take advantage of using one Apply program to process related subscription sets together, and you can have another Apply program process a single subscription set and thus ensure minimum replication latency for that subscription set. And by using two instances of the Apply program, you increase the overall processing performance for your subscription sets.
Note: This approach can also be used to create a subscription set with members.
Add one subscription set member to one or more subscription sets. Note: If you are selecting more than one subscription set, they must be from the same Capture control server and Capture schema.
Select one or more subscription sets. Right-click to display a pop-up menu. Click Add Member to display the Add Member to Subscription Sets notebook page. See Figure 5-1 on page 179.
Check the box to select one or more subscription sets, then on the Member Information page click the Add button to add a subscription member. Click Retrieve all, or Retrieve using your search criteria, to list the registered source tables; select the registered sources and click OK. Select the target type, then click the Details button to display the Member Properties notebook page. See Figure 5-5 on page 189 and follow the steps to define the subscription member properties. You need to click the Details button for each subscription set. The OK button, if active, generates the SQL script to update the Apply control tables at the Apply control server specified in Figure 5-2 on page 181. See 2.13.1, Running Saved SQL files later, on page 87, and 5.7, SQL script description, on page 222.
Note: You can also open this notebook from the Launchpad by clicking option 4, Create a Subscription Set; see 2.18, Replication Center Launchpad, on page 98 for details.
Set information
The following describes the fields on the Set Information page.
If the Create Subscription Set notebook is displayed as a result of selecting the Create option on a Subscription Set folder in the tree view as indicated in 5.3.2, Create subscription set without members on page 177, or from the launchpad, verify the Apply control server is the one you want. You cannot modify this field. To change the server name, close the notebook and reopen it again from the correct server.
Note: The Launchpad may not always have the Apply control server context established when a user selects step 4. If that is the case, the Apply control server alias will not be filled in. Set Name
Type in the name for the subscription set; the name can be up to 18 characters long. This name uniquely identifies a group of source and target tables within an Apply qualifier that is processed by a separate Apply program. You can define more than one subscription set within an Apply qualifier.
Apply qualifier
Type in a name for a new Apply qualifier, or click the down arrow to select from a list of existing Apply qualifiers. The Apply qualifier is case sensitive; therefore, if you want lower or mixed case, you must delimit it with quotation marks, for example, "Apyqual1". Lower or mixed case characters that are not delimited are changed to uppercase. By using more than one Apply qualifier, you can run more than one instance of the Apply program from a single user ID. The Apply qualifier is used to identify records at the control server that define the workload of an instance of the Apply program, whereas the user ID is for authorization purposes only. For example, assume that you want to replicate data from two source databases to the target tables on your computer. The data in source table A is replicated using full-refresh copying to target table A, and the data in source table B is replicated using differential-refresh copying to target table B. You define two subscription sets (one for table A and one for table B), and you use separate Apply qualifiers to allow two instances of the Apply program to copy the data at different times. You could also define both subscription sets under one Apply qualifier.
If you opened the Create Subscription notebook from the Apply control server folder as indicated in 5.3.2, Create subscription set without members, on page 177, you must select the Capture control server by clicking the ... button. This opens the Select Server window for you to select the Capture control server. When creating a subscription set to replicate from a non-DB2 server, select the database alias that has the Server definition for the non-DB2 source server. If you opened the Create Subscription notebook from the Launchpad or from the selection of registered tables as indicated in 5.3.1, Create subscription sets with members, on page 176, verify the Capture control server is the one you want. You cannot modify this field. To change the server name, close the notebook and open it again from the correct server.
Note: Launching this function from the Launchpad does not always establish the Capture control server context. Capture schema
This field contains the schema of the Capture control tables containing the registered tables and CD tables for this subscription set. For non-DB2 sources, this is the schema of the nicknames for the Capture control tables. If you opened the Create Subscription notebook from the Apply control server folder as indicated in 5.3.2, Create subscription set without members, on page 177, you must select the Capture schema from the list box when you click the arrow button. If you opened the Create Subscription notebook from the Launchpad or from the selection of registered tables as indicated in 5.3.1, Create subscription sets with members, on page 176, verify the Capture schema is the one you want. You cannot modify this field. To change the schema name, close the notebook and reopen it from the correct schema.
Note: Launching this function from the Launchpad does not always establish the Capture schema context. Target server alias
This field contains the name of the target server where the target table resides. Click the ... button to open the Select Server window and select the target server. For non-DB2 targets, see the discussion on the previous page in Subscription Sets to non-DB2 Servers.
Note: If you decide to deactivate the subscription set, you can activate it later by right-clicking the subscription set and selecting Activate from the pop-up menu. See 8.2.2, Deactivating and activating subscriptions, on page 359. Data blocking factor
Click the up or down arrow to change the number of minutes' worth of captured data for Apply to process during a single cycle. See 5.5, Data blocking, on page 217.
Note: The default is 20; changing it to 0 disables the blocking factor function. Allow Apply to use transactional processing for set members
Click the box to change the way the Apply program replicates changes from the spill file. Table mode is the default processing mode when this box is not checked: all changes from the spill files are applied to the corresponding target tables one table at a time in the set, and then a DB2 commit commits all the changes to each of the target tables within the subscription set. If the box is checked, Apply processing changes to transactional mode: all the spill files are opened and processed at the same time, and changes are applied to the target tables in the same order as the source transactions. Apply issues DB2 commits at the interval specified in the Number of transactions applied to target table before Apply commits field; click the up and down arrows to change the numeric value. When you complete the Set Information in Figure 5-2 on page 181, click the Source-to-Target Mapping page to display the source-to-target mapping notebook. See Figure 5-3 on page 186. Follow the steps as instructed to define the subscription member and columns. You can optionally click the Schedule page to display the notebook to schedule your replication. See Figure 5-16 on page 203. You can optionally click the Statements page to display the notebook to define the SQL statements or call procedures, which are executed before or after the Apply program runs. See Figure 5-17 on page 205.
The OK button, if active, generates the SQL script to update the Apply control tables at the Apply control server specified in Figure 5-2 on page 181. It also updates the prune set and prune control tables at the Capture control server. If the target table does not exist, the script creates the target table, index, and table space where applicable. See 2.13.1, Running Saved SQL files later, on page 87, and 5.7, SQL script description, on page 222.
Note: If you selected the option to add subscription members later, as shown in 5.3.2, Create subscription set without members, on page 177, the OK button is active, which indicates you can create an empty subscription set.
Source-to-Target Mapping
The following describes the source-to-target information that you create or add to the subscription set member table from the Source-to-Target Mapping notebook view. See Figure 5-3.
Registered Source
These are the registered source tables described in Chapter 4, Replication sources, on page 133. Click the Registered Source field to activate the Add..., Remove, and Details... buttons. The Add... button is active if it is the first subscription set member to add to the subscription set. For non-DB2 sources, there will be two fields:
Registered nickname : Schema and nickname of the registered nickname Remote source: Remote schema and remote table name at the non-DB2 server
Target Schema
The target schema for the target table is defined here. The default name is the source table schema; it can also be defined in the target profiles. Type over this name if you want to change it.
Target Name
The name of the target table is defined here. The target table can be either a new or an existing table. The default target name is defined in the target object profile, where you can establish a naming convention for target tables. You can change the name by typing over it.
Remote Target Schema: The name of the schema for the target table in the non-DB2 target server. The default value comes from the Target Table Profile for the target server platform, if that was filled in before this window was opened. If the target server is Informix, it is recommended that the remote target schema be in lowercase; if so, the value should be enclosed in double quotes in the Remote Target Schema field. Remote Target Name: The name of the target table in the non-DB2 target server. The default value comes from the Target Table Profile for the target server platform, if that was filled in before this window was opened. Target Type
Click the arrow to select a target type from the drop-down menu. See 5.4, Target types descriptions, on page 208 for more details on target types. The Member Properties column selection notebook page is displayed when you click the Details... button to continue defining the subscription members. See Figure 5-5 on page 189. The OK button, if active, generates the SQL script to update the Apply control tables at the Apply control server specified in Figure 5-2 on page 181. See 2.13.1, Running Saved SQL files later, on page 87, and 5.7, SQL script description, on page 222.
Note:
If the target type is Consistent Change Data (CCD), the CCD properties notebook page is displayed when you click the Details... button. See Figure 5-4 on page 188. If you select the Replica target type, there are additional pages shown for the replica definition. See Figure 5-15 on page 202. Read the following pages from Figure 5-5 on page 189 for a description of the other notebook pages. This notebook page is displayed if your target type selection from Figure 5-3 is CCD.
Click the appropriate radio button or box to select which type of CCD to create. See 5.4.4, CCD (consistent change data), on page 209 for more details, or click the Help button and then the define the properties of the CCD tables link for a description of the radio button and box selections. The column selection from the registered source tables or views to the target tables is performed in the following notebook page, shown in Figure 5-5.
By default, all the columns are selected for the target table, which is shown on the right side of the notebook page. If you do not want to replicate a particular column, click the column to highlight it, then click the < button to move the column to the registered-column side. The << button moves all the columns.
When the column mapping notebook page is selected the following notebook is displayed. See Figure 5-6.
Column mapping from the selected target columns is performed on the notebook page shown in Figure 5-6. The mapping functions that are available depend on the existence of the target table. For a detailed description of these mapping functions, click the Help button. If the target table does not exist, the following functions are available: the Move Up and Move Down buttons. Select a row in the target column to activate these buttons, which enable you to change the position of the columns in the target table.
Change target table column properties: click the field to enter an appropriate and compatible value. Click Add Calculated Column to display the SQL Expression Builder window. See Figure 5-7 on page 192. If the target table does exist, the following functions are available: The source and target columns are automatically mapped, indicated by the arrows between the source and target columns. However, unmatched columns are not mapped, which is indicated by no arrows being displayed. There is an option to remove a mapping by right-clicking the arrow between the columns and clicking the Remove pop-up button, which removes the arrow. If you want to remap or change the mapping, click the arrow in the blue box, then drag the mouse to the circle in the red box to create the mapping arrow. For non-DB2 target tables, if a nickname already exists, we found that the Column Mapping window shows, with arrows, Replication Center's guess of the desired column mapping based on the names and attributes of the source table columns and the attributes of the nickname columns. If one or more of the source columns do not have arrows to any target nickname columns, this suggests strongly that the attributes of the target columns, or the data types of the nickname columns, are not compatible with the source columns. Even if all the source columns are shown as mapped to nickname columns, you may still get warning messages that the nickname target columns do not have all the attributes of the source columns when the Create Subscription/Add Member window generates the SQL. A typical warning message looks like:
ASN1827W The column "DEPTNO" of the target member "IFMX_TGT.TGDEPARTMENT" does not preserve a DB2 column attribute of the corresponding column "DEPTNO" of the source member "DB2DRS4.DEPARTMENT". Reason Code "4"
Click Add Calculated Column to display the SQL Expression Builder window. See Figure 5-7 on page 192.
This window assists you in building an SQL expression to calculate values in a column in the target table based on columns that you have chosen to replicate from the replication source. Click the Help button for detailed instructions on how to use this window. If the target type is either Base or Change Aggregate, the GROUP BY clause for aggregation field is displayed at the bottom of the Column Mapping notebook page. See Figure 5-8.
Enter the column names for the GROUP BY clause used when aggregating data in Base or Change Aggregate target types; see 5.4.3, Aggregate tables, on page 209, or Base aggregate, on page 209. When the Target-Table Index notebook page is selected, the following notebook is displayed. See Figure 5-9.
Note: The Target-Table Index notebook page is not shown if the target type is Base or Change Aggregate.
The Apply program requires a unique index or primary key to be defined on a condensed target table for change-capture replication. The following target types require a unique target index:
- User copy
- Point-in-time
- Replica
- Condensed CCD
Creating or specifying an existing target table index is performed on the notebook page shown in Figure 5-9. The default index schema and index name come from the target object profile. See 2.11, Managing your DB2 Replication Center profile, on page 76. These fields are disabled if the target table exists. If the target table does not exist, these defaults can be used to create the target index.
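For a new user-copy target, the generated script typically includes a unique index along these lines (the schema, table, index, and column names here are invented for the sketch):

```sql
-- Unique index on the target key so Apply can locate rows to update or delete
CREATE UNIQUE INDEX TGTSCH.IXTGDEPARTMENT
    ON TGTSCH.TGDEPARTMENT (DEPTNO ASC);
```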
The buttons that are active depend on the existence of source table indexes and/or primary keys.
Note: If the target index or primary key already exists, see Figure 5-10 on page 196.
- Let the Replication Center suggest an index radio button: uses the index or primary key defined on the registered source table. It is active only if a source table index or primary key exists.
- Create your own index radio button: allows you to select the columns from the Available columns in the target name window to define the target index. This button is still active for existing indexes.
- Move Up and Move Down: these push buttons change the position of the columns in the index that are selected in the Columns used in the index window. They are active when you select the Create your own index radio button and have more than one column selected for the index.
- Ascending and Descending: these radio buttons change the up or down direction of the yellow arrow next to the column selected for the index. These arrows indicate the sort order of the index.
- Show Unique index at Source push button: shows the list of columns defined in a unique source table index. It shows a list only for existing unique source table indexes.
- Let the Apply program use before-image values to update target-key columns check box: click this box if the columns defined in the target index are different from the columns defined in the source table. The before-image from the CD table is used to update the columns used in the target index; see the Registration chapter. The Apply program uses the before-image to delete the old key row and insert a new row with the new key value.
- Use selected columns to create primary key check box: click this box to create a unique primary key instead of a unique index. It is inactive if a primary key exists on an existing target table.
If you want to select specific rows from the registered source table, click the Row Filter page to add this member property. See Figure 5-13 on page 199. Click the Target-Table Table Space notebook page to create a table space. See Figure 5-14 on page 201.
Note: The Target-Table Table Space page is not displayed if the target table already exists, if the target table resides on an iSeries server, or if the target table is non-DB2.
Clicking OK when it becomes active takes you back to the Source-to-Target Mapping page of the Create Subscription Set notebook. See Figure 5-3 on page 186. For additional information on the target index functions, click the Help button and then the define a primary key or unique index on the target table link in the browser. The following notebook page is displayed if the target index or primary key already exists.
Use an existing index radio button: click this button and select which existing index you want to use, if there is more than one. Click the > button to move it to the Selected target indexes window.
Create your own index radio button: click this button to create another unique target index. See Figure 5-11 on page 197.
The other functions on this notebook page are described in Figure 5-9 on page 194. For an existing target index, you have the option to create another target index: select available columns to create it (see Figure 5-9 on page 194). Click Show index at Target to display a window showing the existing target indexes. See Figure 5-12. For non-DB2 target tables, index information for the nickname will be shown. When the nickname for the existing target table was created, the federated server
retrieved information about the primary key or indexes on the target table and placed a record in the federated server's catalog for each primary key and index. There is no real index for the nickname in the DB2 database, only a record in its catalog about the primary key or index at the non-DB2 target server. The OK button becomes active after you complete the target index options, indicating the member properties definition is complete; you can either go back to the Subscription Set notebook page (see Figure 5-2 on page 181), or continue defining your subscription member if you have any predicate requirements by clicking the Row Filter page. See Figure 5-13 on page 199.
The Close button takes you to Figure 5-14 on page 201. This notebook page is displayed when you select the Row Filter page, which is available for defining your predicates after you process the target index page.
By default, the Apply program replicates all the rows from the source tables. However, if you only want to replicate specific rows to your target table, this notebook page provides that function. Enter a predicate in this window; it updates the PREDICATE column in the subscription member Apply control table (IBMSNAP_SUBS_MEMBR). The Apply program uses this predicate to select only the qualifying rows from the source table during full refresh or change updates. For example, the predicate WORKDEPT='D11' selects only the rows from the source table that contain D11 in column WORKDEPT. Notice that it is the condition of a WHERE clause, but you do not enter the WHERE keyword. You can type in a predicate, click Import from File to bring in an existing predicate SQL script from a directory on your workstation, or click SQL Assist to help you build a predicate. See Figure 5-7 on page 192.
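During a full refresh, the effect is as if Apply selected from the source with the predicate appended as a WHERE clause; a sketch using a hypothetical source table and column list:

```sql
-- Illustrative only: the row filter WORKDEPT='D11' applied during full refresh
SELECT EMPNO, FIRSTNME, LASTNAME, WORKDEPT
  FROM DB2ADMIN.EMPLOYEE
 WHERE WORKDEPT = 'D11';
```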
Clicking OK when it becomes active, will take you back to the Source-to-Target Mapping page of the Create Subscription Set note book. See Figure 5-3 on page 186.
Note: Selecting records against the CD tables is not supported by the Replication Center, because the predicates entered on this page apply only to the source tables or views.
For example, to prevent deletes at the target table, you can use the predicate IBMSNAP_OPERATION <> 'D'. You need to manually update a column called UOW_CD_PREDICATES in the subscription member control table (IBMSNAP_SUBS_MEMBR). The Apply program uses the predicate defined in this column against the CD or UOW table. The following SQL statement is an example of updating this field:
UPDATE ASN.IBMSNAP_SUBS_MEMBR SET UOW_CD_PREDICATES = 'IBMSNAP_OPERATION <> ''D''' WHERE APPLY_QUAL = 'apply_qual' AND SET_NAME = 'set_name' AND SOURCE_OWNER = 'ALL' AND SOURCE_TABLE = 'source table'
This notebook page is displayed when you select the Create target-table table space page. If you are creating a subscription set on an iSeries, this notebook page is not displayed.
For non-DB2 target tables, the options for specifying the location of a target table created by the Replication Center depend on the target server platform. For Informix target servers, the only option on this panel is to specify the dbspace for the target tables. After the Replication Center's Create Subscription or Add Member window generates SQL, the SQL can be edited to add more details about the storage location of the target table before it is run.
Attention: If your source table is in an SMS table space, you may want to pay particular attention to sizing an appropriately large DMS table space, or create an SMS table space outside of the Replication Center and select Use an existing table space.
Clicking OK, when it becomes active, takes you back to the Source-to-Target Mapping page of the Create Subscription Set notebook (see Figure 5-3 on page 186). If your target table type is Replica, you can click the Replica tab to display the Replica definitions notebook page; see Figure 5-15 on page 202 to continue creating the subscription member for replica target types. This notebook page is displayed if the replica target type is selected in Figure 5-3 on page 186.
After defining the replica target table, you are prompted to define the registration definition for the replica target table. This creates the CD table and updates the Capture control tables used by the Capture program running at the target server. The check boxes in the Replica definition section of this page, as well as the CD Table and CD table index pages, are described in detail in 4.2.5, CD Table on page 157. The OK button at this point takes you back to the Subscription Set properties notebook page. See Figure 5-2 on page 181.
Schedule
After creating or adding the member subscriptions from the Source-to-Target Mapping page, click the Schedule tab of the subscription set notebook (see Figure 5-2 on page 181); the page shown in Figure 5-16 is displayed.
Scheduling your replication is specified on this notebook page. There are two methods to schedule when to replicate, time-based or event-based; see 5.6, Scheduling replication on page 220.
The time-based method has two options: Relative timing schedules the Apply program to start and stop on an interval basis, from 1 minute to 52 weeks, starting from the Start date and Start time specified on this screen. Enter a numeric value next to Minutes, Hours, Days, or Weeks for the interval you want. For example, the page in Figure 5-16 indicates that replication starts on 8/21/02 at 11:49:15 and replicates every 20 minutes thereafter. Continuously: clicking this radio button schedules the Apply program to run continuously until you stop it manually. For a performance tip when using this option, see 10.4.10, Apply operations and Subscription set parameters on page 452.
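Relative timing ultimately lands in the SLEEP_MINUTES and REFRESH_TYPE columns of the subscription set table, so an existing interval can also be changed with plain SQL. A sketch, using the APY1/SET1 placeholder names from the script examples later in this chapter:

```sql
-- Sketch: change an existing subscription set to a 5-minute
-- relative interval (REFRESH_TYPE 'R' means relative timing).
UPDATE ASN.IBMSNAP_SUBS_SET
   SET SLEEP_MINUTES = 5
 WHERE APPLY_QUAL    = 'APY1'
   AND SET_NAME      = 'SET1'
   AND WHOS_ON_FIRST = 'S';
COMMIT;
```

The change takes effect the next time the Apply program reads the subscription set definition.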
Event-based: click this box to schedule the Apply program to start and stop based on the start and stop times defined in the Apply control table IBMSNAP_SUBS_EVENT. This table is updated manually or automatically by an application program. The event name specified on this notebook page corresponds to an event name with a start and stop time in the events control table. See 5.6, Scheduling replication on page 220.
The OK button, when active, generates the SQL script to update the Apply control tables at the Apply control server specified in Figure 5-2 on page 181. See 2.13.1, Running Saved SQL files later on page 87, and 5.7, SQL script description on page 222.
Statements
After creating or adding the member subscriptions from the Source-to-Target Mapping page, click the Statements tab of the subscription set notebook (see Figure 5-2 on page 181); the page shown in Figure 5-17 is displayed.
You can create SQL statements or procedure calls that are processed each time the Apply program processes a subscription set. The SQL statements or procedures defined on this notebook page are stored in the IBMSNAP_SUBS_STMTS Apply control table. For example, you could create an SQL statement to automate maintenance of the Apply control tables. Clicking Add displays the window for defining your SQL statement or procedure call; see Figure 5-18. The OK button, when active, generates the SQL script to update the Apply control tables at the Apply control server specified in Figure 5-2 on page 181. See 2.13.1, Running Saved SQL files later on page 87, and 5.7, SQL script description on page 222.
This is the window where you define the actual SQL statements or procedure calls. First, click the radio button to specify on which server to run the SQL or procedure and whether to run it before or after the Apply program run. If you select the SQL statement radio button, you can either enter the SQL statement or press the SQL Assist button for help creating it; see Figure 5-19 on page 207. If you typed an SQL statement, click Prepare Statement: the Replication Center checks the syntax of the SQL statement and checks that the objects to which it refers do in fact exist. If such an object does not exist, the Replication Center returns an error. If you know that an object referred to by the statement does not exist at the time you create the subscription set, but will exist when the statement is run, you can ignore the error. If you select the Procedure call radio button, you must enter CALL before the stored procedure name.
The OK button closes this window and takes you back to Figure 5-17 on page 205. The next window is displayed when SQL Assist is pressed from the Add SQL statement or Procedure call window shown in Figure 5-17 on page 205.
This screen has many functions to assist you in creating an SQL statement; press the Help key for detailed instructions. The OK button takes you back to the Add SQL statement and procedure call window. See Figure 5-17 on page 205.
5.4.2 Point-in-time
A point-in-time target is similar to a user copy, with a timestamp column added to indicate when the Apply program committed the row at the target. Select this target type if you want to keep track of when changes were applied to the target table.
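As a sketch, a point-in-time target for the EMPLOYEE sample table is the user-copy layout plus the IBMSNAP_LOGMARKER timestamp column; the table name and the column subset here are illustrative, not generated by the Replication Center:

```sql
-- Sketch: point-in-time target = user-copy columns plus a timestamp
-- recording when the Apply program committed the row at the target.
CREATE TABLE DB2DRS2.PTEMPLOYEE (
  EMPNO             CHARACTER(6) NOT NULL,
  FIRSTNME          VARCHAR(12)  NOT NULL,
  LASTNAME          VARCHAR(15)  NOT NULL,
  WORKDEPT          CHARACTER(3),
  IBMSNAP_LOGMARKER TIMESTAMP    NOT NULL
);
```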
Base aggregate
This target table type summarizes the entire contents of the source table during each replication cycle. For example, use this target type to track year-to-date sums or average sales by salesman or region.
Change aggregate
This target table type summarizes the changes between replication cycles by reading the contents of the CD or CCD table, not the source table. For example, use this target type to track monthly sales totals by salesman, region, or customer.
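To illustrate the difference between the two aggregate types, assume a SALES source table with REGION and AMOUNT columns (illustrative names, not from this chapter). The queries the Apply program effectively runs have this shape:

```sql
-- Sketch: a base aggregate summarizes the whole source table
-- on every replication cycle.
SELECT REGION, SUM(AMOUNT) AS TOTAL_AMOUNT
FROM DB2DRS2.SALES
GROUP BY REGION;

-- Sketch: a change aggregate computes the same summary, but only
-- over the rows captured in the CD (or CCD) table since the
-- previous cycle, e.g. FROM DB2DRS2.CDSALES instead of the source.
```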
User-defined columns can be derived from SQL expressions; the source data type can be converted to a different target data type by using computed columns with SQL scalar functions.
IBMSNAP_COMMITSEQ: The log or journal record sequence number of the captured commit statement. This value groups inserts, updates, and deletes by the original transactions for the source table.
IBMSNAP_INTENTSEQ: The log or journal record sequence number that uniquely identifies a change.
Column name: Description
IBMSNAP_OPERATION: A flag to indicate the type of operation: I for insert, U for update, and D for delete.
IBMSNAP_LOGMARKER: The commit time at the Capture control server.
IBMSNAP_APPLY_QUAL: Optional. Uniquely identifies which Apply program will process this CCD table.
IBMSNAP_REJ_CODE: Optional. This value is set only during update-anywhere replication, if conflict detection is specified as standard or advanced when you define your replication source. It is not valid for non-DB2 relational targets because they cannot participate in update-anywhere configurations. The values are: 0 - a transaction with no known conflict; 1 - a transaction that contains a conflict where the same row in the source and replica tables has a change that was not replicated (when a conflict occurs, the transaction is reversed at the replica table); 2 - a cascade rejection of a transaction dependent on a prior transaction having at least one same-row conflict (the transaction is reversed at the replica table); 3 - a transaction that contains at least one referential-integrity constraint violation (because this transaction violates the referential constraints defined on the source table, the Apply program marks this subscription set as failed; updates cannot be copied until the referential integrity definitions are corrected); 4 - a cascade rejection of a transaction dependent on a prior transaction having at least one constraint conflict.
IBMSNAP_AUTHID: Optional. The authorization ID associated with the transaction; useful for database auditing. AUTHID length is 18 characters; a longer value is truncated. For DB2 Universal Database for z/OS, this column is the primary authorization ID. For DB2 Universal Database for iSeries, this column holds the name of the user profile ID under which the application that caused the transaction ran, a 10-character ID padded with blanks. This column is not automatically copied to other tables; you must select it and copy it as a user data column. It can be selected as a user data column for a noncomplete CCD target table.
IBMSNAP_AUTHTKN: Optional. The authorization token associated with the transaction; useful for database auditing. For DB2 Universal Database for z/OS, this column is the correlation ID. For DB2 Universal Database for iSeries, this column is the job name of the job that caused the transaction. This column is not automatically copied to other tables; you must select it and copy it as a user data column. It can be selected as a user data column for a noncomplete CCD target table.
IBMSNAP_UOWID: Optional. The unit-of-work identifier from the log or journal record header for this unit of work.
The CCD table types, by location (local or remote), complete (yes or no), and condensed (yes or no):
Local, complete, condensed: CCD table located in the source database, containing the same data as the replication source table.
Local, complete, noncondensed: CCD table located in the source database, containing the original data from the replication source table and a history of subsequent changes.
Local, noncomplete, condensed: CCD table located in the source database, containing only the latest change data.
Local, noncomplete, noncondensed: CCD table located in the source database, containing all the change data.
Remote, complete, condensed: CCD table residing in a database accessed by the Apply program, containing the same data as the source table.
Remote, complete, noncondensed: CCD table residing in a database accessed by the Apply program, containing the original data from the replication source table and a history of subsequent changes.
Remote, noncomplete, condensed: CCD table residing in a database accessed by the Apply program, containing only the latest changes.
Remote, noncomplete, noncondensed: CCD table residing in a database accessed by the Apply program, containing all the change data.
[Figure: Multi-tier CCD configuration. Capture runs at the Tier 1 source database with its CD and UOW tables; an Apply program (with its control database) populates a CCD target at Tier 2; further Apply programs, each with its own control database, replicate from the CCD to the Tier 3 and Tier 4 target databases.]
Do not use this type of CCD table as a replication staging table because of the space consumed by saving each and every change. This table contains a complete set of rows initially, and it is appended with each and every row change. No information is overwritten, and no information is lost. Use this type of CCD table for auditing applications that require a complete set of rows.
If you want to use column subsetting in an internal CCD table, review all previously-defined target tables to make sure that the internal CCD table definition includes all appropriate columns from the source tables. If you define the subscription set for the internal CCD table before you define any of the other subscription sets from this source, the other subscription sets are restricted to the columns that are in the internal CCD table.
5.4.5 Replica
Update-anywhere replication allows you to replicate changes from the replica (a read/write target type) back to the source table, while changes to the source table are replicated to the target server to update the replica. The master table is at the source server and the replica is at the target server. In an update-anywhere configuration, Capture runs at both the source and target servers, while Apply is configured to run only at the target server. Capture at the target server records changes to the replica in a CD table at the target server; the Apply program then pushes those changes to the source server and applies them to the source tables. Changes at the source server are captured in its CD table, and the Apply program pulls those changes as usual and applies them to the replica target tables.
Restriction: Replica target tables cannot be defined at non-DB2 target servers. A replica requires Capture at the target server, which is not possible with non-DB2 servers.
An update-anywhere configuration can also have multiple replica target tables against the same source table. A change at any of the replica target tables replicates back to the source, and that change eventually replicates to the other replica target tables in the configuration. This works because Capture runs concurrently on all servers. See Figure 5-21.
[Figure 5-21 Update anywhere configuration with multi replicas. Capture and Apply run at the source server (user table plus CD table) and at each replica site (replica table plus CD table), so a change at any replica flows back to the source and on to the other replicas.]
One of the issues to consider in update-anywhere replication is conflict detection. If two users update the same row on both systems at the same time, a conflict arises when the row is replicated to both servers. The best way to prevent this problem is to design the application to avoid it, if possible. However, when registering the source tables you need to decide exactly what should happen when a conflict is detected. There are three levels of conflict detection to choose from: None, Standard, and Enhanced. See Capture changes from replica target table on page 147 for additional information.
A large backlog of changes can cause problems:
- Network overload when transmitting the large backlog of changes from the server
- Spill-file overflow from memory, which causes additional overhead for the Apply program
- Contention at the target server, because updating the target tables requires locking many rows
- Logging for the batch updates that exceeds the target's allocated log space
Data blocking allows you to control the change-data backlog by specifying how many minutes' worth of change data the Apply program can replicate during a subscription cycle. The number of minutes that you specify determines the size of the data block; it is stored in the MAX_SYNCH_MINUTES column of the subscription set table. To enter this number, see the data blocking factor in Figure 5-2 on page 181. When the backlog of change data is greater than the size of the data block, the Apply program converts a single subscription cycle into many mini apply cycles, reducing the backlog to manageable pieces. It also retries any unsuccessful mini-cycles and reduces the size of the data block to match available system resources. See Figure 5-22 on page 219.
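Because the data-blocking factor is just the MAX_SYNCH_MINUTES value in the subscription set table, it can also be set with SQL; a sketch using the APY1/SET1 placeholder names:

```sql
-- Sketch: limit each Apply cycle to about 10 minutes' worth of
-- captured changes for this subscription set.
UPDATE ASN.IBMSNAP_SUBS_SET
   SET MAX_SYNCH_MINUTES = 10
 WHERE APPLY_QUAL = 'APY1'
   AND SET_NAME   = 'SET1';
COMMIT;
```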
[Figure 5-22 Data blocking: less stress on the network and less stress on the target journal on the iSeries. Use MAX_SYNCH_MINUTES to break up the backlog into mini-subscriptions, using multiple real subscription cycles to perform one scheduled cycle.]
Another benefit of using data blocking: if replication fails during a mini-cycle, the Apply program reruns the subscription set from the last successful mini-cycle instead of rerunning the entire backlog of changes. By default, the Apply program uses no data blocking; it copies all available committed data that has been captured. If you enter a data-blocking value, the number of minutes should be small enough that all transactions for the subscription set that occur during the interval can be copied without causing the spill files or log to overflow. The restrictions are:
- You cannot split a unit of work
- A previous mini-cycle cannot be rolled back
- Data blocking is supported only in change replication; it cannot be used for a full refresh
EVENT_NAME: The unique identifier of an event. This identifier is used to trigger replication for a subscription set.
EVENT_TIME: An Apply control server timestamp of a current or future posting time. User applications that signal replication events provide the values in this column.
END_SYNCHPOINT: Optional. A log sequence number that tells the Apply program to apply only data that has been captured up to this point. You can find the exact END_SYNCHPOINT that you want to use by referring to the signal table and finding the precise log sequence number associated with a timestamp. Any transactions that are committed beyond this point in the log are not replicated until a later event is posted. If you supply values for both END_SYNCHPOINT and END_OF_PERIOD, the Apply program uses the END_SYNCHPOINT value, because it then does not need to perform any calculations on the control tables to find the maximum log sequence number to replicate.
END_OF_PERIOD: Optional. A timestamp used by the Apply program to apply only data that has been logged up to this point. Any transactions that are committed beyond this point in the log are not replicated until a later event is posted.
Note that this table is updated by a user or user application to post the events, not by the Replication Center. For example, suppose a vital batch job step must complete before the data is replicated. The step runs at a different time each night depending on the workload, so when it finishes, the next step executes an SQL script to insert a row into the IBMSNAP_SUBS_EVENT table, as follows: CONNECT TO database USER username USING password; INSERT INTO ASN.IBMSNAP_SUBS_EVENT (EVENT_NAME, EVENT_TIME) VALUES ('NIGHTRUN', CURRENT TIMESTAMP + 5 MINUTES); Use the CONNECT statement if the Apply control tables exist on another server. This SQL posts an event called NIGHTRUN, which starts the Apply program replicating the source data at the current system time plus 5 minutes. The added 5 minutes ensures that a future event is posted in the events table: when the Apply program monitors this table, it looks only for future events to start replication and ignores past events.
CONNECT TO captureserver USER XXX USING XXX;
INSERT INTO ASN.IBMSNAP_PRUNE_SET (APPLY_QUAL, SET_NAME, TARGET_SERVER, SYNCHTIME, SYNCHPOINT)
  VALUES ('APY1', 'SET1', 'SAMPLE', null, X'00000000000000000000');
INSERT INTO ASN.IBMSNAP_PRUNCNTL (APPLY_QUAL, SET_NAME, CNTL_SERVER, CNTL_ALIAS, SOURCE_OWNER, SOURCE_TABLE, SOURCE_VIEW_QUAL, TARGET_OWNER, TARGET_TABLE, TARGET_SERVER, TARGET_STRUCTURE, MAP_ID, PHYS_CHANGE_OWNER, PHYS_CHANGE_TABLE)
  SELECT 'APY1', 'SET1', 'SAMPLE', 'SAMPLE', 'DB2DRS2', 'EMPLOYEE', 0, 'DB2DRS2', 'TGEMPLOYEE', 'SAMPLE', 8, coalesce(char(max(INT(MAP_ID)+1)), '0'), 'DB2DRS2', 'CDEMPLOYEE'
  FROM ASN.IBMSNAP_PRUNCNTL;
COMMIT;
In this part of the SQL script, the subscription set definition is updated at the target server.
CONNECT TO applyserver USER XXX USING XXX; INSERT INTO ASN.IBMSNAP_SUBS_SET (APPLY_QUAL, SET_NAME,WHOS_ON_FIRST,SET_TYPE, ACTIVATE, SOURCE_SERVER, SOURCE_ALIAS, TARGET_SERVER, TARGET_ALIAS, STATUS, REFRESH_TYPE, SLEEP_MINUTES,EVENT_NAME, MAX_SYNCH_MINUTES, AUX_STMTS, ARCH_LEVEL, LASTRUN, LASTSUCCESS, CAPTURE_SCHEMA, TGT_CAPTURE_SCHEMA, OPTION_FLAGS, FEDERATED_SRC_SRVR, FEDERATED_TGT_SRVR, COMMIT_COUNT, JRN_LIB, JRN_NAME) VALUES ('APY1','SET1', 'S','R',1, 'SAMPLE', 'SAMPLE','SAMPLE', 'SAMPLE', 0,'R', 5, null, 0, 0, '0801','2002-08-27-10.43.31.0', null, 'ASN', null, 'NNNN',null, null,null, null, null );
In this part of the SQL script the subscription member definition is updated at the target server.
INSERT INTO ASN.IBMSNAP_SUBS_MEMBR (APPLY_QUAL, SET_NAME, WHOS_ON_FIRST, SOURCE_OWNER,SOURCE_TABLE, SOURCE_VIEW_QUAL, TARGET_OWNER, TARGET_TABLE, TARGET_STRUCTURE,TARGET_CONDENSED, TARGET_COMPLETE, PREDICATES, UOW_CD_PREDICATES, JOIN_UOW_CD, MEMBER_STATE,TARGET_KEY_CHG ) VALUES ( 'APY1', 'SET1','S', 'DB2DRS2','EMPLOYEE', 0,'DB2DRS2','TGEMPLOYEE', 8,'Y','Y','WORKDEPT = ''D11''', null, null, 'N','N' );
In this part of the SQL script the subscription column definition is created at the target server.
INSERT INTO ASN.IBMSNAP_SUBS_COLS (APPLY_QUAL, SET_NAME, WHOS_ON_FIRST, TARGET_OWNER, TARGET_TABLE, TARGET_NAME, COL_TYPE, IS_KEY,COLNO,EXPRESSION) VALUES ('APY1','SET1', 'S','DB2DRS2' ,'TGEMPLOYEE','EMPNO', 'A','Y',1, 'EMPNO'); INSERT INTO ASN.IBMSNAP_SUBS_COLS (APPLY_QUAL, SET_NAME, WHOS_ON_FIRST, TARGET_OWNER, TARGET_TABLE, TARGET_NAME, COL_TYPE, IS_KEY,COLNO,EXPRESSION) VALUES ( 'APY1', 'SET1', 'S','DB2DRS2', 'TGEMPLOYEE','FIRSTNME', 'A','N',2,'FIRSTNME'); INSERT INTO ASN.IBMSNAP_SUBS_COLS (APPLY_QUAL, SET_NAME, WHOS_ON_FIRST, TARGET_OWNER, TARGET_TABLE, TARGET_NAME, COL_TYPE, IS_KEY,COLNO,EXPRESSION) VALUES ( 'APY1','SET1', 'S', 'DB2DRS2','TGEMPLOYEE','MIDINIT', 'A','N',3,'MIDINIT'); There is an INSERT statement for each column within the target table
In this part of the SQL script the subscription statement definition is updated at the target server.
INSERT INTO ASN.IBMSNAP_SUBS_STMTS (APPLY_QUAL, SET_NAME, WHOS_ON_FIRST, BEFORE_OR_AFTER, STMT_NUMBER, EI_OR_CALL, SQL_STMT, ACCEPT_SQLSTATES)
  VALUES ('APY1', 'SET1', 'S', 'A', 5, 'E',
    'DELETE FROM ASN.IBMSNAP_APPLYTRAIL IBMSNAP_APPLYTRAIL WHERE IBMSNAP_APPLYTRAIL.LASTRUN < CURRENT TIMESTAMP - 7 DAYS', NULL);
UPDATE ASN.IBMSNAP_SUBS_SET
  SET AUX_STMTS = AUX_STMTS + 1
  WHERE APPLY_QUAL = 'APY1' AND SET_NAME = 'SET1' AND WHOS_ON_FIRST = 'S';
COMMIT;
In this part of the SQL script the target table and index is created at the target server.
CONNECT TO applyserver USER XXX USING XXX; CREATE TABLESPACE TSTGEMPLOYEE MANAGED BY DATABASE USING (FILE 'TGEMPLOYEE' 2048K); CREATE TABLE DB2DRS2.TGEMPLOYEE( EMPNO CHARACTER(6) NOT NULL,FIRSTNME VARCHAR(12) NOT NULL ,MIDINIT CHARACTER(1) NOT NULL ,LASTNAME VARCHAR(15) NOT NULL , WORKDEPT CHARACTER(3),PHONENO CHARACTER(4),HIREDATE DATE ,JOB CHARACTER(8), EDLEVEL SMALLINT NOT NULL,SEX CHARACTER(1), BIRTHDATE DATE ,SALARY DECIMAL(9,2) ,BONUS DECIMAL(9,2), COMM DECIMAL(9,2)) IN TSTGEMPLOYEE; CREATE UNIQUE INDEX DB2DRS2.IXTGEMPLOYEE ON DB2DRS2.TGEMPLOYEE (EMPNO ASC); COMMIT ;
Figure 5-28 Subscription SQL Script - Create target table and index
For target tables on non-DB2 servers, the SQL generated by the Replication Center includes:
- CONNECT to the DB2 ESE/Connect database that holds the server definition for the non-DB2 target server
- SET PASSTHRU to the non-DB2 server
- CREATE TABLE for the target table
- CREATE UNIQUE INDEX for the target table
- COMMIT for the create table and create unique index at the non-DB2 server
- SET PASSTHRU RESET
- CREATE NICKNAME for the target table
- ALTER NICKNAME statements, if needed, for any nickname columns, to override the default type mappings in the federated server wrapper and make the attributes of these nickname columns compatible with the attributes of the source table columns
- COMMIT for the create nickname and alter nickname
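Put together, the generated script for a non-DB2 target has roughly this shape; the federated database, server, dbspace, and table names below are placeholders for an assumed Informix target, not values produced by the Replication Center:

```sql
-- Sketch of the script shape for an Informix target (all names
-- are placeholders).
CONNECT TO FEDDB USER XXX USING XXX;
SET PASSTHRU INFMXSRV;
CREATE TABLE tgemployee (empno CHAR(6) NOT NULL) IN dbspace1;
CREATE UNIQUE INDEX ixtgemployee ON tgemployee (empno);
COMMIT;
SET PASSTHRU RESET;
CREATE NICKNAME DB2DRS2.TGEMPLOYEE FOR INFMXSRV."informix"."tgemployee";
-- ALTER NICKNAME statements would follow here if any column type
-- mappings need to be overridden.
COMMIT;
```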
Add DPR Subscription (ADDDPRSUB) Type choices, press Enter. Apply qualifier . . . . . . . . APYQUAL Set name . . . . . . . . . . . . SETNAME Target table . . . . . . . . . . TGTTBL
*NONE
*NONE
Library . . . . . . . . . . . Control server . . . . . . . . . Source server . . . . . . . . . Capture control library . . . . Target capture control library Target type . . . . . . . . . . Refresh timing . . . . . . . . . More...
After you enter required values for the Apply qualifier and Set name, press the F10 key for additional parameters, then the Page Down key to display the following screens of parameters.
Add DPR Subscription (ADDDPRSUB) Type choices, press Enter. Interval between iterations: INTERVAL Number . . . . . . . . . . . . Interval . . . . . . . . . . . + for more values Key columns . . . . . . . . . . KEYCOL + for more values
1 *DAY *SRCTBL
Additional Parameters Activate subscription . . . . . ACTIVATE Create target table . . . . . . CRTTGTTBL Check target table format . . . CHKFMT More.. *YES *YES *YES
Add DPR Subscription (ADDDPRSUB) Type choices, press Enter. Source columns . . . . . . . . . COLUMN + for more values Unique key . . . . . . . . . . . UNIQUE Target columns: TGTCOL Column . . . . . . . . . . . . New column . . . . . . . . . . + for more values More... *YES *COLUMN *ALL
Add DPR Subscription (ADDDPRSUB) Type choices, press Enter. Calculated columns: CALCCOL Column . . . . . . . . . . . . Expression . . . . . . . . . .
*NONE
*ALL
Add DPR Subscription (ADDDPRSUB) Type choices, press Enter. SQL to run before: SQLBEFORE SQL statement . . . . . . . .
*NONE
Server to run on . . . . . . . Allowed SQL states . . . . . . + for more values + for more values More... F3=Exit F4=Prompt F5=Refresh F12=Cancel
Add DPR Subscription (ADDDPRSUB) Type choices, press Enter. SQL to run after: SQLAFTER SQL statement . . . . . . . .
*NONE
Server to run on . . . . . . . Allowed SQL states . . . . . . + for more values + for more values Maximum synchronization time: MAXSYNCH Number . . . . . . . . . . . . Interval . . . . . . . . . . . + for more values Commit count . . . . . . . . . . CMTCNT Target key change . . . . . . . TGTKEYCHG Add DPR registration . . . . . . ADDREG More...
Press the Page Down key to display the last screen for this command.
Add DPR Subscription (ADDDPRSUB) Type choices, press Enter. Federated server . . . . . . . . FEDSVR Bottom *NONE
For detailed information on the parameter values, use the field-level help by moving the cursor to a parameter and pressing the F1 key, or refer to Chapter 18 of the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
Add DPR Subscription Member (ADDDPRSUBM) Type choices, press Enter. Apply qualifier . . . . . . . . Set name . . . . . . . . . . . . Target table . . . . . . . . . .
Name
Library . . Control server Source server Target type . Key columns . More...
*USERCOPY, *REPLICA...
After you enter required values for the Apply qualifier, Set name, Target table, and Source table, press the F10 key for additional parameters, then the Page Down key to display the following screens of parameters:
Additional Parameters Create target table . . . . . . Check target table format . . . Source columns . . . . . . . . . + for more values Unique key . . . . . . . . . . . Target columns: Column . . . . . . . . . . . . New column . . . . . . . . . . + for more values More... *YES *YES *ALL *YES *COLUMN *YES, *NO *YES, *NO
*YES, *NO
Add DPR Subscription Member (ADDDPRSUBM) Type choices, press Enter. Calculated columns: Column . . . . . . . . . . . . Expression . . . . . . . . . .
*NONE
*ALL
For detailed information on the parameter values, use the field-level help by moving the cursor to a parameter and pressing the F1 key, or refer to Chapter 18 of the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
Chapter 6.
a. Expand the server you want to operate. b. Select Apply Qualifiers. c. In the content pane, select and right-click the apply qualifier you want. d. Select either Start Apply or Stop Apply. Alternatively, you can start Capture and Apply from the Launchpad in the Replication Center. In this section, we describe how to start and stop Capture and Apply with the most common parameters; you can find other parameters in 6.2, Capture and Apply parameters on page 295. The Capture and Apply programs can be operated from the Replication Center or a command prompt on all platforms. You can find the platform-specific requirements, restrictions, and alternative methods for:
DB2 UDB for UNIX and Windows in 6.1.3, Considerations for DB2 UDB for UNIX and Windows on page 270 DB2 UDB for z/OS in 6.1.4, Considerations for DB2 UDB for z/OS on page 275 DB2 UDB for iSeries in 6.1.5, Considerations for DB2 UDB for iSeries on page 286 Attention:
If you want to operate Capture and Apply for remote DB2 Windows, UNIX, or z/OS servers from the Replication Center, the DB2 Administration Server (DAS) on the remote server must be started. DAS is not required for DB2 iSeries servers. There are alternative ways to start Capture and Apply for DB2 UDB for z/OS. If you prefer to use the Replication Center to operate Capture and Apply for z/OS, the DB2 Administration Server (DAS) must be installed on DB2 UDB for z/OS; see 6.1.4, Considerations for DB2 UDB for z/OS on page 275. If you are operating Capture and Apply for iSeries, go to 6.1.5, Considerations for DB2 UDB for iSeries on page 286.
Starting Capture for DB2 for UNIX and Windows and z/OS
The Replication Center accepts start-up parameters for Capture from the Start Capture window, then prepares and runs the command to start Capture. The Start Capture window displayed for the SAMPLE database is shown in Figure 6-1.
Attention: If the Capture control server is DB2 UDB for UNIX and Windows, the database must be enabled for replication. If it is not enabled, you will receive a message indicating that it is not configured for archival logging. You can enable it from Replication Center. In order to check if your database is enabled for replication and to enable it, refer to 6.1.3, Considerations for DB2 UDB for UNIX and Windows on page 270.
The Capture server is the database you selected in the Replication Center to start Capture for. Capture must read control information in order to start. The control information is stored in DB2 tables whose schema is the capture schema; therefore, the capture schema is also essential for Capture to start. The default capture schema is ASN. If you are using multiple capture schemas for this Capture control server, or you used a capture schema other than ASN, you must specify it by highlighting it in the Capture schema list at the top of the screen.
Each parameter is specified with a keyword. Except for capture_server, capture_schema, and capture_path, parameter values may originate from three different sources:
- Each parameter, except for db2_subsystem, has a shipped default. The db2_subsystem parameter is only used for Capture on OS/390 and z/OS.
- Each parameter has a default for the capture schema in IBMSNAP_CAPPARMS. The defaults are stored as one row in this table, inserted when the table is created by the Replication Center. The values in this control table can be altered or set to null.
- A parameter may have an overridden value for this instance of Capture.
If a parameter is overridden at start-up, Capture uses the start-up value for that parameter. If it is not overridden, Capture reads the value from IBMSNAP_CAPPARMS. If the value there is null, or the row in IBMSNAP_CAPPARMS has been deleted, Capture uses the shipped default. The shipped defaults themselves cannot be changed. The defaults in IBMSNAP_CAPPARMS can be altered, but not while starting Capture from the Replication Center; only override values can be given at start-up. Capture directs all of its file I/O, including the log file and spill files, to capture_path. There is no shipped default for capture_path, and the corresponding column in IBMSNAP_CAPPARMS is initially null. If you specify a path in the Start Capture window, Capture starts with that value for capture_path. If you do not, capture_path is set to the working directory you supply under Run Specifications in the Run Now or Save Command window. Capture creates a log file on the capture path; you can find more about the log file in 6.1.2, Basic operations from the command prompt on page 256.
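Because the schema-level defaults live in IBMSNAP_CAPPARMS, they can be inspected and changed with plain SQL between Capture runs; a sketch (the path value is a placeholder):

```sql
-- Sketch: inspect the per-schema Capture defaults...
SELECT * FROM ASN.IBMSNAP_CAPPARMS;

-- ...and give capture_path a default so it no longer falls back
-- to the working directory.
UPDATE ASN.IBMSNAP_CAPPARMS
   SET CAPTURE_PATH = '/db2/capture';
COMMIT;
```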
The startmode can be any of the following:
WARMSI: Warm start; switch to cold start only on the initial start
WARMNS: Warm start; never switch to cold start
WARMSA: Warm start; always switch to cold start as necessary
COLD: Cold start
If this is your first run of Capture for this schema, replication sources should be replicated to the targets with a full refresh. A cold start initiates a full refresh. All the start modes except WARMNS provide a switch to cold start and can be used as the startmode on the first run. Generally, the safest choice is the WARMNS start mode, which does not switch to cold start, but this mode is not applicable for the first run. WARMSI also provides this safety, and you do not have to change the startmode after the first run.
In our example in Figure 6-1, we have overridden only the startmode. The Replication Center generates the command seen in Figure 6-2. Capture_path is not specified as a parameter, so all files will be created by Capture in the working directory specified at Run Specifications. Note that capture_server is a mandatory parameter. ASN is the shipped default for capture_schema. These two parameters are essential for Capture to find the control tables and start running. Capture can be started as a Windows service on the Windows 2000 and Windows NT operating systems. If you mark the check-box in Figure 6-1 on page 236, the command to create and start the service is prepared by the Replication Center. You need to make some modifications before running the commands. Refer to Starting Capture and Apply as a Windows service on page 274 for the necessary modifications. How to create a replication service (asnscrt) and drop a replication service (asnsdrop) is also explained in that section.
The Capture schema is selected from this list box. In this example, the capture control tables reside in a schema called COLINSRC. The default capture schema is ASN. The CAPPARMS Value column contains default values for the corresponding Keyword parameter. For example, the CLNUPITV keyword has a CAPPARMS default value of 86400. The default values in that column come from a table called IBMSNAP_CAPPARMS. When you change a value in that table, it becomes the new default value whenever you select this screen to start Capture. See 6.2.1, Change Capture parameters on page 296. However, you can override these values for only this instance of the Capture program. The other default values, not defined in the CAPPARMS table, are changed in the Value section of the screen. These changes, too, apply only to this instance of the Capture program. For example, the JRN overriding parameter value shown in Figure 6-3 directs the Capture program to capture changes only from the source tables registered to a journal called DPRJRN, instead of capturing changes from all the journals that contain registered source tables. The Capture keyword parameter values are described in detail in Chapter 18 of the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121. Click OK to generate the command to start the Capture program with the override values. See Figure 6-4.
As you can see, the command that is generated is the STRDPRCAP command with the JRN parameter overridden with a specific journal. This command is sent to the iSeries to run the Capture program on the source server. See 6.1.5, Considerations for DB2 UDB for iSeries on page 286 for details on this command.
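Based on the overrides above, the generated command takes a form like the following sketch. This is a reconstruction, not a verbatim copy of the figure; the parameter names follow the iSeries STRDPRCAP command, with CAPCTLLIB naming the capture schema library and JRN the journal:

   STRDPRCAP CAPCTLLIB(COLINSRC) JRN(COLINSRC/DPRJRN)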
Starting Apply for DB2 for UNIX and Windows and z/OS
Capture must run on the source server. Apply can run on any system, but it must be able to connect to the source, target, and control servers. If you start Apply anywhere other than the target server, Apply runs in push mode. If you start Apply on the target server, Apply runs in pull mode. Running Apply in push or pull mode has performance implications, which are discussed in Chapter 10, Performance on page 417. The server that holds the Apply control tables for the apply qualifier is its control server. The Apply control server must be configured as an application requester (AR) for the source and target servers. The source and target servers must be configured as application servers (AS). If Apply is running on DB2 UDB for z/OS, refer to Configuring z/OS apply control server as AR on page 278 for the connectivity of Apply. If Apply is running on DB2 UDB for UNIX or Windows: If the source server is a remote database, the source server must be cataloged at the server where Apply is running. If the target server is a remote database, the target server must be cataloged at the server where Apply is running. In our sample configuration in Figure 6-5, the capture control server (SAMPLE) is on Windows. The target (AIXSAMP), which is also the apply control server, is on AIX. Apply runs at the target in pull mode. Since the source is a remote server, it is cataloged at the apply control server (STHELENS) with the following commands:
db2 catalog tcpip node nodewin1 remote 9.1.39.85 server 50000 db2 catalog database sample at node nodewin1
[Figure 6-5 (diagram): Apply control server and target AIXSAMP on STHELENS (AIX, IP 9.1.38.178, port 60016, instance CAYCI, userid/password CAYCI), running Apply in pull mode with apply qualifier APY1; source SAMPLE on Windows hosts A23B/K31Z (IP 9.1.39.85, port 50000, instance DB2, userid/password DB2DRS3).]
When you start Apply from the Replication Center, you specify the system where Apply will run. The system name is given in the System pull-down menu at the top of the Start Apply window. Apply will run on STHELENS in our example in Figure 6-6.
The control_server and apply_qual are two mandatory parameters for Apply, and they have already been specified before reaching this screen. You see the control server (AIXSAMP) and apply qualifier (APY1) in the top left corner of the Start Apply window. The DB2 z/OS subsystem id is also required if Apply will run on z/OS or OS/390. The other parameters on this screen are all optional. You can specify values for these parameters on the Start Apply window to override the defaults. Apply's parameter assignment method is very much like Capture's. Every parameter has a keyword. There are three possible sources of values for each parameter: the shipped default, the defaults in IBMSNAP_APPPARMS, and the start-up value. Apply uses the same precedence as Capture: shipped defaults are overridden by the values in the IBMSNAP_APPPARMS parameter table, and start-up values override the values in the parameter table.
There is one set of apply control tables on an apply control server. Whereas there is always one row in IBMSNAP_CAPPARMS, inserted by the Replication Center when the capture control tables are created, the apply parameter table can have zero or more rows and is not populated by the Replication Center. Each row defines the defaults for one apply qualifier. If there is a row for an apply qualifier that Apply is processing, Apply uses the defaults from this row to override the shipped defaults. In this version of the product, the parameter values from IBMSNAP_APPPARMS do not appear on the Start Apply window, but they will be available in a future fixpack. Although the values do not display in the Replication Center, Apply still uses the parameters from IBMSNAP_APPPARMS as described above. If you want to change the shipped defaults for an apply qualifier, you can insert a row into the table with an SQL statement. The parameters that exist in IBMSNAP_APPPARMS are described in 3.4.5, Control tables described on page 129. The apply_path usage is also similar to Capture's. This path is used for all the file I/O of Apply. If specified, it is carried to the directory field on Run Specifications; if not specified, the directory must be filled in on Run Specifications. Apply generates a log file, as Capture does, in the apply_path directory. The password file is used by Apply to find the userid and password pairs to access the source and target servers. If the source or target server is remote to the apply control server, and the configuration of the source or target server indicates that authentication is required at the SERVER, a valid userid and password for that server must be present in the password file. The password file is named asnpwd.aut by default. It is created and populated with the asnpwd command, which is explained in Maintaining the password file on page 289. This command should be issued at the apply control server.
Apply searches for this file in the directory given in apply_path. If you used a name different from the default, you should indicate the name with the pwdfile parameter. You can also place this file in a sub-directory of apply_path. If you placed your password file under a sub-directory of apply_path, you must indicate the sub-directory as well as the password file name with the pwdfile parameter.
Important: The password file, which holds the passwords of userids from various platforms, is encrypted.
In our example, we created the password file and inserted the userid and password for the SAMPLE database using the following commands at STHELENS before starting Apply. Our password file is created in the pwd directory under apply_path.
cd /home/cayci/pwd/
asnpwd init using password.aix
asnpwd add alias sample id db2drs3 password db2drs3 using password.aix
Note that init is issued only once, to create the password file, but add should be issued for every database that Apply connects to and that requires authentication. In our case, only the source database is remote. The Replication Center generates the following command and submits it to STHELENS as the consequence of our selections on the Start Apply window.
asnapply CONTROL_SERVER=AIXSAMP APPLY_QUAL=APY1 PWDFILE=pwd/password.aix
One way to start Apply is to define it as a Windows service. This method is possible only when Apply is on the Windows 2000 or Windows NT operating systems. If you mark the check-box at the bottom of the window in Figure 6-6 on page 242, the Replication Center prepares the command to create and start the service based on the parameters provided on the Start Apply window. The command prepared by the Replication Center needs to be modified; the necessary modifications are explained in Starting Capture and Apply as a Windows service on page 274. The command for creating the service (asnscrt) and the command for dropping the service (asnsdrop) are also explained in that section.
Attention: If you receive the ASN0506E, The program could not attach to the replication communications message queue error message at the stop command, the most common reason is that the program (Capture or Apply) is already stopped or was never started.
In this section we consider the pull mode configuration and make references to remote journal configurations. Before starting the Apply program, the following procedures need to be accomplished with a valid user profile on the source and target servers:
Connectivity configuration
Before the Apply program can connect to the Capture control server and Apply control server, the DRDA communication connection must be configured (Example 6-1):
Example 6-1 Configure connection
At the source server, enter the command WRKRDBDIRE and note the relational database (RDB) name for the *LOCAL remote location. The following display is the WRKRDBDIRE screen, indicating DB2400D is the RDB the target server will connect to:

                Work with Relational Database Directory Entries

 Position to  . . . . . .

 Type options, press Enter.
   1=Add  2=Change  4=Remove  5=Display details  6=Print details

          Relational
 Option   Database     Remote Location     Text
          NST105       9.9.74.999
          MYNST103     9.30.74.43
          NST103       9.112.26.3
          DB2400D      *LOCAL
At the target server, enter the ADDRDBDIRE command and press the F4 key to display the prompt screen. Enter the following values to create the DRDA connection to the source server:

                     Add RDB Directory Entry (ADDRDBDIRE)

 Type choices, press Enter.

 Relational database . . . . . .   RDB          > DB2400D
 Remote location:                  RMTLOCNAME
   Name or address . . . . . . .                > 999.99.999.999  (IP address or
                                                  host name of the source server)
   Type  . . . . . . . . . . . .                > *IP
 Text  . . . . . . . . . . . . .   TEXT           *BLANK
 Port number or service program    PORT           *DRDA
 Remote authentication method:     RMTAUTMTH
   Preferred method  . . . . . .                  *ENCRYPTED
   Allow lower authentication  .                  *ALWLOWER
If you want to connect to a Capture control server on a Windows platform, change the port parameter to 50000. Run an SQL connect statement to test the connection to the Capture control server database: CONNECT TO DB2400D USER UserID USING Password. If you receive an authorization error when connecting to the Capture control server and the userID and password are correct, enter CHGDDMTCPA at the target server. If the password required parameter is *YES, then run the following command:
ADDSVRAUTE USRPRF(TargetUserprofile) SERVER(SourceRDB) USRID(SourceUserID) PASSWORD(SourcePassword)
For a remote journal configuration, run the following command at the target server, where the Capture and Apply programs are running. The source server is where the actual source table resides (Example 6-3):
Example 6-3 Remote journal configuration
CRTSQLPKG PGM(QDP4/QZSNSQLF) RDB(source table server) OBJTYPE(*SRVPGM)
After creating the SQL packages, you must grant *EXECUTE privileges to these objects. Enter the following command at the source server (Example 6-4):
Example 6-4 Grant authority to SQL packages
GRTOBJAUT OBJ(ASN/package_name) OBJTYPE(*SQLPKG) USER(subscriber UserID) AUT(*OBJOPR *EXECUTE)
If Data Propagator products exist at the source server, use the following command (Example 6-5):
Example 6-5 Data Propagator products
GRTDPRAUT
For details see Chapter 18 in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
Starting Apply
From the Replication Center, when you select an apply qualifier from the Apply control server, the window in Figure 6-7 is displayed to start the Apply program at the target server:
Select the target server from the System pull-down list. The apply qualifier APY2 for this Apply example was selected from the previous screen, as described at the beginning of this section. Select the Keyword and enter the value, which is displayed in the Overriding Value part of the display. For example, the OPTSNGSET parameter was overridden to YES. For details about these parameters, refer to Chapter 18, under the STRDPRAPY command, in the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
When you click the OK button, the following STRDPRAPY command, showing the override of the OPTSNGSET parameter, is generated and executed at the target server (Example 6-6):
Example 6-6 STRDPRAPY command generated from the Replication Center
strdprapy CTLSVR(STL400G) APYQUAL(APY2) OPTSNGSET(Y)
You are required to select the capture schema, and how to end: controlled, by selecting After all tasks complete (outside the ENDJOB option), or immediately, by selecting the other option. To stop Apply, right-click on the apply qualifier in the list from the apply control servers and select Stop Apply from the menu. A window is displayed to select the Apply control server, with the option to end controlled or immediately.
If the Replication Center can attach to the communications message queue of Capture, the threads and the status of the Capture threads are listed. Capture is a multi-threaded application. The types of threads and their functions are as follows:
1. HOLDL thread: To ensure that only one Capture is capturing for this schema, this thread locks the IBMSNAP_CAPENQ table of that schema. It is also called the serialization thread.
2. ADMIN thread: This thread externalizes the statistics to IBMSNAP_CAPMON and writes messages to the Capture log and IBMSNAP_CAPTRACE.
3. PRUNE thread: This thread prunes the Change Data (CD), IBMSNAP_UOW, IBMSNAP_SIGNAL, IBMSNAP_CAPTRACE, and IBMSNAP_CAPMON tables. If auto_prune is not set, this thread stays in the resting state until the prune command is issued. If Capture is run with auto-pruning, the prune thread rests until prune_interval.
4. WORKER thread: This thread reads the DB2 logs and captures changes into memory. The worker thread inserts changes into the CD table and updates the commits in IBMSNAP_UOW when the application commits.
Query status displays these four Capture threads. One more thread is used by Capture; it establishes the environment and starts the other threads. The resting and working states are an indication of normal operation of Capture.
The status of Apply can also be queried in a similar way. Expand Apply Control Servers under Operations. Expand your apply control server and select Apply Qualifiers. In the content pane, select and right-click the qualifier you want to query, and from the option list select Query Status. A Query Status screen similar to the Query Status screen of Capture is displayed. Two threads and the states of the Apply threads are displayed:
1. HOLDL thread: This thread is for serialization. It stays in the resting state all the time and prevents multiple Applys from being started for the same apply qualifier on an apply control server.
2. WORKER thread: As evident from its name, this thread does the apply. The worker thread reads the CD table and UOW table from the capture control server, updates the control tables of both the capture control server and the apply control server, and updates the targets at the target server.
There is also an administrative thread that is not displayed on this screen but can be detected by issuing the command from the command line. It is also possible to display the threads with DB2 or operating system commands. See Manage processes and threads on page 290 for the commands available on various platforms. Query status may return an error on attaching to the communications message queue, for both Capture and Apply. One reason for not being able to attach to the communications message queue is that the program (Capture or Apply) was not started, or stopped due to an error. In a data sharing environment (z/OS), if you start Capture with the DB2 data sharing group name, you must issue the status or trace command using the same group name. If you use the subsystem name instead of the group name, you will receive the same error.
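As mentioned above, the programs can also be checked at the operating-system level. For example, on UNIX a quick check is to look for the Capture and Apply processes by name (a hedged sketch; asncap and asnapply are the program names used in this chapter, and the exact ps output format varies by platform):

```shell
# List any running Capture or Apply processes; report if none are found.
# grep -v grep filters out the grep command itself from the listing.
ps -ef | grep -E 'asncap|asnapply' | grep -v grep || echo "no Capture/Apply processes found"
```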
 Type options, press Enter.
   2=Change  3=Hold  4=End  5=Work with  6=Release  7=Display message
   8=Work with spooled files  13=Disconnect

 Opt  Job         User       Type    -----Status-----
      COLINSRC    HCOLIN     BATCH   MSGW
      DPRJRN      HCOLIN     BATCH   ACTIVE
      LETAQ       RCTEST02   BATCH   ACTIVE
      NST108RC    REPLSVT1   BATCH   MSGW
      PROMOTED    RCTEST01   BATCH   MSGW

 F5=Refresh  F9=Retrieve  F11=Display schedule  F18=Bottom
The Capture program we started in Figure 6-3 on page 239 is shown on this display as the COLINSRC job. This is the Capture controlling job, which controls the journal job and the pruning process. It will be the only Capture program displayed for the capture schema if this is the first time Capture is started for a group of registered source tables. When the full refresh process has successfully finished from the Apply program, another Capture program starts up. This program is the Capture journal program; there is one program for each journal, for each capture schema. See Figure 6-10, which shows the DPRJRN job as the Capture journal job. The job name is the actual journal's name. The MSGW and ACTIVE statuses on this display indicate these jobs are working okay. The job log for both Capture jobs shows detailed information on whether they were successful, or messages for any errors that occurred. From Figure 6-10, enter option 5 next to the controlling job to display the Work with Job screen. Select option 10 to Display job log, then F10 to display the detailed job log. The following example shows the job log when the Capture program starts successfully:
                                                          System:   STL400D
 Job . . :   COLINSRC     User . . :   HCOLIN     Number . . . :   028097
>> CALL PGM(QDP4/QZSNCV81) PARM('COLINSRC ' 'Y *LIBL/QZSNDPR 120 1 COLINSRC/ DPRJRN') File IBMSN00021 in library COLINSRC changed. Capture process is starting with RESTART(*YES). Capture process has started. Capture has started clean up. Capture has completed clean up.
Figure 6-11 Successful Capture program job log for controlling job
Enter option 5 on the WRKSBMJOB screen (see Figure 6-10) next to the Capture journal job, select 10 to Display job log, then F10 to display the detailed job log. The following example shows the job log when the Capture journal job starts successfully:
                             Display All Messages
                                                          System:   STL400D
 Job . . :   DPRJRN       User . . :   HCOLIN     Number . . . :   028098
>> CALL PGM(QZSNCV82) PARM('COLINSRC DPRJRN 0 1 -1 2002-09-11-15.34.23. 205000 15 0 COLINSRC ') Block mode started by the RCVJRNE exit program.
Figure 6-12 Successful Capture job log for the journal job
If you can't find either the Capture controlling program or the journal program in the WRKSBMJOB display, an error occurred in the Capture programs and they have been canceled. Therefore, you need to check the job logs associated with these programs in the QEZJOBLOG OUTQ to determine the problem. Use either WRKOUTQ OUTQ(QEZJOBLOG) or WRKSPLF SELECT(YourUserID).
 Type options, press Enter.
   2=Change  3=Hold  4=End  5=Work with  6=Release  7=Display message
   8=Work with spooled files  13=Disconnect

 F5=Refresh  F9=Retrieve  F11=Display schedule  F18=Bottom
The Apply program we started in Figure 6-7 on page 247 is shown on this display as the APY2 job, which is the name of the Apply qualifier. The status is active, but you need to view the job log to check whether any error occurred during the apply cycle. The procedure to view the job log is described under the preceding heading; see Checking Capture program status on page 250. The following is a sample of the Apply program job log to check for any messages:
                             Display All Messages
                                                          System:   STL400G
 Job . . :   APY2         User . . :   HCOLIN     Number . . . :   235188
 >> CALL PGM(QDP4/QZSNAPV2) PARM('APY2     ' '*LOCAL    ' 'N' '*NONE   '
      '*NONE   ' 'Y' 'Y' ' ' '00000300' 'N' 'N' 'N')
 1  Database connection started over TCP/IP or a local socket.
    Printer device PRT01 not found. Output queue changed to QPRINT in library QGPL.
    Printer device PRT01 not found. Output queue changed to QPRINT in library QGPL.
 2  File ASNAS000 created in library QTEMP.
    Member ASNAS000 added to file ASNAS000 in QTEMP.
    Member TGPROJECT file TGPROJECT in COLINTAR cleared.
 3  Apply will be inactive for 1 minutes and 53 seconds.
    Apply will be inactive for 1 minutes and 57 seconds.
    Apply will be inactive for 1 minutes and 57 seconds.
 Press Enter to continue.
 F3=Exit  F5=Refresh  F12=Cancel  F17=Top  F18=Bottom
Figure 6-14 shows a typical job log when the Apply program has started successfully. The numbered arrows indicate the following: 1 - A successful connection to the remote source server. 2 - Creation of the spill file in QTEMP. If Capture and Apply are running on the same server, a spill file is not created. 3 - Apply processed 3 cycles and went to sleep for approximately 2 minutes, as specified in the subscription set. If Apply encountered a problem, the following job log is generated:
CALL PGM(QDP4/QZSNAPV2) PARM('APY2 ' '*LOCAL ' 'N' '*NONE ' '*NONE ' 'Y' 'Y' ' ' '00000060' ' N' 'N' 'N') Database connection started over TCP/IP or a local socket. Printer device PRT01 not found. Output queue changed to QPRINT in library QGPL. Printer device PRT01 not found. Output queue changed to QPRINT in library QGPL. Apply will be inactive for 0 minutes and 40 seconds. Apply will be inactive for 1 minutes and 57 seconds. Apply will be inactive for 1 minutes and 58 seconds. File ASNAS000 created in library QTEMP. Member ASNAS000 added to file ASNAS000 in QTEMP. TGPROJECT in COLINTAR type *FILE not found. Apply will be inactive for 1 minutes and 0 seconds. TGPROJECT in COLINTAR type *FILE not found. Apply will be inactive for 1 minutes and 0 seconds. TGPROJECT in COLINTAR type *FILE not found. Apply will be inactive for 1 minutes and 0 seconds.
This job log example indicates the target table is missing, which causes the Apply program error. For additional information on the error displayed on this screen, move the cursor to where the error message is displayed and press the F1 key, which displays another screen showing the Additional Message Information.
This screen can help you determine the status of your Apply program: select from the Information to display list and review the results of your selection after you click the Refresh button. This information is retrieved from the IBMSNAP_APPLYTRAIL table.
On Windows operating systems, commands are run from the Command Window, which is under IBM DB2 -> Command Line Tools, or you can issue the commands from any MS-DOS prompt. On UNIX operating systems, commands are run from the shell prompt. On z/OS, IBM DB2 DataPropagator V8.1 provides the data replication feature. For requirements, environment, and general information on IBM DB2 DataPropagator V8.1, refer to General information on DB2 DataPropagator on page 275. Capture and Apply run as UNIX System Services (USS) programs on z/OS. You can run the commands on z/OS in either of these ways: After logging on to TSO, call the omvs command to start the USS shell command line. Use the remote login program (rlogin) to access the USS shell on z/OS from a UNIX or Windows environment. In the USS profile, the DataPropagator load library, DB2 load library, and C run-time libraries must be stated in STEPLIB, and the data replication bin directory must be listed in PATH. If you are using the SDSNEXIT library for authorization exits, it must also be listed in STEPLIB. LANG is optional; it is set if a different code page is required. A sample profile is shown in Example 6-7.
Example 6-7 Sample USS profile Data Propagator
DPRLOAD=DPROPR.V810.LOAD
DB2LOAD=DSN.SDSNLOAD
CRUNLOAD=CEE.SCEERUN:CBC.SCLBLDLL
export STEPLIB=${DPRLOAD}:${CRUNLOAD}:${DB2LOAD}
export PATH=/usr/lpp/db2repl_08_01/bin:${PATH}
export LANG=en_US
You can also run the commands with JCL, as a batch job or started task on z/OS. Running commands using JCL is described in Using JCL to operate Capture and Apply on page 280. In the examples in this section, commands are issued on one of the possible platforms, chosen randomly, but they can be run from every platform listed above. From the command line, commands are called with the start-up parameters. Start-up parameters are not positional; they are given with a keyword. Commands have mandatory parameters and optional parameters. Optional parameters are specified only to override the defaults for this run of Capture or Apply. To examine the complete syntax of the commands, refer to Chapter 17, System Commands for replication (UNIX, Windows, z/OS), in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
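The invocation discussed below takes the following form, reconstructed from the parameter sources recorded in the log in Example 6-8 (where capture_server and startmode are shown as set from the command line); it is a sketch, not a verbatim copy of the original figure:

   asncap capture_server=SAMPLE startmode=WARMSI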
Capture_schema is also necessary for Capture. The default for capture_schema is ASN. You may be using different schemas because you exploit multiple schemas in your replication environment. Startmode is an optional parameter. It is set in the example to change it to WARMSI from the default, which is WARMNS. Capture warm starts, and switches to cold start only if this is the first run of Capture for this capture control server with this schema.
The informational message issued by Capture in the example indicates the completion of re-initialization during warm start. Stopped registrations are the registered sources with state S in the IBMSNAP_REGISTER table. They may be stopped by Capture due to registration errors, or set manually by the administrator during maintenance. Capture will not capture changes for a stopped registration until its state is set to I manually. Capture will capture changes for inactive registrations. If the capture_path is not given, as in the example, it defaults to the directory where asncap is issued. The naming convention for the log is as follows:
<instance name>.<capture_server>.<capture_schema>.CAP.log
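As a sketch, the log name produced by this invocation can be composed as follows (DB2 is the instance name in this chapter's Windows example, as seen in the IPC queue key prefix of Example 6-8):

```shell
# Compose the Capture log file name per the convention
# <instance name>.<capture_server>.<capture_schema>.CAP.log
INSTANCE=DB2
CAPTURE_SERVER=SAMPLE
CAPTURE_SCHEMA=ASN
echo "${INSTANCE}.${CAPTURE_SERVER}.${CAPTURE_SCHEMA}.CAP.log"
# prints DB2.SAMPLE.ASN.CAP.log
```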
The Capture log generated by this invocation of Capture is seen in Example 6-8. All parameter values assigned for this instance of Capture, and the method by which they were acquired, appear in the log. Capture_server and startmode were obtained from the command line as start-up parameters, capture_schema and capture_path were assigned by system defaults, and all other values come from the IBMSNAP_CAPPARMS table. During start-up, the keys of the IPC queues created are listed in the log with message number ASN8008D.
Example 6-8 Capture log
2002-09-02-11.54.17.327000 <setEnvDprRIB> ASN8003D Capture : "ASN" : Program "capture 8.1.0" is starting. 2002-09-02-11.54.19.820000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "CAPTURE_SERVER" was set to "SAMPLE" at startup by the following method: "COMMANDLINE". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "CAPTURE_SCHEMA" was set to "ASN" at startup by the following method: "DEFAULT". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "LOGREUSE" was set to "N" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "LOGSTDOUT" was set to "N" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "TERM" was set to "Y" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "CAPTURE_PATH" was set to "C:\capture\" at startup by the following method: "DEFAULT". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "AUTOSTOP" was set to "N" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "STARTMODE" was set to "WARMSI" at startup by the following method: "COMMANDLINE". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "RETENTION_LIMIT" was set to "10080" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "LAG_LIMIT" was set to "10080" at startup by the following method: "IBMSNAP_CAPPARMS". 
2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "COMMIT_INTERVAL" was set to "30" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "PRUNE_INTERVAL" was set to "300" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "SLEEP_INTERVAL" was set to "5" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "AUTOPRUNE" was set to "Y" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "TRACE_LIMIT" was set to "10080" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "MONITOR_LIMIT" was set to "10080" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "MONITOR_INTERVAL" was set to "300" at startup by the following method: "IBMSNAP_CAPPARMS".
2002-09-02-11.54.19.830000 <asnParmClass::printParms> ASN0529I "Capture" : "ASN" : The value of "MEMORY_LIMIT" was set to "32" at startup by the following method: "IBMSNAP_CAPPARMS". 2002-09-02-11.54.19.830000 <Asnenv:setEnvIpcQRcvHdl> ASN8008D "Capture" : "ASN" : "Created" IPC queue with key(s) "(OSSEIPC0tempDB2.SAMPLE.ASN.CAP.IPC, OSSEIPC1tempDB2.SAMPLE.ASN.CAP.IPC, OSSEIPC2tempDB2.SAMPLE.ASN.CAP.IPC)". 2002-09-02-11.54.20.031000 <CWorkerMain> ASN0100I CAPTURE "ASN". The Capture program initialization is successful. 2002-09-02-11.54.20.031000 <CWorkerMain> ASN0109I CAPTURE "ASN". The Capture program has successfully initialized and is capturing data changes for "0" registrations. "0" registrations are in a stopped state. "2" registrations are in an inactive state. 2002-09-02-11.59.20.332000 <PruneMain> ASN0111I CAPTURE "ASN". The pruning cycle started at "Mon Sep 02 11:59:20 2002". 2002-09-02-11.59.20.362000 <PruneMain> ASN0112I CAPTURE "ASN". The pruning cycle ended at "Mon Sep 02 11:59:20 2002".
Figure 6-18 shows the asncap and asnccmd commands issued from uss on z/OS. Note that the DB2 subsystem name is specified for capture_server. Capture runs as a process under uss on z/OS, and a process id is assigned (pid=200 in this example). The Capture log is created in the directory from which the command is issued.
Figure 6-18 Start Capture and query status from USS shell
The status of Capture is queried using the asnccmd command with the status option. If Capture is running, the status of each thread is displayed as in Figure 6-18. Refer to 6.1.1, Basic operations from the Replication Center on page 234 for the functions of the threads. In this example, the capture_server on the command is the subsystem name, because the capture control server is DB2 UDB for z/OS. If the capture control server were DB2 UDB for UNIX or Windows, it would have to be one of the database aliases listed in the database directory. If Capture is not running, you receive the error message ASN0506E, The program could not attach to the replication communications message queue. Three possible ways of stopping Capture are: You can stop it with the asnccmd command as in the example below.
asnccmd capture_server=sample stop
You can cancel it with CTRL-C. You can stop it manually by inserting a row into the IBMSNAP_SIGNAL table with the SQL below:
INSERT INTO schema.IBMSNAP_SIGNAL (SIGNAL_TYPE, SIGNAL_SUBTYPE, SIGNAL_STATE) VALUES ('CMD', 'STOP', 'P')
Figure 6-19 Start Apply and query status from the command prompt
The apply_path is not given on the asnapply command, so it defaults to the directory from which the command is issued. The naming convention for the log is as follows:
<instance name>.<apply_server>.<apply_qual>.APP.log
The log file is seen in Example 6-9. The control_server, apply_qual and pwdfile are start-up parameters. The password file is located under the pwd sub-directory of apply_path and is named password.aix. All other values are taken from IBMSNAP_APPPARMS. Although we only inserted the delay parameter, the other columns are filled with the shipped defaults defined on IBMSNAP_APPPARMS. For this reason, all parameters other than those overridden during start-up are marked as obtained from IBMSNAP_APPPARMS. The Apply log is also used by Apply to record errors encountered during the Apply cycle.
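The naming convention above can be illustrated with a small shell sketch. The instance, server, and qualifier values below are hypothetical, chosen only to show how the pieces are concatenated:

```shell
# Hypothetical values for illustration only
instance="db2inst1"       # DB2 instance name
apply_server="AIXSAMP"    # apply control server
apply_qual="APY1"         # apply qualifier

# Apply log name: <instance name>.<apply_server>.<apply_qual>.APP.log
logname="${instance}.${apply_server}.${apply_qual}.APP.log"
echo "$logname"
```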
The command used for querying the status of Apply is asnacmd with the status option. If Apply is running, the state of each thread is displayed as in Figure 6-19. Refer to 6.1.1, Basic operations from the Replication Center on page 234 for the functions of these threads. The parameter specification differs depending on the platform of the apply control server: If the apply control server is DB2 UDB for UNIX and Windows, the database alias is specified as the control_server in the command. If the apply control server is DB2 UDB for z/OS, the control_server must be the location name of the subsystem. If Apply is started on DB2 UDB for z/OS, the subsystem where Apply will run is specified with the db2_subsystem parameter. Apply can run anywhere, provided that connectivity to the source, target and apply control servers is configured. The Apply program needs the location name and subsystem id of the apply control server to connect and access the apply control tables. There is one set of apply control tables per DB2 subsystem. All information for each Apply qualifier is in the control tables; Apply_qual qualifies the rows in the control tables for the subscription sets processed by Apply. The pwdfile parameter of Apply is not valid for DB2 UDB for z/OS.
Figure 6-20 Sample replication configuration: the Capture control server (capture schema CAP1) and the Apply control server (apply qualifier APY1) are on subsystem D7DP; the target server is subsystem DSN7 (LOCATION: STPLEX4A_DSN7, PORT: 8020); Apply runs in push mode
The subsystem D7DP is defined as both the Capture and Apply control servers and subsystem DSN7 is the target server on the sample replication configuration of Figure 6-20.
The following command can be used for starting Apply for the replication configuration in Figure 6-20.
asnapply control_server=d7dp db2_subsystem=d7dp apply_qual=apy1
For control_server=d7dp, d7dp is the location name; for db2_subsystem=d7dp, d7dp is the subsystem name where Apply will run. Coincidentally, the location name and the subsystem name are the same in our sample. The location name must be assigned to control_server and the subsystem name to db2_subsystem in the commands. Two possible ways of stopping Apply are: You can stop it with the asnacmd command as in the examples below.
asnacmd control_server=aixsamp apply_qual=apy1 stop
asnacmd control_server=d7dp db2_subsystem=d7dp apply_qual=apy1 stop
You can cancel it with CTRL-C. If you try to stop or query the status of Apply while Apply is not running, you receive the error message ASN0506E, The program could not attach to the replication communications message queue.
Start DPR Capture (STRDPRCAP)
Type choices, press Enter.
Restart after end . . . . . . .  RESTART       *YES
Job description . . . . . . . .  JOBD          QZSNDPR
  Library . . . . . . . . . . .                  *LIBL
Wait  . . . . . . . . . . . . .  WAIT          120
Clean up interval:               CLNUPITV
  Wait time . . . . . . . . . .                *DFT
  Start clean up  . . . . . . .                *IMMED
Capture control library . . . .  CAPCTLLIB   > COLINSRC
Journal . . . . . . . . . . . .  JRN           *ALL
  Library . . . . . . . . . . .
            + for more values
Trace limit . . . . . . . . . .  TRCLMT
Monitor limit . . . . . . . . .  MONLMT
Monitor interval  . . . . . . .  MONITV
Memory limit  . . . . . . . . .  MEMLMT
                                                        More...
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel
Press the page down key to display the last screen of prompts (Figure 6-22).
Start DPR Capture (STRDPRCAP)
Type choices, press Enter.
Retention period  . . . . . . .  RETAIN        *DFT
Lag limit . . . . . . . . . . .  LAG           *DFT
Force frequency . . . . . . . .  FRCFRQ        *DFT
The default values from IBMSNAP_CAPPARMS are used as described in Figure 6-3 on page 239; they are overridden when you enter another value on this prompt screen. See 6.2.1, Change Capture parameters on page 296. The command prompt screen in Figure 6-23 is the ENDDPRCAP.
End DPR Capture (ENDDPRCAP)
Type choices, press Enter.
How to end  . . . . . . . . . .  OPTION        *CNTRLD
Capture control library . . . .  CAPCTLLIB     ASN
Reorganize control tables . . .  RGZCTLTBL     *NO
The CAPCTLLIB parameter is new in Version 8. It ends Capture for the specified Capture schema, which contains the Capture control tables. The RGZCTLTBL parameter is also new in Version 8; it performs a Reorganize Physical File Member (RGZPFM) over the CD and UOW tables opened during that instance of the Capture program. Therefore, if you started Capture for a specific journal, only the CD tables associated with the registered source tables for that journal are reorganized. If you started Capture with the default value to use all journals, then the CD tables for all the registered source tables within the Capture schema are reorganized. Beware that choosing to reorganize your CD tables can delay the end of the Capture process; however, it can improve the performance of running Capture.
Start DPR Apply (STRDPRAPY)
Type choices, press Enter.
User  . . . . . . . . . . . . .  USER          *CURRENT
Job description . . . . . . . .  JOBD          QZSNDPR
  Library . . . . . . . . . . .                  *LIBL
Apply qualifier . . . . . . . .  APYQUAL     > APY2
Control server  . . . . . . . .  CTLSVR        *LOCAL
Trace . . . . . . . . . . . . .  TRACE         *NONE
Full refresh program  . . . . .  FULLREFPGM    *NONE
  Library . . . . . . . . . . .
Subscription notify program . .  SUBNFYPGM     *NONE
  Library . . . . . . . . . . .
Inactive message  . . . . . . .  INACTMSG      *YES
Allow inactive state  . . . . .  ALWINACT      *YES
Delay . . . . . . . . . . . . .  DELAY         6
Retry wait time . . . . . . . .  RTYWAIT       300
Copy Once . . . . . . . . . . .  COPYONCE      *NO
Trail Reuse . . . . . . . . . .  TRLREUSE      *NO
                                                        More...
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F24=More keys
Press the page down key to display the next screen (Figure 6-25).
Start DPR Apply (STRDPRAPY)
Type choices, press Enter.
Optimize single set . . . . . .  OPTSNGSET     *NO
                                                        Bottom
Prompting the ENDDPRAPY command displays the screen in Figure 6-26 to end Apply for a specific Apply qualifier.
End DPR Apply (ENDDPRAPY)
Type choices, press Enter.
User  . . . . . . . . . . . . .  USER          *CURRENT
How to end  . . . . . . . . . .  OPTION        *CNTRLD
Apply qualifier . . . . . . . .  APYQUAL       *USER
Control server  . . . . . . . .  CTLSVR        *LOCAL
For details about all these parameters, refer to Chapter 18 in the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
SAMPLE in Figure 6-27 is using circular logging; the legend next to it indicates that it is not enabled for replication. The other Windows server (CAPPRJ1) in the figure is enabled for replication. 2. From Command Window or Command Center: You can detect the logging type of the database by issuing the following DB2 command.
db2 get db cfg for sample
Circular logging is in effect for this database if both the LOGRETAIN and USEREXIT parameters are OFF. 3. From Control Center: You can also use the Control Center to check your database logging type. After launching the Control Center, there are two places you can query this configuration parameter: follow the path Systems -> Instances -> Databases, right-click your database and select either Configure Database Logging or Configure Parameters.
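As a sketch of that check, the following shell pipeline decides the logging type from the output of db2 get db cfg. The here-document holds a hypothetical, abbreviated excerpt of that output; on a real system you would pipe the command output instead:

```shell
# Hypothetical excerpt of 'db2 get db cfg for sample' output
cfg='Log retain for recovery enabled             (LOGRETAIN) = OFF
User exit for logging enabled                 (USEREXIT) = OFF'

# Circular logging is in effect only if both LOGRETAIN and USEREXIT are OFF
if echo "$cfg" | grep -E 'LOGRETAIN|USEREXIT' | grep -qv 'OFF'; then
  logtype="archival"
else
  logtype="circular"
fi
echo "$logtype"
```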
1. From the Replication Center: You can change the logging type to archival and back up the database without leaving the Replication Center. After selecting Replication Definitions -> Capture Control Servers, right-click the database to be enabled and select Enable Database for Replication from the pull-down menu. This action initiates a call to the DB2 command that updates the LOGRETAIN value of the database to RECOVERY, and launches Backup Database. A database automatically goes into backup pending state after its log type is changed to archival; an offline backup must be taken to return it to normal state. This is the only option allowed on the Backup Database wizard launched by selecting Enable Database for Replication. 2. From Command Window or Command Center: The following are the commands you can issue to change the logging type and back up the database from the CLP or Command Center. The database name is SAMPLE. The backup file is generated into a directory named backup. All connected applications should disconnect before an offline backup is taken.
cd \
mkdir backup
db2 terminate
db2 update database configuration for sample using logretain yes
db2 backup database sample to c:\backup
The file and directory structure used in the example is for Windows platforms; alter it appropriately for UNIX platforms. 3. From Control Center: You can change the logging parameter from the same two places in the Control Center that you used to query it. Follow Systems -> Instances -> Databases, right-click your database and select Configure Database Logging or Configure Parameters. If you change the logging type to archival from Configure Database Logging, a backup is also created, whereas if LOGRETAIN is changed from Configure Parameters, you must back up the database afterwards.
On Sun Solaris servers, you should also place this statement in the .profile: export NLSPATH=/usr/lib/locale/%L/%N:$DB2DIR/sqllib/msg/en_US/%N.
CATALOG TCPIP NODE node-name REMOTE host-name SERVER service-name

Where node-name is any name that will be used to catalog databases under the node, host-name is either the IP address or the hostname, and service-name is the port number the remote DB2 uses.
CATALOG DATABASE database-name AS alias AT NODE node-name
Where node-name is the name used on the catalog tcpip node command, database-name is the name of the remote database you want to catalog, and alias is the name you want to use in connect statements. You can list the other options available for these commands from the command line by using db2 ? catalog. If the remote server is DB2 UDB for z/OS, the communication protocol used is Distributed Relational Database Architecture (DRDA). DB2 Connect and DB2 UDB ESE V8 for UNIX and Windows possess the DRDA application requester (AR) functionality. Besides cataloging the remote server in the database directory and node directory, the DB2 UDB for z/OS subsystem must also be cataloged in the DCS directory. You can use the following command to catalog the remote z/OS subsystem as a DCS database:
CATALOG DCS DATABASE database-name AS location-name
Where database-name is any name you choose, and location-name must be the location name of the remote DB2 subsystem. The CCA can also be used to configure client connectivity. Consider the DB2 subsystem with the following location name, port number and hostname:
This remote DB2 subsystem can be cataloged to the DB2 UDB for UNIX and Windows DRDA AR with the following commands:
db2 catalog tcpip node stplex2 remote stplex4a.stl.ibm.com server 8020 db2 catalog database stpdsn7 at node stplex2 authentication dcs db2 catalog dcs database stpdsn7 as stplex4a_dsn7
asnscrt -c | -a db2-instance account password replication-command

Where db2-instance is the DB2 instance service name, and account and password are the account and password you use to log on to Windows. You can obtain the DB2 instance service name from the Windows Start Menu by following Settings -> Control Panel -> Administrative Tools -> Services. On the Services window, select and right-click the DB2 instance and choose Properties; use the Service Name on the General panel as the DB2 instance name in the asnscrt command. The account must always start with a period and a backslash (.\). The following examples create Capture and Apply services:
asnscrt -c DB2-0 .\db2drs3 db2drs3 asncap capture_server=sample capture_path=c:\capture
asnscrt -a DB2-0 .\db2drs3 db2drs3 asnapply control_server=applydb apply_qual=xyz1 apply_path=c:\apply
If capture_path or apply_path is not specified, it defaults to the path in DB2PATH. If DB2PATH is not set, an error message is issued.
DB2.<instance>.<alias>.CAP.<schema>
Where instance is the DB2 instance service name, alias is the database alias of the capture control server, and schema is the capture schema. The service name created for the above asnscrt -c command is: DB2.DB2-0.SAMPLE.CAP.ASN. The following naming convention is used for the Apply service:
DB2.<instance>.<alias>.APP.<qualifier>
Where instance is the DB2 instance service name, alias is the database alias of the apply control server, and qualifier is the apply qualifier. The service name created for the above asnscrt -a command (apply_qual=xyz1) is: DB2.DB2-0.APPLYDB.APP.XYZ1. The replication service created can be dropped with the following command:
asnsdrop <service-name>
Where, service-name is the service name of Capture or Apply. An example command created by the Replication Center is as follows:
ASNSCRT -A <db2_instance> .\<user_ID> <password> asnapply CONTROL_SERVER=APPLYDB APPLY_QUAL=XYZ1 APPLY_PATH=c:\apply PWDFILE=psw/psw.psw
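The service-name conventions described above can be sketched in shell. The values below are taken from the asnscrt examples in this section; the computed names follow the stated DB2.<instance>.<alias>.CAP|APP.<schema|qualifier> convention:

```shell
instance="DB2-0"      # DB2 instance service name
cap_alias="SAMPLE"    # capture control server alias
cap_schema="ASN"      # capture schema
app_alias="APPLYDB"   # apply control server alias
app_qual="XYZ1"       # apply qualifier (xyz1, upper-cased)

# Capture service: DB2.<instance>.<alias>.CAP.<schema>
cap_service="DB2.${instance}.${cap_alias}.CAP.${cap_schema}"
# Apply service:   DB2.<instance>.<alias>.APP.<qualifier>
app_service="DB2.${instance}.${app_alias}.APP.${app_qual}"
echo "$cap_service"
echo "$app_service"
```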
This version of DB2 DataPropagator V8.1 requires DB2 UDB for z/OS V6 (5645-DB2) with PTF UQ56678 or DB2 UDB for OS/390 and z/OS V7 (5675-DB2) with PTF UQ62179. SMP/E installation of the product is required. The Capture, Apply, Monitor and Trace programs of DB2 DataPropagator V8.1 are z/OS Unix System Services (uss) applications. HFS installation is required for NLS message services and for running the programs listed above from uss. The profile of uss users must include the required load libraries in STEPLIB, as in Example 6-7 on page 257. Uss users must also include /usr/lpp/db2repl_08_01 on the PATH in their profile; the /bin directory under this directory contains the executables as empty files with sticky bits. The ASNAPLX and ASNCAP modules, which read the DB2 log through the IFI, must be placed in an APF-authorized library. DB2 DataPropagator V8.1 must be installed on all systems in a data-sharing group. The Capture, Apply, Monitor and Trace programs can be run from JCL and BPX Batch as well as from uss; see Using JCL to operate Capture and Apply on page 280. If the Replication Center will be used to operate the Capture, Apply or Monitor programs on this subsystem, DAS (DB2 Administration Server) should be installed and configured. Capture, Apply and Monitor are DB2 application programs and must be bound manually. The DBRMs needed to bind these packages come in the installation libraries. In total there are three plans and twenty-five packages, bound under four collections. The plans and packages are listed in Table 6-1: the third column lists the packages that must be bound, and the second column shows how the packages must be grouped into collections. The collection names were selected arbitrarily; you may prefer different collection names according to your site's standards. Each collection must be included in the PKLIST of the bind command for the plan given in the first column.
The plans for DB2 DataPropagator V8.1 are ASNTC810 for Capture, ASNTA810 for Apply and ASNTM810 for Monitor. Binding the ASNLOAD package under the Apply plan (ASNTA810) is only required if you plan to refresh your target tables with the ASNLOAD exit routine by using the loadxit parameter of Apply. If you are running Apply on z/OS, the default sample program calls the DB2 for z/OS cross-loader utility from ASNLOAD. The cross-loader package DSNUGSQL and the DSNUTILS stored procedure package should then also be bound into the Apply plan (ASNTA810).
Apply packages must be bound at the source, apply control and target servers. The Apply plan on the server where you run Apply must include all the Apply packages from the source, control and target servers to and from which it replicates data. It is recommended to use a generic location name in the PKLIST of the Apply plan, so that you do not have to re-bind the Apply plan after every new source or target server is added to the subscription sets Apply processes. If the subsystem on which Capture, Apply or Monitor runs is DB2 UDB for OS/390 and z/OS V7 and the Unicode encoding scheme is selected as the default for your subsystem, then the ENCODING(EBCDIC) option is required in your bind commands. The following are the recommended isolation levels for the packages in each plan: All packages in ASNTC810: ISOLATION(UR), except ASNCCPWK and ASNREG, which are ISOLATION(CS). All packages in ASNTA810: ISOLATION(UR), except ASNLOAD and ASNAFET, which are ISOLATION(CS). All packages in ASNTM810: ISOLATION(UR), except ASNMDATA, which is ISOLATION(CS). The recommended isolation level for all the plans is UR. The KEEPDYNAMIC(YES) bind option is recommended for better performance.
Table 6-1 DB2 DataPropagator V8.1 plans and packages

PLAN       COLLECTION    PACKAGES
ASNTC810   ASNCOMMON     ASNDBCON, ASNMSGT, ASNSQLCF, ASNSQLCZ
           ASNCAPTURE    ASNADMIN, ASNCCPWK, ASNCDINS, ASNCTSQL, ASNCMON,
                         ASNPRUNE, ASNREG, ASNTXS, ASNUOW
ASNTA810   ASNAPPLY      ASNAAPP, ASNAWPN, ASNACMP, ASNAFET, ASNAISO,
                         ASNAMAN, ASNAPPWK, ASNAPRS, ASNLOAD
ASNTM810   ASNMONITOR    ASNMDATA, ASMONIT, ASMNUPDT
Consider the configuration in Figure 6-20 on page 265. Subsystem D7DP is the apply control server and subsystem DSN7 is the target server. D7DP must be configured as an application requester (AR) to DSN7, and the location name of the application server (AS) is required; the location name is STPLEX4A_DSN7. The following insert into the SYSIBM.LOCATIONS table of D7DP is sufficient to configure the communication in this case, because the DSN7 and D7DP subsystems are on the same z/OS system:
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME) VALUES ('STPLEX4A_DSN7', 'LNKDSN7')
Assume D7DP is also propagating to the target server A23BK31Z seen in Figure 6-5 on page 241. Configuration information of A23BK31Z is as follows:
Database alias: SAMPLE
IP address: 9.1.39.85
Port number: 50000
Userid: db2drs3
Password: db2drs3
The AS communication configuration is inserted into the CDB of D7DP:
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, PORT) VALUES ('SAMPLE', 'LNKBK31', '50000')
INSERT INTO SYSIBM.IPNAMES (LINKNAME, SECURITY_OUT, USERNAMES, IPADDR) VALUES ('LNKBK31', 'P', 'O', '9.1.39.85')
INSERT INTO SYSIBM.USERNAMES (TYPE, LINKNAME, AUTHID, NEWAUTHID, PASSWORD) VALUES ('O', 'LNKBK31', 'CAYCI', 'db2drs3', 'db2drs3')
Where CAYCI is a RACF userid on the z/OS system where D7DP runs. The value assigned to the LOCATION column in SYSIBM.LOCATIONS is the location name of the AS (if the AS is DB2 UDB for z/OS) or the database alias (if the AS is DB2 UDB for UNIX and Windows). The LINKNAME is any name chosen to associate the row in the SYSIBM.LOCATIONS table with the row in SYSIBM.IPNAMES; the LINKNAME in SYSIBM.IPNAMES is the link name from SYSIBM.LOCATIONS. The IPADDR column is either a real IP address or a host name. The AUTHID in SYSIBM.USERNAMES is a userid from the AR system that needs to be translated; the NEWAUTHID and PASSWORD columns define the authentication information for the AS. You insert this translation information if the AS is DB2 UDB for UNIX and Windows and AUTHENTICATION(SERVER) is specified in the database manager configuration. Translation is also required for DB2 UDB for z/OS ASs where TCP/IP ALREADY VERIFIED is set to NO.
You must specify apply_qual, control_server and db2_subsystem in your Apply job. Apply reads the IBMSNAP_APPPARMS table to get its defaults. There is one IBMSNAP_CAPPARMS table per capture schema with one row in it, whereas there can be one row in IBMSNAP_APPPARMS per apply qualifier (Apply_qual is the unique index for the IBMSNAP_APPPARMS table). The diagnostic log written by the replication programs into the data path specified by their respective options (apply_path, capture_path, monitor_path) may hit a duplicate file name error if the data path specified is an HLQ, specified by using "//". The replication programs attempt to generate unique names by concatenating the respective options into a file name as follows: For Capture: HLQ.capture_server.capture_schema.CAP.LOG
For Apply: HLQ.control_server.apply_qual.APP.LOG. For Monitor: HLQ.monitor_server.monitor_qualifier.MON.LOG. The generated file names must conform to z/OS data set name rules: data set paths cannot include names longer than 8 characters, and the names cannot contain underscores. The replication programs truncate anything longer than 8 characters down to 8, and they also remove any underscores they find in the name. Doing so allows the z/OS datasets to be created, but it also prevents unique names from being generated. Use the minimum number of characters for the keywords that will differentiate them from other keywords; for example, capture_sc can be used instead of capture_schema.
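The truncation and underscore-stripping rules can be sketched as a small shell function. This is a sketch of the rule, not the programs' actual code, and the sample values are hypothetical:

```shell
# Normalize one keyword value into a valid z/OS data set qualifier:
# remove underscores, then truncate to 8 characters.
to_qualifier() {
  printf '%s' "$1" | tr -d '_' | cut -c1-8
}

to_qualifier "capture_schema_long"   # hypothetical value -> "captures"
```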
Important: It is recommended to exploit the parameter tables IBMSNAP_CAPPARMS and IBMSNAP_APPPARMS to specify the frequently used parameters.
Although the DataPropagator programs can be run from JCL, they are still uss programs and use HFS as the default file system. If capture_path or apply_path does not exist in the parameter table and is not specified during start-up, the home directory of the user who submitted the JCL, or of the RACF userid of the started task, is assigned as the path for file I/O. You can also give a directory that exists in HFS as an absolute capture_path or apply_path. If you are using JCL, you may prefer not to use the uss shell to browse the log created by Capture or Apply. It is possible to forward the file I/O to z/OS datasets: if the capture_path or apply_path starts with //, the Capture and Apply programs direct their file I/O to sequential datasets on z/OS. The sequential datasets are created with system defaults. The // in the path indicates that the HLQ of the dataset will be the userid that submitted the start Capture or start Apply job.
Important: It is possible to create the log files of Capture and Apply as sequential datasets of z/OS. You append // in front of the capture_path or apply_path, where // stands for the userid.
If you do not want the userid as the HLQ of the dataset, you must add a single quotation mark after the //. Assume that capture_path is set to //SYSADM; the dataset will be created with the following naming convention: <userid>.SYSADM.<capture_server>.<capture_schema>.CAP.LOG.
If capture_path is set to //'SYSADM, the dataset will be created with the following naming convention: SYSADM.<capture_server>.<capture_schema>.CAP.LOG. The sample JCLs used in this section start Capture and Apply for the replication configuration seen in Figure 6-20 on page 265. Example 6-11 is a sample JCL for starting Capture as a batch job on z/OS. The Capture control server is D7DP, which is a DB2 subsystem, and the capture schema is CAP1. Some of the defaults were altered with the following SQL statements in the IBMSNAP_CAPPARMS table before submitting this JCL.
UPDATE CAP1.IBMSNAP_CAPPARMS SET CAPTURE_PATH='//SY4A'
UPDATE CAP1.IBMSNAP_CAPPARMS SET STARTMODE='WARMSI'
The log file for Capture of Example 6-11 is created as a sequential dataset named CAYCI.SY4A.D7DP.CAP1.CAP.LOG. You can browse this sequential dataset from ISPF.
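The way the // prefix in capture_path selects the data set HLQ can be sketched in shell. The userid, server, and schema values are the ones from this example; the function mimics the naming rule, it is not the product's code:

```shell
userid="CAYCI"           # userid that submitted the job
capture_server="D7DP"
capture_schema="CAP1"

# Derive the Capture log data set name from a //-style capture_path.
dsn_for_capture_path() {
  case "$1" in
    //\'*)  # //'SYSADM : absolute HLQ, no userid prefix
      hlq="${1#//\'}"
      printf '%s.%s.%s.CAP.LOG\n' "$hlq" "$capture_server" "$capture_schema" ;;
    //*)    # //SYSADM : relative, prefixed with the submitting userid
      hlq="${1#//}"
      printf '%s.%s.%s.%s.CAP.LOG\n' "$userid" "$hlq" "$capture_server" "$capture_schema" ;;
  esac
}

dsn_for_capture_path "//SY4A"   # the capture_path used in this example
```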
Example 6-11 Sample JCL for starting Capture
//CAPCAP1 JOB USER=CAYCI,NOTIFY=CAYCI,
//         MSGCLASS=H,MSGLEVEL=(1,1),
//         REGION=0M,TIME=50
/*JOBPARM SYSAFF=SY4A
//ASNCAP   EXEC PGM=ASNCAP,
//         PARM='ENVAR("LANG=en_US")/CAPTURE_SERVER=D7DP CAPTURE_SC=CAP1'
//STEPLIB  DD DISP=SHR,DSN=DPROPR.V810.BASE.TESTLIB,
//         UNIT=SYSDA,VOL=SER=RMS002
//         DD DISP=SHR,DSN=SYS1.SCEERUN
//         DD DISP=SHR,DSN=DSN.D7DP.SDSNLOAD
//CAPSPILL DD DSN=&&CAPSPL,DISP=(NEW,DELETE,DELETE),
//         UNIT=SYSDA,SPACE=(CYL,(50,100)),
//         DCB=(RECFM=VB,BLKSIZE=6404)
//CEEDUMP  DD DUMMY
//SYSTERM  DD SYSOUT=*
//SYSUDUMP DD DUMMY
//SYSPRINT DD SYSOUT=*
Capture keeps the uncommitted changes in memory until the memory_limit Capture parameter is reached. When this limit is reached, Capture spills to VIO using default allocations. You can override the default VIO allocation of Capture by providing a CAPSPILL DD card in your Capture job; you can assign CAPSPILL DD to VIO or SYSDA. A sample JCL is provided in Example 6-12 to start Apply for the replication configuration seen in Figure 6-20 on page 265. Before starting Apply, the following insert statement is done to direct the file I/O of Apply to a sequential dataset:
In the sample JCL, the control_server is the location name and db2_subsystem is the subsystem id. Apply_qual identifies the subscription sets that will be processed by this Apply program. The log file allocation is similar to Capture: if an apply_path that opens the log as a sequential dataset is not specified, the log file goes to HFS, under the home directory of the userid who submitted the job or of the RACF userid of the started task. The log file for Apply of Example 6-12 is created as a sequential dataset named CAYCI.SY4A.D7DP.APY1.APP.LOG. The spill file specifications are provided with the ASNASPL DD card. This DD card in the Apply JCL is optional; if it does not exist in the JCL, the default unit for the spill file is VIO on DB2 UDB for z/OS. If you want to direct it to disk or manage the allocation parameters, you must provide a DD statement as in Example 6-12. Several new parameters have been added to Capture in this version. All the new parameters can be used by Capture running on DB2 UDB for z/OS, and some of the old parameters that were not supported on DB2 UDB for z/OS are now supported on this platform too. There are no positional parameters anymore. The parameters shown in bold in the first column are the old positional parameters. The old parameters, the corresponding new parameters, and the defaults for the new parameters for Capture are shown in Table 6-2.
Table 6-2 Old and new Capture parameters

OLD PARAMETER                      NEW PARAMETER                               DEFAULT FOR NEW
TERM (default) / NOTERM            term=y / term=n                             term=y
WARM (default) / WARMNS / COLD     startmode=warmns / warmsa / warmsi / cold   startmode=warmns
PRUNE (default) / NOPRUNE          autoprune=y / autoprune=n                   autoprune=y
NOTRACE (default) / TRACE          N/A                                         N/A
SLEEP (n) (default=0)              sleep_interval                              sleep_interval=5
ALLCHG / CHGONLY                   N/A as Capture parameter; specified
                                   when registering the source                 N/A
N/A                                capture_path                                home dir. of user in HFS
N/A                                capture_schema                              capture_schema=ASN
N/A                                autostop=n / autostop=y                     autostop=n
N/A                                commit_interval                             commit_interval=30
N/A                                lag_limit                                   lag_limit=10080
N/A                                logreuse=n / logreuse=y                     logreuse=n
N/A                                logstdout=n / logstdout=y                   logstdout=n
N/A                                memory_limit                                memory_limit=32
N/A                                monitor_limit                               monitor_limit=10080
N/A                                monitor_interval                            monitor_interval=300
N/A                                prune_interval                              prune_interval=300
N/A                                retention_limit                             retention_limit=10080
N/A                                trace_limit                                 trace_limit=10080
The old parameters, the corresponding new parameters, and the defaults for the new parameters are shown in Table 6-3. There are no positional new parameters; the old parameters that were positional are shown in bold.
Table 6-3 Old and new Apply parameters

OLD PARAMETER                        NEW PARAMETER                    DEFAULT FOR NEW
Apply_qual                           apply_qual                       N/A
DB2_subsystem_name                   db2_subsystem                    N/A
Control_server_name                  control_server                   N/A
LOADX / NOLOADX (default)            loadxit=y / loadxit=n            loadxit=n
MEM (default) / DISK                 spillfile=mem / spillfile=disk   spillfile=mem
INAMSG (default) / NOINAMSG          inamsg=y / inamsg=n              inamsg=y
NOTIFY / NONOTIFY (default)          notify=y / notify=n              notify=n
SLEEP / NOSLEEP (default)            sleep=y / sleep=n                sleep=y
DELAY(n) (default=6)                 delay                            delay=6
ERRWAIT(n) (default=300)             errwait                          errwait=300
NOTRC (default) / TRCERR / TRCFLOW   N/A                              N/A
N/A                                  apply_path                       home dir. of user in HFS
N/A                                  copyonce=n / copyonce=y          copyonce=n
N/A                                  logreuse=n / logreuse=y          logreuse=n
N/A                                  logstdout=n / logstdout=y        logstdout=n
N/A                                  opt4one=n / opt4one=y            opt4one=n
N/A                                  trlreuse=n / trlreuse=y          trlreuse=n
N/A                                  term=y / term=n                  term=y
N/A                                  sqlerrorcontinue=n / =y          sqlerrorcontinue=n
The Capture or Apply jobs started with JCL can be stopped in an orderly way with the modify command, as follows: /f capcap1,stop and /f appapy1,stop, where capcap1 and appapy1 are the jobnames in the JOB card.
You can use operating system commands to display the Capture and Apply processes. See Manage processes and threads on page 290 for the alternative commands that display information about processes and threads. If Capture and Apply started successfully but data propagation is not performed according to the replication definitions, the problem must be investigated based on the error messages in the log and the contents of the Capture and Apply control tables. The command to analyze the control information is explained in Analyzing the control tables on page 294. If there are performance problems, you can collect and analyze trace data with the asntrc command. Refer to 10.4.13, Time spent on each of Applys suboperations on page 460, for how to use the asntrc data for performance problems.
Run ANZDPR to analyze your replication configuration and help you determine the problem with your Capture or Apply program. See Analyzing the control tables on page 294.
There must be at least one active subscription set for the Apply qualifier. The subscription set must contain at least one of the following: a subscription-set member, an SQL statement, or a procedure. Refer to Chapter 2, Setting up for replication, in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121, for the authorizations required for operating Apply.
2. Add userid and password combinations for the servers: This operation is repeated for every database alias, or DB2 subsystem on z/OS, that Apply connects to during the apply cycle and that requires authentication.
asnpwd add alias sample id db2drs3 password db2drs3
asnpwd add alias stpdsn7 id cayci password cayci using os390pwd.aut
The command options can be queried from the command line by entering
asnpwd ?
You can find the complete syntax of the command in Chapter 17, System commands for replication (UNIX, Windows, z/OS), in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
$ db2 list applications
Auth Id  Application Name  Appl. Handle  Application Id                  DB Name  # of Agents
-------- ----------------  ------------  ------------------------------  -------  -----------
CAYCI    asnapply          14                                                     1
CAYCI    asnapply          13                                                     1
CAYCI    asnapply          12                                                     1
CAYCI    asnapply          10                                                     1

C:\>db2 list applications
Auth Id  Application Name  Appl. Handle  Application Id                  DB Name  # of Agents
-------- ----------------  ------------  ------------------------------  -------  -----------
DB2DRS3  asnapply          16            G90126B2.O01B.083847002625      SAMPLE   1

$ ps -ef | grep -i 32896
cayci 32896 27922 0 17:26:19 pts/0 0:00 asnapply control_server=aixsamp apply_qual=APY1
The ps command displays the Apply process. This is a UNIX command and cannot be issued from Windows.
DSNV401I  #D7DP- DISPLAY THREAD REPORT FOLLOWS
DSNV402I  #D7DP- ACTIVE THREADS
NAME     ST A   REQ  ID            AUTHID   PLAN     ASID  TOKEN
D7DP     RA *     0  028.DBAA 02   SYSOPR            040A     39
 V445-G91E4A2B.G4A3.B82EA9FFC629=39 ACCESSING DATA FOR 9.30.74.43
SERVER   RA *     5  db2bp.exe     CAYCI    DISTSERV 040A    626
 V437-WORKSTATION=A23BK31Z, USERID=cayci, APPLICATION NAME=db2bp.exe
 V445-G9012755.H004.005A06182424=626 ACCESSING DATA FOR 9.1.39.85
DB2CALL  T       24  CAPCAP1       CAYCI    ASNTC810 00AF    627
DB2CALL  T        5  CAPCAP1       CAYCI    ASNTC810 00AF    628
DB2CALL  T     3679  CAPCAP1       CAYCI    ASNTC810 00AF    629
DB2CALL  T     2556  CAPCAP1       CAYCI    ASNTC810 00AF    630
DB2CALL  T       28  CAPCAP1       CAYCI    ASNTC810 00AF    631

-D OMVS,U=CAYCI
BPXO040I 11.38.56 DISPLAY OMVS 395
OMVS     000D ACTIVE          OMVS=(00,4A)
USER     JOBNAME  ASID       PID     PPID  STATE   START     CT_SECS
CAYCI    CAPCAP1  00AF  16777523        1  HRI---  11.11.54   131.39
  LATCHWAITPID=         0  CMD=ASNCAP

-D OMVS,PID=16777523
BPXO040I 11.49.23 DISPLAY OMVS 552
OMVS     000D ACTIVE          OMVS=(00,4A)
USER     JOBNAME  ASID       PID     PPID  STATE   START     CT_SECS
CAYCI    CAPCAP1  00AF  16777523        1  HRI---  11.11.54   131.39
  LATCHWAITPID=         0  CMD=ASNCAP
THREAD_ID         TCB@      PRI_JOB  USERNAME  ACC_TIME  SC   STATE
1C55A8D000000000  007AAE88                         .158  STA  YU
1C5677D000000001  007AA660                         .008  SWT  JY
1C4C4F6000000002  007AD058                      122.812  SPM  JY
1C4CA9F000000003  007B5088                         .191  STA  JY
1C4D1E6000000004  007AD2E8                        7.020  STA  JY

CAYCI:../CAYCI:> ps -ef
UID       PID      PPID  C  STIME     TTY        TIME  CMD
CAYCI     122         1  -  12:15:23  ?          0:02  OMVS
CAYCI     140       184  -  12:16:15  ttyp0030   0:00  ps -ef
CAYCI     184       122  -  12:15:23  ttyp0030   0:02  -sh
CAYCI     16777523    1  -  11:11:55  ?          2:11  ASNCAP
The same commands are used in Example 6-15 to display Apply. Apply has four threads running on the Apply server; three of them are the HOLDL, WORKER, and ADMIN threads, which are explained in Querying the Status of Capture and Apply on page 248.
Example 6-15 Display Apply threads on z/OS
#D7DP- DISPLAY THREAD(*)
DSNV401I  #D7DP- DISPLAY THREAD REPORT FOLLOWS
DSNV402I  #D7DP- ACTIVE THREADS
NAME     ST A   REQ ID           AUTHID   PLAN     ASID TOKEN
DB2CALL  T     100 APPAPY1      CAYCI    ASNTA810 00BD   632
DB2CALL  T       3 APPAPY1      CAYCI    ASNTA810 00BD   633
DB2CALL  T      13 APPAPY1      CAYCI    ASNTA810 00BD   634
DB2CALL  T     135 APPAPY1      CAYCI    ASNTA810 00BD   635

-D OMVS,ASID=00BD
BPXO040I 13.57.40 DISPLAY OMVS 801
OMVS     000D ACTIVE         OMVS=(00,4A)
USER     JOBNAME  ASID        PID       PPID  STATE    START     CT_SECS
CAYCI    APPAPY1  00BD   33554615          1  HRI---   13.49.32      .27
  LATCHWAITPID=         0 CMD=ASNAPPLY

-D OMVS,PID=33554615
BPXO040I 13.58.44 DISPLAY OMVS 813
OMVS     000D ACTIVE         OMVS=(00,4A)
USER     JOBNAME  ASID        PID       PPID  STATE    START     CT_SECS
CAYCI    APPAPY1  00BD   33554615          1  HRI---   13.49.32      .28
  LATCHWAITPID=         0 CMD=ASNAPPLY
THREAD_ID         TCB@     PRI_JOB USERNAME  ACC_TIME SC  STATE
1C4D6C0000000000  007CA718                       .171 STA YU
1D835C3000000001  007ABE88 OMVS                  .003 SWT JK
1D8E945000000002  007AB798                       .015 CLO JY
1E25671000000003  007AAE88 OMVS                  .061 SLP JS

CAYCI:../CAYCI:> ps -ef
     UID       PID   PPID  C    STIME TTY       TIME CMD
   CAYCI  33554615      1  - 13:49:32 ?         0:00 ASNAPPLY
It is also possible to locate the associated IPC queue from the Apply log. The ipcs command in Example 6-17, issued on AIX, displays the IPC queue of Apply.
Example 6-17 IPC queue of Apply on AIX
2002-09-06-17.26.22.460339 <Asnenv:setEnvIpcQRcvHdl> ASN8008D "Apply" : "APY1" :
"Created" IPC queue with key(s) "(0x30000084)".

$ ipcs | grep 0x30000084
q  917553  0x30000084  --rw-rw----  cayci  staff
In the example, detailed analysis of the databases SAMPLE and AIXSAMP is requested; the first database is the Capture control server and the second is the Apply control server. The output file (a020906.htm) produced by the command contains the following:
- Contents of the capture control tables for all capture schemas from SAMPLE
- Contents of the apply control tables from AIXSAMP
- CD table column analysis
- List of Capture and Apply packages and some of the bind options (such as isolation level) from the DB2 catalog
- Database alias connection summary
- Results of queries run against the control tables to diagnose the cause of existing Capture and Apply problems
- Statistics on the number of rows that are eligible for pruning in the CD and UOW tables
- Subset of table and table space statistics of the control tables from the DB2 catalog
- List of (registered) tables with insufficient indexes
- Results of queries run against the control tables to detect subscription definition errors
- Inconsistencies detected in the control tables
Several options exist for this command that limit or increase the amount of information gathered. Consider using this command with the appropriate options based on your requirements.
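As a sketch, an invocation matching the example in this section might look like the following; the -db parameter names the Capture and Apply control servers, and additional options documented in the Replication Guide can be added as needed:

```shell
# Analyze the capture control server SAMPLE and the apply control server AIXSAMP
asnanalyze -db SAMPLE AIXSAMP
```

The command produces an HTML report (a020906.htm in the example above) in the current directory.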
Note: iSeries users can run this command from a workstation to analyze the Capture and Apply control tables on the iSeries. In the preceding asnanalyze command example, the -db parameter could be the iSeries RDB name. Make sure you have created the password table on the workstation with the user ID and password needed to access the iSeries. See the asnpwd command in Chapter 17 of IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121 for creating the password table.
The ANZDPR command on the iSeries can also analyze your iSeries Capture and Apply control tables and produce an HTML output file, similar to the asnanalyze command described in this section. You can find details of this command in Chapter 18 of IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
The shipped defaults for the capture_server parameter of Capture and the control_server parameter of Apply are not hard-coded on DB2 UDB for UNIX and Windows; they default to the DB2DBDFT registry variable. The capture_path and apply_path parameters also have no shipped defaults; if they are not set at start-up or specified in the parameter table, they are assigned a value depending on how the programs are run:
- If the programs are run from the Replication Center, the working directory shown in the Run Now or Save Command window is assigned.
- If the programs are run from the command line, the current directory is the path.
- If run as a Windows service, the path in DB2PATH (which defaults to the DB2 installation directory) is assigned.
It is possible to update Capture parameter values in IBMSNAP_CAPPARMS from the Replication Center. The assigned values of the Capture parameters can also be changed dynamically from the Replication Center.
Attention: Updates made to IBMSNAP_CAPPARMS will not take effect until Capture is recycled, because Capture reads this table only during start-up.
If you want your change to take effect without stopping Capture, from the Replication Center follow the path Operations -> Capture Control Servers. In the contents pane, select and right-click the capture control server. From the option list, select Change Operational Parameters. Most Capture parameters, except capture_schema, capture_server, capture_path, startmode, and logstdout, can be changed dynamically. The changes take effect immediately but are lost after Capture stops.
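The same dynamic change can be made from the command line with the chgparms subcommand of asnccmd; the following is a sketch, with the server, schema, and new value as placeholders:

```shell
# Dynamically change the Capture monitor interval to 600 seconds
asnccmd capture_server=SAMPLE capture_schema=ASN chgparms monitor_interval=600
```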
Attention: Dynamic changes made to the parameters are not updated to IBMSNAP_CAPPARMS and will be lost when Capture is stopped.
Some of the parameters define the working environment of Capture: logreuse, logstdout, trace_limit, term, monitor_limit, and monitor_interval can be included in this group. They can be set based on your requirements and are described briefly below:
- logreuse: Default is n. If logreuse is not requested, Capture appends to the log file, even after a restart. If logreuse is requested, the previous information in the log is deleted during start-up.
- logstdout: Default is n. The messages in the log file are also directed to standard output if logstdout is set to y.
- trace_limit: Default is 10080 (minutes). This is the prune threshold for rows in the IBMSNAP_CAPTRACE table.
- monitor_interval: Default is 300 (seconds). This is the interval at which Capture writes to the IBMSNAP_CAPMON table.
- monitor_limit: Default is 10080 (minutes). This is the prune threshold for rows in the IBMSNAP_CAPMON table.
Specific use parameters: the autostop parameter can be included in this group. The default for autostop is n. If autostop is requested, Capture stops when it reaches the end of the log. Mobile users may benefit from autostop; otherwise it should be set to n.
- apply_qual
- control_server
- db2_subsystem (valid only for z/OS)
- apply_path
- pwdfile (valid only for UNIX and Windows)
These parameters are explained in detail under the topics on how to start Apply. The following are performance-related parameters, which are described in 3.4.5, Control tables described on page 129 and explained in detail in the performance-related topics in 10.4.10, Apply operations and Subscription set parameters on page 452:
- delay
- opt4one
- spillfile
There are also Apply parameters related to the working environment of the program: logreuse, logstdout, inamsg, term, trlreuse, errorwait, and sqlerrorcontinue can be combined in that group. These parameters can be set according to your site's resources (such as disk) and your requirements. They are described in 3.4.5, Control tables described on page 129.
The following can be grouped under special use parameters:
- copyonce: Default is n. If it is set to y, Apply processes the subscription sets of the apply qualifier once and stops.
- notify: Default is n. If it is set to y, Apply calls the ASNDONE exit routine after every successful subscription set processing.
- sleep: Default is y. The default behavior of Apply is to process the subscription sets until no more data is left to replicate; it then sleeps for a certain period of time until it wakes up and starts processing again. When sleep is set to n, Apply terminates when no data is left for replication. This parameter is suitable for mobile users who periodically connect to the Capture server and process the subscription sets all at once.
- loadxit: This parameter is explained in 6.4, Using ASNLOAD for the initial load on page 302.
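As a sketch, a mobile user might start Apply with copyonce so that it processes each subscription set once and then terminates; the server name, apply qualifier, and path below are placeholders:

```shell
# Process the subscription sets for APY1 once, then stop
asnapply control_server=AIXSAMP apply_qual=APY1 apply_path=/home/repl/apply copyonce=y
```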
Important: It is recommended to keep the number of rows in the control tables to a minimum for the efficiency of the Apply program. Capture, with autoprune set to y, prunes the control tables at each prune_interval and guarantees that the control tables keep only the data necessary for replication.
Automatic pruning is highly recommended, but if you have reasons to keep data in the control tables longer than replication requires, there is a command to prune data from the control tables on demand:
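A sketch of the on-demand prune command; server and schema are placeholders, as described in the paragraph that follows:

```shell
asnccmd capture_server=server capture_schema=schema prune
```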
Here, server is the database alias if the capture control server is a DB2 UDB for UNIX and Windows database, or the subsystem name if the capture control server is on DB2 UDB for z/OS. This command prunes data from the capture control tables of a particular schema. The operation can also be done from the Replication Center: follow the path Operations -> Capture Control Servers; in the contents pane, select and right-click the capture control server; from the option list, select Prune Capture Control Tables. You will be prompted for the capture schema. The command prepared by the Replication Center is displayed in the Run Now or Save Command window. If you are manually pruning control tables, the information about the number of eligible rows in the CD and UOW tables in the asnanalyze report can help you determine whether pruning is needed.
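Changed registration attributes can be picked up by reinitializing Capture with the reinit subcommand; a sketch, with server and schema as placeholders:

```shell
asnccmd capture_server=server capture_schema=schema reinit
```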
Here, server is the database alias if the capture control server is a DB2 UDB for UNIX and Windows database, or the subsystem name if the capture control server is on DB2 UDB for z/OS. This operation can also be done from the Replication Center: follow the path Operations -> Capture Control Servers; in the contents pane, select and right-click the capture control server; from the option list, select Reinitialize Capture. You will be prompted for the capture schema. The command prepared by the Replication Center is displayed in the Run Now or Save Command window. The following registration options can be altered:
- Row-capture rule
- Before-image prefix
- Stop Capture on error
- Allow full refresh of target table
- Capture updates as pairs of deletes and inserts
- Capture changes from replica table
- Conflict detection level
The message ASN0023I is recorded in the Capture log, indicating that the reinitialization command was issued. If a new registration is made, Capture becomes aware of this new source when Apply signals it during the subscription cycle and reads the registration from IBMSNAP_REGISTER; reinitialization is not necessary in this case. Reinitialization with the reinit command is not meaningful when a registration is deleted or the CD table is altered. Refer to 8.1.3, Removing registrations on page 355 and 8.2.7, Adding a new column to a source and target table on page 369 for how these operations are accomplished.
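Capture can also be suspended and resumed from the command line; a sketch, with server and schema as placeholders explained below:

```shell
asnccmd capture_server=server capture_schema=schema suspend
asnccmd capture_server=server capture_schema=schema resume
```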
where server is the database alias of the capture control server if it is a DB2 UDB for UNIX and Windows database, or the subsystem name if the capture control server is on DB2 UDB for z/OS. These operations can also be done from the Replication Center: follow the path Operations -> Capture Control Servers; in the contents pane, select and right-click the capture control server; from the option list, select either Suspend Capture or Resume Capture. You will be prompted for the capture schema. The command prepared by the Replication Center is displayed in the Run Now or Save Command window. These commands result in ASN0028I (for suspend) and ASN0029I (for resume) messages being written to the Capture log. You can detect when Capture was suspended and resumed by locating these messages in the log.
1: Stands for "do not call ASNLOAD for this member." If loadxit is set for Apply, the initial load of all members of all subscription sets that Apply processes is done by calling ASNLOAD. By setting the LOADX_TYPE column to 1 for a member, you can exclude that member from being initially loaded by ASNLOAD.
2: Is for a user-defined load. Set this value if you have altered ASNLOAD based on your site requirements.
Values 3, 4, and 5 select a specific load method for ASNLOAD to use.
If LOADX_TYPE is null, the method is selected by ASNLOAD. If the crossloader is the preferred method and the source is a DB2 database remote from the target, you must create a nickname for the source on the target server and record the nickname's schema and name in the LOADX_SRC_N_OWNER and LOADX_SRC_N_TABLE columns of IBMSNAP_SUBS_MEMBR. This is not required if the source is not a DB2 source.
Important: You can specify the LOAD type of ASNLOAD by updating the LOADX_TYPE column in ASN.IBMSNAP_SUBS_MEMBR. This column cannot currently be updated using the Replication Center. That function will be added in a future fixpack.
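As a sketch, LOADX_TYPE can be set with SQL like the following; the apply qualifier, set name, and source table names are hypothetical:

```sql
-- Exclude one member (hypothetical names) from the ASNLOAD initial load
UPDATE ASN.IBMSNAP_SUBS_MEMBR
   SET LOADX_TYPE = 1
 WHERE APPLY_QUAL   = 'APY1'
   AND SET_NAME     = 'SET1'
   AND SOURCE_OWNER = 'SRC'
   AND SOURCE_TABLE = 'EMPLOYEE'
```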
Configuration File
It is possible to pass parameters to ASNLOAD through a configuration file (asnload.ini). This file is in samples\repl under the installation directory. If you want to pass parameters to ASNLOAD, you must copy the configuration file to apply_path and update it. The parameters relate to the utility statements prepared by ASNLOAD; UID and PWD, if specified, are used on the connect statement. The possible keywords are shown in Example 6-18. A comma at the beginning of a line makes that line a comment. Parameters grouped under COMMON are valid for all databases. The parameter values for a certain database are grouped under the database alias in square brackets; database assignments override the assignments made under COMMON. If a value is assigned to a parameter neither under the database nor under COMMON, the defaults in the ASNLOAD program are used.
The database alias may be the source, the target, or the Apply control server. The following can be specified:
- A user ID and password for connect. If not specified for a database alias, the connection is attempted without a user ID and password.
- A backup copy of the target database can be generated while loading, if the target database is enabled for forward recovery (LOGRETAIN or USEREXIT set to ON in the database configuration).
- If the replication source has LOB columns, LOBPATH, LOBFILE, and the limit on the number of LOB files can be specified.
- DATA_BUFFER_SIZE, DISK_PARALLELISM, and CPU_PARALLELISM can be coded to improve the performance of the utility.
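A sketch of asnload.ini entries using the keywords described above; the database alias, credentials, and buffer size are placeholders, not defaults:

```ini
[COMMON]
UID=repluser
PWD=secret

[TGTDB]
UID=tgtuser
PWD=tgtsecret
DATA_BUFFER_SIZE=64
```

Here the [TGTDB] section overrides the [COMMON] values for that one database alias.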
Important: Apply searches for the configuration file under the apply_path. If you want to pass parameters, copy asnload.ini to apply_path.
Message files
Message files are created by ASNLOAD under apply_path. Assume the apply qualifier used is APY1. The following message files will be created:
- asnaIMPT.msg, if IMPORT is used as the method
- asnaLOAD.msg, if LOAD is used as the method
Customizing ASNLOAD
It is possible to customize ASNLOAD, because the source is also shipped with the product. The source (asnload.smp) is in the samples\repl directory under the installation directory. It is a C program with SQL calls. The following steps can be used for customization:
1. Review the instructions on how to modify the source, which are in the sample itself.
2. Rename the source (with an .sqc extension) and modify it according to your site's needs.
3. Precompile, compile, and link-edit. The program must be placed in a directory in the PATH.
4. Set LOADX_TYPE to 2 in IBMSNAP_SUBS_MEMBR and start Apply with the loadxit parameter.
When the ASNLOAD RPG program is successfully created, you need to create an SQL package using the CRTSQLPKG command:
Example 6-20 CRTSQLPKG command
CRTSQLPKG PGM(libraryname/ASNLOAD) RDB(DB2400D)
The RDB parameter is the remote Capture control server name. The library name is the same library where the ASNLOAD object was created, as specified in CRTSQLRPGI; the package will be located at the Capture control server specified in the RDB parameter. Therefore, you need to create this library first, using the CRTLIB command. To use the ASNLOAD program, specify the ASNLOAD program name and library name in the FULLREFPGM parameter when starting the Apply program from either:
- the STRDPRAPY CL command; see Figure 6-24 on page 269
- the Replication Center; see Figure 6-7 on page 247
Chapter 7.
Analyzer and DB2 replication tracing (asntrc). For the scenarios in this chapter, the following environments were used primarily:
- Windows NT / DB2 ESE V8 Open Beta 2
- AIX 4.3.3 / DB2 ESE V8 Open Beta 2
- Red Hat Linux 7.3 / DB2 ESE V8 Open Beta 2
This chapter assumes that you have basic DB2 database administration and replication knowledge.
Attention: On iSeries and OS/400, check the status with the Work with Subsystem Jobs (WRKSBSJOB subsystem) command, where subsystem is the name of the subsystem. In most cases the subsystem is QZSNDPR, unless you created your own subsystem description.
In the list of running jobs, look for the jobs you're interested in. The journal job is named after the journal to which it was assigned. If a job is not there, use the Work with Submitted Jobs (WRKSBMJOB) command or the Work with Job (WRKJOB) command to locate the job. Check the job's log (job log) to verify that it completed successfully or to determine the cause of failure.
Capture Messages
The Capture Messages window displays the Capture program's messages. These are rows in the capture control table IBMSNAP_CAPTRACE. The amount of data contained in this table is limited by the Capture configuration parameter TRACE_LIMIT, seven days by default. To view the Capture Messages window:
1. Within the Replication Center, double-click Operations so that the folders for Capture Control Servers, Apply Control Servers, and Monitor Control Servers are showing.
2. Select Capture Control Servers.
3. In the right pane, select the control server whose Capture status you want to see.
4. From the menu bar at the top of the window, choose Selected -> Show Capture Messages.
Figure 7-1 shows the Capture Messages window after selecting Retrieve.
You will likely have to scroll right to see the whole message. Clicking any message highlights it, making it easier to read as you scroll. This is also a good opportunity to use the knowledge gained in DB2 on page 334. None of the messages shown are current. By default, From is set to Specify date and time, and selecting Retrieve shows the previous 24 hours. If there were another capture schema on this capture server, we could retrieve its messages using the Capture Schema drop-down menu. It is often useful to select All for Messages to display in order to view all of the most recent messages; they will likely be informational messages.
Apply Report
The Apply Report window displays the Apply program's messages. These are rows in the apply control table IBMSNAP_APPLYTRAIL. The amount of data contained in this table is not limited; you will want to delete rows manually on a regular basis. To view the Apply program's messages:
1. Within the Replication Center, double-click Operations so that the folders for Capture Control Servers, Apply Control Servers, and Monitor Control Servers are showing.
2. Double-click Apply Control Servers.
3. Select the control server whose Apply status you want to see.
4. Select Apply Qualifiers.
5. In the right pane, select the apply qualifier you are interested in.
6. From the menu bar at the top of the window, choose Selected -> Show Apply Report.
Figure 7-2 shows the Apply Report window.
The usefulness of this window is in the selection of Information to display. The information is divided by subscription sets, and you can select All subscription sets, Failed subscription sets, Successful subscription sets, or the default of Error summary per failed subscription set. Clicking Refresh displays the set information for that selection. This may result in a message box with ASN1560E and the quoted string SQL0100W No row was
found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. Remember, the default is to show Error summary information; if there haven't been any errors, there won't be any records to display. If you select All subscription sets or Successful subscription sets in the Information to display field and click Refresh, you will see whether there are any records available for successful replications. If you still get the ASN1560E message window containing SQL0100W, then Apply wasn't running between the From and To times you selected in the Range of time to display section of the Apply Report window. The timestamp in the Last Run column of the Apply Report indicates when a record in the report was created. To see all the information, including error codes, from the IBMSNAP_APPLYTRAIL record represented by a record in the Apply Report, highlight the record, right-click, and select View. This is an example of an APPLYTRAIL record displayed from the Replication Center's Apply Report.
Query Status
This status tool was explained in detail in Querying the Status of Capture and Apply on page 248, including the use of the status option of asnccmd and asnacmd. The process threads are also described near that section. We find it generally more useful to look at the Capture and Apply logs.
When you try to query status, stop, or otherwise collect information from Capture or Apply, you may get message ASN0506E for the command, capture schema, or apply qualifier. The ASN0506E message describes not being able to attach to the replication communications message queue. The cause is very likely that the program is not running.
A possible cause for the repeated full-refreshes is that the trigger and procedure on the ibmsnap_reg_synch table in Informix were not created or are not functioning properly. The reg_synch trigger and procedure are created when the Capture Control Tables are created in Informix. If you saved the
SQL that created the Capture control tables in Informix, you should be able to find the CREATE PROCEDURE capschema."ibmsnap_synch_proc" statement after the CREATE TABLE and CREATE UNIQUE INDEX statements for the ibmsnap_reg_synch table. The text for the CREATE PROCEDURE ibmsnap_synch_proc is long, but immediately following it you should find the CREATE TRIGGER capschema."reg_synch_trigger" statement. If you're certain that the non-DB2 source table is being updated, you can query the corresponding CCD table to see if the Capture triggers are inserting records. Keep in mind that if there is an active subscription set involving the non-DB2 source table, Apply, at the end of each cycle, causes the trigger/procedure on the ibmsnap_pruncntl table to delete records that have been replicated from the CCD table. The Apply Throughput Report for a subscription set that is replicating from a source table will show the number of inserts, updates, and deletes performed at the target tables for the set. These same numbers also reflect the number of source table inserts, updates, and deletes that were inserted into the CCD tables by the Capture triggers on the source tables. If you want to obtain this information without going through the Replication Center, you can use SQL on the ASN.IBMSNAP_APPLYTRAIL table at the Apply control server. You need to know the apply qualifier and set name for the set. Then use the following SQL:
SELECT LASTSUCCESS, EFFECTIVE_MEMBERS, SET_INSERTED, SET_DELETED, SET_UPDATED
  FROM ASN.IBMSNAP_APPLYTRAIL
 WHERE APPLY_QUAL = 'apply_qual' AND SET_NAME = 'set_name'
The ASN.IBMSNAP_APPLYTRAIL table is described in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1 , SC27-1121.
Note: When replicating from a non-DB2 source, the Apply End-to-End Latency report is not valid. The synchtime at the non-DB2 source server that Apply uses to calculate Captures latency is actually set by Apply itself, though indirectly, just before Apply reads this synchtime value. Just before Apply reads the Synchpoint and Synchtime in the ibmsnap_register table in the non-DB2 server, Apply updates the ibmsnap_reg_synch table; the reg_synch_trigger on this table calls the ibmsnap_synch_proc procedure which always puts a current timestamp into the synchtime column of the ibmsnap_register table.
The contents of the Apply control tables, such as IBMSNAP_APPLYTRAIL and IBMSNAP_APPLYTRACE, when Apply is replicating to non-DB2 targets are the same as when Apply replicates to DB2 targets.
Tip: If you are not otherwise interested in using Linux in your replication solution, it can make an ideal monitor server. All Linux distributions include SMTP services, either preconfigured or easily configured. Previous DB2 versions' difficulty getting Java to work well on UNIX and Linux seems resolved. The installation and administration clients' graphical user interfaces are nearly identical in appearance to those on Windows, and the performance of the interface is comparable, if it does not favor Linux.
each table is briefly described in Table 7-1. This is followed by Table 7-2, which presents calculations to assist you with sizing the monitor control tables' table spaces.

Table 7-1 Monitor control tables

Table name            Description
IBMSNAP_ALERTS        All alerts issued by the Replication Alert Monitor are recorded here.
IBMSNAP_CONDITIONS    Contains the alert conditions and whom to contact on occurrence.
IBMSNAP_CONTACTGRP    Contains the individuals that make up the contact groups.
IBMSNAP_CONTACTS      Contains information about each individual contact.
IBMSNAP_GROUPS        Contains the contact groups and their descriptions.
IBMSNAP_MONENQ        Future use.
IBMSNAP_MONPARMS      Monitor parameters.
IBMSNAP_MONSERVERS    Information on which control servers were monitored and when.
IBMSNAP_MONTRACE      Every action of the monitor is recorded here.
IBMSNAP_MONTRAIL      Contains information about each monitoring cycle.

Table 7-2 Monitor control table sizing

Table name            Row length   Number of rows
IBMSNAP_ALERTS        1172         1 row for each alert detected. Rows are eligible for
                                   pruning based on ALERT_PRUNE_LIMIT.
IBMSNAP_CONDITIONS                 1 row for each condition defined.
IBMSNAP_CONTACTGRP                 1 row for each contact in each contact group.
IBMSNAP_CONTACTS                   1 row for each individual contact.
IBMSNAP_GROUPS                     1 row for each contact group.
IBMSNAP_MONENQ                     None.
IBMSNAP_MONPARMS                   1.
IBMSNAP_MONSERVERS                 1 row for each monitored server.
IBMSNAP_MONTRACE                   1 row for each Alert Monitor message. Rows must be
                                   pruned manually.
IBMSNAP_MONTRAIL      1115         1 row for each monitor cycle (MONITOR_INTERVAL).
                                   Rows must be pruned manually.
8. Select a capture control server by clicking .... Or, for Apply alert conditions, select either the Select subscription sets check-box or the Select Apply qualifiers check-box, and then the Add button, to bring up the filter to select either the subscription set or the apply qualifier for which you want to add alerts.
9. Add... capture schemas.
10. If you want to replace any existing alert conditions, check the Replace any existing alert conditions check-box.
11. Select alert conditions, identify a value as needed, and decide who is contacted. The Hint provides sufficient information about the majority of the alert conditions and their possible values. It is worth noting, while considering the conditions, that the monitor runs on an interval specified by monitor_interval at monitor start. Some of the conditions will be better grasped after reading later chapters.
Apply alert conditions can be set at either the apply qualifier level or the subscription set name level, depending on whether the Select subscription sets or the Select Apply qualifiers check-box was selected in step 8 above. When setting conditions at the subscription set level, you can specify conditions that were not specified for the apply qualifier for this set. For alert conditions that were specified at the apply qualifier level, you can specify the same condition at the set level with a different threshold; the Replication Alert Monitor, when checking for alerts for the set, checks whether the condition's threshold for the set is exceeded and ignores the threshold specified at the apply qualifier level.
Attention: The Replication Alert Monitor does not monitor triggers associated with non-DB2 relational databases used as sources in a federated database system.
12. Select contacts. This is contained in the Values section; it may be hidden from view, so scroll that portion if necessary.
13. Select OK to generate an SQL script.
14. In the Run Now or Save SQL window, select OK. See 2.13, Run Now or Save SQL on page 82 if you want assistance with this window.
The remaining condition - CAPTURE_CLATENCY - uses the SYNCHTIME value in the IBMSNAP_REGISTER table; in a non-DB2 source server, this value is actually set by Apply just before it reads the value. Also, for non-DB2 source servers, the APPLY_LATENCY condition will not be a measure of the same thing as when replication is from a DB2 source server. Since Apply sets the SYNCHTIME value in the IBMSNAP_REGISTER table at a non-DB2 source server just before it reads the value, the End-to-End Latency calculation is just a measure of how long Apply takes to execute a replication cycle.
Note: asnmon is a DB2 application program. This program must be bound to the DB2 database or subsystem that is the monitor server, and to all monitored servers (Capture control servers, Apply control servers). asnmon on Linux, UNIX, and Windows binds its packages the first time it connects to a server. ASNMON on z/OS requires that its packages be bound manually. See 6.1.4, Considerations for DB2 UDB for z/OS on page 275 for the recommended bind parameters.
Monitor status can be checked from the Replication Center, or by using the asnmcmd command, which is also described in the chapter on system commands for
replication (UNIX, Windows, z/OS) in DB2 UDB Version 8 Replication Guide. asnmcmd can also be used to reinitialize or stop the Replication Monitor.
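A sketch of checking and reinitializing the monitor from the command line; the server alias and monitor qualifier below are the ones used in this chapter's examples:

```shell
# Query the status of the running Replication Alert Monitor
asnmcmd monitor_server=TGT_NT monitor_qual=MIXMON status

# Reload changed parameters and alert conditions from the control tables
asnmcmd monitor_server=TGT_NT monitor_qual=MIXMON reinit
```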
Reinitialize Monitor loads the changed values for that monitor qualifier. Unlike Capture and Apply, the monitor does not dynamically collect new and changed parameters and alert conditions from its control tables.
Server is "CORP". Capture schema is "ASN". Current Memory is "26", Threshold is "24".
We can highlight either the C (Capture component alert grouping) or the A (Apply component alert grouping), right-click, and select Properties to see the alert conditions in place. Figure 7-6 is an example of the apply alert conditions properties, showing the apply alert conditions that we set using the Select Alert Conditions for Apply Qualifiers or Subscriptions dialog.
We have purposely set an alert condition for a low end-to-end latency (5 seconds) and an alert condition for when full-refreshes occur so that we are sure to get alerts in this example. We can see our alert conditions by querying the ASN.IBMSNAP_CONDITIONS table at the monitor control server. Example 7-3 shows our query. The columns of the conditions table are described in detail in Table structures in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1 , SC27-1121.
Example 7-3 Query of monitor conditions table
SELECT MONITOR_QUAL, SERVER_ALIAS, COMPONENT, SCHEMA_OR_QUAL, SET_NAME,
       CONDITION_NAME, PARM_INT, PARM_CHAR, CONTACT
  FROM ASN.IBMSNAP_CONDITIONS
 WHERE MONITOR_QUAL = 'MIXMON'
Figure 7-7 shows the result of this query displayed in DB2 Command Center.
We can then start the Alert Monitor to check for alert conditions at our capture and apply control servers. We force Apply to do a full refresh without stopping Capture or Apply. This can be done in the Replication Center: in the left pane we double-click Replication Definitions -> Apply Control Servers -> ourControlServer -> Subscription Set. We highlight the subscription set MIXSET in the right pane, then choose from the menu bar Selected -> Full Refresh -> Automatic. The SQL that is generated, when run, connects to the apply control server and sets the last success, synch time, and synch point of a set each to NULL, causing a full refresh. Before starting the Replication Monitor, we first designate a working directory and copy or create an asnpwd-type password file containing the user IDs and passwords the Replication Monitor needs to connect to the capture and apply control servers. On Windows, for a monitor working directory, we create a directory: mkdir d:\DB2Repl\MIXMON.
At the command prompt we cd to that directory and use asnpwd to create a password file MIXMON.aut containing userids and passwords to connect to SRC_NT and TGT_NT, our capture and apply control servers respectively. asnpwd is described in Chapter 6, Operating Capture and Apply on page 233. asnpwd has online help which can be viewed from a command prompt by entering asnpwd -help. There is additional information in the chapter System commands for replication (UNIX, Windows, z/OS) in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121. Alert Monitor can be started from the Replication Center. To do this, we click on the icon for a monitor qualifier, right-click, and select Start Monitor from among the options. Figure 7-8 shows the Start Monitor window.
We start the Alert Monitor with RUNONCE=Y instead of specifying a MONITOR_INTERVAL, since we just want to collect any alerts that might have been generated by a short run of Apply. Besides using the Replication Center, the DB2 Replication Alert Monitor can be started at a command prompt. Below is the command entered in this example:

asnmon monitor_server=TGT_NT monitor_qual=MIXMON runonce=y monitor_path=d:\DB2Repl\MIXMON pwdfile=MIXMON.aut
In this example:
monitor_server=TGT_NT    TGT_NT is the DB2 server containing the monitor control tables.
monitor_qual=MIXMON    MIXMON is the monitor qualifier.
runonce=y    Instead of setting a regular MONITOR_INTERVAL, we tell the monitor to run just once; then we check for the occurrence of alert conditions.
monitor_path=d:\DB2Repl\MIXMON    The working directory for this monitor.
pwdfile=MIXMON.aut    The asnpwd-type password file we created in the monitor working directory, which the monitor uses to get the userids and passwords to connect to the Capture and Apply control servers.
On z/OS, the Replication Monitor could have been started with the JCL in Example 7-4.
Example 7-4 Sample JCL to start monitor on z/OS
//ASNMON EXEC PGM=ASNMON, // PARM='/monitor_server=V71A monitor_qual=MIXMON // monitor_path=//JAYAV8' //STEPLIB DD DISP=SHR,DSN=DPROPR.V810.BASE.TESTLIB, // UNIT=SYSDA,VOL=SER=RMS002 // DD DISP=SHR,DSN=SYS1.SCEERUN // DD DISP=SHR,DSN=DB2A.SDSNLOAD //CEEDUMP DD DUMMY //SYSTERM DD SYSOUT=* //SYSUDUMP DD DUMMY //SYSPRINT DD SYSOUT=*
In this JCL example, monitor_server=V71A gives the DB2 subsystem ID of the monitor control server. Neither RUNONCE=Y nor MONITOR_INTERVAL has been specified, so the Replication Monitor would check the Capture and Apply control servers for alert conditions at the default MONITOR_INTERVAL (300 seconds, or 5 minutes). In our example using asnmon with RUNONCE=Y, the monitor runs for about one minute and then stops. In that time frame we also received email notification.
If we look in Replication Center at our monitor qualifier MIXMON, we notice in the right pane that the icon for apply qualifier MIXQUAL has a red light next to it indicating an alert. See Figure 7-9.
After the run, we use a query to look in the ASN.IBMSNAP_ALERTS table at the monitor control server (TGT_NT) to see if there are any records. We expect at least one record, since we cold started Capture, which forces Apply to do a full refresh for all members. We can display the alerts themselves in the Replication Center. We highlight the icon for MIXQUAL, right-click, and select Show alerts. In the Show Alerts window, we can specify a range of times for which we want to see alerts and then click Retrieve. Since we know we do not have many alerts, we accept the defaults. When we click Retrieve, the alerts are displayed. See Figure 7-10.
We could also query the ASN.IBMSNAP_ALERTS table at the monitor server to see the records for the alerts. Example 7-5 shows the query. Here we will use DB2 Command Center to do our query. Figure 7-11 shows the result of the query.
Example 7-5 Query of Alerts
SELECT MONITOR_QUAL, COMPONENT, SERVER_ALIAS, SCHEMA_OR_QUAL, SET_NAME, CONDITION_NAME, OCCURRED_TIME FROM ASN.IBMSNAP_ALERTS WHERE MONITOR_QUAL='MIXMON'
From the main administration client windows, select Tools -> Health Center. The Health Center is shown in Figure 7-12. It seems to default to showing only objects in an alarm state (the three lit shapes). You will likely find your instances and databases by clicking the four-lit-shapes button. By clicking on the instance and databases, you can configure and start the health monitoring.
7.4 Troubleshooting
There are many components involved in DB2 Replication: the Replication Center, Capture, Apply, DB2 data definition (DDL), authorization, DB2 connectivity, and so on. If the source of a DB2 Replication problem is not obvious, it is necessary to take steps to narrow the scope of your search for the cause. Throughout the chapters of this document, we have suggested troubleshooting specific to the topic discussed. For example, there is a lot of information on troubleshooting problems with Capture and Apply operations in Chapter 6, Operating Capture and Apply on page 233. Also, 1.5, DB2 Replication V8 close up on page 22 and the details on Capture and Apply's sub-operations in Chapter 10, Performance on page 417 are useful in providing ideas of what to look for as you investigate a problem.
Many of the common user errors from previous versions cannot occur in this version, or are more detectable. For example, Apply now functions correctly if asnapply is started with the apply qualifier in lowercase. Still, there will undoubtedly be situations in which you have to spend some effort troubleshooting. DB2 Replication, and DB2 itself on each platform, offer facilities for narrowing down the cause of a problem. First, you will likely look at the online help, use the Replication Center (RC) to check the status of Capture and Apply, or look in the Capture or Apply program logs. Those topics are described in this section. If this does not allow you to resolve the problem, you will likely use SQL selects from the various control tables, CD, source, and target tables. You may also want to monitor for conditions to try to catch the problem's occurrence. If you still do not have resolution, you may use tools such as asnanalyze (on iSeries, use ANZDPR) or asntrc to obtain more detailed information. Those topics are described in later sections.
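As a sketch of the kind of control-table query that is useful at this stage, the following selects recent Apply cycles from the apply trail table at the Apply control server (our reading of STATUS, with -1 marking a failed cycle and 0 a successful one, is an assumption to verify against the control-table documentation):

```sql
-- Run at the Apply control server: the ten most recent Apply cycles,
-- newest first, with the SQL codes recorded for any failures.
SELECT APPLY_QUAL, SET_NAME, STATUS, LASTRUN, SQLCODE, SQLSTATE
  FROM ASN.IBMSNAP_APPLYTRAIL
 ORDER BY LASTRUN DESC
 FETCH FIRST 10 ROWS ONLY;
```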
Web browser with the page that describes the topic. The Web page also provides links to related information. If information does not appear correct in the RC or other graphical tools, try selecting from the menu bar, View -> Refresh. This will ensure that what you see in the window is up to date.
Restriction: When cataloging new DB2 systems and databases, it is currently necessary to shut down the RC and all the administration client's graphical tools for the new catalog entries to be shown in the RC. We have been informed that this issue will likely be resolved in the Version 8 fixpak 2 time frame.
Information center
DB2 Information Center is a Web page (HTML) based help system. One of its interfaces is Java based and integrated into the DB2 Administration Client. From the Replication Center (RC) you can access the information center from the menu bar, Help -> Information Center. From this Java interface, selecting any topic will load a Web page on that topic into your Web browser. You need to have a recent version of your Web browser with Java and Javascript enabled. The Web browser based information is now consistent across Linux, Windows, and UNIX, including the documentation search facility.
Tip: When not using the help, we prefer viewing and searching through the whole documents. For this reason, we tend to download the PDF versions of the documents from the DB2 support sites.
The documentation for DB2 on Linux, UNIX, and Windows no longer includes a troubleshooting guide. The information that was contained in it is now in the documents on the topics. The documents are now more task oriented.
Note: Information Center searches are exact phrase. Boolean, wild-card, and partial word searches do not work.
DB2 messages
DB2 and replication operations return information, warning or error messages. Many commands return a combination of these. These messages identify the success or failure of an operation, and if a failure, often how to correct the problem. Figure 7-13 shows the messages returned one time when OK was selected from the Register Tables window.
There are three informational messages, one warning message, and zero error messages. Each message begins with an identifier. They are ASN1503I, ASN1589W, ASN1580I, and ASN1511I. Detailed information about a particular message can be looked up using the Command Center or other DB2 Command Line Processor (CLP) interfaces. See Example 7-6.
Example 7-6 Detailed output of a DB2 Message
db2 => ? asn1589
ASN1589W  The calculation of the size of the table space container "<container>" of the table space "<tspace>" resulted in an incorrect container size. Therefore the container size has been changed to size "<size>" megabytes.
Explanation: The calculation of the table space container size has resulted in a value that is too low to be used in a valid table space container definition. To ensure that the definition will be accepted by DB2, a replication-specific minimum container size has been provided for the table space container definition.
User Response: For the calculation based on a percentage of the current source
table size, check whether the source table contains data and if the statistics of the source table are up to date (using the RUNSTATS utility). For the calculation based on a number of rows, check whether the number of rows is realistic.
All DB2 messages begin with an identifier of the following format: a prefix of three characters for the component, a four or five digit number, and a single character identifying the severity. Table 7-3 shows the prefixes that are commonly encountered while doing DB2 Replication.
Table 7-3 Common message identifier prefixes

Prefix    Description
ASN       DB2 Replication.
DBA       Database Administration tools.
DB2       CLP.
SQL       Database manager (the instance) when a warning or error condition has been detected.
The four or five digit message number uniquely identifies that message from other messages with the same prefix. As shown in Example 7-6, the ending is optional when you enter the command to see the detailed information about a message. The ending provides important information about the nature of a message. Table 7-4 shows the endings that are most commonly encountered while doing DB2 Replication.
Table 7-4 Message identifier endings

Ending    Description
I         Information.
W         Warning.
N         Error; used by messages with a prefix of SQL.
E         Error.
C         Critical error, usually a crash.
There are often quoted strings in the message, for example "<captureSchema>". These have values particular to the result the message is reporting. Sometimes you will see the quotes together (""); this means there is no value for this variable. The detailed response obtained with ? messageID contains three sections: message, explanation, and user response. If there are multiple explanations, they will be numbered, and the user response will contain the same numbering. Often this information will allow you to resolve the problem. When seeking assistance from DB2 Customer Support, or others, the exact message identifier and the quoted strings are important. If we were seeking assistance for an ASN0068E problem, we would say: "We are encountering ASN0068E, insert statement too long for CD table, with strings 'ASN' and 'ASN.manyLargeColumnsTable'." We would also provide the exact output of db2level, if on Linux, UNIX, or Windows. Simply looking up the meaning of the DB2 message would likely have allowed us to solve the problem ourselves. Full information about the DB2 message format, and a listing of all messages, are in the DB2 UDB Message Reference Volume 1 & 2.
Capture and Apply each write a program log file:

instance.database.captureSchema.CAP.log

and

instance.database.applyQualifier.APP.log

These are often your best tool for start and stop failures. View them in your preferred text editor; start at the end and work backwards.
instance.database.captureSchema.CAP.log
The Capture program log. This file is generated in CAPTURE_PATH and contains a trail of the Capture program's status information. This file is plain text, and it can usually be readily interpreted. Still, it is mainly for problem investigation by DataPropagator support and development. An example is db2inst1.SAMPLE.ASN.CAP.log. Example 7-7 shows a portion of a Capture log.
Example 7-7 CAP.log
2002-09-09-08.48.37.271000 <CWorkerMain> ASN0100I CAPTURE "ASN". The Capture program initialization is successful. 2002-09-09-08.48.37.271000 <CWorkerMain> ASN0109I CAPTURE "ASN". The Capture program has successfully initialized and is capturing data changes for "1" registrations. "0" registrations are in a stopped state. "0" registrations are in an inactive state.
2002-09-09-08.53.37.603000 <PruneMain> ASN0111I CAPTURE "ASN". The pruning cycle started at "Mon Sep 09 08:53:37 2002". 2002-09-09-08.53.37.603000 <PruneMain> ASN0112I CAPTURE "ASN". The pruning cycle ended at "Mon Sep 09 08:53:37 2002". 2002-09-09-08.54.09.138000 <handleCAPSTART> ASN0104I CAPTURE "ASN". In response to a CAPSTART signal with MAP_ID "2
Further troubleshooting of the operations has been covered in 6.1.6, Troubleshooting the operations on page 286.
instance.database.applyQualifier.APP.log
The Apply program log. This file is generated in APPLY_PATH and contains a trail of the Apply program's status information. This file is plain text, and it can usually be readily interpreted. Still, it is mainly for problem investigation by DataPropagator support and development. An example is DB2.REPSAMPL.WINSAM.APP.log.
db2diag.log
On Linux, UNIX, and Windows, DB2 logs fairly low-level programmatic status here. This file is generated at the location identified by the database manager configuration parameter DIAGPATH. DIAGPATH is where all automatically generated DB2 diagnostic files are placed on these platforms. There are four diagnostic levels, set by the database manager configuration parameter DIAGLEVEL. Unless advised by a DB2 Customer Support analyst, do not change DIAGLEVEL from three to a different level. To view these parameters, as the instance owner: db2 get dbm cfg
The output of the analyzer is frequently requested by IBM Customer Support while assisting in solving problems with replication. asnanalyze runs on Linux, UNIX, and Windows and connects to and reads the contents of the replication control tables and catalog tables on any platform, including z/OS and iSeries. asnanalyze is described in the chapter System commands for replication (UNIX, Windows, z/OS) in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121. asnanalyze's online help can also be viewed by entering asnanalyze at a command prompt with no input parameters. Section 7.5.1, asnanalyze and ANZDPR on page 342 contains an example of running asnanalyze. ANZDPR runs on iSeries and can collect information from the replication control tables and the system libraries. ANZDPR is described in System commands for replication (OS/400) in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
WRKDPRTRC is described in the chapter System commands for replication (OS/400) in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121. WRKDPRTRC also has online help that can be viewed by moving your cursor to the command in a display list and pressing the F1 key. Within the WRKDPRTRC prompt screens, help for specific parameters can be seen by moving your cursor to the parameter and pressing F1.
7.4.6 db2support
Introduced in a recent Version 7 fixpak of DB2 for Linux, UNIX, and Windows, the db2support tool is shipped with Version 8 on those platforms. It collects a fairly complete picture of the DB2 environment. Entering the command with no options gives the complete syntax. Most often the basic invocation is all that is needed: db2support . -d databaseAlias -c
Attention: Do not run db2support while experiencing a severe performance degradation; contact DB2 Customer Support instead.
The output will be packaged into a file in zip-compressed format in the directory (.) from which you invoked the command. Unfortunately, the command does not know about the DB2 Replication environment, so you will have to collect those files manually.
From there you can navigate to DB2 product family support Web sites such as DB2 DataPropagator Support:
http://www-3.ibm.com/software/data/dpropr/support.html
Or to platform specific resources such as the DB2 for Linux, UNIX, and Windows Support:
http://www.ibm.com/software/data/db2/udb/winos2unix/support.
iSeries does not have a support Web site dedicated to DB2 and replication. Their main support site is very good:
http://www-912.ibm.com/
There is also the USENET news group comp.databases.ibm-db2 which has a very active user community. The DB2 Magazine at http://www.db2mag.com is worth subscribing to. IBM DB2 Customer Support can be reached in the United States of America at 1-800-237-5511. The world wide number is 1-404-238-1234 (charges may apply). Both phone numbers have live operators at all times of every day.
z/OS
Check the System Services Address Space (ssidMSTR, where ssid is your DB2 subsystem) and the job message log (JESMSGLG) output listing in the System Display and Search Facility (SDSF) to detect some of the errors of Capture and Apply. This tooling keeps track of errors like time-out, deadlock, resource unavailable, extend failure, or lock escalation experienced by Capture and Apply.
iSeries
Care has been taken to include iSeries troubleshooting content throughout the topics covered in this book, particularly in Chapter 6, Operating Capture and Apply on page 233. If the topics presented here do not offer assistance, please refer to the topic of interest in the other chapters.
(With no keywords or parameters specified, the online help will still be displayed.)
asnanalyze example:
This example was run on Windows. It could also be run on Linux or UNIX. The Capture and Apply Control servers analyzed could be on any platform. DB2 connectivity has been defined and tested to both the capture control server (SRC_NT) and the apply control server (TGT_NT).
Create a password file using asnpwd to provide the userids and passwords asnanalyze will need to connect to the capture control server (SRC_NT) and the apply control server (TGT_NT). First, create the password file analyze.aut:
asnpwd init using analyze.aut
Run analyze for the capture schema MIXCAP and apply qualifier MIXQUAL. Example 7-9 shows the single command listed over multiple lines for clarity.
Example 7-9 asnanalyze command
asnanalyze -db SRC_NT TGT_NT -la standard -ct 10 -cm 10 -sg 10 -tl 10 -at 10
           -aq MIXQUAL -cs MIXCAP -od d:\DB2Repl\analyze -fn MIXanlyz.htm
           -pw analyze.aut
-db SRC_NT TGT_NT    Our Capture control server and our Apply control server
-la standard    Provide the standard level of detail in the output
-ct 10    Get CAPTRACE records for the past 10 days; the default is 3 days
-cm 10    Get CAPMON records for the past 10 days; the default is 3 days
-sg 10    Get SIGNAL records for the past 10 days; the default is 3 days
-tl 10    Get APPLYTRAIL records for the past 10 days; the default is 3 days
-at 10    Get APPLYTRACE records for the past 10 days; the default is 3 days
-aq MIXQUAL    The apply qualifier to retrieve records for
-cs MIXCAP    The Capture schema to retrieve records for
-od d:\DB2Repl\analyze    Output directory for the analyzer output file; the default is the current directory
-fn MIXanlyz.htm    Output file from the analyzer
-pw analyze.aut    The DProp password file containing userids/passwords for the analyzer to use on connections to SRC_NT and TGT_NT; the default is asnpwd.aut
We can open the output file (MIXanlyz.htm) with a Web browser. At the top of the HTML document is a table of contents. See Example 7-11. Each entry in the HTML document is a link to the respective section of the output.
Example 7-11 Analyser output table of contents
Table of Contents
SRC_NT Control table detail
SRC_NT Packages and plans (link not available if OS/400)
SRC_NT Change data table (CD) column analysis
SRC_NT Internal consistent change data table (CCD) column analysis
SRC_NT Subscription target key synopsis
SRC_NT Federated DB nickname details from SYSIBM.SYSCOLUMNS (link not available if OS/400)
TGT_NT Control table detail
TGT_NT Packages and plans (link not available if OS/400)
TGT_NT Change data table (CD) column analysis
TGT_NT Internal consistent change data table (CCD) column analysis
TGT_NT Subscription target key synopsis
TGT_NT Federated DB nickname details from SYSIBM.SYSCOLUMNS (link not available if OS/400)
Server connection summary
How many rows in each CD table are eligible for pruning
How many rows in each unit-of-work table are eligible for pruning
Selected DB2 for z/OS table, tablespace information
Selected DB2 Common Server, Federated, workstation UDB tablespace information
Missing table or view references
Inconsistency - Orphan Details
Incorrect or inefficient indexes
Incorrect or inefficient tablespace LOCKSIZE
Subscription errors, omissions or anomalies
Apply process summary
When your CAPTURE program is not capturing
CAPTURE tuning troubles
Additional Findings
asntrc is described in the chapter System commands for replication (UNIX, Windows, z/OS) in IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121. asntrc -help provides online help. WRKDPRTRC is described in the same document in the chapter System commands for replication (OS/400). Below we show an example of running asntrc. There is another example of using asntrc in 10.4.13, Time spent on each of Apply's suboperations on page 460. That example is specific to finding Apply trace performance records.
asntrc tips
There could be several Capture, Apply, and Monitor processes running simultaneously on a system, so every time we enter an asntrc command, we need to specify:
The -db parameter, with a value indicating the control server database for the Capture, Apply, or Monitor process being traced.
-cap, -app, or -mon, to indicate whether the command is for a Capture, Apply, or Monitor process.
-schema or -qualifier, with a value indicating the Capture schema, Apply qualifier, or Monitor qualifier of the process being traced.
These parameters need to be specified with any of the following asntrc commands:
on: Turn on asntrc for a Capture, Apply, or Monitor process
off: Turn off asntrc
clr: Clear the memory buffers of asntrc
kill: Kill an asntrc that can't be stopped with off
dmp: Dump the current asntrc buffers to a file
fmt: Format the current asntrc buffers to a file
v7fmt: Format the current asntrc buffers to a file, in the format of a Capture/Apply V5/6/7 trace
flw: Format the current asntrc buffers in abbreviated format
For dmp, specify the output filename after the keyword dmp. For fmt, v7fmt, and flw, at the end of the command also specify > and an output file name.
Attention: If you are tracing a Capture, Apply, or Monitor process that you suspect will stop abnormally, when you start asntrc (asntrc on) you will want to specify the -fn parameter followed by an output filename. This forces asntrc to continuously dump its buffer. This can have a negative impact on performance, but otherwise the trace may not be dumped successfully.
The above is also true when tracing an Apply that is started with CopyOnce=Y, since it will also release its memory when it stops, including any asntrc buffers. The contents of the file will be in asntrc's dump format. asntrc fmt, v7fmt, or flw can then be used to create a file containing formatted asntrc output.
When formatting the contents of an existing asntrc dump output file, use -fn filename to indicate the dmp file being formatted, and >outfilename to indicate the name of the formatted output file to create.
In this asntrc on command: -db TGT_NT is the Apply control server; -app indicates that we trace Apply; -qualifier MIXQUAL is the apply qualifier of the Apply process we're tracing; and the buffer-size parameter sets the asntrc buffer to 5 megabytes.
Note: If we were tracing an Apply that we suspect may terminate abnormally, we would start asntrc before starting Apply. Also, we would specify a filename for trace records to ensure that the buffers are written to disk. asntrc would be started with:
asntrc on -fn MIXQUAL.dmp -db TGT_NT -app -qualifier MIXQUAL

We watch the ASN.IBMSNAP_APPLYTRAIL table, looking for a new record with the apply qualifier of the Apply process we are tracing; then the asntrc buffers should contain the information from one complete cycle. We check the APPLYTRAIL table with the following query:

SELECT COUNT(*) FROM ASN.IBMSNAP_APPLYTRAIL WHERE APPLY_QUAL='MIXQUAL'

The result of this query indicates when there is a new apply trail record. We dump the contents of the asntrc buffers to a file:

asntrc dmp MIXQUAL.dmp -db TGT_NT -app -qualifier MIXQUAL
Note: If we had wanted to bypass the dump file and just format the asntrc buffers, we might have used:
asntrc v7fmt -db TGT_NT -app -qualifier MIXQUAL > MIXQUAL.v7fmt

We can turn off asntrc now since, one way or another, we've written the trace records out to a file. We stop asntrc with the following command:

asntrc off -db TGT_NT -app -qualifier MIXQUAL

If we just dumped the asntrc records to a file and didn't format them yet, we can create a file with the formatted trace output with the following command:

asntrc v7fmt -fn MIXQUAL.dmp > MIXQUAL.v7fmt

In this example, -fn MIXQUAL.dmp is the input file to asntrc v7fmt, and > MIXQUAL.v7fmt is the output file, in the format of an Apply V5/6/7 trace.
Note: As with the asntrc for Apply, if we suspect Capture might terminate abnormally, we turn asntrc on before starting Capture. When we start asntrc, we include -fn filename so that asntrc writes all trace records to a file:
asntrc on -fn MIXCAP.dmp -db SRC_NT -cap -schema MIXCAP
We let Capture run through whatever event we want to trace. We update one of the registered tables and issue a commit, so that Capture has a transaction to track in memory. Capture then inserts records into the CD table and the unit-of-work table. We could query the CD table to see when Capture has inserted new records. We then dump the asntrc records to a file with the following command: asntrc dmp MIXCAP.dmp -db SRC_NT -cap -schema MIXCAP We can turn the trace off with the following command: asntrc off -db SRC_NT -cap -schema MIXCAP And then we create a formatted trace file from the contents of the dump file, with the following command: asntrc v7fmt -fn MIXCAP.dmp > MIXCAP.v7fmt
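The CD-table check mentioned above can be sketched as follows; the CD table name used here (ASN.CDDEPARTMENT) is a placeholder for whatever name was assigned when the source was registered:

```sql
-- Every CD table carries the IBMSNAP_COMMITSEQ, IBMSNAP_INTENTSEQ, and
-- IBMSNAP_OPERATION columns in addition to the registered data columns.
-- New rows appearing here show that Capture has processed our transaction.
SELECT IBMSNAP_COMMITSEQ, IBMSNAP_OPERATION
  FROM ASN.CDDEPARTMENT
 ORDER BY IBMSNAP_COMMITSEQ DESC;
```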
Chapter 8.
The registration STATE column can be 'I' (inactive), 'A' (active), or 'S' (stopped because of an error).
Deactivating registrations
Before deleting a registration, you should deactivate the registration to ensure that the Capture program has completed processing all of its captured entries. Another reason to deactivate a registered table is to temporarily stop capturing changes for that particular table while continuing to capture the changes for the other registered tables.
Attention: Since capturing is stopped for the registration, there may have been changes in the log that were missed. Capture does not go back in the log or journal to look for these changes. Setting the STATE to 'I' forces a full refresh of all subscription sets that copy from this source table.
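To see which subscription sets copy from a source you are about to deactivate, a query along these lines can be run at the Apply control server (the schema and table values are placeholders for your registered source):

```sql
-- Subscription-set members that copy from the source being deactivated;
-- each APPLY_QUAL/SET_NAME pair is a set that will be affected.
SELECT APPLY_QUAL, SET_NAME, TARGET_OWNER, TARGET_TABLE
  FROM ASN.IBMSNAP_SUBS_MEMBR
 WHERE SOURCE_OWNER = 'sourceschema'
   AND SOURCE_TABLE = 'sourcetable';
```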
When you deactivate a registered table, the Capture program only stops capturing changes; the CD tables, subscription sets, and registration attributes are still defined within your replication environment. All subscription sets associated with the deactivated registered table need to be deactivated as well, in case the Apply program reactivates the registered table while you are in the process of deleting or making changes, or before you are ready to reactivate it. If you want to continue processing the other members of the associated subscription sets, you should remove the member that uses the deactivated registration. There are two methods to deactivate a registered table:
Using the Replication Center: First you need to deactivate all the associated subscription sets. Figure 8-1 shows how to deactivate a single subscription set: expand Replication Definitions -> Apply Control Servers -> the Apply control server -> Subscription Sets folder (right-click to filter), then right-click the actual subscription set and select Deactivate.
The next step is to deactivate the registered table. See Figure 8-2: expand Replication Definitions -> Capture Control Servers -> the Capture control server -> Registrations folder (right-click to filter), then right-click the actual registration and select Stop Capturing Changes.
When you click the option to stop capturing changes, the IBMSNAP_SIGNAL table is updated with a CAPSTOP in the SIGNAL_SUBTYPE column and a 'P' value for SIGNAL_STATE, indicating this signal is pending; during the next cycle, when the Capture program is running, this registration will become inactive. The IBMSNAP_REGISTER table will then have an 'I' value in the STATE column.
Manually inserting the CAPSTOP signal in the IBMSNAP_SIGNAL table as shown in Example 8-1.
Example 8-1 Manually deactivating a registration
CONNECT TO capture_control_server
INSERT INTO capschema.IBMSNAP_SIGNAL
  (SIGNAL_TIME, SIGNAL_TYPE, SIGNAL_SUBTYPE, SIGNAL_INPUT_IN,
   SIGNAL_STATE, SIGNAL_LSN)
VALUES
  (CURRENT TIMESTAMP, 'CMD', 'CAPSTOP', 'sourceschema.sourcetable',
   'P', NULL)
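Whichever method is used, a quick way to confirm that Capture picked up the signal is to query the signal table at the Capture control server; a sketch, assuming the capture schema is ASN (SIGNAL_STATE stays 'P' while the signal is pending and is updated by Capture once the signal has been processed):

```sql
-- Recent CAPSTOP signals, newest first; check SIGNAL_STATE
SELECT SIGNAL_TIME, SIGNAL_SUBTYPE, SIGNAL_INPUT_IN, SIGNAL_STATE
  FROM ASN.IBMSNAP_SIGNAL
 WHERE SIGNAL_SUBTYPE = 'CAPSTOP'
 ORDER BY SIGNAL_TIME DESC;
```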
Activating registrations
When your registration is temporarily inactive and you are ready to resume capturing, all you need to do is activate the associated subscription sets using the Replication Center; see 8.2.2, Deactivating and activating subscriptions on page 359. Then, after you start the Apply program, it will update the IBMSNAP_SIGNAL table with a CAPSTART subtype signal, which tells the Capture program to reactivate the registration.
The Capture program could also have deactivated the registration itself, because of an unexpected error during the capture process. When this occurs, the STATE column value is set to 'S' in the IBMSNAP_REGISTER table if the STOP_ON_ERROR column value is set to 'N'. See Stop on error on page 145 for details about this setting. When the STATE column is set to 'S', Capture will no longer capture changes for this registration until the problem is resolved. To activate a registration that was deactivated because of an unexpected error, you need to do the following: resolve the problem that is causing the errors in the Capture program, so that the registration is eligible to be activated; then, at the Capture control server, run the SQL statement in Figure 8-3 to reset the STATE column value in the IBMSNAP_REGISTER table:
UPDATE Schema.IBMSNAP_REGISTER
   SET STATE = 'I'
 WHERE SOURCE_OWNER = 'SrcSchema'
   AND SOURCE_TABLE = 'SrcTbl'
   AND SOURCE_VIEW_QUAL = 'SrcVwQual'
   AND STATE = 'S';
Figure 8-3 SQL to activate registration after unexpected capture errors
Schema is the name of the Capture schema, SrcSchema is the registered source table schema, SrcTbl is the name of the registered source table, and SrcVwQual is the source-view qualifier for this source table. After the STATE column is set to 'I' (inactive), the Capture program will start capturing data when the CAPSTART signal is received from the Apply program.
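To get an overview of which registrations are currently not active, a query of this shape can be used at the Capture control server (again assuming capture schema ASN):

```sql
-- 'A' = active; 'I' = inactive; 'S' = stopped because of an error
SELECT SOURCE_OWNER, SOURCE_TABLE, STATE
  FROM ASN.IBMSNAP_REGISTER
 WHERE STATE <> 'A';
```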
The actual source table or view is not affected in any way; it remains on the source server. To remove a registration, make sure you deactivate it first; see 8.1.2, Deactivating and activating registrations on page 352. Then, from the Replication Center (see Figure 8-2 on page 354), display the menu and select the Delete option to display the registered tables you want to remove. See Figure 8-4:
Click OK to display the Run Now or Save SQL screen. If your registered tables reside on an iSeries server, you also have the option of removing your registrations using the RMVDPRREG CL command. To use this command, enter it on the iSeries command line and press the F4 key, then press the F11 key to display the actual parameter names. See Figure 8-5:
Remove DPR Registration (RMVDPRREG)
Type choices, press Enter.
Capture control library . . . .   CAPCTLLIB   ASN
Source table  . . . . . . . . .   SRCTBL
  Library . . . . . . . . . . .
For field-level help, move the cursor to the parameter and press the F1 key. Or refer to Chapter 18 in the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
The changes made to an existing registered table, as indicated in this window, are updated in the IBMSNAP_REGISTER table. The following columns can be changed:
Row capture rule (CHGONLY)
Before-image prefix (BEFORE_IMG_PREFIX)
  Restriction: You cannot change an existing BEFORE_IMG_PREFIX. You can use this screen to add a before-image prefix only if you left this field blank when the table was originally registered.
Stop Capture on error (STOP_ON_ERROR)
Allow full refresh of target tables (DISABLE_REFRESH)
Capture updates as pairs of deletes and inserts (CHG_UPD_TO_DEL_INS)
Capture changes from replica target tables (RECAPTURE)
Conflict detection level (CONFLICT_LEVEL)
Important: After changing the registration attributes, you must re-initialize Capture using the Replication Center, asnccmd, or, on iSeries, the INZDPRCAP command.
How long your subscription has been deactivated determines whether you need to run these additional steps to prevent system performance problems later.
UPDATE CaptureSchema.IBMSNAP_PRUNE_SET
   SET SYNCHPOINT = x'00000000000000000000',
       SYNCHTIME = NULL
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'SetName'
   AND TARGET_SERVER = 'TargetServer';

UPDATE CaptureSchema.IBMSNAP_PRUNCNTL
   SET SYNCHPOINT = NULL,
       SYNCHTIME = NULL
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'SetName'
   AND TARGET_SERVER = 'TargetServer';
Run this SQL at the Capture control server to reset the pruning information in the IBMSNAP_PRUNE_SET and IBMSNAP_PRUNCNTL control tables for deactivated subscription sets. You can also deactivate all associated registered tables, as long as they are not used by other subscription members, to prevent capturing data you don't need.
UPDATE ASN.IBMSNAP_SUBS_SET
   SET SET_NAME = 'NewSetName'
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'ExistSetName';

UPDATE ASN.IBMSNAP_SUBS_MEMBR
   SET SET_NAME = 'NewSetName'
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'ExistSetName';

UPDATE ASN.IBMSNAP_SUBS_COLS
   SET SET_NAME = 'NewSetName'
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'ExistSetName';
3. If your subscription set has any SQL before and after statements or procedures, then run the SQL shown in Figure 8-9 also at the Apply control server to update the IBMSNAP_SUBS_STMTS table:
UPDATE ASN.IBMSNAP_SUBS_STMTS
   SET SET_NAME = 'NewSetName'
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'ExistSetName';
Figure 8-9 Change subs set name in subs set statement table
4. From the Capture control server, run the SQL shown in Figure 8-10 to change the subscription set name in the IBMSNAP_PRUNE_SET and IBMSNAP_PRUNCNTL tables:
UPDATE CaptureSchema.IBMSNAP_PRUNE_SET
   SET SET_NAME = 'NewSetName'
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'ExistSetName'
   AND TARGET_SERVER = 'Target_Server';

UPDATE CaptureSchema.IBMSNAP_PRUNCNTL
   SET SET_NAME = 'NewSetName'
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'ExistSetName'
   AND TARGET_SERVER = 'Target_Server';
Figure 8-10 Change subscription set name in the pruning control tables
5. If you started the Apply program with the OPT4ONE parameter (UNIX, Windows, and z/OS) or the OPTSNGSET parameter (iSeries), then you must stop and start the Apply program for the new subscription set name to take effect. See 6.2.3, Apply parameters on page 298 for details on this parameter.
6. Reactivate the subscription to resume Apply processing; see 8.2.2, Deactivating and activating subscriptions on page 359.
UPDATE ASN.IBMSNAP_SUBS_SET
   SET APPLY_QUAL = 'NewApplyQual'
 WHERE APPLY_QUAL = 'ExistApplyQual'
   AND SET_NAME = 'SetName';

UPDATE ASN.IBMSNAP_SUBS_MEMBR
   SET APPLY_QUAL = 'NewApplyQual'
 WHERE APPLY_QUAL = 'ExistApplyQual'
   AND SET_NAME = 'SetName';

UPDATE ASN.IBMSNAP_SUBS_COLS
   SET APPLY_QUAL = 'NewApplyQual'
 WHERE APPLY_QUAL = 'ExistApplyQual'
   AND SET_NAME = 'SetName';
Figure 8-11 SQL to change apply qualifier at the apply control server
9. If your subscription set has any SQL before and after statements or procedures, then run the SQL shown in Figure 8-12 at the Apply control server to update the IBMSNAP_SUBS_STMTS table:
UPDATE ASN.IBMSNAP_SUBS_STMTS
   SET APPLY_QUAL = 'NewApplyQual'
 WHERE APPLY_QUAL = 'ExistApplyQual'
   AND SET_NAME = 'SetName';
Figure 8-12 Change the apply qualifier in the subs set statement table
10.From the Capture control server, run the SQL shown in Figure 8-13 to change the apply qualifier in the IBMSNAP_PRUNE_SET and IBMSNAP_PRUNCNTL tables:
UPDATE CaptureSchema.IBMSNAP_PRUNE_SET
   SET APPLY_QUAL = 'NewApplyQual'
 WHERE APPLY_QUAL = 'ExistApplyQual'
   AND SET_NAME = 'SetName'
   AND TARGET_SERVER = 'Target_Server';

UPDATE CaptureSchema.IBMSNAP_PRUNCNTL
   SET APPLY_QUAL = 'NewApplyQual'
 WHERE APPLY_QUAL = 'ExistApplyQual'
   AND SET_NAME = 'SetName'
   AND TARGET_SERVER = 'Target_Server';
Figure 8-13 SQL to change the apply qualifier in the pruning control tables
11. Repeat steps 2 to 4 to change the apply qualifier for additional subscription sets.
12. If you started the Apply program with the OPT4ONE parameter (UNIX, Windows, and z/OS) or the OPTSNGSET parameter (iSeries), then you must stop and start the Apply program for the new apply qualifier to take effect. See Apply parameters on page 298 for details on this parameter.
13. Reactivate the subscription to resume Apply processing; see 8.2.2, Deactivating and activating subscriptions on page 359.
Note: If you set up monitoring definitions, you need to change them by removing them and recreating new monitor definitions with the new subscription set name or apply qualifier, as described previously in this section. You can use the Replication Center to make these changes. See Chapter 7, Monitoring and troubleshooting on page 309 for replication monitoring details.
Click OK to display the Run Now or Save SQL screen. Review the SQL script before selecting the option to run it. Note the check box to drop the target table and indexes. When you remove a subscription set, you can either deactivate or remove all associated registered tables to prevent capturing data you don't need.
Remove DPR Subscription (RMVDPRSUB)

Type choices, press Enter.

Apply qualifier . . . . . . . .   APYQUAL
Set name  . . . . . . . . . . .   SETNAME
Control server  . . . . . . . .   CTLSVR
Remove members  . . . . . . . .   RMVMBRS
Remove DPR registration . . . .   RMVREG
Delete target table . . . . . .   DLTTGTTBL
For field-level help, move the cursor to the parameter and press the F1 key, or refer to Chapter 18 in the IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121.
UPDATE CaptureSchema.IBMSNAP_PRUNCNTL
   SET SYNCHPOINT = (SELECT SYNCHPOINT
                       FROM CaptureSchema.IBMSNAP_PRUNE_SET
                      WHERE APPLY_QUAL = 'ApplyQual'
                        AND SET_NAME = 'SetName'
                        AND TARGET_SERVER = 'Target_Server'),
       SYNCHTIME = (SELECT SYNCHTIME
                       FROM CaptureSchema.IBMSNAP_PRUNE_SET
                      WHERE APPLY_QUAL = 'ApplyQual'
                        AND SET_NAME = 'SetName'
                        AND TARGET_SERVER = 'Target_Server')
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'SetName'
   AND TARGET_SERVER = 'Target_Server';
Figure 8-16 SQL to update pruning control tables to resume apply processing
This SQL ensures that the Apply program resumes processing from the correct starting point in the CD table for each member in the subscription set.
3. Add members to the subscription set. See 5.3.5, Adding subscription members to existing subscription sets on page 178.
4. Insert the CAPSTART signal into the IBMSNAP_SIGNAL table at the Capture control server to indicate that a new subscription member was added. First locate the map ID for the new subscription member, as shown in Figure 8-17:
SELECT MAP_ID
  FROM CaptureSchema.IBMSNAP_PRUNCNTL
 WHERE SOURCE_OWNER = 'SrctableSchema'
   AND SOURCE_TABLE = 'SrcTbl'
   AND SOURCE_VIEW_QUAL = SrcVwQual
   AND APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'SetName'
   AND TARGET_SERVER = 'Target_Server'
   AND TARGET_OWNER = 'TgtSchema'
   AND TARGET_TABLE = 'TgtTbl'
WITH UR;
With the MAP ID, the SQL shown in Figure 8-18 will insert the CAPSTART signal.
INSERT INTO CaptureSchema.IBMSNAP_SIGNAL
       (SIGNAL_TIME, SIGNAL_TYPE, SIGNAL_SUBTYPE,
        SIGNAL_INPUT_IN, SIGNAL_STATE, SIGNAL_LSN)
VALUES (CURRENT TIMESTAMP, 'CMD', 'CAPSTART', 'MapId', 'P', NULL);
5. Ensure that the CAPSTART signal is picked up by the Capture program by running the SQL in Figure 8-19 to check the IBMSNAP_PRUNCNTL table. Continue to run this SQL until the SYNCHPOINT value is not null.
6. Using your method of choice, load the source table data into the subscription member target table.
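A check of the following shape (a sketch, using the MAP_ID obtained in step 4 as a placeholder) verifies that Capture has recorded a synchpoint for the new member:

```sql
-- A sketch: poll PRUNCNTL until SYNCHPOINT is no longer null.
-- 'MapId' is the value returned by the MAP_ID query in Figure 8-17.
SELECT SYNCHPOINT
  FROM CaptureSchema.IBMSNAP_PRUNCNTL
 WHERE MAP_ID = 'MapId'
WITH UR;
```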
7. If you started the Apply program with the OPT4ONE parameter (UNIX, Windows, and z/OS) or the OPTSNGSET parameter (iSeries), then you must stop and start the Apply program. See 6.2.3, Apply parameters on page 298 for details on this parameter.
8. At the Apply control server, run the SQL in Figure 8-20 to update IBMSNAP_SUBS_SET. This activates the subscription set, causing the Apply program to process the subscription set immediately without a full refresh.
UPDATE ASN.IBMSNAP_SUBS_SET
   SET SYNCHPOINT = NULL,
       SYNCHTIME = CURRENT TIMESTAMP,
       LASTSUCCESS = CURRENT TIMESTAMP,
       LASTRUN = CURRENT TIMESTAMP - X MINUTES,
       STATUS = 0,
       ACTIVATE = 1
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'SetName';
Figure 8-20 SQL for Apply to start subscription set and prevent full refresh.
9. At this point the subscription set should resume processing with the addition of a new member.
To activate a subscription set, see 8.2.2, Deactivating and activating subscriptions on page 359; this resumes processing of the subscription set by the Apply program.
Note: If you registered a source table on the iSeries with RRN as the primary key, then you cannot use the following procedure. The RRN has to be the last column in the CD table. Therefore, you have to remove the registration, add the new column to the source table, and then reregister, specifying that the RRN is to be captured. See 8.1.3, Removing registrations on page 355 and 8.1.1, Adding new registrations on page 352, specifically on the iSeries.
Non-DB2 restrictions: You cannot use these steps to add columns to registered sources on non-DB2 relational databases. A registration for a non-DB2 relational source includes a set of triggers used for capturing changes, and you cannot alter these triggers. Therefore, if you need to add a new column to such a source table and replicate the data in that column, you must drop and re-create the existing registered source.
1. Quiesce all activity against the source table where you want to add a new column.
2. On the Windows, UNIX, and z/OS platforms, you can keep the Capture program active while adding a new column to the source table by inserting a USER signal into the signal (IBMSNAP_SIGNAL) table and waiting for the Capture program to process it. After the Capture program processes the USER signal, it has no more activity to process against the associated CD table and no longer requires access to that CD table.
3. On the iSeries, the on-the-fly addition of a new column as described in step 2 is not supported, because the Capture program holds a lock on the corresponding CD table that will contain the new column (as described in step 6). Therefore, you need to stop the Capture program running on the iSeries. Use one of the following methods to stop the Capture program:
The Stop Capture window in the Replication Center
The ENDDPRCAP iSeries CL command
See Stopping Capture and Apply on page 244 for details on stopping the Capture program.
4. Deactivate the subscription set; see 8.2.2, Deactivating and activating subscriptions on page 359.
5. Add the new column to the source table using the SQL ALTER TABLE ... ADD statement. This is done through the DB2 Command Center or any other tool available for your platform. The ALTER of the source table is not done through the Replication Center.
6. Add the new column to the CD table using the Replication Center. See Figure 8-6 on page 358 to display the Registered Table Properties window. Scroll down the list of columns, looking for the new column that was added in step 5. When you find it, select After-Image, Before-Image, or both, then click the OK button to generate and run the SQL script that alters the CD table.
7. Start the Capture program, if it was stopped. The Capture program automatically initializes the registration and captures the changes to the new columns when it first reads log or journal entries that contain them.
8. Quiesce all activity against the target table.
9. Add the new column to the target table using the SQL ALTER TABLE statement.
10. Add the new column to the subscription using the Replication Center. See Figure 5-2 on page 181 to display the Create Subscription Set notebook -> Source to Target Mapping page (see Figure 5-3 on page 186). Select the member -> Details... (see Figure 5-5 on page 189). Move the new column from the Registered columns to the Selected columns -> Column Mapping page (see Figure 5-6 on page 190). Map the new column in the Selected columns to the Target columns: clicking the arrow in the blue box, then dragging the mouse to the circle in the red box, creates the arrow indicating that the source and target columns are mapped. Click OK -> OK from the Create Subscription Set notebook to generate and run the SQL script.
11. Reactivate the subscription set.
See 8.2.2, Deactivating and activating subscriptions on page 359 to resume processing this subscription set in the Apply program.
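The USER signal of step 2 can be inserted with SQL of the following shape. This is a sketch: the subtype string is user-chosen (here 'ADDCOL'), and the Capture program updates SIGNAL_STATE once it has processed the signal:

```sql
-- A sketch of the step 2 USER signal. 'ADDCOL' is an arbitrary,
-- user-chosen subtype; SIGNAL_STATE 'P' means pending.
INSERT INTO CaptureSchema.IBMSNAP_SIGNAL
       (SIGNAL_TIME, SIGNAL_TYPE, SIGNAL_SUBTYPE,
        SIGNAL_INPUT_IN, SIGNAL_STATE, SIGNAL_LSN)
VALUES (CURRENT TIMESTAMP, 'USER', 'ADDCOL', NULL, 'P', NULL);
```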
have to recreate them through the Replication Center. You can also use the PROMOTE function to duplicate your configuration on other systems.
Important: The promote function generates scripts to duplicate an existing replication configuration. It does not connect to the new server to validate the scripts.
You can promote: Registered tables Registered views Subscription sets
Restriction: You cannot use the promote function to copy replication definitions to or from non-DB2 (for example, Informix) databases. You cannot use the promote function to copy replication definitions that include iSeries remote journals. You can only promote replication definitions to like systems. For example, if your existing definitions are for DB2 for z/OS, they cannot be promoted to DB2 for Windows and UNIX; they can only be promoted to another DB2 for z/OS system.
Important: The PROMOTE SUBSCRIPTION SET dialog, described in the next section, does not have an input field for this new CD table schema. If you change it here, then you will need to manually update the promote subscription set scripts. This will be corrected in a future fixpack.
Create source tables: check this box to create source tables at the new server and, optionally, to change the source table schema at that server.
Click OK to generate the promotion scripts. Review the messages in the Messages and SQL Scripts window for any errors and then click CLOSE to close the Messages window. Refer to 2.13, Run Now or Save SQL on page 82 for the procedure to Run Now or Save SQL. The only option available for promote scripts is Save SQL.
Change CD table schema: new schema for CD tables in the promoted registrations (optional).
Important: The PROMOTE SUBSCRIPTION SET dialog, described in the next section, does not have an input field for the new CD view and CD table schema. If you change it here, then you will need to manually update the promote subscription set scripts. This will be corrected in a future fixpack
Create source views: check this box to create the source views. You can also optionally:
Create any unregistered base tables.
Specify a new schema for the promoted views.
Click OK to generate the promotion scripts. Review the messages in the Messages and SQL Scripts window for any errors and then click on CLOSE to close the Messages window. Refer to 2.13, Run Now or Save SQL on page 82 for the procedure to Run Now or Save SQL. The only option available for promote scripts is Save SQL.
Target server alias: database alias of the target server for this promoted subscription.
Apply qualifier: apply qualifier for this promoted subscription.
Name of subscription set: set name for the promoted subscription.
Capture schema: capture schema used at the source server for this subscription.
Schema of source tables or views: source schema used by the promoted subscription.
Schema of target tables or views: target schema used by the promoted subscription.
If the subscription set includes replica target tables, there are two additional input fields:
Capture schema: capture schema used at the target server for this subscription.
CD table schema: schema of the CD table used at the target server.
Click OK to generate the promotion scripts. Review the messages in the Messages and SQL Scripts window for any errors and then click on CLOSE to close the Messages window. Refer to 2.13, Run Now or Save SQL on page 82 for the procedure to Run Now or Save SQL. The only option available for promote scripts is Save SQL.
There are several replication tables that are not pruned by Capture or Apply and must be pruned manually. These tables are:
ASN.IBMSNAP_APPLYTRAIL at the Apply control server
ASN.IBMSNAP_APPLYTRACE at the Apply control server
ASN.IBMSNAP_MONTRAIL at the Monitor control server
CCD tables at the target server populated by an Apply program
You can use the Apply program to automatically prune these tables based on an SQL DELETE statement that you supply.
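For the Monitor control server's trail table, a statement of the following shape could be supplied. This is a sketch only: it assumes IBMSNAP_MONTRAIL carries a LASTRUN timestamp column, as the Apply trail tables do; verify the column name against your catalog before using it:

```sql
-- A sketch: keep only the last week of monitor-cycle rows.
-- Assumes a LASTRUN timestamp column; verify on your system.
DELETE FROM ASN.IBMSNAP_MONTRAIL
 WHERE (CURRENT DATE - 7 DAYS) > DATE(LASTRUN);
```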
Apply control server alias: location of the tables to be pruned.
Set name: any name you choose.
Apply qualifier: any existing or new apply qualifier.
Target server alias: must be the same as the Apply control server alias.
Check Activate the subscription set.
3. Click on the Statements tab of the Create Subscription Set notebook and click the Add button. 4. Do the following tasks on the Add SQL Statement or Procedure Call window: a. Choose At the target server after the subscription set is processed. b. Choose SQL Statement and enter the SQL statement shown in Example 8-2:
Example 8-2 SQL statement to prune APPLYTRAIL table
DELETE FROM ASN.IBMSNAP_APPLYTRAIL
 WHERE STATUS = 0
   AND (CURRENT DATE - 7 DAYS) > DATE(LASTRUN)

This will keep all rows reporting an error and all rows for Apply processing in the last week.
5. Repeat Steps 3 and 4, but use the SQL statement in Example 8-3 to prune from the APPLYTRACE table:
Example 8-3 SQL statement to prune ASN.IBMSNAP_APPLYTRACE
DELETE FROM ASN.IBMSNAP_APPLYTRACE
 WHERE (CURRENT DATE - 7 DAYS) > DATE(TRACETIME)

This will keep all Apply messages for the last week.
6. Click on the Schedule tab of the Create Subscription Set notebook and set the relative timing for the subscription set.
7. Click OK to generate the subscription scripts.
8. Review the messages in the Messages and SQL Scripts window for any errors and then click on CLOSE to close the Messages window.
9. Refer to 2.13, Run Now or Save SQL on page 82 for the procedure to Run Now or Save SQL.
10. If this is a new apply qualifier, start Apply. If this is an existing apply qualifier, there is nothing further to be done.
Important: You should use the RUNSTATS utility to update statistics in the DB2 catalog for the CD and UOW tables when those tables are populated with change information that reflects the maximum activity on your registered tables. Apply performance will be degraded if the statistics are updated with low values that do not represent the normal state of these tables.
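On DB2 for Windows and UNIX, such a RUNSTATS invocation (issued from the DB2 command line processor) might look like the following sketch; the CD table name is illustrative:

```sql
-- A sketch: gather table and index statistics on a CD table
-- while it holds a representative volume of change data.
RUNSTATS ON TABLE CAPSCHEMA.CD_EMPLOYEE
  WITH DISTRIBUTION AND DETAILED INDEXES ALL
```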
Apply control servers IBMSNAP_APPLYTRACE IBMSNAP_APPLYTRAIL Monitor control servers IBMSNAP_MONTRAIL IBMSNAP_MONTRACE IBMSNAP_ALERTS
Important: If these tables will not be available during the reorganization, SUSPEND Capture, stop Apply, or stop the Alert Monitor while the reorganization takes place.
4. Apply will full refresh all the members in the subscription set that contains the target table. You can choose to refresh only the member that is copied from the recovered registered table; refer to 8.2.5, Adding members to existing subscription sets on page 366 for details. If consistent change data (CCD) tables are copied from this registered table, then the procedure depends on the type of CCD:
Non-condensed, with one row for every change to the registered table:
DELETE all rows from the CCD where IBMSNAP_LOGMARKER is greater than the recovery point in time. This is valid whether the CCD is complete or non-complete.
Condensed and complete, with one row for every row in the registered table and one row for every row that has been deleted from the registered table:
Copy all rows from the CCD where IBMSNAP_LOGMARKER is less than the recovery point in time and IBMSNAP_OPERATION = 'D' to preserve the record of deletions. Unload these rows to a file or store them in a different table.
Force a full refresh of the CCD table using the technique above for user copies and replicas. All these rows will have IBMSNAP_OPERATION = 'I' and IBMSNAP_LOGMARKER = current timestamp.
Copy the saved deletion rows back to the CCD table.
Condensed and not complete, with one row for every changed row in the registered table:
There is no way to recover this data; the previous values for the changed rows are not available.
Important: You must ensure that no log or journal is deleted or moved before Capture has processed all the changes in the log or journal. If the Capture program requests a log record and the log file containing that record is not available, Capture will stop. This is not a recoverable error. The resolution is to start Capture with the COLD parameter and full refresh all your target tables.
You can use the capschema.IBMSNAP_RESTART table and the DB2 command db2flsn on DB2 for Windows and UNIX Capture control servers to determine which logs have been completely processed by Capture and can be removed from the system. The capschema.IBMSNAP_RESTART table and the DSNJU004 (Print Log Map) utility can be used on DB2 for OS/390 and z/OS Capture control servers to determine which logs are no longer needed by Capture. DB2 DataPropagator for iSeries includes a journal receiver exit program (DLTJRNRCV), which is registered automatically when you install DB2 DataPropagator. Specify DLTRCV(*YES) and MNGRCV(*SYSTEM) on the CHGJRN or CRTJRN command to use this exit program, which prevents the deletion of journal receivers until Capture has finished processing them.
Expand the Replication Definitions -> Apply Control Servers folder -> the Apply control server -> Subscription Sets folder. Left-click on one subscription set to select it for refresh; hold the Ctrl key while clicking to select multiple sets. Right-click on the selected set, left-click on Full Refresh in the list of operations, and then choose Automatic from the next list. Refer to 2.13, Run Now or Save SQL on page 82 for the procedure to Run Now or Save SQL.
Important: If you started the Apply program using the OPT4ONE parameter for UNIX, Windows and z/OS or OPTSNGSET parameter on the iSeries, then you have to stop and start the Apply program after requesting the refresh. See 6.2, Capture and Apply parameters on page 295 for details on this parameter.
Chapter 9.
Important: Do not forget that each user copy, replica, point in time and condensed CCD target table must include one or more columns that uniquely identify each target table row. Those target table types must have a primary key or unique index defined. Be sure to include the columns needed for uniqueness in your column list, regardless of the method you use to filter columns.
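For example, if the target's key is the employee number, a unique index of the following shape satisfies the requirement. This is a sketch; the schema, table, index, and column names are illustrative:

```sql
-- A sketch: enforce target-row uniqueness for a user-copy target.
-- TGTSCHEMA, TGTEMPLOYEE, IXEMPNO, and EMPNO are illustrative names.
CREATE UNIQUE INDEX TGTSCHEMA.IXEMPNO
    ON TGTSCHEMA.TGTEMPLOYEE (EMPNO);
```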
When you register a source table using the Replication Center, you choose the columns you want to include in the change data (CD) table. Only those columns are available for replication. This has the obvious advantage of reducing the size of the CD table. If a large number of changes occur to source table columns that are not in the CD table, even greater space savings and performance improvements can be gained by choosing the option Capture changes to registered columns only; see Capture changes to all on page 143 for more information about this option. The disadvantage is that the excluded columns are not available if needed in the future. On the iSeries, using Capture column filters can increase CPU utilization, so it does not pay to use them just to save CD table storage unless the saving is significant. As an alternative, you can filter columns using a source table view. Example 9-1 illustrates a source table view that filters out unwanted columns:
Example 9-1 Subsetting columns using a source table view
Source table is created as:

CREATE TABLE SAMP.EMPLOYEE
  (EMPNO CHAR(6) NOT NULL,
   FIRSTNME CHAR(10),
   LASTNAME CHAR(10),
   HIREDATE DATE,
   SALARY DECIMAL(6,2),
   PRIMARY KEY(EMPNO))

You do not want to replicate the SALARY column, so you create a view:

CREATE VIEW SAMP.EMPVIEW (EMPNO, FIRSTNME, LASTNAME, HIREDATE)
  AS SELECT E.EMPNO, E.FIRSTNME, E.LASTNAME, E.HIREDATE
  FROM SAMP.EMPLOYEE E
You must register both the source table and the source table view for replication. The advantage of this technique is that the SALARY column is kept confidential. If you chose all the EMPLOYEE columns when registering the source table, then the SALARY column is available for other replication scenarios that may need it. If you are already using a view for other reasons, such as joining data, then it makes sense to include the filtering logic in this view as well. The disadvantage of this technique is that you must create the view and the view registration, in addition to the source table registration.
There is no performance penalty in Apply regardless of the option chosen. Apply builds a column list for a select against the source table or view (full refresh) or against the CD table or CD table view (change processing) based on the entries in ASN.IBMSNAP_SUBS_COLS at the apply control server. An advantage of filtering columns at the subscription member level is that this only restricts the columns for a particular subscription member, so the only limit for other replication is the columns chosen for the CD table. You can use any combination of these three options: Exclude columns during source table registration Exclude columns with a source table view Exclude columns when defining a subscription member
Important: Excluding rows during replication can have a negative impact on the data integrity and consistency of your target tables. Certainly, excluding rows means that your source and target tables will not match.
REFERENCING NEW AS CD FOR EACH ROW MODE DB2SQL
WHEN (CD.IBMSNAP_OPERATION = 'D')
SIGNAL SQLSTATE '99999' ('CD DELETE FILTER')
Skipping deletes may cause unusual behavior in your target table if key values are re-used. If a row is deleted from the source and then a new row is inserted into the source with the old key values, the existing row in the target table will be updated with the new values. If you need to preserve the old values, you may need to specify additional key columns for the target or change the target type to a non-condensed CCD. Skip all changes based on column values in the change. Example 9-3 is a trigger to exclude all changes where the LOCATION column is equal to TEXAS:
Example 9-3 CD table trigger to exclude changes based on a column value
CREATE TRIGGER SAMP.DEPT_SKIPTEXAS NO CASCADE BEFORE INSERT
ON SAMP.CDDEPARTMENT
REFERENCING NEW AS CD FOR EACH ROW MODE DB2SQL
WHEN (CD.LOCATION = 'TEXAS')
SIGNAL SQLSTATE '99999' ('CD TEXAS FILTER')
During a full refresh, all the rows with LOCATION = 'TEXAS' will be copied to the target table unless you specify a subscription member row filter. The Capture Throughput Analysis Report shows the number of rows skipped because of a CD table trigger or because you chose to capture only changes to your selected columns. The disadvantage of using triggers to skip changes is that those changes are not available for replication to any location. If you have some subscriptions which need those changes and some that do not, a better choice is to use Apply row filtering for the members that do not need the changes. This is described in Apply row filters on page 388.
Example 9-4 is the modification to the generated Informix insert trigger which skips all rows where the column CITY is set to Denver:
Example 9-4 Informix row filter
The generated trigger is:

CREATE TRIGGER sampifx.itccdcustomer insert on sampifx.customer
  referencing new as new
  for each row
  (execute procedure.....

The modified trigger is:

CREATE TRIGGER sampifx.itccdcustomer insert on sampifx.customer
  referencing new as new
  for each row
  when (new.city <> 'Denver')
  (execute procedure....
You can skip all deletes to an Informix source table by dropping the delete trigger generated from the Replication Center.
In this case, LOCATION is called the partitioning key, since it is used to partition the source table during replication. If this were a data distribution scenario, you might have a second subscription set for a different target server, the same target table, and a row filter of LOCATION = 'CALIFORNIA'. If the column used for the partitioning key may be updated at the source server, then you should also choose the Capture updates as pairs of deletes and inserts option when registering the source table. This ensures that, if the LOCATION value is changed from 'TEXAS' to 'CALIFORNIA', then the corresponding rows will be deleted from the target server receiving only TEXAS data and inserted on the target server receiving only CALIFORNIA data.
In some cases, you may need to apply a filter only to the Apply select from the CD table. For this, you use the UOW_CD_PREDICATES column. The Replication Center does not currently have an input method for this column in ASN.IBMSNAP_SUBS_MEMBR at the Apply control server, so you must update it manually. If you need to refer to UOW table columns in your predicate and your target table is a user copy, then you must also manually set the JOIN_UOW_CD column in ASN.IBMSNAP_SUBS_MEMBR to Y at the Apply control server. The CD filter and the join specification will be added to the Replication Center in the future. Suppose that you did not want to replicate deletes to a target table. Other target tables need the deletes, so you cannot use a Capture trigger to skip captured deletes. You would insert IBMSNAP_OPERATION <> 'D' in the UOW_CD_PREDICATES for the subscription member for that target table. Another example is the case where you want to skip all changes made by a certain authorization userid (perhaps a batch job that does nightly archives and deletes). The capschema.IBMSNAP_UOW table includes the authorization ID for each unit of work. If your target table type is user copy, you must set JOIN_UOW_CD to Y in the subscription set member entry to make the authorization ID available to Apply; you do not need to update JOIN_UOW_CD for other target table types. Insert IBMSNAP_AUTHID <> 'skipid' in the UOW_CD_PREDICATES in ASN.IBMSNAP_SUBS_MEMBR. Source table views can also be used to subset source table and CD table data before it is applied. Views may be required if your filtering predicates are very long. The ASN.IBMSNAP_SUBS_MEMBR predicate columns are limited in length: 1024 bytes for the row filter column PREDICATES and 1024 bytes for the CD filter column UOW_CD_PREDICATES.
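The manual update described above might look like the following sketch. The predicate and the identifying values are placeholders, and the doubled single quotes embed the 'D' literal inside the predicate string:

```sql
-- A sketch: skip captured deletes for one subscription member.
-- ApplyQual, SetName, TgtSchema, and TgtTbl are placeholders.
UPDATE ASN.IBMSNAP_SUBS_MEMBR
   SET UOW_CD_PREDICATES = 'IBMSNAP_OPERATION <> ''D'''
 WHERE APPLY_QUAL = 'ApplyQual'
   AND SET_NAME = 'SetName'
   AND TARGET_OWNER = 'TgtSchema'
   AND TARGET_TABLE = 'TgtTbl';
```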
Important: Any predicate that you define for Apply is used as part of a SELECT statement at the capture control server. You can improve performance by including the columns of the predicate in the existing index for the CD or UOW table. We do not recommend creating an additional index, since that adds overhead to the Capture process. Instead, note the columns of the existing index, drop the index, and recreate with your added columns at the end.
Source table views Subscription columns based on SQL expressions SQL statements or stored procedures issued after Apply processes changes
Important: The CD table trigger cannot change the data type or length of a captured column. Those attributes are fixed when the CD table is created and must match the source table column attributes so that DB2 log records can be decoded. If the column attributes of your target are different from the column attributes of your source, this transformation should be done either by a source table view or by using SQL expressions in the source to target table column mapping.
Example 9-6 is a trigger that sets default values for a source table column with a null value:
Example 9-6 CD table trigger to set default values for a null column
CREATE TRIGGER SAMP.DEPT_FIXLOC NO CASCADE BEFORE INSERT
ON SAMP.CDDEPARTMENT
REFERENCING NEW AS NEW FOR EACH ROW MODE DB2SQL
WHEN (NEW.LOCATION IS NULL)
BEGIN ATOMIC
  SET NEW.LOCATION = 'Unknown';
END
Example 9-7 shows a trigger that sets default values for a column based on the value given for some other column in the source table.
Example 9-7 Set values in one column based on another column
CREATE TRIGGER SAMP.DEPT_FIXLOC
  NO CASCADE BEFORE INSERT ON SAMP.CDDEPARTMENT
  REFERENCING NEW AS NEW
  FOR EACH ROW MODE DB2SQL
  WHEN (NEW.LOCATION IS NULL)
BEGIN ATOMIC
  SET NEW.LOCATION = CASE
    WHEN NEW.ADMRDEPT = ... THEN ...
    WHEN NEW.ADMRDEPT = ... THEN ...
    WHEN NEW.ADMRDEPT = ... THEN ...
    WHEN NEW.ADMRDEPT = ... THEN ...
    WHEN NEW.ADMRDEPT = ... THEN ...
    ELSE 'Unknown'
  END;
END
Example 9-8 is a trigger where the reference information is stored in a separate table at the source server:
Example 9-8 Set values in one column based on another table
CREATE TRIGGER SAMP.DEPT_FIXLOC
  NO CASCADE BEFORE INSERT ON SAMP.CDDEPARTMENT
  REFERENCING NEW AS NEW
  FOR EACH ROW MODE DB2SQL
  WHEN (NEW.LOCATION IS NULL)
BEGIN ATOMIC
  SET NEW.LOCATION = (SELECT L.LOCNAME
                        FROM SAMP.LOCATIONS L
                       WHERE NEW.ADMRDEPT = L.DEPT);
END
All of the triggers in these examples were tested using DB2 Universal Database for Windows V8 and the DB2 SAMPLE database.
The target table is defined as:

CREATE TABLE EMPLOYEE
  (EMPLOYEE_NUMBER INTEGER NOT NULL,
   EMPLOYEE_NAME VARCHAR(40),
   PRIMARY KEY(EMPLOYEE_NUMBER))

The source table view to map this transformation is:

CREATE VIEW REPEMPLOYEE (EMPLOYEE_NUMBER, EMPLOYEE_NAME) AS
  SELECT INTEGER(E.EMPNO),
         VARCHAR(E.LASTNAME CONCAT ', ' CONCAT E.FIRSTNME CONCAT ' '
                 CONCAT E.MIDINIT, 40)
  FROM EMPLOYEE E
You register the source table EMPLOYEE and then the source table view REPEMPLOYEE. When you subscribe to the source view, the names, data types, and lengths of the columns already meet the target table definitions. A join view is more complex in terms of processing, since it involves multiple source tables and multiple CD tables. When you register a join view, the Replication Center creates a corresponding join view over each CD table. Example 9-10 shows a join view with two source tables:
Example 9-10 Source table view over two tables
CREATE VIEW REPEMPLOYEE
  (EMPLOYEE_NUMBER, EMPLOYEE_NAME, DEPARTMENT, LOCATION, MANAGER_NUMBER) AS
  SELECT INTEGER(E.EMPNO),
         VARCHAR(E.LASTNAME CONCAT ', ' CONCAT E.FIRSTNME CONCAT ' '
                 CONCAT E.MIDINIT, 40),
         E.WORKDEPT, D.DEPTNAME, D.MGRNO
  FROM EMPLOYEE E, DEPARTMENT D
  WHERE E.WORKDEPT = D.DEPTNO
Example 9-11 shows the CD table views created when you register this source table view:
Example 9-11 CD table views
CDEMPLOYEE view:

CREATE VIEW REPCDEMPLOYEE
  (<IBMSNAP columns>, EMPLOYEE_NUMBER, EMPLOYEE_NAME, DEPARTMENT,
   LOCATION, MANAGER_NUMBER) AS
  SELECT <IBMSNAP columns>,
         INTEGER(E.EMPNO),
         VARCHAR(E.LASTNAME CONCAT ', ' CONCAT E.FIRSTNME CONCAT ' '
                 CONCAT E.MIDINIT, 40),
         E.WORKDEPT, D.DEPTNAME, D.MGRNO
  FROM CDEMPLOYEE E, DEPARTMENT D
  WHERE E.WORKDEPT = D.DEPTNO

CDDEPARTMENT view:

CREATE VIEW REPCDDEPARTMENT
  (<IBMSNAP columns>, EMPLOYEE_NUMBER, EMPLOYEE_NAME, DEPARTMENT,
   LOCATION, MANAGER_NUMBER) AS
  SELECT <IBMSNAP columns>,
         INTEGER(E.EMPNO),
         VARCHAR(E.LASTNAME CONCAT ', ' CONCAT E.FIRSTNME CONCAT ' '
                 CONCAT E.MIDINIT, 40),
         E.WORKDEPT, D.DEPTNAME, D.MGRNO
  FROM EMPLOYEE E, CDDEPARTMENT D
  WHERE E.WORKDEPT = D.DEPTNO
A problem can occur with this technique if the following happens:
1. There is an EMPLOYEE row with WORKDEPT = 'AAA' and a DEPARTMENT row with DEPTNO = 'AAA', so the target table has a row derived from the join of these two rows.
2. The 'AAA' rows in EMPLOYEE and DEPARTMENT are deleted.
3. This delete is never replicated, because the CD table view shows a change with 'AAA', but the source table in the join no longer has this value.
You can work around the double delete problem by modifying the CD table views. Example 9-12 shows the modification:
Example 9-12 CD table view modification for double delete problem
Join condition for the CD table views:
WHERE E.WORKDEPT = D.DEPTNO
Modified predicate to include deletes regardless of the join predicate:
WHERE E.WORKDEPT = D.DEPTNO OR IBMSNAP_OPERATION = 'D'
If your view has a predicate, you may need a different modification. The goal of the modification is to always include any deletes in the CD table, regardless of whether or not the deletes meet the join condition or any other predicates that you have specified. Apply will ignore any deletes for rows that do not exist in the target table, so there is no penalty for replicating deletes that do not match the join condition or any predicates you provide.
Calculated columns
The Replication Center Member Properties window has a Column Mapping tab. You use the Column Mapping page to map source table columns to target table columns, and the Change Calculated Column button to provide SQL expressions that define columns on the target table.

In Example 9-9 on page 391, we showed a source table view that transformed the EMPLOYEE table. That same transformation could be accomplished by using SQL expressions to create calculated columns. CASE expressions can be used to provide conditional logic for a calculated column. You can map source table columns to different column names and different data types, and include DB2 special register values such as CURRENT TIMESTAMP or USER.

You can also map a calculated column to a literal value. In a data consolidation scenario, you might want a column in your target table that identifies the source server of the row. The expression for this calculated column would be a literal such as 'SAMPLE' representing the source server for that member.
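The kinds of expressions described above, as they might be entered in the Change Calculated Column dialog, could look like the following sketch (column names and values are illustrative, not from the sample database):

```sql
-- Conditional logic with a CASE expression:
CASE WHEN PREMIUM > 1000 THEN 'GOLD' ELSE 'STANDARD' END

-- A DB2 special register, for example to record when the row was applied:
CURRENT TIMESTAMP

-- A literal tagging the source server in a consolidation scenario:
'SAMPLE'
```

Each expression maps to one target table column for that subscription member.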
defined to run at the target server after the subscription set is processed, so that Apply will automatically call it every time changes are processed.
Note that, in the CD table, the PICTURE column is defined as one character. Capture puts a 'U' in this column if the BLOB column PICTURE is updated in the source table. When a subscription member is defined that copies from EMP_PHOTO and the PICTURE column is selected, the COL_TYPE value in
ASN.IBMSNAP_SUBS_COLS at the apply control server is set to 'L' for this target table column. When Apply processes this subscription member, the 'L' in ASN.IBMSNAP_SUBS_COLS indicates that this is a LOB column and requires special handling:
If this is a full refresh, the LOB column and the other columns are selected from the source table and inserted into the target table.
If this is change processing, there are two steps:
1. All the changes from the CD table are selected to a spill file and then applied to the target table. Only the non-LOB columns are processed.
2. If there are changes that are inserts, or that are updates to LOBs, then the LOB values for those changes are selected from the source table to a spill file and updated in the target table.
This method conserves space in the CD table and minimizes the amount of data that is transferred to the target server. It also allows replication of LOB columns that are defined as NOT LOGGED.
The DB2 Spatial Extender includes functions that can be used to convert (cast) spatial columns to text. Views and triggers using the spatial functions can be defined to replicate spatial data:
1. Create a view of the source table that casts the spatial column to a VARCHAR data type, as shown in Example 9-15:
Example 9-15 Source table view for spatial replication
CREATE VIEW SAMP.CUSTVIEW
  (ID, NAME, ADDRESS, CITY, STATE, ZIP, INCOME, PREMIUM, CATEGORY,
   LOCATION_TEXT) AS
  SELECT C.ID, C.NAME, C.ADDRESS, C.CITY, C.STATE, C.ZIP, C.INCOME,
         C.PREMIUM, C.CATEGORY,
         CAST(db2gse.ST_AsText(C.LOCATION) AS VARCHAR(50))
  FROM SAMP.CUSTOMER C

You must determine the correct function to convert the spatial data type to text and also determine the appropriate length.
2. Create the target table, including a column for the text representation of the spatial data as shown in Example 9-16:
Example 9-16 Target table for spatial replication
CREATE TABLE TARGET.CUSTOMER
  (ID INTEGER NOT NULL, NAME VARCHAR(30), ADDRESS CHAR(30), CITY CHAR(28),
   STATE CHAR(2), ZIP CHAR(5), INCOME DOUBLE, PREMIUM DOUBLE,
   CATEGORY SMALLINT, LOCATION_TEXT VARCHAR(40), LOCATION ST_POINT,
   PRIMARY KEY (ID))

The LOCATION column is registered as a spatial column using the DB2 Spatial Extender stored procedure ST_register_spatial_column.
3. Register the source table (SAMP.CUSTOMER) using the Replication Center. Do not register the LOCATION column.
4. Register the source view (SAMP.CUSTVIEW) using the Replication Center. Include the LOCATION_TEXT column in the registration.
5. Create a subscription set with a new member (or add a member to an existing subscription set). The source is SAMP.CUSTVIEW. Map all the columns from the view to the target table. Note that there is no mapping for the spatial column; this column will be populated by a trigger on the target table.
6. Create a trigger on the target table that converts the LOCATION_TEXT column to spatial data in the LOCATION column. Example 9-17 is the trigger for the TARGET.CUSTOMER table:
Example 9-17 Target table trigger to convert text to spatial
CREATE TRIGGER TARGET.CUST_SPATIAL_TRIG
  NO CASCADE BEFORE INSERT ON TARGET.CUSTOMER
  REFERENCING NEW AS NEW
  FOR EACH ROW MODE DB2SQL
  SET NEW.LOCATION =
      db2gse.ST_PointFromText(NEW.LOCATION_TEXT, db2gse.coordref()..srid(1))

In this example, LOCATION is defined as a point, so the ST_PointFromText function is called for the conversion. Refer to the DB2 Spatial Extender documentation for information on the function parameters.
7. Start Capture and Apply. Apply will select the spatial data as text (LOCATION_TEXT) from the source table view (full refresh) or the CD table view (change processing) and the target table trigger will convert the text to the spatial representation (LOCATION).
          MASTER    REPLICA
Capture   YES       YES
Apply     NO        YES
As you will see later on, you will run Capture on both servers, but Apply only on the REPLICA server.
A row is inserted into the apply control table ASN.IBMSNAP_SUBS_SET for replication from the REPLICA to the MASTER. The WHOS_ON_FIRST column in ASN.IBMSNAP_SUBS_SET for this set is 'F'.
Two rows are inserted into the apply control table ASN.IBMSNAP_SUBS_MEMBR for each target table: one with WHOS_ON_FIRST set to 'S' and the other with WHOS_ON_FIRST set to 'F'.
Two rows are inserted into the apply control table ASN.IBMSNAP_SUBS_COLS for each column in the target table: one with WHOS_ON_FIRST set to 'S' and the other with WHOS_ON_FIRST set to 'F'.
If the replica target table does not already exist, it is created with DATA CAPTURE CHANGES. If the target table does exist but does not have the DATA CAPTURE CHANGES attribute, it is altered to include DATA CAPTURE CHANGES.
The replica target table is registered at the REPLICA server, and a CD table is created to hold changes made at the replica. If conflict detection was chosen when registering the source table at the MASTER server, this CD table includes the before-image values for each change, since they will be needed to reverse conflicting transactions.
Rows are inserted into capschema.IBMSNAP_PRUNE_SET and capschema.IBMSNAP_PRUNCNTL at both the MASTER and the REPLICA.

It may be easier to think of this as two different replication scenarios:
WHOS_ON_FIRST = 'S' copies from a table registered on the MASTER to a target table on the REPLICA.
WHOS_ON_FIRST = 'F' copies from a table registered on the REPLICA to a target table on the MASTER.
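You can inspect the paired rows described above with a query like the following sketch, run at the apply control server (the apply qualifier is an assumed placeholder):

```sql
-- Show both directions ('S' and 'F') of an update-anywhere subscription set.
SELECT APPLY_QUAL, SET_NAME, WHOS_ON_FIRST,
       SOURCE_SERVER, TARGET_SERVER
  FROM ASN.IBMSNAP_SUBS_SET
 WHERE APPLY_QUAL = 'REPLQUAL'      -- assumed apply qualifier
 ORDER BY SET_NAME, WHOS_ON_FIRST;
```

For an update-anywhere set you should see two rows per set name, one with WHOS_ON_FIRST = 'F' and one with 'S', with SOURCE_SERVER and TARGET_SERVER reversed between them.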
log. Capture reads the capschema.IBMSNAP_REGISTER table to find registrations. A registration is not active until Apply has signalled that a full refresh has been done.
Then, Apply inserts one row into capschema.IBMSNAP_SIGNAL at the MASTER capture control server for each member in the set. The value for SIGNAL_TYPE is 'CMD', for SIGNAL_SUBTYPE is 'CAPSTART', and for SIGNAL_INPUT_IN is the MAP_ID of the subscription member from the capschema.IBMSNAP_PRUNCNTL table at the MASTER capture control server.

The REPLICA target tables (replicas) are loaded from the MASTER source tables using the same process described in 1.5, DB2 Replication V8 close up on page 22 for read-only target tables. If your REPLICA target tables have referential constraints, then you must do a manual refresh yourself to avoid violating the constraints, or use a modified ASNLOAD exit that loads the data in the correct order. Another technique is to remove the constraints, do the full refresh, and then put the constraints back.

Apply then updates the LASTSUCCESS and SYNCHTIME columns for this set in ASN.IBMSNAP_SUBS_SET and changes the MEMBER_STATE for each member in ASN.IBMSNAP_SUBS_MEMBR to 'L' to indicate that the target tables have been loaded.
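The signal row that Apply inserts is equivalent to the following sketch (the MAP_ID value is elided; take it from capschema.IBMSNAP_PRUNCNTL for the member in question):

```sql
-- CAPSTART signal: tells Capture to start capturing for one member.
-- SIGNAL_STATE 'P' means the signal is pending until Capture processes it.
INSERT INTO capschema.IBMSNAP_SIGNAL
       (SIGNAL_TIME, SIGNAL_TYPE, SIGNAL_SUBTYPE,
        SIGNAL_INPUT_IN, SIGNAL_STATE)
VALUES (CURRENT TIMESTAMP, 'CMD', 'CAPSTART',
        '...',             -- MAP_ID from capschema.IBMSNAP_PRUNCNTL
        'P');
```

Capture reads the signal from its log record, activates the registration, and updates SIGNAL_STATE when it has been processed.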
Reads the spill files and issues inserts, updates, and deletes against the MASTER target tables. Changes may need to be reworked when issuing inserts, updates, and deletes.
Executes any SQL statements from ASN.IBMSNAP_SUBS_STMTS that are marked to be run AFTER Apply processing.
Updates the ASN.IBMSNAP_SUBS_SET SYNCHPOINT and SYNCHTIME columns for this set (WHOS_ON_FIRST = 'F') at the apply control server with the LSN of the upper bound and the timestamp of the upper bound. The MEMBER_STATE in ASN.IBMSNAP_SUBS_MEMBR for all members of the set is set to 'S'.
Updates the capschema.IBMSNAP_PRUNE_SET SYNCHPOINT column for this set at the MASTER capture control server with the upper-bound LSN.
Inserts an audit row into ASN.IBMSNAP_APPLYTRAIL at the apply control server.
Replication may be relatively inexpensive compared to the other solutions, both in initial cost and in ongoing administration.
Peer to peer can operate across distances that are not easily supported, if supported at all, by disk mirroring.
Applications can run on any peer system, so query workload can be balanced across the peer configuration. Note that this is not workload balancing for change processing: each peer server must be able to handle local changes as well as the replicated changes from all other peers.

The disadvantages of peer to peer are:
Replication is asynchronous. There is always some delay in copying changes from one system to another, so at any given point in time two peer systems may not exactly match. If a failure occurs on one system and application activity is transferred to the second system, there may be changes that have not yet been replicated to the second system. The replication delay is called latency. Refer to Configuring for low-latency replication on page 467 for tips on reducing this delay.
Subscription sets in each direction are processed independently, so Apply does not have information for conflict detection. Conflict detection using triggers on page 413 describes the use of triggers to detect conflicts.
Most of the peer to peer setup is done through the Replication Center, but there are some updates to control tables that must be done manually. Administration and operations for peer to peer replication on page 406 includes all the manual updates needed.
to the source table and once when Apply processes them. Changes are only applied once.
Modify the capture control tables to resemble half of an update-anywhere configuration so that Capture will not recapture changes made by Apply. The advantage here is that you do not have to use a special userid for Apply, and changes are only captured once. The disadvantage is that the Replication Center cannot be used for this configuration after the manual modifications are done.
In the steps that follow, we refer to these methods as the APPLYID method and the UAHALF method to distinguish tasks that are unique to a particular method.
Important: You must set all of the above correctly or peer to peer replication will not work properly.
3. Define a subscription set with members from PEER1 to PEER2. The definitions are for standard user copies except where noted in this list:
   Apply control server: PEER2.
   Set name: anything you want.
   Apply qualifier: anything you want.
Important: If you are using the UAHALF method, then the same Apply qualifier must be used for both directions of replication.
   Capture control server: PEER1.
   Capture schema: capschema for the PEER1 capture control tables.
   Target server: PEER2.
   Check Activate the subscription set.
   Check Allow Apply to use transactional processing for set members. Specify a number for the Number of transactions.
   Schedule: Relative timing: 0 minutes.
4. Start Capture on PEER1.
5. Use the Replication Center to do a Manual Full Refresh from PEER1 to PEER2. Load the tables on PEER2 using your favorite utility.
6. Register the source tables at PEER2. Before-image columns are not needed. Be sure to choose the following options:
   No Allow full refresh of target table.
   No Capture changes from replica target table.
   Conflict detection: No detection.
Important: You must set all of the above correctly or peer to peer replication will not work properly.
7. Define a subscription set with members from PEER2 to PEER1. The definitions are for standard user copies except where noted in this list:
   Apply control server: PEER2.
   Set name: anything you want.
   Apply qualifier: for the UAHALF method, must match the qualifier from step 3.
Important: If you are using the UAHALF method, then the same Apply qualifier must be used for both directions of replication.
   Capture control server: PEER2.
   Capture schema: capschema for the PEER2 capture control tables.
   Target server: PEER1.
   Check Activate the subscription set.
   Check Allow Apply to use transactional processing for set members. Specify a number for the Number of transactions.
   Schedule: Relative timing: 0 minutes.
8. Start Capture on PEER2.
9. Use the Replication Center to do a Manual Full Refresh from PEER2 to PEER1.
Important: This step updates the capture and apply control tables. DO NOT LOAD THE PEER1 TABLES. These tables are already populated with data.
The setup for the two peer to peer methods is different from this point on.
1. Identify a userid that will be dedicated to Apply. This is the userid that will be used to start Apply, and it should be used only for that purpose. Use the same userid for both PEER1 and PEER2 to make this solution less complex. The userid, called APPLYID in this example, must be granted the necessary privileges for Apply.
2. If the peer servers are DB2 for Windows and UNIX, use the asnpwd command to create a password file on PEER1 and PEER2 that Apply will use to connect to both the source and target servers. On z/OS, modify the started task or JCL for Apply to ensure that this userid is used.
3. Update JOIN_UOW_CD and UOW_CD_PREDICATES for all the members in the subscription sets at PEER1 and PEER2.
Attention: You do not need to quiesce existing peers when adding a new peer. You can use the capture and apply control tables for coordination to ensure that no data is lost.
When adding PEER2, we knew that PEER1 data could be used to full refresh the PEER2 tables. When adding PEER3, you cannot arbitrarily choose a server for the full refresh, since that server might not have all the changes from the other peer servers. Here are the steps to add PEERn to a peer to peer configuration with x existing peers.
Important: If you are using the UAHALF method, then the same Apply qualifier must be used for all peer subscriptions.
   Capture control server: PEER1 through PEERx.
   Capture schema: capschema for the PEER capture control tables.
   Target server: PEERn.
   Check Activate the subscription set.
   Check Allow Apply to use transactional processing for set members. Specify a number for the Number of transactions.
   Schedule: Relative timing: 0 minutes.
   Member definitions are the same as usual.
2. Use the Replication Center to do a Manual Full Refresh for each of the x subscription sets you just created.
b. The ASN.IBMSNAP_SUBS_SET table at our chosen full-refresh server PEER1 has a SYNCHPOINT column for each subscription set. We use this column to make sure that any changes on the other peer servers that happened BEFORE the PEERn starting point have been copied to PEER1. Suppose that Table 9-3 lists the values at PEER1:
Table 9-3 Changes processed at PEER1
Capture control server   PEER1 subscription set   PEER1 SYNCHPOINT
PEER2                    PEER2-TO-PEER1           xBB7D0000000000000000
PEER3                    PEER3-TO-PEER1           xABFF0000000000000000
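These SYNCHPOINT values can be inspected with queries like the following sketch; the set names are taken from Table 9-3, and capschema stands for the capture schema at each peer:

```sql
-- At PEER1, the apply control server for the PEERx-to-PEER1 sets:
SELECT SET_NAME, SYNCHPOINT
  FROM ASN.IBMSNAP_SUBS_SET
 WHERE SET_NAME IN ('PEER2-TO-PEER1', 'PEER3-TO-PEER1');

-- At each other peer (its capture control server), for comparison:
SELECT SYNCHPOINT
  FROM capschema.IBMSNAP_PRUNE_SET;
```

Compare the values per peer; proceed only when every PEER1 SYNCHPOINT is greater than or equal to the corresponding value at that peer.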
c. You must wait until the ASN.IBMSNAP_SUBS_SET SYNCHPOINT values at PEER1 are greater than or equal to the capschema.IBMSNAP_PRUNE_SET values at all the other peers before loading the new PEERn with the PEER1 data. In our example, PEER3 changes that happened before PEERn's starting point are not yet copied to PEER1. You must wait for these changes to be copied to PEER1 before you unload the data from PEER1 and load the data on PEERn. Do not continue with the next steps until you have loaded data to PEERn.
4. Register the source tables at PEERn. Before-image columns are not needed. Be sure to choose the following options:
   No Allow full refresh of target table.
   No Capture changes from replica target table.
   Conflict detection: No detection.
Important: You must set all of the above correctly or peer to peer replication will not work properly.
5. Define x subscription sets with members from PEERn to PEER1 through PEERx. The definitions are for standard user copies except where noted in this list:
   Apply control server: PEER1 through PEERx.
   Set name: anything you want.
   Apply qualifier: for the UAHALF method, must match the PEERx qualifier.
Important: If you are using the UAHALF method, then the same Apply qualifier must be used for all subscriptions.
   Capture control server: PEERn.
   Capture schema: capschema for the PEERn capture control tables.
   Target server: PEER1 through PEERx.
   Check Activate the subscription set.
   Check Allow Apply to use transactional processing for set members. Specify a number for the Number of transactions.
   Schedule: Relative timing: 0 minutes.
   Members are defined as usual.
6. Start Capture on PEERn 7. Use the Replication Center to do a Manual Full Refresh for each of the x subscription sets you just created.
8. Complete the setup by following the instructions for the method you have chosen: APPLYID peer to peer method on page 408. UAHALF peer to peer method on page 409. 9. Start Apply at PEERn.
ALTER TABLE SAMP.ORDERS ADD COLUMN DELETED SMALLINT NOT NULL DEFAULT 0
ALTER TABLE SAMP.ORDERS ADD COLUMN LAST_UPDATED TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP
ALTER TABLE SAMP.ORDERS ADD COLUMN LAST_UPDATED_SITE CHAR(18) NOT NULL DEFAULT 'PEER1'

NOTE: The default for LAST_UPDATED_SITE for the PEER2 application tables is set to 'PEER2' instead of 'PEER1'.
Example 9-20 shows the update trigger after modifications to prevent conflicts:
Example 9-20 Trigger with modifications to prevent conflicts
CREATE TRIGGER CC_ORDERS_UPDATE
  NO CASCADE BEFORE UPDATE ON SAMP.ORDERS
  REFERENCING NEW AS NEW OLD AS OLD
  FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
  IF USER = 'APPLYID' AND NEW.LAST_UPDATED < OLD.LAST_UPDATED THEN
    SIGNAL SQLSTATE '99997' ('CHANGE TIMESTAMP < CURRENT TIMESTAMP');
  END IF;
  IF USER = 'APPLYID' AND NEW.LAST_UPDATED = OLD.LAST_UPDATED
     AND NEW.LAST_UPDATED_SITE <> 'PEER1' THEN
    SIGNAL SQLSTATE '99998' ('CHANGE TIMESTAMP = CURRENT TIMESTAMP');
  END IF;
  IF USER <> 'APPLYID' THEN
    SET NEW.LAST_UPDATED = CURRENT TIMESTAMP - CURRENT TIMEZONE;
    SET NEW.LAST_UPDATED_SITE = CURRENT SERVER;
  END IF;
END

NOTE: This trigger could certainly be written more efficiently. It is designed to clearly show the concepts involved.
If the USER (CURRENT USER on z/OS) is Apply, then conflict checking is done: If the new LAST_UPDATED is earlier than the current LAST_UPDATED, then reject the change. If the new LAST_UPDATED is the same as the current LAST_UPDATED and the change was not made on the PEER1 server, then reject the change. If there is no conflict, process the change, using LAST_UPDATED and LAST_UPDATED_SITE from the source server. If the USER is not Apply, then set the current values for LAST_UPDATED and LAST_UPDATED_SITE.
you want Apply to ignore. The filename is applyqualifier.sqs. It must be located in the apply_path directory that you specify when starting Apply. For our example, the apply qualifier is PEERQUAL, so the file would be named PEERQUAL.sqs. Example 9-21 lists the contents of PEERQUAL.sqs:
Example 9-21 File with conflict SQLSTATEs used by Apply
99997 99998
You start Apply with the parameter SQLERRORCONTINUE set to Y. When this parameter is used, Apply compares any non-zero SQLSTATEs to the list in the .sqs file. If it finds a match, it writes the change information to a file named applyqualifier.err (PEERQUAL.err in our example), skips the failing change, and continues processing all other changes. Apply reports this in the apply control table ASN.IBMSNAP_APPLYTRAIL with a STATUS code of 16.
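To review rejected conflicts after the fact, you can query the audit table as in the following sketch, run at the apply control server (the apply qualifier is the one from the example):

```sql
-- Subscription cycles in which at least one change was rejected and skipped.
SELECT APPLY_QUAL, SET_NAME, LASTRUN, STATUS, SQLSTATE, SQLCODE
  FROM ASN.IBMSNAP_APPLYTRAIL
 WHERE APPLY_QUAL = 'PEERQUAL'
   AND STATUS = 16
 ORDER BY LASTRUN DESC;
```

The row-level details of each skipped change are in the applyqualifier.err file, not in the control table.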
Important: Only the one change that was rejected is skipped. Any other changes in the unit of work containing the rejected change will be processed by Apply.
Chapter 10. Performance
In this chapter we discuss:
End-to-end system designs for replication
Capture performance
Apply performance
Configuring replication for low latency
Development benchmark results

DB2 Replication Version 8, like previous versions, is still SQL based, meaning that most operations depend on DB2 to do the bulk of the work in response to requests, in the form of SQL, from Capture and Apply. We discuss Capture and Apply separately, focusing on the key sub-operations each performs. The principal new aspect with Version 8 is Capture's accumulating information about whole transactions in memory before inserting records into CD tables. Lastly, DB2 Replication Development performed a few benchmarks on pSeries (AIX), and we provide some of the results. But first we discuss overall system design for replication.
[Figure: The major operations in DB2-to-DB2 replication. Components shown: Capture, Apply, DB2, Source Table, UOW Table, CD Table, Target Table. Steps 1 through 4: Capture reads the DB2 log, inserts into the UOW and CD tables; Apply fetches (blocked) from the CD table and inserts into the target table.]
For those wondering why Apply won't block the updates to the target table, the answer is that data integrity is a higher priority for DB2 Replication than performance. Apply needs the result of inserting, updating, or deleting each record in the spill file against the target table, so that it can determine whether an insert or update needs to be reworked in order to satisfy the data-integrity objective of making the target table row look like the corresponding source table row.
[Figure: Apply processing using a memory buffer and spill file. Components shown: Capture, Apply, DB2, UOW Table. Steps 1 through 4: change records are read, committed transactions are inserted into the control tables, Apply fetches (blocked) into its spill file, then inserts against the target.]
will perform the inserts, updates, and deletes to the target table from a select against the CD table, eliminating the intermediate step of writing all the change records to a spill file first and then doing the inserts, updates, and deletes from the spill file. Also, iSeries is very efficient in the way it sends records to a remote journal. DRDA is not used for this function, whereas DRDA is used if the CD tables are local to the source table and Apply at the target system fetches the changes from the CD table using the usual pull design. Figure 10-3 depicts iSeries-to-iSeries replication using remote journalling.
[Figure 10-3: iSeries-to-iSeries replication using remote journalling. Components shown: Application, Capture, Apply, DB2 iSeries (source and target), Source Table, Journal File, UOW Table, CD Table, Target Table. Steps 1, 2, and 3/4: journal entries are sent to the remote journal, Capture inserts into the UOW and CD tables, and Apply inserts into the target table.]
In the figure, we've labeled the insert into the target table as steps 3/4, to be consistent with our original description of the operations involved in replication. Step 3, the fetch from the CD table, is in effect embedded as a subselect in the insert operations to the target table.
There are some restrictions for replication between iSeries systems using remote journalling:
View registrations are not supported if any of the underlying registered tables are using remote journals for capturing changes.
Replication of LOBs and DataLinks is not supported via remote journals.
[Figure 10-4: Replication from DB2 to an Informix target through a DB2 federated server. Components shown: Capture, Apply, DB2, Informix, UOW Table, CD Table, Target Table. Steps 1 through 4, with Apply using a spill file before inserting into the Informix target.]
In Figure 10-4 we can see that Apply and the DB2 ESE/Connect federated server with the nicknames for the Informix target tables are on a different system from the Informix target tables themselves. Informix Client SDK is installed on the DB2 federated server because the federated server uses a non-DB2 server's own client APIs to access a federated data source. A faster replication system design would be to install DB2 ESE/Connect, with Apply, on the Informix server itself, so that Apply's inserts to the target table would not have to cross a network. Another design, not shown, would be to use Apply at the DB2 source server, like the DB2-to-DB2 push design described earlier. Apply could connect to the DB2 federated server that has the nicknames for the Informix target tables. If the DB2 source server were DB2 ESE, then the DB2 source server could also be the federated server that has the nicknames for the Informix target tables and Informix Client SDK installed.
[Figure: Replication from an Informix source using Capture triggers. Components shown: Application, Capture triggers, Apply, Informix, Informix Client SDK, DB2, Source Table, CCD Table, Target Table. Steps 1/2, 3, and 4: the application's changes fire the Capture triggers, which insert into the CCD table; Apply fetches (blocked) and inserts into the DB2 target table.]
In the figure, to be consistent with the earlier discussion of the major operations in replication, we've labeled the Capture triggers as performing operations 1/2, since they simulate the function of both 1) the log read and 2) the insert into the CCD tables that is performed by Capture in a DB2 system.

Another design, not shown, would be to install DB2 ESE/Connect on the Informix server and create nicknames in the DB2 ESE/Connect database on the Informix server itself for the Capture control tables, source tables, and CCD tables in Informix. The faster-performing pull design would still use the Apply on the DB2 target system. The push design would use the Apply with the DB2 ESE/Connect on the Informix server.
On z/OS, the parameters that increase DB2's memory allocations for log buffers are specified on the DB2 installation panel for Active Log Data Sets (DSNTIPL):
OUTBUFF - DB2 for z/OS and OS/390 Versions 6 and 7
WRTTHRSH - DB2 for z/OS and OS/390 Version 6 only
On Linux, UNIX, and Windows, it is LOGBUFSZ in the database configuration. The default value for LOGBUFSZ is 8 pages. We recommend specifying a larger value.

If Capture lags behind so that it cannot process the log records as fast as DB2 is creating them, and DB2 has to provide log records from the active log file on disk, then the placement of the log file on the disk system will affect how quickly DB2 can provide the records to Capture, which will affect Capture performance. But the active log file may be optimally placed already, as its location on the disk system also affects DB2's performance of updates for applications.

If Capture is not running while there is update activity in DB2, so that Capture has to catch up for hours, days, or weeks, and you are not going to reload the target tables and COLD start Capture, then the availability and placement of the DB2 archive log files will affect DB2's performance in providing the log records to Capture. If the point in the log from which Capture has to start reading is in a DB2 archive log file that is on tape, then when Capture starts, DB2 will request that the tape be loaded so that it can start providing the log records requested by Capture. For better performance, if Capture has not been running for some time, we recommend that the archive log files be recalled from tape to disk before starting Capture.

A possible consideration with Capture Version 8 arises when Capture is stopped while there is a long-running transaction in progress involving registered tables. When Capture is restarted, even if it is right after it stopped, it will ask DB2 for log records from the point in the log where the long transaction started.
In the capschema.IBMSNAP_REGISTER record where GLOBAL_RECORD = 'Y', the SYNCHTIME value gives an indication of the timestamps in the log records that Capture is currently reading. This value can be compared with the current system time to find out how far Capture is behind. We discuss this more under Capture latency.
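The comparison described above can be made with a query like the following sketch, run at the capture control server (capschema stands for your capture schema):

```sql
-- How far behind the log is Capture? The subtraction yields a DB2
-- timestamp duration; a small value means low Capture latency.
SELECT SYNCHTIME,
       CURRENT TIMESTAMP - SYNCHTIME AS capture_lag
  FROM capschema.IBMSNAP_REGISTER
 WHERE GLOBAL_RECORD = 'Y';
```

Run it periodically (or wrap it in a monitoring script) to watch the lag trend rather than relying on a single sample.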
SLEEP_INTERVAL Capture parameter. In a DB2 for z/OS data sharing environment, Capture goes to sleep if the log record provided by DB2 does not have a sufficient amount of data for Capture to process; usually, this means that the log buffer from DB2 is less than half full. Without a SLEEP_INTERVAL, the alternative would be for Capture to flood DB2 with requests for the next log record until there is enough activity in DB2 for DB2 to have another log record to give to Capture. The default value for SLEEP_INTERVAL is 5 seconds. It can be set to a different value through the CAPPARMS table, by starting Capture with a different value, or by changing the current value using asnccmd ... chgparms sleep_interval n, where n is the number of seconds you want Capture to be inactive when it reaches the end of the DB2 log.
Note: this value is in kilobytes, but the Capture parameter MEMORY_LIMIT is in megabytes.
TRANS_PROCESSED - number of transactions processed since the last CAPMON entry.
TRANS_SPILLED - number of transactions spilled to I/O since the last CAPMON entry
MAX_TRANS_SIZE - largest transaction since the last CAPMON entry
This information from the CAPMON table can be displayed in the Replication Center. In the left window, expand Replication Center -> Operations -> Capture Control Servers, highlight the Capture control server in the right window, right-click, and select Show Capture Throughput Analysis. The Capture Throughput Analysis window will open. In the upper part of the window, select the Capture Schema for the Capture you want throughput numbers for, and select Memory Usage from the Information to be displayed field. You can select From and To time periods in the window fields below, or accept the defaults and select time intervals if you don't want the display to show one record for each record in the CAPMON table. Then click Retrieve from the buttons at the bottom of the window, and the memory usage information will be displayed in the result-display area at the bottom of the window. Note that in the Memory Usage display, the memory used by Capture is displayed in kilobytes, though Capture's memory_limit parameter is specified in megabytes.
To see the Transactions Processed and Spilled information in the Replication Center, at the top of the Show Capture Throughput Analysis window, select Number of Transactions Committed in the Information to be displayed field and then Retrieve at the bottom of the window. We found occasionally, when trying to switch from Transactions Committed to Memory Usage in the Show Capture Throughput Analysis window, that we had to close the window and re-open it from the options available for a source server in the right side of the main Replication Center window.
The frequency with which Capture writes records to the CAPMON table is determined by the Capture parameter MONITOR_INTERVAL; the value is specified in seconds, and the default shipped value is 300 (5 minutes).
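If you prefer a query to the Replication Center windows, the same figures can be read straight from the monitor table (Capture schema ASN assumed):

```sql
SELECT MONITOR_TIME, CURRENT_MEMORY,
       TRANS_PROCESSED, TRANS_SPILLED, MAX_TRANS_SIZE
  FROM ASN.IBMSNAP_CAPMON
 ORDER BY MONITOR_TIME DESC
```

Remember that CURRENT_MEMORY here is reported in kilobytes.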
In times when there is a possibility of Capture running out of memory, you might lower the Monitor_Interval so that Capture writes records to the CAPMON table more frequently, and set a Monitor Condition for CAPTURE_MEMORY so that an alert will be sent when a current_memory threshold is reached.
Capture's Memory_Limit can also be changed while Capture is running by using the asnccmd command with chgparms and specifying a new Memory_Limit.
Note: On z/OS, it is recommended that a memory limit not be specified on Capture's job card and that Capture's memory_limit parameter be used to indicate how much memory Capture is allowed to use. That is, on Capture's job card, specify REGION=0M.
Capture's current setting for Memory_Limit can be determined using the asnccmd command, specifying the Capture_Server, Capture_Schema, and qryparms. This command can be issued by the Replication Center if the Capture Server supports the DB2 Administration Server (DAS). In the Replication Center, highlight a particular Capture Server, right-click, and select Change Operational Parameters. In the Change Operational Parameters window, select the Capture Schema in the upper left corner, and the shipped defaults, CAPPARMS values, and the values currently in effect should be displayed for that Capture. It may take a few minutes for the Replication Center to retrieve and fill in the current settings. Capture's memory limit can be changed in several ways:
In the Replication Center, using the Change Operational Parameters window described above. Change the current value for Memory Limit and press OK or Apply in the lower right corner. The Replication Center will first generate the asnccmd...chgparms command in a Run Now or Save SQL window, and then you can execute it. Note: If you don't have the DB2 Administration Server (DAS) at the Capture Server, then you can't do this with the Replication Center.
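From the command line, the equivalent pair of commands would be something like the following (server and schema names are examples); qryparms displays the settings currently in effect, and chgparms changes the memory limit, in megabytes, while Capture is running:

```shell
asnccmd capture_server=SAMPLE capture_schema=ASN qryparms
asnccmd capture_server=SAMPLE capture_schema=ASN chgparms memory_limit=64
```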
When Capture spills transactions to I/O, it creates a file for each separate transaction that it is collecting log records for. Capture removes each file when it reads the commit for the transaction from the DB2 log and has inserted records for the transaction into the CD tables and into the IBMSNAP_UOW table.
On z/OS, these files are created in VIO. An alternative on z/OS, if you don't want Capture to spill to VIO, is to have Capture spill transactions to a file specified on a CAPSPILL DD card in the Capture start JCL. Either UNIT=VIO or UNIT=SYSDA can be specified on this CAPSPILL DD card. Specifying UNIT=VIO on such a card would be equivalent to taking the default. UNIT=SYSDA would permit specifying a particular location where you want Capture to create transaction spill files. In any case, the transaction files created by Capture will be temporary, and each file will be deleted by Capture after it has inserted the data from the file into the CD tables. Here is an example of what a CAPSPILL DD card might look like in the Capture start JCL:
//CAPSPILL DD DSN=&&CAPSPL,DISP=(NEW,DELETE,DELETE),
//         UNIT=SYSDA,SPACE=(CYL,(50,100)),
//         DCB=(RECFM=VB,BLKSIZE=6404)
On Linux, UNIX, and Windows, Capture will create the spilled transaction files in the file system directory specified by the Capture_Path start parameter. Their file names will be their transaction ID with the extension .000. Capture will remove each file when it reads a commit from the DB2 log for the transaction and has inserted records for the transaction into the CD tables and the IBMSNAP_UOW table. Capture's performance will be better if CAPTURE_PATH specifies a file system directory with fast read/write access and no contention when Capture has to spill transactions.
With DB2 Replication Version 8, it can be expected that Capture will insert records into the CD tables in groups, as Capture detects from the log read the commit associated with a transaction involving registered tables. For short transactions that update one or a few tables and commit frequently, there should be just a few inserts to a few CD tables. For long transactions that update one, a few, or many tables and commit infrequently, there should be a large number of inserts, possibly to many CD tables, when Capture detects the commit for a transaction involving registered tables.
With Replication Version 8, there are several cases where Capture, when it detects the commit for a transaction involving registered tables, will not insert records into the CD table for every insert/update/delete record for a registered table:
CHGONLY=Y in the registration for a source table, and none of the columns updated match any of the registered columns, which are the data columns of the respective CD table. If not all the columns of the source table are being made available for replication, then CHGONLY on the registration for this table could save INSERT activity on the CD table and, in turn, Apply update activity on the target. However, the reduction in CD table size and insert activity to the CD table does have a cost, which is the CPU that Capture will use to compare the before and after images of the column values in the log record to determine whether any of the columns with different before and after images in the log or journal match the registered columns. If all, or even most, of the columns of a source table are registered, then CHGONLY on the registration will increase Capture's overhead unnecessarily; Capture is going to insert a record into the CD table anyway for almost every update to the source table.
To find out if a registration has CHGONLY=Y: In the Replication Center left window, select Replication Center -> Replication Definitions -> Capture Control Server -> capture control server name -> Registered Tables. Highlight the registered table in the right window, right-click, and select Properties. In the Registered Table Properties window, at the top of the Definition tab, check the Row-capture rule field's value. Capture changes to all columns (default) means CHGONLY=N. Capture changes to registered columns means CHGONLY=Y. Or, query the capschema.IBMSNAP_REGISTER table at the Capture Control Server for the record for the source table and check the value of the CHGONLY column.
RECAPTURE=N in the capschema.IBMSNAP_REGISTER record for a source table, and Apply replicates into the source table. RECAPTURE=N tells Capture that if it detects that the application that inserted/updated/deleted the
source table is Apply, it should not insert any records into the CD tables. The application updating a replication source table could be Apply when:
Using a three-tier replication environment where one Apply replicates to an intermediate User Copy table that is registered, and then another Apply replicates to down-stream user copies. If this is the case, then RECAPTURE=Y should be set so that the first Apply's changes will be captured for replication by the second Apply. Actually, a more efficient design might be to have the intermediate table be a CCD, which would have all the control information needed by the down-stream Apply and avoid the overhead of an intermediate Capture.
The replication source table is a Master or a Replica in an Update Anywhere replication configuration. RECAPTURE needs to be Y only if the changes made to the source table by Apply need to be replicated to other tables. This is the case for the Master if changes made at one Replica need to be replicated to other Replicas via the central Master. RECAPTURE can be N at Replicas if they will not be used for replicating to other down-stream copies of the table, and RECAPTURE can also be N for Replicas in a Peer-to-Peer update-anywhere configuration.
To find out if changes made by Apply are being recaptured: In the Replication Center, open the Registered Tables Properties window for a registered table as described above. At the bottom of the Definition tab, see if Capture changes from replica target tables is checked. Or, query the capschema.IBMSNAP_REGISTER table at the Capture Control Server for the record for the source table and check the value of the RECAPTURE column.
Before triggers on CD tables that tell Capture not to insert. It is possible to create a before trigger on a CD table that evaluates the values of an insert and returns control to Capture without doing an insert if certain conditions are met.
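As a sketch of what such a trigger could look like, the following suppresses CD inserts for delete records by raising a SQLSTATE from a before trigger; the CD table name, trigger name, and SQLSTATE value here are illustrative assumptions, so verify the exact convention Capture tolerates in the Replication Guide and Reference before relying on this technique:

```sql
CREATE TRIGGER FILTERDEL
  NO CASCADE BEFORE INSERT ON ASN.CD_DEPT
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN (N.IBMSNAP_OPERATION = 'D')
    SIGNAL SQLSTATE '99999' ('CD DELETE ROW SUPPRESSED')
```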
For instance, if you don't want to replicate deletes, the before trigger on a CD table could detect the IBMSNAP_OPERATION of the insert into the CD table and return control to Capture if IBMSNAP_OPERATION='D'.
There are also conditions where Capture will insert more than one record into the CD tables for each update of the source tables:
CHG_UPD_TO_DEL_INS=Y in a registration. Don't use this feature unless you really need Apply to delete a record at one target table, or in one partition of the target table, and insert a record in another target table, or in another partition of the target table, as a result of the update. This feature will cause Capture to insert two records into the CD table for every update of the source table, and Apply to perform both
a delete and an insert at the target table for every update of the source table. If the source application updated a primary key/unique index column of the source table, and you want to effect the same update at the target table, register the before image of the source table's primary key/unique index columns. Then, on Subscription Members from this source table, specify TARGET_KEY_CHG. With this combination, Capture will do only one insert into the CD table for each source table update, and Apply will do only one update at the target table for each update that was performed on the source table.
To find out if source table updates are captured as a delete record and an insert record in the CD table: In the Replication Center, open the Registered Tables Properties window for a registered table as described above. At the bottom of the Definition tab, see if Capture updates as pairs of deletes and inserts is checked. The default is for this not to be checked. Or, query the capschema.IBMSNAP_REGISTER table at the Capture Control Server for the record for the source table and check the value of the CHG_UPD_TO_DEL_INS column.
columns of the CD table should be included in the unique index of the CD table. Definitely do not specify CHGONLY in the registration of the source table. The technique assumes that Capture will insert a record into the CD table for every update of the source table, but the CD table only has the primary key/unique columns which will probably not be changed by the update of the source table.
is started (STRDPRCAP) by specifying parameter FRCFRQ with a value when Capture is started. Both COMMIT_INTERVAL and FRCFRQ are specified in seconds. The range for FRCFRQ is 30-600 seconds.
Note: The locksize on CD tables and the IBMSNAP_UOW table should not be at table-level. If the locksize of the CD and UOW tables is at table-level, then Capture's pruning thread, which is deleting records from the CD and UOW tables, could slow down Capture's worker thread, which is inserting changes into these tables.
On z/OS and OS/390, the CD tables and IBMSNAP_UOW locksize should be set to PAGE or ANY. It should not be at row-level. On Linux, UNIX, and Windows, the CD tables and IBMSNAP_UOW locksize can be at row-level, which is the default LOCKSIZE when a table is created.
If you want, you can still defer the pruning workload by starting Capture with NO_PRUNE so that it does not automatically prune. Then asnccmd...prune can be used to make Capture prune at a specific time. Since the CD table now contains the IBMSNAP_COMMITSEQ values for each record, Capture can prune CD tables without joining them with the IBMSNAP_UOW table. Capture performs intermediate commits when deleting from the CD tables and from the UOW table. Fewer rows will be locked in the CD tables at once, and shorter strings of deletes will be logged per transaction when Capture is pruning.
The time when Capture will prune is controlled by two of the Capture operational parameters, which can be entered when Capture is started, changed with asnccmd...chgparms, or specified in the capschema.IBMSNAP_CAPPARMS table:
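With automatic pruning deferred, an on-demand prune could then be scheduled with a command like this (server and schema names are examples):

```shell
asnccmd capture_server=SAMPLE capture_schema=ASN prune
```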
AUTOPRUNE
If N, Capture only prunes when it receives the prune command via asnccmd...prune.
If Y, Capture prunes at the Prune_Interval.
PRUNE_INTERVAL
Interval, in seconds, between the times when Capture is to prune, if AUTOPRUNE=Y. The default setting is 300 seconds (5 minutes).
Capture's performance when pruning will largely be governed by the characteristics of the CD tables and the IBMSNAP_UOW table. Since their characteristics also affect Capture's insert throughput and Apply's performance when fetching changes to be replicated, the characteristics of the CD tables and the UOW table are discussed in a separate topic.
Capture pruning activity is recorded in records Capture writes to the capschema.IBMSNAP_CAPTRACE table. In the CAPTRACE records for pruning, the first 8 characters of the DESCRIPTION value will be ASN0105I. Farther on in the DESCRIPTION value will be the number of rows pruned.
The number of rows pruned each time Capture has pruned can also be seen via the Replication Center. In the left window, expand Replication Center -> Operations -> Capture Control Servers, highlight the Capture control server in the right window, right-click, and select Show Capture Throughput Analysis. The Capture Throughput Analysis window will open. In the upper part of the window, select the Capture Schema for the Capture you want throughput numbers for, and select Number of Rows Pruned from CD tables from the Information to be displayed field. Fill in the From and To times in the middle of the window, or take the defaults, and then press Retrieve from the buttons at the bottom of the display. In the display window, one record will be displayed for each time that Capture pruned, giving the date and time that Capture pruned and the number of rows pruned from all CD tables. If, when you press Retrieve, you get the error message SQL0100W No Records found...,
it is possibly because there are no records in the capschema.IBMSNAP_CAPTRACE table starting with ASN0105I, which suggests that Capture has not pruned recently.
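You can check for recent pruning records directly with a query like this one (Capture schema ASN assumed):

```sql
SELECT TRACE_TIME, DESCRIPTION
  FROM ASN.IBMSNAP_CAPTRACE
 WHERE DESCRIPTION LIKE 'ASN0105I%'
 ORDER BY TRACE_TIME DESC
```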
be ROW. Therefore, Capture could be making a lot of individual lock requests. We recommend that more resources be allocated in the DB2 system for locks, in order to avoid having Capture's row locks escalated to table locks, which would drastically slow down either pruning or inserting into the CD tables. In DB2 on Linux, UNIX, and Windows, specify a large LOCKLIST value in the Database Configuration.
This will give you a picture of Capture's latency over time. Or, the Replication Center can provide the same information. In the left-window tree, select Replication Center -> Operations -> Capture Control Server. In the right window, highlight the capture control server you are interested in, right-click, and select Show Capture Latency. The Capture Latency window will open. Pick a Capture Schema at the top of the window, a from- and to-time in the middle of the window and/or an interval from the Time Intervals pulldown, and press Retrieve. Figure 10-6 is an example of the Replication Center's Capture Latency window for a Capture whose Commit_Interval is 30 seconds (the default), showing the latency statistics by hour.
As you can see, the Average Latency for this Capture is less than the Commit_Interval, so this Capture is keeping up with the changes made by the source applications.
2. Apply fetching changes to be applied at target tables
3. Capture pruning
The characteristics of the UOW table will not be important for Apply's replication of changes if all your target tables are User Copies or Replicas and:
You don't source any table columns from IBMSNAP_UOW table columns
You don't use any IBMSNAP_UOW columns in member predicates (WHERE clauses)
If this is the case, then Apply will not touch the UOW table at all. However, if any of your target tables are CCDs or Point-in-Time tables, then Apply will need to involve the UOW table, since the values for some of the replication-information columns in CCD and Point-in-Time tables come only from the UOW table.
The CD tables and the IBMSNAP_UOW table should have:
Only a single index each, so that Capture's insert and pruning (deleting) activity will only cause DB2 to write/remove one record in the CD table's tablespace and one record in the CD table's index's tablespace. On z/OS, the indexes should be Type 2. The Replication Center will create the IBMSNAP_UOW table and the CD tables with only one index. If for some reason you want more columns to be indexed (such as for Subscription Member predicates used by Apply), add them to the index that was defined by the Replication Center; don't create a second index.
The UOW table is created when the Capture Control Tables are created. A CD table is created when a source table is registered. The Create Table DDL that the Replication Center generates to create the UOW table and CD tables will include the Create Unique Index statement for the UOW/CD table indexes; the Create Unique Index statement will specify the columns that are used by Capture to find the records to prune and by Apply to find and order the records to fetch. If Subscription Set Members involving the source table for a CD table have predicates that reference columns of the source table, consider adding these columns to the index of the CD table.
When Apply fetches changes from the CD table, DB2 at the source server will still be able to use the index to find the records for Apply. If the CD table and its unique index already exist, consider dropping the unique index and recreating it with the IBMSNAP_INTENTSEQ and IBMSNAP_COMMITSEQ columns plus the column referenced in the member predicate.
Place the CD table's index in a separate tablespace from the CD table, so that Capture's insert and pruning activity won't have to go back and forth between two places in the same tablespace file as it, say, inserts a record in the table
and inserts a related record in the index, then inserts the next record in the table and its related record in the index.
Set the locksize of the CD and UOW tables to PAGE on DB2 for z/OS and to ROW on DB2 for Linux, UNIX, and Windows, so that Capture's worker thread can insert records into the CD and UOW tables at the same time that Capture's pruning thread is deleting records. On z/OS, locksize ANY is not recommended for CD/UOW tables, as this could cause escalation to table-level. Also, if necessary, allocate more resources for DB2 to track locks so that row/page locks won't be escalated to table or tablespace level. With DB2 Replication Version 7 and before, the recommendation was for table-level locking, since Capture did not insert and prune at the same time.
If Apply will be replicating at short intervals from the CD table, create a bufferpool for all the CD tables and for the IBMSNAP_UOW table, so that when Apply asks the source DB2 for records from the CD table (and, at times, from its join with the UOW table), the source DB2 can fulfill the requests from records still in bufferpool pages from when Capture just inserted the records into the CD tables and UOW table.
Run RUNSTATS only once for each CD table and for the IBMSNAP_UOW table, when they are full of records. Both Apply's fetches and Capture's pruning deletes have WHERE clauses for the columns that are indexed in the CD and UOW tables, and both are dynamically prepared. If the statistics indicate that the CD tables and IBMSNAP_UOW table are large, DB2 will use the indexes on the CD tables and UOW table to perform the fetch for Apply and the pruning deletes for Capture. If the statistics indicate these tables are small, such as when RUNSTATS is done right after Capture pruning, then DB2 will use tablescans to process Apply's fetch and Capture's pruning.
Attention: If RUNSTATS is performed regularly for all tables in a DB2 on z/OS, Linux, UNIX, or Windows, it is recommended that the CD and IBMSNAP_UOW tables be excluded from the tables upon which this is done. It is important for replication performance that the source server have relatively accurate statistics for the CD and UOW tables; but if the CD and UOW tables are included in a list of tables upon which regular RUNSTATS is performed, there is the distinct possibility that these tables will have few records when RUNSTATS is performed, while later in the day, after much activity in the DB2 system, these tables could be large. Since the statistics will indicate they are small, DB2 will use table scans, instead of index access, when Apply replicates from these tables. Even if there are only a few records for Apply to fetch at each iteration, DB2 will have to scan the whole CD (and UOW) tablespaces each time. There are many reports of DPROP having high CPU usage and many page fetches; most frequently, these occurrences can be attributed to the statistics for the CD and UOW tables indicating that these tables are small when in fact they are large.
On z/OS and OS/390, if there are many updates being replicated, the REORG utility can be run for the CD and IBMSNAP_UOW tables weekly; the PREFORMAT option should be specified.
On iSeries, CD tables and the IBMSNAP_UOW table should be re-organized periodically to reclaim unused space, such as space no longer needed for records that have been pruned by Capture. The CL command to do this is RGZPFM. RGZPFM can be performed by Capture when it stops if RGZCTLBL *YES is specified with the End-Capture command (ENDDPRCAP). If RGZPFM is done manually (that is, not by Capture), then you will need to find out the library and short file name for the CD table to specify with the RGZPFM command. This information can be obtained from the System Catalog View QSYS2.SYSTABLES, using the CD_OWNER and CD_TABLE values for the CD table from the IBMSNAP_REGISTER table. The query can be entered using STRSQL. The query would look like this:
SELECT SYSTEM_TABLE_SCHEMA, SYSTEM_TABLE_NAME FROM QSYS2.SYSTABLES WHERE TABLE_OWNER='cd-owner' AND TABLE_NAME='cd-table'
The SYSTEM_TABLE_SCHEMA and SYSTEM_TABLE_NAME values in QSYS2.SYSTABLES are the library and short file name for the CD table.
for Apply it is a pull scenario, the federated server will have to push each change across the network to Informix.
replication interval, the last update to the source table will be the last update in the spill file, and the last update Apply will perform on the target. The unique index that was created by the Replication Center when the source table was registered would have included both the COMMITSEQ and INTENTSEQ columns.
If the target table is a CCD or a Point-in-Time type, or a user copy with columns sourced from the IBMSNAP_UOW table, or if there are member predicates referencing UOW columns, then the SELECT statement sent to the source server by Apply will involve a join between the CD and the UOW tables on the IBMSNAP_COMMITSEQ column in both of these tables. When the Replication Center created the Capture Control tables, the generated SQL included a Unique Index for the UOW table that included the IBMSNAP_COMMITSEQ column. Statistics that are a good estimate for the capschema.IBMSNAP_UOW table in the DB2 source server catalog will help the DB2 optimizer make a good decision on whether or not to use the index on the UOW table. If a particular Subscription Member includes WHERE clauses for columns of the source table or columns of the UOW table, then adding these columns to the index of the CD table and UOW table, respectively, could also affect the performance of Apply's SELECT statement at the source server.
Apply's SELECT from the CD tables and from the UOW table should be executed with isolation Uncommitted Read (UR, or Dirty Read) so that it will not be affected by any locks on the CD table and UOW table held by Capture, which is inserting records into these tables. Apply's use of UR on this SELECT won't compromise the integrity of the replicated data, since Capture won't begin to insert any records into the CD and UOW tables for a transaction involving the respective registered source tables until Capture has seen a commit for that transaction in the source server's DB2 log.
The use of UR on the SELECT from the CD and UOW tables was determined when Apply's packages were bound at the source server. Apply's SELECT statement from the CD tables and UOW table should also be executed with blocking, so that multiple records can be transported from the source server to Apply's spill files in a single block and in a single network packet. Apply on Linux, UNIX, and Windows autobinds at the Capture Control Server if it can't find its packages the first time it connects to the source server. Apply's autobind for these packages specifies isolation and blocking as appropriate. Most of Apply's packages are bound with ISOLATION UR; one is bound with ISOLATION CS. All are bound with BLOCKING ALL. Apply on z/OS and iSeries does not autobind its packages.
For z/OS, sample JCL for binding Apply packages and plans is shipped with IBM DB2 DataPropagator for z/OS Version 8. The sample JCL for binding each package specifies correctly whether the package should be bound with ISOLATION(UR) or (CS). The packages used for fetching data from the CD and UOW tables specify ISOLATION UR; the default blocking behavior when fetching data with ISOLATION UR is to fetch multiple records in a block.
When replicating from a non-DB2 server, such as from Informix, Apply will send the federated server SELECT statements referencing the nicknames for the CCD tables. The federated server will in turn send Informix a select statement for the CCD tables. The federated server will request that the results of this select be blocked. Since the results are blocked across the network from Informix to the federated server and from the federated server to Apply, there will in most cases be little advantage in the federated server being close to the Informix server or close to the Apply system. If the target server is DB2 ESE Version 8 on Linux, UNIX, or Windows, there will be a slight advantage in letting the target system also be the Capture Control Server containing the nicknames for the source tables in Informix; with this configuration, the results from Informix are only blocked once, since they go straight from Informix to the system with Apply. If the federated server and the target server are different systems - which is unfortunately required when the target server is DB2 for z/OS or iSeries - then the same result will be blocked twice: first from Informix to the federated server (DB2 ESE or DB2 Connect on Linux, UNIX, or Windows) and then from the federated server to the target server.
Note: Also, for DB2 for z/OS and OS/390 to keep Apply's dynamically prepared SQL statements in memory for use the next time Apply connects to a DB2 for z/OS or OS/390 subsystem or data sharing group, Apply's packages need to be bound with the BIND PACKAGE option KEEPDYNAMIC(YES) specified. The sample BIND PACKAGE JCL provided with Apply on z/OS should include this option. See the DB2 for z/OS and OS/390 Command Reference for a description of bind options.
For DB2 on Linux, UNIX, and Windows, the distributed database block size is determined by the Database Manager Configuration parameter RQRIOBLK. The default value is already large (32,767 bytes), but it can be made even larger (up to 65,535 bytes). This block size is in effect when replicating between DB2 on Linux, UNIX, and Windows, and also when replicating from DB2 for z/OS or iSeries to DB2 on Linux, UNIX, and Windows.
When replicating from DB2 for z/OS to DB2 for z/OS, the block size is determined by the Extra Blocks Req and Extra Blocks Srv fields on the DB2 for z/OS and OS/390 Distributed Data Facility installation panel (DSNTIP5). The default setting is the maximum allowed. See the DB2 for z/OS and OS/390 Installation Guide.
We won't cover here how to set the network packet size. That should be investigated in TCP/IP documentation and discussed with your network administrators.
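On Linux, UNIX, and Windows, RQRIOBLK might be raised like this (the value shown is the maximum, not necessarily the right choice for every network):

```shell
db2 update dbm cfg using RQRIOBLK 65535
```

The new value is used for connections made after the database manager is restarted.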
with the rows from the CD tables and then opening and reading from the spill file to apply the changes to the target tables. Apply will create its spill files in the directory indicated by the Apply_Path start variable. There should be adequate space in that directory for the spill files. If Apply is replicating at short intervals, so that the volume of changes fetched is not great, then not as much disk space may be needed for spill files. But if Apply is replicating at long intervals, during which there could be lots of changes to the source tables, then the number of records fetched by Apply could be large, and more disk space is needed for spill files. The row length of a CD table at the source server, times the number of changes replicated each interval, provides an estimate of the size of the spill file; for example, 100,000 changes per cycle from a CD table with 200-byte rows calls for roughly 20 MB of spill space. If you are doing this calculation, don't forget the IBMSNAP_COMMITSEQ, IBMSNAP_INTENTSEQ, and IBMSNAP_OPERATION columns; their combined length is 21 bytes.
The Apply_Path should be selected so that there is minimal contention between Apply's I/O to the spill files and other activity on the system with Apply. For instance, if Apply is on the same system as the target tables - which is the optimal pull replication system design for performance - then the Apply_Path should not be on the same disk as the DB2 log or the storage groups / containers of the tablespaces for the target tables or their indexes.
updated. To make sure the column with the change is also changed at the target, Apply just sets all the non-key/non-unique-index columns of the target with the after-image values from the update record in the CD table.
Apply's inserts/updates/deletes to the target tables typically take up the largest slice of time in a replication cycle. And the amount of time taken to insert/update/delete the target tables can be expected to be even longer if Apply is not on the same system as the target tables, so that each insert/update/delete, and its result back to Apply, has to traverse a network. When replicating to a non-DB2 server, such as Informix, remember to consider whether Apply and the nicknames for the target tables are on the same system as the Informix server that contains the target tables themselves. If not, then even though Apply is configured for the faster pull system design, each insert/update/delete that Apply does still has to cross the network from the federated server where Apply is running to the Informix server.
10.4.7 Target table and DB2 log characteristics at the target server
The target server's performance of Apply's inserts, updates, and deletes to the target tables is affected by the usual factors for insert/update/delete performance. Some of these factors:
- Fewer indexes on the target tables make for faster inserts and deletes, since fewer underlying files need to be updated. If users' queries of the target tables have WHERE clauses referencing columns not in the primary key or unique index, consider adding these columns to the unique index rather than creating a second index on the target table.
- The tablespaces for the target tables and their indexes, and the characteristics of those tablespaces, should be specified to optimize both Apply's insert, update, and delete operations and the users' queries. The target table and its index should not be in the same tablespace, so that when Apply inserts or deletes records in the target table, the target server won't have to perform I/O in two places on the same disk path to execute the insert or delete. Regarding locksize for target tables, see the discussion below.
- RUNSTATS should be run for the target tables so that the optimizer at the target server will pick the fastest access plan for executing Apply's inserts, updates, and deletes, as well as the users' queries.
- The DB2 log files, the tablespace containers for the table, the tablespace containers for the index, and the Apply_Path directory that contains the spill files should ideally be on separate disk drives to minimize I/O contention as Apply reads from the spill file and DB2 writes to the table tablespace, index tablespace, and log file.
DB2 on Windows: default = 25 4K pages; range = 4 - 60,000 pages. Specify a lower, non-zero/non-null value for COMMIT_COUNT for the Apply Subscription Set. A non-zero COMMIT_COUNT causes Apply to issue intermediate commits, which unlock all the target-table rows that were locked. Apply will start locking another set of target-table rows as it processes the next set of spill-file records, but it releases these locks as it performs the next intermediate commit. The intermediate commits caused by specifying a low, non-zero COMMIT_COUNT could slow Apply's overall throughput, and this should be balanced against the benefit of having Apply lock fewer records in the target tables. More information on COMMIT_COUNT follows in the next section.
Sleep_Minutes
Sleep_Minutes is the interval, in minutes, between replication iterations. If Sleep_Minutes=0, then Apply's frequency for replicating the set is determined by the value of Apply's operations parameter Delay, which is discussed below. SLEEP_MINUTES is a column of the IBMSNAP_SUBS_SET table. In the Replication Center's Subscription Set Properties window, Sleep_Minutes is indicated on the Schedule tab, in the section on frequency of replication, but it is expressed in Minutes + Hours + Days + Weeks rather than just in minutes. If Continuously is checked, then Sleep_Minutes = 0.
Note: Keep in mind that Apply inserts a record into ASN.IBMSNAP_APPLYTRAIL every time it attempts to replicate a Subscription Set, whether the replication was successful or not. If Sleep_Minutes for any Subscription Set is a low value, such as 1 minute, IBMSNAP_APPLYTRAIL could grow quickly.
If the target server is also the Apply Control Server, then an After SQL statement can be added to one or more of the Sets defined at the Apply Control Server that will cause Apply to delete records from the ASN.IBMSNAP_APPLYTRAIL table and keep it from growing. Below is an example of an SQL statement that can be added to a Subscription Set for this purpose.
DELETE FROM ASN.IBMSNAP_APPLYTRAIL WHERE LASTRUN < (CURRENT TIMESTAMP - 1 HOUR) AND STATUS = 0
If this SQL statement is added to a Subscription Set via the Statements tab of the Replication Center's Create Subscription Set window, select the At the target server after the subscription set is processed button and enter the statement into the SQL Statement field.
Max_Synch_Minutes
Max_Synch_Minutes, or the blocking factor, tells Apply how to determine which changes to fetch from the source server if there are many changes available from a long period of time. If Max_Synch_Minutes is null (no value), then Apply fetches all available changes that have not been previously replicated. If Max_Synch_Minutes has a value, it is in minutes, and it tells Apply to check the timestamps associated with the transactions of the changes in the CD table and to fetch only the changes within a certain range. If that is not all the changes eligible to be replicated, then Apply, after it applies a set of changes to the target table, returns to the source server and fetches another block of changes. With prior versions of DB2 Replication, the blocking factor could be used for two purposes:
- To reduce the amount of space required for Apply spill files.
- To get Apply, in effect, to perform intermediate commits at the target server.
With DB2 Replication Version 8, the second objective can be achieved with the Commit_Count parameter on a subscription set. Overall, replicating the changes using the blocking factor can decrease Apply's overall throughput in two ways:
- Apply will have to do more connections to the source and target servers to replicate an equivalent number of changes.
- Apply, while connected to the source server, has an additional calculation to make to determine the range of changes to be fetched.
We recommend that Max_Synch_Minutes (blocking factor) for a Subscription Set be 0 or null (no value) unless the blocking is needed. The blocking factor for a Subscription Set can be checked two ways:
- In the Replication Center left window, select Replication Center -> Replication Definitions -> Apply Control Servers -> apply control server name -> Subscriptions. In the right window, highlight the set name, right-click, and select Properties. On the Set Information tab, under Set processing properties, check the Data blocking factor.
- Query the ASN.IBMSNAP_SUBS_SET table at the Apply Control Server with the Apply_Qual and Set_Name values for the set and check MAX_SYNCH_MINUTES.
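As a simplified illustration of how the blocking factor divides a backlog into fetch/apply blocks (this is a rough model, not Apply's actual algorithm, and the backlog figure is hypothetical):

```python
import math

def fetch_blocks(backlog_minutes, max_synch_minutes):
    """Approximate number of fetch/apply blocks Apply would use for the
    current backlog of unreplicated changes. A null (None) or 0 blocking
    factor means a single block: fetch all eligible changes at once."""
    if not max_synch_minutes:
        return 1
    return math.ceil(backlog_minutes / max_synch_minutes)

print(fetch_blocks(120, 10))    # 12 blocks of up to 10 minutes of changes each
print(fetch_blocks(120, None))  # 1 block: all eligible changes in one pass
```

Each extra block means another round trip to the source and target servers, which is why a small Max_Synch_Minutes trades throughput for smaller spill files.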
Commit_Count
If the target tables of a Subscription Set have Referential Integrity between them, or you would just like Apply to perform intermediate commits at the target server so that fewer rows will be locked by Apply at any point in time, you can specify that Apply open all the spill files together and apply the changes to all the targets in the same order in which the changes were made to all the source tables of the set. Apply does this if Commit_Count for a Subscription Set has a value. If the value is zero (0), Apply applies all the changes from all the spill files in the order they were made on all the source tables and does one commit at the end. If Commit_Count > 0, then Apply performs intermediate commits; if Commit_Count is 2, then Apply issues a commit after it has applied the changes of every 2 transactions, as indicated by the different IBMSNAP_COMMITSEQ values of the records in the spill files. If Commit_Count has no value (that is, is null), then Apply opens each spill file in succession, applying all the changes from one spill file to its target before opening the next spill file and applying its changes. In any case, if Commit_Count has any value, even if that value is 0, Apply has to do some additional processing with the spill files to determine the order in which to apply the changes from all the spill files. Also, if Commit_Count > 0, then Apply is doing intermediate commits at the target server, which also impedes Apply's overall throughput in applying changes to the target tables. The additional processing by Apply to do the Commit_Count evaluations is probably not significant; however, intermediate commits could have a significant effect on Apply's throughput. If you are trying to squeeze every unnecessary bit of processing out of Apply in order to decrease the latency of the data in the target tables, we recommend that you not have any RI between the
target tables and that you leave Number of transactions applied at the target table before Apply commits (that is, Commit_Count) with no value. You can check the Commit_Count value for a set either in the Replication Center or by querying the Apply Control Tables:
- In the Replication Center, open the Subscription Set Properties window for a set as described above for checking Max_Synch_Minutes. On the Set Information tab, under Set processing properties, check Number of transactions applied to target table before Apply commits.
- Query the ASN.IBMSNAP_SUBS_SET table at the Apply Control Server with the Apply_Qual and Set_Name values for the set and check COMMIT_COUNT.
DELAY
The Delay parameter has no bearing on Subscription Sets that are interval based with Sleep_Minutes (replication interval) of 1 minute or greater, nor on Subscription Sets that are event based. The Delay parameter is relevant for Subscription Sets that are continuous; that is, their Sleep_Minutes=0 in the record for the set in the ASN.IBMSNAP_SUBS_SET table at the Apply Control Server. Delay determines the number of seconds that Apply is to use as the interval for the set that it is replicating. The default value is 6 seconds; the range is 0 seconds to 6 seconds. The value can be set when Apply is started, or it can be set by inserting/updating a record in the ASN.IBMSNAP_APPPARMS table at the Apply Control Server for the Apply_Qual of the set. With Delay=0, Apply still goes inactive for a millisecond between cycles, which is just enough to avoid thrashing when there is no data to replicate.
Note: Keep in mind that Apply inserts a record into ASN.IBMSNAP_APPLYTRAIL every time it attempts to replicate a set whether the replication is successful or not. If the target server is also the Apply Control Server, a SQL statement can be added to one or more of the Subscription Sets processed by an Apply that will delete records from the APPLYTRAIL table and keep it from growing. An example of such a SQL statement is provided in the discussion of Sleep_Minutes above.
OPT4ONE
Apply normally checks its Control Tables after it finishes a replication and also, if it goes inactive even for a sub-second, when it returns from inactivity. Apply is checking to see whether there are changes to its control information and whether any Subscription Sets have become eligible, based on time interval or event, for immediate processing. This select from the Apply Control Tables does not take very long, but if you're interested in squeezing all possible unnecessary steps out of Apply's cycle, you can have an Apply process read the Control Tables for the information about the only set it processes just once, when Apply starts, and not take time to re-read this information again. To do this, when Apply starts, include the parameter OPT4ONE=YES, or specify OPT4ONE='Y' in the record in ASN.IBMSNAP_APPPARMS for this Apply Qualifier. For instance:
UPDATE ASN.IBMSNAP_APPPARMS SET OPT4ONE='Y' WHERE APPLY_QUAL='MIXQUAL'
Or, if there is not yet a record in APPPARMS for this Apply's Qualifier, you can insert a record with:
INSERT INTO ASN.IBMSNAP_APPPARMS (APPLY_QUAL,OPT4ONE) VALUES ('MIXQUAL','Y')
SELECT ENDTIME, (ENDTIME - LASTRUN) + (SOURCE_CONN - SYNCHTIME) AS APPLY_LATENCY FROM ASN.IBMSNAP_APPLYTRAIL WHERE SET_NAME = 'set_name' AND APPLY_QUAL = 'apply_qualifier'
ENDTIME is the Apply Control Server timestamp when Apply finished a subscription cycle. LASTRUN is the Apply Control Server timestamp when Apply started a subscription cycle, so ENDTIME - LASTRUN is the time to execute a subscription cycle. SOURCE_CONN is a Capture Control Server timestamp taken when Apply connects to the Capture Control Server at the start of a subscription cycle. SYNCHTIME is the SYNCHTIME from the capschema.IBMSNAP_REGISTER record where GLOBAL_RECORD='Y', so SOURCE_CONN - SYNCHTIME indicates Capture's latency. The whole calculation can therefore be summarized as (time it took Apply to complete the replication) + (Capture's latency at the time the replication cycle was executed). The same information is available through the Replication Center. In the left-window tree, select Replication Center -> Operations -> Apply Control Servers -> Apply_Control_Server_Name. In the right window, highlight the Apply Qualifier for which you want the latency, right-click, and select End-to-End-Latency from the options. The End-to-End Latency window opens. Select the From-Time and To-Time and/or select a Time Interval and press Refresh. A report of Average, Minimum, and Maximum Latency will appear for the Subscription Sets included with that Apply Qualifier. See Figure 10-7 for an example of the End-to-End Latency window.
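The latency formula can be checked by hand. Below is a sketch that uses the cycle start and end times from the trace example later in this chapter, plus a made-up SYNCHTIME (the real value would come from the register table):

```python
from datetime import datetime

def apply_latency(endtime, lastrun, source_conn, synchtime):
    """End-to-end latency per the APPLYTRAIL formula above:
    (ENDTIME - LASTRUN) + (SOURCE_CONN - SYNCHTIME), that is,
    (time to run the cycle) + (Capture's latency when the cycle ran)."""
    return (endtime - lastrun) + (source_conn - synchtime)

fmt = "%Y/%m/%d %H:%M:%S"
end   = datetime.strptime("2002/09/04 15:02:36", fmt)  # ENDTIME
start = datetime.strptime("2002/09/04 15:02:05", fmt)  # LASTRUN
conn  = datetime.strptime("2002/09/04 15:02:05", fmt)  # SOURCE_CONN
synch = datetime.strptime("2002/09/04 15:01:50", fmt)  # SYNCHTIME (hypothetical)

lat = apply_latency(end, start, conn, synch)
print(lat.total_seconds())  # 46.0: a 31-second cycle plus 15 seconds of Capture latency
```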
Note: When replicating from a non-DB2 source server, the Apply End-to-End Latency measurement is not meaningful. The ibmsnap_register synchtime value that Apply uses to calculate Capture's latency is actually set with a current timestamp right before Apply reads the value. Before reading the synchpoint and synchtime in the ibmsnap_register table, which indicate to Apply whether there are new changes and it should check the CCD tables, Apply updates the ibmsnap_reg_synch table; that table has a trigger which updates the synchpoint values in the register table and sets the synchtime values to a current timestamp. Therefore, when replicating from a non-DB2 source server, the End-to-End Latency really only measures how long it takes Apply to execute each cycle.
- If in the Time Intervals field you select No Time Interval, the display will be one record for each replication cycle that was executed, essentially showing the values from each record in the APPLYTRAIL table.
- If in the Time Intervals field you select Seconds, Minutes, Hours, Days, and so on, the display will be aggregates of the APPLYTRAIL statistics, with one record for each interval selected (second, minute, etc.).
- If in the Rate field you select No Time Interval, the data displayed will be the total records inserted, updated, deleted, and reworked per interval.
- If in the Rate field you select Rows/Second, the display will be the average rows per second inserted, updated, deleted, and reworked.
We will show an example of using asntrc to get the PERFORMANCE TRACE RECORDs, describe the fields in the Performance Trace Records, and then show how we calculate the time spent by Apply fetching changes for each member of a set, and the time spent by Apply applying the records from each spill file to the corresponding target table. In our example, Capture is running and we are making updates to two source tables: MICKS.SRCTAB and MICKS.SRCTAB2. Apply could be running when we turn asntrc on, but in our case we begin with Apply stopped. We will turn asntrc on, and then start Apply with COPYONCE=Y so that Apply will do exactly one replication cycle and stop. The trace won't be too large, and we should find one set of PERFORMANCE TRACE RECORDS to analyze when we format the trace.
In our example, we will also start asntrc specifying that it write all trace records to a file; this is not the default behavior and may actually slow Apply down a little. We could instead not specify an output file when we start asntrc, so trace records would be held only in memory; then we would need to explicitly enter an asntrc command, while asntrc is still on, to write the asntrc memory buffers to a file before turning asntrc off. Let's proceed. 1. We start asntrc with this command:
asntrc on -fn MIXQUAL.dmp -db TGT_NT -app -qualifier MIXQUAL
In this example:
- on: asntrc is turned on with this command.
- -fn MIXQUAL.dmp: asntrc will write all records to the MIXQUAL.dmp file as it is tracing.
- -db TGT_NT: we specify the Apply Control Server database.
- -app: we tell asntrc to trace Apply (not Capture).
- -qualifier MIXQUAL: we tell asntrc the Apply Qualifier of the Apply process to be traced.
2. We start Apply, specifying that it perform just one cycle:
asnapply control_server=TGT_NT apply_qual=MIXQUAL apply_path=d:\DB2Repl\MIXQUAL pwdfile=MIXQUAL.aut copyonce=Y
In this example:
- control_server=TGT_NT: our Apply Control Server database.
- apply_qual=MIXQUAL: our Apply Qualifier.
- apply_path=d:\DB2Repl\MIXQUAL: the directory containing the password file and also where we want Apply to create the spill files.
- pwdfile=MIXQUAL.aut: the file containing the userids/passwords that Apply will use to connect to the Apply Control Server, Capture Control (source) Server, and Target Server. The file was created and records added to it using the asnpwd command.
- copyonce=Y: we want Apply to replicate one subscription set cycle and stop.
3. We wait while Apply starts, goes through one cycle, and stops. First we see the message that Apply is started. A little while later, we see a message that Apply is stopped. 4. We stop the asntrc.
asntrc off -db TGT_NT -app
In this example:
- off: asntrc is turned off.
- -db TGT_NT: the database specified for the trace instance we're stopping.
- -app: we're stopping a trace that was for Apply.
5. We format the asntrc dmp output so we can look for the Performance Trace Records:
asntrc v7fmt -fn MIXQUAL.dmp > MIXQUAL.v7fmt
In this example:
- v7fmt: provide a formatted trace similar to the Apply V7 trace. The alternative would have been fmt, to provide a trace in the newer DB2 Replication trace format that looks somewhat like a DB2 for Linux, UNIX, Windows formatted trace.
- -fn MIXQUAL.dmp: the input file to be formatted.
- > MIXQUAL.v7fmt: the output file from the trace-format operation.
6. We use an editor to look at the formatted trace to find the Performance Trace Records. In this example, we are on Windows and use Notepad. The formatted traces are large, even for just one Apply replication cycle. The editor's (i.e., Notepad's) find capability can take us to the Performance Trace Records, which look like Example 10-1.
Example 10-1 Performance Trace Records in formatted asntrc for Apply
===== PERFORMANCE TRACE RECORD ===== S, MIXSET, 1, S, 2002/09/04 15:02:05, 2002/09/04 15:02:05, 2002/09/04 15:02:05, 2002/09/04 15:02:05, 2002/09/04 15:02:11, 2002/09/04 15:02:11, , , 2002/09/04 15:02:36, 2002/09/04 15:02:36 M, MIXSET, 1, 0, 2002/09/04 15:02:05, 2002/09/04 15:02:05, 2002/09/04 15:02:05, 2002/09/04 15:02:08, 2002/09/04 15:02:11, 2002/09/04 15:02:15, 200, 5, 2, 0 M, MIXSET, 1, 1, 2002/09/04 15:02:08, 2002/09/04 15:02:08, 2002/09/04 15:02:08, 2002/09/04 15:02:11, 2002/09/04 15:02:15, 2002/09/04 15:02:20, 50, 150, 2, 0
S: Performance Trace Record for the entire Subscription Set.
M: Performance Trace Record for a specific Member in a Subscription Set.
The fields at the beginning of the S and M records indicate several things.
- 3rd field = 1: the sequence number of this Performance Trace Record. This number becomes relevant if there is more than one group of Performance Trace Records in the trace output. In this case, we are looking at either the first or the only set of Performance Trace Records in the trace output.
- 4th field of S record = S: the WHOS_ON_FIRST value. S means the target is a User Copy, CCD, Point-in-Time, or a Replica. If M, the target is the Master in an Update-Anywhere replication.
- 4th field of M record = 0 or 1: indicates the Member Number within the set. To reconcile the Member Number to the Target Table name, go back to the top of the formatted trace and find the mem_info entries. See Example 10-2:
Example 10-2 Member info in formatted asntrc output for Apply
------------------------mem_info i = 0 ------------------------SOURCE_OWNER = MICKS SOURCE_TABLE = SRCTAB SOURCE_VIEW_QUAL = 0 TARGET_OWNER = MICKS TARGET_TABLE = TGSRCTAB .... ------------------------mem_info i = 1 ------------------------SOURCE_OWNER = MICKS SOURCE_TABLE = SRCTAB2 SOURCE_VIEW_QUAL = 0 TARGET_OWNER = MICKS TARGET_TABLE = TGSRCTAB2
So we see:
Member 0 = source table MICKS.SRCTAB - target table MICKS.TGSRCTAB
Member 1 = source table MICKS.SRCTAB2 - target table MICKS.TGSRCTAB2
The meanings of the timestamp fields in the S records are given in Table 10-1.
information about the Subscription Set from the Apply Control Tables.
4th timestamp: After Connect to Capture Control (Source) Server.
5th timestamp: Before Connect to Target Server. Between the 4th and 5th timestamps, Apply:
- reads the capschema.IBMSNAP_REGISTER table to learn whether there are new changes and to find the names of the CD tables
- fetches changes from the CD tables for all members in the set for which there are new changes. See the M records' 1st, 2nd, 3rd, and 4th timestamps to find out how long the source server took to prepare and execute the SELECT from the CD tables.
Note: this timestamp should follow all the M records' 4th timestamps, which mark the end of fetches from the CD tables at the source server.
6th timestamp: After Connect to Target Server.
7th timestamp: Before Opening Spill Files. Value provided only if the Set's Commit_Count is not null; that is, this is the timestamp when Apply opened all the spill files. If Commit_Count=null, spill files were opened in succession; look at the M records' 5th timestamps.
8th timestamp: After Closing Spill Files. Value provided only if the Set's Commit_Count is not null. Between the 7th and 8th timestamps, Apply applies the changes from all the spill files to all the target tables.
9th timestamp: Before Connect to Source Server to update Prune_Set.
10th timestamp: After Connect to Source Server to update Prune_Set. Between the 9th and 10th timestamps, Apply updates the synchpoint and synchtime for the Subscription Set in the capschema.IBMSNAP_PRUNE_SET table.
The meaning of the timestamp and number fields in the M records is shown in Table 10-2.
Table 10-2 Description of Apply Trace M Performance Trace Records
1st timestamp: At Source Server, before preparing the SELECT from the CD table. Note: this timestamp should follow the S record's 4th timestamp.
2nd timestamp: At Source Server, after preparing the SELECT from the CD table. Between the 1st and 2nd timestamps, the source server optimizes and compiles Apply's dynamic SQL SELECT statement for the CD table.
3rd timestamp: At Source Server, open cursor before fetch from the CD table.
4th timestamp: At Source Server, after last fetch from the CD table. Between the 3rd and 4th timestamps, the source server provides the results of the SELECT statement for the CD table.
5th timestamp: At Apply server, open spill file. Value provided only if the Set's Commit_Count is null. If the Set's Commit_Count is not null, all spill files are opened together; see the S record's 7th timestamp.
6th timestamp: At Apply server, close spill file. Between the 5th and 6th timestamps, Apply updates the target table with the changes from the spill file. Value provided only if the Set's Commit_Count is null. If the Set's Commit_Count is not null, all spill files are closed together; see the S record's 8th timestamp.
1st numeric field: Number of rows inserted into the target table.
2nd numeric field: Number of rows updated in the target table.
3rd numeric field: Number of rows deleted from the target table.
4th numeric field: Number of rows reworked into an insert or update of the target table.
Before proceeding, we'll point out that in our example the Subscription Set has Commit_Count = null. Apply is opening each spill file in succession, not together. Apply opened the spill file for Member 0, applied all the changes to TGSRCTAB, and closed that spill file. Then Apply opened the spill file for Member 1, applied all the changes to TGSRCTAB2, and then closed that spill file. So the S record in our example has no values for the 7th and 8th timestamps, and the M records have values for their 5th and 6th timestamps, indicating when Apply opened and closed each spill file. If the Set's Commit_Count had a value, even if it was zero, the S record would have 7th and 8th timestamps indicating when the two spill files were opened and closed together, and the M records would not have 5th and 6th timestamps. In Example 10-3 we repeat the Performance Trace Records we're using in the calculations below of Apply's time to fetch the changes at the source server and the time to apply the changes to the target tables.
Example 10-3 Performance Trace Records used in our calculations
===== PERFORMANCE TRACE RECORD ===== S, MIXSET, 1, S, 2002/09/04 15:02:05, 2002/09/04 15:02:05, 2002/09/04 15:02:05, 2002/09/04 15:02:05, 2002/09/04 15:02:11, 2002/09/04 15:02:11, , , 2002/09/04 15:02:36, 2002/09/04 15:02:36 M, MIXSET, 1, 0, 2002/09/04 15:02:05, 2002/09/04 15:02:05, 2002/09/04 15:02:05, 2002/09/04 15:02:08, 2002/09/04 15:02:11, 2002/09/04 15:02:15, 200, 5, 2, 0 M, MIXSET, 1, 1, 2002/09/04 15:02:08, 2002/09/04 15:02:08, 2002/09/04 15:02:08, 2002/09/04 15:02:11, 2002/09/04 15:02:15, 2002/09/04 15:02:20, 50, 150, 2, 0
Now here are our calculations: 1. Total Elapsed Time for the Subscription Cycle:
S 10th timestamp - S 1st timestamp = 15:02:36 - 15:02:05 = 31 seconds
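For readers who prefer to script this arithmetic, here is a small sketch (our own helper, not part of any DB2 tool) that parses the sample records from Example 10-3 and repeats the calculations. Field positions follow the record layouts described above:

```python
from datetime import datetime

FMT = "%Y/%m/%d %H:%M:%S"

def parse(record):
    """Split a trace record into stripped comma-separated fields."""
    return [f.strip() for f in record.split(",")]

def cycle_seconds(s_record):
    """S record: 10th timestamp minus 1st timestamp = total cycle time.
    Fields 0-3 are record type, set name, record number, WHOS_ON_FIRST;
    the timestamps follow (the 7th and 8th are empty when Commit_Count is null)."""
    f = parse(s_record)
    return (datetime.strptime(f[13], FMT)      # 10th timestamp
            - datetime.strptime(f[4], FMT)).total_seconds()  # 1st timestamp

def fetch_seconds(m_record):
    """M record: 4th minus 3rd timestamp = time the source server took to
    deliver the results of the SELECT from the CD table for this member."""
    f = parse(m_record)
    return (datetime.strptime(f[7], FMT)
            - datetime.strptime(f[6], FMT)).total_seconds()

s = ("S, MIXSET, 1, S, 2002/09/04 15:02:05, 2002/09/04 15:02:05, "
     "2002/09/04 15:02:05, 2002/09/04 15:02:05, 2002/09/04 15:02:11, "
     "2002/09/04 15:02:11, , , 2002/09/04 15:02:36, 2002/09/04 15:02:36")
m0 = ("M, MIXSET, 1, 0, 2002/09/04 15:02:05, 2002/09/04 15:02:05, "
      "2002/09/04 15:02:05, 2002/09/04 15:02:08, 2002/09/04 15:02:11, "
      "2002/09/04 15:02:15, 200, 5, 2, 0")

print(cycle_seconds(s))   # 31.0 seconds for the whole subscription cycle
print(fetch_seconds(m0))  # 3.0 seconds to fetch member 0's changes
```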
Note: We learned the names of the CD tables by querying the IBMSNAP_REGISTER table at the Capture Control Server. Alternatively, in the Replication Center, under Replication Center -> Replication Definitions -> Capture Control Servers -> capture control server name -> Registered Tables, highlight the source table, right-click, and select Properties. In the Registered Tables Properties window, select the CD Table tab and look at the CD Table schema and CD Table name fields.
In our example, the number of records inserted/updated/deleted/reworked for each of the target tables is in the numeric fields at the end of the M records in Example 10-1. TGSRCTAB had:
- 200 records inserted
- 5 records updated
- 2 records deleted
- no inserts or updates reworked
TGSRCTAB2 had:
- 50 records inserted
- 150 records updated
- 2 records deleted
- 0 records reworked
We'll note in this case that the fetches from the source server took less time (3 seconds and 3 seconds respectively for the two members) than the inserts/updates/deletes to the targets (11 seconds and 13 seconds respectively). This is as expected. If the fetches from the CD tables had taken longer than the updates of the targets, we would suspect that something could be done at the source server to improve the total throughput for this subscription set.
To make Apply check at the source server for new changes more frequently:
- When creating the Subscription Set, on the Create Subscription Set window's Schedule tab, under Frequency of replication, specify Time-based - Continuous. The setting in ASN.IBMSNAP_SUBS_SET will be REFRESH_TYPE=R and SLEEP_MINUTES=0.
- When starting Apply, or in the record for this Apply Qualifier in the ASN.IBMSNAP_APPPARMS table, give attention to the DELAY parameter, which specifies the frequency in seconds at which Apply will connect to the source server to check for new changes. With Delay=0, Apply checks for new changes at the source server every millisecond.
Note: Keep in mind that Apply inserts a record into the ASN.IBMSNAP_APPLYTRAIL table every time it attempts to replicate a Subscription set whether the replication is successful or not. If Delay=0, Apply will be inserting a record into this table many times a second. If the target server is also the Apply Control Server, a SQL statement can be added to one or more Subscription Sets to delete records from the APPLYTRAIL table and keep it from growing.
To make Capture set the signals that new changes are available more frequently: When starting Capture, or in the record in capschema.IBMSNAP_CAPPARMS at the Capture Control Server, give attention to the COMMIT_INTERVAL value. At each Commit_Interval, Capture stops inserting changes into the CD tables and updates the new-change signals in the capschema.IBMSNAP_REGISTER table. Specifically:
- In the IBMSNAP_REGISTER record where GLOBAL_RECORD='Y', Capture sets SYNCHPOINT to the Log Sequence Number (LSN) that it has last read. This value is the first thing that Apply checks when it connects to the source server. If it differs from the SYNCHPOINT in the ASN.IBMSNAP_SUBS_SET record for this set at the Apply Control Server, it tells Apply that there has been some kind of update activity at the source server, since DB2 is adding records to the log.
- In the IBMSNAP_REGISTER record for each source table for which Capture has inserted records into CD tables since the last Commit_Interval processing, Capture sets CD_NEW_SYNCHPOINT to the highest IBMSNAP_COMMITSEQ value of the records recently inserted into the CD table.
If Apply detects that the SYNCHPOINT in the IBMSNAP_REGISTER record where GLOBAL_RECORD='Y' has advanced, Apply then checks the CD_NEW_SYNCHPOINTs for all the tables in the set. If any of them are greater than the SYNCHPOINT for the set back at the Apply Control Server, this indicates to Apply that there are new changes for that source table, and Apply includes that CD table on the list to fetch from. The Commit_Interval default setting is 30 seconds. A different value can be set through the Replication Center: Replication Center -> Operations -> Capture Control Servers. Highlight the Capture Control Server name in the right window, right-click, and select Manage Values in CAPPARMS. The above probably seems confusing when first read, so here is an example of what happens with Apply Delay=1 and Capture Commit_Interval=15:
1. Every 1 second, Apply connects to the source server and reads the SYNCHPOINT in the IBMSNAP_REGISTER record where GLOBAL_RECORD='Y'. Apply does this 14 times (that is, once a second for 14 seconds) without finding a new value; each time, Apply goes back to sleep for part of a second, then wakes up and connects again to the source server to check the SYNCHPOINT in IBMSNAP_REGISTER's GLOBAL_RECORD.
2. The 15th time Apply checks, it finds a new value for SYNCHPOINT in the IBMSNAP_REGISTER GLOBAL_RECORD, because 15 seconds is the interval at which Capture updates this value if there has been any update activity at all in DB2.
3. Apply then checks CD_NEW_SYNCHPOINT in each of the IBMSNAP_REGISTER records for the source tables in the set. For those with new values, Apply puts the CD table on the list to fetch from.
4. Apply selects from the CD tables with new changes, puts the changes into spill files, connects to the target server, and inserts/updates/deletes the target tables with the new changes.
5. Apply goes back to step 1 above.
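The interplay of the two intervals can be modeled crudely. The function below is a simplification (it assumes aligned clocks and ignores connect time; it is not Apply's actual scheduler) that counts the polls in the Delay=1, Commit_Interval=15 example:

```python
def polls_until_new_synchpoint(delay_seconds, commit_interval):
    """Count how many times Apply polls IBMSNAP_REGISTER before it sees a
    new GLOBAL_RECORD SYNCHPOINT, assuming Capture updates the value every
    commit_interval seconds, Apply polls every delay_seconds, and both
    start from the same instant (a deliberately simplified model)."""
    t = delay_seconds
    polls = 1
    while t % commit_interval != 0:  # no new SYNCHPOINT yet at time t
        t += delay_seconds
        polls += 1
    return polls

print(polls_until_new_synchpoint(1, 15))  # 15: the 15th poll finds the new value
```

This mirrors the walkthrough above: 14 polls find nothing, and the 15th coincides with Capture's Commit_Interval processing.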
The temptation, to decrease latency, is to set Capture's Commit_Interval to 1 second, but this can be counter-productive since Capture has to stop inserting into the CD tables in order to do the Commit_Interval processing. If Capture's overall rate of inserting into the CD tables is less than the rate at which the source applications are updating the source tables, the data available in the CD tables will fall farther and farther behind, and so will the data in the target tables. We have recommended trying a Capture Commit_Interval of 10 or 15 seconds and seeing whether Capture keeps up. The lowest end-to-end latencies we have
seen have been 15-20 seconds. What is achievable in your environment will depend on whether Capture can keep up with the source applications while setting the new-change signals for Apply at short commit_intervals.
benchmarks. The Source/Capture Control Server and the Target/Apply Control Server were on different machines. The pull design was used; that is, Apply on the target server was used to replicate the Subscription Sets.
Table 10-4 provides some characteristics of the IRWW workload itself, with Capture running on the same system:
Table 10-4 IRWW workload characteristics
Number of users        200   300   400   500   600
Transactions/second    174   276   374   424   466
Rows changed/second   1700  2500  3200  3500  4000
Below is information regarding the configuration of the DB2 source database that contained the IRWW tables and the CD tables for them.
- The catalog and temporary tablespaces were SMS tablespaces with 4K pages on a single file system spanning 8 disks. They shared the IBMDEFAULTBP bufferpool, which had 5000 4K pages.
- DB2 log files were on a file system that spanned 10 disks.
- The 9 IRWW tables that were replicated were together in a DMS tablespace with 4K pages and a bufferpool of 80000 pages. Their indexes were in a separate DMS tablespace with 4K pages and a bufferpool of 40000 pages.
- The IBMSNAP Capture Control Tables, except for the IBMSNAP_UOW table, were in the userspace1 tablespace, which was an SMS tablespace with 4K pages that shared the IBMDEFAULTBP bufferpool.
- The CD tables plus the IBMSNAP_UOW table were together in a single tablespace, and their indexes in another tablespace, with these characteristics:
  - Pagesize 4096
  - Managed by database, using (raw) devices: the CD_UOW table tablespace had 16 disk paths specified; the CD_UOW index tablespace had 4 disk paths specified
  - Extentsize 16; Prefetchsize 64
  - Bufferpools: 20000 pages for the CD_UOW table tablespace; 10000 pages for the CD_UOW index tablespace
  - Overhead 24.10000; Transfer rate 0.900000
Also, RUNSTATS was run for the CD tables when they were full of data. The Database Configuration parameters updated at the source/Capture Control Server are indicated in Table 10-5.
Table 10-5 Benchmark source server Database Configuration parameters
Database Configuration parameter    Setting during benchmark
DBHEAP                              2048 pages
LOGBUFSZ                            512 pages
Database Configuration parameter    Setting during benchmark
LOGFILSIZ                           8000 pages
LOGPRIMARY                          60
LOGSECOND                           20
SOFTMAX                             300 percent
PCKCACHESZ                          2048 pages
LOCKLIST                            2048 pages
MAXLOCKS                            60 percent
CHNGPGS_THRESH                      60 percent
NUM_IOCLEANERS                      10
NUM_IOSERVERS                       10
At the DB2 instance level, the only Database Manager Configuration parameter specifically set was INTRA_PARALLEL=NO. The following DB2 Profile Registry (db2set) variables were also specified:
DB2_MMAP_READ=OFF (for AIX), to take advantage of JFS file system caching for tables in SMS tablespaces
DB2_MMAP_WRITE=OFF (for AIX), to promote parallel I/O
DB2_MINIMIZE_LIST_PREFETCH=YES, to reduce the likelihood of sort overflow when the DB2 source server executes the SQL from Apply to fetch changes from the CD tables
The IBMSNAP Apply control tables were in the userspace1 tablespace, which was an SMS tablespace with 4K pages and shared the IBMDEFAULTBP bufferpool. The nine target tables that were replicated were together in a DMS tablespace with 4K pages and a bufferpool of 80000 pages. Their indexes were in a separate DMS tablespace with 4K pages and a bufferpool of 40000 pages. The Apply_Path, where the Apply spill files would be created, mapped to a file system spanning four disks. RUNSTATS was run for the target tables when they were full. Database Configuration parameters for the target database were the same as for the database containing the source tables at the source server. See Table 10-5.
Runs were done with Capture pruning regularly while capturing changes and while Apply was fetching and applying changes to targets. The elapsed time for Capture to capture all the changes and for Apply to get them into the targets was longer if Capture was allowed to prune, but only by 1 percent or less of total elapsed time. End-to-end latency observed was excellent at up to 2396 rows changed per second. Table 10-6 provides a summary of the latencies seen for different change rates.
Table 10-6 Benchmark target table latency
Throughput (rows/second)    76   223   398  1323  2396  2932  3448
Min (seconds)                1     1     1     1     1     1     1
Max (seconds)                6     6     7     8    17    44    71
Avg (seconds)                2     2     2     3     4    12    25
Some of the replication settings specified during these latency tests were:
Capture Commit_Interval = 1 second. The benchmark team reports that Capture Version 8's throughput does not seem to be as affected by frequent stops to update the new-change signals (SYNCHPOINTs and SYNCHTIMEs) in the IBMSNAP_REGISTER table and to commit new records in the CD and UOW tables. But we are still cautious in recommending that Commit_Interval be set this low.
Apply Sleep_Minutes = 0. This is the setting for continuous replication. How frequently Apply connects to the source server looking for new changes is then determined by the Apply start parameter Delay.
Delay = 0 seconds.
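These settings can be related to the observed latencies with a rough decomposition: a change waits on average about half a Commit_Interval before Capture commits it to the CD tables, then up to one Apply polling delay, plus the time for Apply to fetch and apply it. This model and the 1.5-second fetch/apply time are our own illustrative assumptions, not the benchmark team's analysis:

```python
# Hypothetical latency decomposition; component times are assumptions.
def expected_latency(commit_interval, apply_delay, fetch_apply_time):
    # Average wait for Capture's commit, plus average wait for Apply's
    # next polling cycle, plus the time to fetch and apply the change.
    return commit_interval / 2 + apply_delay / 2 + fetch_apply_time

# Benchmark-like settings: Commit_Interval=1s, Sleep_Minutes=0, Delay=0,
# with an assumed 1.5s fetch/apply time -- close to the 2-second average
# observed at low throughput in Table 10-6.
print(expected_latency(commit_interval=1.0, apply_delay=0.0, fetch_apply_time=1.5))
```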
Appendix A.
We selected Install Products from the Setup Launchpad window. We accepted the License Agreement in the next window and pressed Next. The next dialog window that appears is Select the Installation Type. It asks whether we want our installation to be:
Typical (with or without additional functions, such as Data Warehousing)
Compact
Custom
We selected Custom so that we could see the individual components available for installation, even though Typical without Data Warehousing would be fine. Next was a dialog window with the heading Select the Installation Action. We selected Install on this computer and Next. This brought us to the window with the title Select the features you want to install.
All the features in the list are in the selected status. Clicking the + by each icon, we can see the sub-features that are available. We can de-select a feature from installation by clicking the icon next to it and then selecting the line with the red X from among the options presented. The sub-features under Client Support are:
Interfaces
Base Client Support
System Bind Files
Java Runtime Environment
LDAP Exploitation
XML Extender
Communications Protocols
We definitely want Interfaces, Base Client Support, System Bind Files, Java Runtime Environment, and, under Communications Protocols, TCP/IP. Under Base Support -> Interfaces, we find:
JDBC Support
MDAC 2.7
ODBC Support
OLE DB Support
SQLJ Support
JDBC is needed by the DB2 Replication Center, Control Center, Command Center, and Configuration Assistant. The other interfaces may be needed by other software we have on our workstation. We should probably accept the Application Development Tools, though some, if not all, of the features in that list will certainly not be needed to administer DB2 Replication. Application Development Tools may not be included if we were installing the Administration Client instead of DB2 Connect Personal Edition. Under Administrative Tools we find:
Control Center
Client Tools
Command Center
Configuration Assistant
Event Analyzer
Replication Center is included in Control Center. All the Administrative Tools may be useful, and we should accept them.
Server Support probably would not be included if we were installing DB2 Administration Client instead of DB2 Connect Personal Edition. But in a DB2 Connect Personal Edition installation we need this feature. This feature provides the connectivity to DB2 for z/OS and OS/390 and to iSeries.
We do not need the Business Intelligence features to administer DB2 Replication. Before we click Next on the Select Features to Install window, we note that the default drive and directory for the install are:
Drive: C:\
Directory: C:\Program Files\IBM\SQLLIB
The default drive and directory are fine with us. We go ahead and press Next. We next get a window with the title Select the Languages to be installed. English has been pre-selected for us. This is fine with us, so we click Next yet again.
Finally, we get the window titled Start Copying Files, which contains a full list of the DB2 features to be installed. We review the list and click Install. Next is the window titled Installing DB2. This takes several minutes. Then we get the window Setup is Complete. We read the informational message about DB2 documentation and click Finish. We are returned to what looks like the DB2 Installation Launchpad, but it is now titled DB2 First Steps. At this point, you can explore some of the options offered here. We select the last option on the list: Exit. To verify installation of the Replication Center we go to:
Start -> Programs -> DB2 -> Administrative Tools -> Replication Center.
Appendix B.
In each section we will first discuss the information that needs to be obtained from the data source and then how to configure the connection on the Replication Center workstation. We will give examples using DB2 UDB Version 8 Configuration Assistant and using DB2 UDB Linux, UNIX, Windows commands. The DB2 Commands are described in the DB2 UDB Version 8 Command Reference. If you do not have it in hard copy or online, it can be found at:
http://www.ibm.com/software/data/db2/library
Whether you use the -DIS DDF command (prefixed with the DB2 subsystem's command recognition character), the DSNL004I message in the system log, or the utility DSNJU004, look in the output for the LOCATION and TCPPORT values for DDF. In our example, the values are:
System name: stplex4a.stl.ibm.com
LOCATION: STPLEX4A_DSN6
TCPPORT: 8008
Then we can configure DB2 connectivity to DB2 for z/OS or OS/390 on that mainframe. First we'll use the Configuration Assistant. We open it with Start -> Programs -> DB2 -> Setup Tools -> Configuration Assistant. Once the Configuration Assistant opens, we want to add a new database. To do this, click Selected from the menu bar and Add database using Wizard, as in Figure B-1.
The Add a Database wizard starts with the Source panel; the word Source is in the upper left-hand corner of the notebook. On the Source panel we select Manually Configure a Connection to a Database and click Next. The Protocol tab is next. We select TCP/IP, and the following tabs now appear on the left margin of the Add Database wizard notebook:
Source - tab is already filled in
Protocol - current tab
TCP/IP - next tab to fill in
Database (grayed)
Data source (grayed)
Node Options (grayed)
System Options (grayed)
Security Options (grayed)
The Add Database wizard will walk us through each of these tabs. We can go forward or backward among the tabs to check or change information. Here is how we fill in the information in each of the remaining tabs:
Protocol
TCP/IP - already checked above
The database resides on a host or AS/400 - be sure to check this
Connect directly to the server - check this also
TCP/IP
Hostname: stplex4a.stl.ibm.com
Service name: [blank]
Port: 8008
Database
Note that the description says, "For z/OS and OS/390 databases, specify the Location name."
Database name: STPLEX4A_DSN6
Alias: will automatically be the first 8 characters of the Database name; can be over-typed
Data Source
Note that the description says this is for registering the data source as an ODBC System DSN. We want this.
Register this database for ODBC - leave checked
As System data source - leave checked
Data source name: we accept the default, STPLEX4A
Optimize for application: we accept None
Node Options
Operating System: OS/390 or z/OS
Remote instance name: [blank]
System Options
System name: stplex4a.stl.ibm.com - we accept the default
Hostname: stplex4a.stl.ibm.com - we accept the default
Operating System: OS/390 or z/OS - we accept the default
Security Options
Use authentication value in server's DBM Configuration - checked
DCS Options
We make no entries.
We click Finish and are returned to the Configuration Assistant main window, with an entry added for our DB2 for z/OS sub-system STPLEX4A, as in Figure B-2.
The same connection could be configured in DB2 Command Line Processor using the following commands in Example B-1.
Example: B-1 DB2 CLP commands for connection direct to DB2 for z/OS
db2 catalog tcpip node stplex4a remote stplex4a.stl.ibm.com server 8008 system stplex4a ostype os390
db2 catalog dcs db stplex4a as stplex4a_dsn6
db2 catalog db stplex4a at node stplex4a
Attention: If the DB2 CLP commands are used to configure the connection, be sure to include OSTYPE in the catalog tcpip node command. The Replication Center uses this value to recognize that the server is DB2 for z/OS or OS/390.
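The three-command pattern of Example B-1 (node, DCS database, database) can be generated mechanically from its parameters. The helper below is our own illustration of that pattern, not a DB2-supplied tool; the names and values are taken from Example B-1:

```python
# Hypothetical helper: emits the CLP commands for a direct host connection,
# mirroring the node / dcs db / db pattern of Example B-1.
def catalog_host_connection(node, hostname, port, location, ostype):
    return [
        f"db2 catalog tcpip node {node} remote {hostname} "
        f"server {port} system {node} ostype {ostype}",
        f"db2 catalog dcs db {node} as {location}",
        f"db2 catalog db {node} at node {node}",
    ]

# Values from Example B-1 (DB2 for z/OS sub-system STPLEX4A_DSN6).
for cmd in catalog_host_connection("stplex4a", "stplex4a.stl.ibm.com",
                                   8008, "stplex4a_dsn6", "os390"):
    print(cmd)
```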
We'll cover testing connections and binding packages in Testing Connections and Binding Packages on page 505, since the techniques are the same for DB2 for z/OS and OS/390, iSeries, and DB2 on Linux, UNIX, and Windows.
If the DB2 Connect server is on Windows, where grep isn't available, you can either scan the whole output of the db2 get dbm cfg command looking for SVCENAME, or use the DB2 Administration Tools -> Control Center, highlight the instance name for the DB2 Connect server, right-click, and select Configure Parameters from the available options. In the DBM Configuration window, SVCENAME is under Communications. If the result of this command is a name (as it is in our case) rather than a port number, then we need to check the /etc/services file to find the associated port number:
$ cat /etc/services | grep DB2_aix43db2
On Windows, the TCP/IP services file is in the folder c:\Winnt\System32\Drivers\etc. We also need the DB Alias name in the DB2 Database Directory entry for the DB2 for z/OS sub-system. Example B-2 shows the command to obtain the DB2 Alias name.
Example: B-2 DB2 z/OS Alias name at DB2 Connect server
db2 list db directory | more

Database 1 entry:
 Database alias
 Database name
 Node name
 Database release level
 Comment
 Directory entry type
 Catalog database partition number
In summary, the information we needed is: System name = sthelens.almaden.ibm.com Port number = 60012 Database Alias = STPLEX4A
Pinging sthelens.almaden.ibm.com [9.1.38.178] with 32 bytes of data: Reply from 9.1.38.178: bytes=32 time=10ms TTL=255
Next we configure DB2 connectivity. Using the Configuration Assistant:
Source
Manually configure a connection to the data source. (Here is where we could try Search the network and see if the DB2 Administration Server at the DB2 Connect server can provide all the information needed to configure by point-and-click.)
Protocol
TCP/IP
Database physically resides on a host or OS/400 system - check this
Connect to the server via a gateway - check this also
TCP/IP
Hostname: sthelens.almaden.ibm.com - our AIX system with DB2 ESE
Service name: [blank]
Port: 60012 - TCP/IP listener port of DB2 ESE on sthelens
Database
Database name: STPLEX4A - the Database Alias name in the Database Directory on sthelens
Data Source
Register as ODBC System DSN: STPLEX4A
Node Options
Operating system: AIX
Instance name: aix43db2
System Options
No changes (accepted defaults)
Security Options
No changes (accepted default values)
Finish
The new entry in the Configuration Assistant should appear as in Figure B-3.
Figure B-3 Configuration Assistant - DB2 for z/OS via DB2 Connect Gateway
The DB2 CLP Commands to do the same configuration are in Example B-3:
Example: B-3 DB2 CLP commands for DB2 for z/OS via DB2 Connect server
db2 catalog tcpip node aix43db2 remote sthelens.almaden.ibm.com server 60012 system sthelens ostype aix
db2 catalog db stplex4a at node aix43db2
db2 catalog system odbc data source stplex4a
Please note in this example:
We picked a node name that is the same as the DB2 instance name on the AIX server.
sthelens.almaden.ibm.com is the hostname of our DB2 server on AIX.
60012 is the TCP/IP listener port of the DB2 server (or DB2 Connect) on the AIX server.
Note: when configuring the connection through a DB2 Connect or DB2 server on Linux, UNIX, or Windows, we don't execute the db2 catalog dcs db... command. As before, when we configured the connection directly to DB2 for z/OS, we can test the connection and bind using the Configuration Assistant.
Note: We'll also need the iSeries DRDA TCP/IP listener port number, but we expect that to be the standard, which is 446.
When you have the hostname or IP address of the iSeries system, we recommend that you verify TCP/IP connectivity, such as by opening a DOS or UNIX command prompt and using ping. To get the Relational Database name of the iSeries system, we need to log onto the iSeries system and look at the output of the command WRKRDBDIRE. We are looking for the entry where Remote Location = *LOCAL. On our iSeries system STL400G.STL.IBM.COM, we find it on the second page of the WRKRDBDIRE output, as in Figure B-4.
So in our case:
hostname = stl400g.stl.ibm.com
Relational Database name = STL400G
We'll point out some other things that need to be in place on an iSeries system for it to be accessed from DB2 Connect on Linux, UNIX, or Windows: Library/Collection NULLID must exist. It will be used for the DB2 Connect Utility, CLP, and ODBC packages. It can be created with the CL command:
CRTLIB LIB(NULLID)
The DDM TCP/IP job needs to have certain attributes. They can be checked/changed with command:
CHGDDMTCPA
If you get security errors (SQL30082) when trying to access from DB2 Connect, try changing the Password setting in CHGDDMTCPA. The DDM TCP/IP Server job needs to be running. It can be started with:
STRTCPSVR SERVER (*DDM)
And check for QRWTLSTN in the QSYSWRK subsystem. If the CCSID of the iSeries system is 65535 (which is the default CCSID for iSeries systems), then the CCSID of the iSeries userid specified on the connection from DB2 Connect needs to be changed. The CCSID of AS/400 user HCOLIN can be changed to the standard English CCSID 37 with the command:
CHGUSRPRF USRPRF(HCOLIN) CCSID(37)
In summary, the information we needed from the iSeries system is: System name = stl400g.stl.ibm.com Port number = 446 (DRDA listener port on iSeries) Relational Database name = STL400G
Once there, we can configure a new connection to an iSeries system by choosing Selected -> Add Database using wizard. Here is how we complete the configuration for our example using the Configuration Assistant:
Source
Manually configure a connection to a database
Protocol
TCP/IP
Database resides on a host or AS/400 - check this
Connect directly to the server - check this also
TCP/IP
Hostname: stl400g.stl.ibm.com
Service name: [blank]
Port: 446 - this is the DRDA listener port on iSeries
Database
Database name: STL400G (the RDB name from WRKRDBDIRE)
Data Source
Register this data source for ODBC
As System DSN
Data Source name: STL400G
Optimize for application: None
Node Options
Operating system: OS/400
Instance: [blank]
System Options
System name: stl400g.stl.ibm.com
Hostname: stl400g.stl.ibm.com
Operating system: OS/400
Security Options
Use authentication value in server's DBM configuration
DCS Options
No specifications
Finish
Our new entry for the iSeries in the Configuration Assistant looks like Figure B-5.
As indicated before, we can use the Configuration Assistant to test the connection to the iSeries and to bind DB2 Connect Utility, CLP, and ODBC packages to the iSeries. From the menu bar, select Selected and then Bind or Test Connection. The connection to the same iSeries system could have also been configured with the following DB2 CLP commands in Example B-4:
Example: B-4 DB2 CLP commands for connection direct to iSeries
db2 catalog tcpip node stl400g remote stl400g.stl.ibm.com server 446 system stl400g ostype os400
db2 catalog dcs db stl400g as stl400g
db2 catalog db stl400g at node stl400g
db2 catalog system odbc data source stl400g
Attention: If the DB2 CLP commands are used to configure the connection, be sure in the catalog tcpip node command to include OSTYPE. Replication Center uses this value to recognize that the server is an iSeries.
We'll cover testing connections and binding packages in Testing Connections and Binding Packages on page 505, since the techniques are the same for DB2 for z/OS and OS/390, iSeries, and DB2 on Linux, UNIX, and Windows.
Node Options
Operating system: AIX
Instance name: aix43db2
System Options
System name: sthelens.almaden.ibm.com
Hostname: sthelens.almaden.ibm.com
Operating system: AIX
Security Options
Use authentication value in server's DBM Configuration
Finish
To do the same configuration using DB2 Command Line on Windows, we used the following commands in Example B-5:
Example: B-5 DB2 CLP commands for connection to iSeries via DB2 Connect server
db2 catalog tcpip node aix43db2 remote sthelens.almaden.ibm.com server 60012 system sthelens ostype aix
db2 catalog db stl400g at node aix43db2
db2 catalog system odbc data source stl400g
Here we'll assume you may need to configure the connection from your workstation to DB2 for Linux or UNIX manually, keying in the required information. The information you'll need about the DB2 for UNIX or Linux server is:
Hostname or IP address
TCP/IP listener port number of the DB2 instance
The Database Alias name of the database
After obtaining the hostname or IP address, we recommend verifying TCP/IP connectivity, such as by going to a DOS or UNIX command prompt on the Replication Center workstation and using ping. To get the TCP/IP port number and database name, we need to log into the DB2 server system as the DB2 instance owner, for instance using telnet. To get the TCP/IP port number we enter:
db2 get dbm cfg | grep SVCENAME
Since the SVCENAME value is a service name (DB2_aix43db2) and not a port number, we need to look in /etc/services to get the TCP/IP listener port number that is associated with the DB2 instance's TCP/IP service name. In our example, the command to do this is:
cat /etc/services | grep DB2_aix43db2
The number we want is the first one: 60012. To get the Database Alias name, we need to look at the DB2 instance's Database Directory. Example B-6 shows the command to do this.
Example: B-6 Obtaining DB2 UNIX Alias from DB directory
db2 list db directory | more

Database alias                    = AIX43DB2
Database name                     = AIX43DB2
Local database directory          = /db2
Database release level            = a.00
Comment                           =
Directory entry type              = Indirect
Catalog database partition number = 0
So, in summary, the information we needed was:
TCP/IP hostname = sthelens.almaden.ibm.com
TCP/IP listener port = 60012
Database Alias = AIX43DB2
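The /etc/services lookup that grep performs above can be sketched as a small parser of the services file format (name, port/protocol, optional aliases, optional # comment). The file contents below are illustrative, using the service name from our example:

```python
# Sketch of the /etc/services lookup: return the port for a service name.
def port_for_service(services_text, name):
    for line in services_text.splitlines():
        line = line.split("#", 1)[0]        # strip trailing comments
        fields = line.split()
        if fields and fields[0] == name:
            return int(fields[1].split("/")[0])   # "60012/tcp" -> 60012
    return None

# Illustrative file contents; DB2_aix43db2 is the instance's service name.
sample = """\
db2c_DB2        50000/tcp
DB2_aix43db2    60012/tcp   # DB2 instance aix43db2 listener
"""
print(port_for_service(sample, "DB2_aix43db2"))  # 60012
```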
Next we'll configure DB2 connectivity using both the DB2 Configuration Assistant and the DB2 CLP commands. We open the DB2 Configuration Assistant on our workstation with Start -> Programs -> DB2 -> Setup Tools -> Configuration Assistant. Once there, we go to the tool bar at the top and pick Selected -> Add database using wizard. We fill out the Add Database wizard's notebook as follows:
Source
Manually configure a connection to the database
Note: here is where you could select Search the network and see if Configuration Assistant can find the DB2 server system and get the information from it to let you do the connection configuration all by point-and-click.
Protocol
TCP/IP
We leave unchecked Database physically resides on host or AS/400
TCP/IP
Hostname: sthelens.almaden.ibm.com
Service name: [blank]
Port: 60012
Database
Database name: AIX43DB2
Database alias: AIX43DB2
Data Source
Register this database for ODBC
As system data source
Data source name: AIX43DB2
Node Options
Operating system: AIX
Instance name: aix43db2
System Options
System name: sthelens.almaden.ibm.com
Hostname: sthelens.almaden.ibm.com
Operating System: AIX
Security Options
Use authentication value in server's DBM Configuration
Finish
The DB2 CLP commands to accomplish the same thing are in Example B-7:
Example: B-7 DB2 CLP commands for connection to DB2 on UNIX
db2 catalog tcpip node aix43db2 remote sthelens.almaden.ibm.com server 60012 system sthelens ostype aix
db2 catalog db aix43db2 at node aix43db2
db2 catalog system odbc data source aix43db2
Our new entry for the DB2 for Linux/UNIX database in DB2 Configuration Assistant appears as in Figure B-6.
We'll cover testing connections and binding packages in Testing Connections and Binding Packages on page 505, since the techniques are the same for DB2 for z/OS and OS/390, iSeries, and DB2 on Linux, UNIX, and Windows.
and look for the TCP/IP service name, SVCENAME, in the result. There are many parameters in the output; SVCENAME is near the bottom. An alternative is to open the DB2 Control Center on the DB2 Windows server, select the instance (DB2), right-click, and select Configure Parameters. Look under Communications for SVCENAME. We find:
SVCENAME micksdb2
Since the SVCENAME is a service name and not a TCP/IP port number, we need to look in the Windows TCP/IP services file on the DB2 Windows server to find the port number. The services file is in the folder c:\WINNT\system32\drivers\etc. We can open the services file with Notepad and use Edit -> Find to look for micksdb2. We find the entry:
micksdb2 3846/tcp
We'll point out here that these are a customized service name and TCP/IP port number for DB2 on Windows. If the DB2 installation process on the DB2 Windows server had configured the TCP/IP communications, the service name would be more like db2c_DB2 and the TCP/IP port number would probably be 50000. To obtain the database alias name, we can either use another command in a DB2 Command Window or we can use the Control Center. Example B-8 shows the command to do this.
In the Control Center on the DB2 Windows server, if we look under the instance DB2 at the databases, we find the database SAMPLE. If the database we were looking for had an alias different from the database name, we would see something like MICKSDB(SAMPLE), where SAMPLE is the real database name and MICKSDB is the database alias. It is the database alias that we need. In our case, the database name and database alias are the same: SAMPLE. So, in summary, the information we needed from the DB2 Windows server was:
System name: MICKS DB2
TCP/IP listener port: 3846
DB2 Database Alias: SAMPLE
Note: Here is where you could select Search the network and see if Configuration Assistant can find the DB2 server system and get the information from it to let you do the connection configuration all by point-and-click.
Protocol
TCP/IP
We leave unchecked Database physically resides on host or AS/400
TCP/IP
Hostname: micks.stl.ibm.com
Service name: [blank]
Port: 3846
Database
Database name: SAMPLE
Database alias: SAMPLE
Note: If we had DB2 UDB on our workstation containing a database SAMPLE, we could use a database alias for the SAMPLE database at the DB2 Windows server to avoid conflicting entries in the DB2 Database Directory on our workstation. Or, prior to configuring the connection to the remote DB2 Windows server, we could either drop our local SAMPLE database, or uncatalog it and catalog it again with a different Database Alias name.
Data Source
Register this database for ODBC
As system data source
Data source name: SAMPLE
Node Options
Operating system: Windows
Instance name: DB2
System Options
System name: micks.stl.ibm.com
Hostname: micks.stl.ibm.com
Operating System: Windows
Security Options
Use authentication value in server's DBM Configuration
Finish
The DB2 CLP commands to accomplish this are shown in Example B-9.
We can test the connection to the DB2 Windows server and, if needed, bind packages, also using the Configuration Assistant. See the next section, Testing Connections and Binding Packages.
If you see SQL0805 or -805, or package NULLID.____ not found messages when using the Replication Center, Control Center, Command Center, or other software on your workstation to access the DB2 for z/OS data source, it is because some packages need to be bound from the workstation to the data source. We'll cover how to do that here. To test the connection to a DB2 server using the Configuration Assistant, open the Configuration Assistant, highlight a particular database, and in the tool bar at the top, select Selected -> Test Connection. See Figure B-8.
We select the following options:
Standard - tests the DB2 Command Line Processor and embedded SQL interface
CLI and ODBC - may be used by query tools and other ODBC software
JDBC - will definitely be used by the DB2 administration tools that are written in Java
We include a userid (with password) that we got from the data source (mainframe host, iSeries, Linux, UNIX, or Windows), and press Test Connection. We might see a black DOS-prompt-type window while the connection request is being processed; then we should see a result window like Figure B-10.
Some errors we could see, and their causes, are:
SQL1336: We entered the wrong system name in the Configuration Assistant.
SQL30081 with TCP/IP error -10061 or -10060: At the data source or the intermediate DB2 Connect server, the DB2 TCP/IP listener isn't running or the DB2 service isn't started, or on our workstation we entered the DB2 TCP/IP listener port information incorrectly in the Configuration Assistant.
SQL30082: The userid we entered is not correct or doesn't exist at the data source, or the password we entered is not correct.
We can also make the connection test in a DB2 Command Line Processor window to a DB2 for z/OS or OS/390, iSeries, or DB2 for Linux, UNIX, and Windows server. Example B-10 shows how to do this. In the example, we enter the connect command without specifying a password, so we are prompted for the password to use on the connection.
Example: B-10 Testing connectivity using DB2 CLP
db2 connect to sample user micks Enter current password for micks:
Database Connection Information

Database server       = DB2/NT 8.1.0
SQL authorization ID  = MICKS
Local database alias  = SAMPLE
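The connection-test errors described above lend themselves to a quick lookup table. The mapping below is our own convenience summary of this appendix's diagnoses, not an official DB2 message catalog:

```python
# Summary of the connection-test errors discussed in this appendix.
CONNECT_ERRORS = {
    "SQL1336": "Wrong system (host) name entered in the Configuration Assistant",
    "SQL30081": "DB2 TCP/IP listener not running, DB2 service not started, "
                "or wrong listener port configured on the workstation",
    "SQL30082": "Userid not correct or does not exist at the data source, "
                "or password not correct",
}

def diagnose(sqlcode):
    # Fall back to a generic hint for codes not covered above.
    return CONNECT_ERRORS.get(sqlcode, "Not covered here; check the DB2 messages")

print(diagnose("SQL30082"))
```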
If we need to bind any packages from our workstation to a DB2 for z/OS or OS/390, iSeries, or DB2 Linux, UNIX, Windows server, we can do that either using Client Configuration Assistant or DB2 Command Line Processor. Using Configuration Assistant, highlight the particular database, and on the tool bar, select Selected -> Bind. The Bind dialogue box opens. See Figure B-11.
In this window, we can select one of the groups of DB2 Client packages (in the example, we've selected the CLI/ODBC Support packages), or we can indicate another BND file on our workstation. If we want to override any of the default values for the options associated with the BND files, we can go to Bind Options in the middle of the window and press Add; we will be presented with a list of options from which we can select and specify a value. Under Connection Information, we fill in a userid, obtained from the data source system, that can connect to the data source and can bind packages. We press the Bind button in the lower right corner. The Results tab should move to the foreground and show us whether our packages are successfully bound or whether there are any errors. We could also bind packages using a DB2 Command Window. The BND files for the DB2 Client facilities, such as for the Command Line Processor, ODBC/CLI, REXX, and DB2 Utilities, are in the folder c:\Program Files\IBM\SQLLIB\bnd. They can be bound in groups, rather than individually, using the *.LST files. To bind packages using the DB2 CLP, you must first connect to the data source where you want to bind the packages (see the DB2 CLP connect example above). You should cd to the directory containing the BND files. On Windows, an example of the command to bind the appropriate BND files to DB2 for z/OS is:
db2 bind @ddcsmvs.lst blocking all sqlerror continue
The Bind command is described in the DB2 UDB Version 8 Command Reference, SC09-4828. Again, it appears that the DB2 Administration Client automatically binds all the packages it needs at each DB2 data source, so it should be unnecessary for you to explicitly bind any packages to use the Replication Center. Binding is described here just in case it is needed for some reason.
Appendix C.
If replicating to Informix:
If the target tables already exist, you need to be able to insert, update, and delete records in the target tables
If the target tables don't yet exist, you need the authority to create tables
Information about the Informix TCP/IP protocol listener. See the discussion below on how to obtain this information.
The name of the database within the Informix server that has the target tables or the source tables for replication. Informix database names are case-sensitive. This can be found using Informix's dbaccess. In our example, it will be stores_demo.
We also need to know the Informix main directory. This is indicated by environment variable INFORMIXDIR. On this system:
INFORMIXDIR=/informix/93server
We should also see the settings for INFORMIXSERVER and INFORMIXSQLHOSTS. The latter indicates whether the active sqlhosts file for this instance is in a directory other than the default location, which is $INFORMIXDIR/etc. In our case, we find that INFORMIXSQLHOSTS is not set, which means that the active sqlhosts file is in $INFORMIXDIR/etc. We will need to see the active onconfig file and the sqlhosts file. They will be in $INFORMIXDIR/etc; in other words, in
/informix/93server/etc
We cd to that directory. We use ls to see the names of the files in that directory to verify that the onconfig file we are looking for (onconfig.inf93) and the sqlhosts file are there. We need to see the DBSERVERNAME and DBSERVERALIASES values in the active onconfig file. We could use vi or we could use cat with grep:
cat onconfig.inf93 | grep DBSERVERNAME
(Since there is no value between the variable name and the #, the variable is not set with a value.) Next we need to look in the sqlhosts file. It is also in $INFORMIXDIR/etc. The file is probably not large, so we could use vi. In sqlhosts we find:

inf724  onsoctcp  anaconda  ifmx724
inf92   onsoctcp  python    ifmx92
inf93   onsoctcp  viper     ifmx93
inf731  onsoctcp  boa       ifmx731
For those who are not familiar with the format of sqlhosts entries:
The first value is the dbservername
The second value is the Informix protocol
The third value is the system name
The fourth value is the TCP/IP service name or port number
The record we are looking for is the third one (inf93). It indicates:
dbservername = inf93
Informix protocol = onsoctcp
This is one of the Informix TCP/IP protocols, and is what the Informix Client SDK needs to connect to this Informix server from the DB2 ESE Version 8 system using federated access.
TCP/IP service name = ifmx93
Since the fourth field is a service name, not a port number, we need to look in the system's TCP/IP services file (/etc/services) to make sure there is an entry there and to find out the port number. We can use cat with grep:
cat /etc/services | grep ifmx93
The result is:
ifmx93 1652/tcp # Informix online v9.3 viper
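Resolving a service name to its port number can be scripted the same way. This sketch works against a throwaway copy of the services entry above (the path and file are illustrative only; on a real system you would read /etc/services directly):

```shell
# Hypothetical excerpt of /etc/services containing the entry found above.
cat > /tmp/services.demo <<'EOF'
ifmx93 1652/tcp # Informix online v9.3 viper
EOF

# Take the second field (port/protocol) and keep only the port number.
grep '^ifmx93' /tmp/services.demo | awk '{ split($2, a, "/"); print a[1] }'
```

This prints 1652, the port the Informix server is listening on.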
Let's summarize the information we have obtained from the Informix server:
userid: gjuner2
Informix DB server name: inf93
Informix TCP/IP protocol listener: onsoctcp
Informix server system name: viper.svl.ibm.com
Informix server's TCP/IP service name: ifmx93
Informix server's TCP/IP port: 1652
Informix database name: stores_demo
For an Informix server on Windows, the DBSERVERNAME of the Informix/Windows server can be found in the Windows Registry. Open the Windows Registry (at a DOS command prompt, enter regedit), then select HKEY_LOCAL_MACHINE -> SOFTWARE -> Informix -> Online -> dbservername. In the Environment folder under our dbservername, we can find more information, including the name of the onconfig file. The onconfig file referenced can be found in INFORMIXDIR/etc. Typically that would be in
c:\Program Files\informix\etc
The sqlhosts entries on the Informix/Windows server machine can also be found in the Windows Registry. Select HKEY_LOCAL_MACHINE -> SOFTWARE -> Informix -> SQLHOSTS -> servername. If the protocol is olsoctcp for the SQLHOSTS entry referenced by DBSERVERNAME in the onconfig information, this is OK. The TCP/IP services file on Windows can be found in the folder c:\Winnt\System32\Drivers\etc.
When the Informix Client SDK installation is complete, you should have the information for the INFORMIXDIR environment variable. In our case it is
INFORMIXDIR=/home/informix
If the Informix client software was previously installed on the system, you can verify that you have the Client SDK by going to the INFORMIXDIR/lib directory and using ls *.a (on AIX) to verify that archive libraries of the Client SDK are there. For instance on our AIX system, in /home/informix/lib we find netstub.a. If we cd to INFORMIXDIR/lib/esql, we find more of the archive libraries of the Informix Client SDK that DB2 federated access will need. The Informix Client SDK archive libraries on AIX end with the extension .a.
If there are fixpacks available for DB2 Version 8, the latest should be downloaded at this time and applied before the next step (djxlink).
djxlink
On AIX, Solaris, HP-UX, and Linux, the wrapper library that DB2 uses to interface with the Informix Client SDK has to be built by a link between the
wrapper input library and Informix Client SDK libraries. The wrapper input library for creating the Informix wrapper on AIX is
/usr/opt/db2_08_01/lib/libdb2STinformixF.a
On Solaris and Linux, this would be /opt/IBM/V8.1/libdb2STinformixF.so. On HP-UX, it would be /opt/IBM/V8.1/libdb2STinformix.sl. It is important that djxlink be run after each fixpack is applied, so that the wrapper library in use is at the same fixpack level as the DB2 engine. On AIX, as root, run:
/usr/opt/db2_08_01/bin/djxlinkInformix
djxlinkInformix will write detailed warning/error messages to
/usr/opt/db2_08_01/lib/djxlinkInformix.out If successful, djxlinkInformix will create the Informix wrapper library:
/usr/opt/db2_08_01/lib/libdb2informix.a
Note: on Windows, djxlink is not run. DB2's Informix wrapper library (db2informix.dll) is a dynamic link library (DLL), as are the Informix Client SDK libraries.
The meaning of the values in the four fields is:
First field - dbservername: inf93. This will be the setting for the Node option in our federated Server definition for the Informix server.
Second field - Informix protocol: onsoctcp
Third field - hostname of the Informix server: viper.svl.ibm.com
Fourth field - TCP/IP port number or service name: 1652. If the value were a name instead of a number, there would need to be an entry in /etc/services on the DB2 system to resolve the service name to the port number that the Informix server is listening on.
On Windows, Informix sqlhosts entries are recorded in the Windows Registry (regedit). Select HKEY_LOCAL_MACHINE -> SOFTWARE -> Informix -> SQLHOSTS. If there is not yet an entry for the Informix server, it can be added using Informix Connect's setnet32. If the service name and port for the Informix server need to be added to the TCP/IP services file (for example, because the SQLHOSTS entry has a service name instead of a port number), then Notepad or another editor can be used to add the entry. On Windows, the TCP/IP services file is in the directory c:\Winnt\System32\Drivers\etc.
As the instance owner, you might verify that the Informix wrapper library now appears in the instances /sqllib/lib sub-directory since this library was created by root using djxlinkInformix. cd to /home/aix43db2/sqllib/lib/ and use ls to find libdb2informix.a
db2dj.ini
The db2dj.ini file contains the environment variables that non-DB2 data source client software requires. In other words, we need to specify in db2dj.ini the environment variables required by the Informix Client SDK.
The db2dj.ini file is normally in the instance owner's /sqllib/cfg sub-directory - in our example, /home/aix43db2/sqllib/cfg/db2dj.ini. If the file is not in that directory, it can be created with an editor (vi). The db2dj.ini variables required for use with the Informix Client SDK are INFORMIXDIR and INFORMIXSERVER. A third variable - INFORMIXSQLHOSTS - is required if the Informix sqlhosts file is not in the directory INFORMIXDIR/etc.
INFORMIXDIR - the main directory of the Informix Client SDK here on the DB2 system
INFORMIXSERVER - the dbservername from one of the sqlhosts entries on the DB2 system
INFORMIXSQLHOSTS - the path to the sqlhosts file if it is not in INFORMIXDIR/etc
In our example, db2dj.ini did not exist yet. We used vi to create it and made the following entries:
INFORMIXDIR=/home/informix
INFORMIXSERVER=inf93
INFORMIXSQLHOSTS=/home/informix/etc
Note: We really didn't need to specify INFORMIXSQLHOSTS, since the sqlhosts file is in the usual directory.
Attention: do not use variable names in the values specified in db2dj.ini. Doing so will cause unpredictable DB2 errors, including crashes. For instance, this entry is incorrect:
INFORMIXSQLHOSTS=INFORMIXDIR/etc
Note: db2dj.ini is not used on Windows. On Windows, federated server uses Windows System Environment Variables.
Warning: it is important that the DB2_DJ_INI setting include the full path to the db2dj.ini file. Note: DB2_DJ_INI is not used on Windows, since DB2 federated server on Windows does not use a db2dj.ini file.
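As a sketch (the path is our instance's cfg directory from the example above, not a value taken from this page), the registry variable would be set with db2set, giving the full path:

```
db2set DB2_DJ_INI=/home/aix43db2/sqllib/cfg/db2dj.ini
```

Run db2set with no arguments afterwards to confirm the variable is recorded.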
There is an optional DB2 Profile Registry variable - DB2_DJ_COMM - that tells DB2 to load wrapper libraries whenever DB2 is started. This way, the first time the wrapper is used, there won't be a delay while it is loaded into memory. Once a wrapper is loaded into memory, it remains there until DB2 is stopped. Here is an example that tells DB2 to load the Informix wrapper library into memory on AIX:
db2set DB2_DJ_COMM=libdb2informix.a
Note: db2set DB2_DJ_COMM can be used on Windows to load db2informix.dll whenever the DB2 service is started.
To verify, display the database manager configuration; FEDERATED is among the first 10 parameters in the output. In our case, the result is:
Federated Database System Support (FEDERATED) = YES
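The commands behind this check are not reproduced on this page; assuming the standard DB2 V8 CLP syntax, they would be:

```
db2 get dbm cfg | grep FEDERATED
# If the value is NO, enable it and restart the instance:
db2 update dbm cfg using federated yes
db2stop
db2start
```

FEDERATED is an instance-level parameter, so the restart is needed before the change takes effect.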
The Server definition includes:
The Wrapper to be used with this federated Server
Options that specify connectivity information
Other options that may be required or could improve performance
For replication to or from Informix, the Server option IUD_APP_SVPT_ENFORCE must be specified and set to N. The Server definition depends on a Wrapper already being defined for use with the specified Server Type.
User Mapping - registers a mapping of a userid that accesses the DB2 database to a userid at a federated Server. The User Mapping provides federated server with the userid to specify in the under-the-covers connections it makes to an Informix server on behalf of a specific DB2 database user.
Nicknames - register a remote table or view in the DB2 database. A nickname is a two-part name adhering to the same naming rules as for tables and views in the DB2 database. The first part of the name is the schema. If only one part is specified when the nickname is created, that part becomes the second part of the two-part name, and the schema (the first part) defaults to the userid creating the nickname. The nickname specification must include the remote schema and the remote name of the data source object for which it is created. Once created, the nickname can be referenced in Select, Insert, Update, and Delete statements against the DB2 database, and federated server will make an under-the-covers connection to the data source to execute the appropriate action. Multiple nicknames can be referenced in one SQL statement, and SQL statements can mix local tables and views with nicknames; federated server can join data across multiple data sources and between local DB2 data and remote data source data.
Create Wrapper
Before creating the wrapper, you might check for the appropriate wrapper library. On AIX, this would be in the instance owner's /sqllib/lib directory, and the library name is libdb2informix.a. On Windows, this would be in c:\Program Files\IBM\SQLLIB\bin, and the library name is db2informix.dll.
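The Create Wrapper statement itself is not reproduced on this page. As a sketch of the standard DB2 V8 syntax, at the DB2 CLP prompt it is simply:

```
CREATE WRAPPER INFORMIX
```

With the default wrapper name INFORMIX, DB2 locates the library itself; a LIBRARY clause (for example, LIBRARY 'libdb2informix.a' on AIX) can be added to name it explicitly.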
Note: Wrappers, once they have been defined, can be found in the SYSCAT.WRAPPERS catalog view.
Create Server
Before defining a Server for an Informix data source we need:
The INFORMIX wrapper to have already been defined
The dbservername for the Informix data source from the sqlhosts file here on the DB2 server - in our case, the value is inf93
The name of the database at the Informix server that contains the tables that will be replication sources, or in which we will create replication target tables - in our case, that is stores_demo
At the DB2 CLP prompt (db2=>) or in a DB2 script file:
CREATE SERVER IDS_VIPER TYPE INFORMIX VERSION '9.3' WRAPPER INFORMIX OPTIONS (NODE 'inf93', DBNAME 'stores_demo', IUD_APP_SVPT_ENFORCE 'N', FOLD_ID 'N', FOLD_PW 'N')
In this example:
SERVER IDS_VIPER: the Server name we are making up to reference this Informix server
TYPE INFORMIX: the appropriate type for Informix
VERSION 9.3: the version of the Informix server
WRAPPER INFORMIX: the wrapper we use with Informix
NODE inf93: the dbservername in sqlhosts for the Informix server. This value is case sensitive.
DBNAME stores_demo: the database at the Informix server. This value is case sensitive.
IUD_APP_SVPT_ENFORCE N: the option we need to specify to enable insert/update/delete (that is, replication) with Informix
FOLD_ID/FOLD_PW N: these are optional. They tell federated server, when it connects to this Informix server, to attempt the connection only once and to use the REMOTE_AUTHID/REMOTE_PASSWORD values of the User Mapping exactly as they are, without folding them to upper or lower case.
Note: Servers, once they have been defined, can be found in the SYSCAT.SERVERS catalog view. The options are in SYSCAT.SERVEROPTIONS.
At UNIX prompt:
db2 "create user mapping for aix43db2 server ids_viper options (remote_authid 'gjuner2', remote_password 'gjunerpw')"
In this example: aix43db2: the userid that connects to the DB2 database SERVER IDS_VIPER: the Server name for the specific Informix server REMOTE_AUTHID gjuner2: the userid at the Informix server REMOTE_PASSWORD gjunerpw: the password at the Informix server
Set Passthru
Before trying to create any nicknames, it is a good idea to test the Server and User Mapping definitions. We can also verify the schema and name of an Informix table for which we will create a nickname. At the DB2 CLP prompt (db2=>):
set passthru ids_viper
Note: This statement does not exercise the Server definition, it just tells DB2 to send the next statement to the specified server.
select count(*) from systables
This statement causes DB2 to use the Server and User Mapping information to attempt a connection to the Informix server through the Informix Client SDK. If the connection is successful, DB2 sends the SQL statement, which queries an Informix catalog table.
select count(*) from informix.customer
This statement queries a table for which we want to create a nickname. We specify both the schema and the table name in order to verify both parts of the name.
set passthru reset
This statement returns us to the world of the DB2 database, where Informix tables can only be accessed via nicknames.
Create Nickname
Create Nickname will cause DB2 to:
Connect to the Informix server using the Server and User Mapping information
Query the Informix catalog for information about the Informix table or view for which we are creating the nickname
Insert records into the DB2 catalog for the nickname
To create a nickname, we need both the schema and the table/view name of the object at the Informix server. Also, if we will be doing Selects in the DB2 database referencing this nickname, performance will be better if DB2 has accurate statistics for the nickname. These are gathered from Informix when the nickname is created, if statistics exist there. The Informix command to update the statistics in the Informix catalog for a table is update statistics. For instance:
update statistics for table informix.customer
The Informix update statistics command can be run in a DB2 Set Passthru session to an Informix server.
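The Create Nickname statement that the note below discusses did not survive in this copy. Reconstructed from that note's explanation of each part (so treat it as a sketch), it would be entered at an AIX or Windows command prompt as:

```
db2 "create nickname IDS_VIPER.CUSTOMER for IDS_VIPER.\"informix\".\"customer\""
```

The outer double-quotes hand the whole statement to the db2 command; the backslash-escaped double-quotes preserve the case of the remote schema and table name.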
Note, in the example immediately above: the backslashes (\) are used before the double-quotes (") that surround the remote schema name and the remote table name, to tell AIX and Windows not to perform their normal processing of a double-quote within a command. The backslash is not needed when the statement is entered in a DB2 CLP session (db2=>) or in a DB2 script file.
In this example:
IDS_VIPER (1st) will be the schema of the nickname
CUSTOMER will be the nickname part of the nickname
IDS_VIPER (2nd) is the Server name for the Informix server. The Node option of the Server definition points to an Informix server instance, and the DBNAME option points to a specific database within that instance.
informix is the schema of the table for which we are creating the nickname. The value is enclosed in double-quotes so that DB2 will not fold it to upper case before querying Informix about the remote table.
customer is the name of the table for which we are creating the nickname. The value is enclosed in double-quotes for the same reason.
Once a nickname has been created, there are records for the nickname in the DB2 catalog. Look in the following catalog views:
SYSCAT.TABLES - a record for the nickname, with Type=N
SYSCAT.TABOPTIONS - information about the remote table. There are multiple records for each nickname.
SYSCAT.COLUMNS - one record for each column of the nickname, showing the DB2 data types of the columns
SYSCAT.COLOPTIONS - information about the columns of the remote table, including their data types. There are multiple records for each column.
SYSCAT.INDEXES - information about indexes on the remote table, such as which columns are indexed. The DB2 optimizer uses this information to help choose the best plan for queries involving the nickname; there is not a real index in DB2 for the remote table.
Create Wrapper
With the Federated Database Objects icon highlighted, right-click and select Create Wrapper.
The Create Wrapper dialog window opens. Select INFORMIX in the pulldown window. See Figure C-2.
When you select INFORMIX as the wrapper name, the Library name should be filled in automatically. Click OK to create the wrapper. The INFORMIX wrapper should appear in the Control Center's right window, and also as an object under Federated Database Objects in the tree in the left window.
Create Server
In the Control Center's tree in the left window, highlight the INFORMIX wrapper icon and, if necessary, click the + to expand the tree below. Highlight the Server icon, right-click, and then select Create from among the options to open the Create Server dialog window.
Key in the Server Name.
Select the Server Type from the pull-down list.
Key in the Version of the Informix server.
For the Node, key in the dbservername value from the appropriate sqlhosts entry.
For the Database, key in the name of the database at the Informix server.
When filled in, the Create Server window should appear as in Figure C-3.
Don't click OK yet. First, click Options to open the Server Options window. Check and highlight each of the options one at a time:
Now click OK on the Options window, then OK on the Create Server window. An icon for the new server definition should now appear under the Server icon under the INFORMIX icon in the tree in the Control Center's left window.
In the Remote Userid fields below, put the Userid and Password that can access the Informix server. See Figure C-5.
Now click OK to create the User Mapping. The new User Mapping should appear as an object in the Control Center's right window. If your password changes at the Informix server, you can find this User Mapping object in the Control Center, highlight it, right-click, and select Alter to open the Alter User Mapping window, where you can change the Remote_Password.
Connect to the federated database (FED_DB) and use the commands in the SQL Statements section above for Set Passthru to test the definitions. You can also do this test here in the Control Center using the Create Nickname dialog. Under the icon for the Server, highlight the Nicknames icon, right-click, and select Create. The Filter Table for Nicknames window should open. Check the box for Remote Table Name, be sure the operator is =, and in the Values field put the name of a known table. For Informix, we'll use systables, which is one of the Informix catalog tables. Then click the Count button. Under the covers, federated server will use the information in our Server and User Mapping definitions to connect to the Informix server and query Informix for the number of tables whose name is systables. We should get a response in the window, "Number of objects meeting this filter criteria: 1". See Figure C-6.
Figure C-6 Control Center - server connection test with nickname filter
Create Nickname
With the Control Center we can create one or many nicknames at once. As you will see, the Create Nickname dialog can build a list of remote tables, pre-checked to have nicknames created for them. If we want, we can customize the nickname schema for one or all of the nicknames that will be created at once. To get to that point, we go through the Filter Tables for Nicknames dialog demonstrated above in the connection test to the Informix server.
In the Filter Table for Nicknames window, we can use the different operators in the middle of the window with different values in the fields on the right side to get various lists of remote tables from the Informix server. If you use the LIKE operator, put the percent sign (%) before or after the value specified to get all remote tables that meet the criteria.
In our example here we just want to create nicknames for some of the tables in the stores_demo database. In the filter, we fill in the schema of the tables of the stores_demo database - in this case informix - use the = operator, and click OK. The Create Nicknames window opens with the list of remote tables whose schema is informix. In this case, that includes both the stores_demo tables and the catalog tables. You may need to re-size the window to see all the fields. They are:
Create check box, with a check mark as the default setting
Nickname - the two-part name for the nickname that will be created
Local Schema - the nickname's schema
Remote Schema - the schema of the table in the Informix database
Remote Name - the name of the table in the Informix database
In our case, before proceeding, we want a remote table list that excludes the Informix catalog tables. We go back to the filter. For the Remote Table Name, we select the NOT LIKE operator and put sys% in the Values field. Then we click OK. The Create Nicknames window now lists only user data tables, not the Informix catalog tables, and appears as in Figure C-7.
We want to:
Create nicknames only for the customer and orders tables
Make the nickname schema the same as our Server name
Keep each nickname the same as the remote table name, but folded to upper case
To accomplish this, we uncheck the check box under Create for all but the customer and orders tables. Then we click the Change All button. This opens the Change All Nicknames window. In the Schema field, we select Custom from the pull-down; in the field that now becomes available we key the Server name: IDS_VIPER. We note that in the Nickname field the value is Remote table name. We then go to the Fold field in the lower left and select Upper Case from the pull-down. We click OK on the Change All Nicknames window. Now our Create Nicknames window looks like Figure C-8.
We see that only the nicknames we want are checked, and they have the schema and upper-case folding we wanted. We can click Show SQL to see the SQL that would be used to create the nicknames. We click OK to create the two nicknames. The nicknames now appear in the Control Center's right window. See Figure C-9.
If we highlight one of the nicknames and right-click, we can see that our options include Sample Contents. We can, if we want, see a few records from the table in Informix.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks on page 538.
My Mother Thinks I'm a DBA! Cross-Platform, Multi-Vendor, Distributed Relational Data Replication with IBM DB2 DataPropagator and IBM DataJoiner Made Easy!, SG24-5463
Other resources
These publications are also relevant as further information sources:
IBM DB2 Universal Database Administration Guide: Planning, Version 8, SC09-4822
IBM DB2 Universal Database Command Reference, Version 8, SC09-4828
IBM DB2 Universal Database Message Reference Volume 1, Version 8, GC09-4840
IBM DB2 Universal Database Replication Guide and Reference, Version 8, Release 1, SC27-1121
IBM DB2 Universal Database SQL Reference Volume 2, Version 8, SC09-4845
IBM DB2 Universal Database System Monitor Guide and Reference, Version 8, SC09-4847
z/OS V1R4.0 MVS System Commands, SA22-7627
DB2 Replication Support home page: http://www.software.ibm.com/data/dpropr/support.html
DB2 home page: http://www.software.ibm.com/data/db2
DB2 Support home page: http://www.software.ibm.com/data/support
DB2 for Linux, UNIX, and Windows Support home page: http://www.software.ibm.com/data/db2/udb/winos2unix/support
Informix home page: http://www.software.ibm.com/data/informix
DB2 Spatial Extender home page: http://www.software.ibm.com/data/spatial
Administration Made Easier: New and Improved Tools in DB2 Universal Database, by Jason Gartner, found at: http://www7b.software.ibm.com/dmdd/library/techarticle/0207gartner/0207gartner.html
ftp.software.ibm
ftp://ftp.software.ibm.com
You can also download additional materials (code samples or diskette/CD-ROM images) from that site.
Index
Symbols
.IBMSNAP_REGISTER 432 .profile 69 ANDSOURCE_OWNER 200 ANZDPR 342 APP.log 338 APPC 49 Application Programming Defaults 447 Application Requestor 278 Apply 8, 13, 40, 181-182, 512 Apply Control Servers 24, 177, 234 Apply Control Tables and non-DB2 target servers 122 Apply full refreshes the target table 30, 35 Apply messages 42 Apply Parameters 285, 298 Apply Performance 443 Apply processes changes 32, 36 Apply Qualifiers 25, 37, 230, 235 Apply Report 310, 312 Apply reports from the Replication Center 255 Apply Throughput 459, 474 Apply Throughput analysis 42 Apply Transformations 393 APPLY_PATH 243, 338, 449 APPLY_QUAL 210, 242, 459 Apply's sub-operations 460 Apply-Status-Down 18 AS/400 68, 483 ASN.IBMSNAP_ALERTS 18, 38 ASN.IBMSNAP_APPARMS 456 ASN.IBMSNAP_APPLYTRAIL 174, 459 ASN.IBMSNAP_APPPARMS 468 ASN.IBMSNAP_CAPSCHEMAS 12, 34, 116 ASN.IBMSNAP_REGISTER table 30 ASN.IBMSNAP_SIGNAL table 30 ASN.IBMSNAP_SUBS_COLS 28, 32 ASN.IBMSNAP_SUBS_EVENT 14, 32 ASN.IBMSNAP_SUBS_MEMBR 27-28, 30-32 ASN.IBMSNAP_SUBS_SE 30 ASN.IBMSNAP_SUBS_SET 27-28, 30-32, 452, 456, 468 ASN.IBMSNAP_SUBS_STMTS 28, 32, 36 ASN1560E 313 asnacmd 262, 314 asnanalyze 294, 333, 339, 342 ASNAPLDD 448
Numerics
3270 terminal emulator 68 5250 terminal emulator 68
A
Activate 184 ACTIVATE column 30 Add 191, 225, 230 Add a Capture or Apply Control Server 93 Add a Database wizard 486 Add Calculated Column 191 Add one subscription set member to one or more subscription sets 178 Add Registrable Tables 137 Add subscription set members to a subscription set 178 ADDDPRREG 51 ADDDPRSUBM 27 Adding database 100 Adding databases while Replication Center is open 100 Adding new Subscriptions Sets 359 ADDRMTJRN command 11 ADMIN thread 29 Administration 8, 37 Administration defining a replication scenario 23 Administration and operations Alert Monitor 37 Administration Client 46, 54, 60-61, 63, 65 Administration for all scenario 18 Administration Tools 65 Advanced Replication Topics 383 AIX 55 Alert Conditions 37-38, 319-320 Alert Monitor 8, 16, 21 Alert Monitor configuration 21 ALERT_PRUNE_LIMIT 38 Allow 184 Analyzing the control tables 294
asnapply 262, 333 asncap 258, 333 asncap and asnccmd 258 asnccmd 42, 258, 314 asnccmd command 34 ASNCLP 48, 123 ASNDLCOPYD daemon 41 ASNLOAD 30, 302, 459 asnmcmd 322 ASNMON 331 asnpwd 243, 327, 343 asnscrt 239 asnsdrop 239 asntrc 42, 333, 339, 347, 460 asntrc example 347 Asynchronous Read Log API 425 Audit trail 7 Automatic full refresh 380 Automatic Restart Manager (ARM) 41 AUTOPRUNE 34, 437 Available columns in the target name window 195
B
Base 192 Base aggregate 27, 209 Before starting the Capture 270 Before-image columns 10 Bidirectional exchange of data 5 Bidirectional with a master (update anywhere) 20 Bidirectional with no master (peer-to-peer) 21 BIND PACKAGE 446 Binding Package 505 BLOCKING 445 blocksize 447 Boolean 334 BPXBATCH 280 Bufferpool 472 Business intelligence tools 3 Bypassing the full refresh 381
C
CACHE DYNAMIC SQL 447 Caching dynamic SQL 447, 451 CAPPARMS 427 CAPPARMS Value 239 capschema.IBMSNAP_PRUNCNTL 30 capschema.IBMSNAP_PRUNCNTL table 27 capschema.IBMSNAP_PRUNE_SET table 27
capschema.IBMSNAP_REGISTER 30-31 capschema.IBMSNAP_RESTART 32 capschema.IBMSNAP_SIGNAL 30 capschema.IBMSNAP_SIGNAL table 31 CAPSPILL 431 Capture 8, 182-183 Capture and Apply log and trace files 337 Capture and Apply status 310 Capture Control Servers 23, 134, 176 Capture initializes global information 29 Capture Latency 41, 439 Capture Messages 41, 310-311 Capture parameters 284, 297 Capture prunes applied changes 34, 37 Capture schema 11, 37, 236 Capture server 236 Capture spill 428, 430 Capture starts capturing changes 31 Capture Throughput 440, 474 Capture Throughput analysis 41 Capture transformations 390 Capture triggers 424 Capture triggers begin capturing 35 Capture updates as pairs of deletes and inserts 434 CAPTURE_MEMORY 429 CAPTURE_PATH 338 capture_path 237 capture_server 238 Capture-Status-Down 18 Catalog db 487 Catalog dcs db 487 Catalog system odbc data source 488 Catalog tcpip node 487 CCD 12, 209, 213 CCD Table replication source for Multitier Staging 213 CCD tables attributes 209 CCD_OLD_SYNCHPOINT 35 CCD_OWNER 35 CCD_TABLE 35 CCSID 493 CD tables 10, 425, 431, 436, 440, 444, 468, 472 CD_NEW_SYNCHPOINT 468 CD_OLD_SYNCHPOINT 31 CD_ROWS_INSERTED 440 Central data warehouse 3 Change 209 Change aggregate 27, 209 Change Capture parameters 296
Change Data 10 Changes from replica target tables 433 Checking Capture program status 250 Checking iSeries Apply program status 252 CHG_ROWS_SKIPPED 440 CHG_UPD_TO_DEL_INS 433 CHGDDMTCPA 493 CHGONLY 432 CHGUSRPRF 493 CL commands 68 CLI 507 Client Access 61, 68 Client-to-server 49 CLIST CALCULATIONS 447 Collection NULLID 493 Column Mapping 190, 192 Command Center 68, 70, 74 Command Line Processor 66, 68 COMMIT 10 COMMIT_COUNT 428, 449, 452 Commit_Count 454 COMMIT_INTERVAL 32, 435 Commit_Interval 469, 475 comp.databases.ibm-db2 341 Complete 211-212 Complete, condense CCD tables 214 Complete, non condense CCD table 214 Condense 211-212 Condense CCD 194 Configuration 123 Configuration Assistant 66, 485 Configure UNIX/Windows apply control server as AR 273 Connectivity configuration 245 Connectivity of Apply 278 Consistent change data (CCD) 12, 26 Consolidation of data from remote systems 4 Contact information 38 Contact or contact group 38 Continuously 204 Control Center 46, 68, 70, 74, 521, 527 Control Tables 8, 80, 92 Control Tables Profile 78, 92 control_server 242 Count 137 CPU 54, 442 Create 225 Create contacts 319
Create Monitor Control 317 Create Nickname 525, 532 Create Server 523, 528 Create SQL packages 246 Create Subscription Set 176, 200 Create User Mapping 524, 530 Create Wrapper 522, 527 Create your own index radio button 195, 197 Creating a subscription set 180 Creating control tables at a command prompt 123 Creating monitoring control tables 317 Creating multiple sets of apply control table 126 CRTLIB 493 CURRENT_MEMORY 428 Customizing ASNLOAD 305
D
DAS 17 DASe 49, 430 Data 184 Data blocking factor 454 Data consolidation 14 Data distribution 14 Data distribution and data consolidation 19 Data marts 3 Data Mining 4 Data sharing 427 Data Source 486 Data transformation 3 Data transformation and denormalization 3 Database 486 Database Administration Server 17 Database Manager Configuration 521 DataJoiner 58 DataJoiner Replication Administration 46 DataJoiner Version 2 52 DB2 Administration Client 333-334 DB2 Administration Server 57, 430 DB2 Administration Tools 68 DB2 Administrative Server 49 DB2 capture for bidirectional replication 11 DB2 Connect 58, 483, 488, 512 DB2 Connect EE 57 DB2 Connect Enterprise Edition 61 DB2 Connect Personal Edition 49, 60-61, 63-64 DB2 Customer Support 340-341 DB2 Customer Support Analyst 338 DB2 Database Directory 489
DB2 DataPropagator for z/OS and OS/390 16 DB2 ESE 57-58, 61, 64, 174, 483, 488, 511-512 DB2 export 30 DB2 federated server 116 DB2 for iSeries and OS/400 115, 122, 125 DB2 for OS/390 60 DB2 for z/OS 60 DB2 for z/OS and OS/390 114, 121 DB2 for z/OS or OS/390 68 db2 get dbm cfg 488 DB2 instance owner 69 DB2 LOB replication 395 DB2 log 425, 450 DB2 LSN 35 DB2 Peer to peer replication 405 DB2 Personal Edition 64 DB2 Profile Registry 57, 473, 520 DB2 Replication Center's architecture 46 DB2 Replication V8 close up 22 DB2 Runtime Client 49, 60 DB2 source and Informix target 14 DB2 table 134 DB2 trace 102 DB2 UDB Developer's Edition 61 DB2 UDB Enterprise Server Edition 61 DB2 UDB Personal Edition 61 DB2 Universal Database for Linux, UNIX, Windows 60 DB2 V8 replication from 30,000 feet 8 DB2 z/OS and OS/390 IFI 306 10 DB2/400 60 DB2_DJ_COMM 520 DB2_DJ_INI 520 DB2COMM 57 db2dj.ini 519 DB2INSTANCE 69 db2mag 341 db2profile 69 db2rc 69 db2ReadLog 10 db2repl.prf 76 db2set 57, 473, 520 db2setup 65 db2support 340 db2trc 42, 102, 338, 340 DBNAME 524 DBSERVERALIASES 513 DBSERVERNAME 513-514, 524 DCS Options 487
DDF 57 DDM 57 Deactivating and activating subscriptions 359 Decision support system 3 Define 176, 211 Define a replication source from Replication Center 134 Defining an empty subscription set 26 Defining database servers as replication sources and targets 23 Defining replication source tables 24 Defining replication subscription 25 Defining subscription members 26 DELAY 455, 468 Delay 475 Dependent Targets 96 Desktop environment for Replication Center 67 DIAGLEVEL 338 DIAGPATH 338 Dirty Read 445, 451 DIS DDF 484 Disk 54 Display commands on z/OS 291 Display IPC queue on UNIX and z/OS 293 Distributed Data Facility 57, 448, 484 Distribution of data to other locations 3 DJRA 46 djxlink 60, 516 djxlinkInformix 517 Downloads 62 DRDA 21 DRDA-TCP/IP listener port 492 DSNJU004 484 DSNL004I 484 DSNTIP4 447 DSNTIP5 448 DSNTIPC 447 DSNTIPM 484 Dynamic SQL 446, 451
E
EDM Pool 447 EFFECTIVE_MEMBERS 459 Email 323 Enable archival logging 270 END_OF_PERIOD 221 END_SYNCHPOINT 221 END_TIME 459
ENDDPRAPY 266 ENDDPRCAP 266 End-to-end latency 42, 456–457, 475 Etc/services 515 Event 204 Event timing 220 EVENT_NAME 32, 36, 221 EVENT_TIME 221 Events 16 Extentsize 472 External 212 External CCD tables 212 Extra Blocks Req 448
I
IBM Customer Support 339 IBM Personal Communications 68 IBM Toolbox for Java 50 IBM z/Series 54 IBMSNAP_ALERTS 317–318, 329 IBMSNAP_APPPARMS 234, 295, 448 IBMSNAP_APPTRACE 312 IBMSNAP_AUTHID 210 IBMSNAP_AUTHTKN 210 IBMSNAP_CAPMON 428, 438 IBMSNAP_CAPPARMS 234, 295, 427, 436 IBMSNAP_CAPTRACE 311, 437 IBMSNAP_COMMITSEQ 209, 436, 444, 449, 454, 468 IBMSNAP_CONDITIONS 318 IBMSNAP_CONTACTGRP 318 IBMSNAP_GROUPS 318 IBMSNAP_INTENTSEQ 209, 444, 449 IBMSNAP_LOGMARKER 210 IBMSNAP_MONENQ 318 IBMSNAP_MONPARMS 318 IBMSNAP_MONSERVERS 318 IBMSNAP_MONTRACE 317–318 IBMSNAP_MONTRAIL 317–318 IBMSNAP_OPERATION 200, 210, 449 IBMSNAP_REG_SYNCH 470 IBMSNAP_REGISTER 426, 444, 456, 468 IBMSNAP_REJ_CODE 210 IBMSNAP_UOW 431, 440, 472 IBMSNAP_UOWID 211 I-Connect 512 IFI 425 Import from File 199 Index name 194 Index schema 194 Infopops 333 Information Center 74, 334 Informix 2, 46, 52–53, 57, 95, 422, 444, 446, 450, 470, 511 Informix Capture 12 Informix Client SDK 58–59, 512, 516 Informix LOB replication 396 Informix SDK Client 15 Informix source and DB2 target 15 INFORMIXDIR 513, 516, 519 INFORMIXSERVER 514, 519 INFORMIXSQLHOSTS 514, 519 INIT thread 29
F
Failover systems 6 FEDERATED 23, 521 Federated database objects 521 Federated server 16, 46, 52, 58, 444, 446, 470, 483 Fetch 418, 443 Files generated 337 Filters 97 Fixpacks 61 FOLD_ID 524, 529 FOLD_PW 524, 529 Full refresh 14, 326 Full refresh procedures 380 FULL_REFRESH 459
G
GLOBAL_RECORD 426, 456, 468 GLOBAL_RECORD column 30 GROUP 192 Grouping 173 Grouping members to subscription sets 173
H
Hardware requirements 54 HFS 276 HOLDL thread 29 Hostname 484 How to detect logging type? 270 How to enable for replication? 271 HP 9000 54 HP-UX 55, 63
Installing DB2 Administration Client with Replication Center 64 Installing DB2 Replication Center 63 Internal 212–213, 215 Internal and External CCD tables 212 Internal CCD tables 213 interval timing 220 Introduction to DB2 Replication V8 1 IP address 484 IRWW 471 iSeries 50, 57, 60, 68, 342, 448, 483 iSeries Apply program job logs 253 iSeries RCVJRNE command 10 Isolation levels 277 IUD_APP_SVPT_ENFORCE 60, 522, 529 IUD_APP_SVPT_ENFORE 524
LOCATION 485 Location 212 Locking level 436 LOCKLIST 438, 451 Locksize 436, 451 Log 418 log record sequence number (LSN) 10, 29 Log Sequence Number 468 LOGBUFSZ 426 log-merge 427 Lotus Notes 18 lower bound 36 low-latency 467 LSN 29, 31, 468
M
Maintaining capture and apply control servers 374 Maintaining Registrations 352 Maintaining Subscriptions 359 Maintaining the password file 289 Maintaining Your Replication Environment 351 Managing DB2 logs and journals used by Capture 379 Manual Full Refresh 14, 381 Manually pruning replication control tables 374 MAP_ID 27, 30–31 master location 5 master-slave 5 master-slave replication 5 MAX_NOTIFICATIONS_MINUTES 38 MAX_NOTIFICATIONS_PER_ALERT 38 MAX_SYNCH_MINUTES 428 Max_Synch_Minutes 453 MAX_TRANS_SIZE 429 Member 174 MEMBER_STATE 27, 31 memory 54 Memory usage 38 MEMORY_LIMIT 429 Microsoft Outlook 18 Monitor 41 Monitor Condition 429 monitor control tables 37 monitor qualifier 37–38 Monitor Server 17 MONITOR_INTERVAL 429 Monitor_Interval 38, 327 MONITOR_TIME 428
J
Java 334 Java Runtime Environment 55 Javascript 334 JCL to operate Capture and Apply 280 JDB661D 49 JDB771D 49 JDBC 47, 507 Joblog 311 JOIN_UOW_CD 33 Journal 418, 425, 427 Journal Receivers 427 JRE 55
K
KEEPDYNAMIC(YES) 446, 451
L
LASTRUN 32, 36, 459 LASTSUCCESS 30–31 Latency 406, 438 Latency thresholds 38 Launchpad 71, 98, 235 LDAP 55 Legacy data 3 Let the Replication Center suggest an index 195 Linux 54–55, 57, 63, 68–69, 483 Linux for z/Series 55 LOADXIT 30 Local 211
More Replication Center tips 100 Move Down 190 multi-master replication 6 multi-threaded 28
N
Named Pipes 49, 55 NetBIOS 49, 55 network 419, 447 network adapter 54 network packet size 448 Networking requirements 56 newsgroup 341 nickname 15, 53, 60, 446, 522 NODE 524 Node Options 487 non 187 Non complete, condense CCD table 215 Non complete, Non condense CCD table 215 non-DB2 424 non-DB2 relational source 134 Non-DB2 relational sources, Informix 115 non-DB2 server 52, 58, 95, 422, 443, 446, 450, 470, 520 non-DB2 source server 125 Notes for creating additional sets of Capture control tables 124 NULLID 506 Number of transactions applied at the target 455
OS/390 57, 483 OSTYPE 101 Other operations 299 Other requirements 7 OUTBUFF 426 Overview of the IBM replication solution 2
P
package NULLID 506 packages 277 page fetches 442 pagesize 472 Passwords 77 PATH 69, 520 PCKCACHESZ 447 PDF 334 Peer-to-peer 12 peer-to-peer replication 6 plans 277 Point 208 Point-in-time (PIT) 14, 26, 194 PREDICATES 27 prefix DIS DDF 484 Prepare Statement 206 Prerequisites for Apply 288 Procedure call 206 procedures 53, 58, 512 process 290 Products to apply changes 19 Products to capture changes 19 profile 76 Promoting a replication configuration to another system 370 Protocol 486 PRUNCNTL 31 prune 300 PRUNE thread 29 PRUNE_INTERVAL 34, 437 pruning 25, 425, 436 Pull 418, 444 purchased package 3 Push 419, 444 Putting the pieces together 18
O
ODBC 507 OLAP 4 olsoctcp 515 ONCONFIG 513 onconfig 58 onsoctcp 514 Opening DB2 Replication Center 69 Operating 101 Operating System Type 101 Operating system type of servers 101 Operational Navigator 68 Operations 38 Operations DB2 Capture and Apply 28 Operations Informix Capture and Apply 34 OPT4ONE 456 Oracle 58 ORDER BY 444
Q
Query Status 248, 250, 310, 314 Query Status option 35
Querying Capture and Apply control table 101 Querying Capture and Apply status on the iSeries 250 Querying target tables 451 Querying the Status of Capture and Apply 248 QZSNDPR 311
R
RC 106 Reasons why you might want to run multiple captures 124 Rebinding replication packages and plans 377 RECAP_ROWS_SKIPPED 440 RECAPTURE 432 Receiving an alert 323 Recovering source tables, replication tables, or target tables 378 Red Hat Linux 55 Redbook environment 42 Redbooks Web site 538 Contact us xvii Referential Integrity 454 REFRESH_TYPE 468 regedit 515 REGION 430 Register Nicknames 135 Register Tables 135 Register Views 135 Registered 186 Registered Nicknames 96 Registered Tables 96, 176 Registered Views 96, 176 registering a source table 24 reinit 300 Reinitialize Monitor 322323 Reinitializing Capture 300 Relational Connect 58 Relational Database 492 Relative 204, 220 Remote 186187 remote journaling 11, 420, 448 Remote Location 492 REMOTE_AUTHID 524 REMOTE_PASSWORD 524 remoteschema.IBMSNAP_PRUNCNTL 35 remoteschema.IBMSNAP_PRUNE_SET 35 remoteschema.IBMSNAP_REG_SYNCH 35 remoteschema.IBMSNAP_REGISTER 34
remoteschema.IBMSNAP_SEQTABLE 35–36 remoteschema.IBMSNAP_SIGNAL 35 Removing apply control tables 121 Removing capture control tables 113 REORG for replication tables 377 Replica 14, 27, 194, 216 Replicating column subsets 384 Replicating from non-DB2 - Capture trigger status 315 Replicating row subsets 386 Replicating to non-DB2 - Apply status 316 Replication Alert Monitor 42 Replication Center 8, 45, 483 Replication Center and file directories 67 Replication Center connectivity to iSeries 50 Replication Center dialogue windows 80 Replication Center Profile 51, 76, 101 Replication Center tracing 101 Replication Centers connectivity for defining replication 47 Replication Definitions 134, 176 Replication filtering 384 Replication monitor program operations 322 Replication monitoring and non-DB2 targets 322 Replication of DB2 Spatial Extender data 397 Replication of large objects 395 replication operations 49 replication source 134 Replication transformations 389 Requirements at replication servers 57 resume 301 Retrieve All 136, 179 Rework 33, 449 RISC System/6000 54 ROLLBACK 10 root 65, 517 Row Filter 26, 195 Row-capture rule 432 Run now or save SQL 112, 121 RunOnce=Y 327 RUNSTATS 442, 450, 472 RUNSTATS for replication tables 376 Runtime Client 61
S
SAMPLE 99 Sample Contents 536 Schedule 203
SDSF 341 SecureWay 55 Security Options 487 Selected tables 137 sendmail program in UNIX 18 Server 60, 521 Server Option 60, 522, 529 Service name 488 Set 181–182 Set name 230 Set Passthru 53, 60, 525 SET_DELETED 459 SET_INSERTED 459 SET_NAME 200, 459 SET_REWORKED 459 SET_UPDATED 459 setnet32 518 Setting UNIX environment variables 272 Show 197 Show Alerts 322, 329 Show Apply Throughput Analysis 315 Show Capture Throughput Analysis 315 Show End-to-End Latency 315 SIGNAL 31 SIGNAL_INPUT_IN 30–31 SIGNAL_SUBTYPE 30 SIGNAL_TYPE 30 SLEEP_INTERVAL 426 SLEEP_MINUTES 468 Sleep_Minutes 452, 475 SMTP server 18 Software Requirements 54 Solaris 55, 63 Source 185 Source Object Profiles 79 SOURCE_TABLE 200 Source-to-Target Mapping 203 Specifying the preferred method 303 spill file 33, 218, 418, 443, 448 SPUFI 68 SQL file 51 SQL Server 58 SQL0100W 313 sqlhosts 58–59, 513, 517, 524 ssid 341 ssidMSTR 341 staging tables 418 Start Capture 234–235
Start Monitor 322 Starting Apply 240, 247 Starting Apply for DB2 for UNIX and Windows and z/OS 240 Starting Capture for DB2 for UNIX and Windows and z/OS 235 Starting Capture for iSeries 239 startmode 237 Statements 204 status 42 status of Apply threads 250 status of the Capture threads 249 status of the threads 249 Stop Capture 234 Stop Monitor 322 Stopping the Capture and Apply programs 248 Stopping Capture and Apply 244 stored procedure 13 STRDPRCAP 266 STRDPRCAP and ENDDPRCAP 266 STRDPRAPY 266 STRDPRAPY and ENDDPRAPY 268 STRTCPSVR 493 Subscription 170–172, 175, 177, 180 Subscription Event 172 subscription members 13 Subscription set and apply qualifier grouping 175 Subscription Set parameters 452 Subscription Sets 13, 38, 96, 177 Subscription Sets from non-DB2 servers 172 Subscription Sets to non-DB2 target servers 172 Subset of DB2 table 134 subsystem 311 Sun Solaris SPARC 54 Support 341 SuSE Linux 55 suspend 301 Suspend and resume Capture 301 SVCENAME 488 Sybase 58 SYNCHPOINT 29–31, 468 SYNCHPOINT/SYNCHTIME 30 SYNCHTIME 30–31, 426, 456 SYSCAT.COLUMNS 526 SYSCAT.INDEXES 526 SYSCAT.SERVEROPTIONS 524 SYSCAT.SERVERS 524 SYSCAT.TABLES 526
SYSCAT.TABOPTIONS 526 SYSCAT.WRAPPERS 523 system design 418 System Display and Search Facility 341 System Environment Variables 520 System kernel parameters 56, 63 System Options 487 System Services Address Space 341
T
tablespace 109, 472 Target 183, 187 Target Object Profiles 79 target server 23 target table 418, 443, 450 Target-Table Index 193 Target-Table Table Space 195 TCP/IP 49, 55, 484 TCP/IP Service Name 57 TCP/IP services 502 TCPPORT 485 Teradata 58 Testing Connections 505 Testing the Server/User Mapping 531 thread 290 thresholds 16 Time 204 Tools Settings 74 TRACE PERFORMANCE RECORD 460 TRACE_LIMIT 311 tracing 101 TRANS_PROCESSED 428 TRANS_SPILLED 429 transactional processing 33 Transactions reworked 38 transitional replication 428 triggers 53, 58, 470, 512 TRIGR_ROWS_SKIPPED 440 Troubleshooting 42 Types 212
Unix System Services shell (USS) 18 UOW table 11 UOW_CD_PREDICATES 28, 200 Update Anywhere 11 Update anywhere replication 399 update statistics 525 UR 445, 451 Use a tablespace already defined in this session 109 Use an existing index radio button 196 Use select columns to create primary key 195 USENET 341 User 208 User computed columns 209 user copies (read only) 14 User copy 26, 194 user exit program 30 User interface 46 User Mapping 60, 522 Userid and password prompts 101 User-interaction methodology 47 Using ASNLOAD as is 302 Using ASNLOAD on DB2 UDB for UNIX and Windows 302 Using ASNLOAD on DB2 UDB for z/OS 305 Using ASNLOAD on the iSeries 306 Using JCL to start monitoring on z/OS 323
V
view 134 VIO 428, 431, 448 Virtual I/O 428
W
Web application 3 Web browser 334, 338 What is a replication source 134 What's new in DB2 Replication V8 39 WHOS_ON_FIRST 459 Why use replication? 3 Windows 54, 57, 65, 68–69, 483 Windows 2000 55 Windows 98 55 Windows Explorer 69 Windows ME 55 Windows Registry 65, 515 Windows service 274 Windows XP 55
U
Uncommitted Read 445, 451 UNI 69 UNION 5 Unit of Work 10 UNIX 57, 65, 68, 483 Unix System Services 276
Windows.NET 55 Windows/NT 55 WORKER thread 29 Wrapper 60, 521 WRKACTJOB 493 WRKDPRTRC 339 WRKJOB 311 WRKRDBDIRE 492 WRKSBMJOB 311 WRKSBSJOB 311 WRTTHRSH 426
Z
z/OS 57, 341, 483
Back cover
IBM DB2 Replication (called DataPropagator on some platforms) is a powerful, flexible facility for copying DB2 and/or Informix data from one place to another. The IBM replication solution includes transformation, joining, and filtering of data. You can move data between different platforms. You can distribute data to many different places or consolidate data in one place from many different places. You can exchange data between systems. The objective of this IBM Redbook is to provide you with detailed information that you can use to install, configure, and implement replication among the IBM database family: DB2 and Informix. The redbook is organized so that each chapter builds upon information from the previous chapter.
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, customers, and partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.