Data Migration to IBM Disk Storage Systems
Chris Seiwert
Peter Klee
Lisa Martinez
Max Pei
Mladen Portak
Alex Safonov
Edgar Strubel
Gabor Szabo
Ron Verbeek
ibm.com/redbooks
International Technical Support Organization
February 2012
SG24-7432-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.
This edition applies to data migration products and techniques as of July 2011.
© Copyright International Business Machines Corporation 2011, 2012. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
7.2.1 Data migration using SAN Volume Controller volume migration. . . . . . . . . . . . . 185
7.2.2 Data migration using SAN Volume Controller FlashCopy. . . . . . . . . . . . . . . . . . 187
7.2.3 Data migration using SAN Volume Controller Metro Mirror . . . . . . . . . . . . . . . . 188
7.2.4 Data migration using mirrored volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
7.3 Migrating using SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
7.3.1 Migrating extents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
7.3.2 Migrating extents off an MDisk that is being deleted. . . . . . . . . . . . . . . . . . . . . . 192
7.3.3 Migrating a volume between storage pools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
7.3.4 Using volume mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
7.3.5 Image mode volume migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
7.3.6 Migrating the volume to image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
7.3.7 Migrating a volume between I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
7.3.8 Monitoring the migration progress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
7.3.9 Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
7.3.10 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
7.3.11 Migration tips. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
7.4 SAN Volume Controller Migration preparation prerequisites. . . . . . . . . . . . . . . . . . . . 200
7.4.1 Fabric zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
7.4.2 Connect SAN Volume Controller to the fabric for migration . . . . . . . . . . . . . . . . 201
7.4.3 Remove SAN Volume Controller from the fabric after migration. . . . . . . . . . . . . 202
7.4.4 Back-End storage consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
7.4.5 Unsupported storage systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
7.4.6 Host attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
7.5 Migrating SAN disks to SAN Volume Controller volumes and back to SAN . . . . . . . . 203
7.5.1 Connecting the SAN Volume Controller to your SAN fabric . . . . . . . . . . . . . . . . 205
7.5.2 Preparing your SAN Volume Controller to virtualize disks . . . . . . . . . . . . . . . . . 206
7.5.3 Moving the LUNs to the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . 210
7.5.4 Migrating image mode volumes to volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
7.5.5 Performance analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
7.5.6 Preparing to migrate from the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . 220
7.5.7 Creating new LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
7.5.8 Migrating the managed volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
7.5.9 Removing the LUNs from the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . 225
7.6 SAN Volume Controller Volume migration between two storage pools . . . . . . . . . . . 227
7.6.1 Environment description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
7.6.2 Performance measurement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
7.6.3 Migration steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
7.6.4 Performance Analyses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7.7 Data migration using SAN Volume Controller mirrored volumes . . . . . . . . . . . . . . . . 237
7.7.1 Environment description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
7.7.2 Creating mirrored volumes using the SAN Volume Controller GUI. . . . . . . . . . . 241
7.7.3 Creating mirrored volumes using CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
7.7.4 Performance analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
7.8 Data migration using SAN Volume Controller Metro Mirror. . . . . . . . . . . . . . . . . . . . . 246
7.8.1 SAN Volume Controller Metro Mirror partnership . . . . . . . . . . . . . . . . . . . . . . . . 247
7.8.2 SAN Volume Controller Metro Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . 249
7.8.3 Starting and monitoring SAN Volume Controller Metro Mirror Copy. . . . . . . . . . 254
7.8.4 Stopping SAN Volume Controller Metro Mirror Copy . . . . . . . . . . . . . . . . . . . . . 257
7.8.5 Performance overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
7.9 SAN Volume Controller as data migration engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
7.10 Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
9.3 TDMF preferred practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
9.3.1 Keeping current. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
9.3.2 Setting default options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
9.3.3 Storage requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
9.3.4 Communications data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
9.3.5 Participation of agent systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
9.3.6 Protection of target volume data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
9.3.7 Pacing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
9.3.8 Rank contention and storage subsystem performance. . . . . . . . . . . . . . . . . . . . 422
9.3.9 TDMF interaction with other programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
9.3.10 Identification of volumes requiring special handling . . . . . . . . . . . . . . . . . . . . . 423
9.3.11 Migration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
9.3.12 Estimating how long it takes to move the data . . . . . . . . . . . . . . . . . . . . . . . . . 428
9.3.13 Terminating a TDMF session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, BladeCenter®, CICS®, DB2®, DS4000®, DS6000™, DS8000®, Easy Tier®, Enterprise Storage Server®, ESCON®, eServer™, FICON®, FlashCopy®, GDPS®, Global Technology Services®, HACMP™, IBM®, IMS™, Informix®, Lotus Notes®, Lotus®, MVS™, Notes®, OS/390®, Parallel Sysplex®, PowerHA®, PR/SM™, pSeries®, RACF®, Rational®, Redbooks®, Redbooks (logo)®, RMF™, S/390®, Storwize®, System i®, System Storage DS®, System Storage®, System z®, TDMF®, Tivoli®, VM/ESA®, XIV®, z/OS®, z/VM®, z/VSE®, zSeries®
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Snapshot, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other
countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Disk Magic, and the IntelliMagic logo are trademarks of IntelliMagic BV in the United States, other countries,
or both.
QLogic, and the QLogic logo are registered trademarks of QLogic Corporation. SANblade is a registered
trademark in the United States.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel
SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
This section describes the technical changes made in this edition of the book. This edition
might also include minor corrections and editorial changes that are not identified.
Summary of Changes
for SG24-7432-01
for Data Migration to IBM Disk Storage Systems
as created or updated on February 23, 2012.
New information
Data migration in IBM PowerHA® clustered environments
IBM XIV® data migration
IBM® DSCLIbroker scripting framework
IBM TDMF® TCP/IP for IBM z/OS®
Changed information
Screen captures updated to reflect latest software available at the time of writing
Data migration has become a mandatory and regular activity for most data centers.
Companies need to migrate data not only when technology needs to be replaced, but also for
consolidation, load balancing, and disaster recovery.
This IBM Redbooks® publication addresses the aspects of data migration efforts while
focusing on the IBM System Storage® as the target system. Data migration is a critical and
complex operation, and this book provides the phases and steps to ensure a smooth
migration. Topics range from planning and preparation to execution and validation.
The book also reviews products and describes available IBM data migration services
offerings. It explains, from a generic standpoint, the appliance-based, storage-based, and
host-based techniques that can be used to accomplish the migration. Each method is
explained including the use of the various products and techniques with different migration
scenarios and various operating system platforms.
The material presented in this book was developed with versions of the referenced products
as of June 2011.
Chris Seiwert is an IT Specialist in the IBM European Storage Competence Center (ESCC),
Mainz, Germany. He has 12 years of experience in SAN, High End Storage, Data Center
Analysis and Planning, and 19 years in computer science. He holds a Bachelor of Science
degree in Computer Science, and his areas of expertise also include Data Migration, Java™
development, and HA Cluster solutions. Chris filed his first patent in 2005 for an IBM Data
Migration tool. He has written about SAN and data migration in IBM internal and external
publications, and has delivered presentations and workshops about these topics. He has
co-authored three previous IBM Redbooks and was the ITSO lead for this book.
Peter Klee is an IBM Professional Certified IT specialist in IBM Germany. He has 17 years of
experience in Open Systems platforms, SAN networks, and high end storage systems in
huge data centers. He formerly worked in a large bank in Germany where he was responsible
for the architecture and the implementation of the disk storage environment, which included installations from various storage vendors, including EMC, HDS, and IBM. He
joined IBM in 2003, where he worked for Strategic Outsourcing. Since July 2004, he has
worked for ATS System Storage Europe in Mainz. His main focus is Copy Services, Disaster
Recovery, and storage architectures for IBM DS8000® in the open systems environment.
Lisa Martinez is a storage architect in the Speciality Services Area in GTS. She has been in
this position since the beginning of 2011. In 2010 Lisa took a temporary assignment as a
Global Support Manager for Cardinal Health. Prior to this assignment, she was a test architect.
Max Pei is an Infrastructure Architect for GTS in IBM Canada, specializing in SAN, Storage,
and Backup systems. He has 14 years of experience in the IT industry and has been with IBM
since 2008. He holds a degree in Metallurgical Engineering. His areas of expertise include
planning and implementation of midrange and enterprise storage, storage networks, backup
systems, data migration and server virtualization.
Mladen Portak is a Client Technical Storage Specialist for STG in IBM Croatia, and
specializes in Storage systems. He is certified on the IBM Midrange and Enterprise Storage
Systems and Microsoft MCP. Mladen has 16 years of experience in the IT industry and has
been with IBM since 2008. Before joining IBM, he worked on the customer side as a team leader
responsible for virtualization solutions. His current area of expertise includes the planning,
architecture and implementation of midrange and enterprise storage, and storage area
networks for Open Systems.
Alex Safonov is a Senior IT Specialist with System Sales Implementation Services, IBM
Global Technology Services® Canada. He has over 20 years of experience in the computing
industry, with the last 15 years spent working on Storage and UNIX solutions. He holds
multiple product and industry certifications, including IBM Tivoli® Storage Manager, IBM
AIX®, and SNIA. Alexander spends most of his client contracting time working with Tivoli
Storage Manager, data archiving, storage virtualization, and replication and migration of data.
He holds an M.S. Honors degree in Mechanical Engineering from the National Aviation
University of Ukraine.
Edgar Strubel is a Server and Storage Specialist with the STG LAB Services Europe, Mainz,
Germany. He started working at IBM at the end of 2000 (Mainz Briefing Center, ATS Team)
and since 2002 he has been involved in online data migration in IBM zSeries® environments.
Prior to joining IBM, starting in 1980, Edgar worked at BASF/COMPAREX in mainframe IT
environments, performing service and support in hardware and software for printers, tapes,
libraries, disks, and processors.
Gabor Szabo is an IBM Certified Solution Designer working as a Test Engineer team leader of the DS8000 Development Support team in IBM Hungary, Vac. He has 8 years of experience in the IT industry and has been with IBM since 2005. Gabor's current areas of expertise include high-end storage system testing, test optimization, and new product implementation.
Ron Verbeek is a Senior Consulting IT Specialist with Data Center Services, IBM Global
Technology Services Canada. He has over 23 years of experience in the computing industry
with IBM, with the last 11 years working on Storage and Data solution services. He holds
multiple product and industry certifications, including SNIA Storage Architect, and has co-authored one previous IBM Redbooks publication on the IBM XIV Storage Subsystem. Ron spends
most of his client time in technical pre-sales solutions, defining and creating storage and
optimization solutions. He has extensive experience in data transformation services and
information life cycle consulting. He holds a Bachelor of Science degree in Mathematics from
McMaster University in Canada.
From left to right: Bert Dufrasne, Edgar Strubel, Chris Seiwert, Alex Safonov, Ron Verbeek,
Peter Klee, Mladen Portak, Gabor Szabo, Max Pei. Missing: Lisa Martinez.
Special thanks to Uwe Heinrich Mueller, Uwe Schweikhard, and Mike Schneider of IBM Lab Services Mainz for excellent lab support during the residency.
Michael Moss
Technical Support Softek (an IBM Company)
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Data migration is needed when you change computer systems or upgrade to new products.
As with any data center change, you want to avoid disrupting or disabling active applications.
To avoid impact to business operations, most data migration projects are scheduled during
off-hours, primarily during weekends and over extended holiday periods. However, this scheduling can increase migration costs because of staff overtime and can negatively affect staff morale.
Furthermore, taking systems down for migration, even over a weekend period, can severely
affect business operations. The impact is especially severe if you cannot bring the systems
back up in time for the online day.
These potential problems cause some organizations to significantly delay purchasing new
technology, or even to delay the deployment of already purchased technology. These delays
cause further problems because older hardware can require more hands-on maintenance,
generally has lower performance, and is inherently more prone to failure.
You buy and deploy new technology to eliminate these issues. Therefore, delays in implementing new technology increase business risk, particularly when you must run applications around the clock with ever-shrinking batch windows. In addition, delaying
deployment of an already purchased or leased storage device raises its effective cost
because you are paying for both old and new devices.
Data migration can be low risk. Current IBM data migration technologies allow you to perform
most migrations with no downtime. IPL or server restarts are not always required, and no
volumes need to be taken offline. However, you might have to perform a scheduled IPL or
restart of a server if you are adding new equipment or applying system maintenance. In
addition, the latest migration software tools allow nondisruptive migration, so applications remain online during data movement without significant performance delays.
The chapter also includes a summary of the migration process based on a three-phase
approach:
Planning phase
Migration phase
Validation phase
Selecting the appropriate technique depends on the criticality of the data being moved, the
resources available, and other business constraints and requirements. The different
techniques have different risks. Select the technique that provides the best combination of
efficiency of migration and low impact to system and users.
Each product or tool has strengths and weaknesses. These include impact on performance,
operating system support, storage vendor platform support, and whether application
downtime is required to migrate data. Select the best product or tool for your needs.
AIX, Solaris, HP-UX, Linux, Windows, and IBM z/OS are equipped with volume management
software that manages disk storage devices. You can use this software to configure and
manage volumes, file systems, paging, and swap spaces.
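As an illustration of the volume-management approach, the following minimal sketch assumes an AIX host with a volume group named datavg, an old LUN seen as hdisk2, and a new LUN on the target storage seen as hdisk10; all names are placeholders, and the equivalent commands differ on other platforms.
# Add the new LUN (already zoned and discovered by cfgmgr) to the volume group
extendvg datavg hdisk10
# Move all logical partitions from the old disk to the new one;
# file systems can stay mounted during the move
migratepv hdisk2 hdisk10
# Remove the emptied source disk from the volume group and delete its device
reducevg datavg hdisk2
rmdev -dl hdisk2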
File copy
If the data you are migrating is a group of individual files and no volume management
software is available, use a file-level migration technique. This technique uses native OS or
third-party utilities and commands that support the file copy feature. Using copy commands
and utilities is the simplest form of data migration between two file systems.
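As a minimal sketch of a file-level copy, assuming a UNIX-like host where rsync is available (the /oldfs and /newfs mount points are placeholders):
# Initial copy while the application is still running
rsync -aH /oldfs/ /newfs/
# After quiescing the application, run a final incremental pass and compare
rsync -aH --delete /oldfs/ /newfs/
diff -r /oldfs /newfs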
Important: If your storage systems are attached through multiple paths, verify that the
multipath drivers for the old and new storage can coexist. If they cannot, revert the host to
a single path configuration and remove the incompatible driver before attaching the new
storage system.
The raw device copying method is an offline migration method. Applications accessing data
on raw logical volumes must remain offline for the duration of the data copying process. The
tools do not prevent you from reading data from a volume being used by an application. This
might result in inconsistent data on the target volume.
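The following sketch illustrates an offline raw-device copy with dd; the device names are placeholders, and the application that owns the raw volume must be stopped before the copy starts.
# Application stopped and raw volume closed before the copy
dd if=/dev/rdsk/old_lun of=/dev/rdsk/new_lun bs=1024k
# Optional byte-for-byte comparison of source and target
cmp /dev/rdsk/old_lun /dev/rdsk/new_lun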
The appliance-based migrations addressed in this book are based on the SAN Volume Controller for block-level storage and the f5 ARX-VE for file-level storage.
Important: SAN Volume Controller-based migrations can be used only for the fixed block storage used by open systems. CKD storage used by IBM System z® is not supported by SAN Volume Controller.
The backup and restore options allow for consolidation because the tools are aware of the
data structures they handle.
These methods are unusual in that they do not require the source and target storage systems
to be connected to the host at the same time.
However, if you must remove the old storage system before installing the new one, you must
use an external storage device.
Migrating data using backup and restore generally has the most impact on system usage. This process requires that applications, and in certain cases file systems, be in quiescent states to ensure data consistency.
Other commands
You can find other commands on UNIX systems for backing up data. Again, these commands
require that you create an intermediate backup image of an object before restoring to a target
location.
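For example, a minimal sketch using tar (directory and file names are examples only) creates the intermediate image and then restores it on the new storage:
# Create an intermediate backup image of the source file system
tar -cf /backup/appdata.tar -C /oldfs .
# Restore the image to the file system on the new storage
tar -xf /backup/appdata.tar -C /newfs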
You might not be able to use the volume management methods of migrating data in the
following cases:
For databases that use their own storage administration software for managing raw
volumes
Specialized applications that use raw volumes
In some cases, applications use volume names or serial numbers to generate license keys, effectively becoming location dependent. These applications might not tolerate data migration done with tools other than those supplied with the application itself, such as data export/import tools.
All open systems platforms and many applications provide native backup and restore
capabilities. They might not be sophisticated, but they are often suitable in smaller
environments. In large data centers, it is customary to have a common backup solution
across all systems. Either solution can be used for data migration.
The most common software packages that provide this function are:
IBM Tivoli Storage Manager
Legato Networker
BrightStor ARCserve
Symantec NetBackup
Storage-based:
– IBM Remote Mirror and Copy functions
The following software products and components can be used for data logical migrations:
DFSMS allocation management.
Softek zDMF.
System utilities such as:
– IDCAMS using the REPRO and EXPORT/IMPORT commands.
– IEBCOPY for Partitioned Data Sets (PDS) or Partitioned Data Sets Extended (PDSE).
– ICEGENER as part of DFSORT, which can handle sequential data but not VSAM data
sets.
– IEBGENER, which has the same restrictions as ICEGENER.
Database utilities for data that are managed by certain database managers, such as IBM
DB2® or IBM IMS™. IBM CICS® as a transaction manager usually uses VSAM data sets.
Tip: Issuing the EXTVTOC command requires you to delete and rebuild the VTOC index by using EXTINDEX in the REFORMAT command.
4. Perform the logical data set copy operation to the larger volumes. This operation allows
you to use either DFSMSdss logical copy operations or the system-managed data
approach.
When no more data can be moved because the remaining data sets are in continual use,
schedule downtime to move them. You might have to run DFSMSdss jobs from a system that
has no active allocations on the volumes that need to be emptied.
2.1.6 Summary
Each method of data migration has strengths and limitations. Table 2-1 lists the pros and cons
of the suggested products and techniques covered in this book. It is not, however, a
comprehensive list of every technology and technique.
Table 2-1 An overview of migration techniques
Migration technique Pros Cons
Various tools have the following capabilities to pause, pace, or throttle the migration speed:
Throttling of host-based data migration rate using LVM or LDM depends on the tool
selection. Certain advanced Volume Managers provide a function that adjusts the data
mirroring synchronization rate. The function either manipulates the number of processes
or threads running in parallel (multithreading), or increases or decreases the delay
between I/O within a single thread.
TDMF includes a throttling/pacing capability that can adjust data movement.
SAN Volume Controller has a built-in capability to specify the number of threads used to copy the data from the source to the target (see the example after this list).
IBM Copy Services using Remote Mirror and Copy can also prioritize the host I/O and the Remote Mirror and Copy writes. It does so by using dedicated paths for the Remote Mirror and Copy channels. No impact to host I/O is experienced.
Before the data migration takes place, create a set of target volumes. Thorough planning for
the target volume allocation is essential for storage optimization. Storage optimization is a
complex task and the optimization goals can be contradictory. You might need to prioritize based on your company standards and the tools and techniques available for the data migration. Any mistakes made at this stage of the data migration process will be difficult to
correct in an environment with non-virtualized storage.
Designing the architecture of the various disk platforms, allocation techniques, connectivity, storage tiering, and alignment with applications is a complex task. For more information about architectural considerations for specific disk subsystems, see 2.4, “Preparing DS8000 for data migration”
on page 27.
Migration tools have different capabilities for migrating data to volumes with differing
capacities. Tools such as UNIX LVM-based tools are the most flexible when it comes to
dealing with the differences in volume capacities. Windows LDM migrations require the target
volume to be the same size or larger.
IBM Copy Services-based migrations in most cases require the target volume to be of the same capacity as the source. Generally, work with matching capacity volumes. You might need to reverse the direction of data copying, which is not allowed from a volume of greater capacity to a lesser one. After the migration is completed, you can freely increase the capacity of a volume if required. Use operating system tools to recognize the increased capacity of the underlying volume, and to expand the boundaries of a file system on that volume.
SVC-based migration from image mode to managed mode virtual disks cannot involve any volume capacity changes. After the migration is finished, the target volume capacity can be dynamically expanded. This process is similar to SVC-based data migrations that use Copy Services or volume mirroring techniques (see the sketch after the following note).
Important: Ensure that the amount of data on the source volume is not greater than the free space on the target volume.
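To make the image mode sequence more concrete, here is a minimal SAN Volume Controller CLI sketch; the MDisk, pool, and volume names are placeholders, and the syntax can vary between code levels.
# Import the existing LUN (an unmanaged MDisk) as an image mode volume,
# preserving the data written by the host
svctask mkvdisk -mdiskgrp IMAGE_POOL -iogrp 0 -vtype image -mdisk mdisk12 -name legacy_vol01
# Migrate the image mode volume into a managed storage pool
svctask migratevdisk -mdiskgrp MANAGED_POOL -vdisk legacy_vol01
# After the migration completes, the volume capacity can be expanded if required
svctask expandvdisksize -size 10 -unit gb legacy_vol01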
Figure: Migration process overview. The process is divided into a planning phase (Analyze, Test & Develop, and Plan), a migration phase (Execute), and a post-migration phase (Validate).
Analyze: identify affected applications; find dependencies between applications; analyze hardware requirements; analyze bandwidth requirements; develop feasible migration scenarios; develop back-out scenarios.
Test & Develop: define the test environment; develop, test, and verify migration scenarios; develop automation scripts if required; document step-by-step procedures; develop tools and verification procedures.
Plan: define the future storage environment; create the migration plan; develop design requirements; create the migration architecture; obtain software licenses; start the change management process.
Execute: validate hardware and software requirements; customize migration procedures; install and configure; run pre-validation tests; perform the migration; verify migration completion.
Validate: run post-validation tests; perform knowledge transfer; communicate project information; create a report on migration statistics; conduct a migration close-out meeting.
The higher the complexity of an environment and the more critical the data, the more critical
migration planning becomes. Careful migration planning identifies potential issues, allowing
you to avoid or mitigate them. Migration planning also identifies which data to migrate first,
which applications must be taken offline, and the internal and external colleagues you need to
communicate with.
Successful migration planning involves more than just the IT staff. The business owners of the
applications and data being migrated must also be included. The owners are the best
resource for determining how important an application or set of data is to the business.
Coordinate the migration with the application owners because uncoordinated tasks can cause
problems during the migration. Do not, for example, plan a migration of the financial system during a critical business period, such as the end-of-quarter closing.
To uncover and understand the implications of such dependencies, carefully assess the
complete environment using the following steps:
1. Identify all applications that are directly affected by the planned migration.
2. Find application dependencies throughout the entire application stack, all the way down to
the hardware layer.
3. If any additional cross-application dependencies are identified, expand the migration
scope to include the newly identified application.
4. Repeat the previous steps until no additional cross-application dependencies are
uncovered.
5. Identify and document the requirements for a downtime of all the applications in scope.
6. Evaluate the requirements against any restrictions for application downtime.
Additional constraints might be identified during the assessment process, including the need
for extra capacity, connectivity, network bandwidth, and so on. Discovering these additional
requirements early is vital because it gives the project team more time to address them or
develop alternatives.
When the migration scenarios are developed in the test environment, plan to be able to back out the migration so you can effectively reset the test environment. When all scenarios have been developed, test them extensively until you are confident that the scenarios will work during the migration of live data.
You might also want to develop automation scripts. The following examples demonstrate
situations where automation scripts are useful:
When the time frame of the migration is weeks or months. Changes to the production
environment, like storage expansion, might happen during the migration, and these
additional volumes must be taken into account.
When the scenarios are so complex that operators need guidance. These instructions can be provided with a script. For example, in a migration where the steps must be run in a defined sequence, the operator can be guided step by step by a script, as in the sketch that follows this list.
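A minimal sketch of such a guided script is shown here; the migration commands used are placeholders that would be replaced with the commands developed for your own scenarios.
#!/bin/sh
# Guide the operator through the migration steps in a defined sequence,
# stopping immediately if any step fails
run_step () {
    echo "Next step: $*"
    printf "Press Enter to run this step (Ctrl-C to abort): "
    read answer
    "$@" || { echo "Step failed - stopping the migration script"; exit 1; }
}
run_step svctask migratevdisk -mdiskgrp NEW_POOL -vdisk app_vol01
run_step svcinfo lsmigrate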
Budget sufficient time and attention for this important phase. The better you test the
scenarios and provide automation, the smoother the live migration will go. The team running
the migration must be familiar with every step of the development process, and must feel
confident with the proposed timelines. Strive to minimize the risk of unplanned outages or
other complications during the migration.
Thorough development and testing of the migration process can reduce the potential impact of the migration and increase the success rate of individual migrations.
Migration plan
As part of the planning and preparation, create a high-level migration plan (Table 2-3) and
communicate the plan to all stakeholders.
The plan also allows you to track schedule commitments while completing the migration.
Always allocate extra time for tasks, generally 15 - 20% more time than would be required for
the best case migration scenario. This gives you time to resolve unexpected issues. The
better you plan, the fewer unexpected or unforeseen issues will be encountered with both
resource (team members) commitments and technology.
Table 2-3 Sample high-level migration plan (columns: Action item, Assigned to, Status, Date)
1. Establish a migration management team.
8. Schedule a pre-migration rehearsal that includes all the members on the migration team and a representative data sampling.
Number of processors
Type of file system (UFS, VxFS, HFS, JFS, JFS2 (inline or outline), NFS, NTFS, FAT, FAT32)
Operating system (OS) version (AIX 5.1, z/OS 1.4, IBM System i®)
Database version
Database size
System DASD
Storage environment
Storage vendor and model (EMC, HDS, IBM, STK, Dell, HP)
Channel type (IBM ESCON®, IBM FICON®, Fibre Channel, iSCSI, SAN)
Volume sizes
Topology
Speed of network
Use checklists to ensure that operating system patches and software are at the correct levels.
Implement the migration procedures and timeline built in the design phase.
Verify the migration completion by checking the status of the migration jobs.
Migration phase
During the migration phase, communicate your plans and obtain, install, and configure
hardware, software, automation scripts, and tools needed to perform the actual data
migration. Run a pre-migration data validation test in addition to post-migration validation
testing. For more information about validation, see 2.3.2, “Validation phase” on page 27.
These tests confirm that the data is in the same state after the migration as it was before. Test
your plan on a test or development (non-production) environment if possible.
The most important part of this stage is the actual migration itself. As outlined, proper
methodology can simplify this process by:
Enhancing the speed of migration
Minimizing or eliminating application downtime
Allowing migration during regular business hours
Table 2-6 illustrates a high-level sample plan only. Customize it for your specific environment.
You can use it as a model, or write your own plan. Part 2, “Migration scenarios” on page 39
contains several sample host-based, network-based, array-based, and appliance-based migration plans. See the sample plan that best matches your migration to
become familiar with the concepts and principles. The details need to be worked out with the
migration team members. This example plan illustrates what steps might be taken to migrate
Windows SAN volumes to a DS8000 using SAN Volume Controller.
Table 2-6 High-level test project plan for a single host server
Location: Host server/Fabric or Storage unit | Activity: Verify the WWPN numbers of the HBAs with the System Admins | Owner: Storage Admin and System Admin
Location: ESS unit | Activity: Locate LUNs on ESS unit for this host server | Owner: Storage Team
Note: It is important to validate that you have the same data and functions of the
application after the migration. Make sure that the application runs with the new LUNs, that
performance is still adequate, and that operations and scripts work with the new system.
After you complete the migration, compile migration statistics and prepare a report to highlight
what worked, what did not work, and any lessons learned. Share this report with all members
of the migration team.
These reports are critical in building a repeatable and consistent process by building on what
worked, and fixing or changing what did not. Also, documenting the migration process can
help you train your staff, and simplify or streamline the next migration you do, reducing both
expense and risk.
Make sure during the validation phase that the backout scenarios can still be applied. If there
are problems with the data or the infrastructure after the migration, you can fix the problem or
revert to the original state.
Also, keep the data at the original site available and usable. Maintaining this data allows you
to discard the migrated data and restart production at the original site if something goes
wrong with the migration. Another migration can then be run after you fix the problems without
affecting your applications.
Make sure that you understand and document the following items:
1. The logical layout of the Arrays and LUNs (understanding array characteristics with
workloads and data placement on arrays and LUNs)
a. The size of the arrays
b. The number of arrays
c. The type of arrays (6+P) versus (7+P)
d. The array locations in relationship to the owning DA Pair
e. Logical LUN (Volume) sizes versus physical disk sizes
I/O enclosures
Host adapters
I/O ports available on each host adapter
Evenly distribute the data across as many hardware resources as possible. However, you
might want to isolate data to specific hardware resources for guaranteed resource dedication
to that data I/O. For best performance results, create only one rank per extent pool. Limiting
the ranks helps you to map and identify problem resources throughout the lifetime of the data
or database. Performance issues might arise later because of the following changes:
Database growth
Saturation of certain hardware resources as the database changes
New data or files/filesystems being created on the same set of hardware resources,
changing the performance of all data on those resources
Remember: The following explanation applies only to the initial creation of LUNs in the
array.
Figure 2-2 on page 29 shows an example of how the data workloads on the disk arrays affect
one another. The example is a DS8000 array of eight DDMs, in a 7+P format with no spare.
The eight physical disks are divided into 5 logical LUNs.
The first LUN (logical-disk1) is formed from strips of sections (numbered 1) along the outer
edge of each of the DDMs making up the array. Subsequent LUNs are similarly formed from
areas on the DDMs, each sequentially closer to the center.
Tip: This example is only true if there is a one-to-one relationship between rank and extent
pool. If an extent pool has multiple ranks, the LUN might span ranks.
LUNs from the same array can be assigned to the same or separate servers on an individual
basis. Workload sharing or isolation depends on the LUN to array mapping. An example of
two databases or workloads on the same array is shown in Figure 2-3.
A database might consist of two of the LUNs (logical volumes made up of strips 1 and 3) in
the array. Another database might consist of the other three logical volumes (logical disks 2,
4, and 5) in the array shown in Figure 2-3. The volumes in the array can even be assigned to
two different servers.
Because the disks share physical heads and spindles, I/O contention can result if the two
application workloads peak at the same time. Prepare the logical configuration of arrays, ranks, extent pools, and LUNs for the DS8000 so that it meets or exceeds the I/O throughput of the source storage platform you are migrating from.
Larger arrays always outperform smaller arrays. For example, a RAID-5 (7+P) array outperforms a (6+P) array. Data that needs higher performance does better on a 7+P array than on a 6+P array when using RAID 5, and on a 4x4 array rather than a 3x3 array when using RAID 10.
Important: The speed of the drives is also a factor to consider. An array made up of 15K RPM DDMs outperforms one consisting of 10K RPM DDMs. Speed is an important consideration when moving to a DS8000 with mixed speed DDMs.
The 7+P or 4x4 array outperforms a 6+P or 3x3 array because the array stripes data across more disks. For example, writing data across a 7+P RAID-5 array stripes the data and parity across eight DDMs, compared to seven DDMs for a 6+P array that reserves one DDM as a spare.
Figure: Example of data chunks and parity striped across the DDMs of the array.
Generally, however, try to balance workload activity evenly across RAID arrays, regardless of
the size. The cache mitigates most of the performance differences, but keep in mind this
guideline when you are fine-tuning the DS8000 I/O throughput.
The services combine the experience of IBM services professionals with the following
powerful migration tools:
IBM System Storage SAN Volume Controller
Transparent Data Migration Facility (TDMF)
z/OS Dataset Mobility Facility (zDMF)
IBM can help with project planning and management, and can provide technical assistance
for migrating data including the following activities:
Hardware environment refreshes
Storage reclamation
Consolidation
Data center migrations
Data Migration Services are one part of the IBM Global Services portfolio of Data Center
Services. Data Center Services includes Storage Optimization and Integration Services for end-to-end storage consulting.
The following data migration services from IBM Global Services are currently available:
IBM Migration Services for data for Open Systems
IBM Migration Service for System z Data
IBM Migration Services for Network Attached Storage Systems
In partnership with IBM Global Services, IBM Systems and Technology Group offers services
for early adopters of IBM technology and the following migration services:
IBM XIV Migration Services
IBM DS8000 Data Migration Services using temporary Licenses for Copy Services
This migration can be accomplished with minimal, and often no, interruption to service by using mirroring. Mirroring, whether performed by the operating system or by software or hardware tools, allows your data to be replicated to the IBM System Storage disk system or to other storage vendor products. In addition, IBM can provide a migration control book that details the activities performed during delivery of the services.
In executing these projects, IBM uses the following technologies and products, among others:
IBM System Storage SAN Volume Controller
OS-Specific mirroring
Global Copy
Softek Transparent Data Migration Facility (TDMF)
Value proposition
Using IBM Migration Services can help you achieve the following goals:
Reduce or eliminate downtime and data loss
Preserve data updates throughout the migration, allowing the process to be interrupted if
necessary
Improve post-migration management (a migration control book is provided that explains
the work performed)
Benefits
The implemented solution allows you to realize the following benefits:
Professional, speedy, and efficient planning and implementation of data migration
Greater flexibility and improved data migration capabilities
Skills instruction for members of your staff
Focus on business-critical activities
Opportunities to reduce the total cost of your IT infrastructure
Deal size/pricing
Engagement pricing varies depending on the following factors:
Amount of data being migrated
Complexity and type of data being migrated
Migration method and technology
Travel and living expenses
Learn more
IBM TotalStorage hardware-assisted data migration services are available around the world.
For more information, visit the following web address:
http://www-935.ibm.com/services/us/en/it-services/data-migration-services.html
In addition to providing support for moving data to IBM System Storage products, IBM can
also help you move data among disk systems from other storage manufacturers.
Migration can be accomplished using hardware or software that allows direct access to
storage device volumes to be copied to new storage devices. The migration takes place
without interruption to data availability. IBM can work with your personnel to plan the data
migration activities, and can install the migration software or migration hardware tools in your
environment.
At the completion of these services, data can be transferred from your existing 3380/3390
formatted DASD volumes to the TotalStorage disk system.
In executing these projects, IBM uses technologies and products such as:
Softek Transparent Data Migration Facility for z/OS (TDMF z/OS)
Softek z/OS Dataset Mobility Facility (zDMF).
Global Copy
Value proposition
Using IBM Migration Services can help you achieve the following goals:
Reduce or eliminate downtime
Preserve data throughout the migration
Benefits
The implemented solution allows you to realize the following benefits:
Professional, speedy, and efficient planning and implementation of data migration
Greater flexibility and improved data migration capabilities
Skills instruction for members of your staff
Focus on business-critical activities
Opportunities to reduce the total cost of your IT infrastructure
Learn more
IBM TotalStorage hardware-assisted data migration services are available around the world.
For more information, visit the following web address:
http://www-935.ibm.com/services/us/en/it-services/migration-services-for-system-z-data.html
Value Proposition
IBM Migration Services for Network Attached Storage combines proven migration methods
and tools with the planning and management experience of highly skilled IBM storage
specialists. IBM offers you continuous access to critical data throughout the migration process
to mitigate project risk. IBM also uses new storage and data platforms for the best IT and
business results. Additionally, this service provides both hardware- and software-based
migration options that are customized to your needs.
Benefits
The implemented solution allows you to realize the following benefits:
Increased value of investment in network-attached storage because of faster migration
Reduced system downtime
Reduced risk associated with data transfers
Learn More
IBM Migration Services for Network Attached Storage (NAS) are available around the world.
For more information, visit the following web address:
http://www-935.ibm.com/services/us/en/it-services/storage-and-data-services.html
STG Lab Services has the knowledge and skills to support your entire information technology
solution. STG Lab Services is focused on the delivery of new technologies and niche
offerings. It collaborates with IBM Global Services and IBM Business Partners to provide
complete solutions to keep your business competitive.
Value Proposition
IBM XIV Migration Services from STG Lab Services assists you with data migration to an IBM
XIV Storage System quickly and with minimal disruption. Services include planning,
implementation, validation of data migration, and skills transfer.
Benefits
Using IBM XIV Migration Services from STG Lab Services provides these benefits:
Correct migration of data to the XIV Storage System
Shorter migration schedule
Expert storage architects
Reduced risk of disruption to business
Value Proposition
When you do not need long-term copy services, IBM DS8000 Data Migration Services allows
you to use Copy Services functions for short-term migration scenarios. It allows enhanced
migrations of data between disk subsystems without the long-term commitment and cost of
licenses.
Benefits
Using IBM DS8000 Data Migration Services provides these benefits:
Correct migration of data to the DS8000 Storage System
Shorter migration schedule
Expert storage architects
Reduced risk of disruption to business
Use the Contact Now link in the right corner to contact STG Lab Services. You will be directed
to fill out a form.
The Remote Mirror and Copy functions are optional licensed functions of the DS8000 that
include:
Metro Mirror
Global Copy
Global Mirror
Metro/Global Mirror
For more information about these topics, see the following books:
IBM System Storage DS8000: Copy Services in Open Environments, SG24-6788
DS8000 Copy Services for IBM System z, SG24-6787
This chapter specifically deals with two members of this family of products, Metro Mirror and
Global Copy.
Exception: The Global Mirror member of the Remote Mirror and Copy family is not
addressed because using IBM Global Mirror is not a practical approach for data migration.
Global Mirror is intended to be a long-distance business continuity solution. The cost and
setup complexity of IBM Global Mirror are not typically justified for a one-time data
migration project.
Remote Mirror and Copy functions can be used to migrate data between IBM enterprise
storage subsystems. These subsystems can include IBM Enterprise Storage Server® Model
800 and Model 750.
Important: IBM Enterprise Storage Server Model F20 is not supported for Remote Mirror
and Copy functions to the DS8000. If you intend to migrate from the Model F20, you must
move the data to a Model 800 or 750 before migrating to the DS8000.
Remote Mirror and Copy migration methods replicate data between IBM storage subsystems,
and are sometimes called hardware-based replication. Remote Mirror and Copy migration
offers the following advantages:
High performance
Operating system-independent
Does not consume application host resources
Global Copy migration methods require you to stop application updates and changes while
the remaining data is moved. Both Global Copy and Metro Mirror require the application host
operating system to acquire the new storage subsystem volumes. The following factors must
be taken into account when planning the data migration:
Acquire new migration volumes and make them known to the operating system
Modify application configuration files
Perform data integrity checks
Appropriate license codes must be purchased and applied to each of the systems in the
migration scenario to use the Remote Mirror and Copy functions. For information about
licensing requirements, speak to your IBM storage marketing representative.
Figure: Metro Mirror write sequence. (1) The server writes to the primary (source) LUN or volume. (2) The primary writes the data to the secondary (target) LUN or volume. (3) The secondary signals write complete back to the primary. (4) The primary acknowledges the write to the server.
When the application performs a write update operation to a source volume, the following
steps occur:
1. Write to source volume (storage unit cache and nonvolatile storage)
2. Write to target volume (storage unit cache and nonvolatile storage)
3. Signal write complete from the remote target DS8000
4. Post I/O complete to host server
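As an illustration only, the following DS CLI sketch shows how a Metro Mirror configuration of this kind might be established; the storage image IDs, WWNN, LSS IDs, port pairs, and volume ranges are placeholders for your environment.
# Create the remote mirror and copy paths between source and target LSS 10
mkpprcpath -dev IBM.2107-7512345 -remotedev IBM.2107-7567890 -remotewwnn 500507630AFFC29F -srclss 10 -tgtlss 10 I0030:I0100
# Establish synchronous (Metro Mirror) pairs for volumes 1000-1003
mkpprc -dev IBM.2107-7512345 -remotedev IBM.2107-7567890 -type mmir 1000-1003:1000-1003
# Verify the pair state; Full Duplex indicates that synchronization is complete
lspprc -dev IBM.2107-7512345 1000-1003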
The Fibre Channel connection between the local and the remote disk subsystems can be
through a switch. Other supported distance solutions include dense wavelength division multiplexing (DWDM).
As part of your implementation project, identify and distribute hot spots across your
configuration to manage and balance the load.
When operating in Global Copy mode, the source volume sends a periodic, incremental copy of updated tracks to the target volume instead of constant updates. Incremental updates cause
less impact to application writes for source volumes and less demand for bandwidth
resources, allowing a more flexible use of the available bandwidth.
With Global Copy, write operations complete on the source disk subsystem before they are
received by the target disk subsystem. This capability is designed to prevent the local
performance from being affected by the wait time of writes on the remote system. Therefore, the source and target copies can be separated by any distance.
Figure: Global Copy write sequence. The primary (source) LUN or volume acknowledges the server write immediately, and then sends the update to the secondary (target) LUN or volume non-synchronously.
Data Migration using Global Copy requires a consistent copy of the data to move applications
and servers to the remote subsystem. By design, the data on the remote system is a fuzzy
copy, with the volume pairs in a copy pending state. To create a consistent copy of the
migrated data, the application must be quiesced and the volume pairs suspended. If the pairs
are terminated before quiescing I/O and suspending the pairs, you might lose ordered
transactions to the remote site.
There are two ways to ensure data consistency during migration using Global Copy:
Shut down all applications at the primary site and allow the out-of-sync sectors to drain completely (see the example after this list).
Issue the go to sync command on the Global Copy relationships when the out-of-sync
sectors are approaching or at zero. When the out-of-sync sectors are fully drained, the
pairs are in full duplex mode and there is a consistent relationship. Using go to sync is the
more reliable method.
Global Copy performance depends on the bandwidth of the Fibre Channel interface (the pipe)
between the Global Copy primary and remote storage subsystems. The migration cannot be
completed until the first pass finishes, so the first-pass time sets a floor on the migration
schedule. If the data to be transferred is 5 TB and the WAN/DWDM interface operates at
27 MBps, allow about 60 hours for the first pass. This large amount of time is not a reflection
on the performance of Global Copy. Instead, it reflects how long it takes to move 5 TB at 27 MBps.
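As a back-of-the-envelope check of that figure (a sketch assuming decimal terabytes and ignoring protocol overhead and link utilization):

# 5 TB at 27 MBps, expressed in hours
echo "5 * 10^12 / (27 * 10^6) / 3600" | bc -l
# prints roughly 51.4, so a little over two days of raw transfer time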
The pipe must also be large enough to handle the write load of the migration volumes. Using
the same pipe bandwidth previously mentioned, the amount of data being written to all the
migration volumes should be less than 27 MBps. Otherwise, Global Copy will not be able to
deliver the data to the remote storage quickly enough to keep up. As a result, the data at the
remote site will fall further and further behind the data at the primary site. Again, this limitation
is not a reflection on the capabilities of Global Copy. Make sure that the pipe is large enough
to handle the write load of the migration candidate application.
Dedicate the host adapter ports used for Global Copy to the Global Copy migration.
Host-based activities should not use the same ports as Global Copy because this would
negatively affect Global Copy performance.
Although Global Copy can successfully migrate the data in such a scenario, application
performance after the migration would likely be far lower than it was on the Enterprise
Storage Server. The performance drops because the aggregate performance characteristics
(I/Os per second and MBps) of the 16 Enterprise Storage Server arrays far outweigh the
performance capabilities of two DS8000 arrays.
If you are using Tivoli Storage Productivity Center for Replication to manage Copy Services,
use it to facilitate data migration when using Copy Services as your migration technique.
If you are not using IBM Copy Services, do not use Tivoli Storage Productivity Center for
Replication because it adds unnecessary cost and complexity to the implementation. For a
one-time migration project using Copy Services, use Global Copy controlled by one of the
DS8000 user interfaces.
The DS Storage Manager can be accessed through the Tivoli Storage Productivity Center
Element Manager of the System Storage Productivity Center (SSPC). It can be accessed from
any network-connected workstation with a supported browser. It can also be accessed
directly from the DS8000 management console using the browser on the HMC.
Remotely connect to the DS Storage Manager using the IP address, or the fully qualified name
resolved by the DNS server, of the HMC together with the correct port number, as shown in
the following example.
https://10.0.0.1:8452/DS8000/Login
In this example, 10.0.0.1 is the IP address of the HMC. 8452 is the port number, and
DS8000/Login is required to access the Storage Manager.
The login window for newer versions of DS8000 is shown in Figure 4-4.
Figure 4-4 DS8000 new Storage Manager login window from release 6.1
For information about how to create login profiles, see IBM System Storage DS8000
Command Line Interface User’s Guide, SC26-7916. Access can also be obtained by starting
the DS8000 CLI using the TCP/IP address or the fully qualified name of the storage unit
HMC. The user ID and password used for DS8000 CLI are the same as those used for the DS
Storage Manager. For more details about using DS8000 CLI for Remote Mirror and Copy
functions, see 4.4, “Data migration from DS8000 to DS8000” on page 81.
These interfaces send commands directly to the DS8000 storage unit over a FICON or
ESCON channel to a conduit count key data (CKD) volume. The command is passed to the
microcode for execution from this volume. The commands are issued the same way for
DS8000 and Enterprise Storage Server.
For a detailed description of these commands, see z/OS DFSMS Advanced Copy Services,
SC35-0428, and DS8000 Copy Services for IBM System z, SG24-6787.
The ICKDSF commands used for Metro Mirror and Global Copy management are shown in
Table 4-2.
For more information, see the Device Support Facilities User’s Guide and Reference,
GC35-0033.
This section highlights the steps required to set up the configuration to migrate data from
Enterprise Storage Server to DS8000. It includes paths and pairs using DS Storage Manager,
DS8000 CLI, TSO, and ICKDSF for Metro Mirror, and Global Copy. It also includes the
specific steps to complete the data migration from an open systems or a System z
perspective. The final actions to attach the host to the target DS8000 and start the
applications are covered in 4.5, “Post-migration tasks” on page 90.
To establish PPRC pairs, the Enterprise Storage Server must have a PPRC license and the
DS8000 must have a Remote Mirror Copy license. The Enterprise Storage Server must also
have Fibre Channel adapters that have connectivity to the DS8000.
Important: To manage the Enterprise Storage Server Copy Services from the DS8000,
you must have your IBM Service Representative install Licensed Internal Code Version
2.4.3.65 or later on the Enterprise Storage Server. You also need DS8000 code bundle
6.0.500.52 or later on the DS8000.
4.3.1 Adding Enterprise Storage Server Copy Services Domain to the DS8000
Storage Complex
To migrate data using the DS Storage Manager from the Enterprise Storage Server to the
DS8000, add the Enterprise Storage Server Copy Services Domain server to the DS8000
Storage Complex. Adding the server can only be done using the DS8000 Storage Manager.
The DS8000 CLI does not require authentication like the DS Storage Manager does.
The DS8000 Storage Manager must authenticate to the Enterprise Storage Server before you
can issue any commands. The user ID and password used to log on to the DS Storage
Manager must be defined in the Enterprise Storage Server Specialist.
The new user is displayed in the User Administration window as shown in Figure 4-8.
Figure 4-8 Enterprise Storage Server Specialist User Administration panel with DS Storage Manager ID
5. Connect to the Storage Manager and click Real-time manager → Manage Hardware →
Storage Complexes.
Figure 4-9 Adding a 2105 Copy Services Domain to the DS8000 Storage Complex
7. Enter the Server 1 address. You can enter a Server 2 IP address by selecting the
Define a second Copy Services Server check box. Click OK.
8. Verify that the CS Domain has been added to the Storage Complex by viewing the Storage
Complexes after the task completes (Figure 4-10).
Figure 4-10 2105 Copy Services Domain added to DS8000 Storage Complex
The remaining steps for setting up Metro Mirror paths can be done with both the Storage
Manager and the DS8000 CLI. Both methods are shown in 4.3.2, “Creating Remote Mirror
and Copy paths” on page 54.
Remember: If channel extension technology is used for Metro Mirror links, make sure the
product used is supported in the environment (direct connect or SAN switch). Also ensure
that the SAN switch used is supported by the product vendor.
You need to know the physical-to-logical layout of the I/O ports on both storage units
so that you can set up the paths correctly. The chart in Figure 4-12 shows the numbering
scheme for the ports on the Enterprise Storage Server. There are four host bays, each with
four slots for adapters. The Enterprise Storage Server can have up to 16 host adapters,
allowing for a maximum of 16 Fibre Channel ports per Enterprise Storage Server.
The chart in Figure 4-13 on page 56 displays the scheme for the DS8000 in the first frame.
There are four I/O enclosures, each with six slots for adapters. Two slots in each enclosure
are reserved for the device adapters connected to the disk drive enclosures. That leaves four
slots for host adapters.
The logical number is one less than the physical number. The four ports on the Fibre Channel
adapters are labeled 0-3, and the numbering starts at the top port on each adapter.
Tip: Selecting the source LSS is optional because the next window requires the LSS
selection. If it is selected, any existing paths are displayed.
5. In the Select source LSS window shown in Figure 4-16, select the source LSS you want
and click Next.
Remember: Each LSS to be used in the data migration requires paths to be created
one at a time using DS Storage Manager or the DS8000 CLI. A source LSS can have
multiple target LSSs, but the paths must be created for each target LSS separately.
In this example, LSS 17 is selected as the source LSS. Only the LSSs on the Enterprise
Storage Server are displayed in this list.
7. In the Select source I/O ports window as shown in Figure 4-18, select one or more source
ports and click Next.
The ports listed are available in the active zone set in the switch, or on the paths physically
connected between the two storage units. Figure 4-18 on page 59 shows that the ports
listed for the Enterprise Storage Server (Source) are the same as the zone members
shown in Figure 4-11 on page 55.
The ports are displayed by location rather than WWPN.
8. In the Select target I/O ports window as shown in Figure 4-19, select at least one target
port for each source port and click Next. As with the source ports, the target ports
available depend on how the zoning or cabling is set up. To select multiple target ports for
a single source port, press the Shift key while selecting the ports.
For Metro Mirror, use multiple paths due to timing sensitivity issues. Global Copy does
not have this sensitivity to shared host I/O ports and paths.
Each Source I/O port has a path available to both Target I/O ports due to the way the zone
was established on the switch. Both ports from both storage units are in a single zone.
With this selection, there are four logical paths for LSS 17 on the Enterprise Storage
Server, to LSS 10 on the DS8000.
9. In the Select path options window (Figure 4-20 on page 61), select Define as
consistency group and click Next.
For Remote Mirror and Copy pairs, selecting the consistency group option supports the
consistent data between two LSSs (not a group of LSSs). Data consistency means that
the sequence of dependent writes is always kept in the copied data.
Tip: The consistency group option is not required for Global Copy paths in a data
migration scenario.
The Define as consistency group option itself can keep consistent data at the remote
site. In a rolling disaster, all volumes go into the queue full condition within the time interval
specified in the Consistency Group time-out value. The default time-out value is 120
seconds.
However, if all the volumes do not go into the queue full condition, use the commands
freezepprc and unfreezepprc to hold the I/O activity to the volumes not in the queue full
condition. You can also resume or release the held I/O without waiting for the Consistency
Group timeout to minimize the impact on the applications. These commands are issued at
the LSS level through the DS8000 CLI.
For more information about using consistency groups, see IBM System Storage DS8000:
Copy Services in Open Environments, SG24-6788, and DS8000 Copy Services for IBM
System z, SG24-6787.
10.In the Verification window shown in Figure 4-21, review your selections carefully and then
click Finish. If changes need to be made, click Back to make the modifications before
clicking Finish.
Start the DS8000 CLI using the IP address of one cluster on the Enterprise Storage Server
and enter the user ID and password as shown in Example 4-1. A list of the storage units in the
Copy Services Domain Server is displayed. In this example, there are two storage units.
To create Remote Mirror and Copy paths, perform the following steps:
1. Before the paths can be created, you need to determine the remote WWNN and the
available paths. Use the lssi command to determine the remote WWNN as shown in
Example 4-2.
Example 4-2 Using the lssi command to obtain the WWNN of the remote system
dscli> lssi
Date/Time: July 12, 2011 11:46:06 AM CET IBM DSCLI Version: 5.2.410.182
Name ID Storage Unit Model WWNN State ESSNet
==================================================================================
DS8k-SLE05 IBM.2107-75L4741 IBM.2107-75L4740 931 5005076305FFC786 Online Enabled
2. Query the available PPRC ports between the two storage units using the lsavailpprcport
command as shown in Example 4-3.
Example 4-3 Query available PPRC ports between Enterprise Storage Server and DS8000
dscli> lsavailpprcport -dev IBM.2105-22673 -remotedev IBM.2107-75L4741
-remotewwnn 5005076305FFC786 -fullid 17:10
Date/Time: July 6, 2011 10:02:47 AM CET IBM DSCLI Version: 5.2.410.182 DS:
IBM.2105-22673
Local Port Attached Port Type
================================================
IBM.2105-22673/I0004 IBM.2107-75L4741/I0140 FCP
IBM.2105-22673/I0004 IBM.2107-75L4741/I0142 FCP
IBM.2105-22673/I00AC IBM.2107-75L4741/I0140 FCP
IBM.2105-22673/I00AC IBM.2107-75L4741/I0142 FCP
3. Use the WWNN and a port pair to create a path using the mkpprcpath command from the
Enterprise Storage Server (Example 4-4). Verify that the information entered is correct
before running the command. In this example, four paths are created between LSS 17 on
the Enterprise Storage Server and LSS 10 on the DS8000.
Example 4-4 Creating PPRC paths between Enterprise Storage Server and DS8000
dscli> mkpprcpath -dev IBM.2105-22673 -remotedev IBM.2107-75L4741 -remotewwnn
5005076305FFC786 -srclss 17 -tgtlss 10 I0004:I0140 I00AC:I0140 I0004:I0142
I00AC:I0142
Date/Time: July 6, 2011 10:11:39 AM CET IBM DSCLI Version: 5.2.410.182 DS:
IBM.2105-22673
CMUC00149I mkpprcpath: Remote Mirror and Copy path 17:10 successfully
established.
4. Query and verify the paths using the lspprcpath command as shown in Example 4-5. If
you need to make changes, remove the path and recreate or modify it using DS Storage
Manager. The output lists the Enterprise Storage Server information in the Src, SS, and
Port columns, and the DS8000 information in the Tgt column.
Example 4-5 Query PPRC paths between Enterprise Storage Server and DS8000
dscli> lspprcpath -dev IBM.2105-22673 17
Date/Time: July 6, 2011 10:19:04 AM CET IBM DSCLI Version: 5.2.410.182 DS:
IBM.2105-22673
Src Tgt State SS Port Attached Port Tgt WWNN
=========================================================
17 10 Success FF10 I00AC I0140 5005076305FFC786
17 10 Success FF10 I00AC I0142 5005076305FFC786
17 10 Success FF10 I0004 I0140 5005076305FFC786
17 10 Success FF10 I0004 I0142 5005076305FFC786
Example 4-6 establishes a path between the Enterprise Storage Server and the DS8000. The
SSID of the primary volume is x’1710’, the WWNN is 5005076300C09629, and the LSS is
x’00’. The SSID of the secondary volume is x’1711’, the WWNN is 5005076305FFC786, and
the LSS is x’01’. In this example, the consistency group option is set to NO.
In Example 4-7, the FCPP parameter specifies up to eight paths. Each path is an 8-digit
hexadecimal address in the form x’aaaabbbb’. In this form, aaaa is the primary system
adapter ID (SAID) and bbbb is the remote system adapter ID (SAID). The World Wide Node
Name (WWNN) for the primary and remote are specified in the WWNN parameter, with the
primary listed first followed by the remote.
When creating fixed block volumes on the DS8000, you have three size choices:
ds: The number of bytes allocated will be the requested capacity value times 2³⁰.
Enterprise Storage Server: The number of bytes allocated will be the requested capacity
value times 10⁹.
blocks: The number of bytes allocated will be the requested capacity value times 512
bytes (each block is 512 bytes).
For more information, see IBM System Storage DS8000: Copy Services in Open
Environments, SG24-6788.
CKD volumes have the same considerations about volume size. CKD volumes are specified
in number of cylinders. The volumes on the DS8000 must have the same or greater number
of cylinders as the volumes on the Enterprise Storage Server.
For more information, see DS8000 Copy Services for IBM System z, SG24-6787.
Another important aspect to consider before creating the configuration for the data migration
to the DS8000 is the volume address differences between the two storage units. On the
Enterprise Storage Server, open systems volume IDs are given in an 8-digit format,
xxx-sssss. In this form, xxx is the LUN ID and sssss is the serial number of the Enterprise
Storage Server. When referring to these volumes with DS8000 CLI, add 1000 to the volume
ID. Remember the limitations on the Enterprise Storage Server address ranges shown in
Figure 4-23.
On the DS8000, the range of available addressing is significantly greater than on the
Enterprise Storage Server. The entire storage unit can be configured for just CKD or just FB
using LUNs 00-FF on LSSs/LCUs 00-FE (FF is reserved).
If the configuration on the DS8000 is a mixed CKD and FB environment, the CKD or FB
volumes must be contained within a grouping of 16 LSSs/LCUs. For example, CKD volumes
are configured in 2000-2FFF and FB volumes in 3000-3FFF, where 20-2F and 30-3F are the
range of 16 LCUs/LSSs.
No restrictions exist for the creation of the LSS/LCU ranges other than to group by 16. The 16
groups can be all CKD, all FB, or mixed, as long as any single group is the same type. Take
grouping into account when planning the DS8000 configuration because the storage unit
enforces the groupings.
After you complete a volume configuration on the DS8000 that is compatible with the
Enterprise Storage Server, an internal DS8000 process formats the volumes. Until this
formatting completes, the volumes cannot be used as target volumes on the DS8000, so the
formatting must finish before you create the pairs. The time needed for the volume
initialization to complete varies depending on the volume size.
Remember: If you attempt to use the volumes before the volume initialization has
completed, establishing the copy pairs fails. This is an expected result in this case.
3. If you selected LSS in the Resource Type list, select the specific LSS in the Specify LSS
list. If you selected Show All Volumes, select All FB Volumes or All CKD Volumes.
4. Select the volumes to be used, and click the Select Action list and select Create.
In this example, the selection is being made by LSS. Specify LSS 17 (which we have
already made paths for) volume 1700.
5. In the Volume Pairing Method window, select the method for Volume Pairing:
– Automated volume pair assignment automatically pairs the first selected source volume
with a target volume of the same size. All subsequent pairs are automatically assigned
based on compatible size in a sequential fashion. The lowest source volume number
is paired with the lowest target volume number.
6. The Select source volumes window displays the available source volumes based on an
LSS. You can also create paths from this panel by clicking Create Paths, which starts the
Paths wizard.
In this example, the source volumes are in LSS 17 on the Enterprise Storage Server.
Select the source volume you want and click Next (Figure 4-26).
11.After the pairs have been created, verify the state of the relationship by checking the
Metro Mirror: Real-time window for the defined source volumes (Figure 4-30).
Remote Mirror and Copy pairs are created one LSS at a time. All the volumes in an LSS can
be used, but the process must be repeated for each LSS involved in the migration.
Requirement: The DS8000 CLI must be used from the Enterprise Storage Server to
create the Remote Mirror and Copy pairs.
Example 4-9 illustrates the creation of Global Copy pairs with DS8000 CLI on the same
volumes used in the preceding examples. Before starting this example, the Metro Mirror pair
was removed.
Creating the Remote Mirror and Copy pairs can be a tedious task if many volumes are involved
in the migration. Up to 4096 source volumes are possible on the Enterprise Storage Server.
An easy way to create many pairs is to use the DS8000 CLI in scripting mode. A script to
create Global Copy pairs is shown in Figure 4-31.
This example shows creating 1024 Global Copy pairs. The source volumes are on the
Enterprise Storage Server (which has 64 volumes in each LCU) and pairs are created with
corresponding target volumes on the DS8000. The pairs are created as a single pair of LSSs
at a time. For example, source LSS 00 with volumes 00-3F is paired with target LSS C8 with
volumes 00-3F as 0000-003F:C800-C83F.
In this example, you are establishing an initial copy from a simplex state.
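A script of this kind might look like the following sketch (comment lines shown with #). The device IDs are reused from the earlier examples in this section; the source-to-target LSS pairings (00 to C8, 01 to C9, and so on) are assumptions used only to illustrate the pattern and must be replaced with your own layout.

# create_gcp_pairs.dscli -- one mkpprc per source/target LSS pair (64 volumes each)
mkpprc -dev IBM.2105-22673 -remotedev IBM.2107-75L4741 -type gcp -mode full 0000-003F:C800-C83F
mkpprc -dev IBM.2105-22673 -remotedev IBM.2107-75L4741 -type gcp -mode full 0100-013F:C900-C93F
# ...repeat for the remaining source LSSs until all 1024 pairs are created...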
The option parameter OPTION has two mutually exclusive values: SYNC and XD. SYNC is
specified to create Metro Mirror pairs. XD is used to specify Global Copy pairs.
In Example 4-10, the primary volume is in LSS x’00’ on the Enterprise Storage Server. The
primary volume has the following characteristics:
The SSID of the LSS is x’1710’
The serial number of the Enterprise Storage Server is 22673
The CCA is x’28’
The LSS is x’00’
The remote volume is in LSS x’01’ on the DS8000, and has the following characteristics:
The SSID of the LSS is x’1711’
The serial number of the DS8000 is L4741
The CCA is x’28’
The LSS is x’01’.
The MSGREQ(YES) parameter specifies that Metro Mirror waits until the initial full volume
copy operation is complete before issuing the completion message.
In order for a pair to be created as Global Copy, specify whether the pair comes from the
simplex or suspended state. This means an initial copy of a newly established pair (simplex)
or a resynchronization of a suspended pair. The MODE parameter is used to specify either
COPY or RESYNC. The CESTPAIR command in Example 4-11 includes the OPTION(XD)
and MODE(COPY) to signify Global Copy from a simplex state.
After the Remote Mirror and Copy pairs are established, the data will start copying. For Metro
Mirror, wait for the pairs to enter a Full Duplex state so that the data migration can be
completed. For Global Copy, there is an intermediate state required to get to the Full Duplex
state.
The following steps synchronize the data from the Enterprise Storage Server to the DS8000.
These steps are common for any interface used to manage the Remote Mirror and Copy
relationships; a DS8000 CLI sketch of the sequence follows the list.
1. Verify full duplex mode (Metro Mirror) or out-of-sync tracks (Global Copy) are at or near
zero.
2. If you are using Global Copy, convert to synchronous (go-to-sync function).
3. Suspend the source I/O.
4. Suspend pairs.
5. Delete pairs.
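A sketch of this sequence in DS8000 CLI terms. The device IDs are taken from the earlier examples; the volume pair 1700:1000 is a hypothetical pair from source LSS 17 to target LSS 10, so substitute your own volume ranges.

# 1. Check out-of-sync tracks and First Pass Status
lspprc -dev IBM.2105-22673 -remotedev IBM.2107-75L4741 -l 1700:1000
# 2. Go to sync: convert the Global Copy pair to Metro Mirror
mkpprc -dev IBM.2105-22673 -remotedev IBM.2107-75L4741 -type mmir 1700:1000
# 3. Quiesce the host application and flush all writes to disk (host-side step)
# 4. Suspend the pair after it reaches Full Duplex
pausepprc -dev IBM.2105-22673 -remotedev IBM.2107-75L4741 1700:1000
# 5. Delete the pair
rmpprc -dev IBM.2105-22673 -remotedev IBM.2107-75L4741 -quiet 1700:1000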
2. The same command is used to query the Global Copy relationship. Query the pairs to
monitor the Out of Sync Tracks, as seen in Example 4-14. In this example, the local and
remote copies still have many tracks to copy (288991) and the First Pass has not yet
completed.
As this number approaches zero (or gets to zero) and the First Pass field becomes true,
the go to sync function is started. This state must be reached for each pair in the data
migration. Example 4-15 shows the Out of Sync Tracks at zero (0) and the First Pass
Status at true.
3. This Global Copy pair is now ready for the final copy to be performed. Before running this
command, stop the host I/O and synchronize all the data to disk to avoid losing any
updates after the relationship is removed.
4. Use the mkpprc command to start the go to sync function on the same pair, but change
the -type option to mmir (Example 4-16).
5. Query the pair using the lspprc command to verify that the pair has completed the copy
when the state changes to Full Duplex (Example 4-17).
Tip: Using this command with multiple pairs in multiple LSSs will result in a
confirmation question for each range of pairs. The -quiet option can be included to turn
off this confirmation prompt.
7. You are asked to confirm the removal of the pair. Respond y (for yes) to delete the
relationship, or n (for no) to cancel.
Complete these steps for all pairs involved in the data migration.
Tip: The CESTPAIR command does not support the go-to-sync and suspend operation
(ICKDSF, DS8000 CLI, and DS Storage Manager do). When using the TSO interface,
you can set the process to trigger when the system issues a state change message that
the duplex state is reached.
2. Query the status of the target volumes using the CQUERY command as shown in
Example 4-20. Notice that the state of the pair is PENDING.XD and no path information is
displayed. This is a normal response for the target storage unit.
3. Before removing relationships, stop all host activity and ensure that all data is written to
disk. These steps ensure that a full and complete copy exists on the DS8000. After that,
move the pairs to a synchronous state, complete the copy, and attach the application to
the target storage unit.
Monitor the copy, using the CQUERY command until the pair reaches a Duplex state. When
this state is reached, the copy is complete.
5. Use the CSUSPEND command to suspend and remove the pairs as shown in Example 4-22.
Query the state to confirm that the pairs are suspended using the CQUERY command.
The state will be SUSPEND(3), which means the Global Copy was suspended by a host
command to the source storage unit.
6. The copy is now complete and the pairs are suspended. Delete the pairs by issuing the
CDELPAIR command to the source storage volumes (Example 4-23).
After these steps are performed for all pairs involved in the data migration, you are ready to
move the host and application to the target DS8000.
When the pair reaches either of the following states, you are ready to suspend the host
application and flush all writes to disk:
The pair is a Metro Mirror relationship and has reached the DUPLEX state.
The pair is a Global Copy relationship and the number of out-of-sync tracks is at or near zero.
2. A Global Copy relationship must be converted to synchronous to complete the copy. Use
the PPRCOPY CESTPAIR command, changing the parameter OPTION(XD) to OPTION(SYNC).
3. Use the PPRCOPY SUSPEND command to suspend the Metro Mirror relationship before you
remove it, as shown in Example 4-26. Suspending before removal ensures that all the data
has been copied from the source to the target storage unit.
4. Delete the pair using PPRCOPY DELPAIR command as shown in Example 4-27.
After performing these steps for all pairs involved in the data migration, you are ready to move
the host and application to the target DS8000.
The example in Figure 4-33 shows the out-of-sync tracks as non-zero. Monitor this state
until it gets to the state shown in Figure 4-32.
4. In the next window, select the system at which to suspend the Global Copy relationship and
select Suspend from the Select Action list (Figure 4-35).
7. After the command completes, query the state of the relationship. As seen in Figure 4-38,
the pairs are now in full duplex state. After stopping the host I/O and confirming that all the
out-of-sync tracks are at zero, you are ready to remove the relationship.
Important: Do not remove the relationship if the out-of-sync tracks are non-zero.
Removing the pairs any earlier would result in having incomplete data on the remote
system.
The data migration using IBM Copy Services is complete. The remaining steps for bringing
the applications up on the new DS8000 are listed in 4.5, “Post-migration tasks” on page 90.
If the DS Storage Manager is used for data migration, the storage unit of one system must be
added to the storage complex of the other. This process is described in 4.3.1, “Adding
Enterprise Storage Server Copy Services Domain to the DS8000 Storage Complex” on
page 51.
Because the commands for Enterprise Storage Server and DS8000 as source storage are
similar, the migration between two DS8000s is covered using the DS8000 CLI only. The
following steps are described:
Configuring the remote DS8000
Creating Remote Mirror and Copy paths
Creating Remote Mirror and Copy pairs
Completing the data migration from DS8000 to DS8000
The examples in this section use DS8000 CLI for performing the configuration. To configure
the remote DS8000, follow these steps:
1. Confirm the logical configuration on the source storage unit using the lsfbvol command
as shown in Example 4-28. For simplicity, the FB volumes in volume group V3 are
listed as the source volumes. Notice that the capacities of these volumes are 5 GB and 1
GB (DS sizes).
2. Issue the mkfbvol command to configure the target DS8000 to match the source DS8000
as shown in Example 4-29. You need to issue the command once for each volume size.
In this example, the first command creates two 5 GB volumes with the same nickname
(pprc_tgt_tic6) and assigns both to the volume group V20. The second command is the
same except for the size (1 GB). A sketch of such a command follows these steps.
Reminder: Volumes created with DS8000 CLI in a single command will be the same size
and from the same extent pool. Volumes with even LSSs are created in even extent pools.
Volumes with odd LSSs are created in odd extent pools.
3. Verify the configuration on the target DS8000 using the lsfbvol command
(Example 4-30).
Repeat these steps until the configuration of the target DS8000 matches the configuration on
the volumes used in the data migration. Remember to plan the layout of the volumes based
on performance considerations and opportunities for growth.
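A sketch of the kind of mkfbvol invocation that step 2 describes. The nickname, volume group, and capacities come from the example above; the extent pool (P4) and the volume IDs (E404-E405 and E406-E407) are placeholders for your own configuration.

# two 5 GB volumes, then two 1 GB volumes, all assigned to volume group V20
mkfbvol -dev IBM.2107-7520781 -extpool P4 -cap 5 -name pprc_tgt_tic6 -volgrp V20 E404-E405
mkfbvol -dev IBM.2107-7520781 -extpool P4 -cap 1 -name pprc_tgt_tic6 -volgrp V20 E406-E407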
If the remote system is a logically partitioned (LPAR) storage unit, the same lssi
command can be used. However, it returns the information for both LPARs (also known as
Storage Facility Images or SFI). You must know which SFI the data will be migrated to.
2. After the zoning and connectivity are ready, query which paths are available to use for
Remote Mirror and Copy using the lsavailpprcport command. Issue this command from
the local system to the remote.
In Example 4-33, the command queries the possible ports between the local system
IBM.2107-75ABTV1 and the remote system IBM.2107-7520781, between LSS 42 and E4. As
you can see in the output, there are two available paths configured. Notice that each
attached port (remote) is visible to each local port.
Example 4-33 Check for available paths between the storage units
dscli> lsavailpprcport -dev IBM.2107-75ABTV1 -remotedev IBM.2107-7520781
-remotewwnn 5005076303FFC1A5 42:E4
Date/Time: July 8, 2011 2:26:04 PM PDT IBM DSCLI Version: 7.6.10.511 DS:
IBM.2107-75ABTV1
Local Port Attached Port Type
=============================
I0011 I0400 FCP
I0012 I0402 FCP
3. Create the Remote Mirror and Copy paths between the two storage units using the
DS8000 CLI command mkpprcpath. In Example 4-34, paths are created between two
different ports on each storage unit, I0011:I0400 and I0012:I0402. These paths are
created using the same physical adapter on the local system; a sketch of such a command
follows the note below. For more information, see
“Global Copy performance considerations” on page 46.
Important: Remote Mirror and Copy paths need to be created between each LSS on
the local and remote storage units for use in the data migration.
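A sketch of such a mkpprcpath invocation, built from the WWNN and port pairs reported in Example 4-33; verify the values against your own zoning before running it.

mkpprcpath -dev IBM.2107-75ABTV1 -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 -srclss 42 -tgtlss E4 I0011:I0400 I0012:I0402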
Example 4-35 Query the Remote Mirror and Copy paths with lspprcpath
dscli> lspprcpath -dev IBM.2107-75ABTV1 42
Date/Time: July 8, 2011 3:11:36 PM PDT IBM DSCLI Version: 7.6.10.511 DS:
IBM.2107-75ABTV1
Src Tgt State SS Port Attached Port Tgt WWNN
=========================================================
42 E4 Success FFE4 I0011 I0400 5005076303FFC1A5
42 E4 Success FFE4 I0012 I0402 5005076303FFC1A5
4. Create the Remote Mirror and Copy pairs between the two storage units using the mkpprc
command as shown in Example 4-36.
Example 4-36 Create Remote Mirror and Copy pairs using mkpprc
dscli> mkpprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-7520781 -type gcp -mode
full 4204-4243:E404-E443
Date/Time: July 8, 2011 3:27:51 PM PDT IBM DSCLI Version: 7.6.10.511 DS:
IBM.2107-75ABTV1
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 4204:E404
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 4205:E405
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 4206:E406
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 4207:E407
successfully created.
Perform the following steps to complete the data migration using IBM Copy Services:
1. Monitor the copy progress using the lspprc -l command. The -l option is used to view
the out-of-sync tracks. The command and output are displayed in Example 4-37.
This output is formatted to highlight important information, specifically the out-of-sync
tracks and the First Pass Status.
Example 4-37 Query the status of the Remote Mirror and Copy pairs using lspprc -l
dscli> lspprc -dev IBM.2107-75ABTV1 -l 4208-4243
Date/Time: July 11, 2011 3:25:18 PM PDT IBM DSCLI Version: 7.6.10.511 DS:
IBM.2107-75ABTV1
ID State Reason Type Out Of Sync Tracks ... First Pass Status
==================================================================================
4208:E408 Copy Pending - Global Copy 100892 ... ... ... ... False
Query the state of the relationships until all pairs have zero (or near-zero) out-of-sync
tracks and the First Pass Status is True (Example 4-38).
2. After this state is reached, the application is stopped and all data is written to disk. Convert
the Global Copy to a synchronous copy by issuing the mkpprc command using mmir as the
type as shown in Example 4-39.
3. Monitor the state of the relationships using the lspprc -l command until all pairs reach a
full duplex state. Notice in Example 4-40 that First Pass Status is changed to Invalid,
which is the expected output for a Metro Mirror relationship. This example shows both Copy
Pending and Full Duplex states.
4. After the copy is complete, remove the Remote Mirror and Copy relationships using
rmpprc command. Example 4-42 shows removing the 64 relationships. You can use the
-quiet option as shown to disable the confirmation prompt.
5. Query the relationships to confirm the removal by issuing the lspprc command. If the
relationships are all removed, the command indicates that no Remote Mirror and Copy
relationships were found as shown in Example 4-43.
Example 4-43 Confirm Remote Mirror and Copy relationships are removed
dscli> lspprc -dev IBM.2107-75ABTV1 -l 4204-4243
Date/Time: July 11, 2011 4:42:19 PM PDT IBM DSCLI Version: 7.6.10.511 DS:
IBM.2107-75ABTV1
CMUC00234I lspprc: No Remote Mirror and Copy found.
If many LSSs are being used in the data migration, you might want to create a DS8000
CLI script to create paths and pairs. Automation allows you to verify all the information you
are passing to DS8000 CLI in a single place, and avoid any typing mistakes. A script can
also be used to set up queries to the pairs. These queries will let you know when to
synchronize Global Copy, or remove the pairs when full duplex state is reached.
Start the script by using the -hmc option with the name or IP address of the HMC on the
primary system as shown in Example 4-44. You can also have a profile file set up. Place
the name of the script after the -script option, followed by the -user and -passwd options. In
this case, the script is run from a Windows XP Professional system.
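A sketch of such an invocation from a Windows command prompt. The HMC address, user ID, password, and script path are placeholders; -hmc1 names the primary HMC (the -hmc option mentioned above).

dscli -hmc1 10.0.0.1 -user admin -passwd mypassword -script C:\scripts\migration.dscli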
This section addresses considerations and clarification details not covered in those
references.
There are two phases to accessing an AIX volume group (VG) after an IBM Copy Services
copy operation:
1. Configuring the LUNs. Depending on the multi-path code, the LUNs can be hdisks,
vpaths, or something else if non-IBM storage is used.
This step is accomplished by assigning the LUNs to a host from the storage side, then
running cfgmgr at the host.
2. Accessing the VG.
Import the VG with importvg. You can also run recreatevg against the LUNs if another
copy (which can be the original VG) of the VG is located on the host.
Use recreatevg if a copy of the VG is already on the host, because the disks will have
duplicate physical volume IDs (PVIDs), logical volume names, and file system mount
points. The AIX Logical Volume Manager (LVM) does not allow duplicates of these
characteristics. AIX does allow duplicate PVIDs on a system, but only as a temporary
situation. A command sketch of both phases follows.
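A minimal sketch of the two phases, assuming the copy arrives on hdisk10 and hdisk11 and that the volume group is to be known as copyvg (all names are placeholders):

# Phase 1: configure the newly assigned LUNs
cfgmgr
lspv                       # the new hdisks should now be listed
# Phase 2, no other copy of the VG on this host: import it
importvg -y copyvg hdisk10
# Phase 2, a copy of the VG is already on this host: rebuild it with new names and PVIDs
recreatevg -y copyvg hdisk10 hdisk11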
If a LUN with a PVID is varied on in a VG, you cannot use cfgmgr to configure another hdisk or
vpath on the system with the same PVID. To configure the disks in such a situation, you
must vary off the VG with the duplicate PVID, and then run cfgmgr.
You can also configure the LUN on AIX before the copy of data is placed on the LUN (which
creates the duplicate PVID). After the LUN is configured on AIX, subsequent FlashCopies or
Global Copies will not require the LUN to be configured again. The LUN will not have a PVID
because it has just been created.
Tip: It is not necessary to issue the following commands for clearing and setting a new
PVID on the LUNs:
# chdev -l <hdisk#> -a pv=clear
# chdev -l <hdisk#> -a pv=yes
These commands are run by the recreatevg command, so they can be skipped.
Note: In this example, the new hdisks were configured before creating the FlashCopy,
so they do not have a PVID. However, if you perform the FlashCopy first, you need to
vary off existingvg before step 2.
If you want to update the copies later to match the VG again, follow these steps:
1. Vary off flashcopyvg using # varyoffvg flashcopyvg.
2. Export flashcopyvg with # exportvg flashcopyvg. This command removes information
about flashcopyvg from the ODM on AIX, but does not change any information on the
disks in the VG.
3. Create the copy with FlashCopy. For more information about when the application is not
quiesced and the file systems are not unmounted, see “Maintaining file system and data
consistency” on page 91.
4. Run recreatevg to create a VG from the FlashCopy volumes with new LV names using the
format # recreatevg -y flashcopyvg hdisk10 hdisk11 hdisk12. In this example, the VG is
called flashcopyvg. This command also changes the PVIDs of the disks to unique PVIDs
and loads the ODM with this information.
Tip: You do not need to run importvg because the recreatevg command loads the
ODM with the VG information.
Remember: In this example, step 2 is necessary because you are going to rerun
recreatevg, and you cannot have the VG already defined in the ODM.
To create a FlashCopy of VGs used by a running application that supports recovery after a
system crash, perform the following steps:
1. Put the application in a hot backup or quiesced mode if the application supports it. This
speeds recovery of the application after it is started on the FlashCopy.
2. Use disk subsystem consistency groups for the disks in the VG.
3. Freeze the JFS2 file systems by using the chfs command if you are using Journaled File
System 2 (JFS2). JFS2 is preferable because there is no similar function in JFS (see the
sketch after this list). For more information, see the chfs man page at:
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.cmds/doc/aixcmds1/chfs.htm
4. Preferably, have one file system log per file system.
5. Initiate the FlashCopy.
6. Thaw the JFS2 file system.
7. Turn off the hot backup or quiesce mode of the application.
8. Use these procedures to get the new VG activated on the system:
a. Run logredo against the file systems.
b. Run fsck against the file systems.
9. Verify the consistency of the data using the application that uses that data.
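A minimal sketch of the freeze, thaw, and recovery commands for steps 3, 6, and 8, assuming a JFS2 file system mounted at /data whose FlashCopy appears on logical volume fslv01 (both names are placeholders):

# Step 3: freeze the JFS2 file system for up to 120 seconds
chfs -a freeze=120 /data
# ...initiate the FlashCopy (step 5)...
# Step 6: thaw the file system
chfs -a freeze=off /data
# Step 8: on the copy, replay the JFS2 log and check the file system before mounting
logredo /dev/fslv01
fsck -y /dev/fslv01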
For more information about ensuring file system and data consistency, see:
http://www-1.ibm.com/support/docview.wss?rs=0&q1=%2bFlashcopy+%2bconsistency&uid=i
sg3T1000673&loc=en_US&cs=utf-8&cc=us&lang=en
# format
Searching for disks...done
Figure 4-47 Using the format command to display the LUNs that are currently visible
Comparing the output in Figure 4-46 on page 94 and Figure 4-47, you can see that you
now have two more disks. They are actually two paths to the same target (the
dual-pathed target device that you just made available to the operating system).
The output of the format command shows the device as an IBM 2105800. It is, however, a
DS8000. The DS8000 is labeled incorrectly because the target LUN is an exact replica of the
source Enterprise Storage Server LUN as created by Remote Mirror and Copy. The
source Enterprise Storage Server LUN was initially labeled correctly as IBM 2105800 by
Solaris, so the label (including the source geometry) was copied. However, do not rewrite
the label on the target LUN because user data might be affected.
# format
Searching for disks...done
selecting c2t50050763030B048Ed0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> inq
Vendor: IBM
Product: 2107900
Revision: .437
Figure 4-49 Using the format menu to display inquiry data
c. Enter vxdisk list to see the status of the target volume. Figure 4-50 shows that VxVM
recognizes it as a DS8000, despite the label on the disk stating that it is an IBM 2105-800. It
also shows that the disk is attributed with the flag udid_mismatch. This flag shows that VxVM is
aware that it is a cloned volume (Remote Mirror and Copy).
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
EMC0_0 auto:none - - online invalid
EMC0_1 auto:none - - online invalid
EMC0_2 auto:none - - online invalid
IBM_DS8x000_0 auto:none - - online invalid
IBM_DS8x000_4 auto:none - - online invalid
IBM_DS8x000_5 auto:none - - online invalid
IBM_DS8x001_0 auto:cdsdisk - - online udid_mismatch
IBM_SHARK0_1 auto - - offline
c1t0d0s2 auto:none - - online invalid
c1t1d0s2 auto:none - - online invalid
Figure 4-50 Using vxdisk list to show the status of our target device
11.Issue the vxprint command to see that the disk group was imported, but the VxVM
volume itself is still disabled (Figure 4-52).
# vxprint
Disk group: dg0
12.To be able to mount it later on, start the volume by entering vxvol -g dg0 start vol0 as
shown in Figure 4-53.
# mount /ESS0
# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c1t0d0s0 33279289 5697589 27248908 18% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 2731024 1296 2729728 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
33279289 5697589 27248908 18%
/platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
33279289 5697589 27248908 18%
/platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0 0 0 0% /dev/fd
swap 2729792 64 2729728 1% /tmp
swap 2729776 48 2729728 1% /var/run
swap 2729728 0 2729728 0% /dev/vx/dmp
swap 2729728 0 2729728 0% /dev/vx/rdmp
/dev/vx/dsk/dg0/vol0 35298304 17205629 16961890 51% /ESS0
Figure 4-54 Mount the volume and verify that it became mounted
To verify the operation, check that the last entry in a file from the source LUN can also be
seen on the target LUN (Figure 4-55).
# cd /ESS0
# tail -3 samplefile
Mar 26 17:14:06 SunFire280Rtic2 Corrupt label; wrong magic number
Mar 26 17:14:06 SunFire280Rtic2 scsi: [ID 107833 kern.warning] WARNING:
/pci@8,700
************** LAST ENTRY TO SAMPLEFILE ******************
Figure 4-55 Check data on target LUN
4. Issue the go-to-sync command and verify that the pair is in duplex state as shown in
Figure 4-58.
The new DS8000 devices are varied online and available for use. The application is ready to
restart.
Chapter 5. DSCLIbroker
The DSCLIbroker is a scripting framework that automates Copy Services functions. The
DSCLIbroker provides the following features:
Groups volumes according to applications or other context
Simplifies the execution of Copy Services commands
Provides a scripting framework for implementing automation functions
Copy Services are frequently used for data migrations, but migrations are not a daily
operation. Depending on your data center environment, applications, and type of migration,
the migration tasks can get complicated. As a result, you might want to automate certain
tasks, especially when the required actions cannot be accomplished using the standard
storage management software.
The DSCLIbroker is a scripting framework that allows you to create user-customized
automation scripts. For example, consider a multitiered stack consisting of the DS8000
hardware at the lowest level and DSCLI above it. You might want to write automation
scripts where DSCLI commands are run against the storage. The DSCLIbroker can be
positioned as an extra layer between the DSCLI and the applications (Figure 5-1).
Figure 5-1 Layered stack: applications; TPC-R, customer-written scripts, and DSCLIbroker; DSCLI; hardware (DS8000)
The framework is written in the Perl scripting language and consists of a series of Perl library
modules that support all DS8000 Copy Services functions. You can write your own scripts
using these libraries. The framework also provides a Perl script for each Copy Services DSCLI
command that can be used without additional programming.
In the subject field, enter Migration using DSCLIbroker and continue to process the form.
If you are outside of IBM, talk to your IBM Representative about ordering the Service.
When using DSCLI Copy Services commands, a list of copy relations must be
specified with every command. In a migration scenario with the steps mkpprc, lspprc, pausepprc,
lspprc, failoverpprc, and again lspprc, the list of copy relations must be specified six times. It
requires much effort to maintain these commands either in self-written scripts or on the
DSCLI command line.
With DSCLIbroker, the configuration data of the Copy Services relations is separated from the
scripting code in a repository. In this repository, multiple Copy Services relations belonging to
a single application can be grouped together and tagged with a name. When a DSCLIbroker
script is run, it refers to the tagged name, and the DSCLIbroker then fetches all copy relations
from the repository. Maintenance of these relationships is done in the repository, and so you
do not need to change the scripting code.
In addition, the DSCLIbroker provides a scripting framework that offers an easy way to write
user customized scripts. This framework is implemented in modular libraries written in perl
scripting language with an object-oriented approach. There is one library for each DS8000
Copy Services function available. The libraries themselves are organized so they can be
extended to support other storage platforms too. The current storage platforms supported
include DS8000, IBM DS6000™ and ESS Model 800. Plans are in place to support SAN
Volume Controller and V7000 storage platforms and the TotalStorage Productivity Center for
Replication command-line interface.
DSCLIbroker libraries
The libraries are the core of the scripting framework. If you write perl scripts using the
DSCLIbroker, you must include the libraries.
The following are the libraries and a short summary of their purposes and contents:
DSPPRC.pm This library is an object class where all remote copy functions are
implemented. The functions include managing the paths and the pair
relations.
DSFlashCopy.pm This library is an object class that holds all functions that maintain
FlashCopy relations.
DSGlobalMirror.pm This library is an object class as well. It holds all functions related to
DS8000 Global Mirror.
DSCLInator.pm This library is the meta class for the preceding classes. Commonly
used functions for Copy Services are located here.
DSlib.pm This library contains global functions with no relations to Copy
Services functions such as maintaining the DSCLIbroker environment
and querying and retrieving data from the repository.
DBbox.pm This library contains all necessary functions to communicate with the
storage subsystems.
Figure: DSCLIbroker libraries and the functions they provide:
DSCLInator.pm: _collect_options, _xml_parse, _xml_show, _read_License
DSbox.pm: setBox, get_devID, get_wwnn, get_boxName, CreateDSCLIscript, WriteDSCLIscript, SetCommandMode, DoDSCLIcommand
DSlib.pm: setDebugLevel, getDebugLevel, setSimulateMode, getSimulateMode, getConfig, queryStanza, getStanza, __check_results, get_cfgFile, openSession
DSPPRC.pm: _check_direction, _collect_pairs, mkpprcpath, rmpprcpath, lspprcpath, scanpprcpath, mkpprc, pauspprc, rmpprc, failoverpprc, failbackpprc, resumepprc, freezepprc, unfreezepprc, lspprc, scanpprc
DSFlashCopy.pm: _collect_pairs, scan_flash, mkflash, rmflash, resyncflash, reverseflash, revertflash, commitflash, unfreezeflash, lsflash
DSGlobalMirror.pm: mkgmir, pausgmir, resumegmir, rmgmir, showgmir, mksession, rmsession, lssession
Because some DSCLI commands use the same information, this information can be collected
in the same repository entity. For example, when creating a Metro Mirror or a Global Copy, the
same DSCLI command is used. Metro Mirror is denoted by the option -type mmir and Global
Copy uses the option -type gcp. The remaining parameters are the source and target device,
and the source and target volumes. A FlashCopy needs the same set of information except
that the target device is not required. All these copy pair relations can be described with the
same set of data, which is in the form of a database table or stanza file.
Figure 5-3 DSCLIbroker repository data model. The stanza files and their fields:
CopyPairs.cfg: name, Type [mmir|gcp|flash], Source, Target, srcvol, tgtvol, optset
DSdev.cfg: name, IPaddress1, IPaddress2, WWNN, DSCLIprof, pwfile, user
PPRCpath.cfg: name, srclss, tgtlss, srcport, tgtport, Source, Target, consist
Session.cfg: name, GlobalMirror, PPRCpairs, LSS, SessionID, Volumes
OptSet.cfg: name, param, value
GlobalMirror.cfg: name, Box, SessionID, subordpath, OptSet, LSS
This set of tables is a normalized data model, which in theory can be used by a relational
database system. For DSCLIbroker however, the tables are implemented as flat stanza files
because they can handle hundreds of thousands of entities without any problems. When the
data migrations are done, the data in the repository is obsolete and can be discarded. In other
engagements, the data in the repository must be maintained for a longer time. In these
engagements, the data management capabilities of a database system might be more useful
or even required.
As shown in Figure 5-3, each table has a name that is the key to a set of configuration data. A
table might have an entry that refers to a key name of another table. For example, in the table
CopyPairs.cfg, the entries for Source and Target reference a storage device defined in the
table DSdev.cfg.
The following section includes an example of a complete repository definition where an
application named SAPHR1 manages a copy relation using the DSCLIbroker. The whole
setup includes the following steps:
1. Define the copy relations.
2. Define the storage devices.
3. Define the paths.
CopyPair {
name = SAPHR1_ab
type = gcp
srcss = 84
tgtss = 8E
srcvol = 8410-841F
tgtvol = 8E10-8E1F
source = ATS_3
target = ATS_1
seqnun =
optset = gcp_cascade
}
In some cases, one or more command-line options are required multiple times. For example,
a Global Copy relation that is cascaded to an existing remote copy relation requires the option
-cascade. This option can be placed in the repository using the stanza file OptSet.cfg as
shown in Example 5-2. The defined option set gcp_cascade is referenced in the CopyPair
stanza with the tag optset, as shown in Example 5-1.
When this entry is referenced in a CopyPairs.cfg stanza, the option -cascade is the default
every time DSCLIbroker generates a DSCLI command for this relation.
DSdev {
name = ATS_3
IPaddress1 = 9.155.62.97
IPaddress2 = 9.155.62.97
WWNN = 5005076303FFC1A5
devID = IBM.2107-7520781
DSCLIprof = script_20781.profile
PWfile = script_20781.pwfile
user = script
}
Remember: As you can see in the stanza, password files are used for authentication to the
DSCLI. When writing scripts that run commands against the DSCLI, an automated
authentication to the DSCLI is useful. Otherwise, you must type in the user name and
password each time the script is run. DSCLI offers a secure method to log in to the DSCLI
automatically. For more information, see the Command-Line Interface User’s Guide for the
DS6000 series and DS8000 series, GC53-1127, at:
http://www-05.ibm.com/e-business/linkweb/publications/servlet/pbi.wss
After identifying the storage devices, the paths must be specified. To establish the paths for
copy relations, a physical link must be established first (see 4.4.2, “Creating Remote Mirror
and Copy paths” on page 83). Verify that the links are available and the required information
for the stanza entries can be obtained using the script lsavailpprcport.pl, as shown in
Example 5-4.
In this example, two ports have physical connections to the target storage device. Use this
information to create the stanza file for the paths. For each repository stanza file, a script is
available that generates the stanza entries.
Use the gen_pprcpaths.pl script to generate path stanzas as shown in Example 5-5.
The option -name SAPHR1 defines the base name of the stanza. The option -d ‘f:ab’
specifies that the paths data is created for the forward direction only. This option adds _ab to
the stanza base name, which results in the real stanza name SAPHR1_ab. The option -l is
used to specify the LSS relation as given. The option -p is used for the port pairs as shown by
the lsavailpprcport.pl script in Example 5-4 on page 111.
PPRCpath {
name = SAPHR1_ab
srclss = 82
tgtlss = 8E
srcport = I0142
tgtport = I0400
Source = ATS_1
Target = ATS_3
consist = no
}
#############
# LSS 84 -> 8E
# Box ATS_1 -> ATS_3
#############
PPRCpath {
name = SAPHR1_ab
srclss = 84
tgtlss = 8E
srcport = I0141
PPRCpath {
name = SAPHR1_ab
srclss = 84
tgtlss = 8E
srcport = I0142
tgtport = I0400
Source = ATS_1
Target = ATS_3
consist = no
}
The following examples are based on the configuration data that was created in the previous
section. You can now establish the Global Copy relation using the script mkpprc.pl as shown
in Example 5-7. The only required parameter you must specify is the name of the stanza
where all copy relations are defined. SAPHR1_ab is the corresponding stanza as shown in
Example 5-1 on page 110.
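The invocation has this shape (the stanza name comes from Example 5-1; compare the simulate run shown later in Example 5-9):

$ mkpprc.pl -n SAPHR1_ab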
All other scripts work in the same way. The only required parameter is the name of the copy
relation located in the stanzas. Additional parameters can be supplied depending on what you
want to do. For an overview of the available options, use the option -help or run the command
without any parameters. Example 5-8 shows how to pause the Global Copy.
In the previous examples, no information is provided about how the script is generating the
DSCLI commands. To make the scripts more verbose, the -d (debug) option can be used.
There are four levels of debug information available. The first debug level shows the
generated DSCLI command.
Another helpful option is -simulate, which displays the DSCLI command but does not run it.
This option can be used to verify whether the generated DSCLI command is the one you are
expecting before it takes effect. The option -d 1 also shows the generated command, but the
command is run.
Example 5-9 shows the output when using the debug and simulate options.
Example 5-9 Simulate mode and verbose command execution for mkpprc.pl
$ mkpprc.pl -n SAPHR1_ab -simulate
When a PPRC relation has to be failed over, the DSCLI command failoverpprc must be
issued at the target storage device. Provide the pair relations in the reverse order. The
failoverpprc.pl script does both automatically. In Example 5-10 the generated
failoverpprc command is displayed. Comparing it to the Example 5-9 on page 114, the
device IDs of the storage devices and the pair relations are reversed.
The following example demonstrates a simplified migration to target DS8000 storage devices
using a Global Copy replication. The following steps are processed by the scripts:
Create the paths to the target storage device.
Establish a Global Copy and wait until all tracks have been copied.
Pause the Global Copy.
Fail over Global Copy to the target site.
Using this script, you can perform a series of migrations without any changes to the code. For
each application that needs to be migrated, you must change the data in the corresponding
stanza files.
Example 5-11 shows the complete script. This script uses the libraries DSlib.pm, where the
debug levels and the simulation mode are included, and DSPPRC.pm, where all remote copy
functions are defined. The libraries are referenced in lines 6 and 7 of the script.
1 #!/usr/bin/perl -w
2
In lines 9 - 27 of the script, the required command options are declared. The -name option
is required, and therefore lines 24 - 27 check whether this option is supplied. If the
option is not given, an error message is reported and the script exits.
Verify the DSCLI commands that are generated by this script before they take effect. For
this purpose, the simulation mode is enabled when the command-line option -simulate is
specified. The simulation mode is engaged in line 33.
In lines 35 - 44, the necessary objects are defined. There is one object for each storage
device required, and an object that provides the remote copy function to the script.
In lines 49 - 51, the DSCLI commands for the paths and the copy pairs are generated. In this case, a Metro Mirror is established with the option -wait. With this option, all tracks are copied to the secondary site before the operation continues. When the copying is complete, the copy relation is paused using the pausepprc command. This command sequence must be applied to the primary storage device.
In line 56, all DSCLI commands generated in the previous lines are sent to the primary
storage device for execution.
In line 59, the failover to the secondary device is applied in the same manner; this command is sent to the auxiliary storage device. This completes the script.
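The following is a minimal sketch of the structure that is described above. The libraries DSlib.pm and DSPPRC.pm are the ones named in the text, but the helper and method names used here (setSimulation, mkpprcpath, mkpprc, pausepprc, failoverpprc), the DSbox module name, and the constructor arguments are assumptions for illustration only, and the line numbering does not match Example 5-11.
#!/usr/bin/perl -w
use strict;
use Getopt::Long;
use DSlib;     # debug levels and simulation mode
use DSPPRC;    # remote copy functions
use DSbox;     # storage device objects (module name is an assumption)

# Declare the command options; -name is required
my ($name, $simulate, $debug);
GetOptions('name=s' => \$name, 'simulate' => \$simulate, 'd=i' => \$debug);
die "The option -name is required\n" unless defined $name;

# Engage the simulation mode when -simulate is supplied (helper name assumed)
DSlib::setSimulation(1) if $simulate;

# One object per storage device, plus the object providing the remote copy functions
my $source = DSbox->new();
my $target = DSbox->new();
my $PPRC   = DSPPRC->new($source, $target);

# Generate the DSCLI commands for the paths and the copy pairs (method names assumed)
$PPRC->mkpprcpath($name);
$PPRC->mkpprc($name, '-wait');   # copy all tracks before continuing
$PPRC->pausepprc($name);

# Send the generated commands to the primary storage device for execution
$source->DoDSCLIcommand('server');

# Apply the failover in the same manner; this is sent to the auxiliary storage device
$PPRC->failoverpprc($name);
$target->DoDSCLIcommand('server');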
Example 5-12 shows the output of the simulation mode of the script, which is the generated
sequence of DSCLI commands. The commands, up to the pausepprc command, are sent to
the primary storage device. The failoverpprc command is sent to the target storage device.
5.2.1 Architecture
The DSCLIbroker itself is organized as a client/server application. The server part includes the broker itself. The broker is a daemon that waits for a request from a client to open a connection to a storage device. This connection is called a session. The session is opened by forking a child process in which the DS8000 command-line interface (DSCLI) is started. The session waits at the input prompt. The broker manages the communication with the client.
The server also hosts the repository with all its stanza files. In this way, the configuration data
is in one centralized spot and does not need to be distributed to the administrators.
(Figure: DSCLIbroker architecture. Administrators connect as clients to the broker, which starts the sessions and accesses the repository.)
The client part of the DSCLIbroker is dedicated to the storage administrators, who work with
one or more storage devices. Every DSCLIbroker script that runs at the client site
communicates with the server to perform the following tasks:
Retrieve data from the repository
Compose the DSCLI commands
Send the commands back to the server
The server forwards the commands to the corresponding DSCLI daemon waiting in the
background. When the command is run by the DSCLI, the results are sent back to the client.
5.2.2 Communication
The communication between the DSCLIbroker client and the server is organized in sessions.
The storage administrator, working as a client, must establish a session to a storage device.
Use the script startSession.pl to send a request with a name that matches the corresponding entry in the DSdev.cfg stanza file of the repository. The broker then forks a worker process that starts the DSCLI for that storage device.
The broker sends the client a unique session ID and the port where the worker is listening.
This information is located in a dedicated session file at the client. The broker then sends a
lssi command to the DSCLI in the background. The output of this command is sent back to
the client. Use this output to verify whether the session was opened against the correct
DS8000 storage system.
A session can also be closed. The command stopSession.pl sends an appropriate request
to the corresponding worker process that quits the DSCLI and terminates the process.
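A short sketch of the session handling (the -n option letter is an assumption based on the other DSCLIbroker scripts; the output line is the one shown in Example 5-13):
$ startSession.pl -n ATS_3
Session on port 55559 for ATS_3 with id 1311037799 was created successfully!
$ stopSession.pl -n ATS_3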
5.2.3 Users
Multiple users can be defined in DSCLI for each storage system. Users can also be defined
using the DSCLIbroker. To do so, define an additional stanza in the DSdev.cfg stanza file for
the same storage system. The user name must be declared (Example 5-3 on page 111). The
session must be started with that user name supplied using the option -user as shown in
Example 5-13.
Session on port 55559 for ATS_3 with id 1311037799 was created successfully!
$_
Although with DSCLI a user can log in multiple times to the same storage device, with
DSCLIbroker only one session per storage device is allowed.
The authentication to the DSCLI is done using a password file, which must be generated
manually in DSCLI. The password files must be located in the directory ./pwfiles of the
home directory of the DSCLIbroker scripting framework. This password file is an encrypted
file, and access should be restricted to read access for the administrator only. The password
file is generated with the DSCLI command managepwfile. See the DSCLI online help or the
DSCLI documentation for details.
The most important task is to start and stop the broker daemon using the script Broker.pl. The broker daemon is started with the parameter start. After the broker daemon is started, the DSCLIbroker server is ready to receive requests for starting sessions from any client. To clean up the DSCLIbroker server, run the
same script with the parameter stop. This parameter stops all worker processes running in
the background, and then stops the broker daemon itself.
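For example, on the DSCLIbroker server:
$ Broker.pl start
$ Broker.pl stop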
To monitor the activities of the broker and the worker processes, a logging mechanism has
been implemented. The log files are located in the directory ~/log in the home directory of the DSCLIbroker scripting framework.
5.2.5 Licenses
The DSCLIbroker is licensed per function and number of storage devices that are managed
by the DSCLIbroker. For this reason, the serial numbers of the storage devices must be
registered. There is no capacity-based license. For migrations, the DSCLIbroker can be
ordered as a tailored STG Lab Service, in which the DSCLIbroker is available for no additional
fee.
In line 6, the status of the paths is checked to verify that all paths defined in the repository are available, and the results are displayed. In line 9, the script asks you to confirm the path status before it continues. If you do not confirm, the script quits with a message. Otherwise, the Global Copy is established in line 15 using the DSCLIbroker script mkpprc.pl. The synchronization of the Global Copy is monitored in line 16 using lspprc.pl -waitnull. With this option, the script waits until the synchronization completes. In line 19, you are asked whether everything is ready to fail over. If not, the script stops with a message. If you enter yes, the script failoverpprc.pl is applied.
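A hedged sketch of this flow, written as a Perl wrapper that calls the DSCLIbroker scripts through the shell. The path-check script name (lspprcpath.pl), the -n option letters, and the prompt texts are assumptions, and the line numbers mentioned in the text refer to the original example, not to this sketch:
#!/usr/bin/perl -w
use strict;

my $name = 'SAPHR1_ab';   # stanza with the copy relations

# Check the status of all paths defined in the repository and display the results
system("lspprcpath.pl -n $name") == 0 or die "Path check failed\n";

print "Are all paths available? (yes/no) ";
chomp(my $answer = <STDIN>);
die "Script stopped: paths are not available.\n" unless $answer eq 'yes';

# Establish the Global Copy and wait until the synchronization completes
system("mkpprc.pl -n $name")           == 0 or die "mkpprc.pl failed\n";
system("lspprc.pl -n $name -waitnull") == 0 or die "lspprc.pl failed\n";

print "Is everything ready to fail over? (yes/no) ";
chomp($answer = <STDIN>);
die "Script stopped before the failover.\n" unless $answer eq 'yes';

# Fail over the Global Copy to the target site
system("failoverpprc.pl -n $name") == 0 or die "failoverpprc.pl failed\n";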
You can also use the DSCLIbroker scripts from other scripting languages, such as Python. In this case, use system calls that pass the command execution to the shell.
...
In this example, the source and the target volumes of a copy relation are retrieved. Include the
DSlib library with the use statement somewhere at the top of your program. With getConfig,
the CopyPairs.cfg stanza is queried for a certain relation as shown in the example. The result
is an array of hashes, where each key of a hash element corresponds to a tag entry of the
stanza. The value in each hash entry is the data you are looking for. To unpack this data, a
simple for loop is used.
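A minimal sketch of this pattern; the exact signature of getConfig and the tag names Source and Target inside the CopyPairs.cfg stanza are assumptions for illustration:
use strict;
use DSlib;    # provides getConfig

# Query the CopyPairs.cfg stanza for one copy relation (arguments are assumptions)
my @pairs = getConfig('CopyPairs.cfg', 'SAPHR1_ab');

# Each element is a hash; every key corresponds to a tag entry of the stanza
for my $pair (@pairs) {
    print "source: $pair->{Source}  target: $pair->{Target}\n";
}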
The DSCLIbroker uses the XML output format from the DSCLI to parse for a required
parameter. In the following example, the Out-of-Sync values for a copy relation are collected
from the output of the lspprc.pl -sum command (Example 5-16).
#
# Create two new boxes because it is a remote copy
my $source=DSbox->new();
my $target=DSbox->new();
#
# Create PPRC thingy
my $PPRC=DSPPRC->new($source, $target);
#
# send 'lspprc -fmt xml' command using the pairs of 'myCopyRelation'
my @SCAN=$PPRC->scanpprc('myCopyRelation','','','-l');
#
# Collect the OOS from each pair
my $totalOOS=0;
for my $pair (@SCAN) {
$totalOOS = $totalOOS + $pair->{outsynctrks};
}
However, the underlying libraries that send the commands to the broker can be used to issue any DSCLI command. You can use them to allocate volumes, create volume groups, and so on.
Example 5-17 shows how to set a host connection for a target host system.
#
# Create a box object
#
my $box=DSbox->new('ATS_1');
#
# Create DSCLI script
#
$box->WriteDSCLIscript('managehostconnect -volgrp v12 42');
$box->WriteDSCLIscript('lsconnect -volgrp v12 42');
#
# Execute DSCLI script
#
my $ret=$box->DoDSCLIcommand ('server');
exit($ret);
In this script, an object $box of the storage device 'ATS_1' is allocated. All DSCLI commands
are generated and run using this object. Any valid DSCLI command can be defined with the
function $box->WriteDSCLIscript.
Attention: DSCLI commands that are sent to the broker must not be interactive. When using removal commands such as rmpprc and rmpprcpath, make sure that the -quiet option is used.
In this example, two DSCLI commands are collected into a script that is stored in the object. This DSCLI script is then sent to the storage device and run using the function $box->DoDSCLIcommand.
Typically, a migration consists of several storage operations and several operations that must be done on the servers. To determine where automation can be implemented, first define the technical workflow and the responsibilities. Usually a classification of organizational responsibilities is in place, reflected in dedicated storage management, server management, and application management. Automation normally does not cross these management boundaries. A meaningful automation includes all steps that can be handled by a single operator. End the script at the handover to the next system management instance in the workflow.
Generally, implement simulation mode for the migration scenario so you can review the steps
of the script before they take effect. During this process, print out detailed logging information
and the DSCLI commands. In the simulation mode, no DSCLI commands are run. However, commands that only display information can be issued and their output checked for accuracy before the real execution.
You can maintain the repository data by implementing a script that generates the data using the scripts mentioned in 5.1.4, “Additional useful scripts” on page 118. As input, these scripts require a list of volumes that are mapped to the applications and to the migration wave that is applied next.
Important: You need to know which volumes are subject to the migration and which applications are affected before generating repository data.
The DSCLIbroker provides a script that can generate these DSCLI scripts automatically. For this purpose, the repository data for the copy pairs, the storage devices, and optionally the paths must be available. When all the scenarios are defined, all DSCLI commands are extracted and collected in a control file. This control file is used to produce all DSCLI commands with the required parameters and volumes. Example 5-18 shows a control file that creates all commands required for a Global Mirror migration.
Example 5-18 Control file example that created all commands for Global Mirror
#
# cmd             option               relation  outputfile
mkpprc.pl                               ab        LBG_mkpprc
mkflash.pl                              bc        LBG_mkGMflash
mksession.pl                            ab        LBG_mksession
pausepprc.pl                            ab        LBG_pauseppc
failoverpprc.pl                         ab        LBG_failoverpprc
reverseflash.pl   -fast -tgtpprc        bc        LBG_reverseflash
lspprc.pl                               ab        LBG_lspprc
lspprc.pl         -r                    ab        LBG_lspprc_remote
lspprc.pl         -l                    ab        LBG_lspprc_long
lsflash.pl                              bc        LBG_lsflash
lsflash.pl        -l                    bc        LBG_lsflash_long
rmpprc.pl                               ab        LBG_rmpprc
rmpprc.pl         -direction reverse    ab        LBG_rmpprc_remote
rmsession.pl                            ab        LBG_rmsession
The DSCLI commands are written to text files in a certain directory. Each text file holds the commands for a single step of the scenario. Finally, the DSCLI scripts themselves must be placed in the directory where the command files are located.
In Example 5-18 on page 126, a control file was used to generate the DSCLI scripts. The format of the file used in this example is generally the same as for any other control file. The
required parameters are the command itself, a list of parameters, and the stanza entry to
which the operation applies. The other fields in the control file are optional, for example a
name for a log file or a comment.
A single script can be used to read the content of a selected control file and run, step by step,
the listed commands with their parameters. The control file can be selected using a
command-line option. For each execution of a scenario, write logging information into a dedicated log file. Every DSCLIbroker script has an option -banner, which allows you to
supply comments that are printed at the top of each execution.
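A hedged Perl sketch of such a control-file runner. The field layout follows Example 5-18 and assumes that every row ends with a log file name; the -n option used to pass the stanza entry to the called scripts is also an assumption:
#!/usr/bin/perl -w
use strict;
use Getopt::Long;

my ($ctlfile, $banner);
GetOptions('file=s' => \$ctlfile, 'banner=s' => \$banner);
die "usage: runscenario.pl -file <controlfile> [-banner <text>]\n" unless $ctlfile;

open(my $ctl, '<', $ctlfile) or die "Cannot open $ctlfile: $!\n";
while (my $line = <$ctl>) {
    chomp $line;
    next if $line =~ /^\s*#/ or $line =~ /^\s*$/;   # skip comments and empty lines

    # Fields per Example 5-18: command [options] relation logfile
    my ($cmd, @fields) = split ' ', $line;
    my $logfile  = pop @fields;        # log file for this step
    my $relation = pop @fields;        # stanza entry the step applies to
    my @options  = @fields;            # optional script parameters

    my $call = join ' ', $cmd, @options, '-n', $relation;
    $call   .= " -banner '$banner'" if $banner;
    print "Running step: $call (log: $logfile)\n";
    system("$call >> $logfile 2>&1") == 0 or die "Step failed: $call\n";
}
close $ctl;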
The advantage of this approach is that only one program must be developed at the beginning of the migration project. All migration scenarios are defined and maintained using the control
files. No programming is required when a scenario must be changed.
Usually you maintain this information using a data management system such as Tivoli
TotalStorage Productivity Center for Replication, or even just a spreadsheet. In any case, a
data export function that provides comma-separated value data is needed to import the data
into the DSCLIbroker repository.
The success of every migration depends on the quality of the data in the repositories. The information in the repository must represent the exact production environment. Use the DSCLIbroker as an interface for each migration; it provides the vital connection between the production environment and the migration environment. Re-import the production data using this interface into the repository of the DSCLIbroker before each migration.
Assuming you keep your configuration database at the most current state, import this data to
the DSCLIbroker repository just before the next migration is started. A script can read a CSV export of your configuration data and generate a whole new set of DSCLIbroker repository data
automatically. Use the repository generation scripts as described in “Generating repository
data” on page 125.
5.6.1 Overview
The client was running several data centers in a metropolitan area. In a consolidation project, data center sites had to be migrated to a new, larger site. In this context, two DS8000 storage devices and the connected servers had to be moved to the new data center location. Both storage devices were the primary storage of a Global Mirror configuration. The new location was intended to be the new primary site, while the target site for the Global Mirror remained unchanged.
The whole migration comprised several thousand primary volumes and hundreds of applications. The server platforms were mostly AIX-based, but included VMware ESX servers, OpenVMS, and zSeries servers. The Global Mirror was managed using IBM TotalStorage Productivity Center for Replication. After the migration, a new TotalStorage Productivity Center for Replication server was established. The migration was organized in groups of applications that had to be moved one by one to the new location.
(Figure: Global Mirror configuration with A, B, and C volumes, showing the migration of the primary to the new site, which becomes the new primary site.)
Because of the huge number of applications, the projected duration of the whole project was several months. The main reason for this long duration was data center logistics such as change management, hardware ordering, and installation management. Also, the client wanted the current production to be impacted as little as possible.
The general approach was to establish another Global Copy at the new data center by cascading the target volumes of the current Global Copy. During the initial copy phase, the production running on the original volumes was not affected. The volumes then had to be removed from the Global Mirror session, and the direction of the new Global Copy was reversed. The target volumes and the FlashCopy relation at the secondary site were reused for the new Global Mirror. The new Global Mirror was started at the new data center site.
5.6.2 Analysis
In the analysis phase, the planned scenario had to be proven feasible. The production environment was analyzed and dependencies were discovered. In this case, the applications had a strong dependency on the server hardware because most applications were hosted in logical partitions (LPARs) that shared hardware. Therefore, the applications had to be migrated in groups according to their server hardware.
The original idea for the migration was to use TotalStorage Productivity Center for Replication
to establish the cascaded Global Copy relation and to transfer that relationship to the new
Global Mirror session. But it is not possible to establish the cascaded Global Copy relation to
the new data center site without removing the current Global Mirror session. Therefore all
migration steps were planned using DSCLI commands.
DSCLIbroker provides a solution for both these concerns. Mapping applications to volumes is
covered by supplying the repository with the correct data. In addition, the scripting framework
was used to write scripts that fulfill the automation requirements.
Scripts to generate the repository data and the migration steps were developed and tested.
For the repository data, the customer provided the TotalStorage Productivity Center for
Replication session export. They also provided a spreadsheet with the applications, the
volumes, and the group they were to be migrated with. Using both data sources, the
repository data was generated for all entities.
The test environment was also used to educate the client about using the automation scripts
and provide hands-on training.
5.6.4 Planning
With the tools and the scenarios developed in the testing phase, the final planning was completed. All information was put in place and used to generate the instructions for every required production change management task.
A detailed time line plan was finalized after the test and development phase. All roles were
assigned so the complete migration could be conducted by the customer.
5.6.5 Running
The migration execution was divided into two general steps.
A trial run was scheduled a couple of days before the go-live migration. In this trial run, the migration was run until the data was failed over, from the storage perspective, to the new data center. However, production continued at the primary site. At the new data center, the
applications were started using the migrated data and a health check was performed. After all
tests were passed, the application was ready for the go-live migration.
In the second step, the go-live migration was run. This full migration ended when the
production was started in the new data center.
5.6.6 Validating
Most validation was done during the trial run. This validation made sure that the data was
replicated as expected, which was the final approval for the go-live migration.
At a high level, the steps to migrate to XIV using the XIV Data Migration function are:
1. Establish connectivity between the source device and XIV. The source storage device
must have Fibre Channel or iSCSI connectivity with the XIV.
Important: If the IP network includes firewalls between the mirrored XIV systems, TCP
port 3260 must be open within the firewalls for iSCSI replication to work.
The IBM XIV Data Migration solution offers a smooth data transfer for the following reasons:
Requires only a single short outage to switch LUN ownership. This outage allows the
immediate connection of a host server to the XIV Storage System. This connection
provides you with direct access to all existing LUNs before they are copied to the XIV
Storage System.
Synchronizes data between the two storage systems using a method that is not apparent to the host, running on the XIV Storage System as a background process with minimal performance impact.
Supports data migration from practically all storage vendors.
Can be used with Fibre Channel or iSCSI.
Can be used to migrate SAN boot volumes.
The XIV Storage System manages the data migration by simulating host behavior. When
connected to the storage device containing the source data, XIV looks and behaves like a
SCSI initiator. In other words, it acts like a host server. After the connection is established, the
storage device containing the source data acts as though it is receiving read or write requests
from a host. In fact, the XIV Storage System is doing a block-by-block copy of the data, which
the XIV then writes onto an XIV volume.
The connections between the two storage systems must remain intact during the entire
migration process. If at any time during the migration process the communication between the
storage systems fails, the process also fails. In addition, if communication fails after the
migration reaches synchronized status, writes from the host will fail if the source updating
option was chosen. For more information, see 6.2, “Handling I/O requests” on page 135. The
process of migrating data is performed at a volume level, as a background process.
The data migration facility in XIV firmware revisions 10.1 and later supports the following
functions:
Up to four migration targets can be configured on an XIV. A target is either one controller in
an active/passive storage device or one active/active storage device. XIV firmware revision
10.2.2 increased the number of targets to 8. The target definitions are used for both
Remote Mirroring and data migration. Both Remote Mirroring and data migration functions
can be active at the same time. An active/passive storage device with two controllers can
use two target definitions unless only one of the controllers is used for the migration.
The XIV can communicate with host LUN IDs ranging from 0 to 512 (in decimal). This
function does not necessarily mean that the non-XIV disk system can provide LUN IDs in
that range. You might be restricted by the non-XIV storage controller to use only 16 or 256
LUN IDs (depending on hardware vendor and device).
Up to 4000 LUNs can be concurrently migrated.
Important: The source system is called a target when setting up paths between the XIV
Storage System and the non-XIV storage. This term is also used in Remote Mirroring, and
both functions share terminology for setting up paths for transferring data.
Source updating
This method for handling write requests ensures that both storage systems are updated when
a write I/O is issued to the LUN being migrated. Source updating keeps the source system
updated during the migration process, and the two storage systems remain in sync after the
background copy process completes. The write commands are only acknowledged by the XIV
Storage System to the host after the following steps occur:
Writing the new data to the local XIV volume
Writing data to the source storage device
Receiving an acknowledgement from the non-XIV storage device.
If there is a communication failure between the storage systems or a write fails, the XIV
Storage System also fails the write operation to the host. This process ensures that the
systems remain consistent. Application requirements will determine whether you use the
Keep Source Updated option.
No source updating
This method for handling write requests ensures that only the XIV volume is updated when a
write I/O is issued to the LUN being migrated. This method decreases the latency of write I/O
operations because write requests are only written to the XIV volume. This limits your ability
to back out a migration unless you have another way of recovering updates to the volume
being migrated. You can avoid this risk by shutting down the host for the duration of the
migration.
Tip: Do not select Keep Source Updated if you are migrating a boot LUN. Not selecting it allows you to quickly back out of the migration of the boot device if a failure occurs.
Important: If multiple paths are created between an XIV and an active/active storage
device, the same SCSI LUN IDs must be used for each LUN on each path. Otherwise, data
corruption might occur. Configure a maximum of two paths per target because defining
more paths does not increase throughput. With some storage arrays, defining more paths
adds complexity and increases the likelihood of configuration issues and corruption.
Define the target to the XIV per non-XIV storage controller (controller, not port). Define at least one path from that controller to the XIV. All volumes active on the controller can be migrated using the defined target for that controller. For example, suppose that the non-XIV storage device contains two controllers (A and B):
Define one target (Ctrl+A) with at least one path between the XIV and one controller on the non-XIV storage device (controller A). All volumes active on this controller can be migrated using this target. When defining the XIV initiator to the controller, define it as not supporting fail-over if that option is available on the non-XIV storage array. By doing so, you avoid unwanted LUN failover between the controllers during the migration.
Tip: If your controller has two target ports (DS4700, for example), both can be defined as links for that controller target. Make sure that the two target links are connected to separate XIV modules. Connecting the links this way keeps one link available if a module fails.
Important: Certain examples shown in this chapter are from an IBM DS4000®
active/passive migration with each DS4000 controller defined independently as a target to
the XIV Storage System. If you define a DS4000 controller as a target, do not define the
alternative controller as a second port on the first target. Doing so causes unexpected
issues such as migration failure, preferred path errors on the DS4000, or slow migration
progress.
Because the non-XIV storage device views the XIV as a host, the XIV must connect to the
non-XIV storage system as a SCSI initiator. Therefore, the physical connection from the XIV
must be from initiator ports on the XIV. The default initiator port for Fibre Channel is port 4 on
each active interface module. The initiator ports on the XIV must be fabric attached, so they
need to be zoned to the non-XIV storage system.
Use two physical connections from two separate modules on two separate fabrics for
redundancy. However, redundant pathing is not possible on active/passive controllers.
It is possible that the host might be attached through one medium (such as iSCSI), whereas
the migration occurs through the other. In this case, the host-to-XIV connection method and
the data migration connection method are independent of each other.
Depending on the non-XIV storage device vendor and device, it might be easier to zone the
XIV to the ports where the volumes being migrated are already present. Zoning in this way
might avoid needing to reconfigure the non-XIV storage device. For example, in EMC
Symmetrix/DMX environments, it is easier to zone the fiber adapters (FAs) to the XIV where
the volumes are already mapped.
If you already zoned the XIV to the non-XIV storage device, the WWPNs of the XIV initiator ports might be displayed in the WWPN list. Whether they are displayed depends on the non-XIV storage device and storage management software. If they are not there, you must add them manually. The WWPNs might not be displayed if a LUN0 still needs to be mapped or if SAN zoning is not done correctly.
The XIV must be defined as a Linux or Windows host to the non-XIV storage device. If the
non-XIV device offers several variants of Linux, you can select SuSE or RedHat Linux, or
Linux x86. Selecting the host defines the correct SCSI protocol flags for communication
between the XIV and non-XIV storage device. The principal criterion is that the host type must
start LUN numbering with LUN ID 0. If the non-XIV storage device is active/passive, check
whether the host type selected affects LUN failover between controllers. For more
information, see 6.12, “Other considerations” on page 173.
There might also be other vendor-dependent settings. For more information, see 6.12, “Other
considerations” on page 173.
Tip: If Create Target is disabled, you have reached the maximum number of targets.
The number of allowed targets includes both migration and mirror targets.
3. Make the appropriate entries and selections, then click Define (Figure 6-6).
– Target Name: Enter a name of your choice.
– Target Protocol: Select FC from the list.
Tip: The data migration target is represented by an image of a generic rack. If you want
to delete or rename the migration device, right-click that rack.
5. Right-click the dark box that is part of the defined target and select Add Port (Figure 6-8).
6. Enter the WWPN of the first (fabric A) port on the non-XIV storage device zoned to the
XIV. There is no list, so you must manually type or paste in the correct WWPN.
Tip: You do not need to use colons to separate every second number. It makes no
difference if you enter a WWPN as 10:00:00:c9:12:34:56:78 or 100000c912345678.
7. Click Add.
Tip: Ensuring that LUN0 is visible on the non-XIV storage device down the controller path
that you are defining helps ensure functional connectivity. Connections from XIV to
DS4000, EMC DMX, or Hitachi HDS devices require a real disk device to be mapped as
LUN0. However, other devices, such as IBM ESS 800 and EMC CLARiiON, do not need a
LUN to be allocated to the XIV.
Note: In clustered environments, you might choose to work with only one node until the
migration is complete. If so, consider shutting down all other nodes in the cluster.
3. Perform a point-in-time copy of the volume on the non-XIV storage device if that function is
available. Perform the copy before changing any host drivers or installing new host
software, particularly if you are going to migrate boot from SAN volumes.
4. Unzone the host from non-XIV storage. The host must no longer access the non-XIV
storage system after the data migration is activated. The host must perform all I/O through
the XIV.
Important: You must unmap the volumes from the host during this step, even if you
plan to power the host off during the migration. The non-XIV storage presents only the
migration LUNs to the XIV. Do not allow the host to detect the LUNs from both the XIV
and the non-XIV storage.
Important: You cannot use the XIV data migration function to migrate data to a source
volume in an XIV remote mirror pair. If you need to use a remote mirror pair, migrate the
data first and then create the remote mirror after the migration is completed.
Important: If the non-XIV device is active/passive, the source target system must
represent the controller (or service processor) on the non-XIV device that currently
owns the source LUN. Find out from the non-XIV storage which controller is
presenting the LUN to the XIV.
– Source LUN: Enter the decimal value of the host LUN ID as presented to the XIV from
the non-XIV storage system. Certain storage devices present the LUN ID as hex. The
number in this field must be the decimal equivalent. Ensure that you do not accidentally
use internal identifiers that you might also see on the source storage systems
management windows. In Figure 6-11 on page 145, the correct values are in the LUN
column.
Note: Do not select Keep Source Updated if you are migrating the boot LUN so
that you can quickly back out if a failure occurs.
4. Click Define.
The migration begins as shown in Figure 6-13. Define Data Migration queries the configuration of the non-XIV storage system and creates an equal-sized volume on the XIV. To check whether you can read from the non-XIV source volume, run Test Data Migration.
Tip: If you are migrating volumes from a Microsoft Cluster Server (MSCS) that is still
active, migration testing might fail due to reservations on the source LUN placed by MSCS.
You must bring the cluster down properly to get the test to succeed. If the cluster is not
brought down properly, errors occur either during the test or when activated. The SCSI
reservation must then be cleared for the migration to succeed.
Important: After it is activated, the data migration can be deactivated. However, after
deactivating the data migration, the host is no longer able to read or write to the migration
volume and all host I/O stops. Do not deactivate the migration with host I/O running. If you
want to abandon the data migration, see the back-out process described in section 6.10,
“Backing out of a data migration” on page 169.
Right-click to select the data migration object/volume and choose Activate. Activate all
volumes being migrated so that they can be accessed by the host. The host has read and
write access to all volumes, but the background copy occurs serially volume by volume. If two
targets (such as non-XIV1 and non-XIV2) are defined with four volumes each, two volumes
are actively copied in the background: One volume from non-XIV1 and another from
non-XIV2. All eight volumes are accessible by the hosts.
6.3.4 Defining the host on XIV and bringing the host online
Defining the host on the XIV and bringing it online involves the following steps:
Zoning the host to XIV
Defining the host being migrated to the XIV
Mapping volumes to the host on XIV
Bringing the host online
Important: The host cannot read the data on the non-XIV volume until data migration is
activated. The XIV does not pass through (proxy) I/O for an inactive migration. If you use
the XCLI dm_list command to display the migrations, ensure that Yes is shown in the
Active column for every migration.
The host must be configured using the XIV host attachment procedures. These procedures
include the following steps:
Removing any existing/non-XIV multi-pathing software
Installing the native multi-pathing drivers and recommended patches
Installing the XIV Host attachment kit, as identified in the XIV Host Attachment Guides
Installing the most current HBA driver and firmware
One or more reboots might be required. Documentation and software can be found at:
http://www.ibm.com/support/search.wss?q=ssg1*&tc=STJTAG+HW3E0&rs=1319&dc=D400&dtm
When volume visibility is verified, the application can be brought up and operations verified.
Tip: In clustered environments, bring only one node of the cluster online initially after the
migration is started. Leave all other nodes offline until the migration is complete. After the
migration is complete, update all other nodes (driver, host attachment package, and so on),
in the same way as the primary node. For more information, see “Performing pre-migration
tasks for the host being migrated” on page 145.
After all of the data in a volume is copied, the data migration achieves synchronization status.
After synchronization is achieved, all read requests are served by the XIV Storage System. If
source updating was selected, the XIV continues to write data to both itself and the outgoing
storage system until the data migration is deleted. Figure 6-17 shows a completed migration.
Important: If performing an online migration, do not deactivate the data migration before
deletion. Deactivating before deletion causes host I/O to stop, which can cause data
corruption.
Restriction: For safety purposes, you cannot delete an inactive or unsynchronized data migration from the Data Migration panel. An unfinished data migration can be deleted only by deleting the relevant volume from the Volumes > Volumes & Snapshots section in the XIV GUI.
When using the XIV GUI, every command issued is logged in a text file with the correct syntax
so you can review it later. This log is helpful for creating scripts. If you are running the XIV GUI
under Microsoft Windows, look for a file named guicommands_<todays date>.txt. This file is typically found in the following folder:
C:\Documents and Settings\<Windows user ID>\Application Data\XIV\GUI10\logs
The commands in the next few pages are in the order in which you must run them, starting
with the commands to list all current definitions. You need these definitions when you delete
migrations.
List targets. Syntax: target_list
List target ports. Syntax: target_port_list
To make these changes permanent, update the relevant profile, making sure that you export
the variables to make them environment variables.
Tip: It is also possible to run XCLI commands without setting environment variables with
the -u and -p switches.
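A sketch for a UNIX-style profile; the variable names XIV_XCLIUSER and XIV_XCLIPASSWORD and the -m switch for the system IP address follow common XCLI conventions, but verify them against your XCLI documentation:
# In the user profile (for example ~/.profile):
export XIV_XCLIUSER=admin
export XIV_XCLIPASSWORD=<password>

# Alternatively, supply the credentials on each call with the -u and -p switches:
xcli -u admin -p <password> -m <XIV management IP> dm_list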
Run an equivalent script or batch job to run the data migrations as shown in Example 6-4.
As an example, consider the ESS 800 volume 00F-FCA33 as depicted in Figure 6-26 on
page 167. The size reported by the ESS 800 web GUI is 10 GB, which suggests that the
volume is 10,000,000,000 bytes in size. The ESS 800 displays volume sizes using decimal
counting. The AIX bootinfo -s hdisk2 command reports the volume as 9,536 MiB, which is 9,999,220,736 bytes (1,048,576 bytes per MiB). Both of these values are too small.
When the volume properties are viewed on the volume information window of the ESS 800
Copy Services GUI, it correctly reports the volume as being 19,531,264 sectors. This number
is equivalent to 10,000,007,168 bytes (512 bytes per sector). When the XIV automatically
creates a volume to migrate the contents of 00F-FCA33, it creates it as 19,531,264 blocks. Of
the three information sources considered to calculate volume size, only one of them is
correct. Using the automatic volume creation eliminates this uncertainty.
After you determine the exact size, select Blocks from the Volume Size list and enter the
size of the XIV volume in blocks. If your sizing calculation is correct, you create an XIV volume
that is the same size as the source (non-XIV storage device) volume. Then you can define a
migration:
1. In the XIV GUI, click Remote Migration.
2. Right-click Migration and select Define Data Migration. Make the appropriate entries
and selections:
– Destination Pool: Select the pool where the volume was created.
– Destination Name: Select the pre-created volume from the list.
– Source Target System: Select the already defined non-XIV storage device from the
list.
Important: If the non-XIV device is active/passive, the source target system must
represent the controller (or service processor) on the device that owns the source
LUN. You must find out from the non-XIV storage which controller is presenting the
LUN to the XIV.
If the volume that you created is the wrong size, an error message is issued during the test
data migration as shown in Figure 6-19. If you activate the migration, you get the same error
message. You must delete the volume on the XIV and create a new, correctly sized one. It
must be deleted because you cannot resize a volume that is in a data migration pair. In
addition, you cannot delete a data migration pair unless it has completed the background
copy. Delete the volume and then investigate why your size calculation was wrong. After
correcting the problem, create a new volume and a new migration, and test it again.
Increasing the max_initialization_rate parameter might decrease the time required to migrate
the data. However, doing so might affect existing production servers on the non-XIV storage
device. By increasing the rate parameters, more outgoing disk resources are used to serve
migrations and less for existing production I/O. Be aware of how these parameters affect
migrations and production. You can choose to use the higher rate only during off-peak
production periods.
The rate parameters can only be set using XCLI, not the XIV GUI. Run the target_list -x
command, where the -x parameter displays the current rate. If the setting is changed, the
change takes place dynamically, so you do not need to deactivate or activate the migrations.
As shown in Example 6-5, first display the target list and then confirm the current rates using
the -x parameter. The example shows that the initialization rate is still set to the default value
(100 MBps). You then increase the initialization rate to 200 MBps. You can then observe the
completion rate, as shown in Figure 6-16 on page 151, to see whether it has improved.
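A hedged sketch of this sequence. The target_list -x command comes from the text above; the target_config_sync_rates command and its max_initialization_rate parameter are assumptions about the XCLI syntax, so verify them against Example 6-5 and your XCLI documentation before use:
# Display the targets and confirm the current rates
xcli -u admin -p <password> -m <XIV management IP> target_list -x

# Raise the initialization rate for one migration target to 200 MBps (syntax assumed)
xcli -u admin -p <password> -m <XIV management IP> target_config_sync_rates target="<target name>" max_initialization_rate=200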
Important: Increasing the initialization rate does not necessarily increase the actual speed
of the copy. The outgoing disk system or the SAN fabric might be the limiting factor. In
addition, you might cause host system impact by committing too much bandwidth to
migration I/O.
If you have a Cisco-based SAN, start Device Manager for the relevant switch and then select Interface > Monitor > FC Enabled.
During the migration, the value displayed in the Used column of the Volumes and Snapshots
window drops every time empty blocks are detected. When the migration is completed, you
can check this column to determine how much real data was written into the XIV volume. In
Figure 6-22, the used space on the Windows2003_D volume is 4 GB. However, the Windows
file system using this disk shown in Figure 6-24 on page 164 shows only 1.4 GB of data.
This discrepancy occurs because when file deletions occur at a file system level, the data is
not removed. The file system reuses this effectively free space. However, the system does not
write zeros over the old data because doing so generates a large amount of unnecessary I/O.
The result is that the XIV copies old and deleted data during the migration. Copying this
obsolete data makes no difference to the migration speed because the blocks must be read
into the cache regardless of what they contain.
If you are not planning to use the thin provisioning capability of the XIV, this is not an issue.
In a Windows environment, you can use a Microsoft tool known as sdelete to write zeros
across deleted files. You can find this tool in the sysinternals section of Microsoft Technet at
the following web address:
http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx
If you allow the XIV to determine the size of the migration volume, a small amount of extra
space is consumed for every volume that was created. The volumes automatically created by
the XIV often reserve more XIV disk space than is made available to the volume. Avoid this
problem by creating volume sizes on the non-XIV storage device in multiples of 16 GiB. An
example of the XIV volume properties of such an automatically created volume is shown in
Figure 6-23. In this example, the Windows2003_D drive is 53 GB in size, but the size on disk
is 68 GB on the XIV.
Because this example is for a Microsoft Windows 2003 basic NTFS disk, you can use the
diskpart utility to extend the volume (Example 6-9).
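A sketch of the diskpart sequence for a basic NTFS data disk; volume 1 is a placeholder, so select the migrated D: volume that list volume reports:
C:\> diskpart
DISKPART> list volume
DISKPART> select volume 1
DISKPART> extend
DISKPART> exit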
Confirm that the volume has grown by displaying the volume properties. In Figure 6-25, you
can see that the disk is now 68 GB (68,713,955,328 bytes).
In terms of when to do the resize, a volume cannot be resized while it is part of a data
migration. The migration process must complete and the migration for that volume must be
deleted before the volume can be resized. For this reason, you might choose to defer resizing
until after the migration of all relevant volumes is complete. This technique also separates the
resize change from the migration change. Depending on the operating system using that
volume, you might not get any benefit from doing this resize.
6.9 Troubleshooting
This section lists common errors that are encountered during data migrations using the XIV
data migration facility.
The volume on the source non-XIV storage device might not be initialized or might not have been low-level formatted. If the volume has data on it, this is not the case. However, if you are assigning new volumes from the non-XIV storage device, these new volumes might not have completed the initialization process. On ESS 800 storage, the initialization process can be
displayed from the Modify Volume Assignments window. Figure 6-26 shows the volumes
are still 0% background formatted, so they are not accessible by the XIV. For ESS 800,
keep clicking Refresh Status on the ESS 800 web GUI until the formatting message
disappears.
Tip: This error might also happen in a cluster environment where the XIV is holding a SCSI
reservation. Make sure that all nodes of a cluster are shut down before starting a migration.
The XCLI command reservation_list lists all SCSI reservations held by the XIV. If you
find a volume with reservations where all nodes are offline, remove the reservations using
the XCLI command reservation_clear. See the XCLI documentation for further details.
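For example (the vol= parameter name for reservation_clear is an assumption; check the XCLI documentation):
xcli -u admin -p <password> -m <XIV management IP> reservation_list
xcli -u admin -p <password> -m <XIV management IP> reservation_clear vol=<volume name>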
For more information, see 6.2.2, “Multi-pathing with data migrations” on page 137, and 6.12,
“Other considerations” on page 173.
From the XIV you can again map them to each host as LUN IDs 20, 21, and 22 (as they were
from the non-XIV storage).
If migrating from an EMC Symmetrix or DMX, there are special considerations. See 6.12,
“Other considerations” on page 173.
6.10.2 Back out after a data migration is defined but not activated
If the data migration definition exists but is not activated, follow the steps described in 6.10.1,
“Back out before migration is defined on the XIV” on page 169. To remove the inactive
migration from the migration list, delete the XIV volume that was going to receive the migrated
data.
6.10.3 Back out after a data migration is activated but is not complete
If the data migration status is initialization in the GUI or the XCLI shows it as active=yes, the
background copy process is started. Deactivating the migration in this state blocks any I/O
passing through the XIV from the host server to the LUNs on the XIV and non-XIV systems.
To back out, first shut down the host server or its applications. Then deactivate the data
migration and delete the XIV data migration volume if wanted. Finally, restore the original LUN
masking and SAN fabric zoning and bring your host back up.
Important: If you chose to not allow source updating and write I/O has occurred during or
after the migration, the LUN on the non-XIV storage device will not contain those writes.
For site setup, the high-level process includes the following steps:
1. Install XIV and cable it into the SAN.
2. Pre-populate SAN zones in switches.
3. Pre-populate the host/cluster definitions in the XIV.
4. Define XIV to the non-XIV disk as a host.
5. Define the non-XIV disk to XIV as a migration target.
6. Confirm paths.
For each host, the high-level process includes the following steps:
1. Update host drivers, install Host Attachment Kit, and shut down the host.
2. Disconnect/Un-Zone the host from non-XIV storage.
3. Zone the host to XIV.
4. Map the host LUNs away from the host, and instead map them to the XIV.
5. Create XIV data migration.
6. Map XIV data migration volumes to the host.
7. Start the host.
When all data on the non-XIV disk system is migrated, perform site cleanup using these
steps:
1. Delete all SAN zones related to the non-XIV disk.
2. Delete all LUNs on non-XIV disk and remove it from the site.
2 Site: Run fiber cables from SAN switches to XIV for host connections and migration connections.
3 Non-XIV storage: Select host ports on the non-XIV storage to be used for migration traffic. These ports do not have to be dedicated ports. Run new cables if necessary.
4 Fabric switches: Create switch aliases for each XIV Fibre Channel port and any new non-XIV ports added to the fabric.
5 Fabric switches: Define SAN zones to connect hosts to XIV, but do not activate the zones. Define them by cloning the existing zones from host to non-XIV disk and swapping non-XIV aliases for new XIV aliases.
6 Fabric switches: Define and activate SAN zones to connect non-XIV storage to XIV initiator ports (unless direct connected).
8 Non-XIV storage: Define the XIV on the non-XIV storage device, mapping LUN0 to test the link.
9 XIV: Define non-XIV storage to the XIV as a migration target and add ports. Confirm that links are green and working.
11 XIV: Define all the host servers to the XIV (cluster first if using clustered hosts). Use a host listing from the non-XIV disk to get the WWPNs for each host.
After the site setup is complete, the host migrations can begin. Table 6-2 shows the host
migration checklist. Repeat this checklist for every host. Tasks identified with an asterisk must be performed with the host application offline.
1 Host: From the host, determine the volumes to be migrated and their relevant LUN IDs and hardware serial numbers or identifiers.
2 Host: If the host is remote from your location, confirm that you can power the host back on after shutting it down. You can use tools such as an RSA card or IBM BladeCenter® manager.
3 Non-XIV storage: Get the LUN IDs of the LUNs to be migrated from the non-XIV storage device. Convert from hex to decimal if necessary.
5* Host: Set the application to not start automatically at reboot. Using this setting helps when performing administrative functions on the server such as upgrades of drivers, patches, and so on.
6* Host: On UNIX servers, comment out disk mount points on affected disks in the mount configuration file. Commenting them out helps with system reboots while configuring for XIV.
8* Fabric: Change the active zone set to exclude the SAN zone that connects the host server to non-XIV storage. In addition, include the SAN zone for the host server to XIV storage. Create the new zone during site setup.
10* Non-XIV storage: Map source volumes to the XIV host definition created during site setup.
11* XIV: Create data migration pairing (XIV volumes created dynamically).
13* XIV: Start XIV migration and verify it. If you want, wait for migration to finish.
14* Host: Boot the server. Be sure that the server is not attached to any storage.
15* Host: Coexistence of non-XIV and XIV multi-pathing software is supported with an approved SCORE (RPQ) only. Remove any unapproved multi-pathing software.
16* Host: Install patches, update drivers, and HBA firmware as necessary.
17* Host: Install the XIV Host Attachment Kit. Be sure to note the prerequisites.
18* Host: You might need to reboot depending on the operating system.
19* XIV: Map XIV volumes to the host server. (Use original LUN IDs.)
21* Host: Verify that the LUNs are available and that pathing is correct.
22* Host: For UNIX servers, update the mount points for new disks in the mount configuration file if they have changed. Mount the file systems.
24* Host: Set the application to start automatically if this setting was previously changed.
26 XIV: When the volume is synchronized, delete the data migration. Do not deactivate the migration.
27 Non-XIV storage: Unmap migration volumes away from XIV if you need to free up LUN IDs.
28 XIV: Consider resizing the migrated volumes to the next 17 GB boundary if the host operating system is able to use new space on a resized volume.
29 Host: If the XIV volume was resized, use host procedures to use the extra space.
30 Host: If non-XIV storage device drivers and other supporting software were not removed earlier, remove them when convenient.
When all the hosts and volumes are migrated, perform the site cleanup tasks shown in
Table 6-3.
For more information, see IBM XIV Storage System: Copy Services and Migration,
SG24-7759, at:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247759.pdf
This chapter provides a brief description of the IBM System Storage SAN Volume Controller,
SAN Volume Controller terminology, and architecture.
In October 2010, IBM introduced a new midrange disk system called IBM Storwize® V7000.
V7000 is based on SAN Volume Controller virtualization technology and provides the same
functions and interoperability as SAN Volume Controller.
For more information about Storwize, see the following web address:
http://www-03.ibm.com/systems/storage/disk/storwize_v7000/
Tip: The SAN must be zoned in such a way that the application servers cannot see the
storage. This zoning prevents conflict between the SAN Volume Controller and the
application servers that are both trying to manage the storage.
Storage virtualization: SAN Volume Controller creates a storage pool of managed disks
from attached disk storage subsystems. These managed disks are then mapped to a set
of volumes for use by host computer systems.
Scalable: SAN Volume Controller can be used to manage all of your disk storage requirements, or just a subset of them. SAN Volume Controller also offers a large, scalable cache.
Reduces the requirement for additional partitions: SAN Volume Controller consumes only
one storage partition for each storage server that connects to it.
Improves access: SAN Volume Controller improves capacity utilization and the use of spare capacity. Underlying physical disks can be reallocated non-disruptively from an application server point of view, irrespective of the server operating system or platform type.
Simplifies device driver configuration on hosts: All hosts within your network use the same
IBM device driver to access all storage subsystems through the SAN Volume Controller.
Supports split I/O group implementations: Split implementations are not apparent to an application and can be used across sites to cover application high availability requirements.
SAN Volume Controller is licensed according to the usable capacity that is being managed.
The advanced functions available on SAN Volume Controller, such as FlashCopy, IBM Easy
Tier®, Split I/O group, Mirrored volumes, Metro Mirror, and Global Mirror are included.
The license cost is for the capacity of all storage managed by the SAN Volume Controller,
plus the capacity of the copy services maintained by the SAN Volume Controller. You can
upgrade at any time by purchasing a license for the additional capacity required.
SAN Volume Controller supports a wide variety of disk storage and host operating system
platforms. For the latest information, see the following web address:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
For details and information about SAN Volume Controller implementation, see Implementing
the IBM System Storage SAN Volume Controller V6.1, SG24-7933.
With the new hardware and software releases (release 6.2 at the time of writing this book), SAN Volume Controller supports the following additional functions:
Support for 10 Gbps iSCSI functionality with 2145-CG8 node hardware
Real-time performance monitoring
Support for VMware vStorage APIs for Array Integration (VAAI)
Internal SSD drive support
Support for FlashCopy targets as Metro or Global Mirror sources
Critical update notifications
Additional language support
New management GUI
No requirement for separate console installation
Direct cluster access using a web browser
Easy Tier hotspot management
Service Assistant
Remember: Depending on your SAN Volume Controller code level, some functions might
not be available.
SAN Volume Controller naming terminology has recently changed as shown in Table 7-1 (for example, the previous term "Error" is now "Event").
Node
A node is an individual server in a SAN Volume Controller cluster on which the SAN Volume
Controller software runs. Nodes are always installed as pairs and represent one I/O group in
the SAN Volume Controller cluster concept. Each node is connected to its own UPS to ensure
a safe power off during a power outage.
Configuration node
At any time, a single node in the cluster is used to manage configuration activity. This
configuration node manages an information cache that describes the cluster configuration
and provides a focal point for configuration commands.
Similarly, at any time, a single node is responsible for the overall management of the cluster. If the configuration node fails, the SAN Volume Controller fails over to the second node, which takes over the configuration function. The IP address and user credentials remain the same.
I/O group
An input/output (I/O) group contains two SAN Volume Controller nodes defined by the
configuration process. Each SAN Volume Controller node is associated with exactly one I/O
group. The nodes in the I/O group provide access to the volumes in the I/O group.
The switch fabric must be zoned so that the SAN Volume Controller nodes can detect the back-end storage systems and the front-end host HBAs. The SAN Volume Controller prevents direct host access to managed disks. The managed disks are looked after by the back-end storage systems.
Storage pool
A storage pool is a collection of MDisks that jointly contain all the data for a specified set of
volumes. Each storage pool is divided into a number of extents. When creating a storage pool,
you must choose an extent size. After it is set, the extent size stays constant for the life of that
storage pool. Each storage pool can have a different extent size.
Volumes
A volume is a logical entity that represents extents contained in one or more MDisks from a
storage pool. Volumes are allocated in whole numbers of extents. Each volume is only
associated with a single I/O group. The volume is then presented to the host as a LUN for
use.
In this dynamically tiered environment, data movement is seamless to the host application
regardless of the storage tier in which the data is located.
Quorum disks
Quorum disks are used when there is a problem in the SAN fabric or when nodes are shut
down. SAN Volume Controller uses quorum disks in such situations to ensure data consistency and data integrity while maintaining data access. A quorum disk determines
which group of nodes stops operating and processing I/O requests. In this tie-break situation,
the first group of nodes that accesses the quorum disk marks its ownership of the quorum
disk. As a result, the group continues to operate as the system, handling all I/O requests.
Access modes
The access mode determines how the SAN Volume Controller system uses the MDisk. The
three access modes are unmanaged, image, and managed, as described in “MDisk modes”
later in this chapter.
Important: If you add an MDisk that contains existing data to a storage pool while the
MDisk is in unmanaged or managed mode, the data is lost. The image mode is the only
mode that preserves this data.
Mirrored volumes
Volume mirroring allows a volume to have two physical copies. The mirrored volume feature
provides a simple RAID-1 function. Each volume copy can belong to a different storage pool,
and the two copies can be on separate storage subsystems.
The primary copy indicates the preferred volume for read requests. When a server writes to a
mirrored volume, the system writes the data to both copies. You can create a volume with one
or two copies, and you can convert a non-mirrored volume into a mirrored volume by adding a
copy. Alternately, you can convert a mirrored volume into a non-mirrored volume by deleting
one copy or by splitting one copy to create a new, non-mirrored volume.
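As a hedged sketch (the volume, pool, and copy names are placeholders), adding and then
splitting or deleting a copy might look like this:
addvdiskcopy -mdiskgrp second_pool app_volume
splitvdiskcopy -copy 1 -name app_volume_clone app_volume
The first command adds a second copy in another storage pool. After the copies are
synchronized, the second command splits copy 1 off as a new non-mirrored volume;
alternatively, rmvdiskcopy -copy 1 app_volume simply deletes that copy.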
System state
The state of the clustered system holds all of the configuration and internal data. The system
state information is held in nonvolatile memory. If the power fails, the UPS units maintain
internal power long enough for system state information to be stored. This information is
stored on the SAN Volume Controller internal disk drive of each node.
Backup is the process of extracting configuration settings from a SAN Volume Controller
system and writing them to disk. The restore process uses backup configuration data files to
restore system configuration.
If power fails on a system or a node in a system is replaced, the system configuration settings
are automatically restored. This restoration occurs when the repaired node is added to the
system. To restore the system configuration in a disaster, plan to back up the system
configuration settings to tertiary storage. Use the configuration backup functions to back up
the system configuration.
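As a sketch, a configuration backup can be taken from the CLI and the resulting
svc.config.backup.xml file (typically written to /tmp on the configuration node) copied to a safe
location; the cluster address and destination path shown here are placeholders:
svcconfig backup
scp superuser@cluster_ip:/tmp/svc.config.backup.xml /your/backup/location/
The backup file describes the configuration only; it does not contain any volume data.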
Important: For complete disaster recovery, regularly back up the business data that is
stored on volumes at the application server or host level.
Event notifications
SAN Volume Controller can use Simple Network Management Protocol (SNMP) traps, syslog
messages, and call home email. These alerts notify you and the IBM Support Center when
significant events are detected.
These notification methods can be used simultaneously. Notifications are normally sent
immediately after an event occurs.
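As an illustrative sketch only (the IP addresses and community string are placeholders),
notification targets are defined on the cluster with commands such as:
mksnmpserver -ip 192.168.1.10 -community public
mksyslogserver -ip 192.168.1.11
mkemailserver -ip 192.168.1.12
Additional email settings, such as the recipients and the call home contact details, are
configured with the related email commands.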
SAN Volume Controller migration can be used for the following tasks:
Redistribution of volumes and their workload within an SAN Volume Controller cluster
across back-end storage
Moving workload onto newly installed storage subsystems
Moving workload off storage so that old or failing storage subsystems can be
decommissioned
Moving workload to rebalance a changed workload
Migrating data from earlier model back-end storage to SVC-managed storage
Migrating data from one back-end controller to another using the SAN Volume Controller
as a data block mover
Removing the SAN Volume Controller from the SAN
Migrating data from managed mode back into image mode before removing the SAN
Volume Controller from a SAN
SAN Volume Controller migration can be performed at either the volume or the extent level,
depending on the purpose of the migration. The following are the supported migration
activities:
Migrating extents within a storage pool and redistributing the extents of a volume on the
MDisks in the storage pool
Migrating extents off an MDisk to other MDisks in the storage pool so the MDisk can be
removed
Migrating a volume from one storage pool to another
Changing the virtualization type of the volume to Image mode
Migrating a volume between I/O groups
Extents
An extent is a fixed-size unit of data that is used to manage the mapping of data between
MDisks and volumes. The extent size choices are 16, 32, 64, 128, 256, 512, 1024, 2048,
4096, and 8192 MB. The choice of extent size affects the total storage that can be managed
by the SAN Volume Controller. For the 8192-MB extent size, SAN Volume Controller supports
the following capacities:
A maximum volume capacity of 256 TB
A maximum SAN Volume Controller system capacity of 32 PB
Figure 7-2 shows the relationship between physical and virtual disks.
Consistency groups
Consistency groups preserve data consistency across multiple volumes, especially for
applications that have related data that spans multiple volumes. To preserve the integrity of
the data being written, ensure that dependent writes are run in the intended sequence of the
application.
SDD software for all supported platforms can be obtained for no additional fee at:
https://www-304.ibm.com/support/docview.wss?uid=ssg1S7001350
The SAN Volume Controller copy services functions can provide additional capabilities and
unique advantages over other storage devices:
SAN Volume Controller supports consistency groups for FlashCopy, Metro Mirror, and
Global Mirror.
Consistency groups in SAN Volume Controller can span across underlying storage
subsystems.
FlashCopy source volumes on one disk system can have their target volumes on another
disk system.
SAN Volume Controller supports FlashCopy targets as Metro or Global Mirror sources
(SAN Volume Controller code 6.2.0.1)
Metro Mirror and Global Mirror source volumes can be copied to target volumes on a
dissimilar storage subsystem.
The function of Metro Mirror is to maintain two real-time synchronized copies of a data set.
Often, the two copies are geographically dispersed on two SAN Volume Controller clusters,
although you can use Metro Mirror in a single cluster (within an I/O group). If the primary copy
fails, the secondary copy can then be enabled for I/O operation.
Metro Mirror works by defining a Metro Mirror relationship between volumes of equal size.
When creating the Metro Mirror relationship, define one volume as the master, and the other
volume as the auxiliary. Any data that exists on the auxiliary volume before the relationship is
set up is deleted.
The terms master and auxiliary are used instead of the industry standard terms of source
and target. These terms are used because they match the parameter keywords used in the
commands to set up the relationships between the volumes. The following terms are all
interchangeable:
Source and target
Primary and secondary
Master and auxiliary
To provide management and data consistency across a number of Metro Mirror relationships,
consistency groups are supported.
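As an illustrative sketch only (the remote system, volume, and group names are hypothetical),
a consistency group and one relationship in it might be created and started as follows:
mkrcconsistgrp -name migration_grp -cluster REMOTE_SVC
mkrcrelationship -master source_vol -aux target_vol -cluster REMOTE_SVC -consistgrp migration_grp
startrcconsistgrp migration_grp
All relationships in the group then start, stop, and change state together.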
Intercluster Metro Mirror operations require a pair of SAN Volume Controller clusters that are
separated by a number of moderately high-bandwidth links. The two SAN Volume Controller
clusters must be defined in an SAN Volume Controller partnership. This definition must be
performed on both SAN Volume Controller clusters to establish a fully functional Metro Mirror
partnership. Correct link sizing is crucial to successfully implement Metro Mirror replication.
Metro Mirror is a fully synchronous remote copy technique. It ensures that updates (writes)
are committed at both the primary and secondary volumes before the application receives
“write successful” status for the update. A write to the master volume is mirrored to the cache
for the auxiliary volume, and only then is an acknowledgement of the write sent back to the
host (Figure 7-3).
If your system goes down, Metro Mirror provides nearly zero data loss if data must be used
from the recovery site. While the Metro Mirror relationship is active, the secondary copy (auxiliary
volume) is not accessible for host application write I/O at any time. The SAN Volume
Controller allows read-only access to the secondary volume, when it contains a consistent
image. To allow access to the secondary volume for host operations, the Metro Mirror
relationship must first be stopped.
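For example (the relationship name is a placeholder), a relationship can be stopped with host
access enabled on the auxiliary volume:
stoprcrelationship -access migration_rel
The -access parameter makes the secondary volume available for host read and write I/O;
restarting the relationship later resynchronizes the copies.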
Because SAN Volume Controller Global Mirror is not intended for use in data migration, it is
not addressed further here. For more information about SAN Volume Controller Global Mirror,
see Software Installation and Configuration Guide, GC27-2286-01.
To be able to migrate, the destination MDisk must be greater than or equal to the size of the
VDisk. Also, the MDisk specified as the target must be in an unmanaged state at the time the
command is run.
If the migration is interrupted by a cluster recovery, the migration will resume after the
recovery completes.
Regardless of the extent size for the Storage pool, data is migrated in units of 16 MB. In this
description, this unit is called a chunk.
During the migration, the extent can be divided into three regions as shown in Figure 7-4.
Region B is the chunk that is being copied. The reads and writes are in the following states:
Writes to region B are queued (paused) in the virtualization layer, waiting for the chunk to
be copied.
Reads to Region A are directed to the destination because this data has already been
copied.
Writes to Region A are written to both the source and the destination extent to maintain
the integrity of the source extent.
Reads and writes to Region C are directed to the source because this region has yet to be
migrated.
Figure 7-4 Migrating an extent
The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During
this time, all writes to the chunk from higher layers in the software stack (such as cache
destages) are held back. If the back-end storage is operating with significant latency, this
operation might take some time to complete. This latency can have an adverse effect on the
overall performance of the SAN Volume Controller. To avoid this situation, if the migration of a
particular chunk is still active after one minute, the migration is paused for 30 seconds. During
this time, writes to the chunk are allowed to proceed. After 30 seconds, the migration of the
chunk is resumed. This process is repeated as many times as necessary to complete the
migration of the chunk.
However, SAN Volume Controller does not guarantee data synchronization on writes during
the migration. After the metadata is updated for each 16-MB chunk, any new writes are
redirected to the target disk only. Because of the nature of the migration algorithm, restarting
hosts in the event of an SAN Volume Controller cluster failure is difficult.
Important: To guarantee that the source and target data is synchronous using this method
of migration, pause I/O during the migration. The risk of a complete SAN Volume Controller
cluster failure during migration is small. Weigh the decision to pause I/O during migration
against the impact of any I/O downtime.
When SAN Volume Controller is introduced, it takes time and effort to virtualize the storage.
Migration of data from non-virtualized to virtualized storage can be achieved in many different
ways. Image to managed volume migration and volume mirroring are the traditional
approaches for data migration.
FlashCopy can also be used for migrating data from non-virtualized to SAN Volume Controller
managed storage. The target volume can be anywhere within the SVC-managed
environment. The typical scenario for the migration is from image volume to managed
volume. FlashCopy allows you to create a read/write copy on the target volume in a matter of
a few seconds.
The significant advantage of FlashCopy migration is that the source volume remains
unchanged throughout the migration process. Another advantage is that when the copy is
created, the target volume can instantly be used in read/write mode.
For details and information about SAN Volume Controller Copy Services, see SAN Volume
Controller V4.3.0 Advanced Copy Services, SG24-7574.
FlashCopy-based data migration provides a simple and reliable backout path. If something
did not go as planned during the migration, unmask the original volumes from the SAN
Volume Controller and mask them back to a server. You do not need to copy data back
because the original volumes remained unchanged.
You can also repeat the copy process from the source to the target as many times as
required. If for any reason the target set of volumes becomes unusable, you can refresh the
contents by recopying the data.
The disadvantage of this method is that it requires slightly more effort than the volume
migration and volume mirroring methods. You must create target managed volumes of
matching sizes, and create and start one FlashCopy mapping per volume pair. However, this
process can be automated using migration scripts.
The FlashCopy mappings in step 9 can be created with the -autodelete flag so that each
mapping is removed automatically after its background copy process finishes. By removing
the mappings automatically, you reduce the number of steps in the cleanup process in step 15.
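A hedged sketch of one such mapping (the volume names, mapping name, and copy rate are
illustrative assumptions):
mkfcmap -source image_vol01 -target managed_vol01 -name fcmap_vol01 -copyrate 50 -autodelete
startfcmap -prep fcmap_vol01
With -autodelete set, the mapping disappears on its own once the background copy reaches 100%.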
After the FlashCopy mapping is created and the background copy process is started, monitor
the progress and adjust the copy rate to minimize the performance impact. In most cases, the
background copy has no measurable performance impact on the application workload. In this
case, you can adjust the rate so that the background copy can complete sooner. You can
schedule copy rate reduction, for example to minimize the performance impact during a
period of heavy batch processing.
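For example (the mapping name and rates are placeholders), the rate can be changed at any
time while the mapping is active:
chfcmap -copyrate 100 fcmap_vol01
chfcmap -copyrate 0 fcmap_vol01
A copy rate of 0 suspends the background copy entirely, and a higher value such as 100 lets it
finish sooner.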
In order for a data migration to be successful, the set of volumes must be consistent. To
ensure volume consistency, all the volume manipulations must be done for the entire volume
set. You must not present a subset of volumes to be accessed by a server.
By using dual cluster Metro Mirroring (intercluster Metro Mirror), the source volumes receive
the host updates at all times during the data migration. If a problem occurs between the host
and its data volumes, a normal recovery is possible. In the background, the SAN Volume
Controller copies the data over to the target volumes. You can restart from the source or, if
necessary, from the target volumes.
The target LUN must be exactly the same size as the source LUN.
When a server writes to a mirrored volume, the system writes the data to both copies. When
a server reads a mirrored volume, the system picks one of the copies to read. If one of the
mirrored volume copies is temporarily unavailable, the volume remains accessible to servers
through the other copy. The system remembers which areas of the volume are written and
resynchronizes these areas when both copies are available.
You can create a volume with one or two copies, and you can convert a non-mirrored volume
into a mirrored volume by adding a copy. When a copy is added in this way, the SAN Volume
Controller clustered system synchronizes the new copy so that it is the same as the existing
volume. Servers can access the volume during this synchronization process.
You can convert a mirrored volume into a non-mirrored volume by deleting one copy or by
splitting one copy to create a new non-mirrored volume. The volume copy can be any type:
Image, striped, sequential, and either thin provisioned or fully allocated. The two copies can
be of different types.
When you use volume mirroring, consider how quorum candidate disks are allocated. Volume
mirroring maintains some state data on the quorum disks. If a quorum disk is not accessible
and volume mirroring is unable to update the state information, a mirrored volume might need
to be taken offline to maintain data integrity. To ensure the high availability of the system,
ensure that multiple quorum candidate disks, allocated on different storage systems, are
configured.
You can perform migration at either the volume or the extent level, depending on the purpose
of the migration. The following migration activities are supported:
Migrating extents within a storage pool, redistributing the extents of a volume on the
MDisks within the same storage pool
Migrating extents off an MDisk, which is removed from the storage pool, to other MDisks in
the same storage pool
Migrating a volume from one storage pool to another
Migrating a volume to change the virtualization type of the volume to image
Migrating a volume between I/O groups
You can determine which MDisks are heavily used by gathering and analyzing input/output
(I/O) statistics about nodes, MDisks, and volumes. Using this information, you can migrate
extents to less used MDisks in the same storage pool. This migration can only be performed
using the command-line tools.
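As a sketch (the interval value and MDisk name are assumptions), statistics collection can be
started from the CLI and the per-MDisk extent usage inspected afterward:
startstats -interval 15
lsmdiskextent mdisk3
The statistics files are then collected from the nodes and analyzed with your preferred
performance tooling.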
If performance monitoring tools indicate that a managed disk in the pool is being overused,
migrate some of the data to other MDisks within the same storage pool.
1. Determine the number of extents that are in use by each volume for the MDisk using the
following CLI command:
lsmdiskextent mdiskname
2. From the number of extents that each volume is using on the MDisk, select some of them
to migrate elsewhere in the group.
3. Determine the storage pool that the MDisk belongs to using this CLI command:
lsmdisk mdiskname | ID
4. List the MDisks in the group by issuing the following CLI command:
lsmdisk -filtervalue mdisk_grp_name=mdiskgrpname
5. Select one of these MDisks as the target MDisk for the extents. You can determine how
many free extents exist on an MDisk using the CLI command:
lsfreeextents mdiskname
6. Issue the lsmdiskextent newmdiskname command for each of the target MDisks to ensure
that you are not just moving the over-utilization to another MDisk. Check that the volume
that owns the set of extents to be moved does not already own a large set of extents on
the target MDisk.
7. For each set of extents, issue this CLI command to move them to another MDisk:
migrateexts -source mdiskname | ID -exts [num_extents] -target newmdiskname |
ID -threads 4 vdiskid
Where [num_extents] is the number of extents to migrate for the specified vdiskid. The
newmdiskname | ID value is the name or ID of the MDisk to which this set of extents is migrated.
Tip: The number of threads indicates the priority of the migration processing, where 1 is
the lowest priority and 4 is the highest priority.
8. Repeat the previous steps for each set of extents that you are moving.
You can check the progress of the migration by issuing this CLI command:
lsmigrate
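Putting the steps together, a single pass might look like the following sketch (the MDisk
names, volume ID, extent count, and thread count are placeholders):
lsmdiskextent mdisk2
lsmdisk mdisk2
lsmdisk -filtervalue mdisk_grp_name=itso_pool
lsfreeextents mdisk5
lsmdiskextent mdisk5
migrateexts -source mdisk2 -exts 100 -target mdisk5 -threads 2 10
lsmigrate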
If a volume uses extents that need to be moved as a result of a rmmdisk command, the
virtualization type for that volume must be set to striped. This process is needed only if it was
previously sequential or image.
If the MDisk is operating in image mode, the MDisk transitions to managed mode while the
extents are being migrated. Upon deletion, it changes to unmanaged mode.
Remember: If the -force flag is not used and volumes occupy extents on one or more of
the MDisks that are specified, the command fails.
When the -force flag is used and volumes occupy extents on the specified MDisks, all
extents are migrated to other MDisks in the storage pool. This process will occur only if
there are enough free extents in the storage pool. The deletion of the MDisks is postponed
until all extents are migrated, which can take some time. If there are insufficient free
extents in the storage pool, the command fails.
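For illustration (the MDisk and pool names are hypothetical), an MDisk that still holds extents
can be removed from its pool as follows:
rmmdisk -mdisk mdisk4 -force itso_pool
lsmigrate
The MDisk remains in the pool until its extents have been migrated; lsmigrate shows the
progress of that forced migration.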
Rule: The source and destination storage pool must have the same extent size for
migration to take place. Volume mirroring can also be used to migrate a volume between
storage pools if the extent sizes of the two pools are not the same.
Migration commands fail if the target or source volume is offline, or if there is insufficient
quorum disk space to store the metadata. Correct the offline or quorum disk condition and
reissue the command.
You can migrate volumes between storage pools using the command-line interface (CLI).
Determine the usage of particular MDisks by gathering input/output (I/O) statistics about
nodes, MDisks, and volumes. Analyze it to determine which volumes or MDisks need to be
moved to another storage pool.
After you analyze the I/O statistics data, you can determine which volumes are hot. You also
need to determine the storage pool that you want to move this volume to. Either create a
storage pool or determine an existing group that is not yet heavily used. Check the I/O
statistics files that you generated for heavily used groups. Ensure that the MDisks or VDisks
in the target storage pool are used less than those in the source group.
You can use data migration or volume mirroring to migrate data between MDisk groups. Data
migration uses the command migratevdisk. Volume mirroring uses the commands
addvdiskcopy and rmvdiskcopy.
When you issue the migratevdisk command, a check is made to ensure that the destination
of the migration has enough free extents to satisfy the command. If it does not, the command
fails. This command takes some time to complete.
Perform the following steps to use the migratevdisk command to migrate volumes between
storage pools:
1. After you determine the volume that you want to migrate and the new storage pool you
want to migrate it to, issue the following CLI command:
migratevdisk -vdisk vdiskname/ID -mdiskgrp newmdiskgrname/ID -threads 4
2. You can check the progress of the migration by issuing the following CLI command:
lsmigrate
Note: When you use data migration, the volume goes offline if either storage pool fails.
Volume mirroring can be used to minimize the impact to the volume because the volume
goes offline only if the source storage pool fails.
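A hedged sketch of the volume mirroring approach (the volume and pool names are
placeholders):
addvdiskcopy -mdiskgrp new_pool app_volume
lsvdisksyncprogress app_volume
rmvdiskcopy -copy 0 app_volume
Wait until the new copy reports 100% synchronization before removing the original copy.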
MDisk modes
There are three MDisk modes:
Unmanaged MDisk: An MDisk is reported as unmanaged when it is not a member of any
storage pool. An unmanaged MDisk is not associated with any volumes and has no
metadata stored on it. The SAN Volume Controller does not write to an MDisk that is in
unmanaged mode. The only exception is when it attempts to change the mode of the
MDisk to one of the other modes.
Image mode MDisk: Image mode provides a direct block-for-block translation from the
MDisk to the volume with no virtualization. Image mode volumes have a minimum size of
one block (512 bytes) and always occupy at least one extent. An image mode MDisk is
associated with exactly one volume.
Managed mode MDisk: Managed mode MDisks contribute extents to the pool of available
extents in the storage pool. Zero or more managed mode volumes might use these
extents.
Image mode volumes have the special property that the last extent in the volume can be a
partial extent. Managed mode disks do not have this property.
To perform any type of migration activity on an image mode volume, the image mode disk
must first be converted into a managed mode disk. If the image mode disk has a partial last
extent, this last extent in the image mode volume must be the first extent to be migrated. This
migration is handled as a special case. After the special migration operation occurs, the
volume becomes a managed mode volume. If the image mode disk does not have a partial
last extent, no special processing is needed. The image mode volume is changed into a
managed mode volume and is treated in the same way as any other managed mode volume.
After data is migrated off a partial extent, there is no way to migrate data back onto the partial
extent.
You can use the command-line interface (CLI) to import storage that contains existing data
and continue to use this storage. You can also use the advanced functions, such as Copy
Services, data migration, and the cache. These disks are known as image mode virtual
volumes.
Make sure that you are aware of the following restrictions before you create image mode
volumes:
1. Unmanaged-mode managed disks (MDisks) that contain existing data cannot be
differentiated from unmanaged-mode MDisks that are blank. Therefore, it is vital that you
control the introduction of these MDisks to the clustered system by adding these disks one
at a time.
2. Do not manually add an unmanaged-mode MDisk that contains existing data to a storage
pool. If you do, the data will be lost. Instead, use the mkvdisk command to create an image
mode volume from the unmanaged-mode disk, and select the storage pool that you want it
added to.
Tip: The detectmdisk command also rebalances MDisk access across the available
storage system device ports.
6. Convert the unmanaged-mode MDisk to an image mode virtual disk. Issue the mkvdisk
command to create an image mode virtual disk object.
7. Map the new volume to the hosts that were previously using the data that the MDisk now
contains.
You can use the mkvdiskhostmap command to create a mapping between a volume and a
host. This mapping makes the image mode volume accessible for I/O operations to the host.
After the volume is mapped to a host object, the volume is detected as a disk drive with which
the host can perform I/O operations.
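As an illustrative sketch only (the names and I/O group follow the example used later in this
chapter), the two commands might be issued as:
mkvdisk -mdiskgrp aix_imgmdg -iogrp io_grp0 -vtype image -mdisk ITSO_AIX01 -name itso_aix_hdisk1
mkvdiskhostmap -host ITSO_AIX itso_aix_hdisk1
For an image mode volume, the capacity is taken from the underlying MDisk, so no size
needs to be specified.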
If you want to virtualize the storage on an image mode volume, you can transform it into a
striped volume. Migrate the data on the image mode volume to managed-mode disks in
another storage pool. Issue the migratevdisk command to migrate an entire image mode
volume from one storage pool to another storage pool.
If the migration is interrupted by a cluster recovery, the migration will resume after the
recovery completes.
Regardless of the mode in which the volume starts, it is reported as being in managed mode
during the migration. In addition, both of the MDisks involved are reported as being in image
mode during the migration. Upon completion of the command, the volume is classified as an
image mode volume. Issuing this command results in the inclusion of the MDisk into the user
specified MDisk group.
The migratetoimage CLI command allows you to migrate the data from an existing VDisk
(volume) onto a managed disk (MDisk). When it is issued, it migrates the data of the user
specified source VDisk onto the specified target MDisk. When the command completes, the
VDisk is classified as an image mode VDisk.
Note: Migration commands fail if the target or source VDisk is offline, or if there is
insufficient quorum disk space to store the metadata. Correct the offline or quorum disk
condition and reissue the command.
Issue the following CLI command to migrate data to an image mode VDisk:
migratetoimage -vdisk [vdiskname/ID] -mdisk [newmdiskname/ID] -mdiskgrp
[newmdiskgrpname/ID] -threads 4
Where:
[vdiskname/ID] is the name or ID of the VDisk
[newmdiskname/ID] is the name or ID of the new MDisk
[newmdiskgrpname/ID] is the name or ID of the new MDisk group (storage pool)
To move a volume between I/O groups, the cache must first be flushed. The SAN Volume
Controller attempts to destage all write data for the volume from the cache during the I/O
group move. This flush fails if data is pinned in the cache for any reason, such as a storage
pool being offline. By default, this failed flush causes the migration between I/O groups to fail,
but this behavior can be overridden using the -force flag. If the -force flag is used and SAN
Volume Controller cannot destage all write data from the cache, the cached data is lost.
During the flush, the volume operates in cache write-through mode.
Important: Do not move a volume to an offline I/O group under any circumstance. Ensure
that the I/O group is online before you move the volumes to avoid any data loss.
You must quiesce host I/O before the migration for two reasons:
If there is significant data in the cache that takes a long time to destage, the command
times out.
Subsystem Device Driver vpaths that are associated with the volume are deleted before
the volume move takes place to avoid data corruption. Therefore, data corruption can
occur if I/O is still occurring for a particular LUN ID.
When migrating a volume between I/O groups, you can specify the preferred node, or you can
allow the SAN Volume Controller to assign the preferred node.
Modifying the I/O group that services the volume cannot be done concurrently with I/O
operations. It also requires a rescan at the host level to ensure that the multipathing driver is
notified of the following conditions:
The allocation of the preferred node has changed
The ports by which the volume is accessed have changed.
Modify the group only when one pair of nodes becomes overused.
Important: After a migration is started, you cannot stop it. The migration runs to
completion unless it is stopped or suspended by an error condition, or the volume being
migrated is deleted.
If you want to start, suspend, or cancel a migration, or control the rate of migration, use the
volume mirroring function or migrate volumes between storage pools.
Per cluster
An SAN Volume Controller cluster supports up to 32 active concurrent instances of the
following migration activities:
Migrate multiple extents
Migrate between storage pools
Migrate off a deleted MDisk
Migrate to image mode
These high-level migration tasks can be started by scheduling single extent migrations. Up to
256 single extent migrations can run concurrently. This number can be any combination of the
migration activities previously listed.
The migrate multiple extents and migrate between storage pools commands support a
flag that allows you to specify the number of parallel “threads” to use. The flag can be set from
1 - 4. This parameter affects the number of extents that are concurrently migrated for that
migration operation. Thus, if the thread value is set to 4, up to four extents can be migrated
concurrently for that operation, subject to other resource constraints.
Per MDisk
The SAN Volume Controller supports up to four concurrent single extent migrates per MDisk.
This limit does not take into account whether the MDisk is the source or the destination. If
more than four single extent migrates are scheduled for a particular MDisk, further migrations
are queued pending the completion of the currently running migrations.
If a migration is stopped and migrations are queued awaiting the use of the MDisk, these
migrations now commence. However, if a migration is suspended, the migration continues to
use resources, and another migration is not started.
The SAN Volume Controller attempts to resume the migration if the error log entry is marked
as fixed using the CLI or the GUI. If the error condition no longer exists, the migration
proceeds. The migration might resume on a node other than the node that started the
migration.
Have one storage pool for all the image mode volumes, other storage pools for the managed
mode volumes, and use the migrate volume facility.
Be sure to verify that enough extents are available in the target storage pool.
These steps assume that the SAN Volume Controller and storage are new to the fabric
environment.
Lay out new FC cables to the fabric switch to create the physical connections between the
target SAN Volume Controller cluster and the target storage. Have a clear picture of how you
want to zone the SAN Volume Controller to the back-end platform. Also, spread the fiber
adapter port connections for the most throughput and redundancy. Figure 7-7 on page 201
shows an example with two SAN Volume Controller nodes (one I/O group) and a back-end
disk system with eight ports.
The implementation can vary depending on back-end disk systems configuration. Always
check vendor recommendations for connectivity.
Create new zones on the switch, to include the new systems. This zoning must be carefully
planned, to avoid any interruptions for other users of the fabric.
Either SAN Volume Controller data migration scenario requires scheduled downtime, to
connect the application server to the SAN Volume Controller. Depending on the type of server
used, the system might need to be shut down, the original LUNs removed and the new LUNs
discovered. The specific steps for moving the servers are addressed in subsequent sections.
If the migration uses Metro Mirror, there are a source and target SAN Volume Controller
cluster. Two additional zones must be created for the application server and zoned to the
source SAN Volume Controller and the target storage.
The following zones are required when migrating with Metro Mirror:
Source SAN Volume Controller and source storage
Target SAN Volume Controller and target storage
Application server and source SAN Volume Controller
Application server and target SAN Volume Controller
The following zones are required when using volume migration:
SAN Volume Controller and source storage
SAN Volume Controller and target storage
Application server and SAN Volume Controller
Application server and target SAN Volume Controller
If the SAN Volume Controller will remain after migration, additional zones are required for the
application server and the target SAN Volume Controller:
For volume migration, three new zones are required to connect the SAN Volume Controller
to both source and target storage. These zones also attach the application server to the
new target storage.
For migration using Metro Mirror, a zone for the SAN Volume Controller and the application
server is created instead of a zone for the application server and the target storage.
7.4.3 Remove SAN Volume Controller from the fabric after migration
The SAN Volume Controller can remain in the fabric for continued storage virtualization if
wanted. To remove the SAN Volume Controller, perform the following steps:
1. Shut down the server.
2. Remove the SAN Volume Controller and application server zone.
3. Create and activate a zone for the application server and the new storage.
4. Remove the SAN Volume Controller zones to the storage.
5. Bring the application server up on the new storage.
6. Disconnect and remove the SAN Volume Controller.
Remember: Additional steps on the storage are required to map the application server to
the target volumes.
Attention: Avoid configuring a storage system to present the same LU to more than one
SAN Volume Controller system. This configuration is not supported and is likely to cause
undetected data loss or corruption.
For the latest support information, see the SAN Volume Controller website at:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
However, the SAN Volume Controller system does not regard accessing a generic device as
an error condition and does not log an error. Managed disks (MDisks) that are presented by
generic devices are not eligible to be used as quorum disks.
The SAN Volume Controller supports IBM and non-IBM hosts. Check the interoperability
information at the following web address:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
Manage the LUNs with the SAN Volume Controller. Move them between other managed
disks, and then back to image mode disks so that they can be masked and mapped back to
the AIX server directly.
Figure 7-8 shows our AIX server connected to our SAN infrastructure. It has two LUNs
(hdisk1 and hdisk2) that are zoned directly to it from our storage subsystem. In this example,
a heavy I/O load was simulated to see how the migration process affects the host side.
The AIX server represents a typical SAN environment. The host directly uses LUNs that were
created on a SAN storage subsystem (Figure 7-8 on page 204):
The HBA cards on the AIX server are zoned so that they are in the Green Zone (dotted
line) with the storage subsystem.
The two LUNs, hdisk1 and hdisk2, are defined on the storage subsystem. They are
directly zoned and available to the AIX server.
To connect the SAN Volume Controller to your SAN fabric, perform the following tasks:
1. Assemble your SAN Volume Controller components (nodes, UPS, and SSPC Console if
available).
2. Cable the SAN Volume Controller correctly.
3. Power the SAN Volume Controller on.
4. Verify that the SAN Volume Controller is visible on your SAN.
5. Create and configure your SAN Volume Controller cluster.
6. Create these additional zones:
– An SAN Volume Controller node zone (Black Zone)
– A storage zone (Red Zone)
– A zone for every host initiator (HBA) (Blue Zone)
Create an empty storage pool for these disks using the commands shown in Example 7-2.
IBM_2145:ITSO1:superuser>lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
virtual_capacity used_capacity real_capacity overallocation warning easy_tier
easy_tier_status
First, get the worldwide name (WWN) for the HBA of your AIX server. Make sure that you
have the correct WWN to create a zone for every HBA in the Blue Zone and to reduce the AIX
server downtime.
Example 7-3 shows the commands to get the WWN. In this example, the host has WWNs of
10000000C95ADE4D and 10000000C9648394.
Part Number.................03N5014
EC Level....................A
Serial Number...............1D719080D1
Manufacturer................001D
Customer Card ID Number.....280D
FRU Number.................. 03N5014
Device Specific.(ZM)........3
Network Address.............10000000C95ADE4D
ROS Level and ID............02C82138
Device Specific.(Z0)........1036406D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FFC01159
Device Specific.(Z5)........02C82138
Device Specific.(Z6)........06C32138
Device Specific.(Z7)........07C32138
Device Specific.(Z8)........20000000C95ADE4D
Device Specific.(Z9)........BS2.10X8
Device Specific.(ZA)........B1D2.10X8
Device Specific.(ZB)........B2D2.10X8
Device Specific.(ZC)........00000000
Hardware Location Code......U5791.001.99B0832-P2-C04-T1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP11000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U5791.001.99B0832-P2-C04-T1
Part Number.................10N8620
Serial Number...............1B71704A6C
Manufacturer................001B
EC Level....................A
Customer Card ID Number.....5759
FRU Number.................. 10N8620
Device Specific.(ZM)........3
Network Address.............10000000C9648394
ROS Level and ID............02C82138
Device Specific.(Z0)........1036406D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FFC01159
Device Specific.(Z5)........02C82138
Device Specific.(Z6)........06C12138
Device Specific.(Z7)........07C12138
Device Specific.(Z8)........20000000C9648394
Device Specific.(Z9)........BS2.10X8
Device Specific.(ZA)........B1F2.10X8
Device Specific.(ZB)........B2F2.10X8
Device Specific.(ZC)........00000000
Hardware Location Code......U5791.001.99B082W-P1-C04-T1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP11000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U5791.001.99B082W-P1-C04-T1
#
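Output such as the listing above is typically produced on AIX by querying each Fibre Channel
adapter, for example (the adapter device names are assumptions):
lscfg -vpl fcs0
lscfg -vpl fcs1
The Network Address field in this output is the WWN to use when zoning the host.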
The svcinfo lshbaportcandidate command lists all of the WWNs that are not yet allocated
to a host on the SAN fabric. Example 7-4 shows the WWNs that it found in the example SAN
fabric. If the port does not show up, there is a zone configuration problem.
IBM_2145:ITSO-CLS2:admin>svcinfo lshbaportcandidate
id
10000000C95ADE4D
10000000C9648394
IBM_2145:ITSO-CLS2:admin>
Tip: The svctask chcontroller command allows you to change the discovered storage
subsystem name in SAN Volume Controller. In complex SANs, rename your storage
subsystems to more meaningful names.
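For example (the new controller name is arbitrary):
svctask chcontroller -name ITSO_XIV controller1
svcinfo lscontroller
The second command confirms that the controller now appears under its new name.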
When you discover these MDisks, confirm that you have the correct serial numbers before
creating the image mode volumes.
Example 7-7 Use XIV XCLI to obtain WWN for LUN attached to ITSO_AIX host
XIV_7804143>>mapping_list host=ITSO_AIX
LUN Volume Size Master Serial Number Locked
6 lpar3_1 17 14276 no
7 lpar3_2 17 14277 no
XIV_7804143>>
Because you want to move only the LUN that holds your application and data files, move that
LUN without rebooting the host. You need only unmount the file system and vary off the
volume group (VG) to ensure data integrity after the reassignment.
Important: Moving LUNs to the SAN Volume Controller requires that the Subsystem
Device Driver is installed on the AIX server. You can install the Subsystem Device Driver
ahead of time. However, it might require an outage of your host to do so. The latest driver
information is available at following web address:
https://www-304.ibm.com/support/docview.wss?uid=ssg1S7001350
Subsystem Device Driver can coexist with other multipath drivers only during migration.
To move both LUNs at the same time, perform the following steps:
1. Confirm that the Subsystem Device Driver is installed.
2. Unmount the file system and vary off the VGs:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the unmount MOUNT_POINT command.
# lsfs
Name Nodename Mount Pt VFS Size Options Auto
Accounting
/dev/hd4 -- / jfs2 1048576 -- yes no
/dev/hd1 -- /home jfs2 6291456 -- yes no
/dev/hd2 -- /usr jfs2 6291456 -- yes no
/dev/hd9var -- /var jfs2 3080192 -- yes no
/dev/hd3 -- /tmp jfs2 524288 -- yes no
/dev/hd11admin -- /admin jfs2 262144 -- yes no
/proc -- /proc procfs -- -- yes
no
/dev/hd10opt -- /opt jfs2 2097152 -- yes no
/dev/livedump -- /var/adm/ras/livedump jfs2 524288 -- yes no
/dev/odm -- /dev/odm vxodm -- -- no no
/dev/fslv01 -- /itsofs01 jfs2 33423360 rw yes no
/dev/fslv02 -- /itsofs02 jfs2 33423360 rw yes no
# lsvg -o
itsovg02
itsovg01
rootvg
# lsvg -l itsovg01
itsovg01:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv01 jfs2log 1 1 1 open/syncd N/A
fslv01 jfs2 510 510 1 open/syncd /itsofs01
# lsvg -l itsovg02
itsovg02:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv02 jfs2log 1 1 1 open/syncd N/A
fslv02 jfs2 510 510 1 open/syncd /itsofs02
# umount /itsofs01
# umount /itsofs02
# varyoffvg itsovg01
# varyoffvg itsovg02
3. From the storage side (the XIV system in the example), unmap the disks from the
ITSO_AIX server and remap the disks to the SAN Volume Controller.
Important: Match your discovered MDisk serial numbers (UID in the svcinfo lsmdisk
command task display) with the serial number that you discovered earlier.
5. After you verify that you have the correct MDisks, rename them to avoid confusion in the
future (Example 7-11).
6. Create the image mode volumes with the svctask mkvdisk command and the option
-vtype image (Example 7-12). This command virtualizes the disks in the same layout as
though they were not virtualized.
Tip: While the application is in a quiescent state, you can use FlashCopy to copy the new
image volumes onto other volumes. You do not need to wait until the FlashCopy process is
completed before starting your application.
To put the image mode volumes online, perform the following steps:
1. Remove the old disk definitions using the rmdev -dl <hdisk> command.
2. Run the cfgmgr -vs command to rediscover the available LUNs.
3. If your application and data are on an LVM volume, rediscover the VG, and then run the
varyonvg VOLUME_GROUP command to activate the VG.
4. Mount your file systems with the mount /MOUNT_POINT command.
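Taken together, the host-side sequence might look like the following sketch, using the device,
volume group, and file system names from the earlier examples:
rmdev -dl hdisk1
rmdev -dl hdisk2
cfgmgr -vs
varyonvg itsovg01
varyonvg itsovg02
mount /itsofs01
mount /itsofs02
Depending on the configuration, the LUNs might be rediscovered under different hdisk or
vpath names, so verify the device names before varying on the volume groups.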
Before starting the migration process, run heavy I/O load to hdisk1 and hdisk2 on the AIX side
and collect performance data using the nmon tool.
To check the overall progress of the migration, use the svcinfo lsmigrate command as
shown in Example 7-15 on page 215. Listing the storage pools with the svcinfo lsmdiskgrp
command shows the capacity moving from the source pool to the target pool. Use the
lsvdiskextent CLI command to track the progress as extents move from the image mode
volumes to the new striped volumes.
IBM_2145:ITSO1:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 3
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 3
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO1:superuser>lsvdiskextent 2
id number_extents
5 3
6 3
7 3
8 23
IBM_2145:ITSO1:superuser>lsvdiskextent 3
id number_extents
5 4
6 2
7 3
9 23
IBM_2145:ITSO1:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 62
migrate_source_vdisk_index 3
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO1:superuser>lsmigrate
IBM_2145:ITSO1:superuser>
IBM_2145:ITSO1:superuser>lsvdiskextent 2
id number_extents
5 11
6 11
7 10
IBM_2145:ITSO1:superuser>lsvdiskextent 3
id number_extents
5 11
6 11
7 10
IBM_2145:ITSO1:superuser>
After this task is completed, Example 7-16 shows that the volumes are spread over three
MDisks in the itso_aix_new storage pool. The old storage pool is empty.
IBM_2145:ITSO1:superuser>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name
capacity type FC_id FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count fast_write_state se_copy_count
2 itso_aix_hdisk1 0 io_grp0 online 1 itso_aix_new
16.00GB striped 60050768018106209800000000000009 0
1 not_empty 0
The migration to the SAN Volume Controller is complete. You can remove the original MDisks
from the SAN Volume Controller, and the LUNs from the storage subsystem.
If the LUNs are the ones used last on your storage subsystem, remove them from your SAN
fabric using the following steps, shown in Example 7-17:
1. Remove the MDisks from the storage pool.
2. Delete the storage pool.
3. Remove the mapping to the SAN Volume Controller on your back-end storage.
4. Run the detectmdisk command on the SAN Volume Controller to confirm that the MDisks
are gone.
Example 7-17 Using SAN Volume Controller to remove the storage pool
IBM_2145:ITSO1:superuser>lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
virtual_capacity used_capacity real_capacity overallocation warning easy_tier
easy_tier_status
IBM_2145:ITSO1:superuser>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
tier
5 itso_aix_md1 online managed 1 itso_aix_new 15.0GB
4084400500000000 controller0
6005076303ffce63000000000000840500000000000000000000000000000000 generic_hdd
6 itso_aix_md2 online managed 1 itso_aix_new 15.0GB
4085400400000000 controller0
6005076303ffce63000000000000850400000000000000000000000000000000 generic_hdd
7 itso_aix_md3 online managed 1 itso_aix_new 15.0GB
4085400500000000 controller0
6005076303ffce63000000000000850500000000000000000000000000000000 generic_hdd
8 ITSO_AIX01 online managed 2 aix_imgmdg 16.0GB
0000000000000005 controller1
00173800102f37c4000000000000000000000000000000000000000000000000 generic_hdd
9 ITSO_AIX02 online managed 2 aix_imgmdg 16.0GB
0000000000000006 controller1
00173800102f37c5000000000000000000000000000000000000000000000000 generic_hdd
IBM_2145:ITSO1:superuser>lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
virtual_capacity used_capacity real_capacity overallocation warning easy_tier
easy_tier_status
1 itso_aix_new online 3 2 43.50GB 512 11.50GB
32.00GB 32.00GB 32.00GB 73 0 auto
inactive
2 aix_imgmdg online 0 0 0 512 0
0.00MB 0.00MB 0.00MB 0 0 auto
inactive
IBM_2145:ITSO1:superuser>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
tier
5 itso_aix_md1 online managed 1 itso_aix_new 15.0GB
4084400500000000 controller0
6005076303ffce63000000000000840500000000000000000000000000000000 generic_hdd
6 itso_aix_md2 online managed 1 itso_aix_new 15.0GB
4085400400000000 controller0
6005076303ffce63000000000000850400000000000000000000000000000000 generic_hdd
7 itso_aix_md3 online managed 1 itso_aix_new 15.0GB
4085400500000000 controller0
6005076303ffce63000000000000850500000000000000000000000000000000 generic_hdd
IBM_2145:ITSO1:superuser>rmmdiskgrp aix_imgmdg
IBM_2145:ITSO1:superuser>lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
virtual_capacity used_capacity real_capacity overallocation warning easy_tier
easy_tier_status
1 itso_aix_new online 3 2 43.50GB 512 11.50GB
32.00GB 32.00GB 32.00GB 73 0 auto
inactive
IBM_2145:ITSO1:superuser>
IBM_2145:ITSO1:superuser>detectmdisk
IBM_2145:ITSO1:superuser>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
tier
5 itso_aix_md1 online managed 1 itso_aix_new 15.0GB
4084400500000000 controller0
6005076303ffce63000000000000840500000000000000000000000000000000 generic_hdd
6 itso_aix_md2 online managed 1 itso_aix_new 15.0GB
4085400400000000 controller0
6005076303ffce63000000000000850400000000000000000000000000000000 generic_hdd
7 itso_aix_md3 online managed 1 itso_aix_new 15.0GB
4085400500000000 controller0
6005076303ffce63000000000000850500000000000000000000000000000000 generic_hdd
IBM_2145:ITSO1:superuser>
With the I/O pattern constant during the test, I/O activity increased after moving from image
mode MDisks (not striped) to striped MDisks. Therefore, always use striping to improve
performance. The I/O pattern used in the test was a 4 KB block size, mixed read/write 50/50,
random, with a 50% cache hit ratio.
Figure 7-11 shows the disk transfers per second during the example migration.
There are other preparatory activities to be performed before we shut down the host and
reconfigure the zoning and LUN mapping. This section covers those activities.
After your zone configuration is set up correctly, use the svcinfo lscontroller command to
verify that the SAN Volume Controller recognizes the new storage subsystem controller
(Example 7-18). In addition, you might want to rename the controller to a more meaningful
name using the svctask chcontroller -name command.
In the example, two 16 GB LUNs are on the XIV subsystem. Migrate them back to image
mode volumes on the XIV subsystem in one step.
Although the MDisks do not stay in the SAN Volume Controller for long, rename them.
Change the names to more meaningful ones so that you do not confuse them with other
MDisks. Also, create the storage pools to hold your new MDisks as shown in Example 7-20.
IBM_2145:ITSO1:superuser>lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
virtual_capacity used_capacity real_capacity overallocation warning easy_tier
easy_tier_status
1 itso_aix_new online 3 2 43.50GB 512 11.50GB
32.00GB 32.00GB 32.00GB 73 0 auto
inactive
2 ITSO_AIXMIG online 0 0 0 512 0
0.00MB 0.00MB 0.00MB 0 0 auto
inactive
IBM_2145:ITSO1:superuser>
Your SAN Volume Controller environment is now ready for the volume migration to image
mode volumes.
IBM_2145:ITSO1:superuser>migratetoimage -vdisk itso_aix_hdisk1 -mdisk AIX_MIG01
-mdiskgrp ITSO_AIXMIG
IBM_2145:ITSO1:superuser>migratetoimage -vdisk itso_aix_hdisk2 -mdisk AIX_MIG02
-mdiskgrp ITSO_AIXMIG
IBM_2145:ITSO1:superuser>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
tier
5 itso_aix_md1 online managed 1 itso_aix_new 15.0GB
4084400500000000 controller0
6005076303ffce63000000000000840500000000000000000000000000000000 generic_hdd
6 itso_aix_md2 online managed 1 itso_aix_new 15.0GB
4085400400000000 controller0
6005076303ffce63000000000000850400000000000000000000000000000000 generic_hdd
7 itso_aix_md3 online managed 1 itso_aix_new 15.0GB
4085400500000000 controller0
6005076303ffce63000000000000850500000000000000000000000000000000 generic_hdd
8 AIX_MIG01 online image 2 ITSO_AIXMIG 16.0GB
0000000000000005 controller1
00173800102f37c4000000000000000000000000000000000000000000000000 generic_hdd
9 AIX_MIG02 online image 2 ITSO_AIXMIG 16.0GB
0000000000000006 controller1
00173800102f37c5000000000000000000000000000000000000000000000000 generic_hdd
IBM_2145:ITSO1:superuser>lsmigrate
migrate_type Migrate_to_Image
progress 25
migrate_source_vdisk_index 2
migrate_target_mdisk_index 8
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 3
migrate_source_vdisk_index 3
migrate_target_mdisk_index 9
migrate_target_mdisk_grp 2
IBM_2145:ITSO1:superuser>lsvdiskextent 2
id number_extents
5 4
6 5
7 3
8 20
IBM_2145:ITSO1:superuser>lsvdiskextent 3
id number_extents
5 6
6 7
7 6
9 13
IBM_2145:ITSO1:superuser>lsmigrate
IBM_2145:ITSO1:superuser>
IBM_2145:ITSO1:superuser>lsvdiskextent 2
id number_extents
8 32
IBM_2145:ITSO1:superuser>lsvdiskextent 3
id number_extents
9 32
IBM_2145:ITSO1:superuser>
During the migration, the AIX server is unaware that its data is being physically moved
between storage subsystems.
After the migration is complete, the image mode volumes are ready to be removed from the
AIX server. The real LUNs can be mapped and masked directly to the host by using the
storage subsystems tool.
Remember: Moving LUNs to another storage system might require a driver other than SDD.
Check with the storage subsystem vendor to see which driver you need. You might be
able to install this driver ahead of time.
4. Remove the volumes from the SAN Volume Controller using the svctask rmvdisk
command, which makes the MDisks unmanaged as shown in Example 7-23.
Cached data: When you run the svctask rmvdisk command, the SAN Volume
Controller first double-checks that there is no outstanding dirty cached data for the
volume being removed. If uncommitted cached data still exists, the command fails with
the following error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.
The SAN Volume Controller will automatically destage uncommitted cached data two
minutes after the last write activity for the volume. How much data there is to destage,
and how busy the I/O subsystem is, determine how long this command takes to
complete.
You can check whether the volume has uncommitted data in the cache by using the
svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute.
This attribute has the following meanings:
empty No modified data exists in the cache.
not_empty Modified data might exist in the cache.
corrupt Modified data might have existed in the cache, but any modified data
has been lost.
5. Unmap LUNs from the AIX, and zone the disks from the SAN Volume Controller back to
the AIX server.
Important: This step is the last step that you can perform and still safely back out of
everything you have done so far. Up to this point, you can reverse all of the actions that
you have performed so far to get the server back online without data loss. However,
after you start the next step, you might not be able to turn back without data loss.
To access the LUNs from the AIX server, perform the following steps:
1. Run the cfgmgr -S command to discover the storage subsystem.
2. Use the lsdev -Cc disk command to verify the discovery of the new disk.
3. Remove the references to all of the old disks.
4. If your application and data are on an LVM volume, rediscover the VG and then run the
varyonvg VOLUME_GROUP command to activate the VG.
5. Mount your file systems with the mount /MOUNT_POINT command.
To make sure that the MDisks are removed from the SAN Volume Controller, run the svctask
detectmdisk command. The MDisks are first discovered as offline. Then they will
automatically be removed after the SAN Volume Controller determines that there are no
volumes associated with them.
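A minimal sketch of this cleanup from the SAN Volume Controller CLI, assuming a hypothetical image mode volume named AIX_IMG_VOL: first confirm with svcinfo lsvdisk that the fast_write_state attribute is empty, then remove the volume and rediscover the MDisks.
IBM_2145:ITSO1:superuser>svcinfo lsvdisk AIX_IMG_VOL
IBM_2145:ITSO1:superuser>svctask rmvdisk AIX_IMG_VOL
IBM_2145:ITSO1:superuser>svctask detectmdisk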
If you are implementing SAN Volume Controller for the first time, see 7.2, “SAN Volume
Controller concepts for migrating the data” on page 185.
Using this feature, you can easily migrate data between two physically separate storage
subsystems in a way that is fully transparent to the application.
In the example, the AIX system is named ITSO_AIX and the SAN Volume Controller cluster is
named ITSO1. There are two storage zones in use during the migration.
When using the SAN Volume Controller Volume migration feature for data migration, perform
the following basic steps:
1. Create the storage pool from LUNs created on the target storage system.
2. Migrate the volumes from the storage pool that holds them to the target storage pool.
3. Check that the data is consistent after the migration is finished.
4. Optionally, delete the source storage pool if it will not be used in the future.
Important: The volume mirroring method can be used to minimize the impact to the volume
because the volume goes offline only if the source storage pool fails.
Figure 7-13 shows the environment used during testing. As described previously, from the
host side this process is a transparent and non-disruptive way to migrate data.
Tip: Keep your OS level, HBA adapter, SAN environment, SAN Volume Controller code
level, and back-end storage systems up to date to minimize any potential problems.
Table 7-4 shows the AIX host configuration for this test.
Number of processors 1
Before starting the migration, verify the server information as shown in Example 7-24. The
example uses a single file system built in a volume group. It consists of two LUNs (hdisk41
and hdisk42).
# df -k /svc_itso
file system 1024-blocks Free %Used Iused %Iused Mounted on
/dev/fslv00 26116096 5631780 79% 5 1% /svc_itso
IBM_2145:ITSO1:superuser>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier
0 mdisk1 online managed 1 ITSO_old 16.0GB
0000000000000001 controller1
00173800102f38d6000000000000000000000000000000000000000000000000 generic_hdd
1 mdisk0 online managed 1 ITSO_old 16.0GB
0000000000000002 controller1
00173800102f38d7000000000000000000000000000000000000000000000000 generic_hdd
2 mdisk3 online managed 1 ITSO_old 16.0GB
0000000000000003 controller1
00173800102f38d8000000000000000000000000000000000000000000000000 generic_hdd
3 mdisk2 online managed 1 ITSO_old 16.0GB
0000000000000004 controller1
00173800102f38d9000000000000000000000000000000000000000000000000 generic_hdd
4 mdisk4 degraded_ports unmanaged 15.0GB
4084400400000000 controller0
6005076303ffce63000000000000840400000000000000000000000000000000 generic_hdd
5 mdisk5 online managed 0 ITSO_new 15.0GB
4084400500000000 controller0
6005076303ffce63000000000000840500000000000000000000000000000000 generic_hdd
6 mdisk6 online managed 0 ITSO_new 15.0GB
4085400400000000 controller0
6005076303ffce63000000000000850400000000000000000000000000000000 generic_hdd
7 mdisk7 online managed 0 ITSO_new 15.0GB
4085400500000000 controller0
6005076303ffce63000000000000850500000000000000000000000000000000 generic_hdd
IBM_2145:ITSO1:superuser>
IBM_2145:ITSO1:superuser>lsmdiskgrp
IBM_2145:ITSO1:superuser>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name
capacity type FC_id FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count fast_write_state se_copy_count
0 AIX_hdisk41 0 io_grp0 online 0 ITSO_new
10.00GB striped 60050768018106209800000000000007 0
1 empty 0
1 AIX_hdisk42 0 io_grp0 online 0 ITSO_new
15.00GB striped 60050768018106209800000000000008 0
1 not_empty 0
IBM_2145:ITSO1:superuser>
IBM_2145:ITSO1:superuser>lshost
id name port_count iogrp_count
0 ITSO_AIX 2 4
1 Blade_sle_5v4 2 4
IBM_2145:ITSO1:superuser>
IBM_2145:ITSO1:superuser>lsvdiskhostmap 0
id name SCSI_id host_id host_name vdisk_UID
0 AIX_hdisk41 0 0 ITSO_AIX 60050768018106209800000000000007
IBM_2145:ITSO1:superuser>
IBM_2145:ITSO1:superuser>lsvdiskextent 0
id number_extents
5 14
6 13
7 13
IBM_2145:ITSO1:superuser>lsvdiskextent 1
id number_extents
5 20
6 20
7 20
IBM_2145:ITSO1:superuser>
On the SAN Volume Controller side, the storage pools and volumes are already created. For
more information about how to create storage pools and volumes, and how to map volumes to
the hosts, see 7.5, “Migrating SAN disks to SAN Volume Controller volumes and back to SAN”
on page 203 or see Implementing the IBM System Storage SAN Volume Controller V6.1,
SG24-7933.
Disk-KBytes/second-(K=1024,M=1024*1024)
Disk Busy Read Write Transfers Size Peak% Peak KB/s qDepth
Name KB/s KB/s /sec KB Read+Write or N/A
hdisk41 71% 4796.1 4902.1 2424.6 4.0 75% 12330.5 0
hdisk42 73% 5160.1 5380.1 2635.1 4.0 83% 10949.9 0
Totals(MBps) Read=9.7 Write=10.0 Size(GB)=3983 Free(GB)=18
Figure 7-14 shows I/O load from SAN Volume Controller side.
2. Start the migration process for volumes AIX_hdisk41 and AIX_hdisk42 using the
migratevdisk command as shown in Example 7-27. This process moves the volumes
from the ITSO_old storage pool to the ITSO_new storage pool.
3. During the migration process, track the progress using the lsmigrate command. You can
also track the extent level migration with the lsvdiskextent command as shown in
Example 7-28.
IBM_2145:ITSO1:superuser>lsvdiskextent 0
id number_extents
0 5
1 5
2 6
3 5
5 7
6 6
7 6
IBM_2145:ITSO1:superuser>
IBM_2145:ITSO1:superuser>lsvdiskextent 1
id number_extents
0 12
1 11
2 10
3 11
5 6
6 5
7 5
IBM_2145:ITSO1:superuser>
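The migratevdisk invocation from step 2 (Example 7-27) is not reproduced above. A minimal sketch of what it might look like, using the volume and pool names from this example; the -threads value (1 to 4) is optional and defaults to 4:
IBM_2145:ITSO1:superuser>svctask migratevdisk -mdiskgrp ITSO_new -threads 4 -vdisk AIX_hdisk41
IBM_2145:ITSO1:superuser>svctask migratevdisk -mdiskgrp ITSO_new -threads 4 -vdisk AIX_hdisk42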
Tip: The migration process can take some time. The duration depends on the size of your
volumes and the performance of your source and target storage systems. Estimate the
time needed before you start the migration so that you have realistic expectations for your
environment.
Your IBM technical representative can do a performance assessment before you go live
with your migration.
IBM_2145:ITSO1:superuser>lsvdiskextent 0
id number_extents
5 14
6 13
7 13
IBM_2145:ITSO1:superuser>lsvdiskextent 1
id number_extents
5 20
6 20
7 20
IBM_2145:ITSO1:superuser>
Volumes AIX_hdisk41 and AIX_hdisk42 are moved to new storage pool ITSO_new as
shown in Example 7-30.
5. Delete storage pool ITSO_old and all assigned MDisks (Example 7-31).
Attention: Always be careful when using the -force flag because it forces action
without any warning.
# lspv
hdisk0 00ce0c7b1fcfc0e5 rootvg active
hdisk41 00ce0c7b4b252b09 svc_itso_vg active
hdisk42 00ce0c7b4b252c20 svc_itso_vg active
# df -k /svc_itso
file system 1024-blocks Free %Used Iused %Iused Mounted on
/dev/fslv00 26116096 26110500 1% 6 1% /svc_itso
Figure 7-15 shows overall System Summary for ITSO_AIX during the example migration.
Figure 7-15 SAN Volume Controller performance overview: ITSO_AIX System summary
Figure 7-16 SAN Volume Controller performance overview: ITSO_AIX Host Disk Busy
Migration using SAN Volume Controller does affect the host I/O performance in the example
environment, but only for a short time. This performance impact is closely connected to the
back-end disk subsystem and its performance profile.
The test used the default number of threads (four) on the SAN Volume Controller. Four
threads is the maximum value, and it can saturate the back-end disk system, as you can see
in the example.
However, the process can be prioritized by specifying the number of threads to be used in
parallel (from 1 to 4) while migrating. Using a single thread creates the least background load
on the system.
You can also see in the graphs that the data was migrated from a higher performance
storage tier to a lower performance tier.
Using this feature, you can easily migrate data between two physically separate storage
subsystems. The process is fully transparent to the application. Volume mirroring is included
in the base virtualization license, so you do not need to purchase any additional functions for
the SAN Volume Controller. This function can also be used to migrate from a fully allocated
volume to a thin-provisioned volume.
When using volume mirroring, the zero detect feature for thin-provisioned volumes allows you
to reclaim unused allocated disk space (zeros) when converting a fully allocated volume to a
thin-provisioned volume.
Using SAN Volume Controller mirrored volumes feature for data migration involves the
following basic steps:
1. Add the target copy for source volume.
2. Run synchronization and wait for it to complete.
3. Remove the source volume.
Important: When using the SAN Volume Controller volume mirroring feature, the data
migration can be stopped at any time without compromising data consistency on the
primary volumes.
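A minimal CLI sketch of steps 1 and 2, using the volume sle_5v4_vol01 and the target pool ITSO_new from this example; synchronization progress is checked with lsvdisksyncprogress (step 3 is shown later in Example 7-38):
IBM_2145:ITSO1:superuser>svctask addvdiskcopy -mdiskgrp ITSO_new sle_5v4_vol01
IBM_2145:ITSO1:superuser>lsvdisksyncprogress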
The SAN Volume Controller side has a storage pool created from an old disk subsystem
called ITSO_old. The target storage pool is on a new disk subsystem and called ITSO_new.
From a migration point of view, the process is similar to the one described in 7.6, “SAN
Volume Controller Volume migration between two storage pools” on page 227. However, the
two volumes are kept in synchronization after migration is complete.
The SAN Volume Controller mirrored volume architecture has the following characteristics:
By default, reads will still be serviced from the original volume, but this setting can be
changed after the two volumes are synchronized.
All writes are sent to both volumes synchronously.
It is possible to split the two volumes at a defined point in time. This characteristic allows
you to schedule a controlled split, such as when the host is powered off.
Allows a defined point in time copy to be taken.
The second copy of the volume can be removed at any time, allowing easy regression.
Example 7-33 shows the storage pools on SAN Volume Controller ITSO1 as displayed in the
command-line interface.
Example 7-33 SAN Volume Controller ITSO1 storage pools from CLI
IBM_2145:ITSO1:superuser>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
virtual_capacity used_capacity real_capacity
0 ITSO__old online 3 1 45.00GB 256 35.00GB
10.00GB 10.00GB 10.00GB
1 ITSO__new online 4 1 62.50GB 256 52.50GB
10.00GB 10.00GB 10.00GB
Figure 7-19 shows the SAN Volume Controller ITSO1 volumes from the GUI.
Example 7-34 shows the storage pools on SAN Volume Controller ITSO1 as displayed in the
command-line interface.
To simulate I/O load on the host side, Iometer created constant I/O load on the volume used
for migration. Figure 7-20 shows the Iometer window with the simulated I/O load.
Tip: The steps described in this section are the same regardless of the host operating
system.
7.7.2 Creating mirrored volumes using the SAN Volume Controller GUI
To create mirrored volumes using the GUI, perform the following steps:
1. Click Volumes → All Volumes.
2. Right-click the volume and select Volume Copy Actions → Add Mirrored Copy
(Figure 7-22).
Note: The asterisk (*) shows that Copy1 is the primary volume.
Volume sle_5v4_vol01 changes storage pool location from ITSO_old to ITSO_new, but
retains the same UID number as shown in Figure 7-27.
Figure 7-27 SAN Volume Controller Mirrored Copy window after we change Primary Copy
Tip: SAN Volume Controller supports migration from virtualized MDisks and from image
mode MDisks.
IBM_2145:ITSO1:superuser>lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
virtual_capacity used_capacity real_capacity overallocation warning easy_tier
easy_tier_status
0 ITSO__old online 3 1 45.00GB 256 35.00GB 10.00GB 10.00GB 10.00GB 22 80 auto
inactive
IBM_2145:ITSO1:superuser>svcinfo lsvdiskcopy
vdisk_id vdisk_name copy_id status sync primary mdisk_grp_id mdisk_grp_name
capacity type se_copy easy_tier easy_tier_status
0 sle_5v4_vol01 0 online no no 1 ITSO__new 10.00GB striped no on inactive
0 sle_5v4_vol01 1 online yes yes 0 ITSO__old 10.00GB striped no on inactive
IBM_2145:ITSO1:superuser>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
0 sle_5v4_vol01 0 11 110719235627
IBM_2145:ITSO1:superuser>lsvdisksyncprogress
IBM_2145:ITSO1:superuser>
2. When the synchronization process is finished, change the primary volume from the
ITSO_old storage pool to ITSO_new as shown in Example 7-37.
3. Delete the copy from the storage pool ITSO_old as shown in Example 7-38.
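Examples 7-37 and 7-38 are not reproduced here. A minimal sketch of what these two steps might look like from the CLI, assuming copy 0 is the copy in ITSO_new and copy 1 is the copy in ITSO_old, as in the lsvdiskcopy output shown earlier:
IBM_2145:ITSO1:superuser>svctask chvdisk -primary 0 sle_5v4_vol01
IBM_2145:ITSO1:superuser>svctask rmvdiskcopy -copy 1 sle_5v4_vol01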
Further information: For more information about CLI commands, see the Command-Line
Interface User's Guide, Version 6.2.0, GC27-2287-01.
In this example, a Metro Mirror relationship between two SAN Volume Controller clusters is
used to demonstrate how the copy services function can migrate data between two physically
separate locations. You can also use Metro Mirror as a data migration solution within a single
SAN Volume Controller cluster.
Important: Set the bandwidth used by the background copy process to less than or
equal to the bandwidth that can be sustained by the communication link between the
systems. The link must be able to sustain any host requests and the rate of background
copy.
If the -bandwidth parameter is set to a higher value than the link can sustain, the
background copy process uses the actual available bandwidth.
3. The final window shows a listing of the partnership just created as shown in Figure 7-33.
The status of the partnership is Fully Configured, which means that the partnership has
been created from both clusters. If this cluster is the first cluster to define the partnership,
the status is listed as Partially Configured. Repeat this process from the other cluster to
get to the Fully Configured state.
IBM_2145:ITSO2:superuser>svcinfo lsclustercandidate
id configured name
0000020060418826 yes ITSO1
2. To create the partnership between the two clusters, run the svctask mkpartnership
command from each cluster (Example 7-40). The bandwidth parameter is mandatory.
3. Verify the creation of the partnership using the svcinfo lscluster command as shown in
Example 7-41. The partnership is displayed as partially configured if only one SAN
Volume Controller cluster has created the partnership.
IBM_2145:ITSO2:superuser>svcinfo lscluster
id name location partnership bandwidth id_alias
00000200604187DE ITSO2 local 00000200604187DE
0000020060418826 ITSO1 remote fully_configured 50 0000020060418826
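The mkpartnership commands from Example 7-40 are not reproduced above. A minimal sketch, using the cluster names and the 50 MBps background copy bandwidth from this example; the command must be run on both clusters:
IBM_2145:ITSO1:superuser>svctask mkpartnership -bandwidth 50 ITSO2
IBM_2145:ITSO2:superuser>svctask mkpartnership -bandwidth 50 ITSO1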
3. Select the Auxiliary Cluster ITSO2 (on another system) where the volumes are located
and click Next (Figure 7-35).
5. Select a synchronization option for the volumes (Figure 7-37). In this example, select No
because this reflects the current state of the relationship. Click Next.
6. Select Yes to start replication immediately (Figure 7-38), and click Next.
IBM_2145:ITSO1:superuser>svcinfo lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id
aux_cluster_name primary state relationship_count copy_type
0 Lnxsmallfs_cg 0000020060418826 ITSO1 00000200604187DE ITSO2
empty 0 empty_group
2. After a consistency group is created, create Metro Mirror relationships as part of that
consistency group. Example 7-43 shows the Metro Mirror relationship named
lnx_smallfs_mm, created as part of consistency group Lnxsmallfs_cg. It is between
volume ITSO_Master on the source cluster and ITSO_Auxiliary on the target cluster. The
command svctask mkrcrelationship creates the Metro Mirror relationship and svcinfo
lsrcrelationship verifies its creation.
Example 7-43 Create a Metro Mirror relationship associated with a consistency group
IBM_2145:ITSO1:superuser>svctask mkrcrelationship -master ITSO_Master -aux
ITSO_Auxiliary -name lnx_smallfs_mm -consistgrp Lnxsmallfs_cg -cluster ITSO2
IBM_2145:ITSO1:superuser>svcinfo lsrcrelationship 1
id 1
name lnx_smallfs_mm
master_cluster_id 0000020060418826
master_cluster_name ITSO1
master_vdisk_id 1
master_vdisk_name ITSO_Master
aux_cluster_id 00000200604187DE
aux_cluster_name ITSO2
aux_vdisk_id 0
aux_vdisk_name ITSO_Auxiliary
primary master
consistency_group_id 0
consistency_group_name Lnxsmallfs_cg
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO1:superuser>svcinfo lsrcrelationship 1
id 1
name lnx_mediumfs_mm
master_cluster_id 0000020060418826
master_cluster_name ITSO1
master_vdisk_id 1
master_vdisk_name ITSO_Master
aux_cluster_id 00000200604187DE
aux_cluster_name ITSO2
aux_vdisk_id 0
aux_vdisk_name ITSO_Auxiliary
primary master
consistency_group_id
consistency_group_name
state inconsistent_stopped
bg_copy_priority 50
progress 0
7.8.3 Starting and monitoring SAN Volume Controller Metro Mirror Copy
The next step in the data migration is to copy the data to the target disk. This section contains
the steps for the following procedures:
Starting Metro Mirror Copy using the SAN Volume Controller GUI
Starting Metro Mirror Copy using the SAN Volume Controller CLI
After the data is copied, the state of the two volumes is Consistent Synchronized as
shown in Figure 7-41.
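For the CLI procedure, a minimal sketch of starting and then monitoring the consistency group, using the consistency group name from this example:
IBM_2145:ITSO1:superuser>svctask startrcconsistgrp Lnxsmallfs_cg
IBM_2145:ITSO1:superuser>svcinfo lsrcconsistgrp Lnxsmallfs_cg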
After all the pairs are synced, the consistency group state is consistent_synchronized
(Example 7-46).
name lnx_mediumfs_mm
master_cluster_id 0000020060418826
master_cluster_name ITSO1
master_vdisk_id 1
master_vdisk_name ITSO_Master
aux_cluster_id 00000200604187DE
aux_cluster_name ITSO2
aux_vdisk_id 0
aux_vdisk_name ITSO_Auxiliary
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
After you migrate from the old storage to the new storage using SAN Volume Controller
Metro Mirror, you must switch your application to the new storage. This switch requires the
application to be shut down while access is moved from the old storage to the new storage.
Before starting these steps, shut down the application and prepare the system to move to
the new storage. In general, these steps include:
Quiesce all I/O.
Shut down the application server.
Stop the Metro Mirror copy.
3. Enable write access to the target disk. Be sure to select Allow secondary read/write
access as shown in Figure 7-43. Selecting this option allows the application server to
access the target volumes as the new source volumes. Click Stop Consistency Group to
stop the copy.
4. Select the consistency group (Lnxsmallfs_cg in the example) and click Delete as shown in
Figure 7-45.
Example 7-48 Stop a consistency group and allow access to the target VDisk
IBM_2145:ITSO1:admin>svctask stoprcconsistgrp -access Lnxsmallfs_cg
Figure 7-47 shows overall host System Summary during the Metro Mirror process.
The testing scenarios showed that Metro Mirror replication using SAN Volume Controller had
no impact on the host I/O performance. This allowed sustained I/O load during Metro Mirror
replication.
However, when planning for Metro Mirror replication, be sure that you use correct link sizing.
Insufficient link bandwidth can saturate source disks. Contact your IBM technical
representative to help you properly size the replication link.
The SAN Volume Controller allows you to copy image mode volumes directly from one
subsystem to another subsystem while host I/O is running. The only downtime required is
when the SAN Volume Controller is added to and removed from your SAN environment. This
scenario is described in 7.5.6, “Preparing to migrate from the SAN Volume Controller” on
page 220.
To use SAN Volume Controller for migration purposes only, perform the following steps:
1. Add SAN Volume Controller to your SAN environment.
2. Configure the SAN Volume Controller to fit your needs.
3. Prepare your application for data migration (unmount file systems and detach LUNs).
4. Add SAN Volume Controller between your storage and the host.
5. Create image mode disks on SAN Volume Controller from the LUNs that you migrate.
6. Attach LUNs, mount the file systems, and start the application.
7. Start the migration.
8. After the migration process is complete, detach the selected LUNs.
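A minimal sketch of step 5, creating an image mode volume from an existing LUN and mapping it to the host; the pool name ITSO_IMG, the MDisk mdisk4, and the volume name IMG_VOL01 are hypothetical:
IBM_2145:ITSO1:superuser>svctask mkvdisk -mdiskgrp ITSO_IMG -iogrp io_grp0 -vtype image -mdisk mdisk4 -name IMG_VOL01
IBM_2145:ITSO1:superuser>svctask mkvdiskhostmap -host ITSO_AIX IMG_VOL01
The image mode volume takes the size of the MDisk it is created from, so no size parameter is needed in this case.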
As you can see, little downtime is required. If you prepare everything correctly, you should be
able to reduce your downtime to a few minutes. The copy process is handled by the SAN
Volume Controller, so there is no performance impact on the host during the migration
process.
The LUNs provided by a DS8000 are presented to the LVM or LDM as physical SCSI disks.
The normal process for a migration is to set up a mirror of the data on the old disks to the new
LUNs. You wait until they are synchronized, and then split them at the cut-over time. Some
volume management software provides commands that automate this process.
Disk mirroring was first mentioned in a 1978 patent awarded to Norman Ken Ouchi of the IBM
Corporation. Logical Volume mirroring came into general use in UNIX operating systems
around the mid 1990s. At that time, it was used primarily for data protection. However, the
“split mirror” technique of moving data from one disk to another became a handy tool for the
system administrator.
A lot has changed in the last 10 years, but LVM Mirroring is still an effective way to get data
from one LUN to another. The process is straightforward. You have a logical volume on a LUN
(data source) with data that you want to relocate or migrate to a LUN on a DS8000 (data
destination). The process involves these steps:
1. Configure the destination LUN
2. Have the operating system recognize the LUN
3. Use the operating system LVM to create a logical volume on the DS8000 LUN
4. Establish a mirror between the source and destination logical volumes
5. Sync the mirror
6. Split the mirror
The application using the source logical volume or file system does not need to be stopped
during the LV mirror migration process. The process is non-disruptive as long as the operating
system allows you to add and remove devices dynamically. However, typically you want the
application to use the data on the destination logical volume after the migration. In this case,
the application must be quiesced and reconfigured to use the destination logical volume. For
a completely non-disruptive way to migrate data, use solutions such as those provided by
Softek TDMF or zDMF.
A large advantage of using volume management software for data migration is that it also
allows for various types of consolidation, because of the virtualized nature of volume
management software.
Important: If you are planning to use volume management software functions to migrate
data, be careful with limitations. These limitations include the total number of physical disks
in the same volume group or volume set. In addition, if you are consolidating volumes with
different sizes, check the procedures to see whether consolidation is possible.
In this scenario, the XIV LUNs use Microsoft native MPIO and the DS8000 LUNs are
managed by the SDDDSM device driver.
Important: Initially installing SDDDSM on the host server might require an outage if a
reboot is needed.
The overall migration scenario is depicted in Figure 8-1. The three existing XIV data volumes
(shown on the left side of the diagram in solid lines) are configured on the Windows server
and are running live applications. The objective is to introduce the DS8000 into the
environment and migrate the XIV volumes (source volumes) to the DS8000 volumes (target
volumes). Three DS8000 volumes of equal or greater capacity are created on the DS8000
using type ds. The target volume capacities (sizes) are shown on the right side of the diagram.
Figure 8-1 Migration scenario: IBM XIV source volumes attached to the Windows host through the FC switches are mirrored to IBM DS8100 target volumes of slightly larger capacity (for example, 48 GB to 50 GB and 64 GB to 70 GB)
Step 2. Assign the DS8000 volumes to the Windows host server and discover the new volumes.
Actions: 1. chvolgrp -dev image_id -action add -volume vol_# V#. 2. Click My Computer. 3. Click Manage. 4. Click Disk Management. 5. Click Action from the toolbar. 6. Click Rescan disks.
Comments: Step 1 is done on the DS8000 with the dscli; you can also choose to use the GUI. The image_id is the DS8000 ID, and vol_# is the number of the DS8000 volume in hex. Steps 2-6 are performed on the Windows host server.
Step 3. Bring the DS8000 target disks online.
Actions: 1. Right-click a new disk. 2. Click Online.
Comments: This step is done on the Windows host server.
Step 4. Identify the DS8000 LUN IDs.
Actions: 1. Click Start. 2. Select Subsystem Device DSM. 3. Click Subsystem Device DSM. 4. Enter the command datapath query device from the command prompt.
Comments: This step is necessary if you have specific performance considerations. It is also needed if you have many LUNs that are the same size and you want to get LUN mapping details back to the DS8000 arrays.
Step 5. Initiate the mirroring process.
Actions: 1. Right-click the source volume (x:). 2. Click Add Mirror. 3. Select the chosen disk. 4. Click Add Mirror.
Comments: The synchronization of the source and target volumes is automatic.
Step 6. Verify that the volumes are copied and synced.
Actions: Visual check.
Comments: Look for a disk that is functioning correctly.
Step 7. Remove the mirror.
Actions: 1. Right-click the source disk. 2. Click Remove Mirror. 3. Select the source disk to remove. 4. Click Remove Mirror. 5. Click Yes. 6. Right-click the selected volume. 7. Click Remove Mirror. 8. Click Yes.
Comments: Make sure that you are removing the source disk from the mirror, not the DS8000 target disk.
Step 8. Verify that the mirror is removed.
Actions: Visual check.
Comments: Check that the disk is now deallocated.
Step 9. Change the name of the new target disk to IBM 2107.
Actions: 1. Right-click the volume. 2. Click Properties. 3. Enter the volume name under the General tab.
Comments: Change the name of the new target disk to something meaningful.
Step 10. Repeat steps 5-9.
Actions: N/A.
Comments: Repeat the mirroring and renaming process for the remaining source and target disks.
Step 11. Expand the DS8000 disks to their full capacity.
Actions: 1. Right-click the volume. 2. Click Extend Volume. 3. Click the target disk. 4. Click Next. 5. Click Finish.
Comments: In the example, the target disks are larger than the source LUNs, so you must expand them to use the remaining capacity.
Step 12. Delete the source volumes from the Windows host server.
Actions: 1. Unassign the volume in the XIV box from the Windows host server. 2. Delete the zone in the fabric. 3. Disconnect the XIV storage device from the fabric or host server.
Comments: Follow the procedures for each component listed in the actions.
Step 14. Verify that the device definitions are removed.
Actions: Visual check.
Comments: Check that the source volumes are gone.
Tip: You can also start Disk Management from the command line by typing
diskmgmt.msc.
Tip: To use the LDM function to mirror the LUNs, both the source and target disks need
to be dynamic disks. For Windows 2008, you do not have to manually convert to
dynamic disks because it is part of the mirroring process.
Important: Before proceeding, verify that you have properly cabled and zoned the
connections from the DS8000 to the host server through the SAN fabric.
1. To assign the volumes, use the dscli commands as shown in Example 8-1.
Example 8-1 Using the dscli commands to assign storage to the Windows host
mkfbvol -dev IBM.2107-7581981 -extpool P2 -name 8300_#h -type ds -cap 40 2000
mkfbvol -dev IBM.2107-7581981 -extpool P2 -name 8300_#h -type ds -cap 50 2001
mkfbvol -dev IBM.2107-7581981 -extpool P2 -name 8300_#h -type ds -cap 70 2002
mkvolgrp -dev IBM.2107-75L4741 -type scsimap256 -volume 2000-2002 V6
mkhostconnect -dev IBM.2107-7581981 -wwname 210000E08B875833 -hosttype Win2008
-volgrp V6 x345-tic-17
mkhostconnect -dev IBM.2107-7581981 -wwname 210000E08B09E5FD -hosttype Win2008
-volgrp V6 x345-tic-17
2. Verify that the LUNs are indeed assigned to the volume group V6 by issuing the
showvolgrp command as shown in Example 8-2.
Example 8-2 Using the showvolgrp command to verify the volumes in a volgrp from the dscli
dscli> showvolgrp -dev IBM.2107-75L4741 V6
Date/Time: March 28, 2007 1:18:43 PM CST IBM DSCLI Version: 5.2.400.304 DS:
IBM.2107-75L4741
Name V6
ID V6
Type SCSI Map 256
Vols 2000 2001 2002
In this example, Disks 4, 5, and 6 are the new DS8000 disks that will be part of the mirror
(the target volumes). The disks are offline and not allocated.
3. Use command-line utility diskpart to rescan the disks as shown in Example 8-3.
DISKPART> rescan
DISKPART> list disk
You can use the Diskpart utility to set the SAN policy to OnlineAll to get around the default
policy. However, if the disks are shared among servers, be aware that this setting can lead to
data corruption, so use the correct SAN policy to protect your data.
After a disk is brought online once, it remains online automatically after a reboot.
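A minimal sketch of checking and changing the SAN policy with Diskpart; use OnlineAll only after you have confirmed that the disks are not shared among servers:
DISKPART> san
DISKPART> san policy=OnlineAll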
To bring the target disks online, right-click one of the new disks and select Online as shown in
Figure 8-5.
You can also use command-line utility Diskpart to make the disks online (Example 8-4).
Note: If you have specific requirements to share or isolate applications and files on
separate arrays, map each LUN back to the DS8000. Spread or isolate these LUNs
during the mirroring process.
3. In this example, the original disk is still a basic disk. A warning opens to confirm that you
want to convert it to a dynamic disk as shown in Figure 8-9. Click Yes to continue.
You can also use command-line utility Diskpart to start the mirroring as shown in
Example 8-5.
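Example 8-5 is not reproduced here. A minimal sketch of what the Diskpart mirroring commands might look like, assuming the source volume is volume 3 and the DS8000 target is disk 4 (numbers are hypothetical); if the target disk is still basic, convert it first:
DISKPART> select disk 4
DISKPART> convert dynamic
DISKPART> select volume 3
DISKPART> add disk=4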
You can also check the volume synchronization process with Diskpart as shown in
Example 8-6. Notice that Volume 3 is rebuilding.
In the example, the XIV source disk is no longer wanted in the environment, so you remove
the mirror. To remove the mirror, perform these steps:
1. Right-click the disk from which you want to remove the mirror and select Remove Mirror
(Figure 8-12).
You can also use Diskpart to remove the mirror as shown in Example 8-7.
To verify that the mirror is removed with Diskpart, list the volume and check if the type is
simple. If the mirror is not removed, the volume is listed as mirror as shown in Example 8-8.
3. Click Next.
4. Click Finish.
The volume is now extended as shown in Figure 8-18.
You can also extend the volume with Diskpart as shown in Example 8-9.
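Example 8-9 is not reproduced here. A minimal sketch (the volume number is hypothetical):
DISKPART> select volume 3
DISKPART> extend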
After this process is complete, visually verify that the device definitions are removed. If the
volumes are displayed as missing, right-click the volumes and click Delete.
The process addressed here is fairly generic. The number of LUNs and the type of storage
subsystem they originate from do not really matter. The process is the same for two or twenty
LUNs from any subsystem. Depending on the specifics of your environment, the process can
be disruptive to applications, requiring reboots and file system unmounts.
Migration scenario diagram: the source LUNs (8.43 GB) are mirrored to slightly larger DS8000 target LUNs (9.00 GB).
Step 6. Make the destination LUNs SVM objects.
Commands: metainit d22 1 1 c5t6005076305FFC7860000000000004001d0s2; metainit d32 1 1 c5t6005076305FFC7860000000000004003d0s2; metastat
Comments: This command sequence brings the DS8000 migration destination volumes under the control of Solaris SVM. They are designated as d22 and d32.
Step 7. Make the destination volume a mirror member and sync the mirror.
Commands: metattach d20 d22; metattach d30 d32
Comments: Create a two-way mirror by integrating the metadevice (d22) based on the DS8000 into the already existing mirror (d20).
Step 8. Remove the source volumes from the mirror.
Commands: metadetach d20 d21; metastat d20; metadetach d30 d31; metastat d30
Comments: After the sync is complete, remove the mirror and verify with the metastat command.
The scsi_vhci.conf file is in the /kernel/drv directory and needs to be updated as shown in Figure 8-20.
# vi /kernel/drv/scsi_vhci.conf
"/kernel/drv/scsi_vhci.conf" 31 lines, 1053 characters
#
# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#pragma ident "@(#)scsi_vhci.conf 1.9 04/08/26 SMI"
#
name="scsi_vhci" class="root";
#
load-balance="round-robin";
#
auto-failback="enable";
#
# For enabling MPxIO support for 3rd party symmetric device need an
# entry similar to following in this file. Just replace the "SUN SENA"
# part with the Vendor ID/Product ID for the device, exactly as reported by
# Inquiry cmd.
#
# device-type-scsi-options-list =
device-type-scsi-options-list =
# "SUN SENA", "symmetric-option";
"EMC SYMMETRIX", "symmetric-option",
"IBM 2107900", "symmetric-option";
# symmetric-option = 0x1000000;
symmetric-option = 0x1000000;
Figure 8-20 Enable DS8000 for Solaris multipath support
# stmsboot -u
WARNING: This operation will require a reboot.
Do you want to continue ? [y/n] (default: y) y
The changes will come into effect after rebooting the system.
Reboot the system now ? [y/n] (default: y) y# stmsboot -u
WARNING: This operation will require a reboot.
Do you want to continue ? [y/n] (default: y) y
Figure 8-21 Using stmsboot to enable multipathing for the DS8000 LUNs
Example 8-10 Showing the association between mount points or file systems and an SVM object
# more /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
# non-mpxio: /dev/dsk/c2t0d0s1 - - swap - no -
# mpxio: /dev/dsk/c5t20000004CFA3C467d0s1 - - swap - no -
/dev/dsk/c5t20000004CFA3C467d0s1 - - swap - no -
# non-mpxio: /dev/dsk/c2t0d0s0 /dev/rdsk/c2t0d0s0 / ufs 1 no -
# mpxio: /dev/dsk/c5t20000004CFA3C467d0s0 /dev/rdsk/c5t20000004CFA3C467d0s0 /
ufs 1 no -
/dev/dsk/c5t20000004CFA3C467d0s0 /dev/rdsk/c5t20000004CFA3C467d0s0 / ufs
1 no -
# non-mpxio: /dev/dsk/c2t0d0s7 /dev/rdsk/c2t0d0s7 /export/home ufs 2 yes
-
# mpxio: /dev/dsk/c5t20000004CFA3C467d0s7 /dev/rdsk/c5t20000004CFA3C467d0s7
/export/home ufs 2 yes -
/dev/dsk/c5t20000004CFA3C467d0s7 /dev/rdsk/c5t20000004CFA3C467d0s7
/export/home ufs 2 yes -
/dev/md/dsk/d21 /dev/md/rdsk/d21 /emc08 ufs 2 yes -
/dev/md/dsk/d31 /dev/md/rdsk/d31 /emc19 ufs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
#
#
# df
/ (/dev/dsk/c5t20000004CFA3C467d0s0): 3361778 blocks 482685 files
2. Discover the underlying LUNs of the metadevices by running the metastat command as
shown in Example 8-11. In this example, the Solaris disk device
/dev/dsk/c5t6006048000028470097553594D304342d0s2 is associated with the SVM
metadevice d21.
3. Determine the size of the migration source LUNs so you know what size is required for the
target LUN using the format command (Example 8-12).
In the sample environment, one unformatted EMC LUN with a device special file of
c5t6006048000028470097556434D303030d0 and a size of 6.56 MB exists. It is the
Volume Configuration Management Database LUN (VCMDB) that is used to store host
attachment-specific information. It must not be used to store data.
4. Select the disk you want to migrate in the format menu (Example 8-13).
5. Issue the partition command to print the partition table (Example 8-14).
6. Issue the print command to print the partition information (Example 8-15). In the example
environment, slice 2 of disk 3 is 8.43 GB. A slice in Solaris is a partition on the disk.
Typically, slice 2 refers to the entire disk.
Note: DS8000 volumes for Solaris must be configured with type = scsimap256 and volume
type = Sun.
Example 8-16 shows 5 disks/LUNs listed that are not labeled by Solaris. The DS8000 storage
administrator had assigned twice the number of LUNs as requested. The LUN sizes are listed
in the unlabeled section (first section).
Example 8-16 Output of the command format with DS8000 LUNs attached
# format
Searching for disks...done
c5t6006048000028470097556434D303030d0: configured with capacity of 6.56MB
c5t6005076305FFC7860000000000004001d0: configured with capacity of 9.00GB
c5t6005076305FFC7860000000000004002d0: configured with capacity of 9.00GB
c5t6005076305FFC7860000000000004000d0: configured with capacity of 9.00GB
c5t6005076305FFC7860000000000004003d0: configured with capacity of 33.98GB
The first LUN is the EMC storage that was noted previously. By comparing the device special
file number in the unlabeled LUNs and the numbered LUNs shown in the AVAILABLE DISK
SELECTIONS: section, you can determine which LUNs belong to the DS8000. See the disk
list in the format command output in Example 8-13 on page 294.
Select the LUNs that you want to use as migration targets as shown at the bottom of
Example 8-16 on page 296. In the example, LUNs 6 and 9 are chosen as the DS8000
migration targets.
Table 8-3 summarizes the source and target LUNs used in the scenario. It shows the native
device special file names, the associated metadevice names, and the respective sizes.
Important: This step is disruptive to the application because it requires the migration
source volumes to be unmounted. If the source LUNs are already submirrors, this step can
be omitted.
The command sequence shown in Example 8-17 includes the following steps:
1. Create a one-way mirror using the primary volume names (d20 and d30 in the example)
2. Make the source volumes (d21 and d31) submirrors.
3. Unmount the file systems on the disks that are to be migrated. You will remount them later,
using the name of the mirror rather than the name of the submirror.
Example 8-17 Creating a one-way mirror and taking DS8000 LUNs under control of SVM
# metainit d20 -m d21
d20: Mirror is setup
#
# metainit d30 -m d31
Edit /etc/vfstab to reflect the new primary volume (d20 and d30) names. Example 8-18
shows what /etc/vfstab looks like after these changes are completed.
Use the metastat command to show the hierarchal relationship between the primary volume
and its submirror members as shown in Example 8-21. The metadevices created on the
DS8000 LUNs (d22 and d32) do not belong to a mirror.
Example 8-21 Using the command metastat to show the status of the mirror
# metastat
d30: Mirror
Submirror 0: d31
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 70690560 blocks (33 GB)
d31: Submirror of d30
State: Okay
Size: 70690560 blocks (33 GB)
Stripe 0:
Device Start Block Dbase State
Reloc Hot Spare
/dev/dsk/c5t6006048000028470097553594D304533d0s2 0 No Okay
Yes
d20: Mirror
Submirror 0: d21
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 17671680 blocks (8.4 GB)
d21: Submirror of d20
State: Okay
Size: 17671680 blocks (8.4 GB)
Stripe 0:
Example 8-22 Using metattach to create a two-way mirror and check the status
# metattach d20 d22
d20: submirror d22 is attached
#
#
# metastat d20
d20: Mirror
Submirror 0: d21
State: Okay
Submirror 1: d22
State: Resyncing
Resync in progress: 3 % done
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 17671680 blocks (8.4 GB)
Repeat the steps for the other mirror (d30) and submirror (d32) as shown in Example 8-23.
Example 8-23 Using metattach to create a two-way mirror and check the status
metattach d30 d32
d30: submirror d32 is attached
#
#
# metastat d30
d30: Mirror
Submirror 0: d31
State: Okay
Submirror 1: d32
State: Resyncing
Resync in progress: 3 % done
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 70690560 blocks (33 GB)
Monitor the state of the destination volume submirror until the status is Okay. This status
indicates that the mirror is in sync and complete as seen in Example 8-24.
The source and target device special files used in this example:
EMC source: /dev/dsk/c5t6006048000028470097553594D304342d0s2
IBM target: /dev/dsk/c5t6005076305FFC7860000000000004001d0s2
EMC source: /dev/dsk/c5t6006048000028470097553594D304533d0s2
IBM target: /dev/dsk/c5t6005076305FFC7860000000000004003d0s2
Removing source volumes from the mirror and verifying the migration
The two-way mirror is now synchronized, so you can remove the original source volumes from
the SVM. Use the metadetach command to detach the mirrors and submirrors as shown in
Example 8-25. In the example, detach mirror d20 and submirror d21, and mirror d30 and
submirror d31.You can use the metastat command to verify the results.
The migration is now complete, and all data is located exclusively on DS8000 LUNs. There
might be further tasks remaining such as disconnecting the source LUNs from the system.
The example in this section describes an environment with the following characteristics:
The data you want to migrate is on three EMC LUNs. It can, however, be on any other type
of storage system without affecting the process described.
Each source LUN has one logical volume on slice 2. Slice 2 in Solaris covers the entire
disk.
The EMC source volumes are already under control of VxVM and coded appropriately in
/etc/vfstab. They are mounted on /emcN , 0 <= N <= 2.
You are migrating the data from three source LUNs to three target LUNs on a DS8000. It is
difficult to ensure that the target LUNs are the same size as the source LUNs. Therefore,
use DS8000 LUNs that are a little larger than the source LUNs.
The same host system is attached to both the source and the destination storage
subsystems.
Migration scenario diagram: three EMC source LUNs (8.43 GB, 8.43 GB, and 33.71 GB) are mirrored to slightly larger DS8000 target LUNs (9.00 GB and larger).
Step 1. Determine the source devices.
Commands: format; df; vxprint; vxdisk list EMC0_0
Comments: Determine the size of the EMC migration source file systems controlled by VxVM and the migration source mounted file systems. Also look at the output of vxprint to prepare to associate objects with Solaris device files.
Step 2. Prepare the migration volumes.
Command: vxdisk list
Comments: Prepare the target LUNs to be used as mirror targets. Preparation includes verifying that the target LUNs are visible from the OS. Label them if not already done.
Step 3. Investigate the VxVM migration target device status.
Commands: vxdisk list IBM_DS8x000_0; format
Comments: Determine the DS8000 LUN size.
Step 4. Take IBM DS8000 LUNs under control of VxVM.
Command: vxdiskadm
Comments: Take the target devices under control of VxVM, in this case using the CLI.
Step 6. Remove the source storage from the mirrors.
Commands: vxplex -g dg0 -o rm dis vol0-01; vxplex -g dg1 -o rm dis vol1-01; vxplex -g dg2 -o rm dis vol2-01; vxprint
Comments: Remove the source LUNs from the mirrors.
Step 7. Remove the source LUNs from the VxVM disk groups.
Commands: vxdg -g dg0 rmdisk dg001; vxdg -g dg1 rmdisk dg101; vxdg -g dg2 rmdisk dg201
Comments: Remove the EMC LUNs from the disk groups using vxdg -g <dgN> rmdisk.
Step 8. Verify that the LUN is removed from the disk group.
Commands: vxdisk list; vxdisk print
Comments: Display the migration status.
Tip: There are two paths representing the EMC LUNs (c2t500604843E0C4BCBd0 and
c4t500604843E0C4BD4d0; size: 6.56 MB) that stay unformatted. These paths are the
EMC Volume Configuration Management Database LUNs (VCMDB) that store host
attachment-specific information. These paths must not be used to store data.
2. Use the df command, as shown in Example 8-27, to determine the migration source
mounted file systems. They are already under VxVM control.
Example 8-27 Using the df command to display the currently mounted file systems
# df -k /emc*
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dg1/vol1 8802304 5095101 3475604 60% /emc1
/dev/vx/dsk/dg0/vol0 8802304 5349648 3236972 63% /emc0
/dev/vx/dsk/dg2/vol2 35311616 6927899 26609769 21% /emc2
3. Run the vxprint command to determine the relationship between the VxVM objects and
associate them with the Solaris device files (Example 8-28). In the example, on disk group
dg2, there is one disk/LUN called dg201 with an enclosure-based name of EMC0_2. There is
also one logical volume called vol2.
Example 8-28 Using the command vxprint to show the properties of the VxVM disk groups
# vxprint
Disk group: dg2
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg dg2 dg2 - - - - - -
dm dg201 EMC0_2 - 70624768 - - - -
Example 8-29 Using the command vxdisk list to show the native device names
# vxdisk list EMC0_0
Device: EMC0_0
devicetag: EMC0_0
type: auto
hostid: SunFire280Rtic2
disk: name=dg001 id=1174034174.32.SunFire280Rtic2
group: name=dg0 id=1174034176.34.SunFire280Rtic2
info: format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags: online ready private autoconfig autoimport imported
pubpaths: block=/dev/vx/dmp/EMC0_0s2 char=/dev/vx/rdmp/EMC0_0s2
guid: {66fbe82e-1dd2-11b2-9096-0003ba17ecd4}
udid: EMC%5FSYMMETRIX%5F700975%5F750CD000
site: -
version: 3.1
iosize: min=512 (bytes) max=2048 (blocks)
public: slice=2 offset=65792 len=17605888 disk_offset=0
private: slice=2 offset=256 len=65536 disk_offset=0
update: time=1174047713 seqno=0.10
ssb: actual_seqno=0.0
headers: 0 240
configs: count=1 len=48144
logs: count=1 len=7296
Defined regions:
config priv 000048-000239[000192]: copy=01 offset=000000 enabled
config priv 000256-048207[047952]: copy=01 offset=000192 enabled
log priv 048208-055503[007296]: copy=01 offset=000000 enabled
lockrgn priv 055504-055647[000144]: part=00 offset=000000
Multipathing information:
numpaths: 2
c2t500604843E0C4BCBd29s2 state=enabled
c4t500604843E0C4BD4d27s2 state=enabled
The relationships between the Solaris device files and the VxVM disk group and disk
name in the example environment is summarized in Table 8-5.
Table 8-5 Summarizing the findings regarding the source LUNs so far
Native device names Enclosure- Diskgroup Diskname Volname
based name
In the example, disk numbers 3 (or 10), 4 (or 11), and 5 (or 12) will be migrated.
5. Select the disk in the menu.
Example 8-30 Use vxdisk list to see what is currently under control of VxVM
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
EMC0_0 auto:cdsdisk dg001 dg0 online
EMC0_1 auto:cdsdisk dg101 dg1 online
EMC0_2 auto:cdsdisk dg201 dg2 online
IBM_DS8x000_0 auto:none - - online invalid
IBM_DS8x000_4 auto:none - - online invalid
IBM_DS8x000_5 auto:none - - online invalid
c1t0d0s2 auto:none - - online invalid
c1t1d0s2 auto:none - - online invalid
#
Example 8-31 Using the command vxdisk list to show the native device names
# vxdisk list IBM_DS8x000_0
Device: IBM_DS8x000_0
devicetag: IBM_DS8x000_0
type: auto
info: format=none
flags: online ready private autoconfig invalid
pubpaths: block=/dev/vx/dmp/IBM_DS8x000_0s2 char=/dev/vx/rdmp/IBM_DS8x000_0s2
guid: -
udid: IBM%5F2107%5F75L4741%5F6005076305FFC7860000000000003005
site: -
Multipathing information:
numpaths: 2
c2t50050763050C0786d5s2 state=enabled
c4t50050763050C8786d5s2 state=enabled
#
# vxdisk list IBM_DS8x000_4
Device: IBM_DS8x000_4
devicetag: IBM_DS8x000_4
type: auto
info: format=none
flags: online ready private autoconfig invalid
pubpaths: block=/dev/vx/dmp/IBM_DS8x000_4s2 char=/dev/vx/rdmp/IBM_DS8x000_4s2
guid: -
udid: IBM%5F2107%5F75L4741%5F6005076305FFC7860000000000003004
site: -
Multipathing information:
numpaths: 2
c2t50050763050C0786d4s2 state=enabled
c4t50050763050C8786d4s2 state=enabled
#
# vxdisk list IBM_DS8x000_5
Device: IBM_DS8x000_5
devicetag: IBM_DS8x000_5
type: auto
info: format=none
flags: online ready private autoconfig invalid
pubpaths: block=/dev/vx/dmp/IBM_DS8x000_5s2 char=/dev/vx/rdmp/IBM_DS8x000_5s2
guid: -
udid: IBM%5F2107%5F75L4741%5F6005076305FFC7860000000000003006
site: -
Multipathing information:
numpaths: 2
c2t50050763050C0786d6s2 state=enabled
c4t50050763050C8786d6s2 state=enabled
Example 8-33 Having selected option 1, add or initialize one or more disks
Add or initialize disks
Menu: VolumeManager/Disk/AddDisks
Use this operation to add one or more disks to a disk group. You can
add the selected disks to an existing disk group or to a new disk group
that will be created as a part of the operation. The selected disks may
also be added to a disk group as spares. Or they may be added as
nohotuses to be excluded from hot-relocation use. The selected
disks may also be initialized without adding them to a disk group
leaving the disks available for use as replacement disks.
More than one disk or pattern may be entered at the prompt. Here are
some disk selection examples:
IBM_DS8x000_0
Example 8-34 Having selected option 1, add or initialize one or more disks (cont’d)
Add site tag to disk? [y,n,q,?] (default: n)
The selected disks will be added to the disk group dg0 with
default disk names.
IBM_DS8x000_0
IBM_DS8x000_0
Encapsulate this device? [y,n,q,?] (default: y) n
IBM_DS8x000_0
4. Use the list option of the vxdiskadm CLI command to verify that the DS8000 migration
target volumes are under VxVM control as shown in Example 8-35.
You can also check using the vxdisk list and vxprint commands as shown in
Example 8-36.
Mirroring volumes from the boot disk will produce a disk that
can be used as an alternate boot disk.
At the prompt below, supply the name of the disk containing the
volumes to be mirrored.
4. The mirrors are in sync when the vxtask list command no longer shows any ongoing
activity (Example 8-39).
5. Enter vxprint and make sure that new plexes and subdisks have been created in each
disk group.
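If you prefer a single CLI command to the vxdiskadm menus, the same mirrors can be created with vxassist; a minimal sketch, assuming the disk group, volume, and DS8000 disk media names used in this section:
# vxassist -g dg0 mirror vol0 dg002      # mirror vol0 onto the DS8000 disk dg002
# vxassist -g dg1 mirror vol1 dg102
# vxassist -g dg2 mirror vol2 dg202
# vxtask list                            # monitor the mirror synchronization tasks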
Example 8-40 Using vxplex to get the EMC LUN out of the mirror
# vxplex -g dg0 -o rm dis vol0-01
#
# vxprint -g dg0
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg dg0 dg0 - - - - - -
The same task needs to be done for dg1 and dg2, as shown in Example 8-41.
Example 8-41 Using vxplex to get the EMC LUN out of the mirrors
# vxplex -g dg1 -o rm dis vol1-01
# vxplex -g dg2 -o rm dis vol2-01
# vxprint
Disk group: dg2
Example 8-42 Removing EMC LUNs from the disk groups and displaying the results
# vxdg -g dg0 rmdisk dg001
# vxdg -g dg1 rmdisk dg101
# vxdg -g dg2 rmdisk dg201
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
EMC0_0 auto:cdsdisk - - online
EMC0_1 auto:cdsdisk - - online
EMC0_2 auto:cdsdisk - - online
IBM_DS8x000_0 auto:cdsdisk dg002 dg0 online
IBM_DS8x000_4 auto:cdsdisk dg202 dg2 online
IBM_DS8x000_5 auto:cdsdisk dg102 dg1 online
c1t0d0s2 auto:none - - online invalid
c1t1d0s2 auto:none - - online invalid
Example 8-43 Entering vxprint to see that EMC LUNs are not contained in any of the disk groups
# vxprint
Disk group: dg2
The migration is now complete and the data is located exclusively on DS8000 LUNs. There
might be some additional tasks left beyond the data migration itself, such as disconnecting
the EMC LUNs from the system.
In the scenario that follows, the volume to be migrated is part of a volume group (VG). In this
example, the EMC storage LUN to be migrated is integrated into a VG called vg_emc_to_ibm.
Step 1. Get an overview of the attached disks.
Command: ioscan -fnC disk
Comments: Discover the DS8000 LUNs on the HP-UX host system.
Step 2. Create the device special files.
Command: insf -e
Comments: You must create the device special files before you can work with the DS8000 LUNs.
Step 5. Integrate the target volumes into the respective volume group.
Command: vgextend /dev/vg_emc_to_ibm /dev/dsk/c126t0d0
Comments: Bring an IBM storage LUN into VG vg_emc_to_ibm.
Step 9. Verify that the mirror is removed.
Command: vgdisplay -v /dev/vg_emc_to_ibm
Comments: Verify that no EMC device special file is being referred to.
8.6.2 Detailed steps for migration using HP-UX Volume Manager mirroring
The migration procedure has the following detailed steps:
Getting an overview of the attached disks
Creating device special files
Preparing the target volumes for inclusion in a volume group
Integrating the target volumes into the respective volume group
Assigning an alternative path to the LUN
Setting up a mirror
Removing the source LUN
Verifying the migration
Example 8-44 Using ioscan to show the LUNs that are visible to the OS
# ioscan -fnC disk
Class I H/W Path Driver S/W State H/W Type Description
==========================================================================
disk 0 0/0/1/1.2.0 sdisk CLAIMED DEVICE FUJITSU MAJ3182MC
/dev/dsk/c1t2d0 /dev/rdsk/c1t2d0
disk 1 0/0/2/1.2.0 sdisk CLAIMED DEVICE HP DVD-ROM 305
/dev/dsk/c3t2d0 /dev/rdsk/c3t2d0
disk 266 0/4/0/0.18.50.0.36.0.0 sdisk CLAIMED DEVICE IBM 2107900
disk 264 0/4/0/0.18.51.0.36.0.0 sdisk CLAIMED DEVICE IBM 2107900
disk 262 0/4/0/0.18.60.0.0.0.0 sdisk CLAIMED DEVICE EMC SYMMETRIX
/dev/dsk/c123t0d0 /dev/rdsk/c123t0d0
disk 263 0/4/0/0.18.60.0.0.1.4 sdisk CLAIMED DEVICE EMC SYMMETRIX
/dev/dsk/c123t1d4 /dev/rdsk/c123t1d4
disk 265 0/7/0/0.18.50.0.36.0.0 sdisk CLAIMED DEVICE IBM 2107900
disk 267 0/7/0/0.18.51.0.36.0.0 sdisk CLAIMED DEVICE IBM 2107900
disk 260 0/7/0/0.18.61.0.0.0.0 sdisk CLAIMED DEVICE EMC SYMMETRIX
/dev/dsk/c121t0d0 /dev/rdsk/c121t0d0
disk 261 0/7/0/0.18.61.0.0.1.6 sdisk CLAIMED DEVICE EMC SYMMETRIX
/dev/dsk/c121t1d6 /dev/rdsk/c121t1d6
2. Enter the ioscan -fnC disk command again to verify that the device special files were
created (Example 8-46).
Example 8-47 Using pvcreate to make a LUN ready to be integrated into a volume group
# pvcreate /dev/rdsk/c125t0d0
Physical volume "/dev/rdsk/c126t0d0" has been successfully
created
Example 8-49 Using the command vgextend to add an IBM LUN into the existing VG
# vgextend /dev/vg_emc_to_ibm /dev/dsk/c126t0d0
vgextend: Warning: Max_PE_per_PV for the volume group (2157) too small for this PV
(2303).
Using only 2157 PEs from this physical volume.
Current path "/dev/dsk/c123t1d4" is an alternate link, skip.
Volume group "/dev/vg_emc_to_ibm" has been successfully extended.
Volume Group configuration for /dev/vg_emc_to_ibm has been saved in
/etc/lvmconf/vg_emc_to_ibm.conf
# vgdisplay
--- Volume groups ---
VG Name /dev/vg00
VG Write Access read/write
VG Status available
VG Name /dev/vg_emc_to_ibm
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 16
Cur PV 2
Act PV 2
Max PE per PV 2157
VGDA 4
PE Size (Mbytes) 4
Total PE 4313
Alloc PE 2156
Free PE 2157
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
The warning message “vgextend: Warning: Max_PE_per_PV for the volume group (2157)
too small for this PV (2303)” is displayed because the DS8000 LUN size exceeds the size
of the EMC LUN. When a volume group is created, the maximum number of physical extents
(PE) per physical volume is set to a default value. If the number of physical extents calculated
when the volume group is created exceeds that default value, the calculated number is used.
You can also accept that some space on the target volume is wasted.
Tip: More recent versions of HP-UX (HP-UX 11i v3 and later) allow you to modify the
physical extent number dynamically by issuing the vgmodify command.
Example 8-50 Provide an alternative path for the target volume and verify successful completion
# vgextend /dev/vg_emc_to_ibm /dev/dsk/c125t0d0
vgextend: Warning: Max_PE_per_PV for the volume group (2157) too small for this
PV (2303).
Using only 2157 PEs from this physical volume.
Current path "/dev/dsk/c123t1d4" is an alternate link, skip.
Volume group "/dev/vg_emc_to_ibm" has been successfully extended.
Volume Group configuration for /dev/vg_emc_to_ibm has been saved in /etc/lvmconf
/vg_emc_to_ibm.conf
# vgdisplay -v /dev/vg_emc_to_ibm
--- Volume groups ---
VG Name /dev/vg_emc_to_ibm
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 16
Cur PV 2
Act PV 2
Max PE per PV 2157
VGDA 4
PE Size (Mbytes) 4
Total PE 4313
Alloc PE 2156
Free PE 2157
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
PV Name /dev/dsk/c126t0d0
PV Name /dev/dsk/c125t0d0 Alternate Link
PV Status available
Total PE 2157
Free PE 2157
Autoswitch On
Setting up a mirror
Use the lvextend -m 1 command to create a mirror for the logical volume. Force that mirror to
be on /dev/dsk/c125t0d0 (one of the paths to the IBM LUN) as shown in Example 8-51. Wait
until the mirror becomes established.
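Example 8-51 is not reproduced here. A minimal sketch of the command, assuming the logical volume lvol1 used elsewhere in this section:
# lvextend -m 1 /dev/vg_emc_to_ibm/lvol1 /dev/dsk/c125t0d0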
Example 8-52 Using the command vgreduce to remove one path to the EMC LUN
# vgreduce /dev/vg_emc_to_ibm /dev/dsk/c123t1d4
Device file path "/dev/dsk/c123t1d4" is an alternate path.
Volume group "/dev/vg_emc_to_ibm" has been successfully reduced.
Volume Group configuration for /dev/vg_emc_to_ibm has been saved in /etc/lvmconf
/vg_emc_to_ibm.conf
2. Use the lvreduce command to reduce the number of mirror copies so that the EMC LUN
is not part of the mirror (Example 8-53).
Example 8-53 Using the command lvreduce to remove the EMC LUN from the mirror
#lvreduce -m 0 /dev/vg_emc_to_ibm/lvol1 /dev/dsk/c121t1d6
Logical volume "/dev/vg_emc_to_ibm/lvol1" has been successfully reduced.
Volume Group configuration for /dev/vg_emc_to_ibm has been saved in /etc/lvmconf
/vg_emc_to_ibm.conf
3. Remove the last remaining path to the EMC LUN with the vgreduce command as shown in
Example 8-54.
Example 8-54 Remove the last remaining path to the EMC LUN
vgreduce /dev/vg_emc_to_ibm /dev/dsk/c121t1d6
Volume group "/dev/vg_emc_to_ibm" has been successfully reduced.
Volume Group configuration for /dev/vg_emc_to_ibm has been saved in /etc/lvmconf
/vg_emc_to_ibm.conf
#
There are two possible methods to migrate using AIX LVM mirroring:
Using migratepv -l
Using mklvcopy and syncvg
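The migratepv -l method is shown step by step later in this section. A minimal sketch of the mklvcopy and syncvg method, assuming a hypothetical logical volume datalv whose original copy is on the ESS disk vpath2 and whose new copy goes to the DS8000 disk vpath5:
# mklvcopy datalv 2 vpath5     # add a second copy of the logical volume on the DS8000 vpath
# syncvg -l datalv             # synchronize the new copy
# rmlvcopy datalv 1 vpath2     # after the sync completes, remove the copy on the ESS vpath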
After the volumes are mirrored, break the mirror from the existing unit and remove the old
unit. Prepare the DS8000 with the correct Array to LUN spread and LUN size considerations
as it applies to the current setup of the ESS. For more information about DS8000 preparation,
see 2.4, “Preparing DS8000 for data migration” on page 27.
The example environment has three existing ESS data volumes (left side of the diagram in
solid lines) configured on the AIX server and running live applications. Introduce the DS8000
into the environment and migrate the 2105 volumes (source volumes) to the DS8000 volumes
(target volumes). The three DS8000 volumes were created with equal or greater capacity
using type ds on the DS8000. The target volume capacities (sizes) are shown on the right
side of the diagram in dotted lines.
Migration scenario diagram: ESS source volumes (4.736 GB and 0.896 GB) attached to the AIX host are mirrored to slightly larger DS8000 target volumes (5.12 GB and 1.24 GB).
Step 1. Identify the ESS source LUNs.
Command: lsvpcfg
Comments: Gather information about the AIX host, such as the number of ESS LUNs and their sizes.
Step 2. Assign the DS8000 LUNs to the host.
Command: chvolgrp -dev image_id -action add -volume vol_# V3
Comments: On the DS8000, use the DS8000 CLI (dscli) to assign the LUNs. The image_id is the DS8000 ID and vol_# is the number of the DS8000 volume in hex.
Step 3. Discover the DS8000 LUNs.
Command: cfgmgr
Comments: On the AIX host, discover the newly assigned LUNs.
Step 5. Identify the sizes of the DS8000 target LUNs.
Command: bootinfo -s vpath#
Comments: Where # is the number of the vpath.
Step 6. Move the DS8000 LUNs into the appropriate VGs.
Command: extendvg vg_name vpath#
Comments: Where vg_name is the name of the VG and # is the number of the vpath.
Step 7. Verify that the DS8000 LUNs are added to the VG.
Command: lsvg -p vg_name
Comments: Where vg_name is the name of the VG.
Step 8. Identify the logical volumes (LVs) to migrate.
Command: lsvg -l vg_name
Comments: Where vg_name is the name of the VG.
Step 9. Copy LV data from the ESS source LUNs to the DS8000 target LUNs.
Command: migratepv -l lv_name
Comments: Where lv_name is the name of the LV.
Step 10. Verify that the LUNs are copied.
Commands: lsvg -p vg_name; lspv -l vpath#
Comments: Where vg_name is the name of the VG and # is the number of the vpath.
Step 11. Remove the ESS source LUNs from the VGs and verify that the source ESS LUNs are removed from the VG.
Commands: reducevg vg_name vpath#; lsvg -p vg_name
Comments: Where vg_name is the name of the VG and # is the number of the vpath.
Step 12. Delete the device definitions from the host ODM.
Commands: rmdev -dl vpath#; rmdev -dl hdisk#
Comments: Where # is the number of the vpath and of the appropriate hdisk.
Step 13. Verify that the device definitions are removed.
Command: lsdev -Cc disk
Comments: Check that there are no defined disks.
Step 14. Remove the source zone definition from the switch fabric.
Command: n/a
Comments: Removes the data path between the AIX host and the disk subsystem.
Step 15. In the ESS, unassign the LUNs from the host server.
Actions: 1. Click Modify Volume Assignments. 2. In the Volume assignments window, click First from the Host Nicknames list. 3. Click Perform Sort. 4. Look for the first volume assigned to the host. 5. Press the Ctrl key and click the volume. 6. In the Action list, select Unassign selected volume(s) to target hosts. 7. In the Target Hosts list, select all the targeted volumes using the Ctrl key. 8. Click Perform Configuration Update. 9. Click OK.
Comments: This sequence is done from the ESS Specialist GUI.
Multiply the number of physical partitions (PPs) for the vpath by the PP size of the volume
group. This calculation gives you the LUN size in MB. The formula is:
(LUNPPs) * vgPPsize = LUN size
If you take the Total PPs for vpath2 and multiply it by the vg PP size of 64, you get 4736 MB:
(74) * (64) = 4736 MB
Example 8-57 The dscli command to assign storage to the AIX host
mkfbvol -dev IBM.2107-75L4741 -extpool P4 -name 8300_#h -type ds -cap 5 4004-4005
mkfbvol -dev IBM.2107-75L4741 -extpool P4 -name 8300_#h -type ds -cap 1 4006
mkvolgrp -dev IBM.2107-75L4741 IBM.2107-7581981 scsimask -volume 4004-4006 V3
mkhostconnect -dev IBM.2107-75L4741 -wwname 10000000C93E007C -hosttype pSeries
-volgrp V5 p615-tic-6_A0
mkhostconnect -dev IBM.2107-75L4741 -wwname 10000000C93E0059 -hosttype pSeries
-volgrp V5 p615-tic-6_A1
The ESS LUN IDs consist of the first three digits, which give the LUN hex ID, and the last five digits, which are the ESS serial number. The DS8000 LUNs are the opposite: the first eight digits are the DS8000 serial number, and the last four digits are the LUN hex ID. For example, vpath5 is a DS8000 LUN, so 75L4741 is the serial number of the unit and 4005 is the LUN ID. Within the LUN ID, 40 is the LSS and 05 is the LUN number in hex.
Example 8-63 The output results by issuing the lsvg -l command against a VG
# lsvg -l chuckvg
chuckvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
ess5GBlv1 jfs2 73 73 1 open/syncd /ess5GB1
ess5GBlv2 jfs2 73 73 1 open/syncd /ess5GB2
ess1GBlv jfs2 13 13 1 open/syncd /ess1GB
loglv00 jfs2log 1 1 1 open/syncd N/A
Note: Do not use the migratepv -p command. An error can leave the logical volume with
data on both the target and source LUNs in an unknown state.
Notice that vpath2 now has 73 PPs free and vpath5 has the contents on it.
Another way to verify the data is to issue the lspv -l command against the source and target vpaths, as shown in Example 8-66.
Example 8-66 Results from issuing the lspv -l command against the vpath
# lspv -l vpath5
vpath5:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
ess5GBlv1 73 73 16..16..15..16..10 /ess5GB1
# lspv -l vpath2
vpath2:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
loglv00 1 1 00..00..00..00..01 N/A
Example 8-67 Using the migratepv -l command and the output results
# migratepv -l loglv00 vpath2 vpath5
# lspv -l vpath2
# lspv -l vpath5
vpath5:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
ess5GBlv1 73 73 16..16..15..16..10 /ess5GB1
loglv00 1 1 00..00..00..00..01 N/A
Check the vpaths and associated hdisks by running the lsvpcfg command as shown in
Example 8-69.
In Example 8-70, note that only vpath2 and its associated disks were deleted. Use the same
process to delete all the vpaths and the associated hdisks that are owned by vpath3 and
vpath4.
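For illustration, a hedged sketch of those removals (the hdisk numbers that sit under each vpath come from the lsvpcfg output and are not repeated here):
# rmdev -dl vpath3
# rmdev -dl vpath4
Each associated hdisk is then removed with rmdev -dl hdisk#, where # is taken from the lsvpcfg listing.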
8.7.4 High-level migration plan using AIX LVM mklvcopy and syncvg
commands
The AIX LVM is flexible. Under certain constraints or requirements for data spreading or
consolidation across LUNs, it might be easier to use the mklvcopy and syncvg commands. If
you choose to use the mklvcopy command, check that there is enough space in the VG on the
DS8000 LUNs collectively to create the mirrors. Table 8-10 shows the sequence, commands,
and a brief explanation of this scenario.
Table 8-10 Example migration procedure using mklvcopy
Step 1. Identify the ESS source LUNs.
   Command: lsvpcfg
   Explanation: Gather information about the AIX host, such as the number of ESS LUNs and sizes.
Step 2. Assign the DS8000 LUNs to the host.
   Command: chvolgrp -dev image_id -action add -volume vol_# V3
   Explanation: On the DS8000, use the DS8000 CLI (dscli) to assign the LUNs. Where image_id is the DS8000 ID and vol_# is the number of the DS8000 volume in hex.
Step 3. Discover the DS8000 LUNs.
   Command: cfgmgr
   Explanation: On the AIX host, discover the newly assigned LUNs.
Step 5. Identify the sizes of the DS8000 target LUNs.
   Command: bootinfo -s vpath#
   Explanation: Where # is the number of the vpath.
Step 6. Move the DS8000 LUNs into the VGs appropriately.
   Command: extendvg vg_name vpath#
   Explanation: Where vg_name is the name of the VG and # is the number of the vpath.
Step 7. Verify that the DS8000 LUNs are added to the VG.
   Command: lsvg -p vg_name
   Explanation: Where vg_name is the name of the VG.
Step 8. Determine how the LVs are spread across the vpaths.
   Command: lslv -l lv_name
   Explanation: Where lv_name is the name of the LV.
Step 9. Reserve free space on each LUN for an even spread of the data across LUNs.
   Command: mklv -y lvdummy vg_name PPs vpath#
   Explanation: Where vg_name is the name of the VG and # is the number of the vpath.
Step 10. Copy LV data from the ESS source LUNs to the DS8000 target LUNs.
   Command: mklvcopy lv_name 2 vpath# vpath#
   Explanation: Where lv_name is the name of the LV and # is the number of the vpath of the target DS8000 LUN.
Step 11. Verify that the LV copies are correct.
   Command: lslv -l lv_name
   Explanation: Where lv_name is the name of the LV.
Step 12. Synchronize the LV data from the ESS source LUNs to the DS8000 target LUNs.
   Command: syncvg -v vg_name
   Explanation: Where vg_name is the name of the VG.
Step 13. Verify that the sync is displayed as sync’d rather than stale.
   Command: lsvg -l vg_name
   Explanation: If the LV still shows stale, you need to resync it before proceeding.
Step 14. Verify the source and target LUNs for each LV.
   Command: lslv -l lv_name
   Explanation: Where lv_name is the name of the LV.
Step 15. Remove the source copy of the LV from the ESS LUNs.
   Command: rmlvcopy lv_name 1 vpath#
   Explanation: Where lv_name is the name of the LV and # is the number of the vpath of the source ESS LUN.
Step 16. Verify that all the source ESS LUNs are free with no data.
   Command: lsvg -p vg_name
   Explanation: Where vg_name is the name of the VG.
Step 17. Remove the ESS source LUNs from the VGs and verify that the source ESS LUNs are removed from the VG.
   Commands: reducevg vg_name vpath#, lsvg -p vg_name
   Explanation: Where vg_name is the name of the VG and # is the number of the vpath.
Step 18. Delete the device definitions from the host ODM.
   Commands: rmdev -dl vpath#, rmdev -dl hdisk#
   Explanation: Where # is the number of the vpath and the appropriate hdisk.
Step 19. Verify that the device definitions are removed.
   Command: lsdev -Cc disk
   Explanation: Check that there are no defined disks.
Step 20. Remove the source zone definition from the switch fabric.
   Command: n/a
   Explanation: Removes the data path between the AIX host and the disk subsystem.
Step 21. In the ESS, unassign the LUNs from the host server. This sequence is done from the ESS Specialist GUI:
   1. Click Modify Volume Assignments.
   2. In the Volume assignments window, select First in the Host Nicknames list.
   3. Click Perform Sort.
   4. Press the Ctrl key and select the first volume assigned to the host.
   5. In the Action list, select Unassign selected volume(s) to target hosts.
   6. In the Target Hosts list, select all targeted volumes using the Ctrl key.
   7. Click Perform Configuration Update.
   8. Click OK.
If you take the Total PPs for vpath2 and multiply it by the VG PP size of 64 you get 4736 MB:
(74) * (64) = 4736 MB
Example 8-73 Using dscli commands to assign storage to the AIX host
mkfbvol -dev IBM.2107-75L4741 -extpool P4 -name 8300_#h -type ds -cap 5 4004-4005
mkfbvol -dev IBM.2107-75L4741 -extpool P4 -name 8300_#h -type ds -cap 1 4006
mkvolgrp -dev IBM.2107-75L4741 IBM.2107-7581981 scsimask -volume 4004-4006 V3
mkhostconnect -dev IBM.2107-75L4741 -wwname 10000000C93E007C -hosttype pSeries
-volgrp V5 p615-tic-6_A0
mkhostconnect -dev IBM.2107-75L4741 -wwname 10000000C93E0059 -hosttype pSeries
-volgrp V5 p615-tic-6_A1
Issue the showvolgrp command as shown in Example 8-74 to verify that LUNs are assigned
to V3.
The first three digits of the ESS LUN IDs are the LUN hex ID, and the last five digits are the
ESS serial number. The DS8000 LUNs are the opposite: the first eight digits are the DS8000
Serial number, and the last four digits are the LUN hex ID. For example, because vpath 5 is a
DS8000 LUN, 75L4741 is the serial number of the unit and 4005 is the LUN ID. The 40 is the
LSS and 05 is the LUN ID in hex.
Example 8-79 Using the lslv -l command to check for host LUN spreading requirements
# lslv -l ess5GBlv1
ess5GBlv1:/ess5GB1
PV COPIES IN BAND DISTRIBUTION
vpath0 037:000:000 100% 015:000:000:007:015
vpath1 036:000:000 100% 007:000:000:015:014
# lslv -l ess5GBlv2
ess5GBlv2:/ess5GB2
PV COPIES IN BAND DISTRIBUTION
vpath1 038:000:000 100% 008:015:014:000:001
vpath0 030:000:000 100% 000:008:014:008:000
vpath2 005:000:000 100% 001:000:000:003:001
# lslv -l ess1GBlv
ess1GBlv:/ess1GB
PV COPIES IN BAND DISTRIBUTION
vpath2 009:000:000 100% 002:003:002:000:002
vpath0 004:000:000 100% 000:004:000:000:000
# lslv -l loglv00
loglv00:N/A
PV COPIES IN BAND DISTRIBUTION
vpath0 001:000:000 100% 000:001:000:000:000
Remove lvdummy and continue making the other LV copies by using the rmlv command as
shown in Example 8-84.
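Example 8-84 is not reproduced here; a minimal sketch of the removal, using the lvdummy name from the table above, would be:
# rmlv -f lvdummy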
Verifying LV copies
Verify that LV copies are shown from both the source and target LUNs for each LV by issuing
the lslv -l command (Example 8-87).
Example 8-87 Using the lslv -l command to verify lv copies on the source and target LUNs
# lslv -l ess5GBlv1
ess5GBlv1:/ess5GB1
PV COPIES IN BAND DISTRIBUTION
vpath0 037:000:000 100% 015:000:000:007:015
vpath5 037:000:000 100% 016:000:000:005:016
vpath1 036:000:000 100% 007:000:000:015:014
vpath7 036:000:000 100% 005:000:000:016:015
# lslv -l ess5GBlv2
ess5GBlv2:/ess5GB2
PV COPIES IN BAND DISTRIBUTION
vpath1 038:000:000 100% 008:015:014:000:001
vpath7 037:000:000 100% 011:010:015:000:001
vpath5 036:000:000 100% 000:010:015:011:000
vpath0 030:000:000 100% 000:008:014:008:000
vpath2 005:000:000 100% 001:000:000:003:001
# lslv -l ess1GBlv
ess1GBlv:/ess1GB
PV COPIES IN BAND DISTRIBUTION
vpath2 009:000:000 100% 002:003:002:000:002
vpath6 013:000:000 100% 003:003:003:003:001
vpath0 004:000:000 100% 000:004:000:000:000
# lslv -l loglv00
loglv00:N/A
PV COPIES IN BAND DISTRIBUTION
vpath0 001:000:000 100% 000:001:000:000:000
vpath5 001:000:000 100% 000:001:000:000:000
Example 8-88 Using the rmlvcopy command to remove the source ESS source LUNs
# rmlvcopy ess5GBlv1 1 vpath0 vpath1
# rmlvcopy ess5GBlv2 1 vpath0 vpath1 vpath2
Verifying that all the source ESS LUNs are free with no data
Verify that all the source ESS LUNs are free with no data by checking the LUNs in the VG.
Issue the lsvg -p command as shown in Example 8-89.
Example 8-90 An example of removing and verifying the LUN from the VG
# reducevg chuckvg vpath2
# lsvg -p chuckvg
chuckvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
vpath3 active 74 1 00..00..00..00..01
vpath4 active 14 1 00..00..00..00..01
vpath5 active 79 5 00..00..00..00..05
vpath6 active 15 15 03..03..03..03..03
vpath7 active 79 79 16..16..15..16..16
You can check the vpaths and associated hdisks by running the lsvpcfg command as shown
in Example 8-91.
In Example 8-92, only vpath2 and its associated disks were deleted. Use the same process to
delete all the vpaths and the associated hdisks owned by vpath3 and vpath4.
Example 8-93 Results from issuing the lsdev -Cc disk command
# lsdev -Cc disk
hdisk0 Available 1S-08-00-4,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-5,0 16 Bit LVD SCSI Disk Drive
hdisk14 Available 1V-08-01 IBM FC 2107
hdisk15 Available 1V-08-01 IBM FC 2107
hdisk16 Available 1H-08-01 IBM FC 2107
hdisk17 Available 1H-08-01 IBM FC 2107
hdisk18 Available 1H-08-01 IBM FC 2107
hdisk28 Available 1H-08-01 IBM FC 2107
hdisk29 Available 1H-08-01 IBM FC 2107
hdisk30 Available 1H-08-01 IBM FC 2107
hdisk32 Available 1V-08-01 IBM FC 2107
hdisk33 Available 1V-08-01 IBM FC 2107
hdisk37 Available 1V-08-01 IBM FC 2107
hdisk38 Available 1V-08-01 IBM FC 2107
vpath5 Available Data Path Optimizer Pseudo Device Driver
vpath6 Available Data Path Optimizer Pseudo Device Driver
vpath7 Available Data Path Optimizer Pseudo Device Driver
A key element of any PowerHA cluster is the data used by the highly available applications.
This data is stored on AIX LVM entities. PowerHA clusters use the capabilities of the LVM to
make this data accessible to multiple nodes. When you change the definition of a shared LVM
component in a cluster, the operation updates the following items:
The LVM data that describes the component on the local node
The Volume Group Descriptor Area (VGDA) on the disks in the volume group
AIX LVM enhancements allow all nodes in the cluster to be aware of changes to a volume
group, logical volume, and file system when those changes are made. Depending on the
version of AIX and PowerHA software, LVM changes might not propagate automatically to all
cluster nodes. Verify that the LVM definition of a volume group is the same on all cluster
nodes before a failover occurs.
Refrain from using AIX commands to modify such resources, and use only PowerHA
operations on the resources. Use only Cluster Single Point Of Control (C-SPOC) operations
on shared volume groups, physical disks, and file systems when the cluster is active. Using
AIX commands on shared volume groups while the cluster is active can result in the volume
group becoming inaccessible, and can corrupt data.
(Diagram: ITSO PowerHA cluster, cluster name itsoclstr. Two p520 AIX physical nodes, itso_engine1 and itso_engine2, run AIX 6100-06-03-1048 and PowerHA 6.1.0 SP5 and connect through HA network net_ether_01 (gateway 9.53.11.1, netmask 255.255.252.0). The shared volume group itsovg with logical volume itsolv resides on hdisk2 through hdisk5 on an IBM DS4000. Disk heartbeat networks net_diskhb_01 and net_diskhb_02 use /dev/hdisk2 and /dev/hdisk3 on both nodes.)
Similarly, you can get information about the LVM entity definitions using the lsvg itsovg
command (Example 8-95).
# lsvg -l itsovg
itsovg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
itsolv jfs2 12 12 4 open/syncd /itsofs
# lslv itsolv
LOGICAL VOLUME: itsolv VOLUME GROUP: itsovg
LV IDENTIFIER: 0004afd80000d70000000131205383d4.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs2 WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 32 megabyte(s)
COPIES: 1 SCHED POLICY: striped
# lslv -m itsolv
itsolv:/itsofs
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0104 hdisk2
0002 0104 hdisk3
0003 0104 hdisk4
0004 0104 hdisk5
0005 0105 hdisk2
0006 0105 hdisk3
0007 0105 hdisk4
0008 0105 hdisk5
0009 0106 hdisk2
0010 0106 hdisk3
0011 0106 hdisk4
0012 0106 hdisk5
0032 0111 hdisk5
# lsfs /itsofs
Name Nodename Mount Pt VFS Size Options Auto
Accounting
/dev/itsolv -- /itsofs jfs2 786432 rw no no
NODE itso_engine1:
Network net_diskhb_01
itso_engine1_hdisk2_01 /dev/hdisk2
Network net_diskhb_02
itso_engine1_hdisk3_01 /dev/hdisk3
Network net_ether_01
itso_cluster_svc 9.53.11.226
itso_engine1_boot 9.53.11.227
NODE itso_engine2:
Network net_diskhb_01
itso_engine2_hdisk2_01 /dev/hdisk2
Network net_diskhb_02
itso_engine2_hdisk3_01 /dev/hdisk3
Network net_ether_01
itso_cluster_svc 9.53.11.226
# /usr/es/sbin/cluster/utilities/clshowres -g'itso_rg'
To migrate data from the current set of hdisks, you need to migrate a single logical volume (LV), itsolv, which contains the /itsofs file system. In the example, the second set of LUNs is
allocated on the same IBM DS5000 storage subsystem. You do not need to replace or
upgrade the multipathing device driver.
When both sets of LUNs are discovered on both cluster nodes, use C-SPOC commands to
mirror the volume group between both sets of LUNs. After the volume group is mirrored, you
can remove the mirrored copy from the original set of LUNs and remove those LUNs from the
volume group.
In the example, you also perform consolidation by reducing the number of LUNs. The data from the original set of four LUNs is migrated to a set of two LUNs. The capacity of the target LUNs (32 GB) is twice the capacity of the source LUNs (16 GB).
In the example, the logical volume is striped across all available volumes within the volume
group. Logical volume striping is used to improve performance by balancing the workload
between physical devices. As you consolidate the data on two volumes, the width of striping is
reduced from four volumes to two volumes. Verify the version of AIX you are working with
before using this function because LVM striping width reduction is OS version dependent.
Step 1. Identify the DS5000 source LUNs and LUN sizes.
   Commands: mpio_get_config -Av, bootinfo -s hdiskX
   Explanation: Gather information about the AIX host, including the number of DS5000 LUNs and sizes.
Step 2. Map the new set of DS5000 LUNs to the cluster nodes.
   Command: DS5000 Storage Manager GUI
   Explanation: Use DS5000 Storage Manager to map the new set of LUNs to the cluster nodes.
Step 4. Determine LUN sizes for both sets of LUNs.
   Commands: mpio_get_config -Av, bootinfo -s hdisk#
   Explanation: Where # is the number of the hdisk.
Step 5. Include the new LUNs into the existing volume group.
   Command: smitty cl_extendvg
   Explanation: Use the C-SPOC smitty panel to assign a PVID to a new volume and extend the volume group using the new volume. Must be repeated for all new volumes.
Step 6. Verify that the DS5000 LUNs are added to the VG on all cluster nodes.
   Command: lsvg -p vg_name
   Explanation: Where vg_name is the name of the VG.
Step 7. Identify the logical volumes (LVs) to migrate.
   Command: lsvg -l vg_name
   Explanation: Where vg_name is the name of the VG.
Step 8. Create LV copies on the new set of volumes.
   Command: smitty cl_mirrorvg
   Explanation: Use the C-SPOC smitty panel to create an LV mirror copy.
Step 9. Synchronize LVM mirrors.
   Command: smitty cl_syncvg_lv
   Explanation: Use the C-SPOC smitty panel to synchronize LVM mirrors by LV.
Step 10. Monitor the synchronization process.
   Command: lsvg vg_name
   Explanation: Where vg_name is the name of the VG.
Step 11. Verify LVM mirror consistency.
   Commands: lsvg vg_name, lslv lv_name, mirscan -l itsolv
   Explanation: Where vg_name and lv_name are the names of the VG and mirrored LV.
Step 12. Remove LV copies from the old set of volumes.
   Command: smitty cl_unmirrorvg
   Explanation: Use the C-SPOC smitty panel to unmirror the VG.
Step 13. Remove old volumes from the VG.
   Commands: smitty cl_reducevg, lsvg -p vg_name
   Explanation: Use the C-SPOC smitty panel for cluster-wide removal of the old volume from the VG. The lsvg command must be run on each cluster node for verification.
Step 15. Delete the device definitions from the server ODM.
   Command: rmdev -dl hdisk#
   Explanation: Where # is the number of the appropriate hdisk.
Step 16. Verify that the device definitions are removed.
   Commands: mpio_get_config -Av, lsdev -Cc disk
   Explanation: Check that there are no defined hdisks.
Step 17. Unassign the old DS5000 volumes from all cluster nodes.
   Command: DS5000 Storage Manager GUI
   Explanation: Use DS5000 Storage Manager to remove the mapping of the old set of LUNs to the cluster nodes.
Specifically, the command shows the following information about the subsystem:
The assigned name of the subsystem
The worldwide name of the subsystem
A list of hdisks in the Available state that are associated with the subsystem
To determine the size of hdisks associated with DS5000 LUNs, use LVM commands that
report the size of hdisks, such as lspv and lsvg. These commands work when an hdisk is already included in a VG (lsvg) or has a Physical Volume Identifier (PV ID) assigned. If no PV ID is assigned to the hdisk, use the bootinfo command to determine the size. The bootinfo command reports the size of an hdisk in megabytes (MB) as shown in Example 8-98
on page 356.
The newly discovered hdisk6 and hdisk7 volumes do not have PVIDs assigned yet. To verify
the LUN sizes use the bootinfo command (Example 8-100).
The capacity of the newly discovered hdisk6 and hdisk7 volumes is 32 GB, twice the capacity
of the source volumes. The combined capacity of the new set of volumes matches the
capacity of the old set, 64 GB.
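As a hedged sketch of that check (the 32768 MB output values are inferred from the 32 GB capacity just described, not copied from Example 8-100):
# bootinfo -s hdisk6
32768
# bootinfo -s hdisk7
32768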
Example 8-101 Extending itsovg volume group using C-SPOC smitty interface “smitty cl_extendvg”
+--------------------------------------------------------------------------+
| Select the Volume Group that will hold the new Logical Volume |
| |
| Move cursor to desired item and press Enter. Use arrow keys to scroll. |
| |
| #Volume Group Resource Group Node List |
| itsovg itso_rg itso_engine1,itso_engine2 |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
| /=Find n=Find Next |
+--------------------------------------------------------------------------+
+--------------------------------------------------------------------------+
| Physical Volume Names |
| |
| Move cursor to desired item and press Enter. |
| |
| 0004afca24cd59c3 ( hdisk6 on all cluster nodes ) |
| 0004afca24cd5ae2 ( hdisk7 on all cluster nodes ) |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
| /=Find n=Find Next |
+--------------------------------------------------------------------------+
[Entry Fields]
VOLUME GROUP name itsovg
Resource Group Name itso_rg
Node List itso_engine1,itso_engine2
Reference node itso_engine1
VOLUME names hdisk6
If the PV IDs do not match, the interface does not allow you to expand the volume group until
the inconsistency is resolved. The simplest resolution is to clear the PVIDs on hdisk6 and
hdisk7 on all cluster nodes using the chdev -l hdisk# -a pv=clear command. You must go
through this process once for every disk used for expanding the volume group.
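As a sketch, clearing the PVIDs in this scenario looks like the following, run on every cluster node:
# chdev -l hdisk6 -a pv=clear
# chdev -l hdisk7 -a pv=clear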
You can also use the command-line interface (CLI) to achieve the same result. However, if you use the CLI, you must manually check PV ID consistency. The advantage of using the CLI is that you can automate this task. If you are adding many volumes, the CLI can save you time. A
CLI command equivalent to the set of smitty panels shown in Example 8-101 on page 358 is
shown in Example 8-102.
The C-SPOC command can be run from any node in the cluster, regardless of which node
has the resource group online.
<itso_engine2>
# lsvg -p itsovg
itsovg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk2 active 511 508 103..99..102..102..102
hdisk3 active 511 508 103..99..102..102..102
hdisk4 active 511 508 103..99..102..102..102
hdisk5 active 511 508 103..99..102..102..102
hdisk7 active 1023 1023 205..205..204..204..205
hdisk6 active 1023 1023 205..205..204..204..205
In this example, both cluster nodes (itso_engine1 and itso_engine2) have the itsovg VG
successfully expanded with the newly discovered hdisk6 and hdisk7.
Example 8-104 The output results of issuing the lsvg -l command against a VG
<itso_engine1>
# lsvg -l itsovg
itsovg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
itsolv jfs2 12 12 4 open/syncd /itsofs
<itso_engine2>
# lsvg -l itsovg
itsovg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
itsolv jfs2 12 12 4 closed/syncd /itsofs
LV itsolv is in open/syncd state on one of the cluster nodes and closed/syncd on the other.
In a PowerHA cluster, a JFS2 LV can be opened on only one node at a time. After the file system is unmounted, the LV switches into the closed/syncd state and can be opened by another node. An attempt to open the LV on a cluster node while it is open on another node results in an error message. AIX LVM uses locking mechanisms to prevent more than one node from mounting the same JFS2 file system at a time.
In this example, the JFS2 file system is configured with an INLINE JFS2 log, so there is no separate JFS2 log LV associated with the /itsofs file system. If there were a separate JFS2 log LV, it would also have to be migrated to the new set of LUNs along with the JFS2 LV. By default, a JFS2 file system is configured with a separate jfs2log LV; the INLINE log is used here to keep the configuration simple. In this configuration, you only need to migrate the itsolv LV.
To create LV copies of all LVs in the itsovg VG, use the C-SPOC smitty interface. The smitty
interface offers selection choices and performs all the necessary checking on all cluster
nodes before running LVM commands. If you are mirroring data in a few VGs, use the smitty
interface as shown in Example 8-105.
[Entry Fields]
* VOLUME GROUP name itsovg
Resource Group Name itso_rg
Node List itso_engine1,itso_engine2
Reference node itso_engine1
PHYSICAL VOLUME names hdisk6 hdisk7
After you select the VG to mirror and the set of hdisks to create LV copy on, you are presented
with choices for various attributes. Leave all attributes on the default values except for Mirror
sync mode. Change this attribute value to No Sync. Setting No Sync makes it possible to see the VG configuration change before data synchronization takes place. In addition, change this attribute if you want to increase the parallelism of the data synchronization process. See
“Synchronizing LVM mirrors” on page 363 for more details about the LV synchronization
process.
# lslv itsolv
LOGICAL VOLUME: itsolv VOLUME GROUP: itsovg
LV IDENTIFIER: 0004afd80000d70000000131205383d4.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: closed/stale
TYPE: jfs2 WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 32 megabyte(s)
COPIES: 2 SCHED POLICY: striped
LPs: 12 PPs: 24
STALE PPs: 12 BB POLICY: relocatable
INTER-POLICY: maximum RELOCATABLE: no
INTRA-POLICY: middle UPPER BOUND: 4
MOUNT POINT: /itsofs LABEL: /itsofs
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes (superstrict)
Serialize IO ?: NO
STRIPE WIDTH: 4
STRIPE SIZE: 64K
As you can see in Example 8-106 on page 362, you now have a logical volume with two
copies, two stale physical volumes (PVs), and 12 stale physical partitions (PPs). Stale
volumes and stale partitions are present because you delayed the data synchronization. The
AIX command that scans all LV copies and identifies stale PVs is mirscan (Example 8-107).
The mirscan command can run only on a cluster node where the itso_rg resource group is
online.
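The Example 8-107 report is not reproduced here; the invocation, also listed in the migration table earlier, is simply the following. The CP, SYNC, and STATUS columns discussed next come from its output.
# mirscan -l itsolv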
The hdisk6 and hdisk7 physical volumes are in stale state as indicated in the SYNC column.
These PVs contain the second copy of the LV as shown in the CP column. FAILURE in the
STATUS column indicates that the partition was not synchronized and is still incapable of
performing I/O operations because it contains no valid data.
Example 8-108 LV copy synchronization using C-SPOC smitty interface “smitty cl_syncvg_lv”
+--------------------------------------------------------------------------+
| Select the Volume Group that holds the Logical Volume to be Synchronized |
| |
| Move cursor to desired item and press Enter. |
| |
| #Volume Group Resource Group Node List |
| itsovg itso_rg itso_engine1,itso_engine2 |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
| /=Find n=Find Next |
+--------------------------------------------------------------------------+
+--------------------------------------------------------------------------+
| Logical Volume name |
| |
| Move cursor to desired item and press Enter. Use arrow keys to scroll. |
| |
[Entry Fields]
LOGICAL VOLUME name itsolv
VOLUME GROUP name itsovg
Resource Group Name itso_rg
* Node List itso_engine1,itso_engine2
As smitty takes you through the three panels, it prompts you to select the VG and LV to
synchronize, and determines the list of cluster nodes.
In the third smitty panel, select a value for the Number of Partitions to Sync in Parallel
attribute. This attribute specifies the number of Logical Partitions (LPs) that are synchronized
in parallel. The valid range is 1 - 32. The higher the number, the higher the synchronization
rate.
However, consider the performance impact to avoid setting this attribute too high. Find a
balance between acceptable performance impact and the synchronization rate. If you are
unsure, do not change the default attribute value. This value allows the synchronization process to run with minimal performance impact, but it takes the longest to complete.
In this example, the LVM mirror synchronization is started with 2 LPs synchronized in parallel.
The rate depends on the following factors:
System type
Availability of system resources
Number of physical disks in the volume group
Performance of the disk subsystem
Load on the disk subsystem
Many other factors
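For reference, the base AIX command that C-SPOC drives for this step is roughly equivalent to the following sketch, where the -P value corresponds to the Number of Partitions to Sync in Parallel attribute chosen above:
# syncvg -P 2 -l itsolv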
# lsvg -l itsovg
itsovg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
itsolv jfs2 12 24 6 closed/syncd /itsofs
# lslv itsolv
LOGICAL VOLUME: itsolv VOLUME GROUP: itsovg
LV IDENTIFIER: 0004afd80000d70000000131205383d4.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs2 WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 32 megabyte(s)
COPIES: 2 SCHED POLICY: striped
LPs: 12 PPs: 24
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: maximum RELOCATABLE: no
INTRA-POLICY: middle UPPER BOUND: 4
MOUNT POINT: /itsofs LABEL: /itsofs
The results of the verification commands should be identical on all cluster nodes. In the
example, no stale PVs and PPs are found, indicating that the LVM mirrors are fully in sync.
For additional validation, run the mirscan command on the reference node. However, it takes
significantly longer for the mirscan command to complete report generation because it
examines each allocated partition on the specified device.
Example 8-110 shows a mirscan report of fully synchronized LVM mirrors for reference
purposes. The STATUS of physical volumes hdisk6 and hdisk7 is SUCCESS, and SYNC is
synced.
The C-SPOC tool used for unmirroring the VG removes an extra copy of all LVs within a VG.
You cannot unmirror only some of the LVs in a VG. In this case, use the standard AIX LVM
commands. For data migration, however, unmirroring of all LVs in a VG is adequate.
This example uses the first approach of removing and discarding the LV copy on the old set of
volumes: hdisk2 to hdisk5. Use the lslv -m command to view the relationship between PVs
and LV copies as shown in Example 8-111.
PP1 is the first copy of the LV, PP2 is the second, and PP3 is the third copy. The first copy of
the LV is on the old set of volumes, hdisk2 to hdisk5 in the PV1 column. Before these
volumes can be excluded from the VG, remove the LV copy on these volumes.
To remove the LV copies of all LVs in itsovg VG, use the C-SPOC smitty interface
(Example 8-112). Smitty offers selection choices and performs all the necessary checking on
all cluster nodes before running LVM commands. If you are unmirroring data on a few VGs,
use the C-SPOC smitty interface to simplify the process.
Example 8-112 Unmirroring itsovg VG using C-SPOC smitty interface “smitty cl_unmirrorvg”
+--------------------------------------------------------------------------+
| Select the Volume Group to Unmirror |
| |
| Move cursor to desired item and press Enter. |
| |
| #Volume Group Resource Group Node List |
| itsovg itso_rg itso_engine1,itso_engine2 |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
| /=Find n=Find Next |
+--------------------------------------------------------------------------+#
+--------------------------------------------------------------------------+
| Physical Volume Names |
| |
| Move cursor to desired item and press F7. |
| ONE OR MORE items can be selected. |
[Entry Fields]
VOLUME GROUP name itsovg
Resource Group Name itso_rg
Node List itso_engine1,itso_engine2
Reference node itso_engine1
PHYSICAL VOLUME names hdisk2 hdisk3 hdisk4 hdisk5
On the second smitty panel, select the set of PVs where the first copy of the LV is located. If
you do not select the PVs, the tool removes the most recent copy of the LV, effectively deleting
the result of the migration. After you confirm the choices on the last smitty panel, the LV copy
is removed. To confirm that the LV copy is removed, use the lslv command as shown in Example 8-113.
The LV now has only one copy on the new set of PVs, hdisk6 and hdisk7. To confirm that no PPs are still in use on the old set of LUNs, verify the USED PPs attribute, which can be viewed in the lspv command output (Example 8-114).
To remove the old volumes, use the C-SPOC commands to maintain consistency across all
cluster nodes. The removal is done using the smitty interface as shown in Example 8-115.
Example 8-115 Reducing itsovg VG using C-SPOC smitty interface “smitty cl_reducevg”
+--------------------------------------------------------------------------+
| Select the Volume Group that will hold the new Logical Volume |
| |
| Move cursor to desired item and press Enter. Use arrow keys to scroll. |
| |
| #Volume Group Resource Group Node List |
| itsovg itso_rg itso_engine1,itso_engine2 |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
| /=Find n=Find Next |
+--------------------------------------------------------------------------+
+--------------------------------------------------------------------------+
| Physical Volume Names |
| |
| Move cursor to desired item and press Enter. |
| |
| drs_engine1 hdisk2 |
| drs_engine1 hdisk6 |
| drs_engine1 hdisk7 |
| drs_engine1 hdisk3 |
| drs_engine1 hdisk4 |
[Entry Fields]
VOLUME GROUP name itsovg
Resource Group Name itso_rg
Node List itso_engine1,itso_engine2
VOLUME names hdisk2
Reference node itso_engine1
FORCE deallocation of all partitions on no +
this physical volume?
On the second smitty panel, select a single PV for exclusion. The same process needs to be
repeated for every volume of the old set. Exclusion of the volume must be done only on one
node. The update is then propagated by C-SPOC to all nodes in the cluster.
To verify that all the volumes from the old set are excluded from the VG, run the lspv
command on the cluster nodes (Example 8-116). Configuration on all the cluster nodes
should be consistent.
itso_engine2
# lspv
hdisk2 0004afca0b8dedbd None
hdisk6 0004afca24cd59c3 itsovg concurrent
hdisk7 0004afca24cd5ae2 itsovg concurrent
hdisk0 0004afd87db8511f old_rootvg
hdisk1 0004afd8955a5e21 rootvg active
hdisk3 0004afca0b8e068a None
All PVs from the old set of LUNs are now removed from the itsovg VG configuration.
NODE itso_engine1:
Network net_diskhb_01
itso_engine1_hdisk2_01 /dev/hdisk2
Network net_diskhb_02
itso_engine1_hdisk3_01 /dev/hdisk3
Network net_ether_01
itso_cluster_svc 9.53.11.226
itso_engine1_boot 9.53.11.227
NODE itso_engine2:
Network net_diskhb_01
itso_engine2_hdisk2_01 /dev/hdisk2
Network net_diskhb_02
itso_engine2_hdisk3_01 /dev/hdisk3
Network net_ether_01
itso_cluster_svc 9.53.11.226
itso_engine2_boot 9.53.11.228
In the example, two heartbeat networks are configured. Update one heartbeat network at a
time so that one heartbeat network is always available.
To remove a communication device from a heartbeat network, use the C-SPOC interface as
shown in Example 8-118. The same result can be achieved using the smitty C-SPOC
interface.
Example 8-118 C-SPOC communication device removal using C-SPOC CLI interface
# /usr/es/sbin/cluster/utilities/clrmnode -a'itso_engine2_hdisk2_01'
WARNING: Serial network [net_diskhb_01] has 1 communication device(s) configured.
Two devices are required for a serial network.
# /usr/es/sbin/cluster/utilities/clrmnode -a'itso_engine1_hdisk2_01'
Network removed: net_diskhb_01
To configure the new communication devices and heartbeat network correctly, use the
C-SPOC smitty interface on all participating cluster nodes. The interface allows you to select
devices and validates consistency across cluster nodes. To get to the smitty panel that
defines the heartbeat communication devices, perform the following steps:
1. Start the interface using the smitty hacmp command.
2. Select Extended Configuration → Extended Topology Configuration → Configure HACMP Communication Interfaces/Devices → Add Communication Interfaces/Devices → Add Discovered Communication Interface and Devices → Communication Devices.
3. Select a communication device candidate as shown in Example 8-119.
Example 8-119 C-SPOC smitty interface for creating disk heartbeat network
+--------------------------------------------------------------------------+
| Select Point-to-Point Pair of Discovered Communication Devices to Add |
| |
| Move cursor to desired item and press F7. |
| ONE OR MORE items can be selected. |
| Press Enter AFTER making all selections. |
| |
| # Node Device Pvid |
| drs_engine1 hdisk2 0004afca346ba911 |
| drs_engine2 hdisk2 0004afca346ba911 |
| drs_engine1 hdisk4 0004afca356ea2c6 |
| drs_engine2 hdisk4 0004afca356ea2c6 |
| drs_engine1 hdisk5 0004afca346ba720 |
| drs_engine2 hdisk5 0004afca346ba720 |
| > drs_engine1 hdisk6 0004afca346ba892 |
| > drs_engine2 hdisk6 0004afca346ba892 |
| |
| F1=Help F2=Refresh F3=Cancel |
| F7=Select F8=Image F10=Exit |
| Enter=Do /=Find n=Find Next |
+--------------------------------------------------------------------------+
Repeat this process for any other heartbeat networks. After the reconfiguration of
net_diskhb_02, the test cluster topology configuration looks as shown in Example 8-120.
NODE drs_engine1:
Network net_diskhb_01
drs_engine1_hdisk6_01 /dev/hdisk6
Network net_diskhb_02
drs_engine1_hdisk7_01 /dev/hdisk7
Network net_ether_01
drs_cluster_svc 9.153.1.226
drs_engine1_boot 9.153.1.227
The communication device replacement is now complete for both heartbeat networks.
The last step is to propagate the updates from the node where the changes were made to all
other nodes in the cluster. This step can be done either through the smitty interface (smitty
cm_ver_and_sync.select) or the equivalent CLI command
(/usr/es/sbin/cluster/utilities/cldare -rt -V 'normal'). After the verification is
completed, ensure that the topology configuration on the other nodes in the cluster is identical
to the node used for heartbeat network configuration changes.
+--------------------------------------------------------------------------+
| Select A Disk To remove |
| |
| Move cursor to desired item and press Enter. |
| |
| 0004afca0b8dedbd ( hdisk2 on all cluster nodes ) |
| 0004afca0b8e068a ( hdisk3 on all cluster nodes ) |
| 0004afca0b8e136c ( hdisk4 on all cluster nodes ) |
| 0004afca0b8e1f63 ( hdisk5 on all cluster nodes ) |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
| /=Find n=Find Next |
+--------------------------------------------------------------------------+
[Entry Fields]
Nodes drs_engine1,drs_engin>
Disk 0004afca0b8dedbd
KEEP definition in database no +
If you are not planning on reusing the devices, change the KEEP definition in database
attribute to no to remove the ODM device entries.
Example 8-122 Results from issuing the lsdev -Cc disk and mpio_get_config -Av commands
# lsdev -Cc disk
hdisk0 Available 04-08-00-5,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 04-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk6 Available 00-08-02 MPIO Other DS4K Array Disk
hdisk7 Available 00-08-02 MPIO Other DS4K Array Disk
# mpio_get_config -Av
Frame id 0:
Storage Subsystem worldwide name: 60ab80026806c000049d5867c
Controller count: 2
Partition count: 1
Partition 0:
Storage Subsystem Name = 'DRS_FAStT1'
hdisk LUN # Ownership User Label
hdisk6 11 B (preferred) ITSO_vol_t001
hdisk7 12 A (preferred) ITSO_vol_t002
The LUN mapping removal must be repeated for all the LUNs from the old LUN set. After you
remove all the LUNs, the test migration is complete.
The alternative approach allows you to preserve an extra copy of the data. The splitlvcopy
command allows you to split a copy from the original LV and creates an LV from it. To maintain
consistency of data, however, this command requires the application to be stopped and the
file system to be unmounted. If you decide that having an extra copy is more important than
doing the migration nondisruptively, use splitlvcopy.
The splitlvcopy command does not allow you to select which copy is used to form the new
LV. It automatically uses the last copy of the LV. In the example, the command uses the copy
on the new set of LUNs. If this is not the result you want, rename the original LV and then
rename the new LV to use the original LV name. This way, the file system uses the copy on
the new set of LUNs. If you need to go back to the original set of LUNs, all you need to do is
to swap the LV names again. This process does not require any data to be copied.
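A hedged sketch of that alternative, using a hypothetical name itsolv_split for the new LV (check the splitlvcopy syntax for your AIX level before relying on it):
# umount /itsofs
# splitlvcopy -y itsolv_split itsolv 1
The copies argument (1) is the number of copies that remain in the original LV; the split-off copy, which is the last copy on the new LUNs, becomes the new LV. The renaming described above would then follow before the file system is mounted again.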
Instead, Linux LVM can move physical extents in use by a volume group and its logical volumes to the new target devices using a block-by-block copy. This process uses an underlying mirror volume, but it operates on a segment basis rather than on the entire volume.
Restriction: This method has been presented as an “online” method (where application
I/O can continue to run, while the migration is taking place) in some RedHat publications.
However, testing with heavy I/O loads has found that migration can take a long time or stall
altogether. Therefore, stop application I/O, or schedule the migration when access to the
source volumes is light.
Logical volume management on Linux is called LVM. The latest version, LVM2, is compatible with the earlier Linux LVM1 except for certain clustering and snapshot features.
LVM2 requires the device mapper kernel driver, a generic device mapper that provides the
framework for the volume management. For more information about Linux LVM2, see:
http://tldp.org/HOWTO/LVM-HOWTO/
In this scenario, the system already has Linux LVM2 installed and the volumes to be migrated
are part of an existing LVM volume group.
Linux LVM can move physical extents used by a volume group and its logical volumes to the
new target devices using the pvmove command.
The pvmove command creates a temporary, mirrored logical volume and uses it to copy the
data from the source device to the target device in segments. The original logical volume
metadata is updated to use the temporary mirror segment until the source and target
segments are in sync. It then breaks the mirror and the logical volume uses the target location
for that segment. This process is repeated for each segment to be moved. After all segments
are mirrored, the temporary mirror logical volume is removed and the volume group metadata
is updated to use the new target volumes.
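As a sketch of what such a move looks like in this scenario (the source and target device names match the ones used later in this section):
# pvmove /dev/sdd1 /dev/vpatha1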
Migration environment
The physical test environment is composed of the following components:
Application Server: RedHat AS4, 2.6.9-42.ELsmp kernel
Fibre Channel HBA: Qlogic qla2340 HBAs
Source Storage Controller: EMC Symmetrix
Source volumes:
– /dev/sdd1: A primary partition on a 34-GB Symmetrix device
– /dev/sde1: A primary partition on a 9-GB Symmetrix device
– /dev/sdf1: A primary partition on a 9-GB Symmetrix device
Target Storage Controller: IBM System Storage DS8300
Target Volumes: A 36 GB and two 9 GB volumes are configured on the DS8000 to be used
as targets for the migration.
LVM version2
LVM configuration:
– One volume group
– Two logical volumes on the volume group
– Two file systems
Important: Back up the data on your source volumes before you start the migration
process.
PV Name /dev/sde1
PV UUID 99jSJO-DZi1-29hN-PwR3-ovFA-ga0N-0XDiAE
PV Status allocatable
Total PE / Free PE 269 / 3
Interactive mode DSCLI is used to configure the storage in this example. For more
information about DSCLI, see DS8000 Command-Line Interface User’s Guide,
SC26-7625-05. To configure the target storage, perform the following steps:
1. Create an empty DS8000 Volume Group to hold the DS8000 LUNs (Example 8-124).
2. Create fixed block DS8000 volumes and assign them to the Volume Group
(Example 8-125).
3. Create Host Definitions and assign the Volume Group to the host (Example 8-126).
Example 8-126 Create and verify the host on DS8000 using DSCLI
dscli> mkhostconnect -wwname 210000e08b879f35 -volgrp v0 -hosttype LinuxRHEL x345-tic-4-qla0
Date/Time: March 6, 2007 11:41:19 AM CET IBM DSCLI Version: 5.2.400.426 DS: IBM.2107-75L4741
Install SDD on the Linux host using the Linux rpm command as shown in Example 8-128.
List the vpath devices and their underlying sd devices using the lsvpcfg command
(Example 8-129).
For more information about SDD, see Multipath Subsystem Device Driver User's Guide,
SC30-4131-01.
In the example, fdisk was used to create a primary partition on each vpath and to set the
partition type (Example 8-130).
Make sure that your filter accepts the vpath devices and rejects the underlying DS8000 sd
disks. The LVM returns an error about duplicate physical volumes if both the vpath and
underlying sd disk are seen.
2. Add an entry for vpath in the types section. This entry adds vpaths as a block device type recognized by the LVM (Example 8-131); a sketch of this entry is shown after this list.
For more information about the /etc/lvm/lvm.conf file, see the Linux man page on
lvm.conf.
3. Run vgscan to allow the LVM to recognize the new disks (Example 8-132).
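Referring back to step 2, a hedged sketch of the vpath entry in the devices section of /etc/lvm/lvm.conf (the value 16, the maximum number of partitions, is the value commonly documented for SDD vpath devices; verify it against your SDD documentation):
types = [ "vpath", 16 ]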
You can use a whole disk as an LVM physical volume. However, the LVM metadata written at
the beginning of the disk will not be recognized by outside operating systems. It is therefore at
risk of being overwritten.
The initialization process fails on a disk with an existing partition table on it. Double-check that
you are using the correct disk for the LVM if you see this error.
Tip: You can use the following commands to delete the partition table on the disk if you want to use it for LVM. Use these commands with caution because they destroy the partition table and make the existing data on the disk inaccessible.
# dd if=/dev/zero of=/dev/diskname bs=1k count=1
# blockdev --rereadpt /dev/diskname
Example 8-134 Using vgextend to add vpath devices to the volume group
[root@x345-tic-4 /]# vgextend migvg /dev/vpatha1
Volume group "migvg" successfully extended
[root@x345-tic-4 /]# vgextend migvg /dev/vpathb1
Volume group "migvg" successfully extended
Use the vgdisplay command to verify the addition of the new vpath devices (Example 8-135).
Notice the free Physical Extents (PE) available to the migvg volume group on each new
device.
Important: The pvmove command takes a long time to run because it performs a
block-by-block copy.
If pvmove must be stopped during the migration, issue the pvmove --abort command. The extents that have already been moved remain on the new device, and the remaining extents stay on the old device. The migration can be continued by issuing the pvmove command without any parameters.
3. After pvmove is complete, verify the migration with the vgdisplay command as shown in
Example 8-137. Notice that all the extents are free on /dev/sdd1 and are moved to
/dev/vpatha1.
Example 8-137 vgdisplay showing the extents migrated from device sdd1 to vpatha1
[root@x345-tic-4 ~]# vgdisplay -v migvg
Using volume group(s) on command line
Finding volume group "migvg"
--- Volume group ---
VG Name migvg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 42.09 GB
PE Size 32.00 MB
Total PE 1347
Alloc PE / Size 1344 / 42.00 GB
Free PE / Size 3 / 96.00 MB
VG UUID edL624-ocV6-q128-QCT6-7kkC-34Q6-8twJsB
PV Name /dev/sde1
PV UUID 99jSJO-DZi1-29hN-PwR3-ovFA-ga0N-0XDiAE
PV Status allocatable
Total PE / Free PE 269 / 3
PV Name /dev/vpatha1
PV UUID QKFc5R-tGRt-wX85-IJeJ-o0r3-8T62-DtHuo1
PV Status allocatable
Total PE / Free PE 1183 / 105
PV Name /dev/vpathb1
PV UUID AZweK9-jut4-T8fp-T9P8-7rLz-L2MQ-36958U
PV Status allocatable
Total PE / Free PE 319 / 319
4. Repeat the pvmove command to move extents from /dev/sde1 to /dev/vpathb1. Continue
with any additional devices with extents that need to be migrated.
5. Verify that the extents are now on the vpath devices using vgdisplay or lvdisplay
commands as shown in Example 8-138.
Example 8-138 vgdisplay command showing physical volumes with all extents moved to vpath
devices
--- Physical volumes ---
PV Name /dev/sdd1
PV UUID 54dWUW-OI1N-sHJC-wFTE-VCE2-xT4b-2sqZQx
PV Status allocatable
Total PE / Free PE 1078 / 1078
PV Name /dev/vpatha1
PV UUID tWVdi1-EpRZ-Nu5d-R27e-IhMq-AJgl-GqmSwa
PV Status allocatable
Total PE / Free PE 1183 / 105
PV Name /dev/vpathb1
PV UUID AZweK9-jut4-T8fp-T9P8-7rLz-L2MQ-36958U
PV Status allocatable
Total PE / Free PE 319 / 53
Verify that the new vpath devices remain part of the volume group as shown in
Example 8-140.
PV Name /dev/vpathb1
PV UUID AZweK9-jut4-T8fp-T9P8-7rLz-L2MQ-36958U
PV Status allocatable
Total PE / Free PE 319 / 53
The logical volumes can now be remounted and application I/O started, or you can remove
the old devices from the system.
The example uses the Qlogic dynamic reconfiguration utility to dynamically remove the old
storage controller devices as shown in Example 8-142.
Example 8-142 Qlogic dynamic reconfiguration tool to remove old storage devices
[root@x345-tic-4 ql-dynamic-tgt-lun-disc-2.2]# ./ql-dynamic-tgt-lun-disc.sh -r -s
Please make sure there is no active I/O before running this script
Do you want to continue: (yes/no)? yes
Issuing LIP on host2
Scanning HOST: host2
.............
Issuing LIP on host3
Scanning HOST: host3
.............
Removed
2:0:0:0
2:0:0:14
2:0:0:23
2:0:0:3
Total Devices : 3
Using this technique, it is possible to migrate data, block by block, to another destination. The
easiest way to migrate data to the new device is using shell copy commands like cp, dd, or
cpio. A more elegant way is to establish a mirror using the network devices and allow the
background synchronization copy the data to the new target.
The following example involves NDB and DRDB, which are both integrated in the Linux
kernel. Other implementations are also referenced.
(Diagram: the nbd-client at the source location connects over the network to the nbd-server at the target location.)
When the nbd server is started, it listens on a TCP port assigned by the administrator. The server is started with the command shown in Example 8-145.
In this example, the nbd device is represented by a file in the directory /export. When the
server starts, it listens at the port 5000 for incoming requests.
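Example 8-145 is not reproduced here. With the older nbd-tools used in this scenario, starting the server might look like the following sketch; the export file name under /export is hypothetical:
# nbd-server 5000 /export/nbd-disk0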
After the server is started, the client can create a block device by referencing the IP address
of the server and the port where the server listens (Example 8-146).
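Example 8-146 is likewise not reproduced; a hedged sketch of the client side, using a placeholder server address:
# nbd-client 192.0.2.10 5000 /dev/nbd0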
The network block device is now ready to be used. For example, you can create a file system, mount it, and allocate files and directories. For data migrations, however, other methods are of interest, as shown in the following sections.
This method requires that the applications using the data on the disk /dev/sda2 be shut down. In addition, the file system must be unmounted to flush recent I/O from the cache.
Depending on the used capacity of the disk and the bandwidth of the network, this process
might take a while before all data is transmitted.
Example 8-148 Create the RAID1 out of the local disk and the nbd0 device
# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sda2 /dev/nbd0
mdadm: /dev/sda2 appears to contain an ext2fs file system
size=1048576K mtime=Thu Jul 7 14:30:34 2011
Continue creating array? y
mdadm: array /dev/md0 started.
#
5. You can see a progress bar that shows the status of the synchronization. When the synchronization is done, you see the status shown in Example 8-150.
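A common way to watch the resynchronization progress (a sketch; the Example 8-150 output is not reproduced here) is to read the md status file:
# cat /proc/mdstat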
6. The file system for the applications can be mounted at any time after the RAID1 was
established (Example 8-151).
7. Any updates from users or the applications are now mirrored to both underlying devices. When you are ready to cut over, stop the applications and unmount the file systems, and then stop the RAID1 as shown in Example 8-152.
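Stopping the array might look like the following sketch (Example 8-152 itself is not reproduced):
# mdadm --stop /dev/md0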
8. When the RAID1 was established, an md superblock was written to each device. To mount the device directly on the target system, this superblock must be removed using the command shown in Example 8-153.
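A hedged sketch of that cleanup, run on the target system against the device that backed the nbd export (the device name here is a placeholder):
# mdadm --zero-superblock /dev/sdb2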
The migration is now complete. The remaining operations are to stop the nbd-server at the
target system, stop the nbd-client at the source system, and clean up the RAID1.
8.10.4 Summary
The Network Block Device (nbd) can be used for a basic migration to new Linux systems and
new storage devices. For migrations involving only a few disks, this type of migration can be easy to implement.
The volume group is then extended by the new nbd device. Transmit the data using the
methods as described in 8.9, “Data migration using Linux LVM2 mirroring” on page 376.
Restriction: TDMF for z/OS does not allow access to the target volume during the
migration process.
A source volume must not contain an active local page data set or swap data set.
The source and target volumes must be of the same track geometry.
These characteristics represent the ideal of a transparent and non-disruptive migration facility.
9.1.2 Terminology
These terms describe the TDMF installation and migration process:
Master system: The TDMF system, running as a z/OS batch job or started task, that is
responsible for the data copy function. There is only one master system in a TDMF
session.
Agent system: An associated TDMF z/OS image running in a shared storage environment
with the master. To ensure data integrity, any z/OS LPAR with access to the volumes must run either the master or an agent system. The master and associated agent
systems communicate through a shared system communications data set (COMMDS).
Source volume: The DASD volume containing the data to be migrated.
Target volume: The DASD volume receiving the migrated data.
(Diagram: TDMF architecture. The master system runs on SYSA, and agent systems 1 through 5 run on SYSB through SYSF, all running TDMF and sharing communication data sets. An ISPF monitor and an ESCON/FICON director connect the systems to the source volumes on an IBM ESS 2105 or any vendor subsystem and to the target volumes on an IBM DS8000.)
Important: Because of a possible data integrity exposure, all systems accessing migration
volumes must be identified to the master system. TDMF includes various controls and
checks so you do not make the following errors:
Assign or direct conflicting migrations to the same devices
Attempt migrations to nonexistent devices
Use the same COMMDS for two simultaneous or overlapping migration sessions
For audit purposes, do not use the same COMMDS for different sessions.
If all systems in the session are not started within a 15-minute interval, the session does not
complete system initialization. If you start a system that is not part of an active session, TDMF
terminates the master job and all agent jobs using the same COMMDS.
If you use System Authorization Facility (SAF), and any volume involved in the migration
session fails SAF, the migration session itself fails.
Volumes in a session can be terminated using the TDMF TSO Monitor or Batch Monitor on
the Master system before initialization of the agent systems. If you select the History option to
automatically record information about the migration session, the recording requires UPDATE
authority for the data set. For more information, see the System Authorization Facility (SAF)
and TDMF Installation and Reference, TDM-Z53IR.PDF at:
http://www-950.ibm.com/services/dms/en/support/tdmf/zos/v5/download530.html
The master system initiates and controls all migrations. The agents must acknowledge each phase before the migration proceeds. If any system detects a violation, that specific migration terminates. Depending on the state of the current migration, you must perform back-out processing to restore the original status from before the migration session started.
(Diagram: migration phase flow from Initialization through Activation and Quiesce to Termination, with an I/O error path out of each phase.)
Initialization phase: During this phase, all participating systems confirm the validity of the
source and target volumes. The phase includes the following activities:
– Volume acknowledgement: Volumes that require an acknowledgement are not eligible
for confirmation and selection until one is received. This acknowledgement can use the
TDMF TSO Monitor, a Batch Monitor, or the IBM MVS™ Write-to-Operator/
Write-to-Operator with Reply (WTO/WTOR).
This acknowledgement is required for volumes being migrated whose disaster recovery
or mirror techniques are not compatible unless the
ALLOWmirrorchange(NOACKnowledge) option is specified. TDMF recognizes
volumes that use Peer-to-Peer Remote Copy (PPRC), extended remote copy (XRC),
TrueCopy from Hitachi, and Symmetrix Remote Data Facility (SRDF). These volumes
are recognized with or without Consistency Groups from EMC.
The disaster recovery or mirroring type must be compatible between the source and
target volumes. Compatibility includes the vendor command interface structure and
parameters. If they are not compatible, the ALLOWmirrorchange option must be
specified and an acknowledgement received.
– Volume confirmation: Volumes that require confirmation are not eligible for volume or
group selection until a confirmation is received. This confirmation can use the TDMF
TSO Monitor, a Batch Monitor, or the MVS Write-to-Operator/Write-to-Operator with
Reply (WTO/WTOR). The order of confirmation determines the order of volume
selection. Volumes or groups that do not require confirmation are immediately available
for volume or group selection.
Tip: If an LPAR in the same TDMF session fails an I/O redirection, TDMF resets the
redirection on all LPARs and continues to the resume phase with an I/O error.
Resume phase: Immediately after successful I/O redirect processing, the master system
performs RESUME processing and initiates the RESUME request for the agent systems.
Resuming allows user I/O to continue to the volume on its new device. After all systems
process the RESUME request, the original (source) device is marked offline.
If an I/O error occurs during the quiesce, synchronize, or I/O redirection phases, the source
device is re-enabled for I/O and the UCB quiesce is removed.
Terminate phase: When a volume completes a migration, the fixed storage of that volume
is released for possible reuse in the current session. All dynamic allocations are also
removed.
Because TDMF is a host software migration tool, there are no hardware prerequisites.
Pre-Installation considerations
The master system requires additional storage, depending on the number of volumes in the
session. This additional storage is not a significant amount, and is only allocated for volumes
with the Parallel Access Volumes (PAV) option set:
10 or fewer volumes = 4 K
10 - 32 volumes = 4 K to 12 K
32 - 64 volumes = 12 K to 24 K
64 - 128 volumes = 24 K to 48 K
128 - 256 volumes = 48 K to 96 K
256 - 512 volumes = 96 K to 192 K
The Softek TDMF for z/OS Installation Assurance Guide for V5R3.0, TDM-Z53IA-001.pdf,
has complete pre-installation and post-installation checklists that need to be followed. Be sure
to read the copy of this document that matches the level of software you are installing. It is
available at:
http://www-950.ibm.com/services/dms/en/support/tdmf/zos/v5/data/TDM-Z53IA-001.pdf
The following members in the distributed SAMPLIB can be copied and tailored to install
Softek TDMF. The DLIBZONE and TARGZONE must be updated in the samples to reflect the
zone definitions for the site.
The following procedure creates a complete and separate SMP/E environment for Softek
TDMF. Alternatively, you can install the product in any other SMP/E structure. However, you
must edit the jobs to fit your environment.
Note: In JES3 environments, you might need to separate this step into multiple jobs.
3. Edit INITCSI with the TDMFEDIT exec and submit (calls SMPE).
4. Edit DDDEF with the TDMFEDIT exec and submit.
5. Edit SMPEREC with the TDMFEDIT exec and submit.
6. Edit SMPEAPK with the TDMFEDIT exec and submit.
7. Edit SMPEAPP with the TDMFEDIT exec and submit.
8. Edit SMPEACK with the TDMFEDIT exec and submit.
9. Edit SMPEACC with the TDMFEDIT exec and submit.
Note: If the installation has a security package on the z/OS system on which TDMF is
installed, you must modify the security package. Otherwise TDMF will not run properly.
If a z/OS system is available, copy z/VM, Linux for System z, and IBM z/VSE® volumes
(3390-x volumes only) using REPLICATE instead of MIGRATE. The volumes must be OFFLINE
to those operating systems because no AGENT is available for them. Define the source and target
volumes to the z/OS system and run the same jobs as in z/OS, but with parameter
REPLICATE instead of MIGRATE. After the data is replicated, use ICKDSF REFORMAT to
rename the target volumes to their original VOLIDs.
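A minimal sketch of that rename step follows; the device number and volume serials are taken
from the examples in this chapter only for illustration, and the exact ICKDSF parameters
should be verified against the ICKDSF documentation:
//RENAME   EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REFORMAT UNITADDRESS(8100) VERIFY(XX8100) VOLID(RS6000)
/*
In this sketch, target device 8100 (currently labeled XX8100) is given back the original VOLID
RS6000 after the replication completes.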
Special considerations must be taken when z/OS is running under z/VM when allocating the
COMMDS. For more information, see the TDMF installation manual at:
http://www-950.ibm.com/services/dms/en/support/tdmf/zos/v5/download530.html
Security
If the installation has a security package on the z/OS system on which TDMF is installed, you
need to modify the security package. Otherwise TDMF does not run properly. Check the
profiles and command tables.
Limiting access to the TDMF authorized library to prevent unauthorized use of the TDMF
system can be accomplished through security packages. The user for the SYSOPTN batch
job (provided by the TDMF installation) must have UPDATE authority for the library indicated
by the SECCOM DD statement. The master migration job also needs UPDATE authority for
the SECCOM file if the authorization key specifies volume or terabyte limits. To update
authorization keys using Option 10 of the TDMF TSO Monitor as shown in Figure 9-3, you
must have UPDATE authority for the TDMLLIB library. In addition, the library must be
allocated as SECCOM in the SYSOPTN job.
If the History option is selected, UPDATE authority is required for the data set specified.
When viewing the history file (and any COMMDS) using the TDMF TSO Monitor, you must
have READ authority.
For a swap migration, ALTER authority must be in effect for the source and target volumes.
Error messages are issued for all volumes not meeting these requirements in a session.
If the authorization keys specify volume or terabyte limits, the user ID submitting the jobs
must have UPDATE authority for the TDMLLIB (SECCOM DD statement).
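As an illustration of these requirements, the following RACF commands grant a migration
user UPDATE authority for the TDMLLIB and READ authority for a communication or history
data set. The data set profile names and user ID are hypothetical, and the equivalent ACF2 or
Top Secret definitions differ:
PERMIT 'SYS1.IBM.HGTD530.TDMLLIB' ID(TDMFUSR) ACCESS(UPDATE)
PERMIT 'TDMF.SESSION1.COMMDS' ID(TDMFUSR) ACCESS(READ)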
The test configuration: a TDMF agent runs on LPAR BVS2 and shares a communication data
set with the master. Source devices 6000-6003 (VOLIDs RS6000 through RS6003) are
migrated to target devices 8100-8103 (VOLIDs XX8100 through XX8103).
CPC ND = 002064.116.IBM.51.00000002423A
CPC SI = 2064.116.IBM.51.000000000002423A
CPC ID = 00
CPC NAME = P002423A
LP NAME = BVS1 LP ID = 4
MIF ID = 4
CPC ND = 002064.116.IBM.51.00000002423A
CPC SI = 2064.116.IBM.51.000000000002423A
CPC ID = 00
CPC NAME = P002423A
LP NAME = BVS2 LP ID = 5
MIF ID = 5
Figure 9-6 Displaying processor information
DISK subsystems: IBM TotalStorage ESS 800 and IBM System Storage DS8000
Channel attachment (ESCON / FICON) for storage
RO *ALL,D U,,,6000,4
IEE421I RO *ALL,D U,,,6000,4 112
MCECEBC RESPONSES ---------------------------
IEE457I 22.56.27 UNIT STATUS 111
UNIT TYPE STATUS VOLSER VOLSTATE
6000 3390 A RS6000 PRIV/RSDNT
6001 3390 O RS6001 PRIV/RSDNT
6002 3390 A RS6002 PRIV/RSDNT
6003 3390 O RS6003 PRIV/RSDNT
MZBCVS2 RESPONSES ---------------------------
IEE457I 22.56.27 UNIT STATUS 909
UNIT TYPE STATUS VOLSER VOLSTATE
6000 3390 O RS6000 PRIV/RSDNT
6001 3390 O RS6001 PRIV/RSDNT
6002 3390 O RS6002 PRIV/RSDNT
6003 3390 OFFLINE RS6003 PRIV/RSDNT
2. Bring the volume online by issuing a VARY ONLINE command to all LPARs for all devices
in the range (6100-6103) as shown in Figure 9-8.
RO *ALL,VARY 6000-6003,ONLINE
IEE421I RO *ALL,VARY 6000-6003,ONLINE 163
MCECEBC RESPONSES ---------------------------------------------------
IEE457I 22.58.30 UNIT STATUS 162
UNIT TYPE STATUS VOLSER VOLSTATE
6000 3390 O RS6100 PRIV/RSDNT
6001 3390 O RS6101 PRIV/RSDNT
6002 3390 O RS6102 PRIV/RSDNT
6003 3390 O RS6103 PRIV/RSDNT
MZBCVS2 RESPONSES ---------------------------------------------------
IEE457I 22.58.30 UNIT STATUS 482
UNIT TYPE STATUS VOLSER VOLSTATE
6000 3390 O RS6100 PRIV/RSDNT
6001 3390 O RS6101 PRIV/RSDNT
6002 3390 O RS6102 PRIV/RSDNT
6003 3390 O RS6103 PRIV/RSDNT
Figure 9-8 Check source device address
4. Bring the target devices online by issuing the VARY ONLINE command to the devices as
seen in Figure 9-10.
6. Run the TDMF member to start the TDMF monitor as shown in Figure 9-12.
8. The TDMF Monitor menu is displayed as seen in Figure 9-14. There are many options
presented in this menu. Each option is covered in detail in the TDMF Installation and
Reference manual. Select option 0 to open SYS1.IBM.HGTD530.SAMPLIB, where you find
example migration jobs.
****************************************************************
** Start AGENT, one for each LPAR connected to source and target
****************************************************************
11.After verifying that the new COMMDS exists, submit the master and agent jobs as shown
in Figure 9-16 on page 413.
12.Change to the TDMF TSO monitor using option 1 in the Monitor menu as seen in
Figure 9-14 on page 411. Option 1 allows you to monitor the progress of the current
session (Figure 9-18). In this example, the data is being migrated on four volumes. Two of
the volumes are copying at the same time. The progress monitor is updated to show the
migration phases and the percentage of completion for each phase after pressing the
Enter key. You also get an estimated end time for the initial copy phase for each volume.
Figure 9-20 shows that the first two volumes are completed, and the next two volumes are
in the copy phase. Based on the parameter (CONCurant=2), only two volumes run at a
time. However, this parameter can be dynamically changed in the monitor.
Figure 9-22 shows that all the copies for this session are complete. Remember that this
migration was a simple one involving only the four volumes. A typical migration scenario
usually involves many more volumes.
RO *ALL,D U,,,6000,4
IEE421I RO *ALL,D U,,,6000,4 978
MCECEBC RESPONSES ---------------------------------------------------
IEE457I 23.05.35 UNIT STATUS 977
UNIT TYPE STATUS VOLSER VOLSTATE
6000 3390 OFFLINE YY6000 /RSDNT
6001 3390 OFFLINE YY6001 /RSDNT
6002 3390 OFFLINE YY6002 /RSDNT
6003 3390 OFFLINE YY6003 /RSDNT
MZBCVS2 RESPONSES ---------------------------------------------------
IEE457I 23.05.35 UNIT STATUS 884
UNIT TYPE STATUS VOLSER VOLSTATE
6000 3390 OFFLINE YY6000 /RSDNT
6001 3390 OFFLINE YY6001 /RSDNT
6002 3390 OFFLINE YY6002 /RSDNT
6003 3390 OFFLINE YY6003 /RSDNT
******************************** BOTTOM OF DATA ********************************
Figure 9-24 shows the output of a query against the old target volumes (8100-8103).
These volumes are now the source volumes and online to the LPARs. The data migration
is complete.
Values set in the SYSOPTN batch job can be overridden within a session. The order of
overriding of options is as follows:
Session statement
Group statements
Migrate statements
In other words, values in the session statement override the installation defaults. Values in the
group statement override the session and installation defaults. And finally, the migrate
statements override all of the above.
For more information, see the TDMF Installation Manual, Chapter 2 “TDMF Control
Statements” at:
http://www-950.ibm.com/services/dms/en/support/tdmf/zos/v5/tibs530.html
If the Compare option or Full Speed Copy is requested, an additional 900K buffer for each
volume migration is allocated in ECSA. For more information about storage requirements, see
the TDMF Installation Manual, Chapter 2 “Storage Requirements” at:
http://www-950.ibm.com/services/dms/en/support/tdmf/zos/v5/tibs530.html
Important: If another session is submitted using the same COMMDS name while the first
session is running, unpredictable results occur.
If a problem occurs, the COMMDS is the primary tool for problem determination and
resolution. Although reuse of a COMMDS is normal, be aware that the previous session data
is no longer available. If there is a problem, do not reuse the COMMDS so it is available for
problem determination and resolution.
Tip: Use a new COMMDS for each session, so that if a problem occurs, all information is
available for analysis.
Member ALLOCCM in SAMPLIB allocates the COMMDS. This data set must be physically
located on a cylinder boundary with contiguous space. For an example, see Figure 9-15 on
page 412. Volumes containing a COMMDS cannot be moved by TDMF.
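The following JCL is a minimal sketch of such an allocation, not the actual ALLOCCM member.
The data set name, volume, and job card are assumptions, and any DCB attributes required by
TDMF are documented in the Installation Manual. A CYL allocation with CONTIG satisfies the
cylinder boundary and contiguous space requirements; the 105-cylinder size matches the
sizing example in the next section:
//ALLOCCM  JOB (ACCT),'ALLOC COMMDS',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEFBR14
//COMMDS   DD DSN=TDMF.SESSION1.COMMDS,DISP=(NEW,CATLG),
//            UNIT=3390,VOL=SER=WORK01,
//            SPACE=(CYL,(105),,CONTIG)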
COMMDS reserve
TDMF periodically issues a RESERVE macro for the COMMDS to serialize communication
between the master and agent systems. For details, see the TDMF Installation Manual,
Chapter 4 “Unicenter CA-MIM Resource Sharing or Global Resource Serialization.”
Sizing a COMMDS
To calculate the size of a COMMDS, use the following formula:
COMMDS CYLS = V * (S + K)
where V is a factor derived from the number of volumes in the session (see the TDMF
Installation Manual), S is the number of participating systems, and K is a factor determined by
the largest source device type involved, as follows:
For 3390-3, K = 4
For 3390-9, K = 6
For 3390-27, K = 15
For example, consider a system which contains 128 3390-3 and 128 3390-9 volumes across
8 LPARs. Using the largest device type in the session (and therefore K = 6), the size is
calculated as follows:
CYLS = 7.5 * (8 + 6) (always use the largest device type in session)
CYLS = 7.5 * 14
CYLS = 105 (round down if required)
Tip: GDG data sets are not preferred because they can be overwritten during the migration
process. Instead, use sequential data sets, which are the default.
For more details, see the TDMF Installation Manual, Chapter 3 “Placement of the
Communications data set.”
Since TDMF version 5.2.0, you can run an AGENT while the source and target device are
offline to the LPAR. Therefore, you do not have to split the volumes into groups where the
volumes have the same status to the LPARs. Some volumes (source and target) can be
offline in one LPAR and the session will not terminate. For more information, see the TDMF
Installation Manual.
The Unidentified Systems option can be used to verify that the correct number of agent
systems are being run. The default action is set when the SYSOPTN job is run, but can be
overridden using the options within the SESSION control statement. The options available
are:
Terminate on Error: This issues an error message and terminates the migration/replication
(RC08).
Warning: This issues a warning message, but the migration/replication continues (RC04).
This parameter works with only 3990-6 control units or later (2105/2107). If you are using
3990-3 (old HW), TDMF cannot check the attached systems.
Important: You are responsible for running the correct number of AGENTs.
For more details, see the TDMF Installation Manual, Chapter 2 “Session Options.” In addition,
a Technical Information Bulletin (TIB) is available at the following web address:
http://www-950.ibm.com/services/dms/en/support/tdmf/zos/v5/tibs530.html
The CHECK TARGET option ensures that TDMF does not overwrite data on a target device.
Selection of this option informs TDMF that only the VTOC, VTOCIX, and VVDS entries are
allowed on the target volume.
Tip: Use the CHECK TARGET option as default to protect your data. Set the CHECK
TARGET option in each session control statement.
For details, see the TDMF Installation Manual, Chapter 2 “Session Options.”
9.3.7 Pacing
Always use pacing so that application performance is not affected. Some volumes regularly
experience high I/O rates.
Pacing is active during the Copy and Refresh phases only. After the volumes or session enter
the Quiesce phase and continue through the Termination phase, pacing is not started. For
more details, see the TDMF Installation Manual, Chapter 1 “Major Phases of Migration.”
Tip: Generally, use the default pacing options. Move frequently updated volumes during a
low activity time period.
Reverse pacing
Reverse pacing starts at one track per I/O operation, and scales upwards depending on the
activity on the source volume. For more details, see the TDMF Installation Manual, Chapter 3
“Dynamic Volume Pacing.”
FastCopy
FastCopy instructs TDMF to copy only the allocated tracks and cylinders, ignoring empty
ones. This option can decrease the amount of time required to migrate a volume with no
performance impact. Use the FastCopy option at all times. For more details, see the TDMF
Installation Manual, Chapter 2 “Common Options Table.”
Active in Copy
The volume pairs in a migration are considered active from volume initialization to volume
termination. Active in Copy instructs TDMF to treat a volume as being active from volume
initialization through the completion of the first refresh pass. Use of this option limits the
number of active copy tasks but speeds the normal copy process. For more details, see the
TDMF Installation Manual, Chapter 2 “Session Options.”
Use this information during the planning, coding, and execution of TDMF sessions. This
information helps ensure that these sessions do not adversely affect application, array, and
subsystem performance.
When experimenting, do not migrate more than two volumes from a logical control unit (LCU).
Using monitors such as RMFMON, check if there is an impact to the system performance.
Also, check channel utilization to make sure that it is no more than 70-75% per channel path.
Change the number of concurrent running volumes based on the results using the TDMF
monitor.
For more details, see the TDMF Installation Manual, Chapter 3 “Raid Subsystems and Rank
Contention.”
In addition, some volume migrations require an application recycle to complete the migration.
For more information, see the TDMF Installation Manual, Chapter 4: “Planning
Considerations.”
Important: Do not move active Local Page data sets or active Sysplex Couple data sets.
2. Check whether the Local Page data sets are on separate volumes. If not, create
temporary Local Page data sets on different volumes (old storage subsystem).
3. Release all active Local Page data sets on the same volume using the PAGE DELETE
command. It takes time to release them from the system. Use the DISPLAY ASM
command to check on the progress.
4. When all Local Page data sets on the same volume are released from the system Page
Pool, migrate that volume.
5. After migrating the volume, use the PAGE ADD command to add Local Page data sets to
the System Page Pool again.
6. Repeat the preceding steps for all volumes where Local Page data sets are located. A sample command sequence is shown after this list.
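The following console sequence is a minimal sketch of steps 3 through 5. The page data set
name is hypothetical, and the exact command operands should be verified against the z/OS
system commands reference:
D ASM
PAGEDEL PAGE=(SYS1.LOCAL1.PAGE)
D ASM                              <-- verify the data set has left the page pool
... migrate the volume with TDMF ...
PAGEADD PAGE=(SYS1.LOCAL1.PAGE)
D ASM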
PLPA and COMMON Page data sets can be moved by TDMF, but must be moved separately.
Check whether these data sets are on separate volumes. If they are not, do not move these
volumes, because the default time-out interval of the timer accessing the Couple Data Set
(CDS) is 15 seconds.
Also check the volumes for Local Page data sets, then perform the following steps (a display
command for verification is shown after the list):
1. Switch the couple data set from primary to alternate:
setxcf couple,type=sysplex,pswitch
2. Migrate the volume (COUPL1) with the non-active old primary couple data set.
3. Add the old primary to the system as the new alternate couple data set:
setxcf couple,type=sysplex,acouple=SYS1.CEBCPLEX.XCF.CDS01
4. Switch the couple data set again:
setxcf couple,type=sysplex,pswitch
5. Migrate the volume (COUPL2) with the non-active old alternate couple data set.
6. Add the old alternate to the system as the new alternate again:
setxcf couple,type=sysplex,acouple=SYS1.CEBCPLEX.XCF.CDS02
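Between the steps, you can verify which couple data sets are currently primary and alternate
by using the display command (the output layout depends on the z/OS level):
d xcf,couple,type=sysplex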
Sample job naming conventions are shown in Example 9-3. In a migration service, be aware
that you can run several jobs in parallel. A good idea is to reflect the type, LPAR name, and
the session number in the job name.
Important: Do not forget that AGENTs must run on all LPARs. Also, make sure that the
volumes are accessed and online.
Pacing
Standard I/O pacing is 15 tracks per I/O operation. TDMF samples device activity every 30
seconds during the copy and refresh phases. If device or channel path activity affects the
production environment, TDMF dynamically adjusts the number of tracks read/written in a
single I/O operation. You can also use reverse pacing, which is designed to be used when
moving a volume with high activity. This option starts at one track per I/O operation and, if
activity allows, increases the number of tracks per I/O dynamically.
User-specified pacing allows you to determine the number of tracks read/written in a single
I/O operation. The values allowed are five, three, and one track per I/O. This value might or
might not be static depending on whether pacing is specified.
To speed up the migration, you can select Full Speed Copy. This option allows the system to
use two buffers, increasing speed by around 45% on normal systems. See the Migration
Recommendations section of the TDMF Installation Manual for more detail.
You can also adjust the number of concurrent volumes. The current release of TDMF allows
up to 512 volumes to be coded per session. However, the number of volumes multiplied by
number of LPARs must not exceed 2048. Set this with the Number of concurrent volumes
option. Another option that can be used with the number of concurrent volumes is
Active-in-Copy. This option saves time in the overall session.
The number of concurrent running volumes can be changed dynamically using TDMF monitor
item 2 as shown in Figure 9-28. Select the field and change it, then press Enter to activate the
change. If you reduce the number, the running volumes continue copying until the end of the
volume. New volumes are started only when the number of running volumes falls below the
newly defined value.
Figure 9-28 shows the monitor panel, with columns for the requested action, volume serial
number, device number, group name, migration status and type, error information (system
and message), and synchronization goal.
Number of sessions
There is no limit to the number of TDMF sessions that can be started. The determining factor
is the number of available resources each LPAR has that is involved in the migration.
Channel paths
TDMF interacts with the Input/Output Supervisor, which manages the transport mechanisms,
or channel paths. Various channel path types are in use today, from older technologies such as
parallel channels to newer ones such as FICON. Each type of channel has a specific limit as to
the amount of data that can be transported.
In summary
The best migration practice is to plan. Every migration initiative, especially large migrations,
should be 90% planning and 10% implementation. Understanding the production
environment and when the heaviest loads occur is part of planning, along with the impact of
old and new technology. Remember, data can be moved only as fast as the slowest device or
slowest channel allows.
If a TDMF session still hangs after MVS Cancel, use Display GRS,C to determine whether
canceling another job might break the deadlock. When a TDMF job is canceled, the program
signals the other systems in the session to terminate the active volumes and clean up
resources. Canceling for a second time, or using the MVS Force command, bypasses this
recovery and causes the TDMF RTM Resource Manager routine to receive control. Canceling
this way might result in migration volumes being varied offline and boxed as part of cleanup. If
you issue a Force or second Cancel command, run the TDMFCLUP program as described in
the chapter “Batch Utilities” of the TDMF Installation Manual.
Important: During an error, avoid issuing a CANCEL command using SDSF against
TDMF master or agents. Instead, use the TDMF monitor option 2 (interaction with TDMF)
to terminate a migration session.
This chapter outlines the steps needed to conduct a successful data migration project using
zDMF. It also describes the standards and provides guidelines when using zDMF for a data
migration project.
zDMF is a data set level migration tool that updates the information in the ICF catalog and
interacts with SMS. Therefore, there are sections that address its role in the migration
process, detailed explanations of the environmental requirements, and reference materials.
zDMF moves data sets without disrupting access to applications during the migration. zDMF
also provides the following advantages:
Makes it easy to combine smaller capacity volumes to better use storage
Helps to increase available system resources
Supports implementation of tiered storage
Improves application performance for service level compliance
Maintains application availability during data set level migrations
Allows continued growth of important or new applications
Reduces storage platform total cost of ownership (TCO)
In addition, allocated data sets can be migrated with a dynamic swap of metadata. Table 10-1
shows the capabilities of zDMF.
Flexible migration options: Provides control over the I/O rate for data migration reads and
writes, which allows online migration activity while maintaining optimal application
performance and service levels.
Data set grouping: Allows a group of data sets to be collectively migrated, which provides
easy management of large migrations.
Completion phase (post-migration): Although the metadata has been modified, applications
that were active before diversion continue to have their I/O redirected until they de-allocate
the data set. An application bounce might be required in a scheduled window.
Important: The planning phase is the most vital phase for a successful migration.
This section outlines the generic steps, personnel functions, and tasks that pertain to the
planning phase. The high-level process consists of the following steps:
1. Establish a migration management team that consists of:
– Primary Migration Manager
– Alternate Migration Manager
– Account Coordinator
– Security Coordinator (if required for sensitive data)
– Technical Lead Coordinator
2. Announce the migration with a notification that aligns with the application requirements.
3. Size the target configuration to match the space requirements of the source configuration.
6. Create and relate the following aliases to the preceding target catalog (a sample IDCAMS
sketch follows this list): Define HLQs (high-level qualifiers) that are not currently used, such
as X001, X002, and X003.
7. Create or add the new target volume SMS Storage Group Configuration. The new storage
capacity must be the same or greater than the source storage. Consideration must also be
given to any multivolume data sets.
8. Add these volumes to the appropriate storage group in ENABLE status as soon as they are
available.
9. Create a list of the source volumes that contain data to be moved and the associated
target volumes or storage group.
10.Change the status of the source volumes to DISNEW to accommodate SMS redirection.
11.Establish a naming standard for the new target data sets (work data sets) based on the
alias created in step 6.
12.Identify the time frames that you want to move between the phases. Consider DSN and
Group activity, and fallback requirements when scheduling. In particular, schedule around
heavy I/O loads like batch work or db2 reorganizations during the migration phase.
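The aliases described in step 6 can be defined with IDCAMS. The following is a minimal
sketch; the target user catalog name is hypothetical, and the HLQs are the examples given in
step 6:
//DEFALIAS EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE ALIAS(NAME(X001) RELATE(CATALOG.ZDMF.TARGET))
  DEFINE ALIAS(NAME(X002) RELATE(CATALOG.ZDMF.TARGET))
  DEFINE ALIAS(NAME(X003) RELATE(CATALOG.ZDMF.TARGET))
/*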
Remember: Some of these steps might not be applicable because of your internal policies
on data migration.
DFSMS overview
Data Facility Storage Management Subsystem (DFSMS), or SMS in short, is a software suite
that automatically manages data from creation to expiration. DFSMS provides the following
functions:
Allocation control for availability and performance
Backup/restore
Disaster recovery services
Space management
Tape management
DFSMS consists of DFSMSdfp (an element of z/OS) and DFSMSdss, DFSMShsm, and
DFSMSrmm (features of z/OS).
Because DFSMS manages data creation, you can control the volume selection of these data
sets. Use ISMF or a VARY command to establish criteria to eliminate allocation of new data
sets on any volume in an SMS subsystem. This status is known as DISNEW, and disallows all
new allocations.
When starting a migration that includes moving all the data sets that populate a volume, the
source volumes must be placed in a DISNEW status. However, verify using ISMF that the
storage pool containing the volumes has enough free space so the restriction can be enabled.
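For example, using the volume and storage group names from the scenario later in this
chapter, a console command of the following form sets the status. The DISABLE,NEW
operand corresponds to the DISNEW status shown in ISMF; verify the exact operand form for
your z/OS level:
VARY SMS,VOLUME(RS6030,SG3390X1),DISABLE,NEW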
Table 10-3 shows the components that address data set allocation.
Storage class: The performance level required for the data set, in direct alignment with a
specific service level agreement (SLA).
Management class: Addresses the appropriate backup and migration cycle of the data set.
Storage group: A volume or group of volumes designated for specific data sets based on
predetermined criteria such as size, use, or type.
Because SMS Data Redirection is done on a data set level, zDMF is a good solution for data
sets that are not deleted on a regular basis. The first phase when migrating all the data sets in
an SMS environment is to have SMS move that data using SMS Data Redirection. zDMF
enables a scheduled data set movement process, which takes less time. It also allows you to
use performance enhancements from the new device for those data sets not moved using
SMS.
When a new device type is replacing an old one, data sets must be moved from the old device
to the new one. HSM allows you to perform such conversions of a volume with a single
command. The command to convert a level 0 volume migrates all the data sets not in use on
the source volume to migration level 1, then recalls them. This command allows you to clear a
DASD migration volume so it can be replaced with a different type of storage device.
For example, to remove volume SG2002 from the system and replace it with a new volume,
you must perform the following steps:
1. Prevent new allocations to volume SG2002.
To prevent new allocations to the volume, change the status of the volume in the storage
group. For example, you can establish the volume status in the storage group definition:
STORAGE GROUP NAME: STGGP2
VOLUME SERIAL NUMBER: SG2002
SMS VOL NAME STATUS
SG2002 ===> DISNEW
2. Move the non-allocated data sets that DFSMShsm can process off the volume by using the
following HSM command:
MIGRATE VOLUME(SG2002 MIGRATE(0)) CONVERT
DFSMShsm migrates all of the non-allocated (NOT IN USE) data sets that it can process
off volume SG2002 and recalls them to other SMS-managed volumes. The DFSMSdfp
automatic class selection (ACS) routine selects the volume for data set recall.
3. Remove the allocated data sets that DFSMShsm cannot process from the volume. To
move any allocated data sets that DFSMShsm cannot process, use zDMF.
Note: Data Set Services (DFSMSdss) can also be used in this phase instead of HSM.
However, unlike DFSMSdss, HSM does not require the creation of job control language (JCL).
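If DFSMSdss is used instead, the job might look like the following sketch, which logically
copies the data sets from volume SG2002, deletes the sources, and lets the ACS routines
select the target volumes. The keywords shown are a sketch only and should be verified
against the DFSMSdss reference:
//MOVEDS   EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY DATASET(INCLUDE(**)) -
       LOGINDYNAM((SG2002)) -
       DELETE CATALOG
/*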
Standard utilities
A successful migration requires a combination of various utilities that complement each other
and enhance the migration process. If there is a window of opportunity to accommodate this
migration, any data set migration utility, such as DFSMSdss, can be scheduled to be run.
Tip: zDMF can be used to move any data sets that are not included in the documented
restriction list. The phased approach option can be used if needed due to user
requirements.
The data set selection classifications can be used as selection criteria for zDMF candidates
and to identify data sets to be excluded (for example, WILDCARD selection).
The selection criteria are a key factor for data set level migration and must be included in the
migration planning process. Data set selection criteria can differ for each subsystem, which
might require use of reporting utilities.
The following are also important factors in zDMF data set selection planning:
Review the GDG data sets for the delete activity cycle. If the cycle is shorter than the
zDMF migration cycle, allow them to relocate themselves through their normal create and
delete activity.
Change non-SMS-managed volumes that are mounted with a STORAGE status to
PRIVATE.
The storage subsystem must have sufficient space to accommodate normal processing.
This might not be the case for data set relocation or workload balancing.
Identify data sets with primary space 0 (SPACE=(CYL,0,100)) because they will not be
migrated.
The security profile associated with a particular data set might prevent you from moving
the data set.
When moving a data set for performance reasons, be sure that the new target volume
does not already have any performance constraints.
When doing volume consolidation, be sure that you are not selecting multiple source
volumes that have performance constraints.
Try to schedule the completion phase to take place as part of a pre-planned outage.
Identify data sets that have a specific volser dependency. Moving data sets that have a
volser dependency without including the new volser can cause a problem.
Identify all the volumes associated with a multi-volume data set. The same number of
target volumes are required.
Use the Migration Planning Checklist to establish tasks, assignments, and status.
Restrictions
The following data sets are not supported by zDMF:
Data sets cataloged in the master catalog or on any system resident volumes should not
be included in a migration.
Data sets used by the operating system, such as LINKLIST, APF-authorized, page, JES2,
and JES3 data sets. These data sets are not supported because they might not divert properly,
causing serious system outages.
Virtual Storage Access Method (VSAM) with IMBED, KEYRANGE, and Replicate
parameters. For more information, see the Installation and Reference manual at:
http://www-950.ibm.com/services/dms/en/support/tdmf/zos/v5/tibs530.html
Catalogs.
ISAM.
Individual PDS members.
HFS data sets.
Page and swap data sets.
Temporary (&&) data sets.
Undefined data sets.
Data sets allocated with no primary space or utilized space.
Volume-specific data sets, including volume table of contents (VTOC), VSAM volume data
set (VVDS), and volume table of contents index (VTOCIX).
The following data sets are flagged and do not go into Diversion phase until all associated
applications are terminated on all participating systems. For more information, see the
“Migration scheduling techniques” section in the Installation and Reference Manual.
VSAM record level sharing data sets
BDAM data sets that require absolute track
With zDMF, you can respond immediately to a storage-related issue. In addition, you can
manage your migration so that you maintain data set level mirroring until one of the following
phases:
The Diversion and Completion phases in the same window
The Completion phase at a later time after the Diversion phase
These phases are known as Extended Mirrored phase and Non-Extended Mirrored phase.
Additionally, zDMF can suspend and resume a migration using the ISPF interface. For more
information, see the zDMF Installation Manual.
Staying in the Non-Extended Mirrored phase can be used to accommodate the following
needs:
Migrations that require immediate relocation of a data set or data sets for performance
improvements.
Tip: When moving a data set for performance reasons, be sure that the new target
volume does not already have a performance problem.
The new subsystem was already installed. In this scenario, the old subsystem had to be removed
within 30 days of the arrival of the new subsystem to avoid costly storage overlap. It was
quickly decided that conventional disk-to-disk copy was not feasible because of the long
application outage window required. Also, the hardware migration utility of the vendor was not
usable because of compatibility issues going from ESCON to FICON. Ultimately, a
combination of host-based utilities and products was chosen to perform the migration.
Because the environment consisted of 80-90% SMS, the operations staff felt they could take
advantage of SMS redirection and the following functions:
DFDSS to copy non-allocated files
TDMF for volume level migrations
zDMF for allocated files
Because the source VOLID was being copied to the target volume, no SMS changes were
needed. Changes were unnecessary because the volume was already part of the wanted
SMS rules. Also, the VOLID naming standard is propagated to the new volume.
The TDMF migration was set up to dynamically start ICKDSF to reformat and expand the
VTOC of a volume. This function is performed when the source VTOC characteristics do not
match that of the target device.
Execute this step if you have the correct settings for your VTOC size when the volume is moved
to the new target volume, or if TDMF (ICKDSF) can extend all volumes to a correct VTOC size.
Use the ICKDSF EXTVTOC option only if migrating from one size device to another, or if a
volume was migrated and no REFVTOC was performed. Only indexed VTOCs are extended.
Non-indexed VTOCs, including volumes with damaged indexes, are allowed but are not extended.
You can specify the number of tracks that you want the VTOC to be, or you can allow TDMF
to use its own algorithm, as follows. The minimum size of the new VTOC is the greater of the
current VTOC size on the source and target. The VTOC can be extended further depending
on the number of data sets on the volume:
If the volume is less than half full, the VTOC is extended to contain the current number of
data sets multiplied by the ratio of target to source device size, plus 25%.
If the volume is more than half full, the VTOC is expanded to handle the situation where
the target volume is full of data sets with the same average size.
Important: Resizing works only if free space is available directly behind the VTOC. If there
is no free space, the VTOC cannot be resized, but the new number of cylinders is updated in
the VTOC.
Operations staff could then go on to their other migration tasks and allow the normal SMS
routines to move the files. However, they did need to revisit this task to determine which files
SMS did not migrate.
HSM can also be used to migrate those data sets that are not in use using the HSM
command MIGRATE VOLUME(SG2002 MIGRATE(0)) CONVERT.
Tip: Using HSM would eliminate the creation of JCL to accommodate the migration.
The files that remained were files that had not been through a processing cycle. These
included the following kinds of files:
Weekly job runs
Files that had been created a long time ago and never got deleted
Files that were constantly in use such as the DB2 and CICS files
Consideration was given to using DFDSS to migrate the remaining files not in allocation. It
was determined that, by using zDMF, operations staff could accomplish the same thing and also
handle data sets that were in use at the time. In addition, zDMF provided the flexibility of
allowing data sets to go through allocation during this process. DFDSS, however, would have
periodic allocation issues as production continued to run.
After zDMF completed all data set migrations, the Storage Migration project to the new
technology was complete.
For performance reasons, do not place hlq.HGZD325.GZDLLIB library in the system link list
(LNKLSTxx). If the library is placed in the system link list (LNKLSTxx), remove the STEPLIB
DD statement from the JCL before running zDMF. If you place hlq.HGZD325.GZDLLIB in
PROGxx or IEAAPFxx of SYS1.PARMLIB, use the STEPLIB DD statement.
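If the library is APF-authorized through PROGxx, the entry is of the following form; the volume
serial is an assumption, and SMS can be specified instead of VOLUME for an SMS-managed
library:
APF ADD DSNAME(hlq.HGZD325.GZDLLIB) VOLUME(ZDMF01)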
The zDMF server must always be active and able to immediately process requests. zDMF
requires a high execution priority, and should be put in a response-oriented performance
group. The zDMF server started task JCL specifies the configuration PDS member that the
server uses.
Generally, do not change the default values in the CONFIG member. However, you might be
directed to do so by support in some special cases.
zDMF parameters
The hlq.HGZD325.SAMPLIB(CONFIG) member has the following parameters:
MAXIO: MAXIO determines the maximum overall number of I/O requests that can be
active at one time on the server. Because I/O buffers and control areas are allocated
based on the MAXIO value, select a number appropriate to the resources available.
Approximately 1 MB of memory is allocated for each integer you add to the MAXIO
specification. For example, if you select a MAXIO value of 25, approximately 25 MB is
allocated. The memory allocation is fixed during active I/O.
MAXIO=number
Where number is the maximum number of Overall Requests that can be active at one time
on the server. The minimum value is 0. If you select 0 or do not specify a MAXIO value, the
number of Overall Requests defaults to 25.
– Default: MAXIO=25
MAX_CHANNEL_IO: Specifies the maximum number of concurrent I/O requests that can
be issued to a channel during the Copy phase or Mirror Synchronization phase. The
MAX_CHANNEL_IO limit applies to any active I/O against a channel group, whether read
or write. If the source and target devices are on the same channel group, the
MAX_CHANNEL_IO value limits the total concurrent I/O requests on the channel.
MAX_CHANNEL_IO=Requests
Where Requests is the maximum number of concurrent I/O requests. The smallest value
is 0. There is no theoretical maximum value. However, the largest practical value is the
current MAXIO value.
– Default: MAX_CHANNEL_IO=15
MAX_DEVICE_IO: Specifies the maximum number of concurrent I/O requests that can be
issued to devices containing a data set migration pair. This limitation applies during the
Copy and Mirror Synchronization phases.
MAX_DEVICE_IO=Requests
Where Requests is the maximum number of concurrent I/O requests. A value of 1 to 5 can
be specified.
– Default: MAX_DEVICE_IO=3
MAXTRK: This optional parameter specifies the size, in tracks, of the I/O operations that
zDMF copy operations use to transfer less than a full cylinder (one extent) of data. The
MAXTRK value is used to reduce the application response time impact of zDMF Copy
operations immediately following activation. For example, because a 3390 cylinder contains
15 tracks, MAXTRK=5 causes zDMF to move one extent in three I/O operations. Splitting the
extent reduces the time the device is unavailable to application I/O operations into three
short windows.
MAXTRK=n
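A minimal sketch of a CONFIG member that simply restates the defaults described above,
with MAXTRK set to the value used in the example:
MAXIO=25
MAX_CHANNEL_IO=15
MAX_DEVICE_IO=3
MAXTRK=5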
Tip: Initially run in TERMINATE mode to verify correct operations and build an
IGNORE_SYSTEMS list if needed.
This parameter works with 3990-6 control units or later (2105/2107) only. TDMF cannot check
attached systems with 3990-3 (old hardware).
Important: You are responsible for running the correct number of Started Tasks.
Application performance impact can be reduced by use of the MAXTRK zDMF server
parameter.
The MAXTRK parameter causes zDMF I/O operations to be performed using less than one
cylinder. For more information, see the “Configuring the zDMF Server Parameters” in the
zDMF Installation and Reference Guide.
Note: At least 0.85 MB is resident in fixed ECSA until the next IPL for all components,
except for the PC routine and possibly the storage pools.
Security
Protect zDMF using IBM RACF®, ACF2, or other security utilities to prevent use by
unauthorized personnel. Use of zDMF by unauthorized personnel might result in the
inappropriate transfer of a data set out of a specified isolated environment.
Volume checklist
Be sure that the TARGET configuration is added to the appropriate storage group and all the
volumes remain in DISALL status until you are ready to start the migration. The volume status
then changes to ENABLE.
Be sure that all of the TARGET volumes meet the following requirements:
Are initialized for SMS (ICKDSF STORAGEGROUP)
Contain a configuration large enough for a VVDS
Contain an INDEX VTOC
Because the INDEX VTOC is required on an SMS-managed volume, ISMF can be used to
identify a disabled or broken INDEX VTOC.
Be sure that all the SOURCE volumes are in DISNEW status. This includes volumes with a
similar capacity configuration even if they do not contain data sets that are scheduled to be
moved. An example is all mod 3 in the same STG GRP. In an SMS configuration, this status
prevents any movement back to a DEVICE TYPE that is scheduled for removal. If DISNEW
status was enabled using an SMS VARY command, the next ACS TRANSLATE and
VALIDATE can reverse that status.
Tip: SNAP requires an INDEX VTOC on the volume. Also, the VTOCIX must match the
volser or the migration fails.
zDMF can move any data sets that conform to supported data set lists and other restrictions.
However, consider additional application restrictions and consult with the person responsible
for the application. Also, keep in mind that each installation, although apparently similar, can
have subtle differences that might jeopardize the migration.
10.2.8 Post-migration
The creation of a separate user catalog for the zDMF target data sets expedites the
post-migration cleanup process. Additionally, the renamed source data set provides content
verification that you can use for a post-migration audit trail. The following are the steps for
various post-migration scenarios:
For a data set migration with post-migration verification, perform the following steps:
List the original zDMF candidate data sets using any listing utility and the same zDMF
selection criteria.
If the volume contained data sets that are either restricted or invalid, they must be factored
into the original data set count.
Based on the zDMF selection criteria, list the new data sets using IDCAMS.
Compare the data sets. Because both the target and source data set configurations
consist of cataloged data sets, the contents of either can also be verified.
When a migration group is deleted, the zDMF server performs the following tasks:
Ensures that the migration group is in a state that allows it to be deleted.
Removes the migration group from internal z/OS storage (memory). Removing the
group frees valuable system memory.
Deletes the migration group from the zDMF database.
The relationship between the BCS and the VVDS is many-to-many. A BCS can point to
multiple VVDSs, and a VVDS can point to multiple BCSs.
The records in both the VVDS and BCS are made up of variable-length cells and subcells.
The two cell types that are often referred to are the VSAM volume record (VVR) and the
non-VSAM volume record (NVR), which are both held in the VVDS. The VVR contains
information relating to VSAM data sets. The NVR contains information relating to non-VSAM
SMS-managed data sets.
Most data sets have entries in only one VVDS. However, multivolume data sets have entries
in the VVDS of each volume they are allocated to (type Q for a secondary record).
SMS class information is recorded in both the BCS and the VVDS for SMS-managed data sets
(VSAM and non-VSAM); it does not apply to non-SMS-managed data sets.
Because the zDMF data movement process includes catalog alteration, you must start with a
functional catalog. In addition, verify that the candidate data sets are correctly cataloged on
the system or systems to which zDMF requires access. Use SSMzOS or IDCAMS to perform
the required diagnostic tests.
BCS/VVDS/VTOC synchronization
An SSMzOS catalog diagnostic checks the following items:
Data structure
Index integrity
BCS/VVDS/VTOC synchronization
The catalog diagnostic tests can also generate IDCAMS control cards to fix the errors
detected. Specifically, the IDCAMS control cards for DEFINE RECATALOG can be generated
to recreate missing VSAM and non-VSAM catalog entries. The catalog commands and their
actions are shown in Table 10-6.
Additionally, there are diagnostic routines to detect and remove superfluous catalog entries.
These diagnostic routines can be directed to your entire catalog environment, or assigned to
a specific catalog, volume, or object (catalog, VVDS, or volume). Specific objects are selected
using keywords that allow specific and generic parameters. When SSMzOS catalog-related
functions are run on a scheduled basis, these steps establish consistent catalog
synchronization. This synchronization is important when running post-catalog recovery
diagnostic tests and explaining erroneous entries in a BCS or a VVDS.
DELETE NOSCRATCH: Removes only BCS entries; the VVDS and VTOC are not affected.
DELETE VVR or DELETE NVR: Removes an unrelated VVR or NVR, and also removes the
VTOC entry, if present.
Sample JCL
This section lists sample JCLs.
The EXAMINE command is used to analyze the index component of an integrated catalog
facility catalog. Its parameters are:
NAME: Specifies the catalog name and its master password. The catalog must be
connected to the master catalog.
INDEXTEST: Specified by default.
ERRORLIMIT(0): Suppresses the printing of detailed error messages.
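The following is a minimal job sketch for running EXAMINE; the catalog name is hypothetical:
//EXAMINE  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  EXAMINE NAME(CATALOG.ZDMF.TARGET) INDEXTEST ERRORLIMIT(0)
/*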
IDCAMS control statements can also be used for VVDS and BCS diagnostics, for example, to
compare the forward pointers from the BCS to the VVDS.
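The original sample statements are not reproduced here; the following is a sketch using
hypothetical catalog and VVDS names, to be verified against the IDCAMS documentation:
  DIAGNOSE ICFCATALOG -
           INDATASET(CATALOG.ZDMF.TARGET) -
           COMPAREDS(SYS1.VVDS.VZD850A)
  DIAGNOSE VVDS -
           INDATASET(SYS1.VVDS.VZD850A) -
           COMPAREDS(CATALOG.ZDMF.TARGET)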
Because global resource serialization maintains information about global resources in system
storage, data integrity is maintained during a system reset while a reserve exists. A global
resource serialization complex also allows serialization of shared logical resources. These
resources do not have to be directly associated with a data set or DASD volumes. An ENQ
with a scope of SYSTEMS can be used to synchronize processing in multisystem
applications. Combining the systems that access shared resources into a global resource
serialization complex can solve the problems related to using the Reserve macro.
How global resource serialization works depends on the z/OS operating system level. It also
depends on whether the cross-system coupling facility (XCF) environment the systems are
running in is sysplex or non-sysplex.
The global resource serialization complex consists of every system that indicates, at IPL time,
that it is to be part of the complex. This complex is not affected by the physical configuration
of systems and links. The systems in the complex might not all be actively using global
resource serialization at any particular time. Those systems that are actively using global
resource serialization to serialize access to global resources make up the global resource
serialization ring.
Figure 10-2 shows a global resource serialization complex of four systems (System 1 through
System 4).
A sysplex requires full connectivity between systems. Therefore, when the sysplex and the
complex are the same, the complex has full connectivity. Although a mixed complex might not
be fully connected, a fully connected complex allows the systems to build the ring in any
order. It also allows any system to withdraw from the ring without affecting the other systems.
The complex offers more options for recovery if a failure disrupts ring processing.
For example, if system System1 in Figure 10-2 were to fail and end its active participation in
serializing access to global resources, it would still be part of the complex. However, it would
not be part of the ring.
A related figure shows three systems (System 1 through System 3) sharing DASD and
connected through a coupling facility (XCF).
XCF requires a DASD data set, called a sysplex couple data set. This data set is shared by all
systems. An alternate data set can be used to facilitate migration from a ring to a star
complex. On the sysplex couple data set, z/OS stores information related to the sysplex
systems, and XCF groups such as global resource serialization. The following policies are
used to manage global resource serialization:
Coupling facility resource management (CFRM) policy, which is required, defines how
MVS manages Coupling Facility resources.
Sysplex failure management (SFM) policy, which is optional, defines how MVS is to
manage the system. It signals connectivity failures and IBM PR/SM™ reconfiguration
actions. Generally, use the SFM policy.
Tip: XCF is a component of the z/OS operating system that supports cooperation between
authorized programs running in a sysplex.
GRS is useful in a shared storage environment in which multiple z/OS images are using the
same storage resources. To control use of a resource, the z/OS operating system issues a
hardware reserve to the physical device. This reserve ensures that no other z/OS image in
the shared storage environment can access that device so long as the reserve is still held.
To improve system performance and throughput, products that convert hardware reserves to
globally propagated ENQ requests were developed. An example is the Multi-Image Manager
by Computer Associates. These products communicate with all systems in the shared
storage complex. They ensure exclusive usage by issuing a software enqueue on all systems
in the shared storage environment.
When using zDMF to migrate data, the following protections are provided while converting
hardware reserves to globally propagated ENQ requests:
Ensuring the data integrity of data for data sets being migrated
Avoiding deadly embraces in multisystem environments
After entering the diversion phase, applications that remain active after diversion will continue
to serialize usage of the resource as though it were still on the source volume. Applications starting
after diversion are allocated directly to the new target location and do not require any I/O
diversion activity by zDMF. These applications do not require alteration because all catalog
and volume level metadata are modified to reflect the new location of the data set.
When this scenario occurs, an exposure exists with resource serialization for the data set.
The following is an example of how this scenario can occur:
MVSA reserves Volume A for data set DS1 which is in diversion, reads data, and prepares
to write it back. The data read occurs on Volume B because the migration is in the
Diversion phase.
On MVSB, a new application starts and reserves volume B. The application reads data
from data set DS1, modifies it, and writes it back to data set DS1.
In this scenario, a classic data integrity exposure occurs. The data is overwritten without any
indication that this has occurred.
If the reserve is converted to a global enqueue, the resource is protected by name and
remains secure. This protection remains as long as all applications protecting any associated
resource use the same name and do not rely on the actual device reserve for serialization. An
example is when other data sets are thought to be on the same volume or any logical
resource. If the reserve is not converted, the benefit of reserve protection is lost between pre-
and post-diversion applications.
Data sets grow and shrink with usage. As they do, the VTOC and VVDS must be updated to
reflect the space usage on that volume. zDMF mirrors all activity for a data set on a source
volume to a target volume. When extent changes occur, zDMF must ensure that similar
changes occur on the target volume. For example, if DS1 (data set 1) on Volume A adds an
extent, zDMF must also add an extent on Volume B for the mirrored data set.
If hardware reserves are not converted for the VTOC and VVDS, hardware reserves must be
issued to both physical volumes. This can result in a deadly embrace.
To avoid this situation, these resources must be converted from hardware reserves to ENQ
requests. This conversion allows special handling code in zDMF to prevent multiple systems
from trying to exclusively acquire the same volume resources.
If these resources are not converted from hardware to ENQ requests, you might have to
cancel address spaces to eliminate the deadly embrace.
To convert the reserves, use products such as CA-MIM or GRS by IBM. zDMF provides a
stand-alone utility called Reserve Monitor. This utility detects hardware reserves and
determines whether they are currently being converted to globally propagated ENQ requests.
For more information, see “Reserve Monitor” in the zDMF installation manual.
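When GRS is used, the conversion is controlled through RNL definitions in the GRSRNLxx
parmlib member. The following entries are a sketch only; the qnames shown (SYSVTOC for
the VTOC and SYSZVVDS for the VVDS) are common examples and must be verified against
the zDMF manual and your installation serialization standards:
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSVTOC)
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSZVVDS)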
The test environment: a DB2 and TSO workload runs on system MCECEBC, monitored
through the zDMF Monitor, with a zDMF communication database. Source devices 6030-603F
(VOLIDs RS6030 through RS603F, 3390-3, SMS status DISNEW) in storage group SG3390X1
are migrated to target devices 850A-850F (VOLIDs ZD850A through ZD850F, 3390-9, SMS
status ENABLE).
zDMF was already installed and active as shown in Figure 10-5. For performance reasons,
the zDMF communication data base and the user catalog for zDMF work data sets were on
NONSMS managed volumes.
Operating Environment
IBM zDMF
zDMF VERSION : 3.2.5
zDMF COMMAND PREFIX (CPFX) : ZD
zDMF DATA BASE DATA SET NAME : SYS1.HGZD325.DB <-- DB name
zDMF DATA BASE VOLUME : ZDMFDB <-- DB vol
zDMF DB SYSTEM AUTHORIZATION : ALTER
zDMF SIMULATE DATA SET NAME : SYS1.HGZD325.SIMULATE.REPORTS
zDMF SIMULATE VOLUME : VSLNSM
zDMF SIM SYSTEM AUTHORIZATION : ALTER
SMF INTERVAL (# of minutes) : 5
SMF TYPE :
SMF SUBTYPES INCLUDED :
UNIDENTIFED SYSTEMS ACTION : IGNORE
IGNORED SYSTEMS : NONE DEFINED
USER ID : STXXXXX
CPU ID : 0004423A 2064
SCP NAME : SP7.1.2
SCP FMID : HBB7770
ETR ID : 00
LOCAL TIME : 07/09/2011 00:10:45.09
GMT TIME : 07/08/2011 21:59:40.67
LOCAL OFFSET : +02:11:04
LEAP SECONDS : +000
Figure 10-6 zDMF Monitor, Item 7
You can also check the database and user catalog with TSO 3.4 with the High Level Qualifier
(HLQ) of “CATALOG.ZDM*” (Figure 10-7).
VOLUME DEV DEV MOUNT FREE SPACE ! LARGEST FREE ! PCT FRAG
SERIAL TYPE NUM CHAR CYLS TRKS EXTS ! CYLS TRKS PCT ! FREE INDEX
****** ------ ---- ----- ----- ---- ---- ! ---- ---- ---- ! ---- -----
RS603A 33903 603A SMS X 1020 8 1 ! 1020 8 100% ! 30% .000
RS603B 33903 603B SMS X 1462 14 1 ! 1462 14 100% ! 43% .000
RS603C 33903 603C SMS X 1616 11 1 ! 1616 11 100% ! 48% .000
RS603D 33903 603D SMS X 568 7 1 ! 568 7 100% ! 17% .000
RS603E 33903 603E SMS X 1324 10 1 ! 1324 10 100% ! 39% .000
RS603F 33903 603F SMS X 996 6 1 ! 996 6 100% ! 29% .000
RS6030 33903 6030 SMS X 841 14 1 ! 841 14 100% ! 25% .000
RS6031 33903 6031 SMS X 696 6 2 ! 696 0 99% ! 20% .000
RS6032 33903 6032 SMS X 1764 10 1 ! 1764 10 100% ! 52% .000
RS6033 33903 6033 SMS X 1460 3 1 ! 1460 3 100% ! 43% .000
RS6034 33903 6034 SMS X 1002 4 1 ! 1002 4 100% ! 30% .000
RS6035 33903 6035 SMS X 697 12 1 ! 697 12 100% ! 20% .000
RS6036 33903 6036 SMS X 1762 14 1 ! 1762 14 100% ! 52% .000
RS6037 33903 6037 SMS X 1168 7 1 ! 1168 7 100% ! 35% .000
RS6038 33903 6038 SMS X 302 9 3 ! 297 0 98% ! 9% .013
RS6039 33903 6039 SMS X 704 5 1 ! 704 5 100% ! 21% .000
ZD850A 33909 850A SMS X 9974 5 3 ! 9822 0 98% ! 99% .007
ZD850B 33909 850B SMS X 9974 5 3 ! 9822 0 98% ! 99% .007
ZD850C 33909 850C SMS X 9974 5 3 ! 9822 0 98% ! 99% .007
ZD850D 33909 850D SMS X 9974 5 3 ! 9822 0 98% ! 99% .007
ZD850E 33909 850E SMS X 9974 5 3 ! 9822 0 98% ! 99% .007
ZD850F 33909 850F SMS X 9974 5 3 ! 9822 0 98% ! 99% .007
Figure 10-9 Space Information for Storage Group “SG3390X1”
DSNCAT.DSNDBD.DBX1ST01.*
The following zDMF parameters were used in this test example:
DELETE_EXISTING_TARGET_DATASETS (NO): Do not delete existing work data sets
with the same name.
EARLY_DATA_SET_COMPLETION (YES): Divert and complete all non-allocated data
sets as soon as MIRROR status is reached.
ALLOCSEQ (NONE): Move the data sets in the order in which they appear in the list rather
than sorting them by size.
Each record was updated by two different systems (MCECEBC and MZBCVS2). The check
was done from system MCECEBC.
:
-- SYSTEM MCECEBC
-- DB2 SYSTEM NAME DB1A
-- DB2 TABLE SPACE TSA010F
-- DB2 TABLE TA010F
-- NUMBER OF RECORDS IN TABLE 25000
-- CHECK ALL RECORDS WHERE FIELD CHANGED1 = YES
-- DISPLAY NUMBER OF UPDATED RECORDS (FIRST 10)
---REC-----SYS------USERID---DATE-----TIME------------UPDATE-MARKED--
0000001 MCECEBC STXXXXX1 20110710 19:53:27.706677 000052 YES
__________ MZBCVS2 STXXXXX2 20110710 19:03:29.100314 000010 YES ___
0000010 MCECEBC STXXXXX1 20110710 19:53:27.706677 000052 YES
__________ MZBCVS2 STXXXXX2 20110710 19:03:29.100314 000010 YES ___
0000013 MCECEBC STXXXXX1 20110710 19:53:27.706677 000052 YES
__________ MZBCVS2 STXXXXX2 20110710 19:03:29.100314 000010 YES ___
0000025 MCECEBC STXXXXX1 20110710 19:53:27.706677 000052 YES
__________ MZBCVS2 STXXXXX2 20110710 19:03:29.100314 000010 YES ___
0000039 MCECEBC STXXXXX1 20110710 19:53:27.706677 000052 YES
__________ MZBCVS2 STXXXXX2 20110710 19:03:29.100314 000010 YES ___
0000065 MCECEBC STXXXXX1 20110710 19:53:27.706677 000052 YES
__________ MZBCVS2 STXXXXX2 20110710 19:03:29.100314 000010 YES ___
0000070 MCECEBC STXXXXX1 20110710 19:53:27.706677 000052 YES
__________ MZBCVS2 STXXXXX2 20110710 19:03:29.100314 000010 YES ___
0000076 MCECEBC STXXXXX1 20110710 19:53:27.706677 000052 YES
__________ MZBCVS2 STXXXXX2 20110710 19:03:29.100314 000010 YES ___
0000093 MCECEBC STXXXXX1 20110710 19:53:27.706677 000052 YES
__________ MZBCVS2 STXXXXX2 20110710 19:03:29.100314 000010 YES ___
0000105 MCECEBC STXXXXX1 20110710 19:53:27.706677 000052 YES
__________ MZBCVS2 STXXXXX2 20110710 19:03:29.100314 000010 YES ___
-- TOTAL RECORDS ----------------------------------------------------
-- MARKED RECORDS = 2500
-- OTHER RECORDS = 0
-- MARKED RECORDS = ALL RECORDS OK
-- OTHER RECORDS = NO PROBLEMS FOUND
*********************************************************************
Figure 10-12 Check data (screen capture of the result for the last table space; all others are the same)
The batch job will end after sending the data to the zDMF master started task.
:
IEF373I STEP/BATCH /START 2011193.0005
IEF032I STEP/BATCH /STOP 2011193.0009
CPU: 0 HR 00 MIN 00.03 SEC SRB: 0 HR 00 MIN 00.00 SEC
VIRT: 240K SYS: 288K EXT: 16468K SYS: 17476K
IEF375I JOB/STRZ999 /START 2011193.0005
IEF033I JOB/STRZ999 /STOP 2011193.0009
CPU: 0 HR 00 MIN 00.03 SEC SRB: 0 HR 00 MIN 00.00 SEC
DWRQBUF§ 2E104120 LN=00FFFEE0 USED=00000000 REQ=00000000
==> IF GROUP ITSO97 EXISTS
==> COMMAND DELETE ITSO99
*** COMMAND SKIPPED DUE TO CONDITION TEST
As soon as all data sets (source and work data sets) are Active, zDMF changes to
Pending-Active status (Figure 10-17).
Figure 10-17 and the following zDMF Monitor panels show group CEBC with its data set
status and extents as the data sets progress through the migration.
To see more details, press the F4 key and modify the first three lines of the display options
(Figure 10-23).
Figure 10-23 shows the Display Options panel (Command ===>, Scroll ===> CSR) and the
resulting zDMF Monitor view of group CEBC with its data set status and extents.
In cooperation with the DB2 team, you can complete this group in different ways. You can
close the single table spaces, or you can stop one DB2 subsystem after the other. Wait until
all data sets are in Divert status or already moved and all metadata (catalog, VTOC, and
VVDS) is updated. You can then restart DB2 immediately.
You need to schedule a time, usually on the weekend or during the night, to stop the
applications for a short time. This outage is required to release the allocations to the data
sets; as long as a single data set of a session is allocated, the session does not end. After the
allocations are released, the data sets in Divert status are completed, the applications access
them at their new location through the catalog, and the migration is complete.
Check with the zDMF Monitor that the group has no allocated data sets, as shown in
Figure 10-27.
Group CEBC
Data Set Status
Extents
_ ITSO99 Complete
Activate, Divert, and Terminate reports available
Figure 10-28 zDMF Monitor: Status of group is Complete
After the group reaches Complete status, check the new location of the moved data sets and
the work data sets as shown in Figure 10-29.
-- SYSTEM MCECEBC
-- DB2 SYSTEM NAME DB1A
-- DB2 TABLE SPACE TSA010F
-- DB2 TABLE TA010F
-- NUMBER OF RECORDS IN TABLE 25000
-- CHECK ALL RECORDS WHERE FIELD CHANGED1 = YES
-- DISPLAY NUMBER OF UPDATED RECORDS (FIRST 10)
---REC-----SYS------USERID---DATE-----TIME------------UPDATE-MARKED--
0000001 MCECEBC STXXXXX1 20110712 18:13:55.728986 000064 YES
__________ MZBCVS2 STXXXXX2 20110712 18:56:06.467727 000050 YES ___
0000010 MCECEBC STXXXXX1 20110712 18:13:55.728986 000064 YES
__________ MZBCVS2 STXXXXX2 20110712 18:56:06.467727 000050 YES ___
0000013 MCECEBC STXXXXX1 20110712 18:13:55.728986 000064 YES
__________ MZBCVS2 STXXXXX2 20110712 18:56:06.467727 000050 YES ___
0000025 MCECEBC STXXXXX1 20110712 18:13:55.728986 000064 YES
__________ MZBCVS2 STXXXXX2 20110712 18:56:06.467727 000050 YES ___
0000039 MCECEBC STXXXXX1 20110712 18:13:55.728986 000064 YES
__________ MZBCVS2 STXXXXX2 20110712 18:56:06.467727 000050 YES ___
0000065 MCECEBC STXXXXX1 20110712 18:13:55.728986 000064 YES
__________ MZBCVS2 STXXXXX2 20110712 18:56:06.467727 000050 YES ___
0000070 MCECEBC STXXXXX1 20110712 18:13:55.728986 000064 YES
__________ MZBCVS2 STXXXXX2 20110712 18:56:06.467727 000050 YES ___
0000076 MCECEBC STXXXXX1 20110712 18:13:55.728986 000064 YES
__________ MZBCVS2 STXXXXX2 20110712 18:56:06.467727 000050 YES ___
0000093 MCECEBC STXXXXX1 20110712 18:13:55.728986 000064 YES
__________ MZBCVS2 STXXXXX2 20110712 18:56:06.467727 000050 YES ___
0000105 MCECEBC STXXXXX1 20110712 18:13:55.728986 000064 YES
__________ MZBCVS2 STXXXXX2 20110712 18:56:06.467727 000050 YES ___
-- TOTAL RECORDS ----------------------------------------------------
-- MARKED RECORDS = 2500
-- OTHER RECORDS = 0
-- MARKED RECORDS = ALL RECORDS OK
-- OTHER RECORDS = NO PROBLEMS FOUND
*********************************************************************
Figure 10-30 Test procedure, example result for the last table after migration completed
Group CEBC
Data Set Status
Extents
z ITSO99 Complete
Activate, Divert, and Terminate reports available
Figure 10-31 Start Cleanup procedure with line command “Z”
Based on the global setting of the zDMF installation, you can delete the work data sets in one
of two ways:
Change the setting for Create ICKDSF TRKFMT Statements to Y as shown in Figure 10-32.
For the group, zDMF creates a job with ICKDSF track format statements that overwrite the
work data sets with dummy data for each migrated data set. The work data sets and
extent maps created by zDMF are deleted as shown in Figure 10-33 on page 479. You
must update the job card information to your requirements and submit this job.
Remember: This ICKDSF job generates a heavy I/O load on the volumes that contain the
work data sets.
Change the setting for Create ICKDSF TRKFMT Statements to N as shown in Figure 10-34
to create an IDCAMS job with delete statements.
:
000046 DELETE 'XXX.ITSO99.D1193.T0020546.S00010' CLUSTER
000047 DELETE 'XXX.ITSO99.D1193.T0020546.S00011' CLUSTER
000048 DELETE 'XXX.ITSO99.D1193.T0020546.S00012' CLUSTER
000049 DELETE 'XXX.ITSO99.D1193.T0020546.S00013' CLUSTER
000050 DELETE 'XXX.ITSO99.D1193.T0020546.S00014' CLUSTER
000051 DELETE 'XXX.ITSO99.D1193.T0020546.S00015' CLUSTER
000052 DELETE 'XXX.ITSO99.D1193.T0020546.S00031' CLUSTER
000053
000054 /* GROUP NAME: ITSO99 END */
000055
000056 /*
000057 //EXZD850C EXEC PGM=IDCAMS,DYNAMNBR=1
000058 //SYSPRINT DD SYSOUT=*
000059 //SYSIN DD *
000060
000061 /* ATTEMPT TARGET EXTENT MAP DATA SET DELETE ONLY IF */
000062 /* NOT ALLOCATED AND STILL PRESENT ON VOLUME. */
000063 SET LASTCC=0
000064 ALLOCATE -
000065 DSNAME('SYS1.HGZD325.EXTMAP.ZDZD850C.DB') OLD
000066 IF LASTCC EQ 0 -
000067 THEN -
000068 DELETE 'SYS1.HGZD325.EXTMAP.ZDZD850C.DB'
000069 /*
:
Figure 10-36 Clean up with IDCAMS delete statements for the target extent map data set
Delete the extent map data sets only if you are finished with all sessions. If other sessions are
still active and you want to start cleaning up the work data sets, remove the part of the jobs
that deletes the extent maps.
Do not forget to export the user catalog for the work data sets and to delete all aliases for the
work data sets, as sketched in the following example.
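The following IDCAMS job is a minimal sketch of this cleanup. The alias (XXX) matches the work data set high-level qualifier used in this example, and the user catalog name UCAT.ZDMFWORK is a placeholder that you must replace with the catalog that is defined for your zDMF work data sets.
//CLEANUC  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
   /* Remove the alias for the work data set HLQ (example value)         */
   DELETE (XXX) ALIAS
   /* Disconnect the work data set user catalog (placeholder name)       */
   EXPORT UCAT.ZDMFWORK DISCONNECT
/*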
Restriction: TDMF for z/OS does not allow access to the target volume during the
migration process.
11.1.3 Terminology
These terms are used with the TDMF TCP/IP installation and replication process:
Master system (local) The local TDMF system. This system runs as an MVS batch job
that is responsible for the data copy function. Unlike a local TDMF
migration, there are two Master systems in one TDMF session:
Local and remote.
Agent system (local) An associated TDMF MVS image running in a shared local storage
environment with the Master. To ensure data integrity, any MVS
LPAR that has access to the source volumes must run either the
Master or an Agent system. The Master and the associated Agent
systems communicate through a shared system communications
data set (COMMDS).
Master system (remote) The remote TDMF system. This system runs as an MVS batch job
that is responsible for receiving and storing the data at the remote
site.
Agent systems (remote) Agent systems are allowed at the remote site. However, avoid
sharing the remote volumes with other LPARs. TDMF prevents
access by other users and systems to these volumes during the
replication. If the target volumes are shared at the remote site,
Agents must be active on those systems.
Source volume DASD volumes containing the data to be replicated.
Target volume DASD volumes receiving the replicated data.
Migration pair The relationship between one source and one target volume for a
single replication.
Migration group A group of volume pairs that use the same group name.
Migration session One Master (local) plus any Agents (local), one Master (remote),
and pairs or groups to be processed in a single TDMF execution.
Data consistency The procedures to ensure that data remains intact and correct
through the migration. All applications at the local site must be
stopped before TDMF TCP/IP can go to synchronization to ensure
consistent data.
Synchronization The time when I/O to the source devices is inhibited (quiesce
phase) and all applications are stopped. During this time, the last
updates are collected and reflected to the target devices
(synchronization phase). The VOLID of the target volume is NOT
changed during this process. It must be changed separately. When
this process is complete, the TDMF session ends and the target
device is marked offline (depending on the replication parameters).
In a TDMF TCP/IP configuration, the local Master and any local Agents share a COMMDS and
access the local storage (any vendor) through ESCON or FICON directors. The local Master
replicates the volumes (remote copy over IP) through the TCP/IP link to the remote Master,
which writes the data to the remote storage (any vendor) and uses its own COMMDS. A TDMF
Monitor (TSO user) is used to control and watch the session.
Master and Agents: Each LPAR that has access to the storage subsystem and shares the
source volumes must run an Agent. Each Master/Agent session must have access to its
own communication data set. Multiple Agent systems can be involved in a session locally.
The remote site normally has only a single Master and no Agents.
Important: To avoid a possible data integrity exposure, all systems accessing replicated
volumes must be identified to the Master system. TDMF includes various controls and
checks that ensure that the user does not make the following errors:
Assign or direct conflicting migrations to the same devices
Attempt migrations to nonexistent devices
Use the same COMMDS for two simultaneous or overlapping migration sessions
Because TDMF is a host software migration tool, there are no particular hardware
prerequisites.
You need to install TDMF at the remote site, but you do not need any license key to run TDMF
TCP/IP.
Tip: Perform periodic checks of the Required IBM Maintenance and Technical Information
Bulletins (TIBs). These requirements must be implemented to ensure successful TDMF
operation.
The data structure has the following components. Figure 11-2 on page 490 shows how these
components relate to each other.
Local system: The local system consists of all volumes in relation to the system. Examples
include IPL, IODF, local page data sets, and system-related volumes.
TDMF volumes: These volumes are TDMF libraries and COMMDS volumes. These
volumes are not moved.
The local storage holds the local system volumes, temporary system volumes, the TDMF
volumes with the COMMDS and local page data sets, and the data volumes. The data volumes
are replicated (remote copy over IP) through the TCP/IP link to the remote storage, where the
target volumes carry a VOLID of the form XXdddd before the migration starts. The TDMF
Monitor (TSO user) controls the session on both sides.
Figure 11-2 TDMF TCP/IP overview for a data center move to a new location
The replication is normally done in three stages. Together with the storage administrator,
update the volume list (the relation of source to target volumes) to the current status before
each stage.
1. Perform a test without consistent (synchronized) data as shown in Figure 11-3. This test is
normally done with system and network volumes only, and it is also a network test.
Application data is copied to test the bandwidth.
Figure 11-3 outlines this first test move with non-consistent data. Start the sessions in this
order: the remote Master, the local Agents (if needed), and the local Master. Check the TCP/IP
connection, bandwidth, and performance together with the customer, and note all changes
that must be made to the system parameters for the first IPL at the new location (IODF,
system parameters, and so on). When the initial copy is finished, go to sync; data consistency
is not important for a first IPL and network test. Initialize the target volumes, rename all
volumes to their original names, and vary them online. This step is required because of the
different hardware at the remote site and to set the correct VOLID in the VTOC. IPL at the
remote site, test the IPL and the network, and shut down the system at the remote site when
the test is finished. Repeat this test if needed.
The second test move is run with consistent data. Again, initialize the target volumes, rename
all volumes to their original names, and vary them online to set the correct VOLID in the VTOC
for the different hardware at the remote site. Start the sessions in the same order: the remote
Master, the local Agents (if needed), and the local Master. Checking the TCP/IP connection,
bandwidth, and performance, and checking and modifying the system parameters for the IPL
at the new location (IODF, system parameters, and so on) are the client's responsibility. When
the initial copy is finished, keep refresh mode. Stop all applications and bring down the
systems except VTAM, TCP/IP, TSO, RACF, and the TDMF sessions, then go to sync with a
manual command. IPL at the remote site for a test, and test the network and the applications.
When the test is finished, shut down the system at the remote site and restart the systems at
the local site. Repeat this test if needed.
For the production move, the flow is the same until the initial copy is finished and refresh
mode is kept. Stop all applications and bring down the systems except VTAM, TCP/IP, TSO,
RACF, and the TDMF sessions, then go to sync (internal or external command) and shut down
the systems at the local site. IPL the systems at the remote site. The customer checks the
network connection, all data, and all applications, and defines a point of no return before the
final GO.
Figure 11-8 Example: Performance information client production move (TDMF TCP/IP 5.2)
The distance in this example was around 100 km between the local and the remote site. The
client wanted to test the connectivity and bandwidth of the two newly defined links of 1 Gbit
each. To retain the performance information, the test was run twice and the resulting
information was stored in different COMMDS data sets. Sessions 01 - 03 were in the first test,
and sessions 04 - 06 were in the second.
Both tests used the same 30 volumes (10 out of each disk storage unit). Therefore, after the
first test the RENAME and INIT jobs were run, and all 30 volumes were set ONLINE again. All
devices with 2xxx and 3xxx addresses were attached with ESCON channel paths. The rest
were FICON attached. Example 11-1 shows the list of volumes that were used.
LPAR SYCP ran the Master session at the local site. All other LPARs ran Agents. LPAR SYX1
ran the Master session at the remote site. There were no Agents at the remote site; the target
volumes were not shared.
The test environment connects the local z/OS environment (a 2086.A04 machine, serial
38xxx) with the remote site (a 2096.S07 machine, serial 36xxx) through two routers
(192.168.2.12 and 192.168.2.35) over the two 1 Gbit links of the network test and Production
Test Environment (PTE). Channel paths (CHP 28, 01, 2A, and 7A) carry the IP connection
setup. The LPARs shown include SY02, SY06, SY07, SYFP, SYFD, SYCR, SYCD, and SYCP
(Master) at the local site, and SYX1 (Master) and SYX2 at the remote site. The TDMF Monitor
is used through TSO (10.27.190.123 at the local site and 10.28.64.73 at the remote site), and
several disk storage units are attached at each site.
These sessions require one Master and six Agents for each session, so seven LPARs are
involved. Two tests are run with three sessions each to test the different links. All of the
information is stored in the COMMDS.
The result of this filter command is shown in Figure 11-12. All members for session 01 are
displayed, making it easy to submit the right jobs.
For more information about the parameters, see the TDMF Installation and Reference
Manual.
Tip: Do not define the source or target volumes at the remote site. The port number in
relation to the local session port number is important. The remote Master listens on the
TCP/IP port defined in the JCL. As soon as the TCP/IP session is established, this port is
used to exchange the volume information. The next port number (port 8201) is then used
at the remote site to establish a separate TCP/IP session for each source/target volume
pair. See Figure 11-34 on page 513.
2. After the replication, rename the target volumes to their original VOLIDs. Otherwise, the
client cannot work with these volumes for the test. To rename the volumes, use ICKDSF
REFORMAT, which is a z/OS system tool (Figure 11-18).
3. The ICKDSF INIT jobs initialize the target volumes back to their old VOLIDs. Therefore, you
do not have to change the TDMF JCL again (Figure 11-20).
Rename the volumes to their original VOLIDs after the replication so that you can start the
tests. Before the next test or the production move, you must switch them back. Use the
ICKDSF verify (VFY) parameter in combination with NOREPLY=U so that you can run this job
without a manual REPLY at the operator console.
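A minimal ICKDSF sketch of such a rename job follows. The unit address and volume serials are hypothetical, and the NOREPLYU EXEC parameter shown here corresponds to the NOREPLY=U option mentioned above; verify the exact coding against your ICKDSF documentation.
//RENAME   EXEC PGM=ICKDSF,PARM='NOREPLYU'
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
   REFORMAT UNIT(A000) VERIFY(XXA000) VOLID(PROD01)
/*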
TDMF now waits for the remote Master. As soon as the TDMF remote Master is started, TDMF
starts the copy process for the defined number of concurrent volumes.
After a few seconds, you see the estimated copy phase end time next to each volume as
shown in Figure 11-23.
Figure 11-23 shows the TDMF Monitor volume list with these columns: Requested Action,
Volume Serial, Device Number, Group Name, Migration Status and Type, Error Info (System
and Message), and Sync Goal.
3. Switch back to TDMF menu item 1 by pressing F10. There are now four additional running
volumes, for a total of eight (Figure 11-25).
4. When the initial copy phase is done, you see the updates (here Refresh x) and TDMF
refreshes the volumes asynchronously. The message SYNC Volume Needed means that this
session is ready to go to SYNC (Figure 11-27).
As soon as all volumes within the session are completed, the session is no longer displayed
in the TDMF Monitor, and the jobs that belong to this session end. Check all job logs to see
whether MAXCC is greater than zero (Figure 11-30).
For the local Master job, MAXCC was equal to four as shown in Figure 11-31. All warning
messages result in MAXCC=4. In this test, there was an additional LPAR with access to the
replicated source volumes. No Agent was started on this LPAR because the source volumes
were OFFLINE there.
If MAXCC is greater than four, check the job logs for error messages. You will need to restart
the session.
The throughput at the OSA connection did not exceed 64 MBps. Because the source volumes
were ESCON attached, each channel path cannot deliver much more than 15 MBps. There
were four paths to the control unit, plus the TCP/IP usage.
You can view this information using the TSO netstat config command.
To see these TCP/IP sessions, use the TSO netstat conn command.
This methodology allows you to migrate data block by block to a different destination.
Depending on the implementation, migrations can use low-level copy commands like dd or
cpio. Some implementations provide a RAID-1 function where the data is mirrored
automatically to two different network block devices.
The Linux Network Block Device (nbd) is a basic implementation of a generic network block
device. It has been included in the kernel since version 2.1.101. It provides read-only and
read/write access to a remote device. Many users mount the remote device as read only. In
this configuration, only one host is allowed to write to the remote device. However, there is no
access control included in the code. This configuration carries the risk that data can be
deleted if more than one client tries to write to this network device.
When the nbd server is started, it listens on a TCP port assigned by the administrator. The
server is started with the command shown in Example A-1.
In this example, the nbd device is represented by a file in the directory /export. When the
server starts, it listens on port 5000 for incoming requests.
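As a sketch of what this start command can look like (the name of the exported file is an assumption), the server is started with the port number and the file that backs the device:
# nbd-server 5000 /export/nbd-disk.img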
After the server is started, you can create a block device by referencing the IP address of the
server and the port where the server listens (Example A-2).
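A sketch of the client side follows. The server IP address is an assumption, and the nbd kernel module must be loaded before the device node /dev/nbd0 can be used:
# modprobe nbd
# nbd-client 192.168.1.10 5000 /dev/nbd0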
The network block device can be used to create a file system, mount it, and allocate files and
directories. For data migrations, however, you need to use other methods.
Suggested methods
Creating a file system in the network block device and copying the data file by file is a basic
migration technique. A more efficient method is to copy the whole device in a single dd
statement to the network device as shown in Example A-3.
Using this method means that the applications using the data on the disk must be shut down.
In addition, the file system must be unmounted to capture all recent I/Os from the cache.
Depending on the used capacity of the disk and the bandwidth of the network, this process
might take a while.
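A sketch of such a whole-device copy follows, assuming that the source partition is /dev/sda2 (as in the RAID1 example that follows) and that its file system is mounted at /data:
# umount /data
# dd if=/dev/sda2 of=/dev/nbd0 bs=1M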
Example A-4 Create the RAID1 out of the local disk and the nbd0 device
# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sda2 /dev/nbd0
mdadm: /dev/sda2 appears to contain an ext2fs file system
size=1048576K mtime=Thu Jul 7 14:30:34 2011
Continue creating array? y
mdadm: array /dev/md0 started.
#
3. The device /dev/sda2 is the source which must be transmitted to the network block device
/dev/nbd0. The result is a new device /dev/md0, which is now the RAID1 device.
4. The synchronization of both mirrors starts immediately. The progress of the
synchronization can be monitored with the command shown in Example A-5.
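As a sketch, the resynchronization progress can be watched by reading /proc/mdstat or by querying the array directly:
# cat /proc/mdstat
# mdadm --detail /dev/md0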
When the synchronization is complete, you see the status shown in Example A-6.
5. Mount the file system for the applications after RAID1 is established (Example A-7).
6. Any updates from users or the applications are mirrored to both underlying devices.
When you are ready for the cutover, close the applications and unmount the file systems,
and then close RAID1 as shown in Example A-8.
8. The migration is now complete. Stop the nbd-server at the target system, stop the
nbd-client at the source system, and clean up RAID1.
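A sketch of these cleanup steps on the source system follows. The mount point and device names are assumptions that match the earlier examples:
# umount /data
# mdadm --stop /dev/md0
# nbd-client -d /dev/nbd0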
Summary
The Network Block Device nbd can be used for basic migration to new Linux systems and
storage devices at different sites. This migration technique is effective for migrations involving
only a few disks.
Highlight the task and click the information panel. Clicking the panel displays the task
information that will be used for converting to the DS8000 CLI. The information from the task
Flash10041005 is shown in Figure B-2. This task establishes a FlashCopy relationship
between source volume 004-23953 and target volume 005-23953 with the No Background
Copy option.
Select Flash10041005 and use the ESS CLI command esscli show task to look at the details
of the task (Figure B-4). In this output, the source and target volumes are listed as 1004 and
1005. This LUN format is used in the DS8000 CLI.
Review the specific server scripts that perform tasks that set up and start saved ESS CLI
tasks. These server scripts might have to be edited or translated to run the equivalent
DS8000 CLI commands.
In the task to establish PPRC paths named tstpths, there is a single path in the list from
source 21968:00 to 21968:01 (Figure B-6).
Important: Open systems volume IDs on the ESS are given in an 8-digit format:
xxx-sssss, where xxx is the LUN ID and sssss is the serial number of the ESS. The fixed
block LUN ID used for DS8000 CLI must be the xxx number in the ESS format plus 1000. If
you start the FlashCopy command on the ESS with the DS8000 CLI, the fixed block LUNs
must be specified as 1xxx to avoid overwriting CKD volumes. The address ranges are:
0000 to 0FFF: System z CKD volumes (4096 possible addresses)
1000 to 1FFF: Open systems fixed block LUNs (4096 possible addresses)
Important: Unlike the ESS GUI, the DS8000 DS Storage Manager does not save Copy
Services tasks. The DS Storage Manager is only used in real-time mode for Copy Services
functions.
Table B-2 lists the conversion of ESS Copy Services parameters to the DS8000 CLI format for
the PPRC task est1011. This table focuses on establishing PPRC as related to that saved task
rather than on all the saved tasks shown previously. The remaining tasks are converted to
DS8000 CLI in the next section.
To create the PPRC paths seen in Figure B-6 on page 524, use the DS8000 CLI command
mkpprcpath (Figure B-10). This example shows a single path between two ESS storage units
using source LSS 00 and target LSS 01. This output is slightly different than in the information
panel for tstpths because that task is creating paths within the same ESS.
To create the PPRC pairs seen in Figure B-7 on page 525, use the DS8000 CLI command
mkpprc as shown in Figure B-11. Use the following options:
The -type gcp option to create the extended distance PPRC relationship
The -mode nocp option to specify the Do Not Copy Vol parameter
The -tgtonline option to specify the Secondary Online OK parameter
Select the source volume range 0080 to 008B paired with target volumes 0180 to 018B, as
shown in the ESS task est1011.
The final task is converting the task to terminate PPRC, which is displayed in Figure B-8 on
page 526, using the DS8000 CLI command rmpprc. This conversion is shown in Figure B-12.
The saved task on the ESS did not include any options for the termination. Therefore, the new
DS8000 CLI command does not include any of the available options for this command.
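As a sketch of the converted command sequence, the following DS CLI commands use the LSS numbers and volume ranges from the saved ESS tasks; the storage image IDs, the remote WWNN, and the port pair are placeholders that you must replace with the values from your environment (shown here within the same ESS, as in the saved task tstpths):
dscli> mkpprcpath -dev IBM.2105-21968 -remotedev IBM.2105-21968 -remotewwnn 500507630XXXXXXX -srclss 00 -tgtlss 01 I0004:I0084
dscli> mkpprc -dev IBM.2105-21968 -remotedev IBM.2105-21968 -type gcp -mode nocp -tgtonline 0080-008B:0180-018B
dscli> rmpprc -dev IBM.2105-21968 -remotedev IBM.2105-21968 0080-008B:0180-018B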
Table B-3 maps the DS8000 CLI Copy Services commands to the equivalent ESS Copy
Services commands, with a description of each, starting with the PPRC commands.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see “Help from IBM” on page 531. Note
that some of the documents referenced here may be available in softcopy only.
DS8000 Copy Services for IBM System z, SG24-6787
IBM System Storage DS8000: Copy Services in Open Environments, SG24-6788
Implementing the IBM System Storage SAN Volume Controller V5.1, SG24-6423
Implementing the IBM System Storage SAN Volume Controller V6.1, SG24-7933
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources:
DS8000 Command-Line Interface User’s Guide, SC26-7625-05
IBM System Storage DS8000 User’s Guide, SC26-7915
z/OS DFSMS Advanced Copy Services, SC35-0428
Online resources
These websites are also relevant as further information sources:
Data Migration Service
http://www.ibm.com/servers/storage/datamobility/
IBM System Storage
http://www.ibm.com/systems/storage/index.html
E
ENQ 458
environment variables 155–156
ESCON 49, 56, 425
ESS 137, 144, 157, 330, 350
   CLI 522
   GUI 522
   port 322
ESS Specialist 52
ESTPAIR 72
event log 160
EXAMINE 453
Extend Volume 286
Extended Distance 525
Extended Mirrored phase 441
extent 179, 181, 186

F
fail-over 137, 168
FastCopy 422, 425
FCEstablish 527
fdisk 381
FICON 49, 425

I
I/O enclosure 55
I/O group 178
IBM XIV
   data migration solution 134
   storage 135
ICF catalog 450
ICKDSF 47, 50, 72, 425, 443
IDCAMS 436, 449
idling 259
IEBCOPY 403
image mode MDisk 194
image mode VDisk 185, 196
importvg 90–91
Inhibit Cache Load 431
initialization phase 400–401
initialization rate 159
initiator 134, 137, 139
Input/Output Supervisor (IOS) 430
insf 322
Intercluster Metro Mirror 184
intercluster Metro Mirror 184, 188
Intracluster Metro Mirror 184
N
NOACKnowledge 400
node 178
non-disruptive 266
Non-Extended Mirrored phase 441
nonvolatile storage 43, 45
non-VSAM volume record (NVR) 451
NVR see non-VSAM volume record (NVR)

O
ODM 91, 338, 348, 373
offline data migration 185
online data migration 188
out of sync 46
   bitmap 46

P
pacing 421, 428
pairing 425
parallelism 190
partial last extent 195
partition 295, 310, 381
PENDING.XD 75
per cluster 199
per managed disk 199
performance 44
persistent allocation 444
physical extent 376
PID 291
planning 267
Point-in-Time Copy 522
point-in-time copy 145
portperfshow 161
Post-Completion phase 435
PP 342
PPRCOPY QUERY 76
pre-migration 436
preparation 267
primary couple dataset 424
program temporary fix (PTF) 418
progress monitor 414
PTF see program temporary fix (PTF)
pvcreate 324, 382
PVID 90
pvmove 376, 383

Q
QLogic 380
Quiesce phase 397, 401, 421, 485, 487

R
RACF 448
rank contention 422
raw device 9
READ authority 399
record type 451
recreatevg 90
Redbooks website 531
   Contact us xviii
RedHat 140
redirect phase 397
reducevg 347
Refresh phase 401, 415
REFVTOC 443
remote mirror pair 146
removing mirror 282
reserve handling 456
RESERVE macro 419
resize 158, 164
restore 10
restriction list 439
resume phase 397
reverse pacing 421, 428
REXX 397, 486
ring complex 454
RMC
   license code 51
   pairs 70
   paths 54, 84
rmdev 338, 348
RMFMON 422
rmkrcrelationship 253
rmlv 345
rmpprc 74, 88
rsExecuteTask 529
RTM Resource Manager 432

S
SAF see System Authorization Facility (SAF)
SAN boot 134
SAN Volume Controller (SVC) 176
   cluster 178
   Copy Services 183
   licence 176
   Metro Mirror 183, 187–188
   migration 180
   migration algorithm 185
   script 522–523
SCSI initiator 139
SDD command-line interface (SDDDSM) 277
SDD see Multipath Subsystem Device Driver (SDD)
SDDDSM see SDD command-line interface (SDDDSM)
SDSF see System Display and Search Facility (SDSF)
security 404, 436
   profile 439
serialization 457
SFM policy see sysplex failure management (SFM) policy
SFM see sysplex failure management (SFM)
showvolgrp 273, 334, 342
simplex 71
slice 304
SMP/E 402
SMS see Data Facility Storage Management Subsystem (DFSMS)
Solaris Volume Manager (SVM) 288
   mirroring process 289
source LUN 146, 158, 166
Source Target System 146, 157
source updating 135–136, 151, 169
volumes
acknowledgement 400
confirmation 400
I/O redirect phase 401
initialization 401
pair assignment 66
selection 401
vpath 342, 347, 381–382
VPSMS 444
VSAM see Virtual Storage Access Method (VSAM)
VSAM volume data set (VVDS) 450, 458
VSAM volume record (VVR) 451
VTOC see volume table of contents (VTOC)
VTOCIX 440
VVDS see VSAM volume data set (VVDS)
VVR see VSAM volume record (VVR)
vxdg 98, 306
vxdisk list 310
vxdiskadm 306, 312, 314
vxplex 306, 318
vxprint 308, 319
vxtask 317
VxVM see Veritas Volume Manager (VxVM)
W
Windows Logical Disk Manager (LDM) 268
worldwide port name (WWPN) 59, 140, 142, 144, 153,
166
write access 258
WTO/WTOR see MVS Write-to-Operator/ Write-to-Oper-
ator with Reply (WTO/WTOR)
WWPN see worldwide port name (WWPN)
X
XCF see cross-system coupling facility (XCF)
XCLI see XIV command-line interface (XCLI)
XIV command-line interface (XCLI) 152
XIV Storage System 133–134
Z
z/OS Dataset Mobility Facility (zDMF) 433
parameters 446
performance 445
storage requirements 447
zDMF see z/OS Dataset Mobility Facility (zDMF)
zone 288, 338, 379
zoning 139–140, 149, 165
Highlights tools and techniques for Open Systems and z/OS

Addresses appliance, host, and storage-based migrations

Includes z/OS TDMF, TDMF TCP/IP, and zDMF

Data migration has become a mandatory and regular activity for most data centers.
Companies need to migrate data not only when technology needs to be replaced, but also for
consolidation, load balancing, and disaster recovery.

This IBM Redbooks publication addresses the aspects of data migration efforts while focusing
on the IBM System Storage as the target system. Data migration is a critical and complex
operation, and this book provides the phases and steps to ensure a smooth migration. Topics
range from planning and preparation to execution and validation.

The book also reviews products and describes available IBM data migration services
offerings. It explains, from a generic standpoint, the appliance-based, storage-based, and
host-based techniques that can be used to accomplish the migration. Each method is
explained, including the use of the various products and techniques with different migration
scenarios and various operating system platforms.

This document targets storage administrators, storage network administrators, system
designers, architects, and IT professionals who design, administer, or plan data migrations in
large data centers. The aim is to ensure that you are aware of the current thinking, methods,
tools, and products that IBM can make available to you. These items are provided to ensure a
data migration process that is as efficient and problem-free as possible.

The material presented in this book was developed with versions of the referenced products
as of June 2011.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization.
Experts from IBM, Customers and Partners from around the world create timely technical
information based on realistic scenarios. Specific recommendations are provided to help you
implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks