Front cover

Implementing the IBM Storwize V5000

Easily manage and deploy systems with embedded GUI

Experience rapid and flexible provisioning

Protect data with remote mirroring

Jon Tate
Adam Lyon-Jones
Lee Sirett
Chris Tapsell
Paulo Tomiyoshi Takeda

ibm.com/redbooks
International Technical Support Organization

Implementing the IBM Storwize V5000

February 2015

SG24-8162-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.

Second Edition (February 2015)

This edition applies to the IBM Storwize V5000 and software version 7.4. Note that this book was produced based on beta code, and some screens might change when the software becomes generally available.

© Copyright International Business Machines Corporation 2013, 2015. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x

IBM Redbooks promotions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi

Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii


February 2015, Second Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii

Chapter 1. Overview of the IBM Storwize V5000 system. . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 IBM Storwize V5000 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 IBM Storwize V5000 terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 IBM Storwize V5000 models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 IBM Storwize V5000 hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.1 Control enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.2 Expansion enclosure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.3 Host connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.4 Disk drive types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5 IBM Storwize V5000 terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.1 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.2 Node canister . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5.3 I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5.4 Clustered system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5.5 RAID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.6 Managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.7 Quorum disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.8 Storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5.9 Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.10 iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.5.11 SAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.5.12 Fibre Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.6 IBM Storwize V5000 features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.1 Mirrored volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.2 Thin provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.3 Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.6.4 Storage Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.6.5 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.6.6 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.6.7 External virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.7 Problem management and support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.7.1 IBM Assist On-site and remote service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.7.2 Event notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.7.3 SNMP traps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.7.4 Syslog messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.7.5 Call Home email . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

1.8 More information resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.8.1 Useful IBM Storwize V5000 websites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

Chapter 2. Initial configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25


2.1 Hardware installation planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.1.1 Procedure to install the SAS cables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2 SAN configuration planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.3 FC Direct-attach planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4 SAS direct-attach planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.5 LAN configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.5.1 Management IP address considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.5.2 Service IP address considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.6 Host configuration planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.7 Miscellaneous configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.8 System management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.8.1 GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.8.2 CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.9 First-time setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.10 Initial configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.10.1 Adding enclosures after initial configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.10.2 Configuring Call Home, email alert, and inventory . . . . . . . . . . . . . . . . . . . . . . . 70
2.10.3 Service Assistant tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

Chapter 3. Graphical user interface overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75


3.1 Getting started. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.1.1 Supported browsers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.1.2 Accessing the management GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.1.3 System panel layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.2 Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.2.1 Function icons navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.2.2 Breadcrumb navigation aid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.2.3 Suggested Tasks feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.2.4 Presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.2.5 Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.2.6 Task progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.2.7 Navigating panels with tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.3 Status indicator menus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.3.1 Allocated / Virtual Capacity status bar menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.3.2 Running Tasks bar menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.3.3 Health status bar menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.4 Function icon menus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.4.1 Monitoring menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.4.2 Pools menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3.4.3 Volumes menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
3.4.4 Hosts menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
3.4.5 Copy Services menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
3.4.6 Access menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
3.4.7 Settings menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.5 Management GUI help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
3.5.1 IBM Storwize V5000 Information Center. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
3.5.2 Watching an e-Learning video . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
3.5.3 Embedded panel help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
3.5.4 Hidden question mark help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

3.5.5 Hover help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
3.5.6 IBM endorsed YouTube videos. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

Chapter 4. Host configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155


4.1 Host attachment overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.2 Preparing the host operating system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.2.1 Windows 2012 R2: Preparing for FC attachment . . . . . . . . . . . . . . . . . . . . . . . . 157
4.2.2 Windows 2012 R2: Preparing for iSCSI attachment . . . . . . . . . . . . . . . . . . . . . . 161
4.2.3 Windows 2012 R2: Preparing for SAS attachment . . . . . . . . . . . . . . . . . . . . . . . 167
4.2.4 VMware ESXi 5.5: Preparing for FC attachment. . . . . . . . . . . . . . . . . . . . . . . . . 168
4.2.5 VMware ESXi 5.5: Preparing for iSCSI attachment . . . . . . . . . . . . . . . . . . . . . . 170
4.2.6 VMware ESXi 5.5: Preparing for SAS attachment . . . . . . . . . . . . . . . . . . . . . . . 183
4.3 Configuring hosts in IBM Storwize V5000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.3.1 Creating FC hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
4.3.2 Configuring IBM Storwize V5000 for FC connectivity . . . . . . . . . . . . . . . . . . . . . 189
4.3.3 Creating iSCSI hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
4.3.4 Configuring IBM Storwize V5000 for iSCSI host connectivity . . . . . . . . . . . . . . . 194
4.3.5 Creating SAS hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198

Chapter 5. Volume configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201


5.1 Creating volumes in IBM Storwize V5000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.1.1 Creating generic volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.1.2 Creating thin provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.1.3 Creating mirrored volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
5.1.4 Creating thin mirrored volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5.2 Mapping volumes to hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5.2.1 Mapping newly created volumes using Create and Map to Host . . . . . . . . . . . . 215
5.2.2 Manually mapping volumes to hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.3 Discovering mapped volumes from host systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.3.1 Discovering FC volumes from Windows 2012 R2. . . . . . . . . . . . . . . . . . . . . . . . 220
5.3.2 Discovering iSCSI volumes from Windows 2012 R2 . . . . . . . . . . . . . . . . . . . . . 229
5.3.3 Discovering SAS volumes from Windows 2012 R2 . . . . . . . . . . . . . . . . . . . . . . 235
5.3.4 Discovering FC volumes from VMware ESXi 5.5 . . . . . . . . . . . . . . . . . . . . . . . . 235
5.3.5 Discovering iSCSI volumes from VMware ESXi 5.5 . . . . . . . . . . . . . . . . . . . . . . 237
5.3.6 Discovering SAS volumes from VMware ESXi 5.5 . . . . . . . . . . . . . . . . . . . . . . . 248

Chapter 6. Storage migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249


6.1 Interoperability and compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.2 Storage migration wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.2.1 External virtualization capability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.2.2 Overview of the storage migration wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.2.3 Storage migration wizard tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.3 Storage migration wizard example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
6.3.1 Example scenario: Storage migration wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
6.3.2 Example scenario: Fibre Channel cabling layout . . . . . . . . . . . . . . . . . . . . . . . . 273
6.3.3 Using the storage migration wizard for example scenario . . . . . . . . . . . . . . . . . 274

Chapter 7. Storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309


7.1 Working with internal drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
7.1.1 Internal Storage window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
7.1.2 Actions on internal drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
7.2 Configuring internal storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
7.2.1 RAID configuration presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
7.2.2 Customizing initial storage configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324

7.2.3 Creating an MDisk and a pool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
7.2.4 Option: Use the recommended configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
7.2.5 Option: Select a different configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
7.2.6 MDisk by Pools panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
7.2.7 RAID action for MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
7.2.8 More actions on MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
7.3 Working with storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
7.3.1 Create Pool option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
7.3.2 Actions on storage pools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
7.4 Working with child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
7.4.1 Creating child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
7.4.2 Actions on child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
7.4.3 Resizing a child pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
7.5 Working with MDisks on external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357

Chapter 8. Advanced host and volume administration . . . . . . . . . . . . . . . . . . . . . . . . 359


8.1 Advanced host administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
8.1.1 Modifying Mappings menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
8.1.2 Unmapping volumes from a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
8.1.3 Renaming a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
8.1.4 Removing a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
8.1.5 Host properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
8.2 Adding and deleting host ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
8.2.1 Adding a host port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
8.2.2 Adding a Fibre Channel port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
8.2.3 Adding a SAS host port. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
8.2.4 Adding an iSCSI host port. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
8.2.5 Deleting a host port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
8.3 Host mappings overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
8.3.1 Unmapping volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
8.3.2 Properties (Host) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
8.3.3 Properties (Volume) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
8.4 Advanced volume administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
8.4.1 Advanced volume functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
8.4.2 Mapping a volume to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
8.4.3 Unmapping volumes from all hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
8.4.4 Viewing which host is mapped to a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
8.4.5 Renaming a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
8.4.6 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
8.4.7 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
8.4.8 Migrating a volume to another storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
8.4.9 Exporting to an image mode volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
8.4.10 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
8.5 Volume properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
8.5.1 Overview tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
8.5.2 Host Maps tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
8.5.3 Member MDisks tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
8.5.4 Adding a mirrored volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
8.5.5 Editing thin-provisioned volume properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
8.6 Advanced volume copy functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
8.6.1 Thin-Provisioned menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
8.6.2 Splitting into a new volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
8.6.3 Validate Volume Copies option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410

8.6.4 Delete Volume Copy option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
8.6.5 Migrating volumes by using the volume copy features . . . . . . . . . . . . . . . . . . . . 412
8.7 Volumes by storage pool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
8.8 Volumes by host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416

Chapter 9. Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419


9.1 Generations of IBM Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
9.2 New features in Easy Tier 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
9.3 Easy Tier overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
9.3.1 Tiered storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
9.4 Easy Tier process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
9.4.1 I/O Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
9.4.2 Data Placement Advisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
9.4.3 Data Migration Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
9.4.4 Data Migrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
9.4.5 Easy Tier operating modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
9.4.6 Easy Tier status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
9.4.7 Storage Pool Balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
9.4.8 Easy Tier rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
9.5 Easy Tier configuration using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
9.5.1 Creating multi-tiered pools: Enabling Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . 430
9.5.2 Downloading Easy Tier I/O measurements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
9.6 Easy Tier configuration using the command-line interface . . . . . . . . . . . . . . . . . . . . . 438
9.6.1 Enabling Easy Tier measured mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
9.6.2 Enabling or disabling Easy Tier on single volumes. . . . . . . . . . . . . . . . . . . . . . . 442
9.7 IBM Storage Tier Advisor Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
9.7.1 Processing heat log files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
9.7.2 Storage Tier Advisor Tool reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446

Chapter 10. Copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451


10.1 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
10.1.1 Business requirements for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
10.1.2 FlashCopy functional overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
10.1.3 Planning for FlashCopy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
10.1.4 Managing FlashCopy using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
10.1.5 Managing FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
10.1.6 Managing a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
10.2 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
10.2.1 Remote Copy concepts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
10.2.2 Global Mirror with Change Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
10.2.3 Remote Copy planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
10.3 Troubleshooting Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
10.3.1 1920 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
10.3.2 1720 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
10.4 Managing Remote Copy using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
10.4.1 Managing cluster partnerships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
10.4.2 Managing stand-alone Remote Copy relationships . . . . . . . . . . . . . . . . . . . . . 550
10.4.3 Managing a Remote Copy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . 563

Chapter 11. External storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579


11.1 Planning for external storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
11.1.1 License for external storage virtualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
11.1.2 SAN configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
11.1.3 External storage configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582

11.1.4 Guidelines for virtualizing external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
11.2 Working with external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
11.2.1 Adding external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
11.2.2 Importing Image Mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
11.2.3 Managing external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
11.2.4 Removing external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600

Chapter 12. RAS, monitoring, and troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . 601


12.1 Reliability, Availability, and Serviceability on the IBM Storwize V5000 . . . . . . . . . . . 602
12.2 IBM Storwize V5000 components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
12.2.1 Enclosure midplane assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
12.2.2 Node canisters: Ports and LED. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
12.2.3 Node canister replaceable hardware components . . . . . . . . . . . . . . . . . . . . . . 608
12.2.4 Expansion canister: Ports and LED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
12.2.5 Disk subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
12.2.6 Power supply unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
12.3 Configuration backup procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
12.3.1 Generating a manual configuration backup by using the CLI . . . . . . . . . . . . . 618
12.3.2 Downloading a configuration backup by using the GUI . . . . . . . . . . . . . . . . . . 619
12.4 Updating software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
12.4.1 Node canister software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
12.4.2 Upgrading drive firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
12.5 Event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
12.5.1 Managing the event log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
12.5.2 Alert handling and Recommended Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
12.6 Collecting support information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
12.6.1 Support information via GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
12.6.2 Support information via Service Assistant. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
12.6.3 Support Information onto USB stick . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
12.7 Powering on and powering off the IBM Storwize V5000 . . . . . . . . . . . . . . . . . . . . . . 650
12.7.1 Powering off the system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
12.7.2 Powering on . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
12.8 Tivoli Storage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
12.8.1 Tivoli Storage Productivity Center benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
12.8.2 Adding IBM Storwize V5000 in Tivoli Storage Productivity Center . . . . . . . . . . 653
12.9 Using Tivoli Storage Productivity Center to administer and generate reports for an
IBM Storwize V5000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
12.9.1 Basic configuration and administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
12.9.2 Generating reports by using Java GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
12.9.3 Generating reports using Tivoli Storage Productivity Center web console . . . . 661

Appendix A. Command-line interface setup and SAN Boot . . . . . . . . . . . . . . . . . . . . 667


Command-line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
Basic setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
Example commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
Upgrading drive firmware using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
Copying, installing, and running the Upgrade Test Utility on the Storwize unit. . . . . . . 682
SAN Boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687


IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
IBM Storwize V5000 publications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
IBM Storwize V5000 support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.



Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
DS8000®, Easy Tier®, FlashCopy®, IBM®, Power Systems™, Redbooks®, Redbooks (logo)®, Storwize®, System i®, System Storage®, Tivoli®, XIV®

The following terms are trademarks of other companies:

Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, or service names may be trademarks or service marks of others.

IBM Redbooks promotions

Find and read thousands of IBM Redbooks publications:
- Search, bookmark, save, and organize favorites
- Get up-to-the-minute Redbooks news and announcements
- Link to the latest Redbooks blogs and videos
- Get the latest version of the Redbooks Mobile App (available for Android and iOS)

Promote your business in an IBM Redbooks® publication

Place a Sponsorship Promotion in an IBM Redbooks publication, featuring your business or solution with a link to your web site. Qualified IBM Business Partners may place a full-page promotion in the most popular Redbooks publications. Imagine the power of being seen by users who download millions of Redbooks publications each year!

For more information, see ibm.com/Redbooks (About Redbooks, Business Partner Programs).
Preface

Organizations of all sizes are faced with the challenge of managing massive volumes of
increasingly valuable data. But storing this data can be costly, and extracting value from the
data is becoming more difficult. IT organizations have limited resources but must stay
responsive to dynamic environments and act quickly to consolidate, simplify, and optimize
their IT infrastructures. The IBM® Storwize® V5000 system provides a smarter solution that
is affordable, easy to use, and self-optimizing, which enables organizations to overcome
these storage challenges.

Storwize V5000 delivers efficient, entry-level configurations that are specifically designed to
meet the needs of small and midsize businesses. Designed to provide organizations with the
ability to consolidate and share data at an affordable price, Storwize V5000 offers advanced
software capabilities that are usually found in more expensive systems.

This IBM Redbooks® publication is intended for pre-sales and post-sales technical support
professionals and storage administrators.

The concepts in this book also relate to the IBM Storwize V3700.

This book was written at a software level of Version 7 Release 4.

Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, Manchester Labs, UK.

Jon Tate is a Project Manager for IBM System Storage® SAN Solutions at the International Technical Support Organization (ITSO), San Jose Center. Before joining the ITSO in 1999, he worked in the IBM Technical Support Center, providing Level 2 support for IBM storage products. Jon has 28 years of experience in storage software and management, services, and support, and is both an IBM Certified IT Specialist and an IBM SAN Certified Specialist. He is also the UK Chairman of the Storage Networking Industry Association.

Adam Lyon-Jones is a Test Engineer in IBM Storage Systems
at the IBM Manchester Lab as part of a global team
developing, testing, and supporting IBM Storage products. He
has worked for two years performing system and integration
testing for IBM SAN Volume Controller and IBM Storwize
products. His areas of expertise include storage hardware,
storage virtualization, and storage area networks. Before joining IBM, he studied for a Master's degree in Physics at Durham University.

Lee Sirett is a Storage Technical Advisor for the European Storage Competency Centre in Mainz, Germany. Before joining the ESCC, he worked in IBM Technical Support Services for 10 years, providing support for a range of IBM products, including IBM Power Systems™. Lee has 24 years of experience in the IT industry. He is IBM Storage Certified and is both an IBM Certified XIV® Administrator and a Certified XIV Specialist.

Chris Tapsell is a Certified Storage Client Technical Specialist for IBM Systems & Technology Group in the UK. In his 25+ years at IBM, he has worked as a CE covering products ranging from Office Products to the AS400 (IBM System i®), as a Support Specialist for all of the IBM Intel server products (PC Server, Netfinity, xSeries, and System x), PCs, and notebooks, and as a Client Technical Specialist for System x. Chris holds a number of IBM certifications covering System x and Storage products.

Paulo Tomiyoshi Takeda is a SAN and Storage Disk Specialist at IBM Brazil. He has over nine years of experience in the IT arena and is an IBM Certified IT Specialist. He holds a bachelor's degree in Information Systems from UNIFEB (Universidade da Fundação Educacional de Barretos) and is IBM Certified for IBM DS8000® and IBM Storwize V7000. His areas of expertise include planning, configuring, and troubleshooting DS8000, SAN Volume Controller, and IBM Storwize V7000. He is involved in storage-related projects such as capacity growth planning, SAN consolidation, storage microcode upgrades, and copy services in Open Systems environments.

Thanks to the following people for their contributions to this project:


- Martyn Spink
- Djihed Afifi
- Arthur Wellesley
- James Whitaker
- Imran Imtiaz
- Tobias Fleming
IBM Manchester Lab

- John Fairhurst
- Paul Marris
- Paul Merrison
IBM Hursley

- Samrat Dutta
IBM Pune

Thanks to the following authors of the previous edition of this book:


- Uwe Dubberke
- Justin Heather
- Andrew Hickey
- Imran Imtiaz
- Nancy Kinney
- Saiprasad Prabhakar Parkar
- Dieter Utesch

Now you can become a published author, too!


Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at:
  ibm.com/redbooks
- Send your comments in an email to:
  [email protected]
- Mail your comments to:
  IBM Corporation, International Technical Support Organization
  Dept. HYTD Mail Station P099
  2455 South Road
  Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks
- Find us on Facebook:
  http://www.facebook.com/IBMRedbooks
- Follow us on Twitter:
  http://twitter.com/ibmredbooks
- Look for us on LinkedIn:
  http://www.linkedin.com/groups?home=&gid=2130806
- Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:
  https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
- Stay current on recent Redbooks publications with RSS Feeds:
  http://www.redbooks.ibm.com/rss.html

Summary of changes

This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.

Summary of Changes
for SG24-8162-01
for Implementing the IBM Storwize V5000
as created or updated on February 4, 2015.

February 2015, Second Edition


This revision includes the following new and changed information.

New information
- Scalability data
- Drive support
- New GUI look and feel
- New CLI commands

Changed information
- GUI screen captures
- CLI commands and output


Chapter 1. Overview of the IBM Storwize V5000 system

This chapter provides an overview of the IBM Storwize V5000 architecture and includes a
brief explanation of storage virtualization.

This chapter includes the following topics:


- IBM Storwize V5000 overview
- IBM Storwize V5000 terminology
- IBM Storwize V5000 models
- IBM Storwize V5000 hardware
- IBM Storwize V5000 terms
- IBM Storwize V5000 features
- Problem management and support
- More information resources



1.1 IBM Storwize V5000 overview
The IBM Storwize V5000 solution is a modular, midrange storage solution. The Storwize V5000 includes the capability to virtualize its own internal Redundant Array of Independent Disks (RAID) storage and existing external SAN-attached storage.

IBM Storwize V5000 features the following benefits:

- Brings enterprise technology to entry and midrange storage
- Does not require specialty administrators
- Easy client setup and service
- Ability to grow the system incrementally as storage capacity and performance needs change
- Simple integration into the server environment

The IBM Storwize V5000 addresses the block storage requirements of small and midsize organizations. It consists of one 2U control enclosure and, optionally, up to nineteen 2U expansion enclosures, which are connected via Serial-Attached SCSI (SAS) cables and together make up one system that is called an I/O group.

Two I/O groups can be connected to form a cluster, giving a maximum of two control enclosures and 38 expansion enclosures.

The control and expansion enclosures are available in the following form factors and can be intermixed within an I/O group:
- 12 x 3.5-inch drives in a 2U unit
- 24 x 2.5-inch drives in a 2U unit

Within each enclosure, there are two canisters. Control enclosures contain two node
canisters, and expansion enclosures contain two expansion canisters.

The IBM Storwize V5000 supports up to 480 3.5-inch drives, 960 2.5-inch drives, or a combination of both drive form factors for internal storage in a two-I/O-group cluster.
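
These maximums follow directly from the enclosure counts. Each of the two I/O groups can contain one control enclosure plus up to 19 expansion enclosures, so a full cluster holds 40 enclosures:

   2 I/O groups x 20 enclosures x 12 drives (3.5-inch) = 480 drives
   2 I/O groups x 20 enclosures x 24 drives (2.5-inch) = 960 drives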

SAS, NL-SAS, and solid-state drive (SSD) types are supported.

The IBM Storwize V5000 is designed to accommodate the most common storage network
technologies to enable easy implementation and management. It can be attached to hosts via
a Fibre Channel SAN fabric, an iSCSI infrastructure, or SAS. Hosts can be network-attached or direct-attached.

Important: IBM Storwize V5000 can be direct-attached to a host. For more information
about restrictions, see the IBM System Storage Interoperation Center (SSIC), which is
available at this website:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss

More information is also available at this website:


https://ibm.biz/BdFNu6
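
As an illustration only, the following CLI sketch shows how a host object might be defined for each connectivity type. The host names and the WWPN and iSCSI qualified name (IQN) values are hypothetical placeholders; host configuration is covered in detail in Chapter 4, “Host configuration” on page 155.

   mkhost -name FCHOST01 -fcwwpn 2100000E1E30B0A8                             (Fibre Channel host)
   mkhost -name ISCSIHOST01 -iscsiname iqn.1991-05.com.microsoft:w2k12-host   (iSCSI host)
   mkhost -name SASHOST01 -saswwpn 500062B200556140                           (SAS host)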

The IBM Storwize V5000 is a virtualized storage solution that groups its internal drives into
RAID arrays (called Managed Disks or MDisks). MDisks can also be created by importing
LUNs from external FC SAN-attached storage. These MDisks are then grouped into storage
pools. Volumes are created from these storage pools and provisioned out to hosts. Storage
pools are normally created with MDisks consisting of the same type and capacity of drive.
Volumes can be moved non-disruptively between storage pools with differing performance
characteristics. For example, a volume can be moved from a storage pool that is made up of NL-SAS drives to a storage pool that is made up of SAS drives.
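
The following CLI sketch illustrates that flow end to end; the pool, drive, and volume names are hypothetical, and the full procedures are described in Chapter 7, “Storage pools” on page 309:

   mkmdiskgrp -name SAS_POOL -ext 256               (create an empty storage pool)
   mkarray -level raid5 -drive 0:1:2:3:4 SAS_POOL   (create a RAID array MDisk from internal drives)
   mkvdisk -mdiskgrp SAS_POOL -iogrp 0 -size 100 -unit gb -name VOL01   (provision a volume)
   migratevdisk -mdiskgrp NLSAS_POOL -vdisk VOL01   (move the volume to another pool non-disruptively)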

The IBM Storwize V5000 system also provides several configuration options that are aimed at
simplifying the implementation process. It also provides configuration presets and automated
wizards called Directed Maintenance Procedures (DMP) to help resolve any events that might
occur.

Included with an IBM Storwize V5000 system is a simple, easy-to-use graphical user interface (GUI) that is designed to allow storage to be deployed quickly and efficiently. The GUI runs in any supported browser. The management GUI contains a series of pre-established configuration options, called presets, that apply commonly used settings to quickly configure objects on the system. Presets are available for creating volumes and IBM FlashCopy® mappings and for setting up a RAID configuration.

You can also use the command-line interface (CLI) to set up or control the system.
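
For example, the CLI is reached over SSH to the system management IP address (the address shown here is a placeholder); see Appendix A, “Command-line interface setup and SAN Boot” on page 667 for the setup details:

   ssh [email protected]     (log in to the cluster CLI)
   lssystem                      (display system-wide properties)
   lsenclosure                   (list control and expansion enclosures)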

1.2 IBM Storwize V5000 terminology


The IBM Storwize V5000 system uses terminology that is consistent with the entire IBM
Storwize family and SAN Volume Controller. The terms are defined in Table 1-1.

Table 1-1 IBM Storwize V5000 terminology

Battery: Each control enclosure node canister in an IBM Storwize V5000 contains a battery.

Chain: Each control enclosure has two chains, which are used to connect expansion enclosures in such a way as to give redundant connections to the drives inside.

Clone: A copy of a volume on a server at a particular point in time. The contents of the copy can be customized while the contents of the original volume are preserved.

Control enclosure: A hardware unit that includes a chassis, node canisters, drives, and power sources.

Data migration: IBM Storwize V5000 can migrate data from existing external storage to its internal volumes.

Drive: IBM Storwize V5000 supports a range of hard disk drives (HDDs) and SSDs.

Event: An occurrence that is significant to a task or system. Events can include completion or failure of an operation, a user action, or a change in the state of a process.

Expansion canister: A hardware unit that includes the SAS interface hardware that enables the control enclosure hardware to use the drives of the expansion enclosure. Each expansion enclosure has two expansion canisters.

Expansion enclosure: A hardware unit that includes expansion canisters, drives, and power supply units.

External storage: MDisks that are SCSI logical units (LUs) presented by storage systems that are attached to and managed by the clustered system.

Fibre Channel port: Fibre Channel ports are connections through which hosts access the IBM Storwize V5000.

Host mapping: The process of controlling which hosts can access specific volumes within an IBM Storwize V5000.

Internal storage: Array MDisks and drives that are held in enclosures that are part of the IBM Storwize V5000.

iSCSI (Internet Small Computer System Interface): An Internet Protocol (IP)-based storage networking standard for linking data storage facilities.

Managed disk (MDisk): A component of a storage pool that is managed by a clustered system. An MDisk is part of a RAID array of internal storage or a SCSI LU for external storage. An MDisk is not visible to a host system on the storage area network.

Node canister: A hardware unit that includes the node hardware, fabric and service interfaces, SAS expansion ports, and battery. Each control enclosure contains two node canisters.

PHY: A single SAS lane. There are four PHYs in each SAS cable.

Power supply unit: Each enclosure has two power supply units (PSUs).

Quorum disk: A disk that contains a reserved area that is used exclusively for cluster management. The quorum disk is accessed when it is necessary to determine which half of the cluster continues to read and write data.

Serial-Attached SCSI (SAS) ports: Connections for expansion enclosures and for direct attachment of hosts to the IBM Storwize V5000.

Snapshot: An image backup type that consists of a point-in-time view of a volume.

Storage pool: An amount of storage capacity that provides the capacity requirements for a volume.

Strand: The SAS connectivity of a set of drives within multiple enclosures. The enclosures can be control enclosures or expansion enclosures.

Thin provisioning (thin-provisioned): The ability to define a storage unit (full system, storage pool, or volume) with a logical capacity size that is larger than the physical capacity that is assigned to that storage unit.

Volume: A discrete unit of storage on disk, tape, or other data recording medium that supports some form of identifier and parameter list, such as a volume label or input/output control.

Worldwide port name (WWPN): Each Fibre Channel port and SAS port is identified by its physical port number and worldwide port name.


1.3 IBM Storwize V5000 models
The IBM Storwize V5000 platform consists of a number of different models.

More information: For more information about the features, benefits, and specifications of
IBM Storwize V5000 models, see this website:
http://www.ibm.com/systems/storage/disk/storwize_v5000/index.html

The information in this book is accurate at the time of writing. However, as the IBM
Storwize V5000 matures, expect to see new features and enhanced specifications.

The IBM Storwize V5000 models are described in Table 1-2. Control enclosures (C models)
have two node canisters; expansion enclosures (E models) have two expansion canisters.

Table 1-2 IBM Storwize V5000 models

Model      Cache   Drive slots

One-year warranty:
2077-12C   16 GB   12 x 3.5-inch
2077-24C   16 GB   24 x 2.5-inch
2077-12E   N/A     12 x 3.5-inch
2077-24E   N/A     24 x 2.5-inch

Three-year warranty:
2078-12C   16 GB   12 x 3.5-inch
2078-24C   16 GB   24 x 2.5-inch
2078-12E   N/A     12 x 3.5-inch
2078-24E   N/A     24 x 2.5-inch

Figure 1-1 shows the front view of the 2077/2078-12C and 12E enclosures.

Figure 1-1 IBM Storwize V5000 front view for 2077/2078-12C and 12E enclosures

The drives are positioned in four columns of three horizontal-mounted drive assemblies. The
drive slots are numbered 1 - 12, starting at upper left and going left to right, top to bottom.



Figure 1-2 shows the front view of the 2077/2078-24C and 24E enclosures.

Figure 1-2 IBM Storwize V5000 front view for 2077/2078-24C and 24E enclosure

The drives are positioned in one row of 24 vertically mounted drive assemblies. The drive
slots are numbered 1 - 24, starting from the left. There is a vertical center drive bay molding
between slots 12 and 13.

1.4 IBM Storwize V5000 hardware


The IBM Storwize V5000 solution is a modular storage system that is built on a common
enclosure platform shared by the control enclosures and expansion enclosures.

Figure 1-3 shows an overview of hardware components of the IBM Storwize V5000 solution.

Figure 1-3 IBM Storwize V5000 hardware components



Figure 1-4 shows the control enclosure rear view of IBM Storwize V5000 models 12C and
24C.

Figure 1-4 IBM Storwize V5000 control enclosure rear view of models 12C and 24C

In Figure 1-4, you can see that there are two power supply slots at the bottom of the
enclosure. The power supplies are identical and exchangeable. There are two canister slots
at the top of the chassis.

In Figure 1-5, you can see the rear view of an IBM Storwize V5000 expansion enclosure.

Figure 1-5 IBM Storwize V5000 expansion enclosure rear view - models 12E and 24E

You can see that the only difference between the control enclosure and the expansion
enclosure is the canisters. The canisters of the expansion enclosure have only the two SAS ports.

For more information about the expansion enclosure, see 1.4.2, “Expansion enclosure” on
page 8.

1.4.1 Control enclosure


Each IBM Storwize V5000 system has one control enclosure that contains two node canisters
(nodes), disk drives, and two power supplies.

Figure 1-6 shows a single node canister.

Figure 1-6 IBM Storwize V5000 node canister



Each node canister contains the following hardware:
• Battery
• Memory: 8 GB
• 8 Gb Fibre Channel host interface card
• Four 6 Gbps SAS ports
• Two 10/100/1000 Mbps Ethernet ports
• Two USB 2.0 ports (one port is used during installation)
• System flash

The battery is used in case of power loss. The IBM Storwize V5000 system uses this battery
to power the canister while the cache data is written to the internal system flash. This memory
dump is called a fire hose memory dump. After the system is up again, this data is loaded
back to the cache for destage to the disks.

Figure 1-6 on page 7 also shows the following features that are provided by the IBM Storwize
V5000 node canister:
• Two 10/100/1000 Mbps Ethernet ports, which are used for management. Port 1 (the left
  port) must be configured. The second port, port 2 on the right, is optional. Both ports can
  be used for iSCSI traffic. For more information, see Chapter 4, “Host configuration” on
  page 155.
• Two USB ports. One port is used during the initial configuration or when there is a
  problem. They are numbered 1 on the left and 2 on the right. For more information about
  usage, see Chapter 2, “Initial configuration” on page 25.
• Four serial-attached SCSI (SAS) ports. They are numbered 1 on the left to 4 on the right.
  The IBM Storwize V5000 uses ports 1 and 2 for host connectivity and ports 3 and 4 to
  connect to the optional expansion enclosures. The IBM Storwize V5000 incorporates two
  SAS chains: up to 10 expansion enclosures can be connected on SAS chain 1 and up to
  9 enclosures on SAS chain 2.
• Four Fibre Channel ports, which operate at 2 Gbps, 4 Gbps, or 8 Gbps. The ports are
  numbered from left to right starting with 1.

Service port: Do not use the port marked with a wrench. This port is a service port only.

The two node canisters act as a single processing unit and form an I/O group that is attached
to the SAN fabric, an iSCSI infrastructure, or directly attached to hosts via FC or SAS. The
pair of nodes is responsible for serving I/O to a volume. The two nodes provide a highly
available fault-tolerant controller so that if one node fails, the surviving node automatically
takes over. Nodes are deployed in pairs that are called I/O groups.

One node is designated as the configuration node, but each node in the control enclosure
holds a copy of the control enclosure state information.

The IBM Storwize V5000 supports two I/O groups in a clustered system.

The terms node canister and node are used interchangeably throughout this book.

1.4.2 Expansion enclosure


The optional IBM Storwize V5000 expansion enclosure contains two expansion canisters,
disk drives, and two power supplies.



Figure 1-7 shows an overview of the expansion enclosure.

Figure 1-7 Expansion enclosure of the IBM Storwize V5000

The expansion enclosure power supplies are the same as those of the control enclosure.
There is a single power lead connector on each power supply unit.

Figure 1-8 shows the expansion canister ports.

Figure 1-8 Expansion canister ports

As shown in Figure 1-8, each expansion canister provides two SAS interfaces that are used
to connect to the control enclosure and any further optional expansion enclosures. The ports
are numbered 1 on the left and 2 on the right. SAS port 1 is the IN port and SAS port 2 is the
OUT port.

Use of the SAS connector 1 is mandatory because the expansion enclosure must be
attached to a control enclosure or another expansion enclosure. SAS connector 2 is optional
because it is used to attach to further expansion enclosures.

Each port includes two LEDs to show the status. The first LED indicates the link status and
the second LED indicates the fault status.

For more information about LED and ports, see this website:
https://ibm.biz/BdF7H4

1.4.3 Host connectivity


With 1 Gb iSCSI, 8 Gb FC, and 6 Gb SAS host interfaces supported as standard, the IBM
Storwize V5000 is designed to accommodate the most common storage networks. This broad
networking support enables deployment of IBM Storwize V5000 in existing storage network
infrastructures.

The 1 Gb iSCSI and 6 Gb SAS interfaces are built into the node canister hardware and the
8 Gb FC interface is supplied by a host interface card (HIC). At the time of writing, the 8 Gb
FC HIC is the only HIC that is available and is supplied as standard.



1.4.4 Disk drive types
IBM Storwize V5000 enclosures support SSD, SAS, and Nearline SAS drive types. Each
drive has two ports (two PHYs) and I/O can be issued down both paths simultaneously.

Table 1-3 shows the IBM Storwize V5000 disk drive types that are available at the time of
writing.

Table 1-3 IBM Storwize V5000 disk drive types

Drive type                           Speed        Size
2.5-inch form factor, solid-state    N/A          200 GB, 400 GB, 800 GB, and 1.6 TB
2.5-inch form factor, SAS            10,000 rpm   600 GB, 900 GB, 1.2 TB, and 1.8 TB
2.5-inch form factor, SAS            15,000 rpm   146 GB, 300 GB, and 600 GB
2.5-inch form factor, Nearline SAS   7,200 rpm    1 TB
3.5-inch form factor, SAS            10,000 rpm   900 GB, 1.2 TB, and 1.8 TB (a)
3.5-inch form factor, SAS            15,000 rpm   300 GB and 600 GB (a)
3.5-inch form factor, Nearline SAS   7,200 rpm    2, 3, 4, and 6 TB

(a) 2.5-inch drive in a 3.5-inch drive carrier.

Note: The 1.8 TB and 6 TB drives listed above support 4K block sizes.

1.5 IBM Storwize V5000 terms


In this section, we introduce the terms that are used for the IBM Storwize V5000 throughout
this book.

1.5.1 Hosts
A host system is a server that is connected to IBM Storwize V5000 through a Fibre Channel
connection, an iSCSI connection, or through a SAS connection.

Hosts are defined on IBM Storwize V5000 by identifying their WWPNs for Fibre Channel and
SAS hosts. iSCSI hosts are identified by using their iSCSI names. The iSCSI names can be
iSCSI qualified names (IQNs) or extended unique identifiers (EUIs). For more information,
see Chapter 4, “Host configuration” on page 155.

Hosts can be Fibre Channel attached via an existing Fibre Channel network infrastructure or
direct attached, iSCSI attached via an existing IP network, or directly attached via SAS. A
significant benefit of having direct attachment is that you can attach the host directly to the
IBM Storwize V5000 without the need for an FC or IP network.
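As an illustration, host objects can also be defined from the CLI. The following sketch
creates one Fibre Channel host and one iSCSI host; the host names, WWPN, and IQN are
example values only:

   mkhost -name FC_Host01 -fcwwpn 2100000E1E123456                        # FC host, identified by its WWPN
   mkhost -name iSCSI_Host01 -iscsiname iqn.1994-05.com.example:server01  # iSCSI host, identified by its IQN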



1.5.2 Node canister
A node canister provides host interfaces, management interfaces, and SAS interfaces to the
control enclosure. A node canister has the cache memory, the internal storage to store
software and logs, and the processing power to run the IBM Storwize V5000 virtualizing and
management software. A clustered system consists of one or two node pairs. Each node pair
forms one I/O group. I/O groups are explained next in 1.5.3, “I/O groups” on page 11.

One of the nodes within the system is known as the configuration node that manages
configuration activity for the clustered system. If this node fails, the system nominates the
other node to become the configuration node.

1.5.3 I/O groups


Within IBM Storwize V5000, there are one or two pairs of node canisters, which are known as
I/O groups. The IBM Storwize V5000 therefore supports four node canisters in a clustered
system, which provides two I/O groups.

When a host server performs I/O to one of its volumes, all the I/O for that volume is directed to
the I/O group where the volume is defined. Under normal conditions, these I/Os are also
always processed by the same node within that I/O group.

Both nodes of the I/O group act as preferred nodes for their own specific subset of the total
number of volumes that the I/O group presents to the host servers (a maximum of 2048
volumes per I/O group). However, both nodes also act as a failover node for the partner node
within the I/O group. Therefore, a node takes over the I/O workload from its partner node
(if required) without affecting the server’s application.

In an IBM Storwize V5000 environment (which uses active-active architecture), the I/O
handling for a volume can be managed by both nodes of the I/O group. The I/O groups must
therefore be connected to the SAN such that all hosts can access all nodes. The hosts that
are connected through Fibre Channel connectors must use multipath device drivers to handle
this capability.

Up to 1024 host server objects can be defined to one I/O group or 2048 in a two I/O group
system. More information about I/O groups can be found in Chapter 5, “Volume configuration”
on page 201.

Important: The active/active architecture provides availability to process I/Os for both
controller nodes and allows the application to continue running smoothly, even if the server
has only one access route or path to the storage controller. This type of architecture
eliminates the path/LUN thrashing that is typical of an active/passive architecture.

1.5.4 Clustered system


A clustered system consists of one or two pairs of node canisters, each pair forming an I/O
group. All configuration, monitoring, and service tasks are performed at the system level. The
configuration settings are replicated across all node canisters in the clustered system. To
facilitate these tasks, one or two management IP addresses are set for the clustered system.
By using this configuration, you can manage the clustered system as a single entity.



There is a process to back up the system configuration data on to disk so that the clustered
system can be restored in the event of a disaster. This method does not back up application
data; only IBM Storwize V5000 system configuration information is backed up.

System configuration backup: After the system configuration is backed up, save the
backup data onto your local hard disk (or at least outside of the SAN). If you are unable
to access the IBM Storwize V5000, you do not have access to the backup data if it is on the
SAN. Perform this configuration backup after each configuration change to be safe.

The system can be configured by using the IBM Storwize V5000 management software
(GUI), CLI, or the USB key.
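As a sketch of the configuration backup process that is described above, the backup can be
triggered from the CLI and the resulting files copied off the system; the cluster IP address
and target directory are example values only:

   svcconfig backup    # creates svc.config.backup.xml (plus .sh and .log files) on the configuration node

   Then, from a management workstation:
   scp [email protected]:/tmp/svc.config.backup.xml /local/backups/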

1.5.5 RAID
The IBM Storwize V5000 contains a number of internal drives, but these drives cannot be
directly added to storage pools. The drives must be included in a RAID array to provide
protection against the failure of individual drives.

These drives are referred to as members of the array. Each array has a RAID level. RAID
levels provide different degrees of redundancy and performance and have different
restrictions regarding the number of members in the array.

IBM Storwize V5000 supports hot spare drives. When an array member drive fails, the system
automatically replaces the failed member with a hot spare drive and rebuilds the array to
restore its redundancy. Candidate and spare drives can be manually exchanged with array
members.

Each array has a set of goals that describe the required location and performance of each
array. A sequence of drive failures and hot spare takeovers can leave an array unbalanced,
that is, with members that do not match these goals. The system automatically rebalances
such arrays when the appropriate drives are available.

The following RAID levels are available:

• RAID 0 (striping, no redundancy)
  RAID 0 arrays stripe data across the drives. The system supports RAID 0 arrays with one
  member, which is similar to a traditional JBOD attach. RAID 0 arrays have no redundancy,
  so they do not support hot spare takeover or immediate exchange. A RAID 0 array can be
  formed by one to eight drives.
• RAID 1 (mirroring between two drives, which is implemented as RAID 10 with two drives)
  RAID 1 arrays stripe data over mirrored pairs of drives. A RAID 1 array mirrored pair is
  rebuilt independently. A RAID 1 array can be formed by two drives only.
• RAID 5 (striping, can survive one drive fault, with parity)
  RAID 5 arrays stripe data over the member drives with one parity strip on every stripe.
  RAID 5 arrays have single redundancy. The parity algorithm means that an array can
  tolerate no more than one member drive failure. A RAID 5 array can be formed by 3 - 16
  drives.
• RAID 6 (striping, can survive two drive faults, with parity)
  RAID 6 arrays stripe data over the member drives with two parity strips (which are known
  as the P-parity and the Q-parity) on every stripe. The two parity strips are calculated by
  using different algorithms, which gives the array double redundancy. A RAID 6 array can be
  formed by 5 - 16 drives.
• RAID 10 (RAID 0 on top of RAID 1)
  RAID 10 arrays have single redundancy. Although they can tolerate one failure from every
  mirrored pair, they cannot tolerate two-disk failures. One member out of every pair can be
  rebuilding or missing at the same time. A RAID 10 array can be formed by 2 - 16 drives.
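For reference, an array MDisk is created from candidate drives with a single CLI command;
this sketch assumes that a storage pool named Pool0 already exists, and the drive IDs are
example values only:

   mkarray -level raid5 -drive 0:1:2:3:4 Pool0    # build a RAID 5 array MDisk from drives 0 - 4 and add it to Pool0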

1.5.6 Managed disks


An MDisk refers to the unit of storage that IBM Storwize V5000 virtualizes. This unit can be a
logical volume on an external storage array that is presented to the IBM Storwize V5000 or a
RAID array that consists of internal drives. The IBM Storwize V5000 can then allocate these
MDisks into storage pools.

An MDisk is invisible to a host system on the storage area network because it is internal to the
IBM Storwize V5000 system.

An MDisk features the following modes:

• Array
  Array mode MDisks are constructed from internal drives by using the RAID functionality.
  Array MDisks are always associated with storage pools.
• Unmanaged
  LUNs that are presented by external storage systems to IBM Storwize V5000 are
  discovered as unmanaged MDisks. An unmanaged MDisk is not a member of any storage
  pool, which means that it is not being used by the IBM Storwize V5000 storage system.
• Managed
  Managed MDisks are LUNs that are presented by external storage systems to an IBM
  Storwize V5000, are assigned to a storage pool, and provide extents so that volumes can
  use them. Any data that might be on these LUNs when they are added is lost.
• Image
  Image MDisks are LUNs that are presented by external storage systems to an IBM
  Storwize V5000 and assigned directly to a volume with a one-to-one mapping of extents
  between the MDisk and the volume. For more information, see Chapter 6, “Storage
  migration” on page 249.

1.5.7 Quorum disks


A quorum disk is an MDisk that contains a reserved area for use exclusively by the system. In
the IBM Storwize V5000, internal drives can be considered as quorum candidates. The
clustered system uses quorum disks to break a tie when exactly half the nodes in the system
remain after a SAN failure.

The clustered system automatically forms the quorum disk by taking a small amount of space
from an MDisk. It allocates space from up to three different MDisks for redundancy, although
only one quorum disk is active.

To avoid the possibility of losing all of the quorum disks because of a failure of a single
storage system, if the environment has multiple storage systems, you should allocate the
quorum disks on different storage systems. It is possible to manage the quorum disks by
using the CLI.
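For illustration, the quorum disk assignments can be listed and moved from the CLI; the
MDisk ID and quorum index are example values only:

   lsquorum              # show which MDisks or drives hold the three quorum areas
   chquorum -mdisk 5 2   # move quorum index 2 onto MDisk 5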



1.5.8 Storage pools
A storage pool is a collection of MDisks (up to 128) that are grouped to provide capacity for
volumes. All MDisks in the pool are split into extents of the same size. Volumes are then
allocated out of the storage pool and are mapped to a host system.

MDisks can be added to a storage pool at any time to increase the capacity of the pool.
MDisks can belong in only one storage pool. For more information, see Chapter 7, “Storage
pools” on page 309.

Each MDisk in the storage pool is divided into a number of extents. The size of the extent is
selected by the administrator when the storage pool is created and cannot be changed later.
The size of the extent ranges from 16 MB to 8 GB.

Default extent size: The GUI of IBM Storwize V5000 has a default extent size value of
1 GB when you define a new storage pool.

The extent size directly affects the maximum volume size and storage capacity of the
clustered system.

A system can manage 2^22 (4,194,304) extents. For example, with a 16 MB extent size, the
system can manage up to 16 MB x 4,194,304 = 64 TB of storage.

The effect of extent size on the maximum volume and cluster size is shown in Table 1-4.

Table 1-4 Maximum volume and cluster capacity by extent size

Extent size (MB)   Maximum volume capacity for    Maximum storage capacity
                   normal volumes                 of cluster
16                 2048 GB (2 TB)                 64 TB
32                 4096 GB (4 TB)                 128 TB
64                 8192 GB (8 TB)                 256 TB
128                16384 GB (16 TB)               512 TB
256                32768 GB (32 TB)               1 PB
512                65536 GB (64 TB)               2 PB
1024               131072 GB (128 TB)             4 PB
2048               262144 GB (256 TB)             8 PB
4096               262144 GB (256 TB)             16 PB
8192               262144 GB (256 TB)             32 PB

Use the same extent size for all storage pools in a clustered system. This is a prerequisite if
you want to migrate a volume between two storage pools. If the storage pool extent sizes are
not the same, you must use volume mirroring to copy volumes between storage pools, as
described in Chapter 7, “Storage pools” on page 309.
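As a brief sketch, the extent size is specified (in MB) when the pool is created through the
CLI; the pool name is an example value only, and 1024 corresponds to the 1 GB default that
is noted above:

   mkmdiskgrp -name Pool0 -ext 1024    # create a storage pool with a 1 GB (1024 MB) extent size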

A storage pool can have a threshold warning set that automatically issues a warning alert
when the used capacity of the storage pool exceeds the set limit.



Child storage pools
A child pool is a user-configurable object that is similar to a storage pool; however, a user can
only create a child pool through the CLI. A child pool resides in, and gets its capacity
exclusively from, one parent storage pool. It shares the properties of the parent pool and
provides most of the functions that normal storage pools have. Capacity is specified at
creation and can grow or shrink non-disruptively within the bounds of the parent pool. More
information about child pools can be found in 7.4, “Working with child pools” on page 353.
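Because child pool creation is CLI-only, a minimal sketch follows; the pool names and
capacity are example values only:

   mkmdiskgrp -name Child0 -parentmdiskgrp Pool0 -size 500 -unit gb   # carve a 500 GB child pool out of Pool0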

Single-tiered storage pool


MDisks that are used in a single-tiered storage pool should have the following characteristics
to prevent performance and other problems:
• They should have the same hardware characteristics, for example, the same RAID type,
  RAID array size, disk type, and disk revolutions per minute (RPMs).
• The disk subsystems that provide the MDisks must have similar characteristics, for
  example, maximum input/output operations per second (IOPS), response time, cache,
  and throughput.
• Use MDisks of the same size, and ensure that the MDisks provide the same number of
  extents. If this configuration is not feasible, you must check the distribution of the volumes’
  extents in that storage pool.

Multi-tiered storage pool


A multi-tiered storage pool has a mix of MDisks with more than one type of disk, for example,
a storage pool that contains a mix of generic_hdd and generic_ssd MDisks.

A multi-tiered storage pool therefore contains MDisks with different characteristics, unlike
the single-tiered storage pool. MDisks with similar characteristics then form the tiers within
the pool. However, each tier should have MDisks of the same size that provide the same
number of extents.

A multi-tiered storage pool is used to enable automatic migration of extents between disk tiers
using the IBM Storwize V5000 IBM Easy Tier® function, as described in Chapter 9, “Easy
Tier” on page 419. This functionality can help improve performance of host volumes on the
IBM Storwize V5000.

1.5.9 Volumes
A volume is a logical disk that is presented to a host system by the clustered system. In our
virtualized environment, the host system has a volume that is mapped to it by IBM Storwize
V5000. IBM Storwize V5000 translates this volume into a number of extents, which are
allocated across MDisks. The advantage with storage virtualization is that the host is
decoupled from the underlying storage, so the virtualization appliance can move around the
extents without impacting the host system.

The host system cannot directly access the underlying MDisks in the same manner as it can
access RAID arrays in a traditional storage environment.



The following types of volumes are available:

• Striped
  A striped volume is allocated one extent in turn from each MDisk in the storage pool. This
  process continues until the space that is required for the volume is satisfied. It also is
  possible to supply a list of MDisks to use.
  Figure 1-9 shows how a striped volume is allocated, assuming that 10 extents are required.

Figure 1-9 Striped volume

• Sequential
  A sequential volume is a volume in which the extents are allocated one after the other,
  from one MDisk to the next MDisk, as shown in Figure 1-10.

Figure 1-10 Sequential volume

• Image mode
  Image mode volumes are special volumes that have a direct relationship with one MDisk.
  They are used to migrate existing data into and out of the clustered system to or from
  external FC SAN-attached storage.
  When the image mode volume is created, a direct mapping is made between extents that
  are on the MDisk and extents that are on the volume. The logical block address (LBA)
  x on the MDisk is the same as LBA x on the volume, which ensures that the data on
  the MDisk is preserved as it is brought into the clustered system, as shown in Figure 1-11.

Figure 1-11 Image mode volume

Some virtualization functions are not available for image mode volumes, so it is often useful to
migrate the volume into a new storage pool. After it is migrated, the MDisk becomes a
managed MDisk.

If you want to migrate data from an existing storage subsystem, use the Storage Migration
wizard, which guides you through the process.

For more information, see Chapter 6, “Storage migration” on page 249.

If you add an MDisk that contains data to a storage pool, any data on the MDisk is lost. If you
are presenting externally virtualized LUNs that contain data to an IBM Storwize V5000, import
them as image mode volumes to ensure data integrity, or use the migration wizard.
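As an illustrative sketch, an unmanaged MDisk that holds existing data can be imported as
an image mode volume from the CLI; the pool, MDisk, and volume names are example
values only:

   mkvdisk -mdiskgrp MigrationPool -iogrp 0 -mdisk mdisk10 -vtype image -name legacy_vol   # preserves the data on mdisk10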



1.5.10 iSCSI
iSCSI is an alternative method of attaching hosts to the IBM Storwize V5000. The iSCSI
function is a software function that is provided by the IBM Storwize V5000 code, not
hardware.

In the simplest terms, iSCSI allows the transport of SCSI commands and data over an
Internet Protocol network that is based on IP routers and Ethernet switches. iSCSI is a
block-level protocol that encapsulates SCSI commands into TCP/IP packets and uses an
existing IP network instead of requiring FC HBAs and a SAN fabric infrastructure.

Concepts of names and addresses are carefully separated in iSCSI.

An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI
node has one iSCSI name, which stays constant for the life of the node. The terms initiator
name and target name also refer to an iSCSI name.

An iSCSI address specifies the iSCSI name of an iSCSI node and a location of that node. The
address consists of a host name or IP address, a TCP port number (for the target), and the
iSCSI name of the node. An iSCSI node can have any number of addresses, which can
change at any time, particularly if they are assigned by way of Dynamic Host Configuration
Protocol (DHCP). An IBM Storwize V5000 node represents an iSCSI node and provides
statically allocated IP addresses.

Each iSCSI node, that is, an initiator or target, has a unique IQN, which can have a size of up
to 255 bytes. The IQN is formed according to the rules that were adopted for Internet nodes.
The IQNs can be abbreviated by using a descriptive name, which is known as an alias. An
alias can be assigned to an initiator or a target.

For more information about configuring iSCSI, see Chapter 4, “Host configuration” on
page 155.

1.5.11 SAS
The SAS standard is an alternative method of attaching hosts to the IBM Storwize V5000.
The IBM Storwize V5000 supports direct SAS host attachment, which provides an
easy-to-use and affordable way to meet storage needs. Each SAS port device has a
worldwide unique 64-bit SAS address and operates at 6 Gbps.

1.5.12 Fibre Channel


Fibre Channel (FC) is the traditional method that is used for data center storage connectivity.
The IBM Storwize V5000 supports FC connectivity at speeds of 2, 4, and 8 Gbps. The Fibre
Channel Protocol is used to encapsulate SCSI commands over the FC network. Each device
on the network has a unique 64-bit worldwide port name (WWPN). The IBM Storwize V5000
supports FC connections directly to a host server or to external FC switched fabrics.



1.6 IBM Storwize V5000 features
In this section, we describe the features of the IBM Storwize V5000.

1.6.1 Mirrored volumes


IBM Storwize V5000 provides a function that is called storage volume mirroring, which
enables a volume to have two physical copies. Each volume copy can belong to a different
storage pool, be generic or thin-provisioned, and be on different physical storage systems,
which provides a high-availability solution.

When a host system issues a write to a mirrored volume, IBM Storwize V5000 writes the data
to both copies. When a host system issues a read to a mirrored volume, IBM Storwize V5000
requests it from the primary copy. If one of the mirrored volume copies is temporarily
unavailable, the IBM Storwize V5000 automatically uses the alternative copy without any
outage for the host system. When the mirrored volume copy is repaired, IBM Storwize V5000
resynchronizes the data.

A mirrored volume can be converted into a non-mirrored volume by deleting one copy or by
splitting away one copy to create a non-mirrored volume.

The mirrored volume copy can be any type: image, striped, sequential, and thin-provisioned
or not. The two copies can be different volume types.

The use of mirrored volumes can also assist with migrating volumes between storage pools
that have different extent sizes. Mirrored volumes can also provide a mechanism to migrate
fully allocated volumes to thin-provisioned volumes without any host outages.

The Volume Mirroring feature is included as part of the base software and no license is
required.
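As a sketch of how a second copy is added to an existing volume, the following CLI example
mirrors a volume into a second storage pool; the pool and volume names are example values
only:

   addvdiskcopy -mdiskgrp Pool1 vol0    # add a second copy of vol0 in Pool1; the copies synchronize in the background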

1.6.2 Thin provisioning


Volumes can be configured to be thin-provisioned or fully allocated. In terms of read/write
I/O, a thin-provisioned volume behaves exactly as though it were a fully allocated volume.
When a volume is created, however, the user specifies two capacities: the real capacity of
the volume and its virtual capacity.

The real capacity determines the quantity of MDisk extents that are allocated for the volume.
The virtual capacity is the capacity of the volume that is reported to IBM Storwize V5000 and
to the host servers.

The real capacity is used to store the user data and the metadata for the thin-provisioned
volume. The real capacity can be specified as an absolute value or a percentage of the virtual
capacity.

The thin provisioning feature can be used on its own to create over-allocated volumes, or it
can be used with FlashCopy. Thin-provisioned volumes can be used with the mirrored volume
feature as well.



A thin-provisioned volume can be configured to autoexpand, which causes the IBM Storwize
V5000 to automatically expand the real capacity of a thin-provisioned volume as it gets used.
This feature prevents the volume from going offline. Autoexpand attempts to maintain a fixed
amount of unused real capacity on the volume, which is known as the contingency capacity.
When the thin-provisioned volume is created, the IBM Storwize V5000 initially allocates only
2% of the virtual capacity in real physical storage. The contingency capacity and autoexpand
features seek to preserve this 2% of free space as the volume grows.

If the user modifies the real capacity, the contingency capacity is reset to be the difference
between the used capacity and real capacity. In this way the autoexpand feature does not
cause the real capacity to grow much beyond the virtual capacity.

A volume that is created with a zero contingency capacity goes offline when it must expand.
A volume with a non-zero contingency capacity stays online until it is used up.

To support the autoexpansion of thin-provisioned volumes, the volumes themselves have a
configurable warning capacity. When the used capacity of the volume exceeds the warning
capacity, a warning is logged. For example, if a warning of 80% is specified, the warning is
logged when 80% of the virtual capacity is used and only 20% remains free. This is similar to
the capacity warning that is available on storage pools.
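To make the relationship between these parameters concrete, the following CLI sketch
creates a thin-provisioned volume; the pool and volume names are example values, and the
2% and 80% figures mirror the defaults and the example above:

   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -warning 80% -name thin_vol0
   # 100 GB virtual capacity, 2 GB real capacity initially, autoexpand on, warning logged at 80% used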

A thin-provisioned volume can be converted to a fully allocated volume by using volume
mirroring (and vice versa).

The Thin Provisioning feature is included as part of the base software and no license is
required.

1.6.3 Easy Tier


IBM Easy Tier provides a mechanism to seamlessly migrate extents to the most appropriate
tier within the IBM Storwize V5000 solution. This migration can be to different tiers of internal
drives within the IBM Storwize V5000 or to external storage systems that are virtualized by
the IBM Storwize V5000, for example, an IBM FlashSystem 840.

The Easy Tier function can be turned on or off at the storage pool and volume level.

It is possible to demonstrate the potential benefit of Easy Tier in your environment before
installing SSDs by using the IBM Storage Tier Advisor Tool.

For more information about Easy Tier, see Chapter 9, “Easy Tier” on page 419.

The IBM Easy Tier feature is licensed per enclosure.

1.6.4 Storage Migration


By using the IBM Storwize V5000 Storage Migration feature, you can easily move data from
other legacy Fibre Channel-attached external storage to the internal capacity of the IBM
Storwize V5000. Migrating data from other storage to the IBM Storwize V5000 storage
system enables the benefits of the IBM Storwize V5000 functionality to be realized, such as
the easy-to-use GUI, internal virtualization, thin provisioning, and Copy Services.

The Storage Migration feature is included as part of the base software and no license is
required.



1.6.5 FlashCopy
The FlashCopy feature copies a source volume on to a target volume. The original contents of
the target volume are lost. After the copy operation starts, the target volume has the contents
of the source volume as it existed at a single point in time. Although the copy operation
completes in the background, the resulting data at the target appears as though the copy was
made instantaneously.

FlashCopy is sometimes described as an instance of a time-zero (T0) copy or point-in-time
(PiT) copy technology.

FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the
management operations to be coordinated so that a common single point-in-time is chosen
for copying target volumes from their respective source volumes.

IBM Storwize V5000 also permits multiple target volumes to be FlashCopied from the same
source volume. This capability can be used to create images from separate points in time for
the source volume, and to create multiple images from a source volume at a common point in
time. Source and target volumes can be thin-provisioned volumes.

Reverse FlashCopy enables target volumes to become restore points for the source volume
without breaking the FlashCopy relationship and without waiting for the original copy
operation to complete. IBM Storwize V5000 supports multiple targets and thus multiple
rollback points.

The FlashCopy feature is licensed per enclosure.

For more information about FlashCopy copy services, see Chapter 10, “Copy services” on
page 451.
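As a brief sketch, a FlashCopy mapping is defined between an existing source volume and an
existing target volume of the same size, and then started; the volume names and copy rate
are example values, and fcmap0 is the default name that is assigned to the first mapping:

   mkfcmap -source vol0 -target vol0_snap -copyrate 50   # define the point-in-time mapping
   startfcmap -prep fcmap0                               # prepare (flush cache) and trigger the copy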

1.6.6 Remote Copy


Remote Copy can be implemented in one of two modes, synchronous or asynchronous.

With the IBM Storwize V5000, Metro Mirror and Global Mirror are the IBM branded terms for
the functions that are synchronous Remote Copy and asynchronous Remote Copy.

By using the Metro Mirror and Global Mirror Copy Services features, you can set up a
relationship between two volumes so that updates that are made by an application to one
volume are mirrored on the other volume. The volumes can be in the same system or on two
different systems.

For both Metro Mirror and Global Mirror copy types, one volume is designated as the primary
and the other volume is designated as the secondary. Host applications write data to the
primary volume and updates to the primary volume are copied to the secondary volume.
Normally, host applications do not perform I/O operations to the secondary volume.

The Metro Mirror feature provides a synchronous copy process. When a host writes to the
primary volume, it does not receive confirmation of I/O completion until the write operation
completes for the copy on the primary and secondary volumes. This ensures that the
secondary volume is always up-to-date with the primary volume if a failover operation must be
performed.



The Global Mirror feature provides an asynchronous copy process. When a host writes to the
primary volume, confirmation of I/O completion is received before the write operation
completes for the copy on the secondary volume. If a failover operation is performed, the
application must recover and apply any updates that were not committed to the secondary
volume. If I/O operations on the primary volume are paused for a brief time, the secondary
volume can become an exact match of the primary volume.

Global Mirror can operate with or without cycling. When it is operating without cycling, write
operations are applied to the secondary volume as soon as possible after they are applied to
the primary volume. The secondary volume is less than one second behind the primary
volume, which minimizes the amount of data that must be recovered in the event of a failover.
However, this requires that a high-bandwidth link is provisioned between the two sites.

When Global Mirror operates with cycling mode, changes are tracked and where needed
copied to intermediate change volumes. Changes are transmitted to the secondary site
periodically. The secondary volumes are much further behind the primary volume, and more
data must be recovered in the event of a failover. Because the data transfer can be smoothed
over a longer time period, however, lower bandwidth is required to provide an effective
solution.

For more information about the IBM Storwize V5000 Copy Services, see Chapter 10, “Copy
services” on page 451.

The IBM Remote Copy feature is licensed per enclosure.
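For illustration, after a partnership with a remote system exists, a Global Mirror relationship
can be defined and started from the CLI; the volume and system names are example values,
and rcrel0 is the default name that is assigned to the first relationship:

   mkrcrelationship -master vol0 -aux vol0_dr -cluster remote_V5000 -global   # omit -global for Metro Mirror
   startrcrelationship rcrel0                                                 # begin the initial synchronization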

Copy Services configuration limits
For the most up-to-date list of these limits, see the following website:
https://ibm.biz/BdFNu6

1.6.7 External virtualization


By using this feature, you can consolidate FC SAN-attached disk controllers from various
vendors into pools of storage. In this way, the storage administrator can manage and
provision storage to applications from a single user interface and use a common set of
advanced functions across all the storage systems under the control of the IBM Storwize
V5000.

The External Virtualization feature is licensed per disk enclosure.

1.7 Problem management and support


In this section, we introduce problem management and support topics.

1.7.1 IBM Assist On-site and remote service


The IBM Assist On-site tool is a remote desktop-sharing solution that is offered through the
IBM website. With it, the IBM service representative can remotely view your system to
troubleshoot a problem.

You can maintain a chat session with the IBM service representative so that you can monitor
this activity and understand how to fix the problem yourself or allow the representative to fix it
for you.



To use the IBM Assist On-site tool, the management PC that is used to manage the IBM
Storwize V5000 must have access to the Internet. For more information about this tool, see
this website:
http://www.ibm.com/support/assistonsite/

When you access the website, you sign in and enter a code that the IBM service
representative provides to you. This code is unique to each IBM Assist On-site session. A
plug-in is downloaded on to your PC to connect you and your IBM service representative to
the remote service session. The IBM Assist On-site contains several layers of security to
protect your applications and your computers.

You also can use security features to restrict access by the IBM service representative.

Your IBM service representative can provide you with more information about the use of the
tool, if required.

1.7.2 Event notifications


IBM Storwize V5000 can use Simple Network Management Protocol (SNMP) traps, syslog
messages, and email to notify you and the IBM Support Center when significant events are
detected. Any combination of these notification methods can be used simultaneously.

You can configure IBM Storwize V5000 to send different types of notification to specific
recipients and choose the alerts that are important to you. When configuring Call Home to the
IBM Support Center, all events are sent via email only.

1.7.3 SNMP traps


SNMP is a standard protocol for managing networks and exchanging messages. IBM
Storwize V5000 can send SNMP messages that notify personnel about an event. You can use
an SNMP manager to view the SNMP messages that IBM Storwize V5000 sends. You can
use the management GUI or the IBM Storwize V5000 CLI to configure and modify your
SNMP settings.

You can use the Management Information Base (MIB) file for SNMP to configure a network
management program to receive SNMP messages that are sent by the IBM Storwize V5000.
This file can be used with SNMP messages from all versions of IBM Storwize V5000
software.
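As a sketch, an SNMP notification server can be added from the CLI; the IP address and
community string are example values only:

   mksnmpserver -ip 192.168.1.20 -community public -error on -warning on -info off   # send error and warning events as traps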

1.7.4 Syslog messages


The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be IPv4 or IPv6. IBM Storwize V5000 can
send syslog messages that notify personnel about an event. IBM Storwize V5000 can
transmit syslog messages in expanded or concise format. You can use a syslog manager to
view the syslog messages that IBM Storwize V5000 sends. IBM Storwize V5000 uses the
User Datagram Protocol (UDP) to transmit the syslog message. You can use the
management GUI or the CLI to configure and modify your syslog settings.
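A comparable sketch for syslog follows, with an example IP address:

   mksyslogserver -ip 192.168.1.21 -error on -warning on -info off   # forward error and warning events to the syslog receiver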



1.7.5 Call Home email
The Call Home feature transmits operational and error-related data to you and IBM through a
Simple Mail Transfer Protocol (SMTP) server connection in the form of an event notification
email. When configured, this function alerts IBM service personnel about hardware failures
and potentially serious configuration or environmental issues. You can use the Call Home
function if you have a maintenance contract with IBM or if the IBM Storwize V5000 is within
the warranty period.

To send email, you must configure at least one SMTP server. You can specify as many as five
other SMTP servers for backup purposes. The SMTP server must accept the relaying of email
from the IBM Storwize V5000 clustered system IP address. You can then use the
management GUI or the CLI to configure the email settings, including contact information and
email recipients. Set the reply address to a valid email address. Send a test email to check
that all connections and infrastructure are set up correctly. You can disable the Call Home
function at any time by using the management GUI or the CLI.
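A minimal CLI sketch of this setup follows; the server address, contact details, and recipient
address are example values only, and the IBM Call Home destination address should be
taken from the configuration wizard or from IBM support:

   mkemailserver -ip 192.168.1.10                                       # define the SMTP server
   chemail -reply [email protected] -contact "Jane Doe" -primary 5551234 -location "DC1, Rack 12"
   mkemailuser -address [email protected] -usertype support -err on -inventory on   # Call Home recipient (placeholder address)
   testemail -all                                                       # send a test notification to all recipients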

1.8 More information resources


This section describes resources that are available for more information.

1.8.1 Useful IBM Storwize V5000 websites


For more information about Storwize V5000, see the following websites:
• The IBM Storwize V5000 home page:
  http://www.ibm.com/storage/support/storwize/v5000
• IBM Storwize V5000 Online Information Center:
  http://pic.dhe.ibm.com/infocenter/storwize/v5000_ic/index.jsp

The Online Information Center also includes a Learning and Tutorial section where videos
describing the use and implementation of the IBM Storwize V5000 can be found.



Chapter 2. Initial configuration


This chapter provides a description of the initial configuration steps for the IBM Storwize
V5000.

This chapter includes the following topics:

• Hardware installation planning
• SAN configuration planning
• FC Direct-attach planning
• SAS direct-attach planning
• LAN configuration planning
• Host configuration planning
• Miscellaneous configuration planning
• System management
• First-time setup
• Initial configuration
• Adding enclosures after initial configuration
• Configuring Call Home, email alert, and inventory
• Service Assistant tool



2.1 Hardware installation planning
After verifying that you have all of the hardware components that you purchased, it is
important to carry out proper planning before the actual physical installation. The following
checklist of requirements can be used to plan your installation:
• Install the hardware as described in IBM Storwize V5000 Quick Installation Guide,
  GI13-2861-02, Chapter 2. The document is available at this website:
  https://ibm.biz/BdFNh8
• An appropriate 19-inch rack should be available; depending on the number of enclosures
  to install, more than one might be required. Each enclosure measures 2U, and a single
  control enclosure supports up to 19 expansion enclosures: 10 expansion enclosures on
  SAS chain 1 and 9 expansion enclosures on SAS chain 2.
• There should be redundant power outlets in the rack for each of the two power cords
  required for each enclosure to be installed. Several power outlets are required, depending
  on the number of enclosures to be installed. The power cords conform to the IEC320
  C13/C14 standards.
• A minimum of four Fibre Channel ports attached to redundant fabrics are required. For
  dual I/O group systems, a minimum of eight Fibre Channel ports are required.

Fibre Channel ports: Fibre Channel (FC) ports are required only if you are using FC
hosts or clustered systems arranged as two I/O groups. You can use the IBM Storwize
V5000 with Ethernet-only cabling for iSCSI hosts or use serial-attached SCSI (SAS)
cabling for direct-attach hosts.

• For systems arranged as two I/O groups, up to eight hosts can be directly connected by
  using SAS ports 1 and 2 on each node canister, with SFF-8644 mini SAS HD cabling.
• You should have a minimum of two Ethernet ports on the LAN, with four preferred for more
  configuration access redundancy or iSCSI host access.
• You should have a minimum of two Ethernet cable drops, with four preferred for more
  configuration access redundancy or iSCSI host access. If you have two I/O groups, you
  must have a minimum of four Ethernet cable drops. Ethernet port 1 on each node
  canister must be connected to the LAN, with port 2 as optional.

Ports: Port 1 on each node canister must be connected to the same physical LAN or be
configured in the same VLAN and be on the same subnet or set of subnets.

• Verify that the default IP addresses that are configured on Ethernet port 1 on each of the
  node canisters (192.168.70.121 on node 1 and 192.168.70.122 on node 2) do not
  conflict with existing IP addresses on the LAN. The default mask that is used with these IP
  addresses is 255.255.255.0, and the default gateway address that is used is 192.168.70.1.
• You should have a minimum of three IPv4 or IPv6 IP addresses for systems arranged as
  one I/O group and a minimum of five if you have two I/O groups. One is for the clustered
  system and is what the administrator uses for management, and one is needed for each
  node canister for service access as needed.

IP addresses: An additional IP address should be used for backup configuration access.
This other IP address allows a second system IP address to be configured on port 2 of
either node canister, which the storage administrator can also use for management of the
IBM Storwize V5000 system.

• A minimum of one and up to eight IPv4 or IPv6 addresses are needed if iSCSI-attached
  hosts access volumes from the IBM Storwize V5000.
• At least two 0.6-meter, 1.5-meter, or 3-meter mini SAS cables are required per expansion
  enclosure. The length of the cables depends on the physical rack location of the
  expansion enclosure relative to the control enclosure or other expansion enclosures.
  Locate the control enclosure so that up to 19 enclosures can be connected, as shown in
  Figure 2-1. The IBM Storwize V5000 supports two external SAS chains using SAS ports 3
  and 4 on the control enclosure node canisters.

Figure 2-1 Connecting SAS expansion cables example



Figure 2-2 shows how to cable an IBM Storwize V5000 arranged as two I/O groups.

Figure 2-2 IBM Storwize V5000 arranged in two I/O groups



2.1.1 Procedure to install the SAS cables
Using the supplied SAS cables, connect the control enclosure to the first expansion enclosure
at the rack position below, as shown in Figure 2-3.

To install the SAS cables, follow these instructions:


1. Connect SAS port 3 of the left node canister in the control enclosure to SAS port 1 of the
left expansion canister in the first enclosure (below the control enclosure, as shown in
Figure 2-3).
2. Connect SAS port 3 of the right node canister in the control enclosure to SAS port 1 of the
right expansion canister in the first enclosure (below the control enclosure, as shown in
Figure 2-3).
3. Connect SAS port 4 of the left node canister in the control enclosure to SAS port 1 of the
left expansion canister in the second expansion enclosure (above the control enclosure,
as shown in Figure 2-3).
4. Connect SAS port 4 of the right node canister in the control enclosure to SAS port 1 of the
right expansion canister in the second expansion enclosure (above the control enclosure,
as shown in Figure 2-3).

Figure 2-3 Connecting SAS cabling



Continue to add expansion enclosures alternately on the two different SAS chains that
originate at ports 3 and 4 on the control enclosure node canisters. No expansion enclosure
should be connected to both SAS chains.

Disk drives: The disk drives that are included with the control enclosure (model 2077-12C
or 2077-24C) are part of a single SAS chain. The expansion enclosures should be
connected across the two SAS chains as shown in Figure 2-3 on page 29 so that they can
use the full bandwidth of the system.

2.2 SAN configuration planning


Ensure that you use the proper Fibre Channel cables to connect the IBM Storwize V5000 to
the Fibre Channel SAN.

The advised SAN configuration is composed of a minimum of two fabrics that encompass all
host ports and any ports on external storage systems that are to be virtualized by the IBM
Storwize V5000. The IBM Storwize V5000 ports must have the same number of cables
connected and they must be evenly split between the two fabrics to provide redundancy if one
of the fabrics goes offline (planned or unplanned).

Zoning must be implemented after the IBM Storwize V5000, hosts, and optional external
storage systems are connected to the SAN fabrics.

To enable the node canisters to communicate with each other in band, create a zone with only
the IBM Storwize V5000 WWPNs (two from each node canister) on each of the two fabrics.

If an external storage system is to be virtualized, create a zone in each fabric with the IBM
Storwize V5000 WWPNs (two from each node canister) and up to a maximum of eight
WWPNs from the external storage system.

Assuming that every host has a Fibre Channel connection to each fabric, create a zone with
the host WWPN and one WWPN from each node canister in the IBM Storwize V5000 system
in each fabric. The critical point is that there should only ever be one initiator (host HBA) in
any zone. For load balancing between the node ports on the IBM Storwize V5000, alternate
the host Fibre Channel ports between the ports of the IBM Storwize V5000.

There should be a maximum of eight paths through the SAN from each host to the IBM
Storwize V5000. Hosts where this number is exceeded are not supported. The restriction is
there to limit the number of paths that the multi-pathing driver must resolve. A host with only
two HBAs should not exceed this limit with proper zoning in a dual fabric SAN.

Maximum ports or WWPNs: IBM Storwize V5000 supports a maximum of 16 ports or


WWPNs from a virtualized external storage system.



Figure 2-4 shows how to cable devices to the SAN. Optionally, ports 3 and 4 can be
connected to SAN Fabrics to provide additional redundancy and throughput. Refer to this
example as the zoning is described.

Figure 2-4 SAN cabling and zoning diagram

Create a host/IBM Storwize V5000 zone for each server that volumes are mapped to and
from the clustered system, as shown in the following examples in Figure 2-4:
• Zone Host A port 1 (HBA 1) with all node canister ports 1
• Zone Host A port 2 (HBA 2) with all node canister ports 2
• Zone Host B port 1 (HBA 1) with all node canister ports 3
• Zone Host B port 2 (HBA 2) with all node canister ports 4

Similar zones should be created for all other hosts with volumes on the IBM Storwize V5000
I/O groups.

Verify interoperability with which the IBM Storwize V5000 connects to SAN switches or
directors by following the requirements that are provided at this website:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss

Ensure that the switches or directors are at the firmware levels that are supported by the
IBM Storwize V5000.

Important: IBM Storwize V5000 port login maximum that is listed in the restriction
document must not be exceeded. The document is available at this website:
http://www.ibm.com/storage/support/Storwize/V5000

Connectivity issues: If you have any connectivity issues between IBM Storwize V5000
ports and Brocade SAN Switches or Directors at 8 Gbps, see this website for the correct
setting of the fillword port config parameter in the Brocade operating system:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003699



2.3 FC Direct-attach planning
IBM Storwize V5000 can be used with a direct-attach Fibre Channel host configuration. The
advised configuration for direct attachment is to have at least one Fibre Channel cable from
the host that is connected to each node of the IBM Storwize V5000 to provide redundancy if
one of the nodes goes offline, as shown in Figure 2-5.

Figure 2-5 FC direct-attach host configuration



If your direct-attach Fibre Channel host requires connectivity to both IBM Storwize V5000 I/O
groups, the recommendation is to have at least one Fibre Channel cable from the host to
each of the node canisters of the IBM Storwize V5000, as shown in Figure 2-6.

Figure 2-6 FC direct-attach host configuration to I/O groups

Verify direct-attach interoperability with the IBM Storwize V5000 and the supported server
operating systems by following the requirements that are provided at this website:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss

2.4 SAS direct-attach planning


There are two SAS ports per node canister that are available for direct host attach on an IBM
Storwize V5000. These are ports 1 and 2. Do not use ports 3 and 4 because they are
reserved for expansion enclosure connectivity only. Refer to Figure 2-7 on page 34 to
correctly identify ports 1 and 2. Also, note the keyway in the top of the SAS connector.

Inserting cables: It is possible to insert the cables upside down despite the keyway.
Ensure that the blue tag on the SAS connector is underneath when you are inserting the
cables.



Figure 2-7 SAS port identification

It is possible to attach up to four hosts (one to each of the two available SAS ports on the two
node canisters) per I/O group. However, the advised configuration for direct attachment is to
have at least one SAS cable from the host connected to each node canister of the IBM
Storwize V5000. This configuration provides redundancy if one of the nodes goes offline, as
shown in Figure 2-8.

Figure 2-8 SAS host direct-attach



Depending on your requirements, it is possible to attach up to two hosts to an IBM Storwize
V5000 that is arranged as two I/O groups. This configuration allows the host to communicate
with both I/O groups of the cluster, as shown in Figure 2-9.

Figure 2-9 SAS host direct-attach to I/O groups

2.5 LAN configuration planning


There are two Ethernet ports per node canister that are available for connection to the LAN
on an IBM Storwize V5000 system.

Ethernet port 1 is for accessing the management GUI, the service assistant GUI for the node
canister, and iSCSI host attachment. Port 2 can be used for the management GUI and iSCSI
host attachment.

Each node canister in a control enclosure connects over an Ethernet cable from Ethernet
port 1 of the canister to an enabled port on your Ethernet switch or router. Optionally, you can
attach an Ethernet cable from Ethernet port 2 on the canister to your Ethernet network.

Configuring IP addresses: There is no issue with configuring multiple IPv4 or IPv6
addresses on an Ethernet port or with the use of the same Ethernet port for management
and iSCSI access. However, you cannot use the same IP address for management and
iSCSI host use.

Table 2-1 shows possible IP configuration of the Ethernet ports on the IBM Storwize V5000
system.

Table 2-1 Storwize V5000 IP address configuration options per node canister

             Storwize V5000 Management Node Canister 1   Storwize V5000 Partner Node Canister 2
ETH PORT 1   IPv4/6 management address                   IPv4/6 service address
             IPv4/6 service address                      IPv4/6 iSCSI address
             IPv4/6 iSCSI address
ETH PORT 2   IPv4/6 management address                   IPv4/6 iSCSI address
             IPv4/6 iSCSI address



IP management addresses: The IP management address that is shown on Node
Canister 1 in Table 2-1 is an address on the configuration node. If a failover occurs, this
address transfers to Node Canister 2 and this node canister becomes the new
configuration node. The management addresses are managed by the configuration node
canister only (1 or 2; in this case, by Node Canister 1).

2.5.1 Management IP address considerations


Because Ethernet port 1 from each node canister must be connected to the LAN, a single
management IP address for the clustered system is configured as part of the initial setup of
the IBM Storwize V5000 system.

The management IP address is associated with one of the node canisters in the clustered
system and that node then becomes the configuration node. Should this node go offline
(planned or unplanned), the management IP address fails over to the other node’s Ethernet
port 1.

For more clustered system management redundancy, you should connect Ethernet port 2 on
each of the node canisters to the LAN, which allows for a backup management IP address to
be configured for access, if necessary.

Figure 2-10 shows a logical view of the Ethernet ports that are available for configuration of
the one or two management IP addresses. These IP addresses are for the clustered system
and therefore are associated with only one node, which is then considered the configuration
node.

Figure 2-10 Ethernet ports available for configuration



2.5.2 Service IP address considerations
Ethernet port 1 on each node canister is used for system management and for service
access, when required. In normal operation, the service IP addresses are not needed.
However, if there is a node canister problem, it might be necessary for service personnel to
log on to the node to perform service actions.

Figure 2-11 shows a logical view of the Ethernet ports that are available for configuration of
the service IP addresses. Only port one on each node can be configured with a service IP
address.

Figure 2-11 Service IP addresses available for configuration
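
If a service IP address must be set or changed from the command line rather than the GUI,
the satask command can be used. The following sketch assigns a service IP address to the
node canister that you are logged on to; the addresses shown are examples only:

satask chserviceip -serviceip 192.168.70.121 -gw 192.168.70.1 -mask 255.255.255.0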

2.6 Host configuration planning


Hosts should have two Fibre Channel connections for redundancy, but the IBM Storwize
V5000 also supports hosts with a single HBA port connection. However, if that HBA, its link
to the SAN fabric, or the fabric itself fails, the host loses access to its volumes. Even with a
single connection to the SAN, the host has multiple paths to the IBM Storwize V5000
volumes because that single connection must be zoned with at least one Fibre Channel port
per node. Therefore, multipath software is required. This requirement also applies to
direct-attach SAS hosts. They can be connected by using a single host port, which allows up
to eight hosts in a dual I/O group cluster, but for redundancy, two SAS connections per host
are advised.

If two connections per host are used, multipath software is also required on the host. If an
iSCSI host is employed, it also requires multipath software. All node canisters should be
configured and connected to the network so that any iSCSI host sees at least two paths to its
volumes; multipath software is required to resolve these.
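
As an example of verifying path redundancy, on a Linux host that uses the native Device
Mapper multipath driver, the multipath -ll command lists each volume and the paths to it.
Each IBM Storwize V5000 volume should show one path per zoned node port and no more
than eight paths in total; the exact output format depends on the distribution and driver level:

multipath -ll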



SAN Boot is supported by IBM Storwize V5000. For more information, see the IBM Storwize
V5000 Information Center at this website:
http://www-01.ibm.com/support/knowledgecenter/search/san%20boot?scope=STHGUJ&lang=en

Verify that the hosts that access volumes from the IBM Storwize V5000 meet the
requirements that are found at this website:
https://ibm.biz/BdFNu6

Multiple operating systems are supported by IBM Storwize V5000. For more information
about HBA/Driver/multipath combinations, see this website:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss

As per the IBM System Storage Interoperation Center (SSIC), keep the following items in
mind:
• Host operating systems are at the levels that are supported by the IBM Storwize V5000.
• HBA BIOS, device drivers, firmware, and multipathing drivers are at the levels that are
  supported by IBM Storwize V5000.
• If boot from SAN is required, ensure that it is supported for the operating systems that are
  deployed.
• If host clustering is required, ensure that it is supported for the operating systems that are
  deployed.
• All direct-connect hosts should have the HBA set to point-to-point.

For more information, see Chapter 4, “Host configuration” on page 155.

2.7 Miscellaneous configuration planning


During the initial setup of the IBM Storwize V5000 system, the installation wizard asks for
various information that you should have available during the installation process. Several of
these fields are mandatory to complete the initial configuration.

The information in the following checklist is helpful to have before the initial setup is
performed. The date and time can be manually entered, but to keep the clock synchronized,
use a network time protocol (NTP) service:
• Document the LAN NTP server IP address that is used for synchronization of devices.
• For alerts to be sent to storage administrators and to set up Call Home to IBM for service
  and support, you need the following information:
  – Name of the primary storage administrator for IBM to contact, if necessary.
  – Email address of the storage administrator for IBM to contact, if necessary.
  – Phone number of the storage administrator for IBM to contact, if necessary.
  – Physical location of the IBM Storwize V5000 system for IBM service (for example,
    Building 22, first floor).
  – SMTP or email server address to direct alerts to and from the IBM Storwize V5000.
  – For the Call Home service to work, the IBM Storwize V5000 system must have access
    to an SMTP server on the LAN that can forward emails to the default IBM service
    address: [email protected] for Americas-based systems and
    [email protected] for the rest of the world.



  – Email address of local administrators that must be notified of alerts.
  – IP address of the SNMP server to direct alerts to, if required (for example, operations
    or Help desk).

After the IBM Storwize V5000 initial configuration, you might want to add more users who can
manage the system. You can create as many users as you need, but the following roles
generally are configured for users:
• Security Admin
• Administrator
• CopyOperator
• Service
• Monitor

The user in the Security Admin role can perform any function on the IBM Storwize V5000.

The user in the Administrator role can perform any function on the IBM Storwize V5000
system, except manage users.

User creation: The Security Admin role is the only role that has the create users function
and should be limited to as few users as possible.

The user in the CopyOperator role can view anything in the system, but the user can
configure and manage only the copy functions of the FlashCopy capabilities.

The user in the Monitor role can view object and system configuration information but cannot
configure, manage, or modify any system resource.

The only other role that is available is the service role, which is used if you create a user ID for
the IBM service representative. This user role allows IBM service personnel to view anything
on the system (as with the monitor role) and perform service-related commands, such as
adding a node back to the system after it is serviced or including disks that were excluded.
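
Users can also be created from the CLI by using the mkuser command. The following sketch
creates a local user in the CopyOperator role and then lists the configured users; the user
name and password shown are examples only:

IBM_Storwize:ITSO-V5000:superuser>svctask mkuser -name copyadmin -usergrp CopyOperator -password Passw0rd1
IBM_Storwize:ITSO-V5000:superuser>svcinfo lsuser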

2.8 System management


The graphical user interface (GUI) is used to configure, manage, and troubleshoot the IBM
Storwize V5000 system. It is used primarily to configure RAID arrays and logical drives,
assign logical drives to hosts, replace and rebuild failed disk drives, and expand the logical
drives.

It allows for troubleshooting and management tasks, such as checking the status of the
storage server components, updating the firmware, and managing the storage server.

The GUI also offers advanced functions, such as FlashCopy, Volume Mirroring, Remote
Mirroring, and Easy Tier. A command-line interface (CLI) for the IBM Storwize V5000 system
also is available.

This section describes system management using the GUI and CLI.



2.8.1 GUI
A web browser is used for GUI access. You must use a supported web browser to access the
management GUI. For more information about supported web browsers, see Checking your
web browser settings for the management GUI, which is available at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html

Complete the following steps to open the Management GUI from any web browser:

1. Browse to one of the following locations:


a. http(s)://host name of your cluster/
b. http(s)://cluster IP address of your cluster/ Example: https://192.168.70.120

2. Use the following default login information:


– User ID: superuser
– Password: passw0rd

For more information about how to use this interface, see this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html

More information also is available in Chapter 3, “Graphical user interface overview” on


page 75.

After the initial configuration that is described in 2.10, “Initial configuration” on page 49 is
completed, the IBM Storwize V5000 Welcome window opens, as shown in Figure 2-12.

Figure 2-12 Setup wizard: Welcome window



2.8.2 CLI
The CLI is a flexible tool for system management that uses the SSH protocol. A public/private
SSH key pair is optional for SSH access. For more information about setting up SSH Access
for Windows, Linux, or UNIX systems, see Appendix A, “Command-line interface setup and
SAN Boot” on page 667. The storage system can be managed using the CLI, as shown in
Example 2-1.

Example 2-1 System management by using command line interface


IBM_Storwize:ITSO-V5000:superuser>svcinfo lsenclosureslot
enclosure_id slot_id port_1_status port_2_status drive_present drive_id
1 1 online online yes 10
1 2 online online yes 11
1 3 online online yes 15
1 4 online online yes 16
1 5 online online yes 12
1 6 online online yes 4
1 7 online online yes 7
1 8 online online yes 8
1 9 online online yes 9
1 10 online online yes 5
1 11 online online yes 18
1 12 online online yes 14
1 13 online online yes 13
1 14 online online yes 2
1 15 online online yes 6
1 16 online online yes 3
1 17 online online yes 1
1 18 online online yes 0
1 19 online online yes 20
1 20 online online no
1 21 online online yes 19
1 22 online online yes 21
1 23 online online yes 22
1 24 online online yes 17
2 1 online online yes 25
2 2 online online yes 27
2 3 online online no
2 4 online online yes 31
2 5 online online yes 24
2 6 online online yes 26
2 7 online online yes 33
2 8 online online yes 32
2 9 online online yes 23
2 10 online online yes 28
2 11 online online yes 29
2 12 online online yes 30
IBM_Storwize:ITSO-V5000:superuser>
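
The CLI is reached over SSH. As a sketch, the following commands from a Linux or Mac
workstation generate an optional key pair and open a session to the cluster management IP
address; the key file name and IP address are examples only, and the public key must be
associated with a user (for example, through the management GUI) before key-based login
works:

ssh-keygen -t rsa -f ~/.ssh/v5000_key
ssh -i ~/.ssh/v5000_key superuser@192.168.70.120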

The initial IBM Storwize V5000 system setup should be done using the process and tools that
are described in 2.9, “First-time setup” on page 42.



2.9 First-time setup
This section describes how to perform a first-time IBM Storwize V5000 service and system
setup.

IBM Storwize V5000 uses an initial setup process that is contained within a USB key. The
USB key is delivered with each storage system and contains the initialization application file
that is called InitTool.exe. The tool is configured with your IBM Storwize V5000 system
management IP address, the subnet mask, and the network gateway address by first
plugging the USB stick into a Windows or Linux system.

The IBM Storwize V5000 starts the initial setup when you plug in the USB key with the newly
created file in to the storage system.

USB key: If you cannot find the official USB key that is supplied with the IBM Storwize
V5000, you can use any USB key that you have and download and copy the InitTool.exe
application from IBM Storwize V5000 Support at this website:
http://www.ibm.com/storage/support/Storwize/V5000

The USB stick contains a readme file that provides details about how to use the tool with
various operating systems. The following operating systems are supported:
• Microsoft Windows (R) 7 (64-bit)
• Microsoft Windows XP (32-bit only)
• Apple Mac OS(R) X 10.7
• Red Hat Enterprise Server 5
• Ubuntu (R) desktop 11.04

We use Windows 7, 64-bit in the following examples.



Complete the following steps to perform the initial setup by using the USB key:
1. Plug the USB key into a Windows system and start the initialization tool. If the system is
configured to autorun USB keys, the initialization tool starts automatically; otherwise, open
My Computer and double-click the InitTool.bat file. The welcome window of the tool
starts and is shown in Figure 2-13. Click Next.

Figure 2-13 System Initialization tool

Mac OS or Linux:

For Mac OS or Linux, complete the following steps:


a. Open a terminal window.
b. Locate the root directory of the USB flash drive:
• For Mac systems, the root directory is often in the /Volumes/ directory.
• For Linux systems, the root directory is often in the /media/ directory.
• If an automatic mount system is used, the root directory can be located by
entering the mount command.
c. Change the directory to the root directory of the flash drive.
d. Enter: sh InitTool.sh



The options for creating a system are shown in Figure 2-14.

Figure 2-14 System Initialization: Create a system

There are other options available through the Tasks section. However, these options
generally are only required after initial configuration. The options are shown in Figure 2-15
on page 45 and are accessed by selecting No to the initial question to configure a new
system. A second question asks if you want to view instructions on how to expand a
system with a new control enclosure. Selecting No to answer this question gives the
option to reset the superuser password or set the service IP of a node canister. Selecting
Yes (as shown in Figure 2-14) progresses through the initial configuration of the IBM
Storwize V5000.



Figure 2-15 Inittool task options

2. Set the Management IP address, as shown in Figure 2-16.

Figure 2-16 System Initialization: Management IP



3. Click Apply and Next to display the IBM Storwize V5000 power up instructions and how to
insert the USB flash drive into one of the ports on the IBM Storwize V5000. You must wait
for the fault LED to start and stop blinking as shown in Figure 2-17. This process may take
up to 3 minutes.

Figure 2-17 Initialization application: V5000 Power up

Any expansion enclosures that are part of the system should be powered up and allowed
to come ready before the control enclosure. Follow the instructions to power up the IBM
Storwize V5000 and wait for the status LED to flash. Then, insert the USB stick in one of
the USB ports on the left side node canister. This node becomes the control node and the
other node is the partner node. The fault LED begins to flash. When it stops, return the
USB stick to the Windows PC.

Clustered system creation: While the clustered system is created, the amber fault
LED on the node canister flashes. When this LED stops flashing, remove the USB key
from IBM Storwize V5000 and insert it in your system to check the results.



The wizard then attempts to verify connectivity to the IBM Storwize V5000, as shown in
Figure 2-18.

Figure 2-18 Verify system connectivity

If successful, a summary page is displayed that shows the settings that are applied to the
IBM Storwize V5000, as shown in Figure 2-19.

Figure 2-19 Initialization Summary



If the connectivity to the IBM Storwize V5000 cannot be verified, the warning that is shown
in Figure 2-20 is displayed.

Figure 2-20 Initialization Failure

Follow the on-screen instructions to resolve any issues. The wizard assumes the system
that you are using can connect to the IBM Storwize V5000 through the network. If it cannot
connect, you must follow step 1 from a machine that does have network access to the IBM
Storwize V5000. After the initialization process completes successfully, click Finish.



The initial setup is now complete. If you have a network connection to the Storwize system,
the wizard redirects you to the system Management GUI, as shown in Figure 2-21.

Figure 2-21 System Initialization complete

We describe system initial configuration using the GUI in 2.10, “Initial configuration”.

2.10 Initial configuration


This section describes how to complete the initial configuration, including the following tasks:
• System components verification
• Email event notifications
• Setting system name, date, and time
• License functions
• Initial storage configuration
• Initial configuration summary

If you just completed the initial setup, that wizard automatically redirects to the IBM Storwize
V5000 GUI. Otherwise, complete the following steps to complete the initial configuration
process:
1. Start the service configuration wizard using a web browser on a workstation and point it to
the system management IP address that was defined in Figure 2-16 on page 45. Enter the
default superuser password <passw0rd> (where 0 = zero), as shown in Figure 2-22.



Figure 2-22 Setup wizard: Login

2. After you are logged in, a welcome window opens, as shown in Figure 2-23.

Figure 2-23 Initial Setup: Welcome window

Click Next to start the configuration wizard.



3. As shown in Figure 2-24, verify that all of the components that you installed are listed. If
   any are missing, click the Rescan button. If components are still missing, see 2.1,
   “Hardware installation planning” on page 26 to check for any missed hardware
   configuration steps. Click Next to go to the next page.

Figure 2-24 Initial Setup: Verify System

4. The next window in the initial service setup is setting up Email Event Notification. Select
Yes and click Next to set up email notification and call home, as shown in Figure 2-25.

Figure 2-25 Initial Setup: Email Event Notifications



It is possible to configure your system to send email reports to IBM if an issue that requires
hardware replacement is detected. This function is called Call Home. When this email is
received, IBM automatically opens a problem report and contacts you to verify whether
replacement parts are required.

Call Home: When Call Home is configured, the IBM Storwize V5000 automatically
creates a Support Contact with one of the following email addresses, depending on
country or region of installation:
• US, Canada, Latin America, and Caribbean Islands: [email protected]
• All other countries or regions: [email protected]

IBM Storwize V5000 can use Simple Network Management Protocol (SNMP) traps, syslog
messages, and Call Home email to notify you and the IBM Support Center when
significant events are detected. Any combination of these notification methods can be
used simultaneously.
To set up Call Home, you need the location details of the IBM Storwize V5000, the storage
administrator’s details, and at least one valid SMTP server IP address. If you do not want to
configure Call Home now, it can be done later by using the GUI and clicking Settings →
Notifications (for more information, see 2.10.2, “Configuring Call Home, email alert, and
inventory” on page 70).

If your system is under warranty or you have a hardware maintenance agreement, it is
advised that you configure Call Home to enable proactive support of the IBM Storwize
V5000.
5. Enter the system location information. These details appear on the Call Home data to
enable IBM Support to correctly identify where the IBM Storwize V5000 is located, as
shown in Figure 2-26.

Figure 2-26 Initial Setup: System Location



Important: Unless the IBM Storwize V5000 is in the US, the state or province field
should be completed by using XX. Follow the help for correct entries for locations inside
the US.

6. In next window, you must enter the contact details of the main storage administrator, as
shown in Figure 2-27. You can choose to enter the details for a 24-hour operations desk.
These details also are sent with any Call Home. This information allows IBM Support to
contact the correct people to quickly progress any issues.

Figure 2-27 Initial Setup: Contact details



7. Click Apply and Next to continue, as shown in Figure 2-28.

Figure 2-28 Initial Setup: Applying contact information

8. The next window is for email server details. To enter more than one email server, click the
+ icon, as shown in Figure 2-29 and then click Apply and Next to commit.

Figure 2-29 Initial Setup: Email server details

9. The next window is the email recipient. IBM Storwize V5000 can also configure local email
alerts. These can be sent to a storage administrator or an email alias for a team of
administrators or operators. To add more than one recipient, click the + icon, as shown in
Figure 2-30.



Figure 2-30 Initial Setup: Email recipient

10.Clicking Apply and Next displays the summary window for the contact details, system
location, email server, call home, and email notification options, as shown in Figure 2-31.

Figure 2-31 Initial Setup: Summary

11.Click Apply and Next.



12.The initial configuration wizard moves on to the License Information. By clicking the
   Accept button, you certify that you agree to the terms and conditions, as shown in
   Figure 2-32. Click Accept to continue.

Figure 2-32 Initial Setup: License Agreement

13.The wizard then prompts for the superuser to log on and proceed with the configuration,
as shown in Figure 2-33.

Figure 2-33 Initial Setup: GUI login



The initial setup wizard moves from the service setup to the system setup. In this section, we
use the wizard to complete additional system details and configuration, as shown in
Figure 2-34.

Figure 2-34 Initial Setup: System Setup

14.Click Next and complete the following steps in the System Setup:
15.In the System Name window, enter the system name and click Apply and Next, as shown
   in Figure 2-35.

Figure 2-35 Initial Setup: Insert system name

Note: The IBM Storwize V5000 GUI shows the CLI as you go through the configuration
steps.

Note: Use the chsystem command to modify the attributes of the clustered system. This
command can be used at any time after a system has been created.
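
For example, the following command renames the clustered system from the CLI; the new
name shown is hypothetical:

IBM_Storwize:ITSO-V5000:superuser>svctask chsystem -name ITSO_V5000_NEW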



16.There are two options for configuring the date and time. Select the required method and
enter the date and time manually or specify a network address for an NTP server. After
this is done, the Apply and Next option becomes active, as shown in Figure 2-36.

Figure 2-36 Initial Setup: Date and Time
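
The same settings can be applied from the CLI. The following sketch points the system at an
NTP server or, alternatively, sets the time manually in MMDDHHmmYYYY format; the IP
address and timestamp shown are examples only:

IBM_Storwize:ITSO-V5000:superuser>svctask chsystem -ntpip 192.168.70.10
IBM_Storwize:ITSO-V5000:superuser>svctask setsystemtime -time 022710302015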

17.In this next window, the IBM Storwize V5000 GUI provides help and guidance about the
   different types of licenses for certain system functions. A license must be purchased for
   each enclosure that is attached to, or externally managed by, the IBM Storwize V5000. For
   more information about external storage virtualization, see Chapter 11, “External storage
   virtualization” on page 579. For each function, enter the number of licensed enclosures,
   as shown in Figure 2-37, and click Apply and Next to continue.

Figure 2-37 Initial Setup: Licensed Functions



The IBM Storwize V5000 uses the following values for each of the licensed functions:
• FlashCopy: Enter the number of enclosures that are licensed to use the FlashCopy
  function.
• Remote copy: Enter the number of Remote Mirroring licenses. This license setting enables
  the use of the Metro Mirror and Global Mirror functions. This value must be equal to the
  number of enclosures that are licensed for external virtualization plus the number of
  attached internal enclosures.
• Easy Tier: Enter the number of enclosures that are licensed to use the Easy Tier function.
• External Virtualization: Enter the number of external enclosures that you are virtualizing.
  For each physical enclosure that is attached to your system, you must have an external
  virtualization license.
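
The license values can also be reviewed and set from the CLI, as sketched below with
example enclosure counts. Use the lslicense output on your system to confirm the
parameter names that your software level supports:

IBM_Storwize:ITSO-V5000:superuser>svcinfo lslicense
IBM_Storwize:ITSO-V5000:superuser>svctask chlicense -flash 2 -remote 2 -virtualization 1
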
18.The configuration wizard continues with the hardware configuration by detecting any
enclosures, as shown in Figure 2-38.

Figure 2-38 Initial Setup: Detecting enclosures

19.The next window is the Email Event Notification. Because the information was entered
   previously in step 6 on page 53, you can either update it or click Apply and Next to move
   through the steps.
20.The initial setup wizard moves on to the Configure Storage option next. This option takes
   all the disks in the IBM Storwize V5000 and automatically configures them into optimal
   RAID arrays for use as MDisks. If you do not want to configure disks automatically now,
   select Configure storage later and you exit the wizard to the IBM Storwize V5000 GUI. If
   you select Configure storage now, the system examines the enclosures and proposes the
   best array configuration. Clicking Next moves to the summary window that shows the
   RAID configuration that the IBM Storwize V5000 will implement, as shown next in
   Figure 2-39.



Figure 2-39 Initial Setup: Summary

Depending on your system configuration, the storage pools are created when you click
Finish. Closing that task box completes the Initial configuration wizard and automatically
directs you to the Create Hosts task option on the GUI, as shown in Figure 2-40.

Figure 2-40 Initial Setup: Create hosts



If you choose to create hosts at this stage, see Chapter 4, “Host configuration” on page 155.

Selecting Cancel exits to the IBM Storwize V5000 GUI. There is also a link to the
e-Learning tours that are available through the GUI.

2.10.1 Adding enclosures after initial configuration


When the initial installation of the IBM Storwize V5000 is complete, all expansion enclosures
and control enclosures that were purchased at that time should be installed as part of the
initial configuration. This process enables the system to make the best use of the enclosures
and drives that are available.

Adding a control enclosure


If you are expanding the IBM Storwize V5000 after the initial installation by adding a second
I/O group (a second control enclosure), you must install it in the rack and connect it to the
SAN. Ensure that you re-zone your Fibre Channel switches so that the new control enclosure
and the existing one are connected. For more information about zoning the node canisters,
see 2.2, “SAN configuration planning” on page 30.

After the hardware is installed, cabled, zoned, and powered on, a second control enclosure
will be visible from the IBM Storwize V5000 GUI, as shown in Figure 2-41.

Figure 2-41 Second control enclosure



Complete the following steps to use the management GUI to configure the new enclosure:
1. In the main window, go to the Action menu in the upper left and select Add Enclosures.
Alternatively, you can click the available control enclosure as shown in Figure 2-42.

Figure 2-42 Option to add a control enclosure



2. If the control enclosure is properly configured, the new control enclosure is identified in the
next window, as shown in Figure 2-43.

Figure 2-43 New control enclosure identification

3. Select the control enclosure and click Actions → Identify to turn on the identify LEDs of
   the new canister, if required. Otherwise, click Next.
4. At this point, you are prompted to configure the new storage:
a. If you do not want to continue, click Cancel to quit the wizard and return to the IBM
Storwize V5000 main window.



b. To continue, choose Automatic or Custom Configuration. Select Automatic as
shown in Figure 2-44 and click Next to continue.

Figure 2-44 Configuring storage from the new control enclosure

When you choose to configure storage automatically, the wizard adds the new control
enclosure. The task takes all the disks in the new control enclosure and automatically
configures them into RAID arrays for use as MDisks, as shown in Figure 2-45.

Figure 2-45 Adding new control enclosure automatic



After the wizard completes adding the new control enclosure, the IBM Storwize V5000
shows the management GUI containing two I/O groups, as shown in Figure 2-46.

Figure 2-46 IBM Storwize V5000 GUI with two I/O groups

Complete the following steps to add a new control enclosure by using the Custom
configuration option:
1. If you have the second I/O group ready, click the available I/O group as shown in
Figure 2-42 on page 62.
2. The add enclosure wizard starts as shown in Figure 2-47. Select Custom Configuration
and click Next.

Figure 2-47 Add control enclosure with custom storage configuration



3. The wizard adds the new control enclosure to the IBM Storwize V5000, as shown in
Figure 2-45.
4. Next, the wizard shows that adding the storage is complete and directs the administrator
   to the Internal Storage page to configure the new storage, as shown in Figure 2-48.

Figure 2-48 Internal storage ready to use

To configure Internal Storage, see Chapter 7, “Storage pools” on page 309.


5. Click Close to return to the IBM Storwize V5000 GUI.
6. The new control enclosure is now part of the cluster, as shown in Figure 2-49.

Figure 2-49 New control enclosure that is shown as part of existing cluster

Adding a new expansion enclosure


Complete the following steps to add a new expansion enclosure:
1. To add a new expansion enclosure, change to the Monitoring tab and select System. If no
new hardware is shown, check your cabling to ensure that the new expansion enclosure is
connected properly and refresh the window.
As shown in Figure 2-50, there are two different options to add a new enclosure to the IBM
Storwize V5000. Each option guides the user to the same wizard. Click Add Enclosure in
the upper left or click the empty space highlighted in Figure 2-50.



Figure 2-50 Adding an expansion enclosure

2. If the enclosure is properly cabled, the wizard identifies the candidate expansion
enclosure. Select the expansion enclosure and click Next, as shown in Figure 2-51.

Figure 2-51 Expansion enclosure cable check



3. As described in step 4 on page 63, you are prompted to configure the new storage, as
shown in Figure 2-52.

Figure 2-52 Add new expansion enclosure storage

If you choose Automatic configuration, the system will automatically configure the storage
into arrays for use as MDisks. If you choose Custom Configuration, the wizard offers a
more flexible way to configure the new storage, as shown in Figure 2-53.

Figure 2-53 New enclosure using custom configuration

To learn more about configuring internal storage, see 7.1, “Working with internal drives” on
page 310.



4. The task to add the new enclosure runs and completes, as shown in Figure 2-54. Click
Close.

Figure 2-54 Task adding new enclosure

5. The new expansion enclosure is now shown as part of the cluster attached to its control
enclosure, as shown in Figure 2-55.

Figure 2-55 New expansion enclosure as part of the cluster

For more information about how to provision the new storage in the expansion enclosure, see
Chapter 7, “Storage pools” on page 309.
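
You can also confirm from the CLI that the new expansion enclosure joined the system. The
lsenclosure command lists each enclosure; the new enclosure should be reported with a
status of online and with managed set to yes:

IBM_Storwize:ITSO-V5000:superuser>svcinfo lsenclosure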



2.10.2 Configuring Call Home, email alert, and inventory
If your system is under warranty or you have a hardware maintenance agreement, it is
advised that you configure your system to send email reports to IBM. In the event that an
issue that requires hardware investigation is detected, the IBM Storwize V5000 will send an
email to the support teams at IBM, creating a problem record and alerting them to the issue.
They will then contact you proactively to further investigate the issue. This feature is known as
Call Home and is typically configured during the Initial Configuration of the system.

To configure the Call Home and email alert event notification in the IBM Storwize V5000 after
the Initial Configuration, complete the following steps:
1. Click Settings → Event Notifications, as shown in Figure 2-56.

Figure 2-56 Enabling Call Home

2. If your system has the Email Notification disabled, as shown in Figure 2-57, you can
re-enable it by clicking Enable Notification.



Figure 2-57 Enabling email notification

If the Email and Call Home notifications were never configured on the IBM Storwize V5000,
a window opens and the wizard guides you through the steps that are described in 2.10,
“Initial configuration” on page 49.
3. Click Email → Edit, as shown in Figure 2-58.

Figure 2-58 Selecting Event Notification

The fields to configure Call Home become available and you must enter accurate and
complete information about both the company and the contact. The Email contact is the
person who is contacted by the support center about any issues and for further
information. You might want to enter a network operations center email address here for
24 x 7 coverage. Enter the IP address and the server port for one or more of the email
servers that send email to IBM.



If you would like email alerts to be sent to other recipients as well as IBM, add their
addresses into the Email Notification section. Use the “+” symbol to add further recipients
and select one or more event types for each user, as shown in Figure 2-59.

Figure 2-59 Setting Call Home

Click Save to save the changes.
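
The same configuration can be applied from the CLI. The following sketch defines an email
server, sets the contact and location details, adds the IBM support address as a Call Home
recipient, and activates email notification; all addresses, names, and numbers shown are
examples only:

IBM_Storwize:ITSO-V5000:superuser>svctask mkemailserver -ip 192.168.70.20 -port 25
IBM_Storwize:ITSO-V5000:superuser>svctask chemail -reply [email protected] -contact "John Doe" -primary 5551234567 -location "Building 22, first floor"
IBM_Storwize:ITSO-V5000:superuser>svctask mkemailuser -address [email protected] -usertype support -error on -inventory on
IBM_Storwize:ITSO-V5000:superuser>svctask startemail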

2.10.3 Service Assistant tool


The IBM Storwize V5000, as a single I/O group, is initially configured with three IP addresses:
one service IP address for each node canister, and a management IP address, which is set
when the cluster is started.

The Management IP and Service IP addresses can be changed within the GUI as shown in
3.4.7, “Settings menu” on page 136.

The Service Assistant (SA) tool is a web-based GUI used to service individual node canisters,
primarily when a node has a fault and is in a service state. A node cannot be active as part of
a clustered system while it is in a service state. The SA is available even when the
management GUI is not accessible. The following information and tasks are included:
• Status information about the connections and the node canister
• Basic configuration information, such as configuring IP addresses
• Service tasks, such as restarting the common information model object manager
  (CIMOM) and updating the worldwide node name (WWNN)
• Details about node error codes and hints about what to do to fix the node error



Important: The Service Assistant tool can only be accessed by using the superuser
account. You should only access the Service Assistant tool under the direction of IBM
Support.

The Service Assistant GUI is available by using a service assistant IP address on each node.
The SA GUI can also be accessed through the cluster IP address by appending /service to
the cluster management URL. If the system is down, the only other method of
communicating with the node canisters is through the SA IP address directly. Each node can
have a single SA IP address on Ethernet port 1. It is advised that these IP addresses are
configured on all Storwize V5000 node canisters.

The default IP address of canister 1 is 192.168.70.121 with a subnet mask of 255.255.255.0.

The default IP address of canister 2 is 192.168.70.122, with a subnet mask of 255.255.255.0.

To open the SA GUI, enter one of the following URLs into any web browser:
• http(s)://cluster IP address of your cluster/service
• http(s)://service IP address of a node/service

Example:
• Management address: http://1.2.3.4/service
• SA access address: http://1.2.3.5/service

When you are accessing SA by using the <cluster address>/service, the configuration node
canister SA GUI login window opens, as shown in Figure 2-60.

Figure 2-60 Service Assistant Login

The SA interface can be used to view status and run service actions on the node to which
the user is connected, and on other nodes.



After you are logged in, you see the Service Assistant Home window, as shown in
Figure 2-61.

Figure 2-61 Service assistance home window

The current canister node is displayed in the upper left corner of the GUI. As shown in
Figure 2-61, this is node 1. To change the canister, select the relevant node in the Change
Node section of the window. You see the details in the upper left change to reflect the new
canister.

The SA GUI provides access to service procedures and shows the status of the node
canisters. It is advised that these procedures are carried out only when you are directed to
do so by IBM Support.

For more information about how to use the SA tool, see this website:
https://ibm.biz/BdEY8E



Chapter 3. Graphical user interface overview

This chapter provides an overview of the IBM Storwize V5000 graphical user interface (GUI)
and shows how to navigate the configuration panels.

This chapter includes the following topics:
• Getting started
• Navigation
• Status indicator menus
• Function icon menus
• Management GUI help



3.1 Getting started
This section provides information about accessing the IBM Storwize V5000 management
GUI. It covers topics such as supported browsers, login modes, and the layout of the System
panel.

3.1.1 Supported browsers


The IBM Storwize V5000 management software is a browser-based GUI. It is designed to
simplify storage management by providing a single point of control for monitoring,
configuration, and management.

For information about supported browsers, see the IBM Storwize V5000 Information Center at
this website:
http://pic.dhe.ibm.com/infocenter/storwize/v5000_ic/index.jsp

3.1.2 Accessing the management GUI


To access the management GUI, open a supported web browser and enter the management
IP address or Hostname of the IBM Storwize V5000. The login panel is displayed, as shown
in Figure 3-1.

Figure 3-1 IBM Storwize V5000 login panel

Default user name and password: You should change any default passwords immediately.

Use the following information to log in to the IBM Storwize V5000 storage management:
• User Name: superuser
• Password: passw0rd (a zero replaces the letter o)



Note: Older versions of the software allow low-resolution graphics to be used by selecting
a check box on the login panel. Low-resolution graphic mode is now selected via the GUI
preferences panel. See “GUI Preferences panel” on page 149.

A successful login shows the System panel by default, as shown in Figure 3-2. Alternatively,
the last opened window from the previous session is displayed.

Figure 3-2 IBM Storwize V5000 System panel

Clicking the logged-in user name in the top banner allows the user to log out, or manage their
password and SSH keys, as shown in Figure 3-3.

Figure 3-3 Drop-down selection



3.1.3 System panel layout
As shown in Figure 3-4, the System panel has three main sections: Function Icons, System
view, and Status Indicators.

The System panel is new in software v7.4 and replaces the Overview and System Details
panels from older levels of software.

Figure 3-4 Three main sections of the IBM Storwize V5000 System panel

The Function Icons section shows a column of images. Each image represents a group of
interface functions. The icons enlarge with mouse hover and the following menus are shown:
• Monitoring
• Pools
• Volumes
• Hosts
• Copy Services
• Access
• Settings

The System view section shows a 3D representation of the IBM Storwize V5000. Hovering
over any of the components will show a component overview, while right-clicking a
component will show an Action menu appropriate to that component. Clicking the arrow at the
bottom right of the graphic rotates it to show the rear of the system. This arrow may be blue or
red. Blue indicates no issues, while red indicates an issue of some kind with a component.



The Status Indicators section shows the following horizontal status bars:
• Allocated / Virtual: Status that is related to the storage capacity of the system.
• Running Tasks: Status of tasks that are running and the recently completed tasks.
• Health Status: Status relating to system health, which is indicated by using the following
  color codes:
  – Green: Healthy
  – Yellow: Degraded
  – Red: Unhealthy

Hovering over or clicking the horizontal bars provides more information and menus, which are
described in 3.3, “Status indicator menus” on page 93.

There are also two links at the top of the System panel. The Actions link, at the top-left, opens
an Action menu as shown in Figure 3-5 and the Overview link at the top-right toggles an
Overview panel as shown in Figure 3-6.

Figure 3-5 Systems panel - Actions menu



Figure 3-6 System panel - overview

For full details on the System panel, refer to “System panel” on page 97.

3.2 Navigation
Navigating the management tool is simple and, like most systems, there are many ways to
navigate. The two main methods are to use the Function Icons section or the Overview
drop-down.

This section describes the two main navigation methods and introduces the well-known
breadcrumb navigation aid and the Suggested Tasks aid. Information regarding the
navigation of panels with tables also is provided.



3.2.1 Function icons navigation
Hovering the mouse pointer over one of the seven function icons on the left side of the panel
enlarges the icon and provides a menu with which to access various functions. Move the
pointer to the required function and click the function. Figure 3-7 shows the results of
hovering the mouse pointer over a function icon.

Figure 3-7 Hovering over a function icon



Figure 3-8 shows all the menus with options under the Function Icons section.

Figure 3-8 Options that are listed under Function Icons section

3.2.2 Breadcrumb navigation aid


The IBM Storwize V5000 panels use the breadcrumb navigation aid to show the trail that was
browsed. This breadcrumb navigation aid is in the top area of the panel and hovering over a
breadcrumb in the trail shows a menu. Figure 3-9 shows the breadcrumb navigation aid for
the System panel.

Figure 3-9 Breadcrumb navigation aid



3.2.3 Suggested Tasks feature
The Suggested Tasks feature is a panel that is shown at login and displays any outstanding
tasks. The number of suggested tasks changes, depending on the configuration of the
system. Clicking the suggested task opens the corresponding panel to complete the task.

Figure 3-10 shows the Suggested Tasks panel.

Figure 3-10 Suggested Tasks panel



3.2.4 Presets
The management GUI contains a series of pre-established configuration options, called
presets, which are commonly used settings to quickly configure objects on the system.
Presets are available for creating volumes, IBM FlashCopy mappings, and for setting up a
RAID configuration. Figure 3-11 shows the available internal storage presets.

Figure 3-11 Internal storage preset selection

3.2.5 Actions
The IBM Storwize V5000 functional panels provide access to various actions that can be
performed, such as modify attributes and rename, add, or delete objects. The Action menu
options change, depending on the panel accessed.

The available Actions menus can be accessed by using one of two main methods: highlight
the resource and use the Actions drop-down menu, as shown in Figure 3-12, or right-click the
resource, as shown in Figure 3-13.



Figure 3-12 Actions menu

Figure 3-13 Right-clicking the Actions menu

3.2.6 Task progress


An action starts a running task and shows a task progress panel, as shown in Figure 3-14.
Clicking View more details shows the underlying command-line interface (CLI) commands.
The commands are highlighted in blue and can be copied and pasted into a configured IBM
Storwize V5000 SSH terminal session, if required. This is useful when you are developing CLI
scripts.



Figure 3-14 Task progress panel

3.2.7 Navigating panels with tables


Many of the configuration and status panels show information in a table format. This section
describes the following methods to navigate panels with rows and columns:
• Sorting columns
• Reordering columns
• Adding or removing columns
• Multiple selections
• Filtering objects

Sorting columns
Columns can be sorted by clicking the column heading. Figure 3-15 shows the result of
clicking the heading of the Capacity column. The table is now sorted and lists volumes with
the least amount of capacity at the top of the table.

Figure 3-15 Sorting columns by clicking the column heading



Reordering columns
Columns can be reordered by dragging the column to the required location. Figure 3-16
shows the location of the column with the heading Host Mappings positioned in the last
column. Dragging this heading reorders the columns in the table.

Figure 3-16 Reordering columns

Figure 3-17 shows the column heading Host Mappings as it is dragged to the required
location.

Figure 3-17 Dragging a column heading to the required location



Figure 3-18 shows the result of dragging the column heading Host Mappings to the new
location.

Figure 3-18 Reordering column headings

Adding or removing columns


To add or remove a column, right-click the heading bar and select or de-select the check box
for the required column headings. Figure 3-19 shows the addition of the Real Capacity
column.

Figure 3-19 Adding column heading Real Capacity

Important: Some users might run into a problem in which a context menu from the Firefox
browser is shown when right-clicking to change the column headings. This issue can be
fixed in Firefox by clicking Tools → Options → Content → Advanced (for the JavaScript
setting) → Select: Display or replace context menus.

The web browser requirements and configuration settings that are advised to access the
IBM Storwize V5000 management GUI can be found in the IBM Storwize V5000
Information Center at this website:
http://pic.dhe.ibm.com/infocenter/storwize/V5000_ic/index.jsp



It is also possible to add or remove columns by clicking the icon at the far right of the heading
bar as shown in Figure 3-20.

Figure 3-20 Adding columns icon

Multiple selections
You also can select multiple items in a list by using a combination of the Shift or Ctrl keys.

Using the Shift key


To select multiple items in a sequential order, click the first item that is listed, press and hold
the Shift key, and then click the last item in the list. All of the items between the first and last
items are selected, as shown in Figure 3-21.

Figure 3-21 Selection of three sequential items

Using the Ctrl key


To select multiple items that are not in sequential order, click the first item, press and hold the
Ctrl key, and then click the other items that are required. Figure 3-22 shows the selection of
two non-sequential items.

Figure 3-22 Selecting two non-sequential items



Filtering objects
To focus on a subset of the listed items that are shown in a panel with columns, use the filter
field. This tool shows items that match the entered value. Figure 3-23 shows the text Vol was
entered into the filter field. Now, only volumes with the text Vol in any column are listed and
the filter word also is highlighted.

Figure 3-23 Filtering objects to display a subset of the volumes

Advanced Filter
To use the Advanced Filter feature, click in the Filter field, then click the Advanced Filter icon,
as shown in Figure 3-24.

Figure 3-24 Advanced Filter



Figure 3-25 shows the Advanced filter column drop-down menu. Selecting a column name
allows the user to filter by column. Select the appropriate operand, which changes depending
on the column that is being filtered, and, finally, enter a filter value. Click Apply to display the
filtered data.

Figure 3-25 Advanced filter - Choosing values

Figure 3-26 shows the addition of more filters. Click the + icon to the right of the filter to add
further filters. Choose the appropriate logical operand (AND or OR). Click Apply to display
the filtered data. Filters can also be removed by clicking the - icon.

Figure 3-26 Advanced filter - Additional filters

Figure 3-27 shows the result of the filter used in Figure 3-26. This displays all internal drives
that are Candidates or Spares. To reset a filter after the results have been displayed, click the
magnifying glass with a red cross through it, in the filter field.



Figure 3-27 Advanced filter - Results and reset of filter

Saving tabular data to a file


It is possible to save tabular data to a file. To do this, click the diskette icon and specify the
name and location to save the file. A comma-delimited file is created for input to a
spreadsheet program, such as Microsoft Excel.

Figure 3-28 shows saving internal storage data.

Figure 3-28 Saving internal storage data



3.3 Status indicator menus
This section provides more information about the horizontal bars that are shown at the bottom
of the management GUI panels. The bars are status indicators, and include associated bar
menus. This section describes the Capacity allocation, Running Tasks, and Health Status bar
menus.

3.3.1 Allocated / Virtual Capacity status bar menu


The Allocated / Virtual Capacity status bar shows the capacity status. Hovering over the
image of two arrows on the right side of this status bar shows a description of the capacity
allocation that is in use. Figure 3-29 shows the comparison of the used capacity to the real
capacity.

Figure 3-29 The Allocated / Virtual Capacity bar that compares used capacity to real capacity

To change the capacity comparison, click the Allocated / Virtual status bar. This will toggle
between Allocated and Virtual. Figure 3-30 shows the new comparison of virtual capacity to
real capacity.

Figure 3-30 Changing the Allocated / Virtual comparison, virtual capacity to real capacity

3.3.2 Running Tasks bar menu


To show the Running Tasks bar menu, click the circular image to the left of the running tasks
status bar. This menu lists running and recently completed tasks and groups similar tasks.
Figure 3-31 shows the Running Tasks bar menu.

Figure 3-31 Running Tasks bar menu



For an indication of the task progress, browse to the Running Tasks bar menu and click the
required task. This will open the task and show its progress, as shown in Figure 3-32.

Figure 3-32 Volume Synchronization task panel

3.3.3 Health status bar menu


The health status bar provides an indication of the overall health of the system. The color of
the status bar indicates the state of IBM Storwize V5000:
• Green: Healthy
• Yellow: Degraded
• Red: Unhealthy

If a status alert occurs, the health status bar can turn from green to yellow or to red. To show
the health status menu, click the attention icon on the left side of the health status bar, as
shown in Figure 3-33.

Figure 3-33 Health status menu



In this example, the health status bar menu shows the system as Unhealthy and provides a
description of Internal Storage as the type of event that occurred. To investigate the event,
open the health status bar menu and click the description of the event to show the Events
panel (Monitoring → Events), as shown in Figure 3-34. This panel lists all events and
provides directed maintenance procedures (DMPs) to help resolve errors. For more
information, see “Events panel” on page 105.

Figure 3-34 Events panel via health status menu



3.4 Function icon menus
The IBM Storwize V5000 management GUI provides function icons that are an efficient and
quick mechanism for navigation. As described in section 3.1.3, “System panel layout” on
page 78, each graphic on the left side of the panel is a function icon that presents a group of
interface functions. Hovering over one of the seven function icons shows a menu that lists the
functions. Figure 3-35 shows all of the Function Icon menus.

Figure 3-35 All Function Icon menus



3.4.1 Monitoring menu
As shown in Figure 3-36, the Monitoring menu provides access to the System, Events, and
Performance panels.

Figure 3-36 Monitoring menu

System panel
Select System in the Monitoring menu to open the panel. This panel shows a 3D graphical
representation of the front of the system and is shown in Figure 3-37.

Figure 3-37 System panel - front view



Hovering over any of the components will show a component overview, while right-clicking a
component will show an Action menu appropriate to that component. Clicking the arrow at the
bottom-right of the graphic, as shown in Figure 3-37, rotates it to show the rear of the system.
This arrow may be blue or red. Blue indicates no issues, while red indicates an issue of some
kind with a component.

Figure 3-38 System panel - rear view

Clicking the arrow at the bottom-right of the graphic again rotates it to show the front of the
system.

Both Figure 3-37 on page 97 and Figure 3-38 show a system with one I/O group.

Figure 3-39 on page 99 shows a system with dual I/O groups. Hovering over an enclosure will
show basic information, while clicking an enclosure will show a detailed view, as shown in
Figure 3-40 on page 99.

In this book, you will see examples using both single I/O group and dual I/O group views.

The curved band below the graphic of the system shows capacity usage, including physical
installed capacity, the amount of capacity allocated, and over-provisioned capacity.



Figure 3-39 System panel showing dual I/O groups

Figure 3-40 Showing enclosure information in a dual I/O group environment



Actions menu
The Actions link at the top-left of the System panel opens an Actions menu, as shown in
Figure 3-41.

Figure 3-41 System panel Actions menu

The Actions menu allows the user to do the following tasks:


- Add a new enclosure (see Chapter 2, “Initial configuration” on page 25).
- Rename the system.
- Turn off any illuminated Identity LEDs.
- Update the system and drive software (see Chapter 12, “RAS, monitoring, and
troubleshooting” on page 601).
- Power off the system (see Chapter 12, “RAS, monitoring, and troubleshooting” on
page 601).
- Display system properties, which include the software version and environmental data.
See “System 3D view” on page 101 for further details.



Overview
The Overview link at the top-right of the System panel toggles an Overview panel, as shown
in Figure 3-42.

Figure 3-42 Overview panel showing all labels

Clicking any of the Overview icons takes the user to the associated panel. For example,
clicking Arrays takes the user to the Pools → Internal Storage panel.

System 3D view
On the System panel, hovering over any of the components in the 3D view will show a
component overview, and right-clicking a component will show an Action menu appropriate to
that component.

The following component properties can be displayed; a CLI sketch that shows the
equivalent queries follows this list:


1. System
The System properties show information such as software version, number of enclosures
and environmental details. Figure 3-43 on page 102 shows the System properties.



Figure 3-43 System properties

2. Enclosure
The Enclosure properties show information such as the enclosure state, its machine type
and serial number, and Field Replaceable Unit (FRU) part number. Figure 3-44 shows the
Enclosure (Control or Expansion) properties.

Figure 3-44 Enclosure properties



3. Drive
The Drive properties show information such as state, capacity, use (unused, candidate,
spare, member), the drive specification, and its FRU part number. Figure 3-45 shows the
properties for drive 2.

Figure 3-45 Drive properties

4. Canister
The Canister properties show information such as Configuration node state, WWNN,
memory and CPU specifications, and FRU part number. Figure 3-46 shows the canister
properties.

Figure 3-46 Canister properties



5. Power Supply
Right-clicking the Power Supply shows the Power Supply and Battery properties, as shown
in Figure 3-47 and Figure 3-48 respectively.
The Power Supply properties show information such as state and FRU part number.

Figure 3-47 Power Supply properties

The Battery properties show information such as state, charge state, and FRU part
number.

Figure 3-48 Battery properties
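
The same component properties can also be queried from the CLI over an SSH session. The
following is a minimal sketch; the object IDs that are used (enclosure 1, drive 2, canister 1,
PSU 1, battery 1) are examples only and depend on your configuration. lssystem returns the
system properties, including the code level, and the other commands return the state and
details of the corresponding component:

IBM_Storwize:ITSO_V5000:superuser>lssystem
IBM_Storwize:ITSO_V5000:superuser>lsenclosure 1
IBM_Storwize:ITSO_V5000:superuser>lsdrive 2
IBM_Storwize:ITSO_V5000:superuser>lsnodecanister 1
IBM_Storwize:ITSO_V5000:superuser>lsenclosurepsu -psu 1 1
IBM_Storwize:ITSO_V5000:superuser>lsenclosurebattery -battery 1 1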



Events panel
Select Events in the Monitoring menu to open the Events panel. The system is optimal
when all errors are addressed and no items are found in this panel, as shown in Figure 3-49.

Figure 3-49 Events panel with all errors addressed

Filtering events view


To view Unfixed Messages and Alerts or to Show All, select the appropriate option from the
menu that is between the Actions and Filter fields, as shown in Figure 3-50.

Figure 3-50 Unfixed messages and alerts in the events panel

Event properties
To show actions and properties that are related to an event or to repair an event that is not the
next Recommended Action, right-click the event to show other options. Figure 3-51 shows the
selection of the Properties option.

Figure 3-51 Selecting event properties



Figure 3-52 shows the properties and sense data for an event.

Figure 3-52 Event properties and sense data

Showing events within a time frame


To show events that occurred within a certain time of a particular event, select the required
event entry, then select Show entries within... from the Actions menu and set the period
value. Figure 3-53 shows the selection of the Show entries within... option with a period
value of 5 minutes. This shows all events within 5 minutes of the selected event.

Figure 3-53 Showing events within a set time



Performance panel
Select Performance in the Monitoring menu to open the Performance panel. This panel
shows graphs that represent the last 5 minutes of performance statistics. The performance
graphs include statistics about CPU Utilization, Volumes, Interfaces, and MDisks. Figure 3-54
shows the Performance panel.

Figure 3-54 Performance panel



Custom-tailoring performance graphs
The Performance panel can be customized to show the workload of a single node, which is
useful to help determine whether the system is working in a balanced manner. Figure 3-55
shows the custom-tailoring of the performance graphs by selecting node 1 from the System
Statistics menu. The measurement type can also be changed between throughput (MBps) or
IOPS by selecting the relevant value.

Figure 3-55 Graphs representing performance statistics of a single node

Performance peak value


Peak values over the last 5-minute period can be seen by hovering over the current value, as
shown in Figure 3-56 for the SAS Interfaces.

Figure 3-56 Peak SAS Interface usage value over the last 5 minutes
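
The statistics behind these graphs can also be sampled from the CLI. The following is a
minimal sketch; lssystemstats reports each system-level statistic with its current and recent
peak values, and lsnodecanisterstats reports the per-canister values (the canister name
node1 is an example):

IBM_Storwize:ITSO_V5000:superuser>lssystemstats
IBM_Storwize:ITSO_V5000:superuser>lsnodecanisterstats node1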



3.4.2 Pools menu
The Pools menu provides access to the Volumes by Pools, Internal Storage, External
Storage, MDisks by Pools, and System Migration functions, as shown in Figure 3-57.

Figure 3-57 Pools menu

Volumes by Pool panel


Select Volumes by Pool in the Pools menu to open the panel. The Pool Filter can be used to
view the volumes that are associated with a particular pool. This view makes it easy to manage volumes
and determine the amount of real capacity that is available for further use. Figure 3-58 shows
the Volumes by Pool panel.

Figure 3-58 Volumes by Pools panel



Volume Allocation
The upper-right corner of the Volumes by Pool panel shows the Volume Allocation, which, in
this example, shows the physical capacity (738.00 GiB) and the used capacity (52.00 GiB, the
green portion). The red bar marks the warning threshold; a warning event is generated when
the used capacity in the pool first exceeds this threshold. By default, the threshold is set to
80% of the physical capacity, but it can be altered in the pool properties. Figure 3-59 shows
the volume allocation information that is displayed in the Volumes by Pool panel.

Figure 3-59 Volume Allocation

Renaming pools
To rename a pool, select the pool from the pool filter and click the name of the pool.
Figure 3-60 shows that pool V5000_Pool_01 was selected to be renamed.

Figure 3-60 Renaming a pool
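
Both the pool name and the capacity warning threshold can also be changed from the CLI
with the chmdiskgrp command. The following is a minimal sketch; the pool names and the
80% threshold are examples only:

IBM_Storwize:ITSO_V5000:superuser>chmdiskgrp -name V5000_Pool_01_new V5000_Pool_01
IBM_Storwize:ITSO_V5000:superuser>chmdiskgrp -warning 80% V5000_Pool_01_new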

Changing pool icons


To change the icon that is associated with a pool, select the pool in the Pool Filter, click the
large pool icon, and use the Choose Icon buttons to select the required image. This helps to
manage and differentiate between drive classes or storage tiers.
Figure 3-61 shows the pool change icon panel.



Figure 3-61 Changing a pool icon

Volume functions
The Volumes by Pool panel also provides access to the volume functions via the Actions
menu, the Create Volumes option, and by right-clicking a listed volume. For more information
about navigating the Volume panel, see 3.4.3, “Volumes menu” on page 119. Figure 3-62
shows the volume functions that are available via the Volumes by Pool panel.

Figure 3-62 Volume functions are available via the Volume by Pools panel



Internal Storage panel
Select Internal Storage in the Pools menu to open the Internal Storage panel, as shown in
Figure 3-63. The internal storage consists of the drives that are contained in the IBM Storwize
V5000 control enclosure and any SAS-attached IBM Storwize V5000 expansion enclosures.
By using the Internal Storage panel, you can configure the internal storage into RAID
protected storage (MDisks). You can also filter the displayed drive list by drive class.

Figure 3-63 Drive actions menu of the internal storage panel

Drive actions
Drive level functions, such as identifying a drive and marking a drive as offline, unused,
candidate, or spare, can be accessed here. Right-click a listed drive to show the Actions
menu. Alternatively, the drives can be selected and then the Action menu is used. For more
information, see “Multiple selections” on page 89. Figure 3-63 shows the Drive Actions menu.

Drive properties
Drive properties and dependent volumes can be displayed from the Internal Storage panel.
Select Properties from the Drive Actions menu. The drive properties panel shows the drive
attributes and the drive slot SAS port status. Figure 3-64 on page 113 shows the drive
properties with the Show Details option selected.

Drive firmware upgrade


In past versions of the Storwize software, the drive firmware could only be upgraded via the
CLI. It is now possible to upgrade it via the GUI. The drives can be upgraded either all at
once or individually. See Chapter 12, “RAS, monitoring, and troubleshooting” on page 601.



Figure 3-64 Drive properties

The Configure Internal Storage wizard


Click Configure Storage to show the Configure Internal Storage wizard, as shown in
Figure 3-65.

Figure 3-65 Internal Storage panel



Using this wizard, you can configure the RAID properties and pool allocation of the internal
storage. Figure 3-66 shows the Configure Internal Storage wizard.

For full details on using the Configure Internal Storage wizard, see Chapter 7, “Storage pools”
on page 309.

Figure 3-66 Configure Internal Storage wizard

Figure 3-67 shows allocating the internal storage to a Pool.

Figure 3-67 Configuring Internal Storage wizard - Pool allocation
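
The actions of the wizard map to CLI commands. The following is a minimal sketch only,
assuming four candidate drives with IDs 0 - 3; the pool name, extent size, RAID level, and
drive IDs are examples that must be adapted to your configuration:

IBM_Storwize:ITSO_V5000:superuser>mkmdiskgrp -name V5000_Pool_01 -ext 256
IBM_Storwize:ITSO_V5000:superuser>mkarray -level raid5 -drive 0:1:2:3 V5000_Pool_01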

External Storage panel


Select External Storage in the Pools menu to open the External Storage panel. This panel
allows you to configure and use externally virtualized storage. Figure 3-68 shows the External
Storage panel.



For full details on configuring and using external storage, see Chapter 11, “External storage
virtualization” on page 579.

Figure 3-68 External Storage panel

MDisks by Pool panel


Select MDisks by Pool in the Pools menu to open the MDisks by Pool panel. Using this
panel, you can perform tasks such as displaying the MDisks in each pool, creating new
pools, deleting pools, and detecting externally virtualized storage. Figure 3-69 shows the
MDisks by Pool panel.

Figure 3-69 MDisks by Pool panel

Pool actions
To create, delete, or rename a pool, change a pool icon, view a child pool, or view a pool’s
properties, right-click the pool. Alternatively, the Actions menu can be used. Figure 3-70
shows the pool actions.



Figure 3-70 Pool actions

Note: In software v7.4, a pool cannot be deleted if it contains volumes. The volumes must
first be deleted via the Volumes by Pool panel. This is a safeguard against accidental
volume deletion.

RAID actions
Using the MDisks by Pool panel, you can perform MDisk RAID tasks, such as Set Spare
Goal, Swap Drive, and Delete. To access these functions, right-click the MDisk, as shown in
Figure 3-71.

Figure 3-71 RAID actions menu



System Migration panel
Select System Migration in the Pools menu to open the System Migration panel, as shown in
Figure 3-72. This panel is used to migrate data from externally virtualized storage systems to
the internal storage of the IBM Storwize V5000. The panel displays image mode volume
information. To begin a migration, click Start New Migration.

Figure 3-72 System Migration panel



Storage Migration wizard
The Storage Migration wizard is used for data migration from other SAS and Fibre Channel
attached storage systems to the IBM Storwize V5000. Figure 3-73 shows the Storage
Migration wizard.

Figure 3-73 Storage Migration wizard

For more information, see Chapter 6, “Storage migration” on page 249.



3.4.3 Volumes menu
As shown in Figure 3-74, the Volumes menu provides access to the Volumes, Volumes by
Pool, and Volumes by Host functions.

Figure 3-74 The Volumes menu



Volumes panel
Select Volumes in the Volumes menu to open the panel, as shown in Figure 3-75. The
Volumes panel shows all of the volumes in the system. The information that is displayed is
dependent on the columns that are selected.

Figure 3-75 Volumes panel



Volume actions
Figure 3-75 shows the Volume Action menu and its options.

Create volumes
Click Create Volumes to open the Create Volumes panel, as shown in Figure 3-76.

Using this panel, you can select a preset of the type of volume to create. The presets are
designed to accommodate most use cases. The presets are generic, thin-provisioned,
mirror, or thin mirror. After a preset is determined, select the storage pool from which the
volumes are to be allocated. An area to name and size the volumes is shown.

Figure 3-76 Create Volumes panel



Creating multiple volumes
A useful feature is available for quickly creating multiple volumes of the same type and size.
Specify the number of volumes required in the Quantity field, then complete the volume
capacity and name. A number range can also be specified.

The Create Volumes panel displays a summary that shows the real and virtual capacity that is
used if the proposed volumes are created. Click Create or Create and Map to Host to
continue.

Figure 3-77 shows a quantity of 3 in the Quantity field and this will create 3 volumes named
V5000_Vol_05, V5000_Vol_06, and V5000_Vol_07.

Figure 3-77 Creating multiple volumes quickly
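
The same three volumes can be created from the CLI by repeating the mkvdisk command.
The following is a minimal sketch, assuming a pool named V5000_Pool_01 and I/O group 0;
the sizes and names are examples. For a thin-provisioned volume, options such as
-rsize 2% -autoexpand can be added:

IBM_Storwize:ITSO_V5000:superuser>mkvdisk -mdiskgrp V5000_Pool_01 -iogrp 0 -size 10 -unit gb -name V5000_Vol_05
IBM_Storwize:ITSO_V5000:superuser>mkvdisk -mdiskgrp V5000_Pool_01 -iogrp 0 -size 10 -unit gb -name V5000_Vol_06
IBM_Storwize:ITSO_V5000:superuser>mkvdisk -mdiskgrp V5000_Pool_01 -iogrp 0 -size 10 -unit gb -name V5000_Vol_07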



Volume advanced settings
Click Advanced to show more volume configuration options. Use this feature when the preset
does not meet your requirements. After the advanced settings are configured, click OK to
return to the Create Volumes panel. Figure 3-78 shows the Advanced Settings panel.

Figure 3-78 Advanced Settings panel

Volumes by Pool panel


For more information, see “Volumes by Pool panel” on page 109.

Volumes by Host panel


Select Volumes by Host in the Volumes menu to open the panel. In the Volumes by Host
panel, you can focus on the volumes that are allocated to a particular host by using the host
selection filter. Figure 3-79 shows the Volumes by Host panel.



Figure 3-79 Volume by Host panel

3.4.4 Hosts menu


As shown in Figure 3-80, the Hosts menu provides access to the Hosts, Ports by Host, Host
Mappings, and Volumes by Host functions.

Figure 3-80 Selecting the Hosts menu



Hosts panel
Select Hosts in the Hosts menu to open the panel, as shown in Figure 3-81. The Hosts panel
shows all of the hosts that are defined in the system.

Figure 3-81 Hosts panel

Host Actions
Host Actions, such as Create Host, Modify Mappings, Unmap All Volumes, Duplicate
Mappings, Rename, Delete, and Properties can be performed from the Hosts panel.
Figure 3-81 shows the actions that are available from the Hosts panel.

Creating a host
Click Create Host to open the Add Host panel. Choose the host type from Fibre Channel
(FC), iSCSI, or SAS host and the applicable host configuration panel is shown. After the host
type is determined, the host name and port definitions can be configured. Figure 3-82 shows
the Choose the Host Type panel.

Figure 3-82 Choose the Host Type panel
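
Host objects can also be created from the CLI with the mkhost command. The following is a
minimal sketch for each host type; the host names, WWPNs, and IQN are examples only and
must match the values determined on the host:

IBM_Storwize:ITSO_V5000:superuser>mkhost -name W2012_FC -fcwwpn 21000024FF2D0CC2:21000024FF2D0CC3
IBM_Storwize:ITSO_V5000:superuser>mkhost -name W2012_iSCSI -iscsiname iqn.1991-05.com.microsoft:win-host
IBM_Storwize:ITSO_V5000:superuser>mkhost -name W2012_SAS -saswwpn 500062B200556140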



Ports by Host panel
Select Ports by Host in the Hosts menu to open the panel, as shown in Figure 3-83. The
panel shows the name, status, and type of ports that are assigned to the host definition. It is
possible to add and delete host ports by clicking the respective icon.

Actions such as Create Host, Modify mappings, Unmap, Duplicate Mappings, Rename and
Properties can be performed from this panel by clicking the Actions drop-down menu.

Figure 3-83 Ports by Host panel

Host Mappings panel


Select Host Mappings in the Hosts menu to open the panel, as shown in Figure 3-84. This
panel shows the volumes that each host can access with the corresponding SCSI ID. The
Unmap Volume action can be performed from this panel.

Figure 3-84 Host Mappings panel

Volumes by Host panel


For more information, see “Volumes by Host panel” on page 123.



3.4.5 Copy Services menu
The Copy Services menu provides access to the FlashCopy, Consistency Groups, FlashCopy
Mappings, Remote Copy, and Partnership functions. Figure 3-85 shows the Copy Services
menu.

Figure 3-85 Copy Services menu

FlashCopy panel
Select FlashCopy in the Copy Services menu to open the panel, as shown in Figure 3-86.
The FlashCopy panel displays all of the volumes that are in the system.

Figure 3-86 FlashCopy panel



FlashCopy actions
FlashCopy actions such as Create Snapshot, Create Clone, Create Backup, Advanced
FlashCopy, and Delete can be performed from this panel. Figure 3-86 shows all the actions
that are available from the FlashCopy panel.

Consistency Groups panel


Select Consistency Groups in the Copy Services menu to open the panel. A consistency
group is a container for FlashCopy mappings. Grouping allows FlashCopy mapping actions
such as prepare, start, and stop to occur at the same time for the group instead of
coordinating actions individually. This feature can help ensure that the group’s target volumes
are consistent to the same point and remove several FlashCopy mapping administration
tasks.

The Consistency Group panel shows the defined groups with the associated FlashCopy
mappings. Group Actions such as FlashCopy mapping Start, Stop, and Delete can be
performed from this panel. Create FlashCopy Mapping and Create Consistency Group can
also be selected from this panel. For more information, see “FlashCopy Mappings panel”.
Figure 3-87 shows the Consistency Group panel.

Figure 3-87 Consistency Groups panel

FlashCopy Mappings panel


Select FlashCopy Mappings in the Copy Services menu to open the panel. FlashCopy
mappings define the relationship between source volumes and target volumes. The
FlashCopy Mappings panel shows information that relates to each mapping, such as status,
progress, source and target volumes, and flash time. Select Create FlashCopy Mapping to
configure a new mapping or use the Actions menu to administer the mapping. Figure 3-88
shows the FlashCopy Mappings panel.



Figure 3-88 FlashCopy Mappings panel
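
A FlashCopy mapping can also be created and started from the CLI. The following is a
minimal sketch, assuming that source and target volumes of equal size already exist; the
volume names and the copy rate are examples. The -prep flag prepares (flushes) the
mapping before it is started, and 0 is the mapping ID that is returned by mkfcmap:

IBM_Storwize:ITSO_V5000:superuser>mkfcmap -source V5000_Vol_05 -target V5000_Vol_05_copy -copyrate 50
IBM_Storwize:ITSO_V5000:superuser>startfcmap -prep 0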

For more information about how to create and administer FlashCopy mappings, see
Chapter 8, “Advanced host and volume administration” on page 359.

Remote Copy panel


Click Remote Copy in the Copy Services menu to open the panel as shown in Figure 3-89.
This panel shows both individual and grouped Remote Copy relationships. The Actions menu
provides options to start, stop, switch, rename, and delete relationships, as well as to create
and modify relationships and consistency groups.

Figure 3-89 Remote Copy panel



Partnerships panel
Clicking Partnerships opens the panel shown in Figure 3-90. From this panel, it is possible to
set up a new partnership or delete an existing partnership with another IBM Storwize or SAN
Volume Controller system for the purposes of Remote Copy.

Figure 3-90 Partnerships panel

Figure 3-91 shows the Partnership properties, which are accessed via the Actions menu.
From here, it is possible to change the Link bandwidth and background copy rate. The Link
Bandwidth in Megabits per second (Mbps) specifies the bandwidth that is available for
replication between the local and partner systems. The Background copy rate specifies the
maximum percentage of the link bandwidth that can be used for background copy operations.
This value should be set so that there is enough bandwidth available to satisfy host write
operations in addition to the background copy.

Figure 3-91 Partnership properties
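
These properties map to parameters of the chpartnership command. The following is a
minimal sketch, assuming an existing partnership with a system named ITSO_V5000_2; the
bandwidth and copy rate values are examples:

IBM_Storwize:ITSO_V5000:superuser>chpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 ITSO_V5000_2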



3.4.6 Access menu
The Access menu provides access to the Users and Audit Log functions, as shown in
Figure 3-92.

Figure 3-92 Access menu



Users panel
Select Users in the Access menu to open the panel. The Users panel shows the defined user
groups and users for the system. The users that are listed can be filtered by user group.
Figure 3-93 shows the Users panel and the Users Actions menu.

Figure 3-93 Users panel

Creating a user group


Click Create User Group to open the Create User Group panel. Enter the group name, select
the role, then click Create, as shown in Figure 3-94. Hovering over each role will show a help
icon which describes the role.

Figure 3-94 Create User Group panel



Creating a user
Click New User to define a user to the system. Figure 3-95 shows the Users panel and the
New User option.

Figure 3-95 Create a new user



Using the Create User panel, as shown in Figure 3-96, you can configure the user name,
password, and authentication mode. The user name, password, group, and authentication
mode are mandatory fields but the public Secure Shell (SSH) key is optional. After the user is
defined, click Create.

Figure 3-96 Create User panel

The authentication mode can be set to local or remote. Select local if the IBM Storwize V5000
performs the authentication locally. Select remote if a remote service, such as an LDAP
server, authenticates the connection. If remote is selected, the remote authentication server
must be configured in the IBM Storwize V5000 by using the Settings menu → Security
panel.

An SSH configuration can be used to establish a more secure connection to the
command-line interface.
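
A user can also be defined from the CLI with the mkuser command. The following is a
minimal sketch; the user names, group names, password, and key file path are examples,
and the key file must first be copied to the system:

IBM_Storwize:ITSO_V5000:superuser>mkuser -name jon -usergrp Administrator -password Passw0rd
IBM_Storwize:ITSO_V5000:superuser>mkuser -name lee -usergrp Monitor -keyfile /tmp/id_rsa.pub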



Audit Log panel
Select Audit Log in the Access menu to open the panel. The audit log tracks action
commands that are issued through a CLI session or through the management GUI. The Audit
Log panel displays information about the command, such as the time stamp, user, and any
associated command parameters. The log can be filtered by date or by the Show entries
within... feature to reduce the number of items that are listed. It is not possible to delete or
alter the Audit log. Figure 3-97 shows the Audit Log panel.

Figure 3-97 Audit Log panel
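
The audit log can also be viewed from the CLI. The following is a minimal sketch that limits
the output to five entries; the count is an example:

IBM_Storwize:ITSO_V5000:superuser>catauditlog -first 5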



3.4.7 Settings menu
The Settings menu provides access to the Notifications, Network, Security, System, Support,
and GUI Preferences panels. Figure 3-98 shows the Settings menu.

Figure 3-98 Settings menu

Notifications panel
Select Notifications in the Settings menu to open the panel. The IBM Storwize V5000 can
use Simple Network Management Protocol (SNMP) traps, syslog messages, emails, and IBM
Call Home to notify users and IBM when events are detected. Each event notification method
can be configured to report all events or only alerts. Alerts are significant events that might
require user intervention. The event notification levels are critical, warning, and information.

The Notifications panel provides access to the Email, SNMP, and Syslog configuration
panels. IBM Call Home is an email notification for IBM Support. It is automatically configured
as an email recipient and is enabled when the Email event notification option is enabled by
following the Call Home wizard.



Enabling Email Notification option
As shown in Figure 3-99, the Notifications panel provides access to the Email configuration
panel. Click Email to open the panel and click Enable Notifications to open the Email Event
Notifications wizard.

Figure 3-99 Notifications panel: Email

Email Event Notifications wizard


The Email Event Notifications wizard, as shown in Figure 3-100, guides the user through
system location, account contact, and email configuration tasks.

Figure 3-100 Email Event Notification wizard



SNMP event notification
As shown in Figure 3-101, the Notifications panel provides access to the SNMP configuration
panel. Click SNMP to open the panel and enter the server details. Multiple servers can be
configured by clicking the + icon to add more servers.

Figure 3-101 SNMP configuration panel

Syslog event notification


As shown in Figure 3-102, the Notifications panel provides access to the Syslog configuration
panel. Click Syslog to open the panel and enter the server details. Multiple servers can be
configured by clicking the + icon to add more servers.

Figure 3-102 Syslog configuration panel
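
Both notification types can also be configured from the CLI. The following is a minimal
sketch; the server IP addresses and the community string are examples:

IBM_Storwize:ITSO_V5000:superuser>mksnmpserver -ip 9.174.156.20 -community public -error on -warning on
IBM_Storwize:ITSO_V5000:superuser>mksyslogserver -ip 9.174.156.21 -error on -warning on -info on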



Network panel
Select Network in the Settings menu to open the panel. As shown in Figure 3-103, the
Network panel provides access to the Management IP Addresses, Service IP Addresses,
Ethernet Ports, iSCSI, Fibre Channel Connectivity and Fibre Channel port configuration
panels.

Figure 3-103 Network panel



Management IP addresses
The Management IP address is the IP address of the system and is configured during initial
setup. The address can be an IPv4 address, IPv6 address, or both. The Management IP
address is logically assigned to Ethernet port 1 of each node canister, which allows for node
canister failover.

Another Management IP address can be logically assigned to Ethernet port 2 of each node
canister for more fault tolerance. If the Management IP address is changed, use the new IP
address to log in to the Management GUI again. Click Management IP Addresses and then
click the port that you want to configure (the corresponding port on the partner node canister
is also highlighted). Figure 3-104 shows the Management IP Addresses configuration panel.

Figure 3-104 Management IP Addresses configuration panel
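
The management IP address can also be changed from the CLI. The following is a minimal
sketch; the addresses are examples, and the GUI must be accessed on the new address
afterward:

IBM_Storwize:ITSO_V5000:superuser>chsystemip -clusterip 10.18.228.50 -gw 10.18.228.1 -mask 255.255.255.0 -port 1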



Service IP addresses
Service IP addresses are used to access the Service Assistant Tool. Each Node Canister
should have its own Service IP address. The address can be an IPv4 address, IPv6 address,
or both. The Service IP addresses are configured on Ethernet port 1 of each node canister.
Click Service IP Addresses and then select the control enclosure and node canister to
configure. Figure 3-105 shows the Service IP addresses configuration panel.

Figure 3-105 Service IP Addresses configuration panel

Ethernet ports
The Ethernet ports can be used for iSCSI connections, host attachment, and remote copy.
Click Ethernet Ports to view the available ports. Figure 3-106 shows the Ethernet Ports panel
and associated Actions menu, which can be used to modify the port settings.

Figure 3-106 Ethernet Ports panel



iSCSI configuration
The IBM Storwize V5000 supports iSCSI connections for hosts. Click iSCSI to configure the
System Name, iSCSI Aliases and iSNS settings. Figure 3-107 shows the iSCSI Configuration
panel.

Figure 3-107 iSCSI Configuration panel
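
The iSCSI IP addresses themselves are set per port and canister, either on the Ethernet
Ports panel or from the CLI with the cfgportip command. The following is a minimal sketch
for port 1 of canister node1; the addresses are examples:

IBM_Storwize:ITSO_V5000:superuser>cfgportip -node node1 -ip 10.18.228.140 -mask 255.255.255.0 -gw 10.18.228.1 1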

Fibre Channel connectivity


The Fibre Channel Connectivity panel displays the connectivity between nodes and other
storage systems and hosts that are attached through the Fibre Channel network. Click Fibre
Channel Connectivity and select the required view from the View connectivity for:
drop-down menu. Figure 3-108 shows the Fibre Channel panel with the All nodes, storage
systems, and hosts option selected.

Figure 3-108 Fibre Channel Connectivity panel



Fibre Channel ports
The Fibre Channel Ports panel displays the Fibre Channel ports used by the system. Each
port is allowed to communicate with hosts and storage systems. Figure 3-109 shows the
Fibre Channel Ports panel.

Figure 3-109 Fibre Channel Ports



Security panel
Select Security in the Settings menu to open the panel. The Security panel provides access
to the Remote Authentication wizard. Remote authentication must be configured to create
remote users on the IBM Storwize V5000. A remote user is authenticated on a remote
service, such as IBM Tivoli® Integrated Portal or a Lightweight Directory Access Protocol
(LDAP) provider.

Enabling remote authentication


Click Configure Remote Authentication to open the wizard. Follow the prompts to
configure Remote Authentication, as shown in Figure 3-110.

Figure 3-110 Security panel



System panel
Select System in the Settings menu to open the panel. This panel provides access to the
Date and Time, Licensing, and Upgrade Software configuration panels.

Date and time


Click Date and Time to configure the date and time manually or via a Network Time Protocol
(NTP) server. Figure 3-111 shows the Date and Time function of the Settings → System
panel.

Figure 3-111 System panel
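
The date and time can also be set from the CLI. The following is a minimal sketch; the time
zone ID and the NTP server address are examples, and the lstimezones command lists the
valid time zone IDs:

IBM_Storwize:ITSO_V5000:superuser>settimezone -timezone 520
IBM_Storwize:ITSO_V5000:superuser>chsystem -ntpip 9.174.156.11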

Licensing
The Licensing panel shows the current system licensing. The IBM Storwize V5000 uses the
same honor-based licensing as the Storwize V7000, which is based on per-enclosure
licensing.

The following optional licenses are available:


- FlashCopy
- Remote Copy
- Easy Tier
- External Virtualization

Figure 3-112 shows the Update License panel. In this example, an IBM Storwize V5000 is
configured as a control enclosure with two expansion enclosures and externally virtualizes
a storage system that consists of two disk enclosures. Therefore, five enclosures are licensed
for FlashCopy, Remote Copy, and Easy Tier, and two enclosures are licensed for External
Virtualization.



Figure 3-112 Update License

Updating the system


IBM recommends that you use the latest version of the software. The Update System panel
shows the current software level. If the system is connected to the Internet, it connects to the
IBM upgrade server to check whether the current level is the latest. If an update is available, a
direct link to the software is provided to make the code download process easier.

To update the software, the IBM Storwize V5000 software and the IBM Storwize V5000
Upgrade Test Utility must be downloaded. After the files are downloaded, it is best to check
the checksum to ensure that the files are sound. Read the release notes, verify compatibility,
and follow all IBM recommendations and prerequisites.

To update the software of the IBM Storwize V5000, click Update. Figure 3-113 shows the
Update System panel.

Figure 3-113 Update System panel
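
The update can also be driven from the CLI after the files are copied to the system. The
following is a minimal sketch; the file names are placeholders for the downloaded Upgrade
Test Utility and code image:

IBM_Storwize:ITSO_V5000:superuser>applysoftware -file IBM2078_INSTALL_upgradetest
IBM_Storwize:ITSO_V5000:superuser>svcupgradetest -v 7.4.0.0
IBM_Storwize:ITSO_V5000:superuser>applysoftware -file IBM2078_INSTALL_7.4.0.0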



Support panel
Select Support in the Settings menu to open the Support panel. As shown in Figure 3-114,
this panel provides access to the IBM support package, which is used by IBM to assist with
problem determination. Click Download Support Package to access the wizard.

Figure 3-114 Support panel

Using the Download Support Package wizard


The Download Support Package wizard provides a selection of various package types. IBM
Support provides direction on package type selection as required. To download the package,
select the type and click Download. The output file can be saved to the user’s workstation.
Figure 3-115 shows the Download Support Package wizard.

Figure 3-115 Download Support Package wizard
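
A support package can also be generated from the CLI. The following is a minimal sketch;
svc_snap collects the data, lsdumps lists the resulting file on the configuration node, and the
file can then be copied off with an scp client:

IBM_Storwize:ITSO_V5000:superuser>svc_snap
IBM_Storwize:ITSO_V5000:superuser>lsdumps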



Showing the full log listing
The Support panel also provides access to the files that are on the node canisters, as shown
in Figure 3-116. Click Show full log listing... to access the node canister files. To save a file
to the user’s workstation, select a file, right-click the file, and select Download. To change
the file listing to show the files on a partner node canister, select the node canister from the
node selection menu.

Figure 3-116 Full log listing



GUI Preferences panel
Select GUI Preferences in the Settings menu to open the panel. As shown in Figure 3-117,
this panel provides options to Refresh GUI objects, Restore default browser preferences, and
configure the Information Center web address.

There are also two further check boxes: one allows the extent size to be changed when
creating a new pool, and the other enables low graphics mode.

Figure 3-117 GUI Preferences panel

The option to Allow extent size selection during pool creation toggles the Advanced Settings
button when creating a new Pool as can be seen in Chapter 7, “Storage pools” on page 309.

The option to Enable low graphics mode can be useful for remote access over low bandwidth
links. The Function Icons no longer enlarge and list the available functions. However, you can
navigate by clicking a Function Icon and by using the breadcrumb navigation aid.

After selecting the “Enable low graphics mode” check box, the user must log off and log on
again for this to take effect.

For more information about the Function Icons, see 3.1.3, “System panel layout” on page 78.

For more information about the breadcrumb navigation aid, see 3.2.2, “Breadcrumb
navigation aid” on page 82.



Figure 3-118 shows the management GUI in low graphics mode.

Figure 3-118 Management GUI low graphics mode

3.5 Management GUI help


This section shows how to find help while you use the IBM Storwize V5000 management GUI:
- IBM Storwize V5000 Information Center
- e-Learning modules
- Embedded panel help
- Question mark help
- Hover help
- IBM endorsed YouTube videos



3.5.1 IBM Storwize V5000 Information Center
The best source of information for the IBM Storwize V5000 is the Information Center. After
clicking the help icon in the top-right corner of any panel, select Information Center for direct
access to the online version of the IBM Storwize V5000 Information Center, as shown in
Figure 3-119.

Figure 3-119 Information Center help

3.5.2 Watching an e-Learning video


IBM e-Learning videos are also available from the Learning and Tutorials section of the IBM
Storwize V5000 Information Center.

3.5.3 Embedded panel help


The IBM Storwize V5000 provides embedded help for each panel. Click the help icon in the
top-right corner of any panel and select the first item in the list, which is the panel name.

Figure 3-120 shows the selection of the panel help for the Volumes panel.

Figure 3-120 Embedded panel help



Figure 3-121 shows the embedded panel help for the Volumes panel and includes links to
various other panels and the Information Center.

Figure 3-121 Information panel

3.5.4 Hidden question mark help


The IBM Storwize V5000 provides a hidden question mark help feature for some settings and
items found in various configuration panels. This help feature is accessed by hovering next to
an item where a question mark is shown and a help bubble is displayed, as shown in
Figure 3-122.

Figure 3-122 Hidden question mark help



3.5.5 Hover help
The IBM Storwize V5000 provides hidden help tags, shown when you hover over various
functions and items, as shown in Figure 3-123.

Figure 3-123 Hover help

3.5.6 IBM endorsed YouTube videos


IBM endorses various YouTube videos for the IBM storage portfolio. Client feedback suggests
that these videos are a good tool to show management GUI navigation and tasks. Check the
IBM System Storage Channel for new and useful videos from IBM Storage at this website:
https://www.youtube.com/user/ibmstorage/




Chapter 4. Host configuration


In this chapter, we provide an overview of preparing open systems hosts for use with the IBM
Storwize V5000 and using the IBM Storwize V5000 GUI to create logical host objects. This
chapter is a prerequisite for creating and mapping IBM Storwize V5000 volumes to hosts,
which is described in Chapter 5, “Volume configuration” on page 201.

This chapter includes the following topics:


- Host attachment overview
- Preparing the host operating system
- Configuring hosts in IBM Storwize V5000

CLI: Refer to Appendix A, “Command-line interface setup and SAN Boot” on page 667 for
more information about the command-line interface setup as it applies to this chapter.



4.1 Host attachment overview
A host system is an open systems computer that is connected to a switch through a Fibre
Channel (FC) or Internet Small Computer System Interface (iSCSI) connection, or through a
direct-attached Serial Attached SCSI (SAS) connection.

In short, IBM Storwize V5000 supports the following host attachment protocols:
- 8 Gb FC or 10 Gb iSCSI/FCoE (optional host interface)
- 6 Gb SAS (standard host interface)
- 1 Gb iSCSI (standard host interface)

In this chapter, we assume that your hosts are physically ready and attached to your FC and
IP network, or directly attached if using SAS Host Bus Adapters (HBAs) and that you have
completed the steps that are described in Chapter 2, “Initial configuration” on page 25.

Follow basic switch and zoning recommendations and ensure that each host has at least two
network adapter ports, that each adapter is on a separate network (or at minimum in a
separate zone), and that at least one connection to each node exists per adapter. This setup
assures redundant paths for failover and bandwidth aggregation.

Further information about SAN configuration is provided in 2.2, “SAN configuration planning”
on page 30.

For SAS attachment, ensure that each host has at least one SAS HBA connection to each
IBM Storwize V5000 canister. Further information about configuring SAS attached hosts is
provided in 2.4, “SAS direct-attach planning” on page 33.

Before new volumes are mapped to the host of your choice, some preparation is advised to
help with ease of use and reliability. There are several steps required on a host
system to prepare for mapping new IBM Storwize V5000 volumes. Use the System Storage
Interoperation Center (SSIC) to check which code levels are supported to attach your host to
your storage. SSIC is an IBM web tool that checks the interoperation of host, storage,
switches, and multipathing drivers. Go to the SSIC web page to get the latest IBM Storwize
V5000 compatibility information:
http://ibm.com/systems/support/storage/ssic/interoperability.wss

The Storwize V5000 support portal can be found here:


http://www.ibm.com/support/entry/portal/product/system_storage/disk_systems/mid-range_disk_systems/ibm_storwize_v5000?productContext=-2033461677

The following sections provide worked examples of host configuration for each supported
attachment type on Microsoft Windows Server 2012 R2 and VMware ESXi 5.5. If you must
attach hosts running any other supported operating system, the required information can be
found in the IBM Storwize V5000 Knowledge Center:
http://www.ibm.com/support/knowledgecenter/STHGUJ/landing/V5000_welcome.html



4.2 Preparing the host operating system
In this section, we describe how to prepare the host attachment that is required to use IBM
Storwize V5000 with FC, iSCSI, or SAS connectivity.

In general, the following steps are required to prepare a host to connect to IBM Storwize
V5000:
1. Make sure that the latest supported system updates are applied to your host operating
system.
2. Make sure that the HBAs are physically installed in the host.
3. Make sure that the latest supported HBA firmware and driver levels are installed on your
host.
4. Configure the HBA parameters. While settings are given for each host OS in the following
sections, review the IBM Storwize V5000 Knowledge Center for the latest supported settings.
5. Configure host I/O parameters (such as the disk I/O timeout value).
6. Install and configure multipath software.
7. Determine the host WWPNs.
8. Connect the HBA ports to switches using the appropriate cables, or directly attach to the
ports on IBM Storwize V5000.
9. Configure the switches, if applicable.
10.Configure SAN Boot (optional).

Steps 1 to 9 are covered in this chapter.

4.2.1 Windows 2012 R2: Preparing for FC attachment


The following steps must be completed in preparation for FC attachment.

Installing and updating supported HBAs


Install a supported HBA model with the latest supported firmware and drivers for your
configuration. The latest supported HBAs and levels for Windows 2012 R2 are available at
this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004971

Install the driver using Windows Device Manager or vendor tools, such as QLogic
QConvergeConsole, Emulex HBAnyware or Brocade HBA Software Installer. Also, check and
update the firmware level of the HBA using the manufacturer’s provided tools. Always check
the readme file to see whether there are Windows registry parameters that should be set for
the HBA driver.

Important: When using the IBM Subsystem Device Driver DSM (SDDDSM) for multipath
support, the HBA driver must be a Storport miniport driver.



Configuring Brocade HBAs for Windows
After the device driver and firmware are installed, you must configure the HBAs. To perform
this task, use the Brocade host connectivity manager (HCM) software or reboot into the HBA
BIOS, load the adapter defaults, and set the following values:
- Host Adapter BIOS: Disabled (unless the host is configured for SAN Boot)
- Queue depth: 4

Configuring QLogic HBAs for Windows


After the device driver and firmware are installed, you must configure the HBAs. To complete
this task, use the QLogic QConvergeConsole software or reboot into the HBA BIOS, load the
adapter defaults, and set the following values:
- Host Adapter BIOS: Disabled (unless the host is configured for SAN Boot)
- Adapter Hard Loop ID: Disabled
- Connection Options: 1 (point-to-point only)
- LUNs Per Target: 0
- Port Down Retry Count: 15

Configuring Emulex HBAs for Windows


After the device driver and firmware are installed, you must configure the HBAs. To complete
this task, use the Emulex HBAnyware software or reboot into the HBA BIOS, load the
defaults, and set the topology to 1 (F_Port Fabric).

Setting the Windows disk I/O timeout value


To configure the disk I/O timeout value on Windows 2012 R2, complete the following steps:
1. Press Win + R.
2. In the dialog box, enter regedit and press Enter. This will open the registry editor.
3. In the registry editor, go to
HKEY_LOCAL_MACHINE\System\CurrentControlSet\services\Disk\TimeOutValue.
4. Double-click TimeOutValue, or right-click it and select Modify, to view and modify the
current timeout value. The window shown in Figure 4-1 displays.

Figure 4-1 Editing the disk I/O timeout value on Windows 2012 R2

The default timeout is 60 seconds, and this is the value advised by IBM. However, depending
on your requirements, you may want to adjust this higher or lower. The following link explains
in detail why it may be useful to adjust the timeout:
http://blogs.msdn.com/b/san/archive/2011/09/01/the-windows-disk-timeout-value-understanding-why-this-should-be-set-to-a-small-value.aspx
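
The same change can also be made from an elevated PowerShell prompt instead of the
registry editor. The following is a minimal sketch; 60 (decimal) is the value advised by IBM:

PS C:\> Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Disk' -Name 'TimeOutValue' -Value 60
PS C:\> Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Disk' -Name 'TimeOutValue'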



Installing Microsoft MPIO multipathing software
Microsoft Multipath Input/Output (MPIO) solutions are designed to work with device-specific
modules (DSMs) that are written by vendors. The MPIO driver package does not form a
complete solution on its own. By using this joint solution, the storage vendors can design
device-specific solutions that are tightly integrated with the Microsoft Windows operating
system. Since Microsoft Windows 2008, a built-in Device Specific Module (DSM) is included
that is designed to work with storage arrays that support the Asymmetric Logical Unit Access
(ALUA) control model, that is, active-active storage controllers.

The intent of MPIO is to provide better integration of a multipath storage solution with the
operating system. It also allows the use of multipath in the SAN infrastructure during the boot
process for SAN Boot hosts.

To enable MPIO on Windows 2012 R2, complete the following steps:


1. Open Server Manager.
2. Click Manage → Add Roles and Features.
3. On the Installation Type pane, select Role-based or feature-based installation and
click Next.
4. On the Server Selection pane, select the server you want to configure and click Next.
5. On the Server Roles pane, click Next.
6. On the Features pane, select Multipath I/O and click Next.
7. On the next pane, click Install and wait for the installation to complete.
8. After this is complete, restart the server to enable MPIO.

For FC attachment, the installation of IBM Subsystem Device Driver DSM (SDDDSM) is also
required. The SDDDSM setup program should enable MPIO automatically. Otherwise, follow
the foregoing steps to enable MPIO manually.
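
The same feature can also be enabled from an elevated PowerShell prompt. The following is
a minimal sketch; a restart is still required afterward:

PS C:\> Install-WindowsFeature -Name 'Multipath-IO'
PS C:\> Restart-Computer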

Installing the IBM SDDDSM multipathing software


The IBM Subsystem Device Driver DSM (SDDDSM) is an IBM multipath I/O solution based
on Microsoft MPIO technology. This is a device-specific module designed to support IBM
storage devices on a range of platforms, including Microsoft Windows. Its purpose is to
support a fully redundant multipath configuration environment for IBM Storage Systems.

To ensure correct multipathing with IBM Storwize V5000 when using the FC protocol,
SDDDSM must be installed on Microsoft Windows hosts. To install SDDDSM, complete the
following steps:
1. Check the SDDDSM download matrix to determine the correct level of SDDDSM to install
on Windows 2012 R2 for IBM Storwize V5000:
http://ibm.com/support/docview.wss?uid=ssg1S7001350#WindowsSDDDSM
2. Download the software package from this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000350#Storwize
3. Copy the software package to your Windows server and on the server, run setup.exe to
install SDDDSM. A command prompt window opens, as shown in Figure 4-2. Enter Yes to
confirm the installation.



Figure 4-2 Running setup.exe to install SDDDSM on Windows 2012 R2

4. After the setup completes, enter Yes to restart the system, as shown in Figure 4-3.

Figure 4-3 Completion of the SDDDSM installation on Windows 2012 R2

After the server comes back online, the installation should be complete. You can check the
installed driver version by opening a PowerShell terminal and running the datapath query
version command, as shown in Example 4-1.

Example 4-1 Query SDDDSM version details


PS C:\Program Files\IBM\SDDDSM> .\datapath.exe query version
IBM SDDDSM Version 2.4.3.5-3
Microsoft MPIO Version 6.2.9200.16384

For more information about configuring SDDDSM for all supported platforms, refer to the
Multipath Subsystem Device Driver User’s Guide. This can be found at the following website:
http://www-01.ibm.com/support/docview.wss?rs=540&context=ST52G7&q=ssg1*&uid=ssg1S7000303&loc=en_US&cs=utf-8&lang=en%20en



Determining host WWPNs
The Worldwide Port Names (WWPNs) of the FC HBA are required to correctly zone switches
and configure host attachment on the IBM Storwize V5000. These can be found via vendor
tools, the HBA BIOS, the native Windows command line, or SDDDSM.

To determine the host WWPNs using SDDDSM, we can open a PowerShell terminal and run
the datapath query wwpn command, as shown in Example 4-2.

Example 4-2 The datapath query wwpn command


PS C:\Program Files\IBM\SDDDSM> .\datapath.exe query wwpn
Adapter Name PortWWN
Scsi Port2: 21000024FF2D0CC2
Scsi Port3: 21000024FF2D0CC3

After the WWPNs are known, connect the cables and perform any necessary switch
configuration (such as zoning). The host is now prepared to connect to the IBM Storwize
V5000.

These are the basic steps to prepare a Windows 2012 R2 host for FC attachment. For
information about configuring FC attachment on the IBM Storwize V5000 side, see “Creating
FC hosts” on page 186. For information about discovering mapped FC volumes from
Windows 2012 R2, see “Discovering FC volumes from Windows 2012 R2” on page 220.

4.2.2 Windows 2012 R2: Preparing for iSCSI attachment


This procedure is described in the following sections.

Installing and updating supported HBAs


Install a supported HBA model with the latest supported firmware and drivers for your
configuration. The latest supported HBAs and levels for Windows 2012 R2 are available at
this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004971

Install the driver using Windows Device Manager or vendor tools. Also, check and update the
firmware level of the HBA using the manufacturer’s provided tools. Always check the readme
file to see whether there are Windows registry parameters that should be set for the HBA
driver.

Important: For Converged Network Adapters (CNAs) which are capable of both FC and
iSCSI, it is important to ensure that the Ethernet networking driver is installed in addition to
the FCoE driver. This is required for configuring iSCSI.

If using a hardware iSCSI HBA, refer to the manufacturer’s documentation and IBM Storwize
V5000 Knowledge Center for further details on hardware and host OS configuration. The
following section describes how to configure iSCSI using the software initiator.



Enabling the iSCSI Initiator
On Microsoft Windows systems, the Microsoft iSCSI Initiator can be used to connect to
external storage attached via Ethernet. Using the iSCSI protocol enables the use of block
storage in a SAN at low cost compared to FC or SAS.

In Windows 2012 R2, the Microsoft iSCSI Initiator is preinstalled. To start the iSCSI Initiator,
open Server Manager and go to Tools → iSCSI Initiator, as shown in Figure 4-4.

Figure 4-4 Starting the iSCSI Initiator

If this is the first time you have started the iSCSI Initiator, the window shown in Figure 4-5
displays. Click Yes to enable the iSCSI Initiator service, which should now start automatically
on boot.

Figure 4-5 Enabling the iSCSI Initiator service on boot



The iSCSI Initiator window should open. Select the Configuration tab, as shown in
Figure 4-6.

Figure 4-6 Configuring the iSCSI Initiator

Make a note of the Initiator Name. This is the iSCSI Qualified Name (IQN) for the host, which
is required for configuring iSCSI host attachment on the IBM Storwize V5000. The IQN
identifies the host in the same way that WWPNs identify an FC or SAS host.

From the Configuration tab, it is possible to change the Initiator Name, enable CHAP
authentication, and more; however, these tasks are beyond the scope of our basic setup.
CHAP authentication is disabled by default. For more information, Microsoft has published a
detailed guide on configuring the iSCSI Initiator at this website:
http://technet.microsoft.com/en-us/library/ee338476%28v=ws.10%29.aspx
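
The IQN can also be read from PowerShell rather than from the GUI. The following is a
minimal sketch; the NodeAddress field of the iSCSI port entry contains the initiator name:

PS C:\> Get-InitiatorPort | Select-Object ConnectionType, NodeAddress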



Configuring Ethernet ports
It is advised to use separate dedicated ports for host management and iSCSI. In this case, we
need to configure IPs on the iSCSI ports in the same subnet and VLAN as the external
storage we want to attach to.

To configure Ethernet port IPs on Windows 2012 R2, complete the following steps:
1. Go to Control Panel → Network and Internet → Network and Sharing Center. This
should take you to the window shown in Figure 4-7.

Figure 4-7 Network and Sharing Center in Windows 2012 R2

In this case, we have two networks visible to the system. The first network is the one we
are using to connect to the server, consisting of a single dedicated Ethernet port for
management. The second network is our iSCSI network, consisting of two dedicated
Ethernet ports for iSCSI. It is advised to have at least two dedicated ports for failover
purposes.



2. To configure an IP address, click one of the iSCSI Ethernet connections (in this case,
Ethernet 3 or Ethernet 4). The window shown in Figure 4-8 displays.

Figure 4-8 Ethernet status

3. To configure the IP address, click Properties. The window shown in Figure 4-9 displays.

Figure 4-9 Ethernet properties



4. If you are using IPv6, select Internet Protocol Version 6 (TCP/IPv6) and click
Properties. Otherwise, select Internet Protocol Version 4 (TCP/IPv4) and
click Properties to configure an IPv4 address.
5. For IPv4, the window shown in Figure 4-10 displays. To manually set the IP, select “Use
the following address” and enter an IP address, subnet mask and gateway. Set the DNS
server address if required. Click OK to confirm.

Figure 4-10 Configuring an IPv4 address in Windows 2012 R2

6. Repeat the previous steps for each port you want to configure for iSCSI attachment.

The Ethernet ports should now be prepared for iSCSI attachment.
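
The same addressing can also be scripted from an elevated PowerShell prompt. The
following is a minimal sketch; the interface aliases, addresses, and prefix length are
examples that must match your iSCSI network:

PS C:\> New-NetIPAddress -InterfaceAlias 'Ethernet 3' -IPAddress 10.18.228.200 -PrefixLength 24
PS C:\> New-NetIPAddress -InterfaceAlias 'Ethernet 4' -IPAddress 10.18.229.200 -PrefixLength 24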

Setting the Windows registry keys


We can tweak the system registry to make iSCSI operations more reliable on Windows 2012
R2. To do this, complete the following steps; a PowerShell sketch that applies the same keys
follows the steps. Also check the guidance that is advised for your application:
1. Press Win + R.
2. In the dialog box, enter regedit and press Enter. This will open the registry editor.
3. In the registry editor, locate the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<bus ID>\Parameters\LinkDownTime
Confirm that the value for the LinkDownTime key is 120 (decimal value), and, if necessary,
change the value to 120.
4. In the registry editor, locate the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<bus ID>\Parameters\MaxRequestHoldTime
Confirm that the value for the MaxRequestHoldTime key is 120 (decimal value), and, if
necessary, change the value to 120.

5. In the registry editor, locate the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<bus ID>\Parameters\MaxPendingRequests
Confirm that the value for the MaxPendingRequests key is 2048 (decimal value), and, if
necessary, change the value to 2048.
6. In the registry editor, locate the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue
Confirm that the value for the TimeOutValue key is 60 (decimal value), and, if necessary,
change the value to 60. For more details on configuring the disk I/O timeout value, see
“Windows 2012 R2: Preparing for FC attachment” on page 157.
7. Restart Windows for these changes to take effect.
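
The same values can also be set from PowerShell with Set-ItemProperty. The following is a
sketch only; the <bus ID> component varies per system and must first be identified, for
example, in the registry editor:

# Device class key for the iSCSI initiator parameters; replace <bus ID> first
$class = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}"
Set-ItemProperty -Path "$class\<bus ID>\Parameters" -Name LinkDownTime -Value 120
Set-ItemProperty -Path "$class\<bus ID>\Parameters" -Name MaxRequestHoldTime -Value 120
Set-ItemProperty -Path "$class\<bus ID>\Parameters" -Name MaxPendingRequests -Value 2048
# Disk I/O timeout, in seconds
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Disk" -Name TimeOutValue -Value 60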

Multipath support for iSCSI on Windows


For multipathing with iSCSI, we need to enable Microsoft Multipath Input/Output (MPIO). See
“Windows 2012 R2: Preparing for FC attachment” on page 157 for instructions on enabling
MPIO.

It is important to note that IBM Subsystem Device Driver DSM (SDDDSM) is not supported
for iSCSI attachment, so do not follow the steps to install it as you would for FC or SAS.

These are the basic steps to prepare a Windows 2012 R2 host for iSCSI attachment. For
information about configuring iSCSI attachment on the IBM Storwize V5000 side, see 4.3.3,
“Creating iSCSI hosts” on page 191. For information about
discovering mapped iSCSI volumes from Windows 2012 R2, see “Discovering iSCSI volumes
from Windows 2012 R2” on page 229.

4.2.3 Windows 2012 R2: Preparing for SAS attachment


This procedure is described in the following sections.

Installing and updating supported HBAs


Install a supported SAS HBA with the latest supported firmware and drivers for your
configuration. A list of the latest supported HBAs and levels for Windows 2012 R2 is
available at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004971

Install the driver using Windows Device Manager or vendor tools. Also, check and update the
firmware level of the HBA using the manufacturer’s provided tools. Always check the readme
file to see whether there are Windows registry parameters that should be set for the HBA
driver.

For LSI cards, the following website provides some guidance on flashing the HBA firmware
and BIOS:

http://mycusthelp.info/LSI/_cs/AnswerDetail.aspx?sSessionID=&aid=8352

Determining host WWPNs


The Worldwide Port Names (WWPNs) of the SAS HBA are required to configure host
attachment on the IBM Storwize V5000.

The host WWPNs can be found via vendor tools or the HBA BIOS. However, the easiest way
is to connect the SAS cables to the ports on the IBM Storwize V5000, log on to the Storwize
CLI via SSH and run svcinfo lssasportcandidate, as shown in Example 4-3.

Example 4-3 Finding host WWPNs


IBM_Storwize:ITSO_V5000:superuser>svcinfo lssasportcandidate
sas_WWPN
500062B200556140
500062B200556141

Configuring SAS HBAs on Windows


These are the advised settings:
• I/O Timeout for Block Devices: 10
• I/O Timeout for Sequential Devices: 10
• I/O Timeout for Other Devices: 10
• LUNs to Scan for Block Devices: All
• LUNs to Scan for Sequential Devices: All
• LUNs to Scan for Other Devices: All

Multipath support for SAS on Windows


For multipathing with SAS, we need to enable Microsoft Multipath Input/Output (MPIO) and
install IBM Subsystem Device Driver DSM (SDDDSM). See 4.2.1, “Windows 2012 R2:
Preparing for FC attachment” on page 157 for instructions on this.

These are the basic steps to prepare a Windows 2012 R2 host for SAS attachment. For
information about configuring SAS attachment on the IBM Storwize V5000 side, see 4.3.5,
“Creating SAS hosts” on page 198. For information about discovering mapped SAS volumes
from Windows 2012 R2, see “Discovering SAS volumes from Windows 2012 R2” on
page 235.

4.2.4 VMware ESXi 5.5: Preparing for FC attachment


This procedure is described in the following sections.

Installing and updating supported HBAs


Install a supported HBA with the latest supported firmware and drivers for your configuration.
A list of the latest supported HBAs and levels for VMware ESXi 5.5 is available at this
website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004971

Install the driver using VMware vSphere Client, the ESXi CLI or vendor tools. Also, check and
update the firmware level of the HBA using the manufacturer’s provided tools. Always check
the readme file to see whether there is further configuration required for the HBA driver.

See the VMware Compatibility Guide for more information:


http://www.vmware.com/resources/compatibility/search.php

Configuring Brocade HBAs for VMware ESX


After the firmware is installed, load the default settings for all your adapters that are installed
on the host system and make sure that the Adapter BIOS is disabled, unless SAN Boot is
used.

Configuring QLogic HBAs for VMware ESX
After the firmware is installed, you must configure the HBAs. To perform this task, use the
QLogic QConvergeConsole software or the HBA BIOS, load the adapter defaults, and set the
following values:
• Advised Host Adapter Settings:
– Host Adapter BIOS: Disabled (unless the host is configured for SAN Boot)
– Frame size: 2048
– Loop Reset Delay: 5 (minimum)
– Adapter Hard Loop ID: Disabled
– Hard Loop ID: 0
– Spinup Delay: Disabled
– Connection Options 1: Point-to-point only
– Fibre Channel Tape Support: Disabled
– Data Rate: 2
• Advised Advanced Adapter Settings:
– Execution throttle: 100
– LUNs per Target: 0
– Enable LIP Reset: No
– Enable LIP Full Login: Yes
– Enable Target Reset: Yes
– Login Retry Count: 8
– Link Down Timeout: 10
– Command Timeout: 20
– Extended event logging: Disabled (enable it for debugging only)
– RIO Operation Mode: 0
– Interrupt Delay Timer: 0

Configuring Emulex HBAs for VMware ESXi


After the firmware is installed, load the default settings for all your adapters that are installed
on the host system and make sure that the Adapter BIOS is disabled, unless SAN Boot is
used.

Multipath support for FC on VMware ESXi


The ESXi server has its own multipathing software. You do not need to install a multipathing
driver on the ESXi server or on the guest operating systems. The ESXi multipathing policy
supports the following operating modes:
• Round Robin
• Fixed
• Most Recently Used (MRU)

The IBM Storwize V5000 is an active/active storage device. For VMware ESXi 5.0 and later,
the suggested multipathing policy is Round Robin. Round Robin performs dynamic load
balancing for I/O. If you do not want to have the I/O balanced over all available paths, the
Fixed policy is supported. This policy setting can be configured for every volume. Set this
policy after IBM Storwize V5000 volumes have been mapped to the ESXi host. For more
information about mapping volumes to hosts, see “Discovering FC volumes from VMware
ESXi 5.5” on page 235.
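
The policy can also be set per volume from the ESXi CLI. This is a sketch; the naa identifier
is an example and should be replaced with the identifier of your mapped volume, which can be
listed with esxcli storage nmp device list:

esxcli storage nmp device set --device naa.6005076300af004b4800000000000027 --psp VMW_PSP_RR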

Determining host WWPNs


The Worldwide Port Names (WWPNs) of the FC HBA are required to correctly zone switches
and configure host attachment on the IBM Storwize V5000. On VMware ESXi 5.5, these can
be found via the VMware vSphere Client.

Note: In VMware ESXi 5.5, some new features can only be accessed through the vSphere
Web Client. However, we do not demonstrate any of these features, so this and all of the
following examples will continue to focus on using the desktop client.

Connect to the ESXi server (or VMware vCenter) using the VMware vSphere Client and
browse to the Configuration tab. HBA WWPNs are listed under Storage Adapters, as
shown in Figure 4-11.

Figure 4-11 FC WWPNs in VMware vSphere Client
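
Alternatively, the HBAs and their WWPNs can be listed from the ESXi CLI. As a sketch, the
following command prints each adapter with a UID string that contains the WWNN and WWPN:

esxcli storage core adapter list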

These are the basic steps required to prepare a VMware ESXi 5.5 host for FC attachment.
For information about configuring FC attachment on the IBM Storwize V5000 side, see 4.3.1,
“Creating FC hosts” on page 186. For information about discovering mapped FC volumes
from VMware ESXi 5.5, see “Discovering FC volumes from VMware ESXi 5.5” on page 235.

4.2.5 VMware ESXi 5.5: Preparing for iSCSI attachment


This procedure is described in the following sections.

Installing and updating supported HBAs


Install a supported HBA model with the latest supported firmware and drivers for your
configuration. The latest supported HBAs and levels for VMware ESXi 5.5 are available at this
website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004971

Install the driver using VMware vSphere Client, the ESXi CLI or vendor tools. Also, check and
update the firmware level of the HBA using the manufacturer’s provided tools. Always check
the readme file to see whether there is further configuration required for the HBA driver.

See the VMware Compatibility Guide for more information:


http://www.vmware.com/resources/compatibility/search.php

Important: For Converged Network Adapters (CNAs) that are capable of both FCoE and
iSCSI, it is important to ensure that the Ethernet networking driver is installed in addition to
the FCoE driver. This is required for configuring iSCSI.

If using a hardware iSCSI HBA, refer to the manufacturer’s documentation and IBM Storwize
V5000 Knowledge Center for further details on hardware and host OS configuration. The
following section describes how to configure iSCSI using the software initiator.

Enabling the iSCSI Initiator


The iSCSI initiator is installed by default on VMware ESXi 5.5 and only needs to be enabled,
which is done by adding and enabling the iSCSI Software Adapter. To do so, complete the
following steps:
1. Connect to your ESXi server using the VMware vSphere Client. Browse to the
Configuration tab and select Storage Adapters. This should bring up the view shown in
Figure 4-12.

Figure 4-12 Adding a new storage adapter via VMware vSphere Client

2. Click Add... and the window shown in Figure 4-13 displays. Select Add Software iSCSI
Adapter and click OK.

Figure 4-13 Adding the iSCSI software adapter

3. The window shown in Figure 4-14 displays. Click OK to confirm.

Figure 4-14 Confirm addition of the iSCSI Software Adapter

4. The iSCSI Software Adapter should now be listed under Storage Adapters, as shown in
Figure 4-15.

Figure 4-15 The Storage Adapters view showing the newly added iSCSI Software Adapter

5. To find the iSCSI initiator name and configure the iSCSI Software Adapter, right-click the
adapter and click Properties.... The window shown in Figure 4-16 displays. Make a note
of the initiator name and perform any other necessary configuration.

Figure 4-16 Properties of the iSCSI Software Adapter

The VMware ESXi 5.5 iSCSI initiator is now enabled and ready for use with IBM Storwize
V5000. The next step required is to configure the Ethernet adapter ports.
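
The same task can also be performed from the ESXi CLI. As a sketch, the first command
enables the software initiator and the second lists the adapter, including its IQN:

esxcli iscsi software set --enabled=true
esxcli iscsi adapter list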

Configuring Ethernet ports


To configure Ethernet adapter ports for iSCSI attachment in VMware ESXi 5.5, complete the
following steps:
1. Browse to the Configuration tab and select Networking, as shown in Figure 4-17.

Figure 4-17 The network configuration tab in VMware vSphere Client

2. Click Add Networking to start the Add Network Wizard, as shown in Figure 4-18. Select
VMkernel and click Next.

Figure 4-18 The Add Network Wizard in VMware vSphere Client

3. Create a new vSwitch by selecting one or more network interfaces to use for iSCSI traffic
and click Next, as shown in Figure 4-19.

Figure 4-19 Creating a new vSwitch using two physical adapter ports

Note: It is advised to select at least two interfaces for failover purposes.

4. The wizard will then prompt you to create a single VMkernel port within the new vSwitch,
as shown in Figure 4-20. Enter a meaningful Network Label and if using VLAN tagging,
enter a VLAN ID or select All from the drop-down box. Other options are available; see the
VMware documentation for details on configuring these. Click Next to continue.

Figure 4-20 Configuring a VMkernel port on the new vSwitch

5. Enter an IP address for your iSCSI network. You should use a dedicated network for iSCSI
traffic, as shown in Figure 4-21.

Figure 4-21 Configuring IP settings for the new VMkernel port

6. Click Next to see what will be added to the host network settings and Finish to exit from
iSCSI configuration. The network configuration page should now look like Figure 4-22.

Figure 4-22 The network configuration page with the newly created vSwitch and VMkernel port

7. To add further VMkernel ports to this switch, click Properties to open the window shown in
Figure 4-23, then click Add. This will re-open the same wizard we have just gone through,
therefore, follow the same steps to add a second VMkernel port.

Figure 4-23 Adding another VMkernel port in the vSwitch properties

8. In this example, we create a second VMkernel port with IP 192.168.1.4. The network
configuration page should now look like Figure 4-24.

Figure 4-24 The finished network configuration page

9. You must now bind the VMkernel ports to the iSCSI Software Adapter. To do this, go to
Storage Adapters, right-click the iSCSI Software Adapter and click Properties.... Then,
go to Network Configuration and click Add... to start adding the VMkernel ports. The
window shown in Figure 4-25 displays. Click OK to add the first port, then repeat for the
second.

Figure 4-25 Adding VMkernel ports to the iSCSI Software Adapter

Important: If the VMkernel ports are reported as “Not Compliant”, complete the steps
described in “Multipath support for iSCSI on VMware ESXi” on page 181 and try
again.

10.The Network Configuration tab should now look like Figure 4-26. Click Close to finish
binding the VMkernel ports.

Figure 4-26 The VMkernel ports added to the iSCSI Software Adapter

11.The window shown in Figure 4-27 displays. Click Yes to rescan the adapter and complete
this procedure.

Figure 4-27 Rescan the iSCSI Software Adapter
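
The port binding can also be done from the ESXi CLI. This sketch assumes the software iSCSI
adapter is vmhba33 and the VMkernel ports are vmk1 and vmk2; substitute the names reported
on your system:

esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2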

Multipath support for iSCSI on VMware ESXi
As explained in 4.2.4, “VMware ESXi 5.5: Preparing for FC attachment” on page 168, the
ESXi server has its own multipathing software.

For iSCSI, there is some extra configuration required in the VMkernel port properties to
enable path failover. Each VMkernel port must map to just one physical adapter port, which is
not the default setting. To fix this, complete the following steps:
1. Browse to the Configuration tab and select Networking. Click Properties... next to the
vSwitch you configured for iSCSI to open the window shown in Figure 4-28.

Figure 4-28 vSwitch properties with VMkernel ports listed

2. Select one of the VMkernel ports and click Edit.... The window shown in Figure 4-29
displays.

Figure 4-29 Editing a VMkernel port

3. Go to NIC Teaming, select the check box “Override switch failover order,” and then ensure
that each port is tied to just one physical adapter port, as shown in Figure 4-30.

Figure 4-30 Configuring a VMkernel port to bind to a single physical adapter port

These are the basic steps required to prepare a VMware ESXi 5.5 host for iSCSI
attachment. For information about configuring iSCSI attachment on the IBM Storwize V5000
side, see 4.3.3, “Creating iSCSI hosts” on page 191. For information about discovering
mapped iSCSI volumes from VMware ESXi 5.5, see “Discovering iSCSI volumes from
VMware ESXi 5.5” on page 237.

For more information about configuring iSCSI attachment on the VMware ESXi side, the
following white paper published by VMware is a useful resource:

http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-binding.pdf

4.2.6 VMware ESXi 5.5: Preparing for SAS attachment


This procedure is described in the following sections.

Installing and updating supported HBAs


Install a supported HBA with the latest supported firmware and drivers for your configuration.
A list of the latest supported HBAs and levels for VMware ESXi 5.5 is available at this
website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004971

Install the driver using VMware vSphere Client, the ESXi CLI or vendor tools. Also, check and
update the firmware level of the HBA using the manufacturer’s provided tools. Always check
the readme file to see whether there is further configuration required for the HBA driver.

See the VMware Compatibility Guide for more information:


http://www.vmware.com/resources/compatibility/search.php

For LSI cards, the following website provides some guidance on flashing the HBA firmware
and BIOS:
http://mycusthelp.info/LSI/_cs/AnswerDetail.aspx?sSessionID=&aid=8352

Configuring SAS HBAs on VMware ESXi


In this example, we used an LSI 9207-8e card and did not have to configure HBA parameters
beyond the default settings. It is advised to check the parameters via the HBA BIOS or vendor
tools to confirm that they are suitable for your requirements.

Multipath support for SAS on VMware ESXi


As with FC, we can use native ESXi multipathing for SAS attachment on VMware ESXi 5.5.
See 4.2.4, “VMware ESXi 5.5: Preparing for FC attachment” on page 168 for more details.

Determining host WWPNs


The Worldwide Port Names (WWPNs) of the SAS HBA are required to configure host
attachment on the IBM Storwize V5000.

The host WWPNs are not directly available through VMware vSphere Client; however, they can be
found using vendor tools or the HBA BIOS. The method described in 4.2.3, “Windows 2012
R2: Preparing for SAS attachment” on page 167 also works.

These are the basic steps required to prepare a VMware ESXi 5.5 host for SAS
attachment. For information about configuring SAS attachment on the IBM Storwize V5000
side, see 4.3.5, “Creating SAS hosts” on page 198. For information about discovering
mapped SAS volumes from VMware ESXi 5.5, see “Discovering SAS volumes from VMware
ESXi 5.5” on page 248.

For further information and guidance on attaching storage with VMware ESXi 5.5, the
following document published by VMware is a useful resource:
http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-552-storage-guide.pdf

4.3 Configuring hosts in IBM Storwize V5000
This section describes how to create FC, iSCSI and SAS hosts using the IBM Storwize V5000
GUI. We assume that the hosts have been previously prepared as described in 4.2,
“Preparing the host operating system” on page 157, and FC addresses, SAS addresses or
iSCSI initiator names were recorded for host creation purposes.

Considerations when configuring hosts in IBM Storwize V5000


When creating a host object in IBM Storwize V5000, it is important to verify the configuration
limits and restrictions, which are published at the following website:

http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004971

Creating a host object using the GUI


To create a host object, complete the following steps:
1. From the IBM Storwize V5000 GUI, open the hosts view by navigating to Hosts → Hosts,
as shown in Figure 4-31.

Figure 4-31 Opening the hosts view

Doing this should load the page shown in Figure 4-32.

Figure 4-32 The hosts view with no host objects configured

2. Click the Add Host button to open the Add Host wizard. This should open the window
shown in Figure 4-33.

Figure 4-33 Choosing the host type in the Add Host wizard

3. Choose whether to create an FC host, an iSCSI host, or a SAS host, and follow the steps
in the wizard. The following sections provide guidance for each attachment type.

4.3.1 Creating FC hosts


To create an FC host, complete the following steps:
1. Click Fibre Channel Host, as shown in Figure 4-34. The Fibre Channel host configuration
wizard opens.

Figure 4-34 The FC host configuration wizard

2. Enter a descriptive Host Name and click the Fibre Channel Ports drop-down menu to
view a list of all FC WWPNs visible to the system, as shown in Figure 4-35.

Figure 4-35 FC WWPNs visible to the system

3. If you have previously prepared an FC host as described in 4.2, “Preparing the host
operating system” on page 157, then the WWPNs you recorded in this section should
appear. If they do not appear in the list, verify that you have completed all of the required
steps and check your SAN zoning and cabling, then click Rescan in the configuration
wizard.

Note: It is possible to enter WWPNs manually. However, if these are not visible to IBM
Storwize V5000, then the host object will appear as offline and be unusable for I/O
operations until the ports become visible.

4. Select the WWPNs for your host and click Add Port to List for each. These will be added
to Port Definitions, as shown in Figure 4-36.

Figure 4-36 Adding the host WWPNs

5. In the Advanced Settings, it is possible to choose the Host Type. If using HP-UX,
OpenVMS or TPGS, this needs to be configured. Otherwise, the default option (Generic)
is fine.
6. From here, it is also possible to set the I/O Groups that your host will have access to. It is
important that host objects belong to the same I/O group(s) as the volumes you want to
map, otherwise the host will not have visibility of these volumes. For more information, see
Chapter 5, “Volume configuration” on page 201.

Note: IBM Storwize V5000 supports a maximum of four nodes per system, arranged as
two I/O groups per cluster. Due to the host object limit per I/O group, for maximum host
connectivity, it is best to create hosts utilizing single I/O groups.

7. Click Add Host and the wizard will complete, as shown in Figure 4-37.

Figure 4-37 Completion of the Add Host wizard

8. Click Close to return to the host view, which should now list your newly created host
object, as shown in Figure 4-38.

Figure 4-38 The hosts view with the newly created host object listed

9. Repeat these steps for all the FC hosts.

After the host objects have been created, see Chapter 5, “Volume configuration” on page 201
to create volumes and map them to the hosts.
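
An FC host object can also be created with the mkhost command from the Storwize CLI. The
following is a sketch with an example host name and example WWPNs; substitute the values
recorded for your host:

IBM_Storwize:ITSO_V5000:superuser>svctask mkhost -name FC_Host -fcwwpn 2100000E1E30E597:2100000E1E30E598 -iogrp 0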

4.3.2 Configuring IBM Storwize V5000 for FC connectivity


It is possible to configure the FC ports on IBM Storwize V5000 to only be used for certain
connections. This is sometimes referred to as Port Masking. In a system with multiple I/O
groups and remote partnerships, this is a useful tool for ensuring peak performance and
availability.

The following options are available per port:


• Any: Allow local and remote communication between nodes
• Local: Allow only local node communication
• Remote: Allow only remote node communication
• None: Do not allow any node communication

In all cases, host I/O is still permitted, so the None option could be used to exclusively
reserve a port for host I/O.

There is a limit of 16 logins per node from another node before an error is logged. A
combination of port masking and SAN zoning can help to manage this and allow for optimum
I/O performance with local, remote and host traffic.

To configure FC ports, complete the following steps:


1. Go to Settings → Network, as shown in Figure 4-39.

Figure 4-39 Opening the network settings view

2. Select Fibre Channel Ports and the FC port configuration view displays, as shown in
Figure 4-40.

Figure 4-40 The FC ports view

3. To configure a port, right-click the port and select Modify Connection. This should bring
up the window shown in Figure 4-41.

Figure 4-41 Modifying the connection for Port 1

In this example, we select None to restrict traffic on this port to host I/O only. Click Modify
to confirm the selection.

Note: In doing this, we are configuring Port 1 for all nodes. It is not possible to configure
FC ports on a per-node basis.

You can view connections between nodes, storage systems and hosts by selecting Fibre
Channel Connectivity while in the network settings view. Choose which connections you
want to view and click Show Results, as shown in Figure 4-42.

Figure 4-42 Viewing FC connections between nodes, storage systems and hosts

4.3.3 Creating iSCSI hosts


To create an iSCSI host, complete the following steps:
1. Click iSCSI Host; the iSCSI host configuration wizard opens, as shown in Figure 4-43.

Figure 4-43 The iSCSI host configuration wizard

2. Enter a descriptive Host Name and enter the iSCSI Initiator Name into the iSCSI Ports
box, then click Add Ports to List. This will appear under Port Definitions, as shown in
Figure 4-44. Repeat this step if several initiator names are required.

Figure 4-44 Adding the iSCSI Initiator Name

3. In the Advanced Settings, it is possible to choose the Host Type. If using HP-UX,
OpenVMS or TPGS, this needs to be configured. Otherwise, the default option (Generic)
is fine.
4. From here, it is also possible to set the I/O Groups that your host will have access to. It is
important that host objects belong to the same I/O group(s) as the volumes you want to
map, otherwise the host will not have visibility of these volumes. For more information see
Chapter 5, “Volume configuration” on page 201.

Note: IBM Storwize V5000 supports a maximum of four nodes per system, arranged as
two I/O groups per cluster. Due to the host object limit per I/O group, for maximum host
connectivity it is best to create hosts utilizing single I/O groups.

5. Click Add Host and the wizard will complete, as shown in Figure 4-45.

Figure 4-45 Completion of the Add Host wizard

6. Click Close to return to the host view, which should now list your newly created host
object, as shown in Figure 4-46.

Figure 4-46 The hosts view with the newly created host object listed

7. Repeat these steps for all of your iSCSI hosts.

After the host objects have been created, see Chapter 5, “Volume configuration” on page 201
to create volumes and map them to the hosts.
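
An iSCSI host object can also be created with the mkhost command from the Storwize CLI. The
following is a sketch with an example host name and an example IQN; substitute the initiator
name noted earlier:

IBM_Storwize:ITSO_V5000:superuser>svctask mkhost -name iSCSI_Host -iscsiname iqn.1991-05.com.microsoft:win-host1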

Note: iSCSI hosts may have Degraded status until volumes are mapped. This is a
limitation related to the implementation of iSCSI in IBM Storwize V5000 and not
necessarily a sign of problems with network connectivity or the host configuration.

4.3.4 Configuring IBM Storwize V5000 for iSCSI host connectivity


The iSCSI hosts are now configured on the IBM Storwize V5000. To provide connectivity, the
iSCSI Ethernet ports must also be configured. Complete the following steps to enable iSCSI
connectivity:
1. Go to Settings → Network, as shown in Figure 4-47.

Figure 4-47 Opening the network settings view

2. Select Ethernet Ports and the Ethernet port configuration view displays, as shown in
Figure 4-48.

Figure 4-48 The Ethernet ports view

3. To configure an IP address on a node port, expand the I/O group, right-click the desired
port and click Modify IP Settings, as shown in Figure 4-49.

Figure 4-49 Opening the Modify IP Settings window for Node 1, Port 2

Note: It is generally advised that at least one port is reserved for the management IP
address. This is normally Port 1, so in this example, we configure Port 2 for iSCSI.

4. The window shown in Figure 4-50 displays. To configure IPv6, click the IPv6 button to
reveal these options. Enter the desired IP address, subnet mask and gateway. It is
important to make sure that these exist in the same subnet as the host IP addresses, and
that the chosen address is not already in use. Click Modify to confirm.

Figure 4-50 Configuring an IP address, subnet mask and gateway for Port 2 of Node 1

5. The port should now be listed as Configured, as shown in Figure 4-51.

Figure 4-51 Showing Configured port

6. Right-click the configured port again. This time, the options that were previously greyed
out should now be available. To confirm that the port is enabled for iSCSI, click Modify
iSCSI Hosts. The window shown in Figure 4-52 displays. If the port is not enabled, do so
using the drop-down box and click Modify to confirm.

Figure 4-52 Enabling an Ethernet port for iSCSI host attachment

7. To modify VLAN settings, right-click the port again and click Modify VLAN. The window
shown in Figure 4-53 displays. Check Enable to turn VLAN tagging on and set the tag in
the box provided. It is advised that the failover port should belong to the same VLAN, so
you should leave this box checked. Click Modify to confirm.

Figure 4-53 Modifying VLAN settings for Port 2 of Node 1

8. Repeat the foregoing steps for all ports that need to be configured.
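
Node Ethernet port addresses can also be configured from the Storwize CLI with the cfgportip
command. This sketch assumes example addresses and configures port 2 of node 1; substitute
your own values:

IBM_Storwize:ITSO_V5000:superuser>svctask cfgportip -node 1 -ip 192.168.1.11 -mask 255.255.255.0 -gw 192.168.1.1 2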

It is also possible to configure iSCSI aliases, an iSNS name, and CHAP authentication. These
options are located in the iSCSI view. To access this, click iSCSI while in the network settings
view, as shown in Figure 4-54.

Figure 4-54 Advanced iSCSI configuration

4.3.5 Creating SAS hosts
To create a SAS host, complete the following steps:
1. Click SAS Host, and the SAS host configuration wizard opens, as shown in Figure 4-55.

Figure 4-55 The SAS host configuration wizard

2. Enter a descriptive Host Name and click the SAS Ports drop-down menu to view a list of
all SAS WWPNs visible to the system, as shown in Figure 4-56.

Figure 4-56 SAS WWPNs visible to the system

3. If you have previously prepared a SAS host as described in 4.2, “Preparing the
host operating system” on page 157, then the WWPNs you recorded in this section should
appear. If they do not appear in the list, verify that you have completed all of the required
steps and check your cabling, then click Rescan in the configuration wizard. In particular,
make sure that the SAS cable connectors are fully inserted and seated properly.

Note: It is possible to enter WWPNs manually. However, if these are not visible to IBM
Storwize V5000, then the host object will appear as offline and will be unusable for I/O
operations until the ports become visible.

4. Select the WWPNs for your host and click Add Port to List for each. These will be added
to Port Definitions, as shown in Figure 4-57.

Figure 4-57 Adding the host WWPNs

5. In the Advanced Settings, it is possible to choose the Host Type. If using HP-UX,
OpenVMS, or TPGS, this needs to be configured. Otherwise, the default option (Generic)
is fine.
6. From here, it is also possible to set the I/O Groups that your host will have access to. It is
important that host objects belong to the same I/O group(s) as the volumes you want to
map, otherwise the host will not have visibility of these volumes. For more information, see
Chapter 5, “Volume configuration” on page 201.

Note: IBM Storwize V5000 supports a maximum of four nodes per system, arranged as
two I/O groups per cluster. Due to the host object limit per I/O group, for maximum host
connectivity, it is best to create hosts utilizing single I/O groups.

7. Click Add Host and the wizard will complete, as shown in Figure 4-58.

Figure 4-58 Completion of the Add Host wizard

8. Click Close to return to the host view, which should now list your newly created host
object, as shown in Figure 4-59.

Figure 4-59 The hosts view with the newly created host object listed

9. Repeat these steps for all of your SAS hosts.

After host objects have been created, see Chapter 5, “Volume configuration” on page 201 to
create volumes and map them to the hosts.
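
A SAS host object can also be created with the mkhost command from the Storwize CLI. The
following is a sketch using the example WWPNs found earlier and an example host name;
substitute your own values:

IBM_Storwize:ITSO_V5000:superuser>svctask mkhost -name SAS_Host -saswwpn 500062B200556140:500062B200556141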

Chapter 5. Volume configuration


This chapter describes how to use the IBM Storwize V5000 to create and map volumes. A
volume is a logical disk on the IBM Storwize V5000 that is provisioned out of a storage pool.
Mappings are defined in IBM Storwize V5000 between volumes and host objects, at which
point the storage is ready to be used by the host operating systems.

For information about preparing host operating systems and creating host objects in IBM
Storwize V5000, see Chapter 4, “Host configuration” on page 155.

The first part of this chapter describes how to create volumes of different types and map them
to host objects in IBM Storwize V5000, while the second part of this chapter describes how to
discover these volumes from the host operating system.

After completing this chapter, your basic configuration is complete and you can begin to store
and access data on the IBM Storwize V5000.

For information about advanced volume administration, such as reconfiguring settings for
existing volumes, see Chapter 8, “Advanced host and volume administration” on page 359.

This chapter includes the following topics:


• Creating volumes in IBM Storwize V5000
• Mapping volumes to hosts
• Discovering mapped volumes from host systems

5.1 Creating volumes in IBM Storwize V5000
In this section, we describe how to create volumes using the IBM Storwize V5000 GUI. We
assume that you have completed the steps described in Chapter 2, “Initial configuration” on
page 25 and have storage arrays and pools already configured.

To create new volumes using the IBM Storwize V5000 GUI, complete the following steps:
1. Open the volumes view by navigating to Volumes → Volumes, as shown in Figure 5-1.

Figure 5-1 Opening the volumes view

This loads the page shown in Figure 5-2.

Figure 5-2 The volumes view with no volumes configured

2. Click the Create Volumes button to open the Create Volumes wizard. This opens the
window shown in Figure 5-3.

Figure 5-3 Selecting a volume preset and pool in the Create Volume wizard

3. Choose whether to create Generic, Thin Provision, Mirror or Thin Mirror volumes, and
which pool to provision this storage from. Descriptions of the volume presets are provided
below, and the following sections provide guidance for creating each volume type.

By default, volumes created via the IBM Storwize V5000 GUI are striped across all available
MDisks in the chosen storage pool. The following presets are provided:
• Generic: A striped volume that is fully provisioned, meaning that the full volume
capacity is allocated on physical storage at creation.
Creating generic volumes is described in 5.1.1, “Creating generic volumes” on page 204.
• Thin provisioned: A striped volume that reports a greater capacity than is initially
allocated. Thin provisioned volumes are sometimes referred to as space efficient.
Such volumes have two capacities:
– The real capacity. This determines the quantity of extents that are initially allocated to
the volume.
– The virtual capacity. This is the capacity that is reported to all other components (such
as FlashCopy) and to the hosts.
Thin provisioning is useful for applications where the amount of physical disk space
actually used is often much smaller than the amount allocated. Thin provisioned volumes
can be configured to expand automatically when more physical capacity is used. It is also
possible to configure the amount of space that is initially allocated. IBM Storwize V5000
includes a configurable alert that will alert system administrators when thin provisioned
volumes begin to reach their capacity.
Creating thin provisioned volumes is described in 5.1.2, “Creating thin provisioned
volumes” on page 206.

• Mirror: A striped volume that consists of two striped copies and is synchronized to protect
against loss of data if the underlying storage pool of one copy is lost.
In this case, a single volume is presented to the host, however, two copies exist on the
storage back end, usually in different storage pools (all reads are handled by the primary
copy). This feature is similar to host-based software mirroring, but provides a single point
of management and high availability to operating systems that do not support software
mirroring.
Creating mirror copies in different storage pools protects against array failures. However,
the mirroring feature is not a complete disaster recovery solution because both copies are
accessed by the same node pair and are addressable only by a single cluster. If you want
to create a robust disaster recovery solution using IBM Storwize V5000, see Chapter 10,
“Copy services” on page 451 for more information.
Creating mirror volumes is described in 5.1.3, “Creating mirrored volumes” on page 210.
• Thin mirror: A mirror volume that is also thin provisioned.
Creating thin mirror volumes is described in 5.1.4, “Creating thin mirrored volumes” on
page 213.

5.1.1 Creating generic volumes


To create generic volumes, complete the following steps:
1. Select Generic as the preset, as shown in Figure 5-4.
2. From the list shown in Figure 5-4, select a pool to provision storage from.
3. The view shown in Figure 5-4 will display. From here, you can choose how many volumes
to create, the capacity of each volume, and what to name the volumes.

Figure 5-4 Creating generic volumes using the Create Volumes wizard

4. Click Advanced... to view more options, as shown in Figure 5-5.

Figure 5-5 Advanced options for volume creation

Here are some important points to note:


– For generic volumes, capacity management and mirroring do not apply.
– If you have two I/O groups within your system, you can specify which I/O group is used
to provide the volume caching. The advised option here is Auto-balance, which allows
the system to balance I/O caching across both I/O groups.
– You can also specify which I/O groups have access to the volume(s). The
recommendation here is to allow both I/O groups access for maximum performance
and availability.
– Similarly, there is an option to set the preferred node within the I/O group that owns the
volume. The advised option here is to set this to Automatic and allow the system to
balance volume I/O across both node canisters in the I/O group.
When you are done, click OK to return to the main wizard.
5. Click Create or Create and Map to Host when you are finished. Create and Map to Host
will create the volumes and then take you to the Host Mapping wizard, which is described
in 5.2, “Mapping volumes to hosts” on page 215.

Important: The Create and Map to Host option is disabled if no hosts are configured on
the IBM Storwize V5000. For more information about configuring hosts, see Chapter 4,
“Host configuration” on page 155.

6. If you clicked Create, the wizard will complete as shown in Figure 5-6.

Figure 5-6 Completion of the Create Volumes wizard

7. Click Close to return to the volumes view, which now lists your newly created volume(s),
as shown in Figure 5-7.

Figure 5-7 The volumes view with the newly created volume listed

8. Repeat these steps for all generic volumes you want to create.

If you did not choose Create and Map to Host in Step 5, see 5.2.2, “Manually mapping
volumes to hosts” on page 218 for instructions on mapping the volume(s) manually after host
creation.
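
A generic volume can also be created from the Storwize CLI with the mkvdisk command. The
following is a sketch with an example pool name, size, and volume name:

IBM_Storwize:ITSO_V5000:superuser>svctask mkvdisk -mdiskgrp Pool1 -size 100 -unit gb -name GenericVol1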

5.1.2 Creating thin provisioned volumes


To create thin provisioned volumes, complete the following steps:
1. Select Thin Provision as the preset, as shown in Figure 5-8.
2. From the list shown in Figure 5-8, select a pool to provision storage from.
3. The view shown in Figure 5-8 displays. From here, you can choose how many volumes to
create, the capacity of each volume and what to name the volumes.

Figure 5-8 Creating thin provisioned volumes using the Create Volumes wizard

Note: The capacity specified is the virtual capacity of the volume, while the real
capacity defaults to 2% of this value. The real capacity can be modified in the advanced
settings.

4. Click Advanced... to view more options. The Characteristics tab is the same as for
generic volumes, however, there are additional options under Capacity Management, as
shown in Figure 5-9.

Figure 5-9 Advanced capacity options for thin provisioned volumes

Here are some important points to note:
– For thin provisioned volumes, mirroring does not apply.
– The real capacity is the space that is actually allocated during creation. This can be
specified as a percentage of the virtual capacity, or as a value in GiB. Increasing the
real capacity reserves more space for the volume(s), ensuring that it does not get used
elsewhere. However, increasing the real capacity also diminishes the advantage of thin
provisioned volumes compared to generic volumes, because less virtual capacity is left to
work with and more physical capacity is allocated.
– Enabling automatic expansion allows the real capacity to grow automatically as the
amount of actual data stored on the volume(s) increases.
– The warning threshold is the point at which the system begins to log capacity alerts for
the volume(s). This allows system administrators advance warning to plan for
expanding the real capacity of volumes. If a thin provisioned volume becomes
overallocated, the system will take the volume offline, so it is important to pay attention
to the warnings.
– You can also specify the grain size for the real capacity used. This is relevant when
creating FlashCopy mappings from the volume(s). For more information about
FlashCopy, refer to Chapter 10, “Copy services” on page 451.
When you are done, click OK to return to the main wizard.
5. Click Create or Create and Map to Host when you are finished. Create and Map to Host
will create the volumes and then take you to the Host Mapping wizard, which is described
in 5.2, “Mapping volumes to hosts” on page 215.

Important: The Create and Map to Host option is disabled if no hosts are configured on
the IBM Storwize V5000. For more information about configuring hosts, see Chapter 4,
“Host configuration” on page 155.

6. If you clicked Create, the wizard will complete as shown in Figure 5-10.

Figure 5-10 Completion of the Create Volumes wizard

7. Click Close to return to the volumes view, which now lists your newly created volume(s),
as shown in Figure 5-11.

Figure 5-11 The volumes view with the newly created volume listed

8. Repeat these steps for all thin provisioned volumes that you want to create.

If you did not choose Create and Map to Host in Step 5, see 5.2.2, “Manually mapping
volumes to hosts” on page 218 for instructions on mapping the volume(s) manually after
host creation.
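
The CLI equivalent uses mkvdisk with thin provisioning options. This sketch, with example
names and values, creates a volume with 100 GiB virtual capacity, a 2% real capacity,
automatic expansion, and an 80% warning threshold:

IBM_Storwize:ITSO_V5000:superuser>svctask mkvdisk -mdiskgrp Pool1 -size 100 -unit gb -rsize 2% -autoexpand -warning 80% -name ThinVol1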

5.1.3 Creating mirrored volumes
To create mirrored volumes, complete the following steps:
1. Select Mirror as the preset, as shown in Figure 5-12.
2. From the list shown in Figure 5-12, select a pool to provision storage from for each
mirrored copy. To take full advantage of the redundancy offered by the mirroring feature,
each copy should ideally be provisioned from separate pools.
3. The view shown in Figure 5-12 displays. From here, you can choose how many mirrored
pairs to create, the capacity of each volume, and what to name the volumes.

Figure 5-12 Creating mirrored volumes using the Create Volumes wizard

Note: The capacity specified is per copy, so the total capacity required will be the
number of mirrored pairs, multiplied by the capacity, then multiplied by two.

4. Click Advanced... to view more options. The Characteristics and Capacity
Management tabs are the same as for generic volumes, however there are additional
options under Mirroring, as shown in Figure 5-13.

Figure 5-13 Advanced mirroring options for mirrored volumes

Here it is possible to set the mirror sync rate. Higher sync rates will provide greater
availability at a small cost to performance, and vice versa.
When you are done, click OK to return to the main wizard.
5. Click Create or Create and Map to Host when you are finished. Create and Map to Host
will create the volumes and then take you to the Host Mapping wizard, which is described
in 5.2, “Mapping volumes to hosts” on page 215.

Important: The Create and Map to Host option is disabled if no hosts are configured on
the IBM Storwize V5000. For more information about configuring hosts, see Chapter 4,
“Host configuration” on page 155.

6. If you clicked Create, the wizard will complete, as shown in Figure 5-14.

Figure 5-14 Completion of the Create Volumes wizard

7. Click Close to return to the volumes view, which now lists your newly created volume(s),
as shown in Figure 5-15.

Figure 5-15 The volumes view with the newly created volume listed

8. Repeat these steps for all mirrored volumes that you want to create.

If you did not choose Create and Map to Host in Step 5, see 5.2.2, “Manually mapping
volumes to hosts” on page 218 for instructions on mapping the volume(s) manually after host
creation.
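
The CLI equivalent uses mkvdisk with two copies. This sketch, with example pool and volume
names, creates one mirrored volume with a copy in each of two pools and a sync rate of 80:

IBM_Storwize:ITSO_V5000:superuser>svctask mkvdisk -mdiskgrp Pool1:Pool2 -size 100 -unit gb -copies 2 -syncrate 80 -name MirrorVol1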

5.1.4 Creating thin mirrored volumes
To create thin mirrored volumes, complete the following steps:
1. Select Thin Mirror as the preset, as shown in Figure 5-16.
2. From the list shown in Figure 5-16, select a pool to provision storage from for each
mirrored copy. To take full advantage of the redundancy offered by the mirroring feature,
each copy should ideally be provisioned from separate pools.
3. The view shown in Figure 5-16 displays. From here, you can choose how many mirrored
pairs to create, the capacity of each volume, and what to name the volumes.

Figure 5-16 Creating thin mirrored volumes using the Create Volumes wizard

Note: The capacity specified is per copy, so the total virtual capacity will be the number
of mirrored pairs, multiplied by the capacity, then multiplied by two.

4. Click Advanced... to view more options. Here you can configure Characteristics as for
generic volumes, Capacity Management as for thin provisioned volumes, and Mirroring
as for mirrored volumes.
After you are done, click OK to return to the main wizard.
5. Click Create or Create and Map to Host when you are finished. Create and Map to Host
will create the volumes and then take you to the Host Mapping wizard, which is described
in 5.2, “Mapping volumes to hosts” on page 215.

Important: The Create and Map to Host option is disabled if no hosts are configured on
the IBM Storwize V5000. For more information about configuring hosts, see Chapter 4,
“Host configuration” on page 155.

6. If you clicked Create, the wizard will complete as shown in Figure 5-17.

Figure 5-17 Completion of the Create Volumes wizard

7. Click Close to return to the volumes view, which now lists your newly created volume(s),
as shown in Figure 5-18.

Figure 5-18 The volumes view with the newly created volume listed

8. Repeat these steps for all thin mirrored volumes that you want to create.

If you did not choose Create and Map to Host in Step 5, see 5.2.2, “Manually mapping
volumes to hosts” on page 218 for instructions on mapping the volume(s) manually after host
creation.

5.2 Mapping volumes to hosts
This section describes how to create mappings between volumes and hosts on IBM Storwize
V5000. After this task is complete, the mapped storage will be available for use by host
systems.

The first part of this section describes how to map volumes to hosts if you clicked Create and
Map to Host in 5.1, “Creating volumes in IBM Storwize V5000” on page 202. The second part
of this section describes the manual host mapping process that is used to create customized
mappings.

5.2.1 Mapping newly created volumes using Create and Map to Host
In this section, we assume that you are following one of the procedures in 5.1, “Creating
volumes in IBM Storwize V5000” on page 202 and clicked Create and Map to Host followed
by Continue when the Create Volumes task completed, as shown in Figure 5-19.

Figure 5-19 Continuing to map volumes after the Create Volumes task has completed

To map the volume(s), complete the following steps:


1. After clicking Continue, the window shown in Figure 5-20 displays.

Figure 5-20 Selecting the I/O group and host to assign the mapping(s) to

First, select the I/O group you want to assign the mapping(s) to. Selecting the correct I/O
group is important if there is more than one group. As discussed in 5.1.1, “Creating
generic volumes” on page 204, when a volume is created, it is possible to define the
caching I/O group, that is, the I/O group that owns the volume and is used to access it.
Therefore, your host must communicate with the same I/O group for the mapping to be
successful.

2. Select the host you want to assign the mapping(s) to. The window expands to the view
shown in Figure 5-21.

Figure 5-21 The host mapping wizard with a new volume mapped to a host

3. The newly created volume(s) should already be mapped to the host. To confirm the
mapping(s), click Map Volumes to complete the operation and close the window.
Alternatively, click Apply to complete the operation but leave the host mapping wizard
open.
4. After clicking Map Volumes, the wizard completes as shown in Figure 5-22.

Figure 5-22 Completion of the host mapping wizard

5. Click Close to return to the volumes view, which now lists your newly created volume(s),
as shown in Figure 5-23.

Figure 5-23 The volumes view with the newly created and mapped volume listed

5.2.2 Manually mapping volumes to hosts
In this section, we assume that you have followed one of the procedures in 5.1, “Creating
volumes in IBM Storwize V5000” on page 202 and clicked Create. This section describes how
to manually map these volumes to hosts after volume creation.

To manually map volumes to hosts, complete the following steps:


1. Go to Volumes → Volumes, select the volumes you want to map and then click
Actions → Map to Host, as shown in Figure 5-24.

Figure 5-24 Manually creating host mappings from the volumes view

2. This brings up the window shown in Step 1 of the procedure in 5.2.1, “Mapping newly
created volumes using Create and Map to Host” on page 215. Complete the steps in that
section to finish mapping the volumes.

The volumes should now be accessible from the host operating system. The following section
describes how to discover and use mapped volumes from hosts.
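
A mapping can also be created from the Storwize CLI with the mkvdiskhostmap command. This
sketch assumes the example host and volume names used earlier and assigns SCSI LUN ID 0:

IBM_Storwize:ITSO_V5000:superuser>svctask mkvdiskhostmap -host FC_Host -scsi 0 GenericVol1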

5.3 Discovering mapped volumes from host systems
This section describes how to discover the volumes that were created and mapped in 5.1,
“Creating volumes in IBM Storwize V5000” on page 202 and 5.2, “Mapping volumes to hosts”
on page 215, from the host operating system.

We assume that you have completed all of the following tasks described in this book so that
the hosts and the IBM Storwize V5000 are prepared:
• Prepare your operating systems for attachment, including installing multipath support. For
more information, see Chapter 4, “Host configuration” on page 155.
• Create host objects using the IBM Storwize V5000 GUI. For more information, see
Chapter 4, “Host configuration” on page 155.
• Create volumes and map these to the host objects using the IBM Storwize V5000 GUI. For
more information, see 5.1, “Creating volumes in IBM Storwize V5000” on page 202, and
5.2, “Mapping volumes to hosts” on page 215.

First, we need to look up the UIDs of the volumes. Knowing the UIDs of the volumes helps
determine which is which from the host point of view. To do this, complete the following steps:
1. Open the volumes by host view by going to Volumes → Volumes by Host, as shown in
Figure 5-25.

Figure 5-25 Opening the volumes by host view

This should load the page shown in Figure 5-26.

Figure 5-26 The volumes by host view with a single host and volume mapping configured

2. Check the UID column and note the value for each mapped volume.
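
The UID can also be read from the Storwize CLI. As a sketch, with an example volume name,
the vdisk_UID field in the output of the following command is the value to record:

IBM_Storwize:ITSO_V5000:superuser>svcinfo lsvdisk GenericVol1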

Next we need to discover the mapped volumes from the host. The following sections describe
how to discover mapped volumes from Windows 2012 R2 and VMware ESXi 5.5 for each
supported attachment type (FC, iSCSI and SAS).

Note: For Windows 2012 R2, we describe the use of SDDDSM for multipath for FC and
SAS, while 5.3.2, “Discovering iSCSI volumes from Windows 2012 R2” on page 229
describes how to configure native MPIO. This is because only native MPIO is supported for
iSCSI on Windows 2012 R2. For more information, see Chapter 4, “Host configuration” on
page 155.

5.3.1 Discovering FC volumes from Windows 2012 R2


In the sections that follow, we show how to discover FC volumes in Windows 2012 R2.

Discovering mapped volumes


To discover mapped FC volumes from Windows 2012 R2, complete the following steps:
1. Connect to your Windows server and open Server Manager. Go to File and Storage
Services, as shown in Figure 5-27.

Figure 5-27 Opening File and Storage Services in Server Manager

Note: File and Storage Services should be enabled by default. If not, it can be enabled
by going to Manage → Add Roles and Features and following the wizard.

2. In File and Storage Services, go to Disks. In here all storage devices known to Windows
should be listed, as shown in Figure 5-28.

Figure 5-28 The Disks view in File and Storage Services

If your mapped volumes are not shown, go to Tasks → Rescan Storage, as shown in
Figure 5-29.

Figure 5-29 Rescan storage from File and Storage Services

A bar will appear at the top of the screen to show that Windows is attempting to scan for
devices, as shown in Figure 5-30. Wait for this to complete.

Figure 5-30 Waiting for Windows to rescan storage

3. The mapped volumes are now listed, as shown in Figure 5-31.

Figure 5-31 The mapped volumes listed in Windows 2012 R2

4. We need to bring the disk online. To do this, right-click the disk and click Bring Online, as shown in
Figure 5-32.

Figure 5-32 Bring a disk online

5. The window shown in Figure 5-33 displays. Click Yes to confirm that you want to bring the
disk online on this server.

Figure 5-33 Confirm that you want to bring the disk online

6. The next step is to initialize the disk. To do this, right-click the disk and click Initialize, as
shown in Figure 5-34.

Figure 5-34 Initializing the disk

7. The window shown in Figure 5-35 displays. Click Yes to confirm that you want to initialize
the disk.

Figure 5-35 Confirm that you want to initialize the disk

8. The disk now appears as online and initialized with GPT partitioning, as shown in
Figure 5-36.

Figure 5-36 The mapped volume online and initialized as a GPT disk

9. Repeat the previous steps for any other mapped volumes.

The mapped volumes are now ready to be managed by the host operating system.
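
The rescan, online, and initialize steps can also be performed from PowerShell. This sketch
assumes the mapped volume appears as disk number 1; check the number with Get-Disk first:

Update-HostStorageCache                        # rescan for newly mapped devices
Set-Disk -Number 1 -IsOffline $false           # bring the disk online
Initialize-Disk -Number 1 -PartitionStyle GPT  # initialize with GPT partitioning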

Creating a file system on a mapped volume


To create a file system on a mapped volume in Windows 2012 R2, complete the following
steps:
1. In the Disks view in File and Storage Services, right-click the disk and click New
Volume..., as shown in Figure 5-37.

Figure 5-37 Creating a new file system on a disk

Note: In this context, “Volume” refers to a Windows concept and not specifically an IBM
Storwize V5000 volume.

2. The New Volume Wizard opens. Click Next on the first page to bring up the page shown in
Figure 5-38. Choose which server and disk to provision and click Next.

Figure 5-38 Choosing the server and disk

3. The next step is to choose the size of the volume, as shown in Figure 5-39. Set the volume
size and click Next to continue.

Figure 5-39 Choosing the volume size

4. Next, choose whether to assign a drive letter, as shown in Figure 5-40. Click Next to
continue.

Figure 5-40 Assigning a drive letter to the new volume

5. Choose a file system type, allocation unit size, and name for the new volume, as shown in
Figure 5-41. Click Next to continue.

Figure 5-41 Configuring file system settings

6. Confirm the new volume configuration on the next page, click Next, and confirm that
volume creation is successful, as shown in Figure 5-42.

Figure 5-42 New volume success

7. The new volume appears under the Volumes view of File and Storage Services, as
shown in Figure 5-43.

Figure 5-43 The new volume listed under the Volumes view
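As an alternative to the New Volume wizard, the partition and file system can also be created from PowerShell. This is a sketch only; the disk number (1) and the volume label are example values:

New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "NewVolume" -Confirm:$false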

Configuring multipath settings for mapped volumes


To configure SDDDSM multipath settings for mapped volumes in Windows 2012 R2, we can
use the datapath.exe command-line tool.

To query paths to discovered devices, open a PowerShell terminal, change to the SDDDSM
directory, and run datapath.exe query device, as shown in Example 5-1.

Example 5-1 Datapath query device


PS C:\Program Files\IBM\SDDDSM> .\datapath.exe query device

Total Devices : 1

DEV#: 1 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076300AF004B4800000000000027 LUN SIZE: 10.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 * Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 * Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
2 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 43 0
3 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 29 0
4 * Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
5 * Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
6 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 7 0
7 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 6 0

The output provides multipath information about the mapped volumes. In our example, one
disk is connected (Disk 1) and eight paths to the disk are available (State = Open). You can
also confirm that the serial numbers match the UIDs recorded earlier.
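If you want to cross-check the serial numbers from the host side with the Windows Storage module as well, a short sketch:

Get-Disk | Select-Object Number, FriendlyName, UniqueId, Size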

Important: Correct SAN switch zoning must be implemented to allow only eight paths to
be visible from the host to any one volume. Volumes with more than eight paths are not
supported.

To set the multipath policy for a device, run datapath.exe set device <device ID> policy
<rr/fo/lb/df>, where:
• rr is Round Robin
• fo is Failover
• lb is Load Balancing (reported as Optimized)
• df is Default (which should be Load Balancing)

In Example 5-2, we have changed the policy from Load Balancing to Round Robin.

Example 5-2 Changing the policy to Round Robin


PS C:\Program Files\IBM\SDDDSM> .\datapath.exe set device 1 policy rr

DEV#: 1 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: ROUND ROBIN
SERIAL: 6005076300AF004B4800000000000027 LUN SIZE: 10.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 * Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 * Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
2 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 43 0
3 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 29 0
4 * Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
5 * Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
6 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 7 0
7 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 6 0

For more information about configuring SDDDSM, see the Multipath Subsystem Device
Driver User’s Guide. This can be found at the following website:
http://www-01.ibm.com/support/docview.wss?rs=540&context=ST52G7&q=ssg1*&uid=ssg1S7000303&loc=en_US&cs=utf-8&lang=en%20en

5.3.2 Discovering iSCSI volumes from Windows 2012 R2
In the sections that follow, we show how to discover iSCSI volumes in Windows 2012 R2.

Discovering mapped volumes


To discover mapped iSCSI volumes from Windows 2012 R2, complete the following steps:
1. Connect to your Windows server and open the iSCSI Initiator by opening Server Manager
and going to Tools → iSCSI Initiator, as shown in Figure 5-44.

Figure 5-44 Starting the iSCSI Initiator

2. While in the iSCSI Initiator properties, log on to an IBM Storwize V5000 iSCSI port by typing
the port IP address into the Target box and clicking Quick Connect, as shown in Figure 5-45.

Figure 5-45 Connecting to the IBM Storwize V5000 iSCSI ports

3. If the connection is successful, the window shown in Figure 5-46 displays. Click Done to
proceed.

Figure 5-46 Successful iSCSI port connection

4. Repeat the previous two steps for all iSCSI IPs that you want to connect to.
5. In the iSCSI Initiator properties, the connected ports should now be listed under
“Discovered targets”, as shown in Figure 5-47.

Figure 5-47 Discovered iSCSI targets

6. The next step is to scan for disks. Follow the steps in 5.3.1, “Discovering FC volumes from
Windows 2012 R2” on page 220 to do this. The number of disks listed should be the
number of mapped volumes multiplied by the number of paths, because we have not yet
enabled MPIO for iSCSI devices.

7. To enable MPIO for iSCSI devices, first go to Tools → MPIO in Server Manager, as shown
in Figure 5-48.

Figure 5-48 Opening the MPIO properties window

This opens the window shown in Figure 5-49.

Figure 5-49 MPIO properties

Note: IBM 2145 devices need to be added to the list of supported devices for MPIO to
work with IBM Storwize V5000. Click Add to do this if they are not already listed.

8. Go to Discover Multi-paths and check “Add support for iSCSI devices”, as shown in
Figure 5-50.

Figure 5-50 Adding support for iSCSI devices in MPIO

Click Add, then click Yes when prompted to restart. Wait for the server to finish restarting.
After the server is back online, the correct number of disks should now be listed under the
Disks view in File and Storage Services.

Note: In some cases, the “Add support for iSCSI devices” option is disabled. To enable
this option, you must already have a connection to at least one iSCSI device.

9. To make iSCSI devices readily available on system boot, reopen the iSCSI Initiator
properties and go to Volumes and Devices, as shown in Figure 5-51.

Figure 5-51 iSCSI volumes and devices

Click Auto Configure and the mapped volumes should appear under “Volume List”, as
shown in Figure 5-52. This ensures that, at boot time, the volumes are available to
dependent services and applications before those services start.

Figure 5-52 Configured iSCSI device

10.Follow the instructions in 5.3.1, “Discovering FC volumes from Windows 2012 R2” on
page 220 to bring the disks online and initialize them.

The mapped volumes are now ready to be managed by the host operating system.
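The discovery and MPIO steps above can also be scripted with the built-in iSCSI and MPIO PowerShell cmdlets. The following sketch assumes 192.168.1.10 is one of your IBM Storwize V5000 iSCSI port IP addresses; substitute your own values:

New-IscsiTargetPortal -TargetPortalAddress 192.168.1.10
Get-IscsiTarget | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true
}
# Equivalent to checking "Add support for iSCSI devices"; a restart is still required.
Enable-MSDSMAutomaticClaim -BusType iSCSI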

Creating a file system from a mapped volume


To create a file system from a mapped volume in Windows 2012 R2, complete the steps
described in 5.3.1, “Discovering FC volumes from Windows 2012 R2” on page 220. This
procedure is common to all attachment types.

Configuring multipath settings for mapped volumes


To configure MPIO multipath settings for mapped volumes in Windows 2012 R2, we can use
the mpclaim.exe command-line tool.

To save details of the current MPIO configuration, including all connected devices and paths,
to a text file, open a PowerShell terminal and run mpclaim -v <file path>, for example,
mpclaim -v C:\multipath_config.txt. Some example output is shown in Example 5-3.

Example 5-3 MPIO configuration details


MPIO Storage Snapshot on Monday, 27 October 2014, at 14:57:03.169

Registered DSMs: 1
================
+--------------------------------|-------------------|----|----|----|---|-----+
|DSM Name | Version |PRP | RC | RI |PVP| PVE |
|--------------------------------|-------------------|----|----|----|---|-----|
|Microsoft DSM |006.0002.09200.16384|0020|0016|0001|060| True|
+--------------------------------|-------------------|----|----|----|---|-----+

Microsoft DSM
=============
MPIO Disk0: 04 Paths, Round Robin with Subset, Implicit Only
SN: 6057630AF04B480000002B
Supported Load Balance Policies: FOO RRWS LQD WP LB

Path ID State SCSI Address Weight


---------------------------------------------------------------------------
0000000077040002 Active/Optimized 004|000|002|000 0
TPG_State: Active/Optimized , TPG_Id: 0, TP_Id: 2432

Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 46616B65436F6E74726F6C6C6572 (State: Active)

0000000077040003 Active/Optimized 004|000|003|000 0


TPG_State: Active/Optimized , TPG_Id: 0, TP_Id: 2432
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 46616B65436F6E74726F6C6C6572 (State: Active)

0000000077040001 Active/Unoptimized 004|000|001|000 0


TPG_State: Active/Unoptimized, TPG_Id: 1, TP_Id: 384
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 46616B65436F6E74726F6C6C6572 (State: Active)

0000000077040000 Active/Unoptimized 004|000|000|000 0


TPG_State: Active/Unoptimized, TPG_Id: 1, TP_Id: 384
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 46616B65436F6E74726F6C6C6572 (State: Active)

MSDSM-wide default load balance policy: N\A

No target-level default load balance policies have been set.

================================================================================

Confirm that all paths are active as expected and the serial numbers match the UIDs
recorded earlier.

To change the load balancing policy for all devices, run mpclaim -L -M <policy>, where the
allowed policies are:

Table 5-1 MPIO load balancing policies

Parameter   Definition
0           Clear the policy
1           Failover Only
2           Round Robin
3           Round Robin with Subset
4           Least Queue Depth
5           Weighted Paths
6           Least Blocks
7           Vendor Specific
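For example, to review the current policies and then change them, commands such as the following can be used. This is a sketch only; the MPIO disk number (0) is an example taken from the output above:

mpclaim -s -d        # show MPIO disks and their current load balance policies
mpclaim -L -M 4      # set the MSDSM-wide default policy to Least Queue Depth
mpclaim -l -d 0 4    # or set the policy for MPIO disk 0 only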

For detailed explanations of each policy, see the following link:
http://technet.microsoft.com/en-gb/library/dd851699.aspx

The following link provides further information and guidance on using mpclaim.exe:
http://technet.microsoft.com/en-us/library/ee619743%28v=ws.10%29.aspx

5.3.3 Discovering SAS volumes from Windows 2012 R2
In the sections that follow, we show how to discover SAS volumes in Windows 2012 R2.

Discovering mapped volumes


To discover mapped SAS volumes from Windows 2012 R2, complete the steps described in
5.3.1, “Discovering FC volumes from Windows 2012 R2” on page 220. The steps required are
identical for both FC and SAS.

The mapped volumes are now ready to be managed by the host operating system.

Creating a file system from a mapped volume


To create a file system from a mapped volume in Windows 2012 R2, complete the steps
described in 5.3.1, “Discovering FC volumes from Windows 2012 R2” on page 220. This
procedure is common to all attachment types.

Configuring multipath settings for mapped volumes


To configure multipath settings for mapped volumes in Windows 2012 R2, complete the steps
described in 5.3.1, “Discovering FC volumes from Windows 2012 R2” on page 220.

5.3.4 Discovering FC volumes from VMware ESXi 5.5


In the sections that follow, we show how to discover FC volumes in VMware ESXi 5.5.

Discovering mapped volumes


To discover mapped FC volumes from VMware ESXi 5.5, complete the following steps:
1. Connect to your ESXi Server using the VMware vSphere Client. Browse to the
Configuration tab and select Storage Adapters. This brings up the view shown in
Figure 5-53.

Figure 5-53 The storage adapters view in VMware vSphere Client

2. Click Rescan all.... The window shown in Figure 5-54 displays. Make sure “Scan for New
Storage Devices” is checked. Click OK to confirm.

Figure 5-54 Rescan all adapters

3. The rescan task starts and appears under Recent Tasks, as shown in Figure 5-55. Wait
for this to finish.

Figure 5-55 The “Rescan all HBAs” task in progress

4. If you now select the FC adapters, the mapped volumes are listed, as shown in
Figure 5-56 on page 236.

Figure 5-56 The mapped volumes listed in VMware vSphere Client

Check the UIDs you recorded earlier and confirm that all of the mapped volumes are
listed.

The mapped volumes are now ready to be managed by the host operating system.
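If you are working in the ESXi shell rather than the vSphere Client, the rescan and device check can be done with esxcli. A brief sketch:

esxcli storage core adapter rescan --all   # rescan all adapters for new devices
esxcli storage core device list            # Storwize V5000 volumes appear with model 2145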

Creating a datastore from a mapped volume


To create a datastore from a mapped volume in VMware ESXi 5.5, complete the steps
described in 5.3.5, “Discovering iSCSI volumes from VMware ESXi 5.5” on page 237. This
procedure is common to all attachment types.

Configuring multipath settings for mapped volumes


To configure multipath settings for a mapped volume in VMware ESXi 5.5, complete the steps
described in 5.3.5, “Discovering iSCSI volumes from VMware ESXi 5.5” on page 237. This
procedure is common to all attachment types.

5.3.5 Discovering iSCSI volumes from VMware ESXi 5.5
In the sections that follow, we show how to discover iSCSI volumes in VMware ESXi 5.5.

Discovering mapped volumes


To discover mapped iSCSI volumes from VMware ESXi 5.5, complete the following steps:
1. Connect to your ESXi Server using the VMware vSphere Client. Browse to the
Configuration tab and select Storage Adapters. This brings up the view shown in
Figure 5-57.

Figure 5-57 The storage adapters view in VMware vSphere Client

2. Right-click the iSCSI Software Adapter and click Properties.... The iSCSI initiator
properties window opens. Go to Dynamic Discovery and click Add..., as shown in
Figure 5-58.

Figure 5-58 Discover iSCSI targets dynamically from the iSCSI Initiator

3. The window shown in Figure 5-59 displays. Enter a target IP address, which should be
one of the IPs configured in 4.3.4, “Configuring IBM Storwize V5000 for iSCSI host
connectivity” on page 194. The target IP address is the iSCSI IP address of a node in the
I/O group from which the iSCSI volumes are mapped. Keep the default value of 3260 for
the port and click OK.

Figure 5-59 Adding a target IP to the iSCSI Initiator

4. Under Recent Tasks, a task will appear for connecting to the target, as shown in
Figure 5-60. Wait for this to complete.

Figure 5-60 Connecting to iSCSI targets

5. Repeat the previous two steps for each IBM Storwize V5000 iSCSI port you want to use
for iSCSI connections with this host.

iSCSI IP addresses: The iSCSI IP addresses are different from the cluster and
canister IP addresses. They are configured as described in Chapter 4, “Host
configuration” on page 155.

6. After all of the required ports have been added, close the iSCSI Initiator properties
window.
The window shown in Figure 5-61 displays. Click Yes to rescan the adapter.

Figure 5-61 Rescan the iSCSI Software Adapter

7. In Storage Adapters, click the iSCSI Software Adapter. After the rescan has completed,
the mapped volumes are listed in the adapter details, as shown in Figure 5-62.

Figure 5-62 The mapped volumes listed in VMware vSphere Client

Check the UIDs you recorded earlier and confirm that all of the mapped volumes are
listed.

The mapped volumes are now ready to be managed by the host operating system. In the next
section we describe how to create a datastore from a mapped volume.
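The same iSCSI discovery can also be performed from the ESXi shell with esxcli. In this sketch, vmhba37 and 192.168.1.10 are example values for the software iSCSI adapter name and a Storwize V5000 iSCSI port; substitute your own:

esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba37 --address=192.168.1.10:3260
esxcli storage core adapter rescan --adapter=vmhba37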

Creating a datastore from a mapped volume
To create a datastore from a mapped volume in VMware ESXi 5.5, complete these steps:
1. Go to Configuration → Storage. This brings up the window shown in Figure 5-63.

Figure 5-63 The storage view in VMware vSphere Client

2. To create a new datastore, click Add Storage... to open the Add Storage wizard. The
window shown in Figure 5-64 displays. Select Disk/LUN and click Next.

Figure 5-64 The Add Storage wizard

3. The available disks and LUNs (mapped volumes) will be listed on the next page, as shown
in Figure 5-65. Select a volume to create the datastore from. In this example we have just
one mapped volume, so the choice is easy.

Figure 5-65 Choosing the disk or LUN from which to create the new datastore

4. Select an option for the File System Version. In this example, we select VMFS-5, as shown
in Figure 5-66. Click Next to continue.

Figure 5-66 Selecting the File System Version for the new datastore

5. Review the disk layout and click Next to continue, as shown in Figure 5-67.

Figure 5-67 Reviewing the disk layout

6. Enter a name for the datastore and click Next, as shown in Figure 5-68.

Figure 5-68 Choosing a name for the new datastore

7. Select how much of the available disk space to use and click Next, as shown in
Figure 5-69.

Figure 5-69 Selecting how much of the available disk space to use

8. Review your selections and click Finish, as shown in Figure 5-70.

Figure 5-70 Completing the Add Storage wizard

9. The “Create VMFS datastore” task starts and appears under Recent Tasks, as shown in
Figure 5-71. Wait for it to complete.

Figure 5-71 The “Create VMFS datastore” task under Recent Tasks

10.After the task is complete, the new datastore appears under Configuration → Storage,
as shown in Figure 5-72.

Figure 5-72 The new datastore listed in the storage view

Configuring multipath settings for mapped volumes
To configure multipath settings for a mapped volume managed by VMware as a datastore,
complete the following steps:
1. Go to Configuration → Storage. Right-click the datastore and click Properties....
The window shown in Figure 5-73 displays.

Figure 5-73 Datastore properties

2. Click Manage Paths.... The window shown in Figure 5-74 displays. Select either Most
Recently Used, Round Robin, or Fixed, then click Change to confirm. Round Robin is
the suggested option, as this allows for I/O load balancing. For more information about the
available options, see Chapter 4, “Host configuration” on page 155.

Figure 5-74 Datastore multipath settings

3. When you are finished, close both windows.

The mapped volume is now configured for maximum availability and ready to be used.
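The path selection policy can also be set per device from the ESXi shell. The following is a sketch; the naa identifier is an example (it reuses the volume UID from Example 5-1), so list your devices first and substitute the correct one:

esxcli storage nmp device list   # list devices and their current path selection policies
esxcli storage nmp device set --device naa.6005076300af004b4800000000000027 --psp VMW_PSP_RR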

5.3.6 Discovering SAS volumes from VMware ESXi 5.5


In the sections that follow, we show how to discover SAS volumes in VMware ESXi 5.5.

Discovering mapped volumes


To discover mapped SAS volumes from VMware ESXi 5.5, complete the steps described in
5.3.4, “Discovering FC volumes from VMware ESXi 5.5” on page 235. The steps required are
identical for both FC and SAS.

The mapped volumes are now ready to be managed by the host operating system.

Creating a datastore from a mapped volume


To create a datastore from a mapped volume in VMware ESXi 5.5, complete the steps
described in 5.3.5, “Discovering iSCSI volumes from VMware ESXi 5.5” on page 237. This
procedure is common to all attachment types.

Configuring multipath settings for mapped volumes


To configure multipath settings for mapped volumes in VMware ESXi 5.5, complete the steps
described in 5.3.5, “Discovering iSCSI volumes from VMware ESXi 5.5” on page 237. This
procedure is common to all attachment types.

Chapter 6. Storage migration


This chapter describes the steps involved in using the storage migration wizard. The storage
migration wizard is used to migrate data from existing external storage systems to the internal
capacity of the Storwize V5000. Migrating data from other storage systems to the Storwize
V5000 consolidates storage and enables the benefits of Storwize V5000 functionality, such as
the easy-to-use GUI, internal virtualization, thin provisioning, and FlashCopy, to be realized
across all volumes. After the migration is complete, the existing system can be retired.

This chapter includes the following topics:
• Interoperability and compatibility
• Storage migration wizard
• Storage migration wizard example

CLI: For more information about the command-line interface setup, see Appendix A,
“Command-line interface setup and SAN Boot” on page 667.

Manually migrating data: For more information about migrating data manually, see
Chapter 11, “External storage virtualization” on page 579.

6.1 Interoperability and compatibility
Interoperability is an important consideration when a new storage system is set up in an
environment that contains an existing storage infrastructure. In this section, we describe how
to check that the storage environment, the existing storage system, and the IBM Storwize
V5000 are ready for the data migration process.

To ensure system interoperability and compatibility between all elements that are connected
to the SAN fabric, check the proposed configuration with the IBM System Storage
Interoperation Center (SSIC). SSIC can confirm whether the solution is supported and provide
recommendations for hardware and software levels.

If the required configuration is not listed for support in the SSIC, contact your IBM sales
representative and ask for a Request for Price Quotation (RPQ) for your specific configuration.

For more information about the IBM SSIC, see this website:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss

6.2 Storage migration wizard


The Storwize V5000 storage migration wizard simplifies the migration task. The wizard
features easy-to-follow panels that guide users through the entire process.

6.2.1 External virtualization capability


To migrate data from an existing storage system to the Storwize V5000, it is necessary to use
the built-in external virtualization capability of the Storwize V5000. This capability places
external Fibre Channel attached logical units (LUs) under the control of the Storwize V5000.
After the volumes are virtualized, hosts continue to access them, but do so through the
IBM Storwize V5000, which acts as a proxy. For more information about external virtualization,
see Chapter 11, “External storage virtualization” on page 579.

6.2.2 Overview of the storage migration wizard


An overview of the storage migration wizard process includes the following considerations:
• Typically, storage systems divide storage into many Small Computer System Interface
(SCSI) LUs that are presented on a Fibre Channel SAN to hosts.
• I/O to the LUs must be stopped and changes made to the mapping of the storage system
LUs and to the SAN fabric zoning so that the original LUs are presented directly to the
Storwize V5000 and not to the hosts. The Storwize V5000 discovers the external LUs as
unmanaged MDisks.
• The unmanaged MDisks are then imported to the Storwize V5000 as image mode MDisks
and placed into a storage pool. This storage pool is now a logical container for the
SAN-attached LUs.
250 Implementing the IBM Storwize V5000


• Each legacy volume has a one-to-one mapping with an image mode MDisk. From a data
perspective, the image mode volume represents the SAN-attached LUs exactly as they
were before the import operation. The image mode volumes are on the same physical
drives of the existing storage system and the data remains unchanged. The Storwize
V5000 is presenting active images of the SAN-attached LUs and is acting as a proxy.
• The hosts must have the existing storage system multipath device driver removed, and
are then configured for Storwize V5000 attachment. Further zoning changes are made for
host-to-V5000 SAN connections. The Storwize V5000 hosts are defined with worldwide
port names (WWPNs) and the volumes are mapped to the hosts. After the volumes are
mapped, the hosts discover the Storwize V5000 volumes through a host rescan or reboot
operation.
• Storwize V5000 volume mirror operations are then initiated. The image mode volumes are
mirrored to generic volumes. The mirrors are online migration tasks, which means a
defined host can still access and use the volumes during the mirror synchronization
process.
• After the mirror operations are complete, the volume mirror relationships and the image
mode volumes are removed. The external storage system LUs are now migrated, and the
now-redundant existing storage can be retired or reused elsewhere.

Attention: If you are going to migrate volumes from another IBM Storwize product, be
aware that the target Storwize system must be configured at the replication layer for the
source system to discover the target as a host. The default layer setting for the Storwize
V5000 is storage. If this is not done, it will not be possible to add the target system as a
host on the source storage system, or to see source volumes on the target system. For
more information about layers and how to change them, see Chapter 10, “Copy services”
on page 451.
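The layer can be checked and changed from the Storwize CLI. The following is a minimal sketch; lssystem reports the current setting in its layer field, and note that the layer typically cannot be changed while partnerships or remote copy relationships with other systems exist:

lssystem
chsystem -layer replication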

6.2.3 Storage migration wizard tasks


The storage migration wizard is designed for the easy and nondisruptive migration of data
from other storage systems to the internal capacity of the Storwize V5000.

This section describes the following storage migration wizard tasks:
• Avoiding data loss
• Accessing the storage migration wizard
• Before you begin
• Preparing the environment for migration
• Mapping storage
• Migrating MDisks
• Configuring hosts
• Mapping volumes to hosts
• Selecting a storage pool
• Finishing the storage migration wizard
• Finalizing migrated volumes

Avoiding data loss


The risk of losing data when the storage migration wizard is used correctly is low. However, it
is prudent to avoid potential data loss by creating a backup of all the data that is stored on the
hosts, the existing storage systems, and the Storwize V5000 before the wizard is used.

Accessing the storage migration wizard
Select System Migration in the Pools menu to open the System Migration panel as shown in
Figure 6-1. The System Migration panel provides access to the storage migration wizard and
displays the migration progress information.

Figure 6-1 Pools menu

Click Start New Migration to begin the storage migration wizard. Figure 6-2 shows the
System Migration panel.

Figure 6-2 System Migration panel

Important:
• If you have not configured your zoning correctly (or the layer is set incorrectly when the
external system is another Storwize system), you may receive a warning message
indicating that no external storage controllers could be found, as shown in Figure 6-3.
Click Close and correct the problem before trying to start the migration wizard again.
This occurs on the early 7.4 beta code that we used in writing this book.
• The subsequent panes in the migration wizard, as shown in Figure 6-6 on page 257,
direct you to remove host zoning to the external storage and create zones between the
Storwize V5000 and the external storage; however, you must already have done this to
start the wizard in the first place. See “Preparing the environment for migration”
on page 257 and “Mapping storage” on page 258 for the list of tasks to complete
before starting the data migration wizard.

Figure 6-3 Error message displayed when no external storage is found

Before you begin
This panel of the storage migration wizard describes the restrictions and prerequisites for the
wizard to complete successfully, as shown in Figure 6-4.

Figure 6-4 Step 1 of the storage migration wizard

Restrictions
Confirm that the following conditions are met:
• You are not using the storage migration wizard to migrate cluster hosts, including clusters
of VMware hosts and Virtual I/O Servers (VIOS).
• You are not using the storage migration wizard to migrate SAN boot images.

If the restriction options cannot be selected, the migration must be performed outside of this
wizard because more steps are required. For more information, see the IBM Storwize V5000
Knowledge Center at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html

The VMware vSphere Storage vMotion feature might be an alternative for migrating VMware
clusters. For more information, see this website:
http://www.vmware.com/products/vsphere/features/storage-vmotion.html

For more information about migrating SAN boot images, see Appendix A, “Command-line
interface setup and SAN Boot” on page 667.

Prerequisites
Confirm that the following prerequisites apply:
• Make sure that the Storwize V5000, existing storage system, hosts, and Fibre Channel
ports are physically connected to the SAN fabrics.
• If there are VMware ESX hosts involved in the data migration, make sure that the VMware
ESX hosts are set to allow volume copies to be recognized. For more information, see the
VMware ESX product documentation at this website:
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html

If all options can be selected, click Next to continue. In all other cases, Next cannot be
selected and the data must be migrated without use of this wizard. Figure 6-5 shows the
storage migration wizard with all restrictions satisfied and prerequisites met.

Figure 6-5 Confirming prerequisites

Preparing the environment for migration
Follow the instructions given in this panel carefully. When all of the required tasks are
complete, click Next to continue. Figure 6-6 shows the Prepare Environment for Migration
panel.

Figure 6-6 Prepare the migration environment

Mapping storage
Follow the instructions shown in Figure 6-7. Record all of the details carefully, because the
information will be used in later panels. Table 6-1 shows an example table for capturing
the information that relates to the external storage system LUs.

Figure 6-7 Directions to record migration data

Table 6-1 Example table for capturing external LU information

LU name          Controller  Array  SCSI ID  Host name  Capacity
V3700external0   node2       V3700  0                   50GiB
V3700external1   node1       V3700  1                   50GiB
V3700external2   node2       V3700  2                   50GiB
V3700external3   node1       V3700  3                   50GiB
V3700external4   node2       V3700  4                   50GiB
V3700external5   node1       V3700  5                   50GiB

SCSI ID: Record the SCSI ID of the LUs to which the host is originally mapped. Some
operating systems do not support changing the SCSI ID during the migration.

Table 6-2 shows an example table for capturing host information.

Table 6-2 Example table for capturing host information

Host Name /    Adapter / Slot / Port  WWPN              HBA F/W  HBA Device  Operating  V5000 Multipath
LU Names                                                         Driver      System     Software
mcr-host-153   QLE2562 / 2 / 1        21000024FF2D076C  2.10     9.1.9.25    RHEL5      Device Mapper
mcr-host-153   QLE2562 / 2 / 2        21000024FF2D076D  2.10     9.1.9.25    RHEL5      Device Mapper

After all the data has been collected and the tasks in the Map Storage section have been
carried out, click Next to continue. The Storwize V5000 runs the Discover Devices task. After
the task is complete, click Close to continue. Figure 6-8 shows the results of the Discover
Devices task.

Figure 6-8 Discover Devices task
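The equivalent CLI commands can be useful for verification at this point. As a sketch:

detectmdisk                           # rediscover external storage (Discover Devices)
lsmdisk -filtervalue mode=unmanaged   # list the newly discovered, unmanaged MDisks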

Migrating MDisks
In the next step of the wizard, select the MDisks that are to be migrated and then click Next to
continue. Figure 6-9 shows the Migrating MDisks panel.

Figure 6-9 Migrating MDisks panel

MDisk selection: Select only the MDisks that are applicable to the current migration plan.
After step 8 of the current migration completes, another migration plan can be started to
migrate any remaining MDisks.

The Storwize V5000 runs the Import MDisks task. After the task is complete, click Close to
continue. Figure 6-10 shows the result of the Import MDisks task. Note the volumes showing
in the background after the task completes.

Figure 6-10 Import MDisks task

Configuring hosts

Note: This step is optional; you may bypass it by selecting Next and moving to “Mapping
volumes to hosts” on page 264.

Follow this step of the wizard to select or configure new hosts as required. Figure 6-11 shows
the Configure Hosts panel.

Figure 6-11 Configure hosts panel

If hosts are already defined, they are listed in the panel, as shown in Figure 6-16 on
page 264. In our case, none are defined yet, so we create one by selecting the Create
Host option. This begins the process of defining a host, starting with the choice of
connectivity, as shown in Figure 6-12.

Figure 6-12 Add host connection type

Select your connection type and click Add Host. The next panel allows you to name the host,
assign ports (in our case, Fibre Channel WWPNs), and in the advanced settings, assign I/O
group ownership and host type as shown in Figure 6-13. More information about I/O group
assignment can be found in Chapter 4, “Host configuration” on page 155.

Figure 6-13 Naming host and selecting advanced options

Add WWPNs as shown in Figure 6-14.

Figure 6-14 Adding host ports

Click Add Host to complete the task as shown in Figure 6-15.

Figure 6-15 Create host task complete

The host is now listed in the original Configure Hosts panel. This is shown for
completeness in Figure 6-16.

Figure 6-16 Configure host with host listed

Mapping volumes to hosts


This step of the wizard allows you to select the newly migrated volumes to be mapped to
hosts. Figure 6-17 shows the Map Volumes to Hosts panel.

Figure 6-17 Map Volumes to Hosts panel

The image mode volumes are listed, the names of which are assigned automatically by the
Storwize V5000 storage system. The names can be changed to reflect something more
meaningful to the user by selecting the volume and clicking Rename in the Actions menu.

Names: The names of the image mode volumes must begin with a letter. The name can be
a maximum of 63 characters. The following valid characters can be used:
• Uppercase letters (A - Z)
• Lowercase letters (a - z)
• Digits (0 - 9)
• Underscore (_)
• Period (.)
• Hyphen (-)
• Blank space

The names must not begin or end with a space.

Hold Ctrl and click to select multiple volumes. Click Map to Host to select the host as shown
in Figure 6-18 and click Apply.

Figure 6-18 Modify host mappings

The Modify Host Mappings panel is now displayed, listing the volumes you have chosen to
map to your particular host, as shown in Figure 6-19. The volumes are shown in yellow in
the Modify Host Mappings panel. The yellow highlighting means that the volumes are not
yet mapped to the host. Click Edit SCSI ID and modify as required. The SCSI ID should
reflect the same SCSI ID as was recorded in step 3.

Figure 6-19 Mapping volumes to host

If the selections are correct, click Apply to complete the task.

Figure 6-20 Mapping task complete

After you click Close, you are returned to the Modify Host Mappings panel. If any errors were
made, you can unmap volumes at this stage. If everything is correct, close the panel by using
the Cancel option, as shown in Figure 6-21. This is normal and is not an indication that the
wizard is being cancelled: the volumes have been mapped successfully, as indicated by the
fact that no volumes are highlighted in yellow and Cancel is the only available option. (If you
use the Map Volumes button instead of the Apply button, the Modify Host Mappings panel
closes when the task window closes.)

Figure 6-21 Cancelling out of Modify Host Mappings

Click Cancel to return to the wizard panel described in “Mapping volumes to hosts” on
page 264, and click Next to continue.

Selecting a storage pool


To continue with the storage migration wizard, you must now select a storage pool into which
to migrate the imported volumes, as shown in Figure 6-22. The destination storage pool will
be an internal storage pool of the Storwize V5000. Ensure that there is enough space in the
selected storage pool to accommodate the migrated volumes. The migration task runs in the
background and results in a copy of the data being placed on the MDisks in the selected
storage pool.

The process uses the volume mirroring function that is included with the Storwize V5000.

Figure 6-22 Storage pool selection

Select a pool and click Next. The task will begin and complete as shown in Figure 6-23.

Figure 6-23 Pool selection task complete

Click Close to move to the end of the migration wizard.

Finishing the storage migration wizard
Click Finish to end the storage migration wizard as shown in Figure 6-24.

Figure 6-24 Migration wizard complete

The end of the storage migration wizard is not the end of the data migration task; it is still in
progress. A percentage indicator is displayed in the Storage Migration panel, as shown in
Figure 6-25.

Figure 6-25 Storage migration progress

Finalizing migrated volumes
When the migration completes with all the progress indicators at 100%, select all the volume
migrations you want to finalize by holding the Ctrl key and clicking them, and then select the
Actions box and Finalize option as shown in Figure 6-26. Alternatively, right-click the
selected volumes and click Finalize.

Figure 6-26 Finalize storage migration

You will be asked to confirm the number of volumes that you want to finalize as shown in
Figure 6-27.

Figure 6-27 Confirm volumes to finalize

Verify that the volume names and the number of migrations are correct, and click OK. The
image mode volumes are deleted and the associated image mode MDisks are removed from
the migration storage pool. Figure 6-28 shows the tasks completing.

Figure 6-28 Removal of image mode volumes

The status of those image mode MDisks is then unmanaged, as shown in the Pools view in
Figure 6-29. When the finalization completes, the data migration to the IBM Storwize V5000 is
done. The zoning can then be removed and the external storage system retired.

Figure 6-29 Unmanaged external MDisks

6.3 Storage migration wizard example
This section describes an example scenario migrating volumes from an external DS3400 to
an IBM Storwize V5000.

6.3.1 Example scenario: Storage migration wizard


The example scenario shows the introduction of a Storwize V5000 to an environment that
contains existing storage infrastructure, which includes a SAN fabric, a Windows 2008 host,
and an IBM DS3400 storage system.

The Windows 2008 host has existing data on the disks of an IBM DS3400 storage system.
That data must be migrated to the internal storage of the Storwize V5000. The Windows 2008
host has a dual-port QLogic Host Bus Adapter (HBA) of type QLE2562. Each of the Fibre
Channel switches is an IBM 2498-24B. There are two host disks to be migrated: Disk 1
and Disk 2. Figure 6-30 shows the Windows 2008 Disk Management panel. Each of the two
disks has a defined volume; the volume labels are Migration 1 (G: drive) and Migration 2
(H: drive).

Figure 6-30 Windows 2008 disk management panel

The two disks to be migrated are on the IBM DS3400 storage system. Therefore, the disk
properties display the disk device type as an IBM1726-4xx FAStT disk device. To show this
disk attribute, right-click the disk to show the menu and then select Properties, as shown in
Figure 6-31.

Figure 6-31 Display properties of disk before migration

After the disk properties panel is opened, the General tab shows the disk device type.
Figure 6-32 shows the General tab in the Windows 2008 Disk Properties window.

Figure 6-32 Windows 2008 Disk Properties: General tab

Perform this task on all disks before the migration; the same check can then be done after the
disks are presented from the Storwize V5000. After the Storwize V5000 mapping and host
rescan, the disk device definitions should have changed to IBM 2145 Multi-Path disk device.
This confirms that the disks are under Storwize V5000 control.

6.3.2 Example scenario: Fibre Channel cabling layout


To provide more information about the example migration, Figure 6-33 shows the example
Fibre Channel cabling layout. The IBM DS3400 and Storwize V5000 are cabled into a dual
SAN fabric configuration. The connection method that is shown provides improved availability
through fabric and path redundancy, and improved performance through workload balancing.
The hosts are also connected to both fabrics, but are not shown in this diagram. Each host has
one connection to each fabric.

Figure 6-33 Example Fibre Channel cabling layout

6.3.3 Using the storage migration wizard for example scenario


This section provides an overview of the storage migration tasks to be performed when the
storage migration wizard is used for our example scenario. A more detailed perspective is
also provided to assist users who require more information.

Overview of storage migration wizard tasks for example scenario


The following steps provide an overview of the wizard tasks for our example scenario:
1. Search the IBM SSIC for device compatibility.
2. Back up all of the data that is associated with the host, DS3400, and Storwize V5000.
3. Start New Migration on the Storwize V5000.
4. Select the check boxes in step 1 of the wizard.
5. Follow Step 2 in the wizard to prepare the environment for migration, including the
following steps:
a. Stop host operations or stop all I/O to volumes that you are migrating.
b. Remove Host-to-DS3400 zones on SAN.

c. Update your host device drivers, including your multipath driver, and configure them for
attachment to the IBM Storwize V5000. Complete the steps that are described in 4.2.1,
“Windows 2012 R2: Preparing for FC attachment” on page 157 to connect to the
Storwize V5000 using Fibre Channel.
Pay attention to the following tasks:
i. Make sure that the latest OS service pack and fixes are applied to your Microsoft
server.
ii. Use the latest firmware and driver levels on your host system adapters.
iii. Configure the HBA for hosts that are running Windows.
iv. Set the Windows timeout value.
v. Install the Subsystem Device Driver Device Specific Module (SDDDSM) multipath
module.
vi. Connect the FC Host Adapter ports to the switches.
d. Create a storage system zone between the storage system that is to be migrated and
the Storwize V5000 and host zones to the Storwize V5000 for the hosts that are being
migrated.
Pay attention to the following tasks:
i. Locate the WWPNs for Host.
ii. Locate WWPNs for IBM DS3400.
iii. Locate WWPNs for Storwize V5000.
iv. Define port aliases definitions on SAN.
v. Add V5000-to-DS3400 zones on SAN.
vi. Add Host-to-V5000 zones on SAN.
e. Create a host or host group in the external storage system with the WWPNs for the
Storwize V5000 system.

Important: If the external storage system cannot restrict access to specific hosts, all
volumes on the system must be migrated.

Add a Storwize V5000 host group on the DS3400.


f. Configure the storage system for use with the Storwize V5000.
See the following link for information about configuring external storage systems for
use with IBM Storwize V5000:
http://pic.dhe.ibm.com/infocenter/storwize/v5000_ic/index.jsp?topic=%2Fcom.ibm.storwize.v5000.710.doc%2Fv5000_ichome_710.html

6. Follow step 3 of the wizard (Map Storage); for each of the steps shown, do these tasks:
a. Create a list of all external storage system volumes that are migrated.
Create a DS3400 LU table.
b. Record the hosts that use each volume.
Create Host table.
c. Record the WWPNs associated with each host.
Add WWPNs to Host table.

d. Unmap all volumes that are to be migrated from the hosts in the storage system and
map them to the host or host group that you created when your environment was
prepared.

Important: If the external storage system cannot restrict access to specific hosts, all
volumes on the system must be migrated.

Move the LUs from Hosts to the Storwize V5000 Host Group on DS3400.
e. Record the storage system LUN that is used to map each volume to this system.
Update the DS3400 LU table.
7. Follow Step 4 of the wizard to discover the external storage LUs as MDisks on the
Storwize V5000.
8. In Step 5 of the wizard, configure hosts by completing the following steps:
a. Create Host on Storwize V5000.
b. Select Host on Storwize V5000.
9. In Step 6 of the wizard, map volumes to hosts by completing the following steps:
a. Map volumes to Host on Storwize V5000.
b. Verify that disk device type is now 2145 on Host.
c. SDDDSM datapath query commands on Host.
10.In step 7 of the wizard, select the internal storage pool on the Storwize V5000 to which you
want the volumes to be migrated. Ensure that sufficient space exists for all the volumes.
11.Finish the storage migration wizard.
12.Finalize the migrated volumes.

Detailed view of the storage migration wizard for the example scenario
The following steps provide a more detailed explanation of the wizard tasks for our
example scenario:
1. Search the IBM SSIC for scenario compatibility.
2. Back up all of the data that is associated with the host, DS3400, and Storwize V5000.
3. Start a New Migration to open the wizard on the Storwize V5000, as shown in “Accessing
the storage migration wizard” on page 252.
4. Follow step 1 of the wizard and select all of the restrictions and prerequisites, as shown in
Figure 6-4 on page 255. Click Next to continue.
5. Follow step 2 of the wizard, as shown in Figure 6-6 on page 257. Complete all of the steps
before you continue.
Pay attention to the following tasks:
a. Stop host operation or stop all I/O to volumes that you are migrating.
b. Remove zones between the hosts and the storage system from which you are
migrating.

c. Update your host device drivers, including your multipath driver and configure them for
attachment to the IBM Storwize V5000. Complete the steps that are described in 4.2.1,
“Windows 2012 R2: Preparing for FC attachment” on page 157 to connect to the
Storwize V5000 using Fibre Channel.
Pay attention to the following tasks:
i. Make sure that the latest OS service pack and fixes are applied to your Microsoft
server.
ii. Use the latest firmware and driver levels on your host system adapters.
iii. Configure the HBA for hosts that are running Windows.
iv. Set the Windows timeout value.
v. Install the Subsystem Device Driver Device Specific Module (SDDDSM) multipath
module.
vi. Connect the FC Host Adapter ports to the switches.
d. Create a storage system zone between the storage system that is to be migrated and
the Storwize V5000 system and host zones for the hosts that are being migrated to the
Storwize V5000.
To perform this step, locate the WWPNs of the host, IBM DS3400, and Storwize
V5000, then create an alias for each port to simplify the zone creation steps.

Important: A WWPN is a unique identifier for each Fibre Channel port that is
presented to the SAN fabric.

Locating the HBA WWPNs on the Windows 2008 host


See the original IBM DS3400 Host definition to locate the WWPNs of the host’s dual port
QLE2562 HBA. To complete this task, open the IBM DS3400 Storage Manager and click the
Modify tab, as shown in Figure 6-34. Select Edit Host Topology to show the host definitions.

Figure 6-34 IBM DS3400 modify tab: Edit Host Topology

Figure 6-35 shows the IBM DS3400 storage manager host definition and the associated
WWPNs.

Figure 6-35 IBM DS3400 host definition

Record the WWPNs for alias, zoning, and the Storwize V5000 New Host task.

Important: Alternatively, the QLogic SAN Surfer application for the QLogic HBAs or the
SAN fabric switch reports can be used to locate the WWPNs of the host.

Locating the controller WWPNs on the IBM DS3400


The IBM DS3400 Storage Manager can provide the controller WWPNs through the Storage
Subsystem Profile. Open the IBM DS3400 Storage Manager, click Support, and select View
Storage Subsystem Profile. Figure 6-36 shows the IBM DS3400 Storage Manager Support
tab.

Figure 6-36 Storage Subsystem Support profile

Click the Controllers tab to show the WWPNs for each controller. Figure 6-37 shows the IBM
DS3400 Storage Manager Storage Subsystem Profile.

Figure 6-37 Storage Subsystem Profile: Controller WWPNs

Locating node canister WWPNs on the Storwize V5000


To locate the WWPNs for the Storwize V5000 node canisters, go to the 3D system health
view and rotate to view the rear of the Storwize V5000. Hover your pointer over the individual
Fibre Channel ports to see the associated WWPN, as shown in Figure 6-38.

Note: The WWPNs shown in this figure are not the same as those used in the example
scenario.

Figure 6-38 Obtaining Storwize V5000 WWPN

WWPN: The WWPN consists of eight bytes (two digits per byte). In Figure 6-38, the third
byte pair in the listed WWPNs is 04, 08, 0C, or 10; this is the only byte pair that differs
between the WWPNs. Also, the last two bytes in the listed example, 04BF, are unique to
each node canister. Taking note of these types of patterns can help when you are zoning or
troubleshooting SAN issues.

Example scenario: Storwize V5000 and IBM DS3400 WWPN diagram


Each port on the Storwize V5000 and IBM DS3400 has a unique and persistent WWPN. This
means that if an HBA in the storage system is replaced, the new HBA presents the same
WWPNs as the old HBA, and that if you know the WWPN of a port, you can match it to the
storage system and the Fibre Channel port. Figure 6-33 on page 274 shows the relationship
between the device WWPNs and the Fibre Channel ports for the Storwize V5000 and the
IBM DS3400 that are used in the example.

Zoning: Defining aliases on the SAN fabrics


Now that the WWPNs for Storwize V5000, IBM DS3400, and Windows 2008 host have been
located, you can define the WWPN aliases on the SAN fabrics for the Storwize V5000,
DS3400 and Windows 2008 host if necessary. Aliases can simplify the zone creation process.
Create an alias name for each interface, then add the WWPN.

Aliases can contain the FC Switch Port to which the device is attached, or the attached
device’s WWPN. In this example scenario, WWPN-based zoning is used instead of
port-based zoning. Either method can be used; however, it is best not to intermix the methods
and keep the zoning configuration consistent throughout the fabric.

When WWPN-based zoning is used, be aware that if an FC adapter is replaced for any
reason, a different WWPN could be presented to the SAN switch, and the previously defined
aliases would have to be modified to match the new card's WWPN. This is not the case for
IBM storage systems because they use persistent WWPNs, as we have already explained.
Example 6-1 shows the aliases we have used in our example.

Example 6-1 WWPN aliases


Storwize V5000 ports connected to SAN Fabric A:
alias= V5000_Canister_Left_Port1 wwpn= 50:05:07:68:03:04:26:BE
alias= V5000_Canister_Left_Port3 wwpn= 50:05:07:68:03:0C:26:BE
alias= V5000_Canister_Right_Port1 wwpn= 50:05:07:68:03:04:26:BF
alias= V5000_Canister_Right_Port3 wwpn= 50:05:07:68:03:0C:26:BF
Storwize V5000 ports connected to SAN Fabric B:
alias= V5000_Canister_Left_Port2 wwpn= 50:05:07:68:03:08:26:BE
alias= V5000_Canister_Left_Port4 wwpn= 50:05:07:68:03:10:26:BE
alias= V5000_Canister_Right_Port2 wwpn= 50:05:07:68:03:08:26:BF
alias= V5000_Canister_Right_Port4 wwpn= 50:05:07:68:03:10:26:BF
IBM DS3400 ports connected to SAN Fabric A:
alias= DS3400_CTRLA_FC1 wwpn= 20:26:00:A0:B8:75:DD:0E
alias= DS3400_CTRLB_FC1 wwpn= 20:27:00:A0:B8:75:DD:0E
IBM DS3400 ports connected to SAN Fabric B:
alias= DS3400_CTRLA_FC2 wwpn= 20:36:00:A0:B8:75:DD:0E
alias= DS3400_CTRLB_FC2 wwpn= 20:37:00:A0:B8:75:DD:0E
Window 2008 HBA port connected to SAN Fabric A:
alias= W2K8_HOST_P2 wwpn= 21:00:00:24:FF:2D:0B:E9
Window 2008 HBA port connected to SAN Fabric B:
alias= W2K8_HOST_P1 wwpn= 21:00:00:24:FF:2D:0B:E8

Zoning: Defining the Storwize V5000-to-DS3400 zones on the SAN fabrics


Define the Storwize V5000-to-DS3400 zones on the SAN fabrics. The preferred practice to
zone DS3400-to-Storwize V5000 connections is to ensure that the IBM DS3400 controllers
are not in the same zone. The zoning configuration in Example 6-2 shows the two zones per
fabric that are necessary to ensure that the IBM DS3400 controllers are not in the same zone.
Also, all Storwize V5000 node canisters must detect the same ports on IBM DS3400 storage
system.

Example 6-2 Storwize V5000 - DS3400 zoning
FABRIC A
Zone name= ALL_V5000_to_DS3400_CTRLA_FC1:
DS3400_CTRLA_FC1
V5000_Canister_Left_Port1
V5000_Canister_Left_Port3
V5000_Canister_Right_Port1
V5000_Canister_Right_Port3
Zone name= ALL_V5000_to_DS3400_CTRLB_FC1:
DS3400_CTRLB_FC1
V5000_Canister_Left_Port1
V5000_Canister_Left_Port3
V5000_Canister_Right_Port1
V5000_Canister_Right_Port3
FABRIC B
Zone name= ALL_V5000_to_DS3400_CTRLA_FC2:
DS3400_CTRLA_FC2
V5000_Canister_Left_Port2
V5000_Canister_Left_Port4
V5000_Canister_Right_Port2
V5000_Canister_Right_Port4
Zone name= ALL_V5000_to_DS3400_CTRLB_FC2:
DS3400_CTRLB_FC2
V5000_Canister_Left_Port2
V5000_Canister_Left_Port4
V5000_Canister_Right_Port2
V5000_Canister_Right_Port4

Zoning: Defining the Host-to-Storwize V5000 zones on the SAN fabrics


Define the Host-to-Storwize V5000 zones on each of the SAN fabrics as shown in
Example 6-3. Zone each Host HBA port with one port from each node canister. This
configuration provides four paths to the Windows 2008 host. SDDDSM is optimized to use
four paths.

Example 6-3 Storwize V5000 - host zoning


FABRIC A
Zone name= W2K8_HOST_P2_to_V5000_Port1s:
W2K8_HOST_P2
V5000_Canister_Left_Port1
V5000_Canister_Right_Port1
FABRIC B
Zone name= W2K8_HOST_P1_to_V5000_Port2s:
W2K8_HOST_P1
V5000_Canister_Left_Port2
V5000_Canister_Right_Port2
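On IBM 2498-24B (Brocade) switches, the aliases and zones above can be created from the Fabric OS CLI. The following is a sketch for the Fabric A host zone only; PROD_CFG is a hypothetical zoning configuration name, so substitute the name of your fabric's active configuration:

alicreate "W2K8_HOST_P2", "21:00:00:24:FF:2D:0B:E9"
alicreate "V5000_Canister_Left_Port1", "50:05:07:68:03:04:26:BE"
alicreate "V5000_Canister_Right_Port1", "50:05:07:68:03:04:26:BF"
zonecreate "W2K8_HOST_P2_to_V5000_Port1s", "W2K8_HOST_P2; V5000_Canister_Left_Port1; V5000_Canister_Right_Port1"
cfgadd "PROD_CFG", "W2K8_HOST_P2_to_V5000_Port1s"
cfgsave
cfgenable "PROD_CFG"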

Important: The configuration of an intra-cluster zone (Storwize V5000-to-Storwize V5000)
on each fabric is advised. Place all Storwize V5000 port aliases from each node canister
into one zone on each of the fabrics. This configuration provides further resilience by
providing another communication path between each of the node canisters.

Creating a host or host group in the external storage system with the WWPNs
for this system

Important: If you cannot restrict volume access to hosts using the external storage
system, all volumes on the system must be migrated.

To complete this step, an IBM DS3400 Host Group is defined for the Storwize V5000, which
contains two hosts. Each host is a node canister of the Storwize V5000.

Creating an IBM DS3400 Host Group


To define a new Host Group for the Storwize V5000 by using the DS3400 Storage Manager,
click the Configure tab and then select Create Host Group to open the Create Host Group
panel, as shown in Figure 6-39.

Figure 6-39 Configure Storage Subsystem

Using the IBM DS3400 Storage Manager, create a Host Group that is named
Storwize_V5000. Figure 6-40 shows the IBM DS3400 Create Host Group panel.

Figure 6-40 IBM DS3400 Create Host Group panel

Creating IBM DS3400 hosts


Using the IBM DS3400 Storage Manager, create a Host for each Storwize V5000 node
canister. To define a new Host using the DS3400 Storage Manager, click the Configure tab
and then select Configure Host-Access (Manual) to open the configure host access panel,
as shown in Figure 6-41.

Figure 6-41 Selecting Configure Host-Access (Manual) option

Provide a name for the host and ensure that the selected host type is IBM TS SAN VCE. The
name of the host should be easily recognizable, such as Storwize_V5000_Canister_Left and
Storwize_V5000_Canister_Right. Click Next to continue. Figure 6-42 shows the IBM DS3400
storage manager Configure Host Access (Manual) panel.

Figure 6-42 IBM DS3400 storage manager Configure tab: Configure host

The node canister’s WWPNs are automatically discovered and must be matched to the
canister’s host definition. Select each of the four WWPNs for the node canister and then click
Add >. The selected WWPN moves to the right side of the panel, as shown in Figure 6-43.

Figure 6-43 IBM DS3400 Specify HBA Host Ports panel



Click Edit to open the Edit HBA Host Port panel, as shown in Figure 6-44.

Figure 6-44 IBM DS3400 storage manager specifying HBA host ports: Edit alias

Enter a meaningful alias for each of the WWPNs, such as V5000_Canister_Left_P1, as
shown in Figure 6-45.

Figure 6-45 IBM DS3400 Edit HBA Host Port panel



After the four node canister ports and aliases are added to the node canister host definition,
click Next to continue. Figure 6-46 shows the node canister WWPNs added to the host
definition on the IBM DS3400 Specify HBA Host Ports panel.

Figure 6-46 IBM DS3400 Specify HBA Host Ports panel



Select Yes to allow the host to share access with other hosts for the same logical drives.
Ensure that the existing Host Group is selected and shows the previously defined
Storwize_V5000 host group. Click Next to continue. Figure 6-47 shows the IBM DS3400
Specify Host Group panel.

Figure 6-47 IBM DS3400 Specify Host Group panel



A summary panel of the defined host and its associated host group is displayed. Cross-check
and confirm the host definition summary, and then click Finish, as shown in Figure 6-48.

Figure 6-48 IBM DS3400 Confirm Host Definition panel

A host definition must be created for the other node canister. This host definition is also
associated to the Host Group Storwize_V5000. To configure the other node canister,
complete the steps described in “Creating IBM DS3400 hosts” on page 284.



The node canister Host definitions are logically contained in the Storwize_V5000 Host Group.
After both node canister hosts are created, confirm the host group configuration by reviewing
the IBM DS3400 host topology tree. To access the host topology tree, use the IBM DS3400
storage manager, click the Modify tab and select Edit Host Topology, as shown in
Figure 6-49.

Figure 6-49 Selecting the Edit Host Topology option

Figure 6-50 shows the host topology of the defined Storwize_V5000 Host Group with both of
the created node canister hosts, as seen through the DS3400 Storage Manager software.

Figure 6-50 IBM DS3400 host group definition for the Storwize V5000



Configuring the storage system for use with this system
To configure the external storage system for use with a Storwize V5000, see the following link:
http://pic.dhe.ibm.com/infocenter/storwize/v5000_ic/index.jsp?topic=%2Fcom.ibm.storwize.v5000.710.doc%2Fv5000_ichome_710.html

Now that the environment is prepared, return to step 2 of the Storage Migration wizard in the
Storwize V5000 GUI and click Next to continue, as shown Figure 6-6 on page 257.

Follow step 3 of the Storage Migration wizard and record the details, as shown in Figure 6-7
on page 258.

Creating a list of all external storage system volumes being migrated and
recording the hosts that use each volume
Table 6-3 shows a list of the IBM DS3400 LUs that are to be migrated and the hosts that use
them.

Table 6-3 List of the IBM DS3400 logical units that are migrated and hosted
LU name Controller Array SCSI ID Host name Capacity

Migration_1 DS3400 Array 1 0 W2K8_FC 50 GB

Migration_2 DS3400 Array 3 1 W2K8_FC 100 GB

Record the WWPNs that are associated with each host as shown in Table 6-4. It is advised
that you record the HBA firmware, HBA device driver version, adapter information, operating
system, and V5000 multi-path software version, if possible.

Table 6-4 WWPNs that are associated to the host


Host name   Adapter / Slot / Port   WWPN               HBA F/W   HBA Device Driver   Operating System   V5000 Multipath Software
W2K8_FC     QLE2562 / 2 / 1         21000024FF2D0BE8   2.10      9.1.9.25            W2K8 R2 SP1        SDDDSM 2.4.3.1-2
            QLE2562 / 2 / 2         21000024FF2D0BE9

Unmap all volumes that are migrated from the hosts in the storage system and map them to
the host or host group that you created when your environment was prepared.

Important: If you cannot restrict volume access to specific hosts using the external
storage system, all volumes on the system must be migrated.



The LUs that are to be migrated are presented from the IBM DS3400 to the Windows 2008
host because of a mapping definition that was configured on the IBM DS3400. This mapping
must be modified so that the LUs are accessible only from the Storwize V5000 Host Group.
To modify the mapping on the IBM DS3400, click the Modify tab and select Edit
Host-to-Logical Drive Mappings, as shown in Figure 6-51.

Figure 6-51 IBM DS3400 storage manager Modify tab

Figure 6-52 shows the IBM DS3400 logical drives mapping information before the change.
The volumes are accessible from the Windows host only.

Figure 6-52 IBM DS3400 Logical drives mapping information before changes

To modify the mapping definition so that the LUs are accessible only from the Storwize V5000
Host Group, select Change... to open the Change Mapping panel and modify the mapping as
shown in Figure 6-53.

Figure 6-53 IBM DS3400 modify mapping panel: Change mapping



Select Host Group Storwize_V5000 in the menu and ensure that the Logical Unit Number
(LUN) remains the same. Record the LUN for later reference. Figure 6-54 shows the IBM
DS3400 Change Mapping panel.

Figure 6-54 IBM DS3400 Change Mapping panel

Confirm the mapping change by selecting Yes as shown in Figure 6-55.

Figure 6-55 Change Mapping confirmation panel



Repeat the mapping change for each of the LUs that are to be migrated. Confirm that the
Accessible By column now reflects the mapping
changes. Figure 6-56 shows that both logical drives are now accessible by Host Group
Storwize_V5000.

Figure 6-56 Edit Host-to-Logical Drive Mappings panel

Recording the storage system LUN used to map each volume to this system
The LUNs that are used to map the logical drives remained unchanged and can be found in
Table 6-3 on page 292. Now that step 3 of the storage migration wizard is complete, click
Next to show the Detect MDisks running task, as shown in Figure 6-7 on page 258.

After the Discover Devices running task is complete, select Close.

Follow the next step of the Storage Migration wizard, as shown in Figure 6-57 and detect the
MDisks to import. The MDisk name is allocated depending on the order of device discovery;
mdisk0 in this case is LUN 0 and mdisk1 is LUN 1. There is an opportunity to change the
MDisk names to something more meaningful to the user in later steps.

Figure 6-57 Detecting MDisks in the Storage Migration wizard



Select the discovered MDisks and click Next to open the Import MDisks running task panel,
as shown in Figure 6-58.

Figure 6-58 Selecting MDisk to migrate

After the Import MDisks running task is complete, select Close. In the next stage of the
wizard we define our host as shown in Figure 6-59. The Windows 2008 host is not yet defined
in the Storwize V5000. Select Create Host to open the Create Host panel.

Figure 6-59 Selecting Create Host option



Enter a host name and select the WWPNs that were recorded earlier from the Fibre Channel
ports menu. Select Add Port to List for each WWPN. Figure 6-60 shows the Create Host
panel.

Figure 6-60 Select WWPNs

After all of the port definitions are added, click Add Host to begin the Create Host running
task. After the Create Host running task is complete, select Close to return to the Configure
Hosts screen, as shown in Figure 6-61. Note that the Windows host is now shown.

Figure 6-61 Create Host task after adding new host
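
The equivalent host definition can also be created through the CLI with the mkhost
command. A minimal sketch, using the WWPNs that were recorded in Table 6-4 (the host
name is an example):

svctask mkhost -name W2K8_FC -fcwwpn 21000024FF2D0BE8:21000024FF2D0BE9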

Select the host that was configured and click Next to move to the next stage of the wizard.



Important: It is not mandatory to select the hosts now. The actual selection of the hosts
occurs in the next step. However, cross-check the hosts that must be migrated by
highlighting them in the list before you click Next.

At this point, it is possible to rename the MDisks to reflect something more meaningful.
Right-click the MDisk and select Rename to open the Rename Volume panel, as shown in
Figure 6-62.

Figure 6-62 Step 6 of the Storage Migration wizard



The name that is automatically given to the image mode volume includes the controller and
the LUN information. Use this information to determine an appropriate name for the volume.
After the new name is entered, click Rename from the Rename Volume panel to start the
rename running task. Rename both volumes. Figure 6-63 shows the Rename Volume panel.

Figure 6-63 Renaming a volume
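
The rename can also be performed through the CLI with the chvdisk command. A minimal
sketch, in which both names are placeholders:

svctask chvdisk -name newVolumeName currentVolumeName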

After the final rename running task is complete, click Close to return to the Map Volumes to
Host pane of the wizard, as shown in Figure 6-64. Highlight the two MDisks and select Map
to Host to open the Modify Host Mappings panel.

Figure 6-64 Renamed MDisks highlighted for mapping



Select the host from the drop-down menu on the Modify Host Mappings panel, as shown in
Figure 6-65. The rest of the Modify Host Mappings panel opens.

Figure 6-65 Modify Host Mappings panel - select host

The MDisks that were highlighted are now shown in yellow in the Modify Host Mappings
panel. The yellow highlighting means that the volumes are not yet mapped to the host. Now is
the time to edit the SCSI ID, if required. (In this case, it is not necessary.) Click Map Volumes
to start the Modify Mappings task, as shown in Figure 6-66.

Figure 6-66 Volumes to be mapped
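
The equivalent CLI command is mkvdiskhostmap. A minimal sketch that assumes the host
name used earlier and a placeholder volume name (if the -scsi parameter is omitted, the
next available SCSI ID is used):

svctask mkvdiskhostmap -host W2K8_FC -scsi 0 volumeName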



After the Modify Mappings running task is complete, select Close to return to the Map
Volumes to Host pane, as shown in Figure 6-67. The Host Mappings column now indicates
that both volumes are mapped.

Figure 6-67 Modify host mappings - volumes mapped

Verifying that the migrated disk device type is now 2145 on the host
The migrated volumes are now mapped to the Storwize V5000 host definition. The properties
of the migrated disks show the disk device type as an IBM 2145 Multi-Path disk device. To
confirm that this information is accurate, right-click the disk to open the menu and select
Properties, as shown in Figure 6-68.

Figure 6-68 Display the disk properties from the Windows 2008 disk migration panel

After the disk properties panel is opened, the General tab shows the disk device type, as
shown in Figure 6-69.



Figure 6-69 Windows 2008 properties General tab

The SDDDSM can also be used to verify that the migrated disk device is connected correctly.
Open the SDDDSM command-line interface (CLI) to run the disk and adapter queries. As an
example, on a Windows 2008 R2 SP1 host, click Subsystem Device Driver DSM to open the
SDDDSM CLI window, as shown in Figure 6-70.

Figure 6-70 Windows 2008 R2 example: Open SDDDSM command line



The SDDDSM disk and adapter queries can be found in the SDDDSM user’s guide. As an
example, on a Windows 2008 R2 SP1 host, useful commands to run include datapath query
adapter and datapath query device. Example 6-4 shows the output of SDDDSM commands
that were run on the Windows 2008 host.

Example 6-4 Output from datapath query adapter and datapath query device SDDDSM commands
C:\Program Files\IBM\SDDDSM>datapath query adapter

Active Adapters :2

Adpt# Name State Mode Select Errors Paths Active


0 Scsi Port3 Bus0 NORMAL ACTIVE 171 0 4 4
1 Scsi Port4 Bus0 NORMAL ACTIVE 174 0 4 4

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 2

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076300AE804B4800000000000019
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 90 0
1 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
2 Scsi Port4 Bus0/Disk1 Part0 OPEN NORMAL 81 0
3 Scsi Port4 Bus0/Disk1 Part0 OPEN NORMAL 0 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076300AE804B4800000000000018
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 81 0
1 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
2 Scsi Port4 Bus0/Disk2 Part0 OPEN NORMAL 93 0
3 Scsi Port4 Bus0/Disk2 Part0 OPEN NORMAL 0 0



Use the SDDDSM output to verify that the expected number of devices, paths, and adapters
are shown. Example 6-4 shows that the workload is balanced across each adapter and that
there are four paths to the device. The datapath query device output shows two devices with
SERIALs: 6005076300AE804B4800000000000019 and 6005076300AE804B4800000000000018.
The serial numbers can be cross-checked with the UID values that are now shown in the
Storage Migration wizard, as shown in Figure 6-67 on page 301.

Now that the image mode volumes of the external storage device are imported, mapped to
migration targets in the IBM Storwize V5000, and mapped to the new host definitions, the
next step in the Storage Migration wizard is to select an internal pool in which the
migrations are to be created. Click Next to open the Select Pool option of the wizard, as
shown in Figure 6-71.

Figure 6-71 Internal storage pool selection



Highlight an internal storage pool and click Next to start the migration task.

After the Start Migration running task is complete, select Close as shown in Figure 6-72.

Figure 6-72 Start Migration completed task panel

The wizard is now complete. Click Finish to open the System Migration panel, as shown in
Figure 6-73.

Figure 6-73 Finish the Storage Migration wizard



The end of the Storage Migration wizard is not the end of the data migration process. The
data migration is still in progress. A percentage indication of the migration progress is shown
in the System Migration panel, as shown in Figure 6-74.

Figure 6-74 Migration progress indicators
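
The migration progress can also be checked from the CLI; the lsmigrate command lists
each active migration with its progress percentage:

svcinfo lsmigrate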

When the volume migrations are complete, right-click the volume migration instance and
select Finalize, as shown in Figure 6-75.

Figure 6-75 Finalize Volume Migrations



From the Finalize Volume Migrations panel, verify the volume names and the number of
migrations and click OK, as shown in Figure 6-76.

Figure 6-76 Finalize Volume Migrations panel

The image mode volumes are deleted and the associated image mode MDisks are removed
from the migration storage pool. The status of those image mode MDisks is now unmanaged.
When the finalization completes, the data migration to the IBM Storwize V5000 is done.
Remove the DS3400-to-Storwize V5000 zoning and retire the external storage system.




Chapter 7. Storage pools


This chapter describes how IBM Storwize V5000 manages physical storage resources. All
storage resources that are under IBM Storwize V5000 control are managed using storage
pools. Storage pools make it easy to dynamically allocate resources, maximize productivity,
and reduce costs. Advanced internal storage, Managed Disks (MDisks), and storage pool
management are covered in this chapter; external storage is covered in Chapter 11, “External
storage virtualization” on page 579.

Storage pools can be configured through the Initial Setup wizard when the system is first
installed, as described in Chapter 2, “Initial configuration” on page 25. They can also be
configured after the initial setup through the management GUI, which provides a set of
presets to help you configure different RAID types.

The advised configuration presets configure all drives into RAID arrays based on drive class
and protect them with the appropriate number of spare drives. Alternatively, you can
configure the storage to your own requirements. Selections include the drive class, the
number of drives to configure, whether to configure spare drives, and whether to optimize
for performance or capacity.

This chapter includes the following topics:


• Working with internal drives
• Working with storage pools
• Working with child pools
• Working with MDisks on external storage



7.1 Working with internal drives
This section describes how to configure the internal storage disk drives using different RAID
levels and optimization strategies.

The IBM Storwize V5000 storage system provides an Internal Storage window for managing
all internal drives. The Internal Storage window can be accessed by opening the System
window, clicking the Pools function icon, and then clicking Internal Storage, as shown in
Figure 7-1.

Figure 7-1 Internal Storage Details via Pools icon



7.1.1 Internal Storage window
The Internal Storage window (as shown in Figure 7-2) provides an overview of the internal
drives that are installed in the IBM Storwize V5000 storage system. Selecting All Internal
under the Drive Class Filter shows all of the drives that are installed in the managed system,
including attached expansion enclosures. Alternatively, you can filter the drives by their type
or class; for example, you can choose to show only Enterprise drive class (SAS), Nearline
SAS, or Flash drives.

Figure 7-2 Internal storage window

On the right side of the Internal Storage window, the internal disk drives of the selected
type are listed. By default, the following information is also listed:
• Logical Drive ID
• Drive capacity
• Current type of use (unused, candidate, member, spare, or failed)
• Status (online, offline, or degraded)
• MDisk name that the drive is a member of
• Enclosure ID that it is installed in
• Slot ID of the enclosure in which the drive is installed

The default sort order is by enclosure ID (this default can be changed to any other column by
left-clicking the column header). To toggle between ascending and descending sort order,
left-click the column header again.
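
A similar concise listing is also available from the CLI with the lsdrive command; a
filter can be added to show, for example, only candidate drives:

lsdrive
lsdrive -filtervalue use=candidate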



More details can be shown, for example, the drive’s Technology Type, by right-clicking the
blue header bar of the table, which opens the selection panel, as shown in Figure 7-3.

Figure 7-3 Internal storage window details selection

You can also find the overall internal storage capacity allocation indicator in the upper right
corner. The Total Capacity shows the overall capacity of the internal storage installed in the
IBM Storwize V5000 storage system. The MDisk Capacity shows the internal storage
capacity that is assigned to the MDisks. The Spare Capacity shows the internal storage
capacity that is used for hot spare disks.

The percentage bar that is shown in Figure 7-4 indicates how much capacity is allocated.

Figure 7-4 Internal storage allocation indicator



7.1.2 Actions on internal drives
There are a number of actions that can be performed on the internal drives when you select
them and right-click, or click the Actions drop-down menu, as shown in Figure 7-5.

Figure 7-5 Internal drive Actions menu

Depending on the status of the drive selected, the following actions are available:

Take Offline
The internal drives can be taken offline if there are problems with them. A confirmation
window opens, as shown in Figure 7-6.

Figure 7-6 Take internal drive offline warning



A drive should be taken offline only if a spare drive is available. If the drive fails (as shown in
Figure 7-7), the MDisk (of which the failed drive is a member) remains online and a hot spare
is automatically reassigned.

Figure 7-7 Internal drive taken offline

If sufficient spare drives are not available and one drive must be taken offline, the second
option for no redundancy must be selected. This option results in a degraded storage pool
because of the degraded MDisk, as shown in Figure 7-8.

Figure 7-8 Internal drive that is failed with MDisk degraded

The IBM Storwize V5000 storage system prevents the drive from being taken offline if there
might be data loss as a result. A drive cannot be taken offline (as shown in Figure 7-9) if no
suitable spare drives are available and, based on the RAID level of the MDisk, drives are
already offline.

Figure 7-9 Internal drive offline not allowed because of insufficient redundancy



Example 7-1 shows how to use the chdrive CLI command to set the drive to failed.

Example 7-1 The use of the chdrive command to set drive to failed
chdrive -use failed driveID
chdrive -use failed -allowdegraded driveID

Mark as
The internal drives in the IBM Storwize V5000 storage system can be assigned the following
usage roles, as shown in Figure 7-10:
• Unused: The drive is not in use and cannot be used as a spare.
• Candidate: The drive is available for use in an array.
• Spare: The drive can be used as a hot spare, if required.

Figure 7-10 Internal drive Mark as... option



The new role that can be assigned depends on the current drive usage role. These
dependencies are shown in Figure 7-11.

Figure 7-11 Internal drive usage role table
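
The equivalent CLI command is chdrive with the -use parameter, as in the following sketch:

chdrive -use unused driveID
chdrive -use candidate driveID
chdrive -use spare driveID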

Identify
Use the Identify action to turn on the LED light so that you can easily identify a drive that must
be replaced or that you want to troubleshoot. The panel that is shown in Figure 7-12 appears
when the LED is on.

Figure 7-12 Internal drive identification

Click Turn LED Off when you are finished.

Example 7-2 shows how to use the chenclosureslot command to turn on and off the drive
LED.

Example 7-2 The use of the chenclosureslot command to turn on and off drive LED
chenclosureslot -identify yes/no -slot slot enclosureID



Upgrade
From this option, you can easily upgrade the drive firmware. The GUI allows you to upgrade
individual drives or upgrade all drives that have available updates. For more information about
upgrading drive firmware, see "Upgrading drive firmware" on page 633.
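
For reference, drive firmware can also be updated through the CLI with the
applydrivesoftware command. The following is a sketch only; it assumes that the firmware
package was already copied onto the system, and the file name is a placeholder:

applydrivesoftware -file drive_firmware_package -type firmware -drive driveID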

Show Dependent Volumes


Clicking Show Dependent Volumes shows the volumes that are dependent on the selected
drive. Volumes are dependent on a drive only when their underlying MDisks are in a degraded
or inaccessible state and removing further hardware causes the volume to go offline. This
condition is true for any RAID 0 MDisk or if the associated MDisk is degraded already.

Use the Show Dependent Volumes option before you perform maintenance to determine
which volumes are affected.

Important: A lack of listed dependent volumes does not imply that there are no volumes
using the drive.

Figure 7-13 shows an example if no dependent volumes are detected for a specific drive.

Figure 7-13 Internal drive no dependent volume



Figure 7-14 shows the list of dependent volumes for a drive when its MDisk is in a degraded
state.

Figure 7-14 Internal drive with dependent volume

Example 7-3 shows how to view dependent volumes for a specific drive by using the CLI.

Example 7-3 Command to view dependent Vdisks for a specific drive


lsdependentvdisks -drive driveID



Properties
Clicking Properties (as shown in Figure 7-15) in the Actions menu or double-clicking the
drive provides the vital product data (VPD) and the configuration information. The Show
Details option was selected to show more information.

Figure 7-15 Internal drives properties: Part1

If the Show Details option is not selected, the technical information section is reduced, as
shown in Figure 7-16.

Figure 7-16 Internal drives properties no details



A tab for the Drive Slot is available in the Properties panel (as shown in Figure 7-17) to get
specific information about the slot of the selected drive.

Figure 7-17 Internal drive properties slot

Example 7-4 shows how to use the lsdrive command to display configuration information
and drive VPD.

Example 7-4 The use of the lsdrive command to display configuration information and drive VPD
IBM_Storwize:ITSO_V5000:superuser>lsdrive 9
id 9
status online
error_sequence_number
use member
UID 5000cca013085fa4
tech_type sas_ssd
capacity 372.1GB
block_size 512
vendor_id IBM-B040
product_id HUSML4040ASS60
FRU_part_number 00Y5815
FRU_identity 11S49Y7471YXXXXYV4LSLA
RPM
firmware_level J3C0
FPGA_level
mdisk_id 8
mdisk_name mdisk7
member_id 0
enclosure_id 1
slot_id 2
node_id
node_name
quorum_id



port_1_status online
port_2_status online
interface_speed 6Gb
protection_enabled no
drive_auto_manage idle
IBM_Storwize:ITSO_V5000:superuser>

Customize Columns
Clicking Customize Columns in the Actions menu allows you to add or remove a number of
columns in the Internal Storage view. To restore the default view, select Restore
Default View, as shown in Figure 7-18.

Figure 7-18 Customizing columns



7.2 Configuring internal storage
The internal storage of an IBM Storwize V5000 can be configured into MDisks and pools
using the system setup wizard during the initial configuration. For more information, see
"Initial configuration" on page 49.

If the internal storage of an IBM Storwize V5000 was not configured during the initial setup,
arrays and MDisks can be configured by clicking Configure Storage in the Internal Storage
window, as shown in Figure 7-19.

Figure 7-19 Configuring internal storage

The following choices are presented:
• Use the Recommended Configuration
This option configures all available drives based on the RAID configuration presets. The
setup creates MDisks and pools but does not create volumes.
If this automated configuration fits your business requirements, it is advised that this
configuration is kept.
• Select a Different Configuration
This option starts a wizard to customize the storage configuration. A storage configuration
might be customized for the following reasons:
– The automated initial configuration does not meet customer requirements.
– More storage was attached to the IBM Storwize V5000 and must be integrated into the
existing configuration.



7.2.1 RAID configuration presets
RAID configuration presets are used to configure internal drives and are based on advised
values for the RAID level and drive class. Each preset has a specific goal for the number of
drives per array and the number of spare drives to maintain redundancy.

Table 7-1 describes the presets that are used for Flash drives for the IBM Storwize V5000
storage system.

Table 7-1 Flash RAID presets


Flash RAID 5 (RAID level 5; drives per array goal: 8; drive count: 3 - 16; spare drive goal: 1)
Protects against a single drive failure. Data and one stripe of parity are striped across all
array members.

Flash RAID 6 (RAID level 6; drives per array goal: 12; drive count: 5 - 16; spare drive goal: 1)
Protects against two drive failures. Data and two stripes of parity are striped across all
array members.

Flash RAID 10 (RAID level 10; drives per array goal: 8; drive count: 4 - 16, even; spare drive goal: 1)
Protects against at least one drive failure. All data is mirrored on two array members.

Flash RAID 1 (RAID level 1; drives per array goal: 2; drive count: 2; spare drive goal: 1)
Protects against at least one drive failure. All data is mirrored on two array members.

Flash RAID 0 (RAID level 0; drives per array goal: 8; drive count: 1 - 8; spare drive goal: 0)
Provides no protection against drive failures.

Flash Easy Tier (RAID level 10; drives per array goal: 2; drive count: 4 - 16, even; spare drive goal: 1)
Mirrors data to protect against drive failure. The mirrored pairs are spread between storage
pools to be used for the Easy Tier function.

Flash RAID instances: In all Flash RAID instances, drives in the array are balanced
across enclosure chains, if possible.



Table 7-2 describes the RAID presets that are used for Enterprise SAS and Nearline SAS
drives for the IBM Storwize V5000 storage system.
Table 7-2 HDD RAID presets
Basic RAID 5 (RAID level 5; drives per array goal: 8; drive count: 3 - 16; spare goal: 1)
Protects against a single drive failure. Data and one stripe of parity are striped across all
array members. Chain balance: all drives in the array are from the same chain wherever
possible.

Basic RAID 6 (RAID level 6; drives per array goal: 12; drive count: 5 - 16; spare goal: 1)
Protects against two drive failures. Data and two stripes of parity are striped across all
array members. Chain balance: all drives in the array are from the same chain wherever
possible.

Basic RAID 10 (RAID level 10; drives per array goal: 8; drive count: 4 - 16, even; spare goal: 1)
Protects against at least one drive failure. All data is mirrored on two array members.
Chain balance: all drives in the array are from the same chain wherever possible.

Balanced RAID 10 (RAID level 10; drives per array goal: 8; drive count: 4 - 16, even; spare goal: 1)
Protects against at least one drive or enclosure failure. All data is mirrored on two array
members. The mirrors are balanced across the two enclosure chains. Chain balance: exactly
half of the drives are from each chain.

RAID 0 (RAID level 0; drives per array goal: 8; drive count: 1 - 8; spare goal: 0)
Provides no protection against drive failures. Chain balance: all drives in the array are
from the same chain wherever possible.

7.2.2 Customizing initial storage configuration


If the initial storage configuration that was carried out during setup, or the advised
configuration, does not meet the requirements, the pools that were created must first be
deleted. In the GUI, click Pools → MDisks by Pools. Select and right-click the
pool and then select Delete Pool, as shown in Figure 7-20.

Figure 7-20 Delete selected pool

Note: The Delete Pool option only becomes available when the Pool is not provisioning
volumes.



By selecting Yes, the MDisks that are associated with the pool are removed and the drives
become candidate drives, as shown in Figure 7-21.

Figure 7-21 Delete pool confirmation

These drives now can be used for a different configuration.

Warning: When a pool is deleted through the CLI with the -force parameter, all host
mappings, and all data that is contained within any volume provisioned from this pool, are
deleted and cannot be recovered.

7.2.3 Creating an MDisk and a pool


To configure internal storage for use with hosts, click Pools → Internal Storage and then
click Configure Storage, as shown in Figure 7-22.

Figure 7-22 Click Configure Storage



A configuration wizard opens and guides you through the process of configuring internal
storage. The wizard shows all internal drives, their status, and their use. The status shows
whether it is Online, Offline, or Degraded. The Use status shows if a drive is Unused, a
Candidate for configuration, a Spare, a Member of a current configuration, or Failed.
Figure 7-23 shows an example in which 14 drives are available for configuration.

Figure 7-23 Available drives for new MDisk

If there are internal drives with a status of Unused, a window opens, which gives the option to
include them in the RAID configuration, as shown in Figure 7-24.

Figure 7-24 Unused drives warning

When the decision is made to include the drives into the RAID configuration, their status is set
to Candidate, which also makes them available for a new MDisk.



The use of the storage configuration wizard simplifies the initial disk drive setup and offers the
following options:
• Use the recommended configuration
• Select a different configuration

Selecting Use the recommended configuration guides you through the wizard that is
described in "Option: Use the recommended configuration" on page 327.

Selecting Select a different configuration uses the wizard that is described in "Option:
Select a different configuration" on page 330.

7.2.4 Option: Use the recommended configuration


As shown in Figure 7-25, when you click Use the recommended configuration, the wizard
offers an advised storage configuration at the bottom of the window.

Figure 7-25 The recommended configuration



In the Configuration Summary section (shown in Figure 7-26 on page 328), the wizard warns
that there are not enough disk drives to satisfy the target number of drives for the
configuration, as follows:
• Insufficient drives of the Enterprise disk class, 278.9 GiB
• Insufficient drives of the Flash class, 372.11 GiB
• Insufficient drives of the Nearline disk class, 1.8 TiB 7200 rpm
Therefore, the wizard automatically creates a number of MDisks, as shown in Figure 7-26.

Figure 7-26 The recommended configuration summary

Spare drives are also automatically created to meet the spare goals according to the preset
chosen; one spare drive is created out of every 24 disk drives of the same drive class on a
single chain. Spares are not created if sufficient spares are already configured.

Spare drives in the IBM Storwize V5000 are global spares, which means that any spare drive
that has at least the same capacity as the drive to be replaced can be used in any array. Thus,
a Flash array with no Flash spare available uses a SAS spare instead.

If the proposed configuration meets your requirements, click Finish, and the system
automatically creates the array MDisks.

Storage pools are also automatically created to contain the MDisks. MDisks with similar
performance characteristics, including RAID level, number of member drives, and drive class
are grouped into the same storage pool.



Important: This option adds new MDisks to an existing storage pool when the
characteristics match. If this is not what is required, the Select a different configuration
option should be used.

After an array is created, the Array MDisk members are synchronized with each other through
a background initialization process. The progress of the initialization process can be
monitored by clicking the icon at the left of the Running Tasks status bar and selecting the
initialization task to view the status, as shown in Figure 7-27.

Figure 7-27 Running task panel

Click the taskbar to open the progress window, as shown in Figure 7-28. The array is
available for I/O during this process.

Figure 7-28 Initialization progress view



7.2.5 Option: Select a different configuration
The Select a different configuration option offers a more flexible way to configure the
internal storage as compared to the Use the recommended configuration preset in terms
of drive selection, RAID level, and storage pool to be used.

Only one drive class (RAID configuration) can be allocated at a time.

Complete the following steps to select a different configuration:


1. Choose drive class.
The drive class selection list contains each drive class available for configuration, as
shown in Figure 7-29.

Figure 7-29 Select drive class for new configuration



2. Click Next and select the appropriate RAID preset, as shown in Figure 7-30.

Figure 7-30 Select the RAID preset

3. Define the RAID attributes.


Selections include the configuration of spares, optimization for performance, optimization
for capacity, and the number of drives to provision.
Each IBM Storwize V5000 preset has a specific goal for the number of drives per array.
For more information, see the Knowledge Center at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html
Table 7-3 shows the RAID goal widths.
Table 7-3 RAID goal width
RAID level   Enterprise goal width   Flash goal width
0            8                       8
5            8                       9
6            12                      10
10           8                       8



The following RAID configurations are available:
– Optimize for Performance
Optimizing for performance creates arrays with the same capacity and performance
characteristics. The RAID goal width (as shown in Table 7-3) must be met for this
target. In a performance optimized setup, the IBM Storwize V5000 provisions eight
physical disk drives in a single array MDisk, except for the following situations:
• RAID 6 uses 12 disk drives.
• Flash Easy Tier uses two disk drives.
Hence, creating an Optimized for Performance configuration is only possible if there
are enough drives available to match your needs.
As a consequence, all arrays with similar physical disks feature the same performance
characteristics. Because of the defined presets, this setup might leave drives unused.
The remaining unconfigured drives can be used in another array.
Figure 7-31 shows an example in which not all of the provisioned drives can be used in
a performance optimized configuration (six drives remain).

Figure 7-31 Optimization for performance failed



Figure 7-32 shows that the number of drives is not enough to satisfy the needs of the
configuration.

Figure 7-32 Not enough drives for performance optimization

Figure 7-33 shows that there are a suitable number of drives to configure performance
optimized arrays.

Figure 7-33 Arrays match performance goals

Four RAID 5 arrays are possible and all provisioned drives are used.



– Optimize for Capacity
Optimizing for capacity creates arrays that allocate all the drives that are specified in
the Number of drives to provision field. This option results in arrays of different
capacities and performance. The number of drives in each MDisk does not vary by
more than one drive, as shown in Figure 7-34. After all these choices are made,
click Next.

Figure 7-34 Capacity optimized configuration

4. Storage pool assignment.


Choose whether an existing pool must be expanded or whether a pool is to be created for
the configuration, as shown in Figure 7-35.

Figure 7-35 Storage pool selection

Complete the following steps to expand or create a pool:


a. Expand an existing pool.
When an existing pool is to be expanded, you can select an existing storage pool that
does not contain MDisks, or a pool that contains MDisks with the same performance
characteristics (which is listed automatically), as shown in Figure 7-36.



Figure 7-36 List of matching storage pool

b. Create one or more pools.


Alternatively, a storage pool is created by entering the required name (Figure 7-37).

Figure 7-37 Create new pool

All drives are initialized when the Configuration wizard is finished.



7.2.6 MDisk by Pools panel
The MDisks by Pools panel (as shown in Figure 7-38) displays information about all MDisks
made of internal and external storage. The MDisks are categorized by the pools to which they
are attached.

Figure 7-38 MDisk by Pool window

The following default information is provided:
• Name
The MDisk and the pool name that is provided during the configuration process.
• Status
The status of the MDisk and storage pool. The following statuses are possible:
– Online
All MDisks are online and performing optimally.
– Degraded
One MDisk is in a degraded state; for example, a missing SAS connection to an
enclosure of member drives, or a failed drive with no spare available. A node can also
be a contributing factor to this state. Figure 7-39 shows that the pool is degraded.

Figure 7-39 One degraded MDisk in pool



– Offline
One or more MDisks in a pool are offline. The pool (Pool3) also changes to offline, as
shown in Figure 7-40.

Figure 7-40 Offline MDisk in a pool

• Capacity
The capacity of the individual MDisks. The capacity of the storage pool is also shown,
which is the total of all the MDisks in this storage pool. The usage of the storage pool is
represented by a bar and a number, as shown in Figure 7-41.

Figure 7-41 Capacity of the Pool

• Mode
An MDisk features the following modes:
– Array
Array mode MDisks are constructed from internal drives by using the RAID
functionality. Array MDisks are always associated with storage pools.
– Unmanaged
The MDisk is not a member of any storage pools, which means it is not used by the
IBM Storwize V5000 storage system. LUNs that are presented by external storage
systems to IBM Storwize V5000 are discovered as unmanaged MDisks.
– Managed
Managed MDisks are LUNs that are presented by external storage systems to an IBM
Storwize V5000 that are assigned to a storage pool and provide extents for volume
creation. Any data that was on these LUNs when they were assigned to the pool is lost.
– Image
Image MDisks are LUNs that are presented by external storage systems to an IBM
Storwize V5000 and assigned directly to a volume with a one-to-one mapping of blocks
between the MDisk and the volume. This status is an intermediate status of the
migration process and is described in Chapter 6, “Storage migration” on page 249.



Figure 7-42 shows some examples.

Figure 7-42 MDisk modes

• Storage System
The name of the external storage system from which the MDisk is presented as shown in
Figure 7-43.

Figure 7-43 Storage system name

• LUN
Represents the Logical Unit Number of the MDisk.

For more information about how to attach external storage to an IBM Storwize V5000 storage
system, see Chapter 11, “External storage virtualization” on page 579.

The CLI command lsmdiskgrp returns a concise list or a detailed view of the storage pools
that are visible to the system, as shown in Example 7-5.

Example 7-5 CLI command lsmdiskgrp

lsmdiskgrp
lsmdiskgrp mdiskgrpID



7.2.7 RAID action for MDisks
Internal drives in the IBM Storwize V5000 are managed as Array mode MDisks, on which
several RAID actions can be performed. Select the appropriate Array MDisk by clicking
Pools  MDisks by Pools, and then click Actions  RAID Actions, as shown in
Figure 7-44.

Figure 7-44 MDisk RAID actions

You can choose the following RAID actions:


• Set Spare Goal
Figure 7-45 shows how to set the number of spare drives that are required to protect the
array from drive failures.

Figure 7-45 MDisk set spare goal



The alternative CLI command is shown in Example 7-6.

Example 7-6 CLI command to set spares


charray -sparegoal mdiskID goal

If the number of drives that are assigned as Spare does not meet the configured spare
goal, an error is logged in the event log that reads: “Array MDisk is not protected by
sufficient spares.” This error can be fixed by adding more drives as spares. During the
internal drive configuration, spare drives are automatically assigned according to the
chosen RAID preset’s spare goals, as described in "Configuring internal storage" on page
322.
• Swap Drive
The Swap Drive action can be used to replace a drive in the array with another drive with
the status of Candidate or Spare. This action is used to replace a drive that failed, or is
expected to fail soon; for example, as indicated by an error message in the event log.
Select an MDisk that contains the drive to be replaced and click RAID Actions → Swap
Drive. In the Swap Drive window, select the member drive that is to be replaced (as shown
in Figure 7-46) and click Next.

Figure 7-46 MDisk swap drive: Step 1



In step 2 (as shown as Figure 7-47), a list of suitable drives is presented. One drive must
be selected to swap into the MDisk. Click Finish.

Figure 7-47 MDisk swap drive: Step 2

The exchange process starts and then runs in the background. The volumes on the
affected MDisk remain accessible.
If the GUI process is not used for any reason, the CLI command in Example 7-7 can be
run.

Example 7-7 CLI command to swap drives


charraymember -balanced -member oldDriveID -newdrive newDriveID mdiskID



• Delete
An Array MDisk can be deleted by clicking RAID Actions → Delete. To select more than
one MDisk, press Ctrl while you left-click. A confirmation is required: you must enter the
number of MDisks to be deleted, as shown in Figure 7-48. If there is data on the MDisk, it
can be deleted only by selecting the option Delete the RAID array MDisk even if it has
data on it.

Figure 7-48 MDisk delete confirmation

Data that is on MDisks is migrated to other MDisks in the pool if enough space is available
on the remaining MDisks in the pool.

Available capacity: Make sure that you have enough available capacity left in the
storage pool for the data on the MDisks to be removed.

After an MDisk is deleted from a pool, its former member drives return to candidate mode.
The alternative CLI command to delete MDisks is shown in Example 7-8.

Example 7-8 CLI command to delete MDisk


rmmdisk -mdisk list -force mdiskgrpID

If all the MDisks of a storage pool were deleted, the pool remains as an empty pool with
0 bytes of capacity, as shown in Figure 7-49.

Figure 7-49 Empty storage pool after MDisk deletion



7.2.8 More actions on MDisks
The following actions can be performed on MDisks:
• Detect MDisks
The Detect MDisks button at the upper left of the MDisks by Pools window is useful if you
have external storage controllers in your environment (for more information, see
Chapter 11, “External storage virtualization” on page 579). The Detect MDisk action starts
a rescan of the Fibre Channel network. It discovers any new MDisks that were mapped to
the IBM Storwize V5000 storage system and rebalances MDisk access across the
available controller device ports. This action also detects any loss of controller port
availability and updates the IBM Storwize V5000 configuration to reflect any changes.
When external storage controllers are added to the IBM Storwize V5000 environment, the
IBM Storwize V5000 automatically discovers the controllers and the LUNs that are
presented by those controllers are listed as unmanaged MDisks. However, if you attached
new storage and the IBM Storwize V5000 did not detect it, you might need to use the
Detect MDisk button before the system detects the new LUNs. If the configuration of the
external controllers is modified afterward, the IBM Storwize V5000 might be unaware of
these configuration changes. Use the Detect MDisk button to rescan the Fibre Channel
network and update the list of unmanaged MDisks.
Figure 7-50 shows the Detect MDisks button.

Figure 7-50 Detect MDisks

MDisks detection: The Detect MDisks action is asynchronous. Although the task
appears to be finished, it might still be running in the background.

• Include Excluded MDisks


An MDisk can be excluded from the IBM Storwize V5000 because of multiple I/O failures.
These failures might be caused, for example, by link errors. After a fabric-related problem
is fixed, the excluded disk can be added back into the IBM Storwize V5000 by selecting
the MDisks and clicking Include Excluded MDisk from the Actions drop-down menu.
• Select Tier
Internal drives and external MDisks have their tier assigned automatically by the IBM
Storwize V5000. The tier of an internal MDisk cannot be changed, but if the disk tier of an
external MDisk is recognized incorrectly by the IBM Storwize V5000, use this option to
manually change the disk tier. To specify a new tier, select the MDisk and click Actions →
Select Tier, as shown in Figure 7-51. A CLI sketch for these three actions follows the
figure.



Figure 7-51 Select the desired disk tier
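
The following CLI sketch corresponds to these three actions; the mdiskID is a placeholder,
and the tier value must be one of the tiers supported by the system (such as ssd,
enterprise, or nearline):

detectmdisk
includemdisk mdiskID
chmdisk -tier enterprise mdiskID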

Some of the other actions are available by clicking MDisk by Pool → Actions, as shown
in Figure 7-52.

Figure 7-52 MDisk actions on externally virtualized storage

Rename
MDisks can be renamed by selecting the MDisk and clicking Rename from the Actions menu.
Enter the new name of your MDisk (as shown in Figure 7-53) and click Rename.

Figure 7-53 Rename MDisk
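
The rename can also be done through the CLI with the chmdisk command; a minimal sketch
with placeholder names:

chmdisk -name newMDiskName mdiskID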



Show Dependent Volumes
Figure 7-54 shows the volumes that are dependent on an MDisk. The volumes can be
displayed by selecting the MDisk and clicking Show Dependent Volumes from the Actions
menu. The volumes are listed with general information.

Figure 7-54 Show dependent volumes

Properties
The Properties action for an MDisk shows the information that you need to identify it. In the
MDisks by Pools window, select the MDisk and click Properties from the Actions menu. The
following tabs are available in this information window:
• The Overview tab (as shown in Figure 7-55) contains information about the MDisk. To
show more details, click Show Details.



Figure 7-55 MDisk properties overview

• The Dependent Volumes tab (as shown in Figure 7-56) lists all of the volumes that use
extents on this MDisk.

Figure 7-56 MDisk dependent volumes



• In the Member Drives tab (as shown in Figure 7-57), you find all of the member drives of
this MDisk. Also, all actions that are described in "Actions on internal drives" on page 313
can be performed on the drives that are listed here.

Figure 7-57 MDisk properties member

7.3 Working with storage pools


The next sections provide a detailed overview of the concept and use of storage pools
and child pools.

Storage pools (or pools) act as containers for MDisks and provision the capacity to volumes.
IBM Storwize V5000 organizes storage into storage pools to ease storage management and
make it more efficient. Child pools act as sub-containers and inherit the parent storage pool’s
properties, and may also be used to provision capacity to volumes. For more information
about the child pools, see “Working with child pools” on page 353.



Storage pools and MDisks are managed via the MDisks by Pools window. You can access the
MDisks by Pools window by clicking Overview → Pools at the upper right, as shown in
Figure 7-58.

Figure 7-58 Pools from the overview window

An alternative path to the Pools window is to click Pools → MDisks by Pools, as shown in
Figure 7-59.

Figure 7-59 Pools from MDisk by Pools window



Using the MDisk by Pools window (as shown in Figure 7-60), you can manage internal and
external storage pools. All existing storage pools are displayed row-by-row. The first row
features Unassigned MDisks, which contains all unmanaged MDisks, if any exist. Each
defined storage pool is displayed with its assigned icon and name, numerical ID, status, and a
graphical indicator that shows the percentage of the pool’s capacity that is allocated to
volumes.

Figure 7-60 Pool window

When you expand a pool’s entry by clicking the plus sign (+) to the left of the pool’s icon, you
can access the MDisks that are associated with this pool. You can perform all the actions on
them, as described in "Working with child pools" on page 353.

7.3.1 Create Pool option


New storage pools are built when an MDisk is created if this MDisk is not attached to an
existing pool. To create an empty pool, click the New Pool option in the pool window.

The only required parameter for the pool is the pool name, as shown in Figure 7-61.

Figure 7-61 Create pool name input



The IBM Storwize V5000 GUI has a default extent size value of 1 GB when you define a new
storage pool. Therefore, if you are creating an empty Pool using the GUI and you want to
specify the size of the extents, you must ensure the option Allow extent size selection
during pool creation is selected in the Settings → GUI Preferences. The size of the extent
is selected at creation and cannot be changed later. Figure 7-62 shows an example of how to
create a pool, choosing an extent size.

Figure 7-62 Creating a pool choosing the extent size.
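
An empty pool with a specific extent size can also be created through the CLI with the
mkmdiskgrp command. In the following sketch, the pool name is an example and the -ext
parameter specifies the extent size in MB (1024 matches the GUI default of 1 GB):

svctask mkmdiskgrp -name V5K_Pool1 -ext 1024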

The new pool is included in the pool list with 0 bytes, as shown in Figure 7-63.

Figure 7-63 Creating an empty pool



7.3.2 Actions on storage pools
There are a number of actions that can be performed on storage pools using the Actions
menu, as shown in Figure 7-64.

Figure 7-64 Pool action overview

Creating a pool
As shown in "Create Pool option" on page 349, this option is used to create an empty pool.

Detecting MDisks
As detailed in "More actions on MDisks" on page 343, the Detect MDisk option starts a
rescan of the Fibre Channel network. It discovers any new MDisks that were recently mapped
to the IBM Storwize V5000.

Changing the storage pool icon


There are several storage pool icons available that can be selected, as shown in Figure 7-65.
These icons can be used to differentiate between storage tiers, RAID levels, or types of
drives.

Figure 7-65 Change storage pool icon



Rename storage pool
The storage pool can be renamed at any time, as shown in Figure 7-66.

Figure 7-66 Rename storage pool
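
The equivalent CLI command is chmdiskgrp with the -name parameter; a minimal sketch with
placeholder names:

chmdiskgrp -name newPoolName oldPoolName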

Deleting a storage pool


Pools can be deleted using the GUI only if there are no volumes assigned to them. A panel
appears to confirm that all associated MDisks will be removed and the associated drives will
become candidate drives, as shown in Figure 7-67.

Figure 7-67 Confirmation to delete the storage pool

Through the CLI, you can delete a pool and all its contents by using the -force parameter;
however, all volumes and host mappings are deleted and cannot be recovered.

Important: After you delete the pool through the CLI, all data that is stored in the pool is
lost except for the image mode MDisks; their volume definition is deleted, but the data on
the imported MDisk remains untouched.

After you delete the pool, all the managed or image mode MDisks in the pool return to a
status of unmanaged.
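
A sketch of the CLI deletion that is described above; the pool name is a placeholder, and
the -force parameter must be used with extreme caution:

rmmdiskgrp poolName
rmmdiskgrp -force poolName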



Child pools
Figure 7-68 shows the list of child pools that are dependent on a storage parent pool. A
storage pool can be either a parent or child type. For more information about child pools, see
"Working with child pools" on page 353.

Figure 7-68 Child pools list

7.4 Working with child pools


A child pool is a user-configurable object that is similar to a storage pool; however, a
child pool can be created through the CLI only.

A storage parent pool is a collection of MDisks that provides the pool of storage from which
volumes are provisioned; a child pool resides in, and gets its capacity exclusively from, one
storage pool. A user can specify the child pool capacity at creation. The child pool can grow
and shrink nondisruptively.

The child pool shares the properties of the parent storage pool and provides most of the
functions that storage pools have. These are the common properties and behaviors:
• ID, Name, Status, Volume Count
• Warning Thresholds (child pools can have their own threshold warning setting)
• MDisk (this value is always zero “0” for child pools)
• Space Utilization (in %)
• Easy Tier (for a child pool, this value must be the same as the parent mdiskgrp’s Easy Tier
setting)



Figure 7-69 shows an example of a child pool.

Figure 7-69 Child pool representation

7.4.1 Creating child pools


See Example 7-9 for how to create a child pool using the CLI.

Example 7-9 Creating a child pool


IBM_Storwize:ITSO_V5000:superuser>svctask mkmdiskgrp -name V5K_Child_Pool -warning
70% -parentmdiskgrp V5K_Parent_Pool -unit gb -size 200
MDisk Group, id [3], successfully created
IBM_Storwize:ITSO_V5000:superuser>

The following parameters must be specified when creating child pools:


• -name (the child pool name)
• -warning (generates a warning when the used disk capacity exceeds the specified
threshold)
• -parentmdiskgrp (the ID or name of the parent pool)
• -unit (specifies the data unit; it must be in multiples of the extent size of the parent pool)
• -size (specifies the capacity of the child pool)



Note: The capacity of the child pool is always limited to the free capacity of the parent pool.

Because a child pool must have a unique parent pool, the maximum number of child pools
per parent pool is 127.

For more information about the limits and restrictions, see the following URL:
https://ibm.biz/BdFNu6

7.4.2 Actions on child pools


When you select a child pool and click the Actions menu, only the following actions are
applicable to child pools:

Change Icon
As described in “Changing the storage pool icon” on page 351, this option helps to manage
and differentiate between pools and drive classes.

Rename
Select this option to rename a child pool. Actions are explained in “Rename storage pool” on
page 352.

Delete Pool
If the child pool is not provisioning any capacity to volumes, use this option to delete a child
pool.

Properties
The properties of the child pool can be seen by choosing this option.

7.4.3 Resizing a child pool


A child pool can be easily expanded and reduced, although resizing a child pool can be
performed only through the CLI.

If you are certain that the parent pool has sufficient free extents, you can use the svctask
chmdiskgrp command with the -size parameter to set the new size of the child pool.

The capacity of the child pool can also be reduced. However, you cannot reduce the size of
the child pool to less than its used capacity.

Assuming that the child pool capacity is 20.00 GB, Example 7-10 shows how to increase the
capacity by 2 GB.

Example 7-10 Expanding a child pool


IBM_Storwize:ITSO_V5000:superuser>svctask chmdiskgrp -size 22 -unit gb
V5K_Child_Pool1
IBM_Storwize:ITSO_V5000:superuser>



After you expand the size of the child pool, you can run svcinfo lsmdiskgrp to see a detailed
view of the child pool, as shown in Example 7-11.

Example 7-11 Run svcinfo lsmdiskgrp to get a view of the child pool
IBM_Storwize:ITSO_V5000:superuser>svcinfo lsmdiskgrp V5K_Child_Pool1
id 3
name V5K_Child_Pool1
status online
mdisk_count 0
vdisk_count 0
capacity 22.00GB
extent_size 512
free_capacity 22.00GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 70
easy_tier auto
easy_tier_status balanced
tier ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier nearline
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
site_id
site_name
parent_mdisk_grp_id 1
parent_mdisk_grp_name V5K_Parent_Pool
child_mdisk_grp_count 0
child_mdisk_grp_capacity 0.00MB
type child_thick
IBM_Storwize:ITSO_V5000:superuser>

Assuming that the child pool capacity is 22 GB and you want to reduce the capacity by 4 GB,
use svctask chmdiskgrp with the -size parameter to reduce the capacity.

Example 7-12 shows an example of the command. It sets the size of the child pool to 18 GB.

Example 7-12 Shrinking a child pool


IBM_Storwize:ITSO_V5000:superuser>svctask chmdiskgrp -size 18 -unit gb
V5K_Child_Pool1
IBM_Storwize:ITSO_V5000:superuser>



7.5 Working with MDisks on external storage
After the internal storage configuration is complete, you can find the MDisks that were created
from the internal drives in the MDisks by Pools window. From this window, you can manage
all MDisks, whether they are made up of internal or external storage.

Logical Unit Numbers (LUNs) that are presented by external storage systems to the IBM
Storwize V5000 are discovered as unmanaged MDisks. Initially, such an MDisk is not a
member of any storage pool, which means that it is not used by the IBM Storwize V5000
storage system.

To learn more about external storage, see Chapter 11, “External storage virtualization” on
page 579.



Chapter 8. Advanced host and volume administration
The IBM Storwize V5000 offers many functions for volume and host configuration. The basic
host and volume features of IBM Storwize V5000 are described in Chapter 4, “Host
configuration” on page 155 and Chapter 5, “Volume configuration” on page 201. These
chapters also describe how to create hosts and volumes and how to map them to a host.

This chapter includes the following topics:


򐂰 Advanced host administration
򐂰 Adding and deleting host ports
򐂰 Host mappings overview
򐂰 Advanced volume administration
򐂰 Volume properties
򐂰 Advanced volume copy functions
򐂰 Volumes by storage pool
򐂰 Volumes by host



8.1 Advanced host administration
This section describes host administration, including host modification, host mappings, and
deleting hosts. Basic host creation and mapping are described in Chapter 4, “Host
configuration” on page 155. It is assumed that hosts have already been defined and volumes
are mapped to them.

The following topics are covered in this section:


򐂰 Modifying Hosts, as described in 8.1.1, “Modifying Mappings menu” on page 362.
򐂰 Ports by Host, as described in 8.2, “Adding and deleting host ports” on page 376.
򐂰 Host Mappings, as described in 8.3, “Host mappings overview” on page 382.

The Hosts menu is shown in Figure 8-1.

Figure 8-1 Host menu



Select Hosts to open the Hosts panel, as shown in Figure 8-2.

Figure 8-2 Hosts

Figure 8-2 shows several hosts, each with volumes mapped to them.

Selecting a host and clicking Actions (as shown in Figure 8-3), or right-clicking the host,
shows the available actions.

Figure 8-3 Host - Action menu



As shown in Figure 8-3, there are a number of Actions associated with host mapping. For
more information, see 8.1.1, “Modifying Mappings menu” on page 362 and 8.1.2, “Unmapping
volumes from a host” on page 366.

8.1.1 Modifying Mappings menu


From the Host panel, after selecting a host and opening the Actions menu, select Modify
Mappings to open the Modify Host Mappings panel, as shown in Figure 8-4.

Figure 8-4 Host mappings panel

If the system has multiple I/O groups, a drop-down menu at the top-left of the panel shows the
I/O group selection. When selecting individual I/O groups, the IBM Storwize V5000 GUI lists
only the volumes that correspond to that I/O group. The Host drop-down menu lists the hosts
that are attached to the IBM Storwize V5000.

Important: Before you change host mappings, always ensure that the host can access
volumes from the correct I/O group.

The left pane (Unmapped Volumes) shows the volumes that are available for mapping to the
chosen host. The right pane (Volumes Mapped to the Host) shows the volumes that are
already mapped. Figure 8-4 shows that the volume named Win2012_Vol_01 with SCSI ID 0 is
mapped to the host Win_2012_FC. The left pane shows that a further 29 volumes are available
for mapping.

Important: The Unmapped Volumes pane refers to volumes that are not mapped to the
chosen host; however, they might be mapped to other hosts.



To map further volumes to a chosen host, select the volume from the left pane, as shown in
Figure 8-5, then click the Right Arrow icon to move the volume to the right pane, as shown in
Figure 8-6. The changes are marked in yellow and the Map Volumes and Apply buttons are
enabled.

Figure 8-5 Modify Host Mappings

Figure 8-6 Modify Host Mappings



Clicking Map Volumes applies the changes and the Modify Mappings panel shows that the
task completed successfully, as shown in Figure 8-7.

Figure 8-7 Modify Mappings task completed

After clicking Close, the Modify Mapping panel closes.

If Apply was clicked in the panel shown in Figure 8-6 on page 363, the changes are
submitted to the system, but the Modify Host Mappings panel remains open for further
changes.



From the Modify Host Mappings panel, it is possible to modify another host by selecting it
from the Hosts drop-down menu, as shown in Figure 8-8.

Figure 8-8 Selecting another host to modify

After moving a volume to the right pane in the Modify Host Mappings panel, it is possible to
right-click the yellow unmapped volume and change the SCSI ID used for the host mapping,
as shown in Figure 8-9.

Figure 8-9 Editing SCSI ID



Click Edit SCSI ID, select the required SCSI ID, then click OK to save the change. Click
Apply to submit the changes and complete the host volume mapping.

Important: The IBM Storwize V5000 automatically assigns the lowest available SCSI ID if
none is specified. However, you can set a specific SCSI ID for the volume. The SCSI ID
cannot be changed while the volume is assigned to the host.
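
For reference, the same mapping, including an explicit SCSI ID, can be performed from the
CLI with the mkvdiskhostmap command. The following is an illustrative sketch that reuses the
host and volume names from this example; the exact success message can vary between
code levels:

IBM_Storwize:ITSO_V5000:superuser>svctask mkvdiskhostmap -host Win_2012_FC -scsi 1
Win2012_Vol_02
Virtual Disk to Host map, id [1], successfully created
IBM_Storwize:ITSO_V5000:superuser>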

It is also possible to map a volume to multiple hosts; however, this normally is done only in a
clustered host environment because otherwise data corruption can occur. A warning is
displayed, as shown in Figure 8-10.

Figure 8-10 Volume mapped to multiple hosts warning

8.1.2 Unmapping volumes from a host


To unmap a volume from a host, select the volume and click the Left Arrow icon to move the
volume to the left (Unmapped Volumes) pane. Multiple volumes can be unmapped by
selecting them with the Ctrl or Shift key, as shown in Figure 8-11.



Figure 8-11 Unmapping volumes

To unmap all volumes from a host, select a host from the Hosts panel and, using the Actions
menu, click Unmap all Volumes, as shown in Figure 8-12.

Figure 8-12 Unmap all volumes



A prompt is displayed requesting confirmation of the number of mappings to be removed.
Enter the number of mappings to be removed and click Unmap. In the example, two
mappings are removed, as shown in Figure 8-13.

Figure 8-13 Enter the number of mappings to be removed

Unmapping: Clicking Unmap removes this host's access to all of the selected volumes.
Ensure that the required procedures are run on the host OS before the unmapping
procedure is performed.

After clicking Unmap, the changes are applied to the system, as shown in Figure 8-14. Click
Close after you review the output.

Figure 8-14 Unmapping all volumes from host
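
The equivalent CLI action uses the rmvdiskhostmap command. A minimal sketch, assuming
the host and volume names from the preceding example; the command returns no output
when it succeeds, and it is repeated for each volume that must be unmapped:

IBM_Storwize:ITSO_V5000:superuser>svctask rmvdiskhostmap -host Win_2012_FC
Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>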



Figure 8-15 shows the selected host no longer has any volume mappings.

Figure 8-15 Host mapping

8.1.3 Renaming a host


To rename a host, select the host and from the Actions menu, click Rename, as shown in
Figure 8-16.

Figure 8-16 Renaming a host



Enter a new name and click Rename, as shown in Figure 8-17. Clicking Reset changes the
host name back to its original name, while clicking Cancel cancels the name change and
closes the Rename Host panel.

Figure 8-17 Rename Host panel

After the changes are applied to the system, click Close, as shown in Figure 8-18.

Figure 8-18 Rename a host task completed



8.1.4 Removing a host
To remove a host, select the host and from the Actions menu, click Remove, as shown in
Figure 8-19.

Figure 8-19 Removing a host

A prompt is displayed requesting confirmation of the number of hosts to be removed. Enter
the number of hosts to be removed and click Delete, as shown in Figure 8-20.

Figure 8-20 Removing a host

If it is necessary to remove a host with volumes assigned, the deletion must be forced by
selecting the option Remove the hosts even if volumes are mapped to them
(see Figure 8-20).



After the task is complete, click Close to return to the Hosts panel, as shown in Figure 8-21.

Figure 8-21 Remove host task completed
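
A host can also be removed through the CLI with the rmhost command. A minimal sketch,
assuming the host from this example no longer has volume mappings; the -force flag can be
added and corresponds to the GUI option that removes the host even if volumes are mapped
to it:

IBM_Storwize:ITSO_V5000:superuser>svctask rmhost Win_2012_FC
IBM_Storwize:ITSO_V5000:superuser>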

8.1.5 Host properties


This section describes the host properties, which provide the following information:
򐂰 Overview
򐂰 Mapped Volumes
򐂰 Port Definitions

To open the Host Details panel, select the host and from the Action menu, click Properties.
You also can highlight the host and right-click it, as shown in Figure 8-22.

Figure 8-22 Opening host properties



Figure 8-23 shows the Overview tab of the Host Details panel. Select the Show Details
check box in the lower left to see more information about the host.

Figure 8-23 Host details

This tab provides the following information:


򐂰 Host Name: Host object name
򐂰 Host ID: Host object identification number
򐂰 Status: The current host object status; it can be Online, Offline, or Degraded
򐂰 # of FC Ports: The number of host Fibre Channel ports
򐂰 # of iSCSI Ports: The number of host iSCSI names or host IQN IDs
򐂰 # of SAS Ports: The number of host SAS ports
򐂰 I/O Group: The I/O group from which the host can access a volume (or volumes)
򐂰 iSCSI CHAP Secret: The Challenge Handshake Authentication Protocol information if it
exists or is configured



To change the host properties, click Edit; several fields can then be edited, as shown in
Figure 8-24.

Figure 8-24 Host properties: Editing host information

Make any necessary changes and click Save to apply them and then click Close to return to
the Host Details panel.

Figure 8-25 shows the Mapped Volumes tab which provides an overview of the volumes
mapped to the host. This tab provides the following information:
򐂰 SCSI ID
򐂰 Volume name
򐂰 UID
򐂰 Caching I/O group ID

Selecting the Show Details option does not show any further information.

Figure 8-25 Host Details: Mapped Volumes



Figure 8-26 shows the Port Definitions tab, which shows the configured host ports and their
status. This tab provides the following information:
򐂰 The worldwide port names (WWPNs) (for SAS and FC hosts)
򐂰 iSCSI Qualified Name (IQN) for iSCSI hosts
򐂰 Type column: Shows the port type information.
򐂰 # Nodes Logged In column: Lists the number of IBM Storwize V5000 node canisters that
each port (initiator port) is logged in to.

Figure 8-26 Host port definitions

Using this panel, you can also Add and Delete Host Ports, as described in 8.2, “Adding and
deleting host ports” on page 376. Selecting the Show Details option does not show any
further information.

Click Close to close the Host Details section.



8.2 Adding and deleting host ports
To configure host ports, browse to Hosts  Ports by Host to open the Ports by Host panel,
as shown in Figure 8-27.

Figure 8-27 Ports by Host panel

Hosts are listed in the left pane. The Host icons show an orange cable for a Fibre Channel
host, a black cable for a SAS host, and a blue cable for an iSCSI host.

The properties of the selected host are shown in the right pane. Clicking Add Host starts the
wizard described in Chapter 4, “Host configuration” on page 155.



Figure 8-28 shows the Action menu and associated tasks for a host.

Figure 8-28 Ports by Host - Action menu

8.2.1 Adding a host port


To add a host port, select the host from the left pane, click Add, and then choose a Fibre
Channel, SAS, or iSCSI port, as shown in Figure 8-29.

Figure 8-29 Adding a host port



Important: A host system can have a mix of Fibre Channel, iSCSI, and SAS connections.
If a configuration requires a mix of protocols, check the capabilities of the OS and plan
carefully to avoid miscommunication or data loss.

8.2.2 Adding a Fibre Channel port


After clicking Add, as shown in Figure 8-29 on page 377, clicking Fibre Channel Port opens
the Add Fibre Channel Ports panel.

Click the Fibre Channel Ports drop-down menu to display a list of all available Fibre
Channel host ports. If the FC WWPN of your host is not shown in the menu, check the SAN
zoning and rescan the SAN from the host, or try clicking the Rescan button.

Select the required WWPN and click Add Port to List. This adds the port to the Port
Definitions list, as shown in Figure 8-30. Repeat this step to add more ports to the host.

Figure 8-30 Adding online host ports



To add an offline port, manually enter the WWPN of the port into the Fibre Channel Ports field
and click Add Port to List, as shown in Figure 8-31.

Figure 8-31 Adding offline port

The port appears as unverified because it is not logged in to the SAN fabric or known to the
IBM Storwize V5000. The first time the port logs in, the state automatically changes to online
and the mapping is applied to this port.

To remove one of the ports from the list, click the red X next to it. In Figure 8-31, we manually
added an FC port.

Important: When removing online or offline ports, the IBM Storwize V5000 prompts you to
confirm the number of ports to be deleted but does not warn about mappings. Disk
mapping is associated to the host object and Logical Unit Number (LUN) access is lost if all
ports are deleted.

Click Add Ports to Host to apply the changes.
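
The CLI equivalent is the addhostport command. The following is an illustrative sketch; the
WWPN shown is hypothetical, and on some code levels the -force flag may be needed to
add a port that is not currently logged in to the fabric:

IBM_Storwize:ITSO_V5000:superuser>svctask addhostport -fcwwpn 2100000E1E30C763
Win_2012_FC
IBM_Storwize:ITSO_V5000:superuser>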

8.2.3 Adding a SAS host port


After clicking Add, as shown in Figure 8-29 on page 377, clicking SAS Port opens the Add
SAS Ports panel.

Click the SAS Ports drop-down menu to see a list of all available SAS ports. If the SAS
WWPNs are not shown, try the Rescan option or check the physical connections.

Important: The IBM Storwize V5000 allows the addition of an offline SAS port. Enter the
SAS WWPN in the SAS Ports field and then click Add Port to List.



Select the required SAS WWPN and click Add Port to List, as shown in Figure 8-32. Repeat
this step to add more ports to a host.

Figure 8-32 Adding an online SAS port

Click Add Ports to Host to apply the changes.

8.2.4 Adding an iSCSI host port


After clicking Add, as shown in Figure 8-29 on page 377, selecting iSCSI Port opens the Add
iSCSI Ports panel.

Figure 8-33 Adding iSCSI Host ports

Enter the initiator name of your host and click Add Port to List, as shown in Figure 8-33.
Click Add Ports to Host to apply the changes. The iSCSI port status remains unknown until
it is added to the host and a host rescan process is completed.

Click Close to return to the Ports by Host panel.



Important: An error message with code CMMVC6581E is shown if one of the following
conditions occurs:
򐂰 The IQNs exceed the maximum number that is allowed.
򐂰 There is a duplicated IQN.
򐂰 The IQN contains a comma or leading or trailing spaces.
򐂰 The IQN is invalid in some other way.

8.2.5 Deleting a host port


To delete host ports, click Host  Ports by Host to open the Ports by Host panel.

Select the host in the left pane, select the host port to be deleted, and then click the Delete
Port button, as shown in Figure 8-34.

Figure 8-34 Delete host port

Multiple ports can be selected by holding down the Ctrl or Shift keys while selecting the host
ports to delete.



Figure 8-35 shows the Delete Ports confirmation panel. Confirm the number of ports to be
deleted and click Delete to apply the changes.

Figure 8-35 Delete host port

A task panel opens that shows the results. Click Close to return to the Ports by Host panel.
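
The CLI equivalent is the rmhostport command. A minimal sketch, using a hypothetical
WWPN and the host name from this chapter:

IBM_Storwize:ITSO_V5000:superuser>svctask rmhostport -fcwwpn 2100000E1E30C763
Win_2012_FC
IBM_Storwize:ITSO_V5000:superuser>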

8.3 Host mappings overview


Select Host  Host Mappings to open the Host Mappings panel, as shown in Figure 8-36.

Figure 8-36 Host mappings



The Host Mapping panel shows a list of mapped hosts and volumes, along with their
respective SCSI ID and Volume Unique Identifier (UID). Figure 8-36 on page 382 shows the
host Windows_2012_FC has two mapped volumes (Windows2012_Vol_01 and
Windows2012_Vol_02), and the associated SCSI ID (0 and 1), Volume Name, UID, and
Caching I/O Group ID.
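
The same information is available from the CLI through the lshostvdiskmap command,
which lists the SCSI ID, volume name, and volume UID for each mapping of the specified
host. A minimal sketch, using the host from this example (output omitted):

IBM_Storwize:ITSO_V5000:superuser>svcinfo lshostvdiskmap Windows_2012_FC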

As shown in Figure 8-37, selecting a mapping and opening the Actions menu provides the
following options:
򐂰 Unmapping volumes
򐂰 Properties (Host)
򐂰 Properties (Volume)

Figure 8-37 Host mapping options

If multiple mappings are selected (by holding the Ctrl or Shift keys), only the Unmap Volumes
option is available.

8.3.1 Unmapping volumes


Select one or more mappings and from the Actions menu, select Unmap Volumes, enter the
number of mappings to be removed (as shown in Figure 8-38), and click Unmap. The
mappings for all selected entries are removed.

Figure 8-38 Unmapping a volume from host

Warning: Ensure that the required procedures are run on the host OS before unmapping
volumes on the IBM Storwize V5000.



8.3.2 Properties (Host)
Selecting a mapping and clicking Properties (Host) from the Actions menu (as shown in
Figure 8-37 on page 383) opens the Host Properties panel. For more information, see 8.1.5,
“Host properties” on page 372.

8.3.3 Properties (Volume)


Selecting a mapping and clicking Properties (Volume) from the Actions menu (as shown in
Figure 8-37 on page 383) opens the Volume Properties panel. For more information about
volume properties, see 8.5, “Volume properties” on page 396.

8.4 Advanced volume administration


This section describes volume administration tasks, such as volume modification, volume
migration, and creation of volume copies. We assume that you have completed Chapter 5,
“Volume configuration” on page 201, and so are familiar with volume creation and with
generic, thin-provisioned, mirrored, and thin-mirrored volumes.

Figure 8-39 shows the following options that are available within the Volumes menu for
advanced features administration:
򐂰 Volumes
򐂰 Volumes by Pool
򐂰 Volumes by Host

Figure 8-39 Volumes menu



8.4.1 Advanced volume functions
Click Volumes (as shown in Figure 8-39 on page 384) and the Volumes panel opens, as
shown in Figure 8-40.

Figure 8-40 Volume panel

This panel lists all configured volumes on the system and provides the following information:
򐂰 Name: Shows the name of the volume. If there is a + sign next to the name, this sign
means that there are two copies of this volume. Click the + sign to expand the view and list
the copies, as shown in Figure 8-40.
򐂰 Status: Provides the status information about the volume, which can be online, offline, or
degraded.
򐂰 Capacity: The disk capacity that is presented to the host. If a blue volume icon is shown
next to the capacity, the volume is a thin-provisioned volume. Therefore, the listed
capacity is the virtual capacity, which might be more than the real capacity on the system.
򐂰 Storage Pool: Shows in which storage pool the volume is stored. The primary copy is
shown unless you expand the volume copies.
򐂰 UID: The volume unique identifier.
򐂰 Host Mappings: Shows whether a volume has host mappings: Yes when at least one host
mapping exists, and No when there are no host mappings.

Tip: Right-clicking anywhere in the blue title bar allows you to customize the volume
attributes that are displayed. You might want to add some useful information, such as
caching I/O group and real capacity.

To create a volume, click Create Volumes and complete the steps as described in 5.1,
“Creating volumes in IBM Storwize V5000” on page 202.



Right-clicking or selecting a volume, and opening the Actions menu, shows the available
actions for a volume, as shown in Figure 8-41.

Figure 8-41 Action menu for a volume

Depending on the volume selected, the following Volume options are available:
򐂰 Create Volumes
򐂰 Map to Host
򐂰 Unmap All Hosts
򐂰 View Mapped Host
򐂰 Duplicate Volume
򐂰 Move to Another I/O Group (if a multi-I/O group system)
򐂰 Rename
򐂰 Shrink
򐂰 Expand
򐂰 Migrate to Another Pool
򐂰 Export to Image Mode
򐂰 Delete
򐂰 Volume Copy Actions
– Create Volumes
– Add Mirrored Copy
򐂰 Properties

For Thin-Provisioned volumes, the following Volume Copy Actions are available:
򐂰 Create Volumes
򐂰 Add Mirrored Copy
򐂰 Thin Provisioned
– Shrink
– Expand
– Edit Properties

These options are described in the next sections.



8.4.2 Mapping a volume to a host
To map a volume to a host, select Map to Host from the Action menu. Select the I/O group
(the I/O Group drop-down box is only shown on multi-I/O group systems) and the host to
which you want to map the volume. Figure 8-42 shows the Modify Host Mappings panel.

Figure 8-42 Modify Host Mappings panel

Important: It is not possible to change the caching I/O group by using the I/O group
drop-down menu. Instead, the menu is used to list the hosts that have access to the
specified I/O group.

After a host is selected, the Modify Mappings panel opens. In the upper left, the I/O group
(if applicable) and selected host are displayed. The volume that is to be mapped is highlighted
in yellow, as shown in Figure 8-43. Click Map Volumes to apply the changes to the system.

Figure 8-43 Modify Host Mappings



After the changes are made, click Close to return to the Volumes panel.

Modify Mappings panel: For more information about the Modify Mappings panel, see
8.1.1, “Modifying Mappings menu” on page 362.

8.4.3 Unmapping volumes from all hosts


To remove all host mappings from a volume, select Unmap All Hosts from the actions menu.
This action removes all host mappings, which means that no hosts can access this volume.
Confirm the number of mappings that are to be removed and click Unmap, as shown in
Figure 8-44.

Figure 8-44 Unmapping from host (or hosts)

After the task completes, click Close to return to the Volumes panel.

Important: Ensure that the required procedures are run on the host OS before the
unmapping procedure is performed.



8.4.4 Viewing which host is mapped to a volume
To determine which host mappings are configured, highlight a volume and select View
Mapped Host from the Actions menu. The Host Maps tab of the Volume Details panel opens,
as shown in Figure 8-45. In this example, host Win_2012_FC is mapped to the Win2012_Vol_01
volume.

Figure 8-45 Volume to host mapping

To remove a mapping, highlight the host and click Unmap from Host. If several hosts are
mapped to this volume (for example, in a cluster), only the selected host is removed.

8.4.5 Renaming a volume


To rename a volume, select Rename from the Action menu. The Rename Volume window
opens. Enter the new name, as shown in Figure 8-46.

Figure 8-46 Renaming a volume

Clicking Reset resets the name field to the original name of the volume. Click Rename to
apply the changes, and click Close after the task completes.



8.4.6 Shrinking a volume
The IBM Storwize V5000 can shrink volumes. This feature should be used only if the host OS
supports it. It reduces the capacity that is allocated to the particular volume by the amount
specified. To shrink a volume, select Shrink from the Actions menu. Enter either the new
volume size or the amount by which the volume should shrink, as shown in Figure 8-47.

Important: Before you shrink a volume, ensure that the host Operating System (OS)
supports this. If the OS does not support shrinking a volume, it is likely that the OS will log
disk errors and data corruption will occur.

Figure 8-47 Shrink Volume panel

Click Shrink to start the process. When the task completes, click Close to return to the
Volumes panel.

Run the required procedures on the host OS after the shrinking process.
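
The CLI equivalent is the shrinkvdisksize command. A minimal sketch that reduces the
example volume by 1 GB; the same caution about host OS support applies:

IBM_Storwize:ITSO_V5000:superuser>svctask shrinkvdisksize -size 1 -unit gb
Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>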

Important: For volumes that contain more than one copy, you might receive a
CMMVC6354E error; run the lsvdisksyncprogress command on the IBM Storwize V5000
CLI to view the synchronization status. Wait for the copy to synchronize. If you want the
synchronization process to complete more quickly, increase the rate by running the
chvdisk command. When the copy is synchronized, resubmit the shrink process.

See Appendix A, “Command-line interface setup and SAN Boot” on page 667.



8.4.7 Expanding a volume
The IBM Storwize V5000 can expand volumes. This feature should be used only if the host
OS supports it. This capability increases the capacity that is allocated to the particular volume
by the amount specified. To expand a volume, select Expand from the Actions menu. Enter
either the new volume size or the amount by which the volume should expand, and click
Expand, as shown in Figure 8-48.

Figure 8-48 Expand Volume panel

After the task completes, click Close to return to the Volumes panel.

Run the required procedures on the host OS to make use of the newly available space.
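
The CLI equivalent is the expandvdisksize command. A minimal sketch that grows the
example volume by 1 GB:

IBM_Storwize:ITSO_V5000:superuser>svctask expandvdisksize -size 1 -unit gb
Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>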

8.4.8 Migrating a volume to another storage pool


The IBM Storwize V5000 supports online volume migration while applications are running.
Using volume migration, volumes can be moved between storage pools, whether the pools
are internal pools or on an external storage system. The migration process is a low-priority
task; one extent is moved at a time, so the effect on performance is slight.

Important: To migrate a volume, the source and target storage pool must have the same
extent size. For more information about extent size, see Chapter 1, “Overview of the IBM
Storwize V5000 system” on page 1.

To migrate a volume to another storage pool, select Migrate to Another Pool from the
Actions menu. The Migrate Volume Copy panel opens.

If the volume consists of more than one copy, you are asked which copy you want to migrate
to another storage pool, as shown in Figure 8-49. If the selected volume consists of one copy,
this option does not appear.

Notice that the Win2012_Vol_01 volume has two copies stored in two different storage pools.
The storage pools to which they belong are shown in parentheses.



Figure 8-49 Migrate Volume

Select the new target storage pool and click Migrate.

The volume migration starts, as shown in Figure 8-50. Click Close to return to the Volumes
panel.

Figure 8-50 Migrate Volume Copy task progress



Depending on the size of the volume, the migration process can take some time. The status
of the migration can be monitored in the Running Tasks bar at the bottom of the GUI panel.
Volume migration tasks cannot be interrupted.
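
The migration can also be started and monitored from the CLI. The following is a minimal
sketch, assuming the pool and volume names used in this chapter; svcinfo lsmigrate
reports the progress of all running migrations (output omitted):

IBM_Storwize:ITSO_V5000:superuser>svctask migratevdisk -vdisk Win2012_Vol_01 -copy 0
-mdiskgrp V5K_Parent_Pool
IBM_Storwize:ITSO_V5000:superuser>svcinfo lsmigrate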

After the migration completes, the “copy 0” from the Win2012_Vol_01 volume is shown in the
new storage pool, as shown in Figure 8-51.

Figure 8-51 Volume showing in new storage pool

The volume copy was migrated to the new storage pool without any downtime. It is also
possible to migrate both volume copies to other storage pools.

The volume copy feature can also be used to migrate volumes to a different pool with a
different extent size, as described in 8.6.5, “Migrating volumes by using the volume copy
features” on page 412.

8.4.9 Exporting to an image mode volume


Image mode provides a direct block-for-block translation from MDisk to a Volume with no
virtualization. An image mode MDisk is associated with exactly one volume. This feature can
be used to export a volume to a non-virtualized disk and to remove the volume from storage
virtualization.



Select the volume to be exported to an image mode volume and, from the Actions menu,
select Export to Image Mode, as shown in Figure 8-52.

Figure 8-52 Exporting a volume to an image mode

The Export to Image Mode wizard opens and displays the available MDisks. Select the
MDisk to which the volume will be exported and click Next, as shown in Figure 8-53.

Figure 8-53 Selecting the MDisk to export the volume



Select a storage pool into which the image-mode volume will be placed after migration is
completed, as shown in Figure 8-54.

Figure 8-54 Select the storage pool

Click Finish to start the migration. After the task is complete, click Close to return to Volumes
panel.

Important: Use image mode to import or export existing data into or out of the IBM
Storwize V5000. Migrate such data from image mode MDisks to other storage pools to
benefit from storage virtualization.

For more information about importing volumes from external storage, see Chapter 6, “Storage
migration” on page 249 and Chapter 7, “Storage pools” on page 309.



8.4.10 Deleting a volume
To delete a volume, select Delete from the Actions menu. Confirm the number of volumes to
be deleted, and select the check box if you want to force the deletion. Figure 8-55 shows the
Delete Volume panel.

Figure 8-55 Delete Volume panel

Click Delete and the volumes are removed from the system.
After the task completes, click Close to return to the Volumes panel.

Important: You must force the deletion if the volume has host mappings or is used in
FlashCopy mappings. To be cautious, always ensure that the volume has no association
before you delete it.
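
From the CLI, a volume is deleted with the rmvdisk command; the -force flag corresponds
to the check box shown in Figure 8-55. A minimal sketch:

IBM_Storwize:ITSO_V5000:superuser>svctask rmvdisk -force Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>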

8.5 Volume properties


This section provides an overview of all available information that is related to IBM Storwize
V5000 volumes.

To open the advanced view of a volume, select Properties from the Action menu, and the
Volume Details panel opens, as shown in Figure 8-56 on page 397. The following tabs are
available:
򐂰 Overview
򐂰 Host Maps
򐂰 Member MDisk

8.5.1 Overview tab


The Overview tab shown in Figure 8-56 on page 397 gives you a complete overview of the
volume properties. The left part of the panel displays common volume properties and the right
part of the panel displays information about the volume copies.



Figure 8-56 Volume Details - Overview

The following details are available:


򐂰 Volume Properties:
– Volume Name: Shows the name of the volume.
– Volume ID: Shows the ID of the volume.
– Status: Gives status information about the volume, which can be online, offline, or
degraded.
– Capacity: Shows the capacity of the volume. If the volume is thin-provisioned, this
number is the virtual capacity; the real capacity is displayed for each copy.
– # of FlashCopy Mappings: The number of existing FlashCopy relationships. For more
information, see Chapter 10, “Copy services” on page 451.
– Volume UID: The volume unique identifier.
– Caching I/O Group: Specifies the volume Caching I/O Group.
– Accessible I/O Group: Shows the I/O Group the host can use to access the volume.
– Preferred Node: Specifies the ID of the preferred node for the volume.
– I/O Throttling: It is possible to set a maximum rate at which the volume processes I/O
requests. The limit can be set in I/Os per second (IOPS) or in MBps. This is an
advanced feature and it is possible to enable it only through the CLI, as described in
Appendix A, “Command-line interface setup and SAN Boot” on page 667; a brief CLI
sketch is also shown at the end of this section.



– Mirror Sync Rate: After creation, or if a volume copy is offline, the mirror sync rate
weights the synchronization process. Volumes with a high sync rate (100%) complete
the synchronization faster than volumes with a lower priority. By default, the rate is set
to 50% for all volumes.
– Cache Mode: Shows if the cache is enabled or disabled for this volume.
– Cache State: Indicates whether any I/O data is held in the cache that has not yet been
destaged to the disks.
– UDID (OpenVMS): The unit device identifiers are used by OpenVMS hosts to access
the volume.
򐂰 Copy Properties:
– Storage Pool: Provides information about which pool the copy is in, what type of copy it
is (generic or thin-provisioned), the status of the copy, and Easy Tier status.
– Capacity: Shows the allocated (used) and virtual (real) capacity for each tier (SSD and
HDD), the warning threshold, and the grain size for thin-provisioned volumes.
It is possible to modify some of these settings. Click Edit to change the available settings,
as shown in Figure 8-57.

Figure 8-57 Modify Volume Details panel



In the Modify Volume Details window, the following properties can be changed:
򐂰 Volume Name
򐂰 I/O Group
򐂰 Mirror Sync Rate
򐂰 Cache Mode
򐂰 UDID

Make any required changes and click Save.

Important: Changing the I/O group can cause loss of access because of cache reload and
host-I/O group access. Also, setting the Mirror Sync Rate to 0% disables synchronization.
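
As noted under I/O Throttling, the throttle can be set only through the CLI, using the chvdisk
command with the -rate parameter. The following is an illustrative sketch; the rate is
interpreted as IOPS unless the -unitmb flag is specified, and the limit values shown are
arbitrary examples:

IBM_Storwize:ITSO_V5000:superuser>svctask chvdisk -rate 1000 Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>svctask chvdisk -rate 40 -unitmb Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>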

8.5.2 Host Maps tab


The second tab of the Volume Details panel is Host Maps, as shown in Figure 8-58. All hosts
that are mapped to the selected volume are listed in this view.

Figure 8-58 Volume Details - Host Maps



To unmap a host from the volume, highlight it and click Unmap from Host. Confirm the
number of mappings to remove and click Unmap. Figure 8-59 shows the Unmap Host panel.

Figure 8-59 Unmap Host panel

After the changes are applied to the system, the selected host no longer has access to this
volume. Click Close to return to the Host Maps window. For more information about host
mappings, see 8.3, “Host mappings overview” on page 382.

8.5.3 Member MDisks tab


The third tab is the Member MDisks tab, which lists all MDisks on which the volume is located.
If there are multiple copies of the volume, it is possible to select an individual copy or display
both. The associated MDisks are displayed, as shown in Figure 8-60.

Figure 8-60 Member MDisk tab



When an image mode volume is using external storage, the Storage Subsystem name and
the external LUN ID are displayed.

Highlight an MDisk and click Actions to see the available tasks, as shown in Figure 8-61.

For more information about the available tasks, see Chapter 7, “Storage pools” on page 309.

Figure 8-61 MDisk Actions menu

Click Close to return to the Volumes panel.

8.5.4 Adding a mirrored volume copy


If a volume consists of only one copy, it is possible to add a second mirrored copy of the
volume. This second copy can be generic or thin-provisioned.

You also can use this method to migrate data across storage pools with different extent sizes.



To add a second copy, select the volume and click Actions  Volume Copy Actions 
Add Mirrored Copy, as shown in Figure 8-62.

Figure 8-62 Add Mirrored Copy

Select if the volume should be Generic or Thin-Provisioned, select the storage pool in which
the new copy should be created and click Add Copy, as shown in Figure 8-63.

Figure 8-63 Select storage pool



The copy is created after clicking Add Copy and data starts to synchronize as a background
task. Figure 8-64 shows the volume named Win2012_Vol_02 now has two volume copies that
are stored in two different storage pools.

Figure 8-64 Volume copies
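
The CLI equivalent is the addvdiskcopy command. A minimal sketch, assuming the pool and
volume names from this example; for a thin-provisioned copy, parameters such as -rsize
and -autoexpand can be added:

IBM_Storwize:ITSO_V5000:superuser>svctask addvdiskcopy -mdiskgrp V5K_Parent_Pool
Win2012_Vol_02
IBM_Storwize:ITSO_V5000:superuser>

Synchronization progress can then be followed with the lsvdisksyncprogress command, as
shown in Example 8-1.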

8.5.5 Editing thin-provisioned volume properties


The processes that are used to modify the volume size that is presented to a host are
described in 8.4.6, “Shrinking a volume” on page 390 and 8.4.7, “Expanding a volume” on
page 391.

However, with a thin-provisioned volume, it is also possible to edit the allocated size and the
warning thresholds. To edit these settings, select the volume, then select Actions  Volume
Copy Actions  Thin-Provisioned as shown in Figure 8-65 on page 404.

The following options are available as shown in Figure 8-65:


򐂰 Shrink
򐂰 Expand
򐂰 Edit Properties



Figure 8-65 Working with thin-provisioned volumes

Shrinking thin-provisioned space


Select Shrink, as shown in Figure 8-65, to reduce the allocated space of a thin-provisioned
volume. Enter the amount by which the volume should shrink or the new final size and click
Shrink.

Deallocating extents: Only extents that do not include stored data can be deallocated. If
the space is allocated because there is data on the extent, the allocated space cannot be
shrunk and an out-of-range warning message appears.

Figure 8-66 shows the Shrink Volume window.

Figure 8-66 Shrink Thin-Provisioned Volume panel



After the task completes, click Close. The allocated space of the thin-provisioned volume is
reduced.

Expanding thin-provisioned space


To expand the allocated space of a thin-provisioned volume, select Expand, as shown in
Figure 8-65 on page 404. Enter the amount by which space should be allocated or the new
final size and click Expand. In our example, shown in Figure 8-67, we are expanding the
thin-provisioned space by 15 MiB.

Figure 8-67 Expand Thin-Provisioned Volume panel

The new space is now allocated. Click Close after the task is complete.
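
From the CLI, the real (allocated) capacity of a thin-provisioned volume can be expanded
with the expandvdisksize command and the -rsize parameter. A minimal sketch matching
the 15 MiB expansion in this example:

IBM_Storwize:ITSO_V5000:superuser>svctask expandvdisksize -rsize 15 -unit mb
Win2012_Vol_02
IBM_Storwize:ITSO_V5000:superuser>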

Editing thin-provisioned properties


To edit thin-provisioned properties, select Edit Properties, as shown in Figure 8-65 on
page 404. Edit the settings (if required) and click OK to apply the changes.

Figure 8-68 shows the Edit Properties window.

Figure 8-68 Edit Properties panel

After the task completes, click Close to return to the Volumes panel.



8.6 Advanced volume copy functions
In 8.4.1, “Advanced volume functions” on page 385, we described all of the available actions
at a volume level and how to create a second volume copy. In this section, we focus on
volumes that consist of two volume copies and how to apply the concept of two copies for
business continuity and data migration.

Expanding a volume to show its copies, selecting a copy, and opening the Actions menu
displays the following volume copy actions, as shown in Figure 8-69:
򐂰 Thin-Provisioned (for thin volumes)
򐂰 Make Primary (for non-primary copy)
򐂰 Split into New Volume
򐂰 Validate Volume Copies
򐂰 Delete Copy option

Figure 8-69 Volume copy actions

Looking at the volume copies that are shown in Figure 8-69, it is possible to see that one of
the copies has a star displayed next to its name, as also shown in Figure 8-70.

Figure 8-70 Volume copy names

Each volume has a primary and a secondary copy, and the star indicates the primary copy.
The two copies are always synchronized, which means that all writes are destaged to both
copies, but all reads are always done from the primary copy. Two copies per volume is the
maximum number configurable and the roles of the copies can be changed.

To accomplish this task, select the secondary copy and then click Actions  Make Primary.
Usually, it is a preferred practice to place the volume copies on storage pools with similar
performance because the write performance is constrained if one copy is placed on a lower
performance pool.



Figure 8-71 shows the secondary copy Actions menu.

Figure 8-71 Make primary

If high read performance is demanded, another possibility is to place the primary copy in an
SSD pool or an externally virtualized flash system, and the secondary copy in a normal disk
storage pool. This action maximizes the read performance of the volume and makes sure that
you have a synchronized second copy in your less expensive disk pool. It is possible to
migrate online copies between storage pools. For more information about how to select which
copy you want to migrate, see 8.4.8, “Migrating a volume to another storage pool” on
page 391.

Click Make Primary and the role of the copy is changed to primary. Click Close when the
task completes.
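
The CLI equivalent is the chvdisk command with the -primary parameter, which takes the ID
of the copy that should become primary. A minimal sketch, assuming copy 1 of the example
volume:

IBM_Storwize:ITSO_V5000:superuser>svctask chvdisk -primary 1 Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>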

The volume copy feature also is a powerful option for migrating volumes, as described in
8.6.5, “Migrating volumes by using the volume copy features” on page 412.

8.6.1 Thin-Provisioned menu


The Thin-Provisioned menu includes the same functions that are described in “Shrinking
thin-provisioned space” on page 404, “Expanding thin-provisioned space” on page 405, and
“Editing thin-provisioned properties” on page 405.



Figure 8-72 shows the Thin-Provisioned menu items.

Figure 8-72 Thin-Provisioned menu items

8.6.2 Splitting into a new volume


If the two volume copies are synchronized, it is possible to split one of the copies to a new
volume and map this volume to another host. From a storage point of view, this procedure can
be performed online, which means that you can split one copy from the volume and create a
copy from the remaining volume without any host impact. However, if you want to use the split
copy for testing or backup purposes, you must make sure that the data inside the volume is
consistent. Therefore, the data must be flushed to storage to make the copies consistent.

For more information about flushing the data, see your operating system documentation. The
easiest way to flush the data is to shut down the hosts or application before a copy is split.

In our example, volume Win2012_Vol_01 has two copies: Copy 0 as primary and Copy 1 as
secondary. To split a copy, click Split into New Volume (as shown in Figure 8-69 on
page 406) on any copy and the remaining secondary copy automatically becomes the
primary for the source volume. Optionally, a name for the new volume can be specified.

Figure 8-73 shows the Split Volume Copy panel.

Figure 8-73 Split Volume Copy panel



After the task completes, click Close to return to the Volumes panel, where the copy appears
as a new volume named vdisk0 (unless a name was specified) that can be mapped to a host,
as shown in Figure 8-74.

Figure 8-74 Volumes: New volume from split copy
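
The CLI equivalent is the splitvdiskcopy command. A minimal sketch, assuming that copy 1
of the example volume is split off; the new volume name shown is hypothetical:

IBM_Storwize:ITSO_V5000:superuser>svctask splitvdiskcopy -copy 1 -name
Win2012_Vol_01_Split Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>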

Important: If you receive error message code CMMVC6357E while you are splitting a volume
copy, use the lsvdisksyncprogress command to view the synchronization status, or wait
for the copy to synchronize. Example 8-1 shows the output of the lsvdisksyncprogress
command.

Example 8-1 Output of lsvdisksyncprogress command


IBM_Storwize:ITSO_V5000:superuser>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
30 Win2012_Vol_02 1 77 141031132545
IBM_Storwize:ITSO_V5000:superuser>



8.6.3 Validate Volume Copies option
It is possible to check whether the volume copies are identical and to process any differences
between them.

To validate the copies of a mirrored volume, complete the following steps:


1. Select Validate Volume Copies, as shown in Figure 8-69 on page 406. The Validate
Volume Copies panel opens, as shown in Figure 8-75.

Figure 8-75 Validate Volume Copies panel

The following options are available:


– Generate Event of Differences
Use this option if you want to verify only that the mirrored volume copies are identical. If
any difference is found, the command stops and logs an error that includes the logical
block address (LBA) and the length of the first difference. You can use this option,
starting at a different LBA each time, to count the number of differences on a volume.
– Overwrite Differences
Use this option to overwrite contents from the primary volume copy to the other volume
copy. The command corrects any differing sectors by copying the sectors from the
primary copy to the copies that are compared. Upon completion, the command
process logs an event that indicates the number of differences that were corrected.
Use this option if you are sure that the primary volume copy data is correct or that your
host applications can handle incorrect data.



– Return Media Error to Host
Use this option to convert sectors on all volume copies that contain different contents
into virtual medium errors. Upon completion, the command logs an event, which
indicates the number of differences that were found, the number that were converted
into medium errors, and the number that were not converted. Use this option if you are
unsure what the correct data is and you do not want an incorrect version of the data to
be used.
2. Select which action to perform and click Validate to start the task. The volume is now
checked. Click Close.
Figure 8-76 shows the output when the volume copy Generate Event of Differences
option is chosen.

Figure 8-76 Volume copy validation output

The validation process runs as a background process and may take some time, depending on
the volume size. You can check the status in the Running Tasks window, as shown in
Figure 8-77.

Figure 8-77 Validate Volume Copies: Running Tasks
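
The validation can also be started from the CLI with the repairvdiskcopy command. As a
rough mapping, the -validate flag corresponds to Generate Event of Differences, -resync
to Overwrite Differences, and -medium to Return Media Error to Host; exactly one of these
flags is specified. A minimal sketch, with progress checked through
lsrepairvdiskcopyprogress (output omitted):

IBM_Storwize:ITSO_V5000:superuser>svctask repairvdiskcopy -validate Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>svcinfo lsrepairvdiskcopyprogress Win2012_Vol_01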



8.6.4 Delete Volume Copy option
Click Delete (as shown in Figure 8-69 on page 406) to delete a volume copy. The copy is
deleted, but the volume remains online by using the remaining copy. Confirm the deletion
process by clicking Yes. Figure 8-78 shows the copy deletion warning panel.

Figure 8-78 Delete a copy

After the copy is deleted, click Close to return to the Volumes panel.

8.6.5 Migrating volumes by using the volume copy features


In the previous sections, we showed that it is possible to create, synchronize, split, and delete
volume copies. A combination of these tasks can be used to migrate volumes to other storage
pools.

The easiest way to migrate volume copies is to use the migration feature that is described in
8.4.8, “Migrating a volume to another storage pool” on page 391. Using this feature, one
extent after another is migrated to the new storage pool. However, the use of volume copies
provides another way to migrate volumes if you have different storage pool characteristics in
terms of extent size.

To migrate a volume, complete the following steps:


1. Create a second copy of your volume in the target storage pool. For more information, see
8.5.4, “Adding a mirrored volume copy” on page 401.
2. Wait until the copies are synchronized.
3. Change the role of the copies and make the new copy the primary copy. For more
information, see 8.6, “Advanced volume copy functions” on page 406.
4. Split or delete the old copy from the volume. For more information, see 8.6.2, “Splitting into
a new volume” on page 408 or 8.6.4, “Delete Volume Copy option” on page 412.

This migration process requires more user interaction with the IBM Storwize V5000 GUI, but it
offers some benefits.
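
For reference, the four steps above can also be driven entirely from the CLI. The following is
an illustrative sketch; the target pool name Tier2_Pool is hypothetical, and the copy IDs
assume that the original copy is copy 0 and the new copy is copy 1:

IBM_Storwize:ITSO_V5000:superuser>svctask addvdiskcopy -mdiskgrp Tier2_Pool
Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>svcinfo lsvdisksyncprogress Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>svctask chvdisk -primary 1 Win2012_Vol_01
IBM_Storwize:ITSO_V5000:superuser>svctask rmvdiskcopy -copy 0 Win2012_Vol_01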

As an example, we look at migrating a volume from a tier 1 storage pool to a lower
performance tier 2 storage pool.

In step 1, you create the copy on the tier 2 pool, while all reads are still performed in the tier 1
pool to the primary copy. After the synchronization, all writes are destaged to both pools, but
the reads are still done only from the primary copy.



Because the copies are fully synchronized, you can switch their role online (see step 3), and
analyze the performance of the new pool. When you are done testing your lower performance
pool, you can split or delete the old copy in tier 1 or switch back to tier 1 in seconds if the tier
2 storage pool did not meet your requirements.

8.7 Volumes by storage pool


To see an overview of which volumes are on which storage pool, click Volumes by Pool, as
shown in Figure 8-79.

Figure 8-79 Volumes by Pool



The Volumes by Pool panel opens, as shown in Figure 8-80.

Figure 8-80 Volumes by Pool panel

The left pane is called the Pool Filter and the storage pools are displayed there. For more
information about storage pools, see Chapter 7, “Storage pools” on page 309.

In the upper right, you see information about the pool that you selected in the pool filter. The
following information is also shown:
򐂰 Pool icon: Because storage pools can have different characteristics, it is possible to
change the storage pool icon. For more information, see 7.3, “Working with storage
pools” on page 347.
򐂰 Pool Name: The name that is given during the creation of the storage pool. The name can
also be changed, as described in 7.3, “Working with storage pools” on page 347.
򐂰 Pool Details: Shows you the information about the storage pools, such as status, the
number of managed disks, and Easy Tier status.
򐂰 Volume allocation: Shows you the amount of capacity that is allocated to volumes from this
storage pool.



The lower right section (as shown in Figure 8-81) lists all volumes that have at least one copy
in the selected storage pool. The following information is provided:
򐂰 Name: Shows the name of the volume.
򐂰 Status: Shows the status of the volume.
򐂰 Capacity: Shows the capacity that is presented to hosts.
򐂰 UID: Shows the volume unique identifier.
򐂰 Host Mappings: Shows if host mappings exist.

Figure 8-81 Volumes by storage pool

It is also possible to create volumes from this panel. Click Create Volume to start the Volume
Creation panel. The steps are described in Chapter 5, “Volume configuration” on page 201.

Selecting a volume and opening the Actions menu or right-clicking the volume, shows the
same options as described in 8.4, “Advanced volume administration” on page 384.



8.8 Volumes by host
To see an overview of which volumes a host can access, click Volumes by Host
(as shown in Figure 8-79 on page 413) and the Volumes by Host window opens, as shown in
Figure 8-82.

Figure 8-82 Volumes by Host

In the left pane of the view is the Host Filter. Selecting a host shows its properties in the right
pane, such as the host name, number of ports, host type, and the I/O group to which it has
access.

The Host icons show an orange cable for a Fibre Channel host, a black cable for a SAS host,
and a blue cable for an iSCSI host.



The volumes that are mapped to this host are listed, as shown in Figure 8-83.

Figure 8-83 Volumes by Host

It is also possible to create a volume from this panel. Clicking Create Volumes opens the
same wizard as described in Chapter 5, “Volume configuration” on page 201.

Selecting a volume, and opening the Actions menu or right-clicking the volume, shows the
same options as described in 8.4, “Advanced volume administration” on page 384.



Chapter 9. Easy Tier


In today’s storage industry, flash drives are emerging as an attractive alternative to hard disk
drives (HDDs). Because of their low response times, high throughput, and
IOPS-energy-efficient characteristics, flash drives have the potential to allow your storage
infrastructure to achieve significant savings in operational costs. However, the current
acquisition cost per GB for flash is higher than for Enterprise SAS (serial attached SCSI) and
NL-SAS (Nearline SAS).

Enterprise SAS drives replaced the older SCSI drives and have become common in the
storage market. They are offered in a wide variety of capacities, spindle speeds, and form
factors. Nearline SAS is the low-cost, large-capacity storage drive class, commonly offered at
a 7,200 rpm spindle speed.

It is critical to choose the right mix of drives and the right data placement to achieve optimal
performance at low cost. Maximum value can be derived by placing “hot” data with high I/O
density and low response time requirements on Flash, while Enterprise class disks are
targeted for “warm” and Nearline for “cold” data accessed more sequentially and at lower
rates.

This chapter describes the Easy Tier disk performance optimization function of the IBM
Storwize V5000. It also describes how to activate the Easy Tier process for both evaluation
purposes and for automatic extent migration.

This chapter includes the following topics:


򐂰 Generations of IBM Easy Tier
򐂰 New features in Easy Tier 3
򐂰 Easy Tier overview
򐂰 Easy Tier process
򐂰 Easy Tier configuration using the GUI
򐂰 Easy Tier configuration using the command-line interface
򐂰 IBM Storage Tier Advisor Tool



9.1 Generations of IBM Easy Tier
IBM Storwize Family Software has benefited from software development work for the IBM
System Storage DS8000 product, in which there have been six versions of Easy Tier. Of
those versions, versions 1 and 3 have been implemented within IBM Storwize Family
Software.

The first generation of Easy Tier introduced automated storage performance management,
efficiently boosting enterprise-class performance with flash drives and automating storage
tiering from enterprise class drives to flash drives. It also introduced dynamic volume
relocation and dynamic extent pool merge.

The second generation of Easy Tier introduced the combination of Nearline drives with the
objective of maintaining optimal performance at low cost. The second generation also
introduced auto rebalancing of extents and was implemented within DS8000 products only.

The third generation of Easy Tier introduces further enhancements that provide automated
storage performance and storage economics management across all three drive tiers (Flash,
Enterprise, and Nearline storage tiers). It allows you to consolidate and efficiently manage
more workloads on a single IBM Storwize V5000. It also introduces support for Storage Pool
Balancing in homogeneous pools.

9.2 New features in Easy Tier 3


The enhancements in Easy Tier 3 include the following:
򐂰 Support for three tiers of disk, or a mixture of any two tiers
򐂰 Storage Pool Balancing
򐂰 Enhancements to the IBM Storage Tier Advisor Tool, including additional graphing from
the STAT utility

Figure 9-1 shows the supported Easy Tier pools available in the third generation of Easy Tier.

Figure 9-1 Supported Easy Tier Pools



9.3 Easy Tier overview
Easy Tier is an optional licensed function of IBM Storwize V5000 that brings enterprise
storage enhancements to the midrange segment. It enables automated subvolume data
placement throughout different storage tiers to intelligently align the system with current
workload requirements and to optimize the usage of storage. This function includes the ability
to automatically and non-disruptively relocate data (at the extent level) from one tier to
another tier in either direction to achieve the best available storage performance for your
workload in your environment.

Easy Tier reduces the I/O latency for hot spots, but it does not replace storage cache. Easy
Tier and storage cache solve a similar access latency workload problem, but the two methods
weight “locality of reference,” recency, and frequency differently in their algorithms. Because
Easy Tier monitors I/O performance from the extent end (after cache), it can pick up the
performance issues that cache cannot solve and complement the overall storage system
performance.

Without Easy Tier, I/O in a storage environment is generally monitored at the volume level,
and the entire volume is always placed in one appropriate storage tier. Monitoring the I/O
statistics of single extents, moving them manually to an appropriate storage tier, and reacting
to workload changes is too complex to do by hand.

Easy Tier is a performance optimization function that overcomes this issue because it
automatically migrates (or moves) extents that belong to a volume between different storage
tiers, as shown in Figure 9-2. Because this migration works at the extent level, it is often
referred to as sub-LUN migration.

Figure 9-2 Easy Tier



You can enable Easy Tier for storage on a volume basis. It monitors the I/O activity and
latency of the extents on all Easy Tier enabled volumes over a 24-hour period. Based on the
performance log, it creates an extent migration plan and dynamically moves high activity or
hot extents to a higher disk tier within the same storage pool. It also moves extents whose
activity dropped off (or cooled) from higher disk tier MDisks back to a lower tier MDisk.

To enable this migration between MDisks with different tier levels, the target storage pool
must contain MDisks with different tier characteristics. These pools are named multi-tiered
storage pools. IBM Storwize V5000 Easy Tier is optimized to boost the performance of
storage pools that contain Flash, Enterprise, and Nearline drives.

To identify the potential benefits of Easy Tier in your environment before actually installing
higher MDisk tiers (such as Flash), it is possible to enable the Easy Tier monitoring on
volumes in single-tiered storage pools. Although the Easy Tier extent migration is not possible
within a single-tiered pool, the Easy Tier statistical measurement function is possible.
Enabling Easy Tier on a single-tiered storage pool starts the monitoring process and logs the
activity of the volume extents.

The STAT tool is a no-cost tool that helps you to analyze this data. If you do not have an IBM
Storwize V5000, use Disk Magic to get a better idea about the required number of different
drive types that are appropriate for your workload.

Easy Tier is available for IBM Storwize V5000 internal volumes and volumes on external
virtualized storage subsystems.

9.3.1 Tiered storage pools


With IBM Storwize V5000, we must differentiate between the following types of storage pools:
򐂰 Single-tiered storage pools
򐂰 Multi-tiered storage pools

Figure 9-3 shows single-tiered storage pools that include one type of disk tier attribute. Each
disk should have the same size and performance characteristics. Multi-tiered storage pools
are populated with two or more different disk tier attributes, such as high-performance flash
drives, enterprise SAS drives, and Nearline drives.

A volume migration occurs when the complete volume is migrated from one storage pool to
another storage pool. An Easy Tier data migration moves only extents inside the storage pool
to different performance attributes.



Figure 9-3 Tiered storage pools

By default, Easy Tier is enabled on any pool that contains two or more classes of disk drive.
The Easy Tier function manages the extent migration as follows:
򐂰 Promote
– Moves the candidate hot extent to a higher performance tier.
򐂰 Warm Demote
– Prevents performance overload of a tier by demoting a warm extent to a lower tier.
– Triggered when bandwidth or IOPS exceeds a predefined threshold.
򐂰 Cold Demote
– Moves the coldest extent to a lower tier.
򐂰 Expanded Cold Demote
– Demotes appropriate sequential workloads to the lowest tier to better use Nearline
bandwidth.
򐂰 Swap
– This operation exchanges a cold extent in a higher tier with a hot extent in a lower tier
or vice versa.

Note: Extent migrations occur only between adjacent tiers.



Figure 9-4 shows the Easy Tier extent migration.

Figure 9-4 Easy Tier process



9.4 Easy Tier process
Easy Tier is based on an algorithm with thresholds to evaluate whether an extent is cold,
warm, or hot, and it consists of four main processes. These processes ensure that the extent
allocation in multi-tiered storage pools is optimized for the best performance, based on your
workload in the last 24 hours. These are the processes:
򐂰 I/O Monitoring
򐂰 Data Placement Advisor
򐂰 Data Migration Planner
򐂰 Data Migrator

Figure 9-5 shows the flow between these processes.

Figure 9-5 Easy Tier process flow

The four main processes and the flow between them are described in the following sections.

9.4.1 I/O Monitoring


The I/O Monitoring (IOM) process operates continuously and monitors host volumes for I/O
activity. It collects performance statistics for each extent at five-minute intervals and derives
averages for a rolling 24-hour period of I/O activity.

Easy Tier makes allowances for large block I/Os and thus considers only I/Os up to 64 KB as
migration candidates.

This is an efficient process and adds negligible processing impact to the IBM Storwize V5000
node canisters.

9.4.2 Data Placement Advisor


The Data Placement Advisor (DPA) uses workload statistics to make a cost benefit decision
about which extents should be candidates for migration to a higher performance tier.

This process also identifies extents that must be migrated back to a lower tier.

9.4.3 Data Migration Planner


Using the previously identified extents, the Data Migration Planner (DMP) process builds the
extent migration plan for the storage pool.

9.4.4 Data Migrator


The Data Migrator (DM) process involves scheduling and the actual movement, or migration,
of the volume’s extents up to, or down from, the high disk tier. The extent migration rate is
capped at a maximum of 15 MBps.



This rate equates to approximately 1.3 TB a day (15 MBps × 86,400 seconds) being migrated between disk tiers (Figure 9-6).

Figure 9-6 Easy Tier Data Migrator

9.4.5 Easy Tier operating modes


IBM Storwize V5000 offers the following operating modes for Easy Tier:
򐂰 Easy Tier: Off
Easy Tier can be turned off. No statistics are recorded and no extents are moved.
򐂰 Easy Tier: On
When the Easy Tier function is turned on, Easy Tier measures the I/O activity for all
extents. When you have a multi-tiered pool, the extents are migrated dynamically by the
Easy Tier processes to achieve the best performance. The movement is not apparent to
the host server and applications.
A statistic summary file is created and can be off-loaded and analyzed with the IBM
Storage Tier Advisory Tool as described in 9.7, “IBM Storage Tier Advisor Tool” on
page 445. Easy Tier can be turned on for any single tier or multi-tiered pool.
򐂰 Easy Tier: Measured
When Easy Tier is in measured mode, Easy Tier measures the I/O activity for all extents
but does not move any extents in the storage pool. A statistic summary file is created and
can be off-loaded from the IBM Storwize V5000. This file can be analyzed with the IBM
Storage Tier Advisor Tool. The analysis shows the benefit to your workload of adding or
removing a drive class in your pool before any actual hardware is acquired. No license is
required to set Easy Tier to measured mode.



򐂰 Easy Tier: Auto
This is the default operating mode. When Easy Tier is set to auto on a single-tiered
storage pool, Easy Tier is set to off for all volumes inside the storage pool and no extents
are moved. When Easy Tier is set to auto on a multi-tiered storage pool, the Easy Tier
status becomes active, Easy Tier is set to on for all volumes inside the storage pool, and
the extents are migrated dynamically by the Easy Tier process. This scenario is similar to
Easy Tier set to on; however, the extents are not migrated if the Easy Tier function is not
licensed.
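
Each of these modes maps to a value of the -easytier parameter of the chmdiskgrp
command. The following lines are a minimal sketch, assuming a hypothetical pool named
Pool0; verify the accepted values against your code level:

IBM_Storwize:ITSO_V5000:superuser>chmdiskgrp -easytier off Pool0
IBM_Storwize:ITSO_V5000:superuser>chmdiskgrp -easytier on Pool0
IBM_Storwize:ITSO_V5000:superuser>chmdiskgrp -easytier measure Pool0
IBM_Storwize:ITSO_V5000:superuser>chmdiskgrp -easytier auto Pool0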

9.4.6 Easy Tier status


Depending on the Easy Tier mode attributes, a storage pool can have one of the following
Easy Tier statuses:
򐂰 Active: This status indicates Easy Tier is actively managing the extents of the storage
pool.
򐂰 Balanced: This status applies to homogeneous storage pools and indicates that Easy Tier
is actively managing the extents to provide enhanced performance by re-balancing the
extents among the MDisks within the Tier. This re-balancing characteristic is called
Storage Pool Balancing, which is described in 9.4.7, “Storage Pool Balancing”.
򐂰 Measured: This status means Easy Tier is constantly measuring the I/O activity for all
extents to generate an I/O statistics report.
򐂰 Inactive: When the Easy Tier status is inactive, no extents are being monitored and no
statistics are being recorded.

9.4.7 Storage Pool Balancing


Storage Pool Balancing is a new function within Storwize Family release 7.4 or higher which,
while associated with Easy Tier, operates independently and does not require a license.
Storage Pool Balancing works in conjunction with Easy Tier when multiple tiers exist in a
single pool.

It assesses the extents in a storage tier and balances them automatically across all MDisks
within that tier. Storage Pool Balancing moves the extents to achieve a balanced workload
distribution and avoid hotspots. Storage Pool Balancing is based on MDisk IOPS usage,
which means it is performance-based, not capacity-based, and it works on a 6-hour
performance window.

When a new MDisk is added to an existing storage pool, Storage Pool Balancing
automatically rebalances the extents across all MDisks in the tier, as sketched below.
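
From the CLI, this trigger looks like the following one-line sketch: adding an external MDisk
to a pool with the addmdisk command (or creating an internal array in the pool with mkarray)
starts the rebalancing automatically. The MDisk and pool names here are hypothetical:

IBM_Storwize:ITSO_V5000:superuser>addmdisk -mdisk mdisk5 EasyTier_Pool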

Figure 9-7 represents an example of Storage Pool Balancing.

Figure 9-7 Storage pool balancing



9.4.8 Easy Tier rules
The following operating rules apply when IBM System Storage Easy Tier is used on the
IBM Storwize V5000:
򐂰 Automatic data placement and extent I/O activity monitors are supported on each copy of
a mirrored volume. Easy Tier works with each copy independently of each other.

Volume mirroring: Volume mirroring can have different workload characteristics for
each copy of the data because reads are normally directed to the primary copy and
writes occur to both. Thus, the number of extents that Easy Tier migrates is probably
different for each copy.

򐂰 Easy Tier works with all striped volumes, including these:


– Generic volumes
– Thin-provisioned volumes
– Mirrored volumes
– Thin-mirrored volumes
– Global and Metro Mirror sources and targets
򐂰 Easy Tier automatic data placement is not supported for image mode or sequential
volumes. I/O monitoring for such volumes is supported, but you cannot migrate extents on
such volumes unless you convert image or sequential volume copies to striped volumes.
򐂰 IBM Storwize V5000 creates volumes or volume expansions by using extents from MDisks
in the Enterprise and Nearline tiers. Extents from MDisks in the Flash tier are used only if
no Enterprise or Nearline space is available.
򐂰 When a volume is migrated out of a storage pool managed with Easy Tier, Automatic Data
Placement Mode is no longer active on that volume. Automatic Data Placement is also
turned off while a volume is migrated, even if it is between pools that both have Easy Tier
Automatic Data Placement enabled. Automatic Data Placement for the volume is
re-enabled when the migration is complete.
򐂰 Flash drive performance is dependent on block size (small blocks perform much better
than larger blocks). Easy Tier measures I/O blocks smaller than 64 KB, but it migrates the
entire extent to the appropriate disk tier.
򐂰 As extents are migrated, the use of smaller extents makes Easy Tier more efficient.
򐂰 The first migration starts about one hour after Automatic Data Placement Mode is
enabled. It takes up to 24 hours to achieve optimal performance.
򐂰 In the current IBM Storwize V5000 Easy Tier implementation, it takes about two days
before hot spots are considered for movement between tiers, which prevents hot spots
from being moved out of the fast tier if the workload changes only over a weekend.
򐂰 If you run an unusual workload over a longer period, Automatic Data Placement can be
turned off and on online to avoid data movement.



Depending on which storage pool and which Easy Tier configuration is set, a volume copy
can have the Easy Tier states that are shown in Table 9-1.

Table 9-1 Easy Tier states


Storage pool        Single-tiered or       Volume copy         Easy Tier status
Easy Tier setting   multi-tiered pool      Easy Tier setting   on volume copy

Off                 Single-tiered          Off                 Inactive
Off                 Single-tiered          On                  Inactive
Off                 Multi-tiered           Off                 Inactive
Off                 Multi-tiered           On                  Inactive
Auto (a)            Single-tiered          Off                 Measured (b)
Auto (a)            Single-tiered          On                  Balanced (e)
Auto (a)            Multi-tiered           Off                 Measured (b)
Auto (a)            Multi-tiered           On                  Active (c)(d)
On                  Single-tiered          Off                 Measured (b)
On                  Single-tiered          On                  Balanced (e)
On                  Multi-tiered           Off                 Measured (b)
On                  Multi-tiered           On                  Active (c)
Measure             Single-tiered          On                  Measured (b)
Measure             Single-tiered          Off                 Measured (b)
Measure             Multi-tiered           On                  Measured (b)
Measure             Multi-tiered           Off                 Measured (b)

a. The default Easy Tier setting for a storage pool is Auto, and the default Easy Tier setting for a
volume copy is On. This scenario means that Easy Tier functions are disabled for storage pools
with a single tier and only Storage Pool Balancing is active.
b. When the volume copy status is measured, the Easy Tier function collects usage statistics for
the volume, but automatic data placement is not active.
c. If the volume copy is in image or sequential mode or is being migrated, the volume copy Easy
Tier status is measured instead of active.
d. When the volume copy status is active, the Easy Tier function operates in automatic data
placement mode for that volume.
e. When the volume Easy Tier status is Balanced, Easy Tier is actively managing the extents by
rebalancing them among the MDisks within the tier.



9.5 Easy Tier configuration using the GUI
This section describes how to activate Easy Tier using the IBM Storwize V5000 GUI.

9.5.1 Creating multi-tiered pools: Enabling Easy Tier


In this section, we describe how to create multi-tiered storage pools using the GUI. When a
storage pool changes from single-tiered to multi-tiered, Easy Tier is enabled by default for the
pool and on all volume copies inside this pool.

In this example, we create a pool containing Flash, Enterprise, and Nearline MDisks.

To create a multi-tiered pool, complete the following steps:


1. Click Pools  Internal Storage. Figure 9-8 shows all internal drives classes installed in
the IBM Storwize V5000.

Figure 9-8 Creating Easy Tier Pool



2. Select the class of drive you want to include in the multi-tiered pool and click Configure
Storage. In this scenario, we provision one RAID 10 MDisk using the Flash drive class as
shown in Figure 9-9. Click Next to continue.

Figure 9-9 Creating a new MDisk



3. Select Create one or more new pools, and include the pool name or prefix. In this
example scenario, we create a new storage pool named EasyTier_Pool as shown in
Figure 9-10.

Figure 9-10 Creating a new pool

4. Click Finish.
Using the IBM Storwize V5000 GUI, you can check the properties of the EasyTier_Pool.
Because the storage pool contains only one Flash MDisk, the Easy Tier status remains
Inactive, as shown in Figure 9-11.

Figure 9-11 Storage pool properties



5. From the main page, click Pools  Internal Storage and repeat step 1 on page 430 and
step 2 on page 431 to create a further MDisk from another drive class using the available
storage.
6. As shown Figure 9-12, select Expand an existing pool and select the pool that you want
to change to a multi-tiered pool. In our example, EasyTier_Pool is selected.

Figure 9-12 Configure Internal Storage window

7. Click Finish and Close when the task completes. Add further MDisks and drive classes by
repeating the steps above.
8. In Figure 9-13, you see that the EasyTier_Pool storage pool contains MDisks from all three
tiers available, Flash, Enterprise, and Nearline.

Figure 9-13 Easy Tier active



As soon as a second MDisk of a different tier is added, the Easy Tier status changes from
Inactive to Active. In this pool, I/O Monitoring has started and the other Easy Tier processes
start to work. Figure 9-14 shows that the storage pool changed to a multi-tiered storage pool
containing the Flash, Enterprise, and Nearline tiers. The storage pool icon changed from a
single-tier to a multi-tier characteristic.

Figure 9-14 Multi-tiered storage pool

Figure 9-15 shows two volumes on the multi-tiered storage pool.

Figure 9-15 Volumes by Pool



If you open the properties of a volume by clicking Actions  Properties, you can also see
that Easy Tier is enabled on the volume by default, as shown in Figure 9-16. Volumes inherit
the Easy Tier state of their parent pool.

Figure 9-16 Easy Tier enabled volume



If a volume has more than one copy, Easy Tier can be enabled and disabled on each copy
separately. This action depends on the storage pool where the volume copy is defined. You
can see a volume with two copies that are stored in two different storage pools, as shown in
Figure 9-17.

Figure 9-17 Easy Tier by Copy

If you want to enable Easy Tier on the second copy, change the storage pool of the second
copy to a multi-tiered storage pool by repeating the steps in this section.

If external storage is used, you must select the tier manually, and then add the external
MDisks to a storage pool, as described in Chapter 11, “External storage virtualization” on
page 579. This action also changes the storage pools to multi-tiered storage pools and
enables Easy Tier on the pool and the volumes.
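
The same multi-tiered pool can also be built from the CLI. The following lines are a minimal
sketch with hypothetical drive IDs; the drive tier is detected automatically, so adding arrays
of different drive classes turns the pool multi-tiered:

IBM_Storwize:ITSO_V5000:superuser>mkmdiskgrp -name EasyTier_Pool -ext 1024
IBM_Storwize:ITSO_V5000:superuser>mkarray -level raid10 -drive 8:9:10:11 EasyTier_Pool
IBM_Storwize:ITSO_V5000:superuser>mkarray -level raid5 -drive 0:1:2:3:4 EasyTier_Pool
IBM_Storwize:ITSO_V5000:superuser>mkarray -level raid6 -drive 12:13:14:15:16 EasyTier_Pool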

9.5.2 Downloading Easy Tier I/O measurements


Easy Tier is now enabled and Automatic Data Placement Mode is active. Extents are
automatically migrated to or from disk tiers, and the statistic summary collection is now
active. The statistics log file can be downloaded to analyze how many extents were migrated,
and to monitor whether it makes sense to add more Flash to the multi-tiered storage pool.

Heat data files are produced approximately once a day (that is, roughly every 24 hours) when
Easy Tier is active on one or more storage pools.



To download the statistics file, complete the following steps:
1. Click Settings  Support, as shown in Figure 9-18.

Figure 9-18 Settings menu

2. Click Show full log listing, as shown in Figure 9-19.

Figure 9-19 Download files menu

This action lists all the log files available to download. The Easy Tier log files are always
named dpa_heat.canister_name_date.time.data.
If you run Easy Tier for a longer period, it generates a heat file at least every 24 hours. The
time and date the file was created is included in the file name.



To download the statistics file, select the file for the most representative period and click
Actions  Download as shown in Figure 9-20. Usually this is the latest file.

Figure 9-20 Download dpa_heat file

Log file creation: Depending on your workload and configuration, it can take up to 24
hours until a new Easy Tier log file is created.

You can also use the search field on the right to filter your search, as shown in
Figure 9-21.

Figure 9-21 Filter your search

Depending on your browser settings, the file is downloaded to your default location, or you
are prompted to save it to your computer. This file can be analyzed as described in 9.7,
“IBM Storage Tier Advisor Tool” on page 445.
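
If you prefer the command line, you can also list the heat files with the lsdumps command
and copy them off the configuration node with a secure copy client. The following sketch
assumes a Windows workstation with the PuTTY pscp utility and uses a hypothetical cluster
IP address:

IBM_Storwize:ITSO_V5000:superuser>lsdumps
id filename
...
34 dpa_heat.31G00KV-1.101209.131801.data
...

C:\>pscp superuser@9.174.150.20:/dumps/dpa_heat.31G00KV-1.101209.131801.data c:\StorwizeV5000_Logs\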

9.6 Easy Tier configuration using the command-line interface


The process used to enable IBM Storwize V5000 Easy Tier using the GUI is described in 9.5,
“Easy Tier configuration using the GUI” on page 430. Easy Tier can also be configured using
the command-line interface (CLI). For the advanced user, this method offers more options for
Easy Tier configuration.



Before you use the CLI, you must configure CLI access, as described in Appendix A,
“Command-line interface setup and SAN Boot” on page 667.

Readability: In most examples that are shown in this section, many lines were deleted in
the command output or responses so we can concentrate on the information that is related
to Easy Tier.

9.6.1 Enabling Easy Tier measured mode


It is possible to enable Easy Tier in measured mode, on either a single-tiered or multi-tiered
storage pool. Connect to your IBM Storwize V5000 using the CLI and run the svcinfo
lsmdiskgrp command, as shown in Example 9-1. This command shows an overview of all
configured storage pools and their Easy Tier status. In our example, two storage pools are
listed: Nearline_Pool with Easy Tier set to auto, and EasyTier_Pool with Easy Tier set to on.

Example 9-1 Show all configured storage pools


IBM_Storwize:ITSO_V5000:superuser>svcinfo lsmdiskgrp
id name status mdisk_count easy_tier easy_tier_status type
0 Nearline_Pool online 1 auto balanced parent
1 EasyTier_Pool online 3 on active parent
IBM_Storwize:ITSO_V5000:superuser>

To get a more detailed view of the single-tiered storage pool, run the svcinfo lsmdiskgrp
storage pool name command, as shown in Example 9-2.

Example 9-2 Storage Pools details: Easy Tier auto


IBM_Storwize:ITSO_V5000:superuser>svcinfo lsmdiskgrp Nearline_Pool
id 0
name Nearline_Pool
status online
mdisk_count 1
...
easy_tier auto
easy_tier_status balanced
tier nearline
tier_mdisk_count 1
type parent
...
IBM_Storwize:ITSO_V5000:superuser>

To enable Easy Tier on a single-tiered storage pool in measure mode, run the chmdiskgrp
-easytier measure storage pool name command, as shown in Example 9-3.

Example 9-3 Enable Easy Tier in measure mode on a single-tiered storage pool
IBM_Storwize:ITSO_V5000:superuser>chmdiskgrp -easytier measure Nearline_Pool
IBM_Storwize:ITSO_V5000:superuser>



Check the status of the storage pool again by running the lsmdiskgrp storage pool name
command again, as shown in Example 9-4.

Example 9-4 Storage pool details: Easy Tier Measured


IBM_Storwize:ITSO_V5000:superuser>lsmdiskgrp Nearline_Pool
id 0
name Nearline_Pool
status online
mdisk_count 1
vdisk_count 2
capacity 1.81TB
extent_size 1024
free_capacity 1.80TB
virtual_capacity 11.00GB
used_capacity 11.00GB
real_capacity 11.00GB
overallocation 0
warning 90
easy_tier measure
easy_tier_status measured
tier ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier nearline
tier_mdisk_count 1
tier_capacity 1.81TB
tier_free_capacity 1.80TB
parent_mdisk_grp_id 2
parent_mdisk_grp_name Nearline_Pool
child_mdisk_grp_count 0
child_mdisk_grp_capacity 0.00MB
type parent
IBM_Storwize:ITSO_V5000:superuser>
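
When the measurement exercise is complete, the pool can be returned to the default
behavior by setting Easy Tier back to auto, as in this one-line sketch:

IBM_Storwize:ITSO_V5000:superuser>chmdiskgrp -easytier auto Nearline_Pool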

To get the list of all the volumes defined, run the lsvdisk command, as shown in
Example 9-5. For this example, we are only interested in the RedbookVolume volume.

Example 9-5 All volumes list


IBM_Storwize:ITSO_V5000:superuser>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id
...
26 RedbookVolume 0 io_grp0 online 0
...

To get a more detailed view of a volume, run the lsvdisk volume_name command, as shown in
Example 9-6. This output shows a volume with two copies. Copy 0 is in the multi-tiered
storage pool, where Automatic Data Placement is active, as indicated by its easy_tier_status
line. Copy 1 is in the single-tiered storage pool, which is in measured mode, as indicated by
its easy_tier_status line.



Example 9-6 Volume details
IBM_Storwize:ITSO_V5000:superuser>lsvdisk RedbookVolume
id 26
name RedbookVolume
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 10.00GB
...
...

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name EasyTier_Pool
...
easy_tier on
easy_tier_status active
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 10.00GB
tier nearline
tier_capacity 0.00MB

...

copy_id 1
status online
sync no
mdisk_grp_id 0
mdisk_grp_name Nearline_Pool
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
grainsize
se_copy no
easy_tier off
easy_tier_status measured
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 0.00MB
tier nearline
tier_capacity 10.00GB
parent_mdisk_grp_id 2
parent_mdisk_grp_name Nearline_Pool
IBM_Storwize:ITSO_V5000:superuser>



Easy Tier measured mode does not carry out any data placement; it only collects statistics
for measurement. For more information about downloading and analyzing the I/O statistics,
see 9.5.2, “Downloading Easy Tier I/O measurements” on page 436.

These changes are also reflected in the GUI, as shown in Figure 9-22. Click Pools 
Volumes by Pool, select Nearline_Pool, and find the RedbookVolume volume; then click
Actions  Properties to view the Easy Tier details for each of the volume copies.

Figure 9-22 Volume properties for Easy Tier

9.6.2 Enabling or disabling Easy Tier on single volumes


If you enable Easy Tier on a storage pool, all volume copies inside the Easy Tier pools also
have Easy Tier enabled by default. This setting applies to multi-tiered and single-tiered
storage pools. It is also possible to turn Easy Tier on and off for single volume copies.

Before disabling Easy Tier on a single volume, run the svcinfo lsmdiskgrp command to show
all configured storage pools, as shown in Example 9-7. In our example, EasyTier_Pool is the
storage pool used as reference.

Example 9-7 Listing the storage pool


IBM_Storwize:ITSO_V5000:superuser>svcinfo lsmdiskgrp
id name status easy_tier easy_tier_status
.
1 EasyTier_Pool online on active
.
.
IBM_Storwize:ITSO_V5000:superuser>



Run the svcinfo lsvdisk command to show all configured volumes within your IBM Storwize
V5000, as shown in Example 9-8. We are interested in only a single volume.

Example 9-8 Show all configured volumes


IBM_Storwize:ITSO_V5000:superuser>svcinfo lsvdisk
id name IO_group_id status mdisk_grp_id mdisk_grp_name capacity
0 Volume001 0 online 1 EasyTier_Pool 5.00GB
IBM_Storwize:ITSO_V5000:superuser>

To disable Easy Tier on single volumes, run the svctask chvdisk -easytier off volume
name command, as shown in Example 9-9.

Example 9-9 Disable Easy Tier on a single volume


IBM_Storwize:ITSO_V5000:superuser>svctask chvdisk -easytier off Volume001
IBM_Storwize:ITSO_V5000:superuser>

This command disables Easy Tier on all copies of the volume. Example 9-10 shows Easy Tier
turned off for copy 0 even though Easy Tier is still enabled on the storage pool. Note that on
copy 0, the status changed to measured because the pool is still actively measuring the I/O
on the volume.

Example 9-10 Easy Tier disabled


IBM_Storwize:ITSO_V5000:superuser>svcinfo lsvdisk Volume001
id 0
name Volume001
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name EasyTier_Pool
capacity 5.00GB
type striped
throttling 0
preferred_node_id 2
parent_mdisk_grp_id 1
parent_mdisk_grp_name EasyTier_Pool

copy_id 0
status online
mdisk_grp_id 1
mdisk_grp_name EasyTier_Pool
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
easy_tier off
easy_tier_status measured
tier ssd
tier_capacity 1.00GB
tier enterprise
tier_capacity 4.00GB
tier nearline
tier_capacity 0.00MB



compressed_copy no
uncompressed_used_capacity 5.00GB
parent_mdisk_grp_id 1
parent_mdisk_grp_name EasyTier_Pool
IBM_Storwize:ITSO_V5000:superuser>

To enable Easy Tier on a volume, run the svctask chvdisk -easytier on volume name
command (as shown in Example 9-11), and Easy Tier changes back to on (as shown in
Example 9-12). Notice that copy 0 status also changed back to active.

Example 9-11 Easy Tier enabled


IBM_Storwize:ITSO_V5000:superuser>svctask chvdisk -easytier on Volume001
IBM_Storwize:ITSO_V5000:superuser>

Example 9-12 Easy Tier on single volume enabled


IBM_Storwize:ITSO_V5000:superuser>svcinfo lsvdisk Volume001
id 0
name Volume001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name EasyTier_Pool
capacity 5.00GB
parent_mdisk_grp_id 1
parent_mdisk_grp_name EasyTier_Pool

copy_id 0
status online
mdisk_grp_id 1
mdisk_grp_name EasyTier_Pool
type striped
mdisk_id
mdisk_name
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
easy_tier on
easy_tier_status active
tier ssd
tier_capacity 1.00GB
tier enterprise
tier_capacity 4.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 5.00GB
parent_mdisk_grp_id 1
parent_mdisk_grp_name EasyTier_Pool
IBM_Storwize:ITSO_V5000:superuser>



9.7 IBM Storage Tier Advisor Tool
The Storage Tier Advisor Tool (STAT) is a Windows console tool. If you run Easy Tier in
measured mode, the tool analyzes the extents and captures I/O profiles to estimate how
much benefit you would derive from implementing Easy Tier Automatic Data Placement with
additional MDisk tiers. If Automatic Data Placement Mode is already active, the analysis
also includes an overview of migrated hot data and advice about whether you can derive any
benefit from adding more Flash or Enterprise drives, for example. The output provides a
graphical representation of the performance data that is collected by Easy Tier over a
24-hour operational cycle.

The tool comes packaged as an ISO file, which needs to be extracted to a temporary folder.
The STAT tool can be downloaded from the following link:
https://ibm.biz/BdEfrX

9.7.1 Processing heat log files


The Storage Tier Advisor Tool takes input from the dpa_heat log file and produces an HTML
file that contains the report. Download the dpa_heat log file, as described in 9.5.2,
“Downloading Easy Tier I/O measurements” on page 436, and save it to the disk of a
Windows system.

For more information about the tool and to download it, see this website:
https://ibm.biz/BdEfrX

Click Start  Run, enter cmd, and then click OK to open a command prompt.

Typically, the tool is installed in the C:\Program Files\IBM\STAT directory. Enter the command
to generate the report, as shown in Example 9-13. The general syntax is:

C:\Program Files\IBM\STAT>STAT.exe -o c:\directory_where_you_want_the_output_to_go
c:\location_of_dpa_heat_data_file

If you do not specify -o c:\directory_where_you_want_the_output_to_go, the output goes to
the directory where the STAT.exe file is located.

Example 9-13 Generate HTML file


C:\EasyTier>STAT.exe -o C:\EasyTier C:\StorwizeV5000_Logs\dpa_heat.31G00KV-1.101209.131801.data

CMUA00019I The STAT.exe command has completed.

C:\EasyTier>

The Storage Tier Advisor Tool creates a set of HTML files. Browse to the directory where you
directed the output file, and find the file named index.html. Open the file using your browser
to view the report.

Typically, the STAT tool generates the report as described in 9.7.2, “Storage Tier Advisor Tool
reports” on page 446.



9.7.2 Storage Tier Advisor Tool reports
When you open the index.html file of an IBM Storwize V5000 system, a window opens that
gives you a complete system summary report as shown in Figure 9-23.

Figure 9-23 STAT report: System Summary

As shown in Figure 9-23, the prediction that is generated by the Storage Tier Advisor Tool is
pool-based. The first page of the report shows the storage facility, the total number of storage
pools and volumes monitored, the total capacity monitored, the hot data capacity, the data
validity, and the system state.

The next table shows how data is managed in the particular storage pools, denoted by
different colors. The total capacity of the storage pool is divided into two parts: the data
managed by Easy Tier and the unallocated space. The green portion of the bar represents
the data that is managed by Easy Tier. The black portion of the bar represents the
unallocated space, or data that is not being managed by Easy Tier.



In our example, a total of three storage pools were monitored, with a total capacity of
2063.0 GiB. Click the storage pool P1, as shown in Figure 9-24.

Figure 9-24 Selecting storage pool “P1”

As shown in Figure 9-25, the Storage Tier Advisor Tool shows the MDisk distribution of each
tier that makes up the storage pool. The tool also shows the performance statistics and the
projected IOPS for the MDisks after the Easy Tier processes and rebalancing operations are
completed on the extents of the storage pool.

For each MDisk, the STAT tool shows the following values:
򐂰 The MDisk ID
򐂰 The storage pool ID
򐂰 The MDisk type
򐂰 The MDisk IOPS threshold
򐂰 The utilization of MDisk IOPS
򐂰 The projected utilization of MDisk IOPS

Figure 9-25 Storage pool performance statistics

The blue portion of the bar represents the percentage range that is below the tier’s average
utilization of MDisk IOPS.

The red portion of the bar represents the percentage range that is above the maximum
allowed threshold.



In our example, the STAT tool output shows 0% utilization of the Flash tier. This means that
no extents reside in this particular tier. In the Enterprise section, the STAT tool shows 8%
utilization of the Enterprise tier, and the IOPS did not exceed the threshold.

In the Nearline section, the utilization is 182%, which means that in a period of 24 hours, the
maximum IOPS exceeded the threshold by 82%. The blue portion shows that 90% out of
182% (approximately 49.5%) of the IOPS did not exceed the threshold for this particular tier.
The red portion shows that 92% out of 182% (approximately 50.5%) of the IOPS exceeded
the threshold, and those extents are potential candidates to move to a higher tier.

In the Workload Distribution Across Tiers section, the STAT tool shows the skew of the
workload of the selected storage pool. The X-axis (horizontal) denotes the percentage of
extents in use. The Y-axis (vertical) denotes the percentage of extents that are busy out of
the given percentage from the X-axis. In our graph, for instance, when we look at 10% of the
extents in the pool, only about 40% of them are determined to be very busy. Figure 9-26
shows an example of our graph.

Figure 9-26 STAT Tool project workload distribution

In the Volume Heat Distribution section, the STAT tool shows the heat distribution of all
volumes in the storage pool. The columns are as follows:
򐂰 VDisk ID
򐂰 VDisk Copy ID
򐂰 Configured size or VDisk capacity
򐂰 I/O percentage of extent pool
򐂰 The tier(s) that the volume is taking capacity from
򐂰 The capacity on each of the tier(s)
򐂰 The heat distribution of that volume

The heat distribution of a volume is displayed using a color bar, which represents the
following:

The blue portion of the bar represents the capacity of cold extents in the volume. Extents are
considered cold when they are not used heavily or their I/O rate is very low.

The orange portion of the bar represents the capacity of warm extents in the volume. Data is
considered warm when it is used relatively heavily, or its IOPS are relatively high compared
to the cold extents, but lower when compared to the hot extents.



The red portion of the bar represents the capacity of hot data in the volume. Data is
considered hot when it is used most heavily or the IOPS on that data have been highest.

Figure 9-27 shows a few examples of the volume heat distribution.

Figure 9-27 Volume heat distribution

The Systemwide Recommendation section can be viewed on another page, which shows the
advised configuration for the tiers as applicable to the configuration of the system. Typically, it
shows three recommended configurations: Flash (SSD), Enterprise, and Nearline.

Each recommendation displays the storage pools, the recommendation, and the expected
improvement. An example is shown in Figure 9-28.

Figure 9-28 STAT Tool systemwide recommendation




Chapter 10. Copy services


In this chapter, we describe the copy services functions provided by the IBM Storwize V5000
storage system, including FlashCopy and Remote Copy. Copy services functions are useful
for making data copies for backup, application test, recovery, and so on. The IBM Storwize
V5000 system makes it easy to apply these functions to your environment through its intuitive
GUI.

This chapter includes the following topics:


򐂰 FlashCopy
򐂰 Remote Copy
򐂰 Troubleshooting Remote Copy
򐂰 Managing Remote Copy using the GUI



10.1 FlashCopy
Using the FlashCopy function of the IBM Storwize V5000 storage system, you can create a
point-in-time copy of one or more volumes. In this section, we describe the structure of
FlashCopy and provide details about its configuration and use.

You can use FlashCopy to solve critical and challenging business needs that require the
duplication of data on your source volume. Volumes can remain online and active while you
create consistent copies of the data sets. Because the copy is performed at the block level, it
operates below the host operating system and cache and, therefore, is not apparent to the
host.

Flushing: Because FlashCopy operates at the block level below the host operating system
and cache, those levels do need to be flushed for consistent FlashCopy copies.

While the FlashCopy operation is performed, I/O to the source volume is frozen briefly to
initialize the FlashCopy bitmap and then is allowed to resume. Although several FlashCopy
options require the data to be copied from the source to the target in the background (which
can take time to complete), the resulting data on the target volume copy appears to complete
immediately. This task is accomplished by using a bitmap (or bit array) that tracks changes to
the data after the FlashCopy is started, and an indirection layer, which allows data to be read
from the source volume transparently.

10.1.1 Business requirements for FlashCopy


When you are deciding whether FlashCopy addresses your needs, you must adopt a
combined business and technical view of the problems you must solve. Determine your needs
from a business perspective, and then determine whether FlashCopy fulfills the technical
needs of those business requirements.

With an immediately available copy of the data, FlashCopy can be used in the following
business scenarios:
򐂰 Rapidly creating consistent backups of dynamically changing data
FlashCopy can be used to create backups through periodic running; the FlashCopy target
volumes can be used to complete a rapid restore of individual files or the entire volume
through Reverse FlashCopy (by using the -restore option).
The target volumes that are created by FlashCopy can also be used for backup to tape. By
attaching them to another server and performing the backups from there, the production
server can continue largely unaffected. After the copy to tape completes, the target volumes
can be discarded or kept as a rapid restore copy of the data.
򐂰 Rapidly creating consistent copies of production data to facilitate data movement or
migration between hosts
FlashCopy can be used to facilitate the movement or migration of data between hosts
while minimizing downtime for applications. FlashCopy allows application data to be
copied from source volumes to new target volumes while applications remain online. After
the volumes are fully copied and synchronized, the application can be stopped and then
immediately started on the new server that is accessing the new FlashCopy target
volumes. This mode of migration is faster than other migration methods that are available
through the IBM Storwize V5000 because the size and the speed of the migration is not as
limited.



򐂰 Rapidly creating copies of production data sets for application development and testing
Under normal circumstances, to perform application development and testing, data must
be restored from traditional backup media, such as tape. Depending on the amount of
data and the technology in use, this process can easily take a day or more. With
FlashCopy, a copy can be created and be online for use in just a few minutes. The time
varies based on the application and the data set size.
򐂰 Rapidly creating copies of production data sets for auditing purposes and data mining
Auditing or data mining normally requires the use of the production applications. This
situation can cause high loads for databases that track inventories or similar data. With
FlashCopy, you can create copies for your reporting and data mining activities. This
feature reduces the load on your production systems, which increases their performance.
򐂰 Rapidly creating copies of production data sets for quality assurance
Quality assurance is an interesting case for FlashCopy. Because traditional methods
involve so much time and labor, the refresh cycle is typically extended. Because
FlashCopy reduces the time that is required, it allows much more frequent refreshes of the
quality assurance database.

10.1.2 FlashCopy functional overview


FlashCopy occurs between a source volume and a target volume. The source and target
volumes must be the same size. Multiple FlashCopy mappings (source-to-target
relationships) can be defined, and point-in-time consistency can be maintained across
multiple point-in-time mappings by using consistency groups. For more information about
FlashCopy consistency groups, see “FlashCopy consistency groups” on page 458.

The minimum granularity that IBM Storwize V5000 storage system supports for FlashCopy is
an entire volume; it is not possible to use FlashCopy to copy only part of a volume.
Additionally, the source and target volumes must belong to the same IBM Storwize V5000
storage system, but they do not have to be in the same storage pool.

Before you start a FlashCopy (regardless of the type and options that are specified), the IBM
Storwize V5000 must put the cache into write-through mode, which flushes the I/O that is
bound for the source volume. If you are scripting FlashCopy operations from the CLI, you
must run the prestartfcmap or prestartfcconsistgrp command. However, this step is
managed for you and carried out automatically by the GUI. This is not the same as flushing
the host cache, which is not required. After FlashCopy is started, an effective copy of a source
volume to a target volume is created. The content of the source volume is immediately
presented on the target volume and the original content of the target volume is lost. This
FlashCopy operation is also referred to as a time-zero copy (T0).
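
If you are scripting this sequence from the CLI, the flow looks like the following minimal
sketch. The volume and mapping names are hypothetical; mkfcmap, prestartfcmap, and
startfcmap are the commands involved:

IBM_Storwize:ITSO_V5000:superuser>mkfcmap -source DB_Source -target DB_Target -name DB_Map -copyrate 50
IBM_Storwize:ITSO_V5000:superuser>prestartfcmap DB_Map
IBM_Storwize:ITSO_V5000:superuser>startfcmap DB_Map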

Immediately following the FlashCopy operation, the source and target volumes are available
for use. The FlashCopy operation creates a bitmap that is referenced and maintained to direct
I/O requests within the source and target relationship. This bitmap is updated to reflect the
active block locations as data is copied in the background from the source to target and
updates are made to the source.



Figure 10-1 shows the redirection of the host I/O toward the source volume and the target
volume.

Figure 10-1 Redirection of host I/O

When data is copied between volumes, it is copied in units of address space known as
grains. Grains are units of data that are grouped to optimize the use of the bitmap that tracks
changes to the data between the source and target volume. You have the option of using a
64 KB or 256 KB grain size (256 KB is the default). The FlashCopy bitmap contains 1 bit for
each grain and is used to track whether the source grain was copied to the target. The 64 KB
grain size uses bitmap space at a rate of four times the default 256 KB size.

The FlashCopy bitmap dictates the following read and write behaviors for the source and
target volumes:
򐂰 Read I/O request to source: Reads are performed from the source volume the same as for
non-FlashCopy volumes.
򐂰 Write I/O request to source: Writes to the source cause the grains of the source volume to
be copied to the target if they were not already and then the write is performed to the
source.
򐂰 Read I/O request to target: Reads are performed from the target if the grains were already
copied; otherwise, the read is performed from the source.
򐂰 Write I/O request to target: Writes to the target cause the grain to be copied from the
source to the target first, unless the entire grain is being written and then the write
completes to the target only.



FlashCopy mappings
A FlashCopy mapping defines the relationship between a source volume and a target volume.
FlashCopy mappings can be stand-alone mappings or a member of a consistency group, as
described in “FlashCopy consistency groups” on page 458.

Incremental FlashCopy mappings


In an incremental FlashCopy, the initial mapping copies all of the data from the source volume
to the target volume. Subsequent FlashCopy mappings copy only data that was modified
since the initial FlashCopy mapping. This action reduces the amount of time that it takes to
re-create an independent FlashCopy image. You can define a FlashCopy mapping as
incremental only when you create the FlashCopy mapping.
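
On the CLI, the incremental attribute is set at creation time with the -incremental flag of
mkfcmap. A one-line sketch with hypothetical names:

IBM_Storwize:ITSO_V5000:superuser>mkfcmap -source DB_Source -target DB_Target -name DB_IncMap -copyrate 50 -incremental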

Multiple target FlashCopy mappings


You can copy up to 256 target volumes from a single source volume. Each relationship
between a source and target volume is managed by a unique mapping such that a single
volume can be the source volume for up to 256 mappings.

Each of the mappings from a single source can be started and stopped independently. If
multiple mappings from the same source are active (in the copying or stopping states), a
dependency exists between these mappings.

If a single source volume has multiple target FlashCopy volumes, the write to the source
volume does not cause its data to be copied to all of the targets. Instead, it is copied to the
newest target volume only. The older targets refer to new targets first before they refer to the
source. A dependency relationship exists between a particular target and all newer targets
that share a source until all data is copied to this target and all older targets.

Cascaded FlashCopy mappings


The cascaded FlashCopy function allows a FlashCopy target volume to be the source volume
of another FlashCopy mapping. Up to 256 mappings can exist in a cascade. If cascaded
mappings and multiple target mappings are used, a tree of up to 256 mappings can be
created.

Cascaded mappings differ from multiple target FlashCopy mappings in depth. Cascaded
mappings have an association in the manner of A > B > C, while multiple target FlashCopy
has an association in the manner A > B1 and A > B2.

Background copy
The background copy rate is a property of a FlashCopy mapping that is defined as a value of
0 - 100. The background copy rate can be defined and dynamically changed for individual
FlashCopy mappings. A value of 0 disables background copy. This option is also called the
no-copy option, which provides pointer-based images for limited lifetime uses.
With FlashCopy background copy, the source volume data is copied to the corresponding
target volume in the FlashCopy mapping. If the background copy rate is set to 0 (which means
disable the FlashCopy background copy), only data that changed on the source volume is
copied to the target volume. The benefit of using a FlashCopy mapping with background copy
enabled is that the target volume becomes a real independent clone of the FlashCopy
mapping source volume after the copy is complete. When the background copy is disabled,
only the target volume is a valid copy of the source data while the FlashCopy mapping
remains in place. Copying only the changed data saves your storage capacity (assuming the
target is thin-provisioned and the -rsize option was set up correctly).



The relationship of the background copy rate value to the amount of data that is copied per
second is shown in Table 10-1.

Table 10-1 Background copy rate


Value      Data copied per second   Grains per second   Grains per second
                                    (256 KB grain)      (64 KB grain)

1 - 10     128 KB                   0.5                 2
11 - 20    256 KB                   1                   4
21 - 30    512 KB                   2                   8
31 - 40    1 MB                     4                   16
41 - 50    2 MB                     8                   32
51 - 60    4 MB                     16                  64
61 - 70    8 MB                     32                  128
71 - 80    16 MB                    64                  256
81 - 90    32 MB                    128                 512
91 - 100   64 MB                    256                 1024

Data copy rate: The data copy rate remains the same regardless of the FlashCopy grain
size. The difference is the number of grains that are copied per second. The grain size can
be 64 KB or 256 KB. The smaller size uses more bitmap space and thus limits the total
amount of FlashCopy space possible. However, it might be more efficient regarding the
amount of data that is moved, depending on your environment.

Cleaning rate
The cleaning rate provides a method for FlashCopy copies with dependent mappings
(multiple target or cascaded) to complete their background copies before their source goes
offline or is deleted after a stop is issued.

When you create or modify a FlashCopy mapping, you can specify a cleaning rate for the
FlashCopy mapping that is independent of the background copy rate. The cleaning rate is
also defined as a value of 0 - 100, which has the same relationship to data copied per second
as the background copy rate (see Table 10-1).

The cleaning rate controls the rate at which the cleaning process operates. The purpose of
the cleaning process is to copy (or flush) data from FlashCopy source volumes upon which
there are dependent mappings. For cascaded and multiple target FlashCopy, the source
might be a target for another FlashCopy or a source for a chain (cascade) of FlashCopy
mappings. The cleaning process must complete before the FlashCopy mapping can go to the
stopped state. This feature and the distinction between stopping and stopped states was
added to prevent data access interruption for dependent mappings when their source is
issued a stop.
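
Both rates can be set when the mapping is created and changed later with the chfcmap
command. A minimal sketch, reusing the hypothetical mapping name from earlier:

IBM_Storwize:ITSO_V5000:superuser>chfcmap -copyrate 80 DB_Map
IBM_Storwize:ITSO_V5000:superuser>chfcmap -cleanrate 30 DB_Map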



FlashCopy mapping states
A mapping is in one of the following states at any point:
򐂰 Idle or Copied
The source and target volumes act as independent volumes even if a mapping exists
between the two. Read and write caching is enabled for the source and target volumes.
If the mapping is incremental and the background copy is complete, the mapping records
only the differences between the source and target volumes. The source and target
volumes go offline if the connection to both nodes in the IBM Storwize V5000 storage
system that the mapping is assigned to is lost.
򐂰 Copying
The copy is in progress. Read and write caching is enabled on the source and the target
volumes.
򐂰 Prepared
The mapping is ready to start. The target volume is online, but is not accessible. The
target volume cannot perform read or write caching. Read and write caching is failed by
the SCSI front end as a hardware error. If the mapping is incremental and a previous
mapping completed, the mapping records only the differences between the source and
target volumes. The source and target volumes go offline if the connection to both nodes
in the IBM Storwize V5000 storage system that the mapping is assigned to is lost.
򐂰 Preparing
The target volume is online, but not accessible. The target volume cannot perform read or
write caching. Read and write caching is failed by the SCSI front end as a hardware error.
Any changed write data for the source volume is flushed from the cache. Any read or write
data for the target volume is discarded from the cache. If the mapping is incremental and a
previous mapping completed, the mapping records only the differences between the
source and target volumes. The source and target volumes go offline if the connection to
both nodes in the IBM Storwize V5000 storage system that the mapping is assigned to is
lost.
򐂰 Stopped
The mapping is stopped because you issued a stop command or an I/O error occurred.
The target volume is offline and its data is lost. To access the target volume, you must
restart or delete the mapping. The source volume is accessible and the read and write
cache is enabled. If the mapping is incremental, the mapping is recording write operations
to the source volume. The source and target volumes go offline if the connection to both
nodes in the IBM Storwize V5000 storage system that the mapping is assigned to is lost.
򐂰 Stopping
The mapping is copying data to another mapping. If the background copy process is
complete, the target volume is online while the stopping copy process completes. If the
background copy process did not complete, data is discarded from the target volume
cache. The target volume is offline while the stopping copy process runs. The source
volume is accessible for I/O operations.
򐂰 Suspended
The mapping did start, but it did not complete. Access to the metadata is lost, which
causes the source and target volume to go offline. When access to the metadata is
restored, the mapping returns to the copying or stopping state and the source and target
volumes return online. The background copy process resumes.
Any data that was not flushed and was written to the source or target volume before the
suspension is in cache until the mapping leaves the suspended state.



FlashCopy consistency groups
Consistency groups address the requirement to preserve point-in-time data consistency
across multiple volumes for applications that include related data that spans them. For these
volumes, consistency groups maintain the integrity of the FlashCopy by ensuring that
dependent writes are run in the application’s intended sequence. For more information about
dependent writes, see “Dependent writes” on page 458.

When consistency groups are used, the FlashCopy commands are issued to the FlashCopy
consistency group, which performs the operation on all FlashCopy mappings that are
contained within the consistency group.

Figure 10-2 shows a consistency group that consists of two FlashCopy mappings.

Figure 10-2 FlashCopy consistency group

FlashCopy mapping management: After an individual FlashCopy mapping was added to


a consistency group, it can be managed only as part of the group. Operations such as start
and stop are no longer allowed on the individual mapping.

Dependent writes
To show why it is crucial to use consistency groups when a data set spans multiple volumes,
consider the following typical sequence of writes for a database update transaction:
1. A write is run to update the database log, which indicates that a database update is about
to be performed.
2. A second write is run to complete the actual update to the database.
3. A third write is run to update the database log, which indicates that the database update
completed successfully.

The database ensures the correct ordering of these writes by waiting for each step to
complete before it starts the next step. However, if the database log (updates 1 and 3) and the
database (update 2) are on separate volumes, it is possible for the FlashCopy of the database
volume to occur before the FlashCopy of the database log. This situation can result in the
target volumes seeing writes (1) and (3) but not (2) because the FlashCopy of the database
volume occurred before the write was completed.



In this case, if the database was restarted using the backup that was made from the
FlashCopy target volumes, the database log indicates that the transaction completed
successfully when, in fact, it had not. This situation occurs because the FlashCopy of the
volume with the database file was started (bitmap was created) before the write completed to
the volume. Therefore, the transaction is lost and the integrity of the database is in question.

To overcome the issue of dependent writes across volumes and to create a consistent image
of the client data, it is necessary to perform a FlashCopy operation on multiple volumes as an
atomic operation using consistency groups.

A FlashCopy consistency group can contain up to 512 FlashCopy mappings. The more
mappings that you have, the more time it takes to prepare the consistency group. FlashCopy
commands can then be issued to the FlashCopy consistency group and simultaneously for all
of the FlashCopy mappings that are defined in the consistency group. For example, when the
FlashCopy for the consistency group is started, all FlashCopy mappings in the consistency
group are started at the same time, which results in a point-in-time copy that is consistent
across all FlashCopy mappings that are contained in the consistency group.

A consistency group aggregates FlashCopy mappings, not volumes. Thus, where a source
volume has multiple FlashCopy mappings, the mappings can be in the same or separate
consistency groups. If a particular volume is the source volume for multiple FlashCopy
mappings, you might want to create separate consistency groups to separate each mapping
of the same source volume. Regardless of whether the source volume with multiple target
volumes is in the same consistency group or in separate consistency groups, the resulting
FlashCopy produces multiple identical copies of the source data.

The consistency group can be specified when the mapping is created. You can also add the
FlashCopy mapping to a consistency group or change the consistency group of a FlashCopy
mapping later.

Important: Do not place stand-alone mappings into a consistency group because they
become controlled as part of that consistency group.

FlashCopy consistency group states


A FlashCopy consistency group is in one of the following states at any point:
򐂰 Idle or Copied
All FlashCopy Mappings in this consistency group are in the Idle or Copied state.
򐂰 Preparing
At least one FlashCopy mapping in this consistency group is in the Preparing state.
򐂰 Prepared
The consistency group is ready to start. While in this state, the target volumes of all
FlashCopy mappings in this consistency group are not accessible.
򐂰 Copying
At least one FlashCopy mapping in the consistency group is in the Copying state and no
FlashCopy mappings are in the Suspended state.
򐂰 Stopping
At least one FlashCopy mapping in the consistency group is in the Stopping state and no
FlashCopy mappings are in the Copying or Suspended state.



򐂰 Stopped
The consistency group is stopped because you issued a command or an I/O error
occurred.
򐂰 Suspended
At least one FlashCopy mapping in the consistency group is in the Suspended state.
򐂰 Empty
The consistency group does not have any FlashCopy mappings.

Reverse FlashCopy
Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without waiting for the original copy
operation to complete. It supports multiple targets and multiple rollback points.

A key advantage of Reverse FlashCopy is that it does not delete the original target, thus
allowing processes that use the target, such as a tape backup, to continue uninterrupted.

You can also create an optional copy of the source volume that is made before the reverse
copy operation is started. This copy restores the original source data, which can be useful for
diagnostic purposes.

Figure 10-3 shows an example of the reverse FlashCopy scenario.

Figure 10-3 Reverse FlashCopy scenario



To restore from a FlashCopy backup using the GUI, complete the following steps:
1. (Optional) Create a target volume (volume Z) and run FlashCopy on the production
volume (volume X) to copy data on to the new target for later problem analysis.
2. Create a FlashCopy map with the backup to be restored (volume Y) or (volume W) as the
source volume and volume X as the target volume.
3. Start the FlashCopy map (volume Y to volume X).

The -restore option: In the CLI, you must manually add the -restore option to the
svctask startfcmap command. For more information about using the CLI, see
Appendix A, “Command-line interface setup and SAN Boot” on page 667. The GUI
adds this option automatically if it detects that you are mapping from a target back to
the source.

Regardless of whether the initial FlashCopy map (volume X to volume Y) is incremental, the
Reverse FlashCopy operation copies only the modified data.
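
Expressed as a hedged CLI sketch (volume and mapping names are examples only), the restore
steps above look similar to the following, where volume Y holds the backup and volume X is
the production volume:

   svctask mkfcmap -source VOL_Y -target VOL_X -name fcmap_restore
   svctask startfcmap -prep -restore fcmap_restore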

Consistency groups are reversed by creating a set of new “reverse” FlashCopy maps and
adding them to a new “reverse” consistency group. Consistency groups cannot contain more
than one FlashCopy map with the same target volume.

10.1.3 Planning for FlashCopy


Several items must be considered before a FlashCopy is performed; they are described in
this section.

Guidelines for FlashCopy implementation


Consider the following guidelines for FlashCopy implementation:
򐂰 The source and target volumes must be on the same IBM Storwize V5000 storage
system.
򐂰 The source and target volumes do not need to be in the same storage pool.
򐂰 The FlashCopy source and target volumes can be thin-provisioned.
򐂰 The source and target volumes must be the same size. The size of the source and target
volumes cannot be altered (increased or decreased) while a FlashCopy mapping is
defined. (A CLI sketch for verifying volume sizes follows this list.)
򐂰 FlashCopy operations perform in direct proportion to the performance of the source and
target disks. If you have a fast source disk and slow target disk, the performance of the
source disk is reduced because it must wait for the write operation to occur at the target
before it can write to the source.
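
The same-size requirement can be verified from the CLI before a mapping is created. The
following is a minimal sketch, assuming example volume names VOL1 and VOL1_T and that both
volumes already exist. Check the capacity (in bytes) of both volumes; the values must match
exactly:

   svcinfo lsvdisk -bytes VOL1
   svcinfo lsvdisk -bytes VOL1_T

Then create the mapping:

   svctask mkfcmap -source VOL1 -target VOL1_T -copyrate 50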



Maximum configurations for FlashCopy
Table 10-2 shows some of the FlashCopy maximum configurations. For more information
about the latest values, see the IBM Storwize V5000 Knowledge Center at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html

For the planning and installation documentation, see this website:


https://www-947.ibm.com/support/entry/myportal/plan_install/system_storage/disk_systems/mid-range_disk_systems/ibm_storwize_v5000?productContext=-2033461677

Table 10-2 FlashCopy maximum configurations


FlashCopy property Maximum

FlashCopy targets per source 256

FlashCopy mappings per cluster 4,096

FlashCopy consistency groups per cluster 255

FlashCopy mappings per consistency group 512

FlashCopy presets
The IBM Storwize V5000 storage system provides three FlashCopy presets (Snapshot,
Clone, and Backup) to simplify the more common FlashCopy operations, as shown in
Table 10-3.

Table 10-3 FlashCopy presets


Preset Purpose

Snapshot Creates a point-in-time view of the production data. The snapshot is not intended
to be an independent copy. Instead, it is used to maintain a view of the production
data at the time the snapshot is created.
This preset automatically creates a thin-provisioned target volume with no capacity
allocated at the time of creation. The preset uses a FlashCopy mapping with no
background copy so that only data that is written to the source or target is copied
to the target volume.

Clone Creates an exact replica of the volume, which can be changed without affecting the
original volume. After the copy operation completes, the mapping that was created
by the preset is automatically deleted.
This preset automatically creates a volume with the same properties as the source
volume and creates a FlashCopy mapping with a background copy rate of 50. The
FlashCopy mapping is configured to automatically delete when the FlashCopy
mapping reaches 100% completion.

Backup Creates a point-in-time replica of the production data. After the copy completes, the
backup view can be refreshed from the production data, with minimal copying of
data from the production volume to the backup volume.
This preset automatically creates a volume with the same properties as the source
volume. The preset creates an incremental FlashCopy mapping with a background
copy rate of 50.

Presets: All of the presets can be adjusted by using the expandable Advanced Settings
section in the GUI.



10.1.4 Managing FlashCopy using the GUI
The IBM Storwize V5000 storage system provides a separate function icon to access copy
service management. The following windows are available for managing FlashCopy under the
Copy Services function icon:
򐂰 FlashCopy
򐂰 Consistency Groups
򐂰 FlashCopy Mappings

The Copy Services function icon is shown in Figure 10-4.

Figure 10-4 Copy Services function icon

Most of the actions to manage FlashCopy mappings can be performed in the FlashCopy window
or the FlashCopy Mappings window, although the quick path to create FlashCopy presets is
found only in the FlashCopy window.

Click FlashCopy in the Copy Services function icon menu and the FlashCopy window opens,
as shown in Figure 10-5. In the FlashCopy window, the FlashCopy mappings are organized
by volumes.



Figure 10-5 FlashCopy window

Click FlashCopy Mappings in the Copy Services function icon menu and the FlashCopy
Mappings window opens, as shown in Figure 10-6. In the FlashCopy Mappings window, the
FlashCopy mappings are listed individually.

Figure 10-6 FlashCopy Mappings window



The Consistency Groups window is used to manage the consistency groups for FlashCopy
mappings. Click Consistency Groups in the Copy Services function icon menu and the
Consistency Groups window opens, as shown in Figure 10-7.

Figure 10-7 Consistency Groups window



Quick path to create FlashCopy presets
It is easy to create a FlashCopy by using the presets in the FlashCopy window.

Creating a snapshot
In the FlashCopy window, choose a volume and click Create Snapshot from the Actions
drop-down menu, as shown in Figure 10-8. Alternatively, you can highlight your chosen
volume and right-click to access the Actions menu.

Figure 10-8 Create a snapshot using the preset

You now have a snapshot volume for the volume you selected.



Creating a clone
In the FlashCopy window, choose a volume and click Create Clone from the Actions
drop-down menu, as shown in Figure 10-9. Alternatively, highlight your chosen volume and
right-click to access the Actions menu.

Figure 10-9 Create a clone from the preset

You now have a clone volume for the volume you selected.



Creating a backup
In the FlashCopy window, choose a volume and click Create Backup from the Actions
drop-down menu, as shown in Figure 10-10. Alternatively, highlight your chosen volume and
right-click to access the Actions menu.

Figure 10-10 Create a backup from the preset

You now have a backup volume for the volume you selected.

In the FlashCopy window and in the FlashCopy Mappings window, you can monitor the
progress of the running FlashCopy operations, as shown in Figure 10-11. The progress bars
for each target volume indicate the copy progress as a percentage. The copy progress
remains 0% for snapshots (there is no change until data is written to the target volume). The
copy progress for clone and backup continues to increase until the copy process
completes.



Figure 10-11 FlashCopy in progress viewed in the FlashCopy Mappings window

The copy progress can be also found under the Running Tasks status indicator, as shown in
Figure 10-12.

Figure 10-12 Running Tasks bar: FlashCopy operations



This view is slightly different from the FlashCopy and FlashCopy Mappings windows (Figure 10-13).

Figure 10-13 FlashCopy operations shown through Running Tasks

After the copy processes complete, you find that the FlashCopy mapping with the clone preset
(TestVol2 in our example) was deleted automatically, as shown in Figure 10-14. There are
now two identical volumes that are independent of each other.

Figure 10-14 FlashCopy progresses complete



10.1.5 Managing FlashCopy mappings
The FlashCopy presets cover the most frequently used FlashCopy configurations for general
situations. However, customized FlashCopy mappings are still necessary in some
complicated scenarios.

Creating FlashCopy mappings


You can create FlashCopy mappings through the FlashCopy window. Select the volume that
you want to be the source volume for the FlashCopy mapping and click Advanced
FlashCopy... from the Actions drop-down menu, as shown in Figure 10-15. Alternatively,
select the volume and right-click.

Figure 10-15 Create advanced FlashCopy

You can Create New Target Volumes as part of the mapping process or Use Existing Target
Volumes. We describe creating volumes next. To use existing volumes, see “Using existing
target volumes” on page 480.



Creating target volumes
Complete the following steps to create target volumes:
1. Click Create new target volumes if you have not yet created the target volume.
2. The wizard guides you to choose a preset, as shown in Figure 10-16. Choose one preset
that has the most similar configuration to the one that is required and click Advanced
Settings to make any appropriate adjustments to the advanced settings.

Figure 10-16 Choose a preset most similar to your requirement



The following default advanced settings for the snapshot preset are shown in
Figure 10-17:
– Background Copy: 0
– Incremental: No
– Auto Delete after completion: No
– Cleaning Rate: 0

Figure 10-17 Default setting for the snapshot preset



The following Advanced Settings for the Clone Preset are shown in Figure 10-18:
– Background Copy: 50
– Incremental: No
– Auto Delete after completion: Yes
– Cleaning Rate: 50

Figure 10-18 Default settings for the clone preset



Figure 10-19 shows the following Advanced Settings for the Backup preset:
– Background Copy: 50
– Incremental: Yes
– Auto Delete after completion: No
– Cleaning Rate: 50

Figure 10-19 Default settings for the backup preset



3. Change the settings of the FlashCopy mapping according to your requirements and click
Next.
4. In the next step, you can add your FlashCopy mapping to a consistency group, as shown
in Figure 10-20. If the consistency group is not ready, the FlashCopy mapping can be
added to the consistency group afterward. Click Next to continue.

Figure 10-20 Add FlashCopy mapping to a consistency group



5. You can define how the new target volumes manage capacity, as shown in Figure 10-21.
The Create a Generic volume option is the default choice if you selected Clone or Backup
as your basic preset. If you select a thin-provisioned volume, more options are available,
as shown in Figure 10-22. The other option is to Inherit the Source Volume properties.

Figure 10-21 Defining how target volumes use capacity



Figure 10-22 Advanced thin provision options

6. Choose the storage pool in which you want to create your target volume, as shown in
Figure 10-23. You can select the same storage pool that is used by the source volume or
a different pool. Click Next to continue.

Figure 10-23 Selecting storage pool for target volume



7. Click Finish when you have made your choices; the mappings and volume are created.
8. Click Close to see the FlashCopy mapping created on your volume with a new target, as
shown in Figure 10-24. The status of the newly created FlashCopy mapping is Idle; it can
be started, as described in “Starting a FlashCopy mapping” on page 486.

Figure 10-24 New FlashCopy mapping is created with a new target



Using existing target volumes
Complete the following steps to use existing target volumes:
1. If you already have candidate target volumes, select Use Existing Target Volumes in the
Advanced FlashCopy menu, as shown in Figure 10-25.

Figure 10-25 Create FlashCopy mapping using existing target volume



2. You must choose the target volume for the source volume that you selected. Select the
target volume from the drop-down menu in the right pane of the window and click Add, as
shown in Figure 10-26.

Figure 10-26 Select the target volume



3. After you click Add, the FlashCopy mapping is listed, as shown in Figure 10-27. Click the
red X to delete it if the FlashCopy mapping is not the one you want to create. Additional
mappings can be made by selecting further source and target volumes from the
drop-down boxes and clicking Add. After all of the FlashCopy mappings you want are
defined, click Next to continue.

Figure 10-27 Add FlashCopy mapping



4. Select the preset and (if necessary) adjust the settings by using the Advanced Settings
section, as shown in Figure 10-28. (For more information about the advanced settings, see
“Creating target volumes” on page 472.) Confirm that the settings meet your requirements
and then click Next.

Figure 10-28 Select a preset and make your adjustments



5. You can now add the FlashCopy mapping to a consistency group (if necessary), as shown
in Figure 10-29. Selecting Yes shows a drop-down menu from which you can select a
consistency group. Click Finish and the FlashCopy mapping is created with the status of
Idle, as shown in Figure 10-24 on page 479.

Figure 10-29 Select a consistency group to add the FlashCopy mapping



Creating new FlashCopy mappings
You can also create FlashCopy mappings in the FlashCopy Mappings window by clicking
New FlashCopy Mapping at the upper left, as shown in Figure 10-30.

Figure 10-30 Create a FlashCopy mapping in the FlashCopy Mappings window

A wizard guides you through the process to create a FlashCopy mapping. The steps are the
same as creating an Advanced FlashCopy mapping using Existing Target Volumes, as
described in “Using existing target volumes” on page 480.



Starting a FlashCopy mapping
Most of the FlashCopy mapping actions can be performed in the FlashCopy window or the
FlashCopy Mappings window. For the actions that are available in both windows, the following
sections show the steps in the FlashCopy window; the steps are the same if you use the
FlashCopy Mappings window.

You can start the mapping by selecting the FlashCopy target volume in the FlashCopy
window and selecting the Start option from the Actions drop-down menu (as shown in
Figure 10-31) or by selecting the volume and right-clicking. The status of the FlashCopy
mapping changes from Idle to Copying.

Figure 10-31 Start FlashCopy mapping



Stopping a FlashCopy mapping
The FlashCopy mapping can be stopped by selecting the FlashCopy target volume in the
FlashCopy window and clicking the Stop option from the Actions drop-down menu, as shown
in Figure 10-32. After the stopping process completes, the status of the FlashCopy mapping
is changed to Stopped.

Figure 10-32 Stopping a FlashCopy mapping
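
Both actions are also available from the CLI. A minimal sketch, where fcmap0 is an example
mapping name:

   svctask startfcmap -prep fcmap0
   svctask stopfcmap fcmap0

The -prep flag flushes the cache and prepares the mapping before triggering it in one step;
without it, the mapping must already be in the Prepared state.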

Renaming the target volume


If the FlashCopy target volumes were created automatically by the IBM Storwize V5000
storage system, the name of the target volume is the source volume name plus a suffix that
includes numbers. The name of the target volumes can be changed to be more meaningful in
your environment.

To change the name of the target volume, select the FlashCopy target volume in the
FlashCopy window and click the Rename Target Volume option from the Actions drop-down
menu (as shown in Figure 10-33) or right-click the selected volume.



Figure 10-33 Rename a target volume

Enter your new name for the target volume, as shown in Figure 10-34. Click Rename to
finish.

Figure 10-34 Rename a target volume



Renaming a FlashCopy mapping
The FlashCopy mappings are created with names that begin with fcmap. You can change this
name to something more meaningful.

To change the name of a FlashCopy mapping, select the FlashCopy mapping in the
FlashCopy Mappings window and click Rename Mapping in the Actions drop-down menu,
as shown in Figure 10-35.

Figure 10-35 Rename a FlashCopy mapping



Enter the new name for the FlashCopy mapping, as shown in Figure 10-36. Click Rename to
finish.

Figure 10-36 Enter a new name for the FlashCopy mapping
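
Both rename operations can also be performed from the CLI; a sketch with example names:

   svctask chvdisk -name DB_VOL_SNAP VOL1_01
   svctask chfcmap -name DB_SNAP_MAP fcmap0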



Deleting a FlashCopy mapping
The FlashCopy mapping can be deleted by selecting the FlashCopy target volume in the
FlashCopy window and clicking Delete Mapping in the Actions drop-down menu (as shown
in Figure 10-37) or by right-clicking the selected volume.

Figure 10-37 Select Delete Mapping

FlashCopy Mapping state: If the FlashCopy mapping is in the Copying state, it must be
stopped before it is deleted.



You must confirm your action to delete FlashCopy mappings in the window that opens, as
shown in Figure 10-38. Verify the number of FlashCopy mappings you want to delete. If you
want to delete the FlashCopy mappings while the data on the target volume is inconsistent
with the source volume, select the option to do so. Click Delete and your FlashCopy mapping
is removed.

Figure 10-38 Confirm the deletion of FlashCopy mappings

Deleting FlashCopy mapping: Deleting the FlashCopy mapping does not delete the
target volume. If you must reclaim the storage space occupied by the target volume, you
must delete the target volume manually.
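
From the CLI, the equivalent is a sketch along the following lines, where fcmap0 and VOL1_01
are example names. The -force option stops and deletes a mapping whose target is still
inconsistent; the second command reclaims the target volume space:

   svctask rmfcmap -force fcmap0
   svctask rmvdisk VOL1_01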



Showing related volumes
You can show the FlashCopy mapping dependencies by selecting a target or source volume
in the FlashCopy window and clicking Show Related Volumes in the Actions drop-down
menu (as shown in Figure 10-39) or right-clicking the selected volume.

Figure 10-39 Show Related Volumes menu



The FlashCopy mapping dependency tree opens, as shown in Figure 10-40.

Figure 10-40 FlashCopy mapping dependency



Clicking either volume shows the properties of the volume, as shown in Figure 10-41.

Figure 10-41 Target FlashCopy Volume details



Editing properties
The background copy rate and cleaning rate can be changed after the FlashCopy mapping is
created. Select the FlashCopy target mapping in the FlashCopy window and click Edit
Properties in the Actions drop-down menu (as shown in Figure 10-42) or right-click.

Figure 10-42 Edit Properties menu



You can then modify the value of the background copy rate and cleaning rate by moving the
pointers on the bars, as shown in Figure 10-43. Click Save to save changes.

Figure 10-43 Change the copy and cleaning rate
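
The same change can be made from the CLI; a sketch using an example mapping name and
example rates:

   svctask chfcmap -copyrate 80 -cleanrate 60 fcmap0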



Restoring from a FlashCopy
Complete the following steps to manipulate FlashCopy target volumes to restore a source
volume to a previous known state:
1. Identify the FlashCopy relationship that you want to restore. In our example, we want to
restore FlashVol1, as shown in Figure 10-44.

Figure 10-44 Starting FlashCopy restore



2. Create a mapping using the target volume of the mapping to be restored. In our example,
it is FlashVol1_01, as shown in Figure 10-45. Select Advanced FlashCopy  Use
Existing Target Volumes.

Figure 10-45 Create reverse mapping



3. The Source Volume is preselected with the target volume that was selected in the previous
step. Select the Target Volume from the drop-down menu (you select the source volume
that you want to restore). In our example, we select FlashVol1, as shown in Figure 10-46.

Figure 10-46 Select target volume



4. Click Add. A warning message appears, as shown in Figure 10-47. Click Close. This
message is shown because we are using a source as a target.

Figure 10-47 Flash restore warning



5. Click Next and you see a snapshot preset choice, as shown in Figure 10-48.

Figure 10-48 Choose snapshot preset

Select Snapshot and click Next.



6. In the next window, you are asked if the new mapping is to be part of a consistency group,
as shown in Figure 10-49. In our example, the new mapping is not part of a consistency
group, so we click No and then Finish to create the mapping.

Figure 10-49 Add new mapping to consistency group



7. The new reverse mapping is now created and shown in the Idle state, as shown in
Figure 10-50.

Figure 10-50 New reverse mapping



8. To restore the original source volume FlashVol1 with the snapshot we took
(FlashVol1_01), we select the new mapping and right-click to open the Actions menu, as
shown in Figure 10-51.

Figure 10-51 Starting the reverse mapping



9. Click Start to overwrite FlashVol1 with the point-in-time data that was preserved in the
FlashCopy target FlashVol1_01. The command then completes, as shown in Figure 10-52.

Figure 10-52 Flash Restore command

Important: The underlying command that is run by the IBM Storwize V5000 appends the
-restore option automatically.



10. The reverse mapping now shows as 100% copied, as shown in Figure 10-53.

Figure 10-53 Source volume restore complete

10.1.6 Managing a FlashCopy consistency group


FlashCopy consistency groups can be managed by clicking Consistency Groups under the
Copy Services function icon, as shown in Figure 10-54.



Figure 10-54 Access to the Consistency Groups window

The Consistency Groups window is where you can manage consistency groups and
FlashCopy mappings as shown in Figure 10-55.

Figure 10-55 Consistency Groups window



All FlashCopy mappings are shown, those in consistency groups and those that are not. Click
Not in a Group, and then expand your selection by clicking the plus (+) icon next to it. All the
FlashCopy mappings that are not in any consistency groups are displayed underneath.

Individual consistency groups are displayed underneath those not in groups. You can view the
properties of a consistency group and the FlashCopy mappings in it by clicking the plus
icon to the left of the group name. You can also take action on any consistency groups and
FlashCopy mappings within the Consistency Groups window, as allowed by their state. For
more information, see 10.1.5, “Managing FlashCopy mappings” on page 471.

Creating a FlashCopy consistency group


To create a FlashCopy consistency group, click New Consistency Group at the top of the
Consistency Groups window, as shown in Figure 10-56.

Figure 10-56 New Consistency Group option

You are prompted to enter the name of the new consistency group. Following your naming
conventions, enter the name of the new consistency group in the name field and click Create.



After the creation process completes, you find a new consistency group, as shown in
Figure 10-57.

Figure 10-57 New consistency group



You can rename the Consistency Group by selecting it and then right-clicking or using the
Actions drop-down menu. Select Rename and enter the new name, as shown in
Figure 10-58. Next to the name of the consistency group, the state shows that it is now an
empty consistency group with no FlashCopy mapping in it.

Figure 10-58 Renaming a consistency group



Adding FlashCopy mappings to a consistency group
Click Not in a Group to list all the FlashCopy mappings with no Consistency Group. You can
add FlashCopy mappings to a Consistency Group by selecting them and clicking the Move to
Consistency Group option from the Actions drop-down menu, as shown in Figure 10-59 on
page 512.

Figure 10-59 Select the FlashCopy mappings to add to a consistency group

Important: You cannot move mappings that are copying. Selecting a FlashCopy that is
already running results in the Move to Consistency Group option being disabled.

Selections of a range are performed by highlighting a mapping, pressing and holding the Shift
key, and clicking the last item in the range. Multiple selections can be made by pressing and
holding the Ctrl key and clicking each mapping individually. The option is also available by
right-clicking individual mappings.



You are prompted to specify which consistency group you want to move the FlashCopy
mapping into, as shown in Figure 10-60. Choose from the list in the drop-down menu. Click
Move to Consistency Group to continue.

Figure 10-60 Select consistency group

After the action completes, you find that the FlashCopy mappings you selected were moved
from the Not in a Group list to the consistency group you chose.
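
The equivalent CLI action is a single command per mapping; a sketch with example names:

   svctask chfcmap -consistgrp FCCG1 fcmap0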



Starting a consistency group
To start a consistency group, highlight the required group and click Start from the Actions
drop-down menu or right-click the consistency group, as shown in Figure 10-61.

Figure 10-61 Start a consistency group

After you start the consistency group, all the FlashCopy mappings in the consistency group
start at the same time. The state of consistency group and all the underlying mappings
changes to Copying, as shown in Figure 10-62.



Figure 10-62 Consistency group started

Stopping a consistency group


The consistency group can be stopped by selecting Stop from the Actions drop-down menu
or right-clicking, as shown in Figure 10-63.

Figure 10-63 Stop a consistency group



After the stop process completes, the FlashCopy mappings in the consistency group are in
the Stopped state and a red X icon appears on the function icon of this consistency group to
indicate an alert, as shown in Figure 10-64.

Figure 10-64 Consistency group stop completes

Any mappings that had already completed their copies before they were added to the
consistency group remain in the Copied state, even if the group is stopped before all of its
members complete synchronization.
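
From the CLI, a consistency group is started and stopped as a single entity; a sketch with an
example group name:

   svctask startfcconsistgrp -prep FCCG1
   svctask stopfcconsistgrp FCCG1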



Removing FlashCopy mappings from a consistency group
FlashCopy mappings can be removed from a consistency group by selecting the FlashCopy
mappings and clicking Remove from Consistency Group from the Actions drop-down
menu of the FlashCopy mapping or right-clicking, as shown in Figure 10-65.

Figure 10-65 Remove from consistency group

The FlashCopy mappings are returned to the Not in a Group list after they are removed from
the consistency group.



Deleting a consistency group
A consistency group can be deleted by clicking Delete from the Actions drop-down menu or
right-clicking the selected group, as shown in Figure 10-66.

Figure 10-66 Delete a consistency group

You are presented with a confirmation screen as shown in Figure 10-67.

Figure 10-67 Confirming deletion of a consistency group



Click Yes to delete the group. Consistency groups do not have to be empty to be deleted. Any
FlashCopy mappings that exist in them will be moved to the Not in a Group mappings list
when the consistency group is deleted.

Restoring from a FlashCopy Consistency Group


It is possible to manipulate FlashCopy mappings that were captured as part of a consistency
group to restore the source volumes of those mappings to the state they were all in at the time
the FlashCopy was taken.

To restore a consistency group from a FlashCopy, we must create a reverse mapping of all
the individual volumes that are contained within the original consistency group. In our
example, we have two FlashCopy mappings (fcmap0 and fcmap1) in a consistency group that
is known as FlashTestGroup, as shown in Figure 10-68.

Figure 10-68 Creating FlashCopy reverse mapping

Complete the following steps:


1. Click New Consistency Group in the upper left corner (as shown in Figure 10-68) and
create a consistency group. In our example, we created a group called RedBookTest.
2. Follow the procedure that is described in “Restoring from a FlashCopy” on page 498 to
create reverse mappings for each of the mappings that exist in the source consistency
group (FlashTestGroup). When prompted to add to a consistency group (as shown in
Figure 10-49 on page 503), select Yes and, from the drop-down menu, select the new
“reverse” consistency group that you created in step 1. In our example, this group is
RedBookTest. The result should be similar to that shown in Figure 10-69.



Figure 10-69 Reverse Consistency group populated.

3. To restore the consistency group, highlight the reverse consistency group and click Start,
as shown in Figure 10-70.

Figure 10-70 Starting Consistency group restore



4. This overwrites TestVol0 and advancedflash0 with the point-in-time data that was
preserved by the FlashCopy mappings in the FlashTestGroup consistency group. The
command completes, as shown in Figure 10-71.

Figure 10-71 Consistency Group restore command

Important: The IBM Storwize V5000 automatically appends the -restore option to the
command.



5. Click Close and the command panel returns to the Consistency Group window. The
reverse consistency group now shows as 100% copied, and all volumes in the original
FlashTestGroup are now restored, as shown in Figure 10-72.

Figure 10-72 Consistency Group restored

10.2 Remote Copy


In this section, we describe how the Remote Copy function works in IBM Storwize V5000. We
also provide the implementation steps for Remote Copy configuration and management using
the GUI.

Remote Copy consists of three methods for copying: Metro Mirror, Global Mirror, and Global
Mirror with Change Volumes. Metro Mirror is designed for metropolitan distances with a
synchronous copy requirement. Global Mirror is designed for longer distances and uses
asynchronous replication, so the hosts do not wait for the full round-trip delay of the
long-distance link. Global Mirror with Change Volumes is an extension of Global Mirror that
is designed to attain consistency on lower-quality network links.

Metro Mirror and Global Mirror are IBM branded terms for the functions of Synchronous
Remote Copy and Asynchronous Remote Copy. Throughout this book, the term “Remote
Copy” is used to refer to both functions where the text applies to each term equally.



10.2.1 Remote Copy concepts
Remote Copy concepts are described in this section.

Partnership
When a partnership is created, two separate systems are connected: either two IBM Storwize
V5000 systems, or an IBM Storwize V5000 and an IBM SAN Volume Controller, Storwize V3700,
or Storwize V7000. After the partnership is configured on both systems, further
communication between the node canisters in each of the storage systems is established and
maintained by the SAN. All inter-cluster communication goes through the Fibre Channel
network.

The partnership must be defined on both participating Storwize or SVC systems to make the
partnership fully functional.
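
From the CLI, the partnership is created by running a command similar to the following on
both systems. This is a sketch; REMOTE_SYS is an example partner system name, the link
bandwidth is in Mbps, and on earlier code levels the command is named mkpartnership rather
than mkfcpartnership:

   svctask mkfcpartnership -linkbandwidthmbits 1024 REMOTE_SYS

The partnership state changes from Partially Configured to Fully Configured only after the
command is run on both systems.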

Interconnection: Interconnects between IBM Storwize products were introduced in
Version 6.3.0. Because the IBM Storwize V5000 supports only version 7.1.0 or higher, there
is no problem with support for this functionality. However, any other Storwize product must
be at a minimum level of 6.3.0 to connect to the IBM Storwize V5000, and the IBM Storwize
V5000 must be set to the replication layer using the svctask chsystem -layer replication
command. Layer limitations are described next.

Introduction to layers
IBM Storwize V5000 implements the concept of layers. Layers determine how the IBM
Storwize portfolio interacts with the IBM SAN Volume Controller. Currently, there are two
layers: replication and storage.

The replication layer is used when you want to use the IBM Storwize V5000 with one or more
IBM SAN Volume Controllers as a Remote Copy partner. The storage layer is the default
mode of operation for the IBM Storwize V5000, and is used when you want to use the IBM
Storwize V5000 to present storage to an IBM SAN Volume Controller as a backend system.

The layer for the IBM Storwize V5000 can be switched by running svctask chsystem -layer
replication. Generally, switch the layer while your IBM Storwize V5000 system is not in
production. This situation prevents potential disruptions because layer changes are not
I/O-tolerant.
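
A sketch of checking and changing the layer from the CLI (as noted above, quiesce the system
first):

   svcinfo lssystem
   svctask chsystem -layer replication

The current layer is reported in the layer field of the lssystem output.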

Figure 10-73 shows the effect of layers on IBM SAN Volume Controller and IBM Storwize
V5000 partnerships.

Figure 10-73 IBM Storwize V5000 virtualization layers



The replication layer allows an IBM Storwize V5000 system to be a Remote Copy partner with
an IBM SAN Volume Controller. The storage layer allows an IBM Storwize V5000 system to
function as back-end storage for an IBM SAN Volume Controller. An IBM Storwize V5000
system cannot be in both layers at the same time.

Limitations on the SAN Volume Controller and Storwize V5000 partnership
IBM SAN Volume Controller and IBM Storwize V5000 systems can be partners in a Remote
Copy partnership. However, the following limitations apply:
򐂰 The layer for the Storwize V5000 must be set to replication. The default is storage.
򐂰 If any other SAN Volume Controller or IBM Storwize V5000 ports are visible on the SAN
(aside from the ones on the cluster where you are making the change), you cannot change
the layer.
򐂰 If any host object is defined to an IBM SAN Volume Controller or IBM Storwize V5000
system, you cannot change the layer.
򐂰 If any MDisks from an IBM Storwize V5000 other than the one you are making the layer
change on are visible, you cannot change the layer.
򐂰 If any cluster partnership is defined, you cannot change the layer.

Partnership topologies
A partnership between up to four IBM Storwize V5000 systems is allowed.

The following typical partnership topologies between multiple IBM Storwize V5000s are
available:
򐂰 Daisy-chain topology, as shown in Figure 10-74.

Figure 10-74 Daisy chain partnership topology for IBM Storwize V5000

򐂰 Triangle topology, as shown in Figure 10-75.

Figure 10-75 Triangle partnership topology for IBM Storwize V5000



򐂰 Star topology, as shown in Figure 10-76.

Figure 10-76 Star topology for IBM Storwize V5000

򐂰 Full-meshed topology, as shown in Figure 10-77.

Figure 10-77 Full meshed IBM Storwize V5000

Partnerships: These partnerships are valid for configurations with SAN Volume
Controllers and IBM Storwize V5000 systems if the IBM Storwize V5000 systems are using
the replication layer. They are also valid for Storwize V3700 and V7000 products.

Partnership states
A partnership has the following states:
򐂰 Partially Configured
Indicates that only one cluster partner is defined from a local or remote cluster to the
displayed cluster and is started. For the displayed cluster to be configured fully and to
complete the partnership, you must define the cluster partnership from the cluster that is
displayed to the corresponding local or remote cluster.
򐂰 Fully Configured
Indicates that the partnership is defined on the local and remote clusters and is started.
򐂰 Remote Not Present
Indicates that the remote cluster is not present for the partnership.



򐂰 Partially Configured (Local Stopped)
Indicates that the local cluster is only defined to the remote cluster and the local cluster is
stopped.
򐂰 Fully Configured (Local Stopped)
Indicates that a partnership is defined on the local and remote clusters and the remote
cluster is present, but the local cluster is stopped.
򐂰 Fully Configured (Remote Stopped)
Indicates that a partnership is defined on the local and remote clusters and the remote
cluster is present, but the remote cluster is stopped.
򐂰 Fully Configured (Local Excluded)
Indicates that a partnership is defined between a local and remote cluster; however, the
local cluster was excluded. This state can occur when the fabric link between the two
clusters was compromised by too many fabric errors or slow response times of the cluster
partnership.
򐂰 Fully Configured (Remote Excluded)
Indicates that a partnership is defined between a local and remote cluster; however, the
remote cluster was excluded. This state can occur when the fabric link between the two
clusters was compromised by too many fabric errors or slow response times of the cluster
partnership.
򐂰 Fully Configured (Remote Exceeded)
Indicates that a partnership is defined between a local and remote cluster and the remote
is available; however, the remote cluster exceeds the number of allowed clusters within a
cluster network. A maximum of four clusters can be defined in a network. If the number
of clusters exceeds that limit, the IBM Storwize V5000 system determines the inactive
cluster or clusters by sorting all the clusters by their unique identifier in numerical order.
The inactive cluster partner that is not in the top four of the cluster-unique identifiers
shows Fully Configured (Remote Exceeded).

Remote Copy relationships


A Remote Copy relationship is a relationship between two individual volumes of the same
size. These volumes are called a master (source) volume and an auxiliary (target) volume.

Typically, the master volume contains the production copy of the data and is the volume that
the application normally accesses. The auxiliary volume often contains a backup copy of the
data and is used for disaster recovery.

The master and auxiliary volumes are defined when the relationship is created, and these
attributes never change. However, either volume can operate in the primary or secondary role
as necessary. The primary volume contains a valid copy of the application data and receives
updates from the host application, which is analogous to a source volume. The secondary
volume receives a copy of any updates to the primary volume because these updates are all
transmitted across the mirror link. Therefore, the secondary volume is analogous to a
continuously updated target volume. When a relationship is created, the master volume is
assigned the role of primary volume and the auxiliary volume is assigned the role of
secondary volume. The initial copying direction is from master to auxiliary. When the
relationship is in a consistent state, you can reverse the copy direction.
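
A relationship of either type can be created and started from the CLI. The following is a
minimal sketch with example names, where REMOTE_SYS is the partnered system; omit -global
to create a Metro Mirror relationship instead of Global Mirror:

   svctask mkrcrelationship -master VOL1 -aux VOL1_DR -cluster REMOTE_SYS -global -name RC_REL1
   svctask startrcrelationship RC_REL1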



The two volumes in a relationship must be the same size. The Remote Copy relationship can
be established on the volumes within one IBM Storwize V5000 storage system, which is
called an intra-cluster relationship. The relationship can also be established in different IBM
Storwize V5000 storage systems or between an IBM Storwize V5000 storage system and an
IBM SAN Volume Controller, IBM Storwize V3700, or IBM Storwize V7000, which are called
inter-cluster relationships.

Important: The use of Remote Copy target volumes as Remote Copy source volumes is
not allowed. A FlashCopy target volume can be used as Remote Copy source volume and
also as a Remote Copy target volume.

Metro Mirror
Metro Mirror is a type of Remote Copy that creates a synchronous copy of data from a master
volume to an auxiliary volume. With synchronous copies, host applications write to the master
volume but do not receive confirmation that the write operation completed until the data is
written to the auxiliary volume. This action ensures that both volumes have identical data
when the copy completes. After the initial copy completes, the Metro Mirror function always
maintains a fully synchronized copy of the source data at the target site.

Figure 10-78 shows how a write to the master volume is mirrored to the cache of the auxiliary
volume before an acknowledgement of the write is sent back to the host that issued the write.
This process ensures that the auxiliary is synchronized in real time if it is needed in a failover
situation.

Figure 10-78 Write on volume in a Metro Mirror relationship

The Metro Mirror function supports copy operations between volumes that are separated by
distances up to 300 km. For disaster recovery purposes, Metro Mirror provides the simplest
way to maintain an identical copy on the primary and secondary volumes. However, as with all
synchronous copies over remote distances, there can be a performance impact to host
applications. This performance impact is related to the distance between primary and
secondary volumes and, depending on application requirements, its use might be limited
based on the distance between sites.



Global Mirror
Global Mirror provides an asynchronous copy, which means that the secondary volume is not
an exact match of the primary volume at every point. The Global Mirror function provides the
same function as Metro Mirror Remote Copy without requiring the hosts to wait for the full
round-trip delay of the long-distance link; however, some delay can be seen on the hosts in
congested or overloaded environments. Make sure that you closely monitor and understand
your workload.

In asynchronous Remote Copy (which Global Mirror provides), write operations are
completed on the primary site and the write acknowledgement is sent to the host before it is
received at the secondary site. An update of this write operation is sent to the secondary site
at a later stage, which provides the capability to perform Remote Copy over distances that
exceed the limitations of synchronous Remote Copy.

The distance of Global Mirror replication is limited primarily by the latency of the WAN link that
is provided. Global Mirror requires a round-trip time of no more than 80 ms for data sent to
the remote location. The propagation delay is roughly 8.2 µs per mile or 5 µs per kilometer for
Fibre Channel connections. Each device in the path adds more delay of about 25 µs. Devices
that use software (such as some compression devices) add much more time. The time that is
added by software-assisted devices is highly variable and should be measured directly. Be
sure to include these times when you are planning your Global Mirror design.

You should also measure application performance based on the expected delays before
Global Mirror is fully implemented. The IBM Storwize V5000 storage system provides you
with an advanced feature that permits you to test performance implications before Global
Mirror is deployed and a long-distance link is obtained. This advanced feature is enabled by
modifying the IBM Storwize V5000 storage system parameters gmintradelaysimulation and
gminterdelaysimulation. These parameters can be used to simulate the write delay to the
secondary volume. The delay simulation can be enabled separately for each intra-cluster or
inter-cluster Global Mirror. You can use this feature to test an application before the full
deployment of the Global Mirror feature. For more information about how to enable the CLI
feature, see Appendix A, “Command-line interface setup and SAN Boot” on page 667.
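
A sketch of enabling the delay simulation from the CLI; the values are the simulated delays
in milliseconds and are examples only:

   svctask chsystem -gminterdelaysimulation 50
   svctask chsystem -gmintradelaysimulation 20

Setting a parameter back to 0 disables the simulation.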



Figure 10-79 shows that a write operation to the master volume is acknowledged back to the
host that is issuing the write before the write operation is mirrored to the cache for the
auxiliary volume.

Figure 10-79 Global Mirror write sequence

The Global Mirror algorithms always maintain a consistent image on the auxiliary volume.
They achieve this consistent image by identifying sets of I/Os that are active concurrently at
the master, assigning an order to those sets, and applying those sets of I/Os in the assigned
order at the secondary.

In a failover scenario where the secondary site must become the master source of data
(depending on the workload pattern and the bandwidth and distance between local and
remote site), certain updates might be missing at the secondary site. Therefore, any
applications that use this data must have an external mechanism for recovering the missing
updates and reapplying them; for example, a transaction log replay.

10.2.2 Global Mirror with Change Volumes


Global Mirror within the IBM Storwize V5000 is designed to achieve a recovery point objective
(RPO) as low as possible so that data is as up-to-date as possible. This capability places
some strict requirements on your infrastructure and in certain situations (with low network link
quality or congested or overloaded hosts), you might be affected by multiple 1920
(congestion) errors.

Congestion errors happen in the following primary situations:


򐂰 Congestion at the source site through the host or network.
򐂰 Congestion in the network link or network path.
򐂰 Congestion at the target site through the host or network.



Global Mirror includes functionality that is designed to address the following conditions that
negatively affect some Global Mirror implementations:
򐂰 Estimation of bandwidth requirements tends to be complex.
򐂰 It is often difficult to ensure that the latency and bandwidth requirements can be met.
򐂰 Congested hosts on the source or target site can cause disruption.
򐂰 Congested network links can cause disruption with only intermittent peaks.

To address these issues, Change Volumes were added as an option for Global Mirror
relationships. Change Volumes use the FlashCopy functionality but cannot be manipulated as
FlashCopy volumes because they are special-purpose only. Change Volumes replicate
point-in-time images on a cycling period (the default is 300 seconds). This means that the
replicated data must reflect only the state of the volume at the point in time the image was
taken, rather than every update that was made during the period. This can provide
significant reductions in replication volume.

Figure 10-80 shows a basic Global Mirror relationship without Change Volumes.

Figure 10-80 Global Mirror without Change Volumes

Figure 10-81 shows a relationship with the Change Volumes.

Figure 10-81 Global Mirror with Change Volumes

With Change Volumes, a FlashCopy mapping exists between the primary volume and the
primary Change Volume. The mapping is updated during a cycling period (every 60 seconds
to one day). The primary Change Volume is then replicated to the secondary Global Mirror
volume at the target site, which is then captured in another change volume on the target site.
This situation provides a consistent image at the target site and protects your data from being
inconsistent during resynchronization.



Figure 10-82 shows a number of I/Os on the source volume, the same number on the target
volume, and in the same order. Assuming that this set is the same set of data updated over
and over, these updates are wasted network traffic and the I/O can be completed much more
efficiently, as shown in Figure 10-83.

Figure 10-82 Global Mirror I/O replication without Change Volumes

In Figure 10-83, the same data is being updated repeatedly, so Change Volumes
demonstrate significant I/O transmission savings because you must send only I/O number 16,
which was the last I/O before the cycling period.

Figure 10-83 Global Mirror I/O replication with Change Volumes

The cycling period can be adjusted by running chrcrelationship -cycleperiodseconds
<60-86400>. If a copy does not complete in the cycle period, the next cycle does not start until
the prior cycle completes. It is for this reason that the use of Change Volumes gives you the
following possibilities for RPO:
򐂰 If your replication completes in the cycling period, your RPO is twice the cycling period.
򐂰 If your replication does not complete within the cycling period, your RPO is twice the
completion time. The next cycling period starts immediately after the prior period is
finished.

Careful consideration should be put into balancing your business requirements with the
performance of Global Mirror with Change Volumes. Global Mirror with Change Volumes
increases the inter-cluster traffic for more frequent cycling periods, so going as short as
possible is not always the answer. In most cases, the default should meet your requirements
and perform reasonably well.



Important: When Global Mirror volumes with Change Volumes are used, make sure that
you remember to select the Change Volume on the auxiliary (target) site. Failure to do so
leaves you exposed during a resynchronization operation.

The GUI automatically creates Change Volumes for you. However, it is a limitation of this
initial release that they are fully provisioned volumes. To save space, you should create
thin-provisioned volumes in advance and use the existing volume option to select your
change volumes.
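
A sketch of creating a thin-provisioned change volume and attaching it to an existing
relationship; the pool, volume, and relationship names are examples, and the size must match
the relationship volumes:

   svctask mkvdisk -mdiskgrp POOL1 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -name CHG_VOL_M
   svctask chrcrelationship -masterchange CHG_VOL_M RC_REL1
   svctask chrcrelationship -cycleperiodseconds 600 RC_REL1

The auxiliary change volume is attached in the same way on the remote system by using the
-auxchange parameter.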

Remote Copy consistency groups


A consistency group is a logical entity that groups copy relationships. By grouping the
relationships, you can ensure that these relationships are managed in unison and the data
within the group is in a consistent state. For more information about the necessity of
consistency groups, see 10.1.6, “Managing a FlashCopy consistency group” on page 507.

Remote Copy commands can be issued to a Remote Copy consistency group, and, therefore,
simultaneously for all Metro Mirror relationships that are defined within that consistency
group, or to a single Metro Mirror relationship that is not part of a Metro Mirror consistency
group.

Figure 10-84 shows the concept of Remote Copy consistency groups. Because the
RC_Relationships 1 and 2 are part of the consistency group, they can be handled as one
entity, while the stand-alone RC_Relationship 3 is handled separately.

Figure 10-84 Remote Copy consistency group



Remote Copy relationships do not have to belong to a consistency group, but if they do they
can only belong to one consistency group. Relationships that are not part of a consistency
group are called stand-alone relationships. A consistency group can contain zero or more
relationships. All relationships in a consistency group must have matching primary and
secondary clusters, which are sometimes referred to as master clusters and auxiliary
clusters. All relationships in a consistency group must also have the same copy direction and
state.

Metro Mirror and Global Mirror relationships cannot belong to the same consistency group. A
copy type is automatically assigned to a consistency group when the first relationship is
added to the consistency group. After the consistency group is assigned a copy type, only
relationships of that copy type can be added to the consistency group.
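
A sketch of the equivalent CLI operations, with example names; REMOTE_SYS is the partnered
system and RC_REL1 is an existing relationship:

   svctask mkrcconsistgrp -name RCCG1 -cluster REMOTE_SYS
   svctask chrcrelationship -consistgrp RCCG1 RC_REL1
   svctask startrcconsistgrp RCCG1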

Remote Copy and consistency group states


Stand-alone Remote Copy relationships and consistency groups share a common
configuration and state model. All of the relationships in a non-empty consistency group have
the same state as the consistency group.

The following states apply to both relationships and consistency groups, except for the Empty state, which applies only to consistency groups (a CLI query example follows the list):

- InconsistentStopped
  The primary volumes are accessible for read and write I/O operations, but the secondary volumes are not accessible for either. A copy process must be started to make the secondary volumes consistent.

- InconsistentCopying
  The primary volumes are accessible for read and write I/O operations, but the secondary volumes are not accessible for either. This state indicates that a copy process is ongoing from the primary to the secondary volume.

- ConsistentStopped
  The secondary volumes contain a consistent image, but it might be outdated compared to the primary volumes. This state can occur when a relationship was in the ConsistentSynchronized state and experienced an error that forced a freeze of the consistency group or the Remote Copy relationship.

- ConsistentSynchronized
  The primary volumes are accessible for read and write I/O operations. The secondary volumes are accessible for read-only I/O operations.

- Idling
  Both the master and the auxiliary volumes are operating in the primary role. Therefore, the volumes are accessible for write I/O operations.

- IdlingDisconnected
  The volumes in this half of the consistency group are all operating in the primary role and can accept read or write I/O operations.

- InconsistentDisconnected
  The volumes in this half of the consistency group are all operating in the secondary role and cannot accept read or write I/O operations.

- ConsistentDisconnected
  The volumes in this half of the consistency group are all operating in the secondary role and can accept read I/O operations but not write I/O operations.

- Empty
  The consistency group does not contain any relationships.
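These states can also be queried from the CLI, which reports them in lowercase with underscores (for example, consistent_synchronized or inconsistent_copying). The following sketch uses hypothetical object names; REL1 is a relationship and CG1 is a consistency group:

   lsrcrelationship REL1    # the state field shows the current relationship state
   lsrcconsistgrp CG1       # the state field shows the state shared by all members of the group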

10.2.3 Remote Copy planning


Before you use Remote Copy, you must plan for its usage.

General guidelines for Remote Copy


General guidelines for Remote Copy include the following considerations:

- Partnerships between up to four IBM Storwize V5000, IBM SAN Volume Controller, IBM Storwize V7000, or IBM Storwize V3700 systems are allowed. The partnership must be defined on all partnered IBM Storwize storage systems or IBM SAN Volume Controller systems to make it fully functional.
- The two volumes in a relationship must be the same size.
- The Remote Copy relationship can be established on volumes within one IBM Storwize V5000 storage system or between two different IBM Storwize V5000 storage systems. When the two volumes are in the same cluster, they must be in the same I/O group.
- You cannot use Remote Copy target volumes as Remote Copy source volumes. However, a FlashCopy target volume can be used as a Remote Copy source volume. Other restrictions are outlined in Table 10-5 on page 537.
- The Metro Mirror function supports copy operations between volumes that are separated by distances up to 300 km.
- One Remote Copy relationship can belong to only one consistency group.
- All relationships in a consistency group must have matching primary and secondary clusters (master clusters and auxiliary clusters). All relationships in a consistency group must also have the same copy direction and state.
- Metro Mirror and Global Mirror relationships cannot belong to the same consistency group.
- To manage multiple Remote Copy relationships as one entity, relationships can be made part of a Remote Copy consistency group, which ensures data consistency across multiple Remote Copy relationships and provides ease of management.
- An IBM Storwize V5000 storage system implements flexible resynchronization support, which enables it to resynchronize volume pairs that experienced write I/Os to both disks and to resynchronize only those regions that are known to have changed.
- Global Mirror with Change Volumes should have Change Volumes defined for both the master and auxiliary volumes.



Remote Copy configuration limits
Table 10-4 lists the Remote Copy configuration limits.

Table 10-4 Remote Copy configuration limits

  Parameter                                                    Value
  Number of Remote Copy consistency groups per cluster         256
  Number of Remote Copy relationships per consistency group    No limit beyond the number of Remote
                                                               Copy relationships per system
  Number of Remote Copy relationships per I/O group            2,048 (4,096 per system)
  Total Remote Copy volume capacity per I/O group              1024 TB (this limit is the total capacity
                                                               for all master and auxiliary volumes in
                                                               the I/O group)

SAN planning for Remote Copy


In this section, we describe some guidelines that can be used for SAN planning for Remote
Copy.

Zoning recommendation
Node canister ports on each IBM Storwize V5000 must communicate with each other so that
the partnership can be created. These ports must be visible to each other on your SAN.
Proper switch zoning is critical to facilitating inter-cluster communication.

The following SAN zoning recommendations should be considered:

- For each node canister, exactly two Fibre Channel ports should be zoned to exactly two Fibre Channel ports from each node canister in the partner cluster.
- If dual-redundant inter-switch links (ISLs) are available, the two ports from each node should be split evenly between the two ISLs; that is, exactly one port from each node canister should be zoned across each ISL. For more information, see this website:
  http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003634&myns=s033&mynp=familyind5329743&mync=E
- All local zoning rules should be followed. A properly configured SAN fabric is key not only to local SAN performance, but also to Remote Copy. For more information about these rules, see this website:
  http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp?topic=%2Fcom.ibm.storwize.V5000.doc%2Fsvc_configrulessummary_02171530.html

Important: When a local fabric and a remote fabric are connected for Remote Copy
purposes, the ISL hop count between a local node and a remote node cannot exceed
seven.



Remote Copy link requirements
The following link requirements are valid for Metro Mirror and Global Mirror:

- Round-trip latency
  The total round-trip latency must be less than 80 ms, and less than 40 ms in each direction. Latency simulations should be performed with your applications before any network links are put in place, to verify that the applications perform at an acceptable level while they meet the round-trip latency requirement.

- Bandwidth
  The bandwidth must satisfy the following requirements (a worked sizing example follows the Redundancy note below):
  – If you are not using Change Volumes, the link must be able to sustain the peak write load for all mirrored volumes plus the background copy traffic.
  – If you are using Change Volumes with Global Mirror, the link must be able to sustain the change rate of the source Change Volumes plus the background copy traffic.
  – Allow additional bandwidth for the background copy rate (the preferred practice is 10% to 20% of the maximum peak load) for initial synchronization and resynchronization.
  – Remote Copy internal communication at idle, with or without Change Volumes, is approximately 2.6 Mbps. This amount is the minimum amount.

Redundancy: If the link between two sites is configured with redundancy so that it can
tolerate single failures, the link must be sized so that the bandwidth and latency
requirement can be met during single failure conditions.
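As a rough sizing sketch, the following arithmetic applies the guidelines above to hypothetical figures (a sustained peak write load of 100 MBps across all mirrored volumes and a 20% background copy allowance):

   peak write load           100 MBps  =  800.0 Mbps
   background copy (20%)      20 MBps  =  160.0 Mbps
   Remote Copy idle traffic            ~    2.6 Mbps
   -------------------------------------------------
   required per direction              ~  962.6 Mbps

In this example, a single 1 Gbps link would run close to saturation, so a higher-bandwidth link, or redundant links that are each sized to carry the full load alone, would be the safer choice.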

Native IP Replication
With the advent of version 7.2 code, the IBM Storwize V5000 supports replication over native IP links by using the built-in networking ports of the cluster nodes. This gives greater flexibility in creating Remote Copy links between Storwize and SVC products, and FC over IP routers are no longer required. The feature uses SANslide technology, which was developed by Bridgeworks Limited of Christchurch, UK, a company that specializes in products that bridge storage protocols and accelerate data transfer over long distances.

Adding this technology at each end of a wide area network (WAN) Transmission Control Protocol/Internet Protocol (TCP/IP) link significantly improves the utilization of the link. It does this by applying patented artificial intelligence (AI) to hide the latency that is normally associated with WANs. Instead, the Storwize technology uses TCP/IP latency to its advantage. On a traditional IP link, performance drops off as more data is sent because each transmission must wait for an acknowledgement before the next transmission can be sent. Rather than waiting for the acknowledgement to come back, the Storwize technology sends more sets of packets across other virtual connections. The number of virtual connections is controlled by the AI engine. This improves WAN connection use, which results in a data transfer rate that approaches full line speed, similar to the use of buffer-to-buffer credits in FC.

If packets are lost from any virtual connection, the data is retransmitted, and the remote unit waits for it. Presuming that this is not a frequent problem, overall performance is only marginally affected because of the delay of an extra round trip for the data that is resent. The implementation of this technology can greatly improve the performance of Remote Copy, especially Global Mirror with Change Volumes (GM/CV), over long distances.

For more information about Native IP replication and how to use it, see the separate
publication, IBM SAN Volume Controller and Storwize Family Native IP Replication,
REDP-5103.



Interaction between Remote Copy and FlashCopy
Table 10-5 lists the combinations of FlashCopy and Remote Copy that are supported.

Table 10-5 FlashCopy and Remote Copy interaction

  FlashCopy source with Remote Copy primary:
  Supported.

  FlashCopy source with Remote Copy secondary:
  Supported. When the FlashCopy relationship is in the Preparing and Prepared states, the cache at the Remote Copy secondary site operates in write-through mode. This process adds more latency to the already latent Remote Copy relationship.

  FlashCopy target with Remote Copy primary:
  This combination is supported and has the following restrictions:
  - Running stop -force might cause the Remote Copy relationship to fully resynchronize.
  - The I/O group must be the same.

  FlashCopy target with Remote Copy secondary:
  This combination is supported with the restriction that the FlashCopy mapping cannot be copying, stopping, or suspended. Otherwise, the restrictions are the same as at the Remote Copy primary site.

If you are not using Global Mirror with Change Volumes, for disaster recovery purposes, you
can use the FlashCopy feature to create a consistent copy of an image before you restart a
Global Mirror relationship.

When a consistent relationship is stopped, the relationship enters the consistent_stopped state. While in this state, I/O operations at the primary site continue to run. However, updates are not copied to the secondary site. When the relationship is restarted, the synchronization process for new data is started. During this process, the relationship is in the inconsistent_copying state.

The secondary volume for the relationship cannot be used until the copy process completes
and the relationship returns to the consistent state. When this situation occurs, start a
FlashCopy operation for the secondary volume before you restart the relationship. While the
relationship is in the Copying state, the FlashCopy feature can provide a consistent copy of
the data. If the relationship does not reach the synchronized state, you can use the
FlashCopy target volume at the secondary site.

10.3 Troubleshooting Remote Copy


Remote Copy (Global Mirror and Metro Mirror) has the following primary error codes:

- 1920: This error is a congestion error, which means that the source, the link between source and target, or the target cannot keep up with the rate of demand.

- 1720: This error is a heartbeat or cluster partnership communication error. This error tends to be more serious because failing communication between your cluster partners involves some extended diagnostic time.



10.3.1 1920 error
A 1920 error (event ID 050010) can have several triggers. The following official probable cause projections are available:
- Primary cluster or SAN fabric problem (10%)
- Primary cluster or SAN fabric configuration (10%)
- Secondary cluster or SAN fabric problem (15%)
- Secondary cluster or SAN fabric configuration (25%)
- Inter-cluster link problem (15%)
- Inter-cluster link configuration (25%)

In practice, the error that is most often overlooked is latency. Global Mirror has a round-trip-time tolerance limit of 80 ms. A message that is sent from your source cluster to your target cluster and the accompanying acknowledgement must complete within a total of 80 ms (40 ms each way).

The primary component of your round-trip time is the physical distance between sites. For every 1,000 km (621.36 miles), there is a 5 ms delay in each direction. This delay does not include the time that is added by equipment in the path. Every device adds a varying amount of time depending on the device, but you can expect about 25 µs for pure hardware devices. For software-based functions (such as compression that is implemented in software), the added delay tends to be much higher (usually in the millisecond-plus range).

Consider this example. Company A has a production site that is 1,900 km from its recovery site. The network service provider uses five devices to connect the two sites. In addition to those devices, Company A uses a SAN Fibre Channel router at each site to provide FCIP to encapsulate the Fibre Channel traffic between sites. There are now seven devices, plus 1,900 km of distance delay. Together, the devices add about 200 µs of delay each way. The distance adds 9.5 ms each way, for a total of 19 ms. Combined with the device latency, that is 19.4 ms of physical latency at a minimum. This latency is under the 80 ms limit of Global Mirror, but it is a best case number.

Link quality and bandwidth play a significant role here. Your network provider likely guarantees a maximum latency on your network link; be sure that this guarantee keeps you below the Global Mirror round-trip time (RTT) limit. With a lower-quality or lower-bandwidth network link, you can easily double or triple the expected physical latency. As a result, you can suddenly exceed the limit the moment a large flood of I/O arrives that exceeds the bandwidth capacity you have in place.

When you get a 1920 error, always check the latency first. The FCIP routing layer can
introduce latency if it is not properly configured. If your network provider reports a much lower
latency, this report can be an indication of a problem at your FCIP Routing layer. Most FCIP
Routing devices have built-in tools that you can use to check the round-trip delay time (RTT).
When you are checking latency, remember that TCP/IP routing devices (including FCIP
routers) report RTT by using standard 64-byte ping packets.



Figure 10-85 shows why the effective transit time should be measured only by using packets that are large enough to hold a Fibre Channel frame. This packet size is 2148 bytes (2112 bytes of payload and 36 bytes of header), and you should allow more capacity to be safe because different switching vendors have optional features that might increase this size.

Figure 10-85 Effect of packet size (in bytes) versus link size

Before you proceed, take a quick look at the second largest component of your round-trip time: serialization delay. Serialization delay is the amount of time that is required to move a packet of data of a specific size across a network link of a specific bandwidth. This delay is based on a simple concept: the time that is required to move a specific amount of data decreases as the data transmission rate increases.

In Figure 10-85, there are orders of magnitude of difference between the different link
bandwidths. It is easy to see how 1920 errors can arise when your bandwidth is insufficient
and why you should never use a TCP/IP ping to measure RTT for FCIP traffic.

Figure 10-85 compares the amount of time in microseconds that is required to transmit a packet across network links of varying bandwidth capacity. The following packet sizes are used:
- 64 bytes: The size of the common ping packet
- 1500 bytes: The size of the standard TCP/IP packet
- 2148 bytes: The size of a Fibre Channel frame

Your path maximum transmission unit (MTU) also affects the delay that is incurred in getting a packet from one location to another: an MTU that causes fragmentation, or one that is too large and causes too many retransmits when a packet is lost, adds delay. After you verify your latency by using the correct packet size, proceed with normal hardware troubleshooting.
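As a quick latency check under these guidelines, you can size your test packets to match a Fibre Channel frame rather than rely on default 64-byte pings. The following sketch assumes a Linux host with access to the replication network; the target address 192.0.2.10 is a placeholder:

   # 2120 bytes of ICMP payload + 8 bytes of ICMP header + 20 bytes of IP header = 2148 bytes on the wire
   ping -c 20 -s 2120 192.0.2.10

   # Optionally forbid fragmentation to reveal path MTU problems (Linux-specific flag)
   ping -c 20 -s 2120 -M do 192.0.2.10

If the second command fails while the first succeeds, the path MTU is smaller than a Fibre Channel frame and fragmentation is occurring.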



10.3.2 1720 error
The 1720 error (event ID 050020) is the other primary error code of Remote Copy. Because
the term “System Partnership” implies that all involved virtualization systems are partners,
they must communicate with each other. When a partner on either side stops communicating,
you see a 1720 error appear in your error log. According to official documentation, there are
no likely field replaceable unit breakages or other causes.

In practice, the source of this error is most often a fabric problem or a problem in the network path between your partners. When you receive this error, if your fabric has more than 64 HBA ports zoned, you should check your fabric configuration to verify that no more than one HBA port from each node per I/O group is zoned to a given host HBA. The advised zoning configuration for fabrics is one port of each node per I/O group associated with each host HBA. For fabrics with 64 or more host ports, this recommendation becomes a rule. You must follow this zoning rule or the configuration is technically unsupported.

Improper zoning leads to SAN congestion, which can inhibit remote link communication
intermittently. Checking the zero buffer credit timer through IBM Tivoli Storage Productivity
Center and comparing its value against your sample interval might reveal potential SAN
congestion. When a zero buffer credit timer is above 2% of the total time of the sample
interval, it is likely to cause problems.

Next, always ask your network provider to check the status of the link. If the link is okay, watch
for repetition of this error. It is possible in a normal and functional network setup to have
occasional 1720 errors, but multiple occurrences indicate a larger problem.

If you receive multiple 1720 errors, recheck your network connection and then check the IBM
Storwize V5000 partnership information to verify the status and settings. Perform diagnostic
tests for every piece of equipment in the path between the two systems. It often helps to have
a diagram that shows the path of your replication from logical and physical configuration
viewpoints.

If your investigation fails to resolve your Remote Copy problems, you should contact your IBM
support representative for a complete analysis.



10.4 Managing Remote Copy using the GUI
The IBM Storwize V5000 storage system provides a separate function icon for copy service management. The following windows are available for managing Remote Copy, which are accessed through the Copy Services function icon:
- Remote Copy
- Partnerships

As the name implies, these windows are used to manage Remote Copy and the partnership.

10.4.1 Managing cluster partnerships


The Partnership window is used to manage a partnership between clusters. To access the
Partnership window, click the Copy Services function icon and then click Partnerships, as
shown in Figure 10-86.

Figure 10-86 Partnership window



Creating a partnership
No partnership is defined in our example (see Figure 10-87); we see only our local system, so we must create a partnership between the IBM Storwize V5000 system and the chosen Storwize partner system, which in our example is a Storwize V3700. Click New Partnership in the Partnership window.

Figure 10-87 Create a cluster partnership

If there is no partnership candidate, an error window opens, as shown in Figure 10-88.

Figure 10-88 No candidates are available to create a partnership



Check the zoning and the system status and make sure that the clusters can see each other.

If the Storwize V5000 can see a candidate system you will be prompted to choose between
an FC or IP partnership as shown in Figure 10-89.

Figure 10-89 Choosing an FC or IP partnership



The two systems should not be visible to each other over both FC and IP; if you are going to use IP replication, ensure that the two systems are not zoned together in any way at the SAN level. A system can have simultaneous partnerships over IP and FC with different systems, but never with the same one. For more information, see IBM SAN Volume Controller and Storwize Family Native IP Replication, REDP-5103. In our example, we choose FC.

Next you can create your partnership by selecting the appropriate remote storage system
from the Partner system drop-down as shown in Figure 10-90, and defining the available
bandwidth and the background copy percentage between both systems.

Figure 10-90 Select the remote IBM Storwize storage system for a new partnership

The bandwidth that you enter here is used by the background copy process between the clusters in the partnership. To set the background copy bandwidth optimally, make sure that you consider all three resources (primary storage, inter-cluster link bandwidth, and auxiliary storage) to avoid overloading them, which affects the I/O latency.
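The same partnership can be created from the CLI. The following sketch is illustrative rather than definitive; the remote system name ITSO_V3700, the bandwidth, and the copy rate are hypothetical values, and option names can vary slightly by code level:

   # List candidate systems that are visible over the fabric
   lspartnershipcandidate

   # Create an FC partnership with 1024 Mbps of link bandwidth and a 50% background copy rate
   mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 ITSO_V3700

   # Verify the partnership state (Partially Configured until the remote side is defined)
   lspartnership

Remember that the equivalent command must also be run on the partner system before the partnership becomes fully configured.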



Click OK and the partnership task starts. When it finishes, click Close; the partnership is now complete on this system. It is listed in the Partnership window, but it is displayed with an alert, and the system health status turns yellow, as shown in Figure 10-91. This is normal and happens because only one half of the partnership currently exists.

Figure 10-91 Partially configured partnership

If you select the partnership, more information is available through the Properties option on
the Actions menu or by right-clicking, as shown in Figure 10-92.

Figure 10-92 Partnership properties



Important: The partnership is in the “Partially Configured: Local” state because we have
not yet defined it on the other IBM Storwize system. For more information about
partnership states, see “Remote Copy and consistency group states” on page 533.

From the Properties menu option, the partnership bandwidth and background copy rate can
be altered by clicking the Edit button as shown in Figure 10-93.

Figure 10-93 Editing partnership bandwidth and background copy rate

After you have edited the parameters, click Save to save changes or Cancel to quit without
making changes.

Complete the same steps on the second storage system for the partnership to become fully
configured.



The Remote Copy partnership is now complete between the two IBM Storwize V5000
systems as shown in Figure 10-94 and both systems are ready for Remote Copy
relationships to be configured.

Figure 10-94 Fully configured partnership

Stopping and starting a partnership


You can stop the partnership by clicking Stop from the Actions drop-down menu, as shown in Figure 10-95. Stopping the partnership disconnects the relationships that use it.

Figure 10-95 Stopping a partnership



After you stop the partnership, your partnership is listed as Fully Configured: Stopped, as
shown in Figure 10-96.

Figure 10-96 Fully configured partnership in Stopped state



The Health status turns yellow to indicate that the partnership is down. This is normal. You can restart a stopped partnership by clicking Start from the Actions drop-down menu.

The partnership returns to the fully configured status when it is restarted, and the health bar returns to green. Initially, when restarting a partnership, the status might go to a Not Present state and the health bar might turn red, as shown in Figure 10-97. This is normal; when the GUI refreshes, the status corrects itself, returning to the Fully Configured state, and the health bar turns green.

Figure 10-97 Re-starting a partnership

Deleting a partnership
You can delete a partnership by selecting Delete from the Actions drop-down menu, as
shown in Figure 10-95 on page 547.
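For reference, the partnership lifecycle can also be driven from the CLI. This is a sketch; the partner system name ITSO_V3700 is a placeholder:

   chpartnership -stop ITSO_V3700     # stop the partnership
   chpartnership -start ITSO_V3700    # restart a stopped partnership
   rmpartnership ITSO_V3700           # delete the partnership (repeat on the partner system)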



10.4.2 Managing stand-alone Remote Copy relationships
A Remote Copy relationship can be defined between two volumes where one is the master
(source) and the other one is the auxiliary (target) volume. Use of Remote Copy auxiliary
volumes as Remote Copy master volumes is not allowed. Open the Remote Copy window to
manage Remote Copy by clicking the Copy Services icon and then clicking Remote Copy,
as shown in Figure 10-98.

Figure 10-98 Open Remote Copy window

The Remote Copy window is where management of Remote Copy relationships and Remote
Copy consistency groups is carried out as shown in Figure 10-99.

Figure 10-99 Remote Copy window



The Remote Copy window displays a list of Remote Copy consistency groups, and you can take actions on both the Remote Copy relationships and the Remote Copy consistency groups. Click Not in a Group to display all the Remote Copy relationships that are not in any Remote Copy consistency group. To customize the blue column heading bar and select different attributes of the Remote Copy relationships, right-click anywhere in the blue bar.

Creating stand-alone Remote Copy relationships

Important: Before a remote copy relationship can be created, target volumes that are the
same size as the source volumes that you want to mirror must be created. For more
information about creating volumes, see Chapter 5, “Volume configuration” on page 201.

To create a Remote Copy relationship, click Create Relationship from the Actions
drop-down menu as shown in Figure 10-100. A wizard opens and guides you through the
Remote Copy relationship creation process.

Figure 10-100 Creating a remote copy relationship



Next, as shown in Figure 10-101, you must set the Remote Copy relationship type. You can
select Metro Mirror (synchronous replication) or one of the two forms of Global Mirror
(asynchronous replication). Select the appropriate replication type based on your
requirements and click Next.

Figure 10-101 Select the appropriate Remote Copy type

You must select where your auxiliary (target) volumes are: the local system or the already
defined second storage system. In our example (as shown in Figure 10-102), we choose
another system to build an inter-cluster relationship. Click Next to continue.

Figure 10-102 Select Remote Copy partner



Now the Remote Copy master and auxiliary volume must be specified. Both volumes must
have the same size. As shown in Figure 10-103, use the drop-down boxes to select the
master and auxiliary volumes. The system offers only appropriate auxiliary candidates with
the same volume size as the selected master volume. After selecting the volumes click Add.

Figure 10-103 Select the master and auxiliary volume

You can define multiple and independent relationships by choosing another set of volumes
and clicking Add again. Incorrectly defined relationships can be removed by clicking the red
cross. In our example, we create two independent Remote Copy relationships, as shown in
Figure 10-104.

Figure 10-104 Define multiple independent relationships



The next prompt asks you to select whether the volumes in the relationship are already synchronized. In most situations, the data on the master volume and on the auxiliary volume is not identical, so click No and then click Next to enable an initial copy, as shown in Figure 10-105.

Figure 10-105 Activate initial data copy

If you select Yes, indicating that the volumes are already synchronized, a warning message opens, as shown in Figure 10-106. Confirm that the volumes are truly identical, and then click OK to continue.

Figure 10-106 Warning message to make sure that the volumes are synchronized



You can choose to start the initial copying progress now or wait to start it later. In our example,
we select Yes, start copying now and click Finish, as shown in Figure 10-107.

Figure 10-107 Choose if you want to start copying now or later

After the Remote Copy relationships creation completes, two independent Remote Copy
relationships are defined and displayed in the Not in a Group list, as shown in Figure 10-108.

Figure 10-108 Creating a Remote Copy relationship process completes
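The same relationships can be created with one CLI command per volume pair. This sketch uses hypothetical names (mastervol1, auxvol1, REL1) and our remote system ITSO_V3700; add the -global flag for a Global Mirror relationship, and add -sync only if the two volumes really are identical:

   mkrcrelationship -master mastervol1 -aux auxvol1 -cluster ITSO_V3700 -name REL1
   startrcrelationship REL1    # begin the initial synchronization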



Optionally, you can monitor the ongoing initial synchronization in the Running Tasks status
indicator, as shown in Figure 10-109.

Figure 10-109 Remote copy initialization progress through Running Tasks

Highlight one of the operations and click to see the progress as shown in Figure 10-110.

Figure 10-110 Running task expanded to show the percentage complete of each copy operation



Stopping a stand-alone Remote Copy relationship
The Remote Copy relationship can be stopped by selecting the relationship and clicking Stop
from the Actions drop-down menu, as shown in Figure 10-111.

Figure 10-111 Stop Remote Copy relationship

A prompt appears. Select the option to allow secondary read/write access, if required, and then click Stop Relationship, as shown in Figure 10-112.

Figure 10-112 Option to allow secondary read/write access



After the stop completes, the state of the Remote Copy relationship is changed from
Consistent Synchronized to Idling, as shown in Figure 10-113. Read/write access to both
volumes is now allowed unless you selected otherwise.

Figure 10-113 Remote Copy relationship stop completes
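The CLI equivalent uses the optional -access flag to grant read/write access to the secondary volume; REL1 is a placeholder name:

   stoprcrelationship REL1            # stop without secondary access
   stoprcrelationship -access REL1    # stop and allow secondary read/write access (state becomes Idling)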

Starting a stand-alone Remote Copy relationship


The Remote Copy relationship can be started by selecting the relationship and clicking Start from the Actions drop-down menu, or by right-clicking the relationship, as shown in Figure 10-114.

Figure 10-114 Re-start a Remote Copy relationship



When a Remote Copy relationship is started, the most important item is selecting the copy direction. Both the master and the auxiliary volume can be the primary. Make your decision based on your requirements and click Start Relationship. In our example, we choose the master volume to be the primary, as shown in Figure 10-115.

Figure 10-115 Choose the copy direction
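From the CLI, the copy direction is specified with the -primary parameter when the relationship is started; REL1 is a placeholder:

   startrcrelationship -primary master REL1

If the relationship is in the Idling state because access to the secondary was allowed, the -force flag might also be required because restarting can discard changes on one side; check the command reference for your code level.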

Switching the direction of a stand-alone Remote Copy relationship


The copy direction of the Remote Copy relationship can be switched by selecting the
relationship and clicking Switch from the Actions drop-down menu, as shown in
Figure 10-116.

Figure 10-116 Switch Remote Copy relationship



A warning message opens and shows you the consequences of this action, as shown in
Figure 10-117 on page 560. If you switch the Remote Copy relationship, the copy direction of
the relationship becomes the opposite; that is, the current primary volume becomes the
secondary while the current secondary volume becomes the primary. Write access to the
current primary volume is lost and write access to the current secondary volume is enabled. If
it is not a disaster recovery situation, you must stop your host I/O to the current primary
volume in advance. Make sure that you are prepared for the consequences. If so, click OK to
continue.

Figure 10-117 Warning message for switching direction of a Remote Copy relationship

After the switch completes, your Remote Copy relationship is tagged with an icon (as shown in Figure 10-118) to show that the primary volume in this relationship was changed to the auxiliary.



Figure 10-118 Switch icon on the state of the relationship
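The CLI equivalent names the volume that is to become (or remain) the primary; REL1 is a placeholder:

   switchrcrelationship -primary aux REL1    # the auxiliary volume becomes the primary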

Renaming a stand-alone Remote Copy relationship


The Remote Copy relationship can be renamed by selecting the relationship and then clicking
Rename from the Actions drop-down menu, as shown in Figure 10-119, or by highlighting the
relationships and right-clicking.

Figure 10-119 Rename the Remote Copy relationship

Enter the new name for the Remote Copy relationship and click Rename.



Deleting a stand-alone Remote Copy relationship
The Remote Copy relationship can be deleted by selecting the relationship and selecting
Delete Relationship from the Actions drop-down menu, as shown in Figure 10-120, or by
highlighting the relationship and right-clicking.

Figure 10-120 Delete a Remote Copy relationship

You must confirm this deletion by verifying the number of relationships to be deleted, as
shown in Figure 10-121. Click Delete to proceed.

Figure 10-121 Confirm the relationship deletion
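From the CLI, renaming and deleting stand-alone relationships are one-line operations; REL1 and REL1_NEW are placeholder names:

   chrcrelationship -name REL1_NEW REL1    # rename the relationship
   rmrcrelationship REL1_NEW               # delete the relationship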



10.4.3 Managing a Remote Copy consistency group
A Remote Copy consistency group can be managed from the Remote Copy window as well.

Creating a Remote Copy consistency group


To create a Remote Copy consistency group, click Create Consistency Group, as shown in
Figure 10-122.

Figure 10-122 Create a Remote Copy consistency group

You must enter a name for your new consistency group, as shown in Figure 10-123. We call
ours ITSO_test.

Figure 10-123 Enter a name for the new consistency group



You are prompted for the location of auxiliary volumes, as shown in Figure 10-124. In our
example, these volumes are on another system. Select the relevant option and from the
drop-down menu, select the correct remote system. In our example, we have only one remote
system defined. Click Next to continue.

Figure 10-124 Remote Copy consistency group auxiliary volume location window

Next you are prompted to create an empty consistency group or add relationships to it, as
shown in Figure 10-125.

Figure 10-125 Creating an empty consistency group or adding relationships



If you select No and click Finish, the wizard completes and creates an empty Remote Copy
Consistency Group. Selecting Yes prompts for the type of copy to create, as shown in
Figure 10-126.

Figure 10-126 Selecting copy type - consistency group wizard

Choose the relevant copy type and click Next. A consistency group cannot contain both Metro Mirror and Global Mirror relationships; it must contain only one type or the other. In the following window, you can choose existing relationships to add to the new consistency group. Only existing relationships of the type that you chose previously (either Metro Mirror relationships or Global Mirror relationships) are displayed. This step is optional. Use the Ctrl and Shift keys to select multiple relationships to add. If you decide that you do not want to use any of these relationships but you do want to create other relationships, click Next.



However, if you choose relationships and then decide you do not want any of them, you
cannot remove them from the consistency group at this stage. You must stop the wizard and
start again, as shown in Figure 10-127.

Figure 10-127 Selecting existing relationships

The next window is optional and gives the option to create new relationships to add to the
consistency group, as shown in Figure 10-128. In our example we have chosen mastervol4
from the Master drop-down and auxmirror4 from the Auxiliary drop-down and added these.

Figure 10-128 Adding new relationships in the consistency group wizard



Select the relevant Master and Auxiliary volumes for the relationship you want to create and
click Add. Multiple relationships can be defined by selecting another Master and Auxiliary
volume and clicking Add again. You can remove any incorrect choices using the red “X” next
to each added relationship. When you finish, click Next. The next window prompts for whether
the relationships are synchronized, as shown in Figure 10-129.

Figure 10-129 Volume synchronization

In the next window, you are asked whether you want to start copying the volumes now, as
shown in Figure 10-130.

Figure 10-130 Remote Consistency group start copying option



After you select this option, click Finish to create the Remote Copy Consistency Group. You might encounter an error during the creation task if one of the relationships that you requested to be added to the group is not in the correct state; the group and any relationships must be in the same state to start with. This error is shown in Figure 10-131.

Figure 10-131 Error on creating consistency group when states do not match

Click Close to close the task window; the new consistency group is now shown in the GUI, as shown in Figure 10-132. Notice that the relationship we tried to add (mastervol4 → auxmirror4) is not listed under the new consistency group, but remains under Not in a Group.

Figure 10-132 New Remote Consistency group created



In our example, we created a consistency group with a single relationship. Other Remote
Copy relationships can be added to the consistency group later.

Just as for individual Remote Copy relationships, each Remote Copy Consistency Group displays its name and status beside the Relationship function icon. Also shown is the copy direction; in our case, this is ITSO V5000 → ITSO V3700. It is easy to change the name of a consistency group by right-clicking the entry, selecting Rename, and then entering a new name. Alternatively, highlight the consistency group and select Rename from the Actions drop-down menu. Similarly, below the Relationship function icon are the Remote Copy relationships in this consistency group. The actions on the Remote Copy relationships can be applied here by using the Actions drop-down menu or by right-clicking the relationships, as shown in Figure 10-133.

Figure 10-133 Drop-down menu options



Notice, however, that unlike stand-alone Remote Copy relationships, relationships under the control of a consistency group cannot be stopped or started individually, as shown in Figure 10-134.

Figure 10-134 Consistency group member options

Adding Remote Copy to a consistency group


The Remote Copy relationships in the Not in a Group list can be added to a consistency group by selecting the relationships and clicking Add to Consistency Group from the Actions drop-down menu, as shown in Figure 10-135. Now that the relationship we tried to add earlier in Figure 10-128 on page 566 is in the same state as the consistency group, we add it to our ITSO_test group.

Figure 10-135 Add a Remote Copy relationships to a consistency group



You must choose the consistency group to which to add the Remote Copy relationships. Select the appropriate consistency group and click Add to Consistency Group, as shown in Figure 10-136.

Figure 10-136 Choose the consistency group to add the remote copies

Your Remote Copy relationships are now in the consistency group that you selected. You can
only add Remote Copy relationships that are in the same state as the Consistency Group to
which you want to add them.
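The CLI form is the same chrcrelationship command that was shown earlier; REL2 is a placeholder name:

   chrcrelationship -consistgrp ITSO_test REL2

The command fails if the state or copy type of the relationship does not match the consistency group, which mirrors the GUI behavior described above.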



Stopping a consistency group
The Remote Copy relationship can be stopped by highlighting the consistency group and
clicking Stop from the Actions drop-down menu, as shown in Figure 10-137, or by
right-clicking the consistency group.

Figure 10-137 Stopping a consistency group

You will be asked if you want to allow read/write access to secondary volumes as shown in
Figure 10-138. Make your choice and then click Stop Consistency Group.

Figure 10-138 Allowing read/write access when stopping a consistency group



The Consistency Group will now be in the Consistent Stopped state if you chose not to allow read/write access, or in the Idling state if you chose to allow access, as shown in Figure 10-139.

Note: The CLI differentiates between stopping consistency groups with or without access by using the -access flag, for example, stoprcconsistgrp -access 0.

Starting a consistency group


The consistency group can be started by clicking Start in the Actions drop-down menu, as shown in Figure 10-139, or by right-clicking the consistency group.

Figure 10-139 Starting the consistency group

You are prompted to select whether the Master or Auxiliary volumes are to act as the primary volumes before the consistency group is started, as shown in Figure 10-140. The consistency group then starts copying data from the primary volumes to the secondary volumes. If the consistency group was stopped without access and is in the Consistent Stopped state, you are not prompted to confirm the direction in which to start the group. It starts by default by using the original primary volumes from when it was stopped.

Important: If you are starting a Consistency Group where access was previously allowed
to auxiliary volumes when the group was stopped, ensure you choose the correct volumes
to act as the Primary volumes when you re-start. Failing to do so could lead to loss of data.



Figure 10-140 Confirming replication direction
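The CLI equivalent follows; ITSO_test is our example group name. The -primary parameter corresponds to the direction prompt, and the -force flag might be required when you restart a group that was stopped with access enabled (check the command reference for your code level):

   startrcconsistgrp -primary master ITSO_test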

Switching a consistency group


As with the switch action on the Remote Copy relationship, you can switch the copy direction
of the consistency group. To switch the copy direction of the consistency group, click Switch
from the Actions drop-down menu, as shown in Figure 10-141.

Figure 10-141 Switch the copy direction of a consistency group



A warning message opens, as shown in Figure 10-142. After the switch, the primary volumes in the consistency group change. Write access to the current master volumes is lost, while write access to the current auxiliary volumes is enabled. This change affects host access, so make sure that these settings are what you need, and if so, click OK to continue.

Figure 10-142 Warning message to confirm the switch

This option is used for disaster recovery and disaster recovery testing. Ensure that host access to the primary volumes is stopped before switching. You can then mount hosts on the auxiliary volumes and continue production from a DR site. As with individual Remote Copy relationships, all the relationships that are switched as part of the consistency group now show an icon indicating that their roles have been swapped. This is shown in Figure 10-143.

Figure 10-143 Consistency group after a switch roles has taken place
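From the CLI, the whole group is switched with one command; ITSO_test is our example group name:

   switchrcconsistgrp -primary aux ITSO_test    # the auxiliary volumes become the primaries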



Removing Remote Copy relationships from a consistency group
Remote Copy relationships can be removed from a consistency group by selecting the
Remote Copy relationship and clicking Remove from Consistency Group from the Actions
drop-down menu, as shown in Figure 10-144. Alternatively, right-click the relationship to be
removed.

Figure 10-144 Remove Remote Copy relationships from a consistency group

A warning appears, as shown in Figure 10-145. Make sure the Remote Copy relationships
that are shown in the field are the ones that you want to remove from the consistency group.
Click Remove to proceed.

Figure 10-145 Confirm the relationships to remove from the Remote Copy consistency group



After the removal process completes, the Remote Copy relationships are removed from the consistency group and displayed in the Not in a Group list, as shown in Figure 10-146.

Deleting a consistency group


The consistency group can be deleted by clicking it and selecting Delete from the Actions
drop-down menu, as shown in Figure 10-146. Alternatively, right-click the group to be deleted.

Figure 10-146 Delete a consistency group

You must confirm the deletion of the consistency group, as shown in Figure 10-147. Click Yes
if you are sure that this consistency group should be deleted.

Figure 10-147 Warning to confirm deletion of the consistency group



The consistency group will be deleted. Any relationships that were part of it are returned to
the Not in a Group list as shown in Figure 10-148.

Figure 10-148 Remote Copy relationships returned to Not in a Group
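The corresponding CLI sketch uses the placeholder names from the earlier examples:

   chrcrelationship -noconsistgrp REL2    # remove one relationship from its group
   rmrcconsistgrp ITSO_test               # delete the consistency group

If the group still contains relationships, rmrcconsistgrp requires the -force flag, and the member relationships are returned to stand-alone status.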




Chapter 11. External storage virtualization


In this chapter, we describe how to incorporate external storage systems into the virtualized world of the IBM Storwize V5000. A key feature of the IBM Storwize V5000 is its ability to consolidate disk controllers from various vendors into pools of storage. In this way, the storage administrator can manage and provision storage to applications from a single user interface and use a common set of advanced functions across all the storage systems under the control of the IBM Storwize V5000.

A distinction must be made between virtualizing external storage and importing existing data into the Storwize V5000. When we talk of virtualizing external storage, we mean creating new logical units with no data on them and adding these to storage pools under Storwize V5000 control. In this way, the external storage can benefit from the Storwize V5000 features, such as Easy Tier and Copy Services. When existing data must be put under the control of the Storwize V5000, it must first be imported as an image mode volume. It is advised that the data then be copied onto storage, whether internal or external, that is already under the control of the Storwize V5000 and not left as an image mode volume, again so that it can benefit from the Storwize V5000 features.

This chapter includes the following topics:

- Planning for external storage virtualization
- Working with external storage



11.1 Planning for external storage virtualization
In this section, we describe how to plan for virtualizing external storage with the IBM Storwize V5000. Virtualizing the storage infrastructure with IBM Storwize V5000 makes your storage environment more flexible, cost-effective, and easy to manage. The combination of IBM Storwize V5000 and an external storage system allows more storage capacity to benefit from the powerful software functions within the IBM Storwize V5000.

The external storage systems that are incorporated into the IBM Storwize V5000 environment
can be new systems or existing systems. Any data on the existing storage systems can be
easily migrated to the IBM Storwize V5000 managed environment, as described in Chapter 6,
“Storage migration” on page 249.

11.1.1 License for external storage virtualization


From a licensing standpoint, when external storage systems are virtualized by IBM Storwize
V5000, a per-enclosure External Virtualization license is required.

Migration: If the IBM Storwize V5000 is used as a general migration tool, the appropriate External Virtualization licenses must be ordered. The only exception is if you want to migrate existing data from external storage systems to IBM Storwize V5000 internal storage and then remove the external storage. You can temporarily configure your External Virtualization license for a 45-day period. For a migration that takes longer than 45 days, an appropriate External Virtualization license must be ordered.

You can configure the IBM Storwize V5000 licenses by clicking the Settings icon and then System. In the License pane, there are four license options that you can set: FlashCopy, Remote Copy, Easy Tier, and External Virtualization. Set these license options to the limits you obtained from IBM. For more information about setting licenses on the IBM Storwize V5000, see Chapter 2, “Initial configuration” on page 25.

For assistance with licensing questions or to purchase any of these licenses, contact your
IBM account team or IBM Business Partner.

11.1.2 SAN configuration planning


External storage controllers that are virtualized by IBM Storwize V5000 must be connected
through SAN switches. A direct connection between the IBM Storwize V5000 and external
storage controllers is not supported.

Make sure that the switches or directors are at the firmware levels that are supported by the
IBM Storwize V5000 and that the IBM Storwize V5000 port login maximums that are listed in
the restriction document are not exceeded. The configuration restrictions can be found on the
Support home page, which is available at this website:
https://www-947.ibm.com/support/entry/myportal/product/system_storage/disk_systems
/mid-range_disk_systems/ibm_storwize_v5000?productContext=-2033461677

The advised SAN configuration is composed of a minimum of two fabrics. The ports on
external storage systems and the IBM Storwize V5000 ports should be evenly split between
the two fabrics to provide redundancy if one of the fabrics goes offline.



After the IBM Storwize V5000 and external storage systems are connected to the SAN fabrics, zoning on the switches must be configured. In each fabric, create a zone with the four IBM Storwize V5000 worldwide port names (WWPNs), two from each node canister, together with up to a maximum of eight WWPNs from each external storage system.

Ports: IBM Storwize V5000 supports a maximum of 16 ports or WWPNs from an externally virtualized storage system.

Figure 11-1 shows an example of how to cable devices to the SAN. Refer to this example as
we describe the zoning. For the purposes of this example we have used an IBM Storwize
V3700 as our external storage.

Figure 11-1 SAN cabling and zoning example diagram

Create an IBM Storwize V5000/external storage zone for each storage system to be virtualized, as shown in the following examples:
- Zone external Storwize V3700 canister 1 port 2 with Storwize V5000 canister 1 port 2 and canister 2 port 2 in the BLUE fabric
- Zone external Storwize V3700 canister 2 port 2 with Storwize V5000 canister 1 port 4 and canister 2 port 4 in the BLUE fabric
- Zone external Storwize V3700 canister 1 port 3 with Storwize V5000 canister 1 port 1 and canister 2 port 1 in the RED fabric
- Zone external Storwize V3700 canister 2 port 1 with Storwize V5000 canister 1 port 3 and canister 2 port 3 in the RED fabric



11.1.3 External storage configuration planning
Logical units that are created on the external storage system must provide redundancy through the various RAID levels, which prevents a single physical disk failure from causing an MDisk, storage pool, or associated host volume to go offline. To minimize the risk of data loss, virtualize storage systems only where logical unit numbers (LUNs) are configured by using a RAID level other than RAID 0 (for example, RAID 1, RAID 10, RAID 0+1, RAID 5, or RAID 6).

Verify that the storage controllers to be virtualized by IBM Storwize V5000 meet the
configuration restrictions, which can be found on the Support home page, at this website:
https://www-947.ibm.com/support/entry/myportal/product/system_storage/disk_systems
/mid-range_disk_systems/ibm_storwize_v5000?productContext=-2033461677

Make sure that the firmware or microcode levels of the storage controllers to be virtualized
are supported by IBM Storwize V5000. See the SSIC website for more details:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss

IBM Storwize V5000 must have exclusive access to the LUNs from the external storage
system that are presented to it. LUNs cannot be shared between IBM Storwize V5000 and
other storage virtualization platforms or between an IBM Storwize V5000 and hosts. However,
different LUNs can be mapped from the same external storage system to an IBM Storwize
V5000 and other hosts in the SAN through different storage ports.

Make sure that the external storage subsystem LUN masking is configured to map all LUNs
to all the WWPNs in the IBM Storwize V5000 storage system.

Ensure that you check the IBM Storwize V5000 Knowledge Center and review the
“Configuring and servicing external storage system” topic before you prepare the external
storage systems for discovery from the IBM Storwize V5000 system. This Knowledge Center
can be found at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740
.doc/v5000_ichome_740.html

11.1.4 Guidelines for virtualizing external storage


When external storage is virtualized by using the IBM Storwize V5000, the following guidelines must be followed:

- Avoid splitting arrays into multiple LUNs at the external storage system level. When possible, create a single LUN per array for mapping to the IBM Storwize V5000.
- Use between six and eight disks per RAID group when creating the external LUNs. With more disks, the failure of a single disk prolongs rebuild times, which affects the performance of the LUN and exposes it to complete failure if a second disk fails. Additionally, the smaller the number of disks, the more likely it is that write operations span an entire stripe (stripe size, multiplied by the number of members, minus one). In this case, write performance is improved.
- Except for Easy Tier, do not mix MDisks that vary in performance or reliability in the same storage pool. Put only MDisks of a similar size and performance into the same storage pool. Likewise, group MDisks from different arrays into different pools. For more information about Easy Tier, see Chapter 9, “Easy Tier” on page 419.
- Do not leave volumes in image mode. Use image mode only to import or export existing data into or out of the IBM Storwize V5000. Migrate such data from image mode volumes and their associated MDisks to other storage pools to benefit from storage virtualization and the enhanced benefits of the Storwize V5000, such as Easy Tier.



11.2 Working with external storage
In this section, we describe how to manage external storage using an IBM Storwize V5000.

The basic concepts of managing external storage system are the same as internal storage.
IBM Storwize V5000 discovers LUNs from the external storage system as one or more
MDisks. These MDisks are added to a storage pool in which volumes are created and
mapped to hosts as needed.

11.2.1 Adding external storage


To add new external storage systems to the IBM Storwize V5000 virtualized environment, complete the following steps (a CLI sketch of steps 6 - 9 follows this procedure):
1. Zone a minimum of two and a maximum of 16 Fibre Channel ports from the external storage system with all eight Fibre Channel ports on the IBM Storwize V5000 system. See 11.1.2, “SAN configuration planning” on page 580 for more details on zoning. Because the IBM Storwize V5000 is virtualizing your storage, hosts should be zoned with the V5000 controller WWPNs.
2. By using the storage partitioning or LUN masking feature of the external storage system, create a group that includes all eight IBM Storwize V5000 WWPNs.
3. Create equal-size arrays on the external system by using any RAID level except RAID 0.
4. Create a single LUN per RAID array.
5. Map the LUNs to all eight Fibre Channel ports on the IBM Storwize V5000 system by assigning them to the group that was created in step 2.
6. Verify that the IBM Storwize V5000 discovered the LUNs as unmanaged MDisks. To get to the external storage pool, select External Storage from the Pools menu, as shown in Figure 11-2.

Figure 11-2 Selecting external storage pool



If they do not show up automatically, click Detect MDisk from the MDisk window of the GUI,
as shown in Figure 11-3. You should see the MDisks mapped to the IBM Storwize V5000
under the respective Storage system.

Figure 11-3 Detecting MDisks

Note: The Detect MDisk option is the only way to manually update the Storwize V5000
configuration when either adding or deleting MDisks.

7. Select the storage tier for the MDisks. The MDisks are unassigned at this point and need to be assigned to the correct storage tiers for Easy Tier management. For more information about storage tiers, see Chapter 9, “Easy Tier” on page 419.



Assigning Storage Tiers to MDisks is shown in Figure 11-4.

Figure 11-4 Select tier option

Select the MDisks to be assigned and either use the Actions drop-down or right-click and
Select Tier. Ensure that the correct tier is chosen, as shown in Figure 11-5.

Figure 11-5 Choosing a storage tier



8. After the tier is assigned, add the MDisks to an existing pool or create a new one.
Figure 11-6 shows how to add selected MDisks to an existing storage pool.

Figure 11-6 Adding MDisks to a pool

If the storage pool does not yet exist, follow the procedure outlined in Chapter 7, “Storage
pools” on page 309.
9. Add the MDisks to the pool. Select the pool that the MDisks are to belong to and click Add to Pool, as shown in Figure 11-7. After the task completes, click Close; the selected MDisks are now assigned to the storage pool.

Important: If the external storage volumes that are to be virtualized behind the Storwize V5000 already have data on them and this data needs to be retained, DO NOT use the Assign to Pool option to manage the MDisks. Doing so destroys the data on the disks. Instead, use the Import option. See 11.2.2, “Importing Image Mode volumes” on page 587.



Figure 11-7 Selecting storage pool

10. Finally, create volumes from the storage pool and map them to hosts, as needed. See Chapter 5, “Volume configuration” on page 201 to learn how to create and map volumes to hosts.
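The discovery, tiering, and pooling steps (steps 6 - 9) can also be performed from the CLI. This sketch assumes a newly discovered MDisk named mdisk5 and an existing pool named ExternalPool; the tier names that chmdisk accepts depend on the code level:

   detectmdisk                             # rescan the fabric for new MDisks
   lsmdisk -filtervalue mode=unmanaged     # list the unmanaged MDisks
   chmdisk -tier enterprise mdisk5         # assign the storage tier
   addmdisk -mdisk mdisk5 ExternalPool     # add the MDisk to a pool (destroys any existing data)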

11.2.2 Importing Image Mode volumes


If the external storage systems are not new systems and there is existing data on the LUNs
that must be kept after virtualization, then the existing LUNs must be mapped to the Storwize
V5000 and imported as image mode volumes. You can then migrate the existing data to IBM
Storwize V5000 internal storage or to some other external storage. The process of importing
existing data on external volumes is simplified by using the Storage Migration Wizard, as
described in Chapter 6, “Storage migration” on page 249.

To manually import volumes as image mode volumes, they must not have been assigned to a
storage pool and must be unmanaged MDisks. Hosts that access data from these external
storage system LUNs can continue to do so, but must be re-zoned and mapped to the
Storwize V5000 to use them after they are presented through the IBM Storwize V5000. If
the Import option is used and no existing storage pool is chosen, a temporary migration pool
is created to hold the new image mode volume.

An image mode volume is a direct block-for-block translation of the imported MDisk, and
therefore of the external LUN, so existing data is preserved. In this state, the IBM Storwize
V5000 acts as a proxy and the image mode volume is simply a “pointer” to the existing
external LUN. Because of the way Storwize V5000 virtualization works, the external LUN is
presented as an MDisk, but an MDisk cannot be mapped directly to a host. Therefore, the
image mode volume must be created so that hosts can be mapped through the Storwize
V5000.
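
Under the covers, the Import wizard creates this image mode volume. A minimal CLI sketch
of the equivalent manual step follows; the pool, MDisk, and volume names are illustrative
assumptions:

mkvdisk -mdiskgrp MigrationPool_1024 -iogrp 0 -vtype image -mdisk mdisk7 -name imported_vol01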



If an existing storage pool is chosen then the Storwize V5000 will actually carry out a
migration task. The external LUN will be imported into a temporary migration pool and a
migration task will run in the background to copy data to MDisks already in the target storage
pool. At the end of the migration the external LUN and its associated MDisk will still be in the
temporary pool and show as managed, but can be removed from the Storwize V5000.

Figure 11-8 shows an example of how to import an unmanaged MDisk. Select the
unmanaged MDisk and click Import from the Actions drop-down menu. Multiple MDisks can
be selected using the Ctrl key.

Figure 11-8 Import MDisk option



This will start the Import Wizard as shown in Figure 11-9.

Figure 11-9 Starting the disk import wizard

Clear the Enable Caching option if you use copy services (mirroring functionality) on the
external storage system that is currently hosting the LUN. After the volumes are properly
under Storwize V5000 control, it is a preferred practice to use the copy services of IBM
Storwize V5000 for virtualized volumes. Click Next to continue. The wizard will direct you to
select the storage pool you want to import the volume to, as shown in Figure 11-10.



Figure 11-10 Selecting storage pool in the import wizard

You have a choice between an existing storage pool (target pool) and a temporary one,
which the Storwize V5000 creates and names for you. Selecting a target pool allows you to
choose the existing pool, as shown in Figure 11-11, and clicking Finish creates the volume
in the chosen pool. Only pools with sufficient capacity are shown.

Figure 11-11 Selecting a target pool



Note: The reason only pools with sufficient capacity are shown is that importing an MDisk
to an existing storage pool will actually carry out a storage migration and that can only be
achieved if the target pool has sufficient capacity to create a copy of the data on its own
MDisks. The external MDisk will be imported as an image mode volume into a temporary
migration pool and a volume migration will take place in the background to create the
volume in the target pool.

A migration task will start and complete as shown in Figure 11-12. The actual data migration
begins after the MDisk is imported successfully.

Figure 11-12 Migration task starting



The status of the migration task can be checked under the System Migration option found in
the Pools menu as shown in Figure 11-13.

Figure 11-13 Checking migration status
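
The same progress information is available from the CLI with the lsmigrate command, which
lists the percentage complete of each running migration and takes no parameters in its
simplest form:

lsmigrate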

When complete, the migration status will disappear and the volume will be in the target pool
as shown in Figure 11-14.

Figure 11-14 Volume assigned to pool



The image mode volume is named automatically by the Storwize V5000. At any point
after the wizard has completed, you can rename the volume to something more relevant:
simply highlight the volume, right-click, and select Rename. When the migration completes,
the image mode copy is deleted automatically, but the external LUN still exists as a managed
MDisk in the temporary storage pool. It can be unassigned from the pool and is then listed as
an unassigned MDisk. Ultimately, the external LUN can be retired and removed completely
from the Storwize V5000 by unmapping the volume at the external storage and then clicking
Detect MDisks on the Storwize V5000. See 11.2.4, “Removing external storage” on
page 600 for more information about removing external storage.

If you choose a temporary pool, you must first select the extent size for the pool before
clicking Finish to import the MDisk, as shown in Figure 11-15. The default extent size is
1 GB. If you plan to migrate this volume to another pool at some stage in the future, ensure
that the extent size matches that of the prospective target pool. For more information about
extent sizes, see Chapter 7, “Storage pools” on page 309.

Figure 11-15 Importing to a temporary pool
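
If you are unsure of the extent size of an existing pool, it can be checked from the CLI; the
lsmdiskgrp output includes an extent_size column (reported in MB):

lsmdiskgrp -delim :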



Clicking Finish will start the task running, as shown in Figure 11-16.

Figure 11-16 Importing MDisk to a temporary pool - task complete.

Click Close. The external LUN will now appear as an MDisk with an associated image mode
volume name and will be listed as such. This is shown in Figure 11-17.

Figure 11-17 MDisk as an image mode volume



The volume will also be listed in the System Migration window. This is because the Storwize
V5000 expects you to migrate from this volume at some point. This is shown in Figure 11-18.

Figure 11-18 Image mode volume ready for migration

It can, however, be mapped to a host at this point. You can also choose to rename it, as
shown in Figure 11-19.

Figure 11-19 Renaming an image mode volume



11.2.3 Managing external storage
The IBM Storwize V5000 provides a dedicated window for managing external storage
systems.

You can access this window by opening the External Storage option from the Pools menu,
as shown in Figure 11-2 on page 583. Extended help information for external storage is
available via the help icon, as shown in Figure 11-20.

Figure 11-20 Extended help for external storage

The External window, as shown in Figure 11-21, gives you an overview of all your external
storage systems, with a list of the external storage systems managed by the IBM Storwize
V5000. With the help of the filter, you can show only the external storage systems you want
to work with. Clicking the plus sign preceding each of the external storage controllers
provides more detailed information, including all of the MDisks that are mapped from it.



Figure 11-21 External Storage window

You can change the name of any external storage system by highlighting the controller,
right-clicking, and selecting Rename.

Alternatively, use the Actions drop-down list, where you can also find the Show Dependent
Volumes and Detect MDisks options, as shown in Figure 11-22.
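
Renaming a controller can also be done from the CLI with chcontroller; the new name and
controller ID below are illustrative assumptions:

chcontroller -name DS3500_1 controller0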

Figure 11-22 Show Dependent Volumes and Rename option in the Actions drop-down menu



Clicking the Show Dependent Volumes option shows the logical volumes that are
dependent on this external storage system, as shown in Figure 11-23.

Figure 11-23 Volumes dependent on external storage.

From the dependent volumes window, you can perform volume actions, including Map to
Host, Shrink, Expand, Migrate to Another Pool, and Volume Copy, as shown in Figure 11-24.

Figure 11-24 Volume actions



In the IBM Storwize V5000 virtualization environment, you can migrate your application data
nondisruptively from one internal or external storage pool to another, making storage
management much simpler with less risk.

Volume copy is another key feature of IBM Storwize V5000 virtualization that you can benefit
from. Two copies can be created to enhance availability for a critical application. A volume
copy can also be used to generate test data or for data migration.

For more information about the volume actions of the IBM Storwize V5000 storage system,
see Chapter 8, “Advanced host and volume administration” on page 359.

Returning to the External window and the MDisks that are provided by this external storage
system, you can highlight the name of an MDisk and right-click (or use the Actions
drop-down menu) to see a further menu of available options, as shown in Figure 11-25.

From here you can view the properties of an MDisk: its capacity, the storage pool, and the
storage system it belongs to. Several actions can also be performed on MDisks through this
menu, including Detect MDisks, Assign to Pool, Import, Unassign from Pool, Rename, and
Show Dependent Volumes.

Figure 11-25 MDisk menu in the External window



11.2.4 Removing external storage
If you want to remove the external storage systems from the IBM Storwize V5000 virtualized
environment, you have the following options:

• If you want to remove the external storage systems and discard the data on them,
complete the following steps:
a. Stop any host I/O on the volumes.
b. Remove the volumes from the host file systems, logical volume, or volume group, and
remove the volumes from the host device inventory.
c. Remove the host mapping of volumes and the volumes themselves on the IBM
Storwize V5000.
d. Remove the storage pools to which the external storage systems belong, or keep the
storage pool and remove the MDisks of the external storage from the storage pools.
e. Unzone and disconnect the external storage systems from the IBM Storwize V5000.
f. Click Detect MDisks to make the IBM Storwize V5000 discover the removal of the
external storage systems.

• If you want to remove the external storage systems and keep the volumes and their data
on the IBM Storwize V5000, complete the following steps:
a. Migrate volumes and their data to other internal or external storage pools that are on
the IBM Storwize V5000.
b. Remove the storage pools to which the external storage systems belong, or keep the
storage pools and remove the MDisks of the external storage from the storage pools.
c. Unzone and disconnect the external storage systems from the IBM Storwize V5000.
d. Click Detect MDisks to make the IBM Storwize V5000 discover the removal of the
external storage systems.

• If you want to remove the external storage systems from IBM Storwize V5000 control and
keep the volumes and their data on some other external storage systems, complete the
following steps:
a. Migrate volumes and their data to other internal or external storage pools on the IBM
Storwize V5000, as described in Chapter 6, “Storage migration” on page 249.
b. Remove the storage pools to which the original external storage systems belong, or
keep the storage pools and remove the MDisks of that external storage from the
storage pools.
c. Export the volumes that were migrated in step a to image mode with the new MDisks
on the target external storage systems. For more information about the restrictions and
prerequisites for migration, see Chapter 6, “Storage migration” on page 249.
You must record pre-migration information; for example, the original SCSI IDs the
volumes used when mapped to hosts. Some operating systems do not support
changing the SCSI ID during migration.
d. Unzone and disconnect the external storage systems from the IBM Storwize V5000.
e. Click Detect MDisks to make the IBM Storwize V5000 discover the removal of the
external storage systems.
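
Where these options call for migrating volumes and then removing MDisks, the equivalent
CLI steps are sketched below. The volume, pool, and MDisk names are illustrative
assumptions; the -force flag on rmmdisk migrates any remaining extents off the MDisk
before removing it from the pool:

migratevdisk -vdisk vol01 -mdiskgrp InternalPool1
rmmdisk -mdisk mdisk5 -force ExternalPool1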



Chapter 12. RAS, monitoring, and troubleshooting
This chapter describes the Reliability, Availability, and Serviceability (RAS) features and ways
to monitor and troubleshoot the IBM Storwize V5000.

This chapter includes the following topics:

• Reliability, Availability, and Serviceability on the IBM Storwize V5000
• IBM Storwize V5000 components
• Configuration backup procedure
• Updating software
• Event log
• Collecting support information
• Powering on and powering off the IBM Storwize V5000
• Tivoli Storage Productivity Center
• Using Tivoli Storage Productivity Center to administer and generate reports for an
IBM Storwize V5000



12.1 Reliability, Availability, and Serviceability on the IBM Storwize V5000
This section describes the Reliability, Availability, and Serviceability (RAS) features of the IBM
Storwize V5000 as well as monitoring and troubleshooting. RAS features are important
concepts in the design of the IBM Storwize V5000. Hardware and software features, design
considerations, and operational guidelines all contribute to make the IBM Storwize V5000
reliable.

Fault tolerance and a high level of availability are achieved by the following features:
• The RAID capabilities of the underlying disk subsystems
• The software architecture that is used by the IBM Storwize V5000 nodes
• Auto-restart of nodes that are hung
• Battery units to provide cache memory protection in the event of a site power failure
• Host system multipathing and failover support

High levels of serviceability are achieved by providing the following benefits:

• Cluster error logging
• Asynchronous error notification
• Dump capabilities to capture software-detected failures
• Concurrent diagnostic procedures
• Directed maintenance procedures
• Concurrent log analysis and memory dump data recovery tools
• Concurrent maintenance of all IBM Storwize V5000 components
• Concurrent upgrade of IBM Storwize V5000 software and microcode
• Concurrent addition or deletion of a node canister in a cluster
• Software recovery through the Service Assistant Tool
• Automatic software version correction when a node is replaced
• Detailed status and error conditions that are displayed via the Service Assistant Tool
• Error and event notification through Simple Network Management Protocol (SNMP),
syslog, and email
• Node canister support package gathering via USB, in case of network connection
problems

At the heart of the IBM Storwize V5000 is a redundant pair of node canisters. The two
canisters share the data transmitting and receiving load between the attached hosts and the
disk arrays.



12.2 IBM Storwize V5000 components
This section describes each of the components that make up the IBM Storwize V5000
system. Components are described in terms of location, function, and serviceability.

12.2.1 Enclosure midplane assembly


The enclosure midplane assembly is the unit that contains the node or expansion canisters
and the power supply units. The enclosure midplane assembly is initially generic and is
configured as either a control enclosure midplane or an expansion enclosure midplane.
During the basic system configuration, Vital Product Data (VPD) is written to the enclosure
midplane assembly, which determines whether the unit is a control enclosure midplane or an
expansion enclosure midplane.

Control enclosure midplane


The control enclosure midplane holds node canisters and the power supply units. The control
enclosure midplane assembly has specific VPD, such as WWNN 1, WWNN 2, machine type
and model, machine part number, and serial number. The control enclosure midplane must
be replaced only by a trained service provider. After a generic enclosure midplane assembly
is configured as a control enclosure midplane, it is no longer interchangeable with an
expansion enclosure midplane.

Expansion enclosure midplane


The expansion enclosure midplane holds expansion canisters and the power supply units.
The expansion enclosure midplane assembly also has specific VPD, such as machine type
and model, machine part number, and serial number. After a generic enclosure midplane
assembly is configured as an expansion enclosure midplane, it is no longer interchangeable
with a control enclosure midplane. The expansion enclosure midplane must be replaced only
by a trained service provider.

Figure 12-1 shows the back of the Enclosure Midplane Assembly.

Figure 12-1 Rear view of Enclosure Midplane Assembly

For more information about replacing the control or expansion enclosure midplane, see the
IBM Storwize V5000 Knowledge Center at this website:

http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html



12.2.2 Node canisters: Ports and LED
There are two node canister slots along the top of the unit. The left slot is canister 1 and the
right slot is canister 2.

Figure 12-2 shows the back of a fully equipped node enclosure.

Figure 12-2 Node canister

USB ports
There are two USB connectors side-by-side; they are numbered 1 on the left and 2 on the
right. There are no indicators associated with the USB ports. Figure 12-3 shows the USB
ports.

Figure 12-3 Node Canister USB ports

Ethernet ports
There are two 10/100/1000 Mbps Ethernet ports side-by-side on the canister; they are
numbered 1 on the left and 2 on the right. Port 1 is required and port 2 is optional. The ports
are shown in Figure 12-4.

Figure 12-4 Node canister Ethernet ports



Each port has two LEDs and their status is shown in Table 12-1.

Table 12-1 Ethernet LED status

LED          Color    Meaning
Link state   Green    On: There is an Ethernet link.
Activity     Yellow   Flashing: There is activity on the link.

SAS ports
There are four 6-Gbps Serial Attached SCSI (SAS) ports side-by-side on the canister,
numbered 1 on the left to 4 on the right. The IBM Storwize V5000 uses ports 1 and 2 for host
connectivity and ports 3 and 4 to connect optional expansion enclosures. The ports are
shown in Figure 12-5.

Figure 12-5 Node canister SAS ports

The SAS LED status meanings are described in Table 12-2.

Table 12-2 SAS LED status

Green: Indicates that at least one of the SAS lanes on this connector is operational. If the
light is off when the cable is connected, there is a problem with the connection.

Amber: If the light is on, one of the following errors occurred:
• One or more (but not all) of the four lanes are up for this connector (if none of the lanes
are up, the activity light is off)
• One or more of the up lanes are running at a different speed to the others
• One or more of the up lanes are attached to a different address to the others



The IBM Storwize V5000 uses SFF-8644 mini-SAS HD cables to connect enclosures, as
shown in Figure 12-6.

Figure 12-6 Mini-SAS HD SFF 8644 connector

Battery status
Each node canister houses a battery, the status of which is displayed on three LEDs on the
back of the unit, as shown in Figure 12-7.

Figure 12-7 Node canister battery status

The battery indicator status meanings are described in Table 12-3.

Table 12-3 Battery indicators on the node canister

Green (left) - Battery Status:
• Fast flash: Indicates that the battery is charging and has insufficient charge to complete a
single memory dump.
• Flashing: Indicates that the battery has sufficient charge to complete a single memory
dump only.
• Solid: Indicates that the battery is fully charged and has sufficient charge to complete two
memory dumps.

Amber (mid) - Fault: Indicates a fault with the battery.

Green (right) - Battery in use: Indicates that hardened or critical data is writing to disk.



Canister status
The status of each canister is displayed by three LEDs, as shown in Figure 12-8.

Figure 12-8 System status indicators

The system status LED meanings are described in Table 12-4.

Table 12-4 System status indicators

Green (left) - System Power:
• Flashing: The canister is in standby mode, in which case IBM Storwize V5000 code is not
running.
• Fast flashing: The canister is running a self test.
• On: The canister is powered up and the IBM Storwize V5000 code is running.

Green (mid) - System Status:
• Off: There is no power to the canister, the canister is in standby mode, Power On Self Test
(POST) is running on the canister, or the operating system is loading.
• Flashing: The node is in candidate or service state; it cannot perform I/O. It is safe to
remove the node.
• Fast flash: A code upgrade is running.
• On: The node is part of a cluster.

Amber (right) - Fault:
• Off: The node is in candidate or active state. This state does not mean that there is no
hardware error on the node; any error that is detected is not severe enough to stop the
node from participating in a cluster (or there is no power).
• Flashing: Identifies the canister.
• On: The node is in service state, or there is an error that is stopping the software from
starting.



12.2.3 Node canister replaceable hardware components
The IBM Storwize V5000 node canister contains the following customer-replaceable
components:
• Host interface card
• Memory
• Battery

Figure 12-9 shows the location of these parts within the node canister.

Figure 12-9 Node canister customer replaceable parts

Host interface card replacement


For more information about the replacement process, see the IBM Storwize V5000
Knowledge Center at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html

At the website, browse to Troubleshooting → Replacing parts → Replacing host
interface card.



The host interface card replacement is shown in Figure 12-10.

Figure 12-10 Host Interface card replacement

Memory replacement
For more information about the memory replacement process, see the IBM Storwize V5000
Knowledge Center at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html

At the website, browse to Troubleshooting → Replacing parts → Replacing the node
canister memory (2x 4 GB DIMM).



Figure 12-11 shows the memory location.

Figure 12-11 Memory replacement

Battery Backup Unit replacement

Caution: The battery is a lithium ion battery. To avoid possible explosion, do not incinerate
the battery. Exchange the battery only with the part that is approved by IBM.

Because the Battery Backup Unit (BBU) is located in the node canister, the BBU replacement
leads to a redundancy loss until the replacement is complete. Therefore, replace the BBU
only when instructed to do so, and follow the Directed Maintenance Procedures (DMP).

For more information about how to replace the BBU, see the Knowledge Center at this
website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html

At the website, browse to Troubleshooting → Replacing parts → Replacing battery in
a node canister.



Complete the following steps to replace the BBU:
1. Grasp the blue touch points on each end of the battery, as shown in Figure 12-12.

Figure 12-12 BBU replacement: Step 1

2. Lift the battery vertically upwards until the connectors disconnect.

Important: During a BBU change, the battery must be kept parallel to the canister
system board while it is removed or replaced, as shown in Figure 12-13. Keep equal
force, or pressure, on each end.

Figure 12-13 BBU replacement: Step 2



12.2.4 Expansion canister: Ports and LED
There are two expansion canister slots along the top of the unit as shown in Figure 12-14.

Figure 12-14 Rear of an expansion enclosure

SAS ports
SAS ports are used to connect the expansion canister to the node canister or to an extra
expansion canister in the chain. Figure 12-15 shows the SAS ports that are on the expansion
canister.

Figure 12-15 Expansion canister SAS ports

The meaning of the SAS port LEDs is described in Table 12-5.

Table 12-5 SAS LED status meaning

Green: Indicates that at least one of the SAS lanes on these connectors is operational. If the
light is off when the cable is connected, there is a problem with the connection.

Amber: If the light is on, one of the following errors occurred:
• One or more (but not all) of the four lanes are up for this connector (if none of the lanes
are up, the activity light is off).
• One or more of the up lanes are running at a different speed to the others.
• One or more of the up lanes are attached to a different address to the others.

Canister status
Each expansion canister has its status displayed by three LEDs, as shown in Figure 12-16.

Figure 12-16 Enclosure canister status



The LED status is described in Table 12-6.

Table 12-6 Expansion canister status

Green (left) - Power: Indicates that the canister is receiving power.

Green (mid) - Status: If the light is on, the canister is running normally. If the light is
flashing, there is an error communicating with the enclosure.

Amber (right) - Fault: If the light is solid, there is an error logged against the canister or the
firmware is not running.

12.2.5 Disk subsystem


The IBM Storwize V5000 disk subsystem is made up of control and expansion enclosures.
The system can have one or two control enclosures. As of IBM Storwize code version 7.4,
each control enclosure can attach up to 19 expansion enclosures.

This section describes the parts of the disk subsystem.

SAS cabling
Expansion enclosures are attached to control enclosures by SAS cables. There are two SAS
chains. Up to ten expansion enclosures can be attached to SAS chain 1 and up to nine
expansion enclosures can be attached to SAS chain 2. The node canister uses SAS ports 3
and 4 for expansion enclosures and ports 1 and 2 for host connectivity.

Important: When a SAS cable is inserted, ensure that the connector is oriented correctly
by confirming that the following conditions are met:
• The pull tab must be below the connector.
• Insert the connector gently until it clicks into place. If you feel resistance, the connector
is probably oriented the wrong way. Do not force it.
• When inserted correctly, the connector can be removed only by pulling the tab.

The expansion canister has SAS port 1 for the channel input and SAS port 2 for the output,
which connects to further expansion enclosures.



The SAS cabling is shown in Figure 12-17.

Figure 12-17 SAS cabling for single I/O group

A strand starts with an SAS initiator chip inside an IBM Storwize V5000 node canister and
progresses through SAS expanders, which connect to the disk drives. Each canister contains
an expander. Each drive has two ports, each of which is connected to a different expander
and strand. This configuration means both nodes directly access each drive and there is no
single point of failure.

At system initialization when devices are added to or removed from strands (and at other
times), the IBM Storwize V5000 software performs a discovery process to update the state of
the drive and enclosure objects.

Slot numbers in enclosures


The IBM Storwize V5000 is made up of enclosures. There are four types of enclosures, as
described in Table 12-7.

Table 12-7 Enclosure slot numbering

Enclosures with 12x 3.5-inch drives (12 slots):
• Control enclosure 2077/2078-12C
• Expansion enclosure 2077/2078-12E

Enclosures with 24x 2.5-inch drives (24 slots):
• Control enclosure 2077/2078-24C
• Expansion enclosure 2077/2078-24E

Array goal
Each array has a set of goals that describe the location and performance of each array
member. A sequence of drive failures and hot spare takeovers can leave an array
unbalanced; that is, with members that do not match these goals. The system automatically
rebalances such arrays when appropriate drives are available.

RAID level
An IBM Storwize V5000 disk array supports RAID 0, RAID 1, RAID 5, RAID 6, or RAID 10.
Each RAID level is described in Table 12-8.

Table 12-8 RAID levels that are supported by an IBM Storwize V5000 array

• RAID 0 (1 - 8 drives): Arrays have no redundancy and do not support hot-spare takeover.
Data is striped evenly across the drives without parity. Performance is improved at the
expense of the redundancy provided by other RAID levels.
• RAID 1 (2 drives): Provides disk mirroring, which duplicates data between two drives. A
RAID 1 array is internally identical to a two-member RAID 10 array.
• RAID 5 (3 - 16 drives): Arrays stripe data over the member drives with one parity strip on
every stripe. RAID 5 arrays have single redundancy with higher space efficiency than
RAID 10 arrays, but with some performance penalty. RAID 5 arrays can tolerate no more
than one member drive failure.
• RAID 6 (5 - 16 drives): Arrays stripe data over the member drives with two parity strips on
every stripe. A RAID 6 array can tolerate any two concurrent member drive failures.
• RAID 10 (2 - 16 drives): Arrays stripe data over mirrored pairs of drives. RAID 10 arrays
have single redundancy. The mirrored pairs rebuild independently. One member out of
every pair can be rebuilding or missing at the same time. RAID 10 combines the features
of RAID 0 and RAID 1.
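
For reference, arrays are created from internal drives with the mkarray CLI command. A
minimal sketch follows; the RAID level, drive IDs, and pool name are illustrative assumptions:

mkarray -level raid5 -drive 0:1:2:3 Pool0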

Disk scrubbing
The scrub process runs when arrays do not have any other background processes. The
process checks that the drive logical block addresses (LBAs) are readable and array parity is
synchronized. Arrays are scrubbed independently and each array is entirely scrubbed every
seven days.



Solid-state drives
Solid-state drives (SSDs) are treated no differently by an IBM Storwize V5000 than hard disk
drives (HDDs) concerning RAID arrays or MDisks. The SSDs in the storage that are managed
by the IBM Storwize V5000 are combined into an array, usually in RAID 10 or RAID 5 format.
It is unlikely that RAID 6 SSD arrays are used because of the double parity impact, with two
SSD drives used for parity only.

Drive replacement procedure


From software version 7.4, it is possible to reseat or replace a failed drive in a Storwize V5000
by removing it from its enclosure and replacing it with an appropriate new drive, without
requiring the Directed Maintenance Procedure to supervise the service action.

The system automatically performs the drive hardware validation tests and promotes the
drive into the configuration if these tests pass, automatically configuring the inserted drive as
a spare. The status of the drive following the promotion is recorded in the event log, either
as an informational message or as an error, should some hardware failure occur during the
system action.

For more information about the drive replacement process, see the IBM Storwize V5000
Knowledge Center at this website:

http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html

At the website, browse to Troubleshooting → Replacing parts → Replacing a Drive
assembly.

12.2.6 Power supply unit


All enclosures are fitted with two power supply units (PSUs) for normal operation. A single
PSU can power the entire enclosure, which provides redundancy.

Figure 12-18 shows the power supplies.

Figure 12-18 Power supply

The left side PSU is numbered 1 and the right side PSU is numbered 2.



PSU LED indicators
The PSU indicators are the same for the control and expansion units.

Figure 12-19 shows the PSU LED indicators.

Figure 12-19 PSU LED Indicators

Table 12-9 shows the colors and meanings of the LEDs.

Table 12-9 PSU LED definitions

Position  Color  Marking           Name                    Definition
1         Green  In                AC Status               Main power is delivered
2         Green  DC                DC Status               DC power is available
3         Amber  Exclamation mark  Fault                   Fault on PSU
4         Blue   OK                Service action allowed  N/A

PSU replacement procedure


For more information about the PSU replacement process, see the IBM Storwize V5000
Knowledge Center at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html

At the website, browse to Troubleshooting → Replacing parts → Replacing a Power
Supply Unit.



12.3 Configuration backup procedure
If there is a serious failure that requires the system configuration to be restored, the
configuration backup file must be used. The file contains configuration data such as arrays,
pools, and volumes (but no customer data). The backup file is updated by the cluster every
day and stored in the /dumps directory.

Even so, it is important to save the file after you change your system configuration, by using
the command-line interface (CLI) to start a manual backup.

Regularly saving a configuration backup file on the IBM Storwize V5000 is important, and it
must be done manually. Download this file regularly to your management workstation to
protect the configuration data (a preferred practice is to automate this download procedure by
using a script and saving it daily on a remote system; a sketch of such a script follows).
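
The following is a minimal sketch of such a script for a Linux or UNIX workstation, assuming
SSH key authentication is configured for the superuser account; the cluster IP address and
local directory are placeholders:

#!/bin/sh
# Trigger a fresh configuration backup on the cluster, then copy the files off.
CLUSTER=192.168.1.100
ssh superuser@$CLUSTER 'svcconfig backup'
scp superuser@$CLUSTER:/tmp/svc.config.backup.* /backups/v5000/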

12.3.1 Generating a manual configuration backup by using the CLI


To generate a configuration backup using the CLI, run the svcconfig backup command, as
shown in Example 12-1.

Example 12-1 Example for backup CLI command


svcconfig backup

The progress of the command is reported by dots, as shown in Example 12-2.

Example 12-2 Backup CLI command progress and output


..................................................................................
..................................................................................
....................
CMMVC6155I SVCCONFIG processing completed successfully

The svcconfig backup command creates three files that provide information about the
backup process and cluster configuration. These files are created in the /tmp directory on the
configuration node and can be retrieved using SCP.
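
For example, from a Windows management workstation with PuTTY installed, the files can be
copied with pscp; the IP address and target directory are placeholders, and -unsafe is
needed because a wildcard is used:

pscp -unsafe superuser@192.168.1.100:/tmp/svc.config.backup.* c:\v5000backup\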

The three files that are created by the backup process are described in Table 12-10.

Table 12-10 Files that are created by the backup process

File name               Description
svc.config.backup.xml   Contains the cluster configuration data.
svc.config.backup.sh    Contains the names of the commands that were issued to create
                        the backup of the cluster.
svc.config.backup.log   Contains details about the backup, including any error information
                        that might be reported.



12.3.2 Downloading a configuration backup by using the GUI
The IBM Storwize V5000 also automatically saves the configuration, on a daily basis, to the
/dumps directory.

Note: While the files in the /dumps directory are the same as those generated using the CLI
svcconfig backup command, the user has no control over when they are generated.

To download a configuration backup file using the GUI, complete the following steps:
1. Browse to Settings → Support, as shown in Figure 12-20.

Figure 12-20 Configuration backup open support panel

2. Select the Show full log listing... option to list all of the available log files that are stored
on the configuration node, as shown in Figure 12-21.

Figure 12-21 Configuration backup - Show full log listing



3. Search for the files named svc.config.backup.xml_*, svc.config.backup.sh_* and
svc.config.backup.log_*, as shown in Figure 12-22. Select the files, right-click them
and select Download.

Figure 12-22 Configuration backup - download

4. Save the configuration backup files to your management workstation, as shown in


Figure 12-23.

Figure 12-23 Configuration backup - save file

Even though the configuration backup files are updated automatically on a daily basis, it may
be of interest to verify the time stamp of the actual file.

To do this, open the svc.config.backup.xml_xx file with an editor and search for the string
timestamp=, which is found near the top of the file. Figure 12-24 shows the file and the
timestamp information.



Figure 12-24 Timestamp in backup XML file
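
On a Linux or UNIX workstation, the same check can be done without opening an editor; the
file name suffix is a placeholder for the downloaded file:

grep -m1 'timestamp=' svc.config.backup.xml_*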

12.4 Updating software


The system update process involves updating the entire IBM Storwize V5000 environment.

The node canister software and disk firmware are separate updates and these tasks are
described separately.

12.4.1 Node canister software


Allow sufficient time to plan your tasks, review your preparatory update tasks, and complete
the update of the IBM Storwize V5000 environment. The update procedure can be divided
into the following general tasks, shown in Table 12-11.

Table 12-11 Software update tasks

1. Decide whether you want to update automatically or manually. During an automatic
update procedure, the clustered system updates each of the nodes systematically. The
automatic method is the preferred procedure for updating software on nodes. However,
you can update each node manually.

2. Ensure that CIM object manager (CIMOM) clients are working correctly. When necessary,
update these clients so that they can support the new version of IBM Storwize V5000
code. Examples may be OS versions and options such as FlashCopy Manager or VMware
plug-ins.

3. Ensure that multipathing drivers in the environment are fully redundant. If you experience
failover issues with multipathing driver support, resolve these issues before you start
normal operations.

4. Update other devices in the IBM Storwize V5000 environment. Examples might include
updating hosts and switches to the correct levels.

5. Update your IBM Storwize V5000.



Important: The amount of time it takes to perform a node canister update can vary
depending on the amount of preparation work that is required and the size of the
environment. Generally, to actually update the node software, allow 20-40 minutes per
node canister and a single 30 minute wait when the update is halfway complete. One node
in each I/O group will be upgraded to begin with, then the system will wait 30 minutes
before upgrading the second node in each I/O group. The 30 minute wait allows the
recently updated node canister to come online and be confirmed as operational, and
allows time for host multipathing to recover.

Some code levels support upgrades only from specific previous levels. If you upgrade to more
than one level above your current level, you might be required to install an intermediate level.
See the following for further details:

https://www-304.ibm.com/support/docview.wss?uid=ssg1S1004336

Important: Ensure that you have no unfixed errors in the log and that the system date and
time are correctly set. Start the fix procedures, and ensure that you fix any outstanding
errors before you attempt to concurrently update the code.

Updating software automatically


The software update can be performed concurrently with normal user I/O operations, and
each node in the system updates individually. There may be some degradation in the
maximum I/O rate that can be sustained by the system while the code is uploaded to a node,
during the update, and while the node is rebooted and the new code is committed. This is
because write caching is disabled during the node canister update process.

The updating node is temporarily unavailable and all I/O operations fail to that node. As a
result, the I/O error counts increase and the failed I/O operations are directed to the partner
node of the working pair. Applications do not see any I/O failures. When new nodes are
added to the system, the upgrade package is automatically downloaded to the new nodes
from the IBM Storwize V5000 system.
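
The update can also be started from the CLI with applysoftware after the package has been
copied to the system. The following is a sketch, assuming the 7.4-level command names; the
file name is a placeholder for the level being installed, and lssoftwareupgradestatus reports
the state of the update:

applysoftware -file IBM2078_INSTALL_7.4.0.0
lssoftwareupgradestatus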

Multipathing requirement
Before an update, ensure that the multipathing drivers are fully redundant with every path
available and online. You may see errors that are related to the paths, which will go away
(failover) and the error count will increase during the update. When the paths to the nodes
return, the nodes fall back to become a fully redundant system.



GUI node canister software update process
The automatic update process is started in the GUI by starting the Update wizard, as shown
in Figure 12-25. Browse to Settings → System → Update System → Update.

Figure 12-25 Start Upgrade wizard

When updating the node canister software, a Test utility and the node software must first be
downloaded from the internet. These can be downloaded either via the Download link within
the panel or manually via the IBM Support site. The Test utility verifies that there are no
issues with the current system environment, such as failed components and down-level drive
firmware. Select the Test utility and Update package files by clicking the folder icon in the
corresponding input field, and then enter the version of the code level you are updating to, as
shown in Figure 12-26.

Figure 12-26 Select Test utility and software Update package



Select whether an automatic or manual code update is required. The Automatic update
option is the default and advised choice. Figure 12-27 shows the update mode selection
panel.

Figure 12-27 Update mode decision

If the Service Assistant Manual update option is selected, see “Updating software
manually” on page 627.

Select Finish to start the update process on the nodes.

The Test utility and node canister software will then be uploaded to the system as shown in
Figure 12-28.

Figure 12-28 Software upload



After the software has been uploaded to the IBM Storwize V5000, the Test Utility is
automatically run as shown in Figure 12-29.

Figure 12-29 Running Test Utility

Messages inform you of any warnings or errors that the Test Utility may find. Figure 12-30
shows the result of the Test Utility finding issues with an IBM Storwize V5000 before
continuing with the software update.

Figure 12-30 Test Utility detected issues

Close the warning window and then click the Read more link to display the results of the Test
Utility, as shown in Figure 12-31.



Figure 12-31 Test Utility results

In this example, the Test Utility has found two warnings: this system has not been configured
to send email notifications to IBM and also has a number of drives with down-level firmware.
Neither of these issues prevents the software update from continuing.

Clicking Continue proceeds with the software update; clicking Cancel cancels it, allowing
the user to correct any issues.

The results of continuing with the software update are shown in Figure 12-32.

Figure 12-32 Updating system

Nodes are upgraded and rebooted, one at a time until the upgrade process is complete.



Updating software manually

Important: It is highly advised to upgrade the IBM Storwize V5000 automatically by
following the Update wizard. If a manual update is used, make sure that you do not skip
any steps.

The steps for a manual update are shown in the Update System help. See Figure 12-33.

Figure 12-33 Update Software help

Complete the following steps to manually upgrade the software:

1. In the management GUI, click Settings → System → Update System and run the
Update wizard.
When updating the node canister software, a Test utility and the node software must first
be downloaded from the internet. These can be downloaded either via the Download link
within the panel or manually via the IBM Support site. The Test utility verifies that there are
no issues with the current system environment, such as failed components and down-level
drive firmware. Select the Test utility and Update package files by clicking the folder icon in
the corresponding input field, enter the version of the code level you are updating to, and
click Update to continue, as shown in Figure 12-34.

Figure 12-34 Select Test utility and software Update package



2. At the next panel, select Service Assistant Manual update, as shown in Figure 12-35.

Figure 12-35 Select manual update mode

Select Finish to start the update process on the nodes.


The Test utility and node canister software will then be uploaded to the system as shown
in Figure 12-36.

Figure 12-36 Software upload

After the software has been uploaded, the Test Utility is automatically run, and if there are
no issues, the system is placed in a Prepared state as shown in Figure 12-37.

Figure 12-37 Update System status - Prepared state



3. Non-configuration nodes are updated first, leaving the configuration node until last.
In the management GUI, select Monitoring → System and hover over the canisters to
confirm which are the non-configuration nodes, as shown in Figure 12-38.

Figure 12-38 Checking the Configuration node status

4. Select the canister that contains the node you want to update and select Remove, as
shown in Figure 12-39.

Figure 12-39 Remove the non-config node from cluster

Important: Make sure that you select the non-configuration nodes first.



A warning message appears, as shown in Figure 12-40. Click Yes to continue.

Figure 12-40 Remove node warning message

The non-configuration node will be removed from the Management GUI Update System panel
and will be shown as Unconfigured when hovering over the node in the Monitoring →
System panel.
5. Open the Service Assistant Tool for the node that you just removed. See 2.10.3, “Service
Assistant tool” on page 72.
6. In the Service Assistant Tool, the node that is ready for upgrade must be selected. Select
the node that shows Node status as service mode, displays an error code of 690 and has
no available cluster information, as shown in Figure 12-41.

Figure 12-41 Select node in Service GUI for update



7. In the Service Assistant Tool, select Upgrade Manually and select the required node
canister software upgrade file, as shown in Figure 12-42.

Figure 12-42 Select software file for upgrade

8. Click Upgrade to start the upgrade process on the first node.


The node is re-introduced automatically into the system after upgrade. Upgrading and
adding the node again can take 20-40 minutes, as shown in Figure 12-43.

Figure 12-43 Non-configuration node update complete



The management GUI also shows the progress of the update as shown in Figure 12-44.

Figure 12-44 Manual node software update progress

9. Repeat steps 3 to 8 for the remaining node (or nodes), leaving the configuration node until
last.
After you remove the configuration node from the cluster for the update, a warning
appears, as shown in Figure 12-45. Click Yes to continue.

Figure 12-45 Configuration node failover warning

Important: The configuration node remains in Service State when it is re-added to the
cluster. Therefore, exit Service State manually.



10. To exit service state, browse to the home panel of the Service Assistant Tool and
open the Action menu. Select Exit Service State, as shown in Figure 12-46.

Figure 12-46 Exit service state to add node back in cluster

Both nodes are now back in the cluster (as shown in Figure 12-47) and the system is
running the new code level.

Figure 12-47 Cluster is active again and running new code level

12.4.2 Upgrading drive firmware


In previous versions of IBM Storwize code, it was only possible to upgrade the drive firmware
via the CLI using the applydrivesoftware command, after manually uploading the firmware
files to the /home/admin/upgrade directory of the IBM Storwize V5000. This process is
detailed in Appendix A, “Command-line interface setup and SAN Boot” on page 667.
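
For reference, a minimal CLI sketch of a single-drive firmware update follows; the package
file name and drive ID are illustrative assumptions, and the file must already be on the
system. The second command displays the detailed view of the drive, including its firmware
level:

applydrivesoftware -file IBM2078_DRIVE_20141201 -type firmware -drive 3
lsdrive 3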

As of Storwize software v7.4, the drive firmware can be upgraded via the GUI. The user can
choose to upgrade all drives or select an individual drive.

Download the latest Drive Upgrade package from the IBM Support site:
http://www.ibm.com/support/entry/portal/support



Upgrading the firmware on individual drives
To upgrade an individual drive, navigate to Pools  Internal Storage, right-click the drive to
be upgraded and select Upgrade from the action menu, as shown in Figure 12-48.

Figure 12-48 Upgrade an individual drive

Select the Drive Upgrade package, which was previously downloaded from the IBM Support
site, by clicking the folder icon, and click Upgrade as shown in Figure 12-49.

Figure 12-49 Select Drive Upgrade package



The drive firmware update takes about 2-3 minutes per drive. Figure 12-50 shows the
completed task.

Figure 12-50 Drive firmware update complete

Figure 12-51 shows the Pools → Internal Storage panel, which displays the result of the
drive firmware update (from SC2C to SC2E).

Figure 12-51 Result of drive firmware update

Upgrading the firmware on all drives


There are two ways to upgrade all the drives in an IBM Storwize V5000 in the Management
GUI:
1. Via the Monitoring → System panel
2. Via the Pools → Internal Storage panel



Figure 12-52 shows how to upgrade all drives via the Actions menu in the Systems panel.

Figure 12-52 Upgrade all drives - System panel

Figure 12-53 on page 636 shows how to update all drives via the Action menu in the Internal
Storage panel.

Note: If any drives are selected, the Action menu displays actions for the selected drives
and the Upgrade All option does not appear. If a drive is selected, de-select it by holding
down the Ctrl key and clicking the drive.

Figure 12-53 Upgrade all drives - Internal Storage panel

After initiating the drive upgrade process by either of the above options, the panel in
Figure 12-54 is displayed. Select the Drive Upgrade package, which was previously
downloaded from the IBM Support site, by clicking the folder icon, and then click Upgrade to
continue.



Figure 12-54 Upgrade all drives - select firmware file

All the drives that require an upgrade will now be upgraded.

12.5 Event log


Whenever a significant change in the status of IBM Storwize V5000 is detected, an event is
submitted to the event log.

All events are classified as alerts or messages.

An alert is logged when the event requires action. Some alerts have an associated error code
that defines the service action that is required. The service actions are automated through the
fix procedures. If the alert does not have an error code, the alert represents an unexpected
change in state. This situation must be investigated to see whether it is expected or
represents a failure. Investigate an alert and resolve it when it is reported.

A message is logged when a change that is expected is reported; for instance, an IBM
FlashCopy operation completes.

The event log panel can be opened via the GUI by clicking Monitoring → Events, as shown
in Figure 12-55.

Figure 12-55 Opening the Events panel



Figure 12-56 shows the event log.

Figure 12-56 The event log view

12.5.1 Managing the event log


The event log has a size limit. After it is full, newer entries replace older entries that are no
longer required.

To avoid a repeated event that fills the event log, some records in the event log refer to
multiple occurrences of the same event. When event log entries are coalesced in this way, the
time stamp of the first occurrence and the last occurrence of the problem is saved in the log
entry. A count of the number of times that the error condition occurred also is saved in the log
entry. Other data refers to the last occurrence of the event.

Event log panel columns


Right-clicking in any column header opens the option menu in which you can select columns
that are shown or hidden. It is also possible to click the Column icon on the far right of the
column headers to open the option menu.

Figure 12-57 shows all of the possible columns that can be displayed in the error log view.

Figure 12-57 Possible event log columns



The following fields are advised as a minimum to assist you in diagnosing problems:

• Event ID: This number precisely identifies the reason why the event was logged.
• Error code: This number describes the service action that should be followed to resolve
an error condition. Not all events have error codes that are associated with them. Many
event IDs can have the same error code because the service action is the same for all of
the events.
• Sequence number: A number that identifies the event.
• Event count: The number of events that are coalesced into this event log record.
• Fixed: When an alert is shown for an error condition, this field indicates whether the
reason for the event was resolved. In many cases, the system automatically marks the
events as fixed when appropriate. Some events must be manually marked as fixed. If the
event is a message, this field indicates that you read and performed the action; the
message must be marked as read.
• Last time: The time when the last instance of this error event was recorded in the log.
• Root sequence number: If set, this number is the sequence number of an event that
represents an error that probably caused this event to be reported. Resolve the root event
first.
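
The event log can also be queried from the CLI with lseventlog. The following is a minimal
sketch, assuming the -message and -fixed filters behave as documented for this code level;
it lists unfixed alerts only:

lseventlog -message no -fixed no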

Event log panel options


Figure 12-58 shows the main Event log panel options, which should be used to handle
system events.

Figure 12-58 Events Panel



Event log filter options
The following log filter options are available:

• Show all: Lists all available events.
• Unfixed Messages and Alerts: Lists unfixed events. This option is useful for finding events
that must be handled but that have no recommended actions associated with them.
• Recommended Actions (default): Only events with Recommended Actions (Status Alert)
are displayed.

Important: Check this filter option if no event is listed. There might be events that are not
associated with Recommended Actions.

Figure 12-59 shows an event log with no items found, which does not necessarily mean that
the event log is clear. To check whether the log is clear, use the filter option Show all.

Figure 12-59 No items found in event log

Actions on a single event


Right-clicking a single event gives options that might be used for that specific event:

• Mark as Fixed: It is possible to mark this specific event as fixed, even if that is not the
advised next action. Some events, such as messages, must be set to Mark as Fixed.
(This can also be done from the CLI; see the sketch after this list.)
• Show entries within... minutes/hours/days: This option limits the error log list to a specific
date or time slot. The following values are selectable:
  - Minutes: 1, 5, 10, 15, 30, and 45
  - Hours: 1, 2, 5, and 12
  - Days: 1, 4, 7, 15, and 30
• Clear Log: This option clears the complete error log, even if only one event was selected.

Important: These actions cannot be undone and might prevent the system from being
analyzed when severe problems occur.

• Properties: This option provides more sense data for the selected event than is shown in
the list.
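
Marking an event as fixed can also be done from the CLI with cheventlog; the sequence
number below is an illustrative placeholder taken from the Sequence Number column of the
event log:

cheventlog -fix 120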



Recommended Actions
A fix procedure invokes a wizard known as a Directed Maintenance Procedure (DMP) that
helps to troubleshoot and correct the cause of an error. Some DMPs reconfigure the system
based on your responses, ensure that actions are carried out in the correct sequence, and
prevent or mitigate loss of data. For this reason, you must always run the fix procedure to fix
an error, even if the fix might seem obvious.

To run the fix procedure for the error with the highest priority, go to the Recommended Action
panel at the top of the Event page and click Run This Fix Procedure. When you fix higher
priority events first, the system often can automatically mark lower priority events as fixed.

12.5.2 Alert handling and Recommended Actions


All events in Alert status require attention. Alerts are listed in priority order and should be
fixed sequentially by using the available fix procedures.

Example: External storage controller failure


For this example, we created an error on an externally virtualized storage controller by
rebooting one of its control canisters.

The following example shows how faults are represented in the error log, how information
about the fault can be gathered, and the Recommended Action (DMP) can be used to fix the
error:
򐂰 Detect an alert
The Health Status indicator is showing a red alert (for more information, see Chapter 3,
“Graphical user interface overview” on page 75). The Status Alerts indicator (to the right of
the Health status bar) is also visible and showing one alert. Click the alert to retrieve the
specific information, as shown in Figure 12-60.

Figure 12-60 Health check shows degraded system status

Review the event log for more information.


򐂰 Gather additional information: Alert properties
More details about the event can be found in the properties option, as shown in
Figure 12-61. This information might be of interest for problem fixing or for root cause
analysis.



Figure 12-61 Alert properties

򐂰 Run Recommended Action (DMP)


Using the DMP to fix alerts is highly advised. Background tasks might be missed when
the DMP is bypassed. Not all alerts have DMPs available.
To start the DMP, right-click the alert record or click Run this fix procedure at the top of
the window.
The steps and panels of the DMP are specific to the error. The following figures represent
the Recommended Action (DMP) for the external storage example, which shows as
a Number of device logins reduced event.
Figure 12-62 shows step 1 of the DMP for the Number of device logins reduced event.
In this example, we respond that we did not deliberately remove the storage controller.

Figure 12-62 Recommended Action DMP for Device logins reduced - step 1



Figure 12-63 shows step 2 of the DMP for the Number of device logins reduced event.

Figure 12-63 Recommended Action DMP for Device logins reduced - step 2

Figure 12-64 shows step 3 of the DMP for the Number of device logins reduced event.

Figure 12-64 Recommended Action DMP for Device logins reduced - step 3

Figure 12-65 shows step 4 of the DMP for the Number of device logins reduced event.

Figure 12-65 Recommended Action DMP for Device logins reduced - step 4



Figure 12-66 shows step 5 of the DMP for the Number of device logins reduced event.

Figure 12-66 Recommended Action DMP for Device logins reduced - step 5

When all of the steps of the DMP are processed successfully, the Recommended Action is
complete and the problem should be fixed. Figure 12-67 shows that the event status changed
from red to green. The System Health status is green and the Recommended Action
box has disappeared, which implies that there are no further actions to be addressed.

Figure 12-67 Recommended Action - completed



Handling multiple alerts
If there are multiple alerts logged, the IBM Storwize V5000 recommends an action to fix the
problem (or problems).

Figure 12-68 shows the event log that displays multiple alerts.

Figure 12-68 Multiple alert events displayed in the event log

The Next Recommended Action function orders the alerts by severity and displays the events
with the highest severity first. If multiple events have the same severity, they are ordered by
date and the oldest event is displayed first.

The following order of severity starts with the most severe condition:
򐂰 Unfixed alerts (sorted by error code; the lowest error code has the highest severity)
򐂰 Unfixed messages
򐂰 Monitoring events (sorted by error code; the lowest error code has the highest severity)
򐂰 Expired events
򐂰 Fixed alerts and messages

Faults are often fixed with the resolution of the most severe fault.

12.6 Collecting support information


If you have an issue with an IBM Storwize V5000 and call the IBM Support Center, you might
be asked to provide support data, as described in the next section.

12.6.1 Support information via GUI

Download Support Package wizard


The Download Support Package wizard provides a selection of various package types. IBM
Support provides direction on package type selection as required. To download a Support
package, browse to Settings  Support, as shown in Figure 12-69.



Figure 12-69 Accessing Support files

Click Download Support Package, as shown in Figure 12-70.

Figure 12-70 Download Support Package

Figure 12-71 shows the Download Support Package panel from which you can select one of
four different types of the support package.

Figure 12-71 Support Package selection



The type that is selected depends on the event that is being investigated; IBM
Support will notify the user of which package is required.

The following components are included in the support package:


򐂰 Standard logs
Contains the most recent logs that were collected from the system. These logs are most
commonly used by Support to diagnose and solve problems.
򐂰 Standard logs plus one existing statesave
Contains the standard logs from the system and the most recent statesave from any of the
nodes in the system. Statesaves are also known as memory dumps or live memory dumps.
򐂰 Standard logs plus most recent statesave from each node
This option is used most often by the support team for problem analysis. It contains the
standard logs from the system and the most recent statesave from each node in the system.
򐂰 Standard logs plus new statesave
This option might be requested by the Support team for problem determination. It
generates a new statesave (livedump) for all of the nodes and packages them with the
most recent logs.

Save the resulting file in a directory for later use or to upload to IBM Support.
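A support package can also be collected without the GUI. The following sketch assumes that the svc_snap CLI command is available at your code level to generate a snap (option flags vary by code level); the resulting package is written to the /dumps directory and can be copied to a workstation with the PuTTY Secure Copy client (PSCP), which is also used in Appendix A. The IP address and target folder are placeholders:

svc_snap
pscp -unsafe [email protected]:/dumps/snap.*.tgz c:\v5000logs\

The -unsafe switch permits the server-side wildcard in the file name.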

Show full log listing


The Support panel also provides access to the files that are on the node canisters, as shown
in Figure 12-72. Click Show full log listing... to access the node canister files. To save a file
to the user’s workstation, select a file, right-click the file, and select Download. To change
the file listing to show the files on a partner node canister, select the node canister from the
menu that is next to the Actions menu.

Figure 12-72 Full log listing



12.6.2 Support information via Service Assistant
The IBM Storwize V5000 management GUI collects information from all the components in
the system. The Service Assistant Tool collects information from all node canisters. The snap
file is the information that is collected and packaged in a single file.

If the package is collected by using the Service Assistant Tool, ensure that the node from
which the logs are collected is the current node, as shown in Figure 12-73.

Figure 12-73 Collect logs with the Service Assistance Tool

Support information can be downloaded with or without the latest statesave, as shown in
Figure 12-74.

Figure 12-74 Download support file via Service Assistant Tool

12.6.3 Support Information onto USB stick


Whenever the management GUI, Service Assistant Tool, or a remote connection is
unavailable, snaps can be collected from each node using a USB stick.

Complete the following steps to collect snaps by using a USB stick:


1. Create a text file that includes the following command:
satask snap -dump



2. Save the file as satask.txt in the root directory of the USB stick.
3. Insert the USB stick in the USB port of the node from which the support data should be
collected.
4. Wait until no write activity is recognized (this process can take 10 minutes or more).
5. Remove the USB stick and check the results, as shown in Figure 12-75.

Figure 12-75 Single snap result files on USB stick
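Steps 1 and 2 can be combined in a single command from a Windows command prompt, assuming that the USB stick is mounted as drive E: (a hypothetical drive letter):

echo satask snap -dump > E:\satask.txt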

satask_result file
The satask_result.html file is the general response to the command that is issued via the
USB stick. If the command did not run successfully, it is noted in this file. Otherwise, any
general system information is stored here, as shown in Figure 12-76.

Figure 12-76 satask_result.html on USB stick (header only)

Snap memory dump on USB


A complete statesave of the node in which the USB stick was inserted is stored in a .tgz file. The
name of the file includes the node name and the time stamp. The content of the .tgz file is
shown in Figure 12-77.

Figure 12-77 Single snap memory dump on USB stick



12.7 Powering on and powering off the IBM Storwize V5000
In the following sections, we describe the process to power on and power off the IBM
Storwize V5000 system by using the GUI and the CLI.

12.7.1 Powering off the system


In this section, we show how to power off (shut down) the IBM Storwize V5000 system by
using the GUI and CLI.

Important: You should never power off an IBM Storwize V5000 by powering off the PSUs,
removing both PSUs, or removing both power cables from a running system.

Powering down by using the GUI


You can power off a node canister or the entire cluster. When you shut down only one node
canister, all of the running tasks remain active as the remaining node takes over.

Note: If a canister or the enclosure is powered down, a local visit will be required to either
reseat the canister or power cycle the enclosure.

To power off a canister, browse to Monitoring  System, rotate the enclosure to the rear
view, right-click the required canister and select Power Off, as shown in Figure 12-78.

Figure 12-78 Powering off a canister



To power down the system, browse to Monitoring  System, click the Actions menu and
select Power Off, as shown in Figure 12-79.

Figure 12-79 Powering off the system

A Power Off confirmation window opens, prompting for confirmation to shut down the cluster.
Ensure that all FlashCopy, Metro Mirror, Global Mirror, data migration operations, and forced
deletions are stopped or allowed to complete before continuing. Enter the provided
confirmation code and click Power Off to begin the shutdown process, as shown in
Figure 12-80.

Figure 12-80 Power Off system confirmation

Wait for the power LED on both node canisters in the control enclosure to flash at 1 Hz, which
indicates that the shutdown operation completed (1 Hz is half as fast as the drive indicator
LED).

Tip: When you shut down an IBM Storwize V5000, it does not automatically restart. You
must manually restart the system by removing and re-applying power.



Shutting down by using the CLI
The CLI can also be used to shut down an IBM Storwize V5000. The CLI can be accessed on
Windows via the PuTTY utility.

Warning: If you are shutting down the entire system, you lose access to all volumes that
are provided by this system. Shutting down the system also shuts down all IBM Storwize
V5000 nodes. All data is flushed to disk before the power is removed.

Run the stopsystem command to shut down a clustered system, as shown in Example 12-3.

Example 12-3 Shut down


stopsystem

Are you sure that you want to continue with the shut down?

# Type y to shut down the entire clustered system.
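To shut down only one node canister rather than the entire system, the stopsystem command accepts a node parameter, as shown in the following sketch. The node ID 1 is a hypothetical value; list the canisters first and confirm that the partner canister is online before proceeding:

lsnodecanister
stopsystem -node 1

As with the full shutdown, you are prompted to confirm the action.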

12.7.2 Powering on
Complete the following steps to power on the system:

Important: This process assumes that all power is removed from the enclosure. If the
control enclosure is shut down but the power is not removed, the power LED on all node
canisters flash at 1 Hz. In this case, remove the power cords from both power supplies and
then reinsert them.

1. Ensure that any network switches that are connected to the system are powered on.
2. Power on any expansion enclosures by connecting the power cord to both power supplies
in the rear of the enclosure or turning on the power circuit.
3. Power on the control enclosure by connecting the power cords to both power supplies in
the rear of the enclosure and turning on the power circuits.
The system starts. The system starts successfully when all node canisters in the control
enclosure have their status LED permanently on, which should take no longer than
10 minutes (a CLI check is sketched after these steps).
4. Start the host applications.
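As a quick check after power-on, the CLI can be used to confirm that both node canisters are back online (a sketch; the columns that are displayed can vary by code level):

lsnodecanister

Both canisters should report a status of online.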

12.8 Tivoli Storage Productivity Center


The IBM Tivoli Storage Productivity Center provides a set of policy-driven automated tools for
managing storage capacity, availability, events, performance, and assets in your enterprise
environment. Tivoli Storage Productivity Center provides storage management from the host
and application to the target storage device. It also provides disk and tape subsystem
configuration and management, Performance Management, SAN fabric management and
configuration, and usage reporting and monitoring. In this section, we describe how to use
Tivoli Storage Productivity Center to get usage reporting and to monitor performance data.



Tivoli Storage Productivity Center can help you to identify, evaluate, control, and predict your
enterprise storage management assets. Because it is policy-based, it can detect potential
problems and automatically make adjustments that are based on the policies and actions that
you define. For example, it can notify you when your system is running out of disk space or
warn you of an impending storage hardware failure. By alerting you to these and other issues
that are related to your stored data, you can prevent unnecessary system and application
downtime.

12.8.1 Tivoli Storage Productivity Center benefits


Tivoli Storage Productivity Center includes the following benefits:
򐂰 Simplifies the management of storage infrastructures
򐂰 Manages, configures, and provisions SAN-attached storage
򐂰 Monitors and tracks performance of SAN-attached devices
򐂰 Monitors, manages and controls (through zones) SAN fabric components
򐂰 Manages the capacity usage and availability of the file systems and databases
򐂰 Offers performance monitoring and reporting
򐂰 Reports can be viewed by using a web-based GUI

The examples provided in this section are not based on the latest version of Tivoli Storage
Productivity Center. For detailed guidance on configuring the latest version of Tivoli Storage
Productivity Center, see the IBM Tivoli Storage Productivity Center Technical Guide, which is
another IBM Redbooks publication:
http://www.redbooks.ibm.com/abstracts/sg248053.html?Open

12.8.2 Adding IBM Storwize V5000 in Tivoli Storage Productivity Center


After the Tivoli Storage Productivity Center is installed, it is ready to connect to the IBM
Storwize V5000 system.

Complete the following steps to connect Tivoli Storage Productivity Center to the IBM
Storwize V5000 system:
1. Open your browser and use the following link to start Tivoli Storage Productivity Center, as
shown in Figure 12-81:
http://TPC_system_Hostname:9550/ITSRM/app/en_US/index.html

Figure 12-81 Tivoli Storage Productivity Center front page



You can also find a link on the webpage to download IBM Java, if required. To start the Tivoli
Storage Productivity Center console, click Tivoli Storage Productivity Center GUI (Java
Web Start).
Tivoli Storage Productivity Center starts an application download, as shown in
Figure 12-82. If this is the first time you logged in, it takes time to install the required Java
packages to the local system.

Figure 12-82 Downloading Tivoli Storage Productivity Center application

2. Use your login credentials to access Tivoli Storage Productivity Center, as shown in
Figure 12-83.

Figure 12-83 Tivoli Storage Productivity Center login access

3. After successfully logging in, you are ready to add storage devices to Tivoli Storage
Productivity Center, as shown in Figure 12-84.

Figure 12-84 Add Devices window



4. Enter the details of your IBM Storwize V5000 in Tivoli Storage Productivity Center, as
shown in Figure 12-85.

Figure 12-85 Configure device in Tivoli Storage Productivity Center

Continue following the wizard after you complete all the required fields. After the wizard is
completed, Tivoli Storage Productivity Center collects information from IBM Storwize V5000.
A summary of details is shown at the end of discovery process.

12.9 Using Tivoli Storage Productivity Center to administer and generate reports for an IBM Storwize V5000
This section shows examples of how to use Tivoli Storage Productivity Center to administer,
configure, and generate reports for an IBM Storwize V5000 system. A detailed description
about Tivoli Storage Productivity Center reporting is beyond the intended scope of this book.

12.9.1 Basic configuration and administration


By using Tivoli Storage Productivity Center, you can administer, monitor, and configure your
IBM Storwize V5000 system. However, not all of the options that are normally associated with
the IBM Storwize V5000 GUI or CLI are available.



After successfully adding your IBM Storwize V5000 system, click Disk Manager  Storage
Subsystems to view your configured devices, as shown in Figure 12-86.

Figure 12-86 Storage Subsystem view

When you highlight the IBM Storwize V5000 system, action buttons become available that
you can use to view the device configuration or create virtual disks, as shown in Figure 12-87.

Figure 12-87 Action buttons

The MDisk Groups option provides you with a detailed list of the configured MDisk groups
including, pool space, available space, configured space, and Easy Tier Configuration.

The Virtual Disks option lists all the configured disks with the added option to filter them by
MDisk Group. The list includes several attributes, such as capacity, volume type, and type.

Important: Tivoli Storage Productivity Center and SAN Volume Controller use the
following terms:
򐂰 Virtual Disk: The equivalent of a Volume on a Storwize device
򐂰 MDisk Group: The equivalent of a Storage Pool on a Storwize device.

If you click Create Virtual Disk, the Create Virtual Disk wizard window opens, as shown in
Figure 12-88. Use this window to create volumes and specify several options, such as size,
name, thin provisioning, and add MDisks to an MDisk group.



Figure 12-88 Create Virtual Disk wizard

12.9.2 Generating reports by using Java GUI


In this section, we describe how to generate sample reports by using the GUI. We also create
a probe to collect information from IBM Storwize V5000, as shown in Figure 12-89.

Figure 12-89 Create Probe option



Add the IBM Storwize V5000 probe for collecting information, as shown in Figure 12-90.

Figure 12-90 Adding IBM Storwize V5000 in probe

After you create a probe, you can click Create Subsystem Performance Monitor, as shown
in Figure 12-91.

Figure 12-91 Create subsystem performance monitor



To check the MDisk performance, click Disk Manager  Reporting  Storage Subsystem
Performance  By Managed Disk. You see many options to include in the wizard to check
MDisk performance, as shown in Figure 12-92.

Figure 12-92 Managed disk performance report filter specification

Click Generate Report to see a report, as shown in Figure 12-93.

Figure 12-93 MDisk performance report



Click the upper left icon to see a history chart report of the selected MDisk, as shown in
Figure 12-94.

Figure 12-94 MDisk history chart



12.9.3 Generating reports using the Tivoli Storage Productivity Center web console
In this section, we describe how to generate reports using the Tivoli Storage Productivity
Center web console.

To connect to the web page, browse to the following URL:


https://tpchostname.com:9569/srm/

A login panel is displayed (as shown in Figure 12-95), allowing users to log in with their
Tivoli Storage Productivity Center credentials.

Figure 12-95 Tivoli Storage Productivity Center login panel

After you log in, the Tivoli Storage Productivity Center web dashboard is displayed, as shown
in Figure 12-96. The Tivoli Storage Productivity Center web-based GUI is used to show
information about the storage resources in your environment. It contains predefined and
custom reports about performance and storage tiering.

Figure 12-96 Tivoli Storage Productivity Center Dashboard



IBM Tivoli Common Reporting can be used to view predefined reports and create custom
reports from the web-based GUI. Predefined reports are also included, as shown in
Figure 12-97.

Figure 12-97 Tivoli Storage Productivity Center web-based reporting

Figure 12-98 shows how to select predefined Storage Tiering reporting.

Figure 12-98 Tivoli Storage Productivity Center Storage tiering reporting

Figure 12-99 shows the different report options for Storage Tiering.

Figure 12-99 Details reports



Figure 12-100 shows the output from the VDisk Details report.

Figure 12-100 VDisk Details report

Figure 12-101 shows the Report Overview in pie-chart format.

Figure 12-101 Reporting Overview



Figure 12-102 shows the Easy Tier usage for volumes. To open this report in Tivoli Storage
Productivity Center, click Storage Resources  Volumes.

Figure 12-102 Volume Easy Tier usage

Figure 12-103 shows a detailed list of storage pools.

Figure 12-103 Pool Easy Tier information



Figure 12-104 shows Storage Virtualized Pool details in graph format.

Figure 12-104 Pool details



Appendix A. Command-line interface setup and SAN Boot
This appendix describes the setup of the command-line interface (CLI) and provides more
information about the SAN Boot function.

This appendix includes the following topics:


򐂰 Command-line interface
򐂰 SAN Boot



Command-line interface
The IBM Storwize V5000 system has a powerful CLI, which offers even more functions than
the GUI. This section is not intended to be a detailed guide to the CLI; that topic is beyond the
scope of this book. The basic configuration of the IBM Storwize V5000 CLI and some
example commands are covered. The CLI commands are the same as those in the SAN
Volume Controller; in addition, there are more commands available to manage the
internal storage of the Storwize V5000. If a task is completed in the GUI, the CLI command is
always displayed in the details, as shown throughout this book.

Detailed CLI information is available in the IBM Storwize V5000 Knowledge Center under the
Command Line section, which can be found at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740
.doc/v5000_ichome_740.html

Implementing the IBM Storwize V7000 V7.2, SG24-7938, also has information about the use
of the CLI. The commands in that book also apply to the IBM Storwize V5000 system because
it is part of the Storwize family.

Basic setup
In the IBM Storwize V5000 GUI, authentication is done by using a user name and a password.
The CLI uses Secure Shell (SSH) to connect from the host to the IBM Storwize V5000
system. Either a private and public key pair or a user name and password is necessary. The
following steps are required to enable CLI access with SSH keys (an OpenSSH-based
alternative is sketched after this list):
1. A public key and private key are generated as a pair.
2. A public key is uploaded to the IBM Storwize V5000 system using the GUI.
3. A client SSH tool is configured to authenticate with the private key.
4. A secure connection is established between the client and IBM Storwize V5000 system.
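For workstations with an OpenSSH client (for example, Linux or Mac OS X), the same flow can be achieved without PuTTY. This is a minimal sketch; the key file name is arbitrary and <system_ip> is a placeholder for your management IP address:

ssh-keygen -t rsa -b 1024 -f v5000key
ssh -i v5000key superuser@<system_ip>

The v5000key.pub file is uploaded to the system in the same way as the PuTTY public key that is described in the following sections.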

Secure Shell is the communication vehicle that is used between the management workstation
and the IBM Storwize V5000 system. The SSH client provides a secure environment from
which to connect to a remote machine. It uses the principles of public and private keys for
authentication.

SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the clustered system, and a private key, which is kept
private on the workstation that is running the SSH client. These keys authorize specific users
to access the administration and service functions on the system. Each key pair is associated
with a user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be
stored on the system. New IDs and keys can be added, and unwanted IDs and keys can be
deleted. To use the CLI, an SSH client must be installed on that system, the SSH key pair
must be generated on the client system, and the client’s SSH public key must be stored on
the IBM Storwize V5000 systems.

The SSH client that is used in this book is PuTTY. There is also a PuTTY key generator that
can be used to generate the private and public key pair. The PuTTY client can be downloaded
at no cost at the following website:
http://www.chiark.greenend.org.uk

The following tools should be downloaded:


򐂰 PuTTY SSH client: putty.exe
򐂰 PuTTY key generator: puttygen.exe



If you are using a Windows OS, it is suggested that you download the Windows installer,
which includes everything that is required (currently putty-0.63-installer.exe).

Generating a public and private key pair


To generate a public and private key pair, complete the following steps:
1. Start the PuTTY key generator to generate the public and private key pair, as shown in
Figure A-1.

Figure A-1 PuTTY key generator

Make sure that the following options are selected:


– Type of key to generate: SSH2 RSA
– Number of bits in a generated key: 1024



2. Click Generate and move the cursor over the blank area to generate the keys, as shown in
Figure A-2.

Figure A-2 Generate keys

Generating keys: The blank area that is indicated by the message is the large blank
grey rectangle inside the section of the GUI that is labeled Key. Continue to move the
mouse pointer over the blank area until the progress bar reaches the far right side. This
action generates random characters to create a unique key pair.

More information about generating keys can be found in the PuTTY user manual, which is
available from the Help menu of the PuTTY GUI.
3. After the keys are generated, save them for later use. Click Save public key, as shown in
Figure A-3. You should always set a key passphrase before saving the key. Otherwise, the
key is stored on your workstation unencrypted, and any attacker who gains access to your
private key gains access to all machines that are configured to accept it.



Figure A-3 Save public key

4. You are prompted for a name (for example, pubkey) and a location for the public key (for
example, C:\Support Utils\PuTTY). Click Save.
Be sure to record the name and location of this SSH public key because this information
must be specified later.

Public key extension: By default, the PuTTY key generator saves the public key with
no extension. Use the string pub for naming the public key; for example, pubkey, to
differentiate the SSH public key from the SSH private key.

5. Click Save private key, as shown in Figure A-4.

Figure A-4 Save private key



6. You receive a warning message, as shown in Figure A-5. For simplicity in this example,
click Yes to save the private key without a passphrase. In a production environment, set a
passphrase as advised in step 3.

Figure A-5 Confirm the security warning

7. When prompted, enter a name (for example, icat), select a secure place as the location,
and click Save.

Key generator: The PuTTY key generator saves the private key with the PPK
extension.

8. Close the PuTTY key generator.


Uploading the SSH public key to the IBM Storwize V5000


After you create your SSH key pair, you must upload your SSH public key onto the IBM
Storwize V5000 system. Complete the following steps to upload the key:
1. Open the user section, as shown in Figure A-6.

Figure A-6 Open user section

2. Right-click the user for which you want to upload the key and click Properties, as shown in
Figure A-7.



Figure A-7 Superuser properties

3. To upload the public key, click Browse, select your public key, and click OK, as shown in
Figure A-8.

Figure A-8 Select public key



4. Click OK and the task to upload the key is started.
5. Click Close to return to the GUI.

Configuring the SSH client


Before the CLI can be used, the SSH client must be configured. Complete the following steps
to configure the client:
1. Start PuTTY, as shown in Figure A-9.

Figure A-9 PuTTY

In the right side pane under the “Specify the destination you want to connect to” section,
select SSH. Under the “Close window on exit” section, select Only on clean exit, which
ensures that if there are any connection errors, they are displayed in the user’s window.



2. From the Category pane on the left side of the PuTTY Configuration window, click
Connection  SSH to open the PuTTY SSH Configuration window, as shown in
Figure A-10.

Figure A-10 SSH protocol version 2

3. In the right side pane in the “Preferred SSH protocol version” section, select 2.



4. From the Category pane on the left side of the PuTTY Configuration window, click
Connection  SSH  Auth. As shown in Figure A-11, in the right side pane in the
“Private key file for authentication:” field under the Authentication Parameters section,
browse to or manually enter the fully qualified directory path and file name of the SSH
client private key file that was created earlier (for example, C:\Support
Utils\PuTTY\icat.PPK).

Figure A-11 SSH authentication

5. From the Category pane on the left side of the PuTTY Configuration window, click
Session to return to the Session view, as shown in Figure A-9 on page 674.



6. In the right side pane, enter the host name or system IP address of the IBM Storwize
V5000 clustered system in the Host Name field. Enter a session name in the Saved
Sessions field, as shown in Figure A-12.

Figure A-12 Enter session information



7. Click Save to save the new session, as shown in Figure A-13.

Figure A-13 Save Session

8. Highlight the new session and click Open to connect to the IBM Storwize V5000 system.
9. PuTTY now connects to the system and prompts you for a user name. Enter superuser as
the user name and press Enter (see Example A-1).

Example: A-1 Enter user name


login as: superuser
Authenticating with public key "rsa-key-20130521"
Last login: Tue May 21 15:21:55 2013 from 9.174.219.143
IBM_Storwize:mcr-atl-cluster-01:superuser>

The CLI is now configured for IBM Storwize V5000 administration.
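Commands can also be run non-interactively with the plink utility from the PuTTY suite, which is useful for scripting. This is a sketch; icat.ppk is the private key that was saved earlier and <system_ip> is a placeholder:

plink -i icat.ppk superuser@<system_ip> lssystem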

Example commands
A detailed description of all of the available commands is beyond the intended scope of
this book; therefore, we show some sample commands that are used elsewhere in this book.

The svcinfo and svctask prefixes are no longer needed in IBM Storwize V5000. If you have
scripts that use these prefixes, they still run without problems, but the prefixes are not
necessary. If you enter svcinfo or svctask and press the Tab key twice, all of the available
subcommands are listed. Pressing the Tab key twice also auto-completes commands if the
input is valid and unique to the system. An example of the prefix equivalence follows.
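For example, both of the following commands return the same volume listing:

svcinfo lsvdisk
lsvdisk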



Enter lsvdisk (as shown in Example A-2) to list all configured volumes on the system. The
example shows that six volumes are configured.

Example: A-2 List all volumes


IBM_Storwize:mcr-atl-cluster-01:superuser>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name
capacity type FC_id FC_name RC_id RC_name vdisk_UID opy_count
fast_write_state se_copy_count RC_change compressed_copy_count
0 V5000_Vol1 0 io_grp0 online 0 V5000_Pool
20.00GB striped 6005076300800 empty
1 no 0
1 V5000_Vol2 0 io_grp0 online 0 V5000_Pool
2.00GB striped 6005076300800 empty
1 no 0
2 V5000_Vol3 0 io_grp0 online 0 V5000_Pool
2.00GB striped 6005076300800 empty
1 no 0
3 V5000_Vol4 0 io_grp0 online 0 V5000_Pool
2.00GB striped 6005076300800 empty
1 no 0
4 V5000_Vol5 0 io_grp0 online 0 V5000_Pool
2.00GB striped 6005076300800 empty
1 no 0
5 V5000_Vol6 0 io_grp0 online 0 V5000_Pool
2.00GB striped 6005076300800 empty
1 no 0

Enter lshost to see a list of all configured hosts on the system, as shown in Example A-3.

Example: A-3 List hosts


IBM_Storwize:mcr-atl-cluster-01:superuser>lshost
id name port_count iogrp_count status
0 windows2008r2 2 4 online

To map a volume to a host, enter mkvdiskhostmap, as shown in Example A-4.

Example: A-4 Map volumes to host


IBM_Storwize:mcr-atl-cluster-01:superuser>mkvdiskhostmap -host ESXi-1 -scsi 0 -force
ESXi-Redbooks
Virtual Disk to Host map, id [0], successfully created

To verify the host mapping, enter lshostvdiskmap, as shown in Example A-5.

Example: A-5 List all volumes that are mapped to a host


IBM_Storwize:mcr-atl-cluster-01:superuser>lshostvdiskmap ESXi-1
id name SCSI_id vdisk_id vdisk_name vdisk_UID
4 ESXi-1 0 2 ESXi-Redbooks 600507680185853FF000000000000011



In the CLI, there are more options available than in the GUI. All advanced settings can be set;
for example, I/O throttling. To enable I/O throttling, change the properties of a volume by using
the chvdisk command, as shown in Example A-6. To verify the changes, run the lsvdisk
command.

Example: A-6 Enable advanced properties: I/O throttling


IBM_Storwize:mcr-atl-cluster-01:superuser>chvdisk -rate 1200 -unit mb
ESXi-Redbooks
IBM_Storwize:mcr-atl-cluster-01:superuser>
IBM_Storwize:mcr-atl-cluster-01:superuser>lsvdisk ESXi-Redbooks

id 2
name ESXi-Redbooks
.
.
vdisk_UID 600507680185853FF000000000000011
virtual_disk_throttling (MB) 1200
preferred_node_id 2
.
.
IBM_Storwize:mcr-atl-cluster-01:superuser>

Command output: The lsvdisk command lists all available properties of a volume and its
copies. To make it easier to read, lines in Example A-6 were deleted.

If you do not specify the unit parameter, the throttling is based on I/Os instead of throughput,
as shown in Example A-7.

Example: A-7 Throttling based on I/O


IBM_Storwize:mcr-atl-cluster-01:superuser>chvdisk -rate 4000 ESXi-Redbooks
IBM_Storwize:mcr-atl-cluster-01:superuser>lsvdisk ESXi-Redbooks
id 2
name ESXi-Redbooks
.
.
vdisk_UID 600507680185853FF000000000000011
throttling 4000
preferred_node_id 2
.
.
IBM_Storwize:mcr-atl-cluster-01:superuser>



To disable I/O throttling, set the I/O rate to 0, as shown in Example A-8.

Example: A-8 Disable I/O Throttling


IBM_Storwize:mcr-atl-cluster-01:superuser>chvdisk -rate 0 ESXi-Redbooks
IBM_Storwize:mcr-atl-cluster-01:superuser>lsvdisk ESXi-Redbooks
id 2
.
.
vdisk_UID 600507680185853FF000000000000011
throttling 0
preferred_node_id 2
IBM_Storwize:mcr-atl-cluster-01:superuser>

Upgrading drive firmware using the CLI

Note: Before upgrading any disk drive firmware, the Storwize system should be checked
for any failures, and any that are found should be rectified before continuing.

Disk drives should not be upgraded if their associated arrays are not in a redundant state.

While the update process is designed not to take the drives offline, this cannot be
guaranteed.

Running the following command for the drive that you are upgrading will inform you of any
possible issues.

lsdependentvdisks -drive drive_id

This command returns notification of a possible issue if the drive in question is part of a
non-redundant array.

An example of this might be a RAID0 array or a RAID5 array with a failed drive.

To perform the disk drive firmware upgrade, you will require the following files:
򐂰 Upgrade Test Utility
򐂰 The Drive firmware package

These can be downloaded from the IBM Support site:

http://www-947.ibm.com/support/entry/portal/support

After downloading, copy the Upgrade Test Utility and Drive Firmware package to your PuTTY
folder.

Like the Storwize controller firmware upgrade, the disk drive upgrade requires the use of the
Upgrade Test Utility, which shows the drives that need to be upgraded and checks whether
there are likely to be any issues.



Copying, installing, and running the Upgrade Test Utility on the Storwize unit
The following steps provide the details to run the upgrade test utility.

Copying the Upgrade Test Utility to the Storwize unit


From a Windows command prompt, upload the Upgrade Test Utility to the Storwize unit using
the PuTTY Secure Copy client (PSCP). Run the following command from the PuTTY folder on
your laptop or PC (normally c:\program files\putty or c:\program files (x86)\putty):

pscp -i hursley.ppk IBM2072_INSTALL_upgradetest_12.26 [email protected]:/home/admin/upgrade

Where ‘hursley.ppk’ is the private key file generated when SSH was set up,
‘IBM2072_INSTALL_upgradetest_12.26’ is the file name of the utility (use the latest one),
‘superuser’ is the user name and ‘9.174.152.17’ is the management IP address or name of
the Storwize unit. Change as appropriate to your environment.

It is also possible to upload the Upgrade Test Utility to the Storwize unit with the Storwize
user name and password (not an SSH private key), by using the following command:

pscp IBM2072_INSTALL_upgradetest_12.26 [email protected]:/home/admin/upgrade

You will be asked for the password for whichever user name you specified. This is shown in
Example 12-4.

Example 12-4 uploading test utility


C:\Program Files (x86)\PuTTY>pscp IBM2072_INSTALL_upgradetest_12.26 superuser@9
174.152.17:/home/admin/upgrade
[email protected]'s password:
IBM2072_INSTALL_upgradete | 59 kB | 59.6 kB/s | ETA: 00:00:00 | 100%

C:\Program Files (x86)\PuTTY>

Installing and running the Upgrade Test Utility


From the Storwize CLI, install the Upgrade Test Utility using the following command:

svcservicetask applysoftware -file IBM2072_INSTALL_upgradetest_12.26

Then run the Upgrade Test Utility using the following command:

svcupgradetest -f -d

The -f switch specifies that this is a drive firmware update, while the -d switch shows firmware
details for every disk drive. Omitting the -d switch gives a summary. Example 12-5 shows this
output.

Example 12-5 Test utility output


IBM_Storwize:ITSO_V5000:superuser>svcupgradetest -f -d
svcupgradetest version 12.26

Please wait, the test may take several minutes to complete.

+******************* Warning found *******************



This tool has found the internal disks of this system are
not running the recommended firmware versions.
Details follow:

+----------------------+-----------+------------+---------------------------------
---------+
| Model | Latest FW | Current FW | Drive Info
|
+----------------------+-----------+------------+---------------------------------
---------+
| ST9146853SS | B63E | B63D | Drive in slot 13 in enclosure 1
|
| | | | Drive in slot 12 in enclosure 1
|
| | | | Drive in slot 4 in enclosure 1
|
| | | | Drive in slot 3 in enclosure 1
|
| MK3001GRRB | SC2E | SC2C | Drive in slot 24 in enclosure 1
|
| | | | Drive in slot 21 in enclosure 1
|
| | | | Drive in slot 23 in enclosure 1
|
| | | | Drive in slot 22 in enclosure 1
|
| | | | Drive in slot 11 in enclosure 1
|
+----------------------+-----------+------------+---------------------------------
---------+
We recommend that you upgrade the drive microcode at an
appropriate time. If you believe you are running the latest
version of microcode, then check for a later version of this tool.
You do not need to upgrade the drive firmware before starting the
software upgrade.

Results of running svcupgradetest:


==================================

The tool has found warnings.


For each warning above, follow the instructions given.

The tool has found 0 errors and 1 warnings


IBM_Storwize:ITSO_V5000:superuser>

The utility lists all drives in the system that require a firmware upgrade (if the -d switch
is used).



Copying the code to the Storwize unit
It is assumed that the disk drive firmware has already been downloaded and placed in the
PuTTY folder. From a Windows command prompt, upload the disk drive firmware to the
Storwize unit using the PuTTY Secure Copy client (PSCP). Run the following command from
the PuTTY folder on your laptop or PC (normally c:\program files\putty or c:\program files
(x86)\putty):

pscp -i hursley.ppk IBM2072_DRIVE_20140826 [email protected]:/home/admin/upgrade

Where 'hursley.ppk' is the private key file generated when SSH was set up,
'IBM2072_DRIVE_20140826' is the name of the firmware file, 'superuser' is the user name
and '9.174.152.17' is the management IP address or name of the Storwize unit. Change as
appropriate to your environment.

It is also possible to upload the disk drive firmware to the Storwize unit with the Storwize
user name and password (not an SSH private key), by using the following command:

pscp IBM2072_DRIVE_20140826 [email protected]:/home/admin/upgrade

You will be asked for a password for the username in this case.

Applying the disk drive software


Depending on the Storwize code being used, there are a number of different options for
applying the disk drive firmware.

If using Storwize code v7.1 or older, disk drive firmware can only be manually applied to one
drive at a time, using the applydrivesoftware command for each individual disk. The output
from the test utility shown in Example 12-5 on page 682 gives the drive slot number as the
identifier; however, to run the firmware upgrade on individual drives, the drive ID is required,
not the slot ID. To obtain the drive ID from the slot ID, use the lsdrive output, as shown in
Example 12-6. The output shown has been abbreviated for the sake of clarity.

Example 12-6 lsdrive output


IBM_Storwize:ITSO_V5000:superuser>lsdrive
id status use tech_type capacity mdisk_id mdisk_name member_id enclosure_id slot_id
0 online candidate sas_ssd 185.8GB 1 17
1 online candidate sas_ssd 185.8GB 1 18
2 online candidate sas_ssd 185.8GB 1 6
3 online candidate sas_ssd 185.8GB 1 7
4 online candidate sas_ssd 185.8GB 1 16
5 online candidate sas_ssd 185.8GB 1 14
6 online candidate sas_ssd 185.8GB 1 15
7 online candidate sas_ssd 185.8GB 1 10
8 online candidate sas_ssd 372.1GB 1 1
9 online candidate sas_ssd 185.8GB 1 8
10 online candidate sas_ssd 185.8GB 1 9
11 online candidate sas_ssd 372.1GB 1 2
12 online member sas_hdd 136.2GB 3 mdisk3 3 1 13
13 online member sas_hdd 136.2GB 3 mdisk3 2 1 12
14 online member sas_hdd 136.2GB 3 mdisk3 1 1 5
15 online member sas_hdd 136.2GB 3 mdisk3 0 1 4
16 online spare sas_hdd 136.2GB 1 3
17 online candidate sas_hdd 278.9GB 1 24
18 online candidate sas_hdd 278.9GB 1 21



Run the command to apply drive firmware as shown in Example 12-7.

Example 12-7 Applying drive firmware to a single disk


IBM_Storwize:ITSO_V5000:superuser>applydrivesoftware -file IBM2072_DRIVE_20140826
-type firmware -drive 12
IBM_Storwize:ITSO_V5000:superuser>

With Storwize code 7.2 or later, it is possible to upgrade all drives using the -all switch as
shown in Example 12-8.

Example 12-8 Applying drive firmware to all drives


IBM_Storwize:ITSO_V5000:superuser>applydrivesoftware -file IBM2072_DRIVE_20140826
-type firmware -all
IBM_Storwize:ITSO_V5000:superuser>

The command takes roughly two minutes per drive to complete. To confirm that all disks have
been upgraded, re-run the Upgrade Test Utility or check the internal storage from the GUI (a
CLI check is sketched below).
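The firmware level can also be confirmed from the CLI: the detailed lsdrive view for a single drive includes a firmware_level field. This is a sketch; drive ID 12 matches the drive that was upgraded in Example 12-7:

lsdrive 12

Compare the reported firmware_level with the latest version that is shown by the Upgrade Test Utility.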

SAN Boot
IBM Storwize V5000 supports SAN Boot for Windows, VMware, and many other operating
systems. It is also possible to migrate SAN Boot volumes from other storage systems onto the
Storwize V5000. Each implementation or migration differs somewhat depending on the OS,
HBA, and multipath driver that are used. SAN Boot support can also change; a detailed
treatment is therefore beyond the scope of this book. For more information about SAN Boot,
see the IBM Storwize V5000 Knowledge Center:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740
.doc/v5000_ichome_740.html

For more information about SAN Boot support for different operating systems with IBM SDD,
see the IBM System Storage Multipath Subsystem Device Driver User's Guide, GC52-1309,
which is available at this website:

http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000303



Related publications

The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.

IBM Redbooks publications


The following IBM Redbooks publications (or later versions as they become available) may
provide more information about the topics in this book. Some publications that are referenced
in the following list might be available in softcopy only:
򐂰 Implementing the IBM System Storage SAN Volume Controller V7.2, SG24-7933
򐂰 Implementing the IBM Storwize V7000 V7.2, SG24-7938
򐂰 SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521
򐂰 Implementing an IBM/Brocade SAN with 8 Gbps Directors and Switches, SG24-6116
򐂰 IBM SAN Volume Controller and Storwize Family Native IP Replication, REDP-5103

You can search for, view, download, or order these documents and other Redbooks
publications, Redpaper publications, Web Docs, drafts, and other materials, at the following
website:
http://www.ibm.com/redbooks

IBM Storwize V5000 publications


Storwize V5000 publications are available at this website:
https://ibm.biz/BdxyDL

IBM Storwize V5000 support


Storwize V5000 support is available at this website:
https://ibm.biz/BdxyD9

Help from IBM


IBM Support and downloads:
ibm.com/support

IBM Global Services:


ibm.com/services
