Implementing the IBM Storwize V5000
Jon Tate
Adam Lyon-Jones
Lee Sirett
Chris Tapsell
Paulo Tomiyoshi Takeda
ibm.com/redbooks
International Technical Support Organization
February 2015
SG24-8162-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.
This edition applies to the IBM Storwize V5000 and software version 7.4. Note that this book was produced
based on beta code and some screens may change when it becomes generally available.
© Copyright International Business Machines Corporation 2013, 2015. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
7.2.3 Creating an MDisk and a pool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
7.2.4 Option: Use the recommended configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
7.2.5 Option: Select a different configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
7.2.6 MDisk by Pools panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
7.2.7 RAID action for MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
7.2.8 More actions on MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
7.3 Working with storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
7.3.1 Create Pool option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
7.3.2 Actions on storage pools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
7.4 Working with child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
7.4.1 Creating child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
7.4.2 Actions on child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
7.4.3 Resizing a child pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
7.5 Working with MDisks on external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
11.1.4 Guidelines for virtualizing external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
11.2 Working with external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
11.2.1 Adding external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
11.2.2 Importing Image Mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
11.2.3 Managing external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
11.2.4 Removing external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
DS8000®, Easy Tier®, FlashCopy®, IBM®, Power Systems™, Redbooks®, Redbooks (logo)®, Storwize®, System i®, System Storage®, Tivoli®, and XIV®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Organizations of all sizes are faced with the challenge of managing massive volumes of
increasingly valuable data. But storing this data can be costly, and extracting value from the
data is becoming more difficult. IT organizations have limited resources but must stay
responsive to dynamic environments and act quickly to consolidate, simplify, and optimize
their IT infrastructures. The IBM® Storwize® V5000 system provides a smarter solution that
is affordable, easy to use, and self-optimizing, which enables organizations to overcome
these storage challenges.
Storwize V5000 delivers efficient, entry-level configurations that are specifically designed to
meet the needs of small and midsize businesses. Designed to provide organizations with the
ability to consolidate and share data at an affordable price, Storwize V5000 offers advanced
software capabilities that are usually found in more expensive systems.
This IBM Redbooks® publication is intended for pre-sales and post-sales technical support
professionals and storage administrators.
The concepts in this book also relate to the IBM Storwize V3700.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, Manchester Labs, UK.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for SG24-8162-01
for Implementing the IBM Storwize V5000
as created or updated on February 4, 2015.
New information
Scalability data
Drive Support
New GUI look and feel
New CLI commands
Changed information
GUI screen captures
CLI commands and output
The IBM Storwize V5000 addresses the block storage requirements of small and midsize
organizations and consists of one 2U control enclosure and, optionally, up to 19 2U
expansion enclosures, which are connected by serial-attached SCSI (SAS) cables and
together make up one system that is called an I/O group.
Two I/O groups can be connected to form a clustered system with a maximum of two control
enclosures and 38 expansion enclosures.
The control and expansion enclosures are available in the following form factors and can be
intermixed within an I/O group:
12 x 3.5-inch drives in a 2U unit
24 x 2.5-inch drives in a 2U unit
Within each enclosure, there are two canisters. Control enclosures contain two node
canisters, and expansion enclosures contain two expansion canisters.
The IBM Storwize V5000 supports up to 480 x 3.5-inch drives, 960 x 2.5-inch drives, or a
combination of both drive form factors for the internal storage in a two I/O group cluster.
The IBM Storwize V5000 is designed to accommodate the most common storage network
technologies to enable easy implementation and management. It can be attached to hosts via
a Fibre Channel SAN fabric, an iSCSI infrastructure, or via SAS. Hosts can be network or
direct attached.
Important: IBM Storwize V5000 can be direct-attached to a host. For more information
about restrictions, see the IBM System Storage Interoperation Center (SSIC), which is
available at this website:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
The IBM Storwize V5000 system also provides several configuration options that are aimed at
simplifying the implementation process. It also provides configuration presets and automated
wizards called Directed Maintenance Procedures (DMP) to help resolve any events that might
occur.
Included with an IBM Storwize V5000 system is a simple and easy to use graphical user
interface (GUI) that is designed to allow storage to be deployed quickly and efficiently. The
GUI runs on any supported browser. The management GUI contains a series of
pre-established configuration options that are called presets that use commonly used settings
to quickly configure objects on the system. Presets are available for creating volumes and
IBM FlashCopy® mappings and for setting up a RAID configuration.
You can also use the command-line interface (CLI) to set up or control the system.
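The CLI is reached over Secure Shell (SSH) to the system management IP address. The following lines are a minimal sketch only; the commands shown (lssystem, lsenclosure, and lsdrive) are standard Storwize family CLI, but the IP address is an example and the output varies by configuration:
  ssh [email protected]      # management IP assigned during initial setup (example address)
  lssystem                             # display clustered system properties, including the code level
  lsenclosure                          # list control and expansion enclosures and their status
  lsdrive                              # list internal drives, their use, and their status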
Control enclosure A hardware unit that includes a chassis, node canisters, drives,
and power sources.
Data migration IBM Storwize V5000 can migrate data from existing external
storage to its internal volumes.
Expansion canister A hardware unit that includes the SAS interface hardware that
enables the control enclosure hardware to use the drives of the
expansion enclosure. Each expansion enclosure has two
expansion canisters.
External storage MDisks that are SCSI logical units (LUs) presented by storage
systems that are attached to and managed by the clustered
system.
Fibre Channel port Fibre Channel ports are connections for the hosts to get access
to the IBM Storwize V5000.
Host mapping The process of controlling which hosts can access specific
volumes within an IBM Storwize V5000.
Internal storage Array MDisks and drives that are held in enclosures that are
part of the IBM Storwize V5000.
iSCSI (Internet Small Computer System Interface) An Internet Protocol (IP)-based storage
networking standard for linking data storage facilities.
Managed disk (MDisk) A component of a storage pool that is managed by a clustered
system. An MDisk is part of a RAID array of internal storage or
a SCSI LU for external storage. An MDisk is not visible to a host
system on the storage area network.
Node canister A hardware unit that includes the node hardware, fabric, and
service interfaces, SAS expansion ports, and battery. Each
control enclosure contains two node canisters.
PHY A single SAS lane. There are four PHYs in each SAS cable.
Power Supply Unit Each enclosure has two power supply units (PSU).
Quorum disk A disk that contains a reserved area that is used exclusively for
cluster management. The quorum disk is accessed when it is
necessary to determine which half of the cluster continues to
read and write data.
Serial-Attached SCSI (SAS) ports SAS ports are connections for expansion enclosures and direct
attachment of hosts to access the IBM Storwize V5000.
Thin provisioning or thin provisioned The ability to define a storage unit (full system,
storage pool, or volume) with a logical capacity size that is larger than the
physical capacity that is assigned to that storage unit.
Worldwide port names Each Fibre Channel port and SAS port is identified by its
physical port number and worldwide port name (WWPN).
More information: For more information about the features, benefits, and specifications of
IBM Storwize V5000 models, see this website:
http://www.ibm.com/systems/storage/disk/storwize_v5000/index.html
The information in this book is accurate at the time of writing. However, as the IBM
Storwize V5000 matures, expect to see new features and enhanced specifications.
The IBM Storwize V5000 models are described in Table 1-2. All models have two node
canisters. C models are control enclosures and E models are expansion enclosures.
One-Year Warranty models:
2077-12C: 16 GB cache, 12 x 3.5-inch drive slots
2077-24C: 16 GB cache, 24 x 2.5-inch drive slots
Three-Year Warranty models:
2078-12C: 16 GB cache, 12 x 3.5-inch drive slots
2078-24C: 16 GB cache, 24 x 2.5-inch drive slots
Figure 1-1 shows the front view of the 2077/2078-12C and 12E enclosures.
Figure 1-1 IBM Storwize V5000 front view for 2077/2078-12C and 12E enclosures
The drives are positioned in four columns of three horizontal-mounted drive assemblies. The
drive slots are numbered 1 - 12, starting at upper left and going left to right, top to bottom.
Figure 1-2 IBM Storwize V5000 front view for 2077/2078-24C and 24E enclosure
The drives are positioned in one row of 24 vertically mounted drive assemblies. The drive
slots are numbered 1 - 24, starting from the left. There is a vertical center drive bay molding
between slots 12 and 13.
Figure 1-3 shows an overview of hardware components of the IBM Storwize V5000 solution.
Figure 1-4 IBM Storwize V5000 control enclosure rear view of models 12C and 24C
In Figure 1-4, you can see that there are two power supply slots at the bottom of the
enclosure. The power supplies are identical and exchangeable. There are two canister slots
at the top of the chassis.
In Figure 1-5, you can see the rear view of an IBM Storwize V5000 expansion enclosure.
Figure 1-5 IBM Storwize V5000 expansion enclosure rear view - models 12E and 24E
You can see that the only difference between the control enclosure and the expansion
enclosure are the canisters. The canisters of the expansion have only the two SAS ports.
For more information about the expansion enclosure, see 1.4.2, “Expansion enclosure” on
page 8.
The battery is used in case of power loss. The IBM Storwize V5000 system uses this battery
to power the canister while the cache data is written to the internal system flash. This memory
dump is called a fire hose memory dump. After the system is up again, this data is loaded
back to the cache for destage to the disks.
Figure 1-6 on page 7 also shows the following features that are provided by the IBM Storwize
V5000 node canister:
Two 10/100/1000 Mbps Ethernet ports, which are used for management. Port 1 (left port)
must be configured. The second port, port 2 on the right, is optional. Both ports can be
used for iSCSI traffic. For more information, see Chapter 4, “Host configuration” on
page 155.
Two USB ports. One port is used during the initial configuration or when there is a
problem. They are numbered 1 on the left and 2 on the right. For more information about
usage, see Chapter 2, “Initial configuration” on page 25.
Four serial-attached SCSI (SAS) ports. They are numbered 1 on the left to 4 on the right.
The IBM Storwize V5000 uses ports 1 and 2 for host connectivity and ports 3 and 4 to
connect to the optional expansion enclosures. The IBM Storwize V5000 incorporates two
SAS expansion chains: up to 10 expansion enclosures can be connected to one chain and
up to nine enclosures to the other.
Four Fibre Channel ports, which operate at 2 Gbps, 4 Gbps, or 8 Gbps. The ports are
numbered from left to right starting with 1.
Service port: Do not use the port marked with a wrench. This port is a service port only.
The two node canisters act as a single processing unit and form an I/O group that is attached
to the SAN fabric, an iSCSI infrastructure, or directly attached to hosts via FC or SAS. The
pair of nodes is responsible for serving I/O to a volume. The two nodes provide a highly
available fault-tolerant controller so that if one node fails, the surviving node automatically
takes over. Nodes are deployed in pairs that are called I/O groups.
One node is designated as the configuration node, but each node in the control enclosure
holds a copy of the control enclosure state information.
The IBM Storwize V5000 supports two I/O groups in a clustered system.
The terms node canister and node are used interchangeably throughout this book.
The expansion enclosure power supplies are the same as the control enclosure. There is a
single power lead connector on each power supply unit.
As shown in Figure 1-8, each expansion canister provides two SAS interfaces that are used
to connect to the control enclosure and any further optional expansion enclosures. The ports
are numbered 1 on the left and 2 on the right. SAS port 1 is the IN port and SAS port 2 is the
OUT port.
Use of the SAS connector 1 is mandatory because the expansion enclosure must be
attached to a control enclosure or another expansion enclosure. SAS connector 2 is optional
because it is used to attach to further expansion enclosures.
Each port includes two LEDs to show the status. The first LED indicates the link status and
the second LED indicates the fault status.
For more information about LED and ports, see this website:
https://ibm.biz/BdF7H4
The 1 Gb iSCSI and 6 Gb SAS interfaces are built into the node canister hardware and the
8 Gb FC interface is supplied by a host interface card (HIC). At the time of writing, the 8 Gb
FC HIC is the only HIC that is available and is supplied as standard.
Table 1-3 shows the IBM Storwize V5000 Disk Drive types that are available at the time of
writing.
2.5-inch form factor: solid-state disk (SSD), 200 GB, 400 GB, 800 GB, and 1.6 TB
2.5-inch form factor: SAS 10,000 rpm, 600 GB, 900 GB, 1.2 TB, and 1.8 TB
2.5-inch form factor: SAS 15,000 rpm, 146 GB, 300 GB, and 600 GB
3.5-inch form factor: SAS 10,000 rpm, 900 GB, 1.2 TB, and 1.8 TB
3.5-inch form factor: SAS 15,000 rpm, 300 GB and 600 GB
Note: The 1.8 TB and 6 TB drives listed above support 4K block sizes.
1.5.1 Hosts
A host system is a server that is connected to IBM Storwize V5000 through a Fibre Channel
connection, an iSCSI connection, or through a SAS connection.
Hosts are defined on IBM Storwize V5000 by identifying their WWPNs for Fibre Channel and
SAS hosts. iSCSI hosts are identified by using their iSCSI names. The iSCSI names can be
iSCSI qualified names (IQNs) or extended unique identifiers (EUIs). For more information,
see Chapter 4, “Host configuration” on page 155.
Hosts can be Fibre Channel attached via an existing Fibre Channel network infrastructure or
direct attached, iSCSI attached via an existing IP network, or directly attached via SAS. A
significant benefit of having direct attachment is that you can attach the host directly to the
IBM Storwize V5000 without the need for an FC or IP network.
One of the nodes within the system is known as the configuration node that manages
configuration activity for the clustered system. If this node fails, the system nominates the
other node to become the configuration node.
When a host server performs I/O to one of its volumes, all the I/O for that volume is directed
to the I/O group where the volume is defined. Under normal conditions, these I/Os are also
always processed by the same node within that I/O group.
Both nodes of the I/O group act as preferred nodes for their own specific subset of the total
number of volumes that the I/O group presents to the host servers (a maximum of 2048
volumes). However, both nodes also act as a failover node for the partner node
within the I/O group. Therefore, a node takes over the I/O workload from its partner node
(if required) without affecting the server’s application.
In an IBM Storwize V5000 environment (which uses active-active architecture), the I/O
handling for a volume can be managed by both nodes of the I/O group. The I/O groups must
therefore be connected to the SAN such that all hosts can access all nodes. The hosts that
are connected through Fibre Channel connectors must use multipath device drivers to handle
this capability.
Up to 1024 host server objects can be defined to one I/O group or 2048 in a two I/O group
system. More information about I/O groups can be found in Chapter 5, “Volume configuration”
on page 201.
Important: The active/active architecture provides availability to process I/Os for both
controller nodes and allows the application to continue running smoothly, even if the server
has only one access route or path to the storage controller. This type of architecture
eliminates the path/LUN thrashing that is typical of an active/passive architecture.
System configuration backup: After the system configuration is backed up, save the
backup data on to your local hard disk (or at the least outside of the SAN). If you are unable
to access the IBM Storwize V5000, you do not have access to the backup data if it is on the
SAN. Perform this configuration backup after each configuration change to be safe.
The system can be configured by using the IBM Storwize V5000 management software
(GUI), CLI, or the USB key.
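The following sketch shows one way to take a configuration backup from the CLI and copy it off the system. The commands are standard Storwize family CLI, but the backup file name includes the system serial number, so the path shown is only an example:
  svcconfig backup                     # write svc.config.backup.xml to /dumps on the configuration node
  lsdumps                              # confirm the name of the backup file in /dumps
  # From a workstation, copy the backup off the system (addresses and paths are examples):
  scp superuser@<management_ip>:/dumps/svc.config.backup.xml* /local/config-backups/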
1.5.5 RAID
The IBM Storwize V5000 contains a number of internal drives, but these drives cannot be
directly added to storage pools. The drives must be included in a RAID array to provide
protection against the failure of individual drives.
These drives are referred to as members of the array. Each array has a RAID level. RAID
levels provide different degrees of redundancy and performance and have different
restrictions regarding the number of members in the array.
IBM Storwize V5000 supports hot spare drives. When an array member drive fails, the system
automatically replaces the failed member with a hot spare drive and rebuilds the array to
restore its redundancy. Candidate and spare drives can be manually exchanged with array
members.
Each array has a set of goals that describe the required location and performance of each
array. A sequence of drive failures and hot spare takeovers can leave an array unbalanced,
that is, with members that do not match these goals. The system automatically rebalances
such arrays when the appropriate drives are available.
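As an illustration of how internal drives become array MDisks, the following CLI sketch builds a RAID 5 array from candidate drives in an existing pool and designates a hot spare. The drive IDs and pool name are examples only and should be adapted to your configuration:
  lsdrive -filtervalue use=candidate            # list drives that are available to become array members
  mkarray -level raid5 -drive 0:1:2:3:4 Pool0   # create a RAID 5 array MDisk from five drives in pool Pool0
  chdrive -use spare 5                          # designate drive 5 as a hot spare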
An MDisk is invisible to a host system on the storage area network because it is internal to the
IBM Storwize V5000 system.
The clustered system automatically forms the quorum disk by taking a small amount of space
from an MDisk. It allocates space from up to three different MDisks for redundancy, although
only one quorum disk is active.
If the environment has multiple storage systems, allocate the quorum disks on different
storage systems to avoid the possibility of losing all of the quorum disks because of the
failure of a single storage system. It is possible to manage the quorum disks by using
the CLI.
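A brief, hedged example of checking and adjusting quorum disk placement from the CLI (the MDisk ID and quorum index are illustrative):
  lsquorum                             # show which MDisks or drives hold the three quorum indexes and which is active
  chquorum -mdisk 5 2                  # move quorum index 2 onto MDisk 5, for example one on a different storage system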
MDisks can be added to a storage pool at any time to increase the capacity of the pool.
MDisks can belong in only one storage pool. For more information, see Chapter 7, “Storage
pools” on page 309.
Each MDisk in the storage pool is divided into a number of extents. The size of the extent is
selected by the administrator when the storage pool is created and cannot be changed later.
The extent size ranges from 16 MB to 8 GB.
Default extent size: The GUI of IBM Storwize V5000 has a default extent size value of
1 GB when you define a new storage pool.
The extent size directly affects the maximum volume size and storage capacity of the
clustered system.
A system can manage 2^22 (4,194,304) extents. For example, with a 16 MB extent size, the
system can manage up to 16 MB x 4,194,304 = 64 TB of storage.
The effect of extent size on the maximum volume and cluster size is shown in Table 1-4. For
example, with a 16 MB extent size, the maximum volume capacity is 2048 GB (2 TB) and the
maximum system capacity is 64 TB.
Use the same extent size for all storage pools in a clustered system. This is a prerequisite if
you want to migrate a volume between two storage pools. If the storage pool extent sizes are
not the same, you must use volume mirroring to copy volumes between storage pools, as
described in Chapter 7, “Storage pools” on page 309.
A storage pool can have a threshold warning set that automatically issues a warning alert
when the used capacity of the storage pool exceeds the set limit.
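For example, the extent size and the capacity warning threshold can both be set when a pool is created from the CLI. The pool name and values below are illustrative, and the -ext value is specified in MB:
  mkmdiskgrp -name Pool0 -ext 1024 -warning 80%   # create a pool with 1 GB extents and an 80% used-capacity warning
  chmdiskgrp -warning 85% Pool0                   # adjust the warning threshold later, if required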
A multi-tiered storage pool will therefore contain MDisks with different characteristics unlike
the single-tiered storage pool. MDisks with similar characteristics then form the tiers within
the pool. However, each tier should have MDisks of the same size that provide the same
number of extents.
A multi-tiered storage pool is used to enable automatic migration of extents between disk tiers
using the IBM Storwize V5000 IBM Easy Tier® function, as described in Chapter 9, “Easy
Tier” on page 419. This functionality can help improve performance of host volumes on the
IBM Storwize V5000.
1.5.9 Volumes
A volume is a logical disk that is presented to a host system by the clustered system. In our
virtualized environment, the host system has a volume that is mapped to it by IBM Storwize
V5000. IBM Storwize V5000 translates this volume into a number of extents, which are
allocated across MDisks. The advantage with storage virtualization is that the host is
decoupled from the underlying storage, so the virtualization appliance can move around the
extents without impacting the host system.
The host system cannot directly access the underlying MDisks in the same manner as it can
access RAID arrays in a traditional storage environment.
Sequential
A sequential volume is a volume in which the extents are allocated sequentially, one after
another, from a single MDisk, as shown in Figure 1-10.
Some virtualization functions are not available for image mode volumes, so it is often useful to
migrate the volume into a new storage pool. After it is migrated, the MDisk becomes a
managed MDisk.
If you want to migrate data from an existing storage subsystem, use the Storage Migration
wizard, which guides you through the process.
If you add an MDisk that contains data to a storage pool, any data on the MDisk is lost. If you
are presenting externally virtualized LUNs that contain data to an IBM Storwize V5000, import
them as image mode volumes to ensure data integrity, or use the migration wizard.
In the simplest terms, iSCSI allows the transport of SCSI commands and data over an
Internet Protocol network that is based on IP routers and Ethernet switches. iSCSI is a
block-level protocol that encapsulates SCSI commands into TCP/IP packets and uses an
existing IP network instead of requiring FC HBAs and a SAN fabric infrastructure.
An iSCSI address specifies the iSCSI name of an iSCSI node and a location of that node. The
address consists of a host name or IP address, a TCP port number (for the target), and the
iSCSI name of the node. An iSCSI node can have any number of addresses, which can
change at any time, particularly if they are assigned by way of Dynamic Host Configuration
Protocol (DHCP). An IBM Storwize V5000 node represents an iSCSI node and provides
statically allocated IP addresses.
Each iSCSI node, that is, an initiator or target, has a unique IQN, which can have a size of up
to 255 bytes. The IQN is formed according to the rules that were adopted for Internet nodes.
The IQNs can be abbreviated by using a descriptive name, which is known as an alias. An
alias can be assigned to an initiator or a target.
For more information about configuring iSCSI, see Chapter 4, “Host configuration” on
page 155.
1.5.11 SAS
The SAS standard is an alternative method of attaching hosts to the IBM Storwize V5000.
The IBM Storwize V5000 supports direct SAS host attachment, which provides an
easy-to-use and affordable way to meet storage needs. Each SAS port device has a
worldwide unique 64-bit SAS address and operates at 6 Gbps.
When a host system issues a write to a mirrored volume, IBM Storwize V5000 writes the data
to both copies. When a host system issues a read to a mirrored volume, IBM Storwize V5000
requests it from the primary copy. If one of the mirrored volume copies is temporarily
unavailable, the IBM Storwize V5000 automatically uses the alternative copy without any
outage for the host system. When the mirrored volume copy is repaired, IBM Storwize V5000
resynchronizes the data.
A mirrored volume can be converted into a non-mirrored volume by deleting one copy or by
splitting away one copy to create a non-mirrored volume.
The mirrored volume copies can be of any type: image, striped, or sequential, and either
thin-provisioned or fully allocated. The two copies can be different volume types.
The use of mirrored volumes can also assist with migrating volumes between storage pools
that have different extent sizes. Mirrored volumes can also provide a mechanism to migrate
fully allocated volumes to thin-provisioned volumes without any host outages.
The Volume Mirroring feature is included as part of the base software and no license is
required.
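As an illustrative CLI sketch, a second copy can be added to an existing volume to make it mirrored, and a copy can later be split away as a separate volume. The volume and pool names are examples only:
  addvdiskcopy -mdiskgrp Pool1 volume01                   # add a second copy of volume01 in another pool
  lsvdisksyncprogress volume01                            # monitor synchronization of the new copy
  splitvdiskcopy -copy 1 -name volume01_split volume01    # split copy 1 away as a new, non-mirrored volume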
The real capacity determines the quantity of MDisk extents that are allocated for the volume.
The virtual capacity is the capacity of the volume that is reported to the host servers and to
other IBM Storwize V5000 functions.
The real capacity is used to store the user data and the metadata for the thin-provisioned
volume. The real capacity can be specified as an absolute value or a percentage of the virtual
capacity.
The thin provisioning feature can be used on its own to create over-allocated volumes, or it
can be used with FlashCopy. Thin-provisioned volumes can be used with the mirrored volume
feature as well.
If the user modifies the real capacity, the contingency capacity is reset to be the difference
between the used capacity and real capacity. In this way, the autoexpand feature does not
cause the real capacity to grow much beyond the used capacity.
A volume that is created with a zero contingency capacity goes offline when it must expand.
A volume with a non-zero contingency capacity stays online until it is used up.
The Thin Provisioning feature is included as part of the base software and no license is
required.
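The following is a minimal sketch of creating a thin-provisioned volume from the CLI, assuming a pool named Pool0. The -rsize value sets the initial real capacity as a percentage of the virtual capacity, and -autoexpand maintains the contingency capacity as the volume fills:
  mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -grainsize 256 -name thin_vol01   # 500 GB virtual capacity, 2% real capacity to start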
The Easy Tier function can be turned on or off at the storage pool and volume level.
It is possible to demonstrate the potential benefit of Easy Tier in your environment before
installing SSDs by using the IBM Storage Tier Advisor Tool.
For more information about Easy Tier, see Chapter 9, “Easy Tier” on page 419.
The Storage Migration feature is included as part of the base software and no license is
required.
FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the
management operations to be coordinated so that a common single point-in-time is chosen
for copying target volumes from their respective source volumes.
IBM Storwize V5000 also permits multiple target volumes to be FlashCopied from the same
source volume. This capability can be used to create images from separate points in time for
the source volume, and to create multiple images from a source volume at a common point in
time. Source and target volumes can be thin-provisioned volumes.
Reverse FlashCopy enables target volumes to become restore points for the source volume
without breaking the FlashCopy relationship and without waiting for the original copy
operation to complete. IBM Storwize V5000 supports multiple targets and thus multiple
rollback points.
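The following hedged CLI sketch creates and starts a FlashCopy mapping between two existing volumes of the same size. The volume names, background copy rate, and mapping name are examples:
  mkfcmap -source db_vol -target db_vol_copy -copyrate 50   # define the mapping between source and target
  prestartfcmap fcmap0                                       # prepare the mapping by flushing cached data for the source
  startfcmap fcmap0                                          # trigger the point-in-time copy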
For more information about FlashCopy copy services, see Chapter 10, “Copy services” on
page 451.
With the IBM Storwize V5000, Metro Mirror and Global Mirror are the IBM branded terms for
the synchronous Remote Copy and asynchronous Remote Copy functions.
By using the Metro Mirror and Global Mirror Copy Services features, you can set up a
relationship between two volumes so that updates that are made by an application to one
volume are mirrored on the other volume. The volumes can be in the same system or on two
different systems.
For both Metro Mirror and Global Mirror copy types, one volume is designated as the primary
and the other volume is designated as the secondary. Host applications write data to the
primary volume and updates to the primary volume are copied to the secondary volume.
Normally, host applications do not perform I/O operations to the secondary volume.
The Metro Mirror feature provides a synchronous copy process. When a host writes to the
primary volume, it does not receive confirmation of I/O completion until the write operation
completes for the copy on the primary and secondary volumes. This ensures that the
secondary volume is always up-to-date with the primary volume if a failover operation must be
performed.
Global Mirror can operate with or without cycling. When it is operating without cycling, write
operations are applied to the secondary volume as soon as possible after they are applied to
the primary volume. The secondary volume is less than one second behind the primary
volume, which minimizes the amount of data that must be recovered in the event of a failover.
However, this requires that a high-bandwidth link is provisioned between the two sites.
When Global Mirror operates with cycling mode, changes are tracked and where needed
copied to intermediate change volumes. Changes are transmitted to the secondary site
periodically. The secondary volumes are much further behind the primary volume, and more
data must be recovered in the event of a failover. Because the data transfer can be smoothed
over a longer time period, however, lower bandwidth is required to provide an effective
solution.
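As a hedged illustration, a remote copy relationship can be defined from the CLI as follows. A partnership between the two systems must already exist, the volume and system names are examples, and omitting the -global parameter creates a Metro Mirror (synchronous) relationship instead of Global Mirror:
  mkrcrelationship -master vol_primary -aux vol_secondary -cluster remote_system -global   # define a Global Mirror relationship
  startrcrelationship rcrel0           # start the initial background copy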
For more information about the IBM Storwize V5000 Copy Services, see Chapter 10, “Copy
services” on page 451.
You can maintain a chat session with the IBM service representative so that you can monitor
this activity and understand how to fix the problem yourself or allow the representative to fix it
for you.
When you access the website, you sign in and enter a code that the IBM service
representative provides to you. This code is unique to each IBM Assist On-site session. A
plug-in is downloaded on to your PC to connect you and your IBM service representative to
the remote service session. The IBM Assist On-site contains several layers of security to
protect your applications and your computers.
You also can use security features to restrict access by the IBM service representative.
Your IBM service representative can provide you with more information about the use of the
tool, if required.
You can configure IBM Storwize V5000 to send different types of notification to specific
recipients and choose the alerts that are important to you. When configuring Call Home to the
IBM Support Center, all events are sent via email only.
You can use the Management Information Base (MIB) file for SNMP to configure a network
management program to receive SNMP messages that are sent by the IBM Storwize V5000.
This file can be used with SNMP messages from all versions of IBM Storwize V5000
software.
To send email, you must configure at least one SMTP server. You can specify as many as five
other SMTP servers for backup purposes. The SMTP server must accept the relaying of email
from the IBM Storwize V5000 clustered system IP address. You can then use the
management GUI or the CLI to configure the email settings, including contact information and
email recipients. Set the reply address to a valid email address. Send a test email to check
that all connections and infrastructure are set up correctly. You can disable the Call Home
function at any time by using the management GUI or the CLI.
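Email notification and Call Home can also be configured from the CLI. The following sketch uses standard Storwize family commands, but the SMTP address, contact details, and recipient address are placeholders; use the IBM service address given during the GUI setup as the Call Home recipient:
  mkemailserver -ip 10.0.0.25 -port 25                     # define the SMTP server that relays email for the system
  chemail -reply [email protected] -contact "Jane Admin" -primary 5551234 -location "Building 22, first floor"
  mkemailuser -address [email protected] -error on -warning on -info on   # add a local alert recipient
  testemail -all                                           # send a test email to all configured recipients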
The Online Information Center also includes a Learning and Tutorial section where videos
describing the use and implementation of the IBM Storwize V5000 can be found.
Fibre Channel ports: Fibre Channel (FC) ports are required only if you are using FC
hosts or clustered systems arranged as two I/O groups. You can use the IBM Storwize
V5000 with Ethernet-only cabling for iSCSI hosts or use serial-attached SCSI (SAS)
cabling for direct-attach hosts.
For systems arranged as two I/O groups, up to eight hosts can be directly connected using
SAS ports 1 and 2 on each node canister, with SFF-8644 mini SAS HD cabling.
You should have a minimum of two Ethernet ports on the LAN, with four preferred for more
configuration access redundancy or iSCSI host access.
You should have a minimum of two Ethernet cable drops, with four preferred for more
configuration access redundancy or iSCSI host access. If you have two I/O groups, you
must have a minimum of four Ethernet cable drops. Ethernet port one on each node
canister must be connected to the LAN, with port two as optional.
Ports: Port 1 on each node canister must be connected to the same physical LAN or be
configured in the same VLAN and be on the same subnet or set of subnets.
Verify that the default IP addresses that are configured on Ethernet port 1 on each of the
node canisters (192.168.70.121 on node 1 and 192.168.70.122 on node 2) do not
conflict with existing IP addresses on the LAN. The default mask that is used with these IP
addresses is 255.255.255.0 and the default gateway address that is used is 192.168.70.1.
You should have a minimum of three IPv4 or IPv6 IP addresses for systems arranged as
one I/O group and a minimum of five if you have two I/O groups. One address is for the
clustered system and is what the administrator uses for management; one is needed for
each node canister for service access, as needed.
Disk drives: The disk drives that are included with the control enclosure (model 2077-12C
or 2077-24C) are part of the single SAS chain. The expansion enclosures should be
connected to the SAS chain as shown in Figure 2-3 on page 29, so that they can use the
full bandwidth of the system.
The advised SAN configuration is composed of a minimum of two fabrics that encompass all
host ports and any ports on external storage systems that are to be virtualized by the IBM
Storwize V5000. The IBM Storwize V5000 ports must have the same number of cables
connected and they must be evenly split between the two fabrics to provide redundancy if one
of the fabrics goes offline (planned or unplanned).
Zoning must be implemented after the IBM Storwize V5000, hosts, and optional external
storage systems are connected to the SAN fabrics.
To enable the node canisters to communicate with each other in band, create a zone with only
the IBM Storwize V5000 WWPNs (two from each node canister) on each of the two fabrics.
If an external storage system is to be virtualized, create a zone in each fabric with the IBM
Storwize V5000 WWPNs (two from each node canister) with up to a maximum of eight
WWPNs from the external storage system. Assuming every host has a Fibre Channel
connection to each fabric, create a zone with the host WWPN and one WWPN from each
node canister in the IBM Storwize V5000 system in each fabric. The critical point is that there
should only ever be one initiator (host HBA) in any zone. For load balancing between the
node ports on the IBM Storwize V5000, alternate the host Fibre Channel ports between the
ports of the IBM Storwize V5000.
There should be a maximum of eight paths through the SAN from each host to the IBM
Storwize V5000. Hosts where this number is exceeded are not supported. The restriction is
there to limit the number of paths that the multi-pathing driver must resolve. A host with only
two HBAs should not exceed this limit with proper zoning in a dual fabric SAN.
Create a host/IBM Storwize V5000 zone for each server that volumes are mapped to and
from the clustered system, as shown in the following examples in Figure 2-4:
Zone Host A port 1 (HBA 1) with all node canister ports 1
Zone Host A port 2 (HBA 2) with all node canister ports 2
Zone Host B port 1 (HBA 1) with all node canister ports 3
Zone Host B port 2 (HBA 2) with all node canister ports 4
Similar zones should be created for all other hosts with volumes on the IBM Storwize V5000
I/O groups.
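If the fabric is built on Brocade switches, the host zones that are described above might be created with commands similar to the following sketch. The zone name, configuration name, and WWPNs are placeholders, and the equivalent commands on other vendors' switches differ:
  zonecreate "HostA_P1_V5000", "10:00:00:05:1e:xx:xx:xx; 50:05:07:68:xx:xx:xx:01; 50:05:07:68:xx:xx:xx:02"
  cfgadd "PROD_FABRIC_A", "HostA_P1_V5000"
  cfgsave
  cfgenable "PROD_FABRIC_A"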
Verify the interoperability of the SAN switches or directors to which the IBM Storwize V5000
connects by following the requirements that are provided at this website:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
Ensure that the switches or directors are at firmware levels that are supported by the IBM
Storwize V5000.
Important: The IBM Storwize V5000 port login maximum that is listed in the restriction
document must not be exceeded. The document is available at this website:
http://www.ibm.com/storage/support/Storwize/V5000
Connectivity issues: If you have any connectivity issues between IBM Storwize V5000
ports and Brocade SAN Switches or Directors at 8 Gbps, see this website for the correct
setting of the fillword port config parameter in the Brocade operating system:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003699
Verify direct-attach interoperability with the IBM Storwize V5000 and the supported server
operating systems by following the requirements that are provided at this website:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
Inserting cables: It is possible to insert the cables upside down despite the keyway.
Ensure that the blue tag on the SAS connector is underneath when you are inserting the
cables.
It is possible to attach up to four hosts (one to each of the two available SAS ports on the two
node canisters) per I/O group. However, the advised configuration for direct attachment is to
have at least one SAS cable from the host connected to each node canister of the IBM
Storwize V5000. This configuration provides redundancy if one of the nodes goes offline, as
shown in Figure 2-8.
Ethernet port 1 is for accessing the management GUI, the service assistant GUI for the node
canister, and iSCSI host attachment. Port 2 can be used for the management GUI and iSCSI
host attachment.
Each node canister in a control enclosure connects over an Ethernet cable from Ethernet
port 1 of the canister to an enabled port on your Ethernet switch or router. Optionally, you can
attach an Ethernet cable from Ethernet port 2 on the canister to your Ethernet network.
Table 2-1 shows possible IP configuration of the Ethernet ports on the IBM Storwize V5000
system.
Table 2-1 Storwize V5000 IP address configuration options per node canister
Node canister 1 (management node canister): IPv4 or IPv6 management address on Ethernet port 1
Node canister 2 (partner node canister): IPv4 or IPv6 service address on Ethernet port 1
The management IP address is associated with one of the node canisters in the clustered
system and that node then becomes the configuration node. Should this node go offline
(planned or unplanned), the management IP address fails over to the other node’s Ethernet
port 1.
For more clustered system management redundancy, you should connect Ethernet port 2 on
each of the node canisters to the LAN, which allows for a backup management IP address to
be configured for access, if necessary.
Figure 2-10 shows a logical view of the Ethernet ports that are available for configuration of
the one or two management IP addresses. These IP addresses are for the clustered system
and therefore are associated with only one node, which is then considered the configuration
node.
Figure 2-11 shows a logical view of the Ethernet ports that are available for configuration of
the service IP addresses. Only port one on each node can be configured with a service IP
address.
If two connections per host are used, multipath software is also required on the host. iSCSI
hosts likewise require multipath software. All node canisters should be configured and
connected to the network so that any iSCSI host sees at least two paths to its volumes;
multipath software is required to resolve these paths.
Verify that the hosts that access volumes from the IBM Storwize V5000 meet the
requirements that are found at this website:
https://ibm.biz/BdFNu6
Multiple operating systems are supported by IBM Storwize V5000. For more information
about HBA/Driver/multipath combinations, see this website:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
As per the IBM System Storage Interoperation Center (SSIC), keep the following items under
consideration:
Host operating systems are at the levels that are supported by the IBM Storwize V5000.
HBA BIOS, device drivers, firmware, and multipathing drivers are at the levels that are
supported by IBM Storwize V5000.
If boot from SAN is required, ensure that it is supported for the operating systems that are
deployed.
If host clustering is required, ensure that it is supported for the operating systems that are
deployed.
All direct connect hosts should have the HBA set to point-to-point.
The information in the following checklist is helpful to have before the initial setup is
performed. The date and time can be manually entered, but to keep the clock synchronized,
use a network time protocol (NTP) service:
Document the LAN NTP server IP address that is used for synchronization of devices.
For alerts to be sent to storage administrators and to set up Call Home to IBM for service
and support, you need the following information:
Name of primary storage administrator for IBM to contact, if necessary.
Email address of the storage administrator for IBM to contact, if necessary.
Phone number of the storage administrator for IBM to contact, if necessary.
Physical location of the IBM Storwize V5000 system for IBM service (for example,
Building 22, first floor).
SMTP or email server address to direct alerts to and from the IBM Storwize V5000.
For the Call Home service to work, the IBM Storwize V5000 system must have access
to an SMTP server on the LAN that can forward emails to the default IBM service
address: [email protected] for Americas-based systems and
[email protected] for the rest of the World.
After the IBM Storwize V5000 initial configuration, you might want to add more users who can
manage the system. You can create as many users as you need, but the following roles
generally are configured for users:
Security Admin
Administrator
CopyOperator
Service
Monitor
The user in the Security Admin role can perform any function on the IBM Storwize V5000.
The user in the Administrator role can perform any function on the IBM Storwize V5000
system, except manage users.
User creation: The Security Admin role is the only role that has the create users function
and should be limited to as few users as possible.
The user in the CopyOperator role can view anything in the system, but the user can
configure and manage only the copy functions of the FlashCopy capabilities.
The user in the Monitor role can view object and system configuration information but cannot
configure, manage, or modify any system resource.
The only other role that is available is the service role, which is used if you create a user ID for
the IBM service representative. This user role allows IBM service personnel to view anything
on the system (as with the monitor role) and perform service-related commands, such as
adding a node back to the system after it is serviced or including disks that were excluded.
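For reference, users with these roles can also be created from the CLI. The user names and password below are examples only:
  lsusergrp                                                          # list the available roles (user groups)
  mkuser -name copyadmin -usergrp CopyOperator -password Passw0rd   # local user in the CopyOperator role
  mkuser -name monitor1 -usergrp Monitor -password Passw0rd          # read-only monitoring user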
The management GUI allows for troubleshooting and management tasks, such as checking the
status of the storage server components, updating the firmware, and managing the storage server.
The GUI also offers advanced functions, such as FlashCopy, Volume Mirroring, Remote
Mirroring, and Easy Tier. A command-line interface (CLI) for the IBM Storwize V5000 system
also is available.
This section describes system management using the GUI and CLI.
Complete the following steps to open the Management GUI from any web browser:
For more information about how to use this interface, see this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740.doc/v5000_ichome_740.html
After the initial configuration that is described in 2.10, “Initial configuration” on page 49 is
completed, the IBM Storwize V5000 Welcome window opens, as shown in Figure 2-12.
The initial IBM Storwize V5000 system setup should be done using the process and tools that
are described in 2.9, “First-time setup” on page 42.
IBM Storwize V5000 uses an initial setup process that is contained within a USB key. The
USB key is delivered with each storage system and contains the initialization application file
that is called InitTool.exe. The tool is configured with your IBM Storwize V5000 system
management IP address, the subnet mask, and the network gateway address by first
plugging the USB stick into a Windows or Linux system.
The IBM Storwize V5000 starts the initial setup when you plug in the USB key with the newly
created file in to the storage system.
USB key: If you cannot find the official USB key that is supplied with the IBM Storwize
V5000, you can use any USB key that you have and download and copy the initTool.exe
application from IBM Storwize V5000 Support at this website:
http://www.ibm.com/storage/support/Storwize/V5000
The USB stick contains a readme file that provides details about how to use the tool with
various operating systems. The following operating systems are supported:
Microsoft Windows (R) 7 (64-bit)
Microsoft Windows XP (32-bit only)
Apple Mac OS(R) X 10.7
Red Hat Enterprise Server 5
Ubuntu (R) desktop 11.04
Mac OS or Linux:
There are other options available through the Tasks section. However, these options
generally are only required after initial configuration. The options are shown in Figure 2-15
on page 45 and are accessed by selecting No to the initial question to configure a new
system. A second question asks if you want to view instructions on how to expand a
system with a new control enclosure. Selecting No to answer this question gives the
option to reset the superuser password or set the service IP of a node canister. Selecting
Yes (as shown in Figure 2-14) progresses through the initial configuration of the IBM
Storwize V5000.
Any expansion enclosures that are part of the system should be powered up and allowed
to come ready before the control enclosure. Follow the instructions to power up the IBM
Storwize V5000 and wait for the status LED to flash. Then, insert the USB stick in one of
the USB ports on the left side node canister. This node becomes the control node and the
other node is the partner node. The fault LED begins to flash. When it stops, return the
USB stick to the Windows PC.
Clustered system creation: While the clustered system is created, the amber fault
LED on the node canister flashes. When this LED stops flashing, remove the USB key
from IBM Storwize V5000 and insert it in your system to check the results.
If successful, a summary page is displayed that shows the settings that are applied to the
IBM Storwize V5000, as shown in Figure 2-19.
Follow the on-screen instructions to resolve any issues. The wizard assumes the system
that you are using can connect to the IBM Storwize V5000 through the network. If it cannot
connect, you must follow step 1 from a machine that does have network access to the IBM
Storwize V5000. After the initialization process completes successfully, click Finish.
We describe system initial configuration using the GUI in 2.10, “Initial configuration”.
If you just completed the initial setup, that wizard automatically redirects to the IBM Storwize
V5000 GUI. Otherwise, complete the following steps to complete the initial configuration
process:
1. Start the service configuration wizard using a web browser on a workstation and point it to
the system management IP address that was defined in Figure 2-16 on page 45. Enter the
default superuser password <passw0rd> (where 0 = zero), as shown in Figure 2-22.
2. After you are logged in, a welcome window opens, as shown in Figure 2-23.
4. The next window in the initial service setup is for setting up Email Event Notification. Select
Yes and click Next to set up email notification and call home, as shown in Figure 2-25.
Call Home: When Call Home is configured, the IBM Storwize V5000 automatically
creates a Support Contact with one of the following email addresses, depending on
country or region of installation:
US, Canada, Latin America, and Caribbean Islands: [email protected]
All other countries or regions: [email protected]
IBM Storwize V5000 can use Simple Network Management Protocol (SNMP) traps, syslog
messages, and Call Home email to notify you and the IBM Support Center when
significant events are detected. Any combination of these notification methods can be
used simultaneously.
To set up Call Home, you need the location details of the IBM Storwize V5000, Storage
Administrators details, and at least one valid SMTP server IP address. If you do not want
to configure Call Home now, it can be done later using the GUI by clicking Settings →
Notifications (for more information, see 2.10.2, “Configuring Call Home, email alert, and
inventory” on page 70).
If your system is under warranty or you have a hardware maintenance agreement, it is
advised that Call Home is configured to enable proactive support of the IBM Storwize
V5000.
5. Enter the system location information. These details appear in the Call Home data and
enable IBM Support to correctly identify where the IBM Storwize V5000 is located, as
shown in Figure 2-26.
6. In the next window, you must enter the contact details of the main storage administrator, as
shown in Figure 2-27. You can choose to enter the details for a 24-hour operations desk.
These details also are sent with any Call Home. This information allows IBM Support to
contact the correct people to quickly progress any issues.
8. The next window is for email server details. To enter more than one email server, click the
+ icon, as shown in Figure 2-29 and then click Apply and Next to commit.
9. The next window is for the email recipients. IBM Storwize V5000 can also configure local email
alerts. These can be sent to a storage administrator or an email alias for a team of
administrators or operators. To add more than one recipient, click the + icon, as shown in
Figure 2-30.
10.Clicking Apply and Next displays the summary window for the contact details, system
location, email server, call home, and email notification options, as shown in Figure 2-31.
13.The wizard then prompts for the superuser to log on and proceed with the configuration,
as shown in Figure 2-33.
14.Click Next and complete the following steps in the System Setup:
15.In the System Name window, enter the system name and click Apply and Next, as shown
in Figure 2-35.
Note: The IBM Storwize V5000 GUI shows the CLI commands that it runs as you go
through the configuration steps.
Note: Use the chsystem command to modify the attributes of the clustered system. This
command can be used at any time after a system has been created.
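For example, the following commands rename the system and then display its attributes. This
is a minimal sketch only; the name ITSO_V5000 is a placeholder, and other attributes (such as
the NTP server or the time zone) are changed with further chsystem parameters:

   chsystem -name ITSO_V5000
   lssystem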
17.In this next window, the IBM Storwize V5000 GUI provides help and guidance on the
licensing of certain system functions. A license must be purchased for each enclosure
that is attached to, or externally managed by, the IBM Storwize V5000. For more
information about external storage virtualization, see Chapter 11, “External storage
virtualization” on page 579. For each of the functions, enter the number of licensed
enclosures, as shown in Figure 2-37, and click Apply and Next to continue.
19.The next window is the Email Event Notification. Because the information has been
previously entered in step 6 on page 53, you can either update the information or simply
click Apply and Next through the steps.
20.The initial setup wizard moves on to the Configure Storage option next. This option takes
all the disks in the IBM Storwize V5000 and automatically configures them into optimal
RAID arrays for use as MDisks. If you do not want to automatically configure disks now,
select Configure storage later and you exit the wizard to the IBM Storwize V5000 GUI. If
you select Configure storage now, the system examines the enclosures and proposes
the best array configuration. Clicking Next moves to the summary window that shows
the RAID configuration that the IBM Storwize V5000 will implement, as shown in
Figure 2-39.
Depending on your system configuration, the storage pools are created when you click
Finish. Closing that task box completes the Initial configuration wizard and automatically
directs you to the Create Hosts task option on the GUI, as shown in Figure 2-40.
Selecting Cancel exits to the IBM Storwize V5000 GUI. There is also a host link to the
e-Learning tours that are available through the GUI.
After the hardware is installed, cabled, zoned, and powered on, a second control enclosure
will be visible from the IBM Storwize V5000 GUI, as shown in Figure 2-41.
3. Select the control enclosure and click Actions → Identify to turn on the identify LEDs of
the new canister, if required. Otherwise, click Next.
4. At this point, you are prompted to configure the new storage:
a. If you do not want to continue, click Cancel to quit the wizard and return to the IBM
Storwize V5000 main window.
When you choose to configure storage automatically, the wizard adds the new control
enclosure. The task takes all the disks in the new control enclosure and automatically
configures them into RAID arrays for use as MDisks, as shown in Figure 2-45.
Figure 2-46 IBM Storwize V5000 GUI with two I/O groups
Complete the following steps to add a new control enclosure using Custom configuration.
1. If you have the second I/O group ready, click the available I/O group as shown in
Figure 2-42 on page 62.
2. The add enclosure wizard starts as shown in Figure 2-47. Select Custom Configuration
and click Next.
Figure 2-49 New control enclosure that is shown as part of existing cluster
2. If the enclosure is properly cabled, the wizard identifies the candidate expansion
enclosure. Select the expansion enclosure and click Next, as shown in Figure 2-51.
If you choose Automatic configuration, the system will automatically configure the storage
into arrays for use as MDisks. If you choose Custom Configuration, the wizard offers a
more flexible way to configure the new storage, as shown in Figure 2-53.
To learn more about configuring internal storage, see 7.1, “Working with internal drives” on
page 310.
5. The new expansion enclosure is now shown as part of the cluster attached to its control
enclosure, as shown in Figure 2-55.
For more information about how to provision the new storage in the expansion enclosure, see
Chapter 7, “Storage pools” on page 309.
To configure the Call Home and email alert event notification in the IBM Storwize V5000 after
the Initial Configuration, complete the following steps:
1. Click Settings → Event Notifications, as shown in Figure 2-56.
2. If your system has the Email Notification disabled, as shown in Figure 2-57, you can
re-enable it by clicking Enable Notification.
If the Email and Call Home notifications were never configured in the IBM Storwize V5000,
a window opens and the wizard guides you to the steps described in 2.10, “Initial
configuration” on page 49.
3. Click Email → Edit as shown in Figure 2-58.
The fields to configure Call Home become available and you must enter accurate and
complete information about both the company and the contact. The Email contact is the
person who will be contacted by the support center about any issues and for further
information. You might want to enter a network operations center email address here for
24x7 coverage. Enter the IP address and the server port for one or more of the email
servers that will send email to IBM.
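If you prefer to script this configuration, similar settings can also be applied from the CLI. The
following commands are a sketch only; the server IP, contact details, and recipient address are
placeholders, and the exact parameters can vary by software level:

   mkemailserver -ip 9.20.153.1 -port 25
   chemail -reply [email protected] -contact "John Doe" -primary 555-0100 -location "Data center 1"
   mkemailuser -address [email protected] -usertype local -error on -warning on
   startemail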
The Management IP and Service IP addresses can be changed within the GUI as shown in
3.4.7, “Settings menu” on page 136.
The Service Assistant (SA) tool is a web-based GUI used to service individual node canisters,
primarily when a node has a fault and is in a service state. A node cannot be active as part of
a clustered system while it is in a service state. The SA is available even when the
management GUI is not accessible. The following information and tasks are included:
Status information about the connections and the node canister
Basic configuration information, such as configuring IP addresses
Service tasks, such as restarting the common information model object manager
(CIMOM) and updating the worldwide node name (WWNN)
Details about node error codes and hints about what to do to fix the node error
The Service Assistant GUI is available using a service assistant IP address on each node.
The SA GUI can also be accessed through the cluster IP address by appending /service to the
cluster management URL. If the system is down, the only other method of communicating
with the node canisters is through the SA IP address directly. Each node can have a single SA
IP address on Ethernet port 1. It is advised that these IP addresses are configured on all
Storwize V5000 node canisters.
To open the SA GUI, enter one of the following URLs into any web browser:
http(s)://cluster IP address of your cluster/service
http(s)://service IP address of a node/service
Example:
Management address: http://1.2.3.4/service
SA access address: http://1.2.3.5/service
When you access the SA by using the <cluster address>/service URL, the configuration node
canister SA GUI login window opens, as shown in Figure 2-60.
The SA interface can be used to view status and run service actions on the node to which the
user is connected and on the other nodes in the system.
The current canister node is displayed in the upper left corner of the GUI. As shown in
Figure 2-61, this is node 1. To change the canister, select the relevant node in the Change
Node section of the window. You see the details in the upper left change to reflect the new
canister.
The SA GUI provides access to service procedures and shows the status of the node
canisters. It is advised that these procedures should only be carried out if directed to do so by
IBM Support.
For more information about how to use the SA tool, see this website:
https://ibm.biz/BdEY8E
For information about supported browsers, see the IBM Storwize V5000 Information Center at
this website:
http://pic.dhe.ibm.com/infocenter/storwize/v5000_ic/index.jsp
Use the following information to log in to the IBM Storwize V5000 storage management:
User Name: superuser
Password: passw0rd (a zero replaces the letter o)
A successful login shows the System panel by default, as shown in Figure 3-2. Alternatively,
the last opened window from the previous session is displayed.
Clicking the logged-in user name in the top banner allows the user to log out, or manage their
password and SSH keys, as shown in Figure 3-3.
The System panel is new in software v7.4 and replaces the Overview and System Details
panels from older levels of software.
Figure 3-4 Three main sections of the IBM Storwize V5000 System panel
The Function Icons section shows a column of images. Each image represents a group of
interface functions. The icons enlarge with mouse hover and the following menus are shown:
Monitoring
Pools
Volumes
Hosts
Copy Services
Access
Settings
The System view section shows a 3D representation of the IBM Storwize V5000. Hovering
over any of the components will show a component overview, while right-clicking a
component will show an Action menu appropriate to that component. Clicking the arrow at the
bottom right of the graphic rotates it to show the rear of the system. This arrow may be blue or
red. Blue indicates no issues, while red indicates an issue of some kind with a component.
Hovering over or clicking the horizontal bars provides more information and menus, which are
described in 3.3, “Status indicator menus” on page 93.
There are also two links at the top of the System panel. The Actions link, at the top-left, opens
an Action menu as shown in Figure 3-5 and the Overview link at the top-right toggles an
Overview panel as shown in Figure 3-6.
For full details on the System panel, refer to “System panel” on page 97.
3.2 Navigation
Navigating the management tool is simple and, like most systems, there are many ways to
navigate. The two main methods are to use the Function Icons section or the Overview
drop-down.
This section describes the two main navigation methods and introduces the well-known
breadcrumb navigation aid and the Suggested Tasks aid. Information regarding the
navigation of panels with tables also is provided.
Figure 3-8 Options that are listed under Function Icons section
3.2.5 Actions
The IBM Storwize V5000 functional panels provide access to various actions that can be
performed, such as modify attributes and rename, add, or delete objects. The Action menu
options change, depending on the panel accessed.
The available Actions menus can be accessed by using one of two main methods: highlight
the resource and use the Actions drop-down menu, as shown in Figure 3-12, or right-click the
resource, as shown in Figure 3-13.
Sorting columns
Columns can be sorted by clicking the column heading. Figure 3-15 shows the result of
clicking the heading of the Capacity column. The table is now sorted and lists volumes with
the least amount of capacity at the top of the table.
Figure 3-17 shows the column heading Host Mappings as it is dragged to the required
location.
Important: Some users might run into a problem in which a context menu from the Firefox
browser is shown when right-clicking to change the column heading. This issue can be fixed
in Firefox by clicking Tools → Options → Content → Advanced (for the JavaScript setting)
and selecting Display or replace context menus.
The web browser requirements and configuration settings that are advised to access the
IBM Storwize V5000 management GUI can be found in the IBM Storwize V5000
Information Center at this website:
http://pic.dhe.ibm.com/infocenter/storwize/V5000_ic/index.jsp
Multiple selections
You also can select multiple items in a list by using a combination of the Shift or Ctrl keys.
Advanced Filter
To use the Advanced Filter feature, click in the Filter field, then click the Advanced Filter icon,
as shown in Figure 3-24.
Figure 3-26 shows the addition of more filters. Click the + icon to the right of the filter to add
further filters. Choose the appropriate logical operand (AND or OR). Click Apply to display
the filtered data. Filters can also be removed by clicking the - icon.
Figure 3-27 shows the result of the filter used in Figure 3-26. This displays all internal drives
that are Candidates or Spares. To reset a filter after the results have been displayed, click the
magnifying glass with a red cross through it, in the filter field.
Figure 3-29 The Allocated / Virtual Capacity bar that compares used capacity to real capacity
To change the capacity comparison, click the Allocated / Virtual status bar. This will toggle
between Allocated and Virtual. Figure 3-30 shows the new comparison of virtual capacity to
real capacity.
Figure 3-30 Changing the Allocated / Virtual comparison, virtual capacity to real capacity
If a status alert occurs, the health status bar can turn from green to yellow or to red. To show
the health status menu, click the attention icon on the left side of the health status bar, as
shown in Figure 3-33.
System panel
Select System in the Monitoring menu to open the panel. This panel shows a 3D graphical
representation of the front of the system and is shown in Figure 3-37.
Clicking the arrow at the bottom-right of the graphic, again, rotates it to show the front of the
system.
Both Figure 3-37 on page 97 and Figure 3-38 here, show a system with one I/O group.
Figure 3-39 on page 99 shows a system with dual I/O groups. Hovering over an enclosure will
show basic information, while clicking an enclosure will show a detailed view, as shown in
Figure 3-40 on page 99.
In this book, you will see examples using both single I/O group and dual I/O group views.
The curved band below the graphic of the system shows capacity usage, including physical
installed capacity, the amount of capacity allocated, and over-provisioned capacity.
Clicking any of the Overview icons takes the user to the associated panel. As an example,
clicking Arrays takes the user to the Pools → Internal Storage panel.
System 3D view
On the System panel, hovering over any of the components in the 3D view will show a
component overview, and right-clicking a component will show an Action menu appropriate to
that component.
2. Enclosure
The Enclosure properties show information such as the enclosure state, its machine type
and serial number, and Field Replaceable Unit (FRU) part number. Figure 3-44 shows the
Enclosure (Control or Expansion) properties.
4. Canister
The Canister properties show information such as Configuration node state, WWNN,
memory and CPU specifications, and FRU part number. Figure 3-46 shows the canister
properties.
The Battery properties show information such as state, charge state, and FRU part
number.
Event properties
To show actions and properties that are related to an event or to repair an event that is not the
next Recommended Action, right-click the event to show other options. Figure 3-51 shows the
selection of the Properties option.
Figure 3-56 Peak SAS Interface usage value over the last 5 minutes
Renaming pools
To rename a pool, select the pool from the pool filter and click the name of the pool.
Figure 3-60 shows that pool V5000_Pool_01 was selected to be renamed.
Volume functions
The Volumes by Pool panel also provides access to the volume functions via the Actions
menu, the Create Volumes option, and by right-clicking a listed volume. For more information
about navigating the Volume panel, see 3.4.3, “Volumes menu” on page 119. Figure 3-62
shows the volume functions that are available via the Volumes by Pool panel.
Figure 3-62 Volume functions are available via the Volume by Pools panel
Drive actions
Drive level functions, such as identifying a drive and marking a drive as offline, unused,
candidate, or spare, can be accessed here. Right-click a listed drive to show the Actions
menu. Alternatively, the drives can be selected and then the Action menu is used. For more
information, see “Multiple selections” on page 89. Figure 3-63 shows the Drive Actions menu.
Drive properties
Drive properties and dependent volumes can be displayed from the Internal Storage panel.
Select Properties from the Drive Actions menu. The drive properties panel shows the drive
attributes and the drive slot SAS port status. Figure 3-64 on page 113 shows the drive
properties with the Show Details option selected.
For full details on using the Configure Internal Storage wizard, see Chapter 7, “Storage pools”
on page 309.
Pool actions
To create, delete, or rename a pool, change a pool icon, view child pools, or view pool
properties, right-click a pool. Alternatively, the Actions menu can be used. Figure 3-70 shows the pool
actions.
Note: In software v7.4, a pool cannot be deleted if it contains volumes. The volumes must
first be deleted via the Volumes by Pool panel. This is a safeguard against accidental
volume deletion.
RAID actions
Using the MDisks by Pool panel, you can perform MDisk RAID tasks, such as Set Spare
Goal, Swap Drive, and Delete. To access these functions, right-click the MDisk, as shown in
Figure 3-71.
Create volumes
Click Create Volumes to open the Create Volumes panel, as shown in Figure 3-76.
Using this panel, you can select a preset of the type of volume to create. The presets are
designed to accommodate most user cases. The presets are generic, thin-provisioned,
mirror, or thin mirror. After a preset is determined, select the storage pool from which the
volumes are to be allocated. An area to name and size the volumes is shown.
The Create Volumes panel displays a summary that shows the real and virtual capacity that is
used if the proposed volumes are created. Click Create or Create and Map to Host to
continue.
Figure 3-77 shows a quantity of 3 in the Quantity field and this will create 3 volumes named
V5000_Vol_05, V5000_Vol_06, and V5000_Vol_07.
Host Actions
Host Actions, such as Create Host, Modify Mappings, Unmap All Volumes, Duplicate
Mappings, Rename, Delete, and Properties can be performed from the Hosts panel.
Figure 3-81 shows the actions that are available from the Hosts panel.
Creating a host
Click Create Host to open the Add Host panel. Choose the host type from Fibre Channel
(FC), iSCSI, or SAS host and the applicable host configuration panel is shown. After the host
type is determined, the host name and port definitions can be configured. Figure 3-82 shows
the Choose the Host Type panel.
Actions such as Create Host, Modify mappings, Unmap, Duplicate Mappings, Rename and
Properties can be performed from this panel by clicking the Actions drop-down menu.
FlashCopy panel
Select FlashCopy in the Copy Services menu to open the panel, as shown in Figure 3-86.
The FlashCopy panel displays all of the volumes that are in the system.
The Consistency Group panel shows the defined groups with the associated FlashCopy
mappings. Group Actions such as FlashCopy mapping Start, Stop, and Delete can be
performed from this panel. Create FlashCopy Mapping and Create Consistency Group can
also be selected from this panel. For more information, see “FlashCopy Mappings panel”.
Figure 3-87 shows the Consistency Group panel.
For more information about how to create and administer FlashCopy mappings, see
Chapter 8, “Advanced host and volume administration” on page 359.
Figure 3-91 shows the Partnership properties, which is accessed via the Actions menu.
From here, it is possible to change the Link bandwidth and background copy rate. The Link
Bandwidth in Megabits per second (Mbps) specifies the bandwidth that is available for
replication between the local and partner systems. The Background copy rate specifies the
maximum percentage of the link bandwidth that can be used for background copy operations.
This value should be set so that there is enough bandwidth available to satisfy host write
operations in addition to the background copy.
The authentication mode can be set to local or remote. Select local if the IBM Storwize V5000
performs the authentication locally. Select remote if a remote service, such as an LDAP
server, authenticates the connection. If remote is selected, the remote authentication server
must be configured in the IBM Storwize V5000 by using the Settings → Security
panel.
Notifications panel
Select Notifications in the Settings menu to open the panel. The IBM Storwize V5000 can
use Simple Network Management Protocol (SNMP) traps, syslog messages, emails, and IBM
Call Home to notify users and IBM when events are detected. Each event notification method
can be configured to report all events or alerts. Alerts are the significant events and might
require user intervention. The event notification levels are critical, warning, and information.
The Notifications panel provides access to the Email, SNMP, and Syslog configuration
panels. IBM Call Home is an email notification for IBM Support. It is automatically configured
as an email recipient and is enabled when the Email event notification option is enabled by
following the Call Home wizard.
Another Management IP address can be logically assigned to Ethernet port 2 of each node
canister for more fault tolerance. If the Management IP address is changed, use the new IP
address to log in to the Management GUI again. Click Management IP Addresses and then
click the port that you want to configure (the corresponding port on the partner node canister
is also highlighted). Figure 3-104 shows Management IP Addresses configuration panel.
Ethernet ports
The Ethernet ports can be used for iSCSI connections, host attachment, and remote copy.
Click Ethernet Ports to view the available ports. Figure 3-106 shows the Ethernet Ports panel
and associated Actions menu, which can be used to modify the port settings.
Licensing
The Licensing panel shows the current system licensing. The IBM Storwize V5000 uses the
same honor-based licensing as the Storwize V7000, which is based on per enclosure
licensing.
Figure 3-112 shows the Update License panel. In this example, an IBM Storwize V5000 is
configured as a control enclosure with two expansion enclosures and externally virtualizes
a storage system consisting of two disk enclosures. Therefore, five enclosures are licensed
for FlashCopy, Remote Copy, and Easy Tier, and two enclosures are licensed for External
Virtualization.
To update the software, the IBM Storwize V5000 software and the IBM Storwize V5000
Upgrade Test Utility must be downloaded. After the files are downloaded, it is best to check
the checksum to ensure that the files are sound. Read the release notes, verify compatibility,
and follow all IBM recommendations and prerequisites.
To update the software of the IBM Storwize V5000, click Update. Figure 3-113 shows the
Update System panel.
There are also two further check boxes: one allows the extent size to be changed when
creating a new pool, and the other enables low graphics mode.
The option to Allow extent size selection during pool creation toggles the Advanced Settings
button when creating a new pool, as can be seen in Chapter 7, “Storage pools” on page 309.
The option to Enable low graphics mode can be useful for remote access over low bandwidth
links. The Function Icons no longer enlarge and list the available functions. However, you can
navigate by clicking a Function Icon and by using the breadcrumb navigation aid.
After selecting the “Enable low graphics mode” check box, the user must log off and log on
again for this to take effect.
For more information about the Function Icons, see 3.1.3, “System panel layout” on page 78.
For more information about the breadcrumb navigation aid, see 3.2.2, “Breadcrumb
navigation aid” on page 82.
Figure 3-120 shows the selection of the panel help for the Volumes panel.
CLI: Refer to Appendix A, “Command-line interface setup and SAN Boot” on page 667 for
more information about the command-line interface setup as it applies to this chapter.
In short, IBM Storwize V5000 supports the following host attachment protocols:
8 Gb FC or 10 Gb iSCSI/FCoE (optional host interface)
6 Gb SAS (standard host interface)
1 Gb iSCSI (standard host interface)
In this chapter, we assume that your hosts are physically ready and attached to your FC and
IP network, or directly attached if using SAS Host Bus Adapters (HBAs) and that you have
completed the steps that are described in Chapter 2, “Initial configuration” on page 25.
Follow basic switch and zoning recommendations and ensure that each host has at least two
network adapter ports, that each adapter is on a separate network (or at minimum in a
separate zone), and that at least one connection to each node exists per adapter. This setup
assures redundant paths for failover and bandwidth aggregation.
Further information about SAN configuration is provided in 2.2, “SAN configuration planning”
on page 30.
For SAS attachment, ensure that each host has at least one SAS HBA connection to each
IBM Storwize V5000 canister. Further information about configuring SAS attached hosts is
provided in 2.4, “SAS direct-attach planning” on page 33.
Before new volumes are mapped to the host of your choice, it is advised that you prepare the
host; this helps with ease of use and reliability. There are several steps required on a host
system to prepare for mapping new IBM Storwize V5000 volumes. Use the System Storage
Interoperation Center (SSIC) to check which code levels are supported to attach your host to
your storage. SSIC is an IBM web tool that checks the interoperation of host, storage,
switches, and multipathing drivers. Go to the SSIC web page to get the latest IBM Storwize
V5000 compatibility information:
http://ibm.com/systems/support/storage/ssic/interoperability.wss
The following sections provide worked examples of host configuration for each supported
attachment type on Microsoft Windows Server 2012 R2 and VMware ESXi 5.5. If you must
attach hosts running any other supported operating system, the required information can be
found in the IBM Storwize V5000 Knowledge Center:
http://www.ibm.com/support/knowledgecenter/STHGUJ/landing/V5000_welcome.html
In general, the following steps are required to prepare a host to connect to IBM Storwize
V5000:
1. Make sure that the latest supported system updates are applied to your host operating
system.
2. Make sure that the HBAs are physically installed in the host.
3. Make sure that the latest supported HBA firmware and driver levels are installed on your
host.
4. Configure the HBA parameters. While settings are given for each host OS in the following
sections, review the IBM Storwize V5000 Information Center for the latest supported settings.
5. Configure host I/O parameters (such as the disk I/O timeout value).
6. Install and configure multipath software.
7. Determine the host WWPNs.
8. Connect the HBA ports to switches using the appropriate cables, or directly attach to the
ports on IBM Storwize V5000.
9. Configure the switches, if applicable.
10.Configure SAN Boot (optional).
Install the driver using Windows Device Manager or vendor tools, such as QLogic
QConvergeConsole, Emulex HBAnyware or Brocade HBA Software Installer. Also, check and
update the firmware level of the HBA using the manufacturer’s provided tools. Always check
the readme file to see whether there are Windows registry parameters that should be set for
the HBA driver.
Important: When using the IBM Subsystem Device Driver DSM (SDDDSM) for multipath
support, the HBA driver must be a Storport miniport driver.
Figure 4-1 Editing the disk I/O timeout value on Windows 2012 R2
The default timeout is 60 seconds, and this is the value advised by IBM. However, depending
on your requirements, you may want to adjust this higher or lower. The following link explains
in detail why it may be useful to adjust the timeout:
http://blogs.msdn.com/b/san/archive/2011/09/01/the-windows-disk-timeout-value-unde
rstanding-why-this-should-be-set-to-a-small-value.aspx
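The timeout is held in the Windows registry as the TimeOutValue (REG_DWORD) entry under
the Disk service key. As a sketch, it can be checked and set from an elevated PowerShell
prompt as follows (60 seconds shown; a reboot may be required for the change to take effect):

   Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Disk' -Name TimeOutValue
   Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Disk' -Name TimeOutValue -Value 60 -Type DWord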
The intent of MPIO is to provide better integration of a multipath storage solution with the
operating system. It also allows the use of multipath in the SAN infrastructure during the boot
process for SAN Boot hosts.
For FC attachment, the installation of IBM Subsystem Device Driver DSM (SDDDSM) is also
required. The SDDDSM setup program should enable MPIO automatically. Otherwise, follow
the foregoing steps to enable MPIO manually.
To ensure correct multipathing with IBM Storwize V5000 when using the FC protocol,
SDDDSM must be installed on Microsoft Windows hosts. To install SDDDSM, complete the
following steps:
1. Check the SDDDSM download matrix to determine the correct level of SDDDSM to install
on Windows 2012 R2 for IBM Storwize V5000:
http://ibm.com/support/docview.wss?uid=ssg1S7001350#WindowsSDDDSM
2. Download the software package from this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000350#Storwize
3. Copy the software package to your Windows server and on the server, run setup.exe to
install SDDDSM. A command prompt window opens, as shown in Figure 4-2. Enter Yes to
confirm the installation.
4. After the setup completes, enter Yes to restart the system, as shown in Figure 4-3.
After the server comes back online, the installation should be complete. You can check the
installed driver version by opening a PowerShell terminal and running the datapath query
version command, as shown in Example 4-1.
For more information about configuring SDDDSM for all supported platforms, refer to the
Multipath Subsystem Device Driver User’s Guide. This can be found at the following website:
http://www-01.ibm.com/support/docview.wss?rs=540&context=ST52G7&q=ssg1*&uid=ssg1S7
000303&loc=en_US&cs=utf-8&lang=en%20en
To determine the host WWPNs using SDDDSM, we can open a PowerShell terminal and run
the datapath query wwpn command, as shown in Example 4-2.
After the WWPNs are known, connect the cables and perform any necessary switch
configuration (such as zoning). The host is now prepared to connect to the IBM Storwize
V5000.
These are the basic steps to prepare a Windows 2012 R2 host for FC attachment. For
information about configuring FC attachment on the IBM Storwize V5000 side, see “Creating
FC hosts” on page 186. For information about discovering mapped FC volumes from
Windows 2012 R2, see “Discovering FC volumes from Windows 2012 R2” on page 220.
Install the driver using Windows Device Manager or vendor tools. Also, check and update the
firmware level of the HBA using the manufacturer’s provided tools. Always check the readme
file to see whether there are Windows registry parameters that should be set for the HBA
driver.
Important: For Converged Network Adapters (CNAs) which are capable of both FC and
iSCSI, it is important to ensure that the Ethernet networking driver is installed in addition to
the FCoE driver. This is required for configuring iSCSI.
If using a hardware iSCSI HBA, refer to the manufacturer’s documentation and IBM Storwize
V5000 Knowledge Center for further details on hardware and host OS configuration. The
following section describes how to configure iSCSI using the software initiator.
In Windows 2012 R2, the Microsoft iSCSI Initiator is preinstalled. To start the iSCSI Initiator,
open Server Manager and go to Tools → iSCSI Initiator, as shown in Figure 4-4.
If this is the first time you have started the iSCSI Initiator, the window shown in Figure 4-5
displays. Click Yes to enable the iSCSI Initiator service, which should now start automatically
on boot.
Make a note of the Initiator Name. This is the iSCSI Qualified Name (IQN) for the host, which
is required for configuring iSCSI host attachment on the IBM Storwize V5000. The IQN
identifies the host in the same way that WWPNs identify an FC or SAS host.
From the Configuration tab, it is possible to change the Initiator Name, enable CHAP
authentication, and more; however, these tasks are beyond the scope of our basic setup.
CHAP authentication is disabled by default. For more information, Microsoft has published a
detailed guide on configuring the iSCSI Initiator at this website:
http://technet.microsoft.com/en-us/library/ee338476%28v=ws.10%29.aspx
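If you prefer the command line, the iSCSI service can also be started and the IQN retrieved
from PowerShell. This is a sketch only; the Get-InitiatorPort cmdlet is part of the Storage
module that ships with Windows Server 2012 R2:

   Set-Service MSiSCSI -StartupType Automatic
   Start-Service MSiSCSI
   (Get-InitiatorPort | Where-Object { $_.ConnectionType -eq 'iSCSI' }).NodeAddress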
To configure Ethernet port IPs on Windows 2012 R2, complete the following steps:
1. Go to Control Panel → Network and Internet → Network and Sharing Center. This
should take you to the window shown in Figure 4-7.
In this case, we have two networks visible to the system. The first network is the one we
are using to connect to the server, consisting of a single dedicated Ethernet port for
management. The second network is our iSCSI network, consisting of two dedicated
Ethernet ports for iSCSI. It is advised to have at least two dedicated ports for failover
purposes.
3. To configure the IP address, click Properties. The window shown in Figure 4-9 displays.
6. Repeat the previous steps for each port you want to configure for iSCSI attachment.
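The same addressing can also be applied from PowerShell rather than the Network and
Sharing Center. The interface aliases and IP addresses below are placeholders for this
example and must match your own iSCSI network plan:

   New-NetIPAddress -InterfaceAlias 'iSCSI-1' -IPAddress 192.168.1.10 -PrefixLength 24
   New-NetIPAddress -InterfaceAlias 'iSCSI-2' -IPAddress 192.168.2.10 -PrefixLength 24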
It is important to note that IBM Subsystem Device Driver DSM (SDDDSM) is not supported
for iSCSI attachment, so do not follow the steps to install this as for FC or SAS.
These are the basic steps to prepare a Windows 2012 R2 host for iSCSI attachment. For
information about configuring iSCSI attachment on the IBM Storwize V5000 side, see 4.3.3,
“Creating iSCSI hosts” on page 191. For information about
discovering mapped iSCSI volumes from Windows 2012 R2, see “Discovering iSCSI volumes
from Windows 2012 R2” on page 229.
Install the driver using Windows Device Manager or vendor tools. Also, check and update the
firmware level of the HBA using the manufacturer’s provided tools. Always check the readme
file to see whether there are Windows registry parameters that should be set for the HBA
driver.
For LSI cards, the following website provides some guidance on flashing the HBA firmware
and BIOS:
http://mycusthelp.info/LSI/_cs/AnswerDetail.aspx?sSessionID=&aid=8352
These are the basic steps to prepare a Windows 2012 R2 host for SAS attachment. For
information about configuring SAS attachment on the IBM Storwize V5000 side, see 4.3.5,
“Creating SAS hosts” on page 198. For information about discovering mapped SAS volumes
from Windows 2012 R2, see “Discovering SAS volumes from Windows 2012 R2” on
page 235.
Install the driver using VMware vSphere Client, the ESXi CLI or vendor tools. Also, check and
update the firmware level of the HBA using the manufacturer’s provided tools. Always check
the readme file to see whether there is further configuration required for the HBA driver.
The IBM Storwize V5000 is an active/active storage device. For VMware ESXi 5.0 and later,
the suggested multipathing policy is Round Robin. Round Robin performs dynamic load
balancing for I/O. If you do not want to have the I/O balanced over all available paths, the
Fixed policy is supported. This policy setting can be configured for every volume. Set this
policy after IBM Storwize V5000 volumes have been mapped to the ESXi host. For more
information about mapping volumes to hosts, see “Discovering FC volumes from VMware
ESXi 5.5” on page 235.
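The policy can be set per volume in the vSphere Client or, as a sketch, from the ESXi command
line. The naa identifier below is a placeholder for the device ID of a mapped IBM Storwize
V5000 volume, which can be found with the list command:

   esxcli storage nmp device list
   esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR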
Connect to the ESXi server (or VMware vCenter) using the VMware vSphere Client and
browse to the Configuration tab. HBA WWPNs are listed under Storage Adapters, as
shown in Figure 4-11.
These are the basic steps required to prepare a VMware ESXi 5.5 host for FC attachment.
For information about configuring FC attachment on the IBM Storwize V5000 side, see 4.3.1,
“Creating FC hosts” on page 186. For information about discovering mapped FC volumes
from VMware ESXi 5.5, see “Discovering FC volumes from VMware ESXi 5.5” on page 235.
Install the driver using VMware vSphere Client, the ESXi CLI or vendor tools. Also, check and
update the firmware level of the HBA using the manufacturer’s provided tools. Always check
the readme file to see whether there is further configuration required for the HBA driver.
Important: For Converged Network Adapters (CNAs) which are capable of both FC and
iSCSI, it is important to ensure that the Ethernet networking driver is installed in addition to
the FCoE driver. This is required for configuring iSCSI.
Figure 4-12 Adding a new storage adapter via VMware vSphere Client
2. Click Add... and the window shown in Figure 4-13 displays. Select Add Software iSCSI
Adapter and click OK.
4. The iSCSI Software Adapter should now be listed under Storage Adapters, as shown in
Figure 4-15.
Figure 4-15 The Storage Adapters view showing the newly added iSCSI Software Adapter
5. To find the iSCSI initiator name and configure the iSCSI Software Adapter, right-click the
adapter and click Properties.... The window shown in Figure 4-16 displays. Make a note
of the initiator name and perform any other necessary configuration.
The VMware ESXi 5.5 iSCSI initiator is now enabled and ready for use with IBM Storwize
V5000. The next step required is to configure the Ethernet adapter ports.
Figure 4-19 Creating a new vSwitch using two physical adapter ports
6. Click Next to see what will be added to the host network settings and Finish to exit from
iSCSI configuration. The network configuration page should now look like Figure 4-22.
Figure 4-22 The network configuration page with the newly created vSwitch and VMkernel port
8. In this example, we create a second VMkernel port with IP 192.168.1.4. The network
configuration page should now look like Figure 4-24.
Important: If the VMkernel ports are reported as “Not Compliant”, complete the steps
described in “Multipath support for iSCSI on VMware ESXi” on page 181 and try
again.
Figure 4-26 The VMkernel ports added to the iSCSI Software Adapter
11.The window shown in Figure 4-27 displays. Click Yes to rescan the adapter and complete
this procedure.
For iSCSI, there is some extra configuration required in the VMkernel port properties to
enable path failover. Each VMkernel port must map to just one physical adapter port, which is
not the default setting. To fix this, complete the following steps:
1. Browse to the Configuration tab and select Networking. Click Properties... next to the
vSwitch you configured for iSCSI to open the window shown in Figure 4-28.
Figure 4-30 Configuring a VMkernel port to bind to a single physical adapter port
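After each VMkernel port is restricted to a single uplink, the binding to the iSCSI software
adapter can also be performed, or verified, from the ESXi command line. The adapter and
VMkernel interface names below (vmhba33, vmk1, vmk2) are placeholders and will differ on
your host:

   esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
   esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
   esxcli iscsi networkportal list --adapter vmhba33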
These are the basic steps required to prepare a VMware ESXi 5.5 host for iSCSI
attachment. For information about configuring iSCSI attachment on the IBM Storwize V5000
side, see 4.3.3, “Creating iSCSI hosts” on page 191. For information about discovering
mapped iSCSI volumes from VMware ESXi 5.5, see “Discovering iSCSI volumes from
VMware ESXi 5.5” on page 237.
For more information about configuring iSCSI attachment on the VMware ESXi side, the
following white paper published by VMware is a useful resource:
http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-softwa
re-iSCSI-port-binding.pdf
For LSI cards, the following website provides some guidance on flashing the HBA firmware
and BIOS:
http://mycusthelp.info/LSI/_cs/AnswerDetail.aspx?sSessionID=&aid=8352
The host WWPNs are not directly available through the VMware vSphere Client; however, they
can be found using vendor tools or the HBA BIOS. The method described in 4.2.3, “Windows 2012
R2: Preparing for SAS attachment” on page 167 also works.
These are the basic steps required to prepare a VMware ESXi 5.5 host for SAS
attachment. For information about configuring SAS attachment on the IBM Storwize V5000
side, see 4.3.5, “Creating SAS hosts” on page 198. For information about discovering
mapped SAS volumes from VMware ESXi 5.5, see “Discovering SAS volumes from VMware
ESXi 5.5” on page 248.
For further information and guidance on attaching storage with VMware ESXi 5.5, the
following document published by VMware is a useful resource:
http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter
-server-552-storage-guide.pdf
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004971
2. Click the Add Host button to open the Add Host wizard. This should open the window
shown in Figure 4-33.
Figure 4-33 Choosing the host type in the Add Host wizard
3. Choose whether to create an FC host, an iSCSI host, or a SAS host, and follow the steps
in the wizard. The following sections provide guidance for each attachment type.
2. Enter a descriptive Host Name and click the Fibre Channel Ports drop-down menu to
view a list of all FC WWPNs visible to the system, as shown in Figure 4-35.
Note: It is possible to enter WWPNs manually. However, if these are not visible to IBM
Storwize V5000, then the host object will appear as offline and be unusable for I/O
operations until the ports become visible.
4. Select the WWPNs for your host and click Add Port to List for each. These will be added
to Port Definitions, as shown in Figure 4-36.
5. In the Advanced Settings, it is possible to choose the Host Type. If using HP/UX,
OpenVMS or TPGS, this needs to be configured. Otherwise, the default option (Generic)
is fine.
6. From here, it is also possible to set the I/O Groups that your host will have access to. It is
important that host objects belong to the same I/O group(s) as the volumes you want to
map, otherwise the host will not have visibility of these volumes. For more information, see
Chapter 4, “Host configuration” on page 155.
Note: IBM Storwize V5000 supports a maximum of four nodes per system, arranged as
two I/O groups per cluster. Due to the host object limit per I/O group, for maximum host
connectivity, it is best to create hosts utilizing single I/O groups.
8. Click Close to return to the host view, which should now list your newly created host
object, as shown in Figure 4-38.
Figure 4-38 The hosts view with the newly created host object listed
After the host objects have been created, see Chapter 5, “Volume configuration” on page 201
to create volumes and map them to the hosts.
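The same host object can also be created from the CLI. The following is a sketch only; the
host name, WWPNs, and I/O group are placeholders for your own values:

   mkhost -name W2012_FC_Host -fcwwpn 2100000E1E30E597:2100000E1E30E598 -iogrp 0
   lshost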
There is a limit of 16 logins per node from another node before an error will be logged. A
combination of port masking and SAN zoning can help to manage this and allow for optimum
I/O performance with local, remote and host traffic.
2. Select Fibre Channel Ports and the FC port configuration view displays, as shown in
Figure 4-40.
3. To configure a port, right-click the port and select Modify Connection. This should bring
up the window shown in Figure 4-41.
In this example, we select None to restrict traffic on this port to host I/O only. Click Modify
to confirm the selection.
Note: In doing this, we are configuring Port 1 for all nodes. It is not possible to configure
FC ports on a per node basis.
You can view connections between nodes, storage systems and hosts by selecting Fibre
Channel Connectivity while in the network settings view. Choose which connections you
want to view and click Show Results, as shown in Figure 4-42.
Figure 4-42 Viewing FC connections between nodes, storage systems and hosts
2. Enter a descriptive Host Name and enter the iSCSI Initiator Name into the iSCSI Ports
box, then click Add Ports to List. This will appear under Port Definitions, as shown in
Figure 4-44. Repeat this step if several initiator names are required.
Note: IBM Storwize V5000 supports a maximum of four nodes per system, arranged as
two I/O groups per cluster. Due to the host object limit per I/O group, for maximum host
connectivity it is best to create hosts utilizing single I/O groups.
5. Click Add Host and the wizard will complete, as shown in Figure 4-37.
6. Click Close to return to the host view, which should now list your newly created host
object, as shown in Figure 4-38.
Figure 4-46 The hosts view with the newly created host object listed
After the host objects have been created, see Chapter 5, “Volume configuration” on page 201
to create volumes and map them to the hosts.
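As with FC hosts, iSCSI host objects can also be created from the CLI. This is a sketch only;
the host name and IQN below are placeholders:

   mkhost -name W2012_iSCSI_Host -iscsiname iqn.1991-05.com.microsoft:win-host01 -iogrp 0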
2. Select Ethernet Ports and the Ethernet port configuration view displays, as shown in
Figure 4-48.
Figure 4-49 Opening the Modify IP Settings window for Node 1, Port 2
Note: It is generally advised that at least one port is reserved for the management IP
address. This is normally Port 1, so in this example, we configure Port 2 for iSCSI.
4. The window shown in Figure 4-50 displays. To configure IPv6, click the IPv6 button to
reveal these options. Enter the desired IP address, subnet mask and gateway. It is
important to make sure that these exist in the same subnet as the host IP addresses, and
that the chosen address is not already in use. Click Modify to confirm.
Figure 4-50 Configuring an IP address, subnet mask and gateway for Port 2 of Node 1
6. Right-click the configured port again. This time, the options that were previously greyed
out should now be available. To confirm that the port is enabled for iSCSI, click Modify
iSCSI Hosts. The window shown in Figure 4-52 displays. If the port is not enabled, do so
using the drop-down box and click Modify to confirm.
8. Repeat the foregoing steps for all ports that need to be configured.
It is also possible to configure iSCSI aliases, an iSNS name, and CHAP authentication. These
options are located in the iSCSI view. To access this, click iSCSI while in the network settings
view, as shown in Figure 4-54.
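If you prefer the CLI, the node Ethernet ports can also be configured for iSCSI with the
cfgportip command. This sketch sets an IPv4 address on port 2 of node 1; the addresses are
placeholders and must match your iSCSI network:

   cfgportip -node 1 -ip 192.168.1.21 -mask 255.255.255.0 -gw 192.168.1.1 2
   lsportip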
2. Enter a descriptive Host Name and click the SAS Ports drop-down menu to view a list of
all SAS WWPNs visible to the system, as shown in Figure 4-56.
Note: It is possible to enter WWPNs manually. However, if these are not visible to IBM
Storwize V5000, then the host object will appear as offline and will be unusable for I/O
operations until the ports become visible.
4. Select the WWPNs for your host and click Add Port to List for each. These will be added
to Port Definitions, as shown in Figure 4-57.
5. In the Advanced Settings, it is possible to choose the Host Type. If using HP/UX,
OpenVMS, or TPGS, this needs to be configured. Otherwise, the default option (Generic)
is fine.
6. From here, it is also possible to set the I/O Groups that your host will have access to. It is
important that host objects belong to the same I/O group(s) as the volumes you want to
map, otherwise the host will not have visibility of these volumes. For more information, see
Chapter 5, “Volume configuration” on page 201.
Note: IBM Storwize V5000 supports a maximum of four nodes per system, arranged as
two I/O groups per cluster. Due to the host object limit per I/O group, for maximum host
connectivity, it is best to create hosts utilizing single I/O groups.
8. Click Close to return to the host view, which should now list your newly created host
object, as shown in Figure 4-38.
Figure 4-59 The hosts view with the newly created host object listed
After host objects have been created, see Chapter 5, “Volume configuration” on page 201 to
create volumes and map them to the hosts.
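SAS host objects can likewise be created from the CLI by using the -saswwpn parameter. The
host name and WWPN below are placeholders:

   mkhost -name W2012_SAS_Host -saswwpn 500062B200556140 -iogrp 0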
For information about preparing host operating systems and creating host objects in IBM
Storwize V5000, see Chapter 4, “Host configuration” on page 155.
The first part of this chapter describes how to create volumes of different types and map them
to host objects in IBM Storwize V5000, while the second part of this chapter describes how to
discover these volumes from the host operating system.
After completing this chapter, your basic configuration is complete and you can begin to store
and access data on the IBM Storwize V5000.
For information about advanced volume administration, such as reconfiguring settings for
existing volumes, see Chapter 8, “Advanced host and volume administration” on page 359.
To create new volumes using the IBM Storwize V5000 GUI, complete the following steps:
1. Open the volumes view by navigating to Volumes → Volumes, as shown in Figure 5-1.
Figure 5-3 Selecting a volume preset and pool in the Create Volume wizard
3. Choose whether to create Generic, Thin Provision, Mirror or Thin Mirror volumes, and
which pool to provision this storage from. Descriptions of the volume presets are provided
below, and the following sections provide guidance for creating each volume type.
By default, volumes created via the IBM Storwize V5000 GUI are striped across all available
MDisks in the chosen storage pool. The following presets are provided:
Generic: A striped volume that is fully provisioned, meaning that the capacity presented to
hosts is fully allocated on physical storage.
Creating generic volumes is described in 5.1.1, “Creating generic volumes” on page 204.
Thin provisioned: A striped volume that reports a greater capacity than is initially
allocated. Thin provisioned volumes are sometimes referred to as space efficient.
Such volumes have two capacities:
– The real capacity. This determines the quantity of extents that are initially allocated to
the volume.
– The virtual capacity. This is the capacity that is reported to all other components (such
as FlashCopy) and to the hosts.
Thin provisioning is useful for applications where the amount of physical disk space
actually used is often much smaller than the amount allocated. Thin provisioned volumes
can be configured to expand automatically when more physical capacity is used. It is also
possible to configure the amount of space that is initially allocated. IBM Storwize V5000
includes a configurable warning threshold that alerts system administrators when thin
provisioned volumes begin to reach their capacity.
Creating thin provisioned volumes is described in 5.1.2, “Creating thin provisioned
volumes” on page 206.
Figure 5-4 Creating generic volumes using the Create Volumes wizard
Important: The Create and Map to Host option is disabled if no hosts are configured on
the IBM Storwize V5000. For more information about configuring hosts, see Chapter 4,
“Host configuration” on page 155.
7. Click Close to return to the volumes view, which now lists your newly created volume(s),
as shown in Figure 5-7.
Figure 5-7 The volumes view with the newly created volume listed
8. Repeat these steps for all generic volumes you want to create.
If you did not choose Create and Map to Host in Step 5, see 5.2.2, “Manually mapping
volumes to hosts” on page 218 for instructions on mapping the volume(s) manually after host
creation.
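A generic volume can also be created from the CLI with the mkvdisk command. This is a
sketch only; the pool name, size, and volume name are placeholders:

   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name V5000_Vol_01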
Note: The capacity specified is the virtual capacity of the volume, while the real
capacity is 2% of this value. The real capacity can be modified in the advanced settings.
4. Click Advanced... to view more options. The Characteristics tab is the same as for
generic volumes, however, there are additional options under Capacity Management, as
shown in Figure 5-9.
Important: The Create and Map to Host option is disabled if no hosts are configured on
the IBM Storwize V5000. For more information about configuring hosts, see Chapter 4,
“Host configuration” on page 155.
6. If you clicked Create, the wizard will complete as shown in Figure 5-10.
Figure 5-11 The volumes view with the newly created volume listed
8. Repeat these steps for all thin provisioned volumes that you want to create.
If you did not choose Create and Map to Host in Step 5, see 5.2.2, “Manually mapping
volumes to hosts” on page 218 for instructions on mapping the volume(s) manually after
host creation.
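The CLI equivalent uses the -rsize parameter to set the initially allocated (real) capacity. The
following sketch uses placeholder values that match the 2% default real capacity and adds an
80% warning threshold:

   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -warning 80% -name V5000_ThinVol_01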
Figure 5-12 Creating mirrored volumes using the Create Volumes wizard
Note: The capacity specified is per copy, so the total capacity required will be the
number of mirrored pairs, multiplied by the capacity, then multiplied by two.
Here it is possible to set the mirror sync rate. Higher sync rates will provide greater
availability at a small cost to performance, and vice versa.
When you are done, click OK to return to the main wizard.
5. Click Create or Create and Map to Host when you are finished. Create and Map to Host
will create the volumes and then take you to the Host Mapping wizard, which is described
in 5.2, “Mapping volumes to hosts” on page 215.
Important: The Create and Map to Host option is disabled if no hosts are configured on
the IBM Storwize V5000. For more information about configuring hosts, see Chapter 4,
“Host configuration” on page 155.
7. Click Close to return to the volumes view, which now lists your newly created volume(s),
as shown in Figure 5-15.
Figure 5-15 The volumes view with the newly created volume listed
8. Repeat these steps for all mirrored volumes that you want to create.
If you did not choose Create and Map to Host in Step 5, see 5.2.2, “Manually mapping
volumes to hosts” on page 218 for instructions on mapping the volume(s) manually after host
creation.
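From the CLI, a mirrored volume is created by specifying two storage pools and -copies 2.
This is a sketch with placeholder names; the optional -syncrate parameter corresponds to the
mirror sync rate described above:

   mkvdisk -mdiskgrp Pool0:Pool1 -iogrp 0 -size 100 -unit gb -copies 2 -syncrate 80 -name V5000_MirrorVol_01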
Figure 5-16 Creating thin mirrored volumes using the Create Volumes wizard
Note: The capacity specified is per copy, so the total virtual capacity will be the number
of mirrored pairs, multiplied by the capacity, then multiplied by two.
4. Click Advanced... to view more options. Here you can configure Characteristics as for
generic volumes, Capacity Management as for thin provisioned volumes, and Mirroring
as for mirrored volumes.
After you are done, click OK to return to the main wizard.
5. Click Create or Create and Map to Host when you are finished. Create and Map to Host
will create the volumes and then take you to the Host Mapping wizard, which is described
in 5.2, “Mapping volumes to hosts” on page 215.
Important: The Create and Map to Host option is disabled if no hosts are configured on
the IBM Storwize V5000. For more information about configuring hosts, see Chapter 4,
“Host configuration” on page 155.
7. Click Close to return to the volumes view, which now lists your newly created volume(s),
as shown in Figure 5-18.
Figure 5-18 The volumes view with the newly created volume listed
8. Repeat these steps for all thin mirrored volumes that you want to create.
If you did not choose Create and Map to Host in Step 5, see 5.2.2, “Manually mapping
volumes to hosts” on page 218 for instructions on mapping the volume(s) manually after host
creation.
The first part of this section describes how to map volumes to hosts if you clicked Create and
Map to Host in 5.1, “Creating volumes in IBM Storwize V5000” on page 202. The second part
of this section describes the manual host mapping process that is used to create customized
mappings.
5.2.1 Mapping newly created volumes using Create and Map to Host
In this section, we assume that you are following one of the procedures in 5.1, “Creating
volumes in IBM Storwize V5000” on page 202 and clicked Create and Map to Host followed
by Continue when the Create Volumes task completed, as shown in Figure 5-19.
Figure 5-19 Continuing to map volumes after the Create Volumes task has completed
Figure 5-20 Selecting the I/O group and host to assign the mapping(s) to
First, select the I/O group you want to assign the mapping(s) to. Selecting the correct I/O
group is important if there is more than one group. As discussed in 5.1.1, “Creating
generic volumes” on page 204, when a volume is created, it is possible to define the
caching I/O group or the I/O group that owns the volume and is used to access it.
Therefore, your host must communicate with the same I/O group for the mapping to be
successful.
Figure 5-21 The host mapping wizard with a new volume mapped to a host
5. Click Close to return to the volumes view, which now lists your newly created volume(s),
as shown in Figure 5-23.
Figure 5-23 The volumes view with the newly created and mapped volume listed
Figure 5-24 Manually creating host mappings from the volumes view
2. This brings up the window shown in Step 1 of the procedure in 5.2.1, “Mapping newly
created volumes using Create and Map to Host” on page 215. Complete the steps in that
section to finish mapping the volumes.
The volumes should now be accessible from the host operating system. The following section
describes how to discover and use mapped volumes from hosts.
We assume that you have completed all of the following tasks described in this book so that
the hosts and the IBM Storwize V5000 are prepared:
Prepare your operating systems for attachment, including installing multipath support. For
more information, see Chapter 4, “Host configuration” on page 155.
Create host objects using the IBM Storwize V5000 GUI. For more information, see
Chapter 4, “Host configuration” on page 155.
Create volumes and map these to the host objects using the IBM Storwize V5000 GUI. For
more information, see 5.1, “Creating volumes in IBM Storwize V5000” on page 202, and
5.2, “Mapping volumes to hosts” on page 215.
First, we need to look up the UIDs of the volumes. Knowing the UIDs of the volumes helps
determine which is which from the host point of view. To do this, complete the following steps:
1. Open the volumes by host view by going to Volumes → Volumes by Host, as shown in
Figure 5-25.
Figure 5-26 The volumes by host view with a single host and volume mapping configured
2. Check the UID column and note the value for each mapped volume.
Next we need to discover the mapped volumes from the host. The following sections describe
how to discover mapped volumes from Windows 2012 R2 and VMware ESXi 5.5 for each
supported attachment type (FC, iSCSI and SAS).
Note: For Windows 2012 R2, we describe the use of SDDDSM for multipath for FC and
SAS, while 5.3.2, “Discovering iSCSI volumes from Windows 2012 R2” on page 229
describes how to configure native MPIO. This is because only native MPIO is supported for
iSCSI on Windows 2012 R2. For more information, see Chapter 4, “Host configuration” on
page 155.
Note: File and Storage Services should be enabled by default. If not, it can be enabled
by going to Manage → Add Roles and Features and following the wizard.
2. In File and Storage Services, go to Disks. In here all storage devices known to Windows
should be listed, as shown in Figure 5-28.
If your mapped volumes are not shown, go to Tasks → Rescan Storage, as shown in
Figure 5-29.
A bar will appear at the top of the screen to show that Windows is attempting to scan for
devices, as shown in Figure 5-30. Wait for this to complete.
4. We need to bring the disk online. To do this, right-click and click Bring Online, as shown in
Figure 5-32.
5. The window shown in Figure 5-33 displays. Click Yes to confirm that you want to bring the
disk online on this server.
Figure 5-33 Confirm that you want to bring the disk online
6. The next step is to initialize the disk. To do this, right-click the disk and click Initialize, as
shown in Figure 5-34.
8. The disk now appears as online and initialized with GPT partitioning, as shown in
Figure 5-36.
Figure 5-36 The mapped volume online and initialized as a GPT disk
The mapped volumes are now ready to be managed by the host operating system.
Note: In this context, “Volume” refers to a Windows concept and not specifically an IBM
Storwize V5000 volume.
2. The New Volume Wizard opens. Click Next on the first page to bring up the page shown in
Figure 5-38. Choose which server and disk to provision and click Next.
3. The next step is to choose the size of the volume, as shown in Figure 5-39. Set the volume
size and click Next to continue.
5. Choose a file system type, unit size and name for the new volume, as shown in
Figure 5-41. Click Next to continue.
6. Confirm the new volume configuration on the next page, click Next, and confirm that
volume creation is successful, as shown in Figure 5-42.
Figure 5-43 The new volume listed under the Volumes view
To query paths to discovered devices, open a PowerShell terminal, change to the SDDDSM
directory and run datapath.exe query device, as shown in Example 5-1.
Total Devices : 1
The output provides multipath information about the mapped volumes. In our example, one
disk is connected (Disk 1) and eight paths to the disk are available (State = Open). You can
also confirm that the serial numbers match the UIDs recorded earlier.
Important: Correct SAN switch zoning must be implemented to allow only eight paths to
be visible from the host to any one volume. Volumes with more than eight paths are not
supported.
In Example 5-2, we have changed the policy from Load Balancing to Round Robin.
DEV#: 1 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: ROUND ROBIN
SERIAL: 6005076300AF004B4800000000000027 LUN SIZE: 10.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 * Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 * Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
2 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 43 0
3 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 29 0
4 * Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
5 * Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
6 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 7 0
7 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 6 0
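The command that produces this change is not shown in the example output. As a hedged sketch, SDDDSM allows the path selection policy to be set per device with the datapath set device command; the device number (1 here) is taken from the datapath query device output:
datapath.exe set device 1 policy rr
Consult the SDDDSM documentation referenced below for the full list of policy keywords.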
For more information about configuring SDDDSM, see the Multipath Subsystem Device
Driver User’s Guide. This can be found at the following website:
http://www-01.ibm.com/support/docview.wss?rs=540&context=ST52G7&q=ssg1*&uid=ssg
1S7000303&loc=en_US&cs=utf-8&lang=en%20en
2. While in the iSCSI Initiator properties, log on to an IBM Storwize V5000 iSCSI port by typing
the port IP into the Target box and clicking Quick Connect, as shown in Figure 5-45.
4. Repeat the previous two steps for all iSCSI IPs that you want to connect to.
5. In the iSCSI Initiator properties, the connected ports should now be listed under
“Discovered targets”, as shown in Figure 5-47.
6. The next step is to scan for disks. Follow the steps in 5.3.1, “Discovering FC volumes from
Windows 2012 R2” on page 220 to do this. The number of disks listed should be the
number of mapped volumes multiplied by the number of paths, because we have not yet
enabled MPIO for iSCSI devices.
Note: IBM 2145 devices need to be added to the list of supported devices for MPIO to
work with IBM Storwize V5000. Click Add to do this if they are not already listed.
Click Add, then click Yes when prompted to restart. Wait for the server to finish restarting.
After the server is back online, the correct number of disks should now be listed under the
Disks view in File and Storage Services.
Note: In some cases, the “Add support for iSCSI devices” option is disabled. To enable
this option, you must already have a connection to at least one iSCSI device.
9. To make iSCSI devices readily available on system boot, reopen the iSCSI Initiator
properties and go to Volumes and Devices, as shown in Figure 5-51.
10.Follow the instructions in 5.3.1, “Discovering FC volumes from Windows 2012 R2” on
page 220 to bring the disks online and initialize them.
The mapped volumes are now ready to be managed by the host operating system.
To save details of the current MPIO configuration, including all connected devices and paths,
to a text file, open a PowerShell terminal and run mpclaim -v <file path>, for example,
mpclaim -v C:\multipath_config.txt. Some example output is shown in Example 5-3.
Registered DSMs: 1
================
+--------------------------------|-------------------|----|----|----|---|-----+
|DSM Name | Version |PRP | RC | RI |PVP| PVE |
|--------------------------------|-------------------|----|----|----|---|-----|
|Microsoft DSM |006.0002.09200.16384|0020|0016|0001|060| True|
+--------------------------------|-------------------|----|----|----|---|-----+
Microsoft DSM
=============
MPIO Disk0: 04 Paths, Round Robin with Subset, Implicit Only
SN: 6057630AF04B480000002B
Supported Load Balance Policies: FOO RRWS LQD WP LB
================================================================================
Confirm that all paths are active as expected and the serial numbers match the UIDs
recorded earlier.
To change the load balancing policy for all devices, run mpclaim -L -M <policy>, where the
allowed policies are:
1 Failover Only
2 Round Robin
5 Weighted Paths
6 Least Blocks
7 Vendor Specific
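For example, to set the policy for all devices to Round Robin, run the command with the policy number from the list above:
mpclaim -L -M 2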
The following link provides further information and guidance on using mpclaim.exe:
http://technet.microsoft.com/en-us/library/ee619743%28v=ws.10%29.aspx
The mapped volumes are now ready to be managed by the host operating system.
2. Click Rescan all.... The window shown in Figure 5-54 displays. Make sure “Scan for New
Storage Devices” is checked. Click OK to confirm.
3. The rescan task starts and appears under Recent Tasks, as shown in Figure 5-55. Wait
for this to finish.
4. If you now select the FC adapters, the mapped volumes are listed, as shown in
Figure 5-56 on page 236.
Check the UIDs you recorded earlier and confirm that all of the mapped volumes are
listed.
The mapped volumes are now ready to be managed by the host operating system.
Figure 5-58 Discover iSCSI targets dynamically from the iSCSI Initiator
3. The window shown in Figure 5-59 displays. Enter a target IP address, which should be
one of the IPs configured in 4.3.4, “Configuring IBM Storwize V5000 for iSCSI host
connectivity” on page 194. The target IP address is the iSCSI IP address of a node in the
I/O group from which the iSCSI volumes are mapped. Keep the default value of 3260 for
the port and click OK.
5. Repeat the previous two steps for each IBM Storwize V5000 iSCSI port you want to use
for iSCSI connections with this host.
iSCSI IP addresses: The iSCSI IP addresses are different from the cluster and
canister IP addresses. They are configured as described in Chapter 4, “Host
configuration” on page 155.
6. After all of the required ports have been added, close the iSCSI Initiator properties
window.
The window shown in Figure 5-61 displays. Click Yes to rescan the adapter.
7. In Storage Adapters, click the iSCSI Software Adapter. After the rescan has completed,
the mapped volumes are listed in the adapter details, as shown in Figure 5-62.
Check the UIDs you recorded earlier and confirm that all of the mapped volumes are
listed.
The mapped volumes are now ready to be managed by the host operating system. In the next
section we describe how to create a datastore from a mapped volume.
2. To create a new datastore, click Add Storage... to open the Add Storage wizard. The
window shown in Figure 5-64 displays. Select Disk/LUN and click Next.
Figure 5-65 Choosing the disk or LUN from which to create the new datastore
Figure 5-66 Selecting the File System Version for the new datastore
Figure 5-69 Selecting how much of the available disk space to use
9. The “Create VMFS datastore” task will start, and this will appear under Recent Tasks, as
shown in Figure 5-71. Wait for this to complete.
Figure 5-71 The “Create VMFS datastore” task under Recent Tasks
10.After the task is complete, the new datastore appears under Configuration → Storage,
as shown in Figure 5-72.
2. Click Manage Paths.... The window shown in Figure 5-74 displays. Select either Most
Recently Used, Round Robin, or Fixed, then click Change to confirm. Round Robin is
the suggested option, as this allows for I/O load balancing. For more information about the
available options, see Chapter 4, “Host configuration” on page 155.
The mapped volume is now configured for maximum availability and ready to be used.
The mapped volumes are now ready to be managed by the host operating system.
CLI: For more information about the command-line interface setup, see Appendix A,
“Command-line interface setup and SAN Boot” on page 667.
Manually migrating data: For more information about migrating data manually, see
Chapter 11, “External storage virtualization” on page 579.
To ensure system interoperability and compatibility between all elements that are connected
to the SAN fabric, check the proposed configuration with the IBM System Storage
Interoperation Center (SSIC). SSIC can confirm whether the solution is supported and provide
recommendations for hardware and software levels.
If the required configuration is not listed for support in the SSIC, contact your IBM sales
representative and ask for a Request for Price Quotation (RPQ) for your specific configuration.
For more information about the IBM SSIC, see this website:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
Attention: If you are migrating volumes from another IBM Storwize product, be aware that
the target Storwize system must be configured at the Replication layer for the source
system to discover it as a host. The default layer setting for Storwize V5000 is Storage. If
this change is not made, it will not be possible to add the target system as a host on the
source storage system, or to see the source volumes on the target system. For more
information about layers and how to change them, see Chapter 10,
“Copy services” on page 451.
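As a minimal sketch, assuming the chsystem command accepts the -layer parameter on this release as it does on other Storwize systems, the layer of the target system could be changed from the CLI and then verified as follows:
chsystem -layer replication
lssystem
The layer field in the lssystem output should then show replication. Confirm the exact procedure in Chapter 10 before making this change on a production system.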
Restrictions
Confirm that the following conditions are met:
You are not using the storage migration wizard to migrate cluster hosts, including clusters
of VMware hosts and Virtual I/O Servers (VIOS).
You are not using the storage migration wizard to migrate SAN boot images.
If the restriction options cannot be selected, the migration must be performed outside of this
wizard because more steps are required. For more information, see the IBM Storwize V5000
Knowledge Center at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740
.doc/v5000_ichome_740.html
The VMware vSphere Storage vMotion feature might be an alternative for migrating VMware
clusters. For more information, see this website:
http://www.vmware.com/products/vsphere/features/storage-vmotion.html
For more information about migrating SAN boot images, see Appendix A, “Command-line
interface setup and SAN Boot” on page 667.
If all options can be selected, click Next to continue. In all other cases, Next cannot be
selected and the data must be migrated without use of this wizard. Figure 6-5 shows the
storage migration wizard with all restrictions satisfied and prerequisites met.
SCSI ID: Record the SCSI ID of the LUs to which the host is originally mapped. Some
operating systems do not support changing the SCSI ID during the migration.
After all the data has been collected and the tasks carried out in the Map Storage section,
click Next to continue. The Storwize V5000 runs the discover devices task. After the task is
complete, click Close to continue. Figure 6-8 shows the results of the Discover Devices task.
Migrating MDisks
In the next step of the wizard, select the MDisks that are to be migrated and then click Next to
continue. Figure 6-9 shows the Migrating MDisks panel.
MDisk selection: Select only the MDisks that are applicable to the current migration plan.
After step 8 of the current migration completes, another migration plan can be started to
migrate any remaining MDisks.
The Storwize V5000 runs the Import MDisks task. After the task is complete, click Close to
continue. Figure 6-10 shows the result of the Import MDisks task. Note the volumes showing
in the background after the task completes.
Note: This step is optional; you may bypass it by selecting Next and moving to “Mapping
volumes to hosts” on page 264.
Follow this step of the wizard to select or configure new hosts as required. Figure 6-11 shows
the Configure Hosts panel.
Select your connection type and click Add Host. The next panel allows you to name the host,
assign ports (in our case, Fibre Channel WWPNs), and in the advanced settings, assign I/O
group ownership and host type as shown in Figure 6-13. More information about I/O group
assignment can be found in Chapter 4, “Host configuration” on page 155.
Names: The names of the image mode volumes must begin with a letter. The name can be
a maximum of 63 characters. The following valid characters can be used:
Uppercase letters (A - Z)
Lowercase letters (a - z)
Digits (0 - 9)
Underscore (_)
Period (.)
Hyphen (-)
Blank space
Hold Ctrl and click to select multiple volumes. Click Map to Host to select the host as shown
in Figure 6-18 and click Apply.
Click Cancel to return to the beginning of “Mapping volumes to hosts” on page 264. Click
Next to continue.
The process uses the volume mirroring function that is included with the Storwize V5000.
Select a pool and click Next. The task will begin and complete as shown in Figure 6-23.
The end of the storage migration wizard is not the end of the data migration task. It is still in
progress. A percentage indicator is displayed in the Storage Migration panel as shown in
Figure 6-25.
You will be asked to confirm the number of volumes that you want to finalize as shown in
Figure 6-27.
The status of those image mode MDisks is then unmanaged, as shown in the pools view in
Figure 6-29. When the finalization completes, the data migration to the IBM Storwize V5000 is
done. Remove the zoning and retire the external storage system.
The Windows 2008 host has existing data on the disks of an IBM DS3400 storage system.
That data must be migrated to the internal storage of the Storwize V5000. The Windows 2008
host has a dual port QLogic Host Bus Adapter (HBA) type QLE2562. Each of the Fibre
Channel switches is the IBM 2498-24B type. There are two host disks to be migrated: Disk 1
and Disk 2. Figure 6-30 shows the Windows 2008 Disk management panel. The two disks
feature defined volumes. The volume labels are Migration 1 (G: drive) and Migration 2
(H: drive).
The two disks to be migrated are on the IBM DS3400 storage system. Therefore, the disk
properties display the disk device type as an IBM1726-4xx FAStT disk device. To show this
disk attribute, right-click the disk to show the menu and then select Properties, as shown in
Figure 6-31.
Perform this task on all disks before the migration so that the same check can be done after
the disks are presented from the Storwize V5000. After the Storwize V5000 mapping and host
rescan, the disk device definitions should have changed to IBM 2145 Multi-Path disk devices.
This confirms that the disks are under Storwize V5000 control.
Important: If the external storage system cannot restrict access to specific hosts, all
volumes on the system must be migrated.
6. Follow Step 3 of the wizard (Map Storage); for each of the steps shown, do these tasks:
a. Create a list of all external storage system volumes that are migrated.
Create a DS3400 LU table.
b. Record the hosts that use each volume.
Create Host table.
c. Record the WWPNs associated with each host.
Add WWPNs to Host table.
Important: If the external storage system cannot restrict access to specific hosts, all
volumes on the system must be migrated.
d. Unmap the volumes that are being migrated from their current hosts and map them to
the host or host group that was created for the Storwize V5000.
Move the LUs from Hosts to the Storwize V5000 Host Group on DS3400.
e. Record the storage system LUN that is used to map each volume to this system.
Update the DS3400 LU table.
7. Follow Step 4 of the wizard to discover the external storage LUs as MDisks on the
Storwize V5000.
8. In Step 5 of the wizard, configure hosts by completing the following steps:
a. Create Host on Storwize V5000.
b. Select Host on Storwize V5000.
9. In Step 6 of the wizard, map volumes to hosts by completing the following steps:
a. Map volumes to Host on Storwize V5000.
b. Verify that disk device type is now 2145 on Host.
c. SDDDSM datapath query commands on Host.
10.In Step 7 of the wizard, select the internal storage pool on the Storwize V5000 to which
you want the volumes to be migrated. Ensure that sufficient space exists for all of the
volumes.
11.Finish the storage migration wizard.
12.Finalize the migrated volumes.
Detailed view of the storage migration wizard for the example scenario
The following steps provide a more detailed explanation of the wizard tasks for our
example scenario:
1. Search the IBM SSIC for scenario compatibility.
2. Back up all of the data that is associated with the host, DS3400, and Storwize V5000.
3. Start a New Migration to open the wizard on the Storwize V5000, as shown in “Accessing
the storage migration wizard” on page 252.
4. Follow step 1 of the wizard and select all of the restrictions and prerequisites, as shown in
Figure 6-4 on page 255. Click Next to continue.
5. Follow step 2 of the wizard, as shown in Figure 6-6 on page 257. Complete all of the steps
before you continue.
Pay attention to the following tasks:
a. Stop host operation or stop all I/O to volumes that you are migrating.
b. Remove zones between the hosts and the storage system from which you are
migrating.
Important: A WWPN is a unique identifier for each Fibre Channel port that is
presented to the SAN fabric.
Figure 6-35 shows the IBM DS3400 storage manager host definition and the associated
WWPNs.
Record the WWPNs for alias, zoning, and the Storwize V5000 New Host task.
Note: The WWPNs shown in this diagram are not the same as those used in the example
scenario.
WWPN: The WWPN consists of eight bytes (two digits per byte). In Figure 6-38, the third
byte pair in the listed WWPNs is 04, 08, 0C, or 10; these are the only bytes that differ
between the WWPNs. Also, the last two bytes in the listed example, 04BF, are unique to
each node canister. Taking note of these kinds of patterns can help when you are zoning or
troubleshooting SAN issues.
Aliases can contain the FC Switch Port to which the device is attached, or the attached
device’s WWPN. In this example scenario, WWPN-based zoning is used instead of
port-based zoning. Either method can be used; however, it is best not to intermix the methods
and keep the zoning configuration consistent throughout the fabric.
Important: If you cannot restrict volume access to hosts using the external storage
system, all volumes on the system must be migrated.
To complete this step, an IBM DS3400 Host Group is defined for the Storwize V5000, which
contains two hosts. Each host is a node canister of the Storwize V5000.
Figure 6-42 IBM DS3400 storage manager Configure tab: Configure host
Figure 6-44 IBM DS3400 storage manager specifying HBA host ports: Edit alias
A host definition must be created for the other node canister. This host definition is also
associated to the Host Group Storwize_V5000. To configure the other node canister,
complete the steps described in “Creating IBM DS3400 hosts” on page 284.
Figure 6-50 shows the host topology of the defined Storwize_V5000 Host Group with both of
the created node canister hosts, as seen through the DS3400 Storage Manager software.
Figure 6-50 IBM DS3400 host group definition for the Storwize V5000
Now that the environment is prepared, return to step 2 of the Storage Migration wizard in the
Storwize V5000 GUI and click Next to continue, as shown Figure 6-6 on page 257.
Follow step 3 of the Storage Migration wizard and record the details, as shown in Figure 6-7
on page 258.
Creating a list of all external storage system volumes being migrated and
recording the hosts that use each volume
Table 6-3 shows a list of the IBM DS3400 LUs that are to be migrated and the hosts that use
them.
Table 6-3 List of the IBM DS3400 logical units that are migrated and hosted
LU name Controller Array SCSI ID Host name Capacity
Record the WWPNs that are associated with each host as shown in Table 6-4. It is advised
that you record the HBA firmware, HBA device driver version, adapter information, operating
system, and V5000 multi-path software version, if possible.
Unmap all volumes that are migrated from the hosts in the storage system and map them to
the host or host group that you created when your environment was prepared.
Important: If you cannot restrict volume access to specific hosts using the external
storage system, all volumes on the system must be migrated.
Figure 6-52 shows the IBM DS3400 logical drives mapping information before the change.
The volumes are accessible from the Windows host only.
Figure 6-52 IBM DS3400 Logical drives mapping information before changes
To modify the mapping definition so that the LUs are accessible only from the Storwize V5000
Host Group, select Change... to open the Change Mapping panel and modify the mapping as
shown in Figure 6-53.
Recording the storage system LUN used to map each volume to this system
The LUNs that are used to map the logical drives remained unchanged and can be found in
Table 6-3 on page 292. Now that step 3 of the storage migration wizard is complete, click
Next to show the Detect MDisks running task, as shown in Figure 6-7 on page 258.
Follow the next step of the Storage Migration wizard, as shown in Figure 6-57 and detect the
MDisks to import. The MDisk name is allocated depending on the order of device discovery;
mdisk0 in this case is LUN 0 and mdisk1 is LUN 1. There is an opportunity to change the
MDisk names to something more meaningful to the user in later steps.
After the Import MDisks running task is complete, select Close. In the next stage of the
wizard we define our host as shown in Figure 6-59. The Windows 2008 host is not yet defined
in the Storwize V5000. Select Create Host to open the Create Host panel.
After all of the port definitions are added, click Add Host to begin the Create Host running
task. After the Create Host running task is complete, select Close to return to the Configure
Hosts screen, as shown in Figure 6-61. Note that the Windows host is now shown.
Select the host that was configured and click Next to move to the next stage of the wizard.
At this point, it is possible to rename the MDisks to reflect something more meaningful.
Right-click the MDisk and select Rename to open the Rename Volume panel, as shown in
Figure 6-62.
After the final rename running task is complete, click Close to return to the Map Volume to
Host pane of the wizard, as shown in Figure 6-64. Highlight the two MDisks and select Map
to Host to open the Modify Host Mappings panel.
The MDisks that were highlighted are now shown in yellow in the Modify Host Mappings
panel. The yellow highlighting means that the volumes are not yet mapped to the host. Now is
the time to edit the SCSI ID, if required. (In this case, it is not necessary.) Click Map Volumes
to start the Modify Mappings task, as shown in Figure 6-66.
Verifying that migrated disk device type is now 2145 on the host
The migrated volumes are now mapped to the Storwize V5000 host definition. The migrated
disks properties show the disk device type as an IBM 2145 Multi-Path disk device. To confirm
that this information is accurate, right-click the disk to open the menu and select Properties,
as shown in Figure 6-68.
Figure 6-68 Display the disk properties from the Windows 2008 disk migration panel
After the disk properties panel is opened, the General tab shows the disk device type, as
shown in Figure 6-69.
The SDDDSM can also be used to verify that the migrated disk device is connected correctly.
Open the SDDDSM command-line interface (CLI) to run the disk and adapter queries. As an
example, on a Windows 2008 R2 SP1 host, click Subsystem Device Driver DSM to open the
SDDDSM CLI window, as shown in Figure 6-70.
Example 6-4 Output from datapath query adapter and datapath query device SDDDSM commands
C:\Program Files\IBM\SDDDSM>datapath query adapter
Active Adapters :2
Total Devices : 2
Now that we have imported the image mode volumes from the external storage device into
the IBM Storwize V5000 and mapped these volumes to the new host definitions, the next step
in the storage migration wizard is to select an internal pool in which the migration target
volumes will be created. Click Next to open the Select Pool option of the wizard, as shown in
Figure 6-71.
After the Start Migration running task is complete, select Close as shown in Figure 6-72.
The wizard has now completed. Click Finish to open the System Migration panel, as shown
in Figure 6-73.
When the volume migrations are complete, select the volume migration instance, right-click,
and select Finalize, as shown in Figure 6-75.
The image mode volumes are deleted and the associated image mode MDisks are removed
from the migration storage pool. The status of those image mode MDisks is now unmanaged.
When the finalization completes, the data migration to the IBM Storwize V5000 is done.
Remove the DS3400-to-Storwize V5000 zoning and retire the external storage system.
Storage pools can be configured through the Initial Setup wizard when the system is first
installed, as described in Chapter 2, “Initial configuration” on page 25. They can also be
configured after the initial setup through the management GUI, which provides a set of
presets to help you configure different RAID types.
The advised configuration presets configure all drives into RAID arrays based on drive class
and protect them with the appropriate number of spare drives. Alternatively, you can
configure the storage to your own requirements. Selections include the drive class, the
number of drives to configure, whether to configure spare drives, and whether to optimize for
performance or capacity.
The IBM Storwize V5000 storage system provides an Internal Storage window for managing
all internal drives. The Internal Storage window can be accessed by opening the System
window, clicking the Pools function icon, and then clicking Internal Storage, as shown in
Figure 7-1.
On the right side of the Internal Storage window, the internal disk drives of the selected type
are listed. By default, the following information is also listed:
Logical Drive ID
Drive capacity
Current type of use (unused, candidate, member, spare, or failed)
Status (online, offline, or degraded)
MDisk name that the drive is a member of
Enclosure ID that it is installed in
Slot ID of the enclosure in which the drive is installed
The default sort order is by enclosure ID (this default can be changed to any other column by
left-clicking the column header). To toggle between ascending and descending sort order,
left-click the column header again.
You can also find the overall internal storage capacity allocation indicator in the upper right
corner. The Total Capacity shows the overall capacity of the internal storage installed in the
IBM Storwize V5000 storage system. The MDisk Capacity shows the internal storage
capacity that is assigned to the MDisks. The Spare Capacity shows the internal storage
capacity that is used for hot spare disks.
The percentage bar that is shown in Figure 7-4 indicates how much capacity is allocated.
Depending on the status of the drive selected, the following actions are available:
Take Offline
The internal drives can be taken offline if there are problems with them. A confirmation
window opens, as shown in Figure 7-6.
If insufficient spare drives are available and a drive must be taken offline, the second option,
which accepts no redundancy, must be selected. This option results in a degraded storage
pool because of the degraded MDisk, as shown in Figure 7-8.
The IBM Storwize V5000 storage system prevents the drive from being taken offline if there
might be data loss as a result. A drive cannot be taken offline (as shown in Figure 7-9) if no
suitable spare drives are available and, based on the RAID level of the MDisk, drives are
already offline.
Figure 7-9 Internal drive offline not allowed because of insufficient redundancy
Example 7-1 The use of the chdrive command to set drive to failed
chdrive -use failed driveID
chdrive -use failed -allowdegraded driveID
Mark as
The internal drives in the IBM Storwize V5000 storage system can be assigned the following
usage roles, as shown in Figure 7-10:
Unused: The drive is not in use and cannot be used as a spare.
Candidate: The drive is available for use in an array.
Spare: The drive can be used as a hot spare, if required.
Identify
Use the Identify action to turn on the LED light so that you can easily identify a drive that must
be replaced or that you want to troubleshoot. The panel that is shown in Figure 7-12 appears
when the LED is on.
Example 7-2 shows how to use the chenclosureslot command to turn on and off the drive
LED.
Example 7-2 The use of the chenclosureslot command to turn on and off drive LED
chenclosureslot -identify yes/no -slot slot enclosureID
Use the Show Dependent Volumes option before you perform maintenance to determine
which volumes are affected.
Important: A lack of listed dependent volumes does not imply that there are no volumes
using the drive.
Figure 7-13 shows an example if no dependent volumes are detected for a specific drive.
Example 7-3 shows how to view dependent volumes for a specific drive by using the CLI.
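As a hedged sketch of such a command, assuming the lsdependentvdisks syntax on this release matches other Storwize CLI releases, the volumes that depend on a specific drive (drive 9 in this illustration) can be listed as follows:
lsdependentvdisks -drive 9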
If the Show Details option is not selected, the technical information section is reduced, as
shown in Figure 7-16.
Example 7-4 shows how to use the lsdrive command to display configuration information
and drive VPD.
Example 7-4 The use of the lsdrive command to display configuration information and drive VPD
IBM_Storwize:ITSO_V5000:superuser>lsdrive 9
id 9
status online
error_sequence_number
use member
UID 5000cca013085fa4
tech_type sas_ssd
capacity 372.1GB
block_size 512
vendor_id IBM-B040
product_id HUSML4040ASS60
FRU_part_number 00Y5815
FRU_identity 11S49Y7471YXXXXYV4LSLA
RPM
firmware_level J3C0
FPGA_level
mdisk_id 8
mdisk_name mdisk7
member_id 0
enclosure_id 1
slot_id 2
node_id
node_name
quorum_id
Customize Columns
Clicking Customize Columns in the Actions menu allows you to add or remove columns
available in the Internal Storage view. To restore the default view, select Restore
Default View, as shown in Figure 7-18.
If the internal storage of an IBM Storwize V5000 was not configured during the initial setup,
arrays and MDisks can be configured by clicking Configure Storage in the Internal Storage
window, as shown in Figure 7-19.
Table 7-1 describes the presets that are used for Flash drives for the IBM Storwize V5000
storage system.
Flash RAID instances: In all Flash RAID instances, drives in the array are balanced
across enclosure chains, if possible.
Note: The Delete Pool option only becomes available when the pool is not provisioning
volumes.
Warning: When a pool is deleted by using the CLI with the -force parameter, all mappings
on, and all data contained within, any volumes provisioned from this pool are deleted and
cannot be recovered.
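As a hedged example of this CLI form (the pool name is illustrative), the command is:
rmmdiskgrp -force Pool1
Use this form only when you are certain that none of the data in the pool is still required.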
If there are internal drives with a status of Unused, a window opens, which gives the option to
include them in the RAID configuration, as shown in Figure 7-24.
When the decision is made to include the drives into the RAID configuration, their status is set
to Candidate, which also makes them available for a new MDisk.
Selecting Use the recommended configuration guides you through the wizard that is
described in "Option: Use the recommended configuration" on page 327.
Selecting Select a different configuration uses the wizard that is described in "Option:
Select a different configuration" on page 330.
Therefore, the wizard will automatically create a number of MDisks as shown in Figure 7-26.
Spare drives are also automatically created to meet the spare goals according to the preset
chosen; one spare drive is created out of every 24 disk drives of the same drive class on a
single chain. Spares are not created if sufficient spares are already configured.
Spare drives in the IBM Storwize V5000 are global spares, which means that any spare drive
that has at least the same capacity as the drive to be replaced can be used in any array. Thus,
a Flash array with no Flash spare available uses a SAS spare instead.
If the proposed configuration meets your requirements, click Finish, and the system
automatically creates the array MDisks.
Storage pools are also automatically created to contain the MDisks. MDisks with similar
performance characteristics, including RAID level, number of member drives, and drive class
are grouped into the same storage pool.
After an array is created, the Array MDisk members are synchronized with each other through
a background initialization process. The progress of the initialization process can be
monitored by clicking the icon at the left of the Running Tasks status bar and selecting the
initialization task to view the status, as shown in Figure 7-27.
Click the taskbar to open the progress window, as shown in Figure 7-28. The array is
available for I/O during this process.
Figure 7-33 shows that there are a suitable number of drives to configure performance
optimized arrays.
Four RAID 5 arrays are possible and all provisioned drives are used.
Capacity
The capacity of the individual MDisks. The capacity for the storage pool is also shown,
which is the total of all the MDisks in this storage pool. The usage of the storage pool is
represented by a bar and the number as shown in Figure 7-41.
Mode
An MDisk features the following modes:
– Array
Array mode MDisks are constructed from internal drives by using the RAID
functionality. Array MDisks are always associated with storage pools.
– Unmanaged
The MDisk is not a member of any storage pools, which means it is not used by the
IBM Storwize V5000 storage system. LUNs that are presented by external storage
systems to IBM Storwize V5000 are discovered as unmanaged MDisks.
– Managed
Managed MDisks are LUNs that are presented by external storage systems to an IBM
Storwize V5000 that are assigned to a storage pool and provide extents for volumes
creation. Any data that was on these LUNs when they were assigned to the pool is lost.
– Image
Image MDisks are LUNs that are presented by external storage systems to an IBM
Storwize V5000 and assigned directly to a volume with a one-to-one mapping of blocks
between the MDisk and the volume. This status is an intermediate status of the
migration process and is described in Chapter 6, “Storage migration” on page 249.
Storage System
The name of the external storage system from which the MDisk is presented as shown in
Figure 7-43.
LUN
Represents the Logical Unit Number of the MDisk.
For more information about how to attach external storage to an IBM Storwize V5000 storage
system, see Chapter 11, “External storage virtualization” on page 579.
The CLI command lsmdiskgrp returns a concise list or a detailed view of the storage pools
that are visible to the system, as shown in Example 7-5.
lsmdiskgrp
lsmdiskgrp mdiskgrpID
If the number of drives that are assigned as Spare does not meet the configured spare
goal, an error is logged in the event log that reads: “Array MDisk is not protected by
sufficient spares.” This error can be fixed by adding more drives as spares. During the
internal drive configuration, spare drives are automatically assigned according to the
chosen RAID preset’s spare goals, as described in "Configuring internal storage" on page
322.
Swap Drive
The Swap Drive action can be used to replace a drive in the array with another drive with
the status of Candidate or Spare. This action is used to replace a drive that failed, or is
expected to fail soon; for example, as indicated by an error message in the event log.
Select an MDisk that contains the drive to be replaced and click RAID Actions → Swap
Drive. In the Swap Drive window, select the member drive that is to be replaced (as shown
in Figure 7-46) and click Next.
The exchange process starts and then runs in the background. The volumes on the
affected MDisk remain accessible.
If the GUI process is not used for any reason, the CLI command in Example 7-7 can be
run.
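As a hedged sketch of such a command, assuming the charraymember syntax matches other Storwize CLI releases, a member of an array MDisk can be exchanged for a candidate or spare drive as follows (member ID 0, drive ID 9, and mdisk7 are illustrative values taken from the earlier lsdrive output):
charraymember -member 0 -newdrive 9 mdisk7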
Data that is on MDisks is migrated to other MDisks in the pool if enough space is available
on the remaining MDisks in the pool.
Available capacity: Make sure that you have enough available capacity left in the
storage pool for the data on the MDisks to be removed.
After an MDisk is deleted from a pool, its former member drives return to candidate mode.
The alternative CLI command to delete MDisks is shown in Example 7-8.
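As a hedged sketch, assuming the rmarray syntax matches other Storwize CLI releases, an array MDisk can be removed from its storage pool as follows (the MDisk and pool names are illustrative); as noted above, sufficient free capacity must remain in the pool for the data to be migrated:
rmarray -mdisk mdisk7 Pool1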
If all the MDisks of a storage pool were deleted, the pool remains as an empty pool with
0 bytes of capacity, as shown in Figure 7-49.
MDisks detection: The Detect MDisks action is asynchronous. Although the task
appears to be finished, it might still be running in the background.
Some of the other actions are available by clicking MDisk by Pool → Actions, as shown
in Figure 7-52.
Rename
MDisks can be renamed by selecting the MDisk and clicking Rename from the Actions menu.
Enter the new name of your MDisk (as shown in Figure 7-53) and click Rename.
Properties
The Properties action for an MDisk shows the information that you need to identify it. In the
MDisks by Pools window, select the MDisk and click Properties from the Actions menu. The
following tabs are available in this information window:
The Overview tab (as shown in Figure 7-55) contains information about the MDisk. To
show more details, click Show Details.
The Dependent Volumes tab (as shown in Figure 7-56) lists all of the volumes that use extents
on this MDisk.
Storage pools (or pools) act as containers for MDisks and provision the capacity to volumes.
IBM Storwize V5000 organizes storage into storage pools to ease storage management and
make it more efficient. Child pools act as sub-containers and inherit the parent storage pool’s
properties, and may also be used to provision capacity to volumes. For more information
about the child pools, see “Working with child pools” on page 353.
An alternative path to the Pools window is to click Pools → MDisks by Pools, as shown in
Figure 7-59.
When you expand a pool’s entry by clicking the plus sign (+) to the left of the pool’s icon, you
can access the MDisks that are associated with this pool. You can perform all the actions on
them, as described in "Working with child pools" on page 353.
The only required parameter for the pool is the pool name, as shown in Figure 7-61.
The new pool is included in the pool list with 0 bytes, as shown in Figure 7-63.
Creating a pool
As shown in "Create Pool option" on page 349, this option is used to create an empty pool.
Detecting MDisks
As detailed in "More actions on MDisks" on page 343, the Detect MDisk option starts a
rescan of the Fibre Channel network. It discovers any new MDisks that were recently mapped
to the IBM Storwize V5000.
Through the CLI, you can delete a pool and all of its contents by using the -force parameter;
however, all volumes and host mappings are deleted and cannot be recovered.
Important: After you delete the pool through the CLI, all data that is stored in the pool is
lost except for the image mode MDisks; their volume definition is deleted, but the data on
the imported MDisk remains untouched.
After you delete the pool, all the managed or image mode MDisks in the pool return to a
status of unmanaged.
A parent storage pool is a collection of MDisks that provides the pool of storage from which
volumes are provisioned; a child pool resides in, and gets its capacity exclusively from, one
parent storage pool. A user can specify the child pool capacity at creation, and the child pool
can grow and shrink nondisruptively.
The child pool shares the properties of the parent storage pool and provides most of the
functions that storage pools have. These are the common properties and behaviors:
ID, Name, Status, Volume Count
Warning Thresholds (child pools can have their own threshold warning setting)
MDisk (this value is always zero “0” for child pools)
Space Utilization (in %)
Easy Tier (for a child pool, this value must be the same as the parent mdiskgrp's Easy Tier
setting)
Because a child pool must have a unique parent pool, the maximum number of child pools
per parent pool is 127.
For more information about the limits and restrictions, see the following URL:
https://ibm.biz/BdFNu6
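As a hedged sketch of creating a child pool from the CLI, assuming the mkmdiskgrp parameters -parentmdiskgrp, -size, and -unit behave on this release as they do on other Storwize systems (the names and the 20 GB size are taken from the later examples):
mkmdiskgrp -name V5K_Child_Pool1 -parentmdiskgrp V5K_Parent_Pool -size 20 -unit gb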
Change Icon
As described in “Changing the storage pool icon” on page 351, this option helps to manage
and differentiate between pools and drive classes.
Rename
Select this option to rename a child pool. Actions are explained in “Rename storage pool” on
page 352.
Delete Pool
If the child pool is not provisioning any capacity to volumes, use this option to delete a child
pool.
Properties
The properties of the child pool can be seen by choosing this option.
If you are certain that the parent pool has sufficient free extents, you can use the svctask
chmdiskgrp command with the -size parameter to set the new size of the child pool.
The capacity of the child pool can also be reduced. You cannot reduce the size of the child
pool to less than its used capacity.
Assuming that the child pool capacity is 20.00 GB, Example 7-10 shows how to increase the
capacity by 2 GB.
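As a hedged sketch, assuming that -size sets the new total capacity of the child pool and that -unit gb is accepted as it is for mkmdiskgrp, the command could take the following form:
svctask chmdiskgrp -size 22 -unit gb V5K_Child_Pool1
This grows the pool from 20 GB to the 22 GB shown in the lsmdiskgrp output in Example 7-11.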
Example 7-11 Run svcinfo lsmdiskgrp to get a view of the child pool
IBM_Storwize:ITSO_V5000:superuser>svcinfo lsmdiskgrp V5K_Child_Pool1
id 3
name V5K_Child_Pool1
status online
mdisk_count 0
vdisk_count 0
capacity 22.00GB
extent_size 512
free_capacity 22.00GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 70
easy_tier auto
easy_tier_status balanced
tier ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier nearline
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
site_id
site_name
parent_mdisk_grp_id 1
parent_mdisk_grp_name V5K_Parent_Pool
child_mdisk_grp_count 0
child_mdisk_grp_capacity 0.00MB
type child_thick
IBM_Storwize:ITSO_V5000:superuser>
Assuming that the child pool capacity is 22 GB and you want to reduce the capacity by 4 GB,
use svctask chmdiskgrp with the -size parameter to reduce the capacity.
Example 7-12 shows an example of the command. It sets the size of the child pool to 18 GB.
Logical Unit Numbers (LUNs) that are presented by external storage systems to IBM Storwize
V5000 are discovered as unmanaged MDisks. Initially, the MDisk is not a member of any
storage pools, meaning that it is not used by the IBM Storwize V5000 storage system.
To learn more about external storage, see Chapter 11, “External storage virtualization” on
page 579.
Figure 8-2 shows a number of hosts, with volumes mapped to them all.
Selecting a host and clicking Actions (as shown in Figure 8-3) or right-clicking the host,
shows the available actions.
If the system has multiple I/O groups, a drop-down menu at the top-left of the panel shows the
I/O group selection. When selecting individual I/O groups, the IBM Storwize V5000 GUI lists
only the volumes that correspond to that I/O group. The Host drop-down menu lists the hosts
that are attached to the IBM Storwize V5000.
Important: Before you change host mappings, always ensure that the host can access
volumes from the correct I/O group.
The left pane (Unmapped Volumes) shows the volumes that are available for mapping to the
chosen host. The right pane (Volumes Mapped to the Host) shows the volumes that are
already mapped. Figure 8-4 shows that the volume named Win2012_Vol_01 with SCSI ID 0 is
mapped to the host Win_2012_FC. The left pane shows that a further 29 volumes are available
for mapping.
Important: The unmapped volumes panel refers to volumes that are not mapped to the
chosen host; however, they might be mapped to other hosts.
If Apply was clicked in the panel shown in Figure 8-6 on page 363, the changes are
submitted to the system, but the Modify Host Mappings panel remains open for further
changes.
After moving a volume to the right pane in the Modify Host Mappings panel, it is possible to
right-click the yellow unmapped volume and change the SCSI ID used for the host mapping,
as shown in Figure 8-9.
Important: The IBM Storwize V5000 automatically assigns the lowest available SCSI ID if
none is specified. However, you can set a SCSI ID for the volume. The SCSI ID cannot be
changed while the volume is assigned to a host.
It is also possible to map a volume to multiple hosts; however, this should normally be done
only in a clustered host environment because otherwise data corruption is possible. A
warning is displayed, as shown in Figure 8-10.
To unmap all volumes from a host, select a host from the Hosts panel and using the Action
menu, click Unmap all Volumes, as shown in Figure 8-12.
Unmapping: By clicking Unmap, access to all of the volumes for this host is removed.
Ensure that the required procedures are run on the host OS before the unmapping
procedure is performed.
After clicking Unmap, the changes are applied to the system, as shown in Figure 8-14. Click
Close after you review the output.
After the changes are applied to the system, click Close, as shown in Figure 8-18.
To open the Host Details panel, select the host and from the Action menu, click Properties.
You also can highlight the host and right-click it, as shown in Figure 8-22.
Make any necessary changes and click Save to apply them and then click Close to return to
the Host Details panel.
Figure 8-25 shows the Mapped Volumes tab which provides an overview of the volumes
mapped to the host. This tab provides the following information:
SCSI ID
Volume name
UID
Caching I/O group ID
Clicking the Show Details option does not show any detailed information.
Using this panel, you can also Add and Delete Host Ports, as described in 8.2, “Adding and
deleting host ports” on page 376. Selecting the Show Details option does not show any
further information.
Hosts are listed in the left pane. The Host icons show an orange cable for a Fibre Channel
host, a black cable for a SAS host, and a blue cable for an iSCSI host.
The properties of the selected host are shown in the right pane. Clicking Add Host starts the
wizard described in Chapter 4, “Host configuration” on page 155.
Click the Fibre Channel Ports drop-down menu to display a list of all available Fibre
Channel host ports. If the FC WWPN of your host is not available in the menu, check the SAN
zoning and rescan the SAN from the host, or try clicking the Rescan button.
Select the required WWPN and click Add Port to List. This adds the port to the Port
Definitions list, as shown in Figure 8-30. Repeat this step to add more ports to the host.
The port appears as unverified because it is not logged in to the SAN fabric or known to the
IBM Storwize V5000. The first time the port logs in, the state automatically changes to online
and the mapping is applied to this port.
To remove one of the ports from the list, click the red X next to it. In Figure 8-31, we manually
added an FC port.
Important: When removing online or offline ports, the IBM Storwize V5000 prompts you to
confirm the number of ports to be deleted but does not warn about mappings. Disk
mapping is associated to the host object and Logical Unit Number (LUN) access is lost if all
ports are deleted.
Click the SAS Ports drop-down menu to see a list of all available SAS ports. If the SAS
WWPNs are not available, try the Rescan option or check the physical connections.
Important: The IBM Storwize V5000 allows the addition of an offline SAS port. Enter the
SAS WWPN in the SAS Port field and then click Add Port to List.
Enter the initiator name of your host and click Add Port to List, as shown in Figure 8-33.
Click Add Ports to Host to apply the changes. The iSCSI port status remains unknown until
it is added to the host and a host rescan process is completed.
Select the host in the left pane, select the host port to be deleted, and then click the Delete
Port button, as shown in Figure 8-34.
Multiple ports can be selected by holding down the Ctrl or Shift keys while selecting the host
ports to delete.
A task panel opens that shows the results. Click Close to return to the Ports by Host panel.
As shown in Figure 8-37, selecting a mapping and opening the Actions menu provides the
following options:
Unmapping volumes
Properties (Host)
Properties (Volume)
If multiple mappings are selected (by holding the Ctrl or Shift keys), only the Unmap Volumes
option is available.
Warning: Ensure that the required procedures are run on the host OS before unmapping
volumes on the IBM Storwize V5000.
Figure 8-39 shows the following options that are available within the Volumes menu for
advanced features administration:
Volumes
Volumes by Pool
Volumes by Host
This panel lists all configured volumes on the system and provides the following information:
Name: Shows the name of the volume. If there is a + sign next to the name, this sign
means that there are two copies of this volume. Click the + sign to expand the view and list
the copies, as shown in Figure 8-40.
Status: Provides the status information about the volume, which can be online, offline, or
degraded.
Capacity: The disk capacity that is presented to the host. If a blue volume icon is shown
next to the capacity, the volume is thin-provisioned. Therefore, the listed capacity is the
virtual capacity, which might be more than the real capacity on the system.
Storage Pool: Shows in which storage pool the volume is stored. The primary copy is
shown unless you expand the volume copies.
UID: The volume unique identifier.
Host Mappings: Shows whether a volume has host mappings: Yes when a host mapping
exists, and No when there are no host mappings.
Tip: Right-clicking anywhere in the blue title bar allows you to customize the volume
attributes that are displayed. You might want to add some useful information, such as
caching I/O group and real capacity.
To create a volume, click Create Volumes and complete the steps as described in 5.1,
“Creating volumes in IBM Storwize V5000” on page 202.
Depending on the volume selected, the following Volume options are available:
Create Volumes
Map to Host
Unmap All Hosts
View Mapped Host
Duplicate Volume
Move to Another I/O Group (if a multi-I/O group system)
Rename
Shrink
Expand
Migrate to Another Pool
Export to Image Mode
Delete
Volume Copy Actions
– Create Volumes
– Add Mirrored Copy
Properties
For Thin-Provisioned volumes, the following Volume Copy Actions are available:
Create Volumes
Add Mirror Copy
Thin Provisioned
– Shrink
– Expand
– Edit Properties
Important: It is not possible to change the caching I/O group by using the I/O group
drop-down menu. Instead, the menu is used to list hosts that have access to the specified
I/O group.
After a host is selected, the Modify Mappings panel opens. In the upper left, the I/O group
(if applicable) and selected host are displayed. The volume that is to be mapped is highlighted
in yellow, as shown in Figure 8-43. Click Map Volumes to apply the changes to the system.
Modify Mappings panel: For more information about the Modify Mappings panel, see
8.1.1, “Modifying Mappings menu” on page 362.
After the task completes, click Close to return to the Volumes panel.
Important: Ensure that the required procedures are run on the host OS before the
unmapping procedure is performed.
To remove a mapping, highlight the host and click Unmap from Host. If several hosts are
mapped to this volume (for example, in a cluster), only the selected host is removed.
Clicking Reset resets the name field to the original name of the volume. Click Rename to
apply the changes and click Close after the task panel completes.
Important: Before you shrink a volume, ensure that the host Operating System (OS)
supports this. If the OS does not support shrinking a volume, it is likely that the OS will log
disk errors and data corruption will occur.
Click Shrink to start the process. When the task completes, click Close to return to the Volumes panel.
Run the required procedures on the host OS after the shrinking process.
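For reference, the shrink can also be scripted from the CLI with the shrinkvdisksize command. In this sketch, the volume name and the amount to remove (1 GB) are hypothetical values:
svctask shrinkvdisksize -size 1 -unit gb Win2012_Vol_01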
Important: For volumes that contain more than one copy, you might receive a
CMMVC6354E error; run the lsvdisksyncprogress command on the IBM Storwize V5000
CLI to view the synchronization status. Wait for the copy to synchronize. If you want the
synchronization process to complete more quickly, increase the rate by running the
chvdisk command. When the copy is synchronized, resubmit the shrink process.
See Appendix A, “Command-line interface setup and SAN Boot” on page 667.
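As a sketch of that recovery sequence (the volume name and the rate value are hypothetical), check the synchronization progress, raise the synchronization rate, and then resubmit the shrink after the copies report as synchronized:
svcinfo lsvdisksyncprogress Win2012_Vol_01
svctask chvdisk -syncrate 80 Win2012_Vol_01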
After the task completes, click Close to return to the Volumes panel.
Important: To migrate a volume, the source and target storage pool must have the same
extent size. For more information about extent size, see Chapter 1, “Overview of the IBM
Storwize V5000 system” on page 1.
To migrate a volume to another storage pool, select Migrate to Another Pool from the
Actions menu. The Migrate Volume Copy panel opens.
If the volume consists of more than one copy, you are asked which copy you want to migrate
to another storage pool, as shown in Figure 8-49. If the selected volume consists of one copy,
this option does not appear.
Notice that the Win2012_Vol_01 volume has two copies stored in two different storage pools.
The storage pools to which they belong are shown in parentheses.
The volume migration starts, as shown in Figure 8-50. Click Close to return to the Volumes
panel.
After the migration completes, the “copy 0” from the Win2012_Vol_01 volume is shown in the
new storage pool, as shown in Figure 8-51.
The volume copy was migrated to the new storage pool without any downtime. It is also
possible to migrate both volume copies to other storage pools.
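The same migration can be started from the CLI with the migratevdisk command; this sketch assumes a hypothetical target pool name, and the -copy parameter selects which copy to move when the volume has more than one:
svctask migratevdisk -mdiskgrp Target_Pool -copy 0 -vdisk Win2012_Vol_01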
The volume copy feature can also be used to migrate volumes to a different pool with a
different extent size, as described in 8.6.5, “Migrating volumes by using the volume copy
features” on page 412.
The Export to Image Mode wizard opens and displays the available MDisks. Select the MDisk to which the volume will be exported and click Next, as shown in Figure 8-53.
Click Finish to start the migration. After the task is complete, click Close to return to the Volumes panel.
Important: Use image mode to import or export existing data into or out of the IBM
Storwize V5000. Migrate such data from image mode MDisks to other storage pools to
benefit from storage virtualization.
For more information about importing volumes from external storage, see Chapter 6, “Storage
migration” on page 249 and Chapter 7, “Storage pools” on page 309.
Click Delete and the volumes are removed from the system. After the task completes, click Close to return to the Volumes panel.
Important: You must force the deletion if the volume has host mappings or is used in
FlashCopy mappings. To be cautious, always ensure that the volume has no association
before you delete it.
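On the CLI, the equivalent of a forced deletion is the rmvdisk command with the -force flag, which removes the volume even if host mappings or FlashCopy mappings still exist. The volume name in this sketch is hypothetical:
svctask rmvdisk -force Old_Volume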
To open the advanced view of a volume, select Properties from the Action menu, and the
Volume Details panel opens, as shown in Figure 8-56 on page 397. The following tabs are
available:
Overview
Host Maps
Member MDisk
Important: Changing the I/O group can cause loss of access because of cache reload and
host-I/O group access. Also, setting the Mirror Sync Rate to 0% disables synchronization.
After the changes are applied to the system, the selected host no longer has access to this
volume. Click Close to return to the Host Maps window. For more information about host
mappings, see 8.3, “Host mappings overview” on page 382.
Highlight an MDisk and click Actions to see the available tasks, as shown in Figure 8-61.
For more information about the available tasks, see Chapter 7, “Storage pools” on page 309.
You also can use this method to migrate data across storage pools with different extent sizes.
Select whether the volume should be Generic or Thin-Provisioned, select the storage pool in which the new copy should be created, and click Add Copy, as shown in Figure 8-63.
However, with a thin-provisioned volume, it is also possible to edit the allocated size and the warning thresholds. To edit these settings, select the volume and then select Actions → Volume Copy Actions → Thin-Provisioned, as shown in Figure 8-65 on page 404.
Deallocating extents: Only extents that do not include stored data can be deallocated. If
the space is allocated because there is data on the extent, the allocated space cannot be
shrunk and an out-of-range warning message appears.
The new space is now allocated. Click Close after the task is complete.
After the task completes, click Close to return to the Volumes panel.
Expanding a volume, selecting a copy, and opening the Actions menu displays the following volume copy actions, as shown in Figure 8-69:
Thin-Provisioned (for thin volumes)
Make Primary (for non-primary copy)
Split into New Volume
Validate Volume Copies
Delete Copy option
Looking at the volume copies that are shown in Figure 8-69, it is possible to see that one of
the copies has a star displayed next to its name, as also shown in Figure 8-70.
Each volume has a primary and a secondary copy, and the star indicates the primary copy.
The two copies are always synchronized, which means that all writes are destaged to both
copies, but all reads are always done from the primary copy. Two copies per volume is the
maximum number configurable and the roles of the copies can be changed.
To accomplish this task, select the secondary copy and then click Actions → Make Primary.
Usually, it is a preferred practice to place the volume copies on storage pools with similar
performance because the write performance is constrained if one copy is placed on a lower
performance pool.
If high read performance is demanded, another possibility is to place the primary copy in an SSD pool or an externally virtualized IBM FlashSystem, and the secondary copy in a normal disk storage pool. This action maximizes the read performance of the volume and makes sure that
you have a synchronized second copy in your less expensive disk pool. It is possible to
migrate online copies between storage pools. For more information about how to select which
copy you want to migrate, see 8.4.8, “Migrating a volume to another storage pool” on
page 391.
Click Make Primary and the role of the copy is changed to primary. Click Close when the
task completes.
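The same role change can be made from the CLI with the chvdisk command; this sketch assumes that copy 1 of the Win2012_Vol_01 volume should become the primary:
svctask chvdisk -primary 1 Win2012_Vol_01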
The volume copy feature also is a powerful option for migrating volumes, as described in
8.6.5, “Migrating volumes by using the volume copy features” on page 412.
For more information about flushing the data, see your operating system documentation. The
easiest way to flush the data is to shut down the hosts or application before a copy is split.
In our example, volume Win2012_Vol_01 has two copies: Copy 0 as primary and Copy 1 as
secondary. To split a copy, click Split into New Volume (as shown in Figure 8-69 on
page 406) on any copy and the remaining secondary copy automatically becomes the
primary for the source volume. Optionally, a name for the new volume can be specified.
Important: If you receive error message code CMMVC6357E while you are splitting volume
copy, use the lsvdisksyncprogress command to view the synchronization status or wait for
the copy to synchronize. Example 8-1 shows an output of lsvdisksyncprogress
command.
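As a CLI sketch of the split itself, the following command splits copy 1 of Win2012_Vol_01 into a new volume; the new volume name is a hypothetical example:
svctask splitvdiskcopy -copy 1 -name Win2012_Vol_01_Split Win2012_Vol_01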
The validation process runs as a background process and may take some time, depending on
the volume size. You can check the status in the Running Tasks window, as shown in
Figure 8-77.
After the copy is deleted, click Close to return to the Volumes panel.
The easiest way to migrate volume copies is to use the migration feature that is described in
8.4.8, “Migrating a volume to another storage pool” on page 391. Using this feature, one
extent after another is migrated to the new storage pool. However, the use of volume copies
provides another way to migrate volumes if you have different storage pool characteristics in
terms of extent size.
This migration process requires more user interaction with the IBM Storwize V5000 GUI, but it
offers some benefits.
In step 1, you create the copy on the tier 2 pool, while all reads are still performed in the tier 1
pool to the primary copy. After the synchronization, all writes are destaged to both pools, but
the reads are still done only from the primary copy.
The left pane is called the Pool Filter and the storage pools are displayed there. For more
information about storage pools, see Chapter 7, “Storage pools” on page 309.
In the upper right, you see information about the pool that you selected in the pool filter. The
following information is also shown:
Pool icon: As storage pools can have different characteristics, it is possible to change the storage pool icon. For more information, see 7.3, “Working with storage pools” on page 347.
Pool Name: The name that is given during the creation of the storage pool. For more information about changing the storage pool name, see 7.3, “Working with storage pools” on page 347.
Pool Details: Shows you the information about the storage pools, such as status, the
number of managed disks, and Easy Tier status.
Volume allocation: Shows you the amount of capacity that is allocated to volumes from this
storage pool.
It is also possible to create volumes from this panel. Click Create Volume to start the Volume
Creation panel. The steps are described in Chapter 5, “Volume configuration” on page 201.
Selecting a volume and opening the Actions menu or right-clicking the volume, shows the
same options as described in 8.4, “Advanced volume administration” on page 384.
In the left pane of the view is the Host Filter. Selecting a host shows its properties in the right
pane, such as the host name, number of ports, host type, and the I/O group to which it has
access.
The Host icons show an orange cable for a Fibre Channel host, a black cable for a SAS host,
and a blue cable for an iSCSI host.
It is also possible to create a volume from this panel. Clicking Create Volumes, opens the
same wizard as described in Chapter 5, “Volume configuration” on page 201.
Selecting a volume, and opening the Actions menu or right-clicking the volume, shows the
same options as described in 8.4, “Advanced volume administration” on page 384.
Enterprise SAS drives replaced the old SCSI drives and have become common in the storage market. They are offered in a wide variety of capacities, spindle speeds, and form factors. Nearline SAS is the low-cost, large-capacity storage drive class, commonly offered at a 7,200 rpm spindle speed.
It is critical to choose the right mix of drives and the right data placement to achieve optimal
performance at low cost. Maximum value can be derived by placing “hot” data with high I/O
density and low response time requirements on Flash, while Enterprise class disks are
targeted for “warm” and Nearline for “cold” data accessed more sequentially and at lower
rates.
This chapter describes the Easy Tier disk performance optimization function of the IBM
Storwize V5000. It also describes how to activate the Easy Tier process for both evaluation
purposes and for automatic extent migration.
The first generation of Easy Tier introduced automated storage performance management,
efficiently boosting enterprise-class performance with flash drives and automating storage
tiering from enterprise class drives to flash drives. It also introduced dynamic volume
relocation and dynamic extent pool merge.
The second generation of Easy Tier introduced the combination of Nearline drives with the
objective of maintaining optimal performance at low cost. The second generation also
introduced auto rebalancing of extents and was implemented within DS8000 products only.
The third generation of Easy Tier introduces further enhancements that provide automated
storage performance and storage economics management across all three drive tiers (Flash,
Enterprise, and Nearline storage tiers). It allows you to consolidate and efficiently manage
more workloads on a single IBM Storwize V5000. It also introduces support for Storage Pool
Balancing in homogeneous pools.
Figure 9-1 shows the supported Easy Tier pools available in the third generation of Easy Tier.
Easy Tier reduces the I/O latency for hot spots, but it does not replace storage cache. Easy
Tier and storage cache solve a similar access latency workload problem, but these methods
weigh differently in the algorithmic construction based on “locality of reference,” recency, and
frequency. Because Easy Tier monitors I/O performance from the extent end (after cache), it
can pick up the performance issues that cache cannot solve and complement the overall
storage system performance.
In general, the storage environment I/O is monitored on volumes, and the entire volume is always placed inside one appropriate storage tier. Determining the amount of I/O on single extents, monitoring their I/O statistics, moving them manually to an appropriate storage tier, and reacting to workload changes is too complex to do manually.
Easy Tier is a performance optimization function that overcomes this issue because it
automatically migrates (or moves) extents that belong to a volume between different storage
tiers, as shown in Figure 9-2. Because this migration works at the extent level, it is often
referred to as sub-LUN migration.
To enable this migration between MDisks with different tier levels, the target storage pool must consist of MDisks with different characteristics. These pools are named multi-tiered storage pools. IBM Storwize V5000 Easy Tier is optimized to boost the performance of storage pools that contain Flash, Enterprise, and Nearline drives.
To identify the potential benefits of Easy Tier in your environment before actually installing
higher MDisk tiers (such as Flash), it is possible to enable the Easy Tier monitoring on
volumes in single-tiered storage pools. Although the Easy Tier extent migration is not possible
within a single-tiered pool, the Easy Tier statistical measurement function is possible.
Enabling Easy Tier on a single-tiered storage pool starts the monitoring process and logs the
activity of the volume extents.
The STAT tool is a no-cost tool that helps you to analyze this data. If you do not have an IBM
Storwize V5000, use Disk Magic to get a better idea about the required number of different
drive types that are appropriate for your workload.
Easy Tier is available for IBM Storwize V5000 internal volumes and volumes on external
virtualized storage subsystems.
Figure 9-3 shows single-tiered storage pools that include one type of disk tier attribute. Each disk should have the same size and performance characteristics. Multi-tiered storage pools are populated with two or more different disk tier attributes, such as high-performance flash drives, enterprise SAS drives, and Nearline drives.
A volume migration occurs when the complete volume is migrated from one storage pool to
another storage pool. An Easy Tier data migration moves only extents inside the storage pool
to different performance attributes.
By default, Easy Tier is enabled on any pool that contains two or more classes of disk drive.
The Easy Tier function manages the extent migration as follows:
Promote
– Moves the candidate hot extent to a higher performance tier.
Warm Demote
– Prevents performance overload of a tier by demoting a warm extent to a lower tier.
– Triggered when bandwidth or IOPS exceeds a predefined threshold.
Cold Demote
– Coldest extent moves to a lower tier.
Expanded Cold Demote
– Demote appropriate sequential workload to the lowest tier to better utilize Nearline
bandwidth.
Swap
– This operation exchanges a cold extent in a higher tier with a hot extent in a lower tier
or vice versa.
The four main processes and the flow between them are described in the following sections.
Easy Tier makes allowances for large block I/Os and thus considers only I/Os up to 64 KB as
migration candidates.
This is an efficient process and adds negligible processing impact to the IBM Storwize V5000
node canisters.
This process also identifies extents that must be migrated back to a lower tier.
It assesses the extents in a storage tier and balances them automatically across all MDisks within that tier. Storage Pool Balancing moves extents to achieve a balanced workload distribution and to avoid hotspots. Storage Pool Balancing is an algorithm based on MDisk IOPS usage, which means it is performance-based rather than capacity-based. It works on a 6-hour performance window.
When a new MDisk is added to an existing storage pool, Storage Pool Balancing automatically balances the extents across all MDisks.
Volume mirroring: Volume mirroring can have different workload characteristics for
each copy of the data because reads are normally directed to the primary copy and
writes occur to both. Thus, the number of extents that Easy Tier migrates is probably
different for each copy.
In this example, we create a pool containing Flash, Enterprise and Nearline MDisks.
4. Click Finish.
Using the IBM Storwize V5000 GUI, you can check the properties of the EasyTier_Pool. Because the storage pool contains only one Flash MDisk, the Easy Tier status remains Inactive, as shown in Figure 9-11.
7. Click Finish and Close when the task completes. Add further MDisks and drive classes by repeating the preceding steps.
8. In Figure 9-13, you see that the EasyTier_Pool storage pool contains MDisks from all three available tiers: Flash, Enterprise, and Nearline.
If you want to enable Easy Tier on the second copy, change the storage pool of the second
copy to a multi-tiered storage pool by repeating the steps in this section.
If external storage is used, you must select the tier manually, and then add the external
MDisks to a storage pool, as described in Chapter 11, “External storage virtualization” on
page 579. This action also changes the storage pools to multi-tiered storage pools and
enables Easy Tier on the pool and the volumes.
Heat data files are produced approximately once a day (that is, roughly every 24 hours) when
Easy Tier is active on one or more storage pools.
This action lists all the log files available to download. The Easy Tier log files are always
named dpa_heat.canister_name_date.time.data.
If you run Easy Tier for a longer period, it generates a heat file at least every 24 hours. The
time and date the file was created is included in the file name.
Log file creation: Depending on your workload and configuration, it can take up to 24
hours until a new Easy Tier log file is created.
You can also use the search field on the right to filter your search, as shown in
Figure 9-21.
Depending on your browser settings, the file is downloaded to your default location, or you
are prompted to save it to your computer. This file can be analyzed as described in 9.7,
“IBM Storage Tier Advisor Tool” on page 445.
Readability: In most examples that are shown in this section, many lines were deleted in
the command output or responses so we can concentrate on the information that is related
to Easy Tier.
To get a more detailed view of the single-tiered storage pool, run the svcinfo lsmdiskgrp
storage pool name command, as shown in Example 9-2.
To enable Easy Tier on a single-tiered storage pool in measure mode, run the chmdiskgrp
-easytier measure storage pool name command, as shown in Example 9-3.
Example 9-3 Enable Easy Tier in measure mode on a single-tiered storage pool
IBM_Storwize:ITSO_V5000:superuser>chmdiskgrp -easytier measure Nearline_Pool
IBM_Storwize:ITSO_V5000:superuser>
To get the list of all the volumes defined, run the lsvdisk command, as shown in
Example 9-5. For this example, we are only interested in the RedbookVolume volume.
To get a more detailed view of a volume, run the lsvdisk volume_name command, as shown in Example 9-6. This output shows the two copies of the volume. Copy 0 is in a multi-tiered storage pool and Automatic Data Placement is active, as indicated by the easy_tier_status line. Copy 1 is in the single-tiered storage pool, and its Easy Tier mode is measured, as indicated by the easy_tier_status line.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name EasyTier_Pool
...
easy_tier on
easy_tier_status active
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 10.00GB
tier nearline
tier_capacity 0.00MB
...
copy_id 1
status online
sync no
mdisk_grp_id 0
mdisk_grp_name Nearline_Pool
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
grainsize
se_copy no
easy_tier off
easy_tier_status measured
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 0.00MB
tier nearline
tier_capacity 10.00GB
parent_mdisk_grp_id 2
parent_mdisk_grp_name Nearline_Pool
IBM_Storwize:ITSO_V5000:superuser>
These changes are also reflected in the GUI, as shown in Figure 9-22. Click Pools → Volumes by Pool, select Nearline_Pool, and find the RedbookVolume volume; then click Actions → Properties to view the Easy Tier details for each of the volume copies.
Before you disable Easy Tier on a single volume, run the svcinfo lsmdiskgrp storage pool name command to show all configured storage pools, as shown in Example 9-7. In our example, EasyTier_Pool is the storage pool that is used as the reference.
To disable Easy Tier on single volumes, run the svctask chvdisk -easytier off volume
name command, as shown in Example 9-9.
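For illustration, the invocation looks similar to the following sketch, which uses the RedbookVolume volume from the earlier examples:
svctask chvdisk -easytier off RedbookVolume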
This command disables Easy Tier on all copies of the volume. Example 9-10 shows Easy Tier turned off for copy 0, even though Easy Tier is still enabled on the storage pool. Note that on copy 0, the status changed to measured because the pool is still actively measuring the I/O on the volume.
copy_id 0
status online
mdisk_grp_id 1
mdisk_grp_name EasyTier_Pool
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
easy_tier off
easy_tier_status measured
tier ssd
tier_capacity 1.00GB
tier enterprise
tier_capacity 4.00GB
tier nearline
tier_capacity 0.00MB
To enable Easy Tier on a volume, run the svctask chvdisk -easytier on volume name
command (as shown in Example 9-11), and Easy Tier changes back to on (as shown in
Example 9-12). Notice that copy 0 status also changed back to active.
copy_id 0
status online
mdisk_grp_id 1
mdisk_grp_name EasyTier_Pool
type striped
mdisk_id
mdisk_name
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
easy_tier on
easy_tier_status active
tier ssd
tier_capacity 1.00GB
tier enterprise
tier_capacity 4.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 5.00GB
parent_mdisk_grp_id 1
parent_mdisk_grp_name EasyTier_Pool
IBM_Storwize:ITSO_V5000:superuser>
The tool comes packaged as an ISO file, which needs to be extracted to a temporary folder. For more information about the STAT tool and to download it, see the following website:
https://ibm.biz/BdEfrX
Click Start → Run, enter cmd, and then click OK to open a command prompt.
Typically, the tool is installed in the C:\Program Files\IBM\STAT directory. Enter the command
to generate the report, as shown in Example 9-13.
C:\Program Files\IBM\STAT>STAT.exe -o c:\directory_where_you_want_the_output_to_go c:\location_of_dpa_heat_data_file
C:\EasyTier>
The Storage Tier Advisor Tool creates a set of HTML files. Browse to the directory where you
directed the output file, and find the file named index.html. Open the file using your browser
to view the report.
Typically, the STAT tool generates the report as described in 9.7.2, “Storage Tier Advisor Tool
reports” on page 446.
As shown in Figure 9-23, the prediction generated by the Storage Tier Advisor Tool is pool-based. The first page of the report shows the storage facility, the total number of storage pools and volumes monitored, the total capacity monitored, the hot data capacity, the data validity, and the system state.
The next table shows how data is managed in the particular storage pool(s) and is denoted by
different colors. The total capacity of the storage pool is divided into two parts: the data
managed by Easy Tier and the unallocated space. The green portion of the bar represents
the data managed by Easy Tier. The black portion of the bar represents the unallocated space or data that is not being managed by Easy Tier.
As shown in Figure 9-25, the Storage Tier Advisor Tool shows the MDisk distribution of each tier that constitutes the storage pool. The tool also shows the performance statistics and the projected IOPS for the MDisks after the Easy Tier processes and rebalancing operations are completed on the extents of the storage pool.
For each MDisk, the STAT tool shows the following values:
The MDisk ID
The Storage Pool ID
The MDisk type
The IOPS threshold
The utilization of MDisk IOPS
The projected utilization of MDisk IOPS
The blue portion of the bar represents the percentage range that is below the tier average utilization of MDisk IOPS.
The red portion of the bar represents the percentage range that is above the maximum allowed threshold.
In the Nearline section, we look at 182% utilization, which means that in a period of 24 hours, the maximum IOPS exceeded the threshold by 82%. The blue portion shows that 90% out of 182% (approximately 49.5%) of the IOPS did not exceed the threshold for this particular tier. The red portion shows that 92% out of 182% (approximately 50.5%) of the IOPS exceeded the threshold, and these extents are potential candidates to move to a higher tier.
In the Workload Distribution Across Tier section, the STAT tool shows the skew of the
workload of the storage pool selected. The X-axis (horizontal) denotes the percentage of
extents in use. The Y-axis (vertical) denotes the percentage of extents that are busy out of the
given percentage from the X-axis. In our graph, for instance, when we look at 10% of the
extents in the pool, only about 40% of these are determined to be very busy. Figure 9-26
shows an example of our graph.
In the Volume Heat Distribution section, the STAT tool shows volume heat distribution of all
volumes in the storage pool. The columns are as follows:
VDisk ID
VDisk Copy ID
Configured size or VDisk capacity
I/O percentage of extent pool
The tier(s) that the volume is taking capacity from
The capacity on each of the tier(s)
The heat distribution of that storage pool
The heat distribution of a volume is displayed using a color bar which represents the
following:
The blue portion of the bar represents the capacity of cold extents in the volume. Extents are considered cold when they are not used heavily or their I/O rate is very low.
The orange portion of the bar represents the capacity of warm extents in the volume. Data is considered warm when it is used relatively heavily, or its IOPS is higher compared to the cold extents but lower compared to the hot extents.
The Systemwide Recommendation section can be viewed on another page, which shows the advised configuration for the tiers as applicable to the configuration of the system. Typically, it shows three recommended configurations: Flash (SSD), Enterprise, and Nearline.
Each recommendation displays the storage pools, the recommendation, and the expected
improvement. An example is shown in Figure 9-28.
You can use FlashCopy to solve critical and challenging business needs that require the
duplication of data on your source volume. Volumes can remain online and active while you
create consistent copies of the data sets. Because the copy is performed at the block level, it
operates below the host operating system and cache and, therefore, is not apparent to the
host.
Flushing: Because FlashCopy operates at the block level below the host operating system
and cache, those levels do need to be flushed for consistent FlashCopy copies.
While the FlashCopy operation is performed, I/O to the source volume is frozen briefly to
initialize the FlashCopy bitmap and then is allowed to resume. Although several FlashCopy
options require the data to be copied from the source to the target in the background (which
can take time to complete), the resulting data on the target volume copy appears to complete
immediately. This task is accomplished by using a bitmap (or bit array) that tracks changes to
the data after the FlashCopy is started, and an indirection layer, which allows data to be read
from the source volume transparently.
With an immediately available copy of the data, FlashCopy can be used in the following
business scenarios:
Rapidly creating consistent backups of dynamically changing data
FlashCopy can be used to create backups through periodic running; the FlashCopy target
volumes can be used to complete a rapid restore of individual files or the entire volume
through Reverse FlashCopy (by using the -restore option).
The target volumes that are created by FlashCopy can also be used for backup to tape. By
attaching them to another server and performing backups from there, it allows the
production server to continue largely unaffected. After the copy to tape completes, the
target volumes can be discarded or kept as a rapid restore copy of the data.
Rapidly creating consistent copies of production data to facilitate data movement or
migration between hosts
FlashCopy can be used to facilitate the movement or migration of data between hosts
while minimizing downtime for applications. FlashCopy allows application data to be
copied from source volumes to new target volumes while applications remain online. After
the volumes are fully copied and synchronized, the application can be stopped and then
immediately started on the new server that is accessing the new FlashCopy target
volumes. This mode of migration is faster than other migration methods that are available
through the IBM Storwize V5000 because the size and the speed of the migration is not as
limited.
The minimum granularity that the IBM Storwize V5000 storage system supports for FlashCopy is
an entire volume; it is not possible to use FlashCopy to copy only part of a volume.
Additionally, the source and target volumes must belong to the same IBM Storwize V5000
storage system, but they do not have to be in the same storage pool.
Before you start a FlashCopy (regardless of the type and options that are specified), the IBM
Storwize V5000 must put the cache into write-through mode, which flushes the I/O that is
bound for the source volume. If you are scripting FlashCopy operations from the CLI, you
must run the prestartfcmap or prestartfcconsistgrp command. However, this step is
managed for you and carried out automatically by the GUI. This is not the same as flushing
the host cache, which is not required. After FlashCopy is started, an effective copy of a source
volume to a target volume is created. The content of the source volume is immediately
presented on the target volume and the original content of the target volume is lost. This
FlashCopy operation is also referred to as a time-zero copy (T0 ).
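As a minimal CLI sketch of that scripted sequence, assuming a hypothetical FlashCopy mapping with ID 0, prepare the mapping (which puts the cache into write-through mode), confirm that it reaches the prepared state, and then trigger the time-zero copy:
svctask prestartfcmap 0
svcinfo lsfcmap 0
svctask startfcmap 0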
Immediately following the FlashCopy operation, the source and target volumes are available
for use. The FlashCopy operation creates a bitmap that is referenced and maintained to direct
I/O requests within the source and target relationship. This bitmap is updated to reflect the
active block locations as data is copied in the background from the source to target and
updates are made to the source.
When data is copied between volumes, it is copied in units of address space known as
grains. Grains are units of data that are grouped to optimize the use of the bitmap that tracks changes to the data between the source and target volume. You have the option of using
64 KB or 256 KB grain sizes (256 KB is the default). The FlashCopy bitmap contains 1 bit for
each grain and is used to track whether the source grain was copied to the target. The 64 KB
grain size uses bitmap space at a rate of four times the default 256 KB size.
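As an illustrative calculation, a 1 TiB source volume contains 4,194,304 grains at the 256 KB default grain size, so its FlashCopy bitmap occupies roughly 512 KiB; at the 64 KB grain size, the same volume contains 16,777,216 grains and needs roughly 2 MiB of bitmap space, which is the four-fold difference noted above.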
The FlashCopy bitmap dictates the following read and write behaviors for the source and
target volumes:
Read I/O request to source: Reads are performed from the source volume the same as for
non-FlashCopy volumes.
Write I/O request to source: Writes to the source cause the grains of the source volume to
be copied to the target if they were not already and then the write is performed to the
source.
Read I/O request to target: Reads are performed from the target if the grains were already
copied; otherwise, the read is performed from the source.
Write I/O request to target: Writes to the target cause the grain to be copied from the
source to the target first, unless the entire grain is being written and then the write
completes to the target only.
Each of the mappings from a single source can be started and stopped independently. If
multiple mappings from the same source are active (in the copying or stopping states), a
dependency exists between these mappings.
If a single source volume has multiple target FlashCopy volumes, the write to the source
volume does not cause its data to be copied to all of the targets. Instead, it is copied to the
newest target volume only. The older targets refer to new targets first before they refer to the
source. A dependency relationship exists between a particular target and all newer targets
that share a source until all data is copied to this target and all older targets.
Cascaded mappings differ from multiple target FlashCopy mappings in depth. Cascaded
mappings have an association in the manner of A > B > C, while multiple target FlashCopy
has an association in the manner A > B1 and A > B2.
Background copy
The background copy rate is a property of a FlashCopy mapping that is defined as a value of
0 - 100. The background copy rate can be defined and dynamically changed for individual
FlashCopy mappings. A value of 0 disables background copy. This option is also called the
no-copy option, which provides pointer-based images for limited lifetime uses.
With FlashCopy background copy, the source volume data is copied to the corresponding
target volume in the FlashCopy mapping. If the background copy rate is set to 0 (which means
disable the FlashCopy background copy), only data that changed on the source volume is
copied to the target volume. The benefit of using a FlashCopy mapping with background copy
enabled is that the target volume becomes a real independent clone of the FlashCopy
mapping source volume after the copy is complete. When the background copy is disabled,
only the target volume is a valid copy of the source data while the FlashCopy mapping
remains in place. Copying only the changed data saves your storage capacity (assuming it is thin-provisioned and the -rsize option was correctly set up).
Table 10-1   Relationship between the rate value and the data copied per second

Value      Data copied/sec    256 KB grains/sec    64 KB grains/sec
1 - 10     128 KB             0.5                  2
11 - 20    256 KB             1                    4
21 - 30    512 KB             2                    8
31 - 40    1 MB               4                    16
41 - 50    2 MB               8                    32
51 - 60    4 MB               16                   64
61 - 70    8 MB               32                   128
71 - 80    16 MB              64                   256
81 - 90    32 MB              128                  512
Data copy rate: The data copy rate remains the same regardless of the FlashCopy grain size. The difference is the number of grains that are copied per second. The grain size can be 64 KB or 256 KB. The smaller size uses more bitmap space and thus limits the total amount of FlashCopy space possible. However, it might be more efficient regarding the amount of data that is moved, depending on your environment.
Cleaning rate
The cleaning rate provides a method for FlashCopy copies with dependent mappings (multiple target or cascaded) to complete their background copies before their source goes offline or is deleted after a stop is issued.
When you create or modify a FlashCopy mapping, you can specify a cleaning rate for the FlashCopy mapping that is independent of the background copy rate. The cleaning rate is also defined as a value of 0 - 100, which has the same relationship to data copied per second as the background copy rate (see Table 10-1).
The cleaning rate controls the rate at which the cleaning process operates. The purpose of
the cleaning process is to copy (or flush) data from FlashCopy source volumes upon which
there are dependent mappings. For cascaded and multiple target FlashCopy, the source
might be a target for another FlashCopy or a source for a chain (cascade) of FlashCopy
mappings. The cleaning process must complete before the FlashCopy mapping can go to the
stopped state. This feature and the distinction between stopping and stopped states was
added to prevent data access interruption for dependent mappings when their source is
issued a stop.
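Both rates can also be set on an existing mapping from the CLI with the chfcmap command; the following sketch uses hypothetical rate values and a hypothetical mapping ID:
svctask chfcmap -copyrate 50 -cleanrate 80 0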
When consistency groups are used, the FlashCopy commands are issued to the FlashCopy
consistency group, which performs the operation on all FlashCopy mappings that are
contained within the consistency group.
Figure 10-2 shows a consistency group that consists of two FlashCopy mappings.
Dependent writes
To show why it is crucial to use consistency groups when a data set spans multiple volumes,
consider the following typical sequence of writes for a database update transaction:
1. A write is run to update the database log, which indicates that a database update is about
to be performed.
2. A second write is run to complete the actual update to the database.
3. A third write is run to update the database log, which indicates that the database update
completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to
complete before it starts the next step. However, if the database log (updates 1 and 3) and the
database (update 2) are on separate volumes, it is possible for the FlashCopy of the database
volume to occur before the FlashCopy of the database log. This situation can result in the
target volumes seeing writes (1) and (3) but not (2) because the FlashCopy of the database
volume occurred before the write was completed.
To overcome the issue of dependent writes across volumes and to create a consistent image
of the client data, it is necessary to perform a FlashCopy operation on multiple volumes as an
atomic operation using consistency groups.
A FlashCopy consistency group can contain up to 512 FlashCopy mappings. The more
mappings that you have, the more time it takes to prepare the consistency group. FlashCopy
commands can then be issued to the FlashCopy consistency group and simultaneously for all
of the FlashCopy mappings that are defined in the consistency group. For example, when the
FlashCopy for the consistency group is started, all FlashCopy mappings in the consistency
group are started at the same time, which results in a point-in-time copy that is consistent
across all FlashCopy mappings that are contained in the consistency group.
A consistency group aggregates FlashCopy mappings, not volumes. Thus, where a source
volume has multiple FlashCopy mappings, they can be in the same or separate consistency
groups. If a particular volume is the source volume for multiple FlashCopy mappings, you
might want to create separate consistency groups to separate each mapping of the same
source volume. Regardless of whether the source volume with multiple target volumes is in
the same consistency group or in separate consistency groups, the resulting FlashCopy
produces multiple identical copies of the source data.
The consistency group can be specified when the mapping is created. You can also add the
FlashCopy mapping to a consistency group or change the consistency group of a FlashCopy
mapping later.
Important: Do not place stand-alone mappings into a consistency group because they
become controlled as part of that consistency group.
Reverse FlashCopy
Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without waiting for the original copy
operation to complete. It supports multiple targets and multiple rollback points.
A key advantage of Reverse FlashCopy is that it does not delete the original target, thus
allowing processes that use the target, such as a tape backup, to continue uninterrupted.
You can also create an optional copy of the source volume that is made before the reverse
copy operation is started. This copy restores the original source data, which can be useful for
diagnostic purposes.
The -restore option: In the CLI, you must add the -restore option to the command
svctask startfcmap manually. For more information about using the CLI, see
Appendix A, “Command-line interface setup and SAN Boot” on page 667. The GUI
adds this suffix automatically if it detects you are mapping from a target back to the
source.
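As a sketch, for a hypothetical reverse mapping with ID 2, the CLI invocation is:
svctask startfcmap -restore 2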
Regardless of whether the initial FlashCopy map (volume X to volume Y) is incremental, the
Reverse FlashCopy operation copies only the modified data.
Consistency groups are reversed by creating a set of new “reverse” FlashCopy maps and
adding them to a new “reverse” consistency group. Consistency groups cannot contain more
than one FlashCopy map with the same target volume.
FlashCopy presets
The IBM Storwize V5000 storage system provides three FlashCopy presets (Snapshot,
Clone, and Backup) to simplify the more common FlashCopy operations, as shown in
Table 10-3.
Snapshot Creates a point-in-time view of the production data. The snapshot is not intended
to be an independent copy. Instead, it is used to maintain a view of the production
data at the time the snapshot is created.
This preset automatically creates a thin-provisioned target volume with none of the
capacity that is allocated at the time of creation. The preset uses a FlashCopy
mapping with none of the background copy so that only data written to the source
or target is copied to the target volume.
Clone Creates an exact replica of the volume, which can be changed without affecting the
original volume. After the copy operation completes, the mapping that was created
by the preset is automatically deleted.
This preset automatically creates a volume with the same properties as the source
volume and creates a FlashCopy mapping with a background copy rate of 50. The
FlashCopy mapping is configured to automatically delete when the FlashCopy
mapping reaches 100% completion.
Backup Creates a point-in-time replica of the production data. After the copy completes, the
backup view can be refreshed from the production data, with minimal copying of
data from the production volume to the backup volume.
This preset automatically creates a volume with the same properties as the source
volume. The preset creates an incremental FlashCopy mapping with a background
copy rate of 50.
Presets: All of the presets can be adjusted by using the expandable Advanced Settings
section in the GUI.
Most of the actions to manage the FlashCopy mapping can be done in the FlashCopy window
or the FlashCopy Mappings windows, although the quick path to create FlashCopy presets
can be found only in the FlashCopy window.
Click FlashCopy in the Copy Services function icon menu and the FlashCopy window opens,
as shown in Figure 10-5. In the FlashCopy window, the FlashCopy mappings are organized
by volumes.
Click FlashCopy Mappings in the Copy Services function icon menu and the FlashCopy
Mappings window opens, as shown in Figure 10-6. In the FlashCopy Mappings window, the
FlashCopy mappings are listed individually.
Creating a snapshot
In the FlashCopy window, choose a volume and click Create Snapshot from the Actions
drop-down menu, as shown in Figure 10-8. Alternatively, you can highlight your chosen
volume and right-click to access the Actions menu.
You now have a snapshot volume for the volume you selected.
You now have a clone volume for the volume you selected.
You now have a backup volume for the volume you selected.
In the FlashCopy window and in the FlashCopy Mappings window, you can monitor the
progress of the running FlashCopy operations, as shown in Figure 10-11. The progress bars
for each target volume indicate the copy progress as a percentage. The copy progress
remains 0% for snapshots (there is no change until data is written to the target volume). The copy progress for clone and backup continues to increase until the copy process completes.
The copy progress can be also found under the Running Tasks status indicator, as shown in
Figure 10-12.
After the copy processes complete, you find the FlashCopy mapping with the clone preset
(TestVol2 in our example) was deleted automatically, as shown in Figure 10-14. There are
now two identical volumes that are independent of each other.
You can Create New Target Volumes as part of the mapping process or Use Existing Target
Volumes. We describe creating volumes next. To use existing volumes, see “Using existing
target volumes” on page 480.
6. You can choose from which storage pool you want to create your target volume as shown
in Figure 10-23. You can select the same storage pool used by the source volume or a
different pool. Click Next to continue.
A wizard guides you through the process to create a FlashCopy mapping. The steps are the
same as creating an Advanced FlashCopy mapping using Existing Target Volumes, as
described in “Using existing target volumes” on page 480.
You can start the mapping by selecting the FlashCopy target volume in the FlashCopy
window and selecting the Start option from the Actions drop-down menu (as shown in
Figure 10-31) or by selecting the volume and right-clicking. The status of the FlashCopy
mapping changes from Idle to Copying.
To change the name of the target volume, select the FlashCopy target volume in the
FlashCopy window and click the Rename Target Volume option from the Actions drop-down
menu (as shown in Figure 10-33) or right-click the selected volume.
Enter your new name for the target volume, as shown in Figure 10-34. Click Rename to
finish.
To change the name of a FlashCopy mapping, select the FlashCopy mapping in the
FlashCopy Mappings window and click Rename Mapping in the Actions drop-down menu,
as shown in Figure 10-35.
FlashCopy Mapping state: If the FlashCopy mapping is in the Copying state, it must be
stopped before it is deleted.
Deleting FlashCopy mapping: Deleting the FlashCopy mapping does not delete the
target volume. If you must reclaim the storage space occupied by the target volume, you
must delete the target volume manually.
Important: The underlying command that is run by the IBM Storwize V5000 appends the
-restore option automatically.
The Consistency Groups window is where you can manage consistency groups and
FlashCopy mappings as shown in Figure 10-55.
Individual consistency groups are displayed underneath the mappings that are not in a group. You can discover the properties of a consistency group and the FlashCopy mappings in it by clicking the plus icon to the left of the group name. You can also take action on any consistency groups and FlashCopy mappings within the Consistency Groups window, as allowed by their state. For more information, see 10.1.5, “Managing FlashCopy mappings” on page 471.
You are prompted to enter the name of the new consistency group. Following your naming
conventions, enter the name of the new consistency group in the name field and click Create.
Important: You cannot move mappings that are copying. Selecting a FlashCopy that is
already running results in the Move to Consistency Group option being disabled.
Selections of a range are performed by highlighting a mapping, pressing and holding the Shift
key, and clicking the last item in the range. Multiple selections can be made by pressing and
holding the Ctrl key and clicking each mapping individually. The option is also available by
right-clicking individual mappings.
After the action completes, you find that the FlashCopy mappings you selected were removed
from the Not In a Group list to the consistency group you chose.
After you start the consistency group, all the FlashCopy mappings in the consistency group
start at the same time. The state of consistency group and all the underlying mappings
changes to Copying, as shown in Figure 10-62.
Any previously copied relationships that were added to a consistency group that was later
stopped before all members of the consistency group completed synchronization remain in
the Copied state.
The FlashCopy mappings are returned to the Not in a Group list after they are removed from
the consistency group.
To restore a consistency group from a FlashCopy, we must create a reverse mapping of all
the individual volumes that are contained within the original consistency group. In our
example, we have two FlashCopy mappings (fcmap0 and fcmap1) in a consistency group that
is known as FlashTestGroup, as shown in Figure 10-68.
3. To restore the consistency group, highlight the reverse consistency group and click Start,
as shown in Figure 10-70.
Important: The IBM Storwize V5000 automatically appends the -restore option to the
command.
Remote Copy consists of three methods for copying: Metro Mirror, Global Mirror, and Global
Mirror with Change Volumes. Metro Mirror is designed for metropolitan distances with a
synchronous copy requirement. Global Mirror is designed for longer distances without
requiring the hosts to wait for the full round-trip delay of the long-distance link through
asynchronous methodology. Global Mirror with Change Volumes is an added piece of
functionality for Global Mirror that is designed to attain consistency on lower-quality network
links.
Metro Mirror and Global Mirror are IBM branded terms for the functions of Synchronous
Remote Copy and Asynchronous Remote Copy. Throughout this book, the term “Remote
Copy” is used to refer to both functions where the text applies to each term equally.
Partnership
When a partnership is created, we connect either two separate IBM Storwize V5000 systems, or an IBM Storwize V5000 and an IBM SAN Volume Controller, Storwize V3700, or Storwize V7000. After the partnership creation is configured on both systems, further communication
between the node canisters in each of the storage systems is established and maintained by
the SAN. All inter-cluster communication goes through the Fibre Channel network.
The partnership must be defined on both participating Storwize or SVC systems to make the
partnership fully functional.
Introduction to layers
IBM Storwize V5000 implements the concept of layers. Layers determine how the IBM
Storwize portfolio interacts with the IBM SAN Volume Controller. Currently, there are two
layers: replication and storage.
The replication layer is used when you want to use the IBM Storwize V5000 with one or more
IBM SAN Volume Controllers as a Remote Copy partner. The storage layer is the default
mode of operation for the IBM Storwize V5000, and is used when you want to use the IBM
Storwize V5000 to present storage to an IBM SAN Volume Controller as a backend system.
The layer for the IBM Storwize V5000 can be switched by running svctask chsystem -layer
replication. Generally, switch the layer while your IBM Storwize V5000 system is not in
production. This situation prevents potential disruptions because layer changes are not
I/O-tolerant.
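The following sketch shows the switch from the CLI; it assumes that, as on other Storwize systems, the current layer is reported in the lssystem output:
svcinfo lssystem
svctask chsystem -layer replication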
Figure 10-73 shows the effect of layers on IBM SAN Volume Controller and IBM Storwize
V5000 partnerships.
Partnership topologies
A partnership between up to four IBM Storwize V5000 systems is allowed.
The following typical partnership topologies between multiple IBM Storwize V5000s are
available:
Daisy-chain topology, as shown in Figure 10-74.
Figure 10-74 Daisy chain partnership topology for IBM Storwize V5000
Partnerships: These partnerships are valid for configurations with SAN Volume
Controllers and IBM Storwize V5000 systems if the IBM Storwize V5000 systems are using
the replication layer. They are also valid for Storwize V3700 and V7000 products.
Partnership states
A partnership has the following states:
Partially Configured
Indicates that only one cluster partner is defined from a local or remote cluster to the
displayed cluster and is started. For the displayed cluster to be configured fully and to
complete the partnership, you must define the cluster partnership from the cluster that is
displayed to the corresponding local or remote cluster.
Fully Configured
Indicates that the partnership is defined on the local and remote clusters and is started.
Remote Not Present
Indicates that the remote cluster is not present for the partnership.
Typically, the master volume contains the production copy of the data and is the volume that
the application normally accesses. The auxiliary volume often contains a backup copy of the
data and is used for disaster recovery.
The master and auxiliary volumes are defined when the relationship is created, and these
attributes never change. However, either volume can operate in the primary or secondary role
as necessary. The primary volume contains a valid copy of the application data and receives
updates from the host application, which is analogous to a source volume. The secondary
volume receives a copy of any updates to the primary volume because these updates are all
transmitted across the mirror link. Therefore, the secondary volume is analogous to a
continuously updated target volume. When a relationship is created, the master volume is
assigned the role of primary volume and the auxiliary volume is assigned the role of
secondary volume. The initial copying direction is from master to auxiliary. When the
relationship is in a consistent state, you can reverse the copy direction.
Important: The use of Remote Copy target volumes as Remote Copy source volumes is
not allowed. A FlashCopy target volume can be used as Remote Copy source volume and
also as a Remote Copy target volume.
Metro Mirror
Metro Mirror is a type of Remote Copy that creates a synchronous copy of data from a master
volume to an auxiliary volume. With synchronous copies, host applications write to the master
volume but do not receive confirmation that the write operation completed until the data is
written to the auxiliary volume. This action ensures that both volumes have identical data
when the copy completes. After the initial copy completes, the Metro Mirror function always
maintains a fully synchronized copy of the source data at the target site.
Figure 10-78 shows how a write to the master volume is mirrored to the cache of the auxiliary
volume before an acknowledgement of the write is sent back to the host that issued the write.
This process ensures that the auxiliary is synchronized in real time if it is needed in a failover
situation.
The Metro Mirror function supports copy operations between volumes that are separated by
distances up to 300 km. For disaster recovery purposes, Metro Mirror provides the simplest
way to maintain an identical copy on the primary and secondary volumes. However, as with all
synchronous copies over remote distances, there can be a performance impact to host
applications. This performance impact is related to the distance between primary and
secondary volumes and, depending on application requirements, its use might be limited
based on the distance between sites.
In asynchronous Remote Copy (which Global Mirror provides), write operations are
completed on the primary site and the write acknowledgement is sent to the host before it is
received at the secondary site. An update of this write operation is sent to the secondary site
at a later stage, which provides the capability to perform Remote Copy over distances that
exceed the limitations of synchronous Remote Copy.
The distance of Global Mirror replication is limited primarily by the latency of the WAN link that
is provided. Global Mirror has a requirement of 80 ms round-trip-time for data that is sent to
the remote location. The propagation delay is roughly 8.2 µs per mile or 5 µs per kilometer for
Fibre Channel connections. Each device in the path adds more delay of about 25 µs. Devices
that use software (such as some compression devices) add much more time. The time that is
added by software-assisted devices is highly variable and should be measured directly. Be
sure to include these times when you are planning your Global Mirror design.
You should also measure application performance based on the expected delays before
Global Mirror is fully implemented. The IBM Storwize V5000 storage system provides you
with an advanced feature that permits you to test performance implications before Global
Mirror is deployed and a long-distance link is obtained. This advanced feature is enabled by
modifying the IBM Storwize V5000 storage system parameters gmintradelaysimulation and
gminterdelaysimulation. These parameters can be used to simulate the write delay to the
secondary volume. The delay simulation can be enabled separately for each intra-cluster or
inter-cluster Global Mirror. You can use this feature to test an application before the full
deployment of the Global Mirror feature. For more information about how to enable the CLI
feature, see Appendix A, “Command-line interface setup and SAN Boot” on page 667.
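As a sketch, the following commands would add a hypothetical 20 ms of simulated write delay to inter-cluster and intra-cluster Global Mirror I/O; setting the values back to 0 disables the simulation:
svctask chsystem -gminterdelaysimulation 20
svctask chsystem -gmintradelaysimulation 20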
The Global Mirror algorithms always maintain a consistent image on the auxiliary volume.
They achieve this consistent image by identifying sets of I/Os that are active concurrently at
the master, assigning an order to those sets, and applying those sets of I/Os in the assigned
order at the secondary.
In a failover scenario where the secondary site must become the master source of data
(depending on the workload pattern and the bandwidth and distance between local and
remote site), certain updates might be missing at the secondary site. Therefore, any
applications that use this data must have an external mechanism for recovering the missing
updates and reapplying them; for example, a transaction log replay.
To address these issues, Change Volumes were added as an option for Global Mirror
relationships. Change Volumes use the FlashCopy functionality but cannot be manipulated as
FlashCopy volumes because they are special-purpose only. Change Volumes replicate
point-in-time images on a cycling period (the default is 300 seconds). This means that the
data that is replicated needs to reflect only the state of the volume at the point in time the
image was taken, instead of every update that occurred during the period. This approach can
provide significant reductions in replication volume.
Figure 10-80 shows a basic Global Mirror relationship without Change Volumes.
With Change Volumes, a FlashCopy mapping exists between the primary volume and the
primary Change Volume. The mapping is updated during a cycling period (every 60 seconds
to one day). The primary Change Volume is then replicated to the secondary Global Mirror
volume at the target site, which is then captured in another change volume on the target site.
This situation provides a consistent image at the target site and protects your data from being
inconsistent during resynchronization.
In Figure 10-83, the same data is being updated repeatedly, so Change Volumes
demonstrate significant I/O transmission savings because you must send only I/O number 16,
which was the last I/O before the cycling period.
Careful consideration should be put into balancing your business requirements with the
performance of Global Mirror with Change Volumes. Global Mirror with Change Volumes
increases the inter-cluster traffic for more frequent cycling periods, so going as short as
possible is not always the answer. In most cases, the default should meet your requirements
and perform reasonably well.
The GUI automatically creates Change Volumes for you. However, it is a limitation of this
initial release that they are fully provisioned volumes. To save space, you should create
thin-provisioned volumes in advance and use the existing volume option to select your
change volumes.
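As a sketch only (the relationship and volume names here are hypothetical, and the exact syntax should be checked in the CLI reference), pre-created thin-provisioned change volumes can be attached to a stopped Global Mirror relationship and the cycling behavior adjusted from the CLI:
chrcrelationship -masterchange GMREL1_CHANGE GMREL1
chrcrelationship -cyclingmode multi GMREL1
chrcrelationship -cycleperiodseconds 300 GMREL1
The -masterchange volume must reside on the same system as the master volume; a corresponding change volume is assigned to the auxiliary volume with -auxchange on the remote system.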
Remote Copy commands can be issued to a Remote Copy consistency group, and, therefore,
simultaneously for all Metro Mirror relationships that are defined within that consistency
group, or to a single Metro Mirror relationship that is not part of a Metro Mirror consistency
group.
Figure 10-84 shows the concept of Remote Copy consistency groups. Because the
RC_Relationships 1 and 2 are part of the consistency group, they can be handled as one
entity, while the stand-alone RC_Relationship 3 is handled separately.
Metro Mirror and Global Mirror relationships cannot belong to the same consistency group. A
copy type is automatically assigned to a consistency group when the first relationship is
added to the consistency group. After the consistency group is assigned a copy type, only
relationships of that copy type can be added to the consistency group.
The following states apply to the relationships and the consistency groups, except for the
Empty state, which is only for consistency groups:
InconsistentStopped
The primary volumes are accessible for read and write I/O operations, but the secondary
volumes are not accessible for either one. A copy process must be started to make the
secondary volumes consistent.
InconsistentCopying
The primary volumes are accessible for read and write I/O operations, but the secondary
volumes are not accessible for either one. This state indicates that a copy process is
ongoing from the primary to the secondary volume.
ConsistentStopped
The secondary volumes contain a consistent image, but it might be outdated compared to
the primary volumes. This state can occur when a relationship was in the
ConsistentSynchronized state and experiences an error that forces a freeze of the
consistency group or the Remote Copy relationship.
ConsistentSynchronized
The primary volumes are accessible for read and write I/O operations. The secondary
volumes are accessible for read-only I/O operations.
Idling
The primary volumes and the secondary volumes are operating in the primary role.
Therefore, the volumes are accessible for write I/O operations.
IdlingDisconnected
The volumes in this half of the consistency group are all operating in the primary role and
can accept read or write I/O operations.
InconsistentDisconnected
The volumes in this half of the consistency group are all operating in the secondary role
and cannot accept read or write I/O operations.
The following Remote Copy configuration limits apply:
Number of Remote Copy relationships per consistency group: No limit beyond the Remote Copy relationships per system
Number of Remote Copy relationships per I/O group: 2,048 (4,096 per system)
Zoning recommendation
Node canister ports on each IBM Storwize V5000 must communicate with each other so that
the partnership can be created. These ports must be visible to each other on your SAN.
Proper switch zoning is critical to facilitating inter-cluster communication.
Important: When a local fabric and a remote fabric are connected for Remote Copy
purposes, the ISL hop count between a local node and a remote node cannot exceed
seven.
Redundancy: If the link between two sites is configured with redundancy so that it can
tolerate single failures, the link must be sized so that the bandwidth and latency
requirement can be met during single failure conditions.
Native IP Replication
With the advent of version 7.2 code, the IBM Storwize V5000 supports replication via
native IP links using the built-in networking ports of the cluster nodes. This gives greater
flexibility in creating Remote Copy links between Storwize and SVC products, and FC over IP
routers are no longer required. It uses SANslide technology developed by Bridgeworks
Limited of Christchurch, UK, a company that specializes in products that can bridge storage
protocols and accelerate data transfer over long distances.
Adding this technology at each end of a wide area network (WAN) Transmission Control
Protocol/Internet Protocol (TCP/IP) link significantly improves the utilization of the link. It does
this by applying patented Artificial Intelligence (AI) to hide the latency that is normally
associated with WANs. Instead, the Storwize technology uses TCP/IP latency to its
advantage. On a traditional IP link, performance drops off as more data is sent because each
transmission must wait for an acknowledgement before the next can be sent. Rather than
wait for the acknowledgement to come back, the Storwize technology sends more sets of
packets across other virtual connections. The number of virtual connections is controlled by
the AI engine. This improves WAN connection use, which results in a data transfer rate
approaching full line speed; the effect is similar to the use of buffer-to-buffer credits in FC.
If packets are lost from any virtual connection, the data will be re-transmitted, and the remote
unit will wait for it. Presuming that this is not a frequent problem, overall performance is only
marginally affected because of the delay of an extra round trip for the data that is resent. The
implementation of this technology can greatly improve the performance of Remote Copy
especially Global Mirror Change Volumes (GM/CV) over long distances.
For more information about Native IP replication and how to use it, see the separate
publication, IBM SAN Volume Controller and Storwize Family Native IP Replication,
REDP-5103.
If you are not using Global Mirror with Change Volumes, for disaster recovery purposes, you
can use the FlashCopy feature to create a consistent copy of an image before you restart a
Global Mirror relationship.
The secondary volume for the relationship cannot be used until the copy process completes
and the relationship returns to the consistent state. When this situation occurs, start a
FlashCopy operation for the secondary volume before you restart the relationship. While the
relationship is in the Copying state, the FlashCopy feature can provide a consistent copy of
the data. If the relationship does not reach the synchronized state, you can use the
FlashCopy target volume at the secondary site.
In practice, the error that is most often overlooked is latency. Global Mirror has a
round-trip-time tolerance limit of 80 ms. A message that is sent from your source cluster to
your target cluster and the accompanying acknowledgement must have a total time of 80 ms
(or 40 ms each way).
The primary component of your round-trip time is the physical distance between sites. For
every 1,000 km (621.36 miles), there is a 5 ms delay. This delay does not include the time that
is added by equipment in the path. Every device adds a varying amount of time depending on
the device, but you can expect about 25 µs for pure hardware devices. For software-based
functions (such as compression that is implemented in software), the added delay tends to be
much higher (usually in the millisecond-plus range).
Consider this example. Company A has a production site that is 1,900 km from their recovery
site. Their network service provider uses five devices to connect the two sites. In addition to
those devices, Company A uses a SAN Fibre Channel Router at each site to provide FCIP to
encapsulate the Fibre Channel traffic between sites. There are now seven devices, and
1,900 km of distance delay. All the devices add 200 µs of delay each way. The distance adds
9.5 ms each way, for a total of 19 ms. Combined with the device latency, that is 19.4 ms of
physical latency at a minimum. This latency is under the 80 ms limit of Global Mirror, but this
number is the best-case number.
Link quality and bandwidth play a significant role here. Your network provider likely ensures a
latency maximum on your network link; be sure to stay below the Global Mirror RTT (Round
Trip Time) limit. You can easily double or triple the expected physical latency with a lower
quality or lower bandwidth network link. As a result, you are suddenly within range of
exceeding the limit the moment a large flood of I/O happens that exceeds the bandwidth
capacity you have in place.
When you get a 1920 error, always check the latency first. The FCIP routing layer can
introduce latency if it is not properly configured. If your network provider reports a much lower
latency, this report can be an indication of a problem at your FCIP Routing layer. Most FCIP
Routing devices have built-in tools that you can use to check the round-trip delay time (RTT).
When you are checking latency, remember that TCP/IP routing devices (including FCIP
routers) report RTT by using standard 64-byte ping packets.
Figure 10-85 Effect of packet size (in bytes) versus link size
Before you proceed, take a quick look at the second largest component of your
round-trip-time; that is, serialization delay. Serialization delay is the amount of time that is
required to move a packet of data of a specific size across a network link of a given bandwidth.
This delay is based on the simple concept that the time that is required to move a specific amount of
data decreases as the data transmission rate increases.
In Figure 10-85, there are orders of magnitude of difference between the different link
bandwidths. It is easy to see how 1920 errors can arise when your bandwidth is insufficient
and why you should never use a TCP/IP ping to measure RTT for FCIP traffic.
Figure 10-85 compares the amount of time in microseconds that is required to transmit a
packet across network links of varying bandwidth capacity. The following packet sizes are
used:
64 bytes: The size of the common ping packet
1500 bytes: The size of the standard TCP/IP packet
2148 bytes: The size of a Fibre Channel frame
Your path MTU affects the delay that is incurred in getting a packet from one location to
another if it causes fragmentation, or if it is too large and causes too many retransmissions
when a packet is lost. After you have verified your latency by using the correct packet size, proceed with
normal hardware troubleshooting.
In practice, the source of this error is most often a fabric problem or a problem in the network
path between your partners. When you receive this error, if your fabric has more than 64 HBA
ports zoned, you should check your fabric configuration to see if more than one HBA port for
each node per I/O group are zoned together. The advised zoning configuration for fabrics is
one port of each node per I/O group associated with the each host HBA. For those fabrics
with 64 or more host ports, this recommendation becomes a rule. You must follow this zoning
rule or the configuration is technically unsupported.
Improper zoning leads to SAN congestion, which can inhibit remote link communication
intermittently. Checking the zero buffer credit timer through IBM Tivoli Storage Productivity
Center and comparing its value against your sample interval might reveal potential SAN
congestion. When a zero buffer credit timer is above 2% of the total time of the sample
interval, it is likely to cause problems.
Next, always ask your network provider to check the status of the link. If the link is okay, watch
for repetition of this error. It is possible in a normal and functional network setup to have
occasional 1720 errors, but multiple occurrences indicate a larger problem.
If you receive multiple 1720 errors, recheck your network connection and then check the IBM
Storwize V5000 partnership information to verify the status and settings. Perform diagnostic
tests for every piece of equipment in the path between the two systems. It often helps to have
a diagram that shows the path of your replication from logical and physical configuration
viewpoints.
If your investigation fails to resolve your Remote Copy problems, you should contact your IBM
support representative for a complete analysis.
As the name implies, these windows are used to manage Remote Copy and the partnership.
If the Storwize V5000 can see a candidate system, you will be prompted to choose between
an FC or IP partnership, as shown in Figure 10-89.
Next you can create your partnership by selecting the appropriate remote storage system
from the Partner system drop-down as shown in Figure 10-90, and defining the available
bandwidth and the background copy percentage between both systems.
Figure 10-90 Select the remote IBM Storwize storage system for a new partnership
The bandwidth that you must enter here is used by the background copy process between the
clusters in the partnership. To set the background copy bandwidth optimally, make sure that
you consider all three resources (primary storage, inter-cluster link bandwidth, and auxiliary
storage) to avoid overloading them, which will affect the I/O latency.
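The same partnership can also be defined from the CLI. The following is a minimal sketch; the remote system name, IP address, and bandwidth values are examples only, and the matching command must also be run on the remote system for the partnership to become fully configured:
mkfcpartnership -linkbandwidthmbits 4000 -backgroundcopyrate 50 ITSO_V3700
mkippartnership -type ipv4 -clusterip 192.168.1.2 -linkbandwidthmbits 1000 -backgroundcopyrate 50
Use mkfcpartnership for Fibre Channel partnerships or mkippartnership for native IP replication, not both.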
If you select the partnership, more information is available through the Properties option on
the Actions menu or by right-clicking, as shown in Figure 10-92.
From the Properties menu option, the partnership bandwidth and background copy rate can
be altered by clicking the Edit button as shown in Figure 10-93.
After you have edited the parameters, click Save to save changes or Cancel to quit without
making changes.
Complete the same steps on the second storage system for the partnership to become fully
configured.
The partnership returns to the fully configured status when restarted and the health bar
returns to green. Initially, when restarting a partnership, the status may go to a Not Present
state and the health bar will turn red, as shown in Figure 10-97. This is normal; when the
GUI refreshes, it will correct itself, go to a Fully Configured state, and the health bar will
turn green.
Deleting a partnership
You can delete a partnership by selecting Delete from the Actions drop-down menu, as
shown in Figure 10-95 on page 547.
The Remote Copy window is where management of Remote Copy relationships and Remote
Copy consistency groups is carried out as shown in Figure 10-99.
Important: Before a remote copy relationship can be created, target volumes that are the
same size as the source volumes that you want to mirror must be created. For more
information about creating volumes, see Chapter 5, “Volume configuration” on page 201.
To create a Remote Copy relationship, click Create Relationship from the Actions
drop-down menu as shown in Figure 10-100. A wizard opens and guides you through the
Remote Copy relationship creation process.
You must select where your auxiliary (target) volumes are: the local system or the already
defined second storage system. In our example (as shown in Figure 10-102), we choose
another system to build an inter-cluster relationship. Click Next to continue.
You can define multiple and independent relationships by choosing another set of volumes
and clicking Add again. Incorrectly defined relationships can be removed by clicking the red
cross. In our example, we create two independent Remote Copy relationships, as shown in
Figure 10-104.
If you indicate that the volumes are already synchronized by selecting Yes, a warning message opens, as
shown in Figure 10-106. Confirm that the volumes are truly identical, and then click OK to
continue.
Figure 10-106 Warning message to make sure that the volumes are synchronized
After the creation of the Remote Copy relationships completes, two independent Remote Copy
relationships are defined and displayed in the Not in a Group list, as shown in Figure 10-108.
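The relationships that the wizard creates can also be created directly from the CLI. This is a sketch only; the volume and system names are examples, -global would be added for a Global Mirror relationship, and -sync is specified only when the volumes are already identical:
mkrcrelationship -master mastervol1 -aux auxvol1 -cluster ITSO_V3700 -name MMREL1
startrcrelationship MMREL1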
Highlight one of the operations and click to see the progress as shown in Figure 10-110.
Figure 10-110 Running task expanded to show the percentage complete of each copy operation
A prompt appears. Click to allow secondary read/write access, if required, and then click Stop
Relationship, as shown in Figure 10-112.
Figure 10-117 Warning message for switching direction of a Remote Copy relationship
After the switch completes, your Remote Copy relationship is tagged (as shown in
Figure 10-118), and shows you that the primary volume in this relationship was changed to
the auxiliary.
Enter the new name for the Remote Copy relationship and click Rename.
You must confirm this deletion by verifying the number of relationships to be deleted, as
shown in Figure 10-121. Click Delete to proceed.
You must enter a name for your new consistency group, as shown in Figure 10-123. We call
ours ITSO_test.
Figure 10-124 Remote Copy consistency group auxiliary volume location window
Next you are prompted to create an empty consistency group or add relationships to it, as
shown in Figure 10-125.
Choose the relevant copy type and click Next. A consistency group cannot contain both Metro
Mirror and Global Mirror relationships; it must contain only one type or the other. In the following
window, you can choose existing relationships to add to the new consistency group. Only
existing relationships of the type you chose previously (either Metro Mirror relationships or
Global Mirror relationships) will be displayed. This step is optional. Use the Ctrl and Shift keys
to select multiple relationships to add. If you decide that you do not want to use any of these
relationships but you do want to create other relationships, click Next.
The next window is optional and gives the option to create new relationships to add to the
consistency group, as shown in Figure 10-128. In our example we have chosen mastervol4
from the Master drop-down and auxmirror4 from the Auxiliary drop-down and added these.
In the next window, you are asked whether you want to start copying the volumes now, as
shown in Figure 10-130.
Figure 10-131 Error on creating consistency group when states do not match
Click Close to close the task window and the new consistency group is now shown in the
GUI, as shown in Figure 10-132. Notice that the relationship we tried to add (mastervol4 →
auxmirror4) is not listed under the new consistency group but remains under Not in a
Group.
Just as for individual Remote Copy relationships, each Remote Copy Consistency Group
displays the name and the status of the consistency group beside the Relationship function
icon. Also shown is the copy direction; in our case this is ITSO V5000 → ITSO V3700. It is
easy to change the name of a consistency group by right-clicking the entry, selecting Rename
and then entering a new name. Alternatively, highlight the consistency group and select
Rename from the Actions drop-down menu. Similarly, below the Relationship function icon
are the Remote Copy relationships in this consistency group. The actions on the Remote
Copy relationships can be applied here by using the Actions drop-down menu or right-clicking
the relationships, as shown in Figure 10-133.
Figure 10-136 Choose the consistency group to add the remote copies
Your Remote Copy relationships are now in the consistency group that you selected. You can
only add Remote Copy relationships that are in the same state as the Consistency Group to
which you want to add them.
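From the CLI, the same result can be sketched with commands along the following lines (the group and relationship names are examples; as noted, the relationship must be in the same state as the group):
mkrcconsistgrp -name ITSO_test -cluster ITSO_V3700
chrcrelationship -consistgrp ITSO_test MMREL1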
You will be asked if you want to allow read/write access to secondary volumes as shown in
Figure 10-138. Make your choice and then click Stop Consistency Group.
Note: The CLI differentiates between stopping consistency groups with or without access
using the -access flag, for example, stoprcconsistgrp -access 0
You will be prompted to select whether the Master or Auxiliary volumes are to act as the
primary volumes before the consistency group is started as shown in Figure 10-140. The
consistency group will then start copying data from the primary volumes to the secondary
volumes. If the consistency group was stopped without access and is in the Consistent
Stopped state, you will not be prompted to confirm the direction in which to start the group. It
will start by default using the original primary volumes from when it was stopped.
Important: If you are starting a Consistency Group where access was previously allowed
to auxiliary volumes when the group was stopped, ensure you choose the correct volumes
to act as the primary volumes when you restart the group. Failing to do so could lead to loss of data.
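As a minimal CLI sketch of the start operation described above (ITSO_test is the example group name), the primary copies can be stated explicitly when the group is started:
startrcconsistgrp -primary master ITSO_test
startrcconsistgrp -primary aux ITSO_test
The first form resumes copying with the original master volumes as the source; the second makes the auxiliary volumes the source, which is the choice that must be made carefully after access was allowed at the secondary site.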
The switch roles option is used for disaster recovery and disaster recovery testing. Ensure host access to
the primary volumes is stopped before switching. You can then mount hosts on the auxiliary
volumes and continue production from a DR site. As before, with individual remote copy
relationships, all the relationships switched as part of the consistency group will now show an
icon indicating they have been swapped around. This is shown in Figure 10-143.
Figure 10-143 Consistency group after a switch roles has taken place
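The switch can also be driven from the CLI. As a sketch (the group name is an example), and only after host I/O to the current primary volumes has been quiesced:
switchrcconsistgrp -primary aux ITSO_test
This makes the auxiliary volumes the primary copies for every relationship in the consistency group.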
A warning appears, as shown in Figure 10-145. Make sure the Remote Copy relationships
that are shown in the field are the ones that you want to remove from the consistency group.
Click Remove to proceed.
Figure 10-145 Confirm the relationships to remove from the Remote Copy consistency group
You must confirm the deletion of the consistency group, as shown in Figure 10-147. Click Yes
if you are sure that this consistency group should be deleted.
There is a distinction to be made between virtualizing external storage and importing existing
data into the Storwize V5000. When we talk of virtualizing external storage we mean creating
new logical units with no data on them and adding these to storage pools under Storwize
V5000 control. In this way the external storage can benefit from the Storwize V5000 features
such as EasyTier and Copy Services. When existing data needs to be put under control of the
Storwize V5000, it must first be imported as an image mode volume. It is advised that the data
then be copied onto storage, whether internal or external, already under the control of the
Storwize V5000 and not left as an image mode volume. The reason for this is again to be able
to benefit from the Storwize V5000 features.
The external storage systems that are incorporated into the IBM Storwize V5000 environment
can be new systems or existing systems. Any data on the existing storage systems can be
easily migrated to the IBM Storwize V5000 managed environment, as described in Chapter 6,
“Storage migration” on page 249.
Migration: If the IBM Storwize V5000 is used as a general management tool, the
appropriate External Virtualization licenses must be ordered. The only exception is if you
want to migrate existing data from external storage systems to IBM Storwize V5000
internal storage and then remove the external storage. You can temporarily configure your
External Storage license for a 45 day period. For more than a 45 day migration
requirement, an appropriate External Virtualization license must be ordered.
You can configure the IBM Storwize V5000 licenses by clicking the Settings icon and then
System. In the License pane, there are four license options you can set: FlashCopy,
RemoteCopy, Easy Tier and External Virtualization. Set these license options to the limits you
obtained from IBM. For more information about setting licenses on the IBM Storwize V5000
see Chapter 2, “Initial configuration” on page 25.
For assistance with licensing questions or to purchase any of these licenses, contact your
IBM account team or IBM Business Partner.
Make sure that the switches or directors are at the firmware levels that are supported by the
IBM Storwize V5000 and that the IBM Storwize V5000 port login maximums that are listed in
the restriction document are not exceeded. The configuration restrictions can be found on the
Support home page, which is available at this website:
https://www-947.ibm.com/support/entry/myportal/product/system_storage/disk_systems
/mid-range_disk_systems/ibm_storwize_v5000?productContext=-2033461677
The advised SAN configuration is composed of a minimum of two fabrics. The ports on
external storage systems and the IBM Storwize V5000 ports should be evenly split between
the two fabrics to provide redundancy if one of the fabrics goes offline.
Figure 11-1 shows an example of how to cable devices to the SAN. Refer to this example as
we describe the zoning. For the purposes of this example we have used an IBM Storwize
V3700 as our external storage.
Create an IBM Storwize V5000/external storage zone for each storage system to be
virtualized, as shown in the following examples:
Zone external Storwize V3700 canister 1 port 2 with Storwize V5000 canister 1 port 2 and
canister 2 port 2 in the BLUE fabric
Zone external Storwize V3700 canister 2 port 2 with Storwize V5000 canister 1 port 4 and
canister 2 port 4 in the BLUE fabric
Zone external Storwize V3700 canister 1 port 3 with Storwize V5000 canister 1 port 1 and
canister 2 port 1 in the RED fabric
Zone external Storwize V3700 canister 2 port 1 with Storwize V5000 canister 1 port 3 and
canister 2 port 3 in the RED fabric
Verify that the storage controllers to be virtualized by IBM Storwize V5000 meet the
configuration restrictions, which can be found on the Support home page, at this website:
https://www-947.ibm.com/support/entry/myportal/product/system_storage/disk_systems
/mid-range_disk_systems/ibm_storwize_v5000?productContext=-2033461677
Make sure that the firmware or microcode levels of the storage controllers to be virtualized
are supported by IBM Storwize V5000. See the SSIC website for more details:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
IBM Storwize V5000 must have exclusive access to the LUNs from the external storage
system that are presented to it. LUNs cannot be shared between IBM Storwize V5000 and
other storage virtualization platforms or between an IBM Storwize V5000 and hosts. However,
different LUNs can be mapped from the same external storage system to an IBM Storwize
V5000 and other hosts in the SAN through different storage ports.
Make sure that the external storage subsystem LUN masking is configured to map all LUNs
to all the WWPNs in the IBM Storwize V5000 storage system.
Ensure that you check the IBM Storwize V5000 Knowledge Center and review the
“Configuring and servicing external storage system” topic before you prepare the external
storage systems for discovery from the IBM Storwize V5000 system. This Knowledge Center
can be found at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740
.doc/v5000_ichome_740.html
The basic concepts of managing an external storage system are the same as for internal storage.
IBM Storwize V5000 discovers LUNs from the external storage system as one or more
MDisks. These MDisks are added to a storage pool in which volumes are created and
mapped to hosts as needed.
Note: The Detect MDisk option is the only way to manually update the Storwize V5000
configuration when either adding or deleting MDisks.
7. Select the storage tier for the MDisks. The MDisks will be unassigned at this point and will
need to be assigned to the correct storage tiers for EasyTier management. For more
information about Storage Tiers see Chapter 9, “Easy Tier” on page 419.
Select the MDisks to be assigned and either use the Actions drop-down or right-click and
Select Tier. Ensure that the correct tier is chosen, as shown in Figure 11-5.
If the storage pool does not yet exist, follow the procedure outlined in Chapter 7, “Storage
pools” on page 309.
9. Add the MDisks to the pool. Select the pool the MDisks are to belong to and click Add to
Pool as shown in Figure 11-7. After the task completes, click Close and the selected
volumes will be assigned to the storage pool.
Important: If the external storage volumes that are to be virtualized behind the Storwize
V5000 already have data on them and this data needs to be retained, DO NOT use the
Assign to Pool option to manage the MDisks. This will destroy the data on the disks.
Instead, use the Import option. See 11.2.2, “Importing Image Mode volumes” on page 587.
10.Finally create volumes from the storage pool and map them to hosts, as needed. See
Chapter 5, “Volume configuration” on page 201 to learn how to create and map volumes to
hosts.
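For reference, a CLI sketch of the same flow for new, empty external LUNs might look like the following; the MDisk, tier, and pool names are examples, and the valid tier names depend on your code level:
detectmdisk
lsmdisk -filtervalue mode=unmanaged
chmdisk -tier enterprise mdisk10
mkmdiskgrp -name ExternalPool1 -ext 1024
addmdisk -mdisk mdisk10:mdisk11 ExternalPool1
The detectmdisk command scans for newly presented LUNs, chmdisk sets the Easy Tier tier, and addmdisk places the MDisks into the pool. Remember that adding an MDisk to a pool destroys any existing data on it; data-bearing LUNs must be imported instead.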
To manually import volumes as image mode volumes they must not have been assigned to a
storage pool and must be unmanaged MDisks. Hosts that are accessing data from these
external storage system LUNs can continue to do so, but must be re-zoned and mapped to
the Storwize V5000 to use them after they are presented through the IBM Storwize V5000. If
the Import option is used and no existing storage pool is chosen, a temporary migration pool
is created to hold the new image-mode volume.
This image-mode volume has a direct block-for-block translation to the imported MDisk (the
external LUN), and existing data is preserved. In this state, the IBM Storwize V5000 is
acting as a proxy and the image mode volume is simply a “pointer” to the existing external
LUN. Because of the way in which the virtualization of the Storwize V5000 works, the external
LUN is presented as an MDisk, but we cannot map an MDisk directly to a host. Therefore we
must create the image mode volume in order that hosts can be mapped through the Storwize
V5000.
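Under the covers, the Import action creates an image-mode volume. A hedged CLI sketch of the equivalent operation is shown below; the MDisk, pool, and volume names are examples only:
mkvdisk -mdiskgrp ImportPool1 -iogrp 0 -mdisk mdisk12 -vtype image -name legacy_vol1
The -vtype image option creates the block-for-block mapping to the MDisk so that the existing data is preserved; no size is specified because it is taken from the MDisk.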
Figure 11-8 shows an example of how to import an unmanaged MDisk. Select the
unmanaged MDisk and click Import from the Actions drop-down menu. Multiple MDisks can
be selected using the Ctrl key.
Clear the Enable Caching option if you use copy services (mirroring functionality) on the
external storage system that is currently hosting the LUN. After the volumes are properly
under Storwize V5000 control, it is a preferred practice to use the copy services of IBM
Storwize V5000 for virtualized volumes. Click Next to continue. The wizard will direct you to
select the storage pool you want to import the volume to, as shown in Figure 11-10.
You have a choice between an existing storage pool (target pool) or using a temporary one,
which the Storwize V5000 will create and name for you. Selecting a target pool allows you to
choose the existing pool as shown in Figure 11-11 and clicking Finish will create the volume
in the chosen pool. Only pools with sufficient capacity will be shown.
A migration task will start and complete as shown in Figure 11-12. The actual data migration
begins after the MDisk is imported successfully.
When complete, the migration status will disappear and the volume will be in the target pool
as shown in Figure 11-14.
If you choose a temporary pool you must first select the extent size for the pool before clicking
Finish to import the MDisk as shown in Figure 11-15. The default value for extents is 1 GB. If
you plan to migrate this volume to another pool at some stage in the future ensure that the
extent size matches that of the prospective target pool. For more information about extent
sizes see Chapter 7, “Storage pools” on page 309.
Click Close. The external LUN will now appear as an MDisk with an associated image mode
volume name and will be listed as such. This is shown in Figure 11-17.
It can, however, be mapped to a host at this point. You can also choose to rename it as shown
in Figure 11-19.
You can access the external window by opening the External Storage option from the Pools
menu as shown in Figure 11-2 on page 583. Extended help information for external storage is
available in the help icon as shown in Figure 11-20.
The External window as shown in Figure 11-21 gives you an overview of all your external
storage systems. There is a list of the external storage systems managed by the IBM
Storwize V5000. With the help of the filter, you can show only the external storage systems
you want to work with. Clicking the plus sign preceding each of the external storage
controllers provides more detailed information, including all of the MDisks that are mapped
from it.
You can change the name of any external storage system by highlighting the controller,
right-clicking, and selecting Rename.
Alternatively, use the Actions drop-down list. There you can also find the Dependent Volumes
and Detect MDisks options, as shown in Figure 11-22.
Figure 11-22 Show Dependent Volumes and Rename option in the Actions drop-down menu
From the dependent volumes window, you can take volume actions, including Map to Host,
Shrink, Expand, Migrate to Another Pool, and Volume Copy, as shown in Figure 11-24.
Volume copy is another key feature that you can benefit from when using IBM Storwize V5000
virtualization. Two copies can be created to enhance availability for a critical application. A
volume copy can be also used to generate test data or for data migration.
For more information about the volume actions of the IBM Storwize V5000 storage system,
see Chapter 8, “Advanced host and volume administration” on page 359.
Returning to the External window and the MDisks that are provided by this external storage
system, you can highlight the name of an MDisk, right-click (or use the Actions drop-down) to
see a further menu of options available as shown in Figure 11-25.
From here you can view the properties of an MDisk, its capacity, the storage pool, and the
storage system it belongs to. There are also several actions on MDisks that can be made
through the menu, including Detect MDisks, Assign to Pool, Import, Unassign from pool,
Rename, and Show Dependent Volumes.
Fault tolerance and a high level of availability are achieved by the following features:
The RAID capabilities of the underlying disk subsystems.
The software architecture that is used by the IBM Storwize V5000 nodes.
Auto-restart of nodes that are hung.
Battery units to provide cache memory protection in the event of a site power failure.
Host system multipathing and failover support.
At the heart of the IBM Storwize V5000 is a redundant pair of node canisters. The two
canisters share the data transmitting and receiving load between the attached hosts and the
disk arrays.
For more information about replacing the control or expansion enclosure midplane, see the
IBM Storwize V5000 Knowledge Center at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740
.doc/v5000_ichome_740.html
USB ports
There are two USB connectors side-by-side and they are numbered as 1 on the left and as 2
on the right. There are no indicators associated with the USB ports. Figure 12-3 shows the
USB ports.
Ethernet ports
There are two 10/100/1000 Mbps Ethernet ports side-by-side on the canister and they are
numbered 1 on the left and 2 on the right. Port 1 is required and port 2 is optional. The ports are
shown in Figure 12-4.
SAS ports
There are four 6-Gbps Serial Attached SCSI (SAS) ports side-by-side on the canister. They
are numbered 1 on the left to 4 on the right. IBM Storwize V5000 uses ports 1 and 2 for host
connectivity and ports 3 and 4 to connect optional expansion enclosures. The ports are
shown in Figure 12-5.
Green: Indicates that at least one of the SAS lanes on this connector is operational.
If the light is off when it is connected, there is a problem with the connection.
Battery status
Each node canister houses a battery, the status of which is displayed on three LEDs on the
back of the unit, as shown in Figure 12-7.
Green (left) Battery Status:
Fast flash: Indicates that the battery is charging and has insufficient charge to complete a single memory dump.
Flashing: Indicates that the battery has sufficient charge to complete a single memory dump only.
Solid: Indicates that the battery is fully charged and has sufficient charge to complete two memory dumps.
Green (right) Battery in use:
Indicates that hardened or critical data is being written to disk.
Green (left) System Power:
Flashing: The canister is in standby mode, in which case the IBM Storwize V5000 code is not running.
Fast flashing: The canister is running a self-test.
On: The canister is powered up and the IBM Storwize V5000 code is running.
Green (mid) System Status:
Off: There is no power to the canister, the canister is in standby mode, Power On Self Test (POST) is running on the canister, or the operating system is loading.
Flashing: The node is in candidate or service state; it cannot perform I/O. It is safe to remove the node.
Fast flash: A code upgrade is running.
On: The node is part of a cluster.
Amber (right) Fault:
Off: The node is in candidate or active state. This state does not mean that there is no hardware error on the node. Any error that is detected is not severe enough to stop the node from participating in a cluster (or there is no power).
Flashing: Identifies the canister.
On: The node is in service state, or there is an error that is stopping the software from starting.
Figure 12-9 shows the location of these parts within the node canister.
Memory replacement
For more information about the memory replacement process, see the IBM Storwize V5000
Knowledge Center at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740
.doc/v5000_ichome_740.html
Caution: The battery is a lithium ion battery. To avoid possible explosion, do not incinerate
the battery. Exchange the battery only with the part that is approved by IBM.
Because the Battery Backup Unit (BBU) is located in the node canister, the BBU replacement
leads to a redundancy loss until the replacement is complete. Therefore, replace the BBU only
when you are directed to do so, and follow the Directed Maintenance Procedures (DMP).
For more information about how to replace the BBU, see the Knowledge Center at this
website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740
.doc/v5000_ichome_740.html
Important: During a BBU change, the battery must be kept parallel to the canister
system board while it is removed or replaced, as shown in Figure 12-13. Keep equal
force, or pressure, on each end.
SAS ports
SAS ports are used to connect the expansion canister to the node canister or to an extra
expansion canister in the chain. Figure 12-15 shows the SAS ports that are on the expansion
canister.
Green: Indicates that at least one of the SAS lanes on these connectors is operational.
If the light is off when connected, there is a problem with the connection.
Canister status
Each expansion canister has its status displayed by three LEDs, as shown in Figure 12-16.
Green (mid) Status: If the light is on, the canister is running normally. If the light is flashing, there is an error communicating with the enclosure.
Amber (right) Fault: If the light is solid, there is an error logged against the canister or the firmware is not running.
SAS cabling
Expansion enclosures are attached to control enclosures by SAS cables. There are two SAS
chains. Up to ten expansion enclosures can be attached to SAS chain 1 and up to nine
expansion enclosures can be attached to SAS chain 2. The node canister uses SAS ports 3
and 4 for expansion enclosures and ports 1 and 2 for host connectivity.
Important: When an SAS cable is inserted, ensure that the connector is oriented correctly
by confirming that the following conditions are met:
The pull tab must be below the connector.
Insert the connector gently until it clicks into place. If you feel resistance, the connector
is probably oriented the wrong way. Do not force it.
When inserted correctly, the connector can be removed only by pulling the tab.
The expansion canister has SAS port 1 for the channel input and SAS port 2 for the output,
which connects to another expansion enclosure.
A strand starts with an SAS initiator chip inside an IBM Storwize V5000 node canister and
progresses through SAS expanders, which connect to the disk drives. Each canister contains
an expander. Each drive has two ports, each of which is connected to a different expander
and strand. This configuration means both nodes directly access each drive and there is no
single point of failure.
At system initialization when devices are added to or removed from strands (and at other
times), the IBM Storwize V5000 software performs a discovery process to update the state of
the drive and enclosure objects.
Array goal
Each array has a set of goals that describe the location and performance of each array
member. A sequence of drive failures and hot spare takeovers can leave an array
unbalanced; that is, with members that do not match these goals. The system automatically
rebalances such arrays when appropriate drives are available.
RAID level
An IBM Storwize V5000 disk array supports RAID 0, RAID 1, RAID 5, RAID 6, or RAID 10.
Each RAID level is described in Table 12-8.
Table 12-8 RAID levels that are supported by an IBM Storwize V5000 array
RAID 0 (1 - 8 drives): Arrays have no redundancy and do not support hot-spare takeover. Data is striped evenly across the drives without parity. Performance is improved at the expense of the redundancy provided by other RAID levels.
RAID 5 (3 - 16 drives): Arrays stripe data over the member drives with one parity strip on every stripe. RAID 5 arrays have single redundancy with higher space efficiency than RAID 10 arrays, but with some performance penalty. RAID 5 arrays can tolerate no more than one member drive failure.
RAID 6 (5 - 16 drives): Arrays stripe data over the member drives with two parity strips on every stripe. A RAID 6 array can tolerate any two concurrent member drive failures.
RAID 10 (2 - 16 drives): Arrays stripe data over mirrored pairs of drives. RAID 10 arrays have single redundancy. The mirrored pairs rebuild independently. One member out of every pair can be rebuilding or missing at the same time. RAID 10 combines the features of RAID 0 and RAID 1.
Disk scrubbing
The scrub process runs when arrays do not have any other background processes. The
process checks that the drive logical block addresses (LBAs) are readable and array parity is
synchronized. Arrays are scrubbed independently and each array is entirely scrubbed every
seven days.
The system will automatically perform the drive hardware validation tests and will promote the
drive into the configuration if these tests pass, automatically configuring the inserted drive as
a spare. The status of the drive following the promotion will be recorded in the event log either
as an informational message or an error, should some hardware failure occur during the
system action.
For more information about the drive replacement process, see the IBM Storwize V5000
Knowledge Center at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740
.doc/v5000_ichome_740.html
The left side PSU is numbered 1 and the right side PSU is numbered 2.
Even so, it is important to save the configuration backup file after you change your system
configuration, by using the command-line interface (CLI) to start a manual backup.
Regularly saving a configuration backup file on the IBM Storwize V5000 is important, and it
must be done manually. Download this file regularly to your management workstation to
protect the configuration data (a preferred practice is to automate this download procedure by
using a script and saving it daily on a remote system).
The svcconfig backup command creates three files that provide information about the
backup process and cluster configuration. These files are created in the /tmp directory on the
configuration node and can be retrieved using SCP.
The three files that are created by the backup process are described in Table 12-10.
Table 12-10 File names that are created by the backup process
svc.config.backup.xml: This file contains the current configuration data of the cluster.
svc.config.backup.sh: This file contains the names of the commands that were issued to create the backup of the cluster.
svc.config.backup.log: This file contains details about the backup, including any error information that might be reported.
Note: While the files in the /dump directory are the same as those generated using the CLI
svcconfig backup command, the user has no control over when they are generated.
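If you prefer the CLI, a minimal sketch of triggering the backup and copying the file off the configuration node follows; cluster_ip and the superuser ID are placeholders for your environment:
ssh superuser@cluster_ip svcconfig backup
scp superuser@cluster_ip:/tmp/svc.config.backup.xml ./v5000_config_backup.xml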
To download a configuration backup file using the GUI, complete the following steps:
1. Browse to Settings → Support, as shown in Figure 12-20.
2. Select the Show full log listing... option to list all of the available log files that are stored
on the configuration node, as shown in Figure 12-21.
Even though the configuration backup files are updated automatically on a daily basis, it may
be of interest to verify the time stamp of the actual file.
To do this, open the svc.config.backup.xml_xx file with an editor and search for the string
timestamp=, which is found near the top of the file. Figure 12-24 shows the file and the
timestamp information.
The node canister software and disk firmware are separate updates and these tasks are
described separately.
2 Ensure that CIM object manager (CIMOM) clients are working correctly. When
necessary, update these clients so that they can support the new version of IBM
Storwize V5000 code. Examples may be OS versions and options such as FlashCopy
Manager or VMWare plug-ins.
3 Ensure that multipathing drivers in the environment are fully redundant. If you
experience failover issues with multipathing driver support, resolve these issues
before you start normal operations.
4 Update other devices in the IBM Storwize V5000 environment. Examples might
include updating hosts and switches to the correct levels.
Some code levels support upgrades only from specific previous levels. If you upgrade to more
than one level above your current level, you might be required to install an intermediate level.
See the following for further details:
https://www-304.ibm.com/support/docview.wss?uid=ssg1S1004336
Important: Ensure that you have no unfixed errors in the log and that the system date and
time are correctly set. Start the fix procedures, and ensure that you fix any outstanding
errors before you attempt to concurrently update the code.
The updating node is temporarily unavailable and all I/O operations to that node fail. As a
result, the I/O error counts increase and the failed I/O operations are directed to the partner
node of the working pair. Applications do not see any I/O failures. When new nodes are
added to the system, the upgrade package is automatically downloaded to the new nodes
from the IBM Storwize V5000 system.
Multipathing requirement
Before an update, ensure that the multipathing drivers are fully redundant with every path
available and online. You may see errors that are related to the paths, which will go away
(failover) and the error count will increase during the update. When the paths to the nodes
return, the nodes fall back to become a fully redundant system.
When updating the node canister software, a Test utility and the node software must first be
downloaded from the internet. This can be downloaded either via the Download link within
the panel or manually via the IBM Support site. The Test utility verifies that there are no
issues with the current system environment, such as failed components and down-level drive
firmware. Select the Test utility and Update package files by clicking the folder icon in the
corresponding input field and then enter the version of the code level you are updating to, as
shown in Figure 12-26.
If the Service Assistant Manual update option is selected, see, “Updating software
manually” on page 627.
The Test utility and node canister software will then be uploaded to the system as shown in
Figure 12-28.
Messages inform you of any warnings or errors that the Test Utility may find. Figure 12-30
shows the result of the Test Utility finding issues with an IBM Storwize V5000 before continuing
with the software update.
Close the warning window and then click the Read more link, to display the results of the Test
Utility as shown in Figure 12-31.
In this example, the Test Utility has found two warnings. This system has not been configured
to send email notifications to IBM and also has a number of drives with down-level firmware.
Neither of these issues prevent the software update from continuing.
Clicking Continue will continue with the software update and clicking Cancel will cancel the
software update, allowing the user to correct any issues.
The results of continuing with the software update are shown in Figure 12-32.
Nodes are upgraded and rebooted, one at a time until the upgrade process is complete.
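The concurrent update can also be run from the CLI. The following is a rough sketch only; the package file names and version are placeholders, and the exact commands, file names, and upload directory should be verified against the CLI reference and the update instructions for your code level:
scp test_utility_package superuser@cluster_ip:/upgrade/
scp update_package superuser@cluster_ip:/upgrade/
applysoftware -file test_utility_package
svcupgradetest -v 7.4.0.x
applysoftware -file update_package
lssoftwareupgradestatus
The first applysoftware call installs the test utility, svcupgradetest checks the system against the target version, the second applysoftware call starts the concurrent update, and lssoftwareupgradestatus can be used to monitor progress.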
The steps for a manual update are shown in the Update System help. See Figure 12-33.
After the software has been uploaded, the Test Utility is automatically run, and if there are
no issues, the system is placed in a Prepared state as shown in Figure 12-37.
4. Select the canister that contains the node you want to update and select Remove, as
shown in Figure 12-39.
Important: Make sure that you select the non-configuration nodes first.
The non-configuration node will be removed from Management GUI Update System panel
and will be shown as Unconfigured when hovering over the node in the Monitoring
System panel.
5. Open the Service Assistant Tool for the node that you just removed. See 2.10.3, “Service
Assistant tool” on page 72.
6. In the Service Assistant Tool, the node that is ready for upgrade must be selected. Select
the node that shows Node status as service mode, displays an error code of 690 and has
no available cluster information, as shown in Figure 12-41.
9. Repeat steps 3-7 for the remaining node (or nodes), leaving the Configuration node until
last.
After you remove the configuration node from the cluster for the update, a warning
appears, as shown in Figure 12-45. Click Yes to continue.
Important: The configuration node remains in Service State when it is re-added to the
cluster. Therefore, exit Service State manually.
Both the nodes are now back in the cluster (as shown in Figure 12-47) and the system is
running on the new code level.
Figure 12-47 Cluster is active again and running new code level
As of Storwize software v7.4, the drive firmware can be upgraded via the GUI. The user can
choose to upgrade all drives or select an individual drive.
Download the latest Drive Upgrade package from the IBM Support site:
http://www.ibm.com/support/entry/portal/support
Select the Drive Upgrade package, which was previously downloaded from the IBM Support
site, by clicking the folder icon, and click Upgrade as shown in Figure 12-49.
Figure 12-51 shows the Pools → Internal Storage panel which displays the result of the
drive firmware update (from SC2C to SC2E).
Figure 12-53 on page 636 shows how to update all drives via the Action menu in the Internal
Storage panel.
Note: If any drives are selected, the Action menu displays actions for the selected drives
and the Upgrade All option does not appear. If a drive is selected, de-select it by holding
down the Ctrl key and clicking the drive.
After initiating the drive upgrade process by either of the above options, the panel in
Figure 12-54 is displayed. Select the Drive Upgrade package, which was previously
downloaded from the IBM Support site, by clicking the folder icon, and then click Upgrade to
continue.
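A hedged CLI sketch of the same drive update follows; the package name and drive ID are examples, it is assumed that the firmware package has already been copied to the system, and the applydrivesoftware syntax should be confirmed for your code level:
lsdrive
applydrivesoftware -file drive_firmware_package -type firmware -drive 1
The lsdrive command lists the drive IDs and their current firmware level, and applydrivesoftware applies the new firmware to the specified drive.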
An alert is logged when the event requires action. Some alerts have an associated error code
that defines the service action that is required. The service actions are automated through the
fix procedures. If the alert does not have an error code, the alert represents an unexpected
change in state. This situation must be investigated to see whether it is expected or
represents a failure. Investigate an alert and resolve it when it is reported.
A message is logged when a change that is expected is reported; for instance, an IBM
FlashCopy operation completes.
The event log panel can be opened via the GUI by clicking Monitoring → Events, as shown
in Figure 12-55.
To avoid a repeated event that fills the event log, some records in the event log refer to
multiple occurrences of the same event. When event log entries are coalesced in this way, the
time stamp of the first occurrence and the last occurrence of the problem is saved in the log
entry. A count of the number of times that the error condition occurred also is saved in the log
entry. Other data refers to the last occurrence of the event.
Figure 12-57 shows all of the possible columns that can be displayed in the error log view.
Important: Check for this filter option if no event is listed. There might be events that
are not associated to Recommended Actions.
Figure 12-59 shows an event log with no items found, which does not necessarily mean that
the event log is clear. To check whether the log is clear, use the filter option Show all.
Important: These actions cannot be undone and might prevent the system from being
analyzed when severe problems occur.
Properties
This option provides more sense data for the selected event that is shown in the list.
To run the fix procedure for the error with the highest priority, go to the Recommended Action
panel at the top of the Event page and click Run This Fix Procedure. When you fix higher
priority events first, the system often can automatically mark lower priority events as fixed.
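The same information is available from the CLI. As a small sketch (the sequence number is an example only), unfixed entries can be listed and, where no fix procedure is required, marked as fixed manually:
lseventlog
cheventlog -fix 120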
The following example shows how faults are represented in the error log, how information
about the fault can be gathered, and the Recommended Action (DMP) can be used to fix the
error:
Detect an alert
The Health Status indicator is showing a red alert (for more information, see Chapter 3,
“Graphical user interface overview” on page 75). The Status Alerts indicator (to the right of
the Health status bar) is also visible and showing one alert. Click the alert to retrieve the
specific information, as shown in Figure 12-60.
Figure 12-62 Recommended Action DMP for Device logins reduced - step 1
Figure 12-63 Recommended Action DMP for Device logins reduced - step 2
Figure 12-64 shows step 3 of the DMP for the Number of device logins reduced event.
Figure 12-64 Recommended Action DMP for Device logins reduced - step 3
Figure 12-65 shows step 4 of the DMP for the Number of device logins reduced event.
Figure 12-65 Recommended Action DMP for Device logins reduced - step 4
Figure 12-66 Recommended Action DMP for Device logins reduced - step 5
When all of the steps of the DMP are processed successfully, the Recommended Action is
complete and the problem should be fixed. Figure 12-67 shows the red color of the event
status changed to green. The System Health status is green and the Recommended Action
box has disappeared, implying that there are no further actions that must be addressed.
Figure 12-68 shows the event log that displays multiple alerts.
The Next Recommended Action function orders the alerts by severity and displays the events
with the highest severity first. If multiple events have the same severity, they are ordered by
date and the oldest event is displayed first.
The following order of severity starts with the most severe condition:
Unfixed alerts (sorted by error code; the lowest error code has the highest severity)
Unfixed messages
Monitoring events (sorted by error code; the lowest error code has the highest severity)
Expired events
Fixed alerts and messages
Faults are often fixed with the resolution of the most severe fault.
Figure 12-71 shows the Download Support Package panel from which you can select one of
four different types of the support package.
Save the resulting file in a directory for later use or to upload to IBM Support.
If the package is collected by using the Service Assistant Tool, ensure that the node from
which the logs are collected is the current node, as shown in Figure 12-73.
Support information can be downloaded with or without the latest statesave, as shown in
Figure 12-74.
satask_result file
The satask_result.html file is the general response to the command that is issued via the
USB stick. If the command did not run successfully, it is noted in this file. Otherwise, any
general system information is stored here, as shown in Figure 12-76.
Important: You should never power off an IBM Storwize V5000 by powering off the PSUs,
removing both PSUs, or removing both power cables from a running system.
Note: If a canister or the enclosure is powered down, a local visit will be required to either
reseat the canister or power cycle the enclosure.
To power off a canister, browse to Monitoring → System, rotate the enclosure to the rear
view, right-click the required canister, and select Power Off, as shown in Figure 12-78.
A Power Off confirmation window opens, prompting for confirmation to shut down the cluster.
Ensure that all FlashCopy, Metro Mirror, Global Mirror, data migration operations, and forced
deletions are stopped or allowed to complete before continuing. Enter the provided
confirmation code and click Power Off to begin the shutdown process, as shown in
Figure 12-80.
Wait for the power LED on both node canisters in the control enclosure to flash at 1 Hz, which
indicates that the shutdown operation completed (1 Hz is half as fast as the drive indicator
LED).
Tip: When you shut down an IBM Storwize V5000, it does not automatically restart. You
must manually restart the system by removing and re-applying power.
Warning: If you are shutting down the entire system, you lose access to all volumes that
are provided by this system. Shutting down the system also shuts down all IBM Storwize
V5000 nodes. All data is flushed to disk before the power is removed.
Run the stopsystem command to shut down a clustered system, as shown in Example 12-3.
Are you sure that you want to continue with the shut down?
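The interaction is similar to the following sketch; the cluster name in the prompt is taken from the other examples in this book, and the confirmation response shown is an example:

IBM_Storwize:mcr-atl-cluster-01:superuser>stopsystem
Are you sure that you want to continue with the shut down? y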
12.7.2 Powering on
Complete the following steps to power on the system:
Important: This process assumes that all power is removed from the enclosure. If the
control enclosure is shut down but the power is not removed, the power LED on all node
canisters flash at 1 Hz. In this case, remove the power cords from both power supplies and
then reinsert them.
1. Ensure that any network switches that are connected to the system are powered on.
2. Power on any expansion enclosures by connecting the power cord to both power supplies
in the rear of the enclosure or turning on the power circuit.
3. Power on the control enclosure by connecting the power cords to both power supplies in
the rear of the enclosure and turning on the power circuits.
The system starts. The system starts successfully when all node canisters in the control
enclosure have their status LED permanently on, which should take no longer than 10
minutes.
4. Start the host applications.
The examples provided in this section are not based on the latest version of Tivoli Storage
Productivity Center. For detailed guidance on configuring the latest version of Tivoli Storage
Productivity Center, see the IBM Tivoli Storage Productivity Center Technical Guide, which is
another IBM Redbooks publication:
http://www.redbooks.ibm.com/abstracts/sg248053.html?Open
Complete the following steps to connect Tivoli Storage Productivity Center to the IBM
Storwize V5000 system:
1. Open your browser and use the following link to start Tivoli Storage Productivity Center, as
shown in Figure 12-81:
http://TPC_system_Hostname:9550/ITSRM/app/en_US/index.html
2. Use your login credentials to access Tivoli Storage Productivity Center, as shown in
Figure 12-83.
3. After successfully logging in, you are ready to add storage devices to Tivoli Storage
Productivity Center, as shown in Figure 12-84.
Continue following the wizard after you complete all the required fields. After the wizard is
completed, Tivoli Storage Productivity Center collects information from the IBM Storwize V5000.
A summary of details is shown at the end of the discovery process.
When you highlight the IBM Storwize V5000 system, action buttons become available that
you can use to view the device configuration or create virtual disks, as shown in Figure 12-87.
The MDisk Groups option provides you with a detailed list of the configured MDisk groups,
including pool space, available space, configured space, and Easy Tier configuration.
The Virtual Disks option lists all the configured disks, with the added option to filter them by
MDisk Group. The list includes several attributes, such as capacity and volume type.
Important: Tivoli Storage Productivity Center and SAN Volume Controller use the
following terms:
Virtual Disk: The equivalent of a Volume on a Storwize device.
MDisk Group: The equivalent of a Storage Pool on a Storwize device.
If you click Create Virtual Disk, the Create Virtual Disk wizard window opens, as shown in
Figure 12-88. Use this window to create volumes and specify several options, such as size,
name, and thin provisioning, and to add MDisks to an MDisk group.
After you create a probe, you can click Create Subsystem Performance Monitor, as shown
in Figure 12-91.
A login panel is displayed (as shown in Figure 12-95), allowing a user to log in by using their
Tivoli Storage Productivity Center credentials.
After you log in, the Tivoli Storage Productivity Center web dashboard is displayed, as shown
in Figure 12-96. The Tivoli Storage Productivity Center web-based GUI is used to show
information about the storage resources in your environment. It contains predefined and
custom reports about performance and storage tiering.
Figure 12-99 shows the different report options for Storage Tiering.
Detailed CLI information is available in the IBM Storwize V5000 Knowledge Center under the
Command Line section, which can be found at this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740
.doc/v5000_ichome_740.html
Implementing the IBM Storwize V7000 V7.2, SG24-7938, also has information about the use of
the CLI. The commands in that book also apply to the IBM Storwize V5000 system because it
is part of the Storwize family.
Basic setup
In the IBM Storwize V5000 GUI, authentication is done using a user name and a password.
The CLI uses a Secure Shell (SSH) to connect from the host to the IBM Storwize V5000
system. A private and public key pair or user name and password is necessary. The following
steps are required to enable CLI access with SSH keys:
1. A public key and private key are generated as a pair.
2. A public key is uploaded to the IBM Storwize V5000 system using the GUI.
3. A client SSH tool is configured to authenticate with the private key.
4. A secure connection is established between the client and IBM Storwize V5000 system.
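This book uses the PuTTY key generator for this task. On a workstation with an OpenSSH client, an equivalent key pair can be generated from the command line; the file name and comment string in this sketch are illustrative only:

ssh-keygen -t rsa -b 2048 -C "v5000-admin" -f ~/.ssh/v5000_key

The resulting public key file (~/.ssh/v5000_key.pub) is the file that is uploaded to the IBM Storwize V5000 user through the GUI, and the private key remains on the workstation.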
Secure Shell is the communication vehicle that is used between the management workstation
and the IBM Storwize V5000 system. The SSH client provides a secure environment from
which to connect to a remote machine. It uses the principles of public and private keys for
authentication.
SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the clustered system, and a private key, which is kept
private on the workstation that is running the SSH client. These keys authorize specific users
to access the administration and service functions on the system. Each key pair is associated
with a user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be
stored on the system. New IDs and keys can be added, and unwanted IDs and keys can be
deleted. To use the CLI, an SSH client must be installed on that system, the SSH key pair
must be generated on the client system, and the client’s SSH public key must be stored on
the IBM Storwize V5000 systems.
The SSH client that is used in this book is PuTTY. There is also a PuTTY key generator that
can be used to generate the private and public key pair. The PuTTY client can be downloaded
at no cost at the following website:
http://www.chiark.greenend.org.uk
Generating keys: The blank area that is indicated by the message is the large blank
grey rectangle inside the section of the GUI that is labeled Key (indicated by the mouse
pointer). Continue to move the mouse pointer over the blank area until the progress bar
reaches the far right side. This action generates random characters that are used to
create a unique key pair.
More information about generating keys can be found in the PuTTY user manual, which is
available from the Help drop-down menu of the PuTTY GUI.
3. After the keys are generated, save them for later use. Click Save public key, as shown in
Figure A-3. You should always set a key passphrase before saving the key. Not doing so
means that the key is stored on your workstation unencrypted, so any attacker who gains
access to your private key also gains access to all machines that are configured to
accept it.
4. You are prompted for a name (for example, pubkey) and a location for the public key (for
example, C:\Support Utils\PuTTY). Click Save.
Be sure to record the name and location of this SSH public key because this information
must be specified later.
Public key extension: By default, the PuTTY key generator saves the public key with
no extension. Use the string pub for naming the public key; for example, pubkey, to
differentiate the SSH public key from the SSH private key.
7. When prompted, enter a name (for example, icat), select a secure place as the location,
and click Save.
Key generator: The PuTTY key generator saves the private key with the PPK
extension.
For more information about generating the SSH keys, see the PuTTY user manual, which is
available from the Help drop-down menu of the PuTTY GUI.
2. Right-click the user for which you want to upload the key and click Properties, as shown in
Figure A-7.
3. To upload the public key, click Browse, select your public key, and click OK, as shown in
Figure A-8.
In the right side pane under the “Specify the destination you want to connect to” section,
select SSH. Under the “Close window on exit” section, select Only on clean exit, which
ensures that if there are any connection errors, they are displayed in the user’s window.
3. In the right side pane in the “Preferred SSH protocol version” section, select 2.
5. From the Category pane on the left side of the PuTTY Configuration window, click
Session to return to the Session view, as shown in Figure A-9 on page 674.
8. Highlight the new session and click Open to connect to the IBM Storwize V5000 system.
9. PuTTY now connects to the system and prompts you for a user name. Enter admin as the
user name and press Enter (see Example A-1).
Example commands
A detailed description of all the available commands is beyond the intended scope of
this book; therefore, we reference some sample commands that are used elsewhere in this book.
The svcinfo and svctask prefixes are no longer needed in IBM Storwize V5000. If you have
scripts that use these prefixes, they still run without problems, but the prefixes are no longer
necessary. If you enter svcinfo or svctask and press the Tab key twice, all of the available
subcommands are listed. Pressing the Tab key twice also auto-completes a command if the
input so far is valid and unique on the system.
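For example, the following two commands produce the same output; the second form simply omits the optional prefix (lssystem is used here only as an illustration):

IBM_Storwize:mcr-atl-cluster-01:superuser>svcinfo lssystem
IBM_Storwize:mcr-atl-cluster-01:superuser>lssystem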
Enter lshost to see a list of all configured hosts on the system, as shown in Example A-3.
To map the volume to the hosts, enter mkvdiskhostmap, as shown in Example A-4.
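A minimal sketch of these two commands follows; the host name, volume name, and SCSI ID are examples only, not values taken from this configuration:

IBM_Storwize:mcr-atl-cluster-01:superuser>lshost
IBM_Storwize:mcr-atl-cluster-01:superuser>mkvdiskhostmap -host ESXi-host-01 -scsi 0 RedbookTestVolume

The output fragment that follows belongs to the lsvdisk example (Example A-6), as described in the note after it.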
id 2
name ESXi-Redbooks
.
.
vdisk_UID 600507680185853FF000000000000011
virtual_disk_throttling (MB) 1200
preferred_node_id 2
.
.
IBM_Storwize:mcr-atl-cluster-01:superuser>
Command output: The lsvdisk command lists all available properties of a volume and its
copies. To make it easier to read, lines in Example A-6 were deleted.
If you do not specify the unit parameter, the throttling is based on I/Os instead of throughput,
as shown in Example A-7.
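The throttling value is set with the chvdisk command. The following sketch shows both variants; the volume name and rate are examples only:

IBM_Storwize:mcr-atl-cluster-01:superuser>chvdisk -rate 1200 -unitmb RedbookTestVolume
IBM_Storwize:mcr-atl-cluster-01:superuser>chvdisk -rate 1200 RedbookTestVolume

The first command limits the volume to 1200 MBps; without the -unitmb parameter, the same rate value is interpreted as an I/O operations per second limit.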
Note: Before upgrading any disk drive firmware, the Storwize system should be checked
for any failures and any that are found should be rectified before continuing.
Disk drives should not be upgraded if their associated arrays are not in a redundant state.
While the update process is designed not to take the drives offline, this cannot be
guaranteed.
Running a dependency check for the drive that you are upgrading informs you of any
possible issues, as outlined in the sketch that follows. It returns a notification of a possible
issue if the drive in question is part of a non-redundant array, for example, a RAID 0 array
or a RAID 5 array with a failed drive.
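One way to perform such a check from the CLI is the lsdependentvdisks command, which lists any volumes that would be affected if the specified drive were taken offline; the drive ID here is an example:

IBM_Storwize:mcr-atl-cluster-01:superuser>lsdependentvdisks -drive 5

If the command returns any volumes, the drive is part of a non-redundant array and the firmware upgrade should be deferred until redundancy is restored.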
To perform the disk drive firmware upgrade, you will require the following files:
Upgrade Test Utility
The drive firmware package
Both can be downloaded from the IBM Support portal:
http://www-947.ibm.com/support/entry/portal/support
After downloading, copy the Upgrade Test Utility and the drive firmware package to your PuTTY
folder.
Like the Storwize controller firmware upgrade, the disk drive upgrade requires the use of the
Upgrade Test Utility, which shows the drives that need to be upgraded and checks whether
there are likely to be any issues.
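The upload command is similar to the following sketch, which uses pscp (the PuTTY secure copy client); the target directory shown is an example:

pscp -i hursley.ppk IBM2072_INSTALL_upgradetest_12.26 [email protected]:/home/admin/upgrade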
Where ‘hursley.ppk’ is the private key file generated when SSH was set up,
‘IBM2072_INSTALL_upgradetest_12.26’ is the file name of the utility (use the latest one),
‘superuser’ is the user name and ‘9.174.152.17’ is the management IP address or name of
the Storwize unit. Change as appropriate to your environment.
It is also possible to upload the Upgrade Test Utility to the Storwize unit by using the Storwize
user name and password (not using an SSH private key), by using the following command:
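A sketch of the same upload without a private key (pscp then prompts for the password; the target directory shown is again an example):

pscp IBM2072_INSTALL_upgradetest_12.26 [email protected]:/home/admin/upgrade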
You will be asked for the password for whichever user name you specified. This is shown in
Example 12-4.
Then run the Upgrade Test Utility by using the following command:
svcupgradetest -f -d
The -f switch specifies that this is a drive firmware update, while the -d switch shows firmware
details for every disk drive. Omitting the -d switch gives a summary. Example 12-5 shows this
output.
+-------------+-----------+------------+---------------------------------+
| Model       | Latest FW | Current FW | Drive Info                      |
+-------------+-----------+------------+---------------------------------+
| ST9146853SS | B63E      | B63D       | Drive in slot 13 in enclosure 1 |
|             |           |            | Drive in slot 12 in enclosure 1 |
|             |           |            | Drive in slot 4 in enclosure 1  |
|             |           |            | Drive in slot 3 in enclosure 1  |
| MK3001GRRB  | SC2E      | SC2C       | Drive in slot 24 in enclosure 1 |
|             |           |            | Drive in slot 21 in enclosure 1 |
|             |           |            | Drive in slot 23 in enclosure 1 |
|             |           |            | Drive in slot 22 in enclosure 1 |
|             |           |            | Drive in slot 11 in enclosure 1 |
+-------------+-----------+------------+---------------------------------+
We recommend that you upgrade the drive microcode at an
appropriate time. If you believe you are running the latest
version of microcode, then check for a later version of this tool.
You do not need to upgrade the drive firmware before starting the
software upgrade.
The utility will list all drives in the system that require a firmware upgrade (if using the -d
switch).
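The next step is to upload the drive firmware package in the same way. The command is similar to the following sketch; the target directory shown is an example:

pscp -i hursley.ppk IBM2072_DRIVE_20140826 [email protected]:/home/admin/upgrade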
Where 'hursley.ppk' is the private key file generated when SSH was set up,
'IBM2072_DRIVE_20140826' is the name of the firmware file, 'superuser' is the user name
and '9.174.152.17' is the management IP address or name of the Storwize unit. Change as
appropriate to your environment.
It is also possible to upload the disk drive firmware to the Storwize unit by using the Storwize
user name and password (not using an SSH private key), by using the following command:
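A sketch of this variant (pscp prompts for the password; the target directory shown is an example):

pscp IBM2072_DRIVE_20140826 [email protected]:/home/admin/upgrade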
You will be asked for a password for the username in this case.
If you are using Storwize code V7.1 or earlier, disk drive firmware can be applied manually to
only one drive at a time, by using the applydrivesoftware command for each individual disk.
The output from the test utility that is shown in Example 12-5 on page 682 gives the drive slot
number as the identifier; however, to run the firmware upgrade on individual drives, the drive
ID is required, not the slot ID. To obtain the drive ID from the slot ID, use the lsdrive output,
as shown in Example 12-6. The output shown has been abbreviated for the sake of clarity.
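A sketch of the per-drive procedure follows; the drive ID is an example, and the -file and -type parameters reflect our reading of the applydrivesoftware syntax:

IBM_Storwize:mcr-atl-cluster-01:superuser>lsdrive
IBM_Storwize:mcr-atl-cluster-01:superuser>applydrivesoftware -file IBM2072_DRIVE_20140826 -type firmware -drive 3

In the lsdrive output, match the enclosure_id and slot_id columns against the slots that are reported by the test utility and note the corresponding id value for each drive.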
With Storwize code V7.2 or later, it is possible to upgrade all drives by using the -all switch, as
shown in Example 12-8.
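A sketch of the all-drives variant, under the same syntax assumption as the per-drive example:

IBM_Storwize:mcr-atl-cluster-01:superuser>applydrivesoftware -file IBM2072_DRIVE_20140826 -type firmware -all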
The command takes roughly two minutes per drive to complete. To confirm that all disks have
been upgraded, rerun the Upgrade Test Utility or check the internal storage from the GUI.
SAN Boot
IBM Storwize V5000 supports SAN Boot for Windows, VMware, and many other operating
systems. It is also possible to migrate SAN Boot volumes from other storage systems onto the
Storwize V5000. Each implementation or migration differs somewhat, depending on the
operating system, HBA, and multipath driver that are used, and SAN Boot support can also
change over time; therefore, it is beyond the scope of this book. For more information about
SAN Boot, see the IBM Storwize V5000 Knowledge Center:
http://www.ibm.com/support/knowledgecenter/STHGUJ_7.4.0/com.ibm.storwize.v5000.740
.doc/v5000_ichome_740.html
For more information about SAN Boot support for different operating systems with IBM SDD,
see the IBM System Storage Multipath Subsystem Device Driver User's Guide, GC52-1309,
which is available at this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000303
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
You can search for, view, download, or order these documents and other Redbooks
publications, Redpaper publications, Web Docs, drafts, and other materials, at the following
website:
http://www.ibm.com/redbooks
Easily manage and deploy systems with embedded GUI
Experience rapid and flexible provisioning
Protect data with remote mirroring

Organizations of all sizes are faced with the challenge of managing massive volumes of
increasingly valuable data. But storing this data can be costly, and extracting value from the
data is becoming more difficult. IT organizations have limited resources but must stay
responsive to dynamic environments and act quickly to consolidate, simplify, and optimize
their IT infrastructures. The IBM Storwize V5000 system provides a smarter solution that is
affordable, easy to use, and self-optimizing, which enables organizations to overcome these
storage challenges.

Storwize V5000 delivers efficient, entry-level configurations that are specifically designed to
meet the needs of small and midsize businesses. Designed to provide organizations with the
ability to consolidate and share data at an affordable price, Storwize V5000 offers advanced
software capabilities that are usually found in more expensive systems.

This IBM Redbooks publication is intended for pre-sales and post-sales technical support
professionals and storage administrators. The concepts in this book also relate to the
IBM Storwize V3700. This book was written at a software level of Version 7 Release 4.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE
IBM Redbooks are developed by the IBM International Technical Support Organization.
Experts from IBM, Customers and Partners from around the world create timely technical
information based on realistic scenarios. Specific recommendations are provided to help you
implement IT solutions more effectively in your environment.