Dell EMC Host Connectivity Guide for Windows
P/N 300-000-603
REV 60
December 2019
Copyright © 2015-2019 Dell Inc. or its subsidiaries. All rights reserved.
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property
of their respective owners. Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.DellEMC.com
PREFACE
Chapter 2 Connectivity
    Fibre Channel connectivity
        Introduction
        Configuring HBAs for a Windows host
    iSCSI connectivity
        Introduction
        iSCSI discovery
        iSCSI Initiator cleanup
    Booting from SAN
        Introduction
        Supported environments
        Limitations and guidelines
        Preparing host connectivity
        Configuring a SAN boot for an FC-attached host
    Direct-attached storage
    Network-attached storage
As part of an effort to improve its product lines, Dell EMC periodically releases revisions of its
software and hardware. Therefore, some functions described in this document might not be
supported by all versions of the software or hardware currently in use. The product release notes
provide the most up-to-date information on product features.
Contact your Dell EMC technical support professional if a product does not function properly or
does not function as described in this document.
Note: This document was accurate at publication time. Go to Dell EMC Online Support
(https://support.emc.com) to ensure that you are using the latest version of this document.
Purpose
This guide describes the features and setup procedures for Windows Server 2019, Windows Server
2016, Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2 SP1 host
interfaces to Dell EMC storage arrays over Fibre Channel or iSCSI.
Audience
This guide is intended for use by storage administrators, system programmers, or operators who
are involved in acquiring, managing, or operating Dell EMC PowerMax, Dell EMC VMAX All Flash
Family, Dell EMC VMAX3 Family, Dell EMC Unity Family, Dell EMC Unified VNX series, Dell EMC
XtremIO, Dell EMC VPLEX, and host devices, and Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2 SP1.
Related documentation
For the documentation referred to in this guide, go to Dell EMC Online Support.
Special notice conventions used in this document
Dell EMC uses the following conventions for special notices:
WARNING Indicates a hazardous situation which, if not avoided, could result in death or
serious injury.
CAUTION Indicates a hazardous situation which, if not avoided, could result in minor or
moderate injury.
Note: Presents information that is important, but not hazard-related.
Typographical conventions
Dell EMC uses the following type style conventions in this document:
Bold: Used for names of interface elements, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks).
Technical support
Go to Dell EMC Online Support and click Service Center. You will see several options for
contacting Dell EMC Technical Support. Note that to open a service request, you must have a
valid support agreement. Contact your Dell EMC sales representative for details about
obtaining a valid support agreement or with questions about your account.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of
the user publications. Send your opinions of this document to [email protected].
l Overview
l Terminology
l Windows functions and utilities
l Supported network connectivity in Windows environment
l Supported hardware
Overview
The Microsoft Windows Server family consists of the following versions, including all Datacenter, Core, Essentials, and Enterprise editions:
l 2008 R2 SP1
l 2012/2012 R2
l 2016
l 2019
For information about support dates and service packs for your Windows Server version, see
Microsoft Lifecycle Policy.
The following chapters are designed to help a storage administrator or storage architect build a
Windows Server infrastructure with Dell EMC storage.
Terminology
The following table lists terms that are used in this guide.
Array: Dell EMC storage, including Fibre Channel (FC), FC over Ethernet (FCoE), and Internet SCSI (iSCSI); external storage; and software-defined data center (SDDC) hyperconverged infrastructure (HCI) storage.
Free space: An unused and unformatted portion of a hard disk that can be partitioned or sub-partitioned.
Volume: A partition or collection of partitions that have been formatted for use by a file system. A volume is assigned a drive letter.
Fibre Channel
Microsoft allows the host to connect to the external storage through a Fibre Channel (FC) host bus adapter (HBA) that is certified by Microsoft (see the Windows Server Catalog) on hosts running Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019.
Terminology
Storage Area Network (SAN): SAN is the most commonly used enterprise storage networking architecture for business-critical applications that need to deliver high throughput and low latency. A rapidly growing portion of SAN deployments uses all-flash storage to gain high performance, consistent low latency, and lower total cost when compared to spinning disks. With centralized shared storage, SANs enable organizations to apply consistent methodologies and tools for security, data protection, and disaster recovery.
World Wide Port Name (WWPN): WWPN is a World Wide Name that is assigned to a port in a Fibre Channel fabric. It is a unique identifier that is used in SANs and performs a function that is equivalent to the MAC address in the Ethernet protocol.
iSCSI
Microsoft iSCSI Initiator enables you to connect a host computer that is running Windows Server
2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server
2016, and Windows Server 2019 to an external iSCSI-based storage array through an Ethernet
network adapter. You can use Microsoft iSCSI Initiator in your existing network infrastructure to
enable block-based SANs. SANs provide iSCSI target functionality without requiring an investment
in additional hardware.
Terminology
Challenge Handshake Authentication Protocol (CHAP): CHAP is an authentication method that is used during the iSCSI login, in both the target discovery and the normal login.
iSCSI Network Portal: iSCSI Network Portal is the host NIC IP address that is used for the iSCSI driver to create a session with the storage.
Supported hardware
See the Dell EMC Simple Support Matrix or contact your Dell EMC representative for the latest
information on qualified hosts, host bus adapters (HBAs), and connectivity equipment.
Dell EMC recommends that you do not mix HBAs from different vendors in the same host.
The following Dell EMC storage products are Microsoft-certified and can be used in the Windows
environment:
l Unity series
n FC - Unity 300 / Unity 400 / Unity 500 / Unity 600 / Unity 300F / Unity 400F / Unity
500F / Unity 600F / Unity 350F / Unity 450F / Unity 550F / Unity 650F/Unity XT 380 /
Unity XT 480 /Unity XT 680 /Unity XT 880 /Unity XT 380F /Unity XT 480F /Unity XT
680F /Unity XT 880F
n iSCSI: Unity 300 iSCSI Target / Unity 400 iSCSI Target / Unity 500 iSCSI Target / Unity
600 iSCSI Target / Unity 300F iSCSI Target / Unity 400F iSCSI Target / Unity 500F iSCSI
Target / Unity 600F iSCSI Target / Unity 350F iSCSI Target / Unity 450F iSCSI Target /
Unity 550F iSCSI Target / Unity 650F iSCSI Target / Unity XT 380 iSCSI Target / Unity XT
480 iSCSI Target / Unity XT 680 iSCSI Target / Unity XT 880 iSCSI Target / Unity XT
380F iSCSI Target / Unity XT 480F iSCSI Target / Unity XT 680F iSCSI Target / Unity XT
880F iSCSI Target / UnityVSA iSCSI Target
l XtremIO series
n FC - XtremIO X2-R / XtremIO X2-S / XtremIO X2-T
n iSCSI - XtremIO X2-R iSCSI Target / XtremIO X2-S iSCSI Target / XtremIO X2-T iSCSI
Target
l VMAX/PowerMax series
n FC - VMAX 100K / VMAX 200K / VMAX 400K / VMAX250F / VMAX250FX / VMAX450F /
VMAX450FX / VMAX850F / VMAX850FX / VMAX950F / VMAX950FX / PowerMax
2000 / PowerMax 8000
n iSCSI - VMAX 100K iSCSI Target / VMAX 200K iSCSI Target / VMAX 400K iSCSI Target /
VMAX250F iSCSI Target / VMAX250FX iSCSI Target / VMAX450F iSCSI Target /
VMAX450FX iSCSI Target / VMAX850F iSCSI Target / VMAX850FX iSCSI Target /
VMAX950F iSCSI Target / VMAX950FX iSCSI Target / PowerMax 2000 iSCSI Target /
PowerMax 8000 iSCSI Target
l VPLEX series
n FC - VPLEX VS2 / VPLEX VS6
Introduction
FC has some of the benefits of both channels and networks. An FC fabric is a switched network,
providing generic, low-level services onto which host channel architectures and network
architectures can be mapped. Networking and I/O protocols (such as SCSI commands) are
mapped to FC constructs and then encapsulated and transported within FC frames. This process
enables high-speed transfer of multiple protocols over the same physical interface.
As with direct-attach SCSI, FC provides block-level access to the devices that allows the host
system to identify the device as a native device. The true power of native device identification is
the ability to use all the current applications (for example, backup software, volume management,
and raw disk management) without modification.
FC is a technology for transmitting data between computer devices at data rates of up to 32 Gb/s
at this time. FC is flexible, and devices can be as far as 10 kilometers (about 6 miles) apart if
optical fiber is used as the physical medium.
FC supports connectivity over fiber optic cabling or copper wiring. FC devices using fiber optic
cabling use two unidirectional fiber optic cables for each connection. One fiber optic cable is used
for transmitting and the other for receiving.
Dell EMC HBA installation and configuration guides are available at the Dell EMC download pages of the vendor websites that are referenced in the following sections.
Emulex FC HBA
Using a Broadcom Emulex Fibre Channel (FC) host bus adapter (HBA) with the Windows operating
system requires adapter driver software. The driver functions at a layer below the Windows SCSI
driver to present FC devices to the operating system as standard SCSI devices.
Download the driver from the Dell EMC section of the Broadcom website. Follow the links to your
adapter for the appropriate operating system and version.
After the driver installation is complete, go to Computer Management > Device Manager >
Storage Controllers. You can see the Emulex driver information as shown in the following
illustration.
Right-click a port and review the driver properties, as shown in following illustration.
Figure 2 Emulex driver properties window
Dell EMC recommends that you install the Emulex OneCommand Manager from the Broadcom
website for better HBA management.
You can also use the Dell EMC Inquiry utility, INQ, as an alternative for HBA management.
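With any supported FC HBA, you can also confirm the host's initiator ports from Windows PowerShell after the driver is installed. The following is a minimal sketch, assuming Windows Server 2012 or later where the Storage module's Get-InitiatorPort cmdlet is available; the output formatting is illustrative only.
# List FC HBA ports and their WWPNs (PowerShell, Windows Server 2012 and later)
Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "Fibre Channel" } | Format-Table InstanceName, NodeAddress, PortAddress -AutoSize
The reported WWPNs can then be used for zoning and for registering the host initiators on the array.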
QLogic FC HBA
Using the QLogic Fibre Channel (FC) host bus adapter (HBA) with the Windows operating system
requires adapter driver software. The driver functions at a layer below the Windows SCSI driver to
present FC devices to the operating system as standard SCSI devices.
Download the driver from the Dell EMC section of the QLogic website. Follow the links to your
adapter for the appropriate operating system and version.
After the driver installation is complete, go to Computer Management > Device Manager >
Storage Controllers. You can see the QLogic driver information as follows:
Figure 5 QLogic driver information
Right-click a port and review the driver properties, as shown in the following figure.
Dell EMC recommends that you install the QConvergeConsole from the QLogic website for better
HBA management. The following figure shows the QConvergeConsole CLI.
Figure 7 QConvergeConsole
You can also use the Dell EMC Inquiry utility, INQ, as an alternative for HBA management.
Brocade FC HBA
The Brocade product portfolio includes Fibre Channel (FC) host bus adapters (HBAs), converged
network adapters (CNAs), and mezzanine adapters for OEM blade server platforms. The following
Brocade HBAs and CNAs are now provided by QLogic under the same model numbers:
l Brocade 1860 fabric adapters
l Brocade 815/825 and 415/425 FC HBAs
l Brocade 1010/1020 CNAs
l OEM HBA and CNA mezzanine adapters (1007, 1867, 1869 & BR1741M-k)
iSCSI connectivity
Introduction
Internet SCSI (iSCSI) is an IP-based storage networking standard, developed by the Internet
Engineering Task Force, for linking data storage facilities. iSCSI facilitates block-level transfers by
transmitting SCSI commands over IP networks.
The iSCSI architecture is similar to a client/server architecture. In the case of iSCSI, the client is
an initiator that issues an I/O request, and the server is a target (such as a device in a storage
system). This architecture can be used over IP networks to provide distance extension.
iSCSI discovery
About this task
Note: Microsoft iSCSI Initiator is native to Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and Windows
Server 2008. On these operating systems, no installation steps are required. For more details,
see the Microsoft iSCSI Initiator Step-by-Step Guide on Microsoft TechNet. Before
configuring the iSCSI initiator, ensure that you have identified the NIC and the target where it
will connect.
For example: NIC1 and SPA-0 and SPB-0 are on one network subnet. NIC2 and SPA-1 and SPB-1
are on a different subnet. This example connects NIC1 to SPA-0 and SPB-0, and NIC2 to SPA-1
and SPB-1.
Note: NIC1 and NIC2 could also be on the same subnet, but Dell EMC does not recommend it.
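The same connections can be scripted from Windows PowerShell on Windows Server 2012 and later. The following is a minimal sketch of this example; the portal addresses and the target IQN are placeholders and must be replaced with the values for your environment.
# Ensure the Microsoft iSCSI Initiator service is running and starts automatically
Set-Service -Name msiscsi -StartupType Automatic
Start-Service -Name msiscsi
# Add the target portal for SPA-0 through NIC1 (addresses are examples only)
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.10 -InitiatorPortalAddress 192.168.1.1
# Log in to the discovered target over NIC1, persistently and with multipath enabled
Connect-IscsiTarget -NodeAddress "iqn.1992-04.com.emc:example-target" -TargetPortalAddress 192.168.1.10 -InitiatorPortalAddress 192.168.1.1 -IsPersistent $true -IsMultipathEnabled $true
Repeat the portal and login steps for SPB-0 on NIC1, and for SPA-1 and SPB-1 on NIC2, so that each NIC connects only to the targets on its own subnet.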
b. Type powermt check and, when prompted to remove dead device, select a for ALL.
11. Review the Discovery tab and ensure that no targets are connected.
12. Review each of the iSCSI initiator tabs and ensure that they are empty.
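On Windows Server 2012 and later, parts of this cleanup can be verified or performed from Windows PowerShell. This is a sketch only; the target IQN and portal address are placeholders, and the PowerPath step above applies only when PowerPath is installed.
# Review the remaining sessions before and after cleanup
Get-IscsiSession | Format-Table InitiatorPortalAddress, TargetNodeAddress, IsPersistent
# Remove the persistent login and the discovery portal (values are examples only)
Disconnect-IscsiTarget -NodeAddress "iqn.1992-04.com.emc:example-target" -Confirm:$false
Remove-IscsiTargetPortal -TargetPortalAddress 192.168.1.10 -Confirm:$false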
Introduction
All Dell EMC storage arrays support installing and booting Windows from a SAN environment. The
Windows host operating system can reside on an external device that is managed by the Microsoft
Multipath I/O (MPIO) utility or PowerPath software. This section describes the configuration
process and how to prevent possible issues.
Supported environments
Dell EMC storage arrays, such as the PowerMax and VMAX series, VNX series, Unity series, VPLEX, and XtremIO, support boot-from-SAN.
See the Dell EMC Simple Support Matrix for a list of operating system kernels that support
booting from SAN storage.
Dell EMC supported Broadcom Emulex, QLogic, and Brocade host bus adapters (HBAs) can be
used to boot from SAN. To boot from storage that is attached to the SAN environment, the HBA's
boot BIOS must be installed and enabled on the adapter. See the driver manuals and configuration
guides on the Dell EMC section of the Broadcom Emulex, QLogic, and Brocade websites as well as
operating system, HBA, and server vendor documentation.
Note:
l The AX100/100i and AX150/150i are supported only with the low-cost HBAs. See the Dell
EMC Simple Support Matrix for information about the HBAs that are supported with these
arrays.
l iSCSI booting from SAN is supported in limited configurations. See the Dell EMC Simple
Support Matrix for information about supported environments.
Guidelines
l Maintain the simplest connectivity configuration between the host server and SAN
environment before installing the operating system. You can alter the configuration after
installation.
l If multiple host bus adapters (HBAs) are attached to the host, ensure that the HBA that is
connected to the lowest-numbered PCI slot is zoned to the array.
l Unzone all arrays from the host server, except the array where the boot device resides. Place
only the boot device that is attached to the host server on the array where the boot device
resides.
l Assign the boot LUN as host LUN ID 0. If the boot LUN has a Host ID other than 0, the HBA
BIOS installation might fail and have no visibility to the boot LUN.
n The boot LUN's Host ID on a VNX series or Dell EMC CLARiiON system can be forced to 0
by removing all the other LUNs from the storage group and adding back only the boot LUN.
n For Symmetrix, the Symmetrix LUN base/offset skip adjustment (symmask set lunoffset)
capability can be used to assign LUN 0 to a boot LUN if necessary.
n For Unity series or VNXe series, you can modify the Host LUN ID directly in the Unisphere
UI.
n For the PowerMax and VMAX series, when you add devices to a storage group, you can use the -lun 0 switch, as in the following command: symaccess -sid xxx -type stor -name host_sg_name add devs lun_id -lun 0
l In XtremIO version 4.0.0 or later, volumes are numbered by default, starting from LUN ID 1. Do
not manually adjust the LUN ID to 0, because doing so might lead to issues with some
operating systems. In XtremIO 3.x and earlier versions, LUN IDs start at 0 and remain
accessible when the XtremIO cluster is updated from 3.0.x to 4.x.
l You can add LUNs after the operating system installation is completed.
Configuring a SAN boot for an FC-attached host
The following example uses a Broadcom Emulex HBA; other vendor HBA and operating system versions might require slight modifications. For details, see the HBA and operating system vendor websites.
Procedure
1. Boot the server, and press Alt+E to enter the Broadcom Emulex BIOS when you see the
following message:
2. Select the adapter port that you want to configure, which, in a single-path configuration, is
the HBA port zoned to storage.
3. If the link status is Link UP, select Enable/Disable Boot from SAN to enable the HBA
BIOS.
4. To enable the boot BIOS on the adapter port, select Enable.
5. After the BIOS is enabled, select Scan for Target Devices.
The utility lists the attached LUNs. The example in the following figure shows the attached boot
LUN, which is a VNX RAID3 LUN.
6. When the boot LUN is visible to the HBA, select Configure Boot Devices.
7. Review the list of boot devices and select the required LUN as the primary boot device.
You can view the device for boot details, as shown in the following figure.
Direct-attached storage
Direct-attached storage (DAS) is digital storage that is directly attached to the computer that accesses it, as opposed to storage that is accessed over a computer network. Examples of DAS
include hard drives, solid-state drives, optical disc drives, and storage on external drives.
Network-attached storage
Network-attached storage (NAS) is a file-level data-storage server that is connected to a computer network and provides data access to a heterogeneous group of clients. NAS is
specialized for serving files by its hardware, software, or configuration.
l Introduction
l Dell EMC PowerPath for Windows
l Microsoft Native MPIO
l Veritas volume management software
Introduction
Dell EMC supports various mechanisms to address multiple paths to a device. Having redundant
access paths to a storage environment is an essential aspect in any storage topology. Online array
code upgrades (nondisruptive upgrades, or NDU), online configuration changes, or any
disturbances in the topology are best handled by a host when multiple paths are available, and
when path management software is installed and configured on the host.
The advantages of path management software include:
l Path failover and path monitoring-Periodically assess the health of the host storage
connectivity and routing over a preconfigured alternate path in case of a path failure, a
component failure, or both.
l Load balancing-Share the I/O load across multiple channels to improve performance.
l Device management-Manage multiple native devices, which are instances of a single device, and in active-passive array environments, route I/O to an active device.
l Device persistence-Achieve persistence of device names when the SCSI device is rescanned
upon reboot.
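If PowerPath is the path management software in use, the PowerPath CLI provides a quick way to confirm that all expected paths are present and healthy. This is a sketch; device naming in the output depends on the attached array, and the commands must be run from an elevated prompt.
# Display all PowerPath-managed devices and the state of each path
powermt display dev=all
# Configure any newly presented paths and save the configuration
powermt config
powermt save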
After you install PowerPath software on the cluster, if you test node failover by disconnecting all
cables for a LUN or by disrupting the path between the active host and the array, Windows will log
event messages indicating a hardware or network failure and possible data loss. If the cluster is
working correctly, it will fail over to a node with an active path and you can ignore the messages
from the original node as logged in the event log.
Note: Check the application that is generating I/O to see if any failures have occurred. If no
failures have occurred, everything is working normally.
Installing PowerPath software in a clustered environment requires:
l Moving all resources to Node A
l Installing PowerPath software on Node B
l Configuring additional paths between the storage array and Node B
l Moving all resources to Node B
l Installing PowerPath software on Node A
l Configuring additional paths between the storage array and Node A
l Returning Node A resources to Node A
The following sections describe these tasks in detail.
Note: If the cluster has more than two nodes, install PowerPath software on the
other nodes.
For example, in a four-node cluster, replace Node B with Nodes B, C, and D in step 4 of the previous section (Moving resources to Node A), and in steps 1 and 2 of this section.
3. To pause Node A, click Node A and click File > Pause Node.
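On Windows Server 2012 and later, the same cluster operations can be scripted with the FailoverClusters PowerShell module. The following is a minimal sketch; the node names are placeholders for the nodes in your cluster.
# Move all clustered roles off Node A, then pause it before installing PowerPath software
Get-ClusterGroup | Move-ClusterGroup -Node "NodeB"
Suspend-ClusterNode -Name "NodeA"
# After the installation and reboot, resume the node
Resume-ClusterNode -Name "NodeA"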
The following figure shows taskbar icons and the PowerPath status that each represents.
Figure 12 PowerPath taskbar icons and status
The following figure shows how the PowerPath Administrator would look if PowerPath is installed
correctly. In this case, one path is zoned between the host bus adapter (HBA) and one port on the
storage device.
The following figure shows how the PowerPath Administrator would look when multiple paths are
zoned to your storage device.
Figure 14 Multiple paths
Problem determination
Use of PowerPath Administrator can help you determine what has caused a loss of connectivity to
the storage device. The PowerPath Administrator UI shows array ports that are offline, defective
host bus adapters (HBAs), and broken paths in several ways. The following table shows the known
possible failure states. Referencing this table can greatly reduce problem determination time.
Object: PowerPath device
Degraded icon: One or more (but not all) paths to the disk device have failed.
Failed icon: All paths to the disk device have failed. The disk is not available.
Unlicensed and degraded icon: PowerPath Administrator is unlicensed. One or more (but not all) paths to the disk device have failed.
Unlicensed and failed icon: PowerPath Administrator is unlicensed. All paths to the disk device have failed. This disk is not available.
Object: Adapter
Degraded icon: One or more (but not all) adapters have either failed or degraded. The status is displayed in the Adapters folder in the scope pane. One or more (but not all) paths have failed on the adapter. The status is displayed in the individual adapter folder under Adapters in the scope pane or in the result pane when Adapters is selected from the scope pane.
Failed icon: All paths on this adapter to this disk have failed.
Unlicensed and degraded icon: PowerPath Administrator is unlicensed. One or more (but not all) adapters have either failed or degraded. The status is displayed in the Adapters folder in the scope pane. One or more (but not all) paths have failed on the specific adapter. The status is displayed in the individual adapter folder under Adapters in the scope pane or in the result pane when Adapters is selected from the scope pane.
Unlicensed and failed icon: PowerPath Administrator is unlicensed. All paths on this adapter to the disk have failed.
Object: Root node
Failed icon: One or more hardware components that make up the path have failed; therefore, the entire path failed.
Unlicensed and failed icon: PowerPath Administrator is unlicensed, and one or more of the path hardware components has failed; therefore, the entire path failed.
Object: Storage array
Degraded icon: One or more (but not all) paths to the storage array have failed or are in a degraded state.
Failed icon: All paths to the array have failed. This array is not available.
Unlicensed and degraded icon: PowerPath Administrator is unlicensed. One or more (but not all) paths to the array have failed or are in a degraded state.
Unlicensed and failed icon: PowerPath Administrator is unlicensed. All paths to the array have failed. This disk is not available.
Object: Storage array port
Degraded icon: One or more (but not all) paths to the storage array port have failed or are in a degraded state.
Failed icon: All paths to the array port have failed. This array port is not available.
Unlicensed and degraded icon: PowerPath Administrator is unlicensed. One or more (but not all) paths to the array port have failed or are in a degraded state.
Unlicensed and failed icon: PowerPath Administrator is unlicensed. All paths to the array port have failed. This disk is not available.
l The following figure shows the result of a problem with one of the HBAs or the path leading to
the HBA. The failed HBA or path is marked with a red X. Access to the disk devices still exists
even when it is degraded.
Figure 16 Failed HBA path
PowerPath messages
For a complete list of PowerPath messages and their meanings, see the PowerPath Product
Guide.
Support for MPIO in Windows Server 2008 and Windows Server 2008 R2
Windows Server 2008 and Windows Server 2008 R2 natively include Microsoft Multipath I/O
(MPIO), which is supported by all Dell EMC storage arrays.
Note the following details:
l For Server Core installations of Windows Server 2008 and Windows Server 2008 R2,
MPIO is failover only. No load-balancing options are available in the default Device Specific
Module (DSM) for Dell EMC storage arrays.
l Default Microsoft MPIO Timer Counters are supported.
l Hosts running Windows Server 2008 and Windows Server 2008 R2 must be manually
configured so that the initiators are registered using failover mode 4 (ALUA).
l CLARiiON CX4 requires Flare R30 or later to support MPIO. Other CLARiiON systems require
Flare 26 or later.
l VNX arrays require VNX OE for Block 31 or later.
Configuring MPIO for Server Core installations of Windows Server 2008 and 2008 R2
You must configure Multipath I/O (MPIO) to manage VPLEX, Symmetrix DMX, VMAX, VNX,
XtremIO, Unity series, and CLARiiON systems. To configure MPIO, open the Control Panel, and then open the MPIO applet.
Use one of the following methods to configure MPIO:
Method 1: Manually enter the Vendor and Device IDs of the arrays for MPIO to claim and manage.
Note: This method is preferred if all arrays are not initially connected during configuration and you want to avoid subsequent reboots.
1. Use the MPIO-ed Devices tab in the MPIO Properties dialog box of the MPIO control panel
applet.
2. Select Add and enter the vendor and product IDs of the array devices to be claimed by MPIO.
You must enter the vendor ID as a string of eight characters (padded with trailing spaces) followed
by the product ID as a string of sixteen characters (padded with trailing spaces).
For example: To claim a VNX series and CLARiiON RAID 1 device in MPIO, the string would be
entered as:
DGC*****RAID*1**********
where the asterisk is representative of a space.
The vendor and product IDs vary based on the array and device types that are presented to the
host, as shown in the following table.
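On Windows Server 2012 and later, where the MPIO PowerShell module is available, the same claim can be added without typing the padded string by hand; the cmdlet takes the vendor and product IDs as separate strings. This is a sketch only, reusing the VNX series and CLARiiON example above; the IDs must match the devices presented to your host.
# Add a vendor/product ID pair for the Microsoft DSM to claim (example values)
New-MSDSMSupportedHW -VendorId "DGC" -ProductId "RAID 1"
# Confirm the list of hardware IDs that MSDSM will claim
Get-MSDSMSupportedHW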
Method 2: Use the MPIO applet to discover, claim, and manage the arrays that were already
connected during configuration.
Note: This method is preferred if ease-of-use is required and subsequent reboots are
acceptable when each array is connected.
CAUTION When configuring MPIO on your system, do not exceed the MPIO maximum of 32
paths per LUN. If you exceed this number, the host will fail and produce a blue-screen error
message.
1. Go to Discover Multi-Paths tab of the MPIO Properties dialog box of the MPIO control panel
applet to configure automatic discovery.
Only arrays that are connected with at least two logical paths are listed as available to be
added in this tab.
l The SPC-3-compliant section of the applet lists only devices from:
n VNX systems running VNX OE for Block 31
n CLARiiON systems that are running Flare R26 or later and that are configured for
failover mode 4 (ALUA)
l The Others section of the applet lists devices from the following arrays: DMX, VMAX 40K,
VMAX 20K/VMAX, VMAX 10K systems with SN xxx987xxxx, VMAX 10K systems with SN
xxx959xxxx, VMAXe, and VPLEX.
2. Select the array and device types to be claimed and managed by MPIO by selecting the Device
Hardware ID and clicking the Add button.
Note: Although the operating system prompts you to reboot for each device type that you add,
a single reboot after you add multiple device types is sufficient.
VMAX 20K/VMAX, VMAX 10K systems with SN xxx987xxxx, VMAX 10K systems with SN
xxx959xxxx, VMAXe, VNX series, and CLARiiON.
The default load balance policy (as reported in the MPIO tab) for each disk device depends upon
the type of disk device presented.
Note: Any permitted changes to the default load balance policy, as described in this section,
must be made on a per-disk-device basis.
l In Windows Server 2008, devices on the following arrays report a default load balance policy as
Fail Over Only, where the first reported path is listed as Active/Optimized and all other paths
are listed as Standby:
DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K systems with SN xxx987xxxx, VMAX 10K
systems with SN xxx959xxxx, and VMAXe.
You can override the default policy by changing the load balance policy to any other available
option. See the Windows Server 2008 documentation for a detailed description of available
load balance policies.
l In Windows Server 2008 R2, devices on the following arrays report a default load balance
policy as Round Robin; all the paths are listed as Active/Optimized:
DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K systems with SN xxx987xxxx, VMAX 10K
systems with SN xxx959xxxx, and VMAXe.
You can override the default policy by changing the load balance policy to any other available
option. See the Windows Server 2008 R2 documentation for a detailed description of available
load balance policies.
l DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K systems with SN xxx987xxxx, VMAX 10K systems with SN xxx959xxxx, and VMAXe array devices that are attached to the host report a default load balance policy of Fail Over Only. You can change the load balance policy to any other available option. See the Windows Server 2008 and Windows Server 2008 R2 documentation for a detailed description of available load balance policies.
l VNX series and CLARiiON devices report a default load balance policy of Round Robin With Subset. All paths to the storage processor owning the device are reported as Active/Optimized, and all paths to the storage processor not owning the LUN are reported as Active/Unoptimized.
l VNX series and CLARiiON devices attached to the host in ALUA mode (as is required when
using native MPIO) report the path state that is used directly by the host that is natively
running MPIO. You cannot override the path state by changing the load balance policy.
l VPLEX devices report a default load balance policy of Round Robin with all active paths as
Active/Optimized. You can override the default policy by changing the load balance policy to
any other available option, except Fail Over Only. See the Windows Server 2008 and Windows
Server 2008 R2 documentation for a detailed description of available load balance policies.
Enabling MPIO on Server Core installations of Windows Server 2008 and 2008 R2
You must start Multipath I/O (MPIO) and other features from the command line because the Server Core installations of Windows Server 2008 and 2008 R2 do not have a traditional UI. See
Microsoft TechNet for more information about Windows Server Core installations.
To enable the native MPIO feature from the command line, type:
start /w ocsetup MultipathIo
After the system reboots, you can manage MPIO with the mpiocpl.exe utility. From the command
prompt, type: mpiocpl.exe
The MPIO Properties dialog box appears. The arrays and devices can be claimed and managed
from the MPIO Properties dialog box, as described in Path management in MPIO for standard
Windows installations.
For more information about Microsoft MPIO, see the Microsoft website and Microsoft TechNet.
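As a command-line alternative on Server Core, mpclaim.exe (included with the MPIO feature) can also be used from Windows PowerShell or a command prompt to inspect and claim storage. The following is a sketch; the output depends on the arrays attached to the host.
# Display the hardware IDs of attached storage devices that MPIO can claim
mpclaim.exe -e
# Display MPIO-managed disks and their current load balance policies
mpclaim.exe -s -d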
Only arrays that are connected with at least two logical paths are available to be added in this
tab.
l The SPC 3-compliant section of the applet lists only devices from:
n VNX systems running VNX OE for Block 31
n CLARiiON systems that are running Flare 30 or later and that are configured for failover
mode 4 (ALUA)
l The Others section of the applet lists devices from the following arrays: DMX, VMAX 40K,
VMAX 20K/VMAX, VMAX 10K systems with SN xxx987xxxx, VMAX 10K systems with SN
xxx959xxxx, VPLEX and VMAXe
2. Select the array and device types to be claimed and managed by MPIO by selecting the Device
Hardware ID and clicking Add.
Figure 17 MPIO Properties dialog box
Note: Although the operating system prompts you to reboot for each device type that you add,
a single reboot after you add multiple device types is sufficient.
Fail Over Only-This policy does not perform load balancing. It uses a single active path, and the
rest of the paths are standby paths. The active path is used for sending all I/O. If the active path
fails, then one of the standby paths is used. When the path that failed is reactivated or
reconnected, the standby path that was activated returns to standby.
Round Robin-This load balance policy allows the Device Specific Module (DSM) to use all
available paths for MPIO in a balanced way. This is the default policy that is chosen when the
storage controller follows the active-active model and the management application does not
specifically choose a load balance policy.
Round Robin with Subset-This load balance policy allows the application to specify a set of paths
to be used in a round robin fashion and with a set of standby paths. The DSM uses paths from a
primary pool of paths for processing requests as long as at least one of the paths is available. The
DSM uses a standby path only when all the primary paths fail. For example: With four paths: A, B,
C, and D, paths A, B, and C are listed as primary paths and D is the standby path. The DSM
chooses a path from A, B, and C in round robin fashion as long as at least one of them is available.
If all three paths fail, the DSM uses D, which is the standby path. If path A, B, or C becomes
available, the DSM stops using path D and switches to the available paths among A, B, and C.
Least Queue Depth-This load balance policy sends I/O down the path with the fewest currently
outstanding I/O requests. For example, consider that, of two I/Os, one I/O is sent to LUN 1 on
Path 1, and the other I/O is sent to LUN 2 on Path 1. The cumulative outstanding I/O on Path 1 is
2, and on Path 2, it is 0. Therefore, the next I/O for either LUN will process on Path 2.
Weighted Paths-This load balance policy assigns a weight to each path. The weight indicates the
relative priority of a path. The larger the number, the lower ranked the priority. The DSM chooses
the least-weighted path from the available paths.
Least Blocks-This load balance policy sends I/O down the path with the least number of data blocks currently being processed. For example, consider that, of two I/Os, one is 10 bytes and the other is 20 bytes. Both are in process on Path 1, and both have completed on Path 2. The cumulative outstanding amount of I/O on Path 1 is 30 bytes. On Path 2, it is 0. Therefore, the next I/O will
process on Path 2.
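These policies can also be set per disk from the command line with mpclaim.exe. The following is a sketch; the disk number is an example, and the policy is selected by number (1 = Fail Over Only, 2 = Round Robin, 3 = Round Robin with Subset, 4 = Least Queue Depth, 5 = Weighted Paths, 6 = Least Blocks).
# Set MPIO disk 2 (as listed by mpclaim.exe -s -d) to Least Queue Depth
mpclaim.exe -l -d 2 4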
The default load balance policy for each disk device (as reported in the MPIO tab) depends upon
the type of disk device that is presented.
In Windows Server 2012, devices on the following arrays report a default load balance policy of
Round Robin, where all paths are listed as Active/Optimized: DMX, VMAX 40K, VMAX 20K/
VMAX, VMAX 10K systems with SN xxx987xxxx, VMAX 10K systems with SN xxx959xxxx, and
VMAXe.
VNX series and CLARiiON devices report a load balance policy of Round Robin With Subset,
where all paths to the storage processor owning the device are Active/Optimized, and all paths to
the storage processor not owning the LUN are Active/Unoptimized. VNX series and CLARiiON
devices that are attached to the host in ALUA mode (as is required when using MPIO) report the
path state. The reported path state is used directly by the host running MPIO and cannot be
overridden by changing the load balance policy.
VPLEX devices report a default load balance policy of Round Robin, with all active paths as
Active/Optimized.
Change load balance policies based on your environment. In most cases, the default policy will be
suitable for your I/O load needs. However, some environments might require a change to the load
balance policy to improve performance or better spread I/O load across storage front-end ports.
Dell EMC does not require a specific load balance policy for any environment, and customers can
change their load balance policies to meet their environment's needs.
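On Windows Server 2012 and later, the global default policy for the Microsoft DSM can also be viewed and changed from Windows PowerShell. This is a sketch only; whether a change is appropriate depends on your environment, as described above.
# View and set the MSDSM global default load balance policy (here, Least Queue Depth)
Get-MSDSMGlobalDefaultLoadBalancePolicy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD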
For more information about Microsoft MPIO, see the Microsoft website and Microsoft TechNet.
PowerMaxOS 5978, which runs on PowerMax storage systems, is also available with Dell EMC VMAX All Flash storage systems.
HYPERMAX OS 5977 provides emulations that perform specific data service and control functions in the HYPERMAX environment, and it introduces an open application platform for running data services, providing file system storage with eNAS and embedded management services for Unisphere. The entire feature set available with HYPERMAX OS 5977 running on Dell EMC VMAX All Flash storage systems is also available with PowerMaxOS 5978, except FCoE front-end connections.
For detailed product specifications, including front-end I/O protocol support information, see the VMAX All Flash Product Guide and VMAX All Flash: Family Overview, available on https://www.dellemc.com/.
For more detailed features, see the Dell EMC PowerMax and VMAX: Non-Disruptive Migration Best
Practices and Operational Guide white paper available on https://www.dellemc.com/.
See Dell EMC Simple Support Matrix available on Dell EMC E-Lab Navigator for host
interoperability with various operating system platforms and multipathing software supported for
Non-Disruptive migrations.
Use the advanced query option in Dell EMC E-Lab Navigator for a specific configuration search.
Midrange Storage
This section describes host connectivity of the Dell EMC Midrange storage arrays.
Storage configuration
RAID protection
Dell EMC Unity applies RAID protection to the storage pool to protect user data against drive
failures. Choose the RAID type that best suits your needs for performance, protection, and cost.
l RAID-1/0 provides the highest level of performance from a given set of drive resources, with
the lowest CPU requirements; however, only 50% of the total drive capacity is usable.
l RAID-5 provides the best usable capacity from a set of drive resources, but at lower overall
performance and availability than RAID-1/0.
l RAID-6 provides better availability than RAID-5 and better usable capacity than RAID-1/0, but
has the lowest performance potential of the three RAID types.
Traditional pools
Traditional Storage Pools apply RAID protection to individual groups of drives within the storage
pool. Traditional pools are the only type of pool available on Dell EMC Unity hybrid systems, and
are also available on all-Flash systems.
RAID protection
For traditional pools, Dell EMC recommends RAID-5 for drives in Extreme Performance and
Performance tiers, and RAID-6 for drives in the Capacity tier.
Assuming that roughly the same number of drives will be configured in a traditional pool, Dell EMC
recommends smaller RAID widths as providing the best performance and availability, at the cost of
slightly less usable capacity.
Example: When configuring a traditional pool tier with RAID-6, use 4+2 or 6+2 as opposed to 10+2
or 14+2.
When choosing RAID-1/0, 1+1 can provide better performance with the same availability and usable
capacity as larger RAID widths (assuming that the same total number of drives are used), and also
provides more flexibility.
All-flash pool
All-flash pools provide the highest level of performance in Dell EMC Unity. Use an all-flash pool
when the application requires the highest storage performance at the lowest response time.
Snapshots and Replication operate most efficiently in all-flash pools. Data Reduction is only
supported in an all-flash pool.
FAST Cache and FAST VP are not applicable to all-flash pools.
Dell EMC recommends using only a single drive size and a single RAID width within an all-flash
pool.
Dynamic pools
Dynamic Storage Pools apply RAID protection to groups of drive extents from drives within the
pool, and allow for greater flexibility in managing and expanding the pool. Dynamic pools are only
available on Dell EMC Unity all-Flash systems, and therefore must be all-Flash pools; dynamic
pools cannot be built with HDDs.
RAID protection
At the time of creation, dynamic pools use the largest RAID width possible with the number of
drives that are specified, up to the following maximum widths:
l RAID-1/0: 4+4
l RAID-5: 12+1
l RAID-6: 14+2
With dynamic pools, there is no performance or availability advantage to smaller RAID widths. To
maximize usable capacity with parity RAID, Dell EMC recommends to initially create the pool with
enough drives to guarantee the largest possible RAID width.
l For RAID-5, initially create the pool with at least 14 drives.
l For RAID-6, initially create the pool with at least 17 drives.
Spare capacity
Hot spares are not needed with dynamic pools. A dynamic pool automatically reserves the capacity
of one drive, as spare space in the pool, for every 32 drives. If a drive fails, the data that was on
the failed drive is rebuilt into the spare capacity on the other drives in the pool. Also, unbound
drives of the appropriate type can be used to replenish the spare capacity of a pool, after the pool
rebuild has occurred.
Example: For an All-Flash pool, use only 1.6 TB SAS Flash 3 drives, and configure them all with
RAID-5 8+1.
Hybrid pool
Hybrid pools can contain HDDs (SAS and NL-SAS drives) and flash drives, and can contain more
than one type of drive technology in different tiers. Hybrid pools typically provide greater capacity
at a lower cost than all-flash pools, but also typically have lower overall performance and higher
response times. Use hybrid pools for applications that do not require consistently low response
times, or that have large amounts of mostly inactive data.
Performance of a hybrid pool can be improved by increasing the amount of capacity in the flash
drive tier, so that more of the active dataset resides on and is serviced by the flash drives. See the
FAST VP section.
Hybrid pools can have up to three tiers (Extreme Performance, Performance, and Capacity). Dell
EMC recommends using only a single drive speed, size, and RAID width within each tier of a hybrid
pool.
Example:
l For the Extreme Performance tier, use only 800 GB SAS flash 2 drives, and configure them all
with RAID-5 8+1.
l For the Performance tier, use only 1.2 TB SAS 10K RPM drives, and configure them with
RAID-5 4+1.
l For the Capacity tier, use only 6 TB NL-SAS drives, and configure them all with RAID-6 6+2.
l Data Reduction
l Snapshots
l Thin Clones
l Asynchronous Replication
Thick storage objects will reserve capacity from the storage pool, and dedicate it to that particular
storage object. Thick storage objects guarantee that all advertised capacity is available for that
object. Thick storage objects are not space efficient, and therefore do not support the use of
space-efficient features. If it is required to enable a space-efficient feature on a thick storage
object, it is recommended to first migrate the thick storage object to a thin storage object, and
enable the feature during the migration (for Data Reduction) or after migration has completed (for
Snapshots, Thin Clones, and Asynchronous Replication).
In addition to capacity for storing data, storage objects also require pool capacity for metadata
overhead. The overhead percentage is greater on smaller storage objects. For better capacity
utilization, Dell EMC recommends configuring storage objects that are at least 100 GB in size, and preferably at least 1 TB in size.
Features
FAST VP
Fully Automated Storage Tiering (FAST) for Virtual Pools (VP) accelerates performance of a
specific storage pool by automatically moving data within that pool to the appropriate drive
technology, based on data access patterns. FAST VP is applicable to hybrid pools only within a Dell
EMC Unity hybrid system.
The default and recommended FAST VP policy for all storage objects is Start High then Auto-
tier. This policy places initial allocations for the storage object in the highest tier available, and
monitors activity to this storage object to determine the correct placement of data as it ages.
FAST VP is most effective if data relocations occur during or immediately after normal daily
processing. Dell EMC recommends scheduling FAST VP relocations to occur before backups or
nightly batch processing. For applications which are continuously active, consider configuring
FAST VP relocations to run constantly.
Dell EMC recommends maintaining at least 10% free capacity in storage pools, so that FAST VP
relocations can occur efficiently. FAST VP relocations cannot occur if the storage pool has no free
space.
FAST Cache
FAST Cache is a single global resource that can improve performance of one or more hybrid pools
within a Dell EMC Unity hybrid system. FAST Cache can only be created with SAS Flash 2 drives,
and is only applicable to hybrid pools. Dell EMC recommends to place a Flash tier in the hybrid pool
before configuring FAST Cache on the pool. FAST Cache can improve access to data that is
resident in the HDD tiers of the pool.
Enable FAST Cache on the hybrid pool if the workload in that pool is highly transactional, and has a
high degree of locality that changes rapidly.
For applications that use larger I/O sizes, have lower skew, or do not change locality as quickly, it
may be more beneficial to increase the size of the Flash tier rather than enable FAST Cache.
FAST Cache can increase the IOPS achievable from the Dell EMC Unity system, and this will most
likely result in higher CPU utilization (to service the additional I/O). Before enabling FAST Cache
on additional pools or expanding the size of an existing FAST Cache, monitor the average system
CPU utilization to determine if the system can accommodate the additional load. See Table 3 for
recommendations.
Data Reduction
Dell EMC Unity Data Reduction by compression is available for Block LUNs and VMFS datastores
in an all-flash pool starting with Dell EMC Unity OE 4.1. Data reduction via compression is available
for file systems and NFS datastores in an all-flash pool starting with Dell EMC Unity OE 4.2.
Beginning with Dell EMC Unity OE 4.3, data reduction includes both compression and
deduplication.
Be aware that data reduction increases the overall CPU load on the system when storage objects
service reads or writes of reducible data, and may increase latency. Before enabling data reduction on a storage object, Dell EMC recommends monitoring the system to ensure that the system has available resources to support data reduction (see Table 3 for the Hardware Capability
Guidelines). Enable data reduction on a few storage objects at a time, and then monitor the system
to be sure it is still within recommended operating ranges, before enabling data reduction on more
storage objects.
For new storage objects, or storage objects that are populated by migrating data from another
source, Dell EMC recommends to create the storage object with data reduction enabled, before
writing any data. This provides maximum space savings with minimal system impact.
Advanced Deduplication
Dell EMC Unity Advanced Deduplication is an optional extension to Data Reduction, that you can
enable to increase the capacity efficiency of data reduction enabled storage objects. Beginning
with Dell EMC Unity OE 4.5, advanced deduplication is available for storage objects in dynamic
pools on Dell EMC Unity 450F, 550F, and 650F All-Flash systems.
As with data reduction, advanced deduplication is only applied to data when it is written to the
storage object. LUN Move can be utilized to deduplicate existing data on Block storage objects.
For new storage objects, or storage objects that will be populated by migrating data from another
source, it is recommended to create the storage object with advanced deduplication enabled,
before writing any data. This provides maximum space savings with minimal system impact.
Snapshots
Dell EMC recommends including a Flash tier in a hybrid pool where snapshots will be active.
Snapshots increase the overall CPU load on the system, and increase the overall drive IOPS in the
storage pool. Snapshots also use pool capacity to store the older data being tracked by the
snapshot, which increases the amount of capacity used in the pool, until the snapshot is deleted.
Consider the overhead of snapshots when planning both performance and capacity requirements
for the storage pool.
Before enabling snapshots on a storage object, it is recommended to monitor the system and
ensure that existing resources can meet the additional workload requirements (see Table 2 for the Hardware Capability Guidelines). Enable snapshots on a few storage objects at a time, and then
monitor the system to be sure it is still within recommended operating ranges, before enabling
more snapshots.
Dell EMC recommends to stagger snapshot operations (creation, deletion, and so on). This can be
accomplished by using different snapshot schedules for different sets of storage objects. It is also
recommended to schedule snapshot operations after any FAST VP relocations have completed.
Snapshots are deleted by the system asynchronously; when a snapshot is in the process of being
deleted, it will be marked as Destroying. If the system is accumulating Destroying snapshots over
time, it may be an indication that existing snapshot schedules are too aggressive; taking snapshots
less frequently may provide more predictable levels of performance. Dell EMC Unity will throttle
snapshot delete operations to reduce the impact to host I/O. Snapshot deletes will occur more
quickly during periods of low system utilization.
Thin Clones
Dell EMC recommends including a flash tier in a hybrid pool where thin clones will be active.
Thin clones use snapshot technology to provide space-efficient clones of block objects. Consider
the overhead of snapshots when planning performance and capacity requirements for a storage
pool which will have thin clones.
Asynchronous replication
Dell EMC recommends including a Flash tier in a hybrid pool where asynchronous replication is
active. This is applicable to both the source and the destination pools.
Dell EMC recommends configuring multiple replication interfaces per SP, and distributing
replication sessions across them. Link Aggregation Control Protocol (LACP) can also be used to
aggregate bandwidth for a replication interface. Configure Jumbo frames (MTU 9000) when
possible.
Asynchronous replication takes snapshots on the replicated storage objects to create the point-in-
time copy, determine the changed data to transfer, and maintain consistency during the transfer.
Consider the overhead of snapshots when planning performance and capacity requirements for a
storage pool that has replicated objects.
When possible, fill the source storage object with data before creating the replication session. The
data will then be transmitted to the destination storage object during initial synchronization. This is
typically the fastest way to populate the destination storage object with asynchronous replication.
Setting smaller RPO values on replication sessions will not make them transfer data more quickly;
but smaller RPOs result in more frequent snapshot operations. Choosing larger RPOs, or manually
synchronizing during nonproduction hours, may provide more predictable levels of performance.
SAN Copy
SAN Copy provides one-time migration of Block resources from a third-party array, using either
iSCSI or FC connections. When using FC, note that SAN Copy must use different ports than the
FC ports which are designated for Synchronous Replication. This is true even if Synchronous
Replication is not actively being used.
To lessen the impact of SAN Copy migrations on other host activity, consider reducing the number
of host connections on the FC ports used for SAN Copy.
NDMP
Dell EMC Unity supports 2-way NDMP for file data, which enables the system to send file data
directly to a backup device using FC connections. Make sure that NDMP uses different ports than
the FC ports which are designated for Synchronous Replication. This is true even if Synchronous
Replication is not actively being used.
To lessen the impact of 2-way NDMP backups on other host activity, consider reducing the
number of host connections on the FC ports that are used for NDMP.
Application considerations
Hosts
Under the ACCESS category in the main navigation menu, users can configure hosts (Windows or
Linux/UNIX) for storage access. VMware hosts can be configured on the VMware (Hosts) page.
Before a network host can access block storage or NFS file systems, the user must define a
configuration for the host and associate it with a storage resource. SMB file systems can
automatically be accessed by authorized users once provisioned. Users can use the Hosts page, as
shown in the following figure, to configure host configurations. This can be done on an individual
host-by-host basis or through subnet and netgroup configurations that allow access to multiple
hosts or network segments. For block resources, before the user starts to configure a host, the
user should ensure that initiator interfaces are configured and initiator registration completed.
Once a host configuration is completed, users can go to the properties of a storage resource and
specify the hosts, subnets, or netgroups from which they want the resource to be accessed.
Figure 18 Hosts
VMware (ACCESS)
The VMware host access page is specifically for VMware ESXi hosts and their associated vCenter
servers. Unisphere provides VMware discovery capabilities through the VMware page, as shown in
the following figure. These discovery capabilities collect virtual machine and datastore storage
details from vSphere and display them in the context of the storage system. Imported VMware
hosts automatically register their initiators, allowing for ease of management. The vCenters tab
allows users to add a vCenter and associated ESXi hosts in a single workflow, while the ESXi hosts
tab allows users to add standalone ESXi hosts as needed. The Virtual Machines tab and Virtual
Drives tab display imported information about virtual machines and their VMDKs from any added
ESXi host.
For more information about VMware access and integration capabilities, see the Dell EMC Unity:
Virtualization Integration white paper on Dell EMC Online Support.
Initiators
To ensure that hosts can access block storage resources, the user must register initiators between
the storage system and configured hosts. On the Initiators page, as shown in the following figure,
users can manually register one or more Fibre Channel or iSCSI initiators. Initiators are endpoints
from which Fibre Channel and iSCSI sessions originate, where each initiator is uniquely identified
by its World Wide Name (WWN) or iSCSI Qualified Name (IQN). The link between a host initiator
and a target port on the storage system is called the initiator path. Each initiator can be associated
with multiple initiator paths. The Initiator Paths tab shows all data paths that are currently
available to the initiators connected to the system either by FC or iSCSI. For iSCSI paths to show
up, iSCSI interfaces must be configured on the Block Page. These initiators can then be
discovered and registered by hosts using the iSCSI initiator tool (that is, the Microsoft iSCSI
Initiator). For Fibre Channel paths, FC zoning on the appropriate switch is needed for the initiator
paths to be seen as available by the system. Once the paths are available, users can configure their
connected hosts on the Hosts Page.
Figure 20 Initiators
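For example, after the iSCSI interfaces are configured on the Block page, a Windows host can discover and log in to them with the Microsoft iSCSI Initiator cmdlets. The following is a minimal sketch; the portal address and target IQN are placeholders that must be replaced with the values for your environment:
# Ensure the Microsoft iSCSI Initiator service is running
Start-Service MSiSCSI
# Add a storage system iSCSI interface as a target portal (address is a placeholder)
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.10"
# List the targets discovered through the portal
Get-IscsiTarget
# Log in to a discovered target and make the connection persistent across reboots
Connect-IscsiTarget -NodeAddress "iqn.1992-04.com.emc:example-target" -IsPersistent $true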
With the release of Dell EMC Unity OE version 4.3, Initiators can now have advanced settings
customized through Unisphere. To access these settings, select an Initiator and then click the pencil icon to open the Edit Initiator window. Click Advanced at the bottom to reveal the Initiator Source Type, Fail-over Mode, Unit Serial Number, and LunZ Enabled settings, as shown in the following figure. For more information about configuring Host Initiator Parameters, see the Online Help in Unisphere.
Snapshot schedule
Dell EMC Unity enables you to take point-in-time snapshots for all storage resources (block or file)
to meet protection and recovery requirements in the event of corruption or accidental deletion.
The Snapshot Schedule page, as shown in the following figure, enables users to set the system to
periodically take snapshots of storage resources automatically. Automating these administrative
tasks takes away some of the management aspects of data protection. After enabling a snapshot
schedule for a resource, each snapshot that is taken is time-stamped with the date and time of
when it was created, and contains a point-in-time image of the data in the storage resource. The
default snapshot schedules available on the system are:
l Default protection - A snapshot is taken at 08:00 (UTC) every day, and the snapshot is
retained for 2 days.
l Protection with shorter retention - A snapshot is taken at 08:00 (UTC) every day, and the
snapshot is retained for 1 day.
l Protection with longer retention - A snapshot is taken at 08:00 (UTC) every day, and the
snapshot is retained for 7 days.
Note: Times are displayed in the user's local time in a 12-hour format. Default snapshot schedules cannot be modified, but custom snapshot schedules can be configured by selecting the intervals, times, and days for the system to take snapshots regularly.
With the Dell EMC Unity OE version 4.4 or later, user-defined Snapshot Schedules can be
replicated using the Synchronous Replication connection that is established between two physical
systems. See the new Sync Replicated column on the Snapshot Schedule page, as shown in the following figure. Applying a replicated Snapshot Schedule is only supported for synchronously replicated file resources.
For more information about the snapshot technology available on Dell EMC Unity systems, see the
Dell EMC Unity: Snapshots and Thin Clones and Dell EMC Unity: MetroSync for File white papers on
Dell EMC Online Support.
Front-end connectivity
Dell EMC Unity provides multiple options for front-end connectivity, using on-board ports directly
on the DPE, and using optional I/O Modules. This section discusses recommendations for the
different types of connectivity.
In general, front-end ports need to be connected and configured symmetrically across the two
storage processors (SPs), to facilitate high availability and continued connectivity if there is SP
failure.
Example - A NAS Server is configured so that NAS clients connect using port 0 of the first I/O
Module on SPA; therefore port 0 of the first I/O Module on SPB must be cabled so that it is
accessible to the same networks.
For best performance, Dell EMC recommends using all front-end ports that are installed in the
system, so that workload is spread across as many resources as possible.
Example - When configuring the 4-port Fibre Channel I/O Module, zone different hosts to different ports so that all eight ports across the two SPs are used; do not zone all hosts to the first port of each I/O Module.
Fibre Channel
When configured for Fibre Channel (FC), Dell EMC Unity CNA ports and I/O Module ports can be
configured with 8 Gb/s or 16 Gb/s SFPs. All FC ports can negotiate to lower speeds. 16 Gb/s FC is recommended for the best performance.
Dell EMC recommends single-initiator zoning when creating zone sets. For high availability
purposes, a single host initiator should be zoned to at least one port from SPA and one port from
SPB. For load balancing on a single SP, the host initiator can be zoned to two ports from SPA and
two ports from SPB. When zoning additional host initiators, zone them to different SP ports when
possible, to spread the load across all available SP ports.
Utilize multipathing software on hosts that are connected using FC, such as Dell EMC PowerPath,
which coordinates with the Dell EMC Unity system to provide path redundancy and load balancing.
iSCSI
Dell EMC Unity supports iSCSI connections on multiple 1 Gb/s and 10 Gb/s port options. 10GBase-T ports can autonegotiate to 1 Gb/s speeds. 10 Gb/s is recommended for the best performance. If
possible, configure Jumbo frames (MTU 9000) on all ports in the end-to-end network, to provide
the best performance.
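Where Jumbo frames are used, also verify the MTU on the Windows host NICs that carry iSCSI traffic. The following PowerShell sketch assumes an adapter named "Ethernet 2"; the exact Jumbo Packet value accepted (for example, 9014, which includes Ethernet headers) varies by NIC driver:
# Check the current Jumbo Packet setting of the iSCSI NIC (adapter name is an example)
Get-NetAdapterAdvancedProperty -Name "Ethernet 2" -RegistryKeyword "*JumboPacket"
# Enable Jumbo frames on the adapter
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014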
To achieve optimal iSCSI performance, use separate networks and VLANs to segregate iSCSI
traffic from normal network traffic. Configure standard 802.3x Flow Control (Pause or Link Pause)
on all iSCSI Initiator and Target ports that are connected to the dedicated iSCSI VLAN.
Dell EMC Unity supports 10 GbE and 1GBase-T ports that provide iSCSI offload. Specifically, the
CNA ports (when configured as 10GbE or 1GBase-T) and the 2-port 10GbE I/O Module ports
provide iSCSI offload. Using these modules with iSCSI can reduce the protocol load on SP CPUs
by 10-20%, so that those cycles can be used for other services.
Utilize multipathing software on hosts that are connected using iSCSI, such as Dell EMC
PowerPath, which coordinates with the Dell EMC Unity system to provide path redundancy and
load balancing.
Network-attached storage (NAS)
Dell EMC Unity supports NAS (NFS, FTP, and/or SMB) connections on multiple 1 Gb/s and 10 Gb/s port options. 10GBase-T ports can auto-negotiate to 1 Gb/s speed. 10 Gb/s is recommended
for the best performance. If possible, configure Jumbo frames (MTU 9000) on all ports in the end-
to-end network, to provide the best performance.
Dell EMC recommends configuring standard 802.3x Flow Control (Pause or Link Pause) on all
storage ports, switch ports, and client ports that are used for NAS connectivity.
Dell EMC Unity provides network redundancy for NAS using Link Aggregation Control Protocol
(LACP) and Fail-Safe Networking (FSN). Combine FSN and LACP with redundant switches to
provide the highest network availability. In addition to redundancy, LACP can also improve
performance with multiple 1GBase-T connections, by aggregating bandwidth. LACP can be
configured across any Ethernet ports that have the same speed, duplex, and MTU.
Note: LACP cannot be enabled on ports that are also used for iSCSI connections.
While LACP creates a link aggregation with multiple active links, FSN provides redundancy by
configuring a primary link and a standby link. The standby link is inactive unless the entire primary
link fails. If FSN is configured with links of different performance capability (such as a link aggregation of 10 Gb/s ports and a stand-alone 1 Gb/s port), Dell EMC recommends that you
configure the highest performing link as the primary.
NAS Servers are assigned to a single SP. All file systems that are serviced by that NAS Server
have I/O processed by the SP on which the NAS Server is resident. For load-balancing, Dell EMC
recommends that you create at least two NAS Servers per Dell EMC Unity system: one on SPA,
and one on SPB. Assign file systems to each NAS Server such that front-end workload is
approximately the same for each SP.
Connectivity Options
The following tables provide maximum expected IOPS and bandwidth from the different ports that
are available in the Dell EMC Unity system. (The capability of a port does not guarantee that the
system can reach that level, nor does it guarantee that performance will scale with additional
ports. System capabilities are highly dependent on other configuration parameters.)
SAS ports are used by the SPs to move data to and from the back-end drives; all other ports can
be used to provide access to hosts.
Table 3 provides maximum expected IOPS and bandwidth from a 12Gb SAS port. The base Dell
EMC Unity configuration contains four ports.
The following table provides maximum expected IOPS from the front-end ports. (The capability of a port does not guarantee that the system can reach that level, nor does it guarantee that performance will scale with additional ports. System capabilities are highly dependent on other configuration parameters.)
16 Gb FC: 45,000 IOPS
8 Gb FC CNA: 45,000 IOPS
The following table provides maximum expected IOPS from the front-end ports which provide File
protocols (NFS and SMB).
VPLEX
Overview
For detailed information about VPLEX, see the documentation available on Dell EMC Online Support.
Documentation
See the following documents for configuration and administration operations:
l Dell EMC VPLEX with GeoSynchrony 6.x Product Guide
l Dell EMC VPLEX with GeoSynchrony 6.x CLI Guide
Prerequisites
Before configuring VPLEX in the Windows environment, complete the following on each host:
l Confirm that all necessary remediation has been completed. This ensures that OS-specific
patches and software on all hosts in the VPLEX environment are at supported levels according
to the Dell EMC E-Lab Navigator.
l Confirm that each host is running VPLEX-supported failover software and has at least one
available path to each VPLEX fabric.
Note: See Dell EMC E-Lab Navigator for the most up-to-date support information and
prerequisites.
l If a host is running PowerPath, confirm that the load-balancing and failover policy is set to
Adaptive.
Storage volumes
A storage volume is a LUN exported from an array. When an array is discovered, the storage
volumes view shows all exported LUNs on that array. You must claim, and optionally name, these
storage volumes before you can use them in a VPLEX cluster. Once claimed, you can divide a
storage volume into multiple extents (up to 128), or you can create a single full-size extent using
the entire capacity of the storage volume.
Note: To claim storage volumes, the GUI supports only the Claim Storage wizard, which
assigns a meaningful name to the storage volume. Meaningful names help you associate a
storage volume with a specific storage array and LUN on that array and are useful during
troubleshooting and performance analysis.
This section contains the following topics:
l Claiming and naming storage volumes
l Extents
l Devices
l Distributed devices
l Rule sets
l Virtual volumes
Claiming and naming storage volumes
You must claim storage volumes before you can use them in the cluster (except the metadata
volume, which is created from an unclaimed storage volume). Only after claiming a storage volume can you use it to create extents, devices, and then virtual volumes.
Extents
An extent is a slice (range of blocks) of a storage volume. You can create a full-size extent using
the entire capacity of the storage volume, or you can carve the storage volume up into several
contiguous slices. Extents are used to create devices and then virtual volumes.
Devices
Devices combine extents or other devices into one large device with specific RAID techniques,
such as mirroring or striping. Devices can only be created from extents or other devices. A device's
storage capacity is not available until you create a virtual volume on the device and export that
virtual volume to a host.
You can create only one virtual volume per device. There are two types of devices:
l Simple device - A simple device is configured using one component, which is an extent.
l Complex device - A complex device has more than one component, combined using a specific
RAID type. The components can be extents or other devices (both simple and complex).
Distributed devices
Distributed devices are configured using storage from both clusters and are only used in multi-
cluster plexes. A distributed device's components must be other devices and those devices must
be created from storage in different clusters in the plex.
Rule sets
Rule sets are predefined rules that determine how a cluster behaves when it loses communication
with the other cluster, for example, during an inter-cluster link failure or cluster failure. In these
situations, until communication is restored, most I/O workloads require specific sets of virtual
volumes to resume on one cluster and remain suspended on the other cluster.
VPLEX provides a Management Console on the management server in each cluster. You can create
distributed devices using the GUI or CLI on either management server. The default rule set used by
the GUI causes the cluster used to create the distributed device to detach during communication
problems, allowing I/O to resume at the cluster. For more information about creating and applying
rule sets, see the Dell EMC VPLEX CLI Guide on Dell EMC Online Support.
There are cases in which all I/O must be suspended, resulting in data unavailability. VPLEX addresses these cases with the functionality of VPLEX Witness.
VPLEX with functionality of VPLEX Witness: When a VPLEX Metro configuration is augmented by
VPLEX Witness, the resulting configuration provides the following features:
l High availability for applications in a VPLEX Metro configuration (no single points of storage
failure)
l Fully automatic failure handling in a VPLEX Metro configuration
l Improved failure handling in a VPLEX configuration
l Better resource utilization
For information about VPLEX Witness, see the Dell EMC VPLEX with GeoSynchrony 5.5, 6.1 Product
Guide on Dell EMC Online Support.
Virtual volumes
Virtual volumes are created on devices or distributed devices and presented to a host through a
storage view. You can create virtual volumes only on top-level devices, and a virtual volume always uses the full capacity of the device.
System volumes
VPLEX stores configuration and metadata on system volumes that are created from storage
devices. There are two types of system volumes:
l Metadata volumes
l Logging volumes
Each of these volumes is briefly discussed in the following sections:
Metadata volumes
VPLEX maintains its configuration state, referred to as metadata, on storage volumes provided by
storage arrays. Each VPLEX cluster maintains its own metadata, which describes the local
configuration information for this cluster and any distributed configuration information that is
shared between clusters.
For more information about metadata volumes for VPLEX with GeoSynchrony v4.x, see Dell EMC
VPLEX CLI Guide, on Dell EMC Online Support.
For more information about metadata volumes for VPLEX with GeoSynchrony v5.x, see Dell EMC
VPLEX with GeoSynchrony 5.0 Product Guide, on Dell EMC Online Support.
For more information about metadata volumes for VPLEX with GeoSynchrony v6.x, see the Dell EMC VPLEX with GeoSynchrony 6.x Product Guide, on Dell EMC Online Support.
Logging volumes
Logging volumes are created during initial system setup. A logging volume is required in each cluster to track any blocks that are written during a loss of connectivity between clusters. After an inter-cluster link is restored,
the logging volume is used to synchronize distributed devices by sending only changed blocks over
the inter-cluster link.
For more information about logging volumes for VPLEX with GeoSynchrony v4.x, see Dell EMC
VPLEX CLI Guide, on Dell EMC Online Support.
For more information about logging volumes for VPLEX with GeoSynchrony v5.x, see Dell EMC
VPLEX with GeoSynchrony 5.0 Product Guide, on Dell EMC Online Support.
For more information about logging volumes for VPLEX with GeoSynchrony v6.x, see the Dell EMC VPLEX with GeoSynchrony 6.x Product Guide, on Dell EMC Online Support.
Note: Dell EMC recommends that you download the latest information before installing any
server.
You must set the VCM/ACLX bit if VPLEX is sharing VMAX series directors with hosts that require conflicting bit settings. For any other configuration, the VCM/ACLX bit can be either set or not set.
Note: When setting up a VPLEX-attach version 4.x or earlier with a VNX series or CLARiiON
system, you must set the initiator type to CLARiiON Open and Failover Mode to 1. ALUA is
not supported.
When setting up a VPLEX-attach version 5.0 or later with a VNX series or CLARiiON system,
the initiator type can be set to CLARiiON Open and the Failover Mode set to 1 or Failover
Mode 4 since ALUA is supported.
If you are using LUN masking, set the VCM/ACLX flag. You must use VCM/ACLX if sharing array directors with hosts that require conflicting flag settings.
Note: The FA bit settings that are listed in Table 4 are for connectivity of VPLEX to Dell EMC
VMAX series only. For host to Dell EMC VMAX series FA bit settings, see the Dell EMC E-Lab
Navigator.
For a list of the storage arrays that are qualified for use with VPLEX, see the Dell EMC Simple Support Matrix.
See VPLEX Procedure Generator, on Dell EMC Online Support, to verify supported storage arrays.
VPLEX automatically discovers storage arrays that are connected to the back-end ports. All arrays
connected to each director in the cluster are listed in the storage array view.
Host connectivity
For Windows host connectivity recommendations and best practices with VPLEX configurations,
see Implementation and planning best practices for EMC VPLEX technical notes, available on Dell
EMC Online Support.
For the most up-to-date information about qualified switches, hosts, HBAs, and software, see the
Dell EMC Simple Support Matrix, or contact your Dell EMC Customer Support.
The latest Dell EMC-approved HBA drivers and software are available for download at the
following websites:
l www.broadcom.com
l www.QLogic.com
l www.brocade.com
The Dell EMC HBA installation and configuration guides are available at the Dell EMC download
pages of these websites.
Note: Direct connect from an HBA to a VPLEX engine is not supported.
The execution throttle setting controls the number of outstanding I/O requests per HBA port. The HBA execution throttle should be set to the QLogic default value, which is 65535. This can be done at the HBA firmware level using the HBA BIOS or the QConvergeConsole CLI or GUI.
The queue depth setting controls the number of outstanding I/O requests per single path. On Windows, the HBA queue depth can be adjusted using the Windows Registry.
Note: When the execution throttle at the HBA level is set to a value lower than the queue depth, it might limit the effective queue depth to a lower value than the one set.
The following procedures detail how to adjust the queue depth setting for QLogic HBAs:
l Setting the queue depth for the Qlogic FC HBA
l Setting the execution throttle on the Qlogic FC HBA
l Setting the queue depth and queue target on the Emulex FC HBA
Follow the procedure according to the HBA type. For any additional information, see the HBA
vendor's documentation.
2. Select HKEY_LOCAL_MACHINE and follow the tree structure down to the QLogic driver as
shown in the following figure and double-click DriverParameter.
2. Select one of the adapter ports in the navigation tree on the left.
3. Select Host > Parameters > Advanced HBA Parameters.
4. Set the Execution Throttle to 65535.
5. Click Save.
6. Repeat the steps for each port on each adapter connecting to VPLEX.
Using QConvergeConsole CLI
Procedure
1. Select 2: Adapter Configuration from the main menu.
8. Validate that the Execution Throttle is set to the expected value of 65535.
9. Repeat the steps for each port on each adapter connecting to VPLEX.
Procedure
1. Install OneCommand.
2. Launch the OneCommand UI.
7. In the same list, locate the QueueTarget parameter and set its value to 0.
8. Click Apply.
9. Repeat the steps for each port on the host that has VPLEX Storage exposed.
Procedure
1. In Failover Cluster Manager, select the cluster and from the drop-down menu and select
More Actions > Configure Cluster Quorum Settings.
2. Click Select Quorum Configuration Option and choose Advanced quorum configuration.
4. Click Select Quorum Witness and choose Configure a file share witness.
7. Click Finish.
For the server hosting the file share, follow these requirements and recommendations:
l Must have a minimum of 5 MB of free space.
l Must be dedicated to the single cluster and not used to store user or application data.
l Must have write permissions enabled for the computer object for the cluster name.
The following are additional considerations for a file server that hosts the file share witness:
l A single file server can be configured with file share witnesses for multiple clusters.
l The file server must be on a site that is separate from the cluster workload. This enables
equal opportunity for any cluster site to survive if site-to-site network communication is
lost. If the file server is on the same site, that site becomes the primary site, and it is the
only site that can reach the file share.
l The file server can run on a virtual machine if the virtual machine is not hosted on the
same cluster that uses the file share witness.
l For high availability, the file server can be configured on a separate failover cluster.
2. Click Next.
The Select Quorum Configuration Option window displays:
For the server hosting the file share, follow these requirements and recommendations:
l Must have a minimum of 5 MB of free space
l Must be dedicated to the single cluster and not used to store user or application data
l Must have write permissions enabled for the computer object for the cluster name
The following are additional considerations for a file server that hosts the file share witness:
l A single file server can be configured with file share witnesses for multiple clusters.
l The file server must be on a site that is separate from the cluster workload. This enables
equal opportunity for any cluster site to survive if site-to-site network communication is
lost. If the file server is on the same site, that site becomes the primary site, and it is the
only site that can reach the file share
l The file server can run on a virtual machine if the virtual machine is not hosted on the
same cluster that uses the file share witness.
l For high availability, the file server can be configured on a separate failover cluster.
5. Click Next
The Confirmation screen displays:
7. You can view this report or click Finish to complete the file share witness configuration.
Note: All connections shown in the previous figure are Fibre Channel, except the network connections.
The environment in the previous figure consists of the following:
l Node-1: Windows 2008 or Windows 2008 R2 Server connected to the VPLEX instance over Fibre Channel.
l Node-2: Windows 2008 or Windows 2008 R2 Server connected to the VPLEX instance over Fibre Channel.
l VPLEX instance: One or more VPLEX engines with a connection through the L2 switch to back-end and front-end devices.
Prerequisites
Ensure the following before configuring the VPLEX Metro or Geo cluster:
l VPLEX firmware is installed properly and the minimum configuration is created.
l All volumes to be used during the cluster test should have multiple back-end and front-end
paths.
Note: See the Implementation and Planning Best Practices for Dell EMC VPLEX Technical
Notes, available on Dell EMC Online Support, for best practices for the number of paths
for back-end and front-end paths.
l All hosts/servers/nodes of the same configuration, version, and service pack of the operating
system are installed.
l All nodes are part of the same domain and can communicate with each other before installing
Windows Failover Clustering.
l One free IP address is available for cluster IP in the network.
l PowerPath or MPIO is installed and enabled on all the cluster hosts.
l The hosts are registered to the appropriate View and visible to VPLEX.
l All volumes to be used during cluster test should be shared by all nodes and accessible from all
nodes.
l A network fileshare is required for cluster quorum.
Setting up quorum on a Windows 2008/2008R2 Failover Cluster for VPLEX Metro clusters
About this task
To set up a quorum on VPLEX Metro clusters for Windows Failover Cluster, complete the following
steps.
Procedure
1. Select the quorum settings. In the Failover Cluster Manager, right-click on the cluster name
and select More Actions > Configure Cluster Quorum Settings > Node and File Share
Majority.
The Node and File Share Majority model is recommended for VPLEX Metro and Geo
environments.
3. In the Select Quorum Configuration window, ensure that the Node and File Share
Majority radio button is selected, and then click Next.
4. In the Configure File Share Witness window, ensure that the \\sharedfolder from any Windows host in a domain other than the configured Windows Failover Cluster nodes is in the Shared Folder Path, and then click Next.
6. In the Summary window, go to the Failover Cluster Manager and verify that the quorum
configuration is set to \\sharedfolder.
7. Click Finish.
XtremIO
General guidelines
l The optimal number of paths depends on the operating system and server information. To
avoid multipathing performance degradation, do not use more than 16 paths per device. Dell
EMC recommends using eight paths.
Note: This recommendation is not applicable to Linux hosts connected to XtremIO. On
such hosts, more than 16 paths per device can be used (if required).
l Balance the hosts between the Storage Controllers to provide a distributed load across all
target ports.
l Host I/O latency can be severely affected by SAN congestion. Minimize the use of ISLs by
placing the host and storage ports on the same physical switch. When this is not possible,
ensure that there is sufficient ISL bandwidth and that both the Host and XtremIO interfaces
are separated by no more than two ISL hops. For more information about proper SAN design,
see the Networked Storage Concepts and Protocols Techbook.
l Keep a consistent link speed and duplex across all paths between the host and the XtremIO
cluster.
l To ensure continuous access to XtremIO storage during cluster software upgrade, verify that a
minimum I/O timeout of 30 seconds is set on the HBAs of all hosts that are connected to the
affected XtremIO cluster. Similarly, verify that a minimum timeout of 30 seconds is set for all
applications that are using storage from the XtremIO cluster.
Note: See the Dell EMC KB article 167514 for references to Dell EMC Host Connectivity
Guides. These guides provide the procedures that are required for adjusting the HBA
minimum I/O timeout.
l Enable the TCP Offloading Engine (TOE) on the host iSCSI interfaces, to offload the TCP
packet encapsulation from the CPU of the Host to the NIC or iSCSI HBA, and free up CPU
cycles.
l Dell EMC recommends using a dedicated NIC or iSCSI HBA for XtremIO iSCSI and not to
partition the iSCSI interface (in other words, disable NIC Partitioning - NPAR).
l When using XtremIO iSCSI, Dell EMC recommends using interfaces individually rather than
using NIC Teaming (Link Aggregation), to combine multiple interfaces into a single virtual
interface.
Note: See the user manual of the FC/iSCSI switch for instructions about real implementations.
The following figure shows the logical connection topology for four paths. This topology applies to
both dual and quad HBA/NIC host architecture:
The following figure shows the logical connection topology for eight paths. This topology applies to
both dual and quad HBA/NIC host architecture:
The following figures show the logical connection topology for eight paths. This topology applies to
both dual and quad HBA/NIC host architecture.
Note: For clusters with an odd number of X-Brick blocks, change these examples to
accommodate the cluster configuration and try to balance the host load among the X-Brick
blocks of the cluster. The following figures show an eight paths connection topology with four
X-Brick blocks (or more):
The following figures show the logical connection topology for 16 paths. This topology applies to
both dual and quad HBA/NIC host architecture.
Note: For clusters with an odd number of X-Brick blocks, change these examples to
accommodate the cluster configuration and try to balance the host load among the X-Brick
blocks of the cluster. The following figure shows a 16-path connection topology with four X-
Brick blocks (or more):
iSCSI configuration
Note: This section applies only for iSCSI. If you are using only Fibre Channel with Windows and
XtremIO, skip to Fibre Channel HBA configuration.
Note: Review iSCSI SAN Guidelines before you proceed.
This section describes the issues that you must address when using iSCSI with XtremIO, for
optimal performance (with or without an iSCSI HBA).
Pre-requisites
Follow the vendor recommendations for installation and setup of the appropriate NIC/iSCSI HBA for your system. It is recommended to install the latest driver version (patch), as described on the vendor support site for each specific NIC/iSCSI HBA.
Refer to the E-Lab Interoperability Navigator for supported NIC/iSCSI HBA models and drivers.
Key:
HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance Number>\Parameters
FirstBurstLength (default 64K)
MaxBurstLength (default 256K)
Dell EMC recommends setting the FirstBurstLength and MaxBurstLength parameters to 512 KB.
Note: For more details about configuring iSCSI with Windows, see Microsoft website.
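The change can also be made from PowerShell. The following is a sketch only; the instance number 0000 is an example (use the instance that corresponds to the iSCSI initiator on your host), and the values are expressed in bytes (512 KB = 524288):
$key = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters"
# Set both burst length parameters to 512 KB (524288 bytes)
Set-ItemProperty -Path $key -Name FirstBurstLength -Value 524288
Set-ItemProperty -Path $key -Name MaxBurstLength -Value 524288
A reboot, or a restart of the iSCSI initiator, may be required for the new values to take effect.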
Pre-requisite
To install one or more Dell EMC-approved Host Bus Adapters (HBAs) into a Windows host, follow
one of these documents according to the FC HBA type:
For Qlogic HBAs - Dell EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus
Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment document
For Emulex HBAs - Dell EMC Host Connectivity with Emulex Fibre Channel and iSCSI Host Bus
Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment document
These documents provide guidance on configuring the host for connection to the Dell EMC Storage Arrays over FC, including any needed HBA BIOS settings. Both documents are available
in the Dell EMC OEM section of the Qlogic and Emulex websites. They can also be found on
http://support.EMC.com.
See E-Lab Interoperability Navigator for supported FC HBA models and drivers.
Queue Depth
Note: The FC HBA recommendations in this section are applicable to the following FC HBAs:
l Qlogic - adapters with names that start with ql
l Emulex - adapters with names that start with lpfc
See E-Lab Interoperability Navigator for all supported FC HBA models and drivers.
Note: Changing queue depth settings is designed for advanced users. Increasing the queue
depth may cause hosts to over-stress other arrays connected to the Windows host, resulting
in performance degradation while communicating with them. To avoid this, in mixed
environments with multiple array types connected to the Windows host, compare the XtremIO queue depth recommendations with those of the other platforms before applying them.
For optimal operation with XtremIO storage, consider adjusting the queue depth of the FC HBA.
Queue depth is the number of SCSI commands (including I/O requests) that can be handled by a storage device at a given time. Queue depth can be set on either of the following:
l Initiator level - HBA queue depth
l LUN level - LUN queue depth
The HBA queue depth (also referred to as execution throttle) setting controls the number of outstanding I/O requests per HBA port. The HBA queue depth should be set to the maximum
value. This can be done on the HBA firmware level using the HBA BIOS or CLI utility provided by
the HBA vendor as follows:
l Qlogic - Execution Throttle - Change the default value (32) to 65535.
l Emulex - lpfc_hba_queue_depth - No need to change the default (and maximum) value (8192).
Note: HBA queue depth (execution throttle) does not apply to QLE2600 and QLE8300 Series
Qlogic adapters, and is read only for 10GbE adapters. For more information, see Qlogic
website.
The LUN queue depth setting controls the number of outstanding I/O requests per single path. On Windows, the LUN queue depth can be adjusted using the Windows Registry.
Note: When the HBA queue depth is set to a value lower than the LUN queue depth, it may limit the effective LUN queue depth to a lower value than the one set.
The following table summarizes the default and recommended queue depth settings for Windows.
The following procedures detail setting the LUN queue depth for Qlogic and Emulex HBAs as
follows:
l Qlogic - Set the Qlogic HBA adapter LUN queue depth in Windows StorPort driver to 256
(maximum value).
l Emulex - Set the Emulex HBA adapter LUN queue depth in Windows to 128 (maximum value).
Follow the appropriate procedure according to the HBA type.
Setting the LUN Queue Depth for Qlogic HBA
Procedure
1. On the desktop, click Start, select Run and open the REGEDIT (Registry Editor).
Note: Some driver versions do not create the registry key by default. In such cases, the user needs to create the registry key manually.
2. Select HKEY_LOCAL_MACHINE and follow the tree structure down to the Qlogic driver as
follows:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\ql2300\
Parameters\Device
Note: In some cases the Windows host detects the Qlogic HBA as ql2300i (instead of
ql2300). In such cases, the following registry tree structure should be used instead:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\ql2300i
\Parameters\Device
3. Double-click DriverParameter:REG_SZ:qd=32.
4. Change the value of qd to 256.
5. If the string "qd=" does not exist, append the following text to the end of the string using a
semicolon (";"):
qd=256
6. Click OK.
7. Exit the Registry Editor and reboot the Windows host.
Note: Setting the queue-depth per this procedure is not disruptive.
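The same registry change can be scripted. The following PowerShell sketch assumes the driver is detected as ql2300 (adjust the path to ql2300i if needed) and that the DriverParameter string already exists:
$path = "HKLM:\SYSTEM\CurrentControlSet\services\ql2300\Parameters\Device"
# Read the current DriverParameter string (for example "qd=32")
$current = (Get-ItemProperty -Path $path -Name DriverParameter).DriverParameter
# Replace an existing qd= entry, or append one using a semicolon separator
if ($current -match "qd=\d+") { $new = $current -replace "qd=\d+", "qd=256" }
else { $new = "$current;qd=256" }
Set-ItemProperty -Path $path -Name DriverParameter -Value $new
# Reboot the Windows host afterward, as in step 7 of the procedure above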
Procedure
1. Install OneCommand.
2. Launch the OneCommand UI.
To check the current ODX setting, run the following command in a Windows PowerShell session opened as an administrator:
Get-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode"
Enabling ODX
Before you begin
If ODX is disabled on Windows, perform the following steps to enable it.
Procedure
1. Open a Windows PowerShell session as an administrator.
2. Run the following command:
Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode" -Value 0
Disabling ODX
Dell EMC recommends enabling ODX functionality when using Windows with XtremIO version 4.0 (or later), for optimal performance. However, in some cases (mainly for testing purposes), it is necessary to disable ODX.
Before disabling ODX, on each Windows host that uses ODX, list the file system filter drivers
attached to the volume on which you want to disable ODX. Make sure that this list is empty.
To list the file system drivers attached to a volume for ODX:
1. Open a Windows PowerShell session as an administrator.
2. Run the following command: Fltmc instances -v <volume>
Note: <volume> refers to the drive letter of the volume.
The example below shows the expected Fltmc command output prior to disabling ODX.
To disable ODX, run the following command in a Windows PowerShell session opened as an administrator:
Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode" -Value 1
After reboot, use the mpclaim -h command to display the hardware IDs that are already
managed by MPIO (the list should include XtremIO).
2. Run the following mpclaim command, to set the load-balancing algorithm to Least Queue
Depth for all XtremIO volumes:
mpclaim -l -t "XtremIO XtremApp " 4
Note: There should be eight spaces between ’XtremApp’ and the closing quotation
mark.
Note: This command does not affect non-XtremIO volumes presented to the Windows
host.
Note: Use the mpclaim -s -t command to check the default load-balancing settings
for XtremIO devices. Use the mpclaim -s -d command to list all disks currently
claimed by MPIO and their load-balancing settings.
potential file loss and a bugcheck message that identifies Dell EMC PowerPath driver
EMCPMPX.SYS (https://support.emc.com/kb/491197).
XtremIO supports multipathing using Dell EMC PowerPath on Windows. PowerPath versions 5.7
SP2 and above provide Loadable Array Module (LAM) for XtremIO Array devices. With this
support, XtremIO devices running versions 2.2 and above are managed under the XtremIO class.
PowerPath provides enhanced path management capabilities for up to 32 paths per logical device,
as well as intelligent dynamic I/O load-balancing functionalities specifically designed to work within
the Microsoft Multipathing I/O (MPIO) framework. Having multiple paths enables the host to
access a storage device even if a specific path is unavailable. Multiple paths share the I/O traffic
to a storage device, using intelligent load-balancing policies which enhance I/O performance and
increase application availability. Dell EMC PowerPath is the recommended multipathing choice.
PowerPath features include:
l Multiple paths - provides higher availability and I/O performance
n Includes the support on Server Core and Hyper-V (available in Windows Server 2008 and
later).
l When running PowerPath in Hyper-V VMs (guest operating systems), PowerPath supports:
n iSCSI through software initiator
n Virtual Fibre Channel for Hyper-V (available in Windows Server 2012 and above) that
provides the guest operating system with unmediated access to a SAN through vHBA
l Path management insight capabilities - PowerPath characterizes I/O patterns and aids in diagnosing I/O problems due to flaky paths or unexpected latency values. Metrics are provided
on:
n Read and write - in MB/seconds per LUN
n Latency distribution - the high and low watermarks per path
n Retries - the number of failed I/Os on a specific path
l Autostandby - automatically detects intermittent I/O failures and places paths into
autostandby (also known as flaky paths)
l PowerPath Migration Enabler - is a host-based migration tool that allows migrating data
between storage systems and supports migration in an MSCS environment (for Windows 2008
and later). PowerPath Migration Enabler works in conjunction with the host operating system
(also called Host Copy) and other underlying technologies such as Open Replicator (OR).
l Remote monitoring and management
n PowerPath Management Appliance 2.2 (PPMA 2.2)
n Systems Management Server (SMS)
n Microsoft Operations Manager
n SNMP management daemon
Further PowerPath related information:
l For details on the PowerPath releases supported for your Windows host, see the XtremIO
Simple Support Matrix.
l For details on class support with XtremIO for your host, see the Dell EMC PowerPath release
notes for the PowerPath version you are installing.
l For details on installing and configuring PowerPath with XtremIO class support on your host,
see the Dell EMC PowerPath on Windows Installation and Administration Guide for the PowerPath
version you are installing. This guide provides the required information for placing XtremIO
volumes under PowerPath control.
Note: The PowerPath with XtremIO class support installation procedure is fully storage-aware.
All required PowerPath settings with XtremIO storage are automatically done when PowerPath
is installed on your host. This includes settings such as the PowerPath multipathing policy that
does not require manual setting.
Disk formatting
When creating volumes in XtremIO for a Windows host, the following considerations should be
made:
l Disk Logical Block Size–A 512B logical block size must be used for a new XtremIO volume.
The following figure demonstrates formatting an XtremIO Volume using the WebUI.
For details on formatting a newly created Volume (using either the WebUI or the GUI interfaces),
see the XtremIO Storage Array User Guide that matches the version running on your XtremIO
cluster.
When adding Initiator Groups and Initiators to allow Windows hosts to access XtremIO volumes,
specify Windows as the operating system for the newly-created Initiators.
The following figure demonstrates setting the Operating System field for an Initiator using the
WebUI.
Note: Setting the Initiator’s Operating System is required for optimal interoperability and
stability of the host with XtremIO storage. You can adjust the setting while the host is online
and connected to the XtremIO cluster with no I/O impact.
Note: See the XtremIO Storage Array User Guide that matches the version running on your
XtremIO cluster.
Following a cluster upgrade from XtremIO version 3.0.x to version 4.0 (or later), make sure to
modify the operating system for each initiator that is connected to a Windows host.
Space reclamation
This section provides a comprehensive list of capacity management steps for achieving optimal
capacity utilization on the XtremIO array, when connected to a Windows host.
Data space reclamation helps to achieve optimal XtremIO capacity utilization. Space reclamation is
a Windows operating system function that enables the host to reclaim used space by sending zeros to a specific address of the volume after the file system reports that the address space was deleted. While some Windows operating systems can perform this action automatically, others
require a user-initiated operation.
The following sections detail steps for performing space reclamation with:
l NTFS file systems
l Hyper-V
NTFS File Systems
Automatic space reclamation-On Windows Server 2012 and above, NTFS supports automatic
space reclamation when a file is deleted from the file system.
Delete notification (also referred to as TRIM or UNMAP) is a feature that notifies the underlying
storage device of clusters that have been freed, following a file delete operation. The
DisableDeleteNotify parameter is used to disable (using the value 1) or enable (using the value 0)
delete notifications for all volumes. This parameter was introduced in Windows Server 2008 R2 and
Windows 7.
Note: Disabling delete notifications disables all TRIM commands from the Windows Host.
l Hard disk drives and SANs that do not support TRIM will not receive TRIM notifications.
l TRIM is enabled by default and can be disabled by the administrator.
l Enabling or disabling delete notifications does not require a restart.
l TRIM is effective when the next UNMAP command is issued.
l Existing inflight I/Os are not impacted by the registry change.
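On the Windows host, the delete notification parameter can be queried and changed with the standard fsutil utility, run from an elevated prompt; 0 enables delete notifications and 1 disables them:
# Query the current delete notification (TRIM/UNMAP) setting
fsutil behavior query DisableDeleteNotify
# Enable delete notifications
fsutil behavior set DisableDeleteNotify 0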
Note: In Windows Server 2012, when a large file is deleted, the file system performs space
reclamation according to the storage array setting. Large file deletions can potentially affect
the performance of regular I/O. To avoid this, delete notification should be disabled. For further details, see the Microsoft website.
Note: All files must be permanently deleted (i.e. removed from the Recycle Bin) for the
automatic deletion to be performed.
Manual space reclamation-You can perform manual space reclamation, using the following
options:
l Windows Optimizer (Optimize-Volume cmdlet)
l SDelete utility
l PowerShell script
Windows Optimizer (Optimize-Volume cmdlet)-Starting from Windows 2012, Windows
introduced the option to reclaim space on a TRIM-enabled array.
The delete notifications option must be enabled to run manual space reclamation using Windows Optimizer or the Optimize-Volume command.
Note: It is suggested to enable the delete notifications parameter when using the Windows space reclamation tools. When space reclamation is complete, disable this parameter to avoid
automatic space reclamation.
Note: In versions prior to Windows 2012R2, the entire file system is locked during every file
deletion. For further details, contact Microsoft.
To run TRIM on a volume, use the Optimize-Volume command.
The following example shows running TRIM on volume E.
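For example, the following command (a sketch; the drive letter must match the volume being reclaimed) requests a retrim of the free space on volume E and reports progress:
Optimize-Volume -DriveLetter E -ReTrim -Verbose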
When running Optimizer, using the Windows GUI, the Optimize button may be grayed-out in some
cases, due to an issue currently under investigation with Microsoft. In this case, use the command
line to run Optimizer.
SDelete utility-Supported starting from Windows 2008. This utility was originally designed to
provide an option for securely deleting data on magnetic disks by overwriting on-disk data, using
various techniques to ensure that disk data is unrecoverable.
For more information and for downloading the SDelete utility, go to the Microsoft website.
SDelete allocates the largest file possible, using non-cached file I/O, to prevent the contents of
the NT file system cache from being replaced with useless data, associated with SDelete’s space-
hogging file. Since non-cached file I/O must be sector (512-byte) aligned, there may be non-
allocated space left even when SDelete cannot increase the file size. To solve this, SDelete
allocates the largest cache file possible.
PS C:\Users\Administrator> sdelete.exe -z s:
SDelete - Secure Delete v1.61
Copyright (C) 1999-2012 Mark Russinovich
Sysinternals - www.sysinternals.com
SDelete is set for 1 pass.
Free space cleaned on S:\
1 drives zapped
Note: To ensure optimal manual space reclamation operation when using SDelete, use the
SDelete parameter corresponding to the Zero free space option. In some SDelete versions
(e.g. version 1.51), the Zero free space option corresponds to -c and not to -z.
Note: During space reclamation, the drive capacity is reduced to minimum because of the
created balloon file.
If the server is running custom-made PowerShell scripts for the first time, see Example 2 in the Windows PowerShell Reclaim Script.
HYPER-V space reclamation
UNMAP requests from the Hyper-V guest operating system - During the virtual machine (VM)
creation, a Hyper-V host inquires whether the storage device, holding the virtual hard disk (VHD),
supports UNMAP or TRIM commands. When a large file is deleted from the file system of a VM
guest operating system, the guest operating system sends a file delete request to the virtual
machine’s virtual hard disk (VHD) or VHD file. The VM’s VHD or VHD file tunnels the SCSI UNMAP
request to the class driver stack of the Windows Hyper-V host.
Microsoft hypervisor passes T10 commands. Therefore, if the guest operating system file system
supports online space reclamation, no additional task is required.
To run manual space reclamation in the guest OS level, refer to the relevant OS chapter.
Note: If delete notification is disabled at the hypervisor level, guest VMs cannot use space reclamation features until delete notification is re-enabled.
Windows PowerShell reclaim script:
param(
  [Parameter(Mandatory=$true,ValueFromPipelineByPropertyName=$true)]
  [ValidateNotNullOrEmpty()]
  [Alias("name")]
  $Root,
  [Parameter(Mandatory=$false)]
  [ValidateRange(0,1)]
  $PercentFree = .05
)
process{
  #Convert the $Root value to a valid WMI filter string
  $FixedRoot = ($Root.Trim("\") -replace "\\","\\") + "\\"
  $FileName = "ThinSAN.tmp"
  $FilePath = Join-Path $Root $FileName
  #Query WMI for the volume mounted at $Root
  $Volume = Get-WmiObject win32_volume -Filter "name='$FixedRoot'"
  if($Volume) {
    #Write zeros in 64 KB chunks, leaving $PercentFree of the capacity unused
    $ArraySize = 64kb
    $FileSize = $Volume.FreeSpace - ($Volume.Capacity * $PercentFree)
    $ZeroArray = New-Object byte[]($ArraySize)
    $Stream = [io.File]::OpenWrite($FilePath)
    try {
      $CurFileSize = 0
      while($CurFileSize -lt $FileSize) {
        $Stream.Write($ZeroArray, 0, $ZeroArray.Length)
        $CurFileSize += $ZeroArray.Length
      }
    } finally {
      #always close our file stream, even if an exception occurred
      if($Stream) {
        $Stream.Close()
      }
      #always delete the file if we created it, even if an exception occurred
      if( (Test-Path $FilePath) ) {
        del $FilePath
      }
    }
  } else {
    Write-Error "Unable to locate a volume mounted at $Root"
  }
}
If the error message displayed in Example 1 appears while running the script for the first time, update the execution policy as displayed in Example 2.
l Example 1
l Example 2
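The exact messages are system-dependent, but the error typically indicates that script execution is disabled. One common way to allow locally created scripts to run, from an elevated PowerShell session, is shown below; choose the policy that matches your security requirements:
# Display the current execution policy
Get-ExecutionPolicy
# Allow locally created scripts to run; downloaded scripts must still be signed
Set-ExecutionPolicy RemoteSigned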
The Requestor initiates the backup and restore processes, and the Provider controls the processes
and instructs the Writer to prepare a dataset for backup. When these steps are completed, the
Requestor instructs the Provider to create a shadow copy.
Basic VSS Scenario
About this task
The following steps describe a basic VSS scenario, in which a backup application (Requestor)
requests to back up a volume while the Exchange server (Writer) is up and running:
Procedure
1. The Requestor interacts with the VSS service to start the backup.
2. VSS service sends a Freeze request to the Writer. The Writer freezes a stable state of the
volume (after closing relevant files, locking additional requests etc.).
Note: Each Writer provides its own implementation of the freeze procedure.
3. The Requestor queries the Writer Metadata Document xml document to acquire information
about files that require backup.
4. The Provider creates a snapshot of the file-system, maps a new volume with the snapshot
content (mapping is performed on-demand when the snapshot is exposed as a volume), and
returns a response to the VSS Service.
5. The VSS service requests the Writer to unfreeze the volume, by sending a Thaw event.
6. The Requestor uses the snapshot to back up the content while having a read-only/complete
state of that volume.
2. Download the XtremIO VSS Provider package to the Windows host. For details on which
XtremIO VSS Provider package to download, refer to the Release Notes of the version you
are installing.
3. Run the XtremIOVSSProvider.msi from an elevated command prompt or PowerShell prompt
(i.e. the shell should be run as an administrator).
4. Accept the software license and maintenance agreement and click Next.
Note: The XtremIO Hardware Provider should be installed on the local disk.
6. Click Install.
7. When installation is complete, verify that the Launch XtremIO VSS Hardware Provider
option is selected, and click Finish.
8. In the opened XtremIO VSS Provider Control Panel window, provide the required details.
Note: Verify that the typed IP address and the credentials are correct before clicking
OK.
10. In PowerShell, run the vssadmin list providers command and verify that the XtremIO
VSS provider appears in the providers list and is properly installed.
Note: If XtremIO VSS Hardware Provider fails to successfully register as a COM+
application on the machine it is installed on, it may not function properly.
l GetTargetLuns - Used to provide VSS with the LUN(s) information for the snapshot volumes.
l LocateLuns - Used to map LUNs requested by the VSS service.
l OnLunEmpty - A VSS request used to delete and unmap a LUN.
l BeginPrepareSnapshot - A VSS request used to prepare for snapshotting.
l FillInLunInfo - A VSS request used to confirm and fix LUN(s) structure.
l CommitSnapshots - Used to commit the snapshots, using a naming convention of <Original volume name>-Year-Month-Day-Hour-Minute-Sec.
l AbortSnapshots - Used to abort snapshot creation.
l RegisterProvider - Used to register the hardware provider.
l UnregisterProvider - Used to unregister the hardware provider.
l OnLunStateChange - A VSS request used to unmap a LUN.
Extra VSS Provider Features
l Multi-clusters support
l Reconfiguration (changing configuration via the control panel)
l Fibre Channel and iSCSI interfaces support
l On Fibre Channel, map to all ports with different Initiator Groups
Tools
l Vshadow
n A command-line tool used for creating and managing volume shadow copies
n Acts as a requestor
n Provided with Microsoft SDK 7+
Note: Verify that you are running the x64 version, by checking the path C:\Program
Files (x86)\Microsoft SDKs\Windows\v7.1A\Bin\x64.
n Vshadow Syntax examples:
Create a persistent shadow copy set from the d: volume (d: must be an XtremIO volume):
Vshadow.exe -p d:
Create a persistent shadow copy set from multiple volumes x: y: z: (must be XtremIO
volumes).
Vshadow.exe -p x: y: z:
Delete a specific shadow copy.
Vshadow.exe -ds=ShadowCopyId
Delete all shadow copies.
Vshadow.exe -da
Expose a shadow copy to the X: drive (read-only).
Vshadow.exe -er=ShadowCopyId,x:
For more syntax examples, see VShadow Tool and Sample.
Boot-from-SAN
Although Windows servers typically boot the operating system from a local, internal disk, many
customers want to use the features of PowerMax, VMAX, Unity, Unified VNX, and XtremIO
storage to store and protect their boot disks and data. Boot-from-SAN enables PowerMax,
VMAX3, Unity, Unified VNX, and XtremIO to be used as the boot disk for your server instead of a
directly attached (or internal) hard disk. You can configure a server to use a LUN that is presented
from the array as its boot disk using a properly configured Fibre Channel (FC) host bus adapter
(HBA), Fibre Channel over Ethernet (FCoE), converged network adapter (CNA), or blade server
mezzanine adapter that is connected and zoned to the same switch or fabric as the storage array.
Benefits of boot-from-SAN
Boot-from-SAN can simplify management in the data center. Separating the boot image from each
server enables administrators to take advantage of their investments in Dell EMC storage arrays to
achieve high availability, better data integrity, and more efficient storage management. Other
benefits can include:
l Improved disaster tolerance
l Reduced total cost through diskless servers
l High-availability storage
l Rapid server repurposing
l Consolidation of image management
single hardware fault has occurred. For a host to be properly configured for high availability with
boot-from-SAN, the HBA BIOS must have connections to both SPs on the Unity and Unified VNX
systems.
At the start of the Windows boot procedure, no failover software is running. The HBA BIOS, with a
primary path and one or more secondary paths that are properly configured (with access to both
SPs), will provide high availability while booting from SAN with a single hardware fault.
Note: Dell EMC strongly recommends using failover mode 4 (ALUA active/active), when it is
supported. ALUA will allow I/O access to the boot LUN from either SP, regardless of which SP
currently owns the boot LUN.
Failover mode 1 is an active/passive failover mode. I/O can only be completed successfully if it is
directed to the SP that currently owns the boot LUN. If the HBA BIOS attempts to boot from a
passive path, the BIOS will have to time out before attempting a secondary path to the active
(owning) SP, which can cause delays at boot time. Using ALUA failover mode whenever possible
will avoid these delays.
To configure a host to boot from SAN, the server must have a boot LUN presented to it from the
array, which requires registration of the WWN of the HBAs or converged network adapters
(CNAs), or the iSCSI Qualified Name (IQN) of an iSCSI host.
In configurations where a server is already running Windows and is being attached to a Unity series or Unified VNX system, the Dell EMC Unisphere or Navisphere Agent is installed on the server. This agent automatically registers the server's HBA WWNs on the array. In boot-from-SAN configurations, where the operating system is going to be installed on the Unity series or Unified VNX series, no agent is available to perform the registration. You must manually register the HBA WWNs to present a LUN to the server for boot.
ReFS - maximum volume size: 35 PB; maximum file size: 35 PB
Hyper-V
Hyper-V in Windows Server enables you to create a virtualized server computing environment. You
can improve the efficiency of your hardware resources by using Hyper-V to create and manage
virtual machines and their resources. Each virtual machine is a self-contained virtualized computer
system that operates in an isolated environment. This enables multiple operating systems to run
simultaneously on one physical computer. Hyper-V is an available role in Windows Server 2008 and
later.
For information on Hyper-V, including features, benefits, and installation procedures, see
Microsoft website and Microsoft TechNet.