Gen6 Installation Guide
• Drive types
• Unpack and verify components
• Installation types
• Install the chassis rails
• Install the chassis
• Install compute modules and drive sleds
• Back panel
• Supported switches
• Attaching network and power cables
• Configure the node
• Front panel LCD menu
• Update the install database
• Where to go for support
Drive types
This installation guide applies to nodes that contain any of the following drive types: self-encrypting drives (SEDs), hard disk
drives (HDDs), and solid state drives (SSDs).
CAUTION: Only install the drives that were shipped with the node. Do not mix drives of different capacities in your
node.
If you remove drive sleds from the chassis during installation, be sure to label the sleds clearly. Return each drive sled
to the same sled bay you removed it from. If drive sleds are mixed between nodes, even before configuration, the
system will be inoperable.
If you are working with a node containing SEDs, the node might take up to two hours longer to join the cluster than a node with
standard drives. Do not power off the node during the join process.
To avoid personal injury or damage to the hardware, always use multiple people to lift and move heavy equipment.
Installation types
You may be able to skip certain sections of this procedure based on the type of installation you are performing.
New cluster
If you are installing a new cluster, follow every step in this procedure. Repeat the procedure for each chassis you install.
If you are installing a new cluster with more than 22 nodes, or if you are growing an existing cluster to include more than 22
nodes, follow the instructions in Install a new cluster using Leaf-Spine configuration in the Leaf-Spine Cluster Installation Guide.
See the PowerScale Site Preparation and Planning Guide for more information about the Leaf-Spine network topology.
New chassis
If you are adding a new Generation 6 chassis to an existing cluster, follow every step in this procedure.
Install the chassis rails
The rails adjust in length from 24 to 36 inches to accommodate various cabinet depths. The rails are not left-specific or right-
specific and can be installed on either side of the rack.
NOTE: Check the depth of the rack to ensure that it can accommodate the chassis being installed. The Generation 6 Site
Preparation and Planning Guide provides details.
The two rails are packaged separately inside the chassis shipping container.
1. Separate a rail into front and back pieces.
Pull up on the locking tab, and slide the two sections of the rail apart.
2. Remove the mounting screws from the back section of the rail.
The back section is the thinner of the two rail sections. Three mounting screws are attached to the back bracket, along
with two smaller alignment screws. Do not remove the alignment screws.
3. Attach the back section of the rail to the rack with the three mounting screws.
Ensure that the locking tab is on the outside of the rail.
4. Remove the mounting screws from the front section of the rail.
The front section is the wider of the two rail sections. Three mounting screws are attached to the front bracket, along
with two smaller alignment screws. Do not remove the alignment screws.
5. Slide the front section of the rail onto the back section that is secured to the rack.
6. Adjust the rail until you can insert the alignment screws on the front bracket into the rack.
7. Attach the front section of the rail to the rack with only two of the mounting screws.
Attach the mounting screws in the holes between the top and bottom alignment screws. You will install mounting screws in
the top and bottom holes after the chassis is installed, to secure the chassis to the rack.
Install the chassis
CAUTION: If you remove drive sleds from the chassis during installation, be sure to label the sleds clearly. You must
return each drive sled to the same sled bay you removed it from. If drive sleds are mixed between nodes, even
before configuration, the system will be inoperable.
1. Align the chassis with the rails that are attached to the rack.
2. Slide the first few inches of the back of the chassis onto the supporting ledge of the rails.
3. Release the lift casters and carefully slide the chassis into the cabinet as far as the lift will allow.
4. Secure the lift casters on the floor.
5. Carefully push the chassis off the lift arms and into the rack.
CAUTION: Make sure to leave the lift under the chassis until the chassis is safely balanced and secured within the
cabinet.
6. Install two mounting screws at the top and bottom of each rail to secure the chassis to the rack.
7. If you removed the drives and nodes prior to installing the chassis, re-install them now.
Install compute modules and drive sleds
4. Push the release lever in against the node back panel.
You can feel the lever pull the node into place in the bay. If you do not feel the lever pull the node into the bay, pull the lever
back into the open position, make sure that the node is pushed all the way into the node bay, then push the lever in against
the node again.
5. Tighten the thumbscrew on the release lever to secure the lever in place.
6. At the front of the chassis, locate the empty drive sled bays where you will install the drive sleds that correspond to the
compute module you installed.
7. Make sure the drive sled handle is open before inserting the drive sled.
8. With two hands, slide the drive sled into the sled bay.
9. Push the drive sled handle back into the face of the sled to secure the drive sled in the bay.
10. Repeat the previous steps to install all drive sleds for the corresponding compute module.
11. Repeat all the steps in this section to install other nodes.
Back panel
The back panel provides connections for power, network access, and serial communication, as well as access to the power
supplies and cache SSDs.
[Figure: Back panel (CL6091), with numbered callouts identifying the connectors for node slots 0 and 1, including PORT 1 and PORT 2 on each node]
NOTE: The 1 GbE management interface on Generation 6 hardware is designed to handle SSH traffic only.
CAUTION: Only trained Dell EMC Isilon personnel should connect to the node with the USB or HDMI debugging ports.
For direct access to the node, connect to the console connector.
CAUTION: Do not connect mobile devices to the USB connector for charging.
Multifunction button
You can perform two different functions with the multifunction button. With a short press of the button, you can begin a stack
dump. With a long press of the button, you can force the node to power off.
NOTE: It is recommended to power off nodes from the OneFS command line. Only power off a node with the multifunction
button if the node does not respond to the OneFS command.
Supported switches
Switches ship with the proper rails or tray to install the switch in the rack.
The following internal network switches ship with rails to install the switch. The switch rails are adjustable to fit NEMA front rail
to rear rail spacing ranging from 22 in to 34 in.
Table 1. Z9100-ON Ethernet switch
Switch | Maximum number of ports | Network
Z9100-ON | 128 | 32 x 100 GbE, 32 x 40 GbE, or 128 x 10 GbE (with breakout cables)
The Z9100-ON is a fixed 1U Ethernet switch that accommodates high port density (lower and upper RUs) and multiple
interface types (32 ports of 100 GbE or 40 GbE in QSFP28, or 128 ports of 25 GbE or 10 GbE with breakout) for maximum
flexibility.
NOTE: In OneFS 8.2.0 and later, the Z9100-ON switch is required for Leaf-Spine networking of large clusters.
CAUTION: If the switch you are installing features power connectors on the front of the switch, it is important to leave
space between appliances to run power cables to the back of your rack. There is no 0U cable management option
available at this time.
1. Remove rails and hardware from packaging.
2. Verify that all components are included.
3. Locate the inner and outer rails and secure the inner rail to the outer rail.
4. Attach the rails assembly to the rack using the eight screws as illustrated in the following figure.
NOTE: The rail assembly is adjustable for NEMA front-to-rear rail spacing from 22 in to 34 in.
Figure 1. Install the inner rail to the outer rail
5. Attach the switch rails to the switch by placing the larger side of the mounting holes on the inner rail over the shoulder studs
on the switch. Press the rail flat against the switch.
NOTE: The rail tabs for the front NEMA rail are located on the power-supply side of the switch.
6. Slide the inner rail toward the rear of the switch so that the shoulder studs slide into the smaller side of each of the
mounting holes on the inner rail. Ensure that the inner rail is firmly in place.
7. Secure the switch to the rack by fastening the bezel clip and switch to the rack with the two screws.
Dell Switch configuration
Install the configuration file depending on the switch you are using, and the role for which it is being configured.
The following steps apply only to switches that are running DNOS 10.5.0.6.
1. For all flat, top-of-rack (ToR) setups of switches S4112, S4148, Z9100, and Z9264, run the command that configures the
leaf role.
For Leaf-Spine network configuration, see the PowerScale Leaf-Spine Cluster Installation Guide or the Generation 6
Leaf-Spine Installation Guide.
2. The following prompt appears: Reboot to change the personality? [yes/no]
Type yes.
The switch reboots and loads the configuration.
Attaching network and power cables
[Figure: Node back-panel cabling, with callouts for PORT 1, PORT 2, and power connections to PDU 1 and PDU 2]
Work with the site manager to determine external network connections, and bundle the additional network cables together with
the internal network cables from the same node pair.
It is important to keep future maintenance in mind as you dress the network and power cables. Cables must be dressed loosely
enough to allow you to:
● remove any of the four compute nodes from the back of the Generation 6 chassis.
● remove power supplies from the back of compute nodes.
In order to avoid dense bundles of cables, you can dress the cables from the node pairs to either side of the rack. For example,
dress the cables from nodes 1 and 2 toward the lower right corner of the chassis, and dress the cables from nodes 3 and 4
toward the lower left corner of the chassis.
Wrap network cables and power cables into two separate bundles to avoid EMI (electromagnetic interference) issues, but make
sure that both bundles easily shift together away from components that need to be removed during maintenance, such as
compute nodes and power supplies.
Cable management
Organize cables to protect the cable connections, maintain proper airflow around the cluster, and ensure fault-free
maintenance of the Isilon nodes.
Protect cables
Damage to the InfiniBand or Ethernet cables (copper or optical fiber) can affect Isilon cluster performance. Consider the
following to protect cables and cluster integrity:
● Never bend cables beyond the recommended bend radius. The recommended bend radius for any cable is at least 10–12
times the diameter of the cable. For example, if a cable is 1.6 inches in diameter, round up to 2 inches and multiply by 10 for
an acceptable bend radius. Cables differ, so follow the recommendations of the cable manufacturer. (This calculation is
sketched after this list.)
● As illustrated in the following figure, the most important design attribute for bend radius consideration is the minimum mated
cable clearance (Mmcc). Mmcc is the distance from the bulkhead of the chassis through the mated connectors/strain relief
including the depth of the associated 90 degree bend. Multimode fiber has many modes of light (fiber optic) traveling
through the core. As each of these modes moves closer to the edge of the core, light and the signal are more likely to be
reduced, especially if the cable is bent. In a traditional multimode cable, as the bend radius is decreased, the amount of light
that leaks out of the core increases, and the signal decreases.
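The 10–12x rule above reduces to simple arithmetic. The following Python sketch (an illustration only, not part of any Isilon tooling; the function name is ours) computes a conservative minimum bend radius from a cable diameter. Always defer to the manufacturer's published figure when one exists.

```python
import math

def min_bend_radius(diameter_in: float, multiplier: int = 10) -> int:
    """Conservative minimum bend radius in inches: round the cable
    diameter up to the next whole inch, then apply the 10-12x rule."""
    return math.ceil(diameter_in) * multiplier

# Example from the list above: a 1.6 in cable rounds up to 2 in,
# giving a 20 in minimum bend radius with the 10x multiplier.
print(min_bend_radius(1.6))      # 20
print(min_bend_radius(1.6, 12))  # 24
```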
Ensure airflow
Bundled cables can obstruct the movement of conditioned air around the cluster.
● Secure cables away from fans.
● To keep conditioned air from escaping through cable holes, employ flooring seals or grommets.
NOTE: Both the Z9100-ON and the Z9264-ON switches can support Generation 6 nodes (40 GbE) and F200 and F600
PowerScale nodes (100 GbE, 25 GbE) on the same switch.
The Celestica switch breakout mode port-mapping table lists the Celestica switch QSFP ports that support breakout mode. The
table includes the limitations and port remapping for when breakout is used. When breakout is enabled on a port, the port is
remapped to the expanded interfaces.
From the Celestica switch CLI, run the following command for the expanded interface details:
<hostname># show interfaces hardware profile
Cables
Table 7. 40 GbE and InfiniBand QDR cables
Dell EMC Part Number | Cable type and length | Orderable Model
038-004-065-01 | 1 m, copper, InfiniBand QDR | 851-0208
038-004-067-01 | 3 m, copper, InfiniBand QDR | 851-0209
038-004-069-01 | 5 m, copper, InfiniBand QDR | 851-0210
038-002-064-02 | 1 m, copper, 40 GbE Ethernet | 851-0253
038-002-066-02 | 3 m, copper, 40 GbE Ethernet | 851-0254
038-002-139-02 | 5 m, copper, 40 GbE Ethernet | 851-0255
038-004-214 | 1 m, optical, QDR/40 GbE Ethernet | 851-0274
038-004-216 | 3 m, optical, QDR/40 GbE Ethernet | 851-0275
038-004-217 | 5 m, optical, QDR/40 GbE Ethernet | 851-0276
038-004-218 | 10 m, optical, QDR/40 GbE Ethernet | 851-0224
038-004-219 | 30 m, optical, QDR/40 GbE Ethernet | 851-0225
038-004-220 | 50 m, optical, QDR/40 GbE Ethernet | 851-0226
Use the 40 GbE optical transceiver, EMC PN: 019-078-046, with the MTP/MPO optical cables listed in this table.
NOTE: Any cables/transceivers not listed in this table are not supported. Replace the unsupported cable before
proceeding with the conversion.
10 GbE
If the conversion is from QDR InfiniBand to 10 GbE Ethernet, a Dell QSFP-to-SFP adapter is required (EMC PN: 019-078-055,
orderable model PN: 851-0331).
Table 8. 10 GbE cables
Dell EMC Part Number | Cable type and length | Orderable Model
038-003-728-01 | 1 m, copper, 10 GbE Ethernet | 851-0262
038-003-729-01 | 3 m, copper, 10 GbE Ethernet | 851-0263
038-003-730-01 | 5 m, copper, 10 GbE Ethernet | 851-0264
038-004-153 | 10 m, optical, 10 GbE Ethernet | 851-0266
038-004-154 | 30 m, optical, 10 GbE Ethernet | 851-0267
038-004-155 | 50 m, optical, 10 GbE Ethernet | 851-0268
038-004-156 | 100 m, optical, 10 GbE Ethernet | 851-0269
038-004-216 | 150 m, optical, 10 GbE Ethernet | 851-0270
The 10 GbE optical transceiver, EMC PN: 019-078-041, is used with the MTP/MPO optical cables listed in this table and is
included (qty 2) in the orderable 851 PNs for the optical 10 GbE cables. To order the optic only, use orderable PN: 851-0296.
Breakout cables might be required with archive nodes (A200, A2000, H400, H500) in the cluster, and with Celestica D4040,
S4112-ON, and Z9100-ON switches. Breakout cables connect one 40 GbE connection to four 10 GbE connections; the port
math is sketched after the following note.
NOTE: There is no breakout cable support for Arista switches. However, you can add a 10 GbE or 40 GbE line card
depending on the Arista switch model.
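As a toy illustration of the breakout math (the interface names below are invented for illustration, not actual Dell or Celestica identifiers), this Python sketch fans one 40 GbE QSFP port out to four 10 GbE interfaces and confirms the 32 x 4 = 128 port count quoted for the Z9100-ON in Table 1.

```python
def breakout_interfaces(qsfp_port: int, lanes: int = 4) -> list:
    """One 40 GbE QSFP port fans out to four 10 GbE interfaces.
    The naming scheme here is hypothetical, for illustration only."""
    return ["Eth1/%d/%d" % (qsfp_port, lane) for lane in range(1, lanes + 1)]

print(breakout_interfaces(1))
# ['Eth1/1/1', 'Eth1/1/2', 'Eth1/1/3', 'Eth1/1/4']

# With all 32 QSFP ports of a Z9100-ON broken out: 32 x 4 = 128.
total = sum(len(breakout_interfaces(p)) for p in range(1, 33))
print(total)  # 128
```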
2. For the secondary internal network, connect the int-b port, the top port, to a separate network switch for the int-b network.
[Figures CL6154 and CL6155: int-a and int-b port cabling on the node back panels, showing PORT 1 and PORT 2 for node slots 0 and 1]
Install transformers
This step applies only if you are installing F800 or H600 nodes into a low-line power environment (100-120 V). If you are not
installing F800 or H600 nodes, or if you are not installing into a low-line power environment, skip this section.
CAUTION:
If you are installing F800 or H600 nodes into a low-line power environment (100-120 V), you must install transformers
that regulate power to the requirements for these nodes (200-240 V). To provide redundant power sources, install
two transformers, then connect the node power supplies from peer nodes, one to each transformer. The
transformers connect to the power source.
See the Isilon Generation 6 Transformer Installation Guide for complete instructions on installing a transformer
and connecting your power supplies.
3. Rotate the metal bale down over the power cord to hold the cord in place.
Configure the node
Before using the node, you must either create a new cluster or add the node to an existing cluster.
Federal installations
You can configure nodes to comply with United States federal regulations.
If you are installing an EMC Isilon cluster for a United States federal agency, you can configure the cluster's external network
with IPv6 addresses. To comply with federal requirements, if the OneFS cluster is configured for IPv6, you must enable
link-local addresses.
As part of the installation procedure, you can configure the external cluster for IPv6 addresses in the Isilon configuration wizard
after you power up a node.
After you install the cluster, you can enable link-local addresses by following the instructions in the KB article How to enable
link-local addresses for IPv6.
After you choose to run a node in SmartLock compliance mode, you cannot leave compliance mode without reformatting
the node.
SmartLock compliance mode is incompatible with Isilon for vCenter, VMware vSphere API for Storage Awareness
(VASA), and the VMware vSphere API for Array Integration (VAAI) NAS Plug-In for Isilon.
2. Connect the other end of the serial cable to the serial port on the back panel of the node.
3. Start a serial communication utility such as Minicom (UNIX) or PuTTY (Windows).
4. Configure the connection utility to use the following port settings:
Setting Value
Data bits 8
Parity None
Stop bits 1
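If you prefer to script the connection rather than use Minicom or PuTTY, the following pyserial sketch opens a session with the settings in the table. The device path is an example and the 115200 baud rate is an assumption (the table does not list a transfer rate); adjust both to match your environment.

```python
import serial  # pyserial: pip install pyserial

# Port settings from the table above. The device path and the
# 115200 baud rate are assumptions -- adjust for your setup.
console = serial.Serial(
    port="/dev/ttyUSB0",           # example device path
    baudrate=115200,               # assumed transfer rate
    bytesize=serial.EIGHTBITS,     # Data bits: 8
    parity=serial.PARITY_NONE,     # Parity: None
    stopbits=serial.STOPBITS_ONE,  # Stop bits: 1
    timeout=1,
)
console.write(b"\r\n")             # nudge the console to print a prompt
print(console.read(1024).decode(errors="replace"))
console.close()
```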
NOTE: You can type back at most prompts to return to the previous step in the wizard.
1. Whether you are creating a new cluster, joining a node to an existing cluster, or preparing a node to run in SmartLock
compliance mode, choose one of the following options:
● To create a cluster, type 1.
● To join the node to an existing cluster, type 2.
● To exit the wizard and configure the node manually, type 3.
● To restart the node in SmartLock compliance mode, type 4.
CAUTION: If you choose to restart the node in SmartLock compliance mode, the node restarts and returns to
this step. Selection 4 changes to enable you to disable SmartLock compliance mode. Selection 4 is the last
opportunity to back out of compliance mode without reformatting the node.
2. Follow the prompts to configure the node.
For new clusters, the following table lists the information necessary to configure the cluster. To ensure that the installation
process is not interrupted, it is recommended that you collect this information before installation.
SmartLock compliance license: A valid SmartLock license, for clusters in compliance mode only.
Root password: The password for the root user. Clusters in compliance mode do not allow a root user and instead request a
compliance administrator (compadmin) password.
Admin password: The password for the administrator user.
Cluster name: The name used to identify the cluster. Cluster names must begin with a letter and can contain only numbers,
letters, and hyphens.
NOTE: If the cluster name is longer than 11 characters, the following warning displays: WARNING: Limit cluster name
to 11 characters or less when the NetBIOS Name Service is enabled to avoid name truncation.
Isilon uses up to 4 characters for individual node names.
Character encoding: The default character encoding is UTF-8.
int-a network settings (Netmask, IP range): The int-a network settings are for communication between nodes. The int-a
network must be configured with IPv4 and must be on a separate subnet from the int-b/failover network.
int-b/failover network settings (Netmask, IP range, Failover IP range): The int-b/failover network settings are optional. The
int-b network is for communication between nodes and provides redundancy with the int-a network. The int-b network must
be configured with IPv4, and the int-a and int-b networks must be on separate subnets. The failover IP range is a virtual IP
range that resolves to either of the active ports during failover.
External network settings (Netmask, MTU, IP range): The external network settings are for client access to the cluster. The
10 Gb ports can be configured from the wizard. The default external network can be configured with IPv4 or IPv6 addresses.
Configure the external network with IPv6 addresses by entering an integer less than 128 for the netmask value; the standard
external netmask value for IPv6 addresses is 64. If you enter a netmask value in dot-decimal notation, use IPv4 addresses for
the IP range. (A sketch of this netmask logic appears after this table.)
Default gateway: The IP address of the optional gateway server through which the cluster communicates with clients outside
the subnet. Enter an IPv4 or IPv6 address, depending on how you configured the external network.
SmartConnect settings (SmartConnect zone name, SmartConnect service IP): SmartConnect balances client connections
across nodes in a cluster. Information about configuring SmartConnect is available in the OneFS Administration Guide.
Date and time settings (Time zone, Day and time): The date and time settings for the cluster.
Cluster join mode: The method that the cluster uses to add new nodes.
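While you gather these values, the following Python sketch mirrors three of the rules documented above so you can sanity-check entries before sitting down at the console: the cluster-name constraints, the netmask interpretation (an integer below 128 selects IPv6; dot-decimal selects IPv4), and the requirement that int-a and int-b sit on separate subnets. It is an illustrative helper, not part of OneFS.

```python
import ipaddress
import re

def check_cluster_name(name: str) -> None:
    """Cluster names must begin with a letter and contain only
    letters, numbers, and hyphens; warn past 11 characters."""
    if not re.fullmatch(r"[A-Za-z][A-Za-z0-9-]*", name):
        raise ValueError("invalid cluster name: " + name)
    if len(name) > 11:
        print("WARNING: limit the cluster name to 11 characters or "
              "less when the NetBIOS Name Service is enabled.")

def classify_netmask(value: str) -> str:
    """An integer below 128 selects an IPv6 prefix length;
    dot-decimal notation selects an IPv4 netmask."""
    if value.isdigit() and int(value) < 128:
        return "IPv6"
    ipaddress.IPv4Address(value)  # raises ValueError if not dot-decimal
    return "IPv4"

def on_separate_subnets(int_a: str, int_b: str) -> bool:
    """int-a and int-b must not overlap."""
    return not ipaddress.ip_network(int_a).overlaps(ipaddress.ip_network(int_b))

check_cluster_name("isilon-lab1")             # 11 characters: passes silently
print(classify_netmask("64"))                 # IPv6
print(classify_netmask("255.255.255.0"))      # IPv4
print(on_separate_subnets("192.168.1.0/24", "192.168.2.0/24"))  # True
```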
NOTE: If you are installing a node that contains SEDs (self-encrypting drives), the node formats the drives now. The
formatting process might take up to two hours to complete.
LCD Interface
The LCD interface is located on the node front panel. The interface consists of the LCD screen, a round button labeled ENTER
for making selections, and four arrow buttons for navigating menus.
There are also four LEDs across the bottom of the interface that indicate which node you are communicating with. You can
change which node you are communicating with by using the arrow buttons.
The LCD screen remains dark until you activate it. To activate the LCD screen and view the menu, press the selection
button.
You can press the right arrow button to move to the next level of a menu.
Attach menu
The Attach menu contains the following sub-menu:
Drive Adds a drive to the node. After you select this command, you can select the drive bay that contains the
drive you would like to add.
Status menu
The Status menu contains the following sub-menus:
Alerts Displays the number of critical, warning, and informational alerts that are active on the cluster.
Cluster The Cluster menu contains the following sub-menus:
Details Displays the cluster name, the version of OneFS installed on the cluster, the health
status of the cluster, and the number of nodes in the cluster.
Capacity Displays the total capacity of the cluster and the percentage of used and available
space on the cluster.
Throughput Displays throughput numbers for the cluster as <in> | <out> | <total>.
Node The Node menu contains the following sub-menus:
Details Displays the node ID, the node serial number, the health status of the node, and the
node uptime as <days>, <hours>:<minutes>:<seconds>
Capacity Displays the total capacity of the node and the percentage of used and available
space on the node.
Network Displays the IP and MAC addresses for the node.
Throughput Displays throughput numbers for the node as <in> | <out> | <total>.
Disk/CPU Displays the current access status of the node, either Read-Write or Read-Only.
Also displays the current CPU throttling status, either Unthrottled or Throttled.
Hardware Displays the current hardware status of the node as <cluster name>-<node number>:<status>.
Also displays the Statistics menu.
Statistics Displays a list of hardware components. Select one of the hardware components to
view statistics related to that component.
Update menu
The Update menu allows you to update OneFS on the node. Press the selection button to confirm that you would like to update
the node. You can press the left navigation button to back out of this menu without updating.
Service menu
The Service menu contains the following sub-menus:
Read-Write Press the selection button to set node access to read-write.
UnitLED On Press the selection button to turn on the unit LED.
UnitLED Off Press the selection button to turn off the unit LED.
Shutdown menu
The Shutdown menu allows you to shut down or reboot the node. This menu also allows you to shut down or reboot the entire
cluster. Press the up or down navigation button to cycle through the four shut down and reboot options, or to cancel out of the
menu.
Press the selection button to confirm the command. You can press the left navigation button to back out of this menu without
shutting down or rebooting.