
Generation 6 Installation Guide

Install new Generation 6 hardware


December 2020

• Drive types
• Unpack and verify components
• Installation types
• Install the chassis rails
• Install the chassis
• Install compute modules and drive sleds
• Back panel
• Supported switches
• Attaching network and power cables
• Configure the node
• Front panel LCD menu
• Update the install database
• Where to go for support
Drive types
This installation guide applies to nodes that contain any of the following drive types: self-encrypting drives (SEDs), hard disk
drives (HDDs), and solid state drives (SSDs).
CAUTION: Only install the drives that were shipped with the node. Do not mix drives of different capacities in your
node.

If you remove drive sleds from the chassis during installation, be sure to label the sleds clearly. Replace the drive sleds
in the same sled bays that you removed them from. If drive sleds are mixed between nodes, even before configuration, the
system is inoperable.
If you are working with a node containing SEDs, the node might take up to two hours longer to join the cluster than a node with
standard drives. Do not power off the node during the join process.

Unpack and verify components


Before you install any equipment, inspect it to make sure that no damage occurred during transit.
Remove all components from the shipping package and inspect the components for any sign of damage. If the components
appear damaged in any way, notify Isilon Technical Support. Do not use a damaged component.
NOTE:

To avoid personal injury or damage to the hardware, always use multiple people to lift and move heavy equipment.

Installation types
You may be able to skip certain sections of this procedure based on the type of installation you are performing.

New cluster
If you are installing a new cluster, follow every step in this procedure. Repeat the procedure for each chassis you install.
If you are installing a new cluster with more than 22 nodes, or if you are growing an existing cluster to include more than 22
nodes, follow the instructions in Install a new cluster using Leaf-Spine configuration in the Leaf-Spine Cluster Installation Guide.
See the PowerScale Site Preparation and Planning Guide for more information about the Leaf-Spine network topology.

New chassis
If you are adding a new Generation 6 chassis to an existing cluster, follow every step in this procedure.

New node pair


If you are installing a new node pair in an existing chassis, you can skip the steps in this procedure that describe how to install
rails and a chassis.

Install the chassis rails


Install the adjustable chassis rails in the rack.
You can install your chassis in standard ANSI/EIA RS310D 19-inch rack systems, including all Dell EMC racks. The rail kit is
compatible with rack cabinets with the following hole types:
● 3/8-inch square holes
● 9/32-inch round holes
● 10-32, 12-24, M5X.8, or M6X1 prethreaded holes

The rails adjust in length from 24 to 36 inches to accommodate various cabinet depths. The rails are not left-specific or right-specific and can be installed on either side of the rack.
NOTE: Check the depth of the racks to ensure that they fit the depth of the chassis being installed. The Generation 6 Site
Preparation and Planning Guide provides details.
The two rails are packaged separately inside the chassis shipping container.
1. Separate a rail into front and back pieces.
Pull up on the locking tab, and slide the two sections of the rail apart.

2. Remove the mounting screws from the back section of the rail.
The back section is the thinner of the two rail sections. There are three mounting screws that are attached to the back
bracket. There are also two smaller alignment screws. Do not uninstall the alignment screws.

3. Attach the back section of the rail to the rack with the three mounting screws.
Ensure that the locking tab is on the outside of the rail.

4. Remove the mounting screws from the front section of the rail.
The front section is the wider of the two rail sections. There are three mounting screws that are attached to the front
bracket. There are also two smaller alignment screws. Do not uninstall the alignment screws.
5. Slide the front section of the rail onto the back section that is secured to the rack.

6. Adjust the rail until you can insert the alignment screws on the front bracket into the rack.
7. Attach the front section of the rail to the rack with only two of the mounting screws.
Attach the mounting screws in the holes between the top and bottom alignment screws. You will install mounting screws in
the top and bottom holes after the chassis is installed, to secure the chassis to the rack.

8. Repeat these steps to install the second rail in the rack.

Install the chassis


Slide the chassis onto the installed rails and secure the chassis to the rack.
NOTE: A chassis that contains drives and nodes can weigh up to 285 pounds. We recommend that you attach the chassis
to a lift to install it in a rack. If a lift is not available, you must remove all drive sleds and nodes from the chassis before you
attempt to lift it. Even when the chassis is empty, only attempt to lift and install the chassis with multiple people.

CAUTION: If you remove drive sleds from the chassis during installation, make sure to label the sleds clearly. You must
replace the drive sleds in the same sled bay you removed them from. If drive sleds are mixed between nodes, even
prior to configuration, the system will be inoperable.
1. Align the chassis with the rails that are attached to the rack.
2. Slide the first few inches of the back of the chassis onto the supporting ledge of the rails.
3. Release the lift casters and carefully slide the chassis into the cabinet as far as the lift will allow.

4. Secure the lift casters on the floor.
5. Carefully push the chassis off the lift arms and into the rack.
CAUTION: Make sure to leave the lift under the chassis until the chassis is safely balanced and secured within the
cabinet.

6. Install two mounting screws at the top and bottom of each rail to secure the chassis to the rack.

7. If you removed the drives and nodes prior to installing the chassis, re-install them now.

Install compute modules and drive sleds


Follow the steps in this section if you are installing a new node pair into an existing chassis, or if you needed to remove compute
modules and drive sleds to safely install the chassis in the rack.
CAUTION: Remember, you must install drive sleds with the compute module they were packaged with on arrival to the
site. If you removed the compute nodes and drive sleds to rack the chassis, you must replace the drive sleds and
compute modules in the same bays from which you removed them. If drive sleds are mixed between nodes, even before
configuration, the system is inoperable.
If all compute nodes and drive sleds are already installed in the chassis, you can skip this section.
1. At the back of the chassis, locate the empty node bay where you install the node.
2. Pull the release lever away from the node.
Keep the lever in the open position until the node is pushed all the way into the node bay.
3. Slide the node into the node bay.
NOTE: Support the compute node with both hands until it is fully inserted in the node bay.

4. Push the release lever in against the node back panel.
You can feel the lever pull the node into place in the bay. If you do not feel the lever pull the node into the bay, pull the lever
back into the open position, make sure that the node is pushed all the way into the node bay, then push the lever in against
the node again.
5. Tighten the thumbscrew on the release lever to secure the lever in place.
6. At the front of the chassis, locate the empty drive sled bays where you install the drive sleds that correspond to the
compute module you installed.
7. Make sure the drive sled handle is open before inserting the drive sled.
8. With two hands, slide the drive sled into the sled bay.
9. Push the drive sled handle back into the face of the sled to secure the drive sled in the bay.

10. Repeat the previous steps to install all drive sleds for the corresponding compute module.
11. Repeat all the steps in this section to install other nodes.

Back panel
The back panel provides connections for power, network access, and serial communication, as well as access to the power
supplies and cache SSDs.

Figure. Generation 6 node back panel (callouts 1 through 10)

1. 1GbE management and SSH port
2. Internal network ports
3. External network ports
4. Console connector
5. Do Not Remove LED
6. Multifunction button
7. Power supply
8. Cache SSDs
9. USB connector
10. HDMI debugging port

NOTE: The 1GbE management interface on Generation 6 hardware is designed to handle SSH traffic only.

CAUTION: Only trained Dell EMC Isilon personnel should connect to the node with the USB or HDMI debugging ports.
For direct access to the node, connect to the console connector.

CAUTION: Do not connect mobile devices to the USB connector for charging.

Multifunction button
You can perform two different functions with the multifunction button. With a short press of the button, you can begin a stack
dump. With a long press of the button, you can force the node to power off.
NOTE: We recommend powering off nodes from the OneFS command line. Only power off a node with the multifunction
button if the node does not respond to the OneFS command.
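
The following is a minimal sketch of powering off a single node from the OneFS command line instead of using the multifunction button. The isi config console and its shutdown command are assumptions based on common OneFS CLI usage; the exact syntax varies by OneFS version, so confirm it against the CLI reference for your release before use.

# Connect to the node you want to power off (the IP address is a placeholder).
ssh root@<node-ip>
# Open the cluster configuration console. Within the console, 'shutdown' powers
# off nodes and 'reboot' restarts them; <lnn> is a placeholder for the logical
# node number. Note that 'shutdown all' powers off the entire cluster.
isi config
>>> shutdown <lnn>
>>> exit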

Supported switches
Switches ship with the proper rails or tray to install the switch in the rack.
The following internal network switches ship with rails to install the switch. The switch rails are adjustable to fit NEMA front rail
to rear rail spacing ranging from 22 in to 34 in.

Table 1. Z9100-ON Ethernet switch
Switch Maximum number of ports Network
Z9100-ON 128-port 32x100 GbE, 32x40 GbE, 128x10 GbE
(with breakout cables)
The Z9100-ON is a fixed 1U Ethernet switch which can accommodate high port density (lower and upper RUs) and multiple
interface types (32 ports of 100 GbE or 40 GbE in QSFP28 or 128 ports of 25 GbE or 10 GbE with breakout) for maximum
flexibility.

NOTE: In OneFS 8.2.0 and later, the Z9100-ON switch is required for Leaf-Spine networking of large clusters.

Table 2. Z9264F-ON Ethernet switch


Switch Maximum number of ports Network
Z9264F-ON 128-port 64x100 GbE, 64x40 GbE, 256x10 GbE
(with breakout cables)
The Z9264F-ON is a fixed 2U Ethernet switch that provides industry-leading density of either 64 ports of 100 GbE or 40 GbE
in QSFP28, or 128 ports of 25 GbE or 10 GbE by breakout. Breakout cables can only be used in the odd-numbered ports, and
using one in an odd-numbered port disables the corresponding even-numbered port.

Table 3. S4148F-ON Ethernet switch


Switch Maximum number of ports Network
S4148F-ON 48-port 2x40 GbE, 48x10 GbE
The S4148F-ON is a next-generation family of 10 GbE (48-port) top-of-rack, aggregation, or router products that aggregates
10 GbE server or storage devices and provides multi-speed uplinks for maximum flexibility and simple management.

Table 4. S4112F-ON Ethernet switch


Switch Maximum number of ports Network
S4112F-ON 12-port 12x10 GbE, 3x100 GbE (with breakout, connect 12x10 GbE nodes using the 3x100 GbE ports)
The S4112F-ON supports 10/100 GbE with 12 fixed SFP+ ports that implement 10 GbE and three fixed QSFP28 ports that
implement 4x10 or 4x25 GbE using breakout. With 4x10 breakout cables on the three fixed QSFP28 ports, the switch provides
a total of 24 10 GbE connections.

Installing the switch


The switches ship with the proper rails or tray to install the switch in the rack.
NOTE: If the installation instructions in this section do not apply to the switch you are using, follow the procedures provided
by your switch manufacturer.

CAUTION: If the switch you are installing features power connectors on the front of the switch, it is important to leave
space between appliances to run power cables to the back of your rack. There is no 0U cable management option
available at this time.
1. Remove rails and hardware from packaging.
2. Verify that all components are included.
3. Locate the inner and outer rails and secure the inner rail to the outer rail.
4. Attach the rail assembly to the rack using the eight screws as illustrated in the following figure.
NOTE: The rail assembly is adjustable for NEMA front-to-rear spacing from 22 inches to 34 inches.

Figure 1. Install the inner rail to the outer rail

5. Attach the switch rails to the switch by placing the larger side of the mounting holes on the inner rail over the shoulder studs
on the switch. Press the rail evenly against the switch.
NOTE: The rail tabs for the front NEMA rail are located on the power supply side of the switch.

Figure 2. Install the inner rails

6. Slide the inner rail toward the rear of the switch so that the shoulder studs slide into the smaller side of each of the mounting
holes on the inner rail. Ensure that the inner rail is firmly in place.
7. Secure the switch to the rail, securing the bezel clip and switch to the rack using the two screws as illustrated in the
following figure.

Figure 3. Secure the switch to the rail

8. Snap the bezel in place.

Dell Switch configuration
Install the configuration file depending on the switch you are using, and the role for which it is being configured.
The following steps apply only to switches that are running DNOS 10.5.0.6.
1. For all flat, top of rack (ToR) setups of switches S4112, S4148, Z9100, and Z9264, run the following command to configure
the leaf role:

configure smartfabric l3fabric enable role LEAF

For Leaf and Spine network configuration, see the PowerScale Leaf Spine Cluster Installation Guide, or the Generation 6
Leaf-Spine Installation Guide.
2. The following prompt appears: Reboot to change the personality? [yes/no]
Type yes.
The switch reboots and loads the configuration.
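
For reference, the exchange on the switch console typically resembles the following transcript. The OS10# prompt and the show version check are illustrative assumptions; only the smartfabric command and the reboot prompt come from this procedure.

OS10# show version
! Confirm that the switch reports Dell EMC Networking OS10 version 10.5.0.6 before proceeding.
OS10# configure smartfabric l3fabric enable role LEAF
Reboot to change the personality? [yes/no]: yes
! The switch reboots and comes back up with the leaf personality loaded.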

Attaching network and power cables


Network and power cables must be attached to make sure that there are redundant power and network connections, and
dressed to allow for easy maintenance in the future.
The following image shows you how to attach your internal network and power cables for a node pair. Both node pairs in a
chassis must be cabled in the same way.

Figure. Internal network and power cabling for a node pair
1. To internal network switch 2
2. To internal network switch 1
3. To PDU 1
4. To PDU 2

Work with the site manager to determine external network connections, and bundle the additional network cables together with
the internal network cables from the same node pair.
It is important to keep future maintenance in mind as you dress the network and power cables. Cables must be dressed loosely
enough to allow you to:
● remove any of the four compute nodes from the back of the Generation 6 chassis.
● remove power supplies from the back of compute nodes.
In order to avoid dense bundles of cables, you can dress the cables from the node pairs to either side of the rack. For example,
dress the cables from nodes 1 and 2 toward the lower right corner of the chassis, and dress the cables from nodes 3 and 4
toward the lower left corner of the chassis.
Wrap network cables and power cables into two separate bundles to avoid EMI (electromagnetic interference) issues, but make
sure that both bundles easily shift together away from components that need to be removed during maintenance, such as
compute nodes and power supplies.

Cable management
Organize cables to protect the cable connections, to maintain proper airflow around the cluster, and to ensure fault-free
maintenance of the Isilon nodes.

Protect cables
Damage to the InfiniBand or Ethernet cables (copper or optical Fibre) can affect the Isilon cluster performance. Consider the
following to protect cables and cluster integrity:
● Never bend cables beyond the recommended bend radius. The recommended bend radius for any cable is at least 10–12
times the diameter of the cable. For example, if a cable is 1.6 inches, round up to 2 inches and multiply by 10 for an
acceptable bend radius. Cables differ, so follow the recommendations of the cable manufacturer.
● As illustrated in the following figure, the most important design attribute for bend radius consideration is the minimum mated
cable clearance (Mmcc). Mmcc is the distance from the bulkhead of the chassis through the mated connectors/strain relief
including the depth of the associated 90 degree bend. Multimode fiber has many modes of light (fiber optic) traveling
through the core. As each of these modes moves closer to the edge of the core, light and the signal are more likely to be
reduced, especially if the cable is bent. In a traditional multimode cable, as the bend radius is decreased, the amount of light
that leaks out of the core increases, and the signal decreases.

Figure 4. Cable design


● Keep cables away from sharp edges or metal corners.
● Never bundle network cables with power cables. If network and power cables are not bundled separately, electromagnetic
interference (EMI) can affect the data stream.
● When bundling cables, do not pinch or constrict the cables.
● Avoid using zip ties to bundle cables; instead, use Velcro hook-and-loop ties that do not have hard edges and can be
removed without cutting. Fastening cables with Velcro ties also reduces the impact of gravity on the bend radius.
NOTE: Gravity decreases the bend radius and results in the loss of light (fiber optic), signal power, and quality.
● For overhead cable supports:
○ Ensure that the supports are anchored adequately to withstand the significant weight of bundled cables. Anchor cables
to the overhead supports, then again to the rack to add a second point of support.
○ Do not let cables sag through gaps in the supports. Gravity can stretch and damage cables over time. You can anchor
cables to the rack with velcro ties at the mid-point of the cables to protect your cable bundles from sagging.
○ Place drop points in the supports that allow cables to reach racks without bending or pulling.
● If the cable is running from overhead supports or from underneath a raised floor, be sure to include vertical distances when
calculating necessary cable lengths.

Ensure airflow
Bundled cables can obstruct the movement of conditioned air around the cluster.
● Secure cables away from fans.
● To keep conditioned air from escaping through cable holes, employ flooring seals or grommets.

Prepare for maintenance


Design the cable infrastructure to accommodate future work on the cluster. Think ahead to required tasks on the cluster, such
as locating specific pathways or connections, isolating a network fault, or adding and removing nodes and switches.
● Label both ends of every cable to denote the node or switch to which it should connect.
● Leave a service loop of cable behind nodes. Service technicians should be able to slide a node out of the rack without pulling
on power or network connections. In the case of Generation 6 nodes, you should be able to slide any of the four nodes out of
the chassis without disconnecting any cables from the other three nodes.
WARNING: If adequate service loops are not included during installation, downtime might be required to add service
loops later.
● Allow for future expansion without the need for tearing down portions of the cluster.

Verify switch and cable compatibility


If the existing cables are supported for the Ethernet back-end network with the switches listed below, you can continue to use them.

Switches and ports


NOTE: There is no breakout cable support for Arista switches. However, you can add a 10 GbE or 40 GbE line card
depending on the Arista switch model.

Table 5. Supported switches


Switch Part number Model number
Arista DCS-7304 2x 32-port 40 GbE with line cards 105-572-006-00 851-0261
Arista DCS-7308 2x 48-port 10 GbE with line cards 105-572-009-00 851-0260
Arista Line card 32-Port 40 GbE 105-572-016-00 851-0283
Arista Line card 48-Port 10 GbE 105-572-015-00 851-0282
Celestica D2024 24x10GbE + 2x40GbE 100-575-036-00 851-0258
Celestica D2060 48x10GbE + 6x40GbE 100-575-034-00 851-0257
Celestica D4040 32x40GbE 100-575-035-00 851-0259
Dell S4112F-ON 12x 1/10GbE SFP+ ports, 3x QSFP28 multirate 1/10/25/40/100 GbE ports 100-572-315-00 851-0334
NOTE: The S4112F switch requires breakout cables to support 24 ports for 10 GbE.
Dell S4148-ON 2x40GbE 48x10GbE 100-575-041-00 851-0317
Dell Z9100-ON 32x40GbE 128x10GbE 100-575-040-00 851-0316
Dell Z9264F-ON 64x40GbE 100-575-042-00 851-0318

NOTE: Both the Z9100-ON and the Z9264F-ON switches can support Generation 6 nodes (40 GbE) with F200 and F600
PowerScale nodes (100 GbE, 25 GbE) on the same switch.

The Celestica switch breakout mode port-mapping table lists the Celestica switch QSFP ports that support breakout mode. The
table includes the limitations and port remapping for when breakout is used. When breakout is enabled on a port, the port is
remapped to the expanded interfaces.
From the Celestica switch CLI, run the following command for the expanded interface details:
<hostname># show interfaces hardware profile

Table 6. Celestica switch breakout mode port mapping


40GbE interface Configured mode and operating mode Expandable options Expanded interfaces
0/1 4x10GbE 4x10GbE 0/33-36
0/2 1x40GbE 4x10GbE 0/37-40
0/3 1x40GbE 4x10GbE 0/41-44
0/4 1x40GbE 4x10GbE 0/45-48
0/5 1x40GbE 4x10GbE 0/49-52
0/6 1x40GbE 4x10GbE 0/53-56
0/7 4x10GbE 4x10GbE 0/57-60
0/8 1x40GbE 4x10GbE 0/61-64
0/9 1x40GbE 4x10GbE 0/65-68
0/10 1x40GbE 4x10GbE 0/69-72
0/11 4x10GbE 4x10GbE 0/73-76
0/12 1x40GbE 4x10GbE 0/77-80
0/17 1x40GbE 4x10GbE 0/97-100
0/18 1x40GbE 4x10GbE 0/101-104
0/19 1x40GbE 4x10GbE 0/105-108
0/20 1x40GbE 4x10GbE 0/109-112
0/21 1x40GbE 4x10GbE 0/113-116
0/22 1x40GbE 4x10GbE 0/117-120
0/23 1x40GbE 4x10GbE 0/121-124
0/24 1x40GbE 4x10GbE 0/125-128
0/25 1x40GbE 4x10GbE 0/129-132
0/26 1x40GbE 4x10GbE 0/133-136
0/27 1x40GbE 4x10GbE 0/137-140
0/28 1x40GbE 4x10GbE 0/141-144

NOTE: Ports 13 through 16 cannot be used for connection.

Cables
Table 7. 40GbE
Dell EMC Part Number Cable type and length Orderable Model
PN: 038-004-065-01 1 m, copper, InfiniBand QDR PN: 851-0208
PN: 038-004-067-01 3 m, copper, InfiniBand QDR PN: 851-0209

PN: 038-004-069-01 5 m, copper, InfiniBand QDR PN: 851-0210
PN: 038-002-064-02 1 m, copper, 40GbE Ethernet PN: 851-0253
PN: 038-002-066-02 3 m, copper, 40GbE Ethernet PN: 851-0254
PN: 038-002-139-02 5 m, copper, 40GbE Ethernet PN: 851-0255
PN: 038-004-214 1 m, Optical, QDR/40GbE Ethernet PN: 851-0274
PN: 038-004-216 3 m, Optical, QDR/40GbE Ethernet PN: 851-0275
PN: 038-004-217 5 m, Optical, QDR/40GbE Ethernet PN: 851-0276
PN: 038-004-218 10 m, Optical, QDR/40GbE Ethernet PN: 851-0224
PN: 038-004-219 30 m, Optical, QDR/40GbE Ethernet PN: 851-0225
PN: 038-004-220 50 m, Optical, QDR/40GbE Ethernet PN: 851-0226

Use the optical transceiver, EMC PN: 019-078-046 (40GbE), with the MTP/MPO optical cables listed in this table.
NOTE: Any cables or transceivers not listed in this table are not supported. Replace any unsupported cable before
proceeding with the conversion.

10 GbE
If the conversion is from QDR InfiniBand to 10 GbE Ethernet, a Dell QSFP-to-SFP converter is required: EMC PN: 019-078-055
QSFP-SFP Adapter, orderable model PN: 851-0331.

Table 8. 10 GbE
Dell EMC Part Number Cable type and length Orderable model
EMC PN: 038-003-728-01 1 m, copper, 10GbE Ethernet PN: 851-0262
PN: 038-003-729-01 3 m, copper, 10GbE Ethernet PN: 851-0263
PN: 038-003-730-01 5 m, copper, 10GbE Ethernet PN: 851-0264
PN: 038-004-153 10 m, Optical, 10GbE Ethernet PN: 851-0266
PN: 038-004-154 30 m, Optical, 10GbE Ethernet PN: 851-0267
PN: 038-004-155 50 m, Optical, 10GbE Ethernet PN: 851-0268
PN: 038-004-156 100 m, Optical, 10GbE Ethernet PN: 851-0269
PN: 038-004-216 150 m, Optical, 10GbE Ethernet PN: 851-0270
The optical transceiver (EMC PN: 019-078-041, 10GbE) for the MTP/MPO optical cables listed in this table is included
(qty 2) in the orderable 851 PNs for the optical 10 GbE cables. To order the optic only, use orderable PN: 851-0296.

Breakout cables might be required with archive nodes (A200, A2000, H400, H500) in the cluster, and with Celestica D4040,
S4112-ON, and Z9100-ON switches. Breakout cables connect one 40GbE connection to four 10GbE connections.
NOTE: There is no breakout cable support for Arista switches. However, you can add a 10 GbE or 40 GbE line card
depending on the Arista switch model.

Connect the internal cluster network


Internal network cables connect nodes to the cluster's internal network so the nodes can communicate with each other.
1. Depending on the type of internal network card in the node, connect a network cable between the int-a port, the bottom of
the two ports, and the network switch for the Internal A network.

2. For the secondary internal network, connect the int-b port, the top port, to a separate network switch for the int-b network.

Figure. Internal network ports
1. int-a port
2. int-b port

Connect the external client network


Ethernet cables connect the node to the cluster's external network so the node can communicate with external clients.
Complete the following steps to connect the node with the switch for the external network.
NOTE: External client Ethernet network interfaces are named based on their speed, which can currently be 10, 25, 40, or
100 Gb. The example below refers to a node with 10 Gb interfaces.
With an ethernet cable, connect the 10gige-1 port on the node, the bottom of the two ports, to the switch for the external
network.
For additional connections, use the 10gige-2 port, the top port.
Figure. External network ports
1. 10gige-1 port
2. 10gige-2 port
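
Once the node has been configured and joined to a cluster (see Configure the node), you can confirm the external interface names that OneFS assigned for the installed NIC. This is a minimal sketch; the isi network interfaces command is part of the OneFS 8.x CLI, but verify the syntax for your OneFS release.

# Run from any configured node in the cluster; lists each node's network interfaces
# (for example 10gige-1/10gige-2, 25gige-1, 40gige-1, or 100gige-1) and their status.
isi network interfaces list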

Install transformers
This step only applies if you are installing F800 or H600 nodes into a low-line power environment (100 V to 120 V). If you are not
installing F800 or H600 nodes, or if you are not installing into a low-line power environment, you can skip this section.
CAUTION:

If you are installing F800 or H600 nodes into a low-line power environment (100 V to 120 V), you must install transformers
that regulate power to the requirements for these nodes (200 V to 240 V). To make sure that there are redundant
power sources, install two transformers, then connect the node power supplies from peer nodes, one to each
transformer. The transformers connect to the power source.

Reference the Isilon Generation 6 Transformer Installation Guide for complete instructions on installing a transformer
and connecting your power supplies.

Connect the power supplies


Connect a power cable to each node in the back of the chassis.
Perform these steps for every node installed in the back of the chassis.
Nodes will automatically power up when they are connected to power.
1. Connect the power cord to your power source.
CAUTION: In order for node pairs to provide redundant power to one another, you must not connect both nodes in
a node pair to the same power source. Nodes in a node pair must be connected to separate power sources. From
the back of a Generation 6 chassis, nodes in bays 1 and 2 are a node pair and nodes in bays 3 and 4 are a node pair.

2. Connect the power cord to the power supply.

3. Rotate the metal bale down over the power cord to hold the cord in place.

Configure the node
Before using the node, you must either create a new cluster or add the node to an existing cluster.

Federal installations
You can configure nodes to comply with United States federal regulations.
If you are installing an EMC Isilon cluster for a United States federal agency, you can configure the cluster's external network
with IPv6 addresses. To comply with federal requirements, if the OneFS cluster is configured for IPv6, you must enable
link-local addresses.
As part of the installation procedure, you can configure the external cluster for IPv6 addresses in the Isilon configuration wizard
after you power up a node.
After you install the cluster, you can enable link-local addresses by following the instructions in the KB article How to enable
link-local addresses for IPv6.

SmartLock compliance mode


You can configure nodes to operate in SmartLock compliance mode. Choose to run your cluster in SmartLock compliance
mode only if your data environment must comply with SEC Rule 17a-4(f).
Compliance mode controls how SmartLock directories function and limits access to the cluster in alignment with SEC Rule
17a-4(f).
A valid SmartLock license is required to configure a node in compliance mode.
CAUTION:

Once you select to run a node in SmartLock compliance mode, you cannot leave compliance mode without reformatting
the node.

SmartLock compliance mode is incompatible with Isilon for vCenter, VMware vSphere API for Storage Awareness
(VASA), and the VMware vSphere API for Array Integration (VAAI) NAS Plug-In for Isilon.
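
Before selecting compliance mode in the configuration wizard, you can confirm that a SmartLock license is active on the cluster the node will join. This is a minimal sketch, assuming the OneFS 8.x licensing CLI; verify the command for your release.

# List cluster licenses and check for an active SmartLock entry.
isi license list | grep -i smartlock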

Connect to the node using a serial cable


You can use a null modem serial cable to provide a direct connection to a node.
If no serial ports are available, you can use a USB-to-serial converter.
1. Connect a null modem serial cable to the serial port of a computer, such as a laptop.

2. Connect the other end of the serial cable to the serial port on the back panel of the node.
3. Start a serial communication utility such as Minicom (UNIX) or PuTTY (Windows).
4. Configure the connection utility to use the following port settings:

Setting Value
Transfer rate 115,200 bps
Data bits 8
Parity None
Stop bits 1
Flow control Hardware (RTS/CTS)

5. Open a connection to the node.
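
As an example, on a UNIX laptop you can open the serial console with the screen utility instead of Minicom. The device path is an assumption and depends on your serial port or USB-to-serial adapter (for example /dev/ttyS0, /dev/ttyUSB0, or /dev/tty.usbserial-XXXX on macOS).

# 115200 bps, 8 data bits, no parity, 1 stop bit, RTS/CTS hardware flow control.
screen /dev/ttyUSB0 115200,cs8,-parenb,-cstopb,crtscts

To end the screen session, press Ctrl-A and then K.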

Run the configuration wizard


The Isilon configuration wizard starts automatically when a new node is powered on. The wizard provides step-by-step guidance
for configuring a new cluster or adding a node to an existing cluster.
The following procedure assumes that there is an open serial connection to a new node.

NOTE: You can type back at most prompts to return to the previous step in the wizard.

1. Whether you are creating a new cluster, joining a node to an existing cluster, or preparing a node to run in SmartLock
compliance mode, choose one of the following options:
● To create a cluster, type 1.
● To join the node to an existing cluster, type 2.
● To exit the wizard and configure the node manually, type 3.
● To restart the node in SmartLock compliance mode, type 4.
CAUTION: If you choose to restart the node in SmartLock compliance mode, the node restarts and returns to
this step. Selection 4 changes to enable you to disable SmartLock compliance mode. Selection 4 is the last
opportunity to back out of compliance mode without reformatting the node.
2. Follow the prompts to configure the node.
For new clusters, the following table lists the information necessary to configure the cluster. To ensure that the installation
process is not interrupted, we recommend that you collect this information before installation.

Setting Description
SmartLock compliance license A valid SmartLock license, for clusters in compliance mode only.
Root password The password for the root user. Clusters in compliance mode do not allow a root user and instead request a
compliance administrator (comp admin) password.
Admin password The password for the administrator user.
Cluster name The name used to identify the cluster. Cluster names must begin with a letter and can contain only numbers,
letters, and hyphens.
NOTE: If the cluster name is longer than 11 characters, the following warning displays: WARNING: Limit cluster name
to 11 characters or less when the NetBIOS Name Service is enabled to avoid name truncation.
Isilon uses up to 4 characters for individual node names.

Character encoding The default character encoding is UTF-8.
int-a network settings (Netmask, IP range) The int-a network settings are for communication between nodes. The int-a
network must be configured with IPv4 and must be on a separate subnet from the int-b/failover network.
int-b/failover network settings (Netmask, IP range, Failover IP range) The int-b/failover network settings are optional. The
int-b network is for communication between nodes and provides redundancy with the int-a network. The int-b network must
be configured with IPv4, and the int-a and int-b networks must be on separate subnets. The failover IP range is a virtual IP
range that resolves to either of the active ports during failover.

External network settings (Netmask, MTU, IP range) The external network settings are for client access to the cluster. The
10Gb ports can be configured from the wizard. The default external network can be configured with IPv4 or IPv6 addresses.
To configure the external network with IPv6 addresses, enter an integer less than 128 for the netmask value; the standard
external netmask value for IPv6 addresses is 64. If you enter a netmask value with dot-decimal notation, use IPv4 addresses
for the IP range.
In the configuration wizard, the following options are available:

Configure external subnet


[ 1] 25gige-1 - External interface
[Enter] Exit configuring external
network.
Configure external subnet >>>

Or

Configure external subnet


[ 1] 100gige-1 - External interface
[Enter] Exit configuring external
network.
Configure external subnet >>>

NOTE: The 100gige is only an option on F600 nodes.

Default gateway The IP address of the optional gateway server through which the cluster communicates with clients outside
the subnet. Enter an IPv4 or IPv6 address, depending on how you configured the external network.
SmartConnect settings (SmartConnect zone name, SmartConnect service IP) SmartConnect balances client connections
across nodes in a cluster. Information about configuring SmartConnect is available in the OneFS Administration Guide.

DNS settings (DNS servers, DNS search domains) The DNS settings are for the cluster. Enter a comma-separated list to
specify multiple DNS servers or search domains. Enter IPv4 or IPv6 addresses, depending on how you configured the
external network settings.

Date and time settings (Time zone, Day and time) The date and time settings are for the cluster.

Cluster join mode The method that the cluster uses to add new nodes. Choose one of the following options:
● Manual join: Enables configured nodes in the cluster to add new nodes, or allows new nodes to request to join the cluster.
● Secure join: A configured node in the existing cluster must invite a new unconfigured node to join the cluster.

NOTE: If you are installing a node that contains SEDs (self-encrypting drives), the node formats the drives now. The
formatting process might take up to two hours to complete.
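
After the wizard completes and the node has created or joined the cluster, you can verify the result from the serial console or an SSH session. This is a minimal sketch; the output formatting varies by OneFS version.

# Confirm that the new node is listed and the cluster reports a healthy state.
isi status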

Updating node firmware


To make sure that the most recent firmware is installed on a node, update the node firmware.
Follow instructions in the most current Node Firmware Release Notes to update your node to the most recent Node Firmware
Package.

Updating drive firmware


To ensure that the most recent drive firmware is installed, update the drive firmware.
Follow instructions in the most current Drive Support Package Release Notes to update your drives to the most recent Isilon
Drive Support Package.
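
On OneFS 8.x, you can check the installed drive firmware from the CLI before and after applying the Drive Support Package. This is a minimal sketch, assuming the isi devices drive firmware command set; confirm the exact subcommands in the release notes for your OneFS version.

# Show the current and desired firmware for every drive in the node.
isi devices drive firmware list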

Licensing and remote support


After you configure new hardware, update your OneFS license and configure the new hardware for remote support.
For instructions on updating your OneFS license and configuring remote support (ESRS), refer to the OneFS CLI Administration
Guide or OneFS Web Administration Guide.

Front panel LCD menu


You can perform certain actions and check a node's status from the LCD menu on the front panel of the node.

LCD Interface
The LCD interface is located on the node front panel. The interface consists of the LCD screen, a round button labeled ENTER
for making selections, and four arrow buttons for navigating menus.
There are also four LEDs across the bottom of the interface that indicate which node you are communicating with. You can
change which node you are communicating with by using the arrow buttons.
The LCD screen is dark until you activate it. To activate the LCD screen and view the menu, press the selection button.
You can press the right arrow button to move to the next level of a menu.

Attach menu
The Attach menu contains the following sub-menu:

Drive Adds a drive to the node. After you select this command, you can select the drive bay that contains the
drive you would like to add.

Status menu
The Status menu contains the following sub-menus:

Alerts Displays the number of critical, warning, and informational alerts that are active on the cluster.
Cluster The Cluster menu contains the following sub-menus:

Details Displays the cluster name, the version of OneFS installed on the cluster, the health
status of the cluster, and the number of nodes in the cluster.
Capacity Displays the total capacity of the cluster and the percentage of used and available
space on the cluster.
Throughput Displays throughput numbers for the cluster as <in> | <out> | <total>.

Node The Node menu contains the following sub-menus:

Details Displays the node ID, the node serial number, the health status of the node, and the
node uptime as <days>, <hours>:<minutes>:<seconds>
Capacity Displays the total capacity of the node and the percentage of used and available
space on the node.
Network Displays the IP and MAC addresses for the node.
Throughput Displays throughput numbers for the node as <in> | <out> | <total>.
Disk/CPU Displays the current access status of the node, either Read-Write or Read-Only.
Also displays the current CPU throttling status, either Unthrottled or Throttled.

Drives Displays the status of each drive bay in the node.


You can browse through all the drives in the node with the right and left navigation buttons.
You can view the drives in other nodes in the cluster with the up and down navigation buttons. The node
you are viewing will display above the drive grid as Drives on node:<node number>.

Hardware Displays the current hardware status of the node as <cluster name>-<node number>:<status>.
Also displays the Statistics menu.

Statistics Displays a list of hardware components. Select one of the hardware components to
view statistics related to that component.

Update menu
The Update menu allows you to update OneFS on the node. Press the selection button to confirm that you would like to update
the node. You can press the left navigation button to back out of this menu without updating.

Service menu
The Service menu contains the following sub-menus:

Throttle Displays the percentage at which the CPU is currently running.


Press the selection button to throttle the CPU speed.

Unthrottle Displays the percentage at which the CPU is currently running.


Press the selection button to set CPU speed to 100%.

Read-Only Press the selection button to set node access to read-only.

Read-Write Press the selection button to set node access to read-write.
UnitLED On Press the selection button to turn on the unit LED.
UnitLED Off Press the selection button to turn off the unit LED.

Shutdown menu
The Shutdown menu allows you to shut down or reboot the node. This menu also allows you to shut down or reboot the entire
cluster. Press the up or down navigation button to cycle through the four shut down and reboot options, or to cancel out of the
menu.
Press the selection button to confirm the command. You can press the left navigation button to back out of this menu without
shutting down or rebooting.

Update the install database


After all work is complete, update the install database.
1. Browse to the Business Services portal.
2. Select the Product Registration and Install Base Maintenance option.
3. To open the form, select the IB Status Change option.
4. Complete the form with the applicable information.
5. To submit the form, click Submit.

Where to go for support


This topic contains resources for getting answers to questions about Isilon products.

Online support
● Live Chat
● Create a Service Request
For questions about accessing online support, send an email to [email protected].

Telephone support
● United States: 1-800-SVC-4EMC (1-800-782-4362)
● Canada: 1-800-543-4782
● Worldwide: 1-508-497-7901
● Local phone numbers for a specific country are available at Dell EMC Customer Support Centers.

Isilon Community Network
The Isilon Community Network connects you to a central hub of information and experts to help you maximize your current
storage solution. From this site, you can demonstrate Isilon products, ask questions, view technical videos, and get the latest
Isilon product documentation.

Isilon Info Hubs
For the list of Isilon info hubs, see the Isilon Info Hubs page on the Isilon Community Network. Use these info hubs to find
product documentation, troubleshooting guides, videos, blogs, and other information resources about the Isilon products and
features you're interested in.

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the
problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2016 - 2020 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
