
WHITE PAPER

ISILON ETHERNET BACKEND NETWORK OVERVIEW

Abstract
This white paper provides an introduction to the Ethernet backend network for
Dell EMC Isilon scale-out NAS.

November 2018

The information in this publication is provided “as is.” DELL EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose.

Use, copying, and distribution of any DELL EMC software described in this publication requires an applicable software license.
DELL EMC², DELL EMC, and the DELL EMC logo are registered trademarks or trademarks of DELL EMC Corporation in the United States
and other countries. All other trademarks used herein are the property of their respective owners. © Copyright 2018 DELL EMC
Corporation. All rights reserved. Published in the USA. 11/18 white paper H16346.1
DELL EMC believes the information in this document is accurate as of its publication date. The information is subject to change without
notice.

DELL EMC is now part of the Dell group of companies.

TABLE OF CONTENTS

Legacy Isilon Backend Network

New Generation Isilon Backend Network Option
    What is new with Ethernet Backend?
    Dell Switch support for Ethernet Backend
    Configuration and Monitoring
    Troubleshooting

Legacy Isilon Backend Network
Prior to the introduction of the new generation of Dell EMC Isilon scale-out NAS storage platforms, inter-node communication in an Isilon cluster was performed using a proprietary, unicast (node-to-node) protocol known as RBM (Remote Block Manager). This inter-node communication uses a fast, low-latency InfiniBand (IB) network. This backend network, which is configured with redundant switches for high availability, acts as the backplane for the Isilon cluster, enabling each node to act as a contributor in the cluster and providing node-to-node communication over a private, high-speed, low-latency network. The backend network uses Internet Protocol over InfiniBand (IPoIB) to manage the cluster, and SDP (Sockets Direct Protocol) for all data traffic between nodes in the cluster.

New Generation Isilon Backend Network Option


The new generation of Isilon scale-out NAS storage platforms offers increased backend networking flexibility. With the new Isilon platforms, customers may choose either an InfiniBand or an Ethernet switch on the backend. For customers electing to use an InfiniBand backend network, the configuration and implementation remain the same as in previous generations of Isilon systems. Customers looking to add new generation platforms (e.g. Isilon F800, H600, H500, H400, A200, and A2000) to an existing Isilon cluster comprised of earlier Isilon systems will need to configure the new Isilon nodes with an InfiniBand backend interface. The new Ethernet backend network option is supported only in clusters comprised entirely of new generation Isilon platforms. In these configurations, only Ethernet backend switches provided and managed by Dell EMC are supported.

The new Isilon backend Ethernet options are detailed in Table 1.

Backend Option    Compute Compatibility
10 GbE SFP+       Isilon H400, Isilon A200, or Isilon A2000
40 GbE QSFP+      Isilon F800, Isilon H600, or Isilon H500

Table 1: New Generation Isilon Backend Ethernet Options

In general, high performance platforms such as the new Isilon F800 all-flash or Isilon H600 hybrid scale-out NAS platforms will typically utilize the bandwidth of 40 GbE ports, while archive platforms such as the Isilon A200 or A2000 will typically be well served by the bandwidth of 10 GbE ports. Ethernet provides the performance characteristics needed to make it comparable to InfiniBand for this workload.
New generation Isilon platforms with different backend speeds can connect to the same switch without performance issues. For example, in a mixed cluster where archive nodes have 10 GbE on the backend and performance nodes have 40 GbE on the backend, both node types can connect to a 40 GbE switch without affecting the performance of other nodes on the switch. The 40 GbE switch will provide 40 GbE to the ports servicing the high performance nodes and 10 GbE to the archive or lower performing nodes.

What is new with Ethernet Backend?


In legacy Isilon systems, backend data traffic uses SDP, with IPoIB for management. SDP has fast failover and incorporates a variety of InfiniBand-only features that ensure optimum performance. However, because SDP works only over InfiniBand, a new method was required to achieve optimal performance over the Ethernet backend. For this reason, the new generation of Isilon platforms now uses RBM over TCP on the backend switches.

RBM now uses TCP, and the TCP stack has been enhanced to provide the performance required to support cluster communication. All modifications to the TCP stack conform to the industry standard specification of the stack. The backend and frontend networks use the same TCP stack, and the performance modifications made for the backend should not affect TCP traffic on the frontend. RBM over Ethernet still provides fast failover.
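
Because backend traffic is now ordinary TCP, it can be observed with standard tools. As a minimal, hypothetical check from a node's shell, the following lists TCP sessions on the internal addresses (this sketch assumes the backend uses the 169.254.x.x internal address range shown in the sysctl output later in this paper; substitute your cluster's internal range if it differs, and expect output details to vary by OneFS release):

# netstat -an | grep 169.254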

Dell Switch support for Ethernet Backend


We recently added support for two Dell Ethernet switches to be used for the Isilon backend as a Top of Rack (ToR) solution.

• Z9100-ON
• S4148F-ON

Both of these Ethernet switches are zero-touch, "plug and play" backend switches used for inter-node communication in an Isilon cluster. They are shipped with a fixed configuration; additional customer configuration is neither necessary nor allowed.

The Z9100-ON is a fixed-configuration 1 RU Ethernet switch that accommodates high port density and multiple interface types (10 GbE and 40 GbE) for maximum flexibility.

The S4148F-ON is a next-generation 10 GbE Top-of-Rack/aggregation switch and router that aggregates 10 GbE server and storage devices and provides multi-speed uplinks for maximum flexibility and simple management.

Note:
• Both these switches are qualified to be used with currently available network cables (MPO, LC, QSFP+, SFP+ and
breakout cables).
• Both these switches are shipped with a custom operating system that is built specifically to be compatible with Isilon
OneFS and they cannot be used (nor are they supported if used) as frontend network switches.

Configuration and Monitoring


When installing a new Isilon cluster, the Configuration Wizard has not changed: it still prompts you for the int-a, int-b, and failover ranges. All configuration and setup steps are the same regardless of whether the InfiniBand or Ethernet option is selected.
Figure 1 below shows the relative positioning of backend ports provided in the Compute Assembly for each Isilon node in the new
generation platforms.

Figure 1: New Generation Isilon Backend Ports

Table 2 below provides configuration information for the backend ports in new generation Isilon platforms:

Int-a network settings (Netmask, IP range):
• The network settings used by the int-a network. The int-a network is used for communication between nodes.
• The int-a network must be configured with IPv4.
• The int-a network must be on a separate/distinct subnet from the int-b/failover network.

Int-b and failover network settings (Netmask, IP range, Failover IP range):
• The network settings used by the optional int-b/failover network.
• The int-b network is used for communication between nodes and provides redundancy with the int-a network.
• The int-b network must be configured with IPv4.
• The int-a, int-b, and failover networks must be on separate/distinct subnets.

Table 2: int-a, int-b and failover configuration
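
To illustrate the subnet-separation rule, the following is a hypothetical addressing plan (example values only, not product defaults) that keeps the three networks on distinct subnets:

int-a:    netmask 255.255.255.0, IP range 192.168.10.10-192.168.10.50
int-b:    netmask 255.255.255.0, IP range 192.168.11.10-192.168.11.50
failover: netmask 255.255.255.0, failover IP range 192.168.12.10-192.168.12.50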

The monitoring capabilities on the Isilon backend switches cover the FRU (field replaceable unit) components, such as power supplies and fans. Protocol and performance monitoring is not provided. Customers should not attempt to alter the backend network configurations provided by Dell EMC; any attempt to do so can result in a cluster-wide outage.

For SNMP capabilities, customers may send SNMP alerts through the CELOG system. With an Ethernet backend, there are no longer opensm topology files for viewing all connected devices on the backend network. To see what is connected to the backend Ethernet fabric (int-a or int-b), use the isi_dump_fabric int-a (or isi_dump_fabric int-b) command.
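
For example, the command is invoked per fabric as shown below (the output lists the devices connected to the selected fabric; because the output format can vary by OneFS release, only the invocations are shown here):

# isi_dump_fabric int-a
# isi_dump_fabric int-b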

Troubleshooting
In the past, to get the backend networking information, we issued the isi_eth_mixer_d command. From its output, you could determine the backend interfaces and which of the int-a or int-b interfaces was currently in use. This information is now available via the isi.lbfo.config sysctl, which shows, for each failover address that exists on other nodes, which interface is primary. There is no preference for one interface over the other; the current connection simply shows the path that was last used. Failover to the other route occurs in under half a second.

# sysctl isi.lbfo.config
isi.lbfo.config:
Node: 169.254.3.75, Int-A: mlxen0 (P), Int-B: mlxen1 (A)(C)
Node: 169.254.3.76, Int-A: mlxen0 (P)(C), Int-B: mlxen1 (A)

(P) = Primary
(A) = Alternate
(C) = Current path to the node
mlxen0 = Mellanox EN card for int-a
mlxen1 = Mellanox EN card for int-b
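
Reading this example output: the current path (C) to node 169.254.3.75 is its int-b interface (the alternate), while node 169.254.3.76 is currently reached over its primary int-a interface. If the current path to a node fails, traffic fails over to the other interface.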

Table 3 below provides a summary of the supported backend network options for each of the Isilon platforms.

Isilon Platform   Compute             Network Options
F800              CPU: 2.6 GHz, 16c   IB, 40 Gb Ethernet
H600              CPU: 2.4 GHz, 14c   IB, 40 Gb Ethernet
H500              CPU: 2.2 GHz, 10c   IB, 40 Gb Ethernet
H400              CPU: 2.2 GHz, 4c    IB, 10 Gb Ethernet
A200              CPU: 2.2 GHz, 2c    IB, 10 Gb Ethernet
A2000             CPU: 2.2 GHz, 2c    IB, 10 Gb Ethernet

Table 3: Supported Backend Options for the New Generation Isilon Platforms

Example 1: All Performance 40 GbE Backend

When using performance nodes, the backend must be 40 GbE (10 GbE is not supported).

In this example, your configuration will include:

• Two 40 GbE backend switches
• 16 QSFP+/MPO backend cables
• 16 optics (if MPO cables are used)
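
One plausible reading of these quantities (an assumption, not stated in the original sizing): with two backend switches and each node cabled to both (int-a to one switch, int-b to the other), 16 cables corresponds to an eight-node cluster; scale the cable and optic counts with the node count.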

40 GbE Switch Options

• Dell EMC Z9100-ON (Isilon model code 851-0316): 32 ports, all 40 GbE, 1 RU. 40 GbE nodes: fewer than 32. Mixed environment (10 and 40 GbE): supports breakout cables, for a total of 128 10 GbE nodes.
• Celestica D4040 (851-0259): 32 ports, all 40 GbE, 1 RU. 40 GbE nodes: fewer than 32. Mixed environment: supports breakout cables, for a total of 96 10 GbE nodes.
• Arista DCS-7308 (851-0261): 64 ports, all 40 GbE, 13 RU. 40 GbE nodes: greater than 32 and fewer than 64 (includes two 32-port line cards). Mixed environment: no breakout support with FT, but a 10 GbE line card can be added.
• Arista leaf upgrade (851-0282): 32 ports per leaf, all 40 GbE. 40 GbE nodes: greater than 64 and fewer than 144 (maximum of 3 leaf upgrades).

Cable options for F800 (also H600 and H500)

Cable Type        Model      Connector   Length   EMC P/N          Reason
Copper (passive)  851-0253   QSFP+       1m       038-002-064-01   Ethernet cluster
Copper (passive)  851-0254   QSFP+       3m       038-002-066-01   Ethernet cluster
Copper (passive)  851-0255   QSFP+       5m       038-002-139-01   Ethernet cluster
Optical           851-0274   MPO         1m       038-004-214      Ethernet/IB cluster
Optical           851-0275   MPO         3m       038-004-216      Ethernet/IB cluster
Optical           851-0276   MPO         5m       038-004-227      Ethernet/IB cluster
Optical           851-0224   MPO         10m      038-004-218      Ethernet/IB cluster
Optical           851-0225   MPO         30m      038-004-219      Ethernet/IB cluster
Optical           851-0226   MPO         50m      038-004-220      Ethernet/IB cluster
Optical           851-0227   MPO         100m     038-004-221      Ethernet/IB cluster
Optical           851-0277   MPO         150m     038-000-139      Ethernet/IB cluster

Note:

• QSFP+ cables for Ethernet use do not require optics.
• MPO cables for Ethernet use require passive optics. The optic model is 851-0285 (019-078-046).

• MPO optics are added automatically when MPO cables are quoted and appear as a separate line item.

When a single MPO cable is configured, nothing additional is needed: two models are added to the quoting summary based on the user's selection, one for the cable and one for the optics.

Example 2: Mixed Environment 10 and 40 GbE Backend

When mixing performance and archive nodes, use a 40 GbE infrastructure with 40 GbE connections to the performance nodes and 4 x 10 GbE breakout cables to the archive nodes.

In this example, your configuration will include:

• Two 40 GbE backend switches
• 8 QSFP+/MPO backend cables
• 8 optics (if MPO cables are used)
• 4 QSFP to SFP+ breakout cables
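
One plausible reading of these quantities (an assumption, not stated in the original sizing): 8 QSFP+/MPO cables correspond to four performance nodes cabled to both switches, and 4 breakout cables yield 16 SFP+ ends, enough for eight archive nodes cabled to both switches; adjust the counts to your node mix.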

40 GbE Switch Options

The 40 GbE switch options are the same as in Example 1; see the switch list above.

Note:

• For the Celestica 851-0259, you can use 24 breakout cables to connect 96 nodes at 10 GbE (24 ports x 4 x 10 GbE lanes = 96 connections), though only ports 1-12 and 17-28 can break out (a Celestica design limitation).
• Breakout cables do not require manual configuration on the switch; they are plug and play.

Cable options for F800 (also H600 and H500)

The 40 GbE cable options and notes are the same as in Example 1; see the cable table above.

Cable options for Isilon A200 (also Isilon A2000 and H400)

Cable Type   Model      Length   Connector              Optic Part #   EMC P/N          Reason
Copper       851-0278   1m       (1) QSFP to (4) SFP+   N/A            038-004-506-03   Breakout: 40 GbE/10 GbE (4)
Copper       851-0279   3m       (1) QSFP to (4) SFP+   N/A            038-004-507-03   Breakout: 40 GbE/10 GbE (4)
Copper       851-0280   5m       (1) QSFP to (4) SFP+   N/A            038-004-508-03   Breakout: 40 GbE/10 GbE (4)

Note:

• Breakout cables do not require optics.
• As noted above, for the Celestica 851-0259, only ports 1-12 and 17-28 can break out (a Celestica design limitation).

Example 3: All Archive 10 GbE Nodes

In this example, your configuration will include:

• Two 10 GbE SFP+ switches
• 16 SFP+/LC cables
• 16 optics (if LC cables are used)
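
One plausible reading of these quantities (an assumption, not stated in the original sizing): with each archive node cabled to both switches, 16 cables corresponds to an eight-node cluster; scale the cable and optic counts with the node count.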

10 GbE Switch Options

• Dell EMC S4148F-ON (Isilon model code 851-0317): 48 ports (48-port 10 GbE, 2-port 40 GbE), 1 RU. All 10 GbE nodes: fewer than 48. Mixed environment (10 and 40 GbE): not supported.
• Celestica D2024 (851-0258): 24 ports (24-port 10 GbE, 2-port 40 GbE), 1 RU. All 10 GbE nodes: fewer than 24. Mixed environment: not supported.
• Celestica D2060 (851-0257): 48 ports (48-port 10 GbE, 6-port 40 GbE), 1 RU. All 10 GbE nodes: greater than 24 and fewer than 48. Mixed environment: not supported.
• Arista DCS-7304 (851-0260): 96 ports (48-port 10 GbE, 4-port 40 GbE), 8 RU. All 10 GbE nodes: greater than 48 and fewer than 96 (includes two 48-port line cards). Mixed environment: a 40 GbE line card can be added.
• Arista leaf upgrade (851-0283): 48 ports per leaf. All 10 GbE nodes: greater than 96 and fewer than 144 (maximum of 1 leaf upgrade).

Note:

• For the Celestica D2024, the two 40 GbE ports are not supported.
• For the Celestica D2060, the six 40 GbE ports have been tested and can break out to 4 x 10 GbE mode.
• For the Arista DCS-7304, the four 40 GbE ports are not supported.

Cable options for H400 and A2000 (also A200)

Cable Type   Model      Connector   Length   EMC P/N
Copper       851-0262   SFP+        1m       038-003-728-01
Copper       851-0263   SFP+        3m       038-003-729-01
Copper       851-0264   SFP+        5m       038-004-730-01
Optical      851-0266   LC          10m      038-004-153
Optical      851-0267   LC          30m      038-004-154
Optical      851-0268   LC          50m      038-004-155
Optical      851-0269   LC          100m     038-004-156
Optical      851-0270   LC          150m     038-004-591

Note:

• The optics for the LC-LC cables are bundled with the cable BOM and not listed separately on the quoting tool.



© 2018 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners. Reference Number: H16346.1
