Isilon Ethernet Backend Network Overview
Abstract
This white paper provides an introduction to the Ethernet backend network for
Dell EMC Isilon scale-out NAS.
November 2018
© 2018 Dell Inc. or its subsidiaries.
The information in this publication is provided “as is.” DELL EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose.
Use, copying, and distribution of any DELL EMC software described in this publication requires an applicable software license.
DELL EMC², DELL EMC, and the DELL EMC logo are registered trademarks or trademarks of DELL EMC Corporation in the United States
and other countries. All other trademarks used herein are the property of their respective owners. © Copyright 2018 DELL EMC
Corporation. All rights reserved. Published in the USA. 11/18 white paper H16346.1
DELL EMC believes the information in this document is accurate as of its publication date. The information is subject to change without
notice.
• 10 GbE SFP+ backend: Isilon H400, Isilon A200, or Isilon A2000
• 40 GbE QSFP+ backend: Isilon F800, Isilon H600, or Isilon H500
In general, high-performance platforms such as the new Isilon F800 all-flash or Isilon H600 hybrid scale-out NAS platforms will typically utilize the bandwidth of 40 GbE ports. Lower-performance platforms such as the Isilon A200 or A2000 archive scale-out NAS platforms are typically well served by the bandwidth of 10 GbE ports. Ethernet has all the performance characteristics needed to make it comparable to InfiniBand.
New generation Isilon platforms with different backend speeds can connect to the same switch and not see performance issues. For
example, in a mixed cluster where archive nodes have 10 GbE on the backend and performance nodes have 40 GbE on the backend,
both node types can connect to a 40 GbE switch without affecting the performance of other nodes on the switch. The 40 GbE switch
will provide 40 GbE to the ports servicing the high performance nodes and 10 GbE to the archive or lower performing nodes.
Two backend Ethernet switch models are offered:
• Z9100-ON
• S4148F-ON
Both are zero-touch backend switches used for inter-node communication in an Isilon cluster and are typically described as "plug and play": they ship with a fixed configuration, and additional customer configuration is neither necessary nor allowed.
The Z9100-ON is a fixed 1 RU Ethernet switch that accommodates high port density and multiple interface types (10 GbE and 40 GbE) for maximum flexibility.
The S4148F-ON is part of the next-generation family of 10 GbE top-of-rack/aggregation switch/router products; it aggregates 10 GbE server/storage devices and provides multi-speed uplinks for maximum flexibility and simple management.
Note:
• Both these switches are qualified to be used with currently available network cables (MPO, LC, QSFP+, SFP+ and
breakout cables).
• Both these switches are shipped with a custom operating system that is built specifically to be compatible with Isilon
OneFS and they cannot be used (nor are they supported if used) as frontend network switches.
Table 2 below provides configuration information for the backend ports in new generation Isilon platforms:
Int-a network settings: the network settings used by the int-a network.
• The int-a network is used for communication between nodes.
• The int-a network must be configured with IPv4.
• The int-a network must be on a separate/distinct subnet from the int-b/failover network.
• Settings include the netmask and IP range.
Int-b and failover network settings: the network settings used by the optional int-b/failover network.
The monitoring capabilities on Isilon switches correspond to the FRU (field-replaceable unit) components such as power supplies and fans. Protocol and performance monitoring capability is not provided. Customers should not attempt to alter the backend network configurations provided by Dell EMC; any attempt to do so can result in a cluster-wide outage.
Troubleshooting
In the past, backend networking information was obtained by issuing the isi_eth_mixer_d command. From its output you could determine the backend interfaces, and which of the int-a or int-b interfaces was currently in use. This information is now available via the sysctl isi.lbfo.config. For each failover address that exists on other nodes, it reports which interface is primary. There is no preference for one interface over the other; the current connection simply shows the path that was last used. Failover to the other route occurs in under half a second.
# sysctl isi.lbfo.config
isi.lbfo.config:
(P)=Primary
(A)=Alternate
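A minimal sketch of filtering that output for the primary paths, assuming each primary entry is tagged with (P) as in the legend above. The sample text here is a stand-in, not live cluster output:

```shell
# On a live cluster you would pipe the real command instead:
#   sysctl isi.lbfo.config | grep '(P)'
# A stand-in sample (the legend only) is used here so the pipeline
# can be shown end to end.
sample='isi.lbfo.config:
(P)=Primary
(A)=Alternate'
printf '%s\n' "$sample" | grep '(P)'
```

In a basic regular expression, the unescaped parentheses match literally, so `grep '(P)'` keeps only lines carrying the primary tag.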
Table 3 below provides a summary of the supported backend network options for each of the Isilon platforms.
Table 3: Supported Backend Options for the New Generation Isilon Platforms
Example 1:
All Performance 40 GbE backend: When using performance nodes, the backend must be 40 GbE (10 GbE is not
supported).
(Table columns: Vendor, Model, Isilon Model Code, Port Qty, Port Type, Rack Units, 40 GbE Nodes, Mixed Environment (10 and 40 GbE), Passive)
Note:
As you can see, two models are added to the summary based on the user's selection: one for the cable and one for the optics.
Example 2:
Mixed Environment 10 and 40 GbE backend: When mixing performance and archive nodes, use a 40 GbE
infrastructure with 40 GbE connections to the performance nodes and 4 x 10 GbE breakout cables to the archive nodes.
(Table columns: Vendor, Model, Isilon Model Code, Port Qty, Port Type, Rack Units, 40 GbE Nodes, Mixed Environment (10 and 40 GbE))
Note:
• For the Celestica 851-0259, you can use 24 breakout cables to connect 96 nodes at 10 GbE, though only ports 1-12 and 17-28 can break out (this is a Celestica design limitation).
• Breakout cables don't require manual configuration on the switch; they are plug and play.
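The 96-node figure follows directly from those port ranges; a quick sketch of the arithmetic:

```shell
# Celestica 851-0259: only ports 1-12 and 17-28 can break out
# (design limitation), giving two ranges of 12 breakout-capable ports.
breakout_ports=$(( 12 + 12 ))
lanes_per_port=4   # each QSFP+ port breaks out into 4 x 10 GbE lanes
max_10g_nodes=$(( breakout_ports * lanes_per_port ))
echo "${breakout_ports} breakout ports x ${lanes_per_port} lanes = ${max_10g_nodes} nodes at 10 GbE"
```

24 ports times 4 lanes per port yields the 96-node maximum quoted above.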
Note:
As you can see, two models are added to the summary based on the user's selection: one for the cable and one for the optics.
Cable options for Isilon A200 (also Isilon A2000 and H400)
Cable type   Model      Length   Connector               Optic part #   EMC P/N          Reason
Copper       851-0278   1m       (1) QSFP to (4) SFP+    N/A            038-004-506-03   Breakout: 40 GbE/10 GbE (4)
Copper       851-0279   3m       (1) QSFP to (4) SFP+    N/A            038-004-507-03   Breakout: 40 GbE/10 GbE (4)
Copper       851-0280   5m       (1) QSFP to (4) SFP+    N/A            038-004-508-03   Breakout: 40 GbE/10 GbE (4)
• Dell EMC S4148F-ON (model code 851-0317): 48-port 10 GbE plus 2-port 40 GbE, 48 ports, 1 RU; supports fewer than 48 nodes; mixed 10/40 GbE environment not supported.
• Celestica D2024 (model code 851-0258): 24-port 10 GbE plus 2-port 40 GbE, 24 ports, 1 RU; supports fewer than 24 nodes; mixed 10/40 GbE environment not supported.
• Arista (model code 851-0283): leaf upgrade (48 ports); supports greater than 96 and less than 144 nodes (max 1 leaf upgrade).
Note:
• For Celestica D2024, the two 40 GbE ports are not supported.
• For Celestica D2060, the six 40 GbE ports have been tested and can breakout to 4x10 GbE mode.
• For Arista DCS 7304, the four 40 GbE ports are not supported.
Note:
• The optics for the LC-LC cables are bundled with the cable BOM and not listed separately on the quoting tool.