BP 2071 AHV Networking
Copyright
Copyright 2017 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws.
Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.
Contents
1. Executive Summary
2. Introduction
2.1. Audience
2.2. Purpose
6. Conclusion
Appendix
AHV Networking Terminology
AHV Networking Best Practices Checklist
AHV Command Line Tutorial
List of Figures
List of Tables
1. Executive Summary
The default networking that we describe in the AHV Best Practices Guide covers a wide range of
scenarios that Nutanix administrators encounter. However, for those scenarios with unique VM
and host networking requirements that are not covered elsewhere, use this advanced networking
guide.
The default AHV networking configuration provides a highly available network for guest VMs
and the Nutanix Controller VM (CVM). This structure includes simple control and segmentation
of guest VM traffic using VLANs, as well as IP address management. The network visualization
for AHV available in Prism also provides a view of the guest and host network configuration for
troubleshooting and verification.
This advanced guide is useful when the defaults don't match customer requirements.
Configuration options include host networking high availability and load balancing mechanisms
beyond the default active-backup, tagged VLAN segmentation for host and CVM traffic, and
detailed command line configuration techniques for situations where a GUI may not be sufficient.
The tools we present here enable you to configure AHV to meet the most demanding network
requirements.
2. Introduction
2.1. Audience
This best practices guide is part of the Nutanix Solutions Library. It is intended for AHV
administrators configuring advanced host and VM networking. Readers of this document should
already be familiar with the AHV Best Practices Guide, which covers basic networking.
2.2. Purpose
In this document, we cover the following topics:
Command line overview and tips.
Open vSwitch in AHV.
VLANs for hosts, CVMs, and guest VMs.
IP address management (IPAM).
Network adapter teaming within bonds.
Network adapter load balancing.
For more detailed information on the Nutanix Enterprise Cloud Platform, please visit
Nutanix.com.
4.2. Bridges
Bridges act as virtual switches to manage traffic between physical and virtual network interfaces.
The default AHV configuration includes an OVS bridge called br0 and a native Linux bridge
called virbr0. The virbr0 Linux bridge exclusively carries management communication between
the CVM and AHV host. All other storage, host, and VM network traffic flows through the br0
OVS bridge. The AHV host, VMs, and physical interfaces use "ports" for connectivity to the
bridge.
4.3. Ports
Ports are logical constructs created in a bridge that represent connectivity to the virtual switch.
Nutanix uses several port types, including internal, tap, VXLAN, and bond.
An internal port with the same name as the default bridge (br0) provides access for the AHV host.
Tap ports act as bridge connections for virtual NICs presented to VMs.
VXLAN ports are used for the IP address management functionality provided by Acropolis.
Bonded ports provide NIC teaming for the physical interfaces of the AHV host.
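To see how these bridges, bonds, and ports fit together on a host, you can dump the OVS configuration. The following is a minimal example using the standard ovs-vsctl show command, run from the CVM against its local AHV host; the bridge and port names in the output depend on your configuration:
nutanix@CVM$ ssh [email protected] "ovs-vsctl show"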
4.4. Bonds
Bonded ports aggregate the physical interfaces on the AHV host. By default, the system creates
a bond named br0-up in bridge br0 that contains all physical interfaces. The examples in this guide rename the default bond (br0-up) to bond0 when modifying it with manage_ovs commands, so keep in mind that the bond name on your system may differ from the diagram below.
OVS bonds allow for several load-balancing modes, including active-backup, balance-slb, and
balance-tcp. Administrators can also activate LACP for a bond to negotiate link aggregation with
a physical switch. Because the bond_mode setting is not specified during installation, it defaults
to active-backup, which is the configuration we recommend.
The following diagram illustrates the networking configuration of a single host immediately after
imaging. The best practice is to use only the 10 Gb NICs and to disconnect the 1 Gb NICs if you
do not need them. For additional information on bonds, please refer to the Best Practices section
below.
Note: Only utilize NICs of the same speed within the same bond.
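To check which physical NICs are currently members of the default bond and which one is active, you can query the bond from the CVM. This is a sketch that assumes the factory bond name br0-up; substitute bond0 if the bond has already been renamed as in the examples below:
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show br0-up"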
By default, all virtual NICs are created in access mode, which permits only one VLAN per virtual
network. However, you can choose to configure a virtual NIC in trunked mode instead, allowing
multiple VLANs on a single VM NIC for network-aware user VMs. For more information on virtual
NIC modes, please refer to the Best Practices section below.
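As a hedged illustration of trunked mode, the aCLI can create a VM NIC that carries multiple VLANs. The VM name, network name, and VLAN list below are example values, and the vlan_mode and trunked_networks parameter names should be verified against the acli help for your AOS version:
nutanix@CVM$ acli vm.nic_create VM1 network=br1_vlan99 vlan_mode=kTrunked trunked_networks=99,100,101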
IPAM enables AHV to assign IP addresses to guest VMs automatically. AHV uses VXLAN and OpenFlow rules in OVS to intercept DHCP requests from user VMs and answer those requests with the configured IP address pools and settings.
Figure 5: IPAM
Administrators can use AHV with IPAM to deliver a complete virtualization deployment, including
network management, from the unified Prism interface. This capability radically simplifies the
complex network management traditionally associated with provisioning VMs and assigning
network addresses. To avoid address overlap, be sure to work with your network team to reserve
a range of addresses for VMs before enabling the IPAM feature.
AHV assigns an IP address from the address pool when creating a managed VM NIC; the
address releases back to the pool when the VM NIC or VM is deleted. In a managed network,
AHV intercepts DHCP requests and bypasses traditional network-based DHCP servers.
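As an example of a managed network, the following hedged aCLI sketch creates a network with an IPAM address pool. The VLAN, gateway, and pool range are placeholder values that you should replace with addresses reserved by your network team, and the ip_config and net.add_dhcp_pool syntax should be verified against the acli help for your AOS version:
nutanix@CVM$ acli net.create br0_vlan100 vlan=100 ip_config=10.10.100.1/24
nutanix@CVM$ acli net.add_dhcp_pool br0_vlan100 start=10.10.100.50 end=10.10.100.200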
You can see individual VM network details under the Table view on the VM page by selecting the
desired VM and choosing Update, as shown in the figure below.
Tip: For better security and a single point of management, avoid connecting directly
to the AHV hosts. All AHV host operations can be performed from the CVM by
connecting to 192.168.5.1, the internal management address of the AHV host.
Administrators can view and change OVS configuration from the CVM command line with the
manage_ovs command. To execute a single command on every Nutanix CVM in a cluster, use
the allssh shortcut described in the AHV Command Line appendix.
Note: The order in which flags and actions pass to manage_ovs is critical. Flags
must come first. Any flag passed after an action is not parsed.
nutanix@CVM$ manage_ovs --help
USAGE: manage_ovs [flags] <action>
To list all physical interfaces on all nodes, use the show_interfaces action. The show_uplinks
action returns the details of a bonded adapter for a single bridge.
Note: If you do not enter a bridge_name, the action reports on the default bridge,
br0.
nutanix@CVM$ allssh "manage_ovs show_interfaces"
nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> show_uplinks"
The update_uplinks action takes a comma-separated list of interfaces and configures these
into a single uplink bond in the specified bridge. All members of the bond must be physically
connected, or the manage_ovs command produces a warning and exits without configuring the
bond. To avoid this error and provision members of the bond even if they are not connected, use
the --require_link=false flag.
nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> --interfaces <interfaces>
update_uplinks"
nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> --interfaces <interfaces> --
require_link=false update_uplinks"
5.1. Scenario 1: 2x 10 Gb
The most common network configuration is to utilize the 10 Gb interfaces within the default
bond for all networking traffic. The CVM and all user VMs use the 10 Gb interfaces. In this
configuration, we don't use the 1 Gb interfaces. Note that this differs from the factory configuration, because we have removed the 1 Gb interfaces from the OVS bond.
This scenario uses two physical upstream switches, and each 10 Gb interface within a bond
plugs into a separate physical switch for high availability. Within each bond, only one physical
interface is active when using the default active-backup load balancing mode. See the Load
Balancing within Bond Interfaces section below for more information and alternate configurations.
Remove NICs that are not in use from the default bond, especially when they are of different
speeds. To do so, perform the following manage_ovs action for each Nutanix node in the cluster,
or use the allssh shortcut to run it on every CVM at once:
From the CVM, remove eth0 and eth1 from the default bridge br0 on all CVMs by specifying
that only eth2 and eth3 remain in the bridge. The 10g shortcut lets you include all 10 Gb
interfaces without having to specify the interfaces explicitly by name. Some Nutanix models
have different ethX names for 1 Gb and 10 Gb links, so this shortcut is helpful:
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g
update_uplinks"
Note: Running this command changes the name of the bond for bridge br0 to bond0
from the default, br0-up. We suggest keeping the new name, bond0, for simple
identification, but you can use any bond name.
5.2. Scenario 2: 2x 10 Gb and 2x 1 Gb
If you need the 1 Gb physical interfaces, separate the 10 Gb and 1 Gb interfaces into different bridges and bonds to ensure that CVM and user VM traffic always traverses the fastest possible link.
Here, we've grouped the 10 Gb interfaces (eth2 and eth3) into bond0 and dedicated them to the CVM and User VM1. We've grouped the 1 Gb interfaces into bond1; only a second link on User VM2 uses them. Bond0 and bond1 are added to br0 and br1, respectively.
In this configuration, the CVM and user VMs use the 10 Gb interfaces. Bridge br1 is available
for VMs that require physical network separation from the CVM and VMs on br0. Devices eth0
and eth1 could alternatively plug into a different pair of upstream switches for further traffic
separation.
Perform the following actions for each Nutanix node in the cluster to achieve the configuration
shown above:
On each AHV host, add bridge br1. You can reach the AHV host local to each CVM at the 192.168.5.1 interface address. Bridge names must be 10 characters or fewer. We suggest using the name br1. Repeat the following command once on each CVM in the cluster:
nutanix@CVM$ ssh [email protected] "ovs-vsctl add-br br1"
Alternatively, perform the previous command using hostssh, which applies it to all Nutanix
nodes in the cluster.
nutanix@CVM$ hostssh "ovs-vsctl add-br br1"
From the CVM, remove eth0 and eth1 from the default bridge br0 on all CVMs, as described
in the first scenario. You can use the allssh shortcut here, but first run the appropriate show
commands to make sure that all interfaces are in a good state before executing the update.
nutanix@CVM$ allssh "manage_ovs show_interfaces"
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 show_uplinks"
The output from the show commands above should confirm that the 10 Gb interfaces have connectivity to the upstream switches; just look for the columns labeled link and speed. Once you've confirmed connectivity, update the bond to include only the 10 Gb interfaces.
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g
update_uplinks"
From the CVM, add the eth0 and eth1 uplinks to br1 using the 1g interface shortcut.
Note: Use the --require_link=false flag to create the bond even if not all 1 Gb adapters are connected.
nutanix@CVM$ allssh "manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 1g --
require_link=false update_uplinks"
Now that a bridge, br1, exists just for the 1 Gb interfaces, you can create networks for "User
VM2" with the following global aCLI command. Putting the bridge name in the network name
is helpful when viewing the network in the Prism GUI. In this example, Prism shows a network
named "br1_vlan99" to indicate that this network sends VM traffic over VLAN 99 on bridge
br1.
nutanix@CVM$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99
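To confirm that the new network was created, you can list all networks from the aCLI as a simple verification step:
nutanix@CVM$ acli net.list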
5.3. Scenario 3: 4x 10 Gb and 2x 1 Gb
Nutanix servers are also available with four 10 Gb adapters. In this configuration, Nutanix recommends dedicating two 10 Gb adapters to CVM traffic and two 10 Gb adapters for user
VM traffic, thus separating user and CVM network traffic. Alternatively, for increased data
throughput, you can combine all 10 Gb adapters into a single bridge and bond with advanced
load balancing. Split the 1 Gb adapters into a dedicated bond for user VMs if required; otherwise,
these interfaces can remain unused. In the diagram above we show all interfaces connected to
the same pair of switches, but you could use separate switches for each bond to provide further
traffic segmentation.
To achieve this configuration, perform the following actions for each Nutanix node in the cluster:
On each AHV host, add bridges br1 and br2 from the CVM. Bridge names must be 10 characters or fewer. Use hostssh to execute the commands against all AHV hosts in the cluster.
nutanix@CVM$ hostssh "ovs-vsctl add-br br1"
nutanix@CVM$ hostssh "ovs-vsctl add-br br2"
On the CVM, remove eth0, eth1, eth2, and eth3 from the default bridge br0 on all CVMs by
specifying that only eth4 and eth5 remain in the bridge. We can't use the 10g interface shortcut here because only two of the four 10 Gb adapters are needed in the bond. If all
Nutanix nodes in the cluster share identical network configuration, use the allssh shortcut.
Again, confirm the network link using show commands before executing these changes.
nutanix@CVM$ allssh "manage_ovs show_interfaces"
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 show_uplinks"
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 --bond_name bond0 --interfaces eth4,eth5
update_uplinks"
From the CVM, add the eth2 and eth3 uplinks to br1 and the eth0 and eth1 uplinks to br2.
Note: Use the --require_link=false flag to create the bond even if not all adapters are connected.
nutanix@CVM$ allssh "manage_ovs --bridge_name br1 --bond_name bond1 --interfaces eth2,eth3
update_uplinks"
nutanix@CVM$ allssh "manage_ovs --bridge_name br2 --bond_name bond2 --interfaces eth0,eth1
update_uplinks"
You can now create networks in these new bridges in the aCLI using the same syntax as in the previous scenarios. Once you've created the networks in the aCLI, you can view them in the Prism GUI, so it's helpful to include the bridge name in the network name.
nutanix@CVM$ acli net.create <net_name> vswitch_name=<br_name> vlan=<vlan_num>
For example:
nutanix@CVM$ acli net.create br1_production vswitch_name=br1 vlan=1001
nutanix@CVM$ acli net.create br2_production vswitch_name=br2 vlan=2001
Load Balancing within Bond Interfaces
The following table compares the available bond modes. Maximum throughput values assume 2x 10 Gb adapters.
active-backup
Use case: Default configuration, which transmits all traffic over a single active adapter.
Maximum VM NIC throughput: 10 Gb. Maximum host throughput: 10 Gb.
balance-slb
Use case: Increases host bandwidth utilization beyond a single 10 Gb adapter. Places each VM NIC on a single adapter at a time.
Maximum VM NIC throughput: 10 Gb. Maximum host throughput: 20 Gb.
LACP and balance-tcp
Use case: Increases host and VM bandwidth utilization beyond a single 10 Gb adapter by balancing each VM NIC TCP session on a different adapter. Also used when network switches require LACP negotiation.
Maximum VM NIC throughput: 20 Gb. Maximum host throughput: 20 Gb.
LACP and balance-slb
Use case: Combines adapters for fault tolerance, but sources VM NIC traffic from only a single adapter at a time. This use case is not illustrated in the diagrams below.
Maximum VM NIC throughput: 10 Gb. Maximum host throughput: 20 Gb.
Active-Backup
The default bond mode is active-backup, where one interface in the bond carries traffic and other
interfaces in the bond are used only when the active link fails. Active-backup is the simplest bond
mode, easily allowing connections to multiple upstream switches without any additional switch
configuration. The downside is that traffic from all VMs uses only the single active link within
the bond. All backup links remain unused until the active link fails. In a system with dual 10 Gb
adapters, the maximum throughput of all VMs running on a Nutanix node is limited to 10 Gbps,
the speed of a single link.
Active-backup mode is enabled by default, but you can also configure it with the following AHV
command:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 bond_mode=active-backup"
View the bond mode and current active interface with the following AHV command:
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show"
In the active-backup configuration, this command's output would be similar to the following, where eth2 is the active interface and eth3 is the backup interface:
---- bond0 ----
bond_mode: active-backup
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
slave eth2: enabled
active slave
may_enable: true
slave eth3: enabled
may_enable: true
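If you need to verify failover behavior or prefer a particular uplink while in active-backup mode, OVS lets you select the active member manually. This is a hedged example; bond0 and eth3 are placeholders for your bond and interface names:
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/set-active-slave bond0 eth3"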
Balance-SLB
To take advantage of the bandwidth provided by multiple upstream switch links, you can use
the balance-slb bond mode. The balance-slb bond mode in OVS takes advantage of all links
in a bond and uses measured traffic load to rebalance VM traffic from highly used to less used
interfaces. When the configurable bond-rebalance interval expires, OVS uses the measured
load for each interface and the load for each source MAC hash to spread traffic evenly among
links in the bond. Traffic from some source MAC hashes may move to a less active link to more
evenly balance bond member utilization. Perfectly even balancing may not always be possible,
depending on the number of traffic streams and their sizes.
Each individual VM NIC uses only a single bond member interface at a time, but a hashing
algorithm distributes multiple VM NICs (multiple source MAC addresses) across bond member
interfaces. As a result, it is possible for a Nutanix AHV node with two 10 Gb interfaces to use up
to 20 Gbps of network throughput, while individual VMs have a maximum throughput of 10 Gbps.
The default rebalance interval is 10 seconds, but Nutanix recommends setting this interval to
60 seconds to avoid excessive movement of source MAC address hashes between upstream
switches. Nutanix has tested this configuration using two separate upstream switches with AHV.
No additional configuration (such as link aggregation) is required on the switch side, as long as
the upstream switches are interconnected physically or virtually and both uplinks trunk the same
VLANs.
Configure the balance-slb algorithm for each bond on all AHV nodes in the Nutanix cluster with
the following commands:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 bond_mode=balance-slb"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 other_config:bond-rebalance-
interval=60000"
Verify the bond mode on each CVM with the following command:
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show bond0"
---- bond0 ----
bond_mode: balance-slb
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
next rebalance: 59108 ms
lacp_status: off
slave eth2: enabled
may_enable: true
hash 120: 138065 kB load
hash 182: 20 kB load
slave eth3: enabled
active slave
may_enable: true
hash 27: 0 kB load
hash 31: 20 kB load
hash 104: 1802 kB load
hash 206: 20 kB load
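If every node in the cluster uses the same bridge and bond layout, you can apply the same balance-slb settings to all hosts at once with the hostssh shortcut. This is a convenience sketch; verify the bond name and run the show commands first:
nutanix@CVM$ hostssh "ovs-vsctl set port bond0 bond_mode=balance-slb"
nutanix@CVM$ hostssh "ovs-vsctl set port bond0 other_config:bond-rebalance-interval=60000"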
LACP and Balance-TCP
Note: Ensure that you've appropriately configured the upstream switches before enabling LACP. On the switch, link aggregation is commonly referred to as port channel or LAG, depending on the switch vendor. Using multiple upstream switches may require additional configuration such as MLAG or vPC.
With LACP, multiple links to separate physical switches appear as a single layer-2 link. A traffic-
hashing algorithm such as balance-tcp can split traffic between multiple links in an active-
active fashion. Because the uplinks appear as a single L2 link, the algorithm can balance traffic
among bond members without any regard for switch MAC address tables. Nutanix recommends
using balance-tcp when LACP is configured, because each TCP stream from a single VM can
potentially use a different uplink in this configuration. With link aggregation, LACP, and balance-
tcp, a single user VM with multiple TCP streams could use up to 20 Gbps of bandwidth in an
AHV node with two 10 Gb adapters.
Configure LACP and balance-tcp with the commands below on all Nutanix CVMs in the cluster.
Note: You must configure upstream switches for LACP before configuring the AHV
host from the CVM.
If upstream LACP negotiation fails, the default configuration disables the bond, thus blocking all
traffic. The following command allows fallback to active-backup bond mode in the event of LACP
negotiation failure:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 other_config:lacp-fallback-
ab=true"
Next, enable LACP negotiation and set the hash algorithm to balance-tcp.
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 lacp=active"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 bond_mode=balance-tcp"
Confirm the LACP negotiation with the upstream switch or switches using ovs-appctl, looking for
the word "negotiated" in the status lines.
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show bond0"
nutanix@CVM$ ssh [email protected] "ovs-appctl lacp/show bond0"
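If you need to back out of LACP on a host (for example, during troubleshooting), you can return that host to the default configuration. This sketch simply reverses the settings above:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 lacp=off"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 bond_mode=active-backup"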
Figure 15: Default Untagged VLAN for CVM and AHV Host
The setup above works well for situations where the switch administrator can set the CVM and
AHV VLAN to untagged. However, if you do not want to send untagged traffic to the AHV host
and CVM, or if the security policy doesn't allow this configuration, you can add a VLAN tag to the host and the CVM with the following arrangement.
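As a hedged example of tagging the host, you can set a VLAN tag on the internal port br0 of each AHV host with ovs-vsctl; VLAN 10 here is an illustrative value that should match the VLAN you assign to the CVM below:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0 tag=10"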
Configure VLAN tags for the CVM on every CVM in the Nutanix cluster.
nutanix@CVM$ change_cvm_vlan 10
Tip: Perform the commands above individually on every AHV host and CVM.
Nutanix does not recommend using the allssh or hostssh shortcuts to set VLAN tags,
because if one switch is misconfigured, one shortcut could disconnect all Nutanix
nodes in a cluster.
If you need network segmentation between storage data and management traffic within the
CVM to meet security requirements, please see KB article KB-2748 for AHV.
6. Conclusion
Nutanix recommends using the default AHV networking settings, configured via the Prism GUI,
for most Nutanix deployments. But when your requirements demand specific configuration
outside the defaults, this advanced networking guide provides detailed CLI configuration
examples that can help.
Administrators can use the Nutanix CLI to configure advanced networking features on a single
host or, conveniently, on all hosts at once. VLAN trunking for guest VMs allows a single VM NIC
to pass traffic on multiple VLANs for network-intensive applications. You can apply VLAN tags to
the AHV host and CVM in situations that require all traffic to be tagged. Grouping host adapters
in different ways can provide physical traffic isolation or allow advanced load balancing and link
aggregation to provide maximum throughput and redundancy for VMs and hosts.
With these advanced networking techniques, administrators can configure a Nutanix system with
AHV to meet the demanding requirements of any VM or application.
For feedback or questions, please contact us using the Nutanix NEXT Community forums.
Appendix
AHV Networking Best Practices Checklist
Connect to a CVM instead of to the AHV hosts when using SSH. Use the hostssh or
192.168.5.1 shortcut for any AHV host operation.
For high availability, connect to the cluster Virtual IP (VIP) for cluster-wide commands
entered in the aCLI rather than to a single CVM.
Open vSwitch
Do not modify the OpenFlow tables associated with the default OVS bridge br0; the AHV
host, CVM, and IPAM rely on this bridge.
While it is possible to set QoS policies and other network configuration on the VM tap
interfaces manually (using the ovs-vsctl command), we do not recommend it. Policies are
ephemeral and do not persist across VM power cycles or migrations between hosts.
Do not delete or rename OVS bridge br0.
Do not modify the native Linux bridge virbr0.
OVS bonds
Aggregate the 10 Gb interfaces on the physical host to an OVS bond named bond0 on the
default OVS bridge br0 and trunk VLANs to these interfaces on the physical switch.
Use active-backup load balancing unless you have a specific need for balance-slb or LACP
with balance-tcp.
Create a separate bond and bridge for the connected 1 Gb interfaces, or remove them from
the primary bond0.
Do not include 1 Gb interfaces in the same bond or bridge as the 10 Gb interfaces.
If required, connect the 1 Gb interfaces to different physical switches than those used by the 10 Gb interfaces to provide physical network separation for user VMs.
Use LACP with balance-tcp only if guest VMs require link aggregation. Ensure that you
have completed LACP configuration on the physical switches first.
Physical network layout
Use redundant top-of-rack switches in a leaf-spine architecture. This simple, flat network
design is well suited for a highly distributed, shared-nothing compute and storage
architecture.
Add all the nodes that belong to a given cluster to the same layer-2 network segment.
If you need more east-west traffic capacity, add spine switches.
Use redundant 40 Gbps (or faster) connections to ensure adequate bandwidth between
upstream switches.
Upstream physical switch specifications
Connect the 10 Gb uplink ports on the AHV node to ports on nonblocking, datacenter-class switches that provide line-rate traffic throughput.
Use an Ethernet switch that has a low-latency, cut-through design, and that provides
predictable, consistent traffic latency regardless of packet size, traffic pattern, or the
features enabled on the 10 Gb interfaces. Port-to-port latency should be no higher than two
microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on switch ports that are
connected to the AHV host.
Switch and host VLANs
Keep the CVM and AHV in the same VLAN. By default, the CVM and the hypervisor
are assigned to VLAN 0, which effectively places them on the native untagged VLAN
configured on the upstream physical switch.
Configure switch ports connected to AHV as VLAN trunk ports.
Configure a dedicated native untagged VLAN (other than VLAN 1) on switch ports facing AHV hosts to carry only CVM and AHV traffic.
Guest VM VLANs
Configure guest VM network VLANs on br0 using the Prism GUI.
Use VLANs other than the dedicated CVM and AHV VLAN.
Use the aCLI to add guest VM network VLANs for additional bridges, and include the
bridge name in the network name for easy bridge identification.
Use VM NIC VLAN trunking only in cases where guest VMs require multiple VLANs on the
same NIC. In all other cases, add a new VM NIC with a single VLAN to bring new VLANs to
guest VMs.
CVM network configuration
Do not remove the CVM from either the OVS bridge br0 or the native Linux bridge virbr0.
If necessary, you can separate network traffic for CVM management and storage by adding
a virtual NIC to the CVM following KB-2748. You have the option to connect this additional
virtual NIC to a separate physical network.
Jumbo frames
Nutanix does not currently recommend jumbo frames when using AHV. Performance
improvements are generally not significant when switching from 1,500 byte frames to 9,000
byte frames.
IP address management
Coordinate the configuration of IP address pools to avoid address overlap with existing
network DHCP pools.
Confirm IP address availability with the network administrator before configuring an IPAM
address pool in AHV.
IPMI ports
Do not trunk multiple VLANs to switch ports that connect to the IPMI interface. For
management simplicity, only configure the IPMI switch ports as access ports in a single
VLAN.
AHV Command Line Tutorial
CLI shortcuts exist to make cluster management a bit easier. Often, you need to execute a
command on all CVMs, or on all AHV hosts, rather than on just a single host. It would be tedious
to log on to every system and enter the same command on each of them, especially in a large
cluster. That's where the allssh and hostssh shortcuts come in. allssh takes a given command
entered on the CVM BASH CLI and executes that command on every CVM in the cluster.
hostssh works similarly, taking a command entered on the CVM BASH CLI and executing that
command on every AHV host in the cluster, as shown in the figure above.
To streamline the management of CVMs and AHV hosts, the SSH shortcut connects a single
CVM directly to the local AHV host. From any single CVM, you can use SSH to connect to the
AHV hosts local address at IP address 192.168.5.1. Similarly, any AHV host can SSH to the
local CVM using the IP address 192.168.5.2. This SSH connection uses the internal Linux bridge
virbr0, which we discuss in the sections below.
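For example, from any CVM you can run a quick one-off command against its local AHV host using this shortcut; the command shown here is just an illustration:
nutanix@CVM$ ssh [email protected] "ovs-vsctl list-br"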
Let's take a look at a few examples to demonstrate the usefulness of these commands.
Example 1: allssh
Imagine that we need to determine which network interfaces are plugged in on all nodes in the
cluster, and the link speed of each interface. We could use manage_ovs show_interfaces at
each CVM, but instead let's use the allssh shortcut. First, SSH into any CVM in the cluster as
the nutanix user, then execute the command allssh "manage_ovs show_interfaces" at the CVM
BASH shell:
nutanix@NTNX-A-CVM:10.0.0.25:~$ allssh "manage_ovs show_interfaces"
In the sample output below, we've truncated the results after the second node to save space.
Executing manage_ovs show_interfaces on the cluster
================== 10.0.0.25 =================
name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000
Connection to 10.0.0.25 closed.
================== 10.0.0.26 =================
name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000
Connection to 10.0.0.26 closed.
Example 2: hostssh
If we wanted to view the MAC address of the eth0 interface on every AHV host, we could connect
to each AHV host individually and use ifconfig eth0. To make things faster, let's use the hostssh
shortcut instead. In this example, we still use SSH to connect to the CVM BASH shell, then prefix
our desired command with hostssh.
nutanix@NTNX-A-CVM:10.0.0.25:~$ hostssh "ifconfig eth0 | grep HWaddr"
============= 10.0.0.23 ============
eth0 Link encap:Ethernet HWaddr 0C:C4:7A:46:B1:FE
============= 10.0.0.22 ============
eth0 Link encap:Ethernet HWaddr 0C:C4:7A:46:B2:4E
Example 3: aCLI
Administrators can use the aCLI shell to view Nutanix cluster information that might not be easily
available in the Prism GUI. For example, let's list all of the VMs in a given network. First, connect
to any CVM using SSH, then enter the aCLI.
nutanix@NTNX-A-CVM:10.0.0.25:~$ acli
<acropolis> net.list_vms 1GBNet
VM UUID VM name MAC address
0d6afd4a-954d-4fe9-a184-4a9a51c9e2c1 VM2 50:6b:8d:cb:1b:f9
Example 4: Combining allssh and ssh
You can also nest the 192.168.5.1 SSH shortcut inside allssh to run a host-level command from every CVM in the cluster.
Note: Watch carefully for the double and single quotes that encapsulate these commands.
nutanix@NTNX-A-CVM:10.0.0.25:~$ allssh "ssh [email protected] 'ifconfig eth0 | grep HWaddr' "
Executing ssh [email protected] 'ifconfig eth0 | grep HWaddr' on the cluster
================== 10.0.0.25 =================
eth0 Link encap:Ethernet HWaddr 0C:C4:7A:46:B1:78
Connection to 10.0.0.25 closed.
================== 10.0.0.26 =================
eth0 Link encap:Ethernet HWaddr 0C:C4:7A:46:B2:4E
Connection to 10.0.0.26 closed.
With these command line utilities, we can manage a large number of Nutanix nodes at once.
Centralized management helps administrators apply configuration consistently and verify
configuration across a number of servers.
References
1. AHV Best Practices Guide
2. AHV Administration Guide: Host Network Configuration
3. AHV Administration Guide: VM Network Configuration
4. KB-2748 CVM Network Segmentation
5. Open vSwitch Documentation
6. Prism Web Console Guide: Network Visualization
About the Author
Jason has designed and supported Unified Communications infrastructure in the enterprise for
the past decade, deploying UC to connect hundreds of thousands of end-users. Outside of his
day job, he has an unusual passion for certificates, security, and motorcycles.
Follow Jason on Twitter @bbbburns.
About Nutanix
Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that
power their business. The Nutanix Enterprise Cloud Platform leverages web-scale engineering
and consumer-grade design to natively converge compute, virtualization, and storage into
a resilient, software-defined solution with rich machine intelligence. The result is predictable
performance, cloud-like infrastructure consumption, robust security, and seamless application
mobility for a broad range of enterprise applications. Learn more at www.nutanix.com or follow us on Twitter @nutanix.
List of Figures
Figure 1: Enterprise Cloud Platform
Figure 5: IPAM
Figure 15: Default Untagged VLAN for CVM and AHV Host
List of Tables
Table 1: Document Version History