
AHV Networking

Nutanix Best Practice Guide

Version 1.0 | February 2017 | BP-2071



Copyright
Copyright 2017 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws.
Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.


Contents

1. Executive Summary
2. Introduction
  2.1. Audience
  2.2. Purpose
3. Nutanix Enterprise Cloud Platform Overview
  3.1. Nutanix Acropolis Overview
  3.2. Nutanix Acropolis Architecture
4. AHV Networking Overview
  4.1. Open vSwitch
  4.2. Bridges
  4.3. Ports
  4.4. Bonds
  4.5. Virtual Local Area Networks (VLANs)
  4.6. IP Address Management (IPAM)
  4.7. View Network Status
5. AHV Networking Best Practices
  5.1. Open vSwitch Bridge and Bond Recommendations
  5.2. Load Balancing within Bond Interfaces
  5.3. VLANs for AHV Hosts and CVMs
  5.4. Virtual NIC Trunking
6. Conclusion
Appendix
  AHV Networking Terminology
  AHV Networking Best Practices Checklist
  AHV Command Line Tutorial
  AHV Networking Command Examples
  References
  About the Author
  About Nutanix
List of Figures
List of Tables


1. Executive Summary
The default networking that we describe in the AHV Best Practices Guide covers a wide range of
scenarios that Nutanix administrators encounter. However, for those scenarios with unique VM
and host networking requirements that are not covered elsewhere, use this advanced networking
guide.
The default AHV networking configuration provides a highly available network for guest VMs
and the Nutanix Controller VM (CVM). This structure includes simple control and segmentation
of guest VM traffic using VLANs, as well as IP address management. The network visualization
for AHV available in Prism also provides a view of the guest and host network configuration for
troubleshooting and verification.
This advanced guide is useful when the defaults don't match customer requirements.
Configuration options include host networking high availability and load balancing mechanisms
beyond the default active-backup, tagged VLAN segmentation for host and CVM traffic, and
detailed command line configuration techniques for situations where a GUI may not be sufficient.
The tools we present here enable you to configure AHV to meet the most demanding network
requirements.


2. Introduction

2.1. Audience
This best practices guide is part of the Nutanix Solutions Library. It is intended for AHV
administrators configuring advanced host and VM networking. Readers of this document should
already be familiar with the AHV Best Practices Guide, which covers basic networking.

2.2. Purpose
In this document, we cover the following topics:
Command line overview and tips.
Open vSwitch in AHV.
VLANs for hosts, CVMs, and guest VMs.
IP address management (IPAM).
Network adapter teaming within bonds.
Network adapter load balancing.

Table 1: Document Version History

Version Number | Published | Notes
1.0 | February 2017 | Original publication.


3. Nutanix Enterprise Cloud Platform Overview

3.1. Nutanix Acropolis Overview


Nutanix delivers a hyperconverged infrastructure solution purpose-built for virtualization and
cloud environments. This solution brings the performance and economic benefits of web-scale
architecture to the enterprise through the Nutanix Enterprise Cloud Platform, which combines two
product families: Nutanix Acropolis and Nutanix Prism.
Attributes of this solution include:
Storage and compute resources hyperconverged on x86 servers.
System intelligence located in software.
Data, metadata, and operations fully distributed across entire cluster of x86 servers.
Self-healing to tolerate and adjust to component failures.
API-based automation and rich analytics.
Nutanix Acropolis can be broken down into three foundational components: the Distributed
Storage Fabric (DSF), the App Mobility Fabric (AMF), and AHV. Prism provides one-click
infrastructure management for virtual environments running on Acropolis. Acropolis is hypervisor
agnostic, supporting two third-party hypervisors, ESXi and Hyper-V, in addition to the native
Nutanix hypervisor, AHV.

Figure 1: Enterprise Cloud Platform


3.2. Nutanix Acropolis Architecture


Acropolis does not rely on traditional SAN or NAS storage or expensive storage network
interconnects. It combines highly dense storage and server compute (CPU and RAM) into a
single platform building block. Each building block is based on industry-standard Intel processor
technology and delivers a unified, scale-out, shared-nothing architecture with no single points of
failure.
The Nutanix solution has no LUNs to manage, no RAID groups to configure, and no complicated
storage multipathing to set up. All storage management is VM-centric, and the DSF optimizes
I/O at the VM virtual disk level. There is one shared pool of storage that includes flash-based
SSDs for high performance and HDDs for affordable capacity. The file system automatically tiers
data across different types of storage devices using intelligent data placement algorithms. These
algorithms make sure the most frequently used data is available in memory or in flash for the
fastest possible performance.

Figure 2: Information Life Cycle Management

For more detailed information on the Nutanix Enterprise Cloud Platform, please visit
Nutanix.com.


4. AHV Networking Overview


AHV uses Open vSwitch (OVS) to connect the CVM, the hypervisor, and guest VMs to each
other and to the physical network on each node.

4.1. Open vSwitch


Open vSwitch (OVS) is an open source software switch implemented in the Linux kernel and
designed to work in a multiserver virtualization environment. By default, OVS behaves like
a layer-2 learning switch that maintains a MAC address table. The hypervisor host and VMs
connect to virtual ports on the switch.
OVS supports many popular switch features, such as VLAN tagging, load balancing, and link
aggregation control protocol (LACP). Each AHV server maintains an OVS instance, and all OVS
instances combine to form a single logical switch. Constructs called bridges manage the switch
instances residing on the AHV hosts.

4.2. Bridges
Bridges act as virtual switches to manage traffic between physical and virtual network interfaces.
The default AHV configuration includes an OVS bridge called br0 and a native Linux bridge
called virbr0. The virbr0 Linux bridge exclusively carries management communication between
the CVM and AHV host. All other storage, host, and VM network traffic flows through the br0
OVS bridge. The AHV host, VMs, and physical interfaces use "ports" for connectivity to the
bridge.
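
To quickly confirm which OVS bridges exist on a host, you can list them from the CVM over the internal 192.168.5.1 address. The output below is a minimal example assuming a default configuration with only br0; the native Linux bridge virbr0 is not managed by OVS, so it does not appear in this list.
nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl list-br"
br0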

4.3. Ports
Ports are logical constructs created in a bridge that represent connectivity to the virtual switch.
Nutanix uses several port types, including internal, tap, VXLAN, and bond.
An internal port, with the same name as the default bridge (br0), provides access for the AHV host.
Tap ports act as bridge connections for virtual NICs presented to VMs.
VXLAN ports are used for the IP address management functionality provided by Acropolis.
Bonded ports provide NIC teaming for the physical interfaces of the AHV host.


4.4. Bonds
Bonded ports aggregate the physical interfaces on the AHV host. By default, the system creates
a bond named br0-up in bridge br0 containing all physical interfaces. The manage_ovs commands
in our examples often rename the default bond (br0-up) to bond0, so keep in mind that the bond
on your system may be named differently than in the diagram below.
OVS bonds allow for several load-balancing modes, including active-backup, balance-slb, and
balance-tcp. Administrators can also activate LACP for a bond to negotiate link aggregation with
a physical switch. Because the bond_mode setting is not specified during installation, it defaults
to active-backup, which is the configuration we recommend.
The following diagram illustrates the networking configuration of a single host immediately after
imaging. The best practice is to use only the 10 Gb NICs and to disconnect the 1 Gb NICs if you
do not need them. For additional information on bonds, please refer to the Best Practices section
below.

Note: Only utilize NICs of the same speed within the same bond.

Figure 3: Post-Imaging Network State


4.5. Virtual Local Area Networks (VLANs)


AHV supports the use of VLANs for the CVM, AHV host, and user VMs. We discuss the steps for
assigning VLANs to the AHV host and CVM in the Best Practices section below. You can easily
create and manage virtual NICs and networks for user VMs in the Prism GUI, the Acropolis CLI
(aCLI), or the REST API without any additional AHV host configuration.
Each virtual network in AHV maps to a single VLAN and bridge. In the following figure, we're
using Prism to assign a friendly network name of Production and VLAN ID 27 for a user VM
network on the default bridge, br0.

Figure 4: Prism UI Network Creation

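To create the same network from the aCLI instead of Prism, a one-line example follows, using the same name and VLAN ID shown in the figure; omitting vswitch_name places the network on the default bridge, br0.
nutanix@CVM$ acli net.create Production vlan=27
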
By default, all virtual NICs are created in access mode, which permits only one VLAN per virtual
network. However, you can choose to configure a virtual NIC in trunked mode instead, allowing
multiple VLANs on a single VM NIC for network-aware user VMs. For more information on virtual
NIC modes, please refer to the Best Practices section below.

4.6. IP Address Management (IPAM)


In addition to network creation and VLAN management, AHV also supports IP address
management (IPAM), as shown in the figure below. IPAM allows AHV to assign IP addresses
automatically to VMs using DHCP. Administrators can configure each virtual network with a
specific IP subnet, associated domain settings, and group of IP address pools available for

assignment. AHV uses VXLAN and OpenFlow rules in OVS to intercept DHCP requests from
user VMs and fill those requests with the configured IP address pools and settings.

Figure 5: IPAM

Administrators can use AHV with IPAM to deliver a complete virtualization deployment, including
network management, from the unified Prism interface. This capability radically simplifies the
complex network management traditionally associated with provisioning VMs and assigning
network addresses. To avoid address overlap, be sure to work with your network team to reserve
a range of addresses for VMs before enabling the IPAM feature.
AHV assigns an IP address from the address pool when creating a managed VM NIC; the
address releases back to the pool when the VM NIC or VM is deleted. In a managed network,
AHV intercepts DHCP requests and bypasses traditional network-based DHCP servers.
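
As a sketch of how a managed network might be defined from the aCLI, the example below creates a network with a gateway and subnet and then adds a DHCP pool. The VLAN, subnet, gateway, and pool range shown here are placeholders, and option names can vary slightly between AOS releases, so verify them against the acli help output for your version.
nutanix@CVM$ acli net.create Production vlan=27 ip_config=10.10.27.1/24
nutanix@CVM$ acli net.add_dhcp_pool Production start=10.10.27.100 end=10.10.27.200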


4.7. View Network Status


The following sections illustrate the most common methods used to view network configuration
for VMs and AHV hosts. Some information is visible in both Prism and the CLI, and we show
both outputs when available. Refer to the AHV Command Line section in the appendix for more
information on CLI usage.

Viewing Network Configuration for VMs in Prism


Select Network Configuration to view VM virtual networks under the VM page, as shown in the
figure below.

Figure 6: Prism UI Network List

You can see individual VM network details under the Table view on the VM page by selecting the
desired VM and choosing Update, as shown in the figure below.


Figure 7: Prism UI VM Network Details

Viewing AHV Host Network Configuration in Prism


Select the Network page to view VM- and host-specific networking details. When you select a
specific AHV host, Prism displays the network configuration, as shown in the figure below. For
more information on the new Network Visualization feature, refer to the Prism Web Console
guide.


Figure 8: AHV Host Network Visualization

View AHV Host Network Configuration in the CLI


You can view Nutanix AHV network configuration in detail using aCLI, native AHV, and OVS
commands as shown in the appendix. The following sections outline basic administration tasks
and the commands needed to review and validate a configuration.
Administrators can perform all management operations through the Prism web interface and
APIs or through SSH access to the Controller VM.

Tip: For better security and a single point of management, avoid connecting directly
to the AHV hosts. All AHV host operations can be performed from the CVM by
connecting to 192.168.5.1, the internal management address of the AHV host.


View Physical NIC Status from the CVM


nutanix@CVM$ manage_ovs --bridge_name br0 show_uplinks
Uplink ports: bond0
Uplink ifaces: eth3 eth2
nutanix@CVM$ manage_ovs show_interfaces
name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000

View OVS Bond Status from the AHV Host


nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show bond0"
---- bond0 ----
bond_mode: active-backup
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
slave eth2: enabled
may_enable: true
slave eth3: enabled
active slave
may_enable: true


View OVS Bridge Configuration from the AHV Host


nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl show"
4d060060-0f23-4e47-bbea-a28c344525cc
Bridge "br0"
Port "vnet0"
Interface "vnet0"
Port "br0-dhcp"
Interface "br0-dhcp"
type: vxlan
options: {key="2", remote_ip="10.4.56.84"}
Port "br0"
Interface "br0"
type: internal
Port "br0-arp"
Interface "br0-arp"
type: vxlan
options: {key="2", remote_ip="192.168.5.2"}
Port "bond0"
Interface "eth2"
Interface "eth3"
ovs_version: "2.3.2"


View VM Network Configuration from a CVM Using the aCLI


We have taken the example output below from AOS 4.5 with unmanaged networks. Output from
other versions or managed networks may differ slightly.
nutanix@CVM$ acli
<acropolis> net.list
Network name Network UUID Type Identifier
Production ea8468ec-c1ca-4220-bc51-714483c6a266 VLAN 27
vlan.0 a1850d8a-a4e0-4dc9-b247-1849ec97b1ba VLAN 0
<acropolis> net.list_vms vlan.0
VM UUID VM name MAC address
7956152a-ce08-468f-89a7-e377040d5310 VM1 52:54:00:db:2d:11
47c3a7a2-a7be-43e4-8ebf-c52c3b26c738 VM2 52:54:00:be:ad:bc
501188a6-faa7-4be0-9735-0e38a419a115 VM3 52:54:00:0c:15:35


5. AHV Networking Best Practices


The main best practice for AHV networking is to keep things simple. The default networking
configuration, with two 10 Gb adapters using active-backup, provides a highly available
environment that performs well. Nutanix CVMs and AHV hosts communicate in the native
untagged VLAN, and tagged VLANs serve guest VM traffic. Use this basic configuration unless
there is a compelling business requirement for advanced configuration.

5.1. Open vSwitch Bridge and Bond Recommendations


This section addresses advanced bridge and bond configuration scenarios using a combination
of CLI commands that we describe in more detail in the AHV Command Line section in the
appendix. Identify the scenario that best matches the desired use case and follow those
instructions.

Table 2: Bridge and Bond Use Cases

2x 10 Gb (no 1 Gb): Use when you can send all CVM and user VM traffic over the same pair of
10 Gb adapters. Compatible with any load balancing algorithm. Default configuration for easy setup.

2x 10 Gb and 2x 1 Gb separated: Use when you need an additional, separate pair of physical
1 Gb adapters for VM traffic that must be isolated to another adapter or physical switch. Keep the
CVM traffic on the 10 Gb network. You can either allow user VM traffic on the 10 Gb network or
force it onto the 1 Gb network. Compatible with any load balancing algorithm.

4x 10 Gb and 2x 1 Gb separated: Use to physically separate CVM traffic from user VM traffic
while still providing 10 Gb connectivity for both traffic types. Compatible with any load balancing
algorithm.

4x 10 Gb combined and 2x 1 Gb separated: Use to provide additional bandwidth and failover
capacity to the CVM and user VMs sharing four 10 Gb adapters in the same bond. We recommend
using balance-slb or LACP with balance-tcp to take advantage of all adapters. This case is not
illustrated in the diagrams below.


Administrators can view and change OVS configuration from the CVM command line with the
manage_ovs command. To execute a single command on every Nutanix CVM in a cluster, use
the allssh shortcut described in the AHV Command Line appendix.

Note: The order in which flags and actions pass to manage_ovs is critical. Flags
must come first. Any flag passed after an action is not parsed.
nutanix@CVM$ manage_ovs --help
USAGE: manage_ovs [flags] <action>

To list all physical interfaces on all nodes, use the show_interfaces action. The show_uplinks
action returns the details of a bonded adapter for a single bridge.

Note: If you do not enter a bridge_name, the action reports on the default bridge,
br0.
nutanix@CVM$ allssh "manage_ovs show_interfaces"
nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> show_uplinks"

The update_uplinks action takes a comma-separated list of interfaces and configures these
into a single uplink bond in the specified bridge. All members of the bond must be physically
connected, or the manage_ovs command produces a warning and exits without configuring the
bond. To avoid this error and provision members of the bond even if they are not connected, use
the --require_link=false flag.
nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> --interfaces <interfaces> update_uplinks"
nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> --interfaces <interfaces> --require_link=false update_uplinks"


Scenario 1: 2x 10 Gb

Figure 9: Network Connections for 2x 10 Gb NICs

The most common network configuration is to utilize the 10 Gb interfaces within the default
bond for all networking traffic. The CVM and all user VMs use the 10 Gb interfaces. In this
configuration, we don't use the 1 Gb interfaces. Note that this is different from the factory
configuration, because we have removed the 1 Gb interfaces from the OVS bond.
This scenario uses two physical upstream switches, and each 10 Gb interface within a bond
plugs into a separate physical switch for high availability. Within each bond, only one physical
interface is active when using the default active-backup load balancing mode. See the Load
Balancing within Bond Interfaces section below for more information and alternate configurations.
Remove NICs that are not in use from the default bond, especially when they are of different
speeds. To do so, perform the following manage_ovs action for each Nutanix node in the cluster,
or use the allssh shortcut to run it on all CVMs at once:
From the CVM, remove eth0 and eth1 from the default bridge br0 on all CVMs by specifying
that only eth2 and eth3 remain in the bridge. The 10g shortcut lets you include all 10 Gb
interfaces without having to specify the interfaces explicitly by name. Some Nutanix models
have different ethX names for 1 Gb and 10 Gb links, so this shortcut is helpful:
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g
update_uplinks"


Note: Running this command changes the name of the bond for bridge br0 to bond0
from the default, br0-up. We suggest keeping the new name, bond0, for simple
identification, but you can use any bond name.

Scenario 2: 2x 10 Gb and 2x 1 Gb Separated

Figure 10: Network Connections for 2x 10 Gb and 2x 1 Gb NICs

If you need the 1 Gb physical interfaces, separate the 10 Gb and 1 Gb interfaces into different
bridges and bonds to ensure that CVM and user VM traffic always traverses the fastest possible
link.

Note: Links of different speeds cannot be mixed in the same bond.

Here, we've grouped the 10 Gb interfaces (eth2 and eth3) into bond0 and dedicated them to the
CVM and User VM1. We've grouped the 1 Gb interfaces into bond1; only a second link on User
VM2 uses them. Bond0 and bond1 are added into br0 and br1, respectively.
In this configuration, the CVM and user VMs use the 10 Gb interfaces. Bridge br1 is available
for VMs that require physical network separation from the CVM and VMs on br0. Devices eth0
and eth1 could alternatively plug into a different pair of upstream switches for further traffic
separation.


Perform the following actions for each Nutanix node in the cluster to achieve the configuration
shown above:
On each AHV host, add bridge br1. You can reach the AHV host local to the CVM with the
local 192.168.5.1 interface address. Bridge names must be 10 characters or fewer. We
suggest using the name br1. Repeat the following command once on each CVM in the cluster:
nutanix@CVM$ ssh [email protected] "ovs-vsctl add-br br1"

Alternatively, perform the previous command using hostssh, which applies it to all Nutanix
nodes in the cluster.
nutanix@CVM$ hostssh "ovs-vsctl add-br br1"

From the CVM, remove eth0 and eth1 from the default bridge br0 on all CVMs, as described
in the first scenario. You can use the allssh shortcut here, but first run the appropriate show
commands to make sure that all interfaces are in a good state before executing the update.
nutanix@CVM$ allssh "manage_ovs show_interfaces"
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 show_uplinks"

The output from the show commands above should clarify that the 10 Gb interfaces have
connectivity to the upstream switches; just look for the columns labeled link and speed. Once
you've confirmed connectivity, update the bond to include only 10 Gb interfaces.
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g
update_uplinks"

Add the eth0 and eth1 uplinks to br1 in the CVM using the 1g interface shortcut.

Note: Use the --require_link=false flag to create the bond even if not all 1g adapters are connected.
nutanix@CVM$ allssh "manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 1g --require_link=false update_uplinks"

Now that a bridge, br1, exists just for the 1 Gb interfaces, you can create networks for "User
VM2" with the following global aCLI command. Putting the bridge name in the network name
is helpful when viewing the network in the Prism GUI. In this example, Prism shows a network
named "br1_vlan99" to indicate that this network sends VM traffic over VLAN 99 on bridge
br1.
nutanix@CVM$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99


Scenario 3: 4x 10 Gb and 2x 1 Gb Separated

Figure 11: Network Connections for 4x 10 Gb and 2x 1 Gb NICs

Nutanix servers are also available with four 10 Gb adapters. In this configuration Nutanix
recommends dedicating two 10 Gb adapters to CVM traffic and two 10 Gb adapters for user
VM traffic, thus separating user and CVM network traffic. Alternatively, for increased data
throughput, you can combine all 10 Gb adapters into a single bridge and bond with advanced
load balancing. Split the 1 Gb adapters into a dedicated bond for user VMs if required; otherwise,
these interfaces can remain unused. In the diagram above we show all interfaces connected to
the same pair of switches, but you could use separate switches for each bond to provide further
traffic segmentation.
To achieve this configuration, perform the following actions for each Nutanix node in the cluster:
On each AHV host, add bridge br1 and br2 from the CVM. Bridge names must be 10
characters or fewer. Use hostssh to execute the command against all AHV hosts in the
cluster.
nutanix@CVM$ hostssh "ovs-vsctl add-br br1"
nutanix@CVM$ hostssh "ovs-vsctl add-br br2"

On the CVM, remove eth0, eth1, eth2, and eth3 from the default bridge br0 on all CVMs by
specifying that only eth4 and eth5 remain in the bridge. We can't use the 10 Gb interface
shortcut here because only two of the four 10 Gb adapters are needed in the bond. If all
Nutanix nodes in the cluster share identical network configuration, use the allssh shortcut.
Again, confirm the network link using show commands before executing these changes.
nutanix@CVM$ allssh "manage_ovs show_interfaces"
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 show_uplinks"
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 --bond_name bond0 --interfaces eth4,eth5
update_uplinks"

Add the eth2 and eth3 uplinks to br1 in the CVM and eth0 and eth1 to br2.

Note: Use the --require_link=false flag to create the bond even if not all adapters are connected.
nutanix@CVM$ allssh "manage_ovs --bridge_name br1 --bond_name bond1 --interfaces eth2,eth3 update_uplinks"
nutanix@CVM$ allssh "manage_ovs --bridge_name br2 --bond_name bond2 --interfaces eth0,eth1 update_uplinks"

You can now create networks in these new bridges using the same syntax as in the previous
scenarios in the aCLI. Once you've created the networks in the aCLI, you can view them in the
Prism GUI, so it's helpful to include the bridge name in the network name.
nutanix@CVM$ acli net.create <net_name> vswitch_name=<br_name> vlan=<vlan_num>

For example:
nutanix@CVM$ acli net.create br1_production vswitch_name=br1 vlan=1001
nutanix@CVM$ acli net.create br2_production vswitch_name=br2 vlan=2001

5.2. Load Balancing within Bond Interfaces


OVS connects the AHV host to the network via bond interfaces. Each of these bonds contains
multiple physical interfaces that can connect to one or more physical switches. To build a fault-
tolerant network connection between the AHV host and the rest of the network, connect the
physical interfaces to separate physical switches, as shown in the diagrams above.
A bond distributes its traffic between multiple physical interfaces according to the bond mode.


Table 3: Load Balancing Use Cases

active-backup: Default configuration, which transmits all traffic over a single active adapter.
Maximum VM NIC throughput*: 10 Gb. Maximum host throughput*: 10 Gb.

balance-slb: Increases host bandwidth utilization beyond a single 10 Gb adapter. Places each
VM NIC on a single adapter at a time. Maximum VM NIC throughput*: 10 Gb. Maximum host
throughput*: 20 Gb.

LACP and balance-tcp: Increases host and VM bandwidth utilization beyond a single 10 Gb
adapter by balancing each VM NIC TCP session on a different adapter. Also used when network
switches require LACP negotiation. Maximum VM NIC throughput*: 20 Gb. Maximum host
throughput*: 20 Gb.

LACP and balance-slb: Combines adapters for fault tolerance, but sources VM NIC traffic from
only a single adapter at a time. This use case is not illustrated in the diagrams below. Maximum
VM NIC throughput*: 10 Gb. Maximum host throughput*: 20 Gb.

* Assuming 2x 10 Gb adapters

Active-Backup
The default bond mode is active-backup, where one interface in the bond carries traffic and other
interfaces in the bond are used only when the active link fails. Active-backup is the simplest bond
mode, easily allowing connections to multiple upstream switches without any additional switch
configuration. The downside is that traffic from all VMs uses only the single active link within
the bond. All backup links remain unused until the active link fails. In a system with dual 10 Gb
adapters, the maximum throughput of all VMs running on a Nutanix node is limited to 10 Gbps,
the speed of a single link.


Figure 12: Active-Backup Fault Tolerance

Active-backup mode is enabled by default, but you can also configure it with the following AHV
command:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 bond_mode=active-backup"

View the bond mode and current active interface with the following AHV command:
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show"


In the active-backup configuration, this command's output would be similar to the following,
where eth2 is the active and eth3 is the backup interface:
---- bond0 ----
bond_mode: active-backup
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
slave eth2: enabled
active slave
may_enable: true
slave eth3: enabled
may_enable: true

Balance-SLB
To take advantage of the bandwidth provided by multiple upstream switch links, you can use
the balance-slb bond mode. The balance-slb bond mode in OVS takes advantage of all links
in a bond and uses measured traffic load to rebalance VM traffic from highly used to less used
interfaces. When the configurable bond-rebalance interval expires, OVS uses the measured
load for each interface and the load for each source MAC hash to spread traffic evenly among
links in the bond. Traffic from some source MAC hashes may move to a less active link to more
evenly balance bond member utilization. Perfectly even balancing may not always be possible,
depending on the number of traffic streams and their sizes.
Each individual VM NIC uses only a single bond member interface at a time, but a hashing
algorithm distributes multiple VM NICs (multiple source MAC addresses) across bond member
interfaces. As a result, it is possible for a Nutanix AHV node with two 10 Gb interfaces to use up
to 20 Gbps of network throughput, while individual VMs have a maximum throughput of 10 Gbps.


Figure 13: Balance-SLB Load Balancing

The default rebalance interval is 10 seconds, but Nutanix recommends setting this interval to
60 seconds to avoid excessive movement of source MAC address hashes between upstream
switches. Nutanix has tested this configuration using two separate upstream switches with AHV.
No additional configuration (such as link aggregation) is required on the switch side, as long as
the upstream switches are interconnected physically or virtually and both uplinks trunk the same
VLANs.
Configure the balance-slb algorithm for each bond on all AHV nodes in the Nutanix cluster with
the following commands:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 bond_mode=balance-slb"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 other_config:bond-rebalance-
interval=60000"

Repeat this configuration on all CVMs in the cluster.


Verify the proper bond mode on each CVM with the following commands:
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show bond0"
---- bond0 ----
bond_mode: balance-slb
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
next rebalance: 59108 ms
lacp_status: off
slave eth2: enabled
may_enable: true
hash 120: 138065 kB load
hash 182: 20 kB load
slave eth3: enabled
active slave
may_enable: true
hash 27: 0 kB load
hash 31: 20 kB load
hash 104: 1802 kB load
hash 206: 20 kB load

LACP and Link Aggregation


Taking full advantage of the bandwidth provided by multiple links to upstream switches from a
single VM requires link aggregation in OVS using LACP and balance-tcp.

Note: Ensure that you've appropriately configured the upstream switches before
enabling LACP. On the switch, link aggregation is commonly referred to as port
channel or LAG, depending on the switch vendor. Using multiple upstream switches
may require additional configuration such as MLAG or vPC.

With LACP, multiple links to separate physical switches appear as a single layer-2 link. A traffic-
hashing algorithm such as balance-tcp can split traffic between multiple links in an active-
active fashion. Because the uplinks appear as a single L2 link, the algorithm can balance traffic
among bond members without any regard for switch MAC address tables. Nutanix recommends
using balance-tcp when LACP is configured, because each TCP stream from a single VM can
potentially use a different uplink in this configuration. With link aggregation, LACP, and balance-tcp,
a single user VM with multiple TCP streams could use up to 20 Gbps of bandwidth in an
AHV node with two 10 Gb adapters.

Figure 14: LACP and Balance-TCP Load Balancing

Configure LACP and balance-tcp with the commands below on all Nutanix CVMs in the cluster.

Note: You must configure upstream switches for LACP before configuring the AHV
host from the CVM.

If upstream LACP negotiation fails, the default configuration disables the bond, thus blocking all
traffic. The following command allows fallback to active-backup bond mode in the event of LACP
negotiation failure:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 other_config:lacp-fallback-
ab=true"

Next, enable LACP negotiation and set the hash algorithm to balance-tcp.
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 lacp=active"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 bond_mode=balance-tcp"


Confirm the LACP negotiation with the upstream switch or switches using ovs-appctl, looking for
the word "negotiated" in the status lines.
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show bond0"
nutanix@CVM$ ssh [email protected] "ovs-appctl lacp/show bond0"

Storage Traffic between CVMs


Using active-backup or any other OVS load balancing method, it is not possible to select the
active adapter for the CVM in a way that is persistent between host reboots. When multiple
uplinks from the AHV host connect to multiple switches, ensure that adequate bandwidth
exists between these switches to support Nutanix CVM replication traffic between nodes.
Nutanix recommends redundant 40 Gbps or faster connections between switches. A leaf-spine
configuration or direct inter-switch link can satisfy this recommendation.

5.3. VLANs for AHV Hosts and CVMs


The recommended VLAN configuration is to place the CVM and AHV host in the default native
untagged VLAN as shown in the figure below. Neither the CVM nor the AHV host requires
special configuration with this option. Configure the switch to trunk VLANs for guest VM networks
to the AHV host using standard 802.1Q VLAN tags. Also, configure the switch to send and
receive untagged traffic for the CVM and AHV host's VLAN. Choose any VLAN other than 1 as
the native untagged VLAN on ports facing AHV hosts.
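
As an illustration only, an upstream switch port configuration matching this recommendation might resemble the following Cisco-style snippet, where VLAN 10 is the native untagged CVM and AHV VLAN and VLAN 27 carries tagged guest VM traffic. The interface name and VLAN IDs are placeholders, and the exact syntax depends on your switch vendor.
interface Ethernet1/1
  description AHV host uplink
  switchport mode trunk
  switchport trunk native vlan 10
  switchport trunk allowed vlan 10,27
  spanning-tree port type edge trunk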


Figure 15: Default Untagged VLAN for CVM and AHV Host

The setup above works well for situations where the switch administrator can set the CVM and
AHV VLAN to untagged. However, if you do not want to send untagged traffic to the AHV host
and CVM, or if the security policy doesn't allow this configuration, you can add a VLAN tag to the
host and the CVM with the following arrangement.


Figure 16: Tagged VLAN for CVM and AHV Host

Configure VLAN tags on br0 on every AHV host in the cluster.


nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0 tag=10"
nutanix@CVM$ ssh [email protected] "ovs-vsctl list port br0"

Configure VLAN tags for the CVM on every CVM in the Nutanix cluster.
nutanix@CVM$ change_cvm_vlan 10

Tip: Perform the commands above individually on every AHV host and CVM.
Nutanix does not recommend using the allssh or hostssh shortcuts to set VLAN tags,
because if one switch is misconfigured, one shortcut could disconnect all Nutanix
nodes in a cluster.

If you need network segmentation between storage data and management traffic within the
CVM to meet security requirements, please see KB article KB-2748 for AHV.


5.4. Virtual NIC Trunking


When using the aCLI, you can create a virtual NIC in trunked mode, which allows multiple VLANs
on a single virtual NIC. With the aCLI, you can also change an existing virtual NIC from access
mode (with only a single VLAN) to trunked mode and modify an existing trunked virtual NIC's
VLANs. To modify an existing virtual NIC, you must power off the VM.
To create a new virtual NIC in network vlan100 with access to VLANs 200 and 300 for VM1:
nutanix@CVM$ acli
<acropolis> vm.nic_create VM1 network=vlan100 vlan_mode=kTrunked trunked_networks=200,300
NicCreate: complete
<acropolis> vm.get VM1
(truncated output)
nic_list {
mac_addr: "52:54:00:76:8b:2c"
network_name: "vlan100"
network_uuid: "f0d549f7-0717-4712-9a7d-ac3b513b74e3"
trunked_networks: 200
trunked_networks: 300
vlan_mode: "kTrunked"
}

To modify an existing virtual NIC:


nutanix@CVM$ acli
<acropolis> vm.nic_update VM1 52:54:00:76:8b:2c update_vlan_trunk_info=true vlan_mode=kTrunked trunked_networks=200,300,400
NicUpdate: completed


6. Conclusion
Nutanix recommends using the default AHV networking settings, configured via the Prism GUI,
for most Nutanix deployments. But when your requirements demand specific configuration
outside the defaults, this advanced networking guide provides detailed CLI configuration
examples that can help.
Administrators can use the Nutanix CLI to configure advanced networking features on a single
host or, conveniently, on all hosts at once. VLAN trunking for guest VMs allows a single VM NIC
to pass traffic on multiple VLANs for network-intensive applications. You can apply VLAN tags to
the AHV host and CVM in situations that require all traffic to be tagged. Grouping host adapters
in different ways can provide physical traffic isolation or allow advanced load balancing and link
aggregation to provide maximum throughput and redundancy for VMs and hosts.
With these advanced networking techniques, administrators can configure a Nutanix system with
AHV to meet the demanding requirements of any VM or application.
For feedback or questions, please contact us using the Nutanix NEXT Community forums.


Appendix

AHV Networking Terminology

Table 4: Networking Terminology Matrix

AHV Term | VMware Term | Microsoft Hyper-V or SCVMM Term
Bridge | vSwitch, Distributed Virtual Switch | Virtual Switch, Logical Switch
Bond | NIC Team | Team or Uplink Port Profile
Port | Port | N/A
Network | Port Group | VLAN Tag or Logical Network
Uplink | pNIC or vmnic | Physical NIC or pNIC
VM NIC | vNIC | VM NIC
Internal Port | VMkernel Port | Virtual NIC
Active-Backup | Active-Standby | Active-Standby
Balance-slb | Route based on source MAC hash combined with route based on physical NIC load | Switch Independent / Dynamic
LACP with balance-tcp | LACP and route based on IP hash | Switch Dependent (LACP) / Address Hash

AHV Networking Best Practices Checklist


Command line
Use the Prism GUI network visualization feature before resorting to the command line.
Use the allssh and hostssh shortcuts primarily with view and show commands. Use
extreme caution with commands that make configuration changes, as the shortcuts execute
them on every CVM or AHV host.


Connect to a CVM instead of to the AHV hosts when using SSH. Use the hostssh or
192.168.5.1 shortcut for any AHV host operation.
For high availability, connect to the cluster Virtual IP (VIP) for cluster-wide commands
entered in the aCLI rather than to a single CVM.
Open vSwitch
Do not modify the OpenFlow tables associated with the default OVS bridge br0; the AHV
host, CVM, and IPAM rely on this bridge.
While it is possible to set QoS policies and other network configuration on the VM tap
interfaces manually (using the ovs-vsctl command), we do not recommend it. Policies are
ephemeral and do not persist across VM power cycles or migrations between hosts.
Do not delete or rename OVS bridge br0.
Do not modify the native Linux bridge virbr0.
OVS bonds
Aggregate the 10 Gb interfaces on the physical host to an OVS bond named bond0 on the
default OVS bridge br0 and trunk VLANs to these interfaces on the physical switch.
Use active-backup load balancing unless you have a specific need for balance-slb or LACP
with balance-tcp.
Create a separate bond and bridge for the connected 1 Gb interfaces, or remove them from
the primary bond0.
Do not include 1 Gb interfaces in the same bond or bridge as the 10 Gb interfaces.
If required, connect the 1 Gb interfaces to different physical switches than the 10 Gb interfaces
to provide physical network separation for user VMs.
Use LACP with balance-tcp only if guest VMs require link aggregation. Ensure that you
have completed LACP configuration on the physical switches first.
Physical network layout
Use redundant top-of-rack switches in a leaf-spine architecture. This simple, flat network
design is well suited for a highly distributed, shared-nothing compute and storage
architecture.
Add all the nodes that belong to a given cluster to the same layer-2 network segment.
If you need more east-west traffic capacity, add spine switches.
Use redundant 40 Gbps (or faster) connections to ensure adequate bandwidth between
upstream switches.
Upstream physical switch specifications


Connect the 10 Gb uplink ports on the AHV node to nonblocking ports on datacenter-class
switches that provide line-rate traffic throughput.
Use an Ethernet switch that has a low-latency, cut-through design, and that provides
predictable, consistent traffic latency regardless of packet size, traffic pattern, or the
features enabled on the 10 Gb interfaces. Port-to-port latency should be no higher than two
microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on switch ports that are
connected to the AHV host.
Switch and host VLANs
Keep the CVM and AHV in the same VLAN. By default, the CVM and the hypervisor
are assigned to VLAN 0, which effectively places them on the native untagged VLAN
configured on the upstream physical switch.
Configure switch ports connected to AHV as VLAN trunk ports.
Configure any dedicated native untagged VLAN other than 1 on switch ports facing AHV
hosts to carry only CVM and AHV traffic.
Guest VM VLANs
Configure guest VM network VLANs on br0 using the Prism GUI.
Use VLANs other than the dedicated CVM and AHV VLAN.
Use the aCLI to add guest VM network VLANs for additional bridges, and include the
bridge name in the network name for easy bridge identification.
Use VM NIC VLAN trunking only in cases where guest VMs require multiple VLANs on the
same NIC. In all other cases, add a new VM NIC with a single VLAN to bring new VLANs to
guest VMs.
CVM network configuration
Do not remove the CVM from either the OVS bridge br0 or the native Linux bridge virbr0.
If necessary, you can separate network traffic for CVM management and storage by adding
a virtual NIC to the CVM following KB-2748. You have the option to connect this additional
virtual NIC to a separate physical network.
Jumbo frames
Nutanix does not currently recommend jumbo frames when using AHV. Performance
improvements are generally not significant when switching from 1,500 byte frames to 9,000
byte frames.
IP address management


Coordinate the configuration of IP address pools to avoid address overlap with existing
network DHCP pools.
Confirm IP address availability with the network administrator before configuring an IPAM
address pool in AHV.
IPMI ports
Do not trunk multiple VLANs to switch ports that connect to the IPMI interface. For
management simplicity, only configure the IPMI switch ports as access ports in a single
VLAN.
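
As an illustration, a hypothetical access port configuration for an IPMI interface might look like the following Cisco-style snippet; the interface name and VLAN ID are placeholders, and syntax varies by switch vendor.
interface Ethernet1/48
  description IPMI
  switchport mode access
  switchport access vlan 20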

AHV Command Line Tutorial


Nutanix systems have a number of command line utilities that make it easy to inspect the status
of network parameters and adjust advanced attributes that may not be available in the Prism
GUI. In this section, we address the three primary locations where you can enter CLI commands.
The first such location is in the CVM BASH shell. A command entered here takes effect locally on
a single CVM. Administrators can also enter CLI commands in the CVM aCLI shell. Commands
entered in aCLI operate on the level of an entire Nutanix cluster, even though you're accessing
the CLI from one CVM. Finally, administrators can enter CLI commands in an AHV host's BASH
shell. Commands entered here take effect only on that AHV host. The diagram below illustrates
the basic CLI locations.

Figure 17: Command Line Operation Overview


CLI shortcuts exist to make cluster management a bit easier. Often, you need to execute a
command on all CVMs, or on all AHV hosts, rather than on just a single host. It would be tedious
to log on to every system and enter the same command on each of them, especially in a large
cluster. That's where the allssh and hostssh shortcuts come in. allssh takes a given command
entered on the CVM BASH CLI and executes that command on every CVM in the cluster.
hostssh works similarly, taking a command entered on the CVM BASH CLI and executing that
command on every AHV host in the cluster, as shown in the figure above.
To streamline the management of CVMs and AHV hosts, the SSH shortcut connects a single
CVM directly to the local AHV host. From any single CVM, you can use SSH to connect to the
AHV host's local address at IP address 192.168.5.1. Similarly, any AHV host can SSH to the
local CVM using the IP address 192.168.5.2. This SSH connection uses the internal Linux bridge
virbr0, which we discuss in the sections below.
Let's take a look at a few examples to demonstrate the usefulness of these commands.

Example 1: allssh
Imagine that we need to determine which network interfaces are plugged in on all nodes in the
cluster, and the link speed of each interface. We could use manage_ovs show_interfaces at
each CVM, but instead let's use the allssh shortcut. First, SSH into any CVM in the cluster as
the nutanix user, then execute the command allssh "manage_ovs show_interfaces" at the CVM
BASH shell:
nutanix@NTNX-A-CVM:10.0.0.25:~$ allssh "manage_ovs show_interfaces"


In the sample output below, we've truncated the results after the second node to save space.
Executing manage_ovs show_interfaces on the cluster
================== 10.0.0.25 =================
name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000
Connection to 10.0.0.25 closed.
================== 10.0.0.26 =================
name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000
Connection to 10.0.0.26 closed.

Example 2: hostssh
If we wanted to view the MAC address of the eth0 interface on every AHV host, we could connect
to each AHV host individually and use ifconfig eth0. To make things faster, let's use the hostssh
shortcut instead. In this example, we still use SSH to connect to the CVM BASH shell, then prefix
our desired command with hostssh.
nutanix@NTNX-A-CVM:10.0.0.25:~$ hostssh "ifconfig eth0 | grep HWaddr"
============= 10.0.0.23 ============
eth0 Link encap:Ethernet HWaddr 0C:C4:7A:46:B1:FE
============= 10.0.0.22 ============
eth0 Link encap:Ethernet HWaddr 0C:C4:7A:46:B2:4E


Example 3: aCLI
Administrators can use the aCLI shell to view Nutanix cluster information that might not be easily
available in the Prism GUI. For example, let's list all of the VMs in a given network. First, connect
to any CVM using SSH, then enter the aCLI.
nutanix@NTNX-A-CVM:10.0.0.25:~$ acli
<acropolis> net.list_vms 1GBNet
VM UUID VM name MAC address
0d6afd4a-954d-4fe9-a184-4a9a51c9e2c1 VM2 50:6b:8d:cb:1b:f9

Example 4: ssh root@192.168.5.1


The shortcut between the CVM and AHV host can be helpful when we're connected directly to
a CVM but need to view some information or execute a command against the local AHV host
instead. In this example, we're verifying the localhost line of the /etc/hosts file on the AHV host
while we're already connected to the CVM.
nutanix@NTNX-14SM36510031-A-CVM:10.0.0.25:~$ ssh root@192.168.5.1 "cat /etc/hosts | grep 127"
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

Example 5: hostssh Building Blocks


Before the hostssh command was available, the only way to execute a command on every single
host was to use the allssh command in combination with the ssh root@192.168.5.1 shortcut. As
in the hostssh example above, here we're viewing eth0 MAC addresses, but this time we use
allssh and the shortcut instead.

Note: Watch carefully for the double and single quotes that encapsulate these
commands.
nutanix@NTNX-A-CVM:10.0.0.25:~$ allssh "ssh root@192.168.5.1 'ifconfig eth0 | grep HWaddr' "
Executing ssh root@192.168.5.1 'ifconfig eth0 | grep HWaddr' on the cluster
================== 10.0.0.25 =================
eth0 Link encap:Ethernet HWaddr 0C:C4:7A:46:B1:78
Connection to 10.0.0.25 closed.
================== 10.0.0.26 =================
eth0 Link encap:Ethernet HWaddr 0C:C4:7A:46:B2:4E
Connection to 10.0.0.26 closed.

With these command line utilities, we can manage a large number of Nutanix nodes at once.
Centralized management helps administrators apply configuration consistently and verify
configuration across a number of servers.


AHV Networking Command Examples


Network view commands
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 show_uplinks"
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show bond0"
nutanix@CVM$ ssh [email protected] "ovs-vsctl show"
nutanix@CVM$ acli
<acropolis> net.list
<acropolis> net.list_vms vlan.0
nutanix@CVM$ manage_ovs --help
nutanix@CVM$ allssh "manage_ovs show_interfaces"
nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> show_uplinks"
nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> --interfaces <interfaces>
update_uplinks"
nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> --interfaces <interfaces> --
require_link=false update_uplinks"

Bond configuration for 2x 10 Gb


nutanix@CVM$ ssh [email protected] "ovs-vsctl add-br br1"
or
nutanix@CVM$ hostssh "ovs-vsctl add-br br1"
nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks
nutanix@CVM$ manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 1g update_uplinks
nutanix@cvm$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99

Bond configuration for 4x 10 Gb


nutanix@CVM$ ssh [email protected] "ovs-vsctl add-br br1"
nutanix@CVM$ ssh [email protected] "ovs-vsctl add-br br2"
nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name bond0 --interfaces eth4,eth5
update_uplinks
nutanix@CVM$ manage_ovs --bridge_name br1 --bond_name bond1 --interfaces eth2,eth3
update_uplinks
nutanix@CVM$ manage_ovs --bridge_name br2 --bond_name bond2 --interfaces eth0,eth1
update_uplinks
nutanix@cvm$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99
nutanix@cvm$ acli net.create br2_vlan100 vswitch_name=br2 vlan=100


Load balance view command


nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show"

Load balance active-backup configuration


nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 bond_mode=active-backup"

Load balance balance-slb configuration


nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 bond_mode=balance-slb"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 other_config:bond-rebalance-
interval=60000"
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show bond0"

Load balance balance-tcp and LACP configuration


nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 other_config:lacp-fallback-
ab=true"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 lacp=active"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port bond0 bond_mode=balance-tcp"
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show bond0"
nutanix@CVM$ ssh [email protected] "ovs-appctl lacp/show bond0"

CVM and AHV host tagged VLAN configuration


nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0 tag=10"
nutanix@CVM$ ssh [email protected] "ovs-vsctl list port br0"
nutanix@CVM$ change_cvm_vlan 10

References
1. AHV Best Practices Guide
2. AHV Administration Guide: Host Network Configuration
3. AHV Administration Guide: VM Network Configuration
4. KB-2748 CVM Network Segmentation
5. Open vSwitch Documentation
6. Prism Web Console Guide: Network Visualization

About the Author


Jason Burns is an NPP-certified Staff Solutions Architect at Nutanix, Inc. and CCIE Collaboration
#20707. He designs, tests, and documents virtual workloads on the Nutanix platform, creating
solutions that solve critical business problems.


Jason has designed and supported Unified Communications infrastructure in the enterprise for
the past decade, deploying UC to connect hundreds of thousands of end-users. Outside of his
day job, he has an unusual passion for certificates, security, and motorcycles.
Follow Jason on Twitter @bbbburns.

About Nutanix
Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that
power their business. The Nutanix Enterprise Cloud Platform leverages web-scale engineering
and consumer-grade design to natively converge compute, virtualization, and storage into
a resilient, software-defined solution with rich machine intelligence. The result is predictable
performance, cloud-like infrastructure consumption, robust security, and seamless application
mobility for a broad range of enterprise applications. Learn more at www.nutanix.com or follow us
on Twitter @nutanix.


List of Figures
Figure 1: Enterprise Cloud Platform
Figure 2: Information Life Cycle Management
Figure 3: Post-Imaging Network State
Figure 4: Prism UI Network Creation
Figure 5: IPAM
Figure 6: Prism UI Network List
Figure 7: Prism UI VM Network Details
Figure 8: AHV Host Network Visualization
Figure 9: Network Connections for 2x 10 Gb NICs
Figure 10: Network Connections for 2x 10 Gb and 2x 1 Gb NICs
Figure 11: Network Connections for 4x 10 Gb and 2x 1 Gb NICs
Figure 12: Active-Backup Fault Tolerance
Figure 13: Balance-SLB Load Balancing
Figure 14: LACP and Balance-TCP Load Balancing
Figure 15: Default Untagged VLAN for CVM and AHV Host
Figure 16: Tagged VLAN for CVM and AHV Host
Figure 17: Command Line Operation Overview


List of Tables
Table 1: Document Version History
Table 2: Bridge and Bond Use Cases
Table 3: Load Balancing Use Cases
Table 4: Networking Terminology Matrix
