Cisco ACI Multi-Pod: Deployment


Cisco ACI Multi-Pod Deployment

Introduction

• Cisco Application-Centric Infrastructure (ACI) provides different options for fabric extension and interconnection that can meet the business demands of different customers and provide an innovative, open architecture for distributed data center networking for service providers.
Cisco ACI Multi-Pod Overview
A Cisco ACI Multi-Pod design represents a single Cisco APIC cluster/single administrative domain that interconnects portions of the fabric (referred to as pods), where each pod has its own leaf-and-spine architecture.
Why do you need Cisco ACI Multi-Pod?

• To deploy an active-active disaster recovery solution for business continuity.
• To support a data center deployed in multiple server rooms.
• To provide infrastructure for a virtualization solution that supports live VM mobility across Layer 3 domains.
• To keep a single administrative domain.
Cisco ACI Multi-Pod Overview
The main infra control plane protocols that run individually in each pod are as follows:

• Intermediate System-to-Intermediate System (IS-IS): For infra tunnel endpoint (TEP) reachability within a pod.
• Council of Oracle Protocol (COOP): For endpoint information within a pod.
• VPNv4/v6 MP-BGP: For Layer 3 Outside (L3Out) route distribution within a pod.
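The state of each of these protocols can be checked from the switch CLI. The following is a sketch only; command availability and output vary by software release, and overlay-1 (the name Cisco ACI uses for the infra VRF) is assumed:

show isis adjacency vrf overlay-1 (IS-IS peering for TEP reachability, on a leaf or spine)
show coop internal info repo ep (COOP endpoint repository, on a spine)
show bgp vpnv4 unicast summary vrf overlay-1 (MP-BGP sessions for L3Out route distribution)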
Cisco ACI Multi-Pod Overview
IPN

The IPN that provides IP connectivity between pods must support specific requirements, such as Bidirectional Protocol Independent Multicast (PIM Bidir) support and acceptable latency (the IPN requirements are described in more detail later in this section), and it influences the Cisco ACI Multi-Pod use cases. There are two main use cases for the deployment of Cisco ACI Multi-Pod regarding the physical location of the different pods.
Cisco ACI Multi-Pod Overview
Based on the use cases described above, Cisco ACI Multi-Pod can be deployed in several topologies.
Inter-Pod Network Overview

The IPN connects the different Cisco ACI pods, providing pod-to-pod communication (east-west traffic) as an extension of the Cisco ACI fabric underlay infrastructure. Therefore, the IPN must support several specific functionalities to perform those connectivity functions, such as:

• Multicast support (PIM Bidir with at least /15 subnet mask), which is needed to handle Layer 2 broadcast, unknown unicast,
and multicast (BUM) traffic.
• Dynamic Host Configuration Protocol (DHCP) relay support.
• Open Shortest Path First (OSPF) support between the spine switches and the IPN routers.
• Increased maximum transmission unit (MTU) support to handle the Virtual Extensible LAN (VXLAN) encapsulated traffic.
• Quality of service (QoS) considerations for consistent QoS policy deployment across pods.
• Routed subinterface support, since the use of subinterfaces on the IPN devices is mandatory for the connections toward the spine switches (the traffic originating from the spine switch interfaces is always tagged with an 802.1Q VLAN 4 value).
• LLDP support, which must be enabled on the IPN devices (a minimal feature-enablement sketch follows this list).
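As a minimal sketch of the feature enablement implied by the list above, assuming Cisco Nexus switches are used as the IPN devices:

feature ospf
feature pim
feature dhcp
feature lldp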
Inter-Pod Network Overview

Multicast Support

Besides unicast communication, the east-west traffic in a Cisco ACI Multi-Pod solution contains Layer 2 multidestination flows belonging to bridge domains that are extended across pods. This type of traffic is referred to as BUM; it is encapsulated into VXLAN multicast frames and transmitted to all the local leaf nodes inside the pod. For this purpose, a unique multicast group, known as the Bridge Domain Group IP outer (BD GIPo), is associated with each defined bridge domain. This multicast group is dynamically picked from the multicast range configured during the initial APIC setup script. Once the traffic is received by the leaf switches, it is either forwarded to the connected devices that are part of that bridge domain or dropped, depending on the specific bridge domain configuration.
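For example, assuming the default GIPo range of 225.0.0.0/15 was accepted in the APIC setup script, the IPN devices could scope the bidirectional RP to that range (the RP address and VRF name are taken from the configuration example later in this section):

vrf context VRF-MutiPod
  ip pim rp-address 10.1.1.1 group-list 225.0.0.0/15 bidir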

DHCP Relay Support


The IPN must support DHCP relay, which enables you to auto-provision the configuration for all the Cisco ACI devices deployed in remote pods. The devices can join the Cisco ACI Multi-Pod fabric with zero-touch configuration, similar to a single-pod fabric.
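A minimal sketch of the relay configuration on a spine-facing IPN subinterface, assuming hypothetical APIC addresses 10.0.0.1 through 10.0.0.3 from the Pod 1 TEP pool:

feature dhcp
interface Ethernet1/1.4
  ip dhcp relay address 10.0.0.1
  ip dhcp relay address 10.0.0.2
  ip dhcp relay address 10.0.0.3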
Inter-Pod Network Overview

OSPF Support

OSPFv2 is the only routing protocol (in addition to static routing) supported for connectivity between the IPN and the spine switches. It is used to advertise the TEP address ranges to the other pods. As long as the IPN devices and the spines are OSPF neighbors, any protocol can be used to carry the TEP information from one IPN device to another. For example, the network between the IPN devices in Pod 1 and Pod 2 can run MPLS, as long as it carries the OSPF routes learned from the spines in each pod.
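Once the adjacency between a spine and an IPN device is up, the remote TEP pool should appear as an OSPF route. A verification sketch, assuming the VRF name from the configuration example later in this section and a hypothetical remote TEP pool of 10.1.0.0/16:

show ip ospf neighbors vrf VRF-MutiPod
show ip route 10.1.0.0/16 vrf VRF-MutiPod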

Increased MTU Support

The IPN must support an increased MTU on its physical connections, so that the VXLAN data-plane traffic can be exchanged between pods without the fragmentation and reassembly that would slow down the communication.
Before Cisco ACI Release 2.2, the spine nodes were hardcoded to generate 9150-byte full-size frames for exchanging MP-BGP control plane traffic with the spine switches in remote pods. This mandated support for a 9150-byte MTU on all the Layer 3 interfaces of the IPN devices.
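One way to validate the end-to-end MTU is a don't-fragment ping across the IPN at a size close to the expected maximum. A sketch using the point-to-point addressing from the configuration example later in this section:

interface Ethernet1/1.4
  mtu 9150
ping 192.168.1.2 df-bit packet-size 9000 vrf VRF-MutiPod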
Inter-Pod Network Overview

QoS Considerations

Typically, you will deploy the Cisco APIC cluster across pods, and you must ensure that the intracluster communication between the Cisco APIC nodes is prioritized across the IPN infrastructure.
Since the IPN is not under Cisco APIC management and may modify the 802.1p (class of service [CoS]) priority settings, you need to take additional steps to guarantee the QoS priority in a Cisco ACI Multi-Pod topology.
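As a sketch, APIC-to-APIC traffic could be classified by DSCP on the IPN and mapped to a priority queue. The DSCP value below is an assumption and must match the DSCP translation policy configured on the APIC; the class and policy names are illustrative:

class-map type qos match-any ACI-APIC-TRAFFIC
  match dscp 46
policy-map type qos ACI-IPN-INGRESS
  class ACI-APIC-TRAFFIC
    set qos-group 7

interface Ethernet1/1.4
  service-policy type qos input ACI-IPN-INGRESS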
Inter-Pod Network Overview

IPN Control Plane

Since the IPN represents an extension of the Cisco ACI infrastructure network, it must ensure that the VXLAN tunnels can
be established across pods for endpoint communication.
Inside each Cisco ACI pod, IS-IS is the infrastructure routing protocol used by the leaf and spine nodes to peer with each other and exchange IP information for locally defined loopback interfaces (usually referred to as TEP addresses). During the auto-provisioning process for the nodes belonging to a pod, the APIC assigns one (or more) IP addresses to the loopback interfaces of the leaf and spine nodes that are part of the pod. All those IP addresses are part of an IP pool that is specified during the bootup process of the first APIC node and is known as the "TEP pool." The IPN must facilitate communication between the local TEPs and the remote TEPs.
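For example, with a hypothetical Pod 1 TEP pool of 10.0.0.0/16, a spine in Pod 2 should see that entire pool as an OSPF-learned route in the infra VRF:

show ip route 10.0.0.0/16 vrf overlay-1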
IPN Configuration Example

IPN1 Router:

feature pim
feature ospf
feature dhcp

vlan 4

vrf context VRF-MutiPod
  ip pim rp-address 10.1.1.1 bidir

router ospf interpod-ospf
  vrf VRF-MutiPod

interface Ethernet1/1.4
  encapsulation dot1q 4
  description to Seed POD
  mtu 9150
  vrf member VRF-MutiPod
  ip address 192.168.1.1/30
  ip router ospf interpod-ospf area 0.0.0.0
  ip pim sparse-mode
  no shutdown

interface Ethernet1/2.4
  encapsulation dot1q 4
  description to WAN
  mtu 9150
  vrf member VRF-MutiPod
  ip address 172.16.1.1/24
  ip router ospf interpod-ospf area 0.0.0.0
  ip pim sparse-mode
  no shutdown

interface loopback3
  description BIDIR RP
  vrf member VRF-MutiPod
  ip address 10.1.1.1/30
  ip ospf network point-to-point
  ip router ospf interpod-ospf area 0.0.0.0
  ip pim sparse-mode

IPN2 Router:

feature pim
feature ospf
feature dhcp

vlan 4

vrf context VRF-MutiPod
  ip pim rp-address 10.1.1.1 bidir

router ospf interpod-ospf
  vrf VRF-MutiPod

interface Ethernet1/1.4
  encapsulation dot1q 4
  description to Seed POD
  mtu 9150
  vrf member VRF-MutiPod
  ip address 192.168.1.2/30
  ip router ospf interpod-ospf area 0.0.0.0
  ip pim sparse-mode
  no shutdown

interface Ethernet1/2.4
  encapsulation dot1q 4
  description to WAN
  mtu 9150
  vrf member VRF-MutiPod
  ip address 172.16.2.1/24
  ip router ospf interpod-ospf area 0.0.0.0
  ip pim sparse-mode
  no shutdown

interface loopback3
  description BIDIR RP
  vrf member VRF-MutiPod
  ip address 10.1.1.2/30
  ip ospf network point-to-point
  ip router ospf interpod-ospf area 0.0.0.0
  ip pim sparse-mode
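After applying the configuration above, the adjacencies and the multicast state can be verified on either IPN router (VRF name as in the example):

show ip ospf neighbors vrf VRF-MutiPod
show ip pim neighbor vrf VRF-MutiPod
show ip mroute vrf VRF-MutiPod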
