NFV-Foundations-Workshop 2-Day CISCO India VC Sept 03-04-2020 Part1 PDF
DHANUNJAYA VUDATHA
SENIOR SOLUTION ARCHITECT
© 2014 Criterion Networks. All Rights Reserved
Criterion Networks Overview
Cloud-based Enablement Solutions for Network
Transformation
Accelerate Network Transformation with Criterion SDCloud®
Network Virtualization and Automation
SDDC | SD-WAN | SDA | Custom Solutions Development | Qualification | Learning Labs | Proof-of-Concept
Key Expertise
•Consultancy for SDN/NFV and networking-related deployments
•Deployment knowledge of IP networking technologies, CORD, and SDN operator networks
•Deep knowledge of software and hardware architectures, gained from working on industry-leading next-generation products
Workshop Outline
Case for NFV
➢ Physical Appliance Challenges
➢ Introduction to NFV
➢ Goals of NFV
➢ How can NFV solve Challenges?
➢ Cloud computing architectures / Business Models
➢ Server and Network Virtualization
➢ Service Provider Guidelines

NFV Architecture, Functions, Interfaces
➢ ETSI NFV Reference Architecture
➢ NFV Terminology
➢ NFV Infrastructure
▪ Compute domain
▪ Hypervisor domain
▪ Infrastructure Network domain
➢ NFV Virtual Infrastructure Management (VIM)
▪ VIM - Functionality
▪ OpenStack Overview
▪ OpenStack Networking
What is SDN?
What is NFV?
NFV
NFV encompasses the virtualization of physical network functions (PNFs), and the
orchestration of workloads (virtual functions) and resources on shared commodity
hardware (COTS, racked in spine/leaf fabrics).
SDN+NFV
Service providers' virtual service delivery platforms will span from the core of the
network to the customer premises, with SDN and NFV principles built in.
Therefore, SDN and NFV are two complementary technologies for network transformation.
Network Device Functionality Abstraction
Figure: A network device abstracted into layers - management layers (OSS/BSS, NMS, EMS) above the device, and device layers (management plane, control plane, data plane) within it.
Networking Planes – 3-Plane Architecture (Traditional)
Multi-vendor
Figure: Virtual machines (VMs) running on a hypervisor over COTS hardware.
Commercial off-the-shelf (COTS) refers to any product or service that is developed and marketed commercially. COTS
hardware refers to general-purpose computing, storage, and networking gear that is built and sold for any use case that
requires these resources; it does not tie the buyer to proprietary hardware or software.
Virtualization is the technology used to run multiple operating systems (OSs) or applications on top of a single physical infrastructure by providing each of them an abstract view of the hardware. It enables these applications or OSs to run in isolation while sharing the same hardware resources.
Linux Network Interfaces
Figure: Triple-play access network. A video head end and HSI feed the IP/MPLS core (PIM multicast and unicast). Services: IGMP multicast (live broadcast IPTV) carried over VPLS with IGMP snooping, and unicast VoIP/HSI carried over MPLS pseudowires (Q-in-Q/S-VLAN). Management uses TR-069, OMCI, and PLOAM. In the access network, home gateways (RG) connect through ONTs and 1:32/1:64 splitters to OLTs (e.g., Adtran TA5K, Calix C7, ALU distribution), then over 10/100/1000 Ethernet (MoE) to a BNG (Cisco ASR 9K, ALU 7750SR) that provides subscriber management.
The Modern Networking Stack
Figure: NFV virtualizes the L4–L7 service functions, while SDN programs the L1–L3 transport underneath.
SDN? Or NFV?
NE DESIGN AND DEPLOYMENT PHILOSOPHY - TRADITIONAL
Basic Terminology
Figure: Network Device Functionality Abstraction
Basic Terminology
Management/Policy Plane (M.P.)
• Configures the control plane
• Monitors the device and its operation: interface counters, etc.
• Accessed via CLI/SNMP/NETCONF

Control Plane (C.P.)
• Runs on the switch/router CPU
• Processing speeds of thousands of packets/sec
• Hosts processes such as STP and routing protocols

Figure: Today's network design with distributed control plane and per-device management; various data plane implementations.
Focusing on physical appliances per role/site (Device C appliances at Sites A, B, C): each appliance has its own management plane and is provisioned individually. The result is a network that is less flexible and less agile, with more opex costs, more time-to-market, more time to deploy and validate, and less resource utilization.
Case Study 2 - Redundant Components
With physical appliances, redundancy requires pairs of devices (active firewall, active LB, plus standbys), even though not all networks might require 99.999% uptime.
Limitations of traditional networking devices-II
• High Operational Costs
o Device-by-device provisioning; multi-vendor, purpose-built, distributed control
• Capacity Over-Provisioning
o Short- and long-term network capacity demands are hard to predict; as a result, networks are built with excess capacity and are often more than 50% undersubscribed.
o Underutilized and overprovisioned networks result in lower return on investment.
Data Center Needs
► Automation
▪ Agility, the ability to dynamically instantiate networks and to disable
them when they are no longer needed
► Scalability
▪ The use of tunnels and virtual networks can contain the number of
devices in a broadcast domain to a reasonable number.
► Multipathing
▪ Application Aware Routing, SLAs, Transport Independence
► Multitenancy
▪ Hosting dozens, or even hundreds or thousands of customers or
tenants in the same physical data center has become a requirement.
▪ The data center has to provide each of its multiple tenants with their
own (virtual) network that they can manage in a manner similar to
the way that they would manage a physical network.
► Network Virtualization
► Service Insertion
A CASE FOR NFV
Before Software Defined Networking
Servers vs Networking: Compute Evolution vs Networking Evolution
NETWORK FUNCTION VIRTUALIZATION
Evolution of Application Deployment
Figure: Virtual machines running on a hypervisor over COTS hardware.
Hypervisor-based Virtualization
Figure: VMs running on a hypervisor over a physical server.
Containers - Docker
Comparing Containers and VMs

Containers:
• Apps share the host OS kernel (shared resources)
• No hypervisor; no per-app guest OS
• Well suited to microservices

Virtual Machines:
• Isolated resources per VM
• Hypervisor-based; a Type I hypervisor runs with no underlying host OS
• Guest OSs can be Linux and Windows
• Well suited to monolithic apps
Reasons Why Containers are Good for NFV
• Lower overhead
  • No guest OS
  • Containers have a far smaller memory footprint than virtual machines
• Startup speed
  • Virtual machine images are large because they include a complete guest operating system.
  • The time taken to start a new VM is largely dictated by the time taken to copy its image to the host on which it is to run, which may take many seconds.
  • By contrast, container images tend to be very small, and they can often start up in less than 50 ms.
• Reduced maintenance
  • Virtual machines contain guest operating systems, and these must be maintained, for example to apply security patches to protect against recently discovered vulnerabilities.
  • Containers require no equivalent maintenance.
• Ease of deployment
  • Containers provide a high degree of portability across operating environments.
Upgrade Strategy
NFV Drivers
• SERVICE VELOCITY
▪ Ability to launch/create a service faster (means for faster revenue generation opportunities)
▪ Automation of service launch, capacity increase
• MULTI-VENDOR & MULTI-DOMAIN SUPPORT
▪ Ability to do mix and match with the network elements/components
▪ Simpler unified provisioning
▪ Moving away from vendor lock-in
• CAPEX & OPEX REDUCTION
• NFV & SDN – 95% of operators have confirmed a roadmap (source: Infonetics Research, a telco market research and consulting firm)
• The NFV & SDN market size is projected to reach 11 billion USD over the next 4 years (2015 to 2020) (source: Infonetics Research)
How can NFV solve previous challenges?
Case Study 1 - Virtual Nodes Per Role/Site
With virtual nodes per role at each site (Sites A, B, C), provisioning is centralized and services spin up on demand. The result is a network that is more flexible and more agile, with less opex costs, less time-to-market, less time to deploy and validate, and more resource utilization.
Case Study 2 - No Redundant Components
Virtualized functions (firewall, LB) can be instantiated on demand, so there is no need to purchase many specialized devices upfront, and planning is easier. Not all networks might require 99.999% uptime, so redundancy can be added only where the SLA demands it.
Service Provider Guidelines for SDN/NFV
Case study - AT&T Domain 2.0
AT&T Domain 2.0 Architecture
• Network traffic increased by 150,000% between 2007 and 2015
• 60% of that traffic is video
• IoT, virtual reality, and augmented reality are expected to push more!
Figure: AT&T Domain 2.0 architecture - tenant applications and virtual machines running on an NFV infrastructure cloud, wireless access, and commercial cloud computing environments, tied together by APIs and dynamic policy control.
• Elastic network capabilities for customers, partners, and 3rd-party provider tenants of commercial clouds.
• Network Function Virtualization Infrastructure: a cloud distributed where needed to optimize characteristics such as latency and costs, with control, orchestration, and management capabilities for real-time, automated operations.
• Network function software evolving from its current form, embedded in network appliances, to software (re)designed for cloud computing.
Domain 2.0 Principles
Open API
Simple
Scale
Domain 2.0 Journey
• Domain 2.0 white paper in November 2013
• Announced the Domain 2.0 suppliers list
• Suppliers: Tail-f (Cisco), Ericsson, Juniper, Nokia, Metaswitch, etc.
• In 2014, launched User Defined Network Cloud (Network on Demand)
• Virtualizing the mobile packet core (connected-car apps)
• Virtualized Universal Service Platform (enterprise VoIP)
• AT&T Integrated Cloud (AIC) sites across central offices
• Total of 150 network functions:
• Virtualize 5% by 2015
• Virtualize 30% by 2016
• Virtualize 75% by 2020
• Software development:
• ECOMP (Enhanced Control, Orchestration, Management & Policy) software; recently open-sourced to the Linux Foundation (ECOMP + OPEN-O = ONAP)
• Orange is leveraging ECOMP for testing.
AT&T Domain 2.0 Virtual Function (VF) Guidelines
OEM Guidelines for SDN/NFV
Case study – CISCO DNA
Cisco DNA Vision
Source: http://www.cisco.com/c/en/us/solutions/enterprise-networks/digital-network-architecture/index.html
Example: Draw a Square!
NFV Architecture
ETSI NFV Architecture
Carrier Grade NFVI
Source: http://www.cisco.com/c/dam/m/fr_fr/events/2015/cisco_day/pdf/4-ciscoday-10june2016-nfvi.pdf
vCPE (Residential/Business)
Multiple partners: Cisco VNFs and 3rd-party VNFs over an enterprise fabric.
NFV Concepts
• Network Function (NF): functional building block with well-defined interfaces and well-defined functional behavior
• Virtualized Network Function (VNF): software implementation of an NF that can be deployed on a virtualized infrastructure; it replaces a vendor's specialized hardware with systems performing the same function, yet running on generic hardware
• VNF Set: collection of VNFs with no connectivity specified between them
• NFVI: hardware and software required to deploy, manage, and execute VNFs, including computation, storage, and network
• NFVI PoP: location of an NFVI
NFV Concepts (Cont..)
• User Services: services offered to end customers/users/subscribers
• Deployment Behavior: deployment resources the VNF requires from the NFVI
• Number of VMs, memory, disk, images, bandwidth, latency
• Operational Behavior: VNF instance topology and life-cycle operations
• Start, Stop, Pause, Migrate
• VNF Descriptor (VNFD): Deployment Behavior + Operational Behavior
• NFV Orchestrator (NFVO): automates network service deployment, operation, management, and coordination of VNFs and NFVI
• VNF Forwarding Graph (VNFFG): service chain where network connectivity between VNFs is important
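The VNFD concept above (deployment behavior plus operational behavior) can be sketched as a plain record; the field names and values here are hypothetical illustrations, not the ETSI VNFD schema:

```python
# A minimal, illustrative VNF Descriptor: deployment behavior (resources the
# NFVI must provide) plus operational behavior (lifecycle actions).
# Field names are hypothetical -- the real ETSI VNFD schema is richer.

vnfd = {
    "name": "vfirewall",
    "deployment_behavior": {
        "num_vms": 2,
        "vcpus_per_vm": 4,
        "memory_gb": 8,
        "disk_gb": 40,
        "image": "vfw-1.0.qcow2",
    },
    "operational_behavior": {
        "lifecycle_ops": ["start", "stop", "pause", "migrate"],
    },
}

def total_vcpus(descriptor: dict) -> int:
    """Resources the VIM must reserve when this VNFD is instantiated."""
    d = descriptor["deployment_behavior"]
    return d["num_vms"] * d["vcpus_per_vm"]

print(total_vcpus(vnfd))
```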
Network Forwarding Graph
Key Components
End-to-End Flow in the ETSI NFV Framework
Step 1. The full view of the end-to-end topology is visible to the NFVO.
Step 2. The NFVO instantiates the required VNFs and communicates this to the VNFM.
Step 3. The VNFM determines the number of VMs needed, as well as the resources each of them will need, and reverts to the NFVO with this requirement to be able to fulfill the VNF creation.
Step 4. The NFVO sends a request to the VIM to create the VMs and allocate the necessary resources to those VMs.
Step 5. The VIM asks the virtualization layer to create these VMs.
Step 6. Once the VMs are successfully created, the VIM acknowledges this back to the NFVO.
Step 7. The NFVO notifies the VNFM that the VMs it needs are available to bring up the VNFs.
Step 8. The VNFM now configures the VNFs with any specific parameters.
Step 9. Upon successful configuration of the VNFs, the VNFM communicates to the NFVO that the VNFs are ready, configured, and available to use.
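The flow above can be sketched as a toy message trace between the NFVO, VNFM, and VIM. This is an illustration only; the two-VMs-per-VNF figure is an assumption for the sketch, not part of the framework:

```python
# Toy trace of the ETSI MANO end-to-end flow described above.
# Components and messages are simplified for illustration.

def instantiate_service(num_vnfs: int) -> list:
    trace = []
    trace.append("NFVO: resolve end-to-end topology")
    trace.append(f"NFVO->VNFM: instantiate {num_vnfs} VNFs")
    vms_needed = num_vnfs * 2  # assume 2 VMs per VNF for this sketch
    trace.append(f"VNFM->NFVO: need {vms_needed} VMs + resources")
    trace.append("NFVO->VIM: create VMs, allocate resources")
    trace.append("VIM: ask virtualization layer to create VMs")
    trace.append("VIM->NFVO: VMs created")
    trace.append("NFVO->VNFM: VMs available")
    trace.append("VNFM: configure VNFs")
    trace.append("VNFM->NFVO: VNFs ready")
    return trace

for line in instantiate_service(3):
    print(line)
```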
NFV – CISCO
Network Functions Virtualization Infrastructure
Virtual Router (ISRv) | Virtual Router (vEdge) | Virtual Firewall (ASAv) | Virtual WAN Optimization (vWAAS) | Virtual Wireless LAN Controller (vWLC) | 3rd Party VNFs
Extending Orchestration to the Datacenter for NFV
OSS Systems
VNF Manager
(ESC)
• WAN edge router, firewall, VPN, DHCP, DNS servers (Universal CPE)
• Consistent functionality in software across branch and cloud sites
• Simple and automated software upgrades
Overlay Networking
ETSI NFV Architecture Revisited
Shortcomings of the VLAN technology
• We can have a maximum of 4,096 VLANs - remove some administrative and pre-assigned ones, and we are left with just over 4,000 VLANs.
• This becomes a problem if, say, we have 500 customers in our cloud and each of them uses about 10 VLANs: we can very quickly run out.
• VLANs need to be configured on all the devices in the Layer 2 (switching) domain for them to work.
• When we use VLANs, we need Spanning Tree Protocol (STP) for loop protection, and thereby lose much of our multipathing ability (most multipath capabilities live at L3 and above, not in the Layer 2 network).
• VLANs are site-specific, and they are not generally extended between two datacenters.
• In the cloud world, where we don't care where our computing resources live, we would like access to the same networks, say for a disaster recovery (DR) kind of scenario.
• One of the methods that can alleviate some of the aforementioned problems is the use of an overlay network.
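The first two bullets reduce to a line of arithmetic: the 12-bit VLAN ID space against the 24-bit VXLAN VNI space discussed later in this section:

```python
# VLAN IDs are a 12-bit field; VXLAN VNIs are 24 bits.
max_vlans = 2 ** 12            # 4096, minus reserved IDs in practice
max_vnis = 2 ** 24             # 16,777,216 segments

# 500 cloud customers using ~10 segments each already overflow VLANs:
needed = 500 * 10
print(max_vlans, max_vnis, needed > max_vlans)
```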
What is an overlay network?
• An overlay network is a network running on top of another network, called the underlay network.
Overlay technologies
• Generic Routing Encapsulation (GRE) is one of the first overlay technologies.
• GRE encapsulates the Layer-3 payload, sets the destination to the address of the remote tunnel endpoint, and sends it down the wire; the remote end performs the opposite operation on the IP packet.
• This way, the underlay network sees the packet as a generic IP packet and routes it accordingly.
• VXLAN (Virtual Extensible LAN) is the overlay most widely used in datacenters:
• Number of VXLANs possible: the segment ID space has been beefed up to 16 million VXLANs in a network, giving ample room for growth.
• VXLAN tunnel endpoint (VTEP): VXLAN terminates tunnels on VTEPs, which can be used to create a Layer-2 overlay network atop Layer-3 endpoints.
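As a minimal sketch of what VXLAN encapsulation adds to a frame, the 8-byte VXLAN header from RFC 7348 (one flags byte with the I bit set, reserved bytes, and the 24-bit VNI) can be packed and parsed like this:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags (I bit = 0x08) + 24-bit VNI."""
    assert 0 <= vni < 2 ** 24, "VNI is a 24-bit field"
    # First word: flags byte 0x08, then 3 reserved bytes.
    # Second word: 24-bit VNI in the high bits, 1 reserved byte low.
    return struct.pack("!II", 0x08000000, vni << 8)

def parse_vni(header: bytes) -> int:
    """Recover the VNI a VTEP would read on decapsulation."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

hdr = vxlan_header(5001)
print(len(hdr), parse_vni(hdr))
```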
Figure: An overlay tunnel extending a virtual network between DC-1 and DC-2.
Virtual Network Inside the Server
The Virtual Machines are connected to a Virtual
Switch inside the Compute Node (or server).
• The OVS bridge, br-int, uses VLANs to segregate the traffic in the Hypervisors.
• These VLANs are locally significant to the Hypervisor.
• Neutron allocates a unique VNI for every virtual network.
• For any packet leaving the Hypervisor, OVS replaces the VLAN tag with the VNI in the encapsulation header.
• OVS uses local_ip from the plugin configuration as the source VTEP IP for the VXLAN packet.
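The tag rewrite described above can be sketched as a lookup table: br-int VLAN tags are local to each hypervisor, while the Neutron-allocated VNI identifies the network cluster-wide. The mapping values here are hypothetical:

```python
# Sketch of the tag rewrite described above: br-int VLAN tags are local to
# each hypervisor, while the VNI is cluster-wide. Values are hypothetical.

local_vlan_to_vni = {1: 70001, 2: 70002}   # this hypervisor's mapping

def egress_vni(local_vlan: int) -> int:
    """VNI written into the VXLAN header for a packet leaving the host."""
    return local_vlan_to_vni[local_vlan]

# On another hypervisor the same network might use a different local VLAN,
# but the VNI on the wire (70001) is what identifies the virtual network.
print(egress_vni(1))
```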
Open vSwitch Architecture
• Tap devices
• Linux bridges
• Virtual ethernet cables
• OVS bridges
• OVS patch ports
ovs-vsctl show: Prints a brief overview of the switch database configuration, including ports, VLANs, and so on
ovs-vsctl list-br: Prints a list of configured bridges
ovs-vsctl list-ports <bridge>: Prints a list of ports on the specified bridge
ovs-vsctl list interface: Prints a list of interfaces along with statistics and other data
Exercises
Step 1: Create an OVS bridge "br-test" on the network node
Step 2: Create veth pair {vnet0, vnet1} on the network node
Step 3: Create veth pair {vnet2, vnet3} on the network node
Step 4: Create 2 namespaces, "tom" and "jerry", on the network node
• Namespaces enable multiple instances of a routing table to co-exist within the same Linux box.
• Network namespaces make it possible to separate network domains (network interfaces, routing tables, iptables) into completely separate and independent domains.
• L3 Agent: the neutron-l3-agent is designed to use network namespaces to provide multiple independent virtual routers per node that do not interfere with each other or with the routing of the compute node on which they are hosted.
Step 5: Add vnet1 to "tom" and vnet3 to "jerry"
Step 6: Assign IP 10.1.1.1/24 to vnet1 and vnet3 on the network node
Step 7: Repeat steps 1 to 5 on the compute node
Step 8: Assign IP 10.1.1.2/24 to vnet1 and vnet3 on the compute node
Step 9: Create a VXLAN tunnel port on each node and add it to "br-test", using the peer node's address in the 172.16.4.0/24 network as the remote IP
Step 10: Add flows from the flows.txt file in the home directory
Step 11: Ping across the interfaces present in the tom namespace
Step 12: Ping across the interfaces present in the jerry namespace
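Steps 1-6 on the network node can be expressed as the following command sequence; generating it from Python keeps the sequence reviewable. The commands assume iproute2 and Open vSwitch are installed and must be run as root - this is a sketch of the exercise, not a verified lab script:

```python
# Emit the iproute2/OVS commands for Steps 1-6 of the exercise (network
# node). Run the printed commands as root on a host with Open vSwitch.

def network_node_commands(cidr: str = "10.1.1.1/24") -> list:
    return [
        "ovs-vsctl add-br br-test",                            # Step 1
        "ip link add vnet0 type veth peer name vnet1",         # Step 2
        "ip link add vnet2 type veth peer name vnet3",         # Step 3
        "ip netns add tom",                                    # Step 4
        "ip netns add jerry",
        "ip link set vnet1 netns tom",                         # Step 5
        "ip link set vnet3 netns jerry",
        f"ip netns exec tom ip addr add {cidr} dev vnet1",     # Step 6
        f"ip netns exec jerry ip addr add {cidr} dev vnet3",
    ]

for cmd in network_node_commands():
    print(cmd)
```

Calling `network_node_commands("10.1.1.2/24")` yields the compute-node variant for Step 8.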
OVS Management
Base commands
OVS is feature rich with different configuration commands, but the majority of your
configuration and troubleshooting can be accomplished with the following 4 commands:
• ovs-vsctl : Used for configuring the ovs-vswitchd configuration database (known as ovs-db)
• ovs-ofctl : A command line tool for monitoring and administering OpenFlow switches
• ovs-dpctl : Used to administer Open vSwitch datapaths
• ovs-appctl : Used for querying and controlling Open vSwitch daemons
Figure: The oscontrol and osnetwork nodes, each hosting TOM and JERRY namespaces (10.1.1.1 on oscontrol, 10.1.1.2 on osnetwork) attached via vnet1 and vnet3, connected through TOR1 and TOR2 over a VXLAN tunnel.
# ip netns exec tom ping -c 3 10.1.1.2
# ip netns exec jerry ping -c 3 10.1.1.1
ETSI NFV Architecture Revisited
NFV MANO
VIM - OpenStack