Nexus
Campus LAN Switches: To scale network performance in an enterprise LAN, there are core,
distribution, access, and compact switches. These switch platforms vary from fanless switches with
eight fixed ports to 13-blade switches supporting hundreds of ports. Campus LAN switch platforms
include the Cisco 2960, 3560, 3750, 3850, 4500, 6500, and 6800 Series.
Cloud-Managed Switches: The Cisco Meraki cloud-managed access switches enable virtual stacking
of switches. They monitor and configure thousands of switch ports over the web, without the
intervention of onsite IT staff.
Data Center Switches: A data center should be built based on switches that promote infrastructure
scalability, operational continuity, and transport flexibility. The data center switch platforms include
the Cisco Nexus Series switches and the Cisco Catalyst 6500 Series switches.
Service Provider Switches: Service provider switches fall under two categories: aggregation switches
and Ethernet access switches. Aggregation switches are carrier-grade Ethernet switches that
aggregate traffic at the edge of a network. Service provider Ethernet access switches feature
application intelligence, unified services, virtualization, integrated security, and simplified
management.
Virtual Networking: Networks are becoming increasingly virtualized. Cisco Nexus virtual networking
switch platforms provide secure multitenant services by adding virtualization intelligence technology
to the data center network.
In addition to these considerations, the following list highlights other common business
considerations when selecting switch equipment:
Cost: The cost of a switch will depend on the number and speed of the interfaces, supported
features, and expansion capability.
Port Density: Network switches must support the appropriate number of devices on the network.
Power: It is now common to power access points, IP phones, and even compact switches using
Power over Ethernet (PoE). In addition to PoE considerations, some chassis-based switches
support redundant power supplies.
Reliability: The switch should provide continuous access to the network.
Port Speed: The speed of the network connection is of primary concern to end users.
Frame Buffers: The ability of the switch to store frames is important in a network where there
might be congested ports to servers or other areas of the network.
Scalability: The number of users on a network typically grows over time; therefore, the switch
should provide the opportunity for growth.
Forwarding rates define the processing capabilities of a switch by rating how much data the switch
can process per second. Switch product lines are classified by forwarding rates.
Forwarding Rate
Entry-level switches have lower forwarding rates than enterprise-level switches. Forwarding rates
are important to consider when selecting a switch. If the switch forwarding rate is too low, it cannot
accommodate full wire-speed communication across all of its switch ports. Wire speed is the data
rate that each Ethernet port on the switch is capable of attaining. Data rates can be 100 Mb/s, 1
Gb/s, 10 Gb/s, or 100 Gb/s.
For example, a typical 48-port gigabit switch operating at full wire speed generates 48 Gb/s of traffic.
If the switch only supports a forwarding rate of 32 Gb/s, it cannot run at full wire speed across all
ports simultaneously. Fortunately, access layer switches typically do not need to operate at full wire
speed, because they are physically limited by their uplinks to the distribution layer. This means that
less expensive, lower-performing switches can be used at the access layer, and more expensive,
higher-performing switches can be used at the distribution and core layers, where the forwarding
rate has a greater impact on network performance.
Power over Ethernet (PoE) allows the switch to deliver power to a device over the existing Ethernet
cabling. This feature can be used by IP phones and some wireless access points.
The relatively new Cisco Catalyst 2960-C and 3560-C Series compact switches support PoE pass-
through.
PoE Pass-Through
PoE pass-through allows a network administrator to power PoE devices connected to the switch, as
well as the switch itself, by drawing power from certain upstream switches.
Multilayer switches are typically deployed in the core and distribution layers of an organization’s
switched network. Multilayer switches are characterized by their ability to build a routing table,
support a few routing protocols, and forward IP packets at a rate close to that of Layer 2 forwarding.
Multilayer switches often support specialized hardware, such as application-specific integrated
circuits (ASIC). ASICs, along with dedicated software data structures, can streamline the forwarding
of IP packets independent of the CPU.
There is a trend in networking toward a pure Layer 3 switched environment. When switches were
first used in networks, none of them supported routing; now, almost all switches support routing. It
is likely that soon all switches will incorporate a route processor because the cost of doing so is
decreasing relative to other constraints. Eventually the term multilayer switch will be redundant.
Catalyst Switch:
Catalyst switches are designed primarily for use in campus networks, where the traffic is
generated by users.
Catalyst switches therefore support access-level security features such as TrustSec and
802.1X (dot1x).
Catalyst switches also support PoE for devices such as IP phones, wireless access points, and
IP cameras.
Finally, Catalyst switches run IOS, which is designed for smaller, simpler networks.
Nexus switch:
Nexus switches are designed for use in data centers, where the traffic is generated primarily
by servers, and the emphasis is on scale, availability, and automation.
Security features such as TrustSec and dot1x are replaced by dedicated appliances such as
virtual or physical firewalls.
PoE is not available for Nexus switches, since the types of devices that use PoE are
generally not seen in data centers.
First, NX-OS is modular. Rather than making all supported features available by default, NX-OS
requires an engineer to enable the desired features. In this way, you can maximize
efficiency.
Nexus/NX-OS also supports automating processes via software platforms such as UCS and
ACI.
Virtualization features such as virtual device contexts, virtual port channels, and VXLAN are
supported only on Nexus.
Nexus switches support Fibre Channel and FCoE to integrate storage access into the data
center and to take advantage of efficiencies available through unified wire.
Nexus switches offer high port density, even with 10G ports.
Cisco NX-OS: Originally named SAN-OS (where the SAN acronym stood for Storage Area Network),
NX-OS offers significant architectural improvements over traditional Cisco IOS. Although it was
originally a 32-bit operating system, it has since evolved into a 64-bit OS.
Unlike Cisco IOS, NX-OS doesn’t share a single memory space, and it does support symmetric
multiprocessing.
It also allows pre-emptive multitasking, which allows a high priority process to get CPU time ahead
of a lower priority process.
NX-OS is built on a Linux kernel, and it natively supports the Python language for creating scripts on
Cisco Nexus switches.
Additionally, it has multiple high availability features, and it doesn’t load all of its features at once.
Instead, you can specify which features you wish to activate. Eliminating the running of unnecessary
features frees up memory and processor cycles for those features you do want.
Cisco IOS-XR: Originally designed for 64-bit operation, IOS-XR offers many of the enhancements
found in NX-OS (e.g. symmetric multiprocessing, separate memory spaces, and activating only
services that are needed). However, while NX-OS is built on a Linux kernel, IOS-XR is built on the QNX
Neutrino Microkernel. QNX is similar to UNIX and is now owned by BlackBerry.
A feature IOS-XR offers that is not found in NX-OS is the ability to have a single instance of the
operating system controlling multiple chassis. Also, since IOS-XR targets service provider
environments, it offers support for interfaces such as DWDM and Packet over SONET.
NX-OS Overview
Cisco built the next-generation data center-class operating system designed for maximum scalability
and application availability. The NX-OS data center-class operating system was built with modularity,
resiliency, and serviceability at its foundation. NX-OS is based on the industry-proven Cisco Storage
Area Network Operating System (SAN-OS) Software and helps ensure continuous availability to set
the standard for mission-critical data center environments. The self-healing and highly modular
design of Cisco NX-OS enables operational excellence, increasing service levels and providing
exceptional operational flexibility. Several advantages of Cisco NX-OS include the following:
Virtual device contexts (VDC): Cisco Nexus 7000 Series switches can be segmented into
virtual devices based on customer requirements. VDCs offer several benefits, such as fault
isolation, administration plane separation, separation of data traffic, and enhanced security.
Virtual Port Channels (vPC): Enables a server or switch to use an EtherChannel across two
upstream switches without an STP-blocked port, so that all available uplink bandwidth can be
used (a minimal VDC and vPC configuration sketch appears after this list).
Continuous system operation: Maintenance, upgrades, and software certification can be
performed without service interruptions due to the modular nature of NX-OS and features
such as In-Service Software Upgrade (ISSU) and the capability for processes to restart
dynamically.
Security: Cisco NX-OS provides outstanding data confidentiality and integrity, supporting
standard IEEE 802.1AE link-layer cryptography with 128-bit Advanced Encryption Standard
(AES) cryptography. In addition to Cisco TrustSec (CTS), there are many other security features,
such as access control lists (ACLs) and port security.
Base services: The default license that ships with NX-OS covers Layer 2 protocols, including
features such as Spanning Tree, virtual LANs (VLANs), private VLANs, and Unidirectional
Link Detection (UDLD).
Enterprise Services Package: Provides Layer 3 protocols such as Open Shortest Path First
(OSPF), Border Gateway Protocol (BGP), Intermediate System-to-Intermediate System (ISIS),
Enhanced Interior Gateway Routing Protocol (EIGRP), Policy-Based Routing (PBR), Protocol
Independent Multicast (PIM), and Generic Routing Encapsulation (GRE).
Advanced Services Package: Provides Virtual Device Contexts (VDC), Cisco TrustSec (CTS),
and Overlay Transport Virtualization (OTV).
Transport Services License: Provides Overlay Transport Virtualization (OTV) and
Multiprotocol Label Switching (MPLS) (when available).
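The VDC and vPC items above map to only a few lines of configuration. The following is a minimal sketch for a Nexus 7000 pair, not a complete working design; the VDC name, interface range, domain ID, port-channel numbers, and keepalive addresses are all assumed values:

! Carve out an additional VDC and hand it some interfaces (Advanced Services license required)
switch(config)# vdc Prod id 2
switch(config-vdc)# allocate interface ethernet 1/9-12

! Minimal vPC domain, configured on each of the two peer switches
switch(config)# feature vpc
switch(config)# vpc domain 10
switch(config-vpc-domain)# peer-keepalive destination 192.168.1.2 source 192.168.1.1
switch(config)# interface port-channel 1
switch(config-if)# switchport mode trunk
switch(config-if)# vpc peer-link
switch(config)# interface port-channel 20
switch(config-if)# switchport
switch(config-if)# vpc 20

The downstream switch or server simply sees one ordinary port channel, even though its member links land on two different Nexus peers.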
Example 1-1 shows the simplicity of installing the NX-OS license file.
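Example 1-1 itself is not reproduced in these notes, but a minimal sketch of such an installation looks like the following (the server address, username, and license filename are assumed values):

switch# copy scp://[email protected]/n7k_lan_enterprise.lic bootflash:
switch# install license bootflash:n7k_lan_enterprise.lic
switch# show license usage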
NOTE
NX-OS offers feature testing for a 120-day grace period. Here is how to enable a 120-day grace
period:
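A minimal sketch of enabling it, run from the default admin VDC:

switch# configure terminal
switch(config)# license grace-period
switch(config)# end
switch# show license usage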
The feature is disabled after the 120-day grace period expires. The license grace period is enabled
only for the default admin VDC, VDC1.
Using the grace period enables customers to test, configure, and fully operate a feature without the
need for a license to be purchased. This is particularly helpful for testing a feature prior to
purchasing a license.
NX-OS uses a kickstart image and a system image. Both images are identified in the
configuration file as the kickstart and system boot variables; this is the same as the Cisco
Multilayer Director Switch (MDS) Fibre Channel switches running SAN-OS.
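A minimal sketch of setting the two boot variables; the image filenames below are assumed values, so substitute the filenames actually present in bootflash:

switch# configure terminal
switch(config)# boot kickstart bootflash:n7000-s1-kickstart.6.2.10.bin
switch(config)# boot system bootflash:n7000-s1-dk9.6.2.10.bin
switch(config)# end
switch# copy running-config startup-config
switch# show boot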
NX-OS removed the write memory command; use copy running-config startup-config instead, or
define a CLI alias for it, as shown below.
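For example, the old habit can be recreated with a CLI alias (the alias name wr is an arbitrary choice):

switch# configure terminal
switch(config)# cli alias name wr copy running-config startup-config
switch(config)# end
switch# wr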
The default Spanning Tree mode in NX-OS is Rapid-PVST+.
NX-OS data center-class operating system, designed for maximum scalability and application
availability, has a wide variety of platform support, including the following:
Nexus 7000
Nexus 5000
Nexus 2000
Nexus 1000V
Cisco MDS 9000
Cisco Unified Computing System (UCS)
Nexus 4000
Nexus Versions:
vPC vs. VSS
https://community.cisco.com/t5/switching/ask-the-expert-nexus-virtual-port-channel-vpc/td-p/1761659
http://www.firewall.cx/cisco-technical-knowledgebase/cisco-data-center/1208-nexus-vpc-configuration-design-operation-troubleshooting.html
You are probably familiar with fixed-configuration switches and routers. In those boxes you have a
single ASIC (or a single system of ASICs; ASIC here is shorthand for the forwarding compute), and
usually every port is connected to that ASIC. All the components used for next-hop resolution and
packet manipulation (CEF, TCAM, and so on) are referred to collectively as the supervisor. To forward
traffic from one port to another, the traffic is buffered on ingress and lookups are performed by the
supervisor. Typically the L2 and L3 headers of the original packet are pushed to the RP (the route
processor, a subsystem of the ASIC that performs routing lookups; in Cisco boxes these are
specialized chips that hardware-accelerate the lookups against the CEF structures, the FIB and
adjacency table, so the CPU is usually not involved). The RP resolves the next hop and exit interface,
performs the L3 rewrite if the packet is being routed, and pushes the packet to the egress interface.
So everything is done by the supervisor: every lookup, MAC learning, routing protocol operations,
and so on. The throughput of the chassis is therefore based on the ASIC inside it.
In modular boxes you usually have the option to choose your supervisor and line cards. The
supervisor is the computing system and the line cards provide the ports, and there are two options
for how lookups and switching are performed:
A. (Centralized system)
A centralized system behaves very much like a fixed box. Every line card is connected to every
supervisor through the supervisor backplane (called the switch fabric). When traffic is received on an
ingress port, it is buffered on the line card and then pushed across the supervisor switch fabric to the
RSP (route switch processor), which performs the necessary lookups; the traffic is then switched back
across the switch fabric to the egress line card (even if the egress line card is the same one that
received the traffic, it still has to be redirected through the RSP).
So again nearly everything is done by the supervisor. Line cards often have additional hardware chips,
for example for line-rate MACsec, to keep throughput up when advanced features are active; if the
supervisor had to take care of everything, the throughput would be very low.
In a centralized system the chip responsible for lookups is called the RSP because it has to switch the
packet from the ingress line card to the RP, perform the lookup, and then switch the packet to the
egress line card. The throughput of the chassis is based on the ASICs in the supervisors, and Cisco
usually offers more than two supervisor models for each modular chassis.
For example, centralized switches include the Catalyst 4500, 6500, 6800, and 9400.
A centralized router example is the ASR 9xxx.
B. (Distributed system)
With the centralized system understood, it is easier to understand the distributed system.
In a distributed system you have a supervisor, line cards, and fabric modules.
Distributed systems work differently. Each line card usually has an internal switch fabric, NPUs
(network processor units), and its own forwarding resources (TCAMs, FIB, adjacency table, and so
on). The supervisor in a distributed chassis therefore handles only the control plane, the management
plane, and some very special cases of the data plane. Each line card can learn MACs, resolve ARP, and
so on; it synchronizes with the RP, and the RP distributes this information to the other line cards
(which is why it is called distributed). When traffic transits the chassis, the line card makes the
forwarding decision with its NPUs, because it has all the information it needs: the FIB, the adjacency
table, and the MAC table. When the traffic is local (ingress and egress ports on the same line card),
it is switched internally (in some chassis it must still go through the switch fabric provided by the
fabric modules). When the traffic is not local, it is sent across the switch fabric (built from the fabric
modules) to the egress line card. When the chassis receives control-plane traffic, for example a BGP
update, the line card must switch the packet to the RP.
So now only the control plane and the management plane are handled by the supervisor. Data-plane
traffic is handled by the line cards and fabric modules, along with pieces of the control plane (for
example, if you enable BFD for fast failure detection, it is handled by the line card NPU).
The throughput of the chassis is now based on the fabric modules: how fast they can switch packets
from the ingress line card to the egress line card.
For example, distributed switches include the Nexus 7000 (partially), 7700, and 9500.
A distributed router example is the ASR 99xx.
The line card, in combination with the fabric modules, is responsible for packet lookup and
forwarding.
CatOS on the Supervisor Engine and Cisco IOS Software on the MSFC (Hybrid): a CatOS image can be
used as the system software to run the Supervisor Engine on Catalyst 6500/6000 switches. With the
MSFC installed, a separate Cisco IOS Software image is used to run the routing module.
In the latest Supervisor Engine, the MSFC is integrated. See the table for more details:
[Table, partially recoverable: MSFC options per Supervisor Engine. Recoverable entries: the Supervisor Engine 32 PISA integrates the functions of the MSFC2A onboard; the Supervisor Engine 32 has the MSFC2A onboard; the MSFC3 is onboard the Supervisor Engine 720; the MSFC2 is optional on earlier Supervisor Engines and is not field upgradeable.]
Cisco IOS Software on both the Supervisor Engine and MSFC (Native): a single Cisco IOS Software
image can be used as the system software to run both the Supervisor Engine and MSFC on Catalyst
6500/6000 switches.
What is the difference between M1 and F1 Cisco Nexus Line cards?
Cisco Nexus series switches brought a new technology to the data center. The whole design changed
from the Catalyst 6500 series: Nexus is no longer dependent on the supervisor's backplane and is
more of a midplane architecture. To elaborate on that statement, any speed limitation now comes
from the line card, and the line cards communicate with each other through the fabric modules. Read
further for details on the basic architecture difference between the Catalyst 6500 and the Nexus
7000.
Nexus line card modules fall into two major categories, M1 and F1. There is also a variation of the
M1, the M1-XL. Brad Hedlund wrote a good article that can be referenced for reading, titled
“Cisco Nexus 7000 connectivity solutions for Cisco UCS”.
M1, M1-XL
M1 Series cards were the introductory line cards offered by Cisco for the Nexus. They come with
80 Gbps of fabric connectivity. These cards have 10 Gigabit links, making them ideal for the
distribution layer. The specifications and performance metrics below come from the data sheets;
these cards provide both Layer 2 and Layer 3 connectivity. You can multiply these numbers by the
maximum number of line cards that can be installed in a chassis to get the marketing figures.
1- Delivery of 60 million packets per second (Mpps) for Layer 2 and Layer 3 IPv4 forwarding.
2- Delivery of 30 Mpps for IPv6 unicast.
3- Access control list (ACL) capacity of 64k entries per module. The entries include Layer 2, 3, and 4
addresses and Cisco metadata fields such as security group tags (SGTs).
4- On the 32-port line card, each group of 4 ports shares 10 Gbps of fabric bandwidth. A group can run
either with one port in dedicated 10 Gigabit mode (ports 2, 3, and 4 disabled) or with all 4 ports in
shared mode.
5- Memory: 1 GB DRAM
6- Network management: Cisco DCNM 4.0
7- MAC address table size of 128k entries
8- FIB table of 128k entries
9- NetFlow support for 512k entries, both ingress and egress
10- 16,384 bridge domains and 4096 VLANs per virtual device context (VDC)
11- 16k policer entries
The M1-XL Series offers the flexibility and performance for Internet-facing deployments, along with
wider transceiver module support. What it basically offers is the possibility of a larger FIB, as can be
seen from the following:
* up to 1M IPv4 routes (depending on prefix distribution)
* up to 350k IPv6 routes (depending on prefix distribution)
This was not possible on the M1 line cards. The M1-XL also provides support for more ACL entries
than the M1, along with increased DRAM:
1- Memory: 2 GB DRAM
2- Access control list (ACL) capacity of 128k entries per module.
3- Network management: Cisco DCNM 5.1
F1 Series Card:
F1 Series line cards were introduced after the M1. They are slightly cheaper and provide higher port
density, but with ONLY Layer 2 forwarding. This makes them an ideal line card for the access layer.
What happens if Layer 3 processing is required? The line card forwards that traffic to the M1 or
M1-XL cards for processing. These cards have 230 Gbps of fabric connectivity.
The forwarding engine is something new. Every two ports are connected to a switch-on-chip (SoC),
and these SoCs are the forwarding engines. Each SoC supports 16k MAC entries. Here is how the
marketing figure is derived: for 32 ports there are 16 SoCs, so with careful planning (one VLAN per
SoC) you get a total of 256k of MAC address support. But if one VLAN is spanned across all the SoCs,
you are bound by the 16k MAC entry limit of a single SoC.
These cards also support Cisco FabricPath technology. From the data sheet:
• Operational simplicity: Cisco FabricPath embeds an auto discovery mechanism that does not
require any additional platform configuration. By offering Layer 2 connectivity, this “VLAN
anywhere” characteristic simplifies provisioning and offers workload flexibility across the network.
• High resiliency and performance: Since Cisco FabricPath is a routed Layer 2 protocol, it offers
stability, scalability, and optimized resiliency along with network failure containment.
• Massively scalable fabric: By building a forwarding model on 16-way ECMP, Cisco FabricPath helps
prevent bandwidth bottlenecks and allows capacity to be added dynamically, without network
disruption.
They also have the ability to connect FCoE. These features include:
1- Virtual SANs (VSANs)
2- Inter-VSAN Routing
3- PortChannels (up to 16 links)
4- Storage VDC
The supervisor engine is the heart of chassis-based switches such as the Cisco 4500 and 6500. By
default, a chassis-based switch does not come with a supervisor engine fixed in it; you choose the
appropriate supervisor engine according to your requirements. For example, a Supervisor Engine 1
purchased a few years ago does not support 10 Gbps Ethernet.
The Supervisor Engine 720 has a backplane capacity of 720 Gbps.
By installing the latest supervisor engines in your existing investments (switches and routers), you can
scale system performance and integrate next-generation services into your network.
Within a single multilayer switch chassis, two supervisor modules with integrated route
processors can be used to provide hardware redundancy. If an entire supervisor module fails, the
other module can take over and continue operating the switch.
The supervisor engine contains the following integrated daughter cards, which perform forwarding
and routing and provide the protocols supported on the router:
Policy Feature Card (PFC): The forwarding plane. It performs hardware-based forwarding for the
chassis.
Multilayer Switch Feature Card (MSFC): The control plane. It performs routing for the chassis,
contains the route processor (RP) and switch processor (SP) for the router, and runs Layer 2 and
Layer 3 protocols, such as the Spanning Tree Protocol (STP) and others.
You can view a 3D model of the Catalyst 6500 switch to see what the switch looks like, mount and
demount supervisor engines and other inserted modules, and view brief details of them by removing
them from the chassis.
https://blog.router-switch.com/2011/12/cisco-supervisor-engines-benefits/
https://www.cisco.com/c/en/us/support/docs/switches/nexus-2000-series-fabric-extenders/200363-nexus-2000-fabric-extenders-supported-un.html
CAUTION
In NX-OS, you have to enable features such as OSPF, BGP, and CTS; if you remove a feature via
the no feature command, all relevant commands related to that feature are removed from the
running configuration.
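A short sketch of the behavior described in the caution, using OSPF as the example feature (the process ID and router ID are arbitrary values):

switch# configure terminal
switch(config)# feature ospf
switch(config)# router ospf 1
switch(config-router)# router-id 1.1.1.1
switch(config-router)# exit
switch(config)# no feature ospf
! at this point the router ospf 1 block and its router-id are gone from the running configuration
switch(config)# show running-config | include ospf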
The kickstart image contains the Linux kernel, basic drivers, and the initial file system.
The system image contains the system software, infrastructure, and Layer 4 through Layer 7 features.
The Erasable Programmable Logic Device (EPLD) image: EPLDs are found on the currently shipping
Nexus 7000 I/O modules. EPLD images are not released frequently, and even when an EPLD image is
released, the network administrator is not forced to upgrade to the new image. EPLD image
upgrades for I/O modules disrupt traffic going through the module, because the I/O module powers
down briefly during the upgrade. EPLD image upgrades are performed one module at a time.
On the Nexus 7000 with dual-supervisor modules installed, NX-OS supports in-service software
upgrades (ISSU). NX-OS ISSU upgrades are performed without disrupting data traffic. If the upgrade
requires EPLD to be installed onto the line cards that causes a disruption of data traffic, the NX-OS
software warns you before proceeding so that you can stop the upgrade and reschedule it to a time
that minimizes the impact on your network.
An ISSU can upgrade the following images:
Kickstart image
System image
Supervisor module BIOS
Data module image
Data module BIOS
Connectivity management processor (CMP) image
CMP BIOS
Nexus Boot Process
You will need to know each phase of boot, what is the output, and how to interrupt the
process. This table is from my notes.
Phase: BIOS. Prompt: loader> (with a "no bootable device" message if no image is found). Interrupt:
Ctrl-C. The BIOS begins the POST, memory test, and other operations. While the test is in progress,
press Ctrl-C to enter the BIOS config utility and use the netboot option.
Phase: Kickstart. Output: "uncompressing system image". Interrupt: Ctrl-]. Prompt: switch(boot)#.
When the boot loader phase is over, press Ctrl-] to enter the switch(boot)# prompt. Depending on
your Telnet client, you may have to remap the keystroke. If corruption causes the console to stop at
this prompt, copy the system image and reboot.
Step 1. Upgrade the BIOS on the active and standby supervisor modules and the line cards (data
cards/nonsupervisor modules).
Step 2. Bring up the standby supervisor module with the new kickstart and system images.
Step 3. Switch over from the active supervisor module to the upgraded standby supervisor module.
Step 4. Bring up the old active supervisor module with the new kickstart image and the new system
image.
Step 5. Upgrade the CMP on both supervisor modules.
Step 6. Perform a nondisruptive image upgrade for the line cards (data cards/nonsupervisor modules),
one at a time.
Step 7. The ISSU upgrade is complete.
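On a dual-supervisor Nexus 7000 these steps are driven by a single install all command; a minimal sketch follows (the image filenames are assumed values):

switch# show install all impact kickstart bootflash:n7000-s1-kickstart.6.2.10.bin system bootflash:n7000-s1-dk9.6.2.10.bin
switch# install all kickstart bootflash:n7000-s1-kickstart.6.2.10.bin system bootflash:n7000-s1-dk9.6.2.10.bin

The impact check reports whether the upgrade will be disruptive (for example, because of a required EPLD upgrade) before any change is actually made.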
http://www.firewall.cx/cisco-technical-knowledgebase/cisco-data-center/1220-nexus-7000-nx-os-upgrade-via-issu.html
N7K
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/6_x/nx-os/upgrade/guide/b_Cisco_Nexus_7000_Series_NX-OS_Software_Upgrade_and_Downgrade_Guide_Release_6-x.html
N5K
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/upgrade/503_N1_1/n5000_upgrade_downgrade_503_n1_1.html
N9K
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/upgrade/guide/b_Cisco_Nexus_9000_Series_NX-OS_Software_Upgrade_and_Downgrade_Guide_Release_6x/b_Cisco_Nexus_9000_Series_NX-OS_Software_Upgrade_and_Downgrade_Guide_Release_6x_chapter_01.html