
Catalyst 6500 Sup2T System QOS Architecture

White Paper
April 13, 2011


Abstract
This document explains the Quality of Service (QoS) capabilities available in the Catalyst 6500 Switch as they apply to the Policy Feature Card 4 (PFC4) engine, and provides some examples of QoS implementation. It expands on the concepts and terminologies detailed in the white paper Understanding Quality of Service on the Catalyst 6500 Switch (http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/white_paper_c11_538840.html). This QoS paper focuses on hardware that is shipping as of the date of this publication, and is not meant to be a configuration guide. Configuration examples are used throughout this paper to assist in the explanation of QoS features of the Catalyst 6500 hardware and software. For syntax reference for QoS command structures, please refer to the configuration and command guides for the Catalyst 6500 (http://www.cisco.com/univercd/cc/td/doc/product/lan/cat6000/index.htm).


Contents
1. Overview
2. New QOS Features in PFC 4-Based System
3. QoS Hardware Support in Supervisor 2T Systems
   3.1. PFC 4
   3.2. Interface Types Supported
   3.3. Buffers, Queues, and Thresholds in PFC4-Based Line Cards
   3.4. Line Card Port ASIC Queue Structure
      3.4.1. 10 Gigabit Ethernet Line Card (WS-X6908-10G and WS-X6816-10G)
      3.4.2. 40 Gigabit Ethernet Line Card (WS-X6904-40GE)
4. QoS Processing in PFC4 System
5. QoS TCAM
6. Serial QoS Model with PFC4 Hardware
7. Unified Policy Configuration with C3PL
   7.1. Change in Default QoS Behavior
   7.2. Default State of Port Level QoS
   7.3. Port Ingress CoS to Queue Mapping
   7.4. Configuration CLI
8. PFC4 Ingress Map and Port Trust
9. Layer 2 Classification of Layer 3 Packets
   9.1. Use Cases for Layer 2 Classification of Layer 3 Traffic
   9.2. Configuration
10. Enhanced IPv4/IPv6 Classification
11. Marking
   11.1. Use Case for Marking
12. Policing
   12.1. Distributed Policer
      12.1.1. Use Cases for Distributed Policing
      12.1.2. Configuration
   12.2. Microflow Policer
      12.2.1. Packets and Bytes-Based Policing
13. IP Tunnel QoS
   13.1. Ability to Mark Inner Header with PFC4
      13.1.1. Use Case for IP Tunnel QoS
      13.1.2. Configuration
   13.2. MPLS Over GRE Tunnels
14. MPLS QoS
   14.1. Ability to Distinguish IP-to-IP from IP-to-Tag Traffic
      14.1.1. Use Case for MPLS QoS
   14.2. Improved Performance
15. Multicast QoS
16. Appendix 1


List of Figures
Figure 1. Policy Feature Card 4 on the Supervisor 2T
Figure 2. WS-X6908-10G Line Card
Figure 3. WS-X6816-10G Line Card
Figure 4. WS-X6904-40G Line Card: Can Operate in 40 G or 10 G Mode
Figure 5. QoS Processing in PFC4-Based Line Card
Figure 6. IFE and OFE Process
Figure 7. Unified Policy Configuration with C3PL
Figure 8. PFC4 Default Port QoS Status
Figure 9. PFC4 Default Port Ingress CoS to Queue Mapping
Figure 10. PFC4 Ingress Map
Figure 11. Classification of Layer 3 Packets with Layer 2
Figure 12. Distributed Policer in the PFC4-Based System
Figure 13. Diff Serv Uniform Mode
Figure 14. Diff Serv Pipe Mode


List of Tables
Table 1. Policing Capability Differences Between PFC4 and PFC3
Table 2. Hardware Interface Capability Differences Between PFC3 and PFC4
Table 3. Buffers, Queues, and Thresholds in PFC4-Based Line Cards
Table 4. Line Card Port ASIC Queue Structure for PFC4-Based 10 G Cards
Table 5. Line Card Port ASIC Queue Structure for PFC4-Based 40 G Cards
Table 6. TCAM Resource Differences for QoS in PFC4
Table 7. Microflow Policer Capability Differences Between PFC3 and PFC4
Table 8. Summary of Command Migration for Cat6500 Specific Global CLI
Table 9. Summary of Command Migration for Cat6k Specific Interface CLI
Table 10. Summary of Migration for Cat6K Specific Policy Map Commands


1. Overview
Policy Feature Card 4 (PFC4) is the next-generation forwarding engine for the Cisco Catalyst 6500 switch, and provides significant improvements over its predecessor, Policy Feature Card 3 (PFC3). Some of these improvements include support for distributed policing, expanded tables for holding more QoS policies, enhanced IPv4 and IPv6 classification, packet and byte mode policing, and more. Furthermore, one of the more significant enhancements is support for Cisco Common Classification Policy Language (C3PL), a platform-independent interface for configuring QoS that will now be supported in the Cisco Catalyst 6500.

2. New QOS Features in PFC 4-Based System


As with the PFC3, the PFC4 consists of two ASIC components. The first ASIC is responsible for frame parsing and Layer 2 switching. The second ASIC is responsible for IPv4/IPv6 routing, MPLS label switching and imposition/disposition, Access Control Lists, QoS policies, NetFlow, and more. The PFC4-based system supports the following new QOS capabilities:

- Serialized QoS model in hardware
- Separated ingress and egress processing
- Port trust/CoS defined in the PFC4/DFC4 (Distributed Forwarding Card 4)
- Up to 256 K QoS TCAM entries
- Layer 2 classification for Layer 3 packets
- Enhanced IPv4 classification (Packet Length, Time To Live, and Option)
- Enhanced IPv6 classification (Extended Header and Flow Label)
- Ingress/egress aggregate/microflow policer
- Packet/byte mode policing
- More accurate policing results, even at low policing rates
- Distributed ingress/egress policing
- Cisco Common Classification Policy Language (C3PL)-based Command Line Interface (CLI)

Before we look into these capabilities in detail, here is an overview of the hardware:

3. QoS Hardware Support in Supervisor 2T Systems


3.1. PFC 4
The PFC4 supports the following new capabilities from a QoS standpoint:

- Microflow policer on both the ingress and egress directions
- Improved microflow policer rate configuration accuracy
- Distributed policers (hardware capable of 4 K)
- Better hardware Control Plane Policing policy for Layer 2 traffic, matchable on exceptions


Figure 1. Policy Feature Card 4 on the Supervisor 2T

- Internal/discard-class markdown without rewriting the outgoing packet Differentiated Services Code Point (DSCP)
- Bytes- and packets-based policing, compared to bytes only in the PFC3
- Ability to set QoS policies on IP tunnels
- Ability to mark VPLS traffic on ingress
- Unified policy configurations for ingress/egress queuing
- Improved MPLS performance, with no packet recirculation when an IP policy is configured on an egress interface
- Support for MPLS pipe mode QoS model on an egress Provider Edge (PE) interface

The policing feature comparison between PFC4 and PFC3 can be found in Table 1 below.
Table 1. Policing Capability Differences Between PFC4 and PFC3

Policing Capability | Attribute | PFC3 System | PFC4 System
Aggregate Policer | Number | 1 K | 16 K
Aggregate Policer | Direction | Both | Both
Aggregate Policer | Configuration | 1023 | 1023
Aggregate Policer | Accuracy | <= 3-5% | Maximum (0.1%, 1 kbps)
Aggregate Policer | Distributed | No | Yes (supports up to 4095 distributed policers)
Microflow Policer | Number | 256 K | 512 K/1 M (depending on non-XL or XL PFC)
Microflow Policer | Direction | In | Both
Microflow Policer | Configuration | 63 | 127
Microflow Policer | Accuracy | <= 3-5% | Maximum (0.1%, 1 kbps)

3.2. Interface Types Supported
PFC3 provided features on a per-port or per-VLAN basis. PFC4 provides an additional way to map a port or VLAN or a port-VLAN combination to a Logical Interface (LIF). This represents an internal (to the operating system) structure for forwarding services/features for a port, VLAN, or port-VLAN pair. This capability increases granularity by allowing for association of properties or features at the port, VLAN, or port-VLAN level. Table 2 captures the hardware interface capability differences between the PFC3 and PFC4.


Table 2. Hardware Interface Capability Differences Between PFC3 and PFC4

Capability | PFC3 | PFC4
Maximum number of physical ports | 2 k | 10 k
Maximum number of routed interfaces | 4 k | 128 k
Maximum number of L3 sub-interfaces | 4 k | 128 k
Maximum number of SVIs | 4 k | 16 k
Number of tunnel interfaces | 2 k | 128 k
Global VLAN map | No | Yes
3.3. Buffers, Queues, and Thresholds in PFC4-Based Line Cards
Buffers are used to store frames while forwarding decisions are made within the switch, or as packets are enqueued for transmission on a port at a rate greater than the physical medium can support. When QoS is enabled on the switch, the port buffers are divided into one or more individual queues. Each queue has one or more drop thresholds associated with it. The combination of multiple queues within a buffer, and the drop thresholds associated with each queue, allow the switch to make intelligent decisions when faced with congestion. Traffic sensitive to jitter and delay variance, such as VoIP packets, can be moved to a higher priority queue for transmission, while other less important or less sensitive traffic can be buffered or dropped. The number of queues and the amount of buffering per port is dependent on the line card module and the port ASIC that is used on that module. Table 3 provides an overview of the QoS queue and buffer structures for the Supervisor 2T and the 69xx and 68xx line card modules. The following information is detailed for each of the Catalyst 6500 series Ethernet modules in the following table:

- Total buffer size per port (total buffer size)
- Overall receive buffer size per port (Rx buffer size)
- Overall transmit buffer size per port (Tx buffer size)
- Port receive queue and drop threshold structure (Rx port type)
- Port transmit queue and drop threshold structure (Tx port type)
- Default size of receive buffers per queue with QoS enabled (Rx queue sizes)
- Default size of transmit buffers per queue with QoS enabled (Tx queue sizes)

The individual queues and thresholds on a port are represented in the table using a simple terminology, which describes the number of strict priority queues (if present), the number of standard queues, and the number of taildrop or Weighted Random Early Detection (WRED) thresholds within each of the standard queues. For example, a transmit queue of 1p7q4t will represent one strict priority queue, seven standard queues with four WRED drop thresholds per queue, supporting both Deficit Weighted Round Robin (DWRR) and Shaped Round Robin (SRR). Similarly, a receive queue of 8q4t will represent zero strict priority queues, eight standard queues with four tail-drop thresholds per queue.
Table 3. Buffers, Queues, and Thresholds in PFC4-Based Line Cards

VS-S2T-10G-XL (Supervisor Module): Supervisor 2T 10 Gb uplink ports in 10 G only mode
  Total buffer size: 191.8 MB; Rx buffer size: 104.2 MB; Tx buffer size: 87.6 MB
  Rx port type: 8q4t; Tx port type: 1p7q4t
  Rx queue sizes: Q8-20.8 MB, Q7-0 MB, Q6-0 MB, Q5-0 MB, Q4-0 MB, Q3-0 MB, Q2-0 MB, Q1-83.4 MB
  Tx queue sizes: SP-13.9 MB, Q7-0 MB, Q6-0 MB, Q5-0 MB, Q4-0 MB, Q3-13.0 MB, Q2-17.3 MB, Q1-43.4 MB

Supervisor 2T 10 Gb uplink ports
  Total buffer size: 191.8 MB; Rx buffer size: 104.2 MB; Tx buffer size: 87.6 MB
  Rx port type: 2q4t; Tx port type: 1p3q4t (DWRR, SRR)
  Rx queue sizes: Q2-20.8 MB, Q1-83.4 MB
  Tx queue sizes: SP-13.9 MB, Q3-13.0 MB, Q2-17.3 MB, Q1-43.4 MB

Supervisor 2T Gb uplink ports
  Total buffer size: 17.7 MB; Rx buffer size: 9.6 MB; Tx buffer size: 8.1 MB
  Rx port type: 2q4t; Tx port type: 1p3q4t (DWRR, SRR)
  Rx queue sizes: Q2-1.9 MB, Q1-7.7 MB
  Tx queue sizes: SP-1.2 MB, Q3-1.2 MB, Q2-1.6 MB, Q1-4.1 MB

WS-X6908-10G: 10 Gb Ethernet line card
  Total buffer size: 191.8 MB; Rx buffer size: 104.2 MB; Tx buffer size: 87.6 MB
  Rx port type: 8q4t; Tx port type: 1p7q4t
  Rx queue sizes: Q8-20.8 MB, Q7-0 MB, Q6-0 MB, Q5-0 MB, Q4-0 MB, Q3-0 MB, Q2-0 MB, Q1-83.4 MB
  Tx queue sizes: SP-13.9 MB, Q7-0 MB, Q6-0 MB, Q5-0 MB, Q4-0 MB, Q3-13.0 MB, Q2-17.3 MB, Q1-43.4 MB

WS-X6816-10G: 10 Gb Ethernet 16-port line card (oversubscription mode)
  Total buffer size: 91 MB; Rx buffer size: 1 MB; Tx buffer size: 90 MB
  Rx port type: 1p7q2t; Tx port type: 1p7q4t

WS-X6816-10G: 10 Gb Ethernet 16-port line card (performance mode)
  Total buffer size: 199 MB; Rx buffer size: 109 MB; Tx buffer size: 90 MB
  Rx port type: 8q4t; Tx port type: 1p7q4t

WS-X6904-40G: 40 Gb Ethernet line card (40 G mode)
  Total buffer size: 93 MB; Rx buffer size: 5 MB; Tx buffer size: 88 MB
  Rx port type: 1p7q4t; Tx port type: 1p7q4t (WRR, DWRR, SRR*)
  Rx queue sizes: SP-640 KB, Q7-640 KB, Q6-640 KB, Q5-640 KB, Q4-640 KB, Q3-640 KB, Q2-640 KB, Q1-640 KB
  Tx queue sizes: SP-11 MB, Q7-11 MB, Q6-11 MB, Q5-11 MB, Q4-11 MB, Q3-11 MB, Q2-11 MB, Q1-11 MB

WS-X6904-40G: 40 Gb Ethernet line card (10 G mode)
  Total buffer size: 22.25 MB; Rx buffer size: 1.25 MB; Tx buffer size: 21 MB
  Rx port type: 8q4t; Tx port type: 1p7q4t (WRR, DWRR, SRR*)
  Rx queue sizes: Q8-160 KB, Q7-160 KB, Q6-160 KB, Q5-160 KB, Q4-160 KB, Q3-160 KB, Q2-160 KB, Q1-160 KB
  Tx queue sizes: Q8-2.65 MB, Q7-2.65 MB, Q6-2.65 MB, Q5-2.65 MB, Q4-2.65 MB, Q3-2.65 MB, Q2-2.65 MB, Q1-2.65 MB
Learn more about buffers and queues for the Catalyst 6500 Ethernet line cards: http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper09186a0080131086.html

3.4. Line Card Port ASIC Queue Structure
It is important to note that PFC4-based line cards (WS-X6816-10G, WS-X6908-10G, and WS-X6904-40G) are compatible only with the Supervisor 2T, and not with previous-generation Supervisors. QoS details for these new PFC4-based line cards are provided below.

3.4.1. 10 Gigabit Ethernet Line Card (WS-X6908-10G and WS-X6816-10G)
The WS-X6908-10G is a 1:1 (non-oversubscribed) 8-port 10 Gb Ethernet line card that ships with a DFC4 on board. This module has a total of 80 Gb of bandwidth connecting into the switching fabric, and supports Cisco Trusted Security (CTS) and Layer 2 encryption (based on the IEEE 802.1ae standard) on all ports at wire speed.


Figure 2. WS-X6908-10G Line Card

Figure 3. WS-X6816-10G Line Card

The WS-X6816-10G line card is 4:1 oversubscribed and has a 40 Gb connection to the switching fabric. It can operate in two modes: performance mode and oversubscription mode. When configured in performance mode, the line card ports are classified by port groups, wherein each port group consists of four physical ports. It is important to note that only the first port of the port group is enabled, and that port comes with enhanced buffering and QoS functionality. The other three ports in the port group will be administratively shut down. When configured in oversubscription mode, the default mode of operation, all 16 ports are operational, although they are oversubscribed. The QoS specifications for both line cards are detailed in the following table:
Table 4. Line Card Port ASIC Queue Structure for PFC4-Based 10 G Cards

 | WS-X6908-10G | WS-X6816-10G
Number of 10 GE ports | 8 | 16
Number of port ASICs in the line card | 8 | 16
Number of physical ports per port ASIC | 1 | 1
Transmit (Tx) queue structure per port | 1p7q4t | 1p7q4t
Receive (Rx) queue structure per port | 8q4t | 1p7q4t (oversubscription mode), 8q4t (performance mode)
Receive strict priority queue | No | Yes (oversubscription mode), No (performance mode)
Transmit strict priority queue | Yes | Yes
Port level shaping capability | No | No

Note that both line cards require a Supervisor 2T to be installed in the chassis.


3.4.2. 40 Gigabit Ethernet Line Card (WS-X6904-40GE)
This line card comes pre-installed with a DFC4 and is capable of 80 Gbps (4 * 40 G or 16 * 10 G) bandwidth per slot. In both 40 G and 10 G mode, this line card is 2:1 oversubscribed. The line card supports 10 G interfaces through SFP+ and the FourX adapter. All ports in both modes support Cisco Trusted Security (CTS) and Layer 2 encryption (IEEE 802.1ae) at wire speed. The line card is capable of port level shaping; however, this will be supported in a post-FCS software release. The QoS specifications for this line card are detailed in the following table.
Figure 4. WS-X6904-40G Line Card: Can Operate in 40 G or 10 G Mode

Table 5. Line Card Port ASIC Queue Structure for PFC4-Based 40 G Cards

 | WS-X6904-40G (40 G mode) | WS-X6904-40G (10 G mode)
Number of 40 GE ports | 4 | 0
Number of 10 GE ports | 0 | 16
Number of port ASICs in the line card | 2 | 2
Number of physical ports per port ASIC | 2 | 8
Transmit queue structure per port | 1p7q4t | 1p7q4t
Receive queue structure per port | 1p7q4t | 8q4t
Receive (Rx) strict priority queue | Yes | No
Transmit (Tx) strict priority queue | Yes | Yes
Port level shaping capability | Yes (egress only) | Yes (egress only)

It is important to note that this line card requires a Supervisor 2T to be installed in the chassis.

4. QoS Processing in PFC4 System


QoS processing for the PFC4-based line cards can be split into three processing steps, with each step occurring at a different point in the system. These three steps are:


Figure 5. QoS Processing in PFC4-Based Line Card

1. Ingress QoS is performed on the ingress line card port; features include input queue scheduling and congestion avoidance.
2. PFC4 QoS features include port trust, marking, classification, QoS ACLs, and policing.
3. Egress QoS is performed on the egress line card port; features include queue scheduling, congestion avoidance, and, in some line cards, shaping.

5. QoS TCAM
In PFC3, QoS and ACL functions own separate Ternary Content Addressable Memories (TCAMs), with each TCAM supporting 32 K. The PFC4 moves to using a single TCAM with a flexible bank utilization capability supporting both QoS and other ACL features together. The TCAM size differences in the PFC4 (XL) and PFC4 (non-XL) modes can be found in Table 6.
Table 6. TCAM Resource Differences for QoS in PFC4

Resources | PFC3/PFC3XL | PFC4 | PFC4-XL
QoS TCAM | 32 K | 16 K (default) | 64 K (default)
Security ACL TCAM | 32 K | 48 K (default) | 192 K (default)

6. Serial QoS Model with PFC4 Hardware


One of the limitations of the PFC3 is that decisions are made in multiple cycles, thereby adding latency to the whole forwarding process. PFC4 makes many of these decisions in a single pass, albeit by going through the Layer 2 and Layer 3 components in a step-by-step process. The component that performs Layer 3 and QoS functionalities is implemented in a pipeline mode, with each stage in the pipeline performing a specific task. The two logical pipelines that make up the physical pipeline are the Input Forwarding Engine (IFE) and Output Forwarding Engine (OFE).


Figure 6. IFE and OFE Process

IFE performs input classification, QoS, ACL lookup, FIB forwarding, RPF check, and ingress NetFlow. OFE performs adjacency lookup, egress classification, egress NetFlow, and rewrite instruction generation.

IFE and OFE are two separate processes within the same physical hardware. The PFC4 architecture allows ingress and egress policing mechanisms in a single pass through the forwarding engine. (Learn more details about PFC4 hardware in the Cisco Catalyst 6500 Supervisor 2T Architecture White Paper.)

7. Unified Policy Configuration with C3PL


Cisco Common Classification Policy Language (C3PL) is similar to the Modular QoS CLI (MQC), in which class maps identify the traffic that is affected by the action that the policy map applies. C3PL supports configuration for QoS across Cisco platforms, and provides a platform-independent interface for configuring QoS.
Figure 7. Unified Policy Configuration with C3PL

Due to differences in queueing implementation in port ASICs, Modular QoS CLI (MQC) compliance in PFC3-based Catalyst 6500 systems is limited to marking and policing. Queueing configuration, such as mapping incoming traffic to a specific queue, along with other functionality such as trust and global maps, is configurable only through platform-specific command-line interfaces (CLI). The PFC4-based Supervisor 2T system removes these limitations by supporting C3PL, a policy-driven CLI syntax, for marking, policing, and queueing.
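As a minimal, hypothetical sketch of the C3PL/MQC configuration style on a Supervisor 2T (the class, policy, and interface names, the DSCP value, and the rate are placeholders and are not taken from this paper; exact options vary by software release):

SUP2T(config)#class-map match-all VOICE
SUP2T(config-cmap)#match dscp ef
SUP2T(config-cmap)#exit
! The policy map applies marking and policing actions to the classified traffic
SUP2T(config)#policy-map INGRESS-POLICY
SUP2T(config-pmap)#class VOICE
SUP2T(config-pmap-c)#police cir 10000000 conform-action transmit exceed-action drop
SUP2T(config-pmap-c)#exit
SUP2T(config-pmap)#exit
! The service policy attaches the policy map to an interface in the ingress direction
SUP2T(config)#interface GigabitEthernet1/1
SUP2T(config-if)#service-policy input INGRESS-POLICY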


7.1. Change in Default QoS Behavior
Prior to Supervisor 2T, there was a single global command to enable or disable QoS, which applied both at the PFC and port level. The major change with QoS in a PFC4 system is the behavior of default QoS at the PFC level. By default, QoS will be enabled in the PFC4, and there will only be a global command option to enable or disable QoS at the port level. The main changes can be broadly summarized as follows:

- No global CLI required to enable QoS in the box
- QoS for an interface is always defined by the attached service policies
- By default, packets are passed through without a change in DSCP, EXP, or CoS for L2 packets or L2-classified L3 packets

- Service-policy marking does not depend on port trust
- The port state has no effect on marking, by default

The PFC3-based mls qos global command is replaced with the auto qos default global command, which is used for enabling QoS just at the port level and not at the PFC level.

7.2. Default State of Port Level QoS
As alluded to in the previous section, the global QoS command in a PFC4 system cannot be used to control QoS at the PFC. By default, port level QoS is disabled, and the port level ingress queue scheduling and congestion avoidance are CoS-based.
Figure 8. PFC4 Default Port QoS Status

If a port is trusted and is not a Dot1Q trunk port, it will also use the default port CoS.

7.3. Port Ingress CoS to Queue Mapping
When a port is in default QoS mode, frames entering the switch are placed into either the strict priority queue or the normal queue, based on ingress CoS values.
Figure 9. PFC4 Default Port Ingress CoS to Queue Mapping


If a strict priority queue is present, frames with a CoS value of 5 are placed in it. All other frames are queued in the normal queue. The normal queues are configured with drop thresholds that define which CoS packets can be dropped when the queue fills up beyond the threshold.

7.4. Configuration CLI
The QoS status for the system can be identified using the following command:

6513E.SUP2T.SA.1#show auto qos default
"auto qos default" is configured
Earl qos         Enabled
port qos         Enabled
queueing-only    No
6513E.SUP2T.SA.1#
6513E.SUP2T.SA.1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
6513E.SUP2T.SA.1(config)#no auto qos defa
6513E.SUP2T.SA.1(config)#no auto qos default
6513E.SUP2T.SA.1(config)#end
6513E.SUP2T.SA.1#
6513E.SUP2T.SA.1#show auto qos default
"auto qos default" is not configured
Earl qos         Enabled
port qos         Disabled
queueing-only    No
6513E.SUP2T.SA.1#

The QoS status, such as the queueing at the port level, can be obtained using the following CLI:

6513E.SUP2T.SA.1#show queueing interface Gig1/24
Interface GigabitEthernet1/24 queueing strategy: Weighted Round-Robin
  Port QoS is enabled globally
  Queueing on Gi1/24: Tx Enabled Rx Enabled
  Trust boundary disabled
  Trust state: trust DSCP
  Trust state in queueing: trust COS
  Extend trust state: not trusted [COS = 0]
  Default COS is 0
  Queueing Mode In Tx direction: mode-cos
  Transmit queues [type = 1p3q8t]:
    Queue Id    Scheduling    Num of thresholds
    -----------------------------------------
       01         WRR                  08
       02         WRR                  08
       03         WRR                  08
       04         Priority             01

    WRR bandwidth ratios:  100[queue 1] 150[queue 2] 200[queue 3]
    queue-limit ratios:     50[queue 1]  20[queue 2]  15[queue 3]  15[Pri Queue]

    queue tail-drop-thresholds
    --------------------------
    1     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
    2     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
    3     100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]

    queue random-detect-min-thresholds
    ----------------------------------
    1     40[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]
    2     40[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]
    3     70[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8]

    queue random-detect-max-thresholds
    ----------------------------------
    1     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
    2     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
    3     100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]

    WRED disabled queues:

    queue cos-map
    ---------------------------------------
    1     0 1
    2     2 3 4
    3     6 7
    4     5

  Queueing Mode In Rx direction: mode-cos
  Receive queues [type = 1q8t]:
    Queue Id    Scheduling    Num of thresholds
    -----------------------------------------
       01         WRR                  08

    WRR bandwidth ratios:  100[queue 1]
    queue-limit ratios:    100[queue 1]

    queue tail-drop-thresholds
    --------------------------
    1     50[1] 50[2] 60[3] 60[4] 80[5] 80[6] 100[7] 100[8]

    queue cos-map
    ---------------------------------------
    1     0 1 2 3 4 6 7 5

  Packets dropped on Transmit:
    BPDU packets:  0

    queue    dropped  [cos-map]
    ---------------------------------------------
    1              0  [0 1 ]
    2              0  [2 3 4 ]
    3              0  [6 7 ]
    4              0  [5 ]

  Packets dropped on Receive:
    BPDU packets:  0

    queue    dropped  [cos-map]
    ---------------------------------------------
    1              0  [0 1 2 3 4 6 7 5 ]
6513E.SUP2T.SA.1#
6513E.SUP2T.SA.1#

A table detailing the summary of changed CLIs for a PFC4-based system can be found in Appendix 1.


8. PFC4 Ingress Map and Port Trust


In a PFC4 system, port trust is now defined in the PFC4/DFC4, instead of being taken from the port ASIC. The Layer 3 forwarding logic will assign a 6-bit Discard Class value for the packet passing through the whole packet processing pipeline.
Figure 10. PFC4 Ingress Map

As represented in Figure 10, each frame's CoS, IP precedence, EXP, and DSCP value gets mapped to a corresponding discard-class value, based on an ingress map maintained in the PFC4.
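To illustrate how trust-like behavior can be expressed in an ingress policy rather than at the port ASIC, the following hypothetical sketch (class and policy names, CoS and discard-class values are placeholders, not from this paper) sets the internal discard-class for frames arriving with CoS 5:

SUP2T(config)#class-map match-all COS5-IN
SUP2T(config-cmap)#match cos 5
SUP2T(config-cmap)#exit
SUP2T(config)#policy-map SET-DISCARD-CLASS
SUP2T(config-pmap)#class COS5-IN
! Populate the internal discard-class; the packet itself is not rewritten
SUP2T(config-pmap-c)#set discard-class 5
SUP2T(config-pmap-c)#exit
SUP2T(config-pmap)#exit
SUP2T(config)#interface TenGigabitEthernet3/1
SUP2T(config-if)#service-policy input SET-DISCARD-CLASS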

9. Layer 2 Classification of Layer 3 Packets


PFC4 provides a new capability to perform classification on a Layer 3 packet using Layer 2 information. The decision matching is performed on the MAC address, even though the packet may not have arrived on a Layer 2 interface. This feature allows:

- Separate ingress/egress control
- Control of Layer 2 or Layer 3 NetFlow creation
- DSCP for Layer 2-classified IP packets preserved by default, unless rewritten by an explicit DSCP marking command
- Per-port option to choose VLAN-based packet classification setting


Classification of Layer 3 Packets with Layer 2

Figure 11.

In a PFC3 system, MAC access list functions are for non-IP traffic only, as it is not possible to police Layer 3 traffic based on Layer 2 MAC information. In a PFC4 system, this limitation has been lifted, so a class map using Layer 2 information can be applied to IP traffic.

9.1. Use Cases for Layer 2 Classification of Layer 3 Traffic
Consider a provider edge with a pure Layer 2 network that wants to classify Layer 3 traffic, based on one of the following Layer 2 elements:

- Match incoming CoS for Layer 3 traffic
- Match outer VLAN for Q-in-Q traffic
- Match inner VLAN for Q-in-Q traffic
- Match inner Dot1Q CoS for Q-in-Q traffic
- Match Layer 2 destination missed traffic
- Match ARP traffic
- Match BPDU traffic

9.2. Configuration

SUP2T(config-if)#mac packet-classify ?
  input     classify L3 packets as layer2 on input
  output    classify L3 packets as layer2 on output

Match incoming CoS for Layer 3 traffic:
SUP2T(config)#class-map match-all [Name]
SUP2T(config-cmap)#match cos 5

Match outer VLAN for Q-in-Q traffic:
Sup2T(config)#mac packet-classify use outer-vlan ?
  in     Apply to Ingress mac acl
  out    Apply to egress mac acl

Match inner VLAN for Q-in-Q traffic:
Sup2T(config)#mac access-list extended [Name]
Sup2T(config-ext-macl)#permit any any ce_vlan [ID]

Match inner Dot1Q CoS for Q-in-Q traffic:
SUP2T(config-if)#mac packet-classify use ce_cos ?
  input    Use inner cos for classification in ingress direction

Match Layer 2 Destination Missed Traffic:
SUP2T(config)#class-map match-all [Name]
SUP2T(config-cmap)#match l2 miss

Match ARP:
SUP2T(config)#arp access-list test
SUP2T(config-arp-nacl)#permit ip any mac ?
  H.H.H    Sender's MAC address (and mask)
  any      Any MAC address
  host     Single Sender host

BPDU Classification:
SUP2T(config)#mac packet-classify bpdu
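To tie these pieces together, the hedged sketch below (MAC address, names, interface, and rate are hypothetical, not from this paper) enables Layer 2 classification of Layer 3 packets on a port and rate-limits IP traffic matching a MAC ACL:

SUP2T(config)#mac access-list extended SERVER-MAC
SUP2T(config-ext-macl)#permit host 0000.1111.2222 any
SUP2T(config-ext-macl)#exit
SUP2T(config)#class-map match-all L2-SERVER
SUP2T(config-cmap)#match access-group name SERVER-MAC
SUP2T(config-cmap)#exit
SUP2T(config)#policy-map L2-CLASSIFY-IN
SUP2T(config-pmap)#class L2-SERVER
SUP2T(config-pmap-c)#police cir 50000000 conform-action transmit exceed-action drop
SUP2T(config-pmap-c)#exit
SUP2T(config-pmap)#exit
! Treat L3 packets as Layer 2 on ingress so the MAC-based class applies to IP traffic
SUP2T(config)#interface GigabitEthernet2/1
SUP2T(config-if)#mac packet-classify input
SUP2T(config-if)#service-policy input L2-CLASSIFY-IN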


10. Enhanced IPv4/IPv6 Classification


With the PFC4 system, classification for IPv4 and IPv6 packets can now be performed based on packet length, Time To Live (TTL), and various different fields in the headers. For the IPv6 packets, classification can be performed using flow label and extended header.
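For instance, a class map might select small IPv4 packets by length using standard MQC match criteria; this is a hypothetical sketch (class and policy names and the length and DSCP values are placeholders), so verify support and ranges on the target software release:

SUP2T(config)#class-map match-all SMALL-PACKETS
! Match IPv4 packets between 64 and 256 bytes in length
SUP2T(config-cmap)#match packet length min 64 max 256
SUP2T(config-cmap)#exit
SUP2T(config)#policy-map LENGTH-POLICY
SUP2T(config-pmap)#class SMALL-PACKETS
SUP2T(config-pmap-c)#set dscp cs1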

11. Marking
The important changes for QoS marking in PFC4 involve compatibility to the C3PL and are summarized below:

- No global CLI required to enable PFC QoS
- QoS global maps defined using C3PL table-map syntax
- DSCP is preserved by default, independent of port state
- CoS is preserved by default for Layer 2 packets, independent of port state
- Port trust dscp/precedence command is eliminated

In the PFC3-based system, prioritizing a packet resulted in a rewrite of the IP packet and, therefore, the DSCP. The PFC4 provides the ability to prioritize a packet without rewriting the IP packet. DSCP transparency can be controlled on a per-policy-class basis.

11.1. Use Case for Marking
Consider a scenario where a user gets packets with a certain discard-class on the ingress, but does not want them to be classified with a discard-class on the egress. Here, both match dscp and match discard-class commands can be present in the configuration. With the PFC4 system, it is possible for match dscp to use the incoming packet's DSCP to classify and for match discard-class to use the discard-class after the rewrite for classification.
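As a hedged illustration of this behavior (policy and class names and the values are hypothetical), an ingress policy could set only the internal discard-class, leaving the packet DSCP untouched, while egress classes key on either field:

! Ingress: mark only the internal discard-class; the packet DSCP is left unchanged
SUP2T(config)#policy-map INGRESS-MARK
SUP2T(config-pmap)#class class-default
SUP2T(config-pmap-c)#set discard-class 3
SUP2T(config-pmap-c)#exit
SUP2T(config-pmap)#exit
! Egress: one class matches on the original DSCP, another on the discard-class set above
SUP2T(config)#class-map match-all BY-DSCP
SUP2T(config-cmap)#match dscp af31
SUP2T(config-cmap)#exit
SUP2T(config)#class-map match-all BY-DISCARD-CLASS
SUP2T(config-cmap)#match discard-class 3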

12. Policing
12.1. Distributed Policer
Distributed Policing is a new PFC4 feature with which policers from multiple DFC4 line cards can be configured to collectively rate-limit the aggregate traffic received on a set of interfaces. This is not possible with previous-generation PFC systems, which can only rate-limit traffic received locally on the specific line card. As illustrated in Figure 12, Distributed Policing is desirable when customers want to apply rate-limiting on a set of interfaces that belong to a cluster. This can include, for example, a VLAN or EtherChannel with member ports on different PFC-based line cards. In addition to metering traffic independently, each policer within the system is capable of synchronizing with other policers. As a result, a policer on any PFC4 effectively sees all the traffic received for that cluster throughout the system across all PFC4s.
Figure 12. Distributed Policer in the PFC4-Based System


A distributed policer maintains two sets of buckets and thresholds:


- Global counts and thresholds, which reflect the aggregate traffic across all PFC4s
- Local counts and thresholds, which reflect the local unpoliced traffic

Policing decisions are made using the global and local bucket counts. When the sum of these two counts exceeds the global threshold, the policer on each PFC4-based line card applies the policer action independently. Distributed policing is supported in the first 4 K of the 16 K aggregate policers available on any PFC4 in the system. There are two modes of distributed policing:

Strict mode, where a single policy map will be rejected on the interface if it cannot fit fully within the 4 k distributed policer region

Loose mode, where the policy map will not be rejected, but will be installed in the non-distributed policer region, and will behave as in PFC3

12.1.1. Use Cases for Distributed Policing
The following use cases are applicable for distributed policing:
1. When the traffic for a particular VLAN that is spread across multiple line cards needs to be rate-limited
2. When a port channel has members across line cards, and a rate-limiting policy is applied to it
3. When traffic of a set of interfaces on different line cards needs to be policed together as a cluster; specifically, a shared/aggregate policer that is also distributed across PFC4-based line cards

12.1.2. Configuration
Distributed Policing is disabled by default, and can be enabled or disabled with a global command, making the feature very flexible for customers. Since only up to 4 K distributed policers are available, if more policers are configured on the VLANs or port channels, subsequent ones will not be distributed and will behave as in PFC3.

Disabling Distributed Policing Globally:
Cat6500(config)#no platform qos police distributed strict | loose

Enabling Distributed Policing Globally:
Cat6500(config)#platform qos police distributed

The distributed policer status can be obtained by issuing the following command:

6513E.SUP2T.SA.1#show platform qos
QoS is enabled globally
Port QoS is disabled globally
QoS serial policing mode enabled globally
Distributed Policing is Loose enabled
Secondary PUPs are enabled
QoS 10g-only mode supported: Yes [Current mode: Off]

----- Module [3] -----
Counter                      IFE Pkts      IFE Bytes       OFE Pkts      OFE Bytes
-----------------------------------------------------------------------------------
Policing Drops               0             0               0             0
Policing Forwards            718139331     75135381110     718203413     75162822588
Police-hi Actions (Lvl3)     0             0               0             0
Police-lo Actions (Lvl2)     0             0               0             0
Aggregate Drops              0             0               0             0
Aggregate Forwards           718139342     75135382034     718203424     75162823512
Aggregate Exceeds-Hi         0             0               0             0
Aggregate Exceeds-Lo         0             0               0             0
NF Drops                     0             0               0             0
NF Forwards                  0             0               0             0
NF Exceeds                   0             0               0             0
TOS Changes                  0
TC Changes                   0
EXP Changes                  0
COS Changes                  250238
Tunnel Decaps                0
Tunnel Encaps                0
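To make the use case concrete, the hedged sketch below (VLAN number, names, and rates are hypothetical, not from this paper) attaches an aggregate policer to a VLAN interface whose member ports could reside on different DFC4 line cards; with distributed policing enabled globally, the configured rate applies to the VLAN's traffic as a whole rather than per line card:

Cat6500(config)#policy-map VLAN10-LIMIT
Cat6500(config-pmap)#class class-default
Cat6500(config-pmap-c)#police cir 200000000 bc 6250000 conform-action transmit exceed-action drop
Cat6500(config-pmap-c)#exit
Cat6500(config-pmap)#exit
! Enable distributed policing globally, then attach the policy to the SVI
Cat6500(config)#platform qos police distributed
Cat6500(config)#interface Vlan10
Cat6500(config-if)#service-policy input VLAN10-LIMIT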

12.2. Microflow Policer
Microflow policing is predominantly used to perform traffic control and accounting. Like aggregate policing, all flows arriving on ports associated with the policer are policed down to the stated rate. PFC4 supports double the number of microflow policers compared to the number supported on PFC3, and has the additional capability to perform egress microflow policing. The important capability differences between PFC3 and PFC4 can be found in Table 7.
Table 7. Microflow Policer Capability Differences Between PFC3 and PFC4

Feature | PFC4 | PFC3
Number of microflow policers | 1 M (input + output) | 128 K/256 K (non-XL and XL-based PFC3)
Number of microflow policer configurations | 128 | 64
Egress microflow policing | Yes | No
Shared NetFlow and microflow policing | Yes | No
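As a hedged sketch of how a microflow (per-flow) policer might be attached on the Supervisor 2T, based on the long-standing Catalyst 6500 police flow syntax (the mask keyword, rate, burst, ACL, and names below are placeholders; confirm the exact syntax against the release documentation):

SUP2T(config)#class-map match-all USER-TRAFFIC
SUP2T(config-cmap)#match access-group name USER-ACL
SUP2T(config-cmap)#exit
SUP2T(config)#policy-map PER-FLOW-LIMIT
SUP2T(config-pmap)#class USER-TRAFFIC
! Police each source-based flow individually rather than the class aggregate
SUP2T(config-pmap-c)#police flow mask src-only 1000000 32000 conform-action transmit exceed-action drop
SUP2T(config-pmap-c)#exit
SUP2T(config-pmap)#exit
SUP2T(config)#interface Vlan20
SUP2T(config-if)#service-policy input PER-FLOW-LIMIT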

In addition to supporting a greater number of microflow policers, the PFC4 improves the policer configuration accuracy down to 0.1 percent for microflow and distributed policing. (Previously, in PFC3, it was 3 to 5 percent.) This accuracy is maintained even at low policing rates.

12.2.1. Packets and Bytes-Based Policing
PFC4-based line cards, including the Supervisor 2T, can now be configured for either packet-based or byte-based policing, unlike PFC3-based cards that support byte-based policing only.

Packet-Based Policer Configuration:
6513E.SUP2T.SA.1(config-pmap-c)#police rate <7-10,000,000,000> pps burst <1-2000000> packets peak-rate <7-10,000,000,000> pps peak-burst <1-2000000> packets conform-action ...


Byte-Based Policer Configuration:
6513E.SUP2T.SA.1(config-pmap-c)#police rate <7-10,000,000,000> bps burst <1-512000000> bytes peak-rate <7-10,000,000,000> bps peak-burst <1-512000000> bytes conform-action ...
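For example, a hypothetical policy class could be policed to a packet rate rather than a bit rate (the policy name, rate, and burst values below are illustrative only, not from this paper):

6513E.SUP2T.SA.1(config)#policy-map PPS-LIMIT
6513E.SUP2T.SA.1(config-pmap)#class class-default
! Limit the class to 5000 packets per second with a 100-packet burst
6513E.SUP2T.SA.1(config-pmap-c)#police rate 5000 pps burst 100 packets conform-action transmit exceed-action drop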

13. IP Tunnel QoS


13.1. Ability to Mark Inner Header with PFC4
The need to distinguish inner from outer headers in an IP tunnel packet for classification and marking poses challenges for QoS. Equally challenging is the additional requirement to recirculate in order to forward tunnel packets. The first pass of the address lookup identifies the tunnel to which the packet is destined or from which it is arriving. The second pass lookup identifies the actual egress interface for the encapsulated (tunneled) packet. With PFC3, it is not possible to mark the inner header of a tunnel packet upon encapsulation. PFC4 removes this limitation and adds the capability to mark both the outer and inner headers, or to mark just the outer header. As represented in Figures 13 and 14, the PFC4 system supports the following two operational modes for tunneled traffic:

- Diff-serv uniform mode
- Diff-serv pipe mode

Although PFC3 supported these operational modes, support for these modes with tunnel interfaces is available only with the PFC4. Additionally, PFC4 offers better control for tunnel interface trust, as the bits after recirculation are no longer derived from the port ASIC.
Figure 13. Diff Serv Uniform Mode

Figure 14. Diff Serv Pipe Mode

Note that in the absence of an ingress QoS policy, the default mode in PFC4 is uniform mode, whereas the default mode in PFC3 is pipe mode.

13.1.1. Use Case for IP Tunnel QoS
Customers who want to control the marking of packets on an IP tunnel interface for uniform or pipe tunnel modes can use this new PFC4 capability.


13.1.2. Configuration
It is important to note that QoS policies and configurations are similar between PFC3 and PFC4, except that PFC4 allows a new action to be defined under the policy class, where marking can be applied to both the outer and inner headers or to the outer header only.

Marking Based on Inner Header:
SUP2T(config)#policy-map [Name]
SUP2T(config-pmap)#class class-default
SUP2T(config-pmap-c)#set dscp [Value]

Marking Based on Outer Header:
SUP2T(config)#policy-map [Name]
SUP2T(config-pmap)#class class-default
SUP2T(config-pmap-c)#set dscp tunnel [Value]

Example of a Tunnel Interface Attached with a Service Policy:

Sup2T#show run interface g6/1
Building configuration...

Current configuration : 117 bytes
!
interface GigabitEthernet6/1
 ip address 3.0.0.2 255.0.0.0
 service-policy input interface-ingress-policy
 service-policy output interface-egress-policy
end

Sup2T#show run interface Tunnel0
Building configuration...

Current configuration : 168 bytes
!
interface Tunnel0
 ip address 5.0.0.2 255.0.0.0
 tunnel source GigabitEthernet6/1
 tunnel destination 4.0.0.2
 service-policy input tunnel-ingress-policy
 service-policy output tunnel-egress-policy
end


13.2. MPLS Over GRE Tunnels
In order to enable hardware switching of MPLSoGRE packets, the ingress line card must remove the IP GRE encapsulation before forwarding them to the PFC, and the egress line card must add the IP GRE encapsulation to the egress tag packets. PFC3 did not support MPLS over GRE natively, so Supervisor 720 (PFC3) systems provided support using WAN line cards, which performed the actual GRE encapsulation and decapsulation operations. The PFC4 eliminates the need for WAN modules by supporting MPLSoGRE natively in hardware. PFC4 handles a single MPLS label push/swap followed by GRE encapsulation on an IPv4 tunnel in one single pass. After GRE encapsulation, the packet is recirculated for the Layer 2 MAC rewrite. For GRE decapsulation, the GRE header is removed in the first pass and the packet must be recirculated for further processing. MPLS over GRE tunnels operate in pipe mode by default. If there is an explicit uniform mode policy, marking is done for both the outer tunnel header DSCP and the inner MPLS EXP bits.

14. MPLS QoS


Previous-generation PFCs support comprehensive MPLS features, along with QoS for MPLS packets. PFC3-based systems support MPLS pipe mode, as well as the ability to perform EXP marking.

For an IP to MPLS packet, IP DSCP can be mapped into the outgoing EXP value, with an option to override the EXP value

For MPLS packets, the packet EXP can be mapped into an internal DSCP, so that the regular QoS marking and policing logic can be applied

For MPLS-to-IP, there is an option to propagate CoS value from EXP into IP DSCP in the underlying IP packet

Note that the above can be performed only at the egress PE side. PFC4 overcomes this limitation with a new capability.

14.1. Ability to Distinguish IP-to-IP from IP-to-Tag Traffic
Unlike PFC3, PFC4 supports the capability to distinguish IP-to-IP traffic from IP-to-MPLS traffic at ingress. As a result, it can perform MPLS EXP marking for IP-to-MPLS traffic on both an ingress and an egress PE. Additionally, this capability helps avoid the need to apply an ingress pipe policy for an IP-to-IP packet. Although PFC3 supports MPLS pipe mode and the ability to do EXP marking, the lack of this new capability means that it can only be used at the ingress PE side for tunnel interfaces.

14.1.1. Use Case for MPLS QoS
Consider scenarios where the QoS implemented by service providers in an MPLS cloud needs to be different from the QoS implemented by a customer's IP policy. PFC4 MPLS QoS capabilities can be utilized for cases where IP packets need to be tunneled through an MPLS network without losing the DSCP.

14.2. Improved Performance
In the PFC3, a packet gets recirculated if an IP policy is configured on an egress interface. PFC4 does not have this limitation, and is capable of delivering improved performance.
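As a hedged illustration of the EXP marking described in this MPLS QoS section (class, policy, interface, and value choices are hypothetical, not from this paper), an ingress policy on a PE could mark the imposed MPLS EXP for IP-to-MPLS traffic while leaving the customer DSCP untouched:

SUP2T(config)#class-map match-all CUSTOMER-AF31
SUP2T(config-cmap)#match dscp af31
SUP2T(config-cmap)#exit
SUP2T(config)#policy-map PE-INGRESS
SUP2T(config-pmap)#class CUSTOMER-AF31
! Set the EXP value written into the imposed MPLS label
SUP2T(config-pmap-c)#set mpls experimental imposition 3
SUP2T(config-pmap-c)#exit
SUP2T(config-pmap)#exit
SUP2T(config)#interface GigabitEthernet3/1
SUP2T(config-if)#service-policy input PE-INGRESS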

15. Multicast QoS


For ingress QoS, multicast behavior in PFC4 is similar to that in PFC3. While PFC4 has the hardware capability to perform egress policing in egress replication mode, there are several limitations:

For egress QoS, EARL8 has restrictions for multicast packets, as egress policing and marking of bridged multicast packets is not supported

Egress policing is not supported with egress replication enabled



16. Appendix 1
A summary of commands with changes necessary to migrate from PFC3 to PFC4 is shown in Tables 8, 9, and 10. The status column is indicated by letters that refer to:
R: Retained and converted to a new CLI
I: Ignored
P: To be phased out, converted to a temporary platform CLI
Table 8. Summary of Command Migration for Cat6500 Specific Global CLI
PFC3 Command | Status | PFC4 Migration NVGEN and Action | Comments on PFC4 Behavior
mls qos | P | auto qos default (used in mls qos trust migration actions as indicator of existing PFC3 configuration) | Auto-configures default queueing on ports without a queueing policy. No direct effect on Earl policing/marking. Migration actions are triggered if mls qos is seen on bootup or on configuration copy.
no mls qos | I | Ignored | No effect in PFC4 platform.
mls qos queueing-only | R | platform qos queueing-only | Queueing behavior identical to auto qos default. Policing/marking and DSCP/CoS rewrite is disabled.
no mls qos rewrite ip dscp | P | no platform qos rewrite ip dscp | Same behavior as in PFC3. Not necessary: PFC4 rewrite control is per-class.
mls qos marking ignore port-trust | I | Used as indicator that PFC3 port trust commands can be ignored | Port trust ignored in PFC4 marking, by default.
mls qos marking statistics | R | platform qos marking statistics | Same behavior as in PFC3.
mls qos police serial | I | Ignored | Serial mode is always on in PFC4.
mls qos police redirected | I | Ignored | Ingress policing of redirected packets is always enabled. Egress policing of packets to RP is always disabled, except for CPP. Egress policing of packets redirected to service cards is controlled by a policy on the respective egress VLAN.
mls qos map | R | table-map | Same behavior as in PFC3. Certain dscp map names are auto-converted to discard-class map names.
mls qos aggregate-policer | R | platform qos aggregate-policer | Same behavior as in PFC3.
mls qos protocol | R | platform qos protocol | Same behavior as in PFC3.
mls qos statistics-export | R | platform qos statistics-export | Same behavior as in PFC3.
mac packet-classify use vlan | I | N/A | VLAN field is always enabled in PFC4 MAC ACLs.
mls qos gre input uniform-mode (ST2) | I | N/A | Uniform mode is default and is controlled by C3PL per ingress policy class.
mls mpls input uniform-mode (ST2) | I | N/A | Uniform mode is default and is controlled by C3PL per ingress policy class.

Table 9. Summary of Command Migration for Cat6k Specific Interface CLI

PFC3 Command | Status | PFC4 Migration NVGEN and Action | Comments on PFC4 Behavior
mls qos trust cos | R | platform qos trust cos | Initial ingress discard-class mapped from CoS. Does not rewrite packet DSCP. PFC4 differences: in port-based mode, port trust cos is ignored if there is a port (ingress) policy; in VLAN-based mode, port trust cos is ignored in both default and VLAN policy cases.
mls qos trust dscp | | N/A | Default behavior in PFC4 and C3PL.
mls qos trust precedence | I | N/A | Handling is the same as mls qos trust dscp. Low 3 bits of incoming DSCP are not zeroed.
no mls qos trust | R | platform qos trust none remark | Converted to trust none if mls qos present, otherwise ignored. Ignored in VLAN-based mode.
mls qos trust extend | R | platform qos trust extend | Same behavior as in PFC3.
mls qos trust device | R | platform qos trust device | Same behavior as in PFC3.
mls qos mpls trust experimental | R | platform qos mpls trust experimental | Same behavior as in PFC3.
mls qos vlan-based | R | platform qos vlan-based | Same behavior as in PFC3.
mls qos dscp-mutation | R | platform qos dscp-mutation | Same behavior as in PFC3.
mls qos exp-mutation | R | platform qos exp-mutation | Same behavior as in PFC3.
mls qos statistics-export | R | platform qos statistics-export | Same behavior as in PFC3.
mls qos bridged | I | N/A | Auto-enabled/disabled internally per NetFlow profile, depending on the presence of microflow policing.
mac packet-classify | R | mac packet-classify input | Equivalent to mac packet-classify input.
mls qos loopback | R | platform qos loopback | Same behavior as in PFC3.
mls qos queueing mode | R | platform qos queueing mode | Same behavior as in PFC3. Not supported in C3 LCs.
mls qos cos | R | platform qos cos | Same behavior as in PFC3.
wrr-queue | R | wrr-queue | Same behavior as in PFC3. Not supported in C3 LCs.
rcv-queue | R | rcv-queue | Same behavior as in PFC3. Not supported in C3 LCs.
pri-queue | R | pri-queue | Same behavior as in PFC3. Not supported in C3 LCs.
mls qos channel-consistency | R | platform qos channel consistency | Enabled by auto qos default. Future: member ports must have the same ingress and egress queueing policy.

Table 10. Summary of Migration for Cat6K Specific Policy Map Commands

PFC3 Command | Status | PFC4 Migration NVGEN and Action | Comments on PFC4 Behavior
trust dscp | R | trust dscp | Default behavior in PFC4 and C3PL.
trust precedence | R | trust precedence | Handling is the same as trust dscp. Low 3 bits of incoming DSCP are not zeroed.
trust cos | R | trust cos | PFC4 differences: the incoming 802.1q CoS is always used unless port CoS override is configured. Warning to user to use set dscp cos or set-dscp-cos-transmit in police conform-action.
no trust | I | N/A | Packet QoS is preserved by default.
police {exceed | violate} policed-dscp | R | police {exceed | violate} policed-dscp | Retained as part of C3PL syntax.
police flow | R | police flow | Retained as part of C3PL syntax.
police aggregate | R | police aggregate | Retained as part of C3PL syntax.


Printed in USA

C11-652042-00

07/11

2011 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.
