Cisco AVVID Infra IP MCast
IP Multicast Design
Solutions Reference Network Design
March, 2003
Corporate Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 526-4100
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT
SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE
OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB’s public
domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED “AS IS” WITH
ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT
LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF
DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO
OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
CCIP, the Cisco Powered Network mark, the Cisco Systems Verified logo, Cisco Unity, Follow Me Browsing, FormShare, Internet Quotient, iQ Breakthrough, iQ Expertise,
iQ FastTrack, the iQ Logo, iQ Net Readiness Scorecard, Networking Academy, ScriptShare, SMARTnet, TransPath, and Voice LAN are trademarks of Cisco Systems, Inc.;
Changing the Way We Work, Live, Play, and Learn, Discover All That’s Possible, The Fastest Way to Increase Your Internet Quotient, and iQuick Study are service marks
of Cisco Systems, Inc.; and Aironet, ASIST, BPX, Catalyst, CCDA, CCDP, CCIE, CCNA, CCNP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, the Cisco
IOS logo, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Empowering the Internet Generation, Enterprise/Solver, EtherChannel, EtherSwitch,
Fast Step, GigaStack, IOS, IP/TV, LightStream, MGX, MICA, the Networkers logo, Network Registrar, Packet, PIX, Post-Routing, Pre-Routing, RateMUX, Registrar,
SlideCast, StrataView Plus, Stratm, SwitchProbe, TeleRouter, and VCO are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and certain other
countries.
Intended Audience
This document is intended for use by the Enterprise Systems Engineer (SE) or customer who may be
unfamiliar with the deployment choices available to an AVVID Enterprise customer for IP multicast.
Document Organization
This document contains the following chapters:
Note This document contains product and configuration information that is complete at the publish date.
Subsequent product introductions may modify recommendations made in this document.
Document Conventions
This guide uses the following conventions to convey instructions and information:
Convention            Description
boldface font         Commands and keywords.
italic font           Variables for which you supply values.
[ ]                   Keywords or arguments that appear within square brackets are optional.
{x | y | z}           A choice of required keywords appears in braces separated by vertical bars. You must select one.
screen font           Examples of information displayed on the screen.
boldface screen font  Examples of information you must enter.
< >                   Nonprinting characters, for example passwords, appear in angle brackets.
[ ]                   Default responses to system prompts appear in square brackets.
Note Means reader take note. Notes contain helpful suggestions or references to material not
covered in the manual.
Timesaver Means the described action saves time. You can save time by performing the action
described in the paragraph.
Tips Means the following information will help you solve a problem. The tips information might
not be troubleshooting or even an action, but could be useful information, similar to a
Timesaver.
Caution Means reader be careful. In this situation, you might do something that could result in
equipment damage or loss of data.
Obtaining Documentation
The following sections explain how to obtain documentation from Cisco Systems.
Documentation CD-ROM
Cisco documentation and additional literature are available in a Cisco Documentation CD-ROM
package, which is shipped with your product. The Documentation CD-ROM is updated monthly and may
be more current than printed documentation. The CD-ROM package is available as a single unit or
through an annual subscription.
Ordering Documentation
Cisco documentation is available in the following ways:
• Registered Cisco.com users (Cisco direct customers) can order Cisco product documentation from
the Networking Products MarketPlace:
http://www.cisco.com/cgi-bin/order/order_root.pl
• Registered Cisco.com users can order the Documentation CD-ROM through the online Subscription
Store:
http://www.cisco.com/go/subscription
• Nonregistered Cisco.com users can order documentation through a local account representative by
calling Cisco corporate headquarters (California, USA) at 408 526-7208 or, elsewhere in North
America, by calling 800 553-NETS (6387).
Documentation Feedback
If you are reading Cisco product documentation on Cisco.com, you can submit technical comments
electronically. Click the Fax or Email option under “Leave Feedback” at the bottom of the Cisco
Documentation home page.
You can e-mail your comments to [email protected].
To submit your comments by mail, use the response card behind the front cover of your document, or
write to the following address:
Cisco Systems
Attn: Document Resource Connection
170 West Tasman Drive
San Jose, CA 95134-9883
We appreciate your comments.
Cisco.com
Cisco.com is the foundation of a suite of interactive, networked services that provides immediate, open
access to Cisco information, networking solutions, services, programs, and resources at any time, from
anywhere in the world.
Cisco.com is a highly integrated Internet application and a powerful, easy-to-use tool that provides a
broad range of features and services to help you to
• Streamline business processes and improve productivity
• Resolve technical issues with online support
• Download and test software packages
• Order Cisco learning materials and merchandise
• Register for online skill assessment, training, and certification programs
You can self-register on Cisco.com to obtain customized information and service. To access Cisco.com,
go to the following URL:
http://www.cisco.com
IP multicast allows for a streamlined approach to data delivery whenever multiple hosts need to receive
the same data at the same time. For example:
• When configured for IP multicast services, Music-on-Hold (MoH) can stream the same audio file to
multiple IP phones without the overhead of duplicating that stream one time for each phone on hold.
• IP/TV allows for the streaming of audio, video, and slides to thousands of receivers simultaneously
across the network. High-rate IP/TV streams that would normally congest a low-speed WAN link
can be filtered to remain on the local campus network.
[Figure: Unicast versus multicast delivery — with unicast, the source sends three copies of the same
stream, one per receiver; with multicast, the source sends one copy and the router replicates it
toward the receivers.]
IP multicast traffic is UDP based and, as such, has some less-than-desirable characteristics. For example,
it does not detect or retransmit lost packets and, lacking a windowing mechanism, it does not react to
congestion. To compensate, applications and network devices can be configured to classify, queue, and
provision multicast traffic using QoS. Properly applied, QoS minimizes dropped packets, delay, and delay
variation for multicast streams, largely mitigating these limitations of IP multicast.
Multicast MoH natively marks its audio streams with a DiffServ Code Point of Expedited Forwarding
(DSCP EF). These classification values can be used to identify MoH traffic for preferential treatment
when QoS policies are applied. Cisco recommends classifying multicast streaming video as DSCP CS4.
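As a hedged sketch of how these markings might be matched in a QoS policy, the following classifies on the DSCP values named above. The class-map and policy-map names and the bandwidth figures are illustrative, and the policy assumes traffic arrives already marked by the application:

```
! Illustrative class and policy names; queue sizes are placeholders.
class-map match-all MOH-AUDIO
 match ip dscp ef
class-map match-all MCAST-VIDEO
 match ip dscp cs4
!
policy-map WAN-EDGE
 class MOH-AUDIO
  priority 128
 class MCAST-VIDEO
  bandwidth 512
```

The policy would then be applied to a WAN interface with the service-policy output command.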
Multicast Addressing
IP multicast uses the Class D range of IP addresses (224.0.0.0 through 239.255.255.255). Within the IP
multicast Class D address range, there are a number of addresses reserved by the Internet Assigned
Numbers Authority (IANA). These addresses are reserved for well-known multicast protocols and
applications, such as routing protocol hellos.
For multicast addressing, there are generally two types of addresses as follows:
• Well known addresses designated by IANA
– Packets using the following Reserved Link Local Addresses (also called the Local Network
Control Block [224.0.0.0 - 224.0.0.255]) are sent throughout the local subnet only and are
transmitted with TTL=1.
Note The addresses listed below are just a few of the many addresses in the Link Local
Address space.
Note The addresses listed below are just a few of the many addresses in the Internetwork
Control Block.
224.0.1.39—Cisco-RP-Announce (Auto-RP)
224.0.1.40—Cisco-RP-Discovery (Auto-RP)
• Administratively scoped addresses (239.0.0.0 - 239.255.255.255). For more information, see
RFC 2365.
Administratively-scoped addresses should be constrained to a local group or organization. They are used
in a private address space and are not used for global Internet traffic. “Scoping” can be implemented to
restrict groups with a given address or range from being forwarded to certain areas of the network.
Organization-local and site-local scopes are defined scopes that fall into the administratively scoped
address range.
• Organization-local scope (239.192.0.0 - 239.251.255.255)—Regional or global applications that are
used within a private enterprise network.
• Site-local scope (239.255.0.0 - 239.255.255.255)—Local applications that are isolated within a
site/region and blocked on defined boundaries.
Scoping group addresses to applications allows for easy identification and control of each application.
The addressing used in this chapter reflects the organization-local scope and site-local scope ranges
found in the administratively scoped address range.
For illustration purposes, the examples in this chapter implement IP/TV and MoH in an IP multicast
environment. Table 1-1 lists the example address ranges used in these examples.
Table 1-1 Design Guide IP Multicast Address Assignment for Multicast Music-on-Hold and IP/TV

Application                 Multicast Groups   Address Range                    Scope               Notes
IP/TV High-Rate Traffic     239.255.0.0/16     239.255.0.0 - 239.255.255.255    Site-local          Restricted to local campus
IP/TV Medium-Rate Traffic   239.192.248.0/22   239.192.248.0 - 239.192.251.255  Organization-local  Restricted to 768k+ sites
IP/TV Low-Rate Traffic      239.192.244.0/22   239.192.244.0 - 239.192.247.255  Organization-local  Restricted to 256k+ sites
Multicast Music-on-Hold     239.192.240.0/22   239.192.240.0 - 239.192.243.255  Organization-local  No restrictions
The IP/TV streams have been separated based on the bandwidth consumption of each stream. IP/TV
High-Rate traffic falls into the site-local scope (239.255.0.0/16) and is restricted to the local campus
network. IP/TV Medium-Rate traffic falls into one range of the organization-local scope
(239.192.248.0/22) and is restricted to sites with bandwidth of 768 Kbps or greater. IP/TV Low-Rate
traffic falls into another range of the organization-local scope (239.192.244.0/22) and is restricted to
sites with bandwidth of 256 Kbps or greater. Finally, multicast MoH traffic falls into yet another range
of the organization-local scope (239.192.240.0/22) and has no restrictions.
This type of scoping allows multicast applications to be controlled through traffic engineering methods
discussed later in this chapter.
Note The /22 networks were subnetted from the 239.192.240.0/20 range, yielding four /22 ranges. The
remaining range, 239.192.252.0/22, can be used for additional applications not defined in this document.
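As a hedged illustration of how this scoping can be enforced, a site's WAN-edge router could apply a multicast boundary that keeps the site-local range on the local campus while permitting the organization-local ranges. The interface and ACL number are illustrative:

```
! Deny site-local (239.255.0.0/16) groups at the WAN edge; permit the rest.
access-list 20 deny   239.255.0.0 0.0.255.255
access-list 20 permit 239.0.0.0 0.255.255.255
!
interface Serial0/0
 ip multicast boundary 20
```

With this boundary, IP/TV High-Rate traffic in 239.255.0.0/16 cannot cross the serial link in either direction.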
Multicast Forwarding
IP multicast delivers source traffic to multiple receivers using as few network resources as possible,
without placing additional burden on the source or the receivers. Multicast packets are
replicated in the network by Cisco routers and switches enabled with Protocol Independent Multicast
(PIM) and other supporting multicast protocols.
Multicast capable routers create “distribution trees” that control the path that IP Multicast traffic takes
through the network in order to deliver traffic to all receivers. PIM uses any unicast routing protocol to
build data distribution trees for multicast traffic. The two basic types of multicast distribution trees are
source trees and shared trees.
• Source trees—The simplest form of a multicast distribution tree is a source tree with its root at the
source and branches forming a tree through the network to the receivers. Because this tree uses the
shortest path through the network, it is also referred to as a shortest path tree (SPT).
• Shared trees—Unlike source trees that have their root at the source, shared trees use a single
common root placed at some chosen point in the network. This shared root is called a Rendezvous
Point (RP).
PIM uses the concept of a designated router (DR). The DR is responsible for sending Internet Group
Management Protocol (IGMP) Host-Query messages, PIM Register messages on behalf of sender hosts,
and Join messages on behalf of member hosts.
Note In a many-to-many deployment (many sources to many receivers), Bidir PIM is the recommended
forwarding mode. Bidir PIM is outside the scope of this document. For more information on Bidir PIM,
see the IP Multicast Technology Overview white paper located at:
http://www.cisco.com/en/US/tech/tk648/tk363/technologies_white_paper09186a00800d6b5e.shtml
Resource Requirements
The memory impact on the router occurs when the router has to carry (*,G) state, which is the indication
that a receiver has signaled an IGMP join, and (S,G), which is the indication that the Source is sending
to the Group. The RP and any other router between the RP and the source are required to carry both state
entries.
Note The default behavior of PIM-SM is to perform a SPT-switchover. By default, all routers will carry both
states. The spt-threshold infinity command, described in Chapter 2, “IP Multicast in a Campus
Network”, can be used to control the state.
When deciding which routers should be used as RPs, use the following to determine the memory impact
on the router:
• Each (*,G) entry requires 380 bytes + outgoing interface list (OIL) overhead.
• Each (S,G) entry requires 220 bytes + outgoing interface list overhead.
• The outgoing interface list overhead is 150 bytes per OIL entry.
For example, if there are 10 groups with 6 sources per group and 3 outgoing interfaces:
# of (*,G)s x (380 + (# of OIL entries x 150)) = 10 x (380 + (3 x 150)) = 8,300 bytes for (*,G)
# of (S,G)s x (220 + (# of OIL entries x 150)) = 60 x (220 + (3 x 150)) = 40,200 bytes for (S,G)
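These formulas can be checked with a short script. This is only a sketch of the arithmetic above; the per-entry byte counts are the ones quoted in this section:

```python
# Per-entry byte costs quoted in this section.
STAR_G_BYTES = 380   # base cost of one (*,G) entry
S_G_BYTES = 220      # base cost of one (S,G) entry
OIL_BYTES = 150      # cost per outgoing interface list (OIL) entry

def mroute_memory(groups, sources_per_group, oil_entries):
    """Return ((*,G) bytes, (S,G) bytes) for the given state counts."""
    star_g = groups * (STAR_G_BYTES + oil_entries * OIL_BYTES)
    s_g = groups * sources_per_group * (S_G_BYTES + oil_entries * OIL_BYTES)
    return star_g, s_g

# The example above: 10 groups, 6 sources per group, 3 outgoing interfaces.
star_g, s_g = mroute_memory(10, 6, 3)
print(star_g, s_g)  # 8300 40200
```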
RP Deployment
There are several methods for deploying RPs.
• RPs can be deployed using a single, static RP. This method does not provide redundancy or
load-balancing and is not recommended.
• Auto-RP is used to distribute group-to-RP mapping information and can be used alone or with
Anycast RP. Auto-RP alone provides failover, but does not provide the fastest failover nor does it
provide load-balancing.
• Anycast RP is used to define redundant and load-balanced RPs and can be used with static RP
definitions or with Auto-RP. Anycast RP is the optimal choice because it provides fast failover
and load-balancing of the RPs.
Note In this document, the examples illustrate the most simplistic approach to Anycast RP by using
locally-defined RP mappings.
Anycast RP
Anycast RP is the preferred deployment model as opposed to a single static RP deployment. It provides
fast failover of IP multicast (within milliseconds, or in some cases seconds, of IP unicast routing
convergence) and allows for load-balancing.
In the PIM-SM model, multicast sources must be registered with their local RP. The router closest to a
source performs the actual registration. Anycast RP provides load sharing and redundancy across RPs in
PIM-SM networks. It allows two or more RPs to share the load for source registration and to act as hot
backup routers for each other (multicast only). Multicast Source Discovery Protocol (MSDP) is the key
protocol that makes Anycast RP possible. MSDP allows RPs to share information about active sources.
With Anycast RP, the RPs are configured to establish MSDP peering sessions using a TCP connection.
When the RP learns about a new multicast source (through the normal PIM registration mechanism), the
RP encapsulates the first data packet in a Source-Active (SA) message and sends the SA to all MSDP
peers.
Two or more RPs are configured with the same IP address on loopback interfaces. The Anycast RP
loopback address should be configured with a 32-bit mask, making it a host address. All the downstream
routers are configured to “know” that the Anycast RP loopback address is the IP address of their RP. The
non-RP routers will use the RP (host route) that is favored by the IP unicast route table. When an RP
fails, IP routing converges and the other RP assumes the RP role for sources and receivers that were
previously registered with the failed RP. New sources register and new receivers join with the
remaining RP.
Auto-RP
Auto-RP automates the distribution of group-to-RP mappings in a network supporting PIM-SM.
Auto-RP supports the use of multiple RPs within a network to serve different group ranges and allows
configuration of redundant RPs for reliability purposes. With Auto-RP, only one RP is active for a
given group range at any time. Auto-RP can be used as the distribution mechanism to advertise the
Anycast RP addresses previously discussed. The automatic distribution of group-to-RP mappings
simplifies configuration and guarantees consistency.
The Auto-RP mechanism operates using two basic components, the candidate RPs and the RP mapping
agents.
• Candidate RPs advertise their willingness to be an RP via “RP-announcement” messages. These
messages are periodically sent to a reserved well-known group 224.0.1.39
(CISCO-RP-ANNOUNCE).
• RP mapping agents join group 224.0.1.39 and map the RPs to the associated groups. The RP
mapping agents advertise the authoritative RP-mappings to another well-known group address
224.0.1.40 (CISCO-RP-DISCOVERY). All PIM routers join 224.0.1.40 and store the RP-mappings
in their private cache.
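The two components above can be sketched in configuration as follows. This is a hedged example: the interface names, ACL number, group range, and TTL scope are illustrative, and in practice the candidate RP and mapping agent are often (but need not be) the same router:

```
! Candidate RP: announce willingness to be RP for 239.0.0.0/8
! via 224.0.1.39 (CISCO-RP-ANNOUNCE).
access-list 10 permit 239.0.0.0 0.255.255.255
ip pim send-rp-announce Loopback1 scope 16 group-list 10
!
! Mapping agent: join 224.0.1.39 and advertise authoritative
! RP-mappings to 224.0.1.40 (CISCO-RP-DISCOVERY).
ip pim send-rp-discovery Loopback0 scope 16
```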
This chapter discusses the basic layout needed to use IP multicast in a campus network and includes the
following sections:
• Multicast Campus Deployment Recommendations
• Campus Deployment
• IP Multicast Small Campus Design
• IP Multicast Medium Campus Design
• IP Multicast Large Campus Design
• Summary
Note This chapter uses MoH and IP/TV in the examples. It does not, however, provide detailed configurations
and designs for MoH and IP/TV. A basic MoH and IP/TV implementation is covered in Chapter 7,
“Multicast Music-on-Hold and IP/TV Configurations.”
Also, other types of IP multicast implementations, such as IP multicast for financial deployments, are
not covered.
To get the most out of this chapter, the reader should understand the AVVID recommendations for the
following:
• Campus design
• IP Telephony
• Content Delivery with IP/TV
• QoS
• High-Availability
• Security
• Management
Campus Deployment
This section provides information for deploying the following IP multicast elements in a campus
network:
• IGMP Snooping and CGMP
• Non-RPF Traffic
• HSRP
Optimal bandwidth management can be achieved on IGMP snooping enabled switches by enabling the
IGMP Fast-Leave processing. With Fast-Leave, upon receiving an “IGMP leave group” message, the
switch immediately removes the interface from its Layer 2 forwarding table entry for that multicast
group. Without leave processing, the multicast group will remain in the Layer 2 forwarding table until
the default IGMP timers expire and the entry is flushed.
The following example shows how to configure IGMP Fast-Leave on a Catalyst switch running
Native IOS:
switch(config)#ip igmp snooping vlan 1 immediate-leave
The following example shows how to configure IGMP Fast-Leave on a Catalyst switch running
Catalyst OS:
CatOS> (enable)set igmp fastleave enable
Use Fast-Leave processing only on VLANs where only one host is connected to each Layer 2 LAN
interface. Otherwise, some multicast traffic might be dropped inadvertently. For example, if multiple
hosts are attached to a Wireless LAN Access Point that connects to a VLAN where Leave processing is
enabled (as shown in Figure 2-1), then Fast-Leave processing should not be used.
[Figure 2-1: Fast-Leave caveat — PC A and PC B sit behind a wireless access point on a
Fast-Leave-enabled VLAN and both receive the same stream; when PC A sends an IGMP Leave,
PC B loses the stream.]
Cisco Group Management Protocol (CGMP) is a Cisco-developed protocol that allows Catalyst switches
to leverage IGMP information on Cisco routers to make Layer 2 forwarding decisions. CGMP must be
configured on the multicast routers and the Layer 2 switches. With CGMP, IP multicast traffic is
delivered only to the Catalyst switch ports that are attached to interested receivers. All ports that have
not explicitly requested the traffic will not receive it unless these ports are connected to a multicast
router. Multicast router ports must receive every IP multicast data packet.
The default behavior of CGMP is to not remove multicast entries until an event, such as a spanning tree
topology change, occurs or the router sends a CGMP leave message. The following example shows how
to enable the CGMP client (switch) to act on actual IGMP leave messages:
switch(config)#cgmp leave-processing
Note Due to a conflict with HSRP, CGMP Leave processing is disabled by default. If HSRP hellos pass
through a CGMP-enabled switch, refer to CSCdr59007 before enabling CGMP leave-processing.
Table 2-1 lists the support for CGMP and IGMP snooping in Cisco switches.
Non-RPF Traffic
A router drops any multicast traffic received on a non-reverse path forwarding (non-RPF) interface. If
there are two routers for a subnet, the DR will forward the traffic to the subnet and the non-DR will
receive that traffic on its own VLAN interface. This will not be its shortest path back to the source and
so the traffic will fail the RPF check. How non-RPF traffic is handled depends on the Catalyst switch
platform and the version of software running (as shown in Figure 2-2).
[Figure 2-2: Non-RPF traffic with a CallManager MoH source — (1) multicast traffic received on a
non-reverse path forwarding (non-RPF) interface is dropped; (2) the DR forwards the traffic from the
source (VLAN 50) to the receivers on the outgoing interfaces; (3) with two interfaces (VLAN 51)
connecting the same two routers, the non-RPF traffic that hits the non-DR's CPU is amplified "N" times
the source rate.]
Versions of code for the Catalyst 6500 series Supervisor II (Catalyst OS 6.2(1) and IOS 12.1(5)EX and
later) support Multicast Fast Drop (MFD) and will rate-limit non-RPF traffic by default. The mls ip
multicast non-rpf cef command is enabled by default on the MSFC. Use the show mls ip multicast
summary command to verify that non-RPF traffic is being rate-limited in hardware.
Catalyst 3550
The Catalyst 3550 does not use a command to enable non-RPF traffic to be hardware filtered. In 3550
switches, the non-RPF packets reach the CPU through the RPF Failure Queue. This hardware queue is
separate from other queues, which are reserved for routing protocols and Spanning-Tree protocol
packets. Thus, non-RPF packets will not interfere with these critical packets. The RPF Failure Queue is
of minimal depth, so when it is full, subsequent packets are dropped by the hardware itself.
The CPU gives low priority and a short processing time to packets from the RPF Failure Queue to ensure
that priority is given to routing protocol packets. A limited number of packet buffers are available for
the RPF Failure Queue; when all of the buffers allocated to it are full, the software drops the incoming
packets. If the rate of non-RPF packets remains high enough that the software must process a large
number of packets within a certain period, the queue is disabled and then re-enabled after
50 milliseconds. This flushes the queue and gives the CPU a chance to process the existing packets.
To see the number of packets dropped, use the show controller cpu-interface | include rpf command.
HSRP
There is often some confusion about how HSRP active and standby devices route IP unicast and IP
multicast traffic. The confusion arises when an HSRP active device is successfully routing IP unicast
traffic while the HSRP standby device is forwarding the IP multicast traffic. IP multicast forwarding
is not influenced by which device is HSRP active or standby.
The path that multicast traffic takes is based on the shortest path to the source or, in a shared-tree model,
the shortest path to the RP. The shortest path to the source or RP is based on the route entries in the
unicast routing table. In most Campus designs, the links between layers are equal cost paths. Therefore,
the multicast traffic will follow the path through the DR. The DR is determined by which PIM router has
the highest IP address on the shared subnet and also which has an RPF interface toward the source.
If multiple paths exist and they are not equal, it is possible for the DR to decide that the shortest path to
the source or RP is actually back out the same VLAN that the host is on and through the non-DR router.
Figure 2-3 illustrates the possible issue with HSRP and IP multicast. In this example, it is assumed that
the routes are equal-cost. Switch 1 is configured with an HSRP priority of 120 and Switch 2 is configured
with a priority of 110. Therefore, HSRP will use Switch 1 as the active router. However, Switch 2 has a
higher IP address than Switch 1. Therefore, PIM will use Switch 2 as the DR. The result is that Unicast
traffic is sent through Switch 1 while Multicast traffic is sent through Switch 2.
Solution
To avoid this situation, either adjust the HSRP priority to make Switch 2 the active router or change the
IP addresses so that Switch 1 has the higher address. Either of these actions will make the HSRP active
router and the PIM DR the same.
Note In IOS Release 12.2, the ip pim dr-priority command forces a router to be the DR (regardless of IP
address). Not all platforms support the ip pim dr-priority command.
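On platforms that support it, the command can be sketched as follows. The interface and priority value are illustrative; the router with the highest dr-priority on the subnet wins the DR election regardless of IP address:

```
! Force this router to win the PIM DR election on VLAN 10,
! aligning the DR with the HSRP active router.
interface Vlan10
 ip pim sparse-mode
 ip pim dr-priority 200
```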
Figure 2-4 shows how changing the HSRP priority value on both switches causes both unicast and
multicast traffic to flow the same way.
The following example shows the configuration of Switch 2:
interface Vlan10
description To Server-Farm MoH Server
ip address 10.5.10.3 255.255.255.0
ip pim sparse-mode
standby 10 priority 120 preempt
standby 10 ip 10.5.10.1
The following example shows how to verify that the switch is the HSRP active device:
switch2#show standby brief
P indicates configured to preempt.
|
Interface Grp Prio P State Active addr Standby addr Group addr
Vl10 10 120 P Active local 10.5.10.3 10.5.10.1
The following example shows how to verify that the switch is the PIM DR for the subnet:
switch2#show ip pim interface vlan10
Trace routes should show that unicast traffic is flowing through Switch 2. The mtrace and show ip
mroute active commands should show that the multicast traffic is also flowing through Switch 2.
Note In some situations, there may be a desire to load-balance over the two links from the access layer to the
distribution layer. If this is the case, simply ensure that the HSRP Active router is not the DR for the
VLAN. For this configuration, which is counter to the one recommended above, give the HSRP Active
router the lower IP address of the two distribution switches to ensure that it is selected as the DR.
RP of Last Resort
If the active RPs are no longer available or there are no RPs configured for a specific group, the default
behavior is to dense-mode flood the multicast traffic. This is called dense-mode fallback. Typically, after
deploying the recommendations in this document, an RP will always be available. However, in the event
that something happens to all of the RPs or routing instability diverts access from the RPs, it is necessary
to ensure that dense-mode fallback does not occur.
To prevent dense mode flooding, on every PIM-enabled router configure an access control list (ACL)
and use the ip pim rp-address address group-list command to specify an “RP of last resort.” This way,
if all else fails, the RP defined in this command will be used. It is recommended that a local loopback
interface be configured on each router and that the address of this loopback interface be specified as the
IP address of the RP of last resort. By configuring an RP of last resort, the local router will be aware of
an RP and will not fallback to dense mode.
Note Do not advertise this loopback interface in the unicast routing table.
interface Loopback2
description Garbage-CAN RP
ip address 2.2.2.2 255.255.255.255
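Continuing this example, the ACL and rp-address configuration can be sketched as follows. The ACL number is illustrative; because Auto-RP-learned mappings take precedence over a statically defined RP, the loopback address is used only when no other RP is known:

```
! Match the entire multicast range so the local loopback serves
! as RP of last resort for every group.
access-list 11 permit 224.0.0.0 15.255.255.255
!
ip pim rp-address 2.2.2.2 11
```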
This not only helps with dense-mode fallback (by ensuring that an RP is always present on the local
router if the main RPs become unavailable), but this also helps to guard against rogue sources that may
stream unwanted multicast traffic. This blocking of unwanted multicast sources is sometimes referred to
as a “Garbage-can RP.” For more information about using Garbage-can RPs, see Chapter 8, “Security,
Timers, and Traffic Engineering in IP Multicast Networks.”
[Figure: Small campus design — access layer: 3550-left-access and 3524-right-access; server farm:
3550-left-SF and 3550-right-SF; Data Center on VLANs 10 and 11 with a CallManager MoH server and
an IP/TV server.]
In this design:
• Both IGMP snooping and CGMP are used. There is a Catalyst 3550 (IGMP snooping) and a Catalyst
3524-PWR (CGMP) in the access layer.
• There are two Catalyst 4006 switches with Supervisor III in the collapsed core/distribution layer.
• The access-layer switches rely on the backbone switches as the RPs for PIM-SM.
Based on the size of the network and the number of sources and receivers that will be active on the
network, there is no need for an advanced multicast deployment model. PIM-SM is used on the two
backbone switches to provide multicast forwarding between the Layer 3 links/VLANs in this design.
Auto-RP is used to distribute group-to-RP mapping information. Additionally, simple security features
are implemented to secure the network from unauthorized sources and RPs.
The following example shows the configuration of “4kS3-right-BB.” This switch is configured for
Auto-RP and is the HSRP active device.
The following example shows how to verify that an access-layer switch has found its attached
multicast routers. On 3524-right-access, display the CGMP status:
3524-right-access#show cgmp
CGMP is running.
CGMP Fast Leave is not running.
CGMP Allow reserved address to join GDA.
Default router timeout is 300 sec.
Once Auto-RP multicast traffic is flowing to the switches, check to see that multicast group entries show
up in their CAM tables.
Because the access-layer switches are Layer 2 switches, they will display Layer 2 Multicast address
information instead of Layer 3 IP multicast addresses. The two key group addresses listed are:
• 0100.5e00.0127—RP announcement address of 224.0.1.39
• 0100.5e00.0128—Mapping agent discovery address of 224.0.1.40
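To display the Layer 2 group entries, use the multicast CAM/MAC-table show commands; the exact syntax differs slightly by platform and software release, so the two forms below are a sketch.

3550-left-access#show mac address-table multicast
3524-right-access#show mac-address-table multicast

Once Auto-RP traffic is flowing, the 0100.5e00.0127 and 0100.5e00.0128 entries listed above should appear in the output.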
Note For recommendations and configurations for securing the network from unauthorized sources and RPs,
see Chapter 8, “Security, Timers, and Traffic Engineering in IP Multicast Networks.”
Note Unlike the small campus design, which used collapsed distribution and core layer, the medium campus
network has a distinct distribution and core layer. The IP addressing, VLAN numbering, and HSRP
numbering conventions are the same as those used in the small campus design.
[Figure: Medium campus design — Building 1 access layer: 3524-b1-left-access and 3550-b1-right-access (data VLANs 2 and 3, voice VLANs 12 and 13); Building 2 access layer: 3550-b2-left-access and 4k-b2-right-access (data VLANs 4 and 5, voice VLANs 14 and 15); distribution layer: 4kS3-bldg1-dist left/right and 4kS3-bldg2-dist left/right, reached over HSRP trunks; core layer: 6509-left-core and 6509-right-core over Layer 3 links; Data Center VLANs 10 and 11.]
The following example shows the configuration of “6509-left-core.” This switch is configured for
Anycast RP.
ip multicast-routing
!
interface Loopback0
description MSDP local peer address
ip address 10.6.1.1 255.255.255.255
!
interface Loopback1
description Anycast RP address
ip address 10.6.2.1 255.255.255.255 Loopback 1 has the same address on both the right and left routers.
!
interface Vlanxx
description VLANs for other L3 links
ip address 10.0.0.x 255.255.255.252
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1 Identifies the address of the RP.
!
ip msdp peer 10.6.1.2 connect-source Loopback0 Enables MSDP and identifies the address of the MSDP peer.
!
ip msdp cache-sa-state Creates SA state (S,G). The SA cache entry is created when either MSDP peer has an active source.
!
ip msdp originator-id Loopback0 Sets RP address in SA messages to be Loopback 0.
The following example shows the configuration of “6509-right-core.” This switch is configured for
Anycast RP and is the HSRP active device.
ip multicast-routing
!
interface Loopback0
description MSDP local peer address
ip address 10.6.1.2 255.255.255.255
!
interface Loopback1
description Anycast RP address
ip address 10.6.2.1 255.255.255.255 Loopback 1 has the same address on both the right and left routers.
!
interface Vlanxx
description VLANs for other L3 links
ip address 10.0.0.x 255.255.255.252
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1 Identifies the address of the RP.
!
ip msdp peer 10.6.1.1 connect-source Loopback0 Enables MSDP and identifies the address of the MSDP peer. The connect-source keyword identifies the primary interface used to source the TCP connection between peers.
!
Note If the Layer 3 interface connects to a CGMP-enabled switch (3524-PWR), CGMP server operation must
be enabled. If the interface connects to an IGMP snooping-enabled switch, IP CGMP does not need to be
enabled, but PIM must be enabled.
The following example shows how to verify that each distribution switch has established a PIM neighbor
relationship.
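The verification output is not included in this excerpt. The standard command is show ip pim neighbor; the device prompt below is illustrative (hostnames in the figure are abbreviated), and the output should list a neighbor with uptime and expiry timers on each Layer 3 link toward the HSRP peer and the core switches.

4kS3-bldg1-dist-left#show ip pim neighbor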
Note For information about configuring the Data Center portion of the network for IP multicast, see Chapter 4,
“IP Multicast in a Data Center.”
Note The large campus network also has a distinct distribution and core layer. The IP addressing, VLAN
numbering, and HSRP numbering conventions are the same as those used in the small campus design.
[Figure: Large campus design — access layer: 6k-b1-left-access and 6k-b1-right-access (data VLANs 2 through 5, voice VLANs 12 through 15 across Buildings 1 and 2); distribution layer: 6k-b1-dist left and 6k-b1-dist right over HSRP trunks; Layer 3 core: 6k-topleft-core, 6k-topright-core, 6k-botleft-core, and 6k-botright-core; aggregation layer and Data Center VLANs 10 and 11.]
Each client and server access-layer switch is dual-connected to, and served by, a pair of
distribution-layer routers running HSRP. For multicast, one of the two routers is the DR with the other
being the IGMP querier. The IP Unicast routing protocol is configured such that the trunk from the
access-layer switch to the DR is always preferred over that of the non-DR. This forces the unicast and
multicast paths to be the same. The IGMP querier is responsible for sending IGMP queries. Both the DR
and the non-DR receive the subsequent reports from clients, but only the DR will act on them. If the DR
fails, the non-DR will take its role. If the IGMP querier fails, the DR will take over its role. The
distribution routers have dual connections to the core.
Keep the RP placement simple. With Anycast RP, it is recommended that the RPs be placed in the center
of the network. Placing RP operations on the core-layer switches is a good idea because the core is the
central point in the network servicing the distribution-layer switches in each building, the
aggregation-layer switches in the server farms, and the WAN and Internet blocks.
The applications used in this sample design (MoH and IP/TV) have a few sources and many receivers.
The sources are located in the server farm, so a complex distribution of RPs throughout the network is
not required.
Note Due to the number of devices in a large campus design, this section presents configurations for a
sampling of the devices. The remaining devices should have similar configurations with the exception
of the unique IP addresses, VLANs, HSRP, and other specifics.
In this design:
• The access layer switches have IGMP snooping enabled.
• The RPs are located on the two core-layer switches.
• PIM-SM is configured on all distribution-layer switches and core-layer switches.
• Anycast RP is configured for fast recovery of IP multicast traffic.
• PIM-SM and MSDP are enabled on all core-layer switches.
• Each distribution-layer switch points to the Anycast RP address as its RP.
• MSDP is used to synchronize SA state between the core switches.
Note Only two RPs are required to run Anycast RP. In most situations, two RPs provide sufficient
redundancy for multicast. The following sections show the RP configuration for “6k-botleft-core” and
“6k-botright-core.”
ip multicast-routing
!
interface Loopback0
description MSDP local peer address
ip address 10.6.1.1 255.255.255.255
!
interface Loopback1
description Anycast RP address
ip address 10.6.2.1 255.255.255.255
ip pim sparse-mode
!
interface Vlanxx
description VLANs for other L3 links
ip address 10.0.0.x 255.255.255.252
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1
ip msdp peer 10.6.1.2 connect-source Loopback0 Identifies the TCP peer and source interface.
!
ip msdp cache-sa-state Creates SA state (S,G).
ip msdp originator-id Loopback0 Sets RP address in SA messages to Loopback 0.
ip multicast-routing
!
interface Loopback0
description MSDP local peer address
ip address 10.6.1.2 255.255.255.255
!
interface Loopback1
description Anycast RP address
ip address 10.6.2.1 255.255.255.255
ip pim sparse-mode
!
interface Vlanxx
description VLANs for other L3 links
ip address 10.0.0.x 255.255.255.252
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1
ip msdp peer 10.6.1.1 connect-source Loopback0
!
ip msdp cache-sa-state
ip msdp originator-id Loopback0
Figure 2-8 provides a logical view of the MSDP configuration of the core switches.
[Figure 2-8: MSDP peering between 6k-botleft-core (Loopback0 10.6.1.1, Loopback1 10.6.2.1) and 6k-botright-core (Loopback0 10.6.1.2, Loopback1 10.6.2.1).]
ip multicast-routing
!
interface Vlanxx
description VLANs for Access Layer VLANs
ip address 10.1.x.x 255.255.255.0
ip pim sparse-mode
!
interface Vlanxx
description VLANs for other L3 links to Core
ip address 10.0.0.x 255.255.255.252
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1 Identifies the RP address for this device, pointing to the Anycast RP address in the core layer.
ip pim spt-threshold infinity Reduces multicast state (S,G) on the leaf routers by keeping traffic on the shared tree. (Optional)
ip multicast-routing
!
interface Vlanxx
description VLANs for Access Layer VLANs
ip address 10.1.x.x 255.255.255.0
ip pim sparse-mode
!
interface Vlanxx
description VLANs for other L3 links to Core
ip address 10.0.0.x 255.255.255.252
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1 Identifies the RP address for this device, pointing to the Anycast RP address in the core layer.
ip pim spt-threshold infinity Reduces multicast state (S,G) on the leaf routers by keeping traffic on the shared tree. (Optional)
Summary
In summary, when using IP multicast with MoH or IP/TV in the campus, follow these recommendations:
• Use PIM-SM (recommended)
• Ensure that the DR is configured to be the HSRP Active router (recommended)
• Use Anycast RP (recommended)
• Keep RP placement simple (recommended)
This chapter describes the configurations needed to control IP Multicast traffic over a Wireless LAN
(WLAN) and includes the following sections:
• Multicast WLAN Deployment Recommendations
• IP Multicast WLAN Configuration
• Other Considerations
• Summary
Tip For information about wireless theory, deployment, and configuration, please see the Cisco AVVID
Wireless LAN Design SRND.
Note This chapter uses MoH and IP/TV in the examples. It does not, however, provide detailed configurations
and designs for MoH and IP/TV. A basic MoH and IP/TV implementation is covered in Chapter 7,
“Multicast Music-on-Hold and IP/TV Configurations.”
Also, other types of IP multicast implementations, such as IP multicast for financial deployments, are
not covered.
Note Filters on the AP and bridge do not provide the flexibility needed for true multicast control.
If IP Multicast is to be deployed and streamed across the wireless network, then the following
recommendations should be implemented:
• Prevent unwanted multicast traffic from being sent out on the air interface.
– Place the WLAN in its own subnet.
– Control which multicast groups are allowed by implementing multicast boundaries on the egress
Layer 3 interface connecting to the VLAN or interface to the AP or bridge.
• To gain the highest AP/bridge performance for multicast traffic and data traffic, configure the APs
and bridges to run at the highest possible fixed data rate. This removes the requirement for multicast
to clock out at a slower rate, which can impact the range of the AP/bridge and must be taken into
account in the site survey.
• If multicast reliability is a problem (seen as dropped packets), ignore the preceding recommendation
and use a slower data rate (base rate) for multicast. This gives the multicast a better signal-to-noise
ratio and can reduce the number of dropped packets.
• Test the multicast application for suitability in the WLAN environment. Determine the application
and user performance effects when packet loss is higher than that seen on wired networks.
Note IP multicast configuration for the campus is covered in Chapter 2, “IP Multicast in a Campus Network.”
[Figure: WLAN access point topology — the IP/TV server (10.5.10.22) in the campus sources the high-rate stream 239.255.0.1 and the low-rate stream 239.192.248.1; L3-Switch (VLAN 200, 10.1.200.1) connects to a Cisco Aironet 350 Access Point (10.1.200.100) serving a PC with a 350 PC Card (10.1.200.101) on the 10.1.200.x subnet.]
In this configuration:
• L3-SWITCH connects to the campus network and the Cisco Aironet 350 Access Point
(10.1.200.100).
• The VLAN 200 interface on L3-SWITCH has the IP address of 10.1.200.1 and is the interface that
provides the boundary for IP multicast.
• The laptop computer (10.1.200.101) has a Cisco Aironet 350 PC Card and is running the IP/TV
Viewer software.
Below is the configuration for L3-SWITCH.
interface Vlan200
description WLAN VLAN
ip address 10.1.200.1 255.255.255.0
ip pim sparse-mode Enables PIM on the interface.
ip multicast boundary IPMC-WLAN Boundary refers to named ACL “IPMC-WLAN” and controls multicast forwarding AND IGMP packets.
!
ip access-list standard IPMC-WLAN
permit 239.192.248.1 Permits low-rate stream (239.192.248.1).
The low-rate stream is allowed and the high-rate stream is disallowed on the P2P wireless link. To
control what multicast traffic passes over the P2P link, only the ip multicast boundary configuration
on ROUTER is needed. Because the multicast boundary prevents hosts from joining unwanted groups,
the network never knows to forward unwanted traffic over the P2P link.
[Figure: Point-to-point bridge topology — the IP/TV server (10.5.10.22) in the campus sources the high-rate stream 239.255.0.1 and the low-rate stream 239.192.248.1; L3-Switch (VLAN 100, 10.1.100.1) connects over a wireless link between 350-Bridge-L (10.1.100.100) and 350-Bridge-R (10.1.100.101) to Router (10.1.100.2), which serves the remote 10.1.101.x LAN (PC with 350 PC Card at 10.1.101.2) through L2-Switch-PWR.]
In this configuration:
• L3-SWITCH (VLAN 100-10.1.100.1) connects to the campus network and the P2P wireless
network.
• The P2P wireless link is made possible by two Cisco Aironet 350 Bridges, 350-Bridge-L
(10.1.100.100) and 350-Bridge-R (10.1.100.101).
• ROUTER (10.1.100.2) connects to the P2P wireless network and the remote site network
(10.1.101.1) via L2-SWITCH-PWR.
• The laptop computer (10.1.101.2) is running the IP/TV Viewer software.
If the remote side of the P2P link has a Layer 2 switch and no Layer 3 switch or router, then a boundary
can be placed on the VLAN 100 interface of L3-SWITCH. Also, in a Point-to-Multipoint (P2MP)
deployment, a mix of both may be needed. Both configurations are shown here for reference.
Following is the configuration for L3-SWITCH.
interface Vlan100
description VLAN for P2P Bridge
ip address 10.1.100.1 255.255.255.0
ip pim sparse-mode Enables PIM on the interface.
ip multicast boundary IPMC-BRIDGE Boundary refers to named ACL “IPMC-BRIDGE.”
!
ip access-list standard IPMC-BRIDGE
permit 239.192.248.1 Permits low-rate stream (239.192.248.1).
To prevent unwanted IGMP messaging and multicast traffic from traversing the P2P wireless link on the
receiver side (remote LAN - 10.1.101.x), an ip multicast boundary is configured on the Fast Ethernet
0/1 interface of ROUTER.
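That configuration does not appear in this excerpt. Based on the addressing above, the relevant portion of ROUTER would look similar to the following sketch (the interface address 10.1.101.1 is given earlier for the remote site; the description text is illustrative).

interface FastEthernet0/1
description To remote site LAN (10.1.101.x)
ip address 10.1.101.1 255.255.255.0
ip pim sparse-mode
ip multicast boundary IPMC-BRIDGE Boundary refers to named ACL “IPMC-BRIDGE.”
!
ip access-list standard IPMC-BRIDGE
permit 239.192.248.1 Permits low-rate stream (239.192.248.1).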
Step 2 On the PC, open the IP/TV viewer and request the program associated with the 239.192.248.1 group.
The following ACL console message should appear on L3-SWITCH showing that the traffic was
permitted (number of packets may vary):
5w1d: %SEC-6-IPACCESSLOGS: list IPMC-WLAN permitted 239.192.248.1 7 packets
Step 3 Issue the show ip mroute active command on L3-SWITCH to see the active multicast stream. The result
is an active multicast stream for group 239.192.248.1 being sourced from the IP/TV server 10.5.10.22
with a rate of 100 kbps.
L3-SWITCH#show ip mroute active
Active IP Multicast Sources - sending >= 4 kbps
The IP/TV viewer should be displaying the video content on the screen.
Step 4 On the PC, stop the IP/TV program that is running for the 239.192.248.1 group.
Step 5 Enable the debug for the high-rate stream:
L3-SWITCH#debug ip igmp 239.255.0.1
Step 6 On the PC, open the IP/TV viewer and request the program associated with the 239.255.0.1 group.
The following ACL console message should appear on L3-SWITCH showing that the traffic was denied
(number of packets may vary):
5w1d: %SEC-6-IPACCESSLOGS: list IPMC-WLAN denied 239.255.0.1 1 packet
Step 7 The following debug entry should appear for the discarded IGMP join attempt:
5w1d: IGMP(0): Discard report at boundary (Vlan200) for 239.255.0.1
Step 2 On the PC, open the IP/TV viewer and request the program associated with the 239.192.248.1 group.
As an alternative to using the IP/TV viewer, run the ip igmp join-group 239.192.248.1 command on the
Fast Ethernet 0/1 interface on ROUTER.
The following ACL console message should appear on ROUTER showing that the traffic was permitted
(number of packets may vary):
1w1d: %SEC-6-IPACCESSLOGS: list IPMC-BRIDGE permitted 239.192.248.1 5 packets
Step 3 Run the show ip mroute active command on L3-SWITCH and ROUTER.
ROUTER#show ip mroute active
Active IP Multicast Sources - sending >= 4 kbps
The IP/TV viewer should be displaying the video content on the screen.
Step 4 On the PC, stop the IP/TV program that is running for the 239.192.248.1 group.
Step 5 Enable the debug for the high-rate stream:
ROUTER#debug ip igmp 239.255.0.1
Step 6 On the PC, open the IP/TV viewer and request the program associated with the 239.255.0.1 group.
The following ACL console message should appear on ROUTER showing that the traffic was denied
(number of packets may vary):
1w1d: %SEC-6-IPACCESSLOGS: list IPMC-BRIDGE denied 239.255.0.1 1 packet
The following debug entry should appear for the discarded IGMP join attempt:
1w1d: IGMP(0): Discard report at boundary (FastEthernet0/1) for 239.255.0.1
Other Considerations
The following additional considerations apply to deploying IP multicast in a WLAN environment:
• The WLAN LAN extension via EAP and WLAN static WEP solutions can support multicast traffic
on the WLAN; the WLAN LAN extension via IPSec solution cannot.
• The WLAN has an 11 Mbps available bit rate that must be shared by all clients of an AP. If the AP
is configured to operate at multiple bit-rates, multicasts and broadcasts are sent at the lowest rate to
ensure that all clients receive them. This reduces the available throughput of the network because
traffic must queue behind traffic that is being clocked out at a slower rate.
• WLAN clients can roam from one AP to another seamlessly within the same subnet. If roaming
multicast is to be supported, Cisco Group Management Protocol (CGMP) and/or Internet Group
Management Protocol (IGMP) snooping must be turned off for the port to which the AP is connected
because a multicast user roaming from one AP to another is roaming from one switch port to another.
The new switch port might not have this stream set up, and the switch has no reliable way of determining
the required multicast stream.
• Multicast and broadcast from the AP are sent without requiring link-layer acknowledgement. Every
unicast packet is acknowledged and retransmitted if unacknowledged. The purpose of the
acknowledgement is to overcome the inherent unreliable nature of wireless links. Broadcasts and
multicasts are unacknowledged due to the difficulty in managing and scaling the acknowledgements.
This means that a network that operates well for unicast applications can experience degraded
performance with multicast applications.
• Enterprise customers who are using WLAN in laptops would normally use Constant Awake Mode
(CAM) as the power-save mode. If delay-sensitive multicast traffic is being sent over the WLAN,
customers should ensure that only the CAM configuration is used on their WLAN clients. Based on
the 802.11 standard, if the client is in power-save mode, then the AP will buffer broadcast and
multicast traffic until the next beacon period that contains a delivery traffic information map (DTIM)
transmission. The default period is 200ms. Enterprises that use WLAN on small handheld devices
will most likely need to use the WLAN power-save features (Max or Fast) and should not attempt
to run delay-sensitive multicast traffic over the same WLAN.
Summary
In summary, when using IP multicast in the WLAN, follow these recommendations.
• Place the WLAN AP or bridge on a separate VLAN or Layer 3 interface so multicast boundaries can
be implemented.
• Use the ip multicast boundary command to prevent IGMP joins and multicast forwarding on
denied multicast groups.
• In a WLAN using AP, the boundary should be placed on the VLAN or Layer 3 interface connecting
to the AP.
• In a WLAN using bridges, the boundary is placed on the VLAN or Layer 3 interface connecting to
the remote receiver side. If no Layer 3 capable device is used at the remote site, the boundary is
placed on the VLAN or Layer 3 interface connecting to the bridge at the main site. Also, a
combination of a boundary at the receiver side and a boundary at the bridge connection at the main site
may be needed in a P2MP deployment.
• Set the highest possible fixed data rate on the APs and bridges to ensure the best possible
performance for multicast and data traffic.
• If dropped packets occur and impact the performance of the application, the fixed data rate on the
APs and bridges may need to be reduced to ensure a better signal-to-noise ratio, which can reduce
dropped packets.
Because many of the servers residing in the data center may use multicast to communicate, it becomes
necessary to enable IP multicasting on the data center switches. Platforms that deliver video services,
such as Cisco’s IP/TV broadcast servers, rely on multicast to deliver their services to end users. Many
back-end applications use multicast for replication purposes.
This chapter provides an overview of the data center architecture and recommendations for
implementing IP multicast in the data center. For more information about designing data centers, see the
Designing Enterprise Data Centers Solution Reference Network Design guide.
Aggregation Layer
The aggregation layer consists of network infrastructure components and other devices supporting
application, security, and management services. The aggregation layer is analogous to the traditional
distribution layer in the campus network in its Layer 3 and Layer 2 functionality. Those services that are
common to the devices in the front-end layer and the layers behind it should be centrally located for
consistency, manageability, and predictability. This makes the aggregation layer the location where
services are centralized. The devices in this layer include aggregation switches that connect the server
farms to the core layer, content switches, firewalls, intrusion detection systems (IDSs), content engines,
and SSL offloading devices.
For IP multicast:
• The aggregation layer requires multicast routing with PIM Sparse-mode to be configured.
• The Layer 3 switches in the aggregation layer point to the RPs previously defined in the campus
core.
• A dedicated VLAN is used to connect multicast sources that are not located in the front-end layer.
An example multicast source that will be placed in the dedicated VLAN is a streaming media
application, like an IP/TV broadcast server.
• Some sources are part of another server role that may need to be located in the front-end layer. An
example of a multicast source that is located in the front-end layer is Multicast Music-on-Hold
(MMoH). MMoH is often deployed in a co-resident fashion with Cisco Call Manager.
Front-End Layer
The front-end layer consists of infrastructure, security, and management devices supporting the
front-end server farms. The front-end layer is analogous to the traditional access layer in the campus
network and provides the same functionality. The front-end server farms typically include FTP, Telnet,
TN3270, SMTP, Web servers, and other business application servers. In addition, it includes
network-based application servers, such as IPTV Broadcast servers, and call managers that are not
placed at the aggregation layer due to port density or other design caveats.
The specific features required depend on the servers and their functions. For example, if video streaming
over IP is supported, multicast must be enabled; if Voice over IP is supported, QoS must be enabled.
Layer 2 connectivity through VLANs is required between servers and service devices, such as content
switches, and between servers that belong to the same server farm or subnet and are located in the same
or different access switches. This is known as server-to-server communication, which could also span
multiple tiers. Other features may include the use of host IDS if the servers are critical and need constant
monitoring. In general, the infrastructure components such as the Layer 2 switches provide intelligent
network services that enable front-end servers to provide their functions.
Cisco Catalyst switches support IGMP snooping or CGMP at Layer 2. Because IGMP snooping or
CGMP (platform dependent) is enabled on Layer 2 switches by default, no multicast configuration is
required at the front-end layer.
Application Layer
The application layer consists of the infrastructure, security, and management devices that support the
application servers. Application servers run a portion of the software used by business applications and
provide the communication logic between front-end and the back-end, which is typically referred to as
the middleware or business logic. Application servers translate user requests to commands the back-end
database systems understand. Increasing the security at this layer is focused on controlling the protocols
used between the front-end servers and the application servers.
The features required at this layer are almost identical to those needed in the front-end layer. Like the
front-end layer, the application layer infrastructure must support intelligent network services as a direct
result of the functions provided by the application services. However, the application layer requires
additional security.
Additional security is based on how much protection is needed by the application servers as they have
direct access to the database systems. Depending on the security policies, firewalls between web and
application servers, IDS, and host IDSs are used. By default, firewalls neither permit nor support
multicast forwarding. Careful consideration must be given to the deployment of firewall services when
multicast traffic is to be permitted across a secure boundary. It is not uncommon to deploy GRE
tunneling, multicast helper, or Policy Based Routing (PBR) to support multicast across secured
boundaries. These methods are difficult to deploy and troubleshoot. They also require Layer 3
intelligence in an area of the network that needs only Layer 2 features.
The best way to accommodate IP Multicast in the data center is to place multicast sources on a separate
VLAN that is located in the aggregation layer. Placing the multicast sources on a dedicated VLAN will
bypass the difficulties of using firewalls and multicast, as well as other issues like multicast on
router-less subnets.
Back-End Layer
The back-end layer consists of the infrastructure, security, and management devices supporting the
interaction with the database systems that hold the business data. The back-end layer features are almost
identical to those needed at the application layer, yet the security considerations are more stringent and
aimed at protecting the data, critical or not.
The back-end layer is primarily for the relational database systems that provide the mechanisms to
access the Enterprise information, which makes them highly critical. The hardware supporting the
relational database systems ranges from medium-sized servers to mainframes, some with locally attached
disks and others with separate storage.
Multicast requirements are not common at the back-end layer. The synchronization of database contents
requires an acknowledgement-based method of synchronization and multicast does not provide a
mechanism to support an acknowledgement-based service, unless a vendor has written a proprietary
extension.
Storage Layer
The storage layer consists of the storage infrastructure, such as Fibre Channel switches or iSCSI routers,
that connects servers to storage devices, as well as the storage devices where the data resides. At the
data center, storage devices, such as tape and disk subsystems, need a high-speed connection to provide
block level access to information from the database servers. This implies the disk subsystems are used
to consolidate information from dedicated local disks to a centralized repository supporting the database
systems. Multicast is not typically used at this layer.
[Figure: Data center connectivity — campus core and Internet edge, with DWDM and Fibre Channel (FC) links to the storage layer.]
• Deploy PIM Sparse-mode on the aggregation-layer switches and define the RPs as the previously
configured campus core-layer switches.
• Select Catalyst switches that have IGMP snooping and use CGMP in low-end switches that do not
support IGMP snooping.
• Use recommended commands to ensure that the correct RPs and sources are used.
• Use “show” commands to ensure proper operation of the multicast configurations and enable SNMP
traps to log multicast events.
• If a firewall module (PIX) or CSM is used in the aggregation layer, then a VLAN that is terminated
on the Layer 3 switch (aggregation layer) must be created for IP multicast traffic to bypass the
firewall and CSM. Currently, the firewall and CSM do not support IP multicast forwarding. Once
PIM is supported on the various services modules, then this step will not be necessary.
[Figure: Data center multicast design — the campus core (6k-botleft-core and 6k-botright-core, Layer 3) connects to the Data Center aggregation-layer switches; the front-end layer switches carry VLAN 10 (CallManager with MoH) and the dedicated multicast VLAN 11 (IP/TV server).]
The following example shows excerpts of the configuration of “6k-botleft-core” that pertain to the
aggregation layer.
interface Vlan50
description To agg (6k-agg-left) for front-end/dedicated IPmc VLAN
ip pim sparse-mode
The following example shows excerpts of the configuration of “6k-botright-core” that pertain to the
aggregation layer.
interface Vlan60
description To agg (6k-agg-right) for front-end/dedicated IPmc VLAN
ip pim sparse-mode
The following example shows the configuration of the left aggregation-layer switch (“6k-agg-left”).
ip multicast-routing
!
interface Vlan50
description To Campus Core (6k-botleft-core)
ip pim sparse-mode
!
interface Vlan10
description To front-end layer - MMoH server
ip pim sparse-mode
!
interface Vlan11
description To dedicated VLAN - IP/TV Server
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1 Identifies the RP address for this device, pointing to the Anycast RP address in the core layer.
The following example shows the configuration of the right aggregation-layer switch (“6k-agg-right”).
ip multicast-routing
!
interface Vlan60
description To Campus Core (6k-botright-core)
ip pim sparse-mode
!
interface Vlan10
description To front-end layer - MMoH Server
ip pim sparse-mode
!
interface Vlan11
description To dedicated VLAN - IP/TV Server
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1 Identifies the RP address for this device, pointing to the Anycast RP address in the core layer.
This chapter discusses the basic layout needed to use IP multicast in a WAN and includes the following
sections:
• Multicast WAN Deployment Recommendations
• IP Multicast WAN Configuration
• Summary
Note This chapter uses MoH and IP/TV in the examples. It does not, however, provide detailed configurations
and designs for MoH and IP/TV. A basic MoH and IP/TV implementation is covered in Chapter 7,
“Multicast Music-on-Hold and IP/TV Configurations.”
Also, other types of IP multicast implementations, such as IP multicast for financial deployments, are
not covered.
[Figure 5-1: WAN design — the HQ campus (6k-core-1 and 6k-core-2) hosts the IP/TV server and the CallManager with MMoH; WAN aggregation routers wanaggleft (F5/0/0, S0/0/0, Loopbacks 0 and 1) and wanaggright (F0/0, S4/0, Loopbacks 0 and 1) connect over primary and secondary PVCs across the IP WAN to three branch sites.]
Key IP addresses: IP/TV server 10.5.11.20; CallManager with MMoH 10.5.10.20.
Site bandwidth: Left 768 kbps; Middle 256 kbps; Right 128 kbps.
Branch VLANs: voice VLANs 120, 130, and 140; data VLANs 20, 30, and 40.
This section provides information and sample configurations for deploying the following IP multicast
elements in a WAN:
• Anycast RP
• IGMP Snooping and CGMP
Anycast RP
This section provides sample Anycast RP configurations for the routers shown in Figure 5-1.
Branch
The IP multicast configuration steps for the branch routers are as follows:
• Enable IP multicast routing.
• Enable IP PIM Sparse Mode on each interface that will receive and forward multicast traffic.
• Identify the RP for the router. With Anycast RP, the address defined is the Loopback address that is
duplicated on each participating Anycast RP router (Loopback 1 - 10.6.2.1).
Following is the basic multicast configuration for “leftrtr.” This configuration is duplicated on each
branch router.
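That configuration does not appear in this excerpt. A sketch of the basic branch multicast configuration follows; the subinterface numbers match the left-site VLANs in Figure 5-1, while the FastEthernet/Serial interface naming is an assumption.

ip multicast-routing
!
interface FastEthernet0/0.20 Data VLAN 20
ip pim sparse-mode
!
interface FastEthernet0/0.120 Voice VLAN 120
ip pim sparse-mode
!
interface Serial0/0.1 point-to-point WAN link to the aggregation routers (naming assumed)
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1 Identifies the Anycast RP address in the campus.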
WAN Aggregation
The IP multicast configuration steps for the WAN aggregation routers are as follows:
• Enable IP multicast routing.
• Enable IP PIM Sparse Mode on each interface that will receive and forward multicast traffic.
• Identify the RP for the router. With Anycast RP, the address defined is the Loopback address that is
duplicated on each participating Anycast RP router (Loopback 1 - 10.6.2.1).
Note Multicast Distributed Fast Switching (MDFS) is not supported on any type of virtual interface, such as
virtual-template or multilink interfaces. Enabling ip mroute-cache distributed on unsupported
interfaces can result in process switching of the multicast packets or dropped packets.
Following is the configuration for “wanaggleft.”
ip multicast-routing
!
interface FastEthernet5/0/0
description To HQ Switch
ip pim sparse-mode
ip mroute-cache distributed Enable distributed multicast fast-switching.
!
interface Serial0/0/0
description WANAGGLEFT to Branch Routers
ip mroute-cache distributed
!
interface Serial0/0/0.1 point-to-point
description To LEFTRTR DLCI 121
ip pim sparse-mode
!
interface Serial0/0/0.2 point-to-point
description To MIDDLERTR DLCI 131
ip pim sparse-mode
!
interface Serial0/0/0.3 point-to-point
description To RIGHTRTR DLCI 141
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1
Following is the basic multicast configuration for “wanaggright.”
ip multicast-routing
!
interface FastEthernet0/0
description To HQ Switch
ip pim sparse-mode
!
interface Serial4/0.1 point-to-point
description To LEFTRTR DLCI 120
ip pim sparse-mode
!
interface Serial4/0.2 point-to-point
description To MIDDLERTR DLCI 130
ip pim sparse-mode
!
interface Serial4/0.3 point-to-point
description To RIGHTRTR DLCI 140
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1
MSDP Filters
Recommended MSDP filters should be applied if the network is connected to native IP multicast on the Internet. MSDP filters are used to reduce the excessive amount of (S, G) state that is passed between Internet MSDP peers.
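The filters themselves are not reproduced in this chunk. A commonly recommended sketch is shown below; the peer address 192.168.100.1 is only a placeholder. The extended ACL blocks SA messages for groups that should never cross an Internet MSDP peering, such as the Auto-RP groups and the administratively scoped range.
ip msdp sa-filter in 192.168.100.1 list 124
ip msdp sa-filter out 192.168.100.1 list 124
!
access-list 124 deny ip any host 224.0.1.39
access-list 124 deny ip any host 224.0.1.40
access-list 124 deny ip any 239.0.0.0 0.255.255.255
access-list 124 permit ip any any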
IGMP Snooping and CGMP
Following is the CGMP configuration for a branch router.
ip multicast-routing
!
interface FastEthernet0/0.20
ip pim sparse-mode
ip cgmp Enable CGMP on the router interface. PIM must be enabled before CGMP is enabled.
!
interface FastEthernet0/0.120
ip pim sparse-mode
ip cgmp
CGMP operation between the switch and router can be verified using the show cgmp command, as shown below:
left_3524#show cgmp
CGMP is running.
CGMP Fast Leave is not running.
CGMP Allow reserved address to join GDA .
Default router timeout is 300 sec.
Summary
In summary, when using IP multicast with MoH or IP/TV in the WAN, follow these recommendations:
• Use PIM-SM (recommended)
• Anycast RP with RPs located in the campus (recommended)
– Fast convergence, redundancy, load balancing
This chapter discusses the basic layout needed to use IP multicast in a Virtual Private Network (VPN)
and includes the following sections:
• Site-to-Site VPN Overview
• VPN Deployment Model
• Multicast VPN Deployment Recommendations
• Multicast Site-to-Site VPN Deployment
• Summary
[Figure: GRE and IPSec encapsulation overhead. An original IP packet carries a 20-byte IP header plus data. GRE encapsulation adds a new 20-byte IP header and a 4-byte GRE header. IPSec encapsulation then adds a further 20-byte outer IP header and a variable amount of IPSec overhead (nominally 32 bytes), so the encapsulated packet can exceed the MTU size of the link.]
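The arithmetic behind the encapsulation overhead can be sketched as follows. The 32-byte IPSec figure is the nominal value from the overhead discussion above; in practice it varies with the transform set.

```python
# Header sizes (bytes) for GRE-over-IPSec encapsulation.
IP_HDR = 20      # IP header (outer, GRE delivery, and original)
GRE_HDR = 4      # GRE header
IPSEC_OVHD = 32  # nominal IPSec overhead; varies with the transform set

def encapsulation_overhead():
    # new outer IP header + IPSec overhead + GRE IP header + GRE header
    return IP_HDR + IPSEC_OVHD + IP_HDR + GRE_HDR

def max_original_packet(link_mtu=1500):
    # Largest original IP packet (header + data) that still fits in
    # the link MTU after GRE-over-IPSec encapsulation.
    return link_mtu - encapsulation_overhead()
```

With these values the added overhead is 76 bytes, leaving room for a 1424-byte original packet on a 1500-byte MTU link.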
Tip Additional information for tunnel allocation and sizing devices can be found in the Cisco AVVID
Network Infrastructure Enterprise Site-to-Site VPN Design SRND.
[Figure 2: Tunnel aggregation. Head-end devices terminate primary tunnels (green) and secondary tunnels (yellow) from the branch office devices.]
To plan for proper tunnel aggregation and load distribution in the case of a head-end device failure, the
following process should be used:
• Start with the number of total branch devices to be aggregated at the head-end.
• Divide this number by the number of head-end devices.
• Multiply the result by 2 for primary and secondary tunnels. This is the total tunnels per head-end
device.
• Allocate the primary tunnels to each head-end device in the arrangement shown in Figure 2 above
(in green).
• For each group, allocate the secondary tunnels in a round-robin fashion to all head-end devices
except the one serving as a primary for that group. This arrangement is also shown in Figure 2 above
(in yellow).
• Check to see that each head-end device is now allocated the same number of total tunnels per
head-end device.
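The allocation process above can be sketched in code. Head-ends and branches are identified here by index, which is purely illustrative; the point is that every head-end ends up with the same total tunnel count.

```python
def allocate_tunnels(num_branches, num_head_ends):
    """Assign primary and secondary tunnels to head-end devices."""
    group = num_branches // num_head_ends  # branches per primary group
    primary = {h: [] for h in range(num_head_ends)}
    secondary = {h: [] for h in range(num_head_ends)}
    # Primary tunnels: contiguous groups of branches, one group per head-end.
    for b in range(num_branches):
        primary[min(b // group, num_head_ends - 1)].append(b)
    # Secondary tunnels: round-robin over every head-end except the
    # one serving as primary for that group.
    for h, members in primary.items():
        others = [x for x in range(num_head_ends) if x != h]
        for i, b in enumerate(members):
            secondary[others[i % len(others)]].append(b)
    return primary, secondary

# Each head-end carries (branches / head-ends) * 2 total tunnels.
primary, secondary = allocate_tunnels(12, 3)
totals = [len(primary[h]) + len(secondary[h]) for h in range(3)]
```

For 12 branches and 3 head-ends, each head-end carries 4 primary and 4 secondary tunnels, matching the "multiply by 2" check in the process above.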
[Figure: Branch VPN connectivity. Each branch office maintains a primary VPN tunnel to one head-end and a secondary VPN tunnel to another.]
IKE Configuration
There must be at least one matching Internet Key Exchange (IKE) policy between two potential IPSec peers. The example configuration below shows a policy using pre-shared keys with 3DES as the encryption transform. There is a default IKE policy that contains the default values for the transform, hash method, Diffie-Hellman group, authentication, and lifetime parameters. This is the lowest priority IKE policy.
When using pre-shared keys, Cisco recommends that wildcard keys not be used. Instead, the example shows two keys configured for two separate IPSec peers. The keys should be carefully chosen; “cisco” is used only as an example. The use of alphanumeric and punctuation characters in keys is recommended.
The IKE configurations shown below are all the same for each device, with the exception of the unique
IP address used for each router.
Head-End
Following is the IKE configuration for VPN-HE-1.
interface FastEthernet0/1
description to ISP for VPN
ip address 131.108.1.1 255.255.255.252
!
crypto isakmp policy 1
encr 3des
authentication pre-share
group 2
crypto isakmp key cisco address 131.108.101.1
crypto isakmp key cisco address 131.108.102.1
Following is the IKE configuration for VPN-HE-2.
interface FastEthernet0/1
description to ISP for VPN
ip address 131.108.1.5 255.255.255.252
!
crypto isakmp policy 1
encr 3des
authentication pre-share
group 2
crypto isakmp key cisco address 131.108.102.1
crypto isakmp key cisco address 131.108.101.1
Branch
Following is the IKE configuration for VPN-Branch-1.
interface Serial0/0
description to ISP for VPN
ip address 131.108.101.1 255.255.255.252
!
crypto isakmp policy 1
encr 3des
authentication pre-share
group 2
crypto isakmp key cisco address 131.108.1.1
crypto isakmp key cisco address 131.108.1.5
Following is the IKE configuration for VPN-Branch-2.
interface Serial0
description to ISP for VPN
ip address 131.108.102.1 255.255.255.252
!
crypto isakmp policy 1
encr 3des
authentication pre-share
group 2
crypto isakmp key cisco address 131.108.1.5
crypto isakmp key cisco address 131.108.1.1
Tip These defaults and more information can be found at: http://www.cisco.com/univercd/cc/td/doc/product/
software/ios122/122cgcr/fsecur_r/fipsencr/srfike.htm#xtocid17729
Head-End
Following is the transform configuration for the head-end routers.
Branch
Following is the transform configuration for the branch routers.
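The transform-set configurations themselves are not reproduced in this chunk. Because the crypto maps below reference set transform-set strong and the IKE policy uses 3DES, a plausible sketch, identical on head-end and branch, is shown here; the ESP authentication transform is an assumption.
crypto ipsec transform-set strong esp-3des esp-sha-hmac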
Head-End
Following is the ACL configuration for VPN-HE-1.
Branch
Following is the ACL configuration for VPN-Branch-1.
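The ACLs themselves are not reproduced in this chunk. Given that the crypto maps match GRE tunnel traffic between the public addresses configured above, a sketch of the toBranch1 ACL on VPN-HE-1 (mirrored as toHE-1 on the branch, with source and destination reversed) might be:
ip access-list extended toBranch1
permit gre host 131.108.1.1 host 131.108.101.1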
Head-End
Following is the crypto map configuration for VPN-HE-1.
interface FastEthernet0/1
description to ISP for VPN
ip address 131.108.1.1 255.255.255.252
!
crypto map static-map local-address
FastEthernet0/1
!
crypto map static-map 1 ipsec-isakmp
set peer 131.108.101.1
set transform-set strong
match address toBranch1
crypto map static-map 2 ipsec-isakmp
set peer 131.108.102.1
set transform-set strong
match address toBranch2
Following is the crypto map configuration for VPN-HE-2.
interface FastEthernet0/1
description to ISP for VPN
ip address 131.108.1.5 255.255.255.252
!
crypto map static-map local-address
FastEthernet0/1
!
crypto map static-map 1 ipsec-isakmp
set peer 131.108.102.1
set transform-set strong
match address toBranch2
crypto map static-map 2 ipsec-isakmp
set peer 131.108.101.1
set transform-set strong
match address toBranch1
Branch
Following is the crypto map configuration for VPN-Branch-1.
interface Serial0/0
description to ISP for VPN
ip address 131.108.101.1 255.255.255.252
!
crypto map static-map local-address Serial0/0
!
crypto map static-map 1 ipsec-isakmp
set peer 131.108.1.1
set transform-set strong
match address toHE-1
crypto map static-map 2 ipsec-isakmp
set peer 131.108.1.5
set transform-set strong
match address toHE-2
Following is the crypto map configuration for VPN-Branch-2.
interface Serial0
description to ISP for VPN
ip address 131.108.102.1 255.255.255.252
!
crypto map static-map local-address Serial0
!
crypto map static-map 1 ipsec-isakmp
set peer 131.108.1.5
set transform-set strong
match address toHE-2
crypto map static-map 2 ipsec-isakmp
set peer 131.108.1.1
set transform-set strong
match address toHE-1
Head-End
Following is the configuration that applies the crypto maps for VPN-HE-1.
interface FastEthernet0/1
description to ISP for VPN
ip address 131.108.1.1 255.255.255.252
crypto map static-map
!
interface Tunnel0
description Primary Tunnel to Branch1
ip address 10.200.1.1 255.255.255.252
tunnel source FastEthernet0/1
tunnel destination 131.108.101.1
crypto map static-map
!
interface Tunnel1
description Secondary Tunnel to Branch2
ip address 10.200.1.13 255.255.255.252
delay 6000
tunnel source FastEthernet0/1
tunnel destination 131.108.102.1
crypto map static-map
Following is the configuration that applies the crypto maps for VPN-HE-2.
interface FastEthernet0/1
description to ISP for VPN
ip address 131.108.1.5 255.255.255.252
crypto map static-map
!
interface Tunnel0
description Primary Tunnel to Branch2
ip address 10.200.1.9 255.255.255.252
tunnel source FastEthernet0/1
tunnel destination 131.108.102.1
crypto map static-map
!
interface Tunnel1
description Secondary Tunnel to Branch1
ip address 10.200.1.5 255.255.255.252
delay 6000
tunnel source FastEthernet0/1
tunnel destination 131.108.101.1
crypto map static-map
Note EIGRP is the routing protocol used in this design. See the Cisco AVVID Network Infrastructure
Enterprise Site-to-Site VPN Design SRND for alternative ways to influence cost in other IGP routing
protocols.
Branch
Following is the configuration that applies the crypto maps for VPN-Branch-1.
interface Serial0/0
description to ISP for VPN
ip address 131.108.101.1 255.255.255.252
crypto map static-map
!
interface Tunnel0
description Primary Tunnel to HE1
ip address 10.200.1.2 255.255.255.252
tunnel source Serial0/0
tunnel destination 131.108.1.1
crypto map static-map
!
interface Tunnel1
description Secondary Tunnel to HE2
ip address 10.200.1.6 255.255.255.252
delay 60000
tunnel source Serial0/0
tunnel destination 131.108.1.5
crypto map static-map
Following is the configuration that applies the crypto maps for VPN-Branch-2.
interface Serial0
description to ISP for VPN
ip address 131.108.102.1 255.255.255.252
crypto map static-map
!
interface Tunnel0
description Primary Tunnel to HE2
ip address 10.200.1.10 255.255.255.252
tunnel source Serial0
tunnel destination 131.108.1.5
crypto map static-map
!
interface Tunnel1
description Secondary Tunnel to HE1
ip address 10.200.1.14 255.255.255.252
delay 60000
tunnel source Serial0
tunnel destination 131.108.1.1
crypto map static-map
Tip See the Cisco AVVID Network Infrastructure Enterprise Site-to-Site VPN Design SRND for more on
static and dynamic routing with VPN.
[Figure: Multicast over site-to-site VPN. 6k-core-1 and 6k-core-2 in the HQ campus serve as the Anycast RPs; each branch reaches the head-ends over a primary VPN tunnel and a secondary VPN tunnel.]
This section provides information and sample configurations for deploying Anycast RP in a WAN.
Branch
Following is the basic multicast configuration for Branch1.
ip multicast-routing
!
interface Tunnel0
description Primary Tunnel to HE1
ip pim sparse-mode
!
interface Tunnel1
description Secondary Tunnel to HE2
ip pim sparse-mode
!
interface FastEthernet0/0.1
description DATA VLAN 4
encapsulation dot1Q 4
ip address 40.1.4.1 255.255.255.0
ip pim sparse-mode
!
interface FastEthernet0/0.2
description VOICE VLAN 40
encapsulation dot1Q 40
ip address 40.1.40.1 255.255.255.0
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1 Loopback 1 address of both RPs.
Following is the basic multicast configuration for Branch2.
ip multicast-routing
!
interface Tunnel0
description Primary Tunnel to HE2
ip pim sparse-mode
!
interface Tunnel1
description Secondary Tunnel to HE1
ip pim sparse-mode
!
interface FastEthernet0/0.1
description DATA VLAN 5
encapsulation dot1Q 5
ip address 50.1.5.1 255.255.255.0
ip pim sparse-mode
!
interface FastEthernet0/0.2
description VOICE VLAN 50
encapsulation dot1Q 50
ip address 50.1.50.1 255.255.255.0
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1 Loopback 1 address of both RPs.
Head-End
Following is the basic multicast configuration for VPN-HE-1.
ip multicast-routing
!
interface Tunnel0
description Primary Tunnel to Branch1
ip pim sparse-mode
!
interface Tunnel1
description Secondary Tunnel to Branch2
ip pim sparse-mode
!
interface FastEthernet0/0
description to Corporate Network
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1 Loopback 1 address of both RPs.
Following is the basic multicast configuration for VPN-HE-2.
ip multicast-routing
!
interface Tunnel0
description Primary Tunnel to Branch2
ip pim sparse-mode
!
interface Tunnel1
description Secondary Tunnel to Branch1
ip pim sparse-mode
!
interface FastEthernet0/0
description to Corporate Network
ip pim sparse-mode
!
ip pim rp-address 10.6.2.1 Loopback 1 address of both RPs.
Note Ensure that the IP subnets for the GRE tunnels are advertised in the IGP routing protocol. If not, RPF
failures will occur.
Summary
In summary, when using IP multicast with MoH or IP/TV in the WAN, follow these recommendations:
• Use PIM-SM (recommended)
• Anycast RP (recommended)
– Fast convergence, redundancy, load balancing
This chapter discusses the basics of MMoH and IP/TV implementation. Detailed configuration and
design for both MoH and IP/TV are not covered; only the basic layout is needed to enable multicast
operation.
Multicast Music-on-Hold
Cisco CallManager version 3.1 and higher allows for the configuration of MoH services. Unicast and multicast MoH streams can be generated by two source types:
• A file source uses actual files located on the MoH server to “play” or stream audio music out to the
network.
• A fixed source uses a sound card located in the MoH server. The sound card can connect to a wide range of devices, such as CD and cassette players.
There are two types of hold that can be used with MoH.
• Network hold is used when features, such as transfer, conference, and call-park, are used.
• User hold is used when the IP phone user selects the “hold” softkey.
The following are guidelines for implementing MoH with IP multicast:
• Use an IP multicast address group range from the administratively scoped address range.
• Enable multicast on the MoH server and enter the starting address range selected from the scoped
range.
• Increment the multicast streams on IP address rather than on port number.
• Use the G.711 CODEC on the LAN and the G.729 CODEC for streams crossing the WAN.
• Change the default “Max Hops” or TTL to a value representative of the routed network.
• Enable multicast for each of the selected audio sources.
• Enable the user and network hold sources for each IP phone.
• Over-provision the Low-Latency Queue (LLQ) by one or two streams to account for the additional traffic brought on by MoH.
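As a rough illustration of the last guideline, the LLQ could be sized with an allowance for MoH streams. The 24-kbps per-stream figure assumes G.729 at 50 packets per second with IP/UDP/RTP headers and no Layer 2 overhead; adjust it for the actual codec and link type.

```python
G729_STREAM_KBPS = 24  # ~8 kbps codec payload + IP/UDP/RTP header overhead

def llq_bandwidth_kbps(expected_calls, moh_allowance=2):
    # Over-provision the priority queue by one or two streams so that
    # Multicast MoH traffic does not starve active calls.
    return (expected_calls + moh_allowance) * G729_STREAM_KBPS
```

For example, a branch expected to carry five concurrent G.729 calls would reserve bandwidth for seven streams.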
It is important to understand the basic operation of how the MoH server provides music to the IP phone.
Figure 7-1 illustrates a MoH setup.
[Figure 7-1: Multicast MoH operation. The headquarters site contains the Cisco CallManager cluster and the Music on Hold server. Across the WAN, extension 1000 is in a call with 2000 and extension 1001 is in a call with 2001. When a call is placed on hold, CallManager tells the IP phone to “join” the MMoH group: music is played to 2000, and extension 2001 gets a local copy of the MMoH stream.]
Note It is important to make sure that the audio source, region, location and Media Resource Group
List (MRGL) are configured properly for the branch office. If not, the wrong MoH audio
CODEC may be used and a separate audio stream will be sent to the branch. This defeats the
purpose of saving bandwidth with multicast.
Step 6 The IP phone (Extension 2001) signals the network that it wishes to receive the multicast stream
associated with the MoH group number. The local switch and router know, via CGMP/PIM or IGMP
snooping, that an existing stream is already being forwarded to Extension 2000. So a copy of the existing
stream is forwarded from the local branch office switch.
Step 7 When the holding phone resumes the call, the held phone sends an IGMP leave to signal that it no longer
wants the multicast traffic.
Note The recommendation is to use the G.711ulaw or Wideband CODEC in the campus network. G.729 consumes less bandwidth than the G.711 or Wideband CODEC and is recommended for use in the WAN. Using G.729 conserves bandwidth, but its audio quality for music is noticeably lower.
Table 7-1 lists the maximum number of MoH sessions for each model of server. This is the recommended maximum number of sessions a platform can support, whether the sessions are unicast, multicast, or a mix of both.
Step 1 From the CallManager Admin main page, select Service > Music On Hold > Configure Music On Hold Servers. Figure 7-2 shows the MoH Server Configuration screen.
Step 2 Select the desired server from the list on the left.
Step 3 Under the “Multicast Audio Source Information” section, select Enable Multicast Audio Sources on
this MoH Server.
Step 4 Change the Base Multicast IP Address to the range identified from the administratively scoped range.
Step 5 Ensure that Increment Multicast on IP Address is selected. The default depends on the version of CallManager.
Step 6 Change the default max hops (TTL) to reflect the routed network.
Step 7 Click on Update to save the changes.
Note A warning message will be displayed if the MoH Server has not been associated with a Media Resource
Group (MRG) and Media Resource Group List (MRGL). The MoH Server should be associated with the
appropriate MRG and MRGL before the source is assigned to the IP phone. MRG and MRGL
configurations are not covered in this document as this involves the configurations of other Media
Resources such as conference bridges.
Step 1 From the CallManager Admin main page, select Service > Music On Hold. Figure 7-3 shows the MoH Audio Source Configuration screen.
Step 2 Select the desired audio source from the list on the left.
Step 3 Select Allow Multicasting.
Step 4 Click on Update to save the changes.
Step 5 A message is displayed indicating that the MoH Servers must be reset. To reset the servers, click on
Reset MoH Servers.
Step 1 From the CallManager Admin main page, select Device > Phone > Find and enter the search criteria. Figure 7-4 shows the Phone Configuration screen.
Note A different user and network hold source can be configured for each Directory Number (DN)
listed for this device.
Step 1 From the CallManager Admin main page, select Service > System Parameters > Cisco IP Voice Media Streaming App > DefaultMOHCodec.
Step 2 Change the defaults as desired.
Step 3 Click on Update to save the changes.
Tip For information about configuring QoS for IP Telephony, see the Cisco AVVID Network Infrastructure
Enterprise Quality of Service Design guide.
IP/TV Server
The Cisco IP/TV solution is used to serve media streams for live broadcast, stored broadcast, and
on-demand content. The server used in this design is streaming out to multicast groups using
pre-recorded files. IP/TV servers provide streaming of Video, Audio, and Slidecast (presentations)
media.
The following are guidelines for implementing IP/TV with IP multicast:
• Use multicast addresses found in the administratively scoped address range.
• Associate multicast group ranges for each type of stream.
• Use IP multicast boundaries to restrict the forwarding of certain streams to selected areas of the
network.
• Configure QoS rules for the IP/TV server at the ingress point in the campus.
• Use the recommended DSCP of CS4.
Following is an example IP/TV configuration for a WAN aggregation router (Frame-relay). In this
example:
• The high-rate stream is configured to stream in the campus network and stop at the WAN aggregation router. If satellite feeds or high-speed WAN links exist, the high-rate stream can pass to those areas of the network. However, the design discussed in this section prevents this, to illustrate how IP multicast boundaries and filters can be used to keep certain traffic from passing over the WAN.
• The medium stream is configured to stream in the campus and on links of 768 kbps and higher.
• The low stream is configured to stream in the campus and on links of 256 kbps and higher.
• Multicast MoH is configured to stream to all locations.
• The group range for Multicast MoH is added for full configuration reference (239.192.240.0/22).
Note For more information about the boundaries, see the “Traffic Engineering” section on page 8-4.
To verify the configuration, first use the IP/TV Viewer client to request the stream at the branch office.
Then use the show ip mroute active command. The following example shows the low stream (100k).
The following example illustrates the recommended configuration for IP/TV streaming media. On the
server-farm switch, configure the port connected to the IP/TV server to classify all traffic from the source
as CoS=4 and to override any CoS value previously set.
interface GigabitEthernet0/1
description To IP/TV Server
mls qos cos 4 Assign a CoS of 4 to any traffic arriving on this port.
mls qos cos override Override any previously classified CoS value with the one set above.
Because a CoS of 4 maps, by default, to a DSCP of 32 (CS4), the mls qos map command is optional for the IP/TV server-farm switch. Also, on the server-farm switch there is no need to remap any of the CoS values for Voice (CoS 5 by default maps to DSCP 46) or Voice-Control (CoS 3 by default maps to DSCP 26). All of these should be left at their defaults for this example.
To verify that the default mappings are in place, issue the show mls qos maps cos-dscp command:
Cos-dscp map:
cos: 0 1 2 3 4 5 6 7
--------------------------------
dscp: 0 8 16 24 32 40 48 56
Once this mapping is complete, the IP/TV traffic can be scheduled and provisioned according to the
CoS/DSCP values identified above.
Summary
In summary, when using IP multicast with MoH or IP/TV, follow these recommendations:
Multicast Music-on-Hold
• Increment on IP Address (recommended)
• Change max Hops to reflect specific routing topology (recommended)
• Allow G.729 Multicast Audio Source to Branch offices (recommended)
• Restrict G.711 and Wideband to local Campus (recommended)
• Provision LLQ appropriately to account for Multicast MoH streams in QoS Policy (recommended)
IP/TV Server
• Use administrative scoping for different stream sizes (recommended)
• Apply boundaries to streams by using the scoped group addresses (recommended)
• Classify IP/TV streaming media at the ingress port in the server-farm switch
– Use QoS markings of CoS=4 and DSCP=CS4 (recommended)
This chapter provides recommendations for security measures, timer adjustments, and traffic
engineering for an IP multicast network.
Security
With IP multicast, it is important to protect the traffic from Denial-of-Service (DoS) attacks or stream
hijacking by rogue sources or rogue RPs.
• DoS attacks affect the availability and efficiency of a network. If a rogue application or device can generate enough traffic and target it at the network, CPU and memory resources can be severely impacted.
• Stream hijacking allows any host on the network to become an active source for any legitimate
multicast group. It is easy to download a free multicast-enabled chat application from the Internet
and change the IP Multicast group address assignment to be the same as that used by legitimately
configured multicast applications. If the network devices are not secured from “accepting”
unauthorized sources, a rogue source can impact the IP Multicast streams. For the most part, receivers have no way of knowing which source is actually responsible for which group.
Use the following commands on IP Multicast-enabled routers to guard against rogue sources and rogue
RPs:
• ip pim accept-register
• ip pim rp-announce-filter
• ip pim rp-address
• ip igmp access-group
Rogue Sources
A source is any host that is capable of sending IP Multicast traffic. Rogue sources are unauthorized
sources that send IP Multicast traffic.
Sources send group traffic to the first-hop router. The first-hop router sends a Register message to the
RP with information about the active source. To protect the router from unauthorized Register messages,
use the ip pim accept-register command. This command, which can be used only on candidate RPs,
configures the RP to accept Register messages only from a specific source. If a Register message is
denied, a Register-Stop is sent back to the originator of the Register.
• If the list acl attribute is used, extended access lists can be configured to determine which pairs
(source and group) are permitted or denied when seen in a Register message.
• If the route-map map attribute is used, typical route-map operations can be applied on the router
for the source address that appears in a Register message.
The following example illustrates a configuration that permits a registration from the sources listed
(10.5.10.20 MoH server and 10.5.11.20 IP/TV Server).
Note For more information about the addresses used, see Table 6-1.
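The example itself is not reproduced in this chunk. Using the list form of the command with an extended ACL, a sketch consistent with the text (the ACL number is only a placeholder) might be:
ip pim accept-register list 120
!
access-list 120 permit ip host 10.5.10.20 any
access-list 120 permit ip host 10.5.11.20 any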
Additionally, a list can be configured that indicates which groups are permitted from the sources at time
of registration. The following example illustrates a configuration that permits the MoH group address
from the MoH server and the three IP/TV groups from the IP/TV server.
If an unauthorized source comes online and the first-hop router attempts to register this new source with
the RP, the registration will be rejected. The following example shows the debug output for a failed
registration attempt by router 10.0.0.37 for source 10.5.12.1 and group 239.194.1.1.
Note The streams sent by rogue sources would flow on the local subnet where the source resides. In addition
to not blocking the source on the local subnet, there are other topological cases where the
“accept-register” mechanism fails to block rogue sources.
Rogue RPs
A rogue RP is any router that, by mistake or maliciously, acts as an RP for a group. To guard against
maliciously configured routers acting as a candidate RP, use the following commands:
• The ip pim rp-announce-filter command is used on Mapping Agents to filter Auto-RP
announcement messages coming from the RP. This command can be used only when Auto-RP is
deployed.
In the following example, the router is configured to accept RP announcements from RPs in access
list 11 for group ranges described in access list 12.
ip pim rp-announce-filter rp-list 11 group-list 12
• The ip pim rp-address command configures the PIM RP address for a particular group or group
range. Without an associated group-acl, the default group range is 224.0.0.0/4. The RP address is
used by first-hop routers to send Register messages on behalf of source multicast hosts. The RP
address is also used by routers on behalf of multicast hosts that want to become members of a group.
These routers send Join and Prune messages to the RP. Although the command is not used
specifically for security purposes, it does help to ensure that a non-RP router uses the authorized
RPs for the network.
In Chapter 2, “IP Multicast in a Campus Network,” a filter was used to control which groups an RP was responsible for. If a source becomes active for a group that is not in the ACL for the RP group-list, then there will be no active RP for the newly defined group, causing the group to fall into dense mode. As an extra layer of precaution against configuration mistakes or acts of DoS, an RP should be defined that covers all unused multicast groups. This ensures that undefined groups have an RP on the network; they will not fall into dense mode, nor will they be forwarded. This method, commonly referred to as a “Garbage-can RP,” can also be used to “log” attempts by rogue sources and groups to register on the network.
• The ip igmp access-group command is applied to an interface to restrict the group ranges to which
devices are permitted to become members. The interface will discard Join messages for illegal
groups. The following example shows that the members of VLAN 10 are only allowed to join the
group w.x.y.z.
interface Vlan 10
ip igmp access-group 1
!
access-list 1 permit w.x.y.z
Note The use of IGMP-based ACLs can become a management issue if widely deployed. If there is a need to restrict which groups clients can join, try to restrict the groups on the RP, Mapping Agents, or PIM-enabled first-hop routers. If IGMP-based ACLs have been deployed and a group is added or deleted, the ACLs will have to be reconfigured on each VLAN or physical interface to which the clients are attached.
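The Garbage-can RP described above can be sketched as follows. The sink-RP address 10.6.2.254 and the ACL number are placeholders; the deny entry excludes the Multicast MoH range, for which a real RP (10.6.2.1) is already defined in this design.
ip pim rp-address 10.6.2.254 20
!
access-list 20 deny 239.192.240.0 0.0.3.255
access-list 20 permit 224.0.0.0 15.255.255.255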
Query Interval
The ip pim query-interval command configures the frequency of PIM Router-Query messages. Router
Query messages are used to elect a PIM DR. The default value is 30 seconds. If the default value is
changed, the recommended interval is 1 second.
To verify the interval for each interface, issue the show ip pim interface command, as shown below.
Announce Interval
The ip pim send-rp-announce command has an interval option. Adjusting the interval allows for faster
RP failover when using Auto-RP. The default interval is 60 seconds and the holdtime is 3 times the
interval. So the default failover time is 3 minutes. The lower the interval, the faster the failover time.
Decreasing the interval will increase Auto-RP traffic but not enough to cause any kind of a performance
impact. If the default is to be changed, use the recommended values of 3 to 5 seconds.
Traffic Engineering
Traffic engineering adds a great deal of control to IP multicast deployment and operation. It also adds
complexity. One option for controlling IP multicast is through the use of scoped boundaries.
The ip multicast boundary command configures an administratively scoped boundary on an interface
and permits or denies multicast group addresses found in the access-list. No multicast packets will be
allowed to flow across the boundary from either direction. This allows reuse of the same multicast group
address in different administrative domains.
If the RPF interface for a multicast route has a multicast boundary configured for that group, its outgoing
interfaces will not be populated with multicast forwarding state. Join messages received on other
interfaces will be ignored as long as the boundary remains on the RPF interface. If the RPF interface
changes and the boundary no longer applies to the new RPF interface, join latency will be introduced
because of the delay in populating outgoing interfaces.
The boundary controls traffic based on the permit/deny configuration of the ACL associated with the
boundary command.
The following example shows a boundary on VLAN2 connected to an access layer. The boundary shown
permits the group range 239.255.0.0 and denies all other streams on the VLAN2 interface.
interface VLAN2
description To Access VLAN 2
ip multicast boundary moh
!
ip access-list standard moh
remark Permit 239.255, Deny all others
permit 239.255.0.0 0.0.255.255
deny any
Figure 8-1 shows a high-level view of how scoped boundaries can be used to restrict traffic to certain
areas of the network. The “Allowed Scopes” list the permitted streams at each location.
[Figure 8-1: Scoped boundaries restrict multicast streams to certain areas of the network. MoH (M) and IP/TV (V) streams originate in the HQ campus and cross the corporate network; the allowed scopes determine which streams reach the Left, Middle, and Right branch sites.]
The following is an example configuration for a WAN aggregation router (Frame-relay). The boundary
is placed on the serial sub-interfaces that connect to each branch office. Boundary commands could also
be configured on the VLAN/interface from the core-layer switches to the WAN aggregation routers.
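The example configuration is not included in this chunk. Reusing the serial sub-interfaces shown earlier for the WAN aggregation router, a sketch for the 128k branch might look like the following; the IP/TV group ranges would be added to the ACL in the same way and are omitted here.
interface Serial0/0/0.3 point-to-point
description To RIGHTRTR DLCI 141
ip multicast boundary wan-128k
!
ip access-list standard wan-128k
remark Permit Multicast MoH only
permit 239.192.240.0 0.0.3.255
deny any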
To assist in management of the IP multicast, routers can be enabled to send SNMP traps (with IP
multicast, MSDP, and PIM information) to the SNMP server. To enable the traps, use the following
commands:
• snmp-server enable traps ipmulticast
• snmp-server enable traps msdp
• snmp-server enable traps pim
Tip Details for each of the MIBs for IP multicast can be found at:
ftp://ftpeng.cisco.com/ipmulticast/config-notes/mib-info.txt