
Multicast Quick Start Configuration Guide

Document ID: 9356


Contents
Introduction
Prerequisites
Requirements
Components Used
Conventions
Dense Mode
Sparse Mode with one RP
Sparse Mode with Multiple RPs
AutoRP with one RP
AutoRP with Multiple RPs
DVMRP
MBGP
MSDP
Stub Multicast Routing
IGMP UDLR for Satellite Links
PIMv2 BSR
CGMP
IGMP Snooping
PGM
MRM
Troubleshooting
Related Information
Introduction
IP multicasting is a bandwidth-conserving technology that reduces traffic because it simultaneously delivers a
single stream of information to thousands of corporate recipients and homes. Applications that take advantage
of multicast include video conferencing, corporate communications, distance learning, and distribution of
software, stock quotes, and news. This document discusses the basics of how to configure multicast for
various networking scenarios.
Prerequisites
Requirements
Cisco recommends that readers of this document have basic knowledge of Internet Protocol (IP) Multicast.
Note: Refer to Internet Protocol Multicast documentation for more information.
Components Used
This document is not restricted to specific software and hardware versions.
Conventions
Refer to Cisco Technical Tips Conventions for more information on document conventions.
Dense Mode
Cisco recommends that you use Protocol Independent Multicast (PIM) sparse mode, particularly AutoRP,
where possible and especially for new deployments. However, if dense mode is desired, configure the global
command ip multicast-routing and the interface command ip pim sparse-dense-mode on each interface
that needs to process multicast traffic. The common requirement for all configurations within this document
is to configure multicasting globally and to configure PIM on the interfaces. As of Cisco IOS Software Release
11.1, you can configure the ip pim sparse-dense-mode interface command, which combines the behavior of
the ip pim dense-mode and ip pim sparse-mode interface commands. In this mode, the interface is treated
as dense mode if the group is in dense mode. If the group is in sparse mode (for example, if an RP is known),
the interface is treated as sparse mode.
Note: The "Source" in the examples throughout this document represents the source of multicast traffic, and
"Receiver" represents the receiver of multicast traffic.
Router A Configuration
ip multicast-routing
interface ethernet0
 ip address <address> <mask>
 ip pim sparse-dense-mode
interface serial0
 ip address <address> <mask>
 ip pim sparse-dense-mode
Router B Configuration
ip multicast-routing
interface serial0
 ip address <address> <mask>
 ip pim sparse-dense-mode
interface ethernet0
 ip address <address> <mask>
 ip pim sparse-dense-mode
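After both routers are configured, a quick sanity check is to confirm that the routers see each other as PIM
neighbors and that multicast state is being built. These are standard IOS show commands; Router_A is a
hypothetical prompt used only for illustration:
Router_A# show ip pim neighbor
Router_A# show ip mroute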
Sparse Mode with one RP
In this example, Router A is the RP, which is typically the closest router to the source. Static RP configuration
requires that all routers in the PIM domain have the same ip pim rp-address commands configured. You can
configure multiple RPs, but there can only be one RP per specific group.
Router A Configuration
ip multicast-routing
ip pim rp-address 1.1.1.1
interface ethernet0
 ip address <address> <mask>
 ip pim sparse-dense-mode
interface serial0
 ip address 1.1.1.1 255.255.255.0
 ip pim sparse-dense-mode
Router B Configuration
ip multicast-routing
ip pim rp-address 1.1.1.1
interface serial0
 ip address <address> <mask>
 ip pim sparse-dense-mode
interface ethernet0
 ip address <address> <mask>
 ip pim sparse-dense-mode
Sparse Mode with Multiple RPs
In this example, Source A sends to 224.1.1.1, 224.1.1.2, and 224.1.1.3. Source B sends to 224.2.2.2,
224.2.2.3, and 224.2.2.4. You could have one router, either RP 1 or RP 2, be the RP for all groups. However,
if you want different RPs to handle different groups, you need to configure all routers with the groups that
each RP serves. This type of static RP configuration requires that all routers in the PIM domain have the
same ip pim rp-address address acl commands configured. You can also use AutoRP in order to achieve
the same setup, which is easier to configure.
RP 1 Configuration
ip multicast-routing
ip pim rp-address 1.1.1.1 2
ip pim rp-address 2.2.2.2 3
access-list 2 permit 224.1.1.1
access-list 2 permit 224.1.1.2
access-list 2 permit 224.1.1.3
access-list 3 permit 224.2.2.2
access-list 3 permit 224.2.2.3
access-list 3 permit 224.2.2.4
RP 2 Configuration
ip multicast-routing
ip pim rp-address 1.1.1.1 2
ip pim rp-address 2.2.2.2 3
access-list 2 permit 224.1.1.1
access-list 2 permit 224.1.1.2
access-list 2 permit 224.1.1.3
access-list 3 permit 224.2.2.2
access-list 3 permit 224.2.2.3
access-list 3 permit 224.2.2.4
Configuration for Routers 3 and 4
ip multicast-routing
ip pim rp-address 1.1.1.1 2
ip pim rp-address 2.2.2.2 3
access-list 2 permit 224.1.1.1
access-list 2 permit 224.1.1.2
access-list 2 permit 224.1.1.3
access-list 3 permit 224.2.2.2
access-list 3 permit 224.2.2.3
access-list 3 permit 224.2.2.4
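To confirm which RP each router selects for a given group with this static configuration, check the
group-to-RP mapping cache on any router in the domain; the output lists each configured RP together with
the access list that defines its groups. Router_3 is a hypothetical prompt used only for illustration:
Router_3# show ip pim rp mapping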
AutoRP with one RP
AutoRP requires that you configure the RPs to announce their availability as RPs and mapping agents. The
RPs use 224.0.1.39 to send their announcements. The RP mapping agent listens to the announced packets
from the RPs, then sends RP-to-group mappings in a discovery message that is sent to 224.0.1.40. These
discovery messages are used by the remaining routers for their RP-to-group map. You can use one RP that
also serves as the mapping agent, or you can configure multiple RPs and multiple mapping agents for
redundancy purposes.
Note that when you choose an interface from which to source RP announcements, Cisco recommends that you
use an interface such as a loopback instead of a physical interface. It is also possible to use Switched Virtual
Interfaces (SVIs). If a VLAN interface is used to announce the RP address, the interface-type argument in
the ip pim [vrf vrf-name] send-rp-announce {interface-type interface-number | ip-address} scope
ttl-value command should contain the VLAN interface and the VLAN number. For example, the command
looks like ip pim send-rp-announce Vlan500 scope 100. If you choose a physical interface, you rely on
that interface to always be up, which is not always the case; the router stops advertising itself as the RP once
the physical interface goes down. A loopback interface, by contrast, never goes down, so the router continues
to advertise itself as the RP through any available interface, even if one or more of its physical interfaces fail.
The loopback interface must be PIM-enabled and advertised by an Interior Gateway Protocol (IGP), or it must
be reachable with static routing.
Router A Configuration
ip multicast-routing
ip pim send-rp-announce loopback0 scope 16
ip pim send-rp-discovery scope 16
interface loopback0
 ip address <address> <mask>
 ip pim sparse-dense-mode
interface ethernet0
 ip address <address> <mask>
 ip pim sparse-dense-mode
interface serial0
 ip address <address> <mask>
 ip pim sparse-dense-mode
Router B Configuration
ip multicast-routing
interface ethernet0
 ip address <address> <mask>
 ip pim sparse-dense-mode
interface serial0
 ip address <address> <mask>
 ip pim sparse-dense-mode
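Router B learns the RP automatically because the AutoRP groups 224.0.1.39 and 224.0.1.40 are flooded in
dense mode over the sparse-dense-mode interfaces. As a hedged alternative, if you prefer to run the interfaces
in pure sparse mode, later Cisco IOS releases provide the global ip pim autorp listener command for this
purpose; a minimal sketch is shown here, and you should verify support on your release before you rely on it:
ip multicast-routing
ip pim autorp listener
interface serial0
 ip address <address> <mask>
 ip pim sparse-mode
You can verify the learned mapping on Router B with the show ip pim rp mapping command.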
AutoRP with Multiple RPs
The access lists in this example allow the RPs to be an RP only for the groups you want. If no access list is
configured, the RPs are available as an RP for all groups. If two RPs announce their availability for the
same group(s), the mapping agent(s) resolve these conflicts with the "highest IP address wins" rule.
When two RPs announce for the same group, you can configure each router with a loopback address in order
to influence which router becomes the RP for that group. Place the higher IP address on the preferred RP, then
use the loopback interface as the source of the announce packets; for example, ip pim send-rp-announce
loopback0. When multiple mapping agents are used, they each advertise the same group-to-RP mappings to
the 224.0.1.40 discovery group.
RP 1 Configuration
ip multicast-routing
interface loopback0
 ip address <address> <mask>
 ip pim sparse-dense-mode
ip pim send-rp-announce loopback0 scope 16 group-list 1
ip pim send-rp-discovery scope 16
access-list 1 permit 239.0.0.0 0.255.255.255
RP 2 Configuration
ip multicast-routing
interface loopback0
 ip address <address> <mask>
 ip pim sparse-dense-mode
ip pim send-rp-announce loopback0 scope 16 group-list 1
ip pim send-rp-discovery scope 16
access-list 1 deny 239.0.0.0 0.255.255.255
access-list 1 permit 224.0.0.0 15.255.255.255
Refer to Guide to AutoRP Configuration and Diagnostics for more information on AutoRP.
DVMRP
Your Internet service provider (ISP) could suggest that you create a Distance Vector Multicast Routing
Protocol (DVMRP) tunnel to the ISP in order to gain access to the multicast backbone in the Internet (mbone).
The minimum commands in order to configure a DVMRP tunnel are shown here:
interface tunnel0
 ip unnumbered <any pim interface>
 tunnel source <address of source>
 tunnel destination <address of ISP's mrouted box>
 tunnel mode dvmrp
 ip pim sparse-dense-mode
Typically, the ISP has you tunnel to a UNIX machine running "mrouted" (DVMRP). If the ISP has you tunnel
to another Cisco device instead, use the default GRE tunnel mode.
If you want to generate multicast packets for others on the mbone to see, rather than only receive multicast
packets, you need to advertise the source subnets. If your multicast source host address is 131.108.1.1, you
need to advertise the existence of that subnet to the mbone. Directly connected networks are advertised with
metric 1 by default. If your source is not directly connected to the router with the DVMRP tunnel, configure
this under interface tunnel0:
 ip dvmrp metric 1 list 3
access-list 3 permit 131.108.1.0 0.0.0.255
Note: You must include an access list with this command in order to prevent advertising the entire Unicast
routing table to the mbone.
If your setup is similar to the one shown here, and you want to propagate DVMRP routes through the domain,
configure the ip dvmrp unicast-routing command on the serial0 interfaces of Routers A and B, as sketched
below. This action enables the forwarding of DVMRP routes to PIM neighbors, which then have a DVMRP
routing table that is used for Reverse Path Forwarding (RPF). DVMRP-learned routes take RPF precedence
over all other protocols, except for directly connected routes.
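Based on that description, a sketch of what the serial0 configuration on Router A might look like is shown
here; the address is a placeholder, and the same interface command would also be applied on Router B:
interface serial0
 ip address <address> <mask>
 ip pim sparse-dense-mode
 ip dvmrp unicast-routing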
MBGP
Multiprotocol Border Gateway Protocol (MBGP) is a basic method to carry two sets of routes: one set for
unicast routing and one set for multicast routing. MBGP provides the control necessary to decide where
multicast packets are allowed to flow. PIM uses the routes associated with multicast routing in order to build
data distribution trees. MBGP provides the RPF path information, but it does not build multicast state; PIM is
still needed in order to forward the multicast packets.
Router A Configuration
ip multicast-routing
interface loopback0
 ip pim sparse-dense-mode
 ip address 192.168.2.2 255.255.255.0
interface serial0
 ip address 192.168.100.1 255.255.255.0
interface serial1
 ip pim sparse-dense-mode
 ip address 192.168.200.1 255.255.255.0
router bgp 123
 network 192.168.100.0 nlri unicast
 network 192.168.200.0 nlri multicast
 neighbor 192.168.1.1 remote-as 321 nlri unicast multicast
 neighbor 192.168.1.1 ebgp-multihop 255
 neighbor 192.168.100.2 update-source loopback0
 neighbor 192.168.1.1 route-map setNH out
route-map setNH permit 10
 match nlri multicast
 set ip next-hop 192.168.200.1
route-map setNH permit 20
Router B Configuration
ip multicast-routing
interface loopback0
 ip pim sparse-dense-mode
 ip address 192.168.1.1 255.255.255.0
interface serial0
 ip address 192.168.100.2 255.255.255.0
interface serial1
 ip pim sparse-dense-mode
 ip address 192.168.200.2 255.255.255.0
router bgp 321
 network 192.168.100.0 nlri unicast
 network 192.168.200.0 nlri multicast
 neighbor 192.168.2.2 remote-as 123 nlri unicast multicast
 neighbor 192.168.2.2 ebgp-multihop 255
 neighbor 192.168.100.1 update-source loopback0
 neighbor 192.168.2.2 route-map setNH out
route-map setNH permit 10
 match nlri multicast
 set ip next-hop 192.168.200.2
route-map setNH permit 20
If your unicast and multicast topologies are congruent (for example, they run over the same links), the
primary difference in the configuration is the nlri unicast multicast keyword. An example is shown
here:
network 192.168.100.0 nlri unicast multicast
Congruent topologies with MBGP have a benefit: even though the traffic traverses the same paths, different
policies can be applied to unicast BGP versus multicast BGP.
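The nlri keyword reflects the older MBGP syntax. On later Cisco IOS releases, the same separation of unicast
and multicast routes is normally expressed with address families. This is a minimal sketch for Router A that
reuses the AS numbers and networks from the example above, so verify the exact syntax against your release:
router bgp 123
 neighbor 192.168.1.1 remote-as 321
 address-family ipv4 multicast
  neighbor 192.168.1.1 activate
  network 192.168.200.0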
Refer to What is MBGP? for more information about MBGP.
MSDP
Multicast Source Discovery Protocol (MSDP) connects multiple PIM-SM domains. Each PIM-SM domain
uses its own independent RP(s) and does not have to depend on RPs in other domains. MSDP allows domains
to discover multicast sources from other domains. If you also have a BGP peering with the MSDP peer, you
must use the same IP address for MSDP as you use for BGP. When MSDP performs peer RPF checks, it
expects the MSDP peer address to be the same address that BGP/MBGP returns when it performs a route
table lookup on the RP in the SA message. However, you are not required to run BGP/MBGP with the MSDP
peer, provided there is a BGP/MBGP path between the MSDP peers. If there is no BGP/MBGP path and more
than one MSDP peer, you must use the ip msdp default-peer command (a sketch follows the configurations
below). In this example, Router A is the RP for its domain and Router B is the RP for its own domain.
Router A Configuration
ip multicast-routing
ip pim send-rp-announce loopback0 scope 16
ip pim send-rp-discovery scope 16
ip msdp peer 192.168.100.2
ip msdp sa-request 192.168.100.2
interface loopback0
 ip address <address> <mask>
 ip pim sparse-dense-mode
interface serial0
 ip address 192.168.100.1 255.255.255.0
 ip pim sparse-dense-mode
Router B Configuration
ip multicast-routing
ip pim send-rp-announce loopback0 scope 16
ip pim send-rp-discovery scope 16
ip msdp peer 192.168.100.1
ip msdp sa-request 192.168.100.1
interface loopback0
 ip address <address> <mask>
 ip pim sparse-dense-mode
interface serial0
 ip address 192.168.100.2 255.255.255.0
 ip pim sparse-dense-mode
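If there is no BGP/MBGP path with which to validate the peer, the ip msdp default-peer command
mentioned earlier tells the router to accept SA messages from that peer without an RPF lookup. A minimal
sketch for Router A, reusing the peer address from the example above, is shown here; an optional prefix list
can restrict which RP addresses are accepted from that peer:
ip msdp peer 192.168.100.2
ip msdp default-peer 192.168.100.2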
Stub Multicast Routing
Stub multicast routing allows you to configure remote/stub routers as IGMP proxy agents. Rather than fully
participating in PIM, these stub routers forward IGMP messages from the hosts to the upstream multicast
router.
Router 1 Configuration
int s0
 ip pim sparse-dense-mode
 ip pim neighbor-filter 1
access-list 1 deny 140.1.1.1
The ip pim neighbor-filter command is needed so that Router 1 does not recognize Router 2 as a PIM
neighbor. If you configure Router 1 in sparse mode, the neighbor filter is unnecessary. Router 2 must not run
in sparse mode; when in dense mode, the stub multicast sources can flood to the backbone routers.
Router 2 Configuration
ip multicast-routing
int e0
 ip pim sparse-dense-mode
 ip igmp helper-address 140.1.1.2
int s0
 ip pim sparse-dense-mode
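To verify that the stub router relays host memberships upstream, you can check the IGMP group cache on
Router 1; groups joined by hosts behind Router 2 should appear against the interface that receives the
helpered reports. Router_1 is a hypothetical prompt used only for illustration:
Router_1# show ip igmp groups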
IGMP UDLR for Satellite Links
Unidirectional Link Routing (UDLR) provides a method for forwarding multicast packets over a
unidirectional satellite link to stub networks that have a back channel. This is similar to stub multicast routing.
Without this feature, the uplink router is not able to dynamically learn which IP multicast group addresses to
forward over the unidirectional link, because the downlink router cannot send anything back.
Uplink-rtr Configuration
ip multicast-routing
interface Ethernet0
 description Typical IP multicast enabled interface
 ip address 12.0.0.1 255.0.0.0
 ip pim sparse-dense-mode
interface Ethernet1
 description Back channel which has connectivity to downlink-rtr
 ip address 11.0.0.1 255.0.0.0
 ip pim sparse-dense-mode
interface Serial0
 description Unidirectional to downlink-rtr
 ip address 10.0.0.1 255.0.0.0
 ip pim sparse-dense-mode
 ip igmp unidirectional-link
 no keepalive
Downlink-rtr Configuration
ip multicast-routing
interface Ethernet0
 description Typical IP multicast enabled interface
 ip address 14.0.0.2 255.0.0.0
 ip pim sparse-dense-mode
 ip igmp helper-address udl serial0
interface Ethernet1
 description Back channel which has connectivity to uplink-rtr
 ip address 13.0.0.2 255.0.0.0
 ip pim sparse-dense-mode
interface Serial0
 description Unidirectional to uplink-rtr
 ip address 10.0.0.2 255.0.0.0
 ip pim sparse-dense-mode
 ip igmp unidirectional-link
 no keepalive
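To verify operation over the unidirectional link, you can check the IGMP state learned through the UDL on
the uplink router. These commands are a sketch; the UDLR-specific command is available only on releases
that support the feature, and uplink-rtr is a hypothetical prompt:
uplink-rtr# show ip igmp groups
uplink-rtr# show ip igmp udlr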
PIMv2 BSR
If all routers in the network are running PIMv2, you can configure a BSR instead of AutoRP. BSR and
AutoRP are very similar. A BSR configuration requires that you configure candidate RPs (similar to the
RP-Announce function in AutoRP) and candidate BSRs (similar to AutoRP mapping agents). In order to
configure a BSR, follow these steps:
1. On the candidate BSRs, configure:
ip pim bsr-candidate interface hash-mask-len pref
where interface contains the candidate BSR's IP address. It is recommended (but not required) that
hash-mask-len be identical across all candidate BSRs. The candidate BSR with the largest pref value is
elected as the BSR for the domain.
An example of command usage is shown here:
ip pim bsr-candidate ethernet0 30 4
The PIMv2 BSR collects candidate RP information and disseminates RP-set information associated with
each group prefix. In order to avoid a single point of failure, you can configure more than one router in a
domain as a candidate BSR.
A BSR is elected among the candidate BSRs automatically, based on the configured preference values. In
order to serve as candidate BSRs, the routers must be connected and be in the backbone of the network,
instead of in the dialup area of the network.
2. Configure the candidate RP routers. This example shows a candidate RP, on the interface ethernet0, for
the entire admin-scope address range:
access-list 11 permit 239.0.0.0 0.255.255.255
ip pim rp-candidate ethernet0 group-list 11
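To verify the BSR election and the resulting group-to-RP mappings, you can use these standard show
commands on any PIMv2 router in the domain:
show ip pim bsr-router
show ip pim rp mapping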
CGMP
In order to configure Cisco Group Management Protocol (CGMP), configure this on the router interface that
faces the switch:
ip pim sparse-dense-mode
ip cgmp
Then, configure this on the switch:
set cgmp enable
IGMP Snooping
Internet Group Management Protocol (IGMP) snooping is available with release 4.1 of the Catalyst 5000
software and requires a Supervisor III card. No configuration other than PIM is necessary on the router in
order to use IGMP snooping; a router is still required with IGMP snooping to provide the IGMP querying.
The example provided here shows how to enable IGMP snooping on the switch:
Console> (enable) set igmp enable
IGMP Snooping is enabled.
CGMP is disabled.
If you try to enable IGMP, but CGMP is already enabled, you see the following:
Console> (enable) set igmp enable
Disable CGMP to enable IGMP Snooping feature.
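Once snooping is enabled, you can check which ports the switch has associated with a multicast group and
where it sees the multicast router (the IGMP querier). These CatOS commands are shown as a sketch; output
and availability vary by release:
Console> (enable) show multicast group
Console> (enable) show multicast router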
PGM
Pragmatic General Multicast (PGM) is a reliable multicast transport protocol for applications that require
ordered, duplicate-free multicast data delivery from multiple sources to multiple receivers. PGM guarantees
that a receiver in the group either receives all data packets from transmissions and retransmissions or can
detect unrecoverable data packet loss.
There are no PGM global commands. PGM is configured per interface with the ip pgm command. You must
enable multicast routing on the router, and PIM must be configured on the interface; a sketch follows.
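As a minimal sketch, assuming the intent is to enable PGM Router Assist (the interface form of the command
in that case is ip pgm router), the configuration looks like this:
ip multicast-routing
interface ethernet0
 ip address <address> <mask>
 ip pim sparse-dense-mode
 ip pgm router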
MRM
Multicast Routing Monitor (MRM) facilitates automated fault detection in a large multicast routing
infrastructure. MRM is designed to alert a network administrator of multicast routing problems in near real
time.
MRM has two components: the MRM tester and the MRM manager. An MRM tester is either a test sender or a test receiver.
MRM is available in Cisco IOS Software Release 12.0(5)T and later. Only the MRM testers and managers
need to be running an MRM-supported Cisco IOS version.
Test Sender Configuration
interface Ethernet0
 ip mrm test-sender
Test Receiver Configuration
interface Ethernet0
 ip mrm test-receiver
Test Manager Configuration
ip mrm manager test1
 manager e0 group 239.1.1.1
 senders 1
 receivers 2 sender-list 1
access-list 1 permit 10.1.1.2
access-list 2 permit 10.1.4.2
Output from the show ip mrm manager command on Test Manager is shown here:
Test_Manager# show ip mrm manager
Manager:test1/10.1.2.2 is not running
Beacon interval/holdtime/ttl:60/86400/32
Group:239.1.1.1, UDP port test-packet/status-report:16384/65535
Test sender:
10.1.1.2
Test receiver:
10.1.4.2
Start the test with the command shown here. The test manager sends control messages to the test sender and
the test receiver as configured in the test parameters. The test receiver joins the group and monitors test
packets sent from the test sender.
Test_Manager# mrm start test1
*Feb 4 10:29:51.798: IP MRM test test1 starts ......
Test_Manager#
In order to display a status report for the test manager, enter this command:
Test_Manager# show ip mrm status
IP MRM status report cache:
Timestamp         Manager    Test Receiver   Pkt Loss/Dup (%)   Ehsr
*Feb 4 14:12:46   10.1.2.2   10.1.4.2        1 (4%)             29
*Feb 4 18:29:54   10.1.2.2   10.1.4.2        1 (4%)             15
Test_Manager#
The output shows that the receiver sent two status reports (one line each) at a given time stamp. Each report
contains one packet loss during the interval window (default of one second). The "Ehsr" value shows the
estimated next sequence number value from the test sender. If the test receiver sees duplicate packets, it shows
a negative number in the "Pkt Loss/Dup" column.
In order to stop the test, enter this command:
Test_Manager# mrm stop test1
*Feb 4 10:30:12.018: IP MRM test test1 stops
Test_Manager#
While running the test, the MRM sender starts sending RTP packets to the configured group address at the
default interval of 200 ms. The receiver monitors (expects) the same packets at the same default interval. If
the receiver detects a packet loss in the default window interval of five seconds, it sends a report to the MRM
manager. You can display the status report from the receiver if you issue the show ip mrm status command
on the manager.
Troubleshooting
One of the most common problems found when you implement IP multicast in a network is a router that does
not forward multicast traffic because of either an RPF failure or TTL settings; a quick RPF check is sketched
below. Refer to the IP Multicast Troubleshooting Guide for a detailed discussion of these and other common
problems, symptoms, and resolutions.
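A quick RPF check asks the router which interface it would use to reach a given source, which you can then
compare against the interface on which the multicast traffic actually arrives; the source address below is a
placeholder for your own source:
show ip rpf 10.1.1.1
show ip mroute count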
Related Information
IP Multicast Troubleshooting Guide
Basic Multicast Troubleshooting Tools
TCP/IP Multicast Support Page
Technical Support Cisco Systems
Updated: Aug 30, 2005 Document ID: 9356
