Triple Play Multicast
In This Chapter
This chapter provides information about Triple Play multicast aspects, including a configuration
process overview and implementation notes.
Introduction to Multicast
IP multicast provides an effective method of many-to-many communication. Delivering unicast
datagrams is fairly simple. Normally, IP packets are sent from a single source to a single recipient.
The source inserts the address of the target host in the destination field of the IP header of a
datagram, and intermediate routers (if present) simply forward the datagram towards the target in
accordance with their respective routing tables.
Multicast sources can send a single copy of data using a single address for the entire group of
recipients. The routers between the source and recipients route the data using the group address
route. Multicast packets are delivered to a multicast group. A multicast group specifies a set of
recipients who are interested in a particular data stream and is represented by an IP address from a
specified range. Data addressed to the IP address is forwarded to the members of the group. A
source host sends data to a multicast group by specifying the multicast group address in the
datagram's destination IP address. A source does not have to register in order to send data to a
group, nor does it need to be a member of the group.
Routers and Layer 3 switches use the Internet Group Management Protocol (IGMP) to manage
membership for a multicast session. When a host wants to receive one or more multicast sessions,
it sends a join message for each multicast group it wants to join. When a host wants to leave a
multicast group, it sends a leave message.
• Internet Group Management Protocol (Internet Group Management Protocol on page 696)
• Source Specific Multicast Groups (Internet Group Management Protocol on page 696)
• Protocol Independent Multicast (Sparse Mode) (PIM-SM on page 698)
A multicast group membership means that at least one member of a multicast group is present on a
given attached network; it is not a list of all of the members. With respect to each of its attached
networks, a multicast router can assume one of two roles, querier or non-querier. There is normally
only one querier per physical network.
A querier issues two types of queries, a general query and a group-specific query. General queries
are issued to solicit membership information with regard to any multicast group. Group-specific
queries are issued when a router receives a leave message from the node it perceives as the last
group member remaining on that network segment.
Hosts wanting to receive a multicast session issue a multicast group membership report. These
reports must be sent to all multicast enabled routers.
Version 1 — Specified in RFC-1112, Host extensions for IP Multicasting, was the first widely
deployed version and the first version to become an Internet standard.
Version 2 — Specified in RFC-2236, Internet Group Management Protocol, added support for low
leave latency, that is, a reduction in the time it takes for a multicast router to learn that there are no
longer any members of a particular group present on an attached network.
Version 3 — Specified in RFC-3376, Internet Group Management Protocol, adds support for
source filtering, that is, the ability for a system to report interest in receiving packets only from
specific source addresses, as required to support Source-Specific Multicast (see Source Specific
Multicast (SSM)), or from all but specific source addresses, sent to a particular multicast address.
IGMPv3 must keep state per group per attached network. This group state consists of a filter-
mode, a list of sources, and various timers. For each attached network running IGMP, a multicast
router records the desired reception state for that network.
IGMP version 3 specifies that if at any point a router receives an older version query message on
an interface, it must immediately switch into a compatibility mode with that earlier version.
Since none of the previous versions of IGMP are source aware, should this occur and the interface
switch to Version 1 or 2 compatibility mode, any previously learned group memberships with
specific sources (learned from the IGMPv3 specific INCLUDE or EXCLUDE mechanisms)
MUST be converted to non-source specific group memberships. The routing protocol will then
treat this as if there is no EXCLUDE definition present.
The range of multicast addresses from 232.0.0.0 to 232.255.255.255 is currently set aside for
source-specific multicast in IPv4. For groups in this range, receivers should only issue source-
specific IGMPv3 joins. If a PIM router receives a non-source-specific join for a group in this
range, it should ignore it.
An Alcatel-Lucent PIM router must silently ignore a received (*, G) PIM join message where G is
a multicast group address from the multicast address group range that has been explicitly
configured for SSM. This occurrence should generate an event. If configured, the IGMPv2 request
can be translated into IGMPv3. The SR allows for the conversion of an IGMPv2 (*,G) request into
an IGMPv3 (S,G) request based on manual entries. A maximum of 32 SSM ranges is supported.
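The conversion amounts to a lookup of the group address against the configured SSM translation entries. The following is a minimal Python sketch of that lookup; the entry format, addresses, and names are illustrative assumptions and do not represent SR OS configuration syntax.

    import ipaddress

    # Hypothetical SSM translation table: group range -> source address.
    # Up to 32 such ranges are supported, per the text above.
    SSM_TRANSLATE = [
        (ipaddress.ip_network("232.1.1.0/24"), "10.0.0.1"),
    ]

    def translate_igmpv2_join(group):
        """Return an (S,G) pair for an IGMPv2 (*,G) join if a translation entry matches."""
        g = ipaddress.ip_address(group)
        for net, source in SSM_TRANSLATE:
            if g in net:
                return (source, group)        # forward as an IGMPv3 (S,G) join
        return (None, group)                  # no translation; the request stays (*,G)

    print(translate_igmpv2_join("232.1.1.5"))  # ('10.0.0.1', '232.1.1.5')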
IGMPv3 also permits a receiver to join a group and specify that it only wants to receive traffic for
a group if that traffic does not come from a specific source or sources. In this case, the DR will
perform a (*,G) join as normal, but can combine this with a prune for each of the sources the
receiver does not wish to receive.
PIM-SM uses the unicast routing table to perform the Reverse Path Forwarding (RPF) check
function instead of building up a completely independent multicast routing table.
PIM-SM only forwards data to network segments with active receivers that have explicitly
requested the multicast group. PIM-SM in the ASM model initially uses a shared tree to distribute
information about active sources. Depending on the configuration options, the traffic can remain
on the shared tree or switch over to an optimized source distribution tree. As multicast traffic starts
to flow down the shared tree, routers along the path determine if there is a better path to the source.
If a more direct path exists, then the router closest to the receiver sends a join message toward the
source and then reroutes the traffic along this path.
Two policies define how each path should be managed (the bandwidth policy) and how multicast
channels compete for the available bandwidth (the multicast information policy).
Two parameters control the way multicast traffic traverses the line card.
Chassis multicast planes should not be confused with IOM/IMM multicast paths. The IOM/IMM
uses multicast paths to reach multicast planes on the switch fabric. An IOM/IMM may have fewer or
more multicast paths than the number of multicast planes available in the chassis.
Each IOM/IMM multicast path is either a primary or secondary path type. The path type indicates
the multicast scheduling priority within the switch fabric. Multicast flows sent on primary paths
are scheduled at multicast high priority while secondary paths are associated with multicast low
priority.
The system determines the number of primary and secondary paths from each IOM/IMM
forwarding plane and distributes them as equally as possible between the available switch fabric
multicast planes. Each multicast plane may terminate multiple paths of both the primary and
secondary types.
The system ingress multicast management module evaluates the ingress multicast flows from each
ingress forwarding plane and determines the best multicast path for the flow. A particular path
may be used until the terminating multicast plane is “maxed” out (based on the rate limit defined
in the per-mcast-plane-capacity commands) at which time either flows are moved to other paths
or potentially blackholed (flows with the lowest preference are dropped first). In this way, the
system makes the best use of the available multicast capacity without congesting individual
multicast planes.
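The kind of decision the ingress multicast management module makes can be sketched as follows: place each flow on a path whose terminating plane still has capacity, and drop the lowest-preference flows when no capacity remains. This is an illustrative Python model only; the flow names, rates, preferences, and the greedy strategy are assumptions, not the actual algorithm.

    # Illustrative model: assign multicast flows to switch-fabric multicast planes.
    # plane_capacity_kbps mirrors the role of the per-mcast-plane-capacity rate limit.
    def place_flows(flows, plane_capacity_kbps, num_planes):
        """flows: list of (name, rate_kbps, preference); higher preference wins."""
        usage = [0] * num_planes
        placement, blackholed = {}, []
        # Serve higher-preference flows first so lowest-preference flows are dropped first.
        for name, rate, pref in sorted(flows, key=lambda f: -f[2]):
            # Pick the least-loaded plane that can still absorb the flow.
            plane = min(range(num_planes), key=lambda p: usage[p])
            if usage[plane] + rate <= plane_capacity_kbps:
                usage[plane] += rate
                placement[name] = plane
            else:
                blackholed.append(name)
        return placement, blackholed

    flows = [("news", 4000, 5), ("sports", 8000, 7), ("shopping", 8000, 1)]
    print(place_flows(flows, plane_capacity_kbps=10000, num_planes=2))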
The switch fabric is simultaneously handling both unicast and multicast flows. The switch fabric
uses a weighted scheduling scheme between multicast high, unicast high, multicast low and
unicast low when deciding which cell to forward to the egress forwarding plane next. The
weighted mechanism allows some amount of unicast and lower priority multicast (secondary) to
drain on the egress switch fabric links used by each multicast plane. The amount is variable, based
on the number of switch fabric planes available and on the amount of traffic attempting to use the
fabric planes. The per-mcast-plane-capacity commands allow the amount of managed multicast
traffic to be tuned to compensate for the expected available egress multicast bandwidth per
multicast plane. In conditions where it is highly desirable to prevent multicast plane congestion,
the per-mcast-plane-capacity commands should be used to compensate for the non-multicast or
secondary multicast switch fabric traffic.
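A weighted selection among the four traffic classes can be modelled as a weighted round robin, as in the Python sketch below. The weights, queue contents, and the round-robin form are purely illustrative assumptions used to show how lower-priority (secondary) multicast and unicast still drain alongside high-priority multicast; the real switch-fabric weights are platform internals.

    from collections import deque

    # Assumed example weights, not the actual switch-fabric values.
    WEIGHTS = {"mcast-high": 8, "ucast-high": 4, "mcast-low": 2, "ucast-low": 1}

    def weighted_schedule(queues, rounds=3):
        """Serve up to `weight` cells from each class per round (weighted round robin)."""
        order = []
        for _ in range(rounds):
            for cls, weight in WEIGHTS.items():
                for _ in range(weight):
                    if queues[cls]:
                        order.append(queues[cls].popleft())
        return order

    queues = {cls: deque(f"{cls}-{i}" for i in range(5)) for cls in WEIGHTS}
    print(weighted_schedule(queues)[:10])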
IGMP Snooping
For most Layer 2 switches, multicast traffic is treated like an unknown MAC address or broadcast
frame, which causes the incoming frame to be flooded out (broadcast) on every port within a
VLAN. While this is acceptable behavior for unknowns and broadcasts, as IP Multicast hosts may
join and be interested in only specific multicast groups, all this flooded traffic results in wasted
bandwidth on network segments and end stations.
IGMP snooping entails using information in layer 3 protocol headers of multicast control
messages to determine the processing at layer 2. By doing so, an IGMP snooping switch provides
the benefit of conserving bandwidth on those segments of the network where no node has
expressed interest in receiving packets addressed to the group address.
On the Alcatel-Lucent 7750 SR, IGMP snooping can be enabled in the context of VPLS services.
The IGMP snooping feature allows for optimization of the multicast data flow for a group within a
service to only those Service Access Points (SAPs) and Service Distribution Points (SDPs) that
are members of the group. In fact, the Alcatel-Lucent 7750 SR implementation performs more
than pure snooping of IGMP data, since it also summarizes upstream IGMP reports and responds
to downstream queries.
• A port database on each SAP and SDP lists the multicast groups that are active on this
SAP or SDP.
• All port databases are compiled into a central proxy database. Towards the multicast
routers, summarized group membership reports are sent based on the information in the
proxy database.
• The information in the different port databases is also used to compile the multicast
forwarding information base (MFIB). This contains the active SAPs and SDPs for every
combination of source router and group address (S,G), and is used for the actual multicast
replication and forwarding.
When the router receives a join report from a host for a particular multicast group, it adds the
group to the port database and (if it is a new group) to the proxy database. It also adds the SAP or
SDP to existing (S,G) in the MFIB, or builds a new MFIB entry.
When the router receives a leave report from a host, it first checks if other devices on the SAP or
SDP still want to receive the group (unless fast leave is enabled). Then it removes the group from
the port database, and from the proxy database if it was the only receiver of the group. The router
also deletes entries if it does not receive periodic membership confirmations from the hosts.
The fast leave feature finds its use in multicast TV delivery systems, for example. Fast leave
speeds up the membership leave process by terminating the multicast session immediately, rather
than following the standard procedure of issuing a group-specific query to check whether other group
members are present on the SAP or SDP.
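The interaction between the per-SAP port databases, the central proxy database, and the MFIB described above can be sketched as a simplified Python model. The data structures, method names, and the fast-leave shortcut below are illustrative assumptions, not the actual implementation.

    from collections import defaultdict

    class IgmpSnooper:
        """Simplified model of per-SAP port databases, the proxy database, and the MFIB."""

        def __init__(self, fast_leave=False):
            self.port_db = defaultdict(set)    # sap -> groups active on that SAP
            self.proxy_db = defaultdict(set)   # group -> SAPs (summarized upstream view)
            self.mfib = defaultdict(set)       # (source, group) -> SAPs to replicate to
            self.fast_leave = fast_leave

        def join(self, sap, source, group):
            new_group = group not in self.proxy_db       # new to the whole service?
            self.port_db[sap].add(group)
            self.proxy_db[group].add(sap)
            self.mfib[(source, group)].add(sap)          # add SAP to existing or new entry
            return "send upstream report" if new_group else "no upstream report"

        def leave(self, sap, source, group, other_receivers_on_sap=False):
            # Without fast leave, a group-specific query checks for other receivers first.
            if not self.fast_leave and other_receivers_on_sap:
                return "keep group on SAP"
            self.port_db[sap].discard(group)
            self.mfib[(source, group)].discard(sap)
            self.proxy_db[group].discard(sap)
            if not self.proxy_db[group]:                 # last receiver in the service
                del self.proxy_db[group]
                return "send upstream leave"
            return "group removed from SAP only"

    s = IgmpSnooper()
    print(s.join("sap-1", "10.0.0.1", "239.1.1.1"))   # send upstream report
    print(s.join("sap-2", "10.0.0.1", "239.1.1.1"))   # no upstream report
    print(s.leave("sap-1", "10.0.0.1", "239.1.1.1"))  # group removed from SAP only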
[Figure: IGMP snooping message flows between hosts, the 7750 SR proxy database, and the upstream multicast routers (MR) for scenarios A through E: a host joining a channel not yet received, a host joining a channel already received, periodic refresh by query, and a host leaving a channel with and without other hosts on the same channel.]
Scenario A: A host joins a multicast group (TV channel) which is not yet being received by other
hosts on the router, and thus is not yet present in the proxy database. The 7750 SR adds the group
to the proxy database and sends a new IGMP Join group-specific membership report upstream to
the multicast router.
Scenario B: A host joins a channel which is already being received by one or more hosts on the
7750 SR, and thus is already present in the proxy database. No upstream IGMP report is generated
by the router.
Scenario C: The multicast router will periodically send IGMP queries to the router, requesting it to
respond with generic membership reports. Upon receiving such a query, the 7750 SR will compile
a report from its proxy database and send it back to the multicast router.
In addition, the router will flood the received IGMP query to all hosts (on SAPs and spoke SDPs),
and will update its proxy database based on the membership reports received back.
Scenario D: A host leaves a channel by sending an IGMP leave message. If fast-leave is not
enabled, the router will first check whether there are other hosts on the same SAP or spoke SDP by
sending a query. If no other host responds, the 7750 SR removes the channel from the SAP. In
addition, if there are no other SAPs or spoke SDPs with hosts subscribing to the same channel, the
channel is removed from the proxy database and an IGMP leave report is sent to the upstream
Multicast Router.
Scenario E: A host leaves a channel by sending an IGMP leave message. If fast-leave is not
enabled, the router will check whether there are other hosts on the same SAP or spoke SDP by
sending a query. Another device on the same SAP or spoke SDP still wishes to receive the channel
and responds with a membership report. Thus the 7750 SR does not remove the channel from the
SAP.
Scenario F: A host leaves a channel by sending an IGMP leave report. Fast-leave is enabled, so the
7750 SR will not check whether there are other hosts on the same SAP or spoke SDP but
immediately removes the group from the SAP. In addition, if there are no other SAPs or spoke
SDPs with hosts subscribing to the same group, the group is removed from the proxy database and
an IGMP leave report is sent to the upstream multicast router.
IGMP Filtering
A provider may want to block receive or transmit permission to individual hosts or a range of
hosts. To this end, the Alcatel-Lucent 7750 SR supports IGMP filtering. Two types of filter can be
defined:
• Filter IGMP membership reports from a particular host or range of hosts. This is
performed by importing an appropriately defined routing policy into the SAP or spoke
SDP.
• Filter to prevent a host from transmitting multicast streams into the network. The operator
can define a data-plane filter (ACL) which drops all multicast traffic, and apply this filter
to a SAP or spoke SDP.
Multicast VPLS Registration (MVR)
MVR assumes that subscribers join and leave multicast streams by sending IGMP join and leave
messages. The IGMP join and leave messages are sent inside the VPLS to which the subscriber port
is assigned. The multicast VPLS is shared in the network while the subscribers remain in separate
VPLS services. Using MVR, users on different VPLS services cannot exchange any information
between them, but multicast services are still provided to them.
On the MVR VPLS, IGMP snooping must be enabled. On the user VPLS, IGMP snooping and
MVR work independently. If IGMP snooping and MVR are both enabled, MVR reacts only to join
and leave messages from multicast groups configured under MVR. Join and leave messages from
all other multicast groups are managed by IGMP snooping in the local VPLS. This way,
potentially several MVR VPLS instances could be configured, each with its own set of multicast
channels.
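The decision whether a join is handled by MVR or by local IGMP snooping can be modelled as a membership test against the group ranges configured under MVR, as in the sketch below. The group range and the VPLS names are hypothetical values chosen for illustration.

    import ipaddress

    # Hypothetical MVR channel range and service names.
    MVR_GROUPS = ipaddress.ip_network("239.10.0.0/16")
    MVR_VPLS, USER_VPLS = "vpls-mcast", "vpls-user-7"

    def handle_join(group):
        """Return which VPLS instance serves the join for the given group."""
        if ipaddress.ip_address(group) in MVR_GROUPS:
            return f"replicate from {MVR_VPLS} (MVR)"
        return f"handled by IGMP snooping in {USER_VPLS}"

    print(handle_join("239.10.1.1"))   # MVR channel
    print(handle_join("239.200.1.1"))  # local channel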
MVR by proxy — In some situations, the multicast traffic should not be copied from the MVR
VPLS to the SAP on which the IGMP message was received (standard MVR behavior) but to
another SAP. This is called MVR by proxy.
[Figure: Normal MVR and MVR by proxy on ALA-1. With normal MVR, video traffic from the shared multicast VPLS (VPLS y, reached over spoke SDPs) is copied to the SAP in the user VPLS (VPLS x) on which the IGMP join was received; with MVR by proxy, the copy is sent to a different SAP than the one that received the join. Legend: T = telephone (voice), V = video, D = data. (OSSG067)]
When implementing this feature, there are several considerations. When multicast load balancing
is not configured, the distribution remains as is. Multicast load balancing is based on the number
of “s,g” groups. This means that bandwidth considerations are not taken into account. The
multicast groups are distributed over the available links as joins are processed. When a link failure
occurs, the load that was on the failed link is distributed over the remaining links so that multicast
groups remain evenly distributed over the remaining links. When a link is added (or a failed link is
restored), all new multicast joins are allocated to the added link(s) until a balance is achieved.
When multicast load balancing is configured but the channels are not found in the multicast-info-
policy, multicast load balancing is based on the number of “s,g” groups. This means that
bandwidth considerations are not taken into account. The multicast groups are distributed over the
available links as joins are processed and remain evenly distributed over those links. When a link
failure occurs, the load that was on the failed link is distributed over the remaining links. When a
link is added (or a failed link is restored), all new multicast joins are allocated to the added link(s)
until a balance is achieved.
A manual redistribute command enables the operator to re-evaluate the current balance and, if
required, move channels to different links to achieve a balance. A timed redistribute parameter
allows the system to automatically, at regular intervals, redistribute multicast groups over available
links. If no links have been added or removed from the ECMP/LAG interface, then no
redistribution is attempted.
When multicast load balancing is configured, multicast groups are distributed over the available
links as joins are processed based on bandwidth configured for the specified group address. If the
bandwidth is not configured for the multicast stream then the configured default value is used.
If link failure occurs, the load is distributed on the failed channel to the remaining channels. The
bandwidth required over each individual link is evenly distributed over the remaining links.
When an additional link is available for a given multicast stream, then it is considered in all
multicast stream additions applied to the interface. This multicast stream is included in the next
scheduled automatic rebalance run. A rebalance run re-evaluates the current balance with regard
to the bandwidth utilization and if required, move multicast streams to different links to achieve a
balance.
By default multicast load balancing over ECMP links is enabled and set at 30 minutes.
The rebalance process can be executed as a low priority background task while control of the
console is returned to the operator. When multicast load rebalancing is not enabled, ECMP
changes are not optimized; however, when a link is added, an attempt is made to balance
the number of multicast streams on the available ECMP links. This may not result in
balanced utilization of the ECMP links.
Only a single mc-ecmp-rebalance command can be executed at any given time. If a rebalance is
in progress and the command is entered again, it is rejected with a message saying that a rebalance
is already in progress. A low priority event is generated when an actual change for a given multicast
stream occurs as a result of the rebalance process.
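A bandwidth-aware redistribution pass of the kind described above can be sketched as a greedy placement of channels onto the currently least-loaded link, as below. The channel names, bandwidths, link names, and the greedy strategy are illustrative assumptions, not the actual rebalance algorithm.

    def rebalance(channels, links):
        """channels: dict of (S,G) -> bandwidth in kbps; links: list of link names.
        Greedily place the largest channels first on the least-loaded link."""
        load = {link: 0 for link in links}
        placement = {}
        for sg, bw in sorted(channels.items(), key=lambda c: -c[1]):
            link = min(load, key=load.get)
            placement[sg] = link
            load[link] += bw
        return placement, load

    channels = {("10.0.0.1", "239.1.1.1"): 8000,
                ("10.0.0.1", "239.1.1.2"): 4000,
                ("10.0.0.2", "239.1.1.3"): 4000}
    print(rebalance(channels, ["link-1", "link-2"]))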
Channel viewership information, collected per subscriber host, can be used for:
• billing purposes
• market research/data mining, to gain a view into the most frequently watched channels, the
duration of channel viewing, the frequency of channel zapping by time of day, and so on
The information about channel viewership is based on IGMP states maintained per each
subscriber host. Each event related to the IGMP state creation is recorded and formatted by the
IGMP process. The formatted event is then sent to another task in the system (Exporter), which
allocates a TX buffer and starts a timer.
The event is then written by the Exporter into the buffer. The buffer in essence corresponds to
the packet that will contain a single event or a set of events. Those events are transported as data
records over UDP to an external collector node. The packet itself has a header followed
by a set of TLV type data structures, each describing a unique field within the IGMP event.
The packet is transmitted when it reaches a preconfigured size (1400 bytes), or when the timer
expires, whichever comes first. Note that the timer is started when the buffer is initially created.
The receiving end (collector node) accepts the data on the destination UDP port. It must be aware
of the data format so that it can interpret incoming data accordingly. The implementation details of
the receiving node are outside of the scope of this description and are left to the network operator.
The IGMP state recording per subscriber host must be supported for hosts which are replicating
multicast traffic directly as well as for those hosts that are only keeping track of IGMP states for
HQoS Adjustment purposes. The latter is implemented via redirection and not the Host
Tracking (HT) feature as originally proposed. The IGMP reporting must differentiate events
between direct replication and redirection.
It further distinguishes events that are related to the denial of IGMP state creation (due to filters,
MCAC failure, and so on) from those that are related to the removal of an already existing IGMP
state in the system.
Each IGMP state change generates a data record that is formatted by the IGMP task and written
into the buffer. IGMP state transitions configured statically through CLI are not reported.
In order to minimize the size of the records when transported over the network, most fields in the
data record are HEX coded (as opposed to ASCII descriptive strings).
Application:
• 0x01 - IGMP
• 0x02 - IGMP Host Tracking
Event:
Length:
• The length of the entire data record (including the header and TLVs) in octets.
Timestamp:
• The timestamp is in Unix format (a 32-bit integer in seconds since 01/01/1970) plus an extra 8
bits for 10 ms resolution.
TLVs describing the IGMP state record have the following structure: a Type field, followed by a
Length field, followed by a Value field.
The redirection destination TLV is a mandatory TLV that is sent only in cases where redirection is
enabled. It contains two 32 bit integer numbers. The first number identifies the VRF where IGMPs
are redirected; the second number identifies the interface index.
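A TLV of this shape can be encoded as in the Python sketch below. The one-octet type and length widths and the type code value are assumptions made for illustration only; the exact on-wire record layout is not reproduced in this description.

    import struct

    def encode_tlv(tlv_type, value):
        """Pack a TLV assuming a one-octet type and one-octet length (illustrative widths)."""
        return struct.pack("!BB", tlv_type, len(value)) + value

    # Hypothetical type code for the redirection destination TLV: two 32-bit integers,
    # the VRF id followed by the interface index, as described in the text above.
    REDIRECT_TLV_TYPE = 0x10                    # assumed value, not from the specification
    redirect_value = struct.pack("!II", 2, 35)  # VRF 2, interface index 35
    print(encode_tlv(REDIRECT_TLV_TYPE, redirect_value).hex())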
Optional fields can be included in the data records according to the configuration.
In IGMPv3, if an IGMP message (join or leave) contains multiple multicast groups or a multicast
group contains multiple IP sources, only a single event is generated per group-source combination.
In other words, data records are transmitted with a single source IP address and multiple multicast
group addresses, or with a single multicast group address and multiple source IP addresses, depending
on the content of the IGMP message.
Transport Mechanism
Data is transported over a UDP socket. The destination IP address, the destination port, and the source IP
address are configurable. The default UDP source and destination port number is 1037.
Upon the arrival of an IGMP event, the Exporter allocates a buffer for the packet (if not already
allocated) and starts writing the events into the buffer (packet). Along with the initial buffer
creation, a timer is started. The trigger for the transmission of the packet is either the TX buffer
being filled up to 1400B (hard coded value), or the timer expiry, whichever comes first.
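The buffering behaviour (flush when the buffer reaches 1400 bytes or when the timer expires, whichever comes first) can be sketched as follows. The collector address, the timer value, and the record format are illustrative assumptions; only the 1400-byte threshold and the default port 1037 come from the text.

    import socket, time

    MAX_PACKET = 1400        # hard-coded flush threshold from the text
    FLUSH_INTERVAL = 30.0    # assumed timer value in seconds, for illustration only

    class Exporter:
        """Minimal model of the Exporter: buffer IGMP event records, flush on size or timer."""

        def __init__(self, collector=("192.0.2.10", 1037)):   # 1037 is the default port
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.collector = collector
            self.buffer = b""
            self.timer_start = None

        def add_event(self, record: bytes):
            if not self.buffer:
                self.timer_start = time.monotonic()           # timer starts with the buffer
            self.buffer += record
            if len(self.buffer) >= MAX_PACKET or \
               time.monotonic() - self.timer_start >= FLUSH_INTERVAL:
                self.flush()

        def flush(self):
            if self.buffer:
                self.sock.sendto(self.buffer, self.collector)
                self.buffer, self.timer_start = b"", None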
The source IP address is configurable within GRT (by default system IP), and the destination IP
address can be reachable only via GRT. The source IP address is modified via
system>security>source-address>application CLI hierarchy.
The receiving end (the collector node) collects the data and processes it according to the
formatting rules defined in this document. The capturing and processing of the data on the
collector node is outside of the context of this description.
It should be noted that the processing node needs sufficient resources to accept and
process packets that contain information about every IGMP state change for every host from the set
of network BRASes that are transporting data to this particular collector node.
Multicast Reporter traffic is marked as BE (all 6 DSCP bits set to 0) when exiting the system.
HA Compliance
IGMP Events are synchronized between two CPMs before they are transported out of the system.
QoS Awareness
The IGMP Reporter is a client of sgt-qos so that the DSCP/dot1p bits can be appropriately marked
when egressing the system.
Hardware Support
The following are supported on the 7750 platform:
• IPoE subscribers
• v6 (MLD)
• Regular (non-subscriber) interfaces
• SAM support as the collector device
For the business customers, the main drivers are enterprise multicast and Internet multicast
applications.
On multicast-capable ANs, a single copy of each multicast stream is delivered over a separate
regular IP interface. AN would then perform the replication. This is how multicast would be
deployed in a Routed CO environment with 7x50s.
On legacy, non-multicast ANs, or in environments with low-volume multicast traffic where it is
not worth setting up a separate multicast topology (from BNG to AN), multicast replication is
performed via subscriber interfaces in the 7x50. There are differences in replicating multicast traffic
for IPoE versus PPPoX hosts, which are described in subsequent sections.
[Figure: Wholesale/retail Routed CO model (al_0169) — PPPoE termination points for the CPE of business enterprise sites on the ALU BRAS, with corporate Internet and VPN services terminated in retail VPRNs (Retail VPRN 1 and Retail VPRN 2) behind a wholesale VPRN.]
In this example, HSI is terminated in a Global Routing Table (GRT) whereas VPRN services are
terminated in Wholesale/Retail VPRN fashion, with each customer using a separate VPRN.
The actual connectivity model that will be deployed depends on many operational aspects that are
present in the customer environment.
Multicast over subscriber-interfaces in a Routed CO model is supported for both types of hosts,
IPoE and PPPoE which can be simultaneously enabled on a shared SAP.
There are some fundamental differences in multicast behavior between two host types (IPoE and
PPPoX). The differences will be discussed further in the next sections.
Hardware Support
Multicast over subscriber interfaces is supported on all FP2 based hardware that supports Routed
CO model. This includes:
• 7750 SR-7/12
• 7750-c4/12
• 7450 in mixed mode
The following deployment scenarios are supported:
• 1:1 model (subscriber per VLAN/SAP) with an Access Node (AN) that is not IGMP
aware.
• N:1 model (service per VLAN/SAP) with the AN in the Snooping mode.
• N:1 model with the AN in the Proxy mode.
• N:1 model with the AN that is not IGMP aware.
There are two modes of operation for subscriber multicast that can be chosen to address the above
mentioned deployment scenarios:
1. Per SAP replication — A single multicast stream per group is forwarded on any given SAP.
Even if the SAP has a multicast group (channel) that is registered to multiple hosts, only a
single copy of the multicast stream is forwarded over this SAP. The multicast stream will
have a multicast destination MAC address (as opposed to unicast). IGMP states will be main-
tained per host. This is the default mode of operation.
2. Per subscriber host replication — In this mode of operation, multicast is replicated per sub-
scriber host even if this means that multiple copies of the same stream will be forwarded over
the same SAP. For example, if two hosts on the same SAP are registered to receive the same
multicast group (channel), then this multicast channel will be replicated twice on the same
SAP. The streams will have a unique unicast destination MAC address (otherwise it would
not make sense to replicate the streams twice).
In all deployment scenarios and modes of operation, IGMP states are maintained per source IP
address of the incoming IGMP message. This source IP address might represent a subscriber host
or the AN (proxy mode).
IGMP states are maintained per subscriber host and per SAP.
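The difference in the number of streams generated on a SAP in the two modes can be expressed as a small calculation, shown in the Python sketch below; the host and group names are illustrative.

    # Which channels each subscriber host on the SAP has joined (illustrative data).
    hosts_to_groups = {
        "host-1": {"239.1.1.1", "239.1.1.2"},
        "host-2": {"239.1.1.1"},
    }

    # Per-SAP replication: one stream per unique group on the SAP (multicast destination MAC).
    per_sap_streams = len(set().union(*hosts_to_groups.values()))

    # Per-host replication: one stream per (host, group) registration (unicast destination MAC).
    per_host_streams = sum(len(groups) for groups in hosts_to_groups.values())

    print(per_sap_streams, per_host_streams)   # 2 vs 3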
Multicast traffic for subscribers in per-SAP replication mode flows via a SAP queue which
is outside of the subscriber queue context. Sending the multicast traffic over the default SAP
queue is characterized by:
• The inability to classify multicast traffic into separate subscriber queues and therefore
include it natively in HQoS. However, multicast traffic can be classified into specific
SAP queues, assuming that such queues are enabled via a SAP-based QoS policy. While
multiple SAP queues can be defined under static SAPs, dynamic SAPs (MSAPs) are
limited to a single SAP queue defined in the default egress-sap policy. This default egress-
sap policy under an MSAP cannot be replaced or modified.
• Redirection of multicast traffic via internal queues in case that the SAP queue in
subscriber environment is disabled (sub-sla-mgmt>single-sub-parameters>profiled-
traffic-only). This is applicable only to 1:1 subscriber model.
• A possible necessity for HQoS Adjustment as multicast traffic is flowing outside of the
subscriber queues.
• De-coupling of the multicast forwarding statistics from the overall subscriber forwarding
statistics obtained via subscriber specific show commands.
This model is shown in Figure 46. The AN is not IGMP aware; all replications are performed in
the BNG. From the BNG perspective, this deployment model has the following characteristics:
• IGMP states are kept per hosts and SAPs. Each host can be registered to more than one
group.
• IGMP Joins will be accepted only from the active subscriber hosts as dictated by
antispoofing.
• IGMP statistics can be displayed per host or per group.
• Multicast traffic for the subscriber is forwarded through the egress SAP queue. In case
that the SAP queue is disabled (profiled-traffic-only command), multicast traffic will flow
via internal queues outside of the subscriber context.
• A single copy of any multicast stream is generated per SAP. This can be viewed as
replication per unique multicast group per SAP, rather than the replication per host. In
other words, the number of multicast streams on this SAP is equal to the number of unique
groups across all hosts on this SAP (subscriber).
• Traffic statistics are kept per the SAP queue. Consequently multicast traffic stats will be
shown outside of the subscriber context.
• HQoS Adjustment might be necessary.
• Traffic cannot be explicitly classified (forwarding classes and queue mappings) inside of
the subscriber queues.
[Figure 46: Per-SAP replication on the BNG with a non-IGMP-aware AN — subscriber A's hosts send IGMP (S1, G1) and IGMP (S1, G2) joins on their VLAN, and a single copy of each group (G1, G2) is forwarded through the SAP queue on the subscriber VLAN, outside of the subscriber A queues.]
This model is shown in Figure 47. The AN is IGMP aware and is participating in multicast
replication. From the BNG perspective this deployment model has the following characteristics:
• IGMP states are kept per hosts and SAPs. Each host can be registered to more than one
group.
• IGMP Joins are accepted only from the active subscriber hosts as dictated by
antispoofing.
• IGMP statistics are displayed per host, per group or per subscriber.
• Multicast traffic for ALL subscribers on this SAP is forwarded through the egress SAP
queues.
• A single copy of any multicast stream is generated per SAP. This can be viewed as the
replication per unique multicast group per SAP, rather than the replication per host or
subscriber. In other words, the number of multicast streams on this SAP is equal to the
number of unique groups across all hosts and subscribers on this SAP.
• The AN will receive a single multicast stream and based on its own (AN) IGMP snooping
information, it will replicate the mcast stream to the appropriate subscribers.
• Traffic statistics are kept per the SAP queue. Consequently multicast traffic stats will be
shown on a per SAP basis (aggregate of all subscribers on this SAP).
• Traffic cannot be explicitly classified (forwarding classes and queue mappings) inside of
the subscriber queues.
• Redirection to the common multicast VLAN is supported.
• Multicast streams have multicast destination MAC.
• IGMP joins are accepted (based on the source IP address) only for subscriber hosts that are already created in
the system. IGMP joins coming from hosts that do not exist in the system are
rejected, unless acceptance of IGMP messages from non-subscriber hosts is explicitly allowed
by disabling the sub-hosts-only restriction under the IGMP group-interface CLI hierarchy level.
[Figure 47: Per-SAP replication with the AN in IGMP snooping mode (N:1 model, al_0012) — the AN forwards IGMP (S1, G1), IGMP (S1, G2), and IGMP (S2, G1) joins from many subscribers (A through E) on a shared VLAN; the BNG sends a single copy of each group per SAP, through the SAP queue, with a multicast destination MAC address, and the AN replicates it to the appropriate subscribers based on its snooping state.]
This model is shown in Figure 48. The AN is configured as an IGMP proxy node and is participating
in downstream multicast replication. IGMP messages from multiple sources (subscriber hosts)
for the same multicast group are consolidated in the AN into a single IGMP message. This single
IGMP message has the source IP address of the AN.
From the BNG perspective this deployment model has the following characteristics:
In the following example, IGMP messages from the source IP address <ip> are accepted even though there is
no subscriber host with that IP address present in the system. An IGMP state will be created
under the SAP context (service per VLAN, or N:1 model) for the group <pref-definition>. All other
IGMP messages originated from non-subscriber hosts will be rejected. IGMP messages for
subscriber hosts will be processed according to the igmp-policy applied to each subscriber host.
configure
service vprn <id>
igmp
group-interface <name>
import <policy-name>
configure
router
policy-options
begin
prefix-list <pref-name>
prefix <pref-definition>
policy-statement proxy-policy
entry 1
from
group-address <pref-name>
source-address <ip>
protocol igmp
exit
action accept
exit
exit
default-action reject
This functionality (accepting IGMP from non-subscriber hosts) can be disabled with the following
flag.
configure
service vprn <id>
igmp
group-interface <name>
sub-hosts-only
[Figure 48: Per-SAP replication with the AN in IGMP proxy mode (al_0013) — the AN consolidates the joins from hosts S1 and S2 into a single IGMP message sourced from its own address (host SX), so no per-host IGMP state is created in the BNG for S1 and S2; groups G1 and G2 are forwarded through the SAP queue on the subscriber VLAN.]
Per host replication mode can be enabled on a subscriber basis with the per-host-replication
command in the subscriber-management>igmp-policy hierarchy.
This model is shown in Figure 49. The AN is NOT IGMP aware and multicast replication is
performed in the BNG. Multicast streams are sent directly to the hosts using their unicast MAC
addresses. HQoS Adjustment is NOT needed as multicast traffic is flowing through subscriber
queues. From the BNG perspective this deployment model has the following characteristics:
• IGMP states are kept per hosts. Each host can be registered to multiple IGMP groups.
• IGMP Joins will be accepted only from the active subscriber hosts. In other words
antispoofing is in effect for IGMP messages.
• IGMP statistics can be displayed per host, per group or per subscriber.
• Multicast traffic is forwarded through subscriber queues using unicast destination MAC
address of the destination host.
• Multiple copies of the same multicast stream can be generated per SAP. The number of
copies depends on the number of hosts on the SAP that are registered to the same
multicast group (channel). In other words, the number of multicast streams on the SAP is
equal to the number of groups registered across all hosts on this SAP.
• Traffic statistics are kept per host queue. In case multicast statistics need to be
separated from unicast, the multicast traffic should be classified into a separate subscriber
queue.
• HQoS Adjustment is not needed as traffic is flowing within the subscriber queues and is
automatically accounted in HQoS.
• Multicast traffic can be explicitly classified into forwarding classes and consequently
directed into desired queues.
• MCAC is supported.
• The profiled-traffic-only mode defined under sub-sla-mgmt is supported. This mode is used
to reduce the number of queues in the 1:1 model (sub-sla-mgmt-> no multi-sub-sap) by
preventing the creation of the SAP queues. Since multicast traffic is not using the SAP
queue, enabling this feature does not have any effect on the multicast operation.
[Figure 49: Per-host replication with a non-IGMP-aware AN (1:1 model, al_0011) — groups G1 and G2 joined by subscriber A's hosts are replicated per host and forwarded through the subscriber A queues on the subscriber VLAN.]
This model is shown in Figure 50. The AN is not IGMP aware and is not participating in multicast
replication. From the BNG perspective this deployment model has the following characteristics:
• IGMP states are kept per hosts. Each host can be registered to multiple multicast groups.
• IGMP Joins will be accepted only from the active subscriber hosts, subject to
antispoofing.
• IGMP statistics can be displayed per host, per group or per subscriber.
• Multicast traffic is forwarded through subscriber queues using unicast destination MAC
address of the destination host.
• Multiple copies of the same multicast stream can be generated per SAP. The number of
copies depends on the number of hosts on the SAP that are registered to the same
multicast group (channel). In other words, the number of multicast streams on the SAP is
equal to the number of groups registered across all hosts on this SAP.
• Traffic statistics are kept per the host queue. In case that multicast statistics need to be
separated from unicast, the multicast traffic should be classified in a separate subscriber
queue.
• HQoS Adjustment is NOT needed as traffic is flowing within the subscriber queues and is
automatically accounted in HQoS.
• Multicast traffic can be explicitly classified into forwarding classes and consequently
directed into desired queues.
• MCAC is supported.
[Figure 50: Per-host replication in the N:1 model — IGMP (S1, G1), IGMP (S1, G2), and IGMP (S2, G1) joins arrive from hosts behind the AN on a shared VLAN; the BNG replicates the streams per host through the subscriber (Sub C) queues.]
[Figure 51: Multicast IPv4 Address and Unicast MAC Address in PPPoE Subscriber Multicast — the PPPoE frame carries the multicast payload to the unicast MAC address of the host. Frame layout: Ethernet header (DA = unicast MAC address of the host, SA, VLAN, Ethertype 0x8863 for the discovery stage PADI/PADO/PADR/PADS/PADT and 0x8864 for the session stage), PPPoE header (version, type, code, session ID, length), PPP protocol ID (0x0021 IPv4, 0x8021 IPCP, 0xC021 LCP, 0x0057 IPv6, 0x003d MLPPP), PPP payload, FCS.]
IGMP Timers
IGMP timers are maintained under the following hierarchy:
configure>router>igmp
configure>service vprn>igmp
The IGMP timers are thus controlled per routing instance (VRF or GRT).
However, the timers can be different for the hosts and for the redirected interface when redirection
between VRFs is enabled.
In case of redirection, the subscriber-host IGMP state will determine the IGMP state on the
redirected interface, assuming that IGMP messages are not directly received on the redirected
interface (for example from the AN performing IGMP forking). For example if the redirected
interface is not receiving IGMP messages from the downstream node, then the IGMP state under
redirected interface will be removed simultaneously with the removal of the IGMP state for the
subscriber host (due to leave or a timeout).
In case the redirected interface is receiving IGMP messages directly from the downstream
node, the IGMP states on that redirected interface are driven by those direct IGMP messages.
For example, an IGMP host in VRF1 has an expiry time of 60 seconds and the expiry time defined
under the VRF2 where multicast traffic is redirected is set to 90 seconds. The IGMP state will time
out for the host in VRF1 after 60s, and if no host has joined the same multicast group in VRF2
(where redirected interface resides), the IGMP state will be removed there too.
If a join was received directly on the redirection interface in VRF2, the IGMP state for that group
will be maintained for 90s, regardless of the IGMP state for the same group in VRF1.
HQoS Adjustment
HQoS Adjustment is required in the scenarios where subscriber multicast traffic flow is
disassociated from subscriber queues. In other words, the unicast traffic for the subscriber is
flowing through the subscriber queues while at the same time multicast traffic for the same
subscriber is explicitly (through redirection) or implicitly (per-sap replication mode) redirected
through a separate non-subscriber queue. In this case HQoS Adjustment can be deployed where
preconfigured multicast bandwidth per channel is artificially included in HQoS. The
bandwidth consumption per multicast group must be known in advance and configured within the
7x50. By keeping the IGMP state per host, the bandwidth for the multicast group (channel) to
which the host is registered is known and is deducted, as consumed, from the aggregate subscriber
bandwidth.
The multicast bandwidth per channel must be known (this is always an approximation) and
provisioned in the BNG node in advance.
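The adjustment itself amounts to subtracting the provisioned per-channel bandwidth of every channel the subscriber has joined from the subscriber's configured aggregate rate. A minimal Python sketch, with illustrative channel definitions, follows; the figure of 30000 kbps and the 1000 kbps channel mirror the scheduler output shown later in this section.

    # Hypothetical per-channel bandwidth definitions (kbps), as provisioned on the BNG.
    CHANNEL_BW = {"239.1.1.1": 1000, "239.1.1.2": 8000}

    def adjusted_agg_rate(configured_rate_kbps, joined_groups):
        """Return the egress rate left for unicast after HQoS Adjustment."""
        multicast_bw = sum(CHANNEL_BW.get(g, 0) for g in joined_groups)
        return max(configured_rate_kbps - multicast_bw, 0)

    # A subscriber with a 30000 kbps rate watching one 1000 kbps channel keeps 29000 kbps.
    print(adjusted_agg_rate(30000, ["239.1.1.1"]))   # 29000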
In PPPoE and in IPoE per host replication environment, HQoS Adjustment is not needed as
multicast traffic is unicasted to each subscriber and therefore is flowing through subscriber
queues.
For HQoS Adjustment, the channel bandwidth definition and association with an interface is the
same as in the MCAC case. This is a departure from the legacy HT channel bandwidth definition
which is done via multicast-info-policy.
Channel definition:
configure
router
mcac
policy <name>
<channel definition>
The channel definition (mcac policy) is then associated with an interface:
• group-interface
configure
service vprn <id>
igmp
group-interface <grp-if-name>
mcac
policy <mcac-policy-name>
• plain interface
configure
router/service vprn
igmp
interface <name>
mcac
policy <mcac-policy-name>
• retailer group-interface:
configure
service vprn <id>
igmp
group-interface fwd-service <svc-id> <grp-if-name>
mcac
policy <mcac-policy-name>
HQoS Adjustment is enabled via the igmp-policy, which is then applied to the subscriber profile:
configure
subscriber-management
igmp-policy <name>
egress-rate-modify [egress-aggregate-rate-limit | scheduler <name>]
configure
subscriber-management
sub-profile <name>
igmp-policy <name>
In order to activate HQoS adjustment on the subscriber level, the sub-mcac-policy must be enabled
under the subscriber via the following CLI:
configure
subscriber-management
sub-mcac-policy <pol-name>
no shutdown
configure
subscriber-management
sub-profile <name>
sub-mcac-policy <pol-name>
The adjusted bandwidth during operation can be verified with the following commands
(depending whether agg-rate-limit or scheduler-policy is used):
*A:Dut-C>config>subscr-mgmt>sub-prof# info
----------------------------------------------
igmp-policy "pol1"
sub-mcac-policy "smp"
egress
scheduler-policy "h1"
scheduler "t2" rate 30000
exit
exit
----------------------------------------------
*A:Dut-C>config>subscr-mgmt>igmp-policy# info
----------------------------------------------
egress-rate-modify scheduler "t2"
redirection-policy "mc_redir1"
----------------------------------------------
Now, assume that the subscriber joins a new channel with a bandwidth of 1 Mbps (1000 kbps).
Root (Egr)
| slot(1)
|--(S) : t1
| | AdminPIR:90000 AdminCIR:10000
| |
| |
| | [Within CIR Level 0 Weight 0]
| | Assigned:0 Offered:0
| | Consumed:0
| |
| | [Above CIR Level 0 Weight 0]
| | Assigned:0 Offered:0
| | Consumed:0
| | TotalConsumed:0
| | OperPIR:90000
| |
| | [As Parent]
| | Rate:90000
| | ConsumedByChildren:0
| |
| |
| |--(S) : t2
| | | AdminPIR:29000 AdminCIR:10000(sum) <==== bw 1000 from igmp subtracted
| | |
| | |
| | | [Within CIR Level 0 Weight 1]
| | | Assigned:10000 Offered:0
| | | Consumed:0
| | |
| | | [Above CIR Level 1 Weight 1]
| | | Assigned:29000 Offered:0 <==== bw 1000 from igmp subtracted
| | | Consumed:0
| | |
| | |
| | | TotalConsumed:0
| | | OperPIR:29000 <==== bw 1000 from igmp subtracted
| | |
| | | [As Parent]
| | | Rate:29000 <==== bw 1000 from igmp subtracted
| | | ConsumedByChildren:0
| | |
| | |
| | |--(S) : t3
| | | | AdminPIR:70000 AdminCIR:10000
| | | |
| | | |
| | | | [Within CIR Level 0 Weight 1]
| | | | Assigned:10000 Offered:0
| | | | Consumed:0
| | | |
| | | | [Above CIR Level 1 Weight 1]
| | | | Assigned:29000 Offered:0
| | | | Consumed:0
| | | |
| | | |
| | | | TotalConsumed:0
| | | | OperPIR:29000
| | | |
| | | | [As Parent]
| | | | Rate:29000
| | | | ConsumedByChildren:0
| | | |
When HT is enabled, the AN forks off (duplicates) the IGMP messages on the common multicast
SAP to the subscriber SAP. IGMP states are not fully maintained per subscriber host in the BNG;
instead they are only tracked (less overhead) for bandwidth adjustment purposes.
Example of HT
Channel Definition:
configure
mcast-management
multicast-info-policy <name>
<channel to b/w mapping definition>
configure>router>multicast-info-policy <name>
configure>service>vprn>multicast-info-policy <name>
configure
subscriber-management
host-tracking-policy <name>
egress-rate-modify [agg-rate-limit | scheduler <sch-name>]
configure
subscriber-management
sub-profile <name>
host-tracking-policy <name> => mutually exclusive with igmp-policy
[Figure: Per-subscriber and per-Vport automatic bandwidth adjustment based on IGMP joins (al_0166) — the total subscriber bandwidth and the total Vport (PON) bandwidth are the sum of multicast and unicast traffic (BW = M+U); HQoS Adjustment in the BNG reduces the unicast subscriber rate and the Vport rate by the bandwidth of the joined multicast channels, while the OLT replicates multicast over the LAG toward the ONTs.]
A single copy of each channel is replicated on the PON as long as there is at least one subscriber
on that PON interested in this channel (has joined the IGMP group).
The 7x50 monitors IGMP joins at the subscriber level, and the channel bandwidth is
subtracted from the current Vport rate limit only when this is the first replication of the channel on
the corresponding PON (that is, the first subscriber on the PON to join the channel). Otherwise, the
Vport bandwidth is not modified. Similarly, when the channel is removed from the last subscriber on
the PON, the channel bandwidth is returned to the Vport.
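This behaviour can be modelled as reference counting of channels per PON: only the first join and the last leave of a channel on the PON change the Vport rate. A minimal Python sketch with an illustrative channel bandwidth:

    from collections import Counter

    CHANNEL_BW = {"239.1.1.1": 4000}     # illustrative per-channel bandwidth (kbps)

    class Vport:
        def __init__(self, agg_rate_limit_kbps):
            self.rate = agg_rate_limit_kbps
            self.refcount = Counter()    # channel -> number of subscribers on this PON

        def join(self, channel):
            self.refcount[channel] += 1
            if self.refcount[channel] == 1:   # first subscriber on the PON for this channel
                self.rate -= CHANNEL_BW[channel]

        def leave(self, channel):
            self.refcount[channel] -= 1
            if self.refcount[channel] == 0:   # last subscriber on the PON left the channel
                self.rate += CHANNEL_BW[channel]

    v = Vport(100000)
    v.join("239.1.1.1"); v.join("239.1.1.1")  # the second join does not change the rate
    print(v.rate)                             # 96000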
Association between the Vport and the subscriber is performed via inter-destination-string or
svlan during the subscriber setup phase. Inter-destination-string can be obtained either via Radius
or LUDB. In case that the association between the Vport and the subscriber is performed based on
the svlan (as specified in sub-sla-mgmt under the sap/msap), then the destination string under the
Vport must be a number matching the svlan.
The mcac-policy (channel definition bandwidth) can be applied on the group interface under
which the subscribers are instantiated or in case of redirection under the redirected-interface.
In a LAG environment, the Vport instance is instantiated per member LAG link on the IOM. For
accurate bandwidth control, it is a prerequisite for this feature that subscriber traffic hashing is
performed per Vport.
configure
port <port-id>
ethernet
access
egress
vport <name>
egress-rate-modify
agg-rate-limit <rate>
host-match <destination-string>
port-scheduler-policy <port-scheduler-policy-name>
scheduler-policy <scheduler-policy-name>
configure
port <port-id>
sonet-sdh
path [<sonet-sdh-index>]
access
egress
vport <name>
egress-rate-modify
agg-rate-limit <rate>
host-match <destination-string>
port-scheduler-policy <port-scheduler-policy-name>
scheduler-policy <scheduler-policy-name>
The Vport rate that will be affected by this functionality depends on the configuration:
• In case the agg-rate-limit within the Vport is configured, its value will be modified based
on the IGMP activity associated with the subscriber under this Vport.
• In case that the port-scheduler-policy within the Vport is referenced, the max-rate defined
in the corresponding port-scheduler-policy will be modified based on the IGMP activity
associated with the subscriber under this Vport.
The Vport rates can be displayed with the following two commands:
As an example:
In this case, the configured Vport aggregate-rate-limit max value has been reduced by 14Mbps.
Similarly, if the Vport had a port-scheduling-policy applied, the max-rate value configured in the
port-scheduling-policy would have been modified by the amount shown in the Modify delta output
in the above command.
Multi-Chassis Redundancy
Scalability Considerations
It is assumed that the rate of the IGMP state change on the Vport level is substantially lower than
on the subscriber level.
The reason for this is that the IGMP Join/Leaves are shared amongst subscribers on the same
Vport (PON for example) and thus the IGMP state on the VPort level is changed only for the first
IGMP Join per channel and the last IGMP leave per channel.
Redirection
Example:
configure
router
policy-options
begin
policy-statement <name>
default-action accept
multicast-redirection [fwd-service <svc id>] <interface name>
exit
exit
exit
exit
configure
subscr-mgmt
igmp-policy <name>
import <policy-name>
redirection-policy <name>
Redirection that cross-connects GRT and VPRN is not supported. Redirection can be only
performed between interfaces in the GRT, or between the interfaces in any of the VPRN (cross
connecting VPRNs is allowed).
MCAC can be enabled at the following levels:
• per subscriber
• per group-interface
• per redirected interface
Two levels of MCAC can be enabled simultaneously; in such a case this is referred to as
Hierarchical MCAC (H-MCAC). When redirection is enabled, H-MCAC per subscriber and
per redirected interface is supported; MCAC per group-interface is not supported in this
case. The channel definition policy for the subscriber and the redirected interface is then
referenced under the igmp>interface (redirected interface) CLI hierarchy.
When redirection is disabled, H-MCAC for both the subscriber and the group-interface is
supported. The channel definition policy is in this case configured under the igmp>group-
interface CLI hierarchy.
Examples
Note that the same channel definition and association with interfaces is used for MCAC/H-MCAC
and HQoS Adjustment.
Channel definition:
configure
router
mcac
policy <mcac-pol-name>
bundle <bundle-name>
bandwidth <kbps>
channel <start-address> <end-address> bw <bw> [class {high|low}]
[type {mandatory|optional}]
:
:
• plain interface
configure
router
igmp
interface <name>
mcac
policy <mcac-policy-name>
configure
service vprn <id>
igmp
interface <if-name>
mcac
unconstrained-bw <bandwidth> mandatory-bw <mandatory-bw>
Enabling MCAC:
• per subscriber
configure
subscr-mgmt
sub-mcac-policy <name>
unconstrained-bw <bandwidth> mandatory-bw <mandatory-bw>
configure
subscr-mgmt
sub-profile <name>
sub-mcac-policy <name>
• per-group-interface
configure
service vprn <id>
igmp
group-interface <grp-if-name>
mcac
unconstrained-bw <bandwidth> mandatory-bw <mandatory-bw>
configure
service vprn <id>
igmp
interface <if-name>
mcac
unconstrained-bw <bandwidth> mandatory-bw <mandatory-bw>
The MCAC policy, aside from the channel bandwidth definitions, can optionally contain a
bandwidth cap for the group of channels (the bundle bandwidth):
config>router>mcac# info
----------------------------------------------
policy "test"
bundle "test" create
bandwidth 100000
channel 225.0.0.10 225.0.0.10 bw 10000 type mandatory
channel 225.0.0.11 225.0.0.15 bw 5000 type mandatory
channel 225.0.0.20 225.0.0.30 bw 5000 type optional
exit
exit
This can be used to prevent a single set of channels from monopolizing MCAC bandwidth
allocated to the entire interface. The bandwidth of each individual bundle will be capped to some
value below the interface MCAC bandwidth limit, allowing each bundle to have its own share of
the interface MCAC bandwidth.
In most cases, the bandwidth limit per bundle is not necessary to configure. The aggregate limit
per all channels as defined under the subscriber/interface will cover majority of scenarios. In case
that one wants to explore the bundle bandwidth limits and how they affect MCAC behavior, the
following text will help understanding this topic.
To further understand how various MCAC bandwidth limits are applied, one need to understand
the concept of the mandatory bandwidth that is pre-allocated in the following way:
(guaranteed for mandatory channels) and the remaining 8mbps can be used by the
optional channels on a first come first serve basis.
config>router>igmp# info
----------------------------------------------
interface "ge-1/1/1"
mcac
unconstrained-bw 10000 mandatory-bw 2000
exit
exit
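Admission against the unconstrained-bw/mandatory-bw pair can be modelled as below: mandatory channels draw from the pre-allocated pool, while optional channels compete for whatever remains on a first come first served basis. This is a simplified single-level Python sketch; the numbers follow the ge-1/1/1 example above, but the class and its admission rules are illustrative assumptions.

    class McacInterface:
        """Simplified admission model for unconstrained-bw / mandatory-bw on one level."""

        def __init__(self, unconstrained_kbps, mandatory_kbps):
            self.mandatory_budget = mandatory_kbps
            self.optional_budget = unconstrained_kbps - mandatory_kbps
            self.mandatory_used = 0
            self.optional_used = 0

        def admit(self, bw_kbps, mandatory):
            if mandatory:
                ok = self.mandatory_used + bw_kbps <= self.mandatory_budget
                if ok:
                    self.mandatory_used += bw_kbps
            else:
                ok = self.optional_used + bw_kbps <= self.optional_budget
                if ok:
                    self.optional_used += bw_kbps
            return ok

    # 10 Mbps unconstrained with 2 Mbps reserved for mandatory channels:
    # 8 Mbps remains for optional channels.
    intf = McacInterface(10000, 2000)
    print(intf.admit(2000, mandatory=True))    # True, uses the mandatory pool
    print(intf.admit(8000, mandatory=False))   # True, exhausts the optional pool
    print(intf.admit(1000, mandatory=False))   # False, optional budget exhausted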
The bundle bandwidth limit poses a problem when the MCAC policy is applied under the group-
interface. The reason is that the group-interface represents the aggregation point for the
subscribers and their bandwidth. As such, it is natural that any aggregate bandwidth limit
under the group-interface be larger than the bandwidth limit applied to any individual subscriber
under it. Since the MCAC policy, along with the bundle bandwidth limit, is inherited by all
subscribers under the group-interface, the exhaustion of the bundle bandwidth limit under the
group-interface coincides with the exhaustion of the bundle bandwidth limit of any individual
subscriber. This would allow a single subscriber to starve the remaining subscribers under the
same group-interface of multicast bandwidth. While it is perfectly acceptable for the
subscribers to inherit the multicast channel definition from the group-interface, for the above
reasons it is not acceptable that the subscribers inherit the bandwidth cap from the group-interface.
To remedy this situation, the MCAC bandwidth limits are independently configured under the
group-interface level (aggregated level) and the subscriber level via the command unconstrained-
bw <kbps> mandatory-bw <kbps>. The undesired bundle bandwidth cap in the MCAC policy will
be ignored under the group-interface AND under the subscriber. However, the bundle bandwidth
cap will be applied automatically to each SAP under the group interface. A SAP is a natural place
for a bundle bandwidth limit since each channel on a SAP can be replicated only once and
therefore the amount of pre-allocated mandatory bandwidth can be pre-calculated. This is
obviously not the case for the group-interface, where a single channel can be replicated multiple
times (once per SAP under the group-interface). Similarly, the same channel can be replicated multiple
times for the same subscriber in per-host replication mode. Only subscribers in per-sap replication
mode warrant a single replication per channel. Therefore, if a bundle cap is configured, it is
applied to limit the bandwidth of the bundle applied to a subscriber in per-sap
replication mode.
Figure 53 depicts the MCAC-related inheritances and the MCAC bandwidth allocation model in per-sap
replication mode. The MCAC policy is applied to the group-interface and inherited by each
subscriber as well as each SAP under the same group-interface. However, the bundle bandwidth
limit in the MCAC policy is ignored on the group-interface and under the subscriber (denoted by
the red X in the figure). The bundle limit is applied only to each SAP under the group-interface.
Overall (non-bundle) MCAC bandwidth limits are independently applied to the group-interface
and the subscribers. According to our example, 20 Mbps of multicast bandwidth in total is
allocated per group-interface. 6 Mbps of the 20 Mbps is allocated for mandatory channels. This
leaves 14 Mbps of multicast bandwidth for all optional channels combined, served on a first come
first serve basis. Each physical replication (multiple replications of the same channel can occur,
one per SAP) counts towards the respective group-interface bandwidth limits.
Similar logic applies to the subscriber MCAC bandwidth limits which are applied per sub-profile.
Finally, each SAP can optionally contain the bundle bandwidth limit. Note that in a hierarchical
MCAC fashion, if either of the bandwidth checks fails (SAP, sub or grp-if) the channel admission
for the subscriber also fails.
In our example, six subscriber hosts watch the same channel but there are only three active replications
(one per SAP). This yields:
• 14 Mbps of available multicast bandwidth under the group-interface. This bandwidth can
be used for optional channels on a first-come-first-served basis. No reserved bandwidth is
left.
• Subscriber A - 3 Mbps is still reserved for mandatory channels and 5 Mbps is available for
optional channels (first-come-first-served). All of this assumes that the SAP and grp-if
bandwidth checks pass.
• Subscriber B - 2 Mbps is still reserved for mandatory channels and 6 Mbps is available for
optional channels (first-come-first-served). All of this assumes that the SAP and grp-if
bandwidth checks pass.
• Subscriber C - No reserved bandwidth for mandatory channels is left. 3 Mbps is still left
for optional channels. All of this assumes that the SAP and grp-if bandwidth checks pass.
• SAPs - Considering that 2 Mbps is currently being replicated (channel A), each SAP can still
accept 1 Mbps of mandatory bandwidth (channel B) and 7 Mbps of the remaining optional
channels.
Group-interface limits and MCAC policy used in this example (from Figure 53):

    GRP-IF:
        unconstrained-bw 20000 mandatory-bw 6000

    config>router>mcac# policy <mcac-pol-name>
        bundle <bundle-name>
            bandwidth 10000
            channel "A" bw 2000 type mandatory
            channel "B" bw 1000 type mandatory
            (the rest are optional channels)
Figure 54 depicts the behavior in per-host replication mode. The MCAC policy inheritance flow is the
same as in the previous example, with the difference that the bundle limit has no effect at all. Each
host generates its own copy of the same multicast stream, which flows via subscriber queues and
not the SAP queue. Since each of the copies counts towards the subscriber and group-interface
bandwidth limits, the multicast bandwidth consumption is higher in this example. This needs to be
reflected in the configured multicast bandwidth limits. For example, the group-interface
mandatory bandwidth limit is increased to 12 Mbps.
In our example, the six subscriber hosts are still watching the same channel, but the number of
replications is now doubled compared to the previous example. The final tally for our MCAC bandwidth limits is
as follows:
• Subscriber A - 1 Mbps is still reserved for mandatory channels and 5 Mbps for optional
channels (first-come-first-served). All of this assumes that the SAP and grp-if bandwidth checks
pass.
• Subscriber B - No reserved mandatory bandwidth is left. 6 Mbps is still left for optional
channels (first-come-first-served). All of this assumes that the SAP and grp-if bandwidth checks
pass.
• Subscriber C - No reserved mandatory bandwidth is left. 1 Mbps is still left for optional
channels (first-come-first-served). All of this assumes that the SAP and grp-if bandwidth checks
pass.
• No reserved bandwidth is left under the group-interface. 8 Mbps of available multicast
bandwidth under the group-interface is still left for optional channels on a first-come-first-
served basis.
• The bundle limit on a SAP is irrelevant in this case.
Group-interface limits and MCAC policy used in this example (from Figure 54):

    GRP-IF:
        unconstrained-bw 20000 mandatory-bw 12000

    config>router>mcac# policy <mcac-pol-name>
        bundle <bundle-name>
            bandwidth 10000
            channel "A" bw 2000 type mandatory
            channel "B" bw 1000 type mandatory
            (the rest are optional channels)
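The replication mode illustrated in Figure 53 and Figure 54 is selected per subscriber through the IGMP policy. The sketch below assumes a per-host-replication command under the igmp-policy context; the policy name is illustrative and the exact command name and placement should be verified against the command reference for the release in use:

    configure
        subscr-mgmt
            igmp-policy "igmp-pol-1"
                # assumed knob: selects per-host rather than per-sap replication
                per-host-replication
            exit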
The MCAC policy is applied within the IGMP context (configure>router/service>igmp>interface/grp-if).
The following configuration options can lead to confusion as to which MCAC policy is in
effect:
• The MCAC policy (channel bandwidth definition) can be applied in two different
places (grp-if and/or regular interface).
• The same policy is used for (H)MCAC and for HQoS Adjust, with redirection enabled or
disabled.
In general, the rule is that the MCAC policy under the group-interface is always in effect in
cases where redirection is disabled. This is valid for subscriber or group-interface MCAC,
hierarchical MCAC (subscriber and group-interface) and HQoS Adjust in per-sap replication mode.
If redirection is enabled, the MCAC policy under the group-interface is ignored and the MCAC
policy applied under the redirected interface (1) is in effect, regardless of whether an MCAC policy
is applied under the group-interface or not.
1. The redirected interface is the interface to which IGMP Joins are redirected from subscriber
hosts.
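To make the two application points concrete, the following sketch applies the same channel-definition policy under a group-interface and under a regular (redirected) Layer 3 interface within IGMP. The interface and policy names are illustrative, and the group-interface placement under the router IGMP context is assumed here for brevity:

    config>router>igmp# info
    ----------------------------------------------
            interface "redirect-if-1"
                mcac
                    policy "mcac-pol-1"
                exit
            exit
            group-interface "grp-if-1"
                mcac
                    policy "mcac-pol-1"
                exit
            exit

With redirection disabled, the policy instance under "grp-if-1" is the one consulted; with redirection enabled, the instance under "redirect-if-1" takes precedence, as described above.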
Multicast Filtering
Multicast filtering must be done per session (host) for IPoE and PPPoE. Two types of
filters are supported:
1. IGMP filters on access ingress. These filters control the flow of IGMP messages between the
host and the BNG. They are applied via the import statement in the igmp-policy. The same
filters are used for the multicast-redirection policy:
configure
    subscr-mgmt
        igmp-policy <name>
            import <policy-name>
configure
    router
        policy-options
            begin
            prefix-list <pref-name>
                prefix <pref-definition>
            policy-statement <name>
                entry 1
                    from
                        group-address <pref-name>
                        source-address <ip>
                        protocol igmp
                    exit
                    action accept
                    exit
                exit
                default-action reject
2. Regular traffic filters, with which multicast traffic flow can be controlled in both directions
(ingress/egress). This is supported today through ip-filters under the SLA profile.
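As an illustration of the second filter type, the sketch below defines a hypothetical IP filter that drops one multicast group range and applies it on egress via the SLA profile. The filter ID, prefix and profile name are examples only:

    configure filter ip-filter 100 create
        default-action forward
        entry 10 create
            match
                dst-ip 239.100.0.0/16
            exit
            action drop
        exit
    exit

    configure subscriber-mgmt sla-profile "sla-prof-1" create
        egress
            ip-filter 100
        exit
    exit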
Wholesale/Retail Requirements
Multicast on subscriber interfaces is supported in both wholesale/retail models, as shown below:
[Figure al_0019: Wholesale/retail models - subscribers (SUB1, SUB2, SUB3) are terminated either directly on an IES/VPRN service or carried over LAC tunnels to LNS nodes.]
The distinction between these two models is that in the LAC/LNS case, the replication is
done further up in the network, on the LNS node. This means that the traffic between the LAC and the LNS
is multiplied by the number of replications.
QoS Considerations
In per-sap replication mode (which is applicable only to IPoE subscribers), multicast traffic is
forwarded through the SAP queue, which is outside of the subscriber queues, and is therefore not
accounted for in the subscriber aggregate rate limit. HQoS Adjust is used to remedy this situation.
If the SAP queue is removed from the static SAP in the IPoE 1:1 model (with the profiled-only-
traffic command), multicast traffic flows via internal queues which cannot be tied into a port-
scheduler as part of HQoS. Consequently, the port-scheduler max-rate as defined in the port-
scheduler-policy rate limits only unicast traffic. In other words, the max-rate value
in the port-scheduler-policy must be lowered by the amount of anticipated multicast traffic that will
flow via the port where the port-scheduler-policy is applied.
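For example, if roughly 200 Mbps of multicast traffic is expected to bypass the port-scheduler on a 1 Gbps access port, the max-rate could be lowered accordingly. The policy name and rate (in kbps) below are illustrative:

    configure qos port-scheduler-policy "psp-access-1" create
        max-rate 800000
    exit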
Similar logic applies to per-sap replication mode on dynamic SAPs (MSAPs), even if the SAP
queue is not removed. Although the multicast traffic flows via the SAP queue in this case, the
SAP QoS policy on an MSAP cannot be changed from the default one. The default QoS policy on a
SAP contains a single queue that is not parented by the port-scheduler.
These restrictions do not apply to static SAPs, where the SAP QoS policy can be customized and
its queues consequently tied to the port-scheduler.
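On a static SAP, tying the SAP queue into the port-scheduler might look like the following sketch; the policy ID and parenting parameters are illustrative:

    configure qos sap-egress 100 create
        queue 1
            port-parent level 1 weight 10
        exit
    exit

The policy would then be assigned to the static SAP (egress qos 100 under the SAP context) so that its queue competes for port bandwidth under the port-scheduler.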
Redundancy Considerations
Subscriber IGMP states can be synchronized across multiple 7x50 nodes in order to ensure
minimal interruption of (video delivery) service during network outages. The IGMP state of a
subscriber-host in a 7x50 node is tied to the state of the underlying protection mechanism: MC-
LAG and/or SRRP. For example, IGMP states are activated only for subscribers that are
anchored under group-interfaces in the SRRP master state or under a port with the active
MC-LAG.
In case of IGMP redirection, it must be ensured that the redirected interface (the interface to which
multicast forwarding is redirected) is under the same MC-LAG as the subscriber. Otherwise,
IGMP states on the redirected interface will be derived independently of the IGMP states for the
subscriber from which IGMP messages are redirected.
In a nutshell, the IGMP synchronization process, in conjunction with the underlying access protection
mechanisms, works as follows:
• IGMP states for the subscriber will be updated only if IGMP messages (Joins/Leaves/
Reports, etc.) are received:
→ Directly from the downstream access node on a group-interface with SRRP in master
state. This is valid irrespective of the IGMP querier status for the subscriber.
→ Directly from the downstream node on an active MC-LAG link. This is valid
irrespective of the IGMP querier status for the subscriber.
In all other cases, assuming that some protection mechanism in the access is present
(SRRP or MC-LAG), the IGMP messages are discarded and consequently no IGMP state
is updated. Similar logic applies to regular Layer 3 interfaces, where SRRP is replaced
with VRRP.
• Once the subscriber IGMP state is updated as a result of a directly received IGMP message
on an active (2) subscriber (SRRP master or active MC-LAG), a sync IGMP message is
sent to the standby subscriber over the Multi-Chassis Synchronization (MCS) protocol.
Synchronized IGMP states are populated in the MCS database (DB) in all peering 7x50
nodes. A configuration sketch of the MCS peering is shown at the end of this section.
• If an IGMP sync (MCS) message is received from the peering node, the IGMP
state for the standby subscriber is updated in the MCS DB but is not downloaded into
the forwarding plane unless there is a switchover. If the IGMP sync message is
received for the active subscriber, the message is discarded.
• If neither MC-LAG nor SRRP is deployed, then both nodes (irrespective of the
querier status) are eligible to receive direct IGMP messages and to send corresponding
IGMP MCS syncs to each other.
• IGMP queries are sent out only by the IGMP querier. In an SRRP scenario the IGMP querier
corresponds to the SRRP master, and in an MC-LAG environment it corresponds to the
node with the active MC-LAG link. If both are deployed simultaneously (as they should be),
then the SRRP state is derived from the MC-LAG state.
• IGMP states from the MCS DB will be:
→ Activated on the non-querier subscriber when neither SRRP nor MC-LAG is
deployed. It is assumed that the querier subscriber has received the original IGMP
message and consequently sent the IGMP MCS sync to the non-querier (standby).
The non-querier interface accepts the MCS sync message and also propagates the
IGMP states to PIM.
The querier subscriber will not accept the IGMP update from the MCS database.
→ Aware of the state of MC-LAG. As soon as the standby MC-LAG becomes active, the
IGMP states are activated and propagated to PIM. Traffic is
forwarded as soon as multicast streams are delivered to the node and the IGMP states
under the subscriber are activated. On a standby MC-LAG, IGMP states are not
propagated from the MCS DB to PIM and consequently not to the subscribers.
→ Aware of the SRRP state. Since the subscriber with the SRRP master state is considered
active, the states are propagated to PIM as well. On standby SRRP, IGMP states
are not propagated from the MCS DB to PIM and consequently not to the subscribers.
• Once a switchover is triggered via MC-LAG or SRRP, the IGMP states from the MCS DB
on the newly active MC-LAG node, or for the subscriber under the new SRRP master, are
sent to PIM and consequently to the forwarding plane, effectively turning on multicast
forwarding.
2. Active and standby subscriber refer to the state of the underlying protection mechanism (SRRP
master or active MC-LAG). Note that the subscribers themselves are always instantiated
(or active) on both nodes. However, traffic forwarding over those subscribers is driven
by the state of the underlying protection mechanism (SRRP or MC-LAG). Hence the terms
active and standby subscriber.
Note that in a subscriber environment, SRRP should always be activated in a dual-homing scenario,
even if MC-LAG is deployed. SRRP in a subscriber environment ensures that downstream
traffic is forwarded via the same node that is forwarding upstream traffic. In this fashion,
accounting and QoS for the subscriber are consolidated within a single node.
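A minimal sketch of the MCS peering that carries the IGMP synchronization described above is shown below. The peer address and the exact set of sync options are illustrative and depend on the deployment and release:

    configure redundancy multi-chassis
        peer 10.0.0.2 create
            sync
                igmp
                srrp
                sub-mgmt
                no shutdown
            exit
            no shutdown
        exit
    exit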
Redirection Considerations
For redirection and MCS to work simultaneously in a predictable manner, the redirected interface
and the corresponding subscribers have to be protected by the same MC-LAG. This binds the
redirected interfaces and the subscriber-hosts to the same physical port(s).
• The active subscriber replicates its received IGMP message to the redirected Layer 3
interface. The Layer 3 redirected interface accepts this message:
→ Independently of the corresponding VRRP state, if MC-LAG is not used.
→ Only if the Layer 3 interface is the IGMP querier.
→ Only if MC-LAG is used and is in the active state.
• In all other cases, the IGMP message under the Layer 3 redirected interface is
rejected. Note that the Layer 3 redirected interface can also receive IGMP messages directly
from the downstream node if IGMP forking is activated in the access node.
• The Layer 3 redirected interface will NOT accept an IGMP state update from the MCS
DB unless the Layer 3 interface is a non-querier.
• If the Layer 3 redirected interface is part of an MC-LAG, an IGMP state update
sent to it via the MCS DB is accepted only during the transition phase from the standby
to the active MC-LAG state.
Briefly, IGMP states on the Layer 3 redirected interface are not VRRP aware; however, they are
MC-LAG aware.