
Thursday, February 6, 2014

NTS: Multicast VPN

Multicast VPN




Multicast VPN is defined in RFC 6513 and RFC 6514.
Cisco's Multicast VPN is defined in RFC 6037.



Two solutions
  • PIM/GRE mVPN or draft-rosen (RFC 6037)
    • PIM adjacencies between PEs to exchange mVPN routing information
    • unique multicast address per VPN
    • per-VPN PIM adjacencies between PEs and CEs
    • per-VPN MDT (GRE) tunnels between PEs
    • data MDT tunnels for optimization
  • BGP/MPLS mVPN or NG mVPN
    • BGP peerings between PEs to exchange mVPN routing information
    • PIM messages are carried in BGP
    • BGP autodiscovery for inter-PE tunnels
    • MPLS P2MP inclusive tunnels between PEs
    • selective tunnels for optimization

Only the PIM/GRE mVPN model (Cisco's original implementation) is described below.



MVPN

MVPN combines multicast with MPLS VPN. PE routers establish virtual PIM neighborships with other PE routers that are connected to the same VPN.

The VPN-specific multicast routing and forwarding database is referred to as MVRF.

An MDT (Multicast Distribution Tree) tunnel interface is the interface that the MVRF uses to access the multicast domain. MDT tunnels are point-to-multipoint.
 
Multicast packets are sent from the CE to the ingress PE and then encapsulated and transmitted across the core (over the MDT tunnel). At the egress PE, the encapsulated packets are decapsulated and then sent to the receiving CE.

When sending customer VRF traffic, PEs encapsulate the traffic in their own (S,G) state, where the G is the MDT group address, and the S is the MDT source for the PE. By joining the (S,G) MDT of its PE neighbors, a PE router is able to receive the encapsulated multicast traffic for that VRF.

All VPN packets passing through the provider network are viewed as native multicast packets and are routed based on the routing information in the core network.

To support MVPN, the P routers only need to support native multicast routing.

RTs should be configured so that the receiver VRF has unicast reachability to prefixes in the source VRF.
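A minimal sketch of what this usually means in practice (the VRF name and RT value are placeholders): the VRF on the receiver-side PE imports the RT under which the source-side prefixes are exported, so that the RPF lookup for the customer source succeeds inside the receiver VRF.

IOS
vrf definition VPN
 address-family ipv4
  route-target import 100:1
 exit-address-family

You can then check the RPF resolution for the customer source inside the VRF with "sh ip rpf vrf VPN <source>".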


Data MDT

MVPN also supports optimized VPN traffic forwarding for high-bandwidth applications that have sparsely distributed receivers.

A dedicated multicast group can be used to encapsulate packets from a specific source and an optimized MDT can be created to send traffic only to PE routers connected to interested receivers.

A unique data MDT group (range) per VRF should be used on the PEs.
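A hedged sketch of binding a pool of data MDT groups to a bandwidth threshold (the group range and the 10 kbps threshold are placeholders; the exact syntax varies per platform/release):

IOS
vrf definition VPN
 address-family ipv4
  mdt data 232.0.1.0 0.0.0.255 threshold 10
 exit-address-family

IOS-XR
multicast-routing
 vrf VPN
  address-family ipv4
   mdt data 232.0.1.0/24 threshold 10

Once a customer (S,G) exceeds the threshold, the PE signals a dedicated data MDT group from this pool and only the PEs with interested receivers join it.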



Configuration

IOS
ip multicast-routing
!
ip pim ssm default
!
interface Loopback0
 ip pim sparse-mode
!
interface X
 ip pim sparse-mode
!
ip multicast-routing vrf VPN
!
vrf definition VPN
 address-family ipv4
  mdt default x.x.x.x
  mdt data x.x.x.x y.y.y.y
 exit-address-family
!
router bgp 100
 address-family ipv4 mdt
  neighbor x.x.x.x activate
 exit-address-family


IOS-XR
multicast-routing
 address-family ipv4
  interface Loopback0
   enable
  !
  mdt source Loopback0
 !
 vrf VPN
  address-family ipv4
   mdt default ipv4 x.x.x.x

   mdt data y.y.y.y/24
   interface all enable
!
router bgp 100
 address-family ipv4 mdt
 !
 neighbor x.x.x.x
  address-family ipv4 mdt



"mdt source" is required in IOS-XR (it can be configured under the VRF if it's specific for it).

Sparse mode must be activated on all physical interfaces that multicast traffic will pass through (global or VRF ones) and on the loopback interface used for the BGP VPNv4 peerings.

The RP setup of the CEs must agree with the VRF RP setup on the PEs. If you manually define the RP (static RP) on the CEs, then this must also be done on the PEs (inside the VRF).
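A minimal sketch of a static RP defined inside the VRF on the PE (the RP address is a placeholder and would normally be an address reachable within the VRF):

IOS
ip pim vrf VPN rp-address 192.168.1.1

IOS-XR
router pim
 vrf VPN
  address-family ipv4
   rp-address 192.168.1.1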



Verification
  • There should be (S,G) entries for each BGP neighbor, where S=BGP loopback and G=MDT default address
  • There should be a bidirectional PIM adjacency across a tunnel between the PEs, but inside each PE's VRF
  • If an RP is used on a CE, then each remote CE should know this RP 
  • Sources/Receivers from any site should be viewable on the RP
  • There should be an MDT data (S,G) entry for each pair of customer (S,G) entries


Verification (using only a default mdt)


MDT default (S,G) entries

IOS
R5#sh ip mroute sum
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.255.1), 00:34:36/stopped, RP 10.0.0.1, OIF count: 1, flags: SJCFZ
  (10.0.0.6, 239.255.255.1), 00:24:11/00:02:18, OIF count: 1, flags: JTZ
  (10.0.0.5, 239.255.255.1), 00:34:35/00:02:54, OIF count: 1, flags: FT


R5#sh ip mroute 239.255.255.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.255.1), 00:46:12/stopped, RP 10.0.0.1, flags: SJCFZ
  Incoming interface: FastEthernet0/0.15, RPF nbr 10.1.5.1
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:46:12/00:01:46

(10.0.0.6, 239.255.255.1), 00:35:47/00:02:28, flags: JTZ
  Incoming interface: FastEthernet0/0.57, RPF nbr 10.5.7.7
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:35:47/00:01:46

(10.0.0.5, 239.255.255.1), 00:46:12/00:03:19, flags: FT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0.57, Forward/Sparse, 00:35:46/00:03:11



 
R5#sh bgp ipv4 mdt all 10.0.0.6/32
BGP routing table entry for 100:1:10.0.0.6/32        version 2
Paths: (1 available, best #1, table IPv4-MDT-BGP-Table)
  Not advertised to any peer
  Local
    10.0.0.6 from 10.0.0.1 (10.0.0.1)
      Origin incomplete, metric 0, localpref 100, valid, internal, best
      Originator: 10.0.0.6, Cluster list: 10.0.0.1, 10.0.0.20,
      MDT group address: 239.255.255.1



R5#sh ip pim mdt
  * implies mdt is the default MDT
  MDT Group/Num   Interface   Source                   VRF
* 239.255.255.1   Tunnel1     Loopback0                VPN
 


R5#sh ip pim mdt bgp
MDT (Route Distinguisher + IPv4)               Router ID         Next Hop
  MDT group 239.255.255.1
   100:1:10.0.0.6                              10.0.0.1          10.0.0.6





Verification (using a default and a data mdt)

MDT default (S,G) entries
MDT data (S,G) entries

IOS
R2#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(2.2.2.2, 232.0.0.1), 00:08:53/00:03:27, flags: sT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0.24, Forward/Sparse, 00:08:53/00:03:27

(19.19.19.19, 232.0.0.1), 00:50:48/stopped, flags: sTIZ
  Incoming interface: FastEthernet0/0.24, RPF nbr 20.2.4.4
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:50:48/00:00:11

(19.19.19.19, 232.0.1.0), 00:08:23/00:00:12, flags: sTIZ
  Incoming interface: FastEthernet0/0.24, RPF nbr 20.2.4.4
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:02:47/00:00:12

(19.19.19.19, 232.0.1.1), 00:01:59/00:01:00, flags: sTIZ
  Incoming interface: FastEthernet0/0.24, RPF nbr 20.2.4.4
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:01:59/00:01:00

R2#sh ip pim mdt
  * implies mdt is the default MDT
  MDT Group/Num   Interface   Source                   VRF
* 232.0.0.1       Tunnel0     Loopback0                VPN
  232.0.1.0       Tunnel0     Loopback0                VPN
  232.0.1.1       Tunnel0     Loopback0                VPN




R2#sh ip pim mdt bgp
MDT (Route Distinguisher + IPv4)               Router ID         Next Hop
  MDT group 232.0.0.1
   100:1:19.19.19.19                           19.19.19.19       19.19.19.19




In both scenarios, you can also verify the mGRE tunnels by looking at the tunnel interface itself.

IOS
R5#sh int tun1 | i protocol/transport
  Tunnel protocol/transport multi-GRE/IP


When all PIM adjacencies come up, the PIM neighbors inside a VRF should include all the other MDT PEs (through a tunnel interface) and all the locally connected CEs (through a physical interface).

IOS
R5#sh ip pim vrf VPN nei
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
192.168.59.9      FastEthernet0/0.59       00:00:22/00:01:22 v2    1 / DR S G
10.0.0.6          Tunnel1                  00:25:52/00:01:27 v2    1 / DR S P G





PIM inside a VRF Tunnel

IOS
interface Tunnel1
 ip vrf forwarding VPN-A
 ip address 99.99.99.1 255.255.255.0
 ip pim sparse-mode
 tunnel source 10.0.0.1
 tunnel destination 10.0.0.2

 tunnel vrf VPN-B
!
interface Tunnel1

 ip vrf forwarding VPN-A
 ip address 99.99.99.2 255.255.255.0
 ip pim sparse-mode
 tunnel source 10.0.0.2
 tunnel destination 10.0.0.1
 tunnel vrf VPN-B


"ip vrf forwarding" defines the vrf under which the tunnel (99.99.99.0/24) operates; above it's VPN-A.

"tunnel vrf" defines the vrf which is used to build the tunnel (from 10.0.0.1 to 10.0.0.2); above it's VPN-B. If the tunnel source and destination are in the global routing table, then you don't need to define their vrf with the "tunnel vrf X" command.




Extranet

In an extranet, a site contains either the multicast source or the receivers, but not both (otherwise the multicast stays within a single VPN, i.e. plain intranet MVPN).

The Source PE has the multicast source behind a directly connected CE, through the Source MVRF.

The Receiver PE has one or more receivers behind a directly connected CE, through the Receiver MVRF.

In order to achieve multicast connectivity between the Source and Receiver PEs, you must have the same default MDT group in the source and receiver MVRF.

Two solutions:
  • Configure the Receiver MVRF on the Source PE router
    • you need each receiver MVRF copied on the Source PE router
  • Configure the Source MVRF on the Receiver PE routers
    • you need the Source MVRF copied on all interested Receiver PE routers
In both cases, the receiver MVRF (wherever placed) must import the source MVRF's RT.

Only PIM-SM and PIM-SSM are supported.

The multicast source and the RP must reside in the same site of the MVPN, behind the same PE router.


Receiver MVRF on the Source PE

Source PE (IOS)
ip vrf VPN1-S-MVRF
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 mdt default 232.1.1.1
!
ip vrf VPN2-R-MVRF
 rd 100:2
 route-target export 100:2
 route-target import 100:2
 route-target import 100:1
 mdt default 232.2.2.2
!
ip multicast-routing
ip multicast-routing vrf VPN1-S-MVRF
ip multicast-routing vrf VPN2-R-MVRF


Receiver PE (IOS)
ip vrf VPN2-R-MVRF
 rd 100:2
 route-target export 100:2
 route-target import 100:2
 route-target import 100:1
 mdt default 232.2.2.2
!
ip multicast-routing
ip multicast-routing vrf VPN2-R-MVRF



Source MVRF on the Receiver PE

Source PE (IOS)
ip vrf VPN1-S-MVRF
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 mdt default 232.1.1.1
!
ip multicast-routing
ip multicast-routing vrf VPN1-S-MVRF


Receiver PE (IOS)
ip vrf VPN1-S-MVRF
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 mdt default 232.1.1.1
!

ip vrf VPN2-R-MVRF
 rd 100:2
 route-target export 100:2
 route-target import 100:2
 route-target import 100:1
 mdt default 232.2.2.2
!
ip multicast-routing 

ip multicast-routing vrf VPN1-S-MVRF
ip multicast-routing vrf VPN2-R-MVRF

What matters most in both cases when replicating the MVRF is that an MVRF on the Source PE and an MVRF on the Receiver PE share the same default MDT group (excluding the Source MVRF).


Fixing RPF

There are two options:

static mroute between VRFs

Receiver PE (IOS)
ip mroute vrf VPN2-R-MVRF 192.168.1.1 255.255.255.255 fallback-lookup vrf VPN1-S-MVRF


group-based VRF selection

Receiver PE (IOS)
ip multicast vrf VPN2-R-MVRF rpf select vrf VPN1-S-MVRF group-list 1
ip multicast vrf VPN2-R-MVRF rpf select vrf VPN3-S-MVRF group-list 3
!
access-list 1 permit 231.0.0.0 0.255.255.255
access-list 3 permit 233.0.0.0 0.255.255.255



Inter-AS MVPN

To establish a Multicast VPN between two ASes, an MDT-default tunnel must be set up between the involved PE routers. The appropriate MDT-default group is configured on the PE router and is unique for each VPN.

All three (A, B, C) inter-as options are supported. For option A nothing extra is required, since each AS is completely isolated from the others.

In order to solve the various RPF issues imposed by the limited visibility of PEs between different ASes, each VPNv4 route carries a new transitive attribute (the BGP connector attribute) that defines the route's originator.

Inside a common AS, the BGP connector attribute is the same as the next hop. Between ASes the BGP connector attribute stores (in case of ipv4 mdt) the ip address of the PE router that originated the VPNv4 prefix and is preserved even after the next hop attribute is rewritten by ASBRs.

The BGP connector attribute also helps ASBRs and receiver PEs insert the RPF vector needed to build the inter-AS MDT for source PEs in remote ASes.

The RPF proxy vector is a PIM TLV that contains the ip address of the router that will be used as proxy for RPF checks (helping in the forwarding of PIM Joins between ASes).

A new PIM hello option has also been introduced along with the PIM RPF Vector extension to determine if the upstream router is capable of parsing the new TLV. An RPF Vector is included in PIM messages only when all PIM neighbors on an RPF interface support it.

The RPF proxy (usually the ASBR) removes the vector from the PIM Join message when it sees itself in it.

  • BGP connector attribute
    • used in RPF checks inside a VRF
  • RPF proxy
    • used in RPF checks in the core

Configuration Steps
  • Option A
    • no MDT sessions between ASes are required
    • intra-as MDT sessions are configured as usual
  • Option B
    • intra-as MDT sessions between PEs, ASBRs and RRs
    • inter-as MDT session between ASBRs
    • RPF proxy vector on all PEs for their VRFs
    • RPF proxy vector on all Ps and ASBRs
    • next-hop-self on the MDT ASBRs
  • Option C
    • intra-as MDT sessions between PEs and RRs
    • inter-as MDT sessions between RRs
    • RPF proxy vector on all PEs for their VRFs
    • RPF proxy vector on all Ps and ASBRs
    • next-hop-unchanged on the MDT RRs

MSDP will be required if an RP is used in both ASes. Prefer to use SSM in the core of both ASes.
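A hedged sketch of the extra knobs for an Option B setup (addresses and the VRF name are placeholders; the proxy-vector command syntax is recalled from the Inter-AS mVPN feature and may differ per release, so verify it against your software):

IOS
! PE routers: RPF proxy vector for the VRF
ip multicast vrf VPN rpf proxy rd vector
!
! P routers and ASBRs: RPF proxy vector in the core
ip multicast rpf proxy vector
!
! MDT ASBRs: next-hop-self on the MDT session
router bgp 100
 address-family ipv4 mdt
  neighbor x.x.x.x activate
  neighbor x.x.x.x next-hop-self
 exit-address-family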



NTS: Multicast

Multicast




PIM-DM (Protocol Independent Multicast- Dense Mode) is defined in RFC 3973.
PIM-SM (PIM - Sparse Mode) is defined in RFC 4601.
PIM-SSM (PIM - Source Specific Multicast) is defined in RFC 4607.
BSR (BootStrap Router) for PIM is defined in RFC 5059.



Multicast Ranges

Multicast range:
  • 224.0.0.0/4
  • FF00::/8

Important ranges (IANA):
  • Link-local range (TTL=1)
    • 224.0.0.0/24 
    • FFx2::/16
  • SSM range:
    • 232.0.0.0/8
    • FF3x::/32 (FF3x::8000:0000  - FF3x::FFFF:FFFF)
  • GLOP range: 233.0.0.0/8
  • Admin-scope range: 239.0.0.0/8

Link-local addresses (224.0.0.0/24) are not constrained by IGMP snooping. The same also applies to the x.0.0.x and x.128.0.x groups (due to the 32:1 multicast-to-Ethernet MAC mapping).
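For example (using the standard 23-bit group-to-MAC mapping; the groups are chosen purely for illustration), all of the following map to the same Ethernet MAC address as the link-local OSPF group:

224.0.0.5 => 01:00:5e:00:00:05
225.0.0.5 => 01:00:5e:00:00:05
238.128.0.5 => 01:00:5e:00:00:05

Since snooping entries are based on these MAC addresses, such groups are flooded the same way as link-local traffic.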


GLOP range

GLOP addressing is defined in RFC 3180.

Every 16bit ASN has its own GLOP 233.X.Y.0/24 range.

e.g. for AS 12345, one dec=>hex and two hex=>dec conversions need to be made.

12345 => 0x3039
0x30 => 48 (X)
0x39 => 57 (Y)

AS 12345 => 233.48.57.0/24

Organizations with a 32-bit ASN may apply for space in AD-HOC Block III (233.252.0.0 - 233.255.255.255) also known as Extended GLOP (EGLOP), or consider using IPv6 multicast addresses.



Multicast Routing Protocols

Main functionalities
  • setup multicast forwarding state
  • exchange information about the multicast forwarding state

Protocols
  • DVMRP
  • PIM-DM
  • PIM-SM
Expect to see only PIM-SM being used in most networks.


IOS
ip multicast-routing

IOS-XR
multicast-routing


Enabling multicast-routing on an interface in IOS-XR will enable PIM and IGMP automatically. There is no need to configure the "router pim" command (unless something extra is required, like MDT or RP), since the PIM mode is automatically determined by the group range.

IOS-XR
GSR#sh pim group-map
Fri Jun  9 06:11:08.463 UTC

IP PIM Group Mapping Table
(* indicates group mappings being used)
(+ indicates BSR group mappings active in MRIB)

Group Range         Proto Client   Groups RP address      Info

224.0.1.39/32*      DM    perm     0      0.0.0.0
224.0.1.40/32*      DM    perm     1      0.0.0.0
224.0.0.0/24*       NO    perm     0      0.0.0.0
232.0.0.0/8*        SSM   config   1      0.0.0.0
224.0.0.0/4*        SM    static   0      0.0.0.0         RPF: Null,0.0.0.0


Each multicast group can be in one of these modes: sparse, dense, bidir, SSM. Each interface can operate in multiple modes at the same time, depending on the egress multicast group.

In IOS-XR, the router itself is also listed as a pim neighbor with a "*".



PIM-SM RP

For PIM-SM to work properly, all routers in a domain must know and agree on the active RP for each multicast group (Group-to-RP mappings).

Group-to-RP mappings can be created using:
  • Static group-to-RP mapping
  • Auto-RP
    • uses dense-mode (configure "sparse-dense-mode", or "sparse-mode" plus "ip pim autorp listener")
    • uses 224.0.1.39 for RP announcements (from RP to MA)
    • uses 224.0.1.40 for MA announcements (from MA to all PIM routers)
    • the priority of each RP cannot be defined (the RP with the highest ip address wins)
    • the interval/scope of Auto-RP announcements can be defined
    • use "ip multicast boundary ACL in/out filter-autorp" to filter Auto-RP announcements entering/leaving your network
  • PIM BSR (bootstrap router)
    • uses sparse-mode (configure "no ip pim dm-fallback" in dense environments)
    • uses 224.0.0.13 for BSR announcements (from BSR to all PIM routers)
    • unicast for RP announcements (from RP to BSR router)
    • the priority of each RP can be defined
    • the interval/scope of BSR announcements cannot be defined
    • use "ip pim bsr-border" to filter BSR announcements from entering/leaving your network
On hub-and-spoke networks, when Auto-RP announcements must pass between the spokes, you cannot use nbma-mode, because it works only for sparse-mode traffic (and the Auto-RP announcements are dense). You have to use BSR or create a PIM-enabled tunnel between the spokes.

Static RP and BSR are the most common ways to configure a Group-to-RP mapping.


Static RP

IOS
interface Loopback0
 ip pim sparse-mode
!
ip pim rp-address 1.1.1.1

IOS-XR
router pim
 address-family ipv4
  rp-address 1.1.1.1

  interface Loopback0
   enable



Auto-RP

IOS
interface Loopback0
 ip pim sparse-mode
!
ip pim send-rp-announce Loopback0 scope 10
ip pim send-rp-discovery Loopback0 scope 10




IOS-XR
router pim
 address-family ipv4
  auto-rp mapping-agent Loopback0 scope 10 

  auto-rp candidate-rp Loopback0 scope 10
  interface Loopback0
   enable


In IOS-XR, Auto-RP is not supported for VRFs.

In IOS, you can use an ACL to filter the auto-rp groups.

Auto-RP requires either sparse-dense mode or sparse mode and auto-rp listener (by default enabled in IOS-XR).
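A minimal sketch of the sparse-mode plus listener alternative on IOS (the interface name is a placeholder); with "ip pim autorp listener", the two Auto-RP groups (224.0.1.39/224.0.1.40) are flooded in dense mode while everything else stays sparse:

IOS
ip pim autorp listener
!
interface X
 ip pim sparse-mode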


BSR


IOS
interface Loopback0
 ip pim sparse-mode
!
ip pim bsr-candidate Loopback0
ip pim rp-candidate Loopback0

IOS-XR
router pim
 address-family ipv4

  interface Loopback0
   enable
  !
  bsr candidate-bsr 1.1.1.1

  bsr candidate-rp 1.1.1.1


When static RP is configured together with Auto-RP or BSR for the same group-to-RP mappings, the Auto-RP/BSR-learned mappings take precedence, unless "override" is configured with the static rp-address.
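A minimal sketch of forcing the static mapping to win (the RP address is a placeholder; an optional ACL can limit the groups the static mapping applies to):

IOS
ip pim rp-address 1.1.1.1 override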



PIM-SSM


PIM-SSM uses PIM-SM + IGMPv3/MLDv2.

Summary
  • Receivers report interest for a particular source using IGMPv3
  • PIM routers do RPF lookups for the source and join upstream towards the source
  • An SPT is created between the source and the receivers
  • Multicast traffic starts to flow from source to the receivers on the SPT

Detailed analysis

  1. Receiver
    1. sends an IGMPv3 Report (S,G) to the LAN
  2. Receiver DR
    1. receives the IGMP Report (S,G) from the Receiver
    2. adds the incoming IGMP interface to OIL for (S,G)
    3. performs RPF lookup for source S
    4. sends a PIM Join (S,G) to the upstream router out of the RPF interface
  3. Upstream Routers (from Receiver DR towards the Source)
    1. add the incoming PIM interface to OIL for (S,G)
    2. perform RPF lookup for source S
    3. send a PIM Join (S,G) to the next upstream router out of the RPF interface (*)
    4. ...

(*) This propagation of PIM Join messages out of the RPF interface continues until the source DR is reached or an upstream router already has multicast forwarding state for this group.


IOS
ip pim ssm default
!
interface X
 ip pim sparse-mode



IOS-XR
multicast-routing
 address-family ipv4
  interface X
   enable



SSM is enabled by default on IOS-XR for 232.0.0.0/8.

In IOS-XR, interfaces enabled under multicast-routing run PIM sparse-mode by default.
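A hedged sketch of running SSM on a non-default range on IOS (the ACL name, group range and interface are placeholders); IGMPv3 is needed on the receiver-facing interface so the (S,G) reports can be processed:

IOS
ip pim ssm range SSM-GROUPS
!
ip access-list standard SSM-GROUPS
 permit 239.232.0.0 0.0.255.255
!
interface X
 ip pim sparse-mode
 ip igmp version 3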



PIM-SM


Summary
  • A receiver asks for multicast data and an RPT (*,G) is built from the receiver DR to the RP
  • When the source transmits the multicast data, first packets are sent encapsulated into PIM Register messages by the source DR to the RP and then an SPT (S,G) is built from the RP to the source
  • When the multicast data reaches the receiver, there is a new SPT (S,G) built from the receiver DR directly to the source
  • Receiver DR sends a prune message to the RP to stop the initial RPT

Detailed analysis

Phase #1 (Receiver asks for multicast data)
  1. Receiver
    1. sends an (*,G) IGMP Report to the LAN
  2. Receiver DR Router
    1. receives the (*,G) IGMP Report from the Receiver
    2. adds the incoming IGMP interface to OIL for (*,G)
    3. performs RPF lookup for RP
    4. sends a (*,G) PIM Join to the upstream router out of the RPF interface
  3. Upstream Routers (from Receiver DR towards the RP)
    1. receive the (*,G) PIM Join from the downstream router
    2. add the incoming PIM interface to OIL for (*,G)
    3. perform RPF lookup for RP
    4. send a (*,G) PIM Join to the next upstream router out of the RPF interface (*)
Phase #2 (Source sends multicast data and Receiver receives it through the RP)
  1. Source S
    1. starts sending native multicast data (S,G) to the LAN
  2. Source DR Router
    1. receives native multicast data from the source
    2. adds the incoming multicast data interface to IIF
    3. encapsulates multicast data in PIM Register messages
    4. sends the PIM Register messages to the RP as unicast packets
  3. RP (with active RPT)
    1. receives and decapsulates the PIM Register messages
    2. sends the decapsulated native multicast data to the receiver down the RPT
  4. Receiver
    1. receives the native multicast data through the RP
  5. RP (with active RPT)
    1. performs RPF lookup for source S
    2. sends a (S,G) PIM Join to the upstream router (towards source) out of the RPF interface
  6. Upstream Routers (from RP towards the Source)
    1. receive the (S,G) PIM Join
    2. add the incoming PIM interface to OIL for (S,G)
    3. perform RPF lookup for source S
    4. send a (S,G) PIM Join to the next upstream router (towards source) out of the RPF interface (*)
  7. Source DR Router
    1. receives the (S,G) PIM Join
    2. starts sending native multicast data to the RP
  8. RP (with active RPT)
    1. receives duplicate multicast data (encapsulated from Source DR and native from source/upstream)
    2. sends a PIM Register-Stop message to the Source DR
  9. Source DR Router
    1. receives the PIM Register-Stop message from the RP
    2. stops sending PIM Register messages with encapsulated multicast data to the RP
  10. RP (with active RPT)
    1. receives only native multicast data
    2. sends the native multicast data to the receivers down the RPT
  11. Receiver
    1. receives the native multicast data through the RP
Phase #3 (Receiver receives the multicast data directly from the Source)
  1. Receiver DR Router
    1. receives multicast data (S,G) from source S through the RP
    2. performs RPF lookup for source S
    3. sends a (S,G) PIM Join to the upstream router out of the RPF interface 
  2. Upstream Routers (from Receiver DR towards the Source DR)
    1. receive the (S,G) PIM Join from the downstream router
    2. add the incoming PIM interface to OIL for (S,G)
    3. perform RPF lookup for source S
    4. send a (S,G) PIM Join to the next upstream router out of the RPF interface (*)
  3. Source DR Router
    1. receives the (S,G) PIM Join from the downstream router
    2. starts sending native multicast data to the Receiver DR Router
  4. Receiver DR Router
    1. receives duplicate multicast data (from source DR through SPT and from RP through RPT)
    2. sends a PIM Prune (S,G,RPT) to the upstream router out of the RPF interface
  5. Upstream Routers (from Receiver DR towards the RP)
    1. receive the PIM Prune (S,G,RPT)
    2. remove the incoming PIM interface from OIL for (*,G)
    3. if OIL is null, send a PIM Prune (S,G) to the upstream router out of the RPF interface
  6. RP
    1. receives the PIM Prune (S,G)
    2. removes the incoming PIM interface from OIL for (*,G)
    3. if OIL is null, sends a PIM Prune (S,G) to the upstream router out of the RPF interface
  7. Source DR
    1. receives the PIM Prune (S,G)
    2. removes the incoming PIM interface from OIL for (S,G)

(*) This propagation of PIM Join messages out of the RPF interface continues until the RP or the Source DR is reached or an upstream router already has multicast forwarding state for this group.

PIM Prune messages might not be sent, if there is another active multicast state for the same group in the router.

The Source DR periodically sends a PIM Null-Register message (without any multicast data inside) to the RP, in order to inform it that the source is still active.



IOS
interface X
 ip pim sparse-mode



IOS-XR
multicast-routing
 address-family ipv4
  interface X
   enable




OIL = Outgoing Interface List
IIF = Incoming Interface

In the shortest path tree (SPT), the root of the tree is the source.
In the shared path tree (RPT), the root of the tree is the RP.

PIM Joins are sent to 224.0.0.13.

The threshold for switching from the RPT to the SPT is 0 by default, which means that once the Receiver DR receives the first multicast packet and learns the source, it sends a (S,G) PIM Join towards the source. When it starts receiving multicast data from the SPT, it sends a PIM Prune towards the RP for the traffic received via the RPT.
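A minimal sketch of disabling this immediate switchover on the last-hop (Receiver DR) routers, so traffic keeps flowing over the RPT (an optional group-list ACL can limit which groups this applies to):

IOS
ip pim spt-threshold infinity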




PIM Bidir

PIM Bidir is significantly simpler in operation than PIM-SM.

It eliminates the maintenance of (S,G) entries (only (*,G) entries are kept) and there are no data-driven events, so packets never need to be signaled or preserved (much more scalable for many-to-many applications).

Bidir PIM uses the Designated Forwarder (DF) election mechanism (based on routing protocol cost to the RP) to elect a single router on each link, which becomes responsible for the following tasks:
  • picking up packets from the link and forwarding them upstream towards the RP
  • forwarding downstream multicast packets from the RP onto the link

When configuring both sparse and bidir groups, you need to explicitly define their group ranges, because everything excluded from bidir falls back to dense mode by default (see the sketch further below).

IOS
ip pim bidir-enable
ip pim rp-address 1.1.1.1 bidir


IOS-XR
router pim
 address-family ipv4
  rp-address 1.1.1.1 bidir



In some IOS-XR releases you might need to define the mcast bidir range of group addresses.
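A hedged sketch of the explicit group definition mentioned above, on IOS (the RP addresses, ACL names and group ranges are placeholders):

IOS
ip pim bidir-enable
!
ip pim rp-address 1.1.1.1 BIDIR-GROUPS bidir
ip pim rp-address 2.2.2.2 SPARSE-GROUPS
!
ip access-list standard BIDIR-GROUPS
 permit 239.100.0.0 0.0.255.255
!
ip access-list standard SPARSE-GROUPS
 permit 239.200.0.0 0.0.255.255

Groups matching neither ACL get no RP mapping and fall back to dense mode (unless dm-fallback is disabled). The verification below refers to the simpler configuration above, where the whole 224.0.0.0/4 range is mapped to a bidir RP.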

IOS
R2#sh ip pim rp map
PIM Group-to-RP Mappings

Group(s): 224.0.0.0/4, Static, Bidir Mode
    RP: 1.1.1.1 (?)


IOS-XR
GSR#sh pim group-map
Tue Feb  4 13:57:55.368 UTC

IP PIM Group Mapping Table
(* indicates group mappings being used)
(+ indicates BSR group mappings active in MRIB)

Group Range         Proto Client   Groups RP address      Info

224.0.1.39/32*      DM    perm     0      0.0.0.0
224.0.1.40/32*      DM    perm     1      0.0.0.0
224.0.0.0/24*       NO    perm     0      0.0.0.0
232.0.0.0/8*        SSM   config   0      0.0.0.0
224.0.0.0/4*        BD    config   1      1.1.1.1         RPF: Gi0/2/1/2.28,2.2.28.2
224.0.0.0/4         SM    static   0      0.0.0.0         RPF: Null,0.0.0.0



You will see only (*,G) entries for the bidir groups, no (S,G) entries.



PIM Register Tunnels

In recent software releases, PIM uses tunnel interfaces for the RP Register communication. You can use "sh ip pim tunnel" to verify the PIM register tunnels, either inside or outside a VRF. If a tunnel that should exist is missing, then there is probably something wrong.

IOS
R1#sh ip pim tunnel
Tunnel0
  Type  : PIM Encap
  RP    : 10.0.0.1*
  Source: 10.0.0.1
Tunnel1*
  Type  : PIM Decap
  RP    : 10.0.0.1*
  Source: -
 

R1#sh ip pim vrf VPN tunnel
Tunnel1
  Type  : PIM Encap
  RP    : 10.0.0.1
  Source: 10.1.7.1


R1#sh int tun1 | i protocol/transport
  Tunnel protocol/transport PIM/IPv4



IOS-XR
GSR#sh pim tunnel info all
Fri Jun  9 06:04:51.319 UTC

Interface           RP Address      Source Address

Encapstunnel0       10.0.0.1        10.0.0.1
Decapstunnel0       0.0.0.0          -


The router acting as RP should have at least two PIM tunnels: one for Encapsulation (usually src=RP) and another one for Decapsulation. All other PIM routers should have only one for Encapsulation (src=local PIM address, dst=RP).



IGMP

You can use the following to make a router act as a multicast receiver.

IOS
interface X
 ip igmp join-group 224.1.1.1


IOS-XR
router igmp
 interface X
  join-group 232.1.1.1 192.168.1.1


PIM (sparse) is also required on the interface where IGMP is enabled.

"ip igmp static-group x.x.x.x" can also be used, if we want to avoid having the multicast traffic being processed by the router.

IGMPv3 for SSM requires that you also define the source of the multicast traffic.
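A minimal sketch on IOS (group/source addresses and the interface are placeholders; the "source" keyword on join-group needs a release that supports it):

IOS
interface X
 ip igmp version 3
 ip igmp join-group 232.1.1.1 source 192.168.1.1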

In IOS, when using multicast ping to test connectivity, the multicast packets go out through all multicast-enabled interfaces by default (regardless of the source IP/interface chosen in the CLI). If you want to send them out through a specific interface, then you need to use extended ping and choose that interface in the "Interface" option.

IOS
R1#ping
Protocol [ip]:
Target IP address: 232.1.1.1
Repeat count [1]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Interface [All]: FastEthernet0/0

Time to live [255]:
Source address: 1.1.1.1