
Thursday, February 6, 2014

NTS: Inter-AS MPLS L3VPN

Inter-AS MPLS L3VPN




Inter-AS MPLS L3VPN Options are defined in RFC 4364.



Inter-AS Options
  • Inter-AS Option A (Back-to-Back VRF)
    • one logical/physical interface per VRF in the interconnection
    • one PE-CE eBGP/IGP session per VRF between ASBRs
    • IP traffic between ASBRs
    • no need for common RDs/RTs between ASNs 
    • 2 LSPs and 1 IP path from one PE to the other PE
  • Inter-AS Option B (MP-eBGP between ASBRs)
    • one physical/logical interface for all VRFs in the interconnection
    • eBGP VPNv4 between ASBRs
    • MPLS traffic between ASBRs
    • common RDs/RTs between ASNs (unless RT rewrite is used)
    • next-hop-self on each ASBR for iBGP
      • 3 LSPs from one PE to the other PE
    • redistribute connected/static on each ASBR for the interconnection
      • 2 LSPs from one PE to the other PE
      • filter to redistribute only the peer's address
    • multihop (loopback) peering between ASBRs
      • 2 LSPs from one PE to the other PE
      • static routes for peer's loopback on each ASBR
      • LDP between ASBRs
      • MPLS static label binding for peer's loopback pointing to interconnection on each ASBR
  • Inter-AS Option C (Multihop MP-eBGP between RRs/PEs)
    • one physical/logical interface for all VRFs in the interconnection
    • labeled eBGP session between ASBRs for next-hop exchange
    • multihop eBGP VPNv4 session between RRs
    • MPLS traffic between ASBRs
    • common RDs/RTs between ASNs (unless RT rewrite is used) 
    • change next-hop on each VPNv4 RR for the eBGP session (default)
      • 2 LSPs from one PE to the other PE
    • next-hop-unchanged on each VPNv4 RR for the eBGP session
      • 1 LSP from one PE to the other PE
    • eBGP session between ASBRs with directly connected interfaces
      • next-hop-self on each ASBR for the iBGP sessions
    • multihop (loopback) eBGP session between ASBRs with loopbacks
      • static routes for peer's loopback on each ASBR
      • LDP between ASBRs
      • MPLS static label binding for peer's loopback pointing to interconnection on each ASBR

The transport label changes whenever the next-hop changes.



Inter-AS Option A

ASBR-1

IOS
ip vrf VPN1
 rd 1:100
 route-target both 1:100
!
ip vrf VPN2
 rd 1:200
 route-target both 1:200
!
interface FastEthernet0/0
 description ** Inter-AS NNI **
!
interface FastEthernet0/0.10
 description ** Customer VPN1 **
 encapsulation dot1q 10
 ip vrf forwarding VPN1
 ip address 10.10.10.1 255.255.255.0
!
interface FastEthernet0/0.20
 description ** Customer VPN2 **
 encapsulation dot1q 20
 ip vrf forwarding VPN2
 ip address 20.20.20.1 255.255.255.0
!
router bgp 1
 neighbor 1.1.1.1 remote-as 1
 neighbor 1.1.1.1 update-source Loopback0
 neighbor 1.1.1.1 description iBGP-VPNv4
!
 address-family vpnv4
  neighbor 1.1.1.1 activate
  neighbor 1.1.1.1 send-community extended
  neighbor 1.1.1.1 next-hop-self
 exit-address-family
!
 address-family ipv4 vrf VPN1
  neighbor 10.10.10.2 remote-as 2
  neighbor 10.10.10.2 activate
 exit-address-family
!
 address-family ipv4 vrf VPN2
  neighbor 20.20.20.2 remote-as 2
  neighbor 20.20.20.2 activate
 exit-address-family



ASBR-2

IOS
ip vrf VPN1
 rd 2:100
 route-target both 2:100
!
ip vrf VPN2
 rd 2:200
 route-target both 2:200
!
interface FastEthernet0/0
 description ** Inter-AS NNI **
!
interface FastEthernet0/0.10
 description ** Customer VPN1 **
 encapsulation dot1q 10
 ip vrf forwarding VPN1
 ip address 10.10.10.2 255.255.255.0
!
interface FastEthernet0/0.20
 description ** Customer VPN2 **
 encapsulation dot1q 20
 ip vrf forwarding VPN2
 ip address 20.20.20.2 255.255.255.0
!
router bgp 2
 neighbor 2.2.2.2 remote-as 2
 neighbor 2.2.2.2 update-source Loopback0
 neighbor 2.2.2.2 description iBGP-VPNv4
!
 address-family vpnv4
  neighbor 2.2.2.2 activate
  neighbor 2.2.2.2 send-community extended
  neighbor 2.2.2.2 next-hop-self
 exit-address-family
!
 address-family ipv4 vrf VPN1
  neighbor 10.10.10.1 remote-as 1
  neighbor 10.10.10.1 activate
 exit-address-family
!
 address-family ipv4 vrf VPN2
  neighbor 20.20.20.1 remote-as 1
  neighbor 20.20.20.1 activate
 exit-address-family



You can also use a different router-id per VRF, via the "bgp router-id" command under each VRF address-family.
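
e.g. (hypothetical router-id values):

IOS
router bgp 1
 address-family ipv4 vrf VPN1
  bgp router-id 10.255.0.1
 exit-address-family
!
 address-family ipv4 vrf VPN2
  bgp router-id 10.255.0.2
 exit-address-family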



Inter-AS Option B

ASBR-1

IOS
interface FastEthernet0/0
 description ** Inter-AS NNI **
 ip address x.x.x.x
 mpls bgp forwarding
!
router bgp 1
 no bgp default route-target filter
 neighbor PE-1 remote-as 1
 neighbor PE-1 update-source Loopback0
 neighbor PE-1 description MP-iBGP with PE-1
 neighbor ASBR-2 remote-as 2
 neighbor ASBR-2 description MP-eBGP with ASBR-2
 no auto-summary
!
 address-family vpnv4
  neighbor PE-1 activate
  neighbor PE-1 send-community extended
  neighbor PE-1 next-hop-self
  neighbor ASBR-2 activate
  neighbor ASBR-2 send-community extended
 exit-address-family



ASBR-2

IOS
interface FastEthernet0/0
 description ** Inter-AS NNI **
 ip address x.x.x.x
 mpls bgp forwarding
!
router bgp 2
 no bgp default route-target filter
 neighbor PE-2 remote-as 2
 neighbor PE-2 update-source Loopback0
 neighbor PE-2 description MP-iBGP with PE-2
 neighbor ASBR-1 remote-as 1
 neighbor ASBR-1 description MP-eBGP with ASBR-1
!
 address-family vpnv4
  neighbor PE-2 activate
  neighbor PE-2 send-community extended
  neighbor PE-2 next-hop-self
  neighbor ASBR-1 activate
  neighbor ASBR-1 send-community extended
 exit-address-family
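
As a sketch of the Option B multihop (loopback) variant on ASBR-1, assuming the peer ASBR's loopback is 19.19.19.19, reachable via the interconnection next-hop 12.1.19.19 (hypothetical addressing):

IOS
ip route 19.19.19.19 255.255.255.255 12.1.19.19
!
mpls static binding ipv4 19.19.19.19 255.255.255.255 output 12.1.19.19 implicit-null
!
interface FastEthernet0/0
 mpls bgp forwarding
!
router bgp 1
 neighbor 19.19.19.19 remote-as 2
 neighbor 19.19.19.19 ebgp-multihop 2
 neighbor 19.19.19.19 update-source Loopback0
 !
 address-family vpnv4
  neighbor 19.19.19.19 activate
  neighbor 19.19.19.19 send-community extended
 exit-address-family

If LDP runs between the ASBRs instead, the static label binding is not needed.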





Inter-AS Option C

RR-1

IOS
router bgp 1
 no synchronization
 neighbor PE-1 remote-as 1
 neighbor PE-1 update-source Loopback0
 neighbor PE-1 description MP-iBGP with PE-1
 neighbor ASBR-1 remote-as 1
 neighbor ASBR-1 update-source Loopback0
 neighbor ASBR-1 description MP-iBGP with ASBR-1
 neighbor RR-2 remote-as 2
 neighbor RR-2 ebgp-multihop 255
 neighbor RR-2 update-source Loopback0
 neighbor RR-2 description MP-eBGP with RR-2
 no auto-summary
!
 address-family vpnv4
  neighbor PE-1 activate
  neighbor PE-1 send-community extended
  neighbor PE-1 route-reflector-client
  neighbor ASBR-1 activate
  neighbor ASBR-1 send-community extended
  neighbor ASBR-1 route-reflector-client
  neighbor RR-2 activate
  neighbor RR-2 send-community extended
  neighbor RR-2 next-hop-unchanged
 exit-address-family



ASBR-1

IOS
interface FastEthernet0/0
 description ** Inter-AS NNI **
 ip address x.x.x.x
 mpls bgp forwarding
!
route-map PE2-TO-IGP permit 10
 match ip address PE-2
!
router IGP 100
 redistribute bgp 1 route-map PE2-TO-IGP
!
router bgp 1
 no synchronization
 network PE-1 mask 255.255.255.255
 neighbor RR-1 remote-as 1
 neighbor RR-1 update-source Loopback0
 neighbor RR-1 description MP-iBGP to RR-1
 neighbor ASBR-2 remote-as 2
 neighbor ASBR-2 send-label
 neighbor ASBR-2 description MP-eBGP to ASBR-2
 no auto-summary
!
 address-family vpnv4
  neighbor RR-1 activate
  neighbor RR-1 send-community extended
 exit-address-family




ASBR-2

IOS
interface FastEthernet0/0
 description ** Inter-AS NNI **
 ip address x.x.x.x
 mpls bgp forwarding
!
route-map PE1-TO-IGP permit 10
 match ip address PE-1
!
router IGP 200
 redistribute bgp 2 route-map PE1-TO-IGP
!
router bgp 2
 network PE-2 mask 255.255.255.255
 neighbor RR-2 remote-as 2
 neighbor RR-2 update-source Loopback0
 neighbor RR-2 description MP-iBGP to RR-2
 neighbor ASBR-1 remote-as 1
 neighbor ASBR-1 send-label
 neighbor ASBR-1 description MP-eBGP to ASBR-1
!
 address-family vpnv4
  neighbor RR-2 activate
  neighbor RR-2 send-community extended
 exit-address-family




RR-2

IOS
router bgp 2
 neighbor PE-2 remote-as 2
 neighbor PE-2 update-source Loopback0
 neighbor PE-2 description MP-iBGP with PE-2
 neighbor ASBR-2 remote-as 2
 neighbor ASBR-2 update-source Loopback0
 neighbor ASBR-2 description MP-iBGP with ASBR-2
 neighbor RR-1 remote-as 1
 neighbor RR-1 ebgp-multihop 255
 neighbor RR-1 update-source Loopback0
 neighbor RR-1 description MP-eBGP with RR-1
!
 address-family vpnv4
  neighbor PE-2 activate
  neighbor PE-2 send-community extended
  neighbor PE-2 route-reflector-client
  neighbor ASBR-2 activate
  neighbor ASBR-2 send-community extended
  neighbor ASBR-2 route-reflector-client
  neighbor RR-1 activate
  neighbor RR-1 send-community extended
  neighbor RR-1 next-hop-unchanged
 exit-address-family





In IOS-XR, in order to send IPv4 prefixes with labels over a labeled BGP session, the IOS-XR router must be the originator of the prefixes. An IOS router, on the other hand, can send labeled IPv4 prefixes over a labeled BGP session whether or not it is the originator of those prefixes.

If an output route-map is applied on a labeled BGP session, then labels will be added only to those prefixes that have the command "set mpls-label" under the relevant statement in the route-map. Generally, if a router is advertising IPv4 prefixes with labels, then you can use an output route-map (with the "set mpls-label" command) to specify which prefixes will be sent with a label.
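
As a sketch (the prefix-list and route-map names and the prefix are hypothetical), a labeled BGP speaker could advertise only the PE loopbacks with labels:

IOS
ip prefix-list PE-LOOPBACKS permit 2.2.2.2/32
!
route-map SET-LABEL permit 10
 match ip address prefix-list PE-LOOPBACKS
 set mpls-label
!
route-map SET-LABEL permit 20
!
router bgp 1
 neighbor ASBR-2 route-map SET-LABEL out

Sequence 20 lets the remaining prefixes through, but without a label.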

You need to disable the default RT filter on the ASBRs, unless they have all the VRFs locally configured or they are VPNv4 RRs.

In most IOS software releases, the command "mpls bgp forwarding" is added automatically under the eBGP peering interface when a VPNv4 or labeled BGP session is configured between directly connected peers. If you use loopbacks for peering, then you must manually configure it. Always verify its existence, together with the interface's mpls operational state.

IOS
R1#sh mpls int
Interface              IP            Tunnel   BGP Static Operational
FastEthernet0/0.13     Yes (ldp)     No       No  No     Yes
FastEthernet0/0.30     No            No       Yes No     Yes


Generally, Cisco software requires a /32 route for each next-hop that should be label switched. With Inter-AS Options B/C, in IOS-XR you must manually add a /32 static route for the peer address of the interconnection so that a label is created for it. IOS automatically creates a /32 connected route when the relevant VPNv4 or labeled BGP session comes up.

IOS-XR
router static
 address-family ipv4 unicast
  10.10.10.2/32 GigabitEthernet0/2/1/2



IOS
Dec 29 15:45:30.703: %BGP-5-ADJCHANGE: neighbor 10.10.10.2 Up
Dec 29 15:45:30.707: CONN: add connected route, idb: FastEthernet0/0.30, addr: 10.10.10.2, mask: 255.255.255.255



If you want to achieve load-sharing in an MPLS L3VPN environment with RRs, you can use a different RD per PE in combination with BGP multipath.
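
As a sketch with hypothetical values: give each PE serving the VPN a unique RD while keeping a common RT, and enable multipath on the receiving PE:

IOS
! PE-1
ip vrf VPN1
 rd 1:101
 route-target both 1:100
!
! PE-2
ip vrf VPN1
 rd 1:102
 route-target both 1:100
!
! receiving PE
router bgp 1
 address-family ipv4 vrf VPN1
  maximum-paths ibgp 2
 exit-address-family

With different RDs the RR treats the two paths as different VPNv4 prefixes and reflects both, so the receiving PE can install them in parallel.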

Inter-AS scenarios emulated in GNS3 might sometimes cause very large delays in data forwarding. Increase the ping/traceroute timeout in order to verify connectivity.



Static Label Bindings

In some cases you don't have the option of enabling LDP or having a VPNv4 or labeled BGP session between directly connected peers, but you still need to have the label switching functionality on their interconnection.

e.g. if you configure the following static route in order to reach the peer's loopback:

IOS
ip route 19.19.19.19 255.255.255.255 12.1.19.19

IOS
R1#sh mpls forwarding-table 19.19.19.19 detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
24         No Label   19.19.19.19/32   0             Fa0/0  12.1.19.19
        MAC/Encaps=18/18, MRU=1504, Label Stack{}
        CA02141C0008CA0417EC0000810000770800
        No output feature configured


then you also need to add a static (outgoing) label binding for it:

IOS
mpls static binding ipv4 19.19.19.19 255.255.255.255 output 12.1.19.19 implicit-null

IOS
R1#sh mpls static binding
19.19.19.19/32: Incoming label: none;
  Outgoing labels:
     12.1.19.19           implicit-null


R1#sh mpls forwarding-table 19.19.19.19 detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
24         Pop Label  19.19.19.19/32   0             Fa0/0  12.1.19.19
        MAC/Encaps=18/18, MRU=1504, Label Stack{}
        CA02141C0008CA0417EC0000810000770800
        No output feature configured


At the same time, you must enable MPLS on this interface without using LDP:

IOS
R1#sh mpls int FastEthernet0/0
Interface              IP            Tunnel   BGP Static Operational

IOS
interface FastEthernet0/0
 mpls bgp forwarding

IOS
R1#sh mpls int FastEthernet0/0
Interface              IP            Tunnel   BGP Static Operational
FastEthernet0/0        No            No       Yes No     Yes



Static Label Bindings per Interface

  • multiaccess interfaces
    • next-hop ip address required
    • label required
  • point-to-point interfaces
    • interface required

The above differentiation per interface type applies only to specific software releases; the multiaccess syntax is the common one.

If you must configure specific static labels, then you must first define the static label range (which will sometimes require a reload).
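
e.g. (hypothetical ranges; on some platforms the new range takes effect only after a reload):

IOS
mpls label range 16 100000 static 100001 105000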

Implicit-null is used in the above example because PHP (pop label) must happen for the directly connected peer.



Inter-AS L3VPN

If you want to follow an Inter-AS L3VPN path (assuming the control plane has been set up correctly), you can execute the following algorithm:
  • first router (start PE)
    • Find the VPN label for the prefix
    • Find the Transport label(s) for the prefix's next-hop
  • n router
    • Follow the Transport top label swaps until there is a "Pop Label" for next router
  • n+1 router
    • Find the local VPN label for the prefix
      • If VPN label is "nolabel", then
        • router is the end PE
        • VPN is locally attached
      • If VPN label is other, then
        • router is an RR/ASBR
        • find the Transport label(s) for the prefix's new next-hop
        • go to "n router"
      • If VPN label doesn't exist, then 
        • multiple Transport labels exist
        • go to "n router"

If the route is learned from IGP, the Transport label must be allocated through LDP/RSVP.
If the route is learned from BGP, the Transport label must be allocated through BGP.


Example

R6(PE1)=>R4(P1)=>XR1(ASBR1)=>R1(ASBR2)=>R3(P2)=>R2(PE3)

Start PE

IOS
R6#sh bgp vpnv4 unicast all 7.7.7.7/32
BGP routing table entry for 102:202:7.7.7.7/32, version 36
Paths: (1 available, best #1, table VPN_B)
  Not advertised to any peer
  100
    2.2.2.2 (metric 20) from 20.20.20.20 (20.20.20.20)
      Origin incomplete, metric 0, localpref 100, valid, internal, best
      Extended Community: RT:102:202 0x8800:32768:0 0x8801:1:130560
        0x8802:65281:25600 0x8803:65281:1500 0x8806:0:0
      mpls labels in/out nolabel/26


VPN label is 26


IOS
R6#sh mpls forwarding-table 2.2.2.2 detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
None       23         2.2.2.2/32       0             Fa0/0.46   20.4.6.4
        MAC/Encaps=18/26, MRU=1496, Label Stack{16 23}
        CA0611100000CA0115B000008100002E8847 0001000000017000
        No output feature configured


Transport label is 16/23, VPN label is 26



IOS
R4#sh mpls forwarding-table labels 16 detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
16         Pop Label  19.19.19.19/32   18896         Fa0/0.419  20.4.19.19
        MAC/Encaps=18/18, MRU=1504, Label Stack{}
        CA02141C0008CA0611100000810001A38847
        No output feature configured


Transport label is 23, VPN label is 26


IOS

XR1#sh mpls forwarding-table labels 23 detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
23         20         2.2.2.2/32       22628         Fa0/0.119  12.1.19.1
        MAC/Encaps=18/22, MRU=1500, Label Stack{20}
        CA0417EC0000CA02141C0008810000778847 00014000
        No output feature configured



Transport label is 20, VPN label is 26


IOS

R1#sh mpls forwarding-table labels 20 detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
20         19         2.2.2.2/32       24518         Fa0/0.13   10.1.3.3
        MAC/Encaps=18/22, MRU=1500, Label Stack{19}
        CA0711100000CA0417EC00008100000D8847 00013000
        No output feature configured



Transport label is 19, VPN label is 26



IOS

R3#sh mpls forwarding-table labels 19 detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
19         Pop Label  2.2.2.2/32       85693         Fa0/0.23   10.2.3.2
        MAC/Encaps=18/18, MRU=1504, Label Stack{}
        CA0517EC0000CA0711100000810000178847
        No output feature configured



VPN label is 26



IOS

R2#sh bgp vpnv4 unicast all 7.7.7.7/32
BGP routing table entry for 102:202:7.7.7.7/32, version 4
Paths: (1 available, best #1, table VPN_B)
  Advertised to update-groups:
     1
  Local
    40.2.7.7 from 0.0.0.0 (2.2.2.2)
      Origin incomplete, metric 156160, localpref 100, weight 32768, valid, sourced, best
      Extended Community: RT:102:202 Cost:pre-bestpath:128:156160
        0x8800:32768:0 0x8801:1:130560 0x8802:65281:25600 0x8803:65281:1500
        0x8806:0:0
      mpls labels in/out 26/nolabel


End PE found




RT Rewrite

It is used mainly in Inter-AS topologies, when there is a need to keep different RTs between the ASes. It allows the ASBR (or any other router involved) to replace the peer ASN's RTs with its own.

Configuration Steps
  • define the RTs to be replaced
  • configure a route-map that matches the above RTs, deletes them and then adds the new RTs
  • apply the route-map to the bgp neighbor session


IOS
ip extcommunity-list 1 permit rt 200:1
ip extcommunity-list 2 permit rt 200:2
!
route-map RT-REWRITE-ROUTEMAP permit 10
 match extcommunity 1
 set extcomm-list 1 delete
 set extcommunity rt 100:1 additive
 continue 20
!
route-map RT-REWRITE-ROUTEMAP permit 20
 match extcommunity 2
 set extcomm-list 2 delete
 set extcommunity rt 100:2 additive
!
route-map RT-REWRITE-ROUTEMAP permit 30
!
router bgp 100
 neighbor 10.10.10.2 remote-as 200
 !
 address-family vpnv4
  neighbor 10.10.10.2 activate
  neighbor 10.10.10.2 send-community extended
  neighbor 10.10.10.2 route-map RT-REWRITE-ROUTEMAP in


Use the "additive" keyword when setting the new RT, so that the other extended communities are not erased.

Use the "continue" statement (in ingress route-maps) when you need to rewrite more than one RT on the same prefix.



NTS: Multicast VPN

Multicast VPN




Multicast VPN is defined in RFC 6513 and RFC 6514.
Cisco's Multicast VPN is defined in RFC 6037.



Two solutions
  • PIM/GRE mVPN or draft-rosen (RFC 6037)
    • PIM adjacencies between PEs to exchange mVPN routing information
    • unique multicast address per VPN
    • per-VPN PIM adjacencies between PEs and CEs
    • per-VPN MDT (GRE) tunnels between PEs
    • data MDT tunnels for optimization
  • BGP/MPLS mVPN or NG mVPN
    • BGP peerings between PEs to exchange mVPN routing information
    • PIM messages are carried in BGP
    • BGP autodiscovery for inter-PE tunnels
    • MPLS P2MP inclusive tunnels between PEs
    • selective tunnels for optimization

Only the PIM/GRE mVPN model (Cisco's original implementation) is described below.



MVPN

MVPN combines multicast with MPLS VPN. PE routers establish virtual PIM neighborships with other PE routers that are connected to the same VPN.

The VPN-specific multicast routing and forwarding database is referred to as MVRF.

An MDT (multicast distribution tree) tunnel interface is the interface that the MVRF uses to access the multicast domain. MDT tunnels are point-to-multipoint.
 
Multicast packets are sent from the CE to the ingress PE and then encapsulated and transmitted across the core (over the MDT tunnel). At the egress PE, the encapsulated packets are decapsulated and then sent to the receiving CE.

When sending customer VRF traffic, PEs encapsulate the traffic in their own (S,G) state, where the G is the MDT group address, and the S is the MDT source for the PE. By joining the (S,G) MDT of its PE neighbors, a PE router is able to receive the encapsulated multicast traffic for that VRF.

All VPN packets passing through the provider network are viewed as native multicast packets and are routed based on the routing information in the core network.

To support MVPN, the provider core routers only need to support native multicast routing.

RTs should be configured so that the receiver VRF has unicast reachability to prefixes in the source VRF.


Data MDT

MVPN also supports optimized VPN traffic forwarding for high-bandwidth applications that have sparsely distributed receivers.

A dedicated multicast group can be used to encapsulate packets from a specific source and an optimized MDT can be created to send traffic only to PE routers connected to interested receivers.

A unique group per vrf should be used on the PEs.



Configuration

IOS
ip multicast-routing
!
ip pim ssm default
!
interface Loopback0
 ip pim sparse-mode
!
interface X
 ip pim sparse-mode
!
ip multicast-routing vrf VPN
!
vrf definition VPN
 address-family ipv4
  mdt default x.x.x.x
  mdt data x.x.x.x y.y.y.y
 exit-address-family
!
router bgp 100
 address-family ipv4 mdt
  neighbor x.x.x.x activate
 exit-address-family


IOS-XR
multicast-routing
 address-family ipv4
  interface Loopback0
   enable
  !
  mdt source Loopback0
 !
 vrf VPN
  address-family ipv4
   mdt default ipv4 x.x.x.x
   mdt data y.y.y.y/24
   interface all enable
!
router bgp 100
 address-family ipv4 mdt
 !
 neighbor x.x.x.x
  address-family ipv4 mdt



"mdt source" is required in IOS-XR (it can be configured under the VRF if it is specific to that VRF).

Sparse mode must be activated on all physical interfaces that multicast will pass through (global or VRF ones) and on the loopback interface used for the BGP VPNv4 peerings.

The RP setup of the CEs must agree with the VRF RP setup on the PEs. If you manually define a static RP on the CEs, then it must be defined on the PEs too (inside the VRF).
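
e.g. with a hypothetical static RP address of 192.168.100.1 configured on the CEs, each PE needs:

IOS
ip pim vrf VPN rp-address 192.168.100.1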



Verification
  • There should be (S,G) entries for each BGP neighbor, where S=BGP loopback and G=MDT default address
  • There should be a bidirectional PIM adjacency across a tunnel between the PEs, but inside each PE's VRF
  • If an RP is used on a CE, then each remote CE should know this RP 
  • Sources/Receivers from any site should be viewable on the RP
  • There should be an MDT data (S,G) entry for each pair of customer (S,G) entries


Verification (using only a default mdt)


MDT default (S,G) entries

IOS
R5#sh ip mroute sum
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.255.1), 00:34:36/stopped, RP 10.0.0.1, OIF count: 1, flags: SJCFZ
  (10.0.0.6, 239.255.255.1), 00:24:11/00:02:18, OIF count: 1, flags: JTZ
  (10.0.0.5, 239.255.255.1), 00:34:35/00:02:54, OIF count: 1, flags: FT


R5#sh ip mroute 239.255.255.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.255.1), 00:46:12/stopped, RP 10.0.0.1, flags: SJCFZ
  Incoming interface: FastEthernet0/0.15, RPF nbr 10.1.5.1
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:46:12/00:01:46

(10.0.0.6, 239.255.255.1), 00:35:47/00:02:28, flags: JTZ
  Incoming interface: FastEthernet0/0.57, RPF nbr 10.5.7.7
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:35:47/00:01:46

(10.0.0.5, 239.255.255.1), 00:46:12/00:03:19, flags: FT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0.57, Forward/Sparse, 00:35:46/00:03:11



 
R5#sh bgp ipv4 mdt all 10.0.0.6/32
BGP routing table entry for 100:1:10.0.0.6/32        version 2
Paths: (1 available, best #1, table IPv4-MDT-BGP-Table)
  Not advertised to any peer
  Local
    10.0.0.6 from 10.0.0.1 (10.0.0.1)
      Origin incomplete, metric 0, localpref 100, valid, internal, best
      Originator: 10.0.0.6, Cluster list: 10.0.0.1, 10.0.0.20,
      MDT group address: 239.255.255.1



R5#sh ip pim mdt
  * implies mdt is the default MDT
  MDT Group/Num   Interface   Source                   VRF
* 239.255.255.1   Tunnel1     Loopback0                VPN
 


R5#sh ip pim mdt bgp
MDT (Route Distinguisher + IPv4)               Router ID         Next Hop
  MDT group 239.255.255.1
   100:1:10.0.0.6                              10.0.0.1          10.0.0.6





Verification (using a default and a data mdt)

MDT default (S,G) entries
MDT data (S,G) entries

IOS
R2#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(2.2.2.2, 232.0.0.1), 00:08:53/00:03:27, flags: sT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0.24, Forward/Sparse, 00:08:53/00:03:27

(19.19.19.19, 232.0.0.1), 00:50:48/stopped, flags: sTIZ
  Incoming interface: FastEthernet0/0.24, RPF nbr 20.2.4.4
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:50:48/00:00:11

(19.19.19.19, 232.0.1.0), 00:08:23/00:00:12, flags: sTIZ
  Incoming interface: FastEthernet0/0.24, RPF nbr 20.2.4.4
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:02:47/00:00:12

(19.19.19.19, 232.0.1.1), 00:01:59/00:01:00, flags: sTIZ
  Incoming interface: FastEthernet0/0.24, RPF nbr 20.2.4.4
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:01:59/00:01:00

R2#sh ip pim mdt
  * implies mdt is the default MDT
  MDT Group/Num   Interface   Source                   VRF
* 232.0.0.1       Tunnel0     Loopback0                VPN
  232.0.1.0       Tunnel0     Loopback0                VPN
  232.0.1.1       Tunnel0     Loopback0                VPN




R2#sh ip pim mdt bgp
MDT (Route Distinguisher + IPv4)               Router ID         Next Hop
  MDT group 232.0.0.1
   100:1:19.19.19.19                           19.19.19.19       19.19.19.19




In both scenarios, you can also verify the mGRE tunnels by looking at the tunnel interface itself.

IOS
R5#sh int tun1 | i protocol/transport
  Tunnel protocol/transport multi-GRE/IP


When all PIM adjacencies come up, as PIM neighbors inside a VRF you should see all the other MDT PEs through a tunnel and all the locally connected CEs through a physical interface.

IOS
R5#sh ip pim vrf VPN nei
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
192.168.59.9      FastEthernet0/0.59       00:00:22/00:01:22 v2    1 / DR S G
10.0.0.6          Tunnel1                  00:25:52/00:01:27 v2    1 / DR S P G





PIM inside a VRF Tunnel

IOS
interface Tunnel1
 ip vrf forwarding VPN-A
 ip address 99.99.99.1 255.255.255.0
 ip pim sparse-mode
 tunnel source 10.0.0.1
 tunnel destination 10.0.0.2
 tunnel vrf VPN-B
!
interface Tunnel1
 ip vrf forwarding VPN-A
 ip address 99.99.99.2 255.255.255.0
 ip pim sparse-mode
 tunnel source 10.0.0.2
 tunnel destination 10.0.0.1
 tunnel vrf VPN-B


"ip vrf forwarding" defines the vrf under which the tunnel (99.99.99.0/24) operates; above it's VPN-A.

"tunnel vrf" defines the vrf which is used to build the tunnel (from 10.0.0.1 to 10.0.0.2); above it's VPN-B. If the tunnel source and destination are in the global routing table, then you don't need to define their vrf with the "tunnel vrf X" command.




Extranet

An extranet site contains either the multicast source or the receivers; when source and receivers are in the same VPN, it is ordinary intra-VPN multicast.

The Source PE has the multicast source behind a directly connected CE, reachable through the Source MVRF.

The Receiver PE has one or more receivers behind a directly connected CE, reachable through the Receiver MVRF.

In order to achieve multicast connectivity between the Source and Receiver PEs, you must have the same default MDT group in the source and receiver MVRF.

Two solutions:
  • Configure the Receiver MVRF on the Source PE router
    • you need each receiver MVRF copied on the Source PE router
  • Configure the Source MVRF on the Receiver PE routers
    • you need the Source MVRF copied on all interested Receiver PE routers
In both cases, the receiver MVRF (wherever placed) must import the source MVRF's RT.

Only PIM-SM and PIM-SSM are supported.

The multicast source and the RP must reside in the same site of the MVPN, behind the same PE router.


Receiver MVRF on the Source PE

Source PE (IOS)
ip vrf VPN1-S-MVRF
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 mdt default 232.1.1.1
!
ip vrf VPN2-R-MVRF
 rd 100:2
 route-target export 100:2
 route-target import 100:2
 route-target import 100:1
 mdt default 232.2.2.2
!
ip multicast-routing
ip multicast-routing vrf VPN1-S-MVRF
ip multicast-routing vrf VPN2-R-MVRF


Receiver PE (IOS)
ip vrf VPN2-R-MVRF
 rd 100:2
 route-target export 100:2
 route-target import 100:2
 route-target import 100:1
 mdt default 232.2.2.2
!
ip multicast-routing
ip multicast-routing vrf VPN2-R-MVRF



Source MVRF on the Receiver PE

Source PE (IOS)
ip vrf VPN1-S-MVRF
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 mdt default 232.1.1.1
!
ip multicast-routing
ip multicast-routing vrf VPN1-S-MVRF


Receiver PE (IOS)
ip vrf VPN1-S-MVRF
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 mdt default 232.1.1.1
!

ip vrf VPN2-R-MVRF
 rd 100:2
 route-target export 100:2
 route-target import 100:2
 route-target import 100:1
 mdt default 232.2.2.2
!
ip multicast-routing
ip multicast-routing vrf VPN1-S-MVRF
ip multicast-routing vrf VPN2-R-MVRF

What matters most in both cases, when replicating the MVRF, is to have the same default MDT group in an MVRF on the Source PE and in an MVRF on the Receiver PE (excluding the Source MVRF).


Fixing RPF

There are two options:

static mroute between VRFs

Receiver PE (IOS)
ip mroute vrf VPN2-R-MVRF 192.168.1.1 255.255.255.255 fallback-lookup vrf VPN1-S-MVRF


group-based VRF selection

Receiver PE (IOS)
ip multicast vrf VPN2-R-MVRF rpf select vrf VPN1-S-MVRF group-list 1
ip multicast vrf VPN2-R-MVRF rpf select vrf VPN3-S-MVRF group-list 3
!
access-list 1 permit 231.0.0.0 0.255.255.255
access-list 3 permit 233.0.0.0 0.255.255.255



Inter-AS MVPN

To establish a multicast VPN between two ASes, an MDT-default tunnel must be set up between the involved PE routers. The appropriate MDT-default group is configured on the PE routers and is unique for each VPN.

All three (A, B, C) Inter-AS options are supported. For Option A nothing extra is required, since each AS is completely isolated from the others.

In order to solve the various RPF issues imposed by the limited visibility of PEs between different ASes, each VPNv4 route carries a new transitive attribute (the BGP connector attribute) that defines the route's originator.

Inside a common AS, the BGP connector attribute is the same as the next hop. Between ASes, the BGP connector attribute stores (in the case of IPv4 MDT) the IP address of the PE router that originated the VPNv4 prefix, and it is preserved even after the next-hop attribute is rewritten by the ASBRs.

The BGP connector attribute also helps ASBRs and receiver PEs insert the RPF vector needed to build the inter-AS MDT for source PEs in remote ASes.

The RPF proxy vector is a PIM TLV that contains the ip address of the router that will be used as proxy for RPF checks (helping in the forwarding of PIM Joins between ASes).

A new PIM hello option has also been introduced along with the PIM RPF Vector extension to determine if the upstream router is capable of parsing the new TLV. An RPF Vector is included in PIM messages only when all PIM neighbors on an RPF interface support it.

The RPF proxy (usually the ASBR) removes the vector for the PIM Join message when it sees itself in it.

  • BGP connector attribute
    • used in RPF checks inside a VRF
  • RPF proxy
    • used in RPF checks in the core

Configuration Steps
  • Option A
    • no MDT session between the ASes is required
    • intra-as MDT sessions are configured as usual
  • Option B
    • intra-as MDT sessions between PEs, ASBRs and RRs
    • inter-as MDT session between ASBRs
    • RPF proxy vector on all PEs for their VRFs
    • RPF proxy vector on all Ps and ASBRs
    • next-hop-self on the MDT ASBRs
  • Option C
    • intra-as MDT sessions between PEs and RRs
    • inter-as MDT sessions between RRs
    • RPF proxy vector on all PEs for their VRFs
    • RPF proxy vector on all Ps and ASBRs
    • next-hop-unchanged on the MDT RRs
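
For Options B and C, the RPF proxy vector is enabled on each PE for its VRF; a sketch (verify the exact command set for your release, since core routers advertise vector support via the PIM hello option):

IOS
ip pim vrf VPN rpf proxy rd vector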

MSDP will be required if using an RP on both ASes. Prefer to use SSM in the core of both ASes.
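
If each AS runs its own RP, the RPs can be peered with MSDP so that active sources are learned across the ASes; a sketch with a hypothetical peer address, configured on the RP of each AS:

IOS
ip msdp peer 19.19.19.19 connect-source Loopback0
ip msdp originator-id Loopback0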


Links