Friday, July 23, 2010

Command Scheduler KRON Policy

Cisco IOS has a built-in command scheduler called kron. Introduced in Cisco IOS 12.3(1),
this command scheduler is similar to the Windows "at" program and the UNIX cron and at utilities.

For example, let's say you want to automatically disable all debug output every minute.

First, create a kron policy list. This policy list essentially serves as your "script", listing what
you want the router to run at the scheduled time. Here is an example:

Router(config)# kron policy-list UNALL 
Router(config-kron-policy)# cli un all
Router(config-kron-policy)# exit 

Next, create a kron occurrence, in which you tell the router when and how often you want
to run this policy list. Here is an example:

Router(config)#kron occurrence UNALL in 00:01 recurring
Router(config-kron-occurrence)#policy-list UNALL

This configuration sets up your router to disable all debug output every minute.

Finally, verify that you've entered everything correctly by using the show command:

Router#sh kron schedule
Kron Occurrence Schedule
UNALL inactive, will run again in 0 days 00:00:47

Router#sh kron schedule
Kron Occurrence Schedule
UNALL inactive, will run again in 0 days 00:00:45

Router#sh kron schedule
Kron Occurrence Schedule
UNALL inactive, will run again in 0 days 00:00:44  

Another tip is to use kron to back up your router configuration once a day.
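For instance, a policy along these lines could redirect the running configuration to a TFTP server every night. The server address 10.1.1.1 and the filename are placeholders for your own environment; note that kron can only run commands that need no interactive input, which is why show running-config | redirect is used rather than copy:

Router(config)# kron policy-list BACKUP
Router(config-kron-policy)# cli show running-config | redirect tftp://10.1.1.1/router-backup.cfg
Router(config-kron-policy)# exit
Router(config)# kron occurrence BACKUP at 23:00 recurring
Router(config-kron-occurrence)# policy-list BACKUP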

Thursday, July 15, 2010

mLDP-Multicast VPN

The new way refers to setting up multipoint LSPs in the MPLS VPN environment to carry the VPN's multicast traffic. Here, all CE routers belong to a single customer at different branches. There is no multicast receiver behind the CE3 router. The MPLS core is PIM-free; only the PE routers run PIM, with the CE routers.
This Internet Draft introduces the Label Distribution Protocol (LDP) extensions for point-to-multipoint (P2MP) and multipoint-to-multipoint (MP2MP) Label Switched Paths (LSPs) in MPLS networks. These extensions are also called mLDP or Multipoint LDP. Of the various applications for multipoint LSPs, one is support for multicast in MPLS VPN. Previously, this was achieved through mVPN.

The LDP RFC introduced the mechanism to set up point-to-point (P2P) LSPs in the MPLS network, where there is a single source and a single destination. A P2MP LSP, however, allows traffic from a single ingress router (root node) to be delivered to multiple egress routers (leaf nodes), and an MP2MP LSP allows traffic from multiple ingress routers to reach multiple egress routers. At any point, a single copy of the packet is sent on an LSP, without any multicast routing protocol running in the core.

Setting up a P2MP LSP with LDP
Traditionally, LDP-signaled LSPs are initiated by the egress router. The egress (receiving) router initiates the label propagation, which spreads throughout the MPLS network. All LSRs maintain forwarding state towards the egress router following the shortest IGP path, and any LSR can act as an ingress LSR. This essentially sets up a multipoint-to-point (MP2P) LSP, as multiple senders can send traffic to a single receiver.

In contrast, a P2MP LSP has a single ingress (root) node and one or more egress (leaf) nodes. The transit nodes provide reachability to the root node. Leaf nodes initiate the P2MP LSP setup, so they must be aware of the ingress router, and they must be able to identify the correct P2MP LSP, since several P2MP LSPs can originate from the same ingress router. A new Capability Parameter is introduced for P2MP capability, exchanged in the LDP Initialization message. A new P2MP FEC Element is defined which carries the IPv4 address of the root and an Opaque value (also called the tree identifier; here it is the manually configured VPN ID). This combination uniquely identifies a P2MP LSP within the MPLS network.

A leaf node allocates a label and advertises its P2MP Label Mapping {Root IP Address, Opaque Value, Label} to the upstream LDP node on the shortest path to the root. The upstream node creates its own Label Mapping on receiving this from its downstream node. When the root node receives this P2MP Label Mapping from its downstream (transit) node, it checks for existing forwarding state for {Root IP Address, Opaque Value}; if there is none, it creates the forwarding state and pushes this label onto all traffic forwarded over this P2MP LSP. In a P2MP LSP, the rule for label distribution is to advertise a label only towards the neighbor that lies on the IGP best path to the root; thus the sender of the label determines the best path to the root.

PE routers configuration
The Loopback 0 interface of the PE1 router is configured as the Root Node IP address. The Opaque value for the multipoint LSP is constructed from the VPN ID value of 1:1. The mdt default mpls mldp command creates the MP2MP LSP known to all PE routers for that particular VRF. This LSP is used to forward all customer multicast traffic by default.
PE1 router:
ip vrf CUST1
 rd 1:1
 vpn id 1:1                           
 route-target both 1:1
 mdt default mpls mldp 1.1.1.1  
!
 interface Loopback 0
 ip address 1.1.1.1 255.255.255.255
 ip ospf 1 area 0
!
ip multicast-routing vrf CUST1          
!
ip pim vrf CUST1 rp-address 12.1.1.1
!
interface fastethernet 1/1
 ip vrf forwarding CUST1
 ip address 192.168.1.1 255.255.255.0
 ip pim sparse-mode
!
router bgp 100
 neighbor 3.3.3.3 remote-as 100
 neighbor 3.3.3.3 update-source Loopback 0
 neighbor 4.4.4.4 remote-as 100
 neighbor 4.4.4.4 update-source Loopback 0
 !
 address-family vpnv4
 neighbor 3.3.3.3 activate
 neighbor 4.4.4.4 activate
 exit-address-family
 !
 address-family ipv4 vrf CUST1
 redistribute connected
 exit-address-family
!
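With the VRF and BGP configuration in place, the mLDP state itself can be inspected. These commands exist on mLDP-capable IOS releases, though the exact output format varies by release, so no sample output is shown here; the database entry for the MP2MP LSP should list the FEC root 1.1.1.1 and an opaque value derived from the VPN ID 1:1:

PE1# show mpls mldp database
PE1# show mpls mldp neighbors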


PE2 and PE3 are configured the same way.
PE1 has no PIM adjacency with the P router. However, it has PIM adjacencies with the PE2 and PE3 routers via the Lspvif0 interface.
!--- The following output shows no PIM adjacencies within MPLS core

PE1# show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
PE1#
!--- The following output shows PIM adjacencies with CE1 router, PE2 and PE3 routers

PE1# show ip pim vrf CUST1 neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
192.168.1.2       FastEthernet1/1          00:01:03/00:01:40 v2    1 / DR S G
4.4.4.4           Lspvif0                  00:26:24/00:01:25 v2    1 / DR S P G
3.3.3.3           Lspvif0                  00:28:36/00:01:23 v2    1 / S P G


Now multicast traffic is sourced from CE1 router with CE2 being the multicast receiver. For PE1 router, the incoming interface is the interface connected to the CE1 router. The outgoing interface will be Lspvif0.
PE1# show ip mroute vrf CUST1 239.10.10.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.10.10.1), 00:00:55/stopped, RP 12.1.1.1, flags: SP
  Incoming interface: Lspvif0, RPF nbr 3.3.3.3
  Outgoing interface list: Null

(192.168.1.2, 239.10.10.1), 00:00:55/00:02:04, flags: T
  Incoming interface: FastEthernet1/1, RPF nbr 192.168.1.2
  Outgoing interface list:
    Lspvif0, Forward/Sparse, 00:00:50/00:02:39

http://blog.ine.com/2010/03/08/using-mpls-and-m-ldp-signaling-for-multicast-vpns/comment-page-1/#comment-108787
Tested on Cisco 7200 routers with IOS 12.2(33)SRE.

Monday, July 12, 2010

QOS Multi-Service-Site

A cool feature in Alcatel-Lucent SR OS: limiting two pseudowires to one QoS package.
Here is an example of a customer with two Epipes, limited together to 75 Mbps:
A:PE-7750-LAB1# configure service customer 550 
A:PE-7750-LAB1>config>service>cust# info 
----------------------------------------------
            multi-service-site "LIMIT-75Mbps" create
                description "LIMIT-75Mbps"
                assignment port 2/2/5
                ingress
                    scheduler-policy "SLA-75Mbps"
                exit
                egress
                    scheduler-policy "SLA-75Mbps"
                exit
            exit
            description "ESP NET-60038464"
----------------------------------------------

A:PE-7750-LAB1# configure service epipe 1268 
A:PE-7750-LAB1>config>service>epipe# info 
----------------------------------------------
            description "EPIPE-NAMEXXX-7750-PE-LAB1"
            service-mtu 2014
            sap 2/2/5:1655.0 create
                description "EPIPE-NAMEXXX-GI2/2/5:1655-7001291-59.38"
                multi-service-site "LIMIT-75Mbps"
                collect-stats
            exit
            sap lag-4:1655 create
                description "EPIPE-NAMEXXX-Lag 4:1655-NV-017"
                ingress
                    qos 90 
                exit
                egress
                    qos 84
                exit
                collect-stats
            exit
            no shutdown
----------------------------------------------

A:PE-7750-LAB1>config>service>epipe# info 
----------------------------------------------
            description "EPIPE-ESP NET-"
            service-mtu 2014
            sap 2/2/5:1656.0 create
                multi-service-site "LIMIT-75Mbps"
                collect-stats
            exit
            sap lag-4:1656 create
                ingress
                    qos 90 
                exit
                egress
                    qos 84
                exit
                collect-stats
            exit
            no shutdown
----------------------------------------------


A:PE-7750-LAB1# configure qos sap-ingress 90 
A:PE-7750-LAB1>config>qos>sap-ingress# info 
----------------------------------------------
            description "Silver-75M-ONLY-FOR-EPIPE"
            queue 1 create
                parent "SLA-75Mbps"
                rate 75000
            exit
            queue 11 multipoint create
                parent "SLA-75Mbps"
                rate 75000
            exit
            fc "be" create
                queue 1
            exit
----------------------------------------------


A:PE-7750-LAB1# configure qos sap-egress 84  
A:PE-7750-LAB1>config>qos>sap-egress# info 
----------------------------------------------
            description "Silver-75M-ONLY-FOR-EPIPE"
            queue 1 create
                parent "SLA-75Mbps"
                rate 75000
            exit
            fc be create
                queue 1
            exit 
----------------------------------------------
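Note that the scheduler policy "SLA-75Mbps" that both SAP QoS policies parent into is not shown above. A minimal sketch of what it could look like follows; the tier and rate are assumptions based on the 75 Mbps limit, so adapt it to your own design:

A:PE-7750-LAB1# configure qos scheduler-policy "SLA-75Mbps"
A:PE-7750-LAB1>config>qos>sched-plcy# info
----------------------------------------------
            tier 1
                scheduler "SLA-75Mbps" create
                    rate 75000
                exit
            exit
----------------------------------------------

Because the queues in both Epipes parent to this single per-site scheduler, the two services share the 75 Mbps limit between them.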


A:PE-7750-LAB1# monitor qos scheduler-stats customer 550 site LIMIT-75Mbps rate 

===============================================================================
Monitor Scheduler Statistics
===============================================================================
Scheduler                          Forwarded Packets      Forwarded Octets     
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
At time t = 0 sec (Base Statistics)
-------------------------------------------------------------------------------
Ingress Schedulers
SLA-75Mbps                         1866961307             937166785248         
 
Egress Schedulers
SLA-75Mbps                         2076599441             1682363329954        
 
-------------------------------------------------------------------------------
At time t = 11 sec (Mode: Rate)
-------------------------------------------------------------------------------
Ingress Schedulers
SLA-75Mbps                         4164                   2196854              
 
Egress Schedulers
SLA-75Mbps                         4333                   3531426              
 
-------------------------------------------------------------------------------
At time t = 22 sec (Mode: Rate)
-------------------------------------------------------------------------------
Ingress Schedulers
SLA-75Mbps                         3653                   1954032              
 
Egress Schedulers
SLA-75Mbps                         4067                   3216902              
 


BGP-Regular-Expression

Using regular expressions with as-path access-lists is one of the coolest features of BGP. The show ip bgp regexp command is a good way to test your regular expression.

Here is what I currently have in R1's BGP table:
R1#show ip bgp | be Ne
Network Next Hop Metric LocPrf Weight Path
*> 100.3.0.0/24 172.12.123.3 0 0 300 i
*> 100.3.1.0/24 172.12.123.3 0 0 300 i
*> 100.3.2.0/24 172.12.123.3 0 0 300 i
*> 100.6.0.0/24 172.12.123.3 0 300 600 i
*> 100.6.1.0/24 172.12.123.3 0 300 600 i
*> 100.6.2.0/24 172.12.123.3 0 300 600 i
*> 100.6.3.0/24 172.12.123.3 0 300 600 1000 1200 i
*> 100.6.4.0/24 172.12.123.3 0 300 600 1000 1200 i

Suppose I want to match routes whose path contains one or two ASes, but no more. I could do this (note that [0-9]* also matches zero digits, so locally originated routes with an empty AS path would match too):
R1#show ip bgp regexp ^[0-9]*$|^[0-9]*_[0-9]*$

Network  Next Hop Metric LocPrf Weight Path
*> 100.3.0.0/24 172.12.123.3 0 0 300 i
*> 100.3.1.0/24 172.12.123.3 0 0 300 i
*> 100.3.2.0/24 172.12.123.3 0 0 300 i
*> 100.6.0.0/24 172.12.123.3 0 300 600 i
*> 100.6.1.0/24 172.12.123.3 0 300 600 i
*> 100.6.2.0/24 172.12.123.3 0 300 600 i

How about paths that contain at least one 4-digit AS number? (Why? I have no clue, but here's how.)
R1#show ip bgp regexp _[0-9][0-9][0-9][0-9]_

Network Next Hop Metric LocPrf Weight Path
*> 100.6.3.0/24 172.12.123.3 0 300 600 1000 1200 i
*> 100.6.4.0/24 172.12.123.3 0 300 600 1000 1200 i
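One more example: to match only the routes that transited AS 600 anywhere in the path, the underscores anchoring the AS number avoid partial matches such as 1600 or 6000. Based on the table above, this should return all the 100.6.x.0/24 routes, since each carries 600 in its AS path:

R1#show ip bgp regexp _600_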


http://www.cisco.com/en/US/tech/tk365/technologies_tech_note09186a0080094a92.shtml#

http://ccietobe.blogspot.com/

Wednesday, July 7, 2010

MPLS NAT Aware

Internet access is perhaps one of the most popular services that Service Providers offer their customers. Customers have the flexibility to purchase MPLS VPN services and Internet connectivity from separate Service Providers. Alternatively, customers can offer Internet connectivity directly from their own network, be it from one of their remote sites or from the central site. In the latter case, the Internet Service Provider (ISP) does not need to distinguish the customer's Internet and VPN traffic, because all traffic traversing the Service Provider network is MPLS VPN traffic.

In MPLS-based BGP VPNs (RFC 2547), ISPs offered customers an interface capable of carrying both intranet and Internet traffic.

Traffic between the intranet and the Internet in an MPLS BGP VPN requires NAT services at the customer edge router, between the customer's private addresses and a globally routable address.






R3NATPE#conf ter
Enter configuration commands, one per line.  End with CNTL/Z.
R3NATPE(config)#
R3NATPE(config)#ip vrf 23
R3NATPE(config-vrf)#rd 23:23
R3NATPE(config-vrf)#route-t 23:23
R3NATPE(config-vrf)#
R3NATPE(config-vrf)#ip vrf 13
R3NATPE(config-vrf)#rd 13:13
R3NATPE(config-vrf)#route-t 13:13
R3NATPE(config-vrf)#
R3NATPE(config-vrf)#int s0/0
R3NATPE(config-if)#ip vrf for 13
R3NATPE(config-if)#ip add 10.1.13.3 255.255.255.0
R3NATPE(config-if)#ip nat inside
R3NATPE(config-if)#no sh
R3NATPE(config-if)#
R3NATPE(config-if)#int s0/1
R3NATPE(config-if)#ip vrf for 23
R3NATPE(config-if)#ip add 10.1.23.3 255.255.255.0
R3NATPE(config-if)#ip nat inside
R3NATPE(config-if)#no sh
R3NATPE(config-if)#
R3NATPE(config-if)#int s0/2
R3NATPE(config-if)#ip add 10.1.34.3 255.255.255.0
R3NATPE(config-if)#ip nat out
R3NATPE(config-if)#no sh
R3NATPE(config-if)#exit
R3NATPE(config)#access-list 1 permit any
R3NATPE(config)#ip route vrf 13 1.1.1.1 255.255.255.255 10.1.13.1
R3NATPE(config)#ip route vrf 13 0.0.0.0 0.0.0.0 10.1.34.4 global
R3NATPE(config)#
R3NATPE(config)#ip route vrf 23 2.2.2.2 255.255.255.255 10.1.23.2
R3NATPE(config)#ip route vrf 23 0.0.0.0 0.0.0.0 10.1.34.4 global
R3NATPE(config)#
R3NATPE(config)#ip nat pool MYPOOL 10.1.34.50 10.1.34.255 netmask 255.255.255.0
R3NATPE(config)#ip nat inside source list 1 pool MYPOOL vrf 13
R3NATPE(config)#
R3NATPE(config)#ip nat inside source list 1 pool MYPOOL vrf 23
R3NATPE(config)#
NAT gets hold of the packet, performs the translation (static or dynamic), and also stores the VRF table ID in the translation entry:

R3NATPE#show ip nat translations verbose
Pro Inside global      Inside local       Outside local      Outside global
icmp 10.1.34.50:5      10.1.23.2:5        4.4.4.4:5          4.4.4.4:5
 create 00:00:10, use 00:00:00 timeout:60000, left 00:00:59, Map-Id(In): 2,
 flags:
extended, use_count: 0, VRF : 23, entry-id: 3, lc_entries: 0
--- 10.1.34.50         10.1.23.2          ---                ---
 create 00:16:50, use 00:00:11 timeout:86400000, left 23:59:48, Map-Id(In): 2,
 flags:
none, use_count: 1, VRF : 23, entry-id: 1, lc_entries: 0
NAT receives the packet before routing and performs a lookup in the translation table. NAT performs the reverse translation and also sets the VRF table ID in the packet descriptor header. This enables the subsequent route lookup to occur in the right Forwarding Information Base (FIB). If the outgoing interface is in a VRF on the same PE, the packet is forwarded as an IP packet. If the destination is on a remote PE, the packet has labels imposed and is forwarded on the core-facing interface.

Note: For security reasons, this approach is not recommended. It is not good practice to
bring in Internet traffic over the corporate VPN, as doing so negates the isolation of the
corporate VPN. This option is discussed briefly only to show an alternate practice that has
been used in the industry. (From Implementing Cisco MPLS, Volume 2.)