So with the news about the change from the CCIE Routing and Switching V4 blueprint to V5, many good sources reckon that DMVPN will be on the new V5 blueprint.
What is DMVPN?
DMVPN stands for Dynamic Multipoint Virtual Private Network. It allows multiple IPSec VPN connections with just one tunnel configuration: for a deployment with one central "hub" site and three other sites (or "spokes"), instead of three separate VPN configurations there is just one. In a pure hub-and-spoke setup, traffic from, say, Spoke1 to Spoke3 does need to go through the hub, but from a configuration standpoint life is much easier.
DMVPN is based on GRE (which we have covered before; mGRE if we are doing spoke-to-spoke tunnels), NHRP (Next Hop Resolution Protocol) and IPSec (because VPN tunnels should be secure). DMVPN also requires a dynamic routing protocol and CEF (Cisco Express Forwarding).
When it comes to the routing protocol to run within the tunnel, EIGRP is preferred: as an advanced distance vector protocol it is better suited to the NBMA network that DMVPN builds.
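Two default EIGRP behaviours need adjusting on the hub's multipoint tunnel interface, and you will see both commands in the tunnel configurations below: split horizon would stop the hub re-advertising one spoke's routes back out of the same tunnel interface to the other spokes, and next-hop-self would rewrite the next hop so spoke-to-spoke traffic always transits the hub. A sketch (assuming EIGRP AS 1, as used in this lab):

```
interface Tunnel0
 ! Allow routes learned from one spoke to be advertised
 ! back out of the same tunnel interface to the other spokes
 no ip split-horizon eigrp 1
 ! Keep the originating spoke as the advertised next hop, so
 ! spoke-to-spoke traffic can go direct rather than via the hub
 no ip next-hop-self eigrp 1
```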
DMVPN can be configured as Hub-and-Spoke or (using mGRE) Spoke-to-Spoke.
Consider the following topology:
We have a central site (called "Hub") and three spoke routers at the bottom. In the middle is the cloud, which could be Frame Relay or any other method of providing connectivity between the hub and the spoke routers.
From the viewpoint of the routers, with our DMVPN in place, they will see the 10.10.1.0/24 network:
We start off with a basic configuration to provide connectivity:
Hub:
hostname Hub
!
interface Serial0/0
ip address 10.25.1.2 255.255.255.0
!
ip route 10.35.1.0 255.255.255.0 10.25.1.1
ip route 10.45.1.0 255.255.255.0 10.25.1.1
ip route 10.55.1.0 255.255.255.0 10.25.1.1
Cloud:
hostname Cloud
!
interface Serial0/0
ip address 10.25.1.1 255.255.255.0
!
interface Serial0/1
ip address 10.35.1.1 255.255.255.0
!
interface Serial0/2
ip address 10.45.1.1 255.255.255.0
!
interface Serial0/3
ip address 10.55.1.1 255.255.255.0
Spoke 1:
hostname Spoke1
!
interface Loopback0
ip address 10.50.1.1 255.255.255.0
!
interface Serial0/0
ip address 10.35.1.2 255.255.255.0
!
ip route 10.25.1.2 255.255.255.255 10.35.1.1
Spoke 2:
hostname Spoke2
!
interface Loopback0
ip address 10.60.1.1 255.255.255.0
!
interface Serial0/0
ip address 10.45.1.2 255.255.255.0
!
ip route 10.25.1.2 255.255.255.255 10.45.1.1
Spoke 3:
hostname Spoke3
!
interface Loopback0
ip address 10.70.1.1 255.255.255.0
!
interface Serial0/0
ip address 10.55.1.2 255.255.255.0
!
ip route 10.25.1.2 255.255.255.255 10.55.1.1
So you can see that we are starting off easy, with just basic connectivity from the Hub to each of the spokes, using the cloud to pass the traffic through. At this stage none of the spokes has any knowledge of the others.
DMVPN Tunnel configuration
The tunnel configuration is much like a standard GRE tunnel but with a couple of additional commands.
Everything points to the Hub. On the Hub we associate the tunnel with a network-id (the NBMA identifier) and set the mode to GRE multipoint. The spokes map the tunnel IP configured on the Hub (10.10.1.1) to the Hub's external IP address (10.25.1.2), and set the Hub as the next-hop server (ip nhrp nhs 10.10.1.1).
Hub tunnel:
interface Tunnel0
ip address 10.10.1.1 255.255.255.0
no ip redirects
ip mtu 1416
no ip next-hop-self eigrp 1
ip nhrp map multicast dynamic
ip nhrp network-id 1
no ip split-horizon eigrp 1
tunnel source 10.25.1.2
tunnel mode gre multipoint
Spoke 1 tunnel:
interface Tunnel0
ip address 10.10.1.2 255.255.255.0
no ip redirects
ip mtu 1416
no ip next-hop-self eigrp 1
ip nhrp map 10.10.1.1 10.25.1.2
ip nhrp map multicast 10.25.1.2
ip nhrp network-id 1
no ip split-horizon eigrp 1
ip nhrp nhs 10.10.1.1
tunnel source 10.35.1.2
tunnel mode gre multipoint
Spoke 2 tunnel:
interface Tunnel0
ip address 10.10.1.3 255.255.255.0
no ip redirects
ip mtu 1416
no ip next-hop-self eigrp 1
ip nhrp map 10.10.1.1 10.25.1.2
ip nhrp map multicast 10.25.1.2
ip nhrp network-id 1
no ip split-horizon eigrp 1
ip nhrp nhs 10.10.1.1
tunnel source 10.45.1.2
tunnel mode gre multipoint
Spoke 3 tunnel:
interface Tunnel0
ip address 10.10.1.4 255.255.255.0
no ip redirects
ip mtu 1416
no ip next-hop-self eigrp 1
ip nhrp map 10.10.1.1 10.25.1.2
ip nhrp map multicast 10.25.1.2
ip nhrp network-id 1
no ip split-horizon eigrp 1
ip nhrp nhs 10.10.1.1
tunnel source 10.55.1.2
tunnel mode gre multipoint
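A quick note on the ip mtu 1416 line: the GRE and ESP headers both add overhead, so the tunnel MTU is lowered below the physical 1500 bytes to avoid fragmenting encrypted packets. The exact value depends on the transform set in use; a common conservative sketch (the 1400/1360 values here are an assumption, not taken from this lab - the MSS is simply the tunnel MTU minus 40 bytes of IP and TCP headers) is:

```
interface Tunnel0
 ! Leave headroom for GRE plus IPSec ESP overhead
 ip mtu 1400
 ! Clamp TCP MSS so hosts never send segments that would
 ! need fragmenting inside the tunnel (1400 - 40 = 1360)
 ip tcp adjust-mss 1360
```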
We can check that our DMVPN tunnel is working using the "sh dmvpn" command:
Hub#sh dmvpn | beg Interface
Interface: Tunnel0, IPv4 NHRP Details
Type:Hub, NHRP Peers:3,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
1 10.35.1.2 10.10.1.2 UP 00:06:19 D
1 10.45.1.2 10.10.1.3 UP 00:05:06 D
1 10.55.1.2 10.10.1.4 UP 00:04:28 D
We can also use basic ping tests:
Hub#ping 10.10.1.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 20/20/20 ms
Hub#ping 10.10.1.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.1.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 16/20/24 ms
Hub#ping 10.10.1.4
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.1.4, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 20/20/20 ms
Adding IPSec to DMVPN
One of the requirements of DMVPN is IPSec, and this is quite easy to add; the same configuration can go on the Hub and all three spokes. The main thing to point out is that we associate the "dmvpn123" pre-shared key with any peer IP address by using the 0.0.0.0 0.0.0.0 address and wildcard.
crypto isakmp policy 10
encryption 3des
hash md5
authentication pre-share
crypto isakmp key dmvpn123 address 0.0.0.0 0.0.0.0
!
!
crypto ipsec transform-set MyIPSEC esp-3des
!
crypto ipsec profile DMVPN
set transform-set MyIPSEC
!
interface Tunnel0
tunnel protection ipsec profile DMVPN
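Once the profile is applied, it is worth confirming the crypto side too; tunnel protection brings the IPSec sessions up as the tunnels pass traffic. Two standard checks (output varies by platform and IOS version, so only the commands are sketched here):

```
Hub#sh crypto isakmp sa
Hub#sh crypto ipsec sa | include pkts
```

We would expect the ISAKMP SAs to sit in QM_IDLE state, and the encap/decap packet counters to increment as traffic flows over the tunnels.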
Adding EIGRP to DMVPN
DMVPN requires a routing protocol within the tunnel (otherwise it would be rather useless), and this is a simple case of adding one!
Hub#sh run | beg router
router eigrp 1
network 10.0.0.0
no auto-summary
Spoke1#sh run | beg router
router eigrp 1
network 10.0.0.0
no auto-summary
Spoke2#sh run | beg router
router eigrp 1
network 10.0.0.0
no auto-summary
Spoke3#sh run | beg router
router eigrp 1
network 10.0.0.0
no auto-summary
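With network 10.0.0.0 covering both the tunnel and loopback interfaces, the Hub should form an adjacency with each spoke over Tunnel0; in this lab we would expect three neighbors (10.10.1.2, 10.10.1.3 and 10.10.1.4), all reached via Tunnel0. A quick check:

```
Hub#sh ip eigrp neighbors
```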
Bringing all of DMVPN together
With all of the nuts and bolts in place we should now have good visibility between our spoke routers. We should see routes learned through EIGRP (indicated with a "D") and be able to ping the loopback addresses that we configured at the start.
Spoke1#sh ip route | beg Gateway
Gateway of last resort is not set
10.0.0.0/8 is variably subnetted, 10 subnets, 2 masks
C 10.10.1.0/24 is directly connected, Tunnel0
L 10.10.1.2/32 is directly connected, Tunnel0
D 10.25.1.0/24 [90/27392000] via 10.10.1.1, 00:09:46, Tunnel0
S 10.25.1.2/32 [1/0] via 10.35.1.1
C 10.35.1.0/24 is directly connected, Serial0/0
L 10.35.1.2/32 is directly connected, Serial0/0
C 10.50.1.0/24 is directly connected, Loopback0
L 10.50.1.1/32 is directly connected, Loopback0
D 10.60.1.0/24 [90/28288000] via 10.10.1.3, 00:08:06, Tunnel0
D 10.70.1.0/24 [90/28288000] via 10.10.1.4, 00:07:02, Tunnel0
Spoke1#sh dmvpn | beg Interface
Interface: Tunnel0, IPv4 NHRP Details
Type:Spoke, NHRP Peers:1,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
3 10.25.1.2 10.10.1.1 UP 00:34:25 S
10.10.1.3 UP 00:02:07 D
10.10.1.4 UP 00:02:11 D
Spoke1#ping 10.70.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.70.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/43/48 ms
Spoke1#ping 10.60.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.60.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/44/48 ms
Spoke1#
Hub-and-Spoke or Spoke-to-Spoke?
Depending on the requirements, you can set up either a hub-and-spoke or a spoke-to-spoke topology. We have been using spoke-to-spoke throughout, which we can see using the command "sh ip nhrp". Because spoke-to-spoke tunnels drop when there is no traffic (apart from the one to the hub), NHRP shows what it currently believes to be the next hop for each end-point. So with the tunnel from Spoke1 to Spoke3 down, we can issue a ping and watch the tunnel come back up again, confirming that it is a spoke-to-spoke tunnel.
Spoke1#sh ip nhrp
10.10.1.1/32 via 10.10.1.1
Tunnel0 created 01:15:40, never expire
Type: static, Flags: used
NBMA address: 10.25.1.2
Spoke1#ping 10.70.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.70.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/40/40 ms
Spoke1#sh ip nhrp
10.10.1.1/32 via 10.10.1.1
Tunnel0 created 01:15:47, never expire
Type: static, Flags: used
NBMA address: 10.25.1.2
10.10.1.4/32 via 10.10.1.4
Tunnel0 created 00:00:01, expire 00:03:03
Type: dynamic, Flags: temporary
NBMA address: 10.25.1.2
Spoke1#
We can, should we wish, make it a true hub-and-spoke topology by adding the line "ip nhrp server-only" to the spokes.
With our configuration as before, we can see that the tunnel to the Hub always remains up. We can ping Spoke3 and confirm that another tunnel is created:
Spoke2#sh dmvpn | beg Interface
Interface: Tunnel0, IPv4 NHRP Details
Type:Spoke, NHRP Peers:1,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
1 10.25.1.2 10.10.1.1 UP 00:01:18 S
Spoke2#ping 10.70.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.70.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/43/48 ms
Spoke2#sh dmvpn | beg Interface
Interface: Tunnel0, IPv4 NHRP Details
Type:Spoke, NHRP Peers:1,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
2 10.25.1.2 10.10.1.1 UP 00:01:28 S
10.10.1.4 UP 00:00:03 D
If we then go into the interface and add the "ip nhrp server-only" command we can confirm that we still have reachability to Spoke3, but only have one tunnel:
Spoke2(config)#int tunnel 0
Spoke2(config-if)#ip nhrp server-only
Spoke2(config-if)#exit
Spoke2(config)#exit
Spoke2#sh dmvp | beg Interface
Interface: Tunnel0, IPv4 NHRP Details
Type:Spoke, NHRP Peers:1,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
1 10.25.1.2 10.10.1.1 UP 00:02:24 S
Spoke2#ping 10.70.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.70.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/42/48 ms
Spoke2#sh dmvp | beg Interface
Interface: Tunnel0, IPv4 NHRP Details
Type:Spoke, NHRP Peers:1,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
1 10.25.1.2 10.10.1.1 UP 00:02:31 S
There are some more commands we can use to confirm that our tunnels are looking how they should:
Spoke2#sh ip nhrp
10.10.1.1/32 via 10.10.1.1
Tunnel0 created 00:14:31, never expire
Type: static, Flags: used
NBMA address: 10.25.1.2
Spoke2#sh ip cef 10.70.1.0
10.70.1.0/24
nexthop 10.10.1.4 Tunnel0
So we have full reachability between the spokes! Pretty neat, and so much easier than creating three separate VPNs on each router. It is pretty simple really: if you have created a GRE tunnel before then we are only looking at a few extra lines, and the DMVPN-specific lines can be copied from one spoke router and pasted onto every other spoke because they are identical!
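As a closing illustration, here is the spoke tunnel configuration reduced to a template. Only the two placeholder values (marked in angle brackets, not real config) change per spoke; every other line, including all of the NHRP mappings, is identical:

```
interface Tunnel0
 ip address <SPOKE-TUNNEL-IP> 255.255.255.0   ! e.g. 10.10.1.2 on Spoke1
 no ip redirects
 ip mtu 1416
 no ip next-hop-self eigrp 1
 ip nhrp map 10.10.1.1 10.25.1.2
 ip nhrp map multicast 10.25.1.2
 ip nhrp network-id 1
 no ip split-horizon eigrp 1
 ip nhrp nhs 10.10.1.1
 tunnel source <SPOKE-WAN-IP>                 ! e.g. 10.35.1.2 on Spoke1
 tunnel mode gre multipoint
```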