DMVPN with VRFs for the Internet interfaces and BGP

I’ve been playing with some different DMVPN configurations. In this scenario, I wanted the Internet-facing interface to have a separate routing table, which I accomplished with a VRF. I also wanted to use a phase 2 DMVPN, which allows spokes to communicate directly with each other without having to send all traffic through the hub. The tricky part was getting the DMVPN tunnels to form over that interface. This is accomplished with the tunnel vrf command on the tunnel interface and by specifying the VRF in the crypto keyring.

Here is my hub config:
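The relevant pieces look roughly like this (addressing, interface names, and the pre-shared key are placeholders rather than the exact values from my lab). The lines that matter for the front-door VRF are the vrf keyword on the crypto keyring and tunnel vrf on the tunnel interface:

! front-door VRF that holds the Internet-facing interface
ip vrf INTERNET
!
! the keyring is tied to the VRF so ISAKMP matches peers arriving in it
crypto keyring DMVPN-KEYRING vrf INTERNET
 pre-shared-key address 0.0.0.0 0.0.0.0 key DMVPNKEY
!
crypto isakmp policy 10
 encryption aes 256
 authentication pre-share
 group 2
!
crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha-hmac
 mode transport
!
crypto ipsec profile DMVPN-PROFILE
 set transform-set DMVPN-TS
!
! Internet-facing interface lives in the VRF
interface GigabitEthernet0/0
 ip vrf forwarding INTERNET
 ip address 203.0.113.1 255.255.255.0
!
! default route toward the Internet inside the VRF
ip route vrf INTERNET 0.0.0.0 0.0.0.0 203.0.113.254
!
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 no ip redirects
 ip nhrp authentication NHRPKEY
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1
 ! tunnel endpoints are looked up in the INTERNET VRF
 tunnel vrf INTERNET
 tunnel protection ipsec profile DMVPN-PROFILE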

Here is my spoke config:
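Again with placeholder addressing, and with the same keyring, ISAKMP policy, and IPsec profile as the hub. The spoke needs an NHRP mapping for the hub plus a multipoint GRE tunnel so that phase 2 spoke-to-spoke tunnels can be built:

! front-door VRF and crypto configuration are the same as on the hub
ip vrf INTERNET
!
interface GigabitEthernet0/0
 ip vrf forwarding INTERNET
 ip address 198.51.100.4 255.255.255.0
!
ip route vrf INTERNET 0.0.0.0 0.0.0.0 198.51.100.1
!
interface Tunnel0
 ip address 10.0.0.4 255.255.255.0
 no ip redirects
 ip nhrp authentication NHRPKEY
 ! static NHRP mapping and next-hop server entry for the hub
 ip nhrp map 10.0.0.1 203.0.113.1
 ip nhrp map multicast 203.0.113.1
 ip nhrp nhs 10.0.0.1
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
 ! multipoint GRE (not a point-to-point tunnel) is what allows phase 2 spoke-to-spoke traffic
 tunnel mode gre multipoint
 tunnel key 1
 tunnel vrf INTERNET
 tunnel protection ipsec profile DMVPN-PROFILE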

One of the scenarios that I wanted to play with was having BGP dynamically create peers. However, my specific version of code doesn’t support dynamic BGP peers. If my code did support it, the BGP config would look something like:
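As a sketch, assuming a single ASN of 65000 and 10.0.0.0/24 as the tunnel subnet, the hub would accept sessions from any spoke in that range via a peer group:

router bgp 65000
 ! accept dynamic iBGP sessions from any address in the tunnel subnet
 bgp listen range 10.0.0.0/24 peer-group DMVPN-SPOKES
 bgp listen limit 50
 neighbor DMVPN-SPOKES peer-group
 neighbor DMVPN-SPOKES remote-as 65000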

Update:

I had an interesting idea: put the hubs and the spokes in the same BGP ASN, have the DMVPN hubs act as BGP route reflectors, and have the spokes peer with the hubs. Since the hubs are route reflectors, they propagate every spoke's routes to all of the other spokes. In a DMVPN phase 2 scenario, this allows the spokes to communicate directly with each other, because each spoke learns the other spokes' prefixes with the BGP next hop still pointing at the originating spoke. I set it up in my lab and it actually works pretty well.

Here is the BGP configuration from my hub:
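This is a sketch rather than a verbatim listing; the spoke tunnel addresses and the advertised network are placeholders. The hub peers with each spoke over the tunnel and marks them as route-reflector clients, and there is deliberately no next-hop-self, so reflected routes keep the originating spoke as the next hop:

router bgp 65000
 bgp log-neighbor-changes
 network 192.168.0.0 mask 255.255.255.0
 ! each spoke is an iBGP route-reflector client; next hops are left untouched
 neighbor 10.0.0.4 remote-as 65000
 neighbor 10.0.0.4 route-reflector-client
 neighbor 10.0.0.5 remote-as 65000
 neighbor 10.0.0.5 route-reflector-client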

Here is the BGP configuration from one of my spokes:
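Again as a sketch with placeholder addressing; the spoke is just an ordinary iBGP neighbor of the hub (a second hub would simply be another neighbor statement):

router bgp 65000
 bgp log-neighbor-changes
 ! advertise the LAN behind this spoke
 network 192.168.4.0 mask 255.255.255.0
 ! iBGP session to the hub over the DMVPN tunnel
 neighbor 10.0.0.1 remote-as 65000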

Here is the ISAKMP session status, BGP table, and traceroute to a neighboring spoke from the DMVPN-SPOKE1-R4 spoke.

One way to make this scale, without the manual intervention of adding a neighbor statement for every spoke, would be to use dynamic BGP neighbors on the DMVPN hubs (the bgp listen range configuration shown earlier, with the peer group also made a route-reflector client). In my lab setup, BGP works pretty well in a DMVPN environment.


November 26, 2013

Posted In: BGP, DMVPN, VRF

Rackspace Performance vs Standard Cloud Server Disk I/O

I just spun up a Rackspace High Performance Cloud Server, ran some I/O benchmarks on it, and compared it to one of my standard Cloud Servers. Here are my findings.

This is the script that I ran to gather I/O stats:
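In outline it is a simple dd loop along these lines (the exact file path, block size, and cache handling are details I'm sketching in):

#!/bin/bash
# write a 2GB file of zeros, read it back, and repeat: ten runs in total
for run in $(seq 1 10); do
    # write test: 2GB of zeros, flushed to disk before dd reports its rate
    dd if=/dev/zero of=/root/ddtest bs=1M count=2048 conv=fdatasync 2>&1 | tail -1
    # drop the page cache so the read test hits the disk instead of memory
    sync && echo 3 > /proc/sys/vm/drop_caches
    # read test: read the same 2GB file back
    dd if=/root/ddtest of=/dev/null bs=1M 2>&1 | tail -1
    rm -f /root/ddtest
done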

As you can see, it’s a simple test that writes 2GB of zeros to a file, reads it back, and then repeats the run nine more times.

First, here are the stats from my standard cloud server:

Average Write Speed: 127.46 MB/s
Average Read Speed: 93.49 MB/s

Now here are the results from the High Performance Server:

Average Write Speed: 467.8 MB/s
Average Read Speed: 175.5 MB/s

Given my simple test, the High Performance Cloud Server's write speed is about 267% faster and its read speed about 88% faster than the standard Cloud Server from Rackspace. Pretty interesting!


November 22, 2013

Posted In: Filesystems, Rackspace