Rackspace Cloud Servers and Networks with Open vSwitch and VXLAN between Data Centers

I’ve been playing with Open vSwitch and the VXLAN patch that is available at:

https://github.com/mestery/ovs-vxlan

So far, all my testing has been done on my Rackspace Cloud account. I realize that you wouldn’t use VXLAN in a scenario like this on any production network, but for my testing I thought it would be good to have a physical separation of the networks. While I was able to get my VXLAN tunnel up, I haven’t been able to get traffic to pass completely from my test-dfw to my test-ord server. The traffic is getting lost somewhere after it leaves the ovs-ord internal interface (eth2) destined for the test-ord server. I believe that I either need to add some configuration to the Open vSwitch service, or Rackspace Cloud Networks is stripping some data as it leaves ovs-ord destined for test-ord. I’m still trying to figure that piece out. Below is what I have so far, along with the testing and troubleshooting section at the bottom.

Here is a diagram of the lab:

[Diagram: VXLAN-LAB]

I started off by building the OVS servers that would be used to create the VXLAN tunnels and pass traffic to the servers sitting behind them. To do this, I used Rackspace Cloud Networks to create a private internal network in both the ORD and DFW data centers. All my servers would use eth2 to access that network, and since the data centers are physically separated, my internal networks would be isolated from each other as well. I also used Rackspace Cloud Servers to build the lab infrastructure: four servers in total, each running the Rackspace-provided Fedora 17 image as a 512 MB instance.

First, I created all the instances, named ovs-dfw, ovs-ord, test-dfw, and test-ord. I then configured the ovs-{dfw, ord} instances.

#######################################
OVS Server Builds
#######################################

* Executed on both servers:

Once the servers came back up, I wanted to verify the kernel version that was running. This will be needed when building the Open vSwitch kernel module RPM. My kernel version is 3.3.4-5.fc17.x86_64. If your kernel version is different, take note and make the appropriate changes when building the openvswitch kernel module. I’ll be sure to remind you of this later, as those steps come up.
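Checking that is a one-liner:

uname -r

On my instances that returned 3.3.4-5.fc17.x86_64.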

 

After you’ve taken note of the running kernel version, we’ll need to install the utilities needed to compile code, build RPMs, and troubleshoot networks. Note that I installed a kernel-specific kernel-devel package, kernel-devel-3.3.4-5.fc17.x86_64. If your running kernel is different, change the package name to match the appropriate kernel.
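The exact package set I ran isn’t shown here, but something along these lines covers the compilers, the RPM build tooling, and basic network troubleshooting (swap the kernel-devel version for your running kernel):

yum -y install gcc make git autoconf automake libtool rpm-build \
    redhat-rpm-config python-devel openssl-devel tcpdump wget \
    kernel-devel-3.3.4-5.fc17.x86_64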

Once the packages have installed, we can download the openvswitch source and prepare it for the RPM builds.
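The source in this case is the ovs-vxlan tree linked at the top of this post. Roughly, and assuming the checkout lands in /root (the branch you check out and the resulting tarball version will vary):

cd /root
git clone https://github.com/mestery/ovs-vxlan.git
cd ovs-vxlan
./boot.sh
./configure
make dist

The make dist step produces the openvswitch tarball that the RPM spec files expect as their source.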

 

Now, we can start building the RPMs.
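The spec files live in the tree’s rhel/ directory; their names have shifted between OVS releases, so treat openvswitch.spec below as illustrative and use whatever your checkout actually contains. The userspace RPM build looks roughly like:

mkdir -p ~/rpmbuild/SOURCES
cp openvswitch-*.tar.gz ~/rpmbuild/SOURCES/
rpmbuild -bb rhel/openvswitch.spec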

 

Just a quick note: if you’re not running the same kernel version that I am, you will need to change the next line to reflect the kernel version you are running, or it will error out.
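That line is the kernel module build, which pins the kernel version through an rpmbuild define. Assuming my 3.3.4-5.fc17.x86_64 kernel and the kmod spec file name from this tree (adjust both to match your system and checkout), it looks something like:

rpmbuild -bb -D "kversion 3.3.4-5.fc17.x86_64" rhel/openvswitch-kmod-fedora.spec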

 

Now, let’s install the newly minted RPMs!
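A localinstall picks them all up in one shot (exact file names depend on the version string from your build):

yum -y localinstall ~/rpmbuild/RPMS/x86_64/*openvswitch*.rpm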

 

Once that is completed, enable the open vswitch services to start at boot and start the services.
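Fedora 17 is systemd-based, so assuming the RPM ships an openvswitch unit:

systemctl enable openvswitch.service
systemctl start openvswitch.service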

 

Now that the hard part is done, we can verify that open vswitch is running and functioning properly before going on to creating the VXLAN tunnels.
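A quick sanity check: the service should be active, the kernel module loaded, and ovs-vsctl should answer with an (empty) configuration rather than an error:

systemctl status openvswitch.service
lsmod | grep openvswitch
ovs-vsctl show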

 

Once open vswitch has been verified, let’s configure it for VXLAN! Where you see */ip_addr_of_remote_server/*, replace it with the IP Address of the remote OVS server. So on the OVS-DFW server you should put the IP Address of the OVS-ORD server, and vice versa. Those IP Addresses on the Rackspace Cloud servers reside on eth0.
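The commands I ran aren’t reproduced here, but a minimal sketch of that configuration, assuming a bridge named br0, the internal interface eth2 attached to it, and a tunnel port I’m calling vx0, is below. Run it on both OVS servers, swapping in the remote server’s eth0 address:

ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth2
ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan options:remote_ip=ip_addr_of_remote_server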

That’s it! The VXLAN tunnels have been built and we’re now ready to work on the test-{dfw, ord} servers. This setup is easy: all we need to do is set up IP Addresses on the eth2 interfaces. For this test, I’m using 192.168.1.11 for the test-dfw server and 192.168.1.12 for the test-ord server. When I created the internal networks on my cloud account, I left the default CIDR as 192.168.3.0/24, so I’ll want to change this configuration on the servers so that they boot with the IP Addresses that I want to use.

#######################################
TEST-DFW eth2 configuration
#######################################
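A sketch of what the Fedora network script for eth2 looks like on test-dfw with the 192.168.1.11 address (assuming a /24 mask):

cat > /etc/sysconfig/network-scripts/ifcfg-eth2 <<'EOF'
DEVICE=eth2
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.11
NETMASK=255.255.255.0
EOF

ifdown eth2; ifup eth2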

 

 

#######################################
TEST-ORD eth2 configuration
#######################################
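And the same on test-ord, with its 192.168.1.12 address:

cat > /etc/sysconfig/network-scripts/ifcfg-eth2 <<'EOF'
DEVICE=eth2
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.12
NETMASK=255.255.255.0
EOF

ifdown eth2; ifup eth2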

 

 

#######################################
Test connectivity from test-dfw to test-ord
#######################################

We’ll do this in steps; the corresponding commands are sketched out after this list. I’ll initiate a ping from test-dfw to test-ord and watch it at each point along the path.
* On test-dfw, I’ll start a ping to test-ord (ping 192.168.1.12).
* On ovs-ord, I’ll use tcpdump to listen for traffic on br0 and eth2.
* On test-ord, I’ll use tcpdump to listen for traffic on eth2.
* If I don’t receive a ping reply or traffic is lost along the path, I’ll test VXLAN connectivity by assigning IP Addresses to the br0 interfaces of ovs-dfw (192.168.1.1) and ovs-ord (192.168.1.2). While I have addresses assigned to the br0 interfaces of ovs-{dfw, ord}, I’ll also test connectivity directly to their local LAN-connected servers: on ovs-dfw, I’ll ping test-dfw, and on ovs-ord, I’ll ping test-ord.
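Spelled out as commands, that test pass looks roughly like this:

ping 192.168.1.12                      # on test-dfw

tcpdump -n -i br0 icmp                 # on ovs-ord
tcpdump -n -i eth2 icmp                # on ovs-ord

tcpdump -n -i eth2 icmp                # on test-ord

ip addr add 192.168.1.1/24 dev br0     # fallback, on ovs-dfw
ip addr add 192.168.1.2/24 dev br0     # fallback, on ovs-ord
ping 192.168.1.11                      # from ovs-dfw to test-dfw
ping 192.168.1.12                      # from ovs-ord to test-ord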

#######################################
tcpdump of a successful ping between ovs-ord and test-ord
#######################################

 

January 12, 2013

Posted In: Linux, openvswitch, SDN, Software Defined Networking, VXLAN

Playing with Openvswitch.

I’ve been playing with openvswitch a little bit this evening. Here are some notes that I took for a very basic configuration on Ubuntu 12.04.

————————————————————
Documentation References
————————————————————

http://networkstatic.net/openflow-openvswitch-lab/

http://openvswitch.org/support/config-cookbooks/vlan-configuration-cookbook/

https://help.ubuntu.com/community/BridgingNetworkInterfaces

————————————————————
Install, Update, and Configure Ubuntu
————————————————————

Installed Ubuntu 12.04 from a thumb drive.
– Started with an 80 GB drive / 4 GB RAM
– Chose custom partitioning
– 500 MB /boot partition
– 4 GB swap partition
– 10 GB / partition
– remaining untouched (~65 GB) will be converted to LVM later.

apt-get -y install vim openssh-server lvm2
apt-get -y update
apt-get -y dist-upgrade
reboot

apt-get -y purge network-manager

echo "auto eth0
iface eth0 inet static
address 172.16.2.11
netmask 255.255.255.0
network 172.16.2.0
broadcast 172.16.2.255
dns-nameservers 172.16.2.1
gateway 172.16.2.1" >> /etc/network/interfaces

/etc/init.d/networking restart

————————————————————
Install Openvswitch
————————————————————

apt-get -y install openvswitch-datapath-source bridge-utils
module-assistant auto-install openvswitch-datapath
apt-get -y install openvswitch-brcompat openvswitch-common

————————————————————
Test Openvswitch Install
————————————————————

service openvswitch-switch status
ovs-vsctl show

————————————————————
Configure Openvswitch
————————————————————

The first thing that we’ll want to do is enable bridging compatibility.
Bridging will act as the interface between the hypervisor physical network cards and the virtual machines. This will be controlled by openvswitch.

sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch
service openvswitch-switch restart

Once the bridging compatibility has been enabled and openvswitch restarted, we’ll need to define a bridging interface and add the physical nic to the bridge.

*/ NOTE: This should be performed at the console of the physical machine, as it will bring down networking on the host /*

sed -i 's/eth0/br0/g' /etc/network/interfaces
echo "auto eth0
iface eth0 inet manual
up ip link set eth0 up" >> /etc/network/interfaces
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
/etc/init.d/networking restart

At this point, the networking should be working again and you should be able to log into the host remotely.

December 30, 2012

Posted In: Linux, openvswitch, SDN, Software Defined Networking