I finally got a chance to sit down and play with the pre-built OpenStack ‘Private Cloud Edition’ from Rackspace. Once it’s installed, you can spin up instances right out of the box, but there are a few nuances to getting a functional platform for remote access and serving. I figured I’d do a run-through of the install and the initial changes I made to get my install working.

The first thing you need to do is obtain the Private Cloud Edition (PCE) ISO, which can be downloaded for free from the Rackspace website – http://www.rackspace.com/cloud/private/. Once it’s downloaded and you have a bootable thumb drive or DVD, you’re ready to rock! The system requirements listed for PCE on the Rackspace website are pretty stout: a controller node with 16 GB of RAM, 144 GB of disk space, and a dual-socket CPU with dual cores or a single quad core, and a compute node with the same specs except for RAM, which they list as 32 GB. Those are more recommendations than hard requirements. I installed the compute and controller node (all-in-one) on a single desktop PC with one dual-core CPU, 4 GB of RAM, and an 80 GB hard drive, which is completely fine for testing purposes. The one real requirement is that your CPUs support hardware virtualization (VT-x or AMD-V), as the underlying hypervisor is KVM.
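If you’re not sure whether your hardware qualifies, a quick check of the CPU flags tells you (a small sketch, assuming a Linux host with /proc/cpuinfo):

```shell
# Count logical CPUs advertising the VT-x (vmx) or AMD-V (svm) flag that
# KVM needs for hardware acceleration. Linux-only: reads /proc/cpuinfo.
vtcount=$(grep -E -c 'vmx|svm' /proc/cpuinfo 2>/dev/null || true)
vtcount=${vtcount:-0}
if [ "$vtcount" -gt 0 ]; then
  echo "hardware virtualization available on $vtcount logical CPUs"
else
  echo "no vmx/svm flag found - KVM will not be accelerated"
fi
```

A count of zero usually means either the CPU lacks the feature or it’s disabled in the BIOS.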

The install is a pretty painless process. The first screen prompts you to accept a EULA, then asks what you want to install – Controller, Compute, or All-in-One.

20130118_011938


After that, you set up the IP address of the server, the netblock assigned to VMs (the default is 172.31.0.0/24 – I left this default), and the user accounts (OpenStack admin account, OpenStack user account, and local server user account). From there, the automated installer installs the packages. Once it’s done, the server will boot up and you’ll be ready to play!

20130118_152157


If you’ve ever used the Rackspace public cloud, the UI will look very familiar. If you prefer, it can be changed to the default OpenStack UI. The first thing we’ll want to do after logging in is grab the API credentials so we can easily use the command-line tools. To do this, log in with your admin account, select the ‘Settings’ link at the top right of the screen, select the “OpenStack API” tab, select ‘admin’ as the project, and finally press the “Download RC File” button.

Screenshot from 2013-01-19 22:29:37

Once the openrc.sh file is downloaded, copy it to your PCE server so we can start configuring things from the command line. As you can see below, I used scp to copy the file to the server.

[jtdub@jtdub-desktop Downloads]$ scp openrc.sh james@172.16.2.254:/home/james/.novarc
james@172.16.2.254's password: 
openrc.sh                                                                                                                                              100%  958     0.9KB/s   00:00

After the file is copied to the server, we’ll ssh in and use the CLI tools to configure floating IP addresses. One thing I’ve noticed while playing with OpenStack is that, at least in my limited experience, the ‘nova-manage’ and chef commands won’t execute properly without administrative privileges on the server. I therefore ‘sudo su -‘ to root while using ‘nova-manage’ and chef, but continue to use my non-privileged account for the ‘nova’ command.

So, let’s add a floating IP address range. My PCE all-in-one server sits on the 172.16.2.0/24 network: 172.16.2.1 is the router, 172.16.2.254 is the PCE server, and there is another computer at 172.16.2.12. I want a range of addresses on 172.16.2.0/24 that won’t conflict with existing hosts. For testing purposes I only need about 16 addresses, so I carved out 172.16.2.32/28. That tells OpenStack to assign addresses 172.16.2.33 – 46 to VM instances as they spin up and to reclaim those addresses as the instances are torn down. This lets me continue to use the rest of the 172.16.2.0/24 network without conflict.
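To make the /28 arithmetic explicit, here’s a quick sketch; the loop just prints the assignable range:

```shell
# 172.16.2.32/28 spans 16 addresses (.32-.47). The network (.32) and
# broadcast (.47) addresses are reserved, leaving .33-.46 for instances.
addrs=$(for i in $(seq 33 46); do echo "172.16.2.$i"; done)
echo "$addrs"
```

Those 14 addresses are exactly what ‘nova-manage floating list’ shows below.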

james@openstack:~$ sudo su -
root@openstack:~# source /home/james/.novarc 
Please enter your OpenStack Password: 
root@openstack:~# nova-manage floating create --pool=172.16.2.32-net --ip_range=172.16.2.32/28
2013-01-19 23:49:15 DEBUG nova.utils [req-99d8cbf4-8821-4c3d-afc7-9a584cfc1748 None None] backend <module 'nova.db.sqlalchemy.api' from '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc'> __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:502
root@openstack:~# nova-manage floating list
2013-01-19 23:49:22 DEBUG nova.utils [req-034aa938-a81f-428b-be81-96895607bb4c None None] backend <module 'nova.db.sqlalchemy.api' from '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc'> __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:502
None	172.16.2.33	None	172.16.2.32-net	br0
None	172.16.2.34	None	172.16.2.32-net	br0
None	172.16.2.35	None	172.16.2.32-net	br0
None	172.16.2.36	None	172.16.2.32-net	br0
None	172.16.2.37	None	172.16.2.32-net	br0
None	172.16.2.38	None	172.16.2.32-net	br0
None	172.16.2.39	None	172.16.2.32-net	br0
None	172.16.2.40	None	172.16.2.32-net	br0
None	172.16.2.41	None	172.16.2.32-net	br0
None	172.16.2.42	None	172.16.2.32-net	br0
None	172.16.2.43	None	172.16.2.32-net	br0
None	172.16.2.44	None	172.16.2.32-net	br0
None	172.16.2.45	None	172.16.2.32-net	br0
None	172.16.2.46	None	172.16.2.32-net	br0

At this point, I should spend a little time describing how networking for VM instances will work. When we initially installed PCE, we were prompted for a CIDR block for Nova fixed (VM) networking; the default is 172.31.0.0/24.

20130118_012148

One IP address on the 172.31.0.0/24 network will be allocated to the br0 interface of your PCE server, and the rest will be assigned to your instances as they boot up. The br0 interface also carries the IP address of the PCE server itself – in this case, 172.16.2.254.

root@openstack:~# ip addr show br0
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 00:23:ae:90:aa:7c brd ff:ff:ff:ff:ff:ff
    inet 172.31.0.5/24 brd 172.31.0.255 scope global br0
    inet 172.16.2.254/24 brd 172.16.2.255 scope global br0
    inet6 fe80::4858:14ff:fe72:7112/64 scope link 
       valid_lft forever preferred_lft forever
root@openstack:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.2.1      0.0.0.0         UG    100    0        0 br0
169.254.123.0   0.0.0.0         255.255.255.0   U     0      0        0 chefbr0
172.16.2.0      0.0.0.0         255.255.255.0   U     0      0        0 br0
172.31.0.0      0.0.0.0         255.255.255.0   U     0      0        0 br0

The br0 interface is a bridge that connects the VM network to the physical interface, eth0. OpenStack then routes (layer 3) traffic between eth0 and the 172.31.0.0/24 network. It also uses iptables to create a PAT/NAT so the instances can communicate on the network, and with the internet if you allow it. However, computers outside the PCE environment can’t reach the VM instances directly, because those computers are unaware of the 172.31.0.0/24 network. This is where floating IP addresses come into play: a floating IP creates a one-to-one NAT mapping a VM instance to an address in your floating range – in my case, 172.16.2.32/28. Also, by default the PCE iptables rules are very restrictive and don’t allow incoming traffic to reach the VM instances; to allow that traffic, you’ll have to create or edit security groups, which comes later on. Below is a diagram of the PCE network environment.
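Conceptually, the one-to-one NAT for a floating IP boils down to a DNAT/SNAT pair. The rules below are illustrative only – nova-network manages its own iptables chains – using a hypothetical VM at fixed address 172.31.0.2 mapped to floating address 172.16.2.33:

```shell
# Illustrative one-to-one NAT: traffic arriving for the floating address is
# rewritten to the VM's fixed address, and the VM's outbound traffic is
# rewritten to come from the floating address. (Requires root; nova keeps
# these in its own chains, not directly in PREROUTING/POSTROUTING.)
iptables -t nat -A PREROUTING  -d 172.16.2.33 -j DNAT --to-destination 172.31.0.2
iptables -t nat -A POSTROUTING -s 172.31.0.2  -j SNAT --to-source 172.16.2.33
```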

OpenStack-network

Now that we understand how the networking works, let’s log back into the UI as a normal user. As that user, we’re going to edit our default security group to define what traffic we want to allow to our VM instances, add a couple of floating IP addresses to our project, create the keypairs that will allow us to access our VMs, add a pre-built Fedora 17 image to our default images, and finally spin up an instance and verify that we can access it from an outside computer.

Once we log in as our normal user, the first thing we’ll do is edit the default security group to define what traffic we want to allow to our VM instances from the outside world. In my test, I’m going to allow ICMP echo (code -1, type -1), all UDP traffic, and all TCP traffic. To access the security groups, select the “Access & Security” tab along the top menu, locate the “Security Groups” section, and press “Edit Rules” on the default group.

Screenshot from 2013-01-19 23:52:32

Once you’ve located that screen, enter your rules.

Screenshot from 2013-01-19 23:55:18

Now that the security group has been edited, we’ll add floating IP addresses, which are on the same “Access & Security” page. To do this, press the “Allocate IP To Project” button, select your pool (if you have multiple IP address pools), and press the “Allocate IP” button. You can add as many IP addresses as you need for your project. By default, quotas limit a “project” to 10 floating IP addresses. This quota can be changed and will be discussed later on.

Screenshot from 2013-01-19 23:59:54

Lastly, on the same “Access & Security” page, let’s generate the key pairs that will be used to access our VM instances. In the “Keypairs” section, press the “Create Keypair” button. This brings up a screen where you can name the key pair.

Screenshot from 2013-01-20 00:05:22

Once the keypair has been generated, you’ll be prompted to download a pem file. Do so, and then add the keypair to your ssh keys – in Linux, with the ssh-add command. As this is a private key, you won’t want other users to be able to read it, so be sure to change the permissions so that only your account can access the file.

[jtdub@jtdub-desktop Downloads]$ chmod 600 jtdub-keypair.pem 
[jtdub@jtdub-desktop Downloads]$ ssh-add jtdub-keypair.pem 
Identity added: jtdub-keypair.pem (jtdub-keypair.pem)
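For reference, the floating IP allocation and keypair steps can also be done from the nova CLI (with the credentials file sourced, as earlier). This is a sketch – it needs a live cloud to run – and it uses the pool name created with nova-manage above:

```shell
# Allocate a floating IP from the pool created earlier and list the result.
nova floating-ip-create 172.16.2.32-net
nova floating-ip-list

# Generate a keypair server-side; stdout is the private key, so capture it
# and lock down the permissions before handing it to ssh-add.
nova keypair-add jtdub-keypair > jtdub-keypair.pem
chmod 600 jtdub-keypair.pem
```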

We can see the light at the end of the tunnel! If you wanted to, you could now spin up VM instances using the default images that come with PCE. However, I’m going to download a pre-built image of Fedora 17 so I can demonstrate how to import images. The Fedora 17 image I’m using can be downloaded at http://berrange.fedorapeople.org/images/. In your UI, still logged in as the unprivileged user, select the “Images & Snapshots” tab. Once there, press the “Create Image” button, fill out the form, and press the “Create Image” button.

Screenshot from 2013-01-20 00:16:58

Now you just wait for the image to download. It will take a little while, depending on the speed of your Internet connection and the size of the image.


Screenshot from 2013-01-20 00:17:16

When the image download completes, we can finally create our first instance. This is done from the “Instances” tab in the UI, by pressing the “Launch Instance” button. The Launch Instance page has several options, and it’s worth spending a few minutes getting familiar with them. Here’s a rundown of the settings I used.

  • Details Tab:
    • “Image”, I selected my newly minted fedora17-image.
    • “Instance Name”, I chose the name f17-test
    • “Flavor”, I left it at the m1.tiny (512 MB RAM) flavor.
  • Access & Security Tab:
    • “Keypair”, I chose my jtdub-keypair

After that, I pressed the “Launch” button. In no time flat, my first instance was up and running. The only thing left is to associate a floating IP address with the VM instance.

Screenshot from 2013-01-20 00:29:07 Screenshot from 2013-01-20 00:30:19

To associate a floating IP address with an instance, locate your VM instance on the “Instances” page, drop down the menu on the “Create Snapshot” button, and select “Associate Floating IP”. Once the “Manage Floating IP Associations” page comes up, select an IP address and press the “Associate” button.

Screenshot from 2013-01-20 00:35:16 Screenshot from 2013-01-20 00:36:32

That’s it! The first instance is up and running and should be remotely accessible! To test it, I’ll ping and then ssh to the instance.

[jtdub@jtdub-desktop ~]$ ping -c2 172.16.2.33
PING 172.16.2.33 (172.16.2.33) 56(84) bytes of data.
64 bytes from 172.16.2.33: icmp_seq=1 ttl=62 time=0.870 ms
64 bytes from 172.16.2.33: icmp_seq=2 ttl=62 time=0.801 ms

--- 172.16.2.33 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.801/0.835/0.870/0.045 ms
[jtdub@jtdub-desktop ~]$ ssh -l root 172.16.2.33
The authenticity of host '172.16.2.33 (172.16.2.33)' can't be established.
RSA key fingerprint is 3d:ec:47:85:9c:72:9b:3c:87:b6:0a:25:fa:7d:0b:d9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.2.33' (RSA) to the list of known hosts.
[root@f17-test ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:77:9d:16 brd ff:ff:ff:ff:ff:ff
    inet 172.31.0.2/24 brd 172.31.0.255 scope global eth0
    inet6 fe80::f816:3eff:fe77:9d16/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever

That’s it! We now have a PCE compute cloud running. Whew! LONG blog! I’ll wrap it up here for now. Soon, I’ll write another, much shorter post showing how to switch the UI back to the default OpenStack UI, if you prefer it, and covering project quotas and how to modify them. Thanks for reading!


I’ve been playing with Open vSwitch and the VXLAN patch that is available at:

https://github.com/mestery/ovs-vxlan

So far, all my testing has been done on my Rackspace Cloud account. I realize you wouldn’t use VXLAN in a scenario like this on any production network, but for my testing I thought it would be good to have a physical separation of the networks. While I was able to get my VXLAN tunnel up, I haven’t been able to get traffic to pass completely from my test-dfw to my test-ord server. The traffic is getting lost at some point after it leaves the ovs-ord internal interface (eth2) destined for the test-ord server. I believe I either need to add some configuration to the Open vSwitch service, or Rackspace Cloud Networks is stripping some data as it leaves ovs-ord destined for test-ord. I’m still trying to figure that piece out. Below is what I have so far, along with a testing and troubleshooting section at the bottom.

Here is a diagram of the lab:

VXLAN-LAB

I started off by building the OVS servers that would create the VXLAN tunnels and pass traffic to the servers sitting behind them. To do this, I used Rackspace Cloud Networks to create a private internal network in both the ORD and DFW data centers. All my servers use eth2 to access that network, and since the data centers are physically separated, my internal networks are isolated from each other as well. I also used Rackspace Cloud Servers to build the lab infrastructure: four servers in total, each running the Rackspace-provided Fedora 17 image as a 512 MB instance.

First, I created all the instances, named ovs-dfw, ovs-ord, test-dfw, and test-ord. I then configured the ovs-{dfw, ord} instances.

#######################################
OVS Server Builds
#######################################

* Executed on both servers:

yum -y --disableexcludes=all update

for i in disable stop; do
for o in rpcbind.socket rpcbind.service iptables.service; do
systemctl $i $o;
done
done

reboot

Once the servers came back up, I wanted to verify the running kernel version, which will be needed when building the Open vSwitch kernel module RPM. My kernel version is 3.3.4-5.fc17.x86_64. If your kernel version is different, take note and make the appropriate changes when building the Open vSwitch kernel module. I’ll remind you as those steps come up.


uname -r

After you’ve taken note of the running kernel version, we’ll install the utilities needed to compile code, build RPMs, and troubleshoot networks. Note that I installed a kernel-specific kernel-devel package – kernel-devel-3.3.4-5.fc17.x86_64. If your running kernel is different, change the package name to match.

yum install -y openvswitch gcc make python-devel openssl-devel kernel-devel kernel-debug-devel git automake autoconf rpmdevtools kernel-devel-3.3.4-5.fc17.x86_64 tcpdump

Once the packages are installed, we can download the Open vSwitch source and build the release tarball that the RPMs are built from.


git clone https://github.com/mestery/ovs-vxlan.git
cd ovs-vxlan
git checkout vxlan
./boot.sh
./configure --with-linux=/lib/modules/`uname -r`/build
make dist

Now, we can start building the RPMs.


rpmdev-setuptree
cp openvswitch-1.9.90.tar.gz ~/rpmbuild/SOURCES/
cd ~/rpmbuild/SOURCES/
tar xvzf openvswitch-1.9.90.tar.gz
cd openvswitch-1.9.90/
rpmbuild -bb rhel/openvswitch-fedora.spec

Just a quick note: if you’re not running the same kernel version that I am, you’ll need to change the version in the sed command below to reflect the kernel you are running, or the build will error out.


sed -i 's/#%define kernel 3.1.5-1.fc16.x86_64/%define kernel 3.3.4-5.fc17.x86_64/' rhel/openvswitch-kmod-fedora.spec
rpmbuild -bb rhel/openvswitch-kmod-fedora.spec
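If you’d rather not hard-code the version at all, you can feed `uname -r` into the substitution. Here’s a small sketch, demonstrated on a scratch copy of the spec’s kernel line rather than the real rhel/openvswitch-kmod-fedora.spec file:

```shell
# Substitute the running kernel version into the spec's commented-out
# kernel define, using uname -r instead of a hand-edited string.
kver=$(uname -r)
spec=$(mktemp)
echo '#%define kernel 3.1.5-1.fc16.x86_64' > "$spec"
sed -i "s|^#%define kernel .*|%define kernel $kver|" "$spec"
cat "$spec"
```

On the real build, you’d point the sed at the spec file and then run rpmbuild as shown above.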

Now, let’s install the newly minted RPMs!


cd ~/rpmbuild/RPMS/x86_64/
rpm -Uvh *

Once that is completed, enable the open vswitch services to start at boot and start the services.


systemctl enable openvswitch.service
systemctl restart openvswitch.service

Now that the hard part is done, we can verify that open vswitch is running and functioning properly before going on to creating the VXLAN tunnels.


ps -ae | grep ovs
ovs-vsctl show

Once Open vSwitch has been verified, let’s configure it for VXLAN! Where */ip_addr_of_remote_server/* appears below, replace it with the IP address of the remote OVS server – on the OVS-DFW server, use the IP address of the OVS-ORD server, and vice versa. On Rackspace Cloud servers, those addresses reside on eth0 and can be pulled with:

ip addr show dev eth0 | grep inet | head -1 | awk '{print $2}' | cut -d / -f 1

eth2=/etc/sysconfig/network-scripts/ifcfg-eth2; \
sed -i 's/IPADDR/#IPADDR/g' $eth2; \
sed -i 's/NETMASK/#NETMASK/g' $eth2; \
sed -i 's/DNS/#DNS/g' $eth2; \
sed -i 's/static/none/g' $eth2
ip addr flush dev eth2
ip addr show dev eth2
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth2
ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan options:remote_ip=*/ip_addr_of_remote_server/*
ovs-vsctl show
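To avoid hand-editing the add-port line, you can capture the remote address in a variable first. REMOTE_IP below is a placeholder of my own – on ovs-dfw set it to ovs-ord’s eth0 address, and vice versa:

```shell
# REMOTE_IP is an assumed placeholder, not a real address in this lab;
# set it to the other OVS server's eth0 address before running.
REMOTE_IP=203.0.113.10
ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan options:remote_ip=$REMOTE_IP
```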

That’s it! The VXLAN tunnels have been built, and we’re now ready to work on the test-{dfw, ord} servers. This setup is easy: all we need to do is assign IP addresses to the eth2 interfaces. For this test, I’m using 192.168.1.11 for the test-dfw server and 192.168.1.12 for the test-ord server. When I created the internal networks on my cloud account, I left the default CIDR as 192.168.3.0/24, so I’ll change the configuration on the servers so that they boot with the IP addresses I want to use.

#######################################
TEST-DFW eth2 configuration
#######################################


eth2=/etc/sysconfig/network-scripts/ifcfg-eth2; \
sed -i 's/192\.168\.3\.[0-9]\+/192.168.1.11/g' $eth2
ip addr flush dev eth2
ip addr add 192.168.1.11/24 dev eth2
ip addr show dev eth2


#######################################
TEST-ORD eth2 configuration
#######################################


eth2=/etc/sysconfig/network-scripts/ifcfg-eth2; \
sed -i 's/192\.168\.3\.[0-9]\+/192.168.1.12/g' $eth2
ip addr flush dev eth2
ip addr add 192.168.1.12/24 dev eth2
ip addr show dev eth2


#######################################
Test connectivity from test-dfw to test-ord
#######################################

We’ll do this in steps, initiating a ping from test-dfw to test-ord:
* On test-dfw, I’ll start a ping to test-ord (ping 192.168.1.12).
* On ovs-ord, I’ll use tcpdump to listen for traffic on br0 and eth2.
* On test-ord, I’ll use tcpdump to listen for traffic on eth2.
* If I don’t receive a ping reply, or traffic is lost along the path, I’ll test VXLAN connectivity by assigning IP addresses to the br0 interfaces of ovs-dfw (192.168.1.1) and ovs-ord (192.168.1.2). While those addresses are assigned to the br0 interfaces of ovs-{dfw, ord}, I’ll also test connectivity to their locally attached servers: on ovs-dfw, I’ll ping test-dfw, and on ovs-ord, I’ll ping test-ord.

[root@test-dfw ~]# ping 192.168.1.12
PING 192.168.1.12 (192.168.1.12) 56(84) bytes of data.
From 192.168.1.11 icmp_seq=1 Destination Host Unreachable
From 192.168.1.11 icmp_seq=2 Destination Host Unreachable
From 192.168.1.11 icmp_seq=3 Destination Host Unreachable
From 192.168.1.11 icmp_seq=4 Destination Host Unreachable
From 192.168.1.11 icmp_seq=5 Destination Host Unreachable
[root@ovs-ord ~]# tcpdump -i br0 -XX -vvv -e -c 5
tcpdump: WARNING: br0: no IPv4 address assigned
tcpdump: listening on br0, link-type EN10MB (Ethernet), capture size 65535 bytes
05:05:08.627796 bc:76:4e:04:82:f2 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.12 tell 192.168.1.11, length 28
	0x0000:  ffff ffff ffff bc76 4e04 82f2 0806 0001  .......vN.......
	0x0010:  0800 0604 0001 bc76 4e04 82f2 c0a8 010b  .......vN.......
	0x0020:  0000 0000 0000 c0a8 010c                 ..........
05:05:09.628891 bc:76:4e:04:82:f2 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.12 tell 192.168.1.11, length 28
	0x0000:  ffff ffff ffff bc76 4e04 82f2 0806 0001  .......vN.......
	0x0010:  0800 0604 0001 bc76 4e04 82f2 c0a8 010b  .......vN.......
	0x0020:  0000 0000 0000 c0a8 010c                 ..........
05:05:10.631546 bc:76:4e:04:82:f2 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.12 tell 192.168.1.11, length 28
	0x0000:  ffff ffff ffff bc76 4e04 82f2 0806 0001  .......vN.......
	0x0010:  0800 0604 0001 bc76 4e04 82f2 c0a8 010b  .......vN.......
	0x0020:  0000 0000 0000 c0a8 010c                 ..........
05:05:12.629095 bc:76:4e:04:82:f2 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.12 tell 192.168.1.11, length 28
	0x0000:  ffff ffff ffff bc76 4e04 82f2 0806 0001  .......vN.......
	0x0010:  0800 0604 0001 bc76 4e04 82f2 c0a8 010b  .......vN.......
	0x0020:  0000 0000 0000 c0a8 010c                 ..........
05:05:13.631575 bc:76:4e:04:82:f2 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.12 tell 192.168.1.11, length 28
	0x0000:  ffff ffff ffff bc76 4e04 82f2 0806 0001  .......vN.......
	0x0010:  0800 0604 0001 bc76 4e04 82f2 c0a8 010b  .......vN.......
	0x0020:  0000 0000 0000 c0a8 010c                 ..........
5 packets captured
5 packets received by filter
0 packets dropped by kernel
[root@ovs-ord ~]# tcpdump -i eth2 -XX -vvv -e -c 5
tcpdump: WARNING: eth2: no IPv4 address assigned
tcpdump: listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
05:05:40.637676 bc:76:4e:04:82:f2 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.12 tell 192.168.1.11, length 28
	0x0000:  ffff ffff ffff bc76 4e04 82f2 0806 0001  .......vN.......
	0x0010:  0800 0604 0001 bc76 4e04 82f2 c0a8 010b  .......vN.......
	0x0020:  0000 0000 0000 c0a8 010c                 ..........
05:05:41.637641 bc:76:4e:04:82:f2 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.12 tell 192.168.1.11, length 28
	0x0000:  ffff ffff ffff bc76 4e04 82f2 0806 0001  .......vN.......
	0x0010:  0800 0604 0001 bc76 4e04 82f2 c0a8 010b  .......vN.......
	0x0020:  0000 0000 0000 c0a8 010c                 ..........
05:05:42.639147 bc:76:4e:04:82:f2 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.12 tell 192.168.1.11, length 28
	0x0000:  ffff ffff ffff bc76 4e04 82f2 0806 0001  .......vN.......
	0x0010:  0800 0604 0001 bc76 4e04 82f2 c0a8 010b  .......vN.......
	0x0020:  0000 0000 0000 c0a8 010c                 ..........
05:05:44.643446 bc:76:4e:04:82:f2 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.12 tell 192.168.1.11, length 28
	0x0000:  ffff ffff ffff bc76 4e04 82f2 0806 0001  .......vN.......
	0x0010:  0800 0604 0001 bc76 4e04 82f2 c0a8 010b  .......vN.......
	0x0020:  0000 0000 0000 c0a8 010c                 ..........
05:05:45.639364 bc:76:4e:04:82:f2 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.12 tell 192.168.1.11, length 28
	0x0000:  ffff ffff ffff bc76 4e04 82f2 0806 0001  .......vN.......
	0x0010:  0800 0604 0001 bc76 4e04 82f2 c0a8 010b  .......vN.......
	0x0020:  0000 0000 0000 c0a8 010c                 ..........
5 packets captured
5 packets received by filter
0 packets dropped by kernel
[root@test-ord ~]# date
Sun Jan 13 05:11:08 UTC 2013
[root@test-ord ~]# tcpdump -i eth2 -XX -vvv -e -c 5
tcpdump: listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
[root@test-ord ~]# date
Sun Jan 13 05:11:24 UTC 2013
[root@ovs-dfw ~]# ip addr add 192.168.1.1/24 dev br0
[root@ovs-dfw ~]# ping -c2 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_req=1 ttl=64 time=60.5 ms
64 bytes from 192.168.1.2: icmp_req=2 ttl=64 time=25.7 ms

--- 192.168.1.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 25.734/43.163/60.593/17.430 ms
[root@ovs-dfw ~]# ping -c2 192.168.1.11
PING 192.168.1.11 (192.168.1.11) 56(84) bytes of data.
64 bytes from 192.168.1.11: icmp_req=1 ttl=64 time=252 ms
64 bytes from 192.168.1.11: icmp_req=2 ttl=64 time=1.10 ms

--- 192.168.1.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.102/126.650/252.198/125.548 ms
[root@ovs-ord ~]# ip addr add 192.168.1.2/24 dev br0
[root@ovs-ord ~]# 
[root@ovs-ord ~]# ping -c2 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=29.1 ms
64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=26.8 ms

--- 192.168.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 26.899/28.039/29.180/1.152 ms
[root@ovs-ord ~]# ping -c2 192.168.1.12
PING 192.168.1.12 (192.168.1.12) 56(84) bytes of data.
64 bytes from 192.168.1.12: icmp_req=1 ttl=64 time=33.8 ms
64 bytes from 192.168.1.12: icmp_req=2 ttl=64 time=1.42 ms

--- 192.168.1.12 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 1.423/17.622/33.821/16.199 ms

#######################################
tcpdump of a successful ping between ovs-ord and test-ord
#######################################

[root@test-ord ~]# tcpdump -i eth2 -XX -vvv -e -c 5
tcpdump: listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
05:14:42.582679 bc:76:4e:10:5c:74 (oui Unknown) > bc:76:4e:10:5a:89 (oui Unknown), ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.1.2 > test-ord: ICMP echo request, id 13043, seq 1, length 64
	0x0000:  bc76 4e10 5a89 bc76 4e10 5c74 0800 4500  .vN.Z..vN.\t..E.
	0x0010:  0054 0000 4000 4001 b74a c0a8 0102 c0a8  .T..@.@..J......
	0x0020:  010c 0800 a434 32f3 0001 c242 f250 0000  .....42....B.P..
	0x0030:  0000 a670 0700 0000 0000 1011 1213 1415  ...p............
	0x0040:  1617 1819 1a1b 1c1d 1e1f 2021 2223 2425  ...........!"#$%
	0x0050:  2627 2829 2a2b 2c2d 2e2f 3031 3233 3435  &'()*+,-./012345
	0x0060:  3637                                     67
05:14:42.582755 bc:76:4e:10:5a:89 (oui Unknown) > bc:76:4e:10:5c:74 (oui Unknown), ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 43535, offset 0, flags [none], proto ICMP (1), length 84)
    test-ord > 192.168.1.2: ICMP echo reply, id 13043, seq 1, length 64
	0x0000:  bc76 4e10 5c74 bc76 4e10 5a89 0800 4500  .vN.\t.vN.Z...E.
	0x0010:  0054 aa0f 0000 4001 4d3b c0a8 010c c0a8  .T....@.M;......
	0x0020:  0102 0000 ac34 32f3 0001 c242 f250 0000  .....42....B.P..
	0x0030:  0000 a670 0700 0000 0000 1011 1213 1415  ...p............
	0x0040:  1617 1819 1a1b 1c1d 1e1f 2021 2223 2425  ...........!"#$%
	0x0050:  2627 2829 2a2b 2c2d 2e2f 3031 3233 3435  &'()*+,-./012345
	0x0060:  3637                                     67
05:14:43.583017 bc:76:4e:10:5c:74 (oui Unknown) > bc:76:4e:10:5a:89 (oui Unknown), ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.1.2 > test-ord: ICMP echo request, id 13043, seq 2, length 64
	0x0000:  bc76 4e10 5a89 bc76 4e10 5c74 0800 4500  .vN.Z..vN.\t..E.
	0x0010:  0054 0000 4000 4001 b74a c0a8 0102 c0a8  .T..@.@..J......
	0x0020:  010c 0800 7b2e 32f3 0002 c342 f250 0000  ....{.2....B.P..
	0x0030:  0000 ce75 0700 0000 0000 1011 1213 1415  ...u............
	0x0040:  1617 1819 1a1b 1c1d 1e1f 2021 2223 2425  ...........!"#$%
	0x0050:  2627 2829 2a2b 2c2d 2e2f 3031 3233 3435  &'()*+,-./012345
	0x0060:  3637                                     67
05:14:43.583067 bc:76:4e:10:5a:89 (oui Unknown) > bc:76:4e:10:5c:74 (oui Unknown), ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 43536, offset 0, flags [none], proto ICMP (1), length 84)
    test-ord > 192.168.1.2: ICMP echo reply, id 13043, seq 2, length 64
	0x0000:  bc76 4e10 5c74 bc76 4e10 5a89 0800 4500  .vN.\t.vN.Z...E.
	0x0010:  0054 aa10 0000 4001 4d3a c0a8 010c c0a8  .T....@.M:......
	0x0020:  0102 0000 832e 32f3 0002 c342 f250 0000  ......2....B.P..
	0x0030:  0000 ce75 0700 0000 0000 1011 1213 1415  ...u............
	0x0040:  1617 1819 1a1b 1c1d 1e1f 2021 2223 2425  ...........!"#$%
	0x0050:  2627 2829 2a2b 2c2d 2e2f 3031 3233 3435  &'()*+,-./012345
	0x0060:  3637                                     67
05:14:44.584026 bc:76:4e:10:5c:74 (oui Unknown) > bc:76:4e:10:5a:89 (oui Unknown), ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.1.2 > test-ord: ICMP echo request, id 13043, seq 3, length 64
	0x0000:  bc76 4e10 5a89 bc76 4e10 5c74 0800 4500  .vN.Z..vN.\t..E.
	0x0010:  0054 0000 4000 4001 b74a c0a8 0102 c0a8  .T..@.@..J......
	0x0020:  010c 0800 8628 32f3 0003 c442 f250 0000  .....(2....B.P..
	0x0030:  0000 c27a 0700 0000 0000 1011 1213 1415  ...z............
	0x0040:  1617 1819 1a1b 1c1d 1e1f 2021 2223 2425  ...........!"#$%
	0x0050:  2627 2829 2a2b 2c2d 2e2f 3031 3233 3435  &'()*+,-./012345
	0x0060:  3637                                     67
5 packets captured
6 packets received by filter
0 packets dropped by kernel
