Rackspace Private Cloud Edition – Compute Setup

I finally got a chance to sit down and play with the pre-built OpenStack ‘Private Cloud Edition’ from Rackspace. Once it’s installed, you can spin up instances right out of the box, but there are a few nuances to getting a functional platform for remote access and serving. I figured I’d do a run-through of the install and the initial changes I made to get my install working.

The first thing that you need to do is obtain the Private Cloud Edition (PCE) iso. The iso can be downloaded for FREE at the Rackspace website – http://www.rackspace.com/cloud/private/. Once it’s downloaded and you have a bootable thumb drive or DVD, you’re ready to rock! The system requirements for installing PCE on the Rackspace website are pretty stout. They list the controller node as needing 16 GB of RAM, 144 GB of disk space, and a dual-socket CPU with dual cores or a single quad core. The compute node is listed with the same specs except for RAM, which is 32 GB. Those are more recommendations than hard requirements. I installed the compute and controller node (all-in-one) on a single desktop PC with a single dual-core CPU, 4 GB of RAM, and an 80 GB hard drive. For testing purposes, this is completely fine. The one hard requirement is that your CPU must support hardware virtualization (Intel VT-x or AMD-V), as the underlying hypervisor is KVM.
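If you want to check whether a box will work before installing, a quick (unofficial) test is to look for the virtualization CPU flags:

    # a non-zero count means the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
    egrep -c '(vmx|svm)' /proc/cpuinfo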

The install is a pretty painless process. The first screen prompts you to accept a EULA, then asks what you want to install – Controller, Compute, or All-in-One.

[Screenshot: installer EULA and install type selection]

 

After that, you set up the IP address of the server, the netblock assigned to VMs (the default is 172.31.0.0/24 – I left this as the default), and the user accounts (the OpenStack admin account, an OpenStack user account, and a local user account on the server). From there, the automated installer takes over and installs the packages. Once it’s done installing, the server will boot up and you’ll be ready to play!

[Screenshot: PCE after the install completes]

 

If you’ve ever used the Rackspace public cloud, the UI will look very familiar. If you prefer, though, it can be changed back to the default OpenStack UI. The first thing we’ll want to do after logging in is grab the API credentials so that we can easily use the command line tools. To do this, log in with your admin account, select the ‘Settings’ link at the top right of the screen, select the “OpenStack API” tab, select ‘admin’ as the project, and press the “Download RC File” button.

[Screenshot: downloading the OpenStack RC file from the Settings page]

Once the openrc.sh file is downloaded, copy it to your PCE server so that the rest of the configuration can be done from the command line. As you can see below, I used scp to copy the file to the server.
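The commands themselves aren’t shown in this post, but a minimal sketch of the copy and of loading the credentials (hostname, username, and paths here are just examples from my lab) looks like this:

    # copy the RC file to the PCE server (172.16.2.254 in my lab)
    scp openrc.sh youruser@172.16.2.254:~/
    # then, on the server, load the credentials into the shell environment
    source ~/openrc.sh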

After the file is copied to the server, we’ll ssh in and use the CLI tools to configure floating IP addresses. One thing I’ve noticed while playing around with OpenStack – at least in my limited experience – is that the ‘nova-manage’ and chef commands won’t execute properly unless you have administrative privileges on the server. I generally ‘sudo su -‘ to root while using ‘nova-manage’ and chef, but continue to use my non-privileged account for the ‘nova’ command.

So, let’s add a floating IP address range. My PCE All-in-One server is currently sitting on the 172.16.2.0/24 network. 172.16.2.1 is the router, 172.16.2.254 is the PCE server, and there is another computer at 172.16.2.12. I want to add a range of addresses that is on the 172.16.2.0/24 network but will not conflict with existing hosts. For testing purposes, I do not need a large number of addresses, so I decided to carve out a section of my 172.16.2.0/24 network to assign as floating IP addresses to instances as they spin up. In my case, I only need about 16 addresses, so I chose the prefix 172.16.2.32/28. That tells OpenStack to assign addresses 172.16.2.33–172.16.2.46 to VM instances as they spin up and to reclaim those addresses as the instances are torn down. This allows me to continue to use the 172.16.2.0/24 network without conflict.
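A sketch of how that range can be added with nova-manage (run as root; the exact flag syntax varies a bit between OpenStack releases, so treat this as an approximation rather than the literal command):

    # carve 172.16.2.33-46 out of the LAN as floating IP addresses
    nova-manage floating create --ip_range=172.16.2.32/28
    # confirm the addresses were added to the pool
    nova-manage floating list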

At this point, I should spend a little time describing how the networking for VM instances works. When we initially installed PCE, we were prompted for a CIDR block for Nova fixed (VM) networking; the default is 172.31.0.0/24.

[Screenshot: Nova fixed network CIDR prompt from the installer]

One IP address on the 172.31.0.0/24 network will be allocated to the br0 interface of your PCE server, and the rest will be assigned to your instances as they boot up. The br0 interface will also carry the IP address of your PCE server – in this case, 172.16.2.254.

The br0 interface is a bridge that connects the VM network to the physical interface, eth0. OpenStack then routes (layer 3) traffic between eth0 and the 172.31.0.0/24 network, and uses iptables to create a PAT/NAT so that the instances can communicate on the network, and on the internet if you allow it. However, computers outside the PCE environment can’t communicate with the VM instances directly, because those computers are unaware of the 172.31.0.0/24 network. This is where floating IP addresses come into play: a floating IP address creates a one-to-one NAT that maps a VM instance to an address in your floating range – in my case, 172.16.2.32/28. Also, by default, the PCE iptables rules are very restrictive and don’t allow incoming traffic to reach the VM instances. To allow this traffic, you will have to create or edit security groups; this will come later on. Below is a diagram of the PCE network environment.

[Diagram: PCE network environment]

Now that we understand how the networking works, let’s log back into the UI as a normal user. As that user, we’re going to edit the default security group to define what traffic we want to allow to our VM instances, add a couple of floating IP addresses to our project, create a keypair that will let us access our VMs, add a pre-built Fedora 17 image to our default images, and finally spin up an instance and verify that we can access it from an outside computer.

Once we log in as our normal user, the first thing we’ll do is edit the default security group to define what traffic we want to allow to our VM instances from the outside world. In my test, I am going to allow ICMP (type -1, code -1, i.e. all ICMP), all UDP traffic, and all TCP traffic. To get to the security groups, select the “Access & Security” tab along the top menu, locate the “Security Groups” section, and press “Edit Rules” on the default group.

[Screenshot: Security Groups section on the Access & Security page]

Once you’ve located that screen, enter your rules.

[Screenshot: editing the rules of the default security group]
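If you prefer the command line, the nova client can add the same rules (a sketch, assuming the ‘default’ group and 0.0.0.0/0 as the source):

    # allow all ICMP, TCP, and UDP from anywhere into the default security group
    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
    nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
    nova secgroup-add-rule default udp 1 65535 0.0.0.0/0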

Now that the security group has been edited, we’ll add floating IP addresses, which are managed on the same “Access & Security” page. To do this, press the “Allocate IP To Project” button, select your pool if you have multiple IP address pools, and press the “Allocate IP” button. You can add as many IP addresses as you need for your project. By default, quotas limit a project to 10 floating IP addresses; this quota can be changed and will be discussed later on.
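For reference, the CLI equivalent is simply (the pool argument only matters if you defined more than one pool):

    # allocate a floating IP address to the current project and list what we have
    nova floating-ip-create
    nova floating-ip-list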

[Screenshot: allocating a floating IP address to the project]

Lastly, on the same “Access & Security” page let’s generate encryption key pairs that will be used to access our VM instances. In the “Keypairs” section, press the “Create Keypair” button. This will bring up a screen that will allow you to name the key pair.

[Screenshot: Create Keypair dialog]

Once the keypair has been generated, you’ll be prompted to download a .pem file. Do so, and then add the keypair to your keys for ssh. In Linux, you use the ssh-add command. As this is a private key, you won’t want any other users to be able to read it, so be sure to change the permissions on the file so that only your account can access it.
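Something along these lines works; the path and file name are just examples matching the keypair name I chose later:

    # lock the private key down to your user, then load it into the ssh agent
    chmod 600 ~/Downloads/jtdub-keypair.pem
    ssh-add ~/Downloads/jtdub-keypair.pem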

We’re at the light at the end of the tunnel! If you wanted to, you could now spin up VM instances using the default images that come with PCE. However, I’m going to download a pre-built image of Fedora 17, so that I can demonstrate how to import images. The Fedora 17 image that I’m going to use can be downloaded at http://berrange.fedorapeople.org/images/. In the UI, still logged in as the unprivileged user, select the “Images & Snapshots” tab. Once there, press the “Create Image” button, fill out the information on the form, and press the “Create Image” button.
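The import can also be done from the command line with the glance client. A rough sketch, assuming you’ve already downloaded the qcow2 file locally (the file name here is an example, not necessarily the exact name on that page):

    # upload a local qcow2 file into Glance as a public image
    glance image-create --name fedora17-image --disk-format qcow2 \
        --container-format bare --is-public True --file f17-x86_64.qcow2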

[Screenshot: Create Image form]

Now, you just wait for the image to download. It will take a little while depending on the speed of your Internet connection, as well as the size of the image.

 

[Screenshot: the image download in progress]

When the image download completes, we can finally create our first instance. This is done from the “Instances” tab in the UI by pressing the “Launch Instance” button. The Launch Instance page has several options, and it’s worth spending a few minutes getting familiar with them. Here’s a rundown of the settings that I used.

  • Details Tab:
    • “Image”: I selected my newly minted fedora17-image.
    • “Instance Name”: I chose the name f17-test.
    • “Flavor”: I left it at m1.tiny (512 MB RAM).
  • Access & Security Tab:
    • “Keypair”: I chose my jtdub-keypair.

After that, I pressed the “Launch” button. In no time flat, my first instance was up and running. The only thing left to do is associate a floating IP address with the VM instance.

[Screenshots: launching the instance and the instance running]

To associate a floating IP address with an instance, locate your VM instance on the “Instances” page, drop down the menu on the “Create Snapshot” button, and select “Associate Floating IP”. Once the “Manage Floating IP Associations” page comes up, select an IP address and press the “Associate” button.

[Screenshots: associating a floating IP address with the instance]

That’s it! The first instance is up and running and should be remotely accessible! To test it, I’ll ssh to the instance.
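From an outside computer, that looks something like this (the floating address and login user are examples – use whatever user the cloud image was built with):

    # connect to the instance's floating IP address using the downloaded key
    ssh -i jtdub-keypair.pem root@172.16.2.33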

That’s it! We now have a PCE compute cloud running. Whew! Long blog! So for now, I’ll wrap this up. Soon, I’ll write another, much shorter post showing how to switch the UI back to the default OpenStack UI, if you prefer that. In that same post, I’ll also talk about project quotas and how to modify them. That’s it for now – thanks for reading.



January 20, 2013

Posted In: Openstack, Private Cloud Edition, Rackspace

Rackspace Cloud Servers and Networks with Open vSwitch and VXLAN between Data Centers

I’ve been playing with Open vSwitch and the VXLAN patch that is available at:

https://github.com/mestery/ovs-vxlan

So far, all my testing has been done on my Rackspace Cloud account. I realize that you wouldn’t use VXLAN like this on any production network, but for my testing I thought it would be good to have a physical separation of the networks. While I was able to get my VXLAN tunnel up, I haven’t been able to get traffic to pass completely from my test-dfw to my test-ord server. The traffic is getting lost at some point after it leaves the ovs-ord internal interface (eth2) destined for the test-ord server. I believe that either I need to add some configuration to the Open vSwitch service, or Rackspace Cloud Networks is stripping something as the traffic leaves ovs-ord, destined for test-ord. I’m still trying to figure that piece out. Below is what I have so far, along with a testing and troubleshooting section at the bottom.

Here is a diagram of the lab:

[Diagram: VXLAN lab topology]

I started off by building the OVS servers that would terminate the VXLAN tunnels and pass traffic to the servers sitting behind them. To do this, I used Rackspace Cloud Networks to create a private internal network in both the ORD and DFW data centers. All my servers use eth2 to access that network, and since the data centers are physically separated, my internal networks are isolated from each other as well. I also used Rackspace Cloud Servers to build the lab infrastructure: four servers in total, each running the Rackspace-provided Fedora 17 image as a 512 MB instance.

First, I created all the instances, named ovs-dfw, ovs-ord, test-dfw, and test-ord. I then configured the ovs-{dfw,ord} instances.

#######################################
OVS Server Builds
#######################################

* Executed on both servers:
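The exact commands aren’t preserved here; since the next paragraph talks about the servers coming back up, my assumption is that this step simply updated the base image and rebooted:

    # assumed step: bring the Fedora 17 base image up to date, then reboot
    yum -y update
    reboot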

Once the servers came back up, I wanted to verify the kernel version that was running. This will be needed when building the Open vSwitch kernel module RPM. My kernel version is 3.3.4-5.fc17.x86_64. If your kernel version is different, take note of it and make the appropriate changes when building the openvswitch kernel module. I’ll be sure to remind you of this later, as those steps come up.
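Checking is just:

    # print the running kernel version
    uname -r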

 

After you’ve taken note of the running kernel version, we’ll need to install the utilities needed to compile code, build RPMs, and troubleshoot networks. Note that I installed a kernel-specific kernel-devel package – kernel-devel-3.3.4-5.fc17.x86_64. If your running kernel is different, change the package name to match your kernel.
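A sketch of the package set (the list is my reconstruction, not the original; adjust the kernel-devel version to your kernel):

    # compiler toolchain, RPM build tools, kernel headers, and network troubleshooting tools
    yum -y groupinstall "Development Tools"
    yum -y install rpm-build openssl-devel python-devel git tcpdump \
        kernel-devel-3.3.4-5.fc17.x86_64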

Once the packages are installed, we can download the openvswitch source and get it ready for building the RPMs.
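Roughly, the source comes from the ovs-vxlan tree linked above, and a release tarball gets generated and staged for rpmbuild (a sketch; the tarball name depends on the version in that branch):

    # grab the VXLAN-patched Open vSwitch tree and produce a source tarball
    git clone https://github.com/mestery/ovs-vxlan.git
    cd ovs-vxlan
    ./boot.sh
    ./configure
    make dist
    # stage the tarball where rpmbuild expects to find sources
    mkdir -p ~/rpmbuild/SOURCES
    cp openvswitch-*.tar.gz ~/rpmbuild/SOURCES/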

 

Now, we can start building the RPMs.
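First the userspace packages, using the spec file shipped in the tree’s rhel/ directory (the spec file name is an assumption based on the stock Open vSwitch tree):

    # build the userspace openvswitch RPMs
    rpmbuild -bb rhel/openvswitch.spec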

 

Just a quick note: if you’re not running the same kernel version that I am, you will need to change the kernel version in the next command to reflect the kernel you are running, or it will error out.
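The kernel module build is where the version matters. A sketch of the command, assuming the Fedora kmod spec file in the tree and a ‘kversion’ macro like the RHEL spec files use:

    # build the openvswitch kernel module RPM against the running kernel
    rpmbuild -bb -D "kversion 3.3.4-5.fc17.x86_64" rhel/openvswitch-kmod-fedora.spec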

 

Now, let’s install the newly minted RPMs!
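Something like this (the exact package file names will vary with the version you built):

    # install the freshly built kernel module and userspace packages
    yum -y localinstall ~/rpmbuild/RPMS/x86_64/*openvswitch*.rpm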

 

Once that is completed, enable the open vswitch services to start at boot and start the services.
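Fedora 17 is systemd-based; the unit name below assumes the packaging registers an ‘openvswitch’ service:

    # have Open vSwitch come up at boot, then start it now
    systemctl enable openvswitch.service
    systemctl start openvswitch.service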

 

Now that the hard part is done, we can verify that open vswitch is running and functioning properly before going on to creating the VXLAN tunnels.
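A couple of quick sanity checks:

    # is the kernel module loaded?
    lsmod | grep openvswitch
    # are the daemons answering? an empty database listing is fine at this point
    ovs-vsctl show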

 

Once Open vSwitch has been verified, let’s configure it for VXLAN! Where */ip_addr_of_remote_server/* appears, replace it with the IP address of the remote OVS server. So, on the OVS-DFW server you should put the IP address of the OVS-ORD server, and vice versa. Those IP addresses on the Rackspace Cloud servers reside on eth0.
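The gist of it, as a sketch – the bridge and port names are my own, and the type=vxlan interface option mirrors the stock GRE syntax as implemented by the ovs-vxlan patch, so double-check against that tree:

    # create the bridge, attach the internal network interface (eth2),
    # and add a VXLAN tunnel port pointing at the other OVS server's eth0 address
    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 eth2
    ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan \
        options:remote_ip=ip_addr_of_remote_server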

That’s it! The VXLAN tunnels have been built and we’re now ready to work on the test-{dfw,ord} servers. This setup is easy: all we need to do is set up IP addresses on the eth2 interfaces. For this test, I’m using 192.168.1.11 for the test-dfw server and 192.168.1.12 for the test-ord server. When I created the internal networks on my cloud account, I left the default CIDR of 192.168.3.0/24, so I’ll want to change the interface configuration on the servers so that they boot with the IP addresses I want to use.

#######################################
TEST-DFW eth2 configuration
#######################################
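The original interface file isn’t shown; a sketch using the standard Fedora ifcfg layout would be:

    # /etc/sysconfig/network-scripts/ifcfg-eth2 on test-dfw
    DEVICE=eth2
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=192.168.1.11
    NETMASK=255.255.255.0

Then bounce the interface with “ifdown eth2 && ifup eth2” to pick up the new address.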

 

 

#######################################
TEST-ORD eth2 configuration
#######################################
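Same layout on the ORD side, just with the other address:

    # /etc/sysconfig/network-scripts/ifcfg-eth2 on test-ord
    DEVICE=eth2
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=192.168.1.12
    NETMASK=255.255.255.0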

 

 

#######################################
Test connectivity from test-dfw to test-ord
#######################################

We’ll do this in steps. I’ll initiate a ping from test-dfw to test-ord:
* On test-dfw, I’ll start a ping to test-ord (ping 192.168.1.12).
* On ovs-ord, I’ll use tcpdump to listen for traffic on br0 and eth2.
* On test-ord, I’ll use tcpdump to listen for traffic on eth2.
* If I don’t receive a ping reply or traffic is lost along the path, I’ll verify VXLAN connectivity by assigning IP addresses to the br0 interfaces of ovs-dfw (192.168.1.1) and ovs-ord (192.168.1.2). While I have addresses assigned to the br0 interfaces of ovs-{dfw,ord}, I’ll test connectivity directly to their locally connected servers: on ovs-dfw I’ll ping test-dfw, and on ovs-ord I’ll ping test-ord. (A sketch of the commands follows this list.)
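The commands behind those steps, roughly (interface names follow the diagram; the tcpdump filters are just the obvious ones):

    # on test-dfw: generate traffic toward test-ord across the tunnel
    ping 192.168.1.12
    # on ovs-ord: watch for the ICMP arriving over the tunnel and leaving toward test-ord
    tcpdump -n -i br0 icmp
    tcpdump -n -i eth2 icmp
    # on test-ord: confirm whether the echo requests ever arrive
    tcpdump -n -i eth2 icmp
    # fallback check: give the OVS bridges themselves addresses and ping locally
    # on ovs-dfw:  ip addr add 192.168.1.1/24 dev br0 && ping 192.168.1.11
    # on ovs-ord:  ip addr add 192.168.1.2/24 dev br0 && ping 192.168.1.12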

#######################################
tcpdump of a successful ping between ovs-ord and test-ord
#######################################

 


January 12, 2013

Posted In: Linux, openvswitch, SDN, Software Defined Networking, VXLAN