Using Ansible to PUSH Cisco IOS Configurations

There are a lot of very good articles on the Internet about how Network Engineers can use Ansible to create standardized network device configurations or use Ansible with existing network vendor APIs to make changes to network devices. Some of my favorites can be found on the Python for Network Engineers and Jason Edelman’s sites.

However, what if you have older, legacy network devices, or are running software revisions that don’t support the newer vendor APIs? What if you need to quickly push a common configuration to a multi-vendor or multi-platform set of devices? Pushing configurations quickly is easy with my PyMultiChange tool, but one of its biggest limitations is multi-vendor and multi-platform support, where the syntax for a common configuration differs by vendor or platform even though it accomplishes the same task. I have yet to find any blogs on Google that share ideas in this category.

For a while, this led me to believe that it just wasn’t possible unless you invested the time in developing the appropriate Ansible modules. However, I had an idea the other day, which proved that it is possible to push configurations to this category of network devices.

Here is my example playbook:
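The playbook itself isn’t embedded in this extract, so here is a reconstruction based on the description in this post; the task wording and file paths are my assumptions:

```yaml
---
# Reconstructed sketch of the playbook described in this post.
# Group, path, and task names are assumptions based on the prose.
- name: Generate the device configurations
  hosts: netdevices
  gather_facts: no
  tasks:
    - name: Render snmp-contact.j2 into the input directory
      template:
        src: templates/snmp-contact.j2
        dest: "input/{{ hostname }}.conf"
      delegate_to: localhost

- name: Push the configurations to the devices
  hosts: netdevices
  gather_facts: no
  tasks:
    - name: Run netsible.py locally for each device
      command: "./netsible.py {{ hostname }} input/{{ hostname }}.conf"
      delegate_to: localhost
```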

In this playbook, you can see that I call a group of devices called ‘netdevices’. The first play generates the configuration. In this case, I am modifying the snmp-server contact information. It calls a source template called snmp-contact.j2. Here is what the template looks like:
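The template body isn’t shown in this extract, but based on the description it is essentially a one-liner (reconstructed, not copied from the original):

```
snmp-server contact {{ contact_name }}
```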

The template uses the ‘contact_name’ variable, and the rendered output is written to the input directory, named using the ‘hostname’ variable.

The hostname variable comes from host_vars. Here is the host_vars file for a test device, called core1a:
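A minimal sketch of what that host_vars file would contain (only the hostname variable is mentioned in the post):

```yaml
---
# host_vars/core1a (sketch)
hostname: core1a
```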

The contact_name variable comes from group_vars/all. Here is what my group_vars/all looks like:
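A minimal sketch of that file (the contact value here is a placeholder, not the original):

```yaml
---
# group_vars/all (sketch; placeholder contact value)
contact_name: John Smith - noc@example.com
```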

The result is a file in the input directory called core1a.conf, with the following configuration:
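The rendered file isn’t reproduced in this extract; reconstructed, it would be a single line whose value depends on what contact_name is set to in group_vars/all:

```
snmp-server contact John Smith - noc@example.com
```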

Once the configuration file has been created, the next play is called. This play is responsible for pushing the configuration to each device. It runs a local script called netsible.py. The script takes two arguments. The first is the hostname of the device to access. The second is the location of the configuration file that was created.

In the background, the script connects to the network device, via SSH, accesses enable mode, reads the configuration file, then executes each command on the router. The script utilizes my netlib library, to make this process simple. Here is the code for the netsible.py script:
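The script itself isn’t embedded in this extract, so the following is a sketch of its general shape. The netlib import path and method names (connect, set_enable, disable_paging, command) are assumptions based on how the library is described here, not its documented API — check the netlib README for the real calls:

```python
#!/usr/bin/env python
"""Sketch of netsible.py: push a rendered config file to one device.
Usage: netsible.py <hostname> <config_file>"""
import sys


def read_commands(config_file):
    """Return the non-blank configuration lines from config_file."""
    with open(config_file) as f:
        return [line.strip() for line in f if line.strip()]


def push_config(hostname, config_file):
    # Assumed netlib usage -- class and method names are guesses,
    # not the library's documented API.
    from netlib.conn_type import SSH
    ssh = SSH(hostname, 'username', 'password')
    ssh.connect()
    ssh.set_enable('enablepass')
    ssh.disable_paging()
    for command in read_commands(config_file):
        ssh.command(command)
    ssh.close()


if __name__ == '__main__' and len(sys.argv) == 3:
    push_config(sys.argv[1], sys.argv[2])
```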

If your device is running a version of code that doesn’t support SSH, it would be easy, with the netlib library, to utilize telnet. All you would have to do is import the Telnet library via:
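Assuming netlib exposes a Telnet class alongside its SSH class (the exact import path below is my guess, not confirmed against the library), the swap would be along these lines:

```python
from netlib.conn_type import Telnet  # in place of SSH

conn = Telnet(hostname, 'username', 'password')  # replaces the SSH object
```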

Then replace the ssh variable with the Telnet Library.

In the playbook, the ‘delegate_to’ call tells Ansible to run the command locally on the Ansible master, rather than Ansible connecting to the remote devices directly.

Here is what it looks like when I run the playbook:

This obviously works, but it does have a couple of limitations. Currently, the playbook is not multi-vendor or multi-platform ready. To get there, I would need to specify host_vars that define each device by vendor or platform.

For example, I could define a variable called ‘network_platform’ in the host_vars and define each host by platform, using values like IOS, NX-OS, IOS-XR, EOS, or JUN-OS. Then when I called my playbooks, it could look like:
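A sketch of what that could look like; the directory layout and task wording are my own guesses, not a final design from the post:

```yaml
# host_vars/core1a (sketch)
hostname: core1a
network_platform: IOS
---
# In the playbook, the platform variable selects a per-platform template:
- name: Render the platform-specific snmp-contact template
  template:
    src: "templates/{{ network_platform }}/snmp-contact.j2"
    dest: "input/{{ hostname }}.conf"
  delegate_to: localhost
```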

The other limitation is that the script writes the configuration to the network devices every time the playbook is run, regardless of whether it’s needed. For setting an snmp contact, this isn’t a huge deal, beyond burning some extra CPU cycles. However, what if you ran a playbook that was entirely role-based, and it called a role to define BGP route reflectors? Obviously, this would bounce BGP neighbors every time you ran the playbook. Basically, it boils down to needing a method of checking whether the configuration change is actually needed before the script applies it. This is something that I hope to work on. In the meantime, I hope that you’ve enjoyed this. If you have any ideas, please feel free to share them with me!
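One possible shape for that check, as a sketch of my own suggestion rather than anything from the post’s repo: pull the running configuration first (e.g. with a ‘show running-config’ through the same session) and only send the commands that aren’t already present. This simple set difference only works for flat, single-line commands:

```python
def missing_commands(running_config, desired_commands):
    """Return the desired commands that are absent from the running
    configuration, so already-applied lines are never pushed again.
    Note: this only handles flat, single-line commands -- hierarchical
    config (e.g. under 'router bgp') needs context-aware parsing."""
    present = {line.strip() for line in running_config.splitlines()}
    return [cmd for cmd in desired_commands if cmd.strip() not in present]
```

With a check like this, a BGP route-reflector role would leave existing neighbors untouched unless something actually changed.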

I have a generic GitHub repository that I’ve been using to play with Ansible network engineering functionality. Feel free to play with it and contribute to it! Note that ‘netlib’ is included as a git submodule. :) Enjoy!


August 29, 2015

Posted In: Ansible, Cisco Administration Python Scripting, DevOps, IOS, IOS-XE, IOS-XR, Network DevOps, Network Programmability, Python Tips

Dockerizing IOS-XRv

I’ve been playing with Docker off and on for about a year or so now. One of my ideas with Docker is to use it for my network lab. These days, I’ve mostly virtualized my lab. Lately, I’ve been doing a lot of it in VIRL, but this hasn’t stopped me from tinkering.

For a while, I’ve had a base Docker container that sets up Open vSwitch and KVM. Once the container is started, you can access it and spin up VMs or play with Open vSwitch. The Dockerfile to set this container up can be found on my GitHub.

The next iteration of this was to actually have the VM in the container and have it boot up directly. I did this with IOS-XRv. It’s a pretty straightforward setup. The Dockerfile uses centos:6 as its base, installs a couple of yum repositories, installs the needed packages, and adds the associated files. When it’s all done, you have a Docker container that runs IOS-XRv. You can spin this container up and down at will. It’s pretty nifty.
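The Dockerfile itself isn’t reproduced in this extract, but based on that description it is roughly this shape; the package names and paths below are illustrative guesses, so see the docker-ios-xrv repo for the real file:

```dockerfile
# Illustrative sketch only -- package names and paths are guesses.
FROM centos:6

# Extra repositories plus the packages needed for KVM and Open vSwitch
RUN yum -y install epel-release && \
    yum -y install qemu-kvm libvirt openvswitch && \
    yum clean all

# Add the associated files (e.g. a startup script that boots the VM)
ADD start.sh /start.sh

CMD ["/start.sh"]
```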

My next goal in this setup is to have the container generate dynamic MAC addresses for IOS-XRv when it boots up. Currently, the MAC addresses are hard-coded. The reasoning for this is that I eventually want Open vSwitch to connect to a ‘controller’ Open vSwitch via VXLAN or GRE, so that I can spin up multiple containers and have them all connect to each other. This will make the lab environment much more flexible and scalable.
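As a sketch of the dynamic-MAC idea (my own suggestion, not code from the repo): generate a random locally-administered, unicast address at container start and substitute it into the KVM command line.

```python
import random


def random_mac():
    """Generate a random locally-administered, unicast MAC address.
    Setting bit 0x02 of the first octet marks it locally administered;
    clearing bit 0x01 keeps it unicast."""
    first = (random.randint(0x00, 0xff) | 0x02) & 0xfe
    rest = [random.randint(0x00, 0xff) for _ in range(5)]
    return ':'.join('%02x' % octet for octet in [first] + rest)
```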

Anyways, check out the docker-ios-xrv github for the README, Dockerfile, and associated files. I’ll post more when I have updates.


April 5, 2015

Posted In: Cisco VIRL, Docker, IOS-XR, KVM, Linux, Miscellaneous Hacking, NFV, openvswitch, SDN, Software Defined Networking