Automating vJunOS and vEOS standup using Python

In the previous post I showed how to stand up a VM environment for vEOS and vJunOS-Switch using manual commands, explaining things along the way.

That entire process is error prone. I like to have a set process and procedure for how I stand up a lab. The benefit is that I’m rarely fighting typos or bridge names that don’t align, and I can focus on what I really want to do, which is lab POC’ing with these vEOS and vJunOS devices. But really this can be extended to any vendor offering a VM or container version of their device (Palo Alto, Nokia, Cisco, etc.).

In this post I’m going to automate those commands using Python, Jinja templates, and YAML:

1.) Python will be what I use to drive the program and tap different libraries for different things (e.g. libvirt to start/stop the VMs)

  • Note: I could just have Python run the exact same CLI commands, but the benefit of using the libvirt API is that I get a proper return value on success and an exception with an error code on failure, much like the benefit of using NETCONF or gRPC rather than doing CLI scraping against a network device (see the short sketch after this list).

2.) Jinja provides me a pre-defined templating structure, here used to generate the KVM XML definitions.

3.) YAML provides me a dictionary for each VM host where I can define its variables.
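
To make that note about the libvirt API concrete, here is a minimal sketch of starting a VM through the libvirt Python bindings. It assumes the libvirt-python package is installed and a domain named veos01 is already defined; the names are illustrative, not the exact code from the repo.

import libvirt  # the libvirt Python bindings

def start_vm(name: str) -> None:
    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    try:
        dom = conn.lookupByName(name)       # find the already-defined domain
        dom.create()                        # start it
        print(f"{name} has been started!")
    except libvirt.libvirtError as e:
        # a structured failure: error code plus message, no CLI scraping needed
        print(f"failed to start {name}: code {e.get_error_code()}, {e}")
    finally:
        conn.close()

start_vm("veos01")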

I’m going to reserve a future update to this blog for using objects in Python, but for this post I will use ‘quick and dirty’ scripting to illustrate that it can all be done in a pinch.

A rough outline of the creation process is:

  • Initialize the KVM images and copy over fresh ones if none exist
  • Read the YAML files to determine the ‘inventory’ of the VM lab being stood up
  • Plug in data from the YAML inventory into the jinja templates to generate a VM XML
  • Create the OVS and tap interfaces
  • Define and start the VMs

And in reverse order to stop the VMs:

  • Read the YAML files to determine the ‘inventory’ of the VM lab to shut down
  • Shut down and undefine the VMs
  • Remove the OVS and tap interfaces

If the lab is just running pure vEOS, it’s not an issue to leave it running all day and night. But if it’s running any vJunOS VMs I do not want to leave it up, since the vJunOS vFPCs sit at 100% core utilization, which wastes power and makes the fans in my server spin up 🙂

The GitHub repo that contains all the Python scripts, YAML, Jinja2 templates, Dockerfile, etc., and which I will link throughout this post, is here.

Here’s a breakdown of what each file is doing:

Let’s start by defining the veos host in kvm_hosts.yaml:

kvm_nodes:
  - hostname: veos01
    mgmtmac: 00:01:0F:67:c3:a5
    image_name: veos01.qcow2
    node_type: veos
    interfaces:
      - interface: eth1
        bridge: veos01-vjunos01
      - interface: eth2
        bridge: veos01-b-et2
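
Reading this inventory back into Python is straightforward with PyYAML; a minimal sketch, assuming kvm_hosts.yaml sits next to the script:

import yaml  # PyYAML

with open("kvm_hosts.yaml") as f:
    inventory = yaml.safe_load(f)

# each entry under kvm_nodes becomes a plain dict
for node in inventory["kvm_nodes"]:
    print(node["hostname"], node["node_type"], [i["bridge"] for i in node["interfaces"]])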

The KVM XML Jinja templates for the ‘veos_node’ and ‘vjunos_node’ types, with variables referenced by the above YAML, are too lengthy to walk through here, but one note is that I used a ‘for’ loop in the Jinja template, which lets me iterate over the interface list in kvm_hosts.yaml for each particular host. A minimal sketch of the idea is below.
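
Here is a small illustration of that loop using an inline jinja2 template that emits one libvirt interface stanza per entry in the host’s interface list; the template fragment is illustrative, not the actual template from the repo.

from jinja2 import Template

intf_template = Template("""\
{% for intf in node.interfaces %}
<interface type='ethernet'>
  <target dev='{{ node.hostname }}-{{ intf.interface }}'/>
  <model type='virtio'/>
</interface>
{% endfor %}""")

# one host entry in the same shape as kvm_hosts.yaml
node = {
    "hostname": "veos01",
    "interfaces": [
        {"interface": "eth1", "bridge": "veos01-vjunos01"},
        {"interface": "eth2", "bridge": "veos01-b-et2"},
    ],
}
print(intf_template.render(node=node))  # emits stanzas for veos01-eth1 and veos01-eth2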

To tie it all together I have these functions for the KVM stand up in lab_builder.py:

make_xml - reads the YAML parameters and uses the jinja template to make the KVM XML
image_init - creates a fresh copy of the image for the lab being stood up
init_tap - initialize the tap interfaces
init_ovs - initialize the OVS interfaces
init_vm - initialize the VM in KVM
start_vm - start the VM in KVM
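
To give a sense of the shape of these helpers, here is a minimal sketch of what init_tap and init_ovs could look like using subprocess; the commands are the same ones run manually in the previous post, but the function bodies here are illustrative rather than the exact code in lab_builder.py.

import subprocess

def init_tap(tap: str) -> None:
    # create the tap interface and bring it up (e.g. veos01-eth1)
    print(f"creating tap interface {tap}")
    subprocess.run(["ip", "tuntap", "add", "dev", tap, "mode", "tap"], check=True)
    subprocess.run(["ip", "link", "set", tap, "up"], check=True)

def init_ovs(bridge: str, taps: list[str]) -> None:
    # create the OVS bridge and attach its tap interfaces
    print(f"creating ovs bridge {bridge}")
    subprocess.run(["ovs-vsctl", "add-br", bridge], check=True)
    for tap in taps:
        subprocess.run(["ovs-vsctl", "add-port", bridge, tap], check=True)

init_tap("veos01-eth1")
init_ovs("veos01-vjunos01", ["veos01-eth1"])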

And these functions for the container stand up in container_builder.py:

init_container - initialize and start the container
init_cont_interfaces - initialize and connect the logical docker interfaces to the OVS bridge found in the YAML file
init_cont_conf - connect to the docker container and assign relevant IP's to the docker container interfaces
init_lldp - start lldp on the container
init_static_routes - deploy static routes in the container
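
For flavor, here is a minimal sketch of what init_cont_interfaces could look like, assuming the ovs-docker helper script that ships with Open vSwitch is on the PATH (the ‘Port already attached’ messages later in this post come from it); the bridge name and addressing in the example call are hypothetical.

import subprocess

def init_cont_interfaces(container: str, interface: str, bridge: str, ip: str) -> None:
    # attach an interface inside the container to the OVS bridge from the YAML
    print(f"creating container interfaces for host {container}")
    subprocess.run(
        ["ovs-docker", "add-port", bridge, interface, container, f"--ipaddress={ip}"],
        check=True,
    )

init_cont_interfaces("ubsrv01", "eth1", "veos01-b-et2", "192.168.1.2/24")  # hypothetical bridge/IP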

There is an opposite ‘delete’ function for the majority of these as well, used to stop the lab; they can be found in the two scripts above.

The final part of this project is a CLI. The idea is that the CLI script orders these functions appropriately to start/stop the whole lab, or calls individual functions, and also gives me the ability to specify which lab topology YAML and/or which host I want to work on. The CLI script is viewable here for containers and here for the KVMs.
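
Here is a minimal sketch of how the KVM CLI’s argument parsing could be wired with argparse, reconstructed from the usage output below; the real kvm_cli.py may structure it differently.

import argparse

steps = ["image_init", "create_xml", "create_tap", "create_ovs", "define_vm",
         "start_vm", "stop_vm", "delete_tap", "delete_ovs", "undefine_vm"]

parser = argparse.ArgumentParser(prog="kvm_cli.py")
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("-a", choices=["startvlab", "stopvlab"])  # full lab workflows
group.add_argument("-s", nargs="+", choices=steps)           # or individual steps, in order
parser.add_argument("--hosts", nargs="+", required=True)     # 'all' or specific hostnames

args = parser.parse_args()  # e.g.: kvm_cli.py -a startvlab --hosts all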

Here is output from my terminal that shows the CLI options:

root@intelnuc:/usr/local/kvm/vjunos-part2# sudo python3 kvm_cli.py
usage: kvm_cli.py [-h]
(-a {startvlab,stopvlab} | -s {image_init,create_xml,create_tap,create_ovs,define_vm,start_vm,stop_vm,delete_tap,delete_ovs,undefine_vm} [{image_init,create_xml,create_tap,create_ovs,define_vm,start_vm,stop_vm,delete_tap,delete_ovs,undefine_vm} ...])
--hosts HOSTS [HOSTS ...]
kvm_cli.py: error: the following arguments are required: --hosts
root@intelnuc:/usr/local/kvm/vjunos-part2#

Here is output from my terminal showing the script starting the lab:

root@intelnuc:/usr/local/kvm/vjunos-part2# sudo python3 kvm_cli.py -a startvlab --hosts all
Adding veos01 into the list of hosts to work against..
Adding vjunos01 into the list of hosts to work against..
created veos01.xml !
created vjunos01.xml !
/usr/local/kvm/vjunos-part2/veos01.qcow2 image already exists! not creating image!
/usr/local/kvm/vjunos-part2/vjunos01.qcow2 image already exists! not creating image!
creating tap interface veos01-eth1
creating tap interface veos01-eth2
creating tap interface vjunos01-ge0
creating tap interface vjunos01-ge1
creating ovs bridge veos01-vjunos01
creating ovs bridge veos01-b-et2
creating ovs bridge veos01-vjunos01
ovs-vsctl: cannot create a bridge named veos01-vjunos01 because a bridge named veos01-vjunos01 already exists
creating ovs bridge vjunos-b-ge1
/usr/local/kvm/vjunos-part2/veos01 has been defined!
/usr/local/kvm/vjunos-part2/vjunos01 has been defined!
/usr/local/kvm/vjunos-part2/veos01 has been started!
/usr/local/kvm/vjunos-part2/vjunos01 has been started!
root@intelnuc:/usr/local/kvm/vjunos-part2#

Then I start the containers using a different CLI script:

root@intelnuc:/usr/local/kvm/vjunos-part2# sudo python3 container_cli.py -a start_containers --hosts all
creating container ubsrv01

creating container ubsrv02
creating container interfaces for host ubsrv01
ovs-docker: Port already attached for CONTAINER=ubsrv01 and INTERFACE=eth1
creating container interfaces for host ubsrv02
ovs-docker: Port already attached for CONTAINER=ubsrv02 and INTERFACE=eth1
configuring container interfaces for host ubsrv01

starting LLDP on ubsrv01

Starting LLDP daemon lldpd [success]
starting LLDP on ubsrv02

Starting LLDP daemon lldpd [success]
deploying static routes on ubsrv01
deploying static route 192.168.2.0/24 via gw 192.168.1.1 on ubsrv01

deploying static routes on ubsrv02
deploying static route 192.168.1.0/24 via gw 192.168.2.1 on ubsrv02

root@intelnuc:/usr/local/kvm/vjunos-part2#
root@intelnuc:/usr/local/kvm/vjunos-part2#
root@intelnuc:/usr/local/kvm/vjunos-part2# docker ps -a
CONTAINER ID   IMAGE       COMMAND       CREATED         STATUS         PORTS     NAMES
32113f9d24f0   ub-docker   "/bin/bash"   6 minutes ago   Up 6 minutes             ubsrv02
b67959c11c5e   ub-docker   "/bin/bash"   6 minutes ago   Up 6 minutes             ubsrv01

And pings work just fine to the locally connected container from vEOS:

 virsh console 5
Connected to domain 'veos01'
Escape character is ^] (Ctrl + ])


localhost login: admin
localhost>en
localhost#show lldp neighbors
Last table change time   : 0:08:22 ago
Number of table inserts  : 3
Number of table deletes  : 1
Number of table drops    : 0
Number of table age-outs : 1

Port          Neighbor Device ID       Neighbor Port ID    TTL
---------- ------------------------ ---------------------- ---
Et1           2c6b.f57d.a1c0           519                 120
Et2           ubsrv01                  9e40.ad9e.880d      120

localhost#conf t
localhost(config)#int et2
localhost(config-if-Et2)#ip address 192.168.1.1/24
! IP configuration will be ignored while interface Ethernet2 is not a routed port.
localhost(config-if-Et2)#no switchport
localhost(config-if-Et2)#end
localhost#ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 72(100) bytes of data.
80 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.042 ms
80 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.021 ms
80 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.021 ms
80 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=0.021 ms
80 bytes from 192.168.1.1: icmp_seq=5 ttl=64 time=0.015 ms

--- 192.168.1.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4ms
rtt min/avg/max/mdev = 0.015/0.024/0.042/0.009 ms, ipg/ewma 1.152/0.032 ms
localhost#

To stop the lab I should stop the containers first, as the KVM script will attempt to delete the OVS bridges:

root@intelnuc:/usr/local/kvm/vjunos-part2# sudo python3 container_cli.py -a stop_containers --hosts all
trying to delete interfaces
deleting container interfaces for host ubsrv01
deleting container interfaces for host ubsrv02
delete container ubsrv01
delete container ubsrv02
root@intelnuc:/usr/local/kvm/vjunos-part2#
root@intelnuc:/usr/local/kvm/vjunos-part2# docker ps -a | grep ubs
root@intelnuc:/usr/local/kvm/vjunos-part2#

root@intelnuc:/usr/local/kvm/vjunos-part2# sudo python3 kvm_cli.py -a stopvlab --hosts all
Adding veos01  into the list of hosts to work against..
Adding vjunos01  into the list of hosts to work against..
/usr/local/kvm/vjunos-part2/veos01 has been destroyed/stopped
/usr/local/kvm/vjunos-part2/vjunos01 has been destroyed/stopped
deleting tap interface veos01-eth1
deleting tap interface veos01-eth2
deleting tap interface vjunos01-ge0
deleting tap interface vjunos01-ge1
deleting ovs switch: veos01-vjunos01
deleting ovs switch: veos01-b-et2
deleting ovs switch: veos01-vjunos01
ovs-vsctl: no bridge named veos01-vjunos01
deleting ovs switch: vjunos-b-ge1
/usr/local/kvm/vjunos-part2/veos01 has been undefined
/usr/local/kvm/vjunos-part2/vjunos01 has been undefined
root@intelnuc:/usr/local/kvm/vjunos-part2#


root@intelnuc:/usr/local/kvm/vjunos-part2# virsh list --all
 Id   Name   State
--------------------

root@intelnuc:/usr/local/kvm/vjunos-part2# ovs-vsctl show
7bfddb50-5269-4ad0-9b46-48f0c289b48b
    ovs_version: "2.17.7"
root@intelnuc:/usr/local/kvm/vjunos-part2#

After writing a small program to automate the bring-up/tear-down of this virtual lab, I’ve found it extremely useful. The value grows as the lab grows, and also when other colleagues of mine use it, since it brings consistency to our lab environments and (hopefully) keeps us from troubleshooting the underlying system so we can stay on task with our real job, which is running network proofs of concept.

In the next blog post I will use objects in Python (using Pydantic) to give the code a cleaner structure.
