
Nicira (VMWare) NVP/NSX: A Python API and Toolkit


Over the past few years at EMC we have regularly used NVP/NSX in some of our lab environments, which has meant upgrading, manipulating, and overhauling our network architecture with NVP/NSX every so often. We have been using a Python library developed internally by myself and Patrick Mullaney, with some help from Erik Smith in the early days, and I wanted to share some of its tooling.


*Disclaimer: EMC makes no representation and takes on no obligation with regard to the use of this library, NVP, or NSX in any way, shape, or form. This post reflects the thoughts of the author and the author alone. The examples and API calls have mainly been tested against NVP up to 3.2, before it became NSX. Most calls should still work against the NSX API, except for any net-new API endpoints that have not been added. Note also that this API is not fully featured; it is merely a tool that we use in the lab for what we need, and it can be extended and expanded however you like. (Update 4/14/15: see the Open-source section toward the bottom for more information.) I'll be working on open-sourcing this library and will try to fill in some of the missing features as I go along. There are two main uses for this Python library:

1. Manage NVP/NSX Infrastructure

Managing, automating, and orchestrating the setup of NVP/NSX components was a must for us. We wanted to be able to spin environments up and down on the fly, and to manage upgrades of new components whenever we wanted.

The library allows you to remotely set up Hypervisor Nodes, Gateway Nodes, Service Nodes, etc. (examples below).

2. Python bindings to NVP/NSX REST API

Having Python bindings for some of the investigative projects we have been working on was our first motivation: a) we developed in and were familiar with Python, and b) we have been working with OpenStack, so it just made sense.

With the library you can list networks, attach ports, query nodes, etc. (Examples are given in the test.py example below.)


Manage NVP/NSX Infrastructure

Let me preface this by saying this isn't a complete M&O / DevOps / bare-metal provisioning service for NVP or NSX components. It does, however, handle setting up much of the logical state with as few hands-on CLI commands as possible: setting up OVS certificates, creating integration bridges, and contacting the Control Cluster to register the node as a Hypervisor Node, Service Node, etc. Open vSwitch is the only thing that does need to be installed first, along with the other Nicira debs that come with their software switch / hypervisor node offering. You just need to install the debs and then let the configuration tool take over. (I am running this on Ubuntu 14.04 in this demo.)

sudo dpkg --purge openvswitch-pki 
sudo dpkg -i openvswitch-datapath-dkms_1.11.0*.deb 
sudo dpkg -i openvswitch-common_1.11.0*.deb openvswitch-switch_1.11.0*.deb 
sudo dpkg -i nicira-ovs-hypervisor-node_1.11.0*.deb

Once OVS is installed on the nodes that you will be adding to your NVP/NSX architecture, you can use the library and CLI tools to set up, connect, and register the virtual network components. This is done by describing the infrastructure components in JSON from a single control host, or even an NVP/NSX Linux host. We thought about using YAML here, but in the end we chose JSON; sorry if you're a YAML fan.

The first thing the tooling needs is some basic control cluster information: the IP of the controller, the username and password, and the port to use for authentication (sorry for the ugly blacked-out IPs). Next, you can describe different types of nodes for setup and configuration; a rough sketch of such a config follows below. These nodes can be:

SERVICE, GATEWAY, or COMPUTE
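The screenshots of these config files did not survive here, so purely as a hedged illustration, here is roughly what the control cluster block plus a COMPUTE node description might look like. The key names below are my assumptions for the sake of the example, not necessarily the toolkit's actual schema.

import json

# Illustrative only -- these key names are assumptions, not the toolkit's
# actual schema.
config = {
    # Basic control cluster information: controller IP, auth port, credentials.
    "control_cluster": {
        "ip": "10.0.0.10",
        "port": 443,
        "username": "admin",
        "password": "secret",
    },
    # A COMPUTE (hypervisor) node, referenced later by its "name".
    # A SERVICE or GATEWAY node would drop the data network interface and
    # instead carry the mgmt_rendezvous flags shown just below.
    "nodes": [
        {
            "name": "ubuntu-compute-01",
            "type": "COMPUTE",
            "mgmt_ip": "10.0.0.21",
            "data_network_interface": "eth1",
            "integration_bridge": "br-int",
        },
    ],
}

with open("nvp_nodes.json", "w") as f:
    json.dump(config, f, indent=2)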

The difference with the SERVICE or GATEWAY node is that it will have a:

"mgmt_rendezvous_server" : true|false 
"mgmt_rendezvous_client" : true|false

respectively, instead of a "data network interface". Service or Gateway nodes need this metadata in their configuration JSON to be created correctly. I don't have examples of using this (sorry) and am focusing on Hypervisor Nodes, since those are what we find ourselves creating and reconfiguring most often; the sketch above shows the general shape of such a config.

Once this configuration is complete, you can reference the "name" of the compute node and the toolkit will provision the COMPUTE node into the NVP/NSX system/cluster. If the hypervisor node is remote, the toolkit will use remote sudo over SSH, so you will need to enter a user/pass when prompted. The command runs through the whole process needed to set up the node, and at the end of it you should have a working hypervisor node ready to go (scroll down to see how to verify the setup in the UI). When you run it remotely, you'll first be prompted for credentials and then see output similar to the below, running through the sequence of setting up the hypervisor node remotely: keys, OVS calls, and finally contacting the NVP/NSX cluster to register the node using its PKI.

Sending… rm -f /etc/openvswitch/vswitchd.cacert
Sending… mkdir -p /etc/openvswitch
Sending… ovs-pki init --force
Sending… ovs-pki req+sign ovsclient controller --force
Sending… ovs-vsctl -- --bootstrap set-ssl /etc/openvswitch/ovsclient-privkey.pem /etc/openvswitch/ovsclient-cert.pem /etc/openvswitch/vswitchd.cacert
Sending… cat /etc/openvswitch/ovsclient-cert.pem
printing status
0
[sudo] password for labadmin:
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number: 12 (0xc)
    Signature Algorithm: md5WithRSAEncryption
        Issuer: C=US, ST=CA, O=Open vSwitch, OU=controllerca, CN=OVS controllerca CA Certificate (2014 Oct 10 10:56:26)
        Validity
            Not Before: Oct 10 18:15:57 2014 GMT
            Not After : Oct  7 18:15:57 2024 GMT
        Subject: C=US, ST=CA, O=Open vSwitch, OU=Open vSwitch certifier, CN=ovsclient id:4caa4f75-f7b5-4c23-8275-9e2aa5b43221
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c2:b9:7d:9f:1d:00:78:4b:c0:0d:8a:52:d5:61:
                    12:02:d3:01:7d:ea:f6:22:d4:7d:af:6f:c1:91:40:
                    7f:e1:a1:a1:3d:2d:3f:38:f6:37:f6:83:85:3c:62:
                    b4:cc:60:40:d9:d3:61:8d:26:96:94:47:57:2d:fa:
                    53:ce:48:84:4c:a2:01:84:d8:11:61:de:50:f9:b5:
                    ff:9c:4b:6e:9c:df:84:48:f1:44:ec:e0:fd:e4:a1:
                    b6:0b:5c:23:59:5c:1d:cf:46:44:19:14:1c:92:a1:
                    28:52:19:ab:b8:5e:23:17:7a:b8:51:af:bc:48:1c:
                    d2:d8:58:67:61:a3:e6:51:f5:a0:57:a9:16:36:e8:
                    16:35:f6:20:3c:51:f8:4c:82:51:74:8b:48:90:e4:
                    dc:7a:44:f2:2b:d7:68:81:f6:9e:df:15:14:80:27:
                    77:e1:24:36:ac:fd:79:c2:03:64:1a:c0:4a:5b:b7:
                    dd:d3:fb:ca:20:13:f7:09:9e:03:f8:b0:fe:14:e7:
                    c9:7e:aa:1d:79:c3:c1:c2:a6:c3:68:cf:ff:ec:a4:
                    9a:5f:d4:f4:df:c2:e6:1d:a2:63:68:f2:d1:1d:00:
                    19:18:18:93:72:37:9e:b9:a4:2b:23:fd:83:ab:40:
                    52:0d:2e:9c:08:82:50:c0:1b:ec:e9:40:fc:1d:74:
                    1d:f3
                Exponent: 65537 (0x10001)
    Signature Algorithm: md5WithRSAEncryption
         85:fe:4b:97:77:ce:82:32:67:fc:74:12:1e:d4:a5:80:a3:71:
         a5:be:31:6c:be:89:71:fb:18:9f:5f:37:eb:e7:86:b8:8d:2e:
         0c:d7:3d:42:4e:14:2c:63:4e:9b:47:7e:ca:34:8d:8e:95:e8:
         6c:35:b4:7c:fc:fe:32:5b:8c:49:64:78:ee:14:07:ba:6d:ea:
         60:e4:90:e5:c0:29:c2:bb:52:38:2e:b5:38:c0:05:c2:a2:c9:
         8b:d0:08:27:57:ec:aa:14:51:e2:a1:16:f6:cf:25:54:1f:64:
         c8:37:f0:31:22:c9:cb:9d:a1:5a:76:b8:3e:77:95:19:5e:de:
         78:ce:56:0d:60:0a:b2:e6:b8:f5:3a:30:63:68:24:27:e2:95:
         dc:09:9e:30:ff:ea:51:14:9c:55:bd:31:3c:0c:a2:26:8e:71:
         fc:ec:85:56:a7:9c:e0:27:1b:ad:d4:e8:35:f7:da:ee:2a:55:
         90:9b:bd:d1:db:b9:b8:f9:3a:f6:95:94:c2:34:32:ea:27:3f:
         f8:46:c0:40:c2:0c:32:45:0d:82:14:c2:f6:a8:3e:28:33:9b:
         64:79:c0:2e:06:7a:1b:a3:56:9e:16:70:a0:3c:57:95:cf:e1:
         b2:7f:97:42:c9:82:f0:3c:1e:77:07:86:60:c8:00:a6:c8:96:
         94:26:94:e3
-----BEGIN CERTIFICATE-----
MIIDjTCCAnUCAQwwDQYJKoZIhvcNAQEEBQAwgYkxCzAJBgNVBAYTAlVTMQswCQYD
VQQIEwJDQTEVMBMGA1UEChMMT3BlbiB2U3dpdGNoMRUwEwYDVQQLEwxjb250cm9s
bGVyY2ExPzA9BgNVBAMTNk9WUyBjb250cm9sbGVyY2EgQ0EgQ2VydGlmaWNhdGUg
KDIwMTQgT2N0IDEwIDEwOjU2OjI2KTAeFw0xNDEwMTAxODE1NTdaFw0yNDEwMDcx
ODE1NTdaMIGOMQswCQYDVQQGEwJVUzELMAkGA1UECBMCQ0ExFTATBgNVBAoTDE9w
ZW4gdlN3aXRjaDEfMB0GA1UECxMWT3BlbiB2U3dpdGNoIGNlcnRpZmllcjE6MDgG
A1UEAxMxb3ZzY2xpZW50IGlkOjRjYWE0Zjc1LWY3YjUtNGMyMy04Mjc1LTllMmFh
NWI0MzIyMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMK5fZ8dAHhL
wA2KUtVhEgLTAX3q9iLUfa9vwZFAf+GhoT0tPzj2N/aDhTxitMxgQNnTYY0mlpRH
Vy36U85IhEyiAYTYEWHeUPm1/5xLbpzfhEjxROzg/eShtgtcI1lcHc9GRBkUHJKh
KFIZq7heIxd6uFGvvEgc0thYZ2Gj5lH1oFepFjboFjX2IDxR+EyCUXSLSJDk3HpE
8ivXaIH2nt8VFIAnd+EkNqz9ecIDZBrASlu33dP7yiAT9wmeA/iw/hTnyX6qHXnD
wcKmw2jP/+ykml/U9N/C5h2iY2jy0R0AGRgYk3I3nrmkKyP9g6tAUg0unAiCUMAb
7OlA/B10HfMCAwEAATANBgkqhkiG9w0BAQQFAAOCAQEAhf5Ll3fOgjJn/HQSHtSl
gKNxpb4xbL6JcfsYn1836+eGuI0uDNc9Qk4ULGNOm0d+yjSNjpXobDW0fPz+MluM
SWR47hQHum3qYOSQ5cApwrtSOC61OMAFwqLJi9AIJ1fsqhRR4qEW9s8lVB9kyDfw
MSLJy52hWna4PneVGV7eeM5WDWAKsua49TowY2gkJ+KV3AmeMP/qURScVb0xPAyi
Jo5x/OyFVqec4CcbrdToNffa7ipVkJu90du5uPk69pWUwjQy6ic/+EbAQMIMMkUN
ghTC9qg+KDObZHnALgZ6G6NWnhZwoDxXlc/hsn+XQsmC8DwedweGYMgApsiWlCaU
4w==
-----END CERTIFICATE-----
Sending… ovs-vsctl set-manager ssl:10.*.*.*
Sending… ovs-vsctl br-set-external-id br-int bridge-id br-int
Sending… ovs-vsctl -- --may-exist add-br br-int
Sending… ovs-vsctl -- --may-exist add-br br-eth1
Sending… ovs-vsctl -- --may-exist add-port br-eth1 eth1
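For what it's worth, the toolkit drives these commands over remote sudo SSH, as mentioned above. Below is a minimal sketch of how that kind of remote execution could be done from Python; I'm assuming paramiko here purely for illustration, which is not necessarily what our library uses internally.

import paramiko

def run_remote_sudo(host, user, password, command):
    # Run a single command with sudo on a remote node (illustrative only).
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    # -S tells sudo to read the password from stdin
    stdin, stdout, stderr = client.exec_command("sudo -S %s" % command)
    stdin.write(password + "\n")
    stdin.flush()
    output = stdout.read()
    client.close()
    return output

print run_remote_sudo("10.0.0.21", "labadmin", "secret",
                      "ovs-vsctl br-set-external-id br-int bridge-id br-int")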

After setup you can verify the node by logging into it (if you're remote) and running the ovs-vsctl show command. This will show you all of the configuration that has been done to set up the hypervisor node: managers and controllers are set up and connected, and bridge interfaces are created and ready to connect tunnels. We can also verify that the hypervisor node is set up correctly by looking at the NSX/NVP Manager dashboard, which shows that the Ubuntu node is now connected, is using the correct Transport Zone, and is up and ready to go.

That's the end of what I wanted to show as far as remote configuration of NVP/NSX components goes. We use this a lot when setting up our OpenStack environments and when we add or remove Compute nodes that need to talk on the Neutron/virtual network. Again, there are some tweaks and cleanups I need to address, but hopefully I can have this available on a public repo soon.

Python bindings to NVP/NSX REST API

Now I want to get into using the toolkit's NVP/NSX Python API. This is an API written in Python that wraps the REST APIs exposed by NVP/NSX. The main class of the API takes service libraries as arguments and then calls the init method on each of them to instantiate the functions they provide. For instance, "ControlServices" focuses on control cluster API calls rather than the logical virtual network components.

from nvp2 import NVPClient
from transport import Transport
from network_services import NetworkServices
from control_services import ControlServices
import logging

class NVPApi(Transport, NetworkServices, ControlServices):
    client = NVPClient()
    log = logging

    def __init__(self, debug=False, verbose=False):
        self.client.login()
        if verbose:
            self.log.basicConfig(level=self.log.INFO)
        if debug:
            self.log.basicConfig(level=self.log.DEBUG)
        # Hand the authenticated client and logger to each service mixin.
        Transport.__init__(self, self.client, self.log)
        NetworkServices.__init__(self, self.client, self.log)
        ControlServices.__init__(self, self.client, self.log)

Some examples of how to instantiate the library class and use the provided service functions for NVP/NSX are below. This is not a full list of features, and for the most part they target NVP 3.2 and below. We are in the process of making sure it works across the board; so far the NVP/NSX APIs have been pretty good about backward compatibility.

from api.nvp_api import NVPApi
#Instantiate an  API Object
#api = NVPApi()
api = NVPApi(debug=True)
#api = NVPApi(debug=True, verbose=True)
#print "Available Functions of the API"
# print dir(api)
# See an existing "Hypervisor Node"
print api.tnode_exists("My_Hypervisor_Node")
# Check for existing Transport Zones
print api.get_transport_zones()
# Check for existing Transport Zone
print api.check_transport_exists("My_Transport_Zone")
# Check Control Cluster Nodes
nodes = api.get_control_cluster()
for node in nodes:
  print "\n"
  print node['display_name']
  for role in node['roles']:
    print "%s" % role['role']
    print "%s\n" % role['listen_addr']
print "\n"
# Check stats for interfaces on transport node
stats = api.get_interface_statistics("80d9cb27-432c-43dc-9a6b-15d4c45005ee",
   "breth2+")
print "Interface %s" % ("breth2")
for stat, val in stats.iteritems():
  print "%s -- %s" % (stat, val)
print "\n"

I hope you enjoyed reading through some of this; it's really gone from a bunch of scripts no one could understand to something halfway useful. There are definitely better ways to do this, and by no means is this a great solution; it's just one we had lying around that I still use from time to time. Managing the JSON state/descriptions of the nodes was the hardest part once multiple people started using this. We wound up managing the files with Puppet, and also using Puppet to install the base Open vSwitch software for new NVP/NSX components on Ubuntu servers. Happy Friday, Happy Coding, Happy Halloween!

(Update): Open-source

I wanted to update the post with information about the library and how to access it. The great folks at EMC Code ( http://emccode.github.io/ ) have added this project to the #DevHigh5 program, and it is available on their site. Look at the right side, click the DevHigh5 tag, and look for Nicira NVP/NSX Python under the projects.


Information can be seen by hovering over the project.


The code can be viewed on GitHub and on PyPI ( https://pypi.python.org/pypi/nvpnsxapi ), and it can be installed via pip install nvpnsxapi.


Enjoy 🙂

Puppet, Chef, Orchestration and DevOps:

In IT systems administration and DevOps, the orchestration of everything from bare-metal resources to virtual applications increasingly needs to be fully automated, with custom hooks for scale-out HPC workloads, cloud environments, and single deployments for SMBs alike. Automation via one-off specialized scripts for network and compute needs to become a thing of the past.

In steps:

I personally have had the opportunity to work with Foreman, Heat, Puppet, and (somewhat) Chef. The other tools below also offer great ways to automate from bare metal all the way up to fully virtual "stacks" of network, compute, and storage.

Puppet and Chef share a space in the IT automation industry, and both succeed in their vision. Whether you decide to use Puppet or Chef depends on your alignment and what you're trying to accomplish. I've heard that Chef's master node scales better, but I have personally never tested this theory. Both aim to provide versioned package management and configuration control over a subset of nodes in an IT environment. Razor, a bare-metal provisioning tool, was developed as a venture between EMC and Puppetlabs and offers a path to full automation between the two. That's not to say other DevOps tools or bare-metal provisioning workflows can't be substituted; I may just be a bit biased.

Into the IT wormhole 

(brief notes)

Puppet

  • Client-server based: a Puppetmaster and Puppet clients. Declarative language for "write once, deploy many".
  • Has open-source OpenStack packages for on-demand OpenStack delivery/configuration version control.
  • Integrates with OpenStack Heat/TripleO for managing packages and configurations.
  • Deployment of monitoring and security tools is all possible within a private cloud.
  • Integrates well with Razor.

Chef

  • Client-server based orchestration management, "infrastructure as code" for deploying applications, version control, and config files.
  • Written in Ruby. "Cookbooks" can be written to deploy OpenStack components ("Chef for OpenStack"), with the potential to deploy security, monitoring, etc.
  • Github.com/opscode/openstack-chef-repo (Grizzly, Nicira plugin, KVM, LXC)
  • Ceilometer and Quantum cookbooks (by Dreamhost)
  • NVP, OVS cookbooks (by Nicira)
  • Chef agent for Arista switches, "kind of" SDN
  • Roles and recipes; a role could be "All-in-one Devstack", "Controller Node", or "Base Node".

Juju

  • Like Heat, Juju deploys and manages services and applications within a cloud provider. Juju can deploy OpenStack components (e.g. Glance) or deploy applications (e.g. WordPress) on top of existing clouds.
  • Juju.ubuntu.org

To integrate with openstack you must specify these options:

openstack:
  type: openstack_s3
  control-bucket:
  admin-secret:
  auth-url: https://yourkeystoneurl:443/v2.0/
  default-series: precise
  juju-origin: ppa
  ssl-hostname-verification: True
  default-image-id: bb636e4f-79d7-4d6b-b13b-c7d53419fd5a
  default-instance-type: m1.small

Heat

  • Heat is an orchestration tool for managing "stacks", or applications deployed on the cloud. Heat can orchestrate ports, routers, instances, floating IPs, private networks, etc.
  • Packaging can also be installed via Heat templates to do things like "deploy a stack and make it a 4-node WordPress cluster."
  • Provides an OpenStack-like CLI and database, with show, list, and create methods for interaction (a short Python sketch follows below).
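As a hedged illustration of driving Heat from Python rather than the CLI, here is a minimal sketch using python-heatclient; the endpoint, token, and template file are placeholders you would supply yourself.

from heatclient.client import Client

# Placeholders -- point these at your own Heat endpoint and Keystone token.
HEAT_URL = "http://heat-api.example.com:8004/v1/<tenant_id>"
TOKEN = "<keystone-token>"

heat = Client('1', endpoint=HEAT_URL, token=TOKEN)

# List existing stacks and their status.
for stack in heat.stacks.list():
    print stack.stack_name, stack.stack_status

# Create a new stack from a template file (e.g. a 4-node WordPress cluster).
template = open("wordpress.yaml").read()
heat.stacks.create(stack_name="wordpress-demo", template=template)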

TripleO

  • Dynamic “Cloud on Cloud” version control of your cloud.
  • Needs a "seed" cloud stack to provision 2 HA Nova bare-metal (Ironic) servers; these bare-metal stacks then provision an "overcloud" via Heat, and the bare-metal servers learn about available nodes through node enrollment via MAC address.
  • Integrates well with Puppet/Chef for Package Management/Configuration if you did not want to use Heat.
  • Comes with a set of tools, os-apply-config, os-refresh-config, diskimage-builder.
    • diskimage-builder is used to build custom images with a notion of "elements"; an element can be anything from a service to an application to a database (e.g. Glance, MySQL), and you can add them to the image you build. Quite a useful tool by itself, actually.
    • You can build a base ubuntu qcow image that works with Openstack and Glance (Grizzly) by using the command:

 disk-image-create vm base -o base -a i386

Razor

  • Uses a specialized microkernel to PXE boot nodes; the microkernel checks in with Razor to provide an inventory of the system, and user-created policies then apply a configuration to the node.
  • Able to hand off to DevOps tooling (Chef, Puppet).

Pxe_dust

  • A complete solution for PXE booting; not really a package-management or config solution.
  • Chef has a pxe_dust recipe, so AFAIK it is interoperable with Chef.

Crowbar

  • Hardware provisioning and application mgmt. (by Dell/SUSE)
  • Crowbar.github.com
  • Features
    • server discovery (crowbar_machines -U crowbar -P crowbar list)
    • firmware upgrades
    • operating system installation via PXE Boot.
    • application deployment via Chef. (e.g. openstack)

Cobbler

Cobbler is a Linux installation server that allows for rapid setup of network installation environments. It glues together and automates many associated Linux tasks so you do not have to hop between many various commands and applications when deploying new systems, and, in some cases, changing existing ones. Cobbler can help with provisioning, managing DNS and DHCP, package updates, power management, configuration management orchestration, and much more. With a simple series of commands, network installs can be configured for PXE, reinstallation, media-based net-installs, and virtualized installs (supporting Xen, qemu, KVM, and some variants of VMware). Cobbler uses a helper program called ‘koan’ (which interacts with Cobbler) for reinstallation and virtualization support.

Foreman

Through deep integration with configuration management, DHCP, DNS, TFTP, and PXE-based unattended installations, Foreman manages every stage of the lifecycle of your physical or virtual servers. Foreman provides comprehensive, auditable interaction facilities, including a web frontend and a robust, RESTful API (a quick Python sketch of hitting the API follows the notes below).

  • Theforman.org
  • Foreman has tight integration with Puppetlabs as well; Foreman pulls the Puppet manifests directory into its Web UI, which makes for a nice management dashboard for provisioning applications.
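Since Foreman exposes a RESTful API as mentioned above, here is a quick hedged sketch of querying it from Python with requests; the host, credentials, and response handling are placeholders, not a specific Foreman deployment.

import requests

# Placeholders -- point these at your own Foreman instance.
FOREMAN_URL = "https://foreman.example.com"
AUTH = ("admin", "changeme")

# List hosts known to Foreman via its REST API.
resp = requests.get("%s/api/hosts" % FOREMAN_URL, auth=AUTH, verify=False)
resp.raise_for_status()
print resp.json()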

xcat

xCAT’s purpose is to enable you to manage large numbers of servers used for any type of technical computing (HPC clusters, clouds, render farms, web farms, online gaming infrastructure, financial services, datacenters, etc.). xCAT is known for its exceptional scaling, for its wide variety of supported hardware, operating systems, and virtualization platforms, and for its complete day 0 setup capabilities.

  • Allows for a stateless boot (booting off a downloaded RAMDisk image from the xCAT management node) with an available scratch disk for persistent data across reboots. Statelite (NFS-mounted) files allow for another form of persistence across reboots. Though, in both cases, no stateful information should be kept on either.
  • Developed by IBM; Power and Z support.