Category Archives: EMC

Exploring Powerstrip from ClusterHQ: A Socketplane Adapter for Docker


sources: http://socketplane.io, https://github.com/ClusterHQ/powerstrip, http://clusterhq.com

Over the past few months, one of the areas worth exploring in the container ecosystem is how containers work with external services and applications. I currently work in EMC CTO Advanced Development, so my interest naturally leans toward data services, but my background with SDN controllers and architectures is still one of my strongest interests, so I figured I would get to know Powerstrip by working with Socketplane's tech release.

*Disclaimer:

This is not the official Powerstrip integration for Socketplane that was merged over the last week or so. I was working on this in a rat hole, and it works a little differently than the one Socketplane merged recently.

What is Powerstrip?

Powerstrip is a simple proxy for Docker requests and responses to and from the Docker client/daemon that lets you plug in "adapters" that can ingest a Docker request, perform an action, modification, or service setup, and output a response that is then returned to Docker. There is a good explanation on ClusterHQ's GitHub page for the project.

Powerstrip is really a prototype tool for Docker plugins; a more formal discussion, issues, and hopefully a future implementation of Docker plugins will come out of such efforts and streamline the development of new plugins and services for the container ecosystem.

Using a plugin or adapter architecture, one could imagine plugging in storage services, networking services, metadata services, and much more. This is exactly what is happening: Weave and Flocker both have adapters, and Socketplane added support recently.

Example Implementation in Go

I decided to explore this in Go because, at the time, I did not see an implementation of the PowerStripProtocol in Go. What is the PowerStripProtocol?

The Powerstrip protocol is a JSON schema that Powerstrip understands so that it can hook its adapters into Docker. There are a few basic objects within the schema that Powerstrip needs to understand, and it varies slightly between PreHook and PostHook requests and responses.

Pre-Hook

The schema below is what PowerStripProtocolVersion: 1 implements; it needs to have the pre-hook Type as well as a ClientRequest.

{
    PowerstripProtocolVersion: 1,
    Type: "pre-hook",
    ClientRequest: {
        Method: "POST",
        Request: "/v1.16/container/create",
        Body: "{ ... }" or null
    }
}

Below is what your adapter should respond with, a ModifiedClientRequest

{
    PowerstripProtocolVersion: 1,
    ModifiedClientRequest: {
        Method: "POST",
        Request: "/v1.16/container/create",
        Body: "{ ... }" or null
    }
}

Post-Hook

The schema below is what PowerStripProtocolVersion: 1 implements; it needs to have the post-hook Type as well as a ClientRequest and a ServerResponse. We add ServerResponse here because post-hooks are processed after Docker has already handled the request, so there is already a response.

{
    PowerstripProtocolVersion: 1,
    Type: "post-hook",
    ClientRequest: {
        Method: "POST",
        Request: "/v1.16/containers/create",
        Body: "{ ... }"
    },
    ServerResponse: {
        ContentType: "text/plain",
        Body: "{ ... }" response string or null (if it was a GET request),
        Code: 404
    }
}

Below is what your adapter should respond with, a ModifiedServerResponse

{
    PowerstripProtocolVersion: 1,
    ModifiedServerResponse: {
        ContentType: "application/json",
        Body: "{ ... }",
        Code: 200
    }
}

Golang Implementation of the PowerStripProtocol

What this looks like in Go is below. (I'll try to have this open-sourced soon, but it's pretty basic :] ). Notice we implement the main PowerStripProtocol in a Go struct, but the JSON tags and options contain an omitempty for certain fields, particularly the ServerResponse. This is because we always get a ClientRequest in pre- or post-hooks, but not always a ServerResponse.

[Image: Go structs implementing the PowerStripProtocol]
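A minimal sketch of what those structs can look like under the schema above (field names follow the protocol; my actual tags and layout differ slightly):

package powerstrip

// ClientRequest mirrors the request the Docker client originally made.
type ClientRequest struct {
    Method  string `json:"Method,omitempty"`
    Request string `json:"Request,omitempty"`
    Body    string `json:"Body,omitempty"`
}

// ServerResponse only shows up on post-hooks, hence the omitempty tags.
type ServerResponse struct {
    ContentType string `json:"ContentType,omitempty"`
    Body        string `json:"Body,omitempty"`
    Code        int    `json:"Code,omitempty"`
}

// PowerstripProtocol is the top-level message exchanged with Powerstrip.
type PowerstripProtocol struct {
    PowerstripProtocolVersion int             `json:"PowerstripProtocolVersion"`
    Type                      string          `json:"Type,omitempty"`
    ClientRequest             *ClientRequest  `json:"ClientRequest,omitempty"`
    ServerResponse            *ServerResponse `json:"ServerResponse,omitempty"`
}

// Responses the adapter hands back to Powerstrip.
type PreHookResponse struct {
    PowerstripProtocolVersion int           `json:"PowerstripProtocolVersion"`
    ModifiedClientRequest     ClientRequest `json:"ModifiedClientRequest"`
}

type PostHookResponse struct {
    PowerstripProtocolVersion int            `json:"PowerstripProtocolVersion"`
    ModifiedServerResponse    ServerResponse `json:"ModifiedServerResponse"`
}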

We can use these Go structs to create builders, which may be generic or serve a certain purpose like catching pre-hook /containers/create calls from Docker and setting up socketplane networks, as you will see later. Below are general function heads that return a marshaled []byte of the Go struct to gorest.ResponseBuilder.Write().

[Image: pre-hook response builder]

[Image: post-hook response builder]
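The builders are roughly this shape (same powerstrip package as the struct sketch above; the function names are illustrative, and the marshaled bytes are what get handed to gorest.ResponseBuilder.Write()):

package powerstrip

import "encoding/json"

// BuildPreHookResponse marshals a ModifiedClientRequest for Powerstrip.
func BuildPreHookResponse(method, request, body string) ([]byte, error) {
    resp := PreHookResponse{
        PowerstripProtocolVersion: 1,
        ModifiedClientRequest: ClientRequest{
            Method:  method,
            Request: request,
            Body:    body,
        },
    }
    return json.Marshal(resp)
}

// BuildPostHookResponse does the same for a ModifiedServerResponse.
func BuildPostHookResponse(contentType, body string, code int) ([]byte, error) {
    resp := PostHookResponse{
        PowerstripProtocolVersion: 1,
        ModifiedServerResponse: ServerResponse{
            ContentType: contentType,
            Body:        body,
            Code:        code,
        },
    }
    return json.Marshal(resp)
}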

Putting it all together

Powerstrip suggests that adapters be created as Docker containers themselves, so the first step was to create a Dockerfile that built an environment that could run the Go adapter.

Dockerfile Snippets

First, we need a Go environment inside the container; this can be set up like the following. We also need a couple of packages, so we include the "go get" lines for these.

[Image: Dockerfile snippet setting up the Go environment]
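Roughly, those Dockerfile lines amount to something like the following (the base image, package names, and paths here are illustrative, not my exact Dockerfile):

FROM ubuntu:14.04

# Go toolchain plus git/mercurial so "go get" can fetch dependencies
RUN apt-get update && apt-get install -y golang git mercurial
ENV GOPATH /opt/go
ENV PATH $PATH:$GOPATH/bin

# packages the adapter depends on (gorest serves the REST endpoints)
RUN go get code.google.com/p/gorest

# adapter source, plus the launch script used later as the ENTRYPOINT
ADD . $GOPATH/src/powerstrip-socketplane
ADD run.sh /opt/run.sh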

Next we need to make our script (ADDed earlier in the Dockerfile) executable and use it as an ENTRYPOINT. This script takes commands like run, launch, version, etc.

[Image: run script used as the ENTRYPOINT]
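The script is a small command dispatcher along these lines (a sketch, not the exact script; the adapter binary path is illustrative, and the socketplane install step is only indicated in a comment since its exact invocation lives in the real run.sh):

#!/bin/bash
# /opt/run.sh - entrypoint for the adapter container (illustrative sketch)

case "$1" in
  launch)
    # bring up socketplane itself here (the real script uses socketplane's
    # "unattended nopowerstrip" install options), then start the Go adapter
    exec /opt/go/bin/powerstrip-socketplane
    ;;
  run)
    # just start the adapter; assume socketplane is already running
    exec /opt/go/bin/powerstrip-socketplane
    ;;
  version)
    echo "powerstrip-socketplane adapter (tech preview)"
    ;;
  *)
    echo "usage: run.sh {run|launch|version}"
    exit 1
    ;;
esac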

Our Go-based socketplane adapter is laid out as below. (Mind the certs directory; this was something extra to get it working behind a firewall.)

[Image: adapter code layout]

"powerstrip/" owns the protocol code. The actions are Create.go and Start.go (for the pre-hook create and the post-hook start); these get the ClientRequests from:

  • POST /*/containers/create

And

  • POST /*/containers/*/start

"adapter/" is the main adapter that processes the top-level request and figures out whether it is a pre-hook or post-hook and which URL it matches. It uses a switch on Type to do this, then sends the request on its way to the correct action within "actions/".
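A trimmed-down sketch of that dispatch (the powerstrip and actions packages are the ones sketched in this post; PostHookContainersStart and the import paths are illustrative names, not the exact code):

package adapter

import (
    "fmt"
    "strings"

    "powerstrip-socketplane/actions"    // illustrative import paths
    "powerstrip-socketplane/powerstrip"
)

// Handle inspects the hook Type and the request URL, routes the message to
// the matching action, and passes everything else through unmodified.
func Handle(msg *powerstrip.PowerstripProtocol) ([]byte, error) {
    switch msg.Type {
    case "pre-hook":
        if strings.Contains(msg.ClientRequest.Request, "/containers/create") {
            return actions.PreHookContainersCreate(msg)
        }
        // not ours: echo the client request back unmodified
        return powerstrip.BuildPreHookResponse(
            msg.ClientRequest.Method, msg.ClientRequest.Request, msg.ClientRequest.Body)
    case "post-hook":
        if strings.Contains(msg.ClientRequest.Request, "/start") {
            return actions.PostHookContainersStart(msg)
        }
        // not ours: echo the server response back unmodified
        return powerstrip.BuildPostHookResponse(
            msg.ServerResponse.ContentType, msg.ServerResponse.Body, msg.ServerResponse.Code)
    default:
        return nil, fmt.Errorf("unknown hook type %q", msg.Type)
    }
}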

"actions/" contains the Start and Create actions that process the pre-hook and post-hook calls mentioned above. The create hook does most of the work, which I'll explain a little further down in the post.

[Image: actions code]

Now we can run "docker build -t powerstrip-socketplane ." in this directory to build the image. Then we use this image to start the adapter as below. Keep in mind the script is actually using the "unattended nopowerstrip" options for socketplane, since we're using our own separate Powerstrip here.

docker run -d --name powerstrip-socketplane \
 --expose 80 \
 --privileged \
 --net=host \
 -e BOOTSTRAP=true \
 -v /var/run/:/var/run/ \
 -v /usr/bin/docker:/usr/bin/docker \
 powerstrip-socketplane launch

Once it is up and running, we can use a simple ping REST URL to test whether it's up; it should respond with "pong" if everything is running.

$ curl http://localhost/v1/ping
pong

Now we need to create our YAML file for PowerStrip and start our Powerstrip container.

[Image: adapters.yml configuration for Powerstrip]

[Image: starting the Powerstrip container]
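From memory, the Powerstrip config and launch look roughly like the following. Check ClusterHQ's Powerstrip README for the authoritative adapters.yml syntax; the adapter name, the adapter URL (which has to be reachable from the Powerstrip container; my adapter listened on the host network on port 80), and the mounted paths here are my own assumptions.

# adapters.yml (sketch)
version: 1
endpoints:
  "POST /*/containers/create":
    pre: [socketplane]
  "POST /*/containers/*/start":
    post: [socketplane]
adapters:
  socketplane: http://localhost:80/v1/adapter

docker run -d --name powerstrip \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $PWD/adapters.yml:/etc/powerstrip/adapters.yml \
  -p 2375:2375 \
  clusterhq/powerstrip:v0.0.1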

If all is well, you should see a few containers running, looking something like this:

dddd151d4076        socketplane/socketplane:latest   "socketplane --iface   About an hour ago   Up About an hour                             romantic_babbage
6b7a63ce419a        clusterhq/powerstrip:v0.0.1      "twistd -noy powerst   About an hour ago   Up About an hour    0.0.0.0:2375->2375/tcp   powerstrip
d698047800b1        powerstrip-socketplane:latest    "/opt/run.sh launch"   2 hours ago         Up About an hour                             powerstrip-socketplane

The adapter will automatically spawn a socketplane/socketplane:latest container because the launch script installs socketplane and brings up the socketplane software.

Once this is up, we need to update our DOCKER_HOST environment variable, and then we are ready to start issuing commands to Docker; our adapter will catch the requests. A few examples are below.

export DOCKER_HOST=tcp://127.0.0.1:2375

Next we create some containers with a SOCKETPLANE_CIDR environment variable; the adapter will automatically catch this and process the networking information for you.

docker create --name powerstrip-test1 -e SOCKETPLANE_CIDR="10.0.6.1/24" ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
docker create --name powerstrip-test2 -e SOCKETPLANE_CIDR="10.0.6.1/24" ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"

Next, start the containers.

docker start powerstrip-test1

docker start powerstrip-test2

If you issue an ifconfig on either one of these containers, you will see that it owns an ovs<uuid> port that connects it to the virtual network.

sudo docker exec powerstrip-test2 ifconfig
ovs23b79cb Link encap:Ethernet  HWaddr 02:42:0a:00:06:02
          inet addr:10.0.6.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::a433:95ff:fe8f:c8d6/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1440  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:956 (956.0 B)  TX bytes:726 (726.0 B)

We can issue a ping to test connectivity over the newly created VXLAN networks. (powerstrip-test1=10.0.6.2, and powerstrip-test2=10.0.6.3)

$ sudo docker exec powerstrip-test2 ping 10.0.6.2
PING 10.0.6.2 (10.0.6.2) 56(84) bytes of data.
64 bytes from 10.0.6.2: icmp_seq=1 ttl=64 time=0.566 ms
64 bytes from 10.0.6.2: icmp_seq=2 ttl=64 time=0.058 ms
64 bytes from 10.0.6.2: icmp_seq=3 ttl=64 time=0.054 ms

So what’s really going on under the covers?

In my implementation of the Powerstrip adapter, the adapter does the following things (a condensed sketch of the create hook follows this list):

  • Adapter recognizes a Pre-Hook POST /containers/create call and forwards it to PreHookContainersCreate
  • PreHookContainersCreate checks the client request Body for the env variable SOCKETPLANE_CIDR; if it isn't there, the request is returned like a normal Docker request. If it is, the adapter probes socketplane to see whether the network exists, and creates it if it doesn't.
  • In either case, a "network-only container" is created and connected to the OVS VXLAN L2 domain; the adapter then modifies the request body in the ModifiedClientRequest so that the NetworkMode is changed to container:<new-network-only-container> (the API equivalent of --net=container:<name>).
  • Then upon start, the network is up and the container boots like normal with the correct network namespace, connected to the socketplane network.
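What the create hook boils down to, as a condensed and hedged sketch (ensureSocketplaneNetwork is a stubbed, hypothetical helper; the import path and builders come from the sketches earlier in the post):

package actions

import (
    "encoding/json"
    "fmt"
    "strings"

    "powerstrip-socketplane/powerstrip" // protocol/builder package sketched earlier (illustrative path)
)

// PreHookContainersCreate looks for a SOCKETPLANE_CIDR env variable in the
// create body. When present, it makes sure the socketplane network exists,
// attaches a network-only container to it, and rewrites HostConfig.NetworkMode
// to "container:<network-only-container>" before handing the request back.
func PreHookContainersCreate(msg *powerstrip.PowerstripProtocol) ([]byte, error) {
    var body map[string]interface{}
    if err := json.Unmarshal([]byte(msg.ClientRequest.Body), &body); err != nil {
        return nil, err
    }

    cidr := envValue(body, "SOCKETPLANE_CIDR")
    if cidr == "" {
        // no socketplane request: pass the create call through untouched
        return powerstrip.BuildPreHookResponse(
            msg.ClientRequest.Method, msg.ClientRequest.Request, msg.ClientRequest.Body)
    }

    netContainer, err := ensureSocketplaneNetwork(cidr)
    if err != nil {
        return nil, err
    }

    hostConfig, _ := body["HostConfig"].(map[string]interface{})
    if hostConfig == nil {
        hostConfig = map[string]interface{}{}
    }
    hostConfig["NetworkMode"] = fmt.Sprintf("container:%s", netContainer)
    body["HostConfig"] = hostConfig

    newBody, err := json.Marshal(body)
    if err != nil {
        return nil, err
    }
    return powerstrip.BuildPreHookResponse(
        msg.ClientRequest.Method, msg.ClientRequest.Request, string(newBody))
}

// envValue scans the create body's Env array ("KEY=value" strings) for key.
func envValue(body map[string]interface{}, key string) string {
    env, _ := body["Env"].([]interface{})
    for _, e := range env {
        if s, ok := e.(string); ok && strings.HasPrefix(s, key+"=") {
            return strings.TrimPrefix(s, key+"=")
        }
    }
    return ""
}

// ensureSocketplaneNetwork is a stand-in for the real logic, which probes
// socketplane, creates the network if needed, and starts a network-only
// container on it; it is stubbed here purely for illustration.
func ensureSocketplaneNetwork(cidr string) (string, error) {
    _ = cidr
    return "socketplane-net-only-container", nil
}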

Here is a brief diagram of how it works.

[Image: adapter architecture diagram]

Thanks for reading, please comment or email me with any questions.

Cheers!

Docker Virtual Networking with Socketplane.io

[Images: Open vSwitch, Socketplane, Consul, and Docker logos]

ImgCred: http://openvswitch.org/, http://socketplane.io/, https://consul.io/, https://docker.com/

Containers have no doubt been a hyped technology in 2014, and that continues into 2015. Containers have been around for a while now (see my other post for a high-level overview of the timeline) and will be a major technology to think about for developers as well as within the datacenter moving forward.

Today I want to take the time to go over Socketplane.io's first preview of the technology they have been working on since announcing their company in mid-October. Socketplane.io is "driving DevOps Defined Networking by enabling distributed security, application services and orchestration for Docker and Linux containers," and is backed by some great tech talent: Brent Salisbury, Dave Tucker, Madhu Venugopal, and John M. Willis, who all bring leading-edge network and ops skills into one great effort. I had the pleasure of meeting up with Brent and Madhu at ONS last year, did some work with Brent way back when I was working on Floodlight, and am very excited for the future of Socketplane.io.

What’s behind Socketplane.io and What is the current preview technology?

The current tech preview released on GitHub lets you get a taste of multi-host networking between Docker hosts using Open vSwitch and Consul as core enablers, building VXLAN tunnels between hosts to connect Docker containers on the same virtual (logical) network with no remote/external SDN controller needed. The flows are programmed via OVSDB into the software switch, so the user experience and maintenance are smooth with the least amount of moving parts. Users interact with a wrapper CLI for Docker called "socketplane" that also controls how socketplane's virtual networks are created, deleted, and manipulated. Socketplane's preview uses this wrapper, but if you're following Docker's plugin trend, then you know they hope to provide virtual network services that way in the future (keep posted on this). I'd also love to see this tech become portable to other container technologies such as LXD or Rocket. Enough text, let's get into using Socketplane.io.

Walkthrough

First let's look at the components of what we will be setting up in this post. Below you will see two nodes, socketplane node1 and socketplane node2; we will be setting these up with Vagrant and VirtualBox using Socketplane's included Vagrantfile. On these nodes, when socketplane starts up it will install OVS and Docker and start a socketplane container that runs Consul for managing network state (one socketplane container will be the master; I'll show more on this later as well). Then we can create networks, create containers, and play with some applications. I will cover this in detail, show how the hosts are connected via VXLAN, and demo a sample web application across hosts.

[Image: socketplane architecture for this walkthrough]

Setting up Socketplane.io's preview

First install VirtualBox and Vagrant (I don't cover this, but use the links), then let's check out the repo.

[Image: cloning the socketplane repo]

Set an environment variable named SOCKETPLANE_NODES that tells the Vagrant setup how many nodes to bring up in your local environment. I chose 3. Then run "vagrant up" in the source directory.

[Image: SOCKETPLANE_NODES and vagrant up]
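In shell terms, the setup so far boils down to something like this (the repo URL is my assumption; check socketplane's GitHub page):

git clone https://github.com/socketplane/socketplane.git   # assumed repo URL
cd socketplane
export SOCKETPLANE_NODES=3
vagrant up
vagrant status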

After a few or ten minutes you should be all set to test out socketplane, thanks to the easy Vagrant setup provided by the socketplane guys. (There are also manual install instructions on their GitHub page if you fancy setting this up on bare metal or something.) You should see 3 nodes in VirtualBox after this, or you can run "vagrant status".

[Image: vagrant status output]

Now we can SSH into one of our socketplane nodes. Let's SSH into node1.

[Image: vagrant ssh into socketplane-1]

Now you are SSHed into one of the socketplane nodes. We can issue a "sudo socketplane" command and see the available options the CLI tool gives us.

[Images: socketplane CLI help output]

Containers are run, started, stopped, removed, and attached to via "socketplane run | start | stop | rm | attach", and these are used just like "docker run | start | stop | rm | attach".

Socketplane sets up a "default" network that (for me) has a 10.1.0.0/16 subnet address, and if you run "socketplane network list" you should see this network. To see how we can create virtual networks (vnets), we can issue the command pictured below: "socketplane network create foo4 10.6.0.0/16".
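Spelled out, the commands from this paragraph are:

sudo socketplane network list
sudo socketplane network create foo4 10.6.0.0/16
sudo socketplane network list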

[Image: socketplane network create foo4]

This will create a vnet named foo4, along with a VLAN for VXLAN and a default gateway at the .1 address. Now we can see both our default network and our "foo4" network in the list command.

[Image: socketplane network list showing default and foo4]

If we look at our Open vSwitch configuration now using the "ovs-vsctl show" command, we will also see a new port named foo4 that acts as our gateway so we can talk to the rest of the nodes on this virtual network. You should also see the VXLAN endpoints that align with the eth1 interfaces on the socketplane nodes.

[Image: ovs-vsctl show output]

Great, now we are all set up to run some containers that connect over the virtual network we just created. On socketplane-1, issue "sudo socketplane run -n foo4 -it ubuntu:14.10 /bin/bash"; this will start an Ubuntu container on socketplane-1 and connect it to the foo4 network.

[Image: socketplane run on socketplane-1]

You can use Ctrl-P + Ctrl-Q to exit the container and leave the tty open. If you issue an "ovs-vsctl show" command again, you will see an ovs<uuid> port added to the docker0-ovs bridge. This connects the container to the bridge, allowing it to communicate over the vnet. Let's create another container, but this time on our socketplane-2 host. So exit out, SSH into socketplane-2, and issue the same command. We should then be able to ping between our two containers on different hosts using the same vnet.

[Image: ping between containers across hosts]

Awesome, we can ping our first container from our second without having to set up any network information on the second host. This is because the network state is propagated across the cluster, so when we reference "foo4" on any of the nodes it uses the same network information. If you Ctrl-P + Ctrl-Q while running ping, we can also see the flows that are in our switch. We just need to use appctl and reference our docker0-ovs integration bridge.

[Image: ovs-appctl flow dump]
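For reference, the flow dump in the screenshot is along these lines (ovs-appctl's bridge/dump-flows subcommand against the docker0-ovs integration bridge shown by ovs-vsctl above):

sudo ovs-appctl bridge/dump-flows docker0-ovs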

As we can see, the flows push the VXLAN header and forward traffic to the destination VXLAN endpoint, and pop the VLAN off the encapsulation (action: pop_vlan) on ingress to our containers.

To show a more useful example, we can start a simple web application on socketplane-2 and access it over our vnet from socketplane-1 without having to use the Docker host IP or NAT. See below.

First start an image named tutum/hello-world, add it to the foo4 network, expose port 80 at runtime, and give it the name "web". Use the "web" name with the socketplane info command to get the IP address.

[Image: starting the tutum/hello-world container on socketplane-2]

Next, log out, SSH to socketplane-1, run an image called tutum/curl (a simple curl tool), and run "curl <IP-Address>"; you should get back a response from the simple "web" container we just set up.

[Image: curl to the web container from socketplane-1]

This is great! No more accessing pages based on host addresses and NAT. Although a simple use case, this shows some of the benefit of running a virtual network across many Docker hosts.

A little extra

So I mentioned before that socketplane runs Consul in a separate container; you can see the Consul logs by issuing "sudo socketplane agent logs" on any node. But for some more fun, and to poke around at some things, we are going to use nsenter. First find the socketplane Docker container, then follow the commands to get into the socketplane container.

[Image: using nsenter to get into the socketplane container]

Now you're in the socketplane container. We can issue an "ip link" and see that socketplane uses host networking to attach Consul, so the Consul cluster can communicate over the host network. We can confirm this by looking at the script used to start the service container.

[Image: ip link output inside the socketplane container]

See line 5 of this snippet: docker runs socketplane with host networking.

[Gist: socketplane run in a container with host networking]

You can issue this command on the socketplane-* hosts or in the socketplane container, and you should receive a response back from Consul showing that it is listening on 8500.

[Image: curl to Consul on port 8500]
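For example, either of these should return data from Consul's HTTP API on port 8500 (a sketch; run it on a socketplane host or inside the socketplane container):

curl http://localhost:8500/v1/status/leader
curl http://localhost:8500/v1/catalog/nodes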

You can issue “consul members” on the socketplane hosts to see the cluster as well.

[Image: consul members output]

You can also play around with consul via the python-consul library to see information stored in Consul.

[Image: querying Consul with python-consul]
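If you would rather skip python-consul, the same data is visible straight over Consul's HTTP KV API (a sketch; the exact keys socketplane stores will vary):

curl 'http://localhost:8500/v1/kv/?recurse'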

Conclusion

Overall this is a great upgrade to the Docker ecosystem. We have seen other software products like Weave, Flannel, Flocker, and others I'm probably missing address a clustered Docker setup with some type of networking overlay or communications proxy to connect multi-hosted containers. Socketplane's preview is completely open source and is developed on GitHub; if you're interested in walking through the code, working on bugs, or suggesting or adding features, visit the git page. Overall I like the OVS integration a lot, mainly because I am a proponent of the software switch and of pushing intelligence to the edge. I'd love to see some optional DPDK integration for performance in the near future, as well as more features that enable firewalling between vnets and the like. I'm sure it's probably on the horizon, and I am eagerly looking forward to seeing what Socketplane.io has for containers in the future.

cheers!