
Piping AWS output to Ansible Inventory


I've had the opportunity to work with a few different infrastructure automation tools such as Puppet, Chef, Heat, and CloudFormation, but Ansible has a simplicity to it that I like, although I admit I still have a strong preference for Puppet because I've used it extensively and have had good success with it.

In one of my previous projects I was creating a repeatable solution for building a Docker Swarm cluster (before SwarmKit) with Consul and Flocker. I wanted this to be completely scripted, so I climbed on the shoulders of AWS, Ansible, and Docker Machine.

The script would do 4 things.

  1. Initialize a security group in an existing VPC and create rules for the given setup.
  2. Create the Consul and Swarm machines using Docker Machine.
  3. Use AWS CLI to output the machines and pipe them to a python script that processes the JSON output and creates an Ansible inventory.
  4. Use the inventory to call Ansible to run something.

This flow can actually be used fairly reliably, not only for what I used it for but to automate a lot of other things, or even to expand an existing deployment.

An example of this workflow can be found here.

I’m going to focus on steps #3 and #4 here. First, we use the AWS CLI to output machine information and pass it to a script.

# List only running my-prefix* nodes
$ aws ec2 describe-instances \
   --filter Name=tag:Name,Values=my-prefix* \
   Name=instance-state-code,Values=16 --output=json | \
   python create_flocker_inventory.py

We use the instance-state-code of 16 because it corresponds to running instances. You can find more codes here: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_InstanceState.html. Then we choose JSON output using --output=json.

Next, the important piece is the pipe ( `|` ). It passes the output of the command on its left to the command on its right, create_flocker_inventory.py, so that the AWS CLI output becomes the script's input.
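
For context, the describe-instances JSON that the script consumes has roughly this shape (trimmed to just the fields the script reads; values are hypothetical):

{
  "Reservations": [
    {
      "Instances": [
        {
          "PublicIpAddress": "54.12.34.56",
          "KeyName": "my-prefix-key"
        }
      ]
    }
  ]
}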


So what does the Python script do with the output? Below is the script I used to process the JSON. It first sets up an _AGENT_YML variable containing a YAML configuration template. The main() function then takes the JSON parsed from stdin with json.load(), builds a list of dictionaries representing the instances, and writes each instance to an Ansible inventory file called "ansible_inventory". After that, the agent.yml configuration is written to a file along with some secrets pulled from the environment.

import os
import json
import sys


_AGENT_YML = """
version: 1
control-service:
  hostname: %s
  port: 4524
dataset:
  backend: aws
  access_key_id: %s
  secret_access_key: %s
  region: %s
  zone: %s
"""

def main(input_data):
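    # Collect the public IP and key name of the first instance in each reservation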
    instances = [
        {
            u'ip': i[u'Instances'][0][u'PublicIpAddress'],
            u'name': i[u'Instances'][0][u'KeyName']
        }
        for i in input_data[u'Reservations']
    ]

    with open('./ansible_inventory', 'w') as inventory_output:
        inventory_output.write('[flocker_control_service]\n')
        inventory_output.write(instances[0][u'ip'] + '\n')
        inventory_output.write('\n')
        inventory_output.write('[flocker_agents]\n')
        for instance in instances:
            inventory_output.write(instance[u'ip'] + '\n')
        inventory_output.write('\n')
        inventory_output.write('[flocker_docker_plugin]\n')
        for instance in instances:
            inventory_output.write(instance[u'ip'] + '\n')
        inventory_output.write('\n')
        inventory_output.write('[nodes:children]\n')
        inventory_output.write('flocker_control_service\n')
        inventory_output.write('flocker_agents\n')
        inventory_output.write('flocker_docker_plugin\n')

    with open('./agent.yml', 'w') as agent_yml:
        agent_yml.write(_AGENT_YML % (instances[0][u'ip'], os.environ['AWS_ACCESS_KEY_ID'], os.environ['AWS_SECRET_ACCESS_KEY'], os.environ['MY_AWS_DEFAULT_REGION'], os.environ['MY_AWS_DEFAULT_REGION'] + os.environ['MY_AWS_ZONE']))


if __name__ == '__main__':
    if sys.stdin.isatty():
        raise SystemExit("Must pipe input into this script.")
    stdin_json = json.load(sys.stdin)
    main(stdin_json)
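
For reference, the generated ansible_inventory ends up looking something like this (IP addresses are hypothetical):

[flocker_control_service]
54.12.34.56

[flocker_agents]
54.12.34.56
54.12.34.57

[flocker_docker_plugin]
54.12.34.56
54.12.34.57

[nodes:children]
flocker_control_service
flocker_agents
flocker_docker_plugin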

Once the JSON from the AWS CLI has been processed, all that remains is to run Ansible with the newly created inventory. In this case, we pass the inventory and configuration along with the Ansible playbook we want for our installation.

$ ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook \
 --key-file ${AWS_SSH_KEYPATH} \
 -i ./ansible_inventory \
 ./aws-flocker-installer.yml \
 --extra-vars "flocker_agent_yml_path=${PWD}/agent.yml"

 

Conclusion

Overall, this flow can be used with other cloud CLI tools such as Azure or GCE that can output instance state you can pipe to a script for further processing. It may not be the most elegant approach, but if you want to get a semi-complex environment up and running repeatably for development, the "pre-setup, get output, process output, install, configure" flow outlined above has worked pretty well.

Docker-based FIO I/O benchmarking

[Image: http://i.imgur.com/3oFD3XP.png]

What is FIO?

fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user. The typical use of fio is to write a job file matching the I/O load one wants to simulate. – (https://linux.die.net/man/1/fio)
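
For illustration, a fio job file is a small INI-style config. A hypothetical 16k random-read job against a mounted volume might look like this:

; random-read.fio -- hypothetical example job
[global]
ioengine=libaio
direct=1
runtime=60
time_based

[random-read-16k]
rw=randread
bs=16k
iodepth=16
size=1g
directory=/my/mounted/volume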

fio can be a great tool for measuring the I/O of a specific application workload on a particular device or file. It is a detailed benchmarking tool with many options and is widely used for today's workloads. I personally came across it while working at EMC, when I needed to benchmark disk I/O of applications running in different Linux container runtimes. This leads me to my next topic.

Why Docker based fio-tools

One of the projects I was working on used Docker on AWS and various private cloud deployments, and we wanted to see how workloads performed in these different cloud environments inside Docker containers with various CPU, memory, and disk I/O limits, on various block, flash, or DAS-based storage devices.

One way we wanted to do this was to containerize fio and let users pass the workload configuration and target disk to the container doing the testing.

The first part of this was to containerize fio with the option to pass in job files by pathname or by a URL such as a raw GitHub Gist.

The Dockerfile (below) is based on Ubuntu 14.10, which admittedly could be smaller, but it lets us easily install fio and pass a CMD script called run.sh.

FROM ubuntu:14.10
MAINTAINER <Ryan Wallner ryan.wallner@clusterhq.com>

RUN sed -i -e 's/archive.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
RUN apt-get -y update && apt-get -y install fio wget

VOLUME /tmp/fio-data
ADD run.sh /opt/run.sh
RUN chmod +x /opt/run.sh
WORKDIR /tmp/fio-data
CMD ["/opt/run.sh"]

What does run.sh do? The script does a few things. It checks that you are passing a JOBFILES name (a fio job), which, without REMOTEFILES, is expected to already exist in /tmp/fio-data. It also cleans up the fio-data directory by moving the job files out, removing any old graphs or output, and moving the job files back in. If the user passes REMOTEFILES, those files are downloaded from the internet with wget before being used.

#!/bin/bash

[ -z "$JOBFILES" ] && echo "Need to set JOBFILES" && exit 1;
echo "Running $JOBFILES"

# We really want no old data in here except the fio script
mv /tmp/fio-data/*.fio /tmp/
rm -rf /tmp/fio-data/*
mv /tmp/*fio /tmp/fio-data/

if [ ! -z "$REMOTEFILES" ]; then
 # We really want no old data in here
 rm -rf /tmp/fio-data/*
 IFS=' '
 echo "Gathering remote files..."
 for file in $REMOTEFILES; do
   wget --directory-prefix=/tmp/fio-data/ "$file"
 done 
fi

fio $JOBFILES

There are two other Dockerfiles aimed at two further operations: one produces graphs of the output data with fio2gnuplot, and the other serves the graphs and output from a Python SimpleHTTPServer on port 8000.

All Dockerfiles and examples can be found here (https://github.com/wallnerryan/fio-tools), and the repository also includes an all-in-one image, fiotools-aio, that runs the job, generates the graphs, and serves them in one shot.

How to use it

  1. Build the images or use the public images
  2. Create a Fio Jobfile
  3. Run the fio-tool image
docker run -v /tmp/fio-data:/tmp/fio-data \
-e JOBFILES=<your-fio-jobfile> \
wallnerryan/fio-tool

If your file is a remote raw text file, you can use REMOTEFILES

docker run -v /tmp/fio-data:/tmp/fio-data \
-e REMOTEFILES="http://url.com/.fio" \
-e JOBFILES=<your-fio-jobfile> wallnerryan/fio-tool

Run the fio-genplots script

docker run -v /tmp/fio-data:/tmp/fio-data wallnerryan/fio-genplots \
<fio2gnuplot options>

Serve your Graph Images and Log Files

docker run -p 8000:8000 -d -v /tmp/fio-data:/tmp/fio-data \
wallnerryan/fio-plotserve

Easiest way: run the "all-in-one" image (it will automatically produce IOPS and bandwidth graphs and serve them).

docker run -p 8000:8000 -v /tmp/fio-data \
-e REMOTEFILES="http://url.com/.fio" \
-e JOBFILES=<your-fio-jobfile> \
-e PLOTNAME=MyTest \
-d --name MyFioTest wallnerryan/fiotools-aio

Other Examples

Important

  • Your fio job file should reference a mount or disk that you would like to run the job against. In the job file it will look something like directory=/my/mounted/volume to test against Docker volumes.
  • If you want to run more than one all-in-one job, just use -v /tmp/fio-data instead of -v /tmp/fio-data:/tmp/fio-data. The explicit host mapping is only needed when you run the individual tool images separately.

To use with docker and docker volumes

docker run \
-e REMOTEFILES="https://gist.githubusercontent.com/wallnerryan/fd0146ee3122278d7b5f/raw/cdd8de476abbecb5fb5c56239ab9b6eb3cec3ed5/job.fio" \
-v /tmp/fio-data:/tmp/fio-data \
--volume-driver flocker \
-v myvol1:/myvol \
-e JOBFILES=job.fio wallnerryan/fio-tool

To produce graphs, run the fio-genplots container with -t <name of your graph> -p <pattern of your log files>

Produce Bandwidth Graphs

docker run -v /tmp/fio-data:/tmp/fio-data wallnerryan/fio-genplots \
-t My16kAWSRandomReadTest -b -g -p *_bw*

Produce IOPS graphs

docker run -v /tmp/fio-data:/tmp/fio-data wallnerryan/fio-genplots \
-t My16kAWSRandomReadTest -i -g -p *_iops*

Simply serve them on port 8000

docker run -p 8000:8000 -d \
-v /tmp/fio-data:/tmp/fio-data \
wallnerryan/fio-plotserve

To use the all-in-one image

docker run \
-p 8000:8000 \
-v /tmp/fio-data \
-e REMOTEFILES="https://gist.githubusercontent.com/wallnerryan/fd0146ee3122278d7b5f/raw/006ff707bc1a4aae570b33f4f4cd7729f7d88f43/job.fio" \
-e JOBFILES=job.fio \
-e PLOTNAME=MyTest \
--volume-driver flocker \
-v myvol1:/myvol \
-d \
--name MyTest wallnerryan/fiotools-aio

To use with docker-machine/boot2docker/DockerForMac

You can use a remote fio configuration file using the REMOTEFILES environment variable.

docker run \
-e REMOTEFILES="https://gist.githubusercontent.com/wallnerryan/fd0146ee3122278d7b5f/raw/d089b6321746fe2928ce3f89fe64b437d1f669df/job.fio" \
-e JOBFILES=job.fio \
-v /Users/wallnerryan/Desktop/fio:/tmp/fio-data \
wallnerryan/fio-tool

(or) If you have a directory that already has job files in it. *NOTE*: you must be using a shared folder, such as Docker > Preferences > File Sharing.

docker run -v /Users/wallnerryan/Desktop/fio:/tmp/fio-data \
-e JOBFILES=job.fio wallnerryan/fio-tool

To produce graphs, run the genplots container with the -t and -p options:

docker run \
-v /Users/wallnerryan/Desktop/fio:/tmp/fio-data wallnerryan/fio-genplots \
-t My16kAWSRandomReadTest -b -g -p *_bw*

Simply serve them on port 8000

docker run -v /Users/wallnerryan/Desktop/fio:/tmp/fio-data \
-d -p 8000:8000 wallnerryan/fio-plotserve

Notes

  • The fio-tools container will clean up the /tmp/fio-data volume by default when you re-run it.
  • If you want to save any data, copy this data out or save the files locally.

How to get graphs

  • When you serve on port 8000, you will see a list of all logs and plots created; click on the .png files to see the graphs (see below for an example screen).

[Image: http://i.imgur.com/nksQkZi.png]

 

Testing and building with codefresh

As a side note, I recently added this repository to build on Codefresh. Right now it builds the fiotools-aio Dockerfile, which I find most useful, but it was an easy experience that I wanted to add to the end of this post.

Navigate to https://g.codefresh.io/repositories or create a free account by logging into Codefresh with your GitHub account. Logging in with GitHub gives Codefresh access to the repositories you grant it, which is where the fio-tools images live.

I added the repository as a build and configured it like so.

[Screenshot: Codefresh repository build configuration]

This will automatically build my Dockerfile and run any integration or unit tests I have configured in Codefresh. Right now I have none, but I will soon add a simple job to run against a file as an integration test using a Codefresh composition.

Conclusion

Over my time using both native Linux tools and Docker-based or containerized tools, I have found there is a need for both. In fact, when testing container-native application workloads it is sometimes best to get metrics or benchmarks from the point of view of the application itself, which is why we chose to run fio as a microservice.

Hopefully this was an enjoyable read and thanks for stopping by!

Ryan

Migrating the monolith from EC2 to an ECS-based multi-service Docker app

  

In my spare time I run a website for a tax accounting company. It is a largely stateless monolithic app (not entirely, but we don't need persistent stores/volumes) built with Ruby on Rails, and it runs on an EC2 instance with Apache and the MySQL server and client installed. This is what is referred to as a "monolithic app" because all components are installed on a single VM, which makes the application more complicated to edit, patch, and update (even though it rarely needs to be updated, apart from making sure the current year's tax information and links are current). If we want to migrate to a new version of the database or a newer version of Rails, things are not isolated and can overlap. RVM and other mechanisms could be used for this, but by using Docker to isolate components and Amazon ECS to deploy the app, we can develop and push changes into production in a fraction of the time it used to take. This blog post goes through the experience of migrating that "monolithic" Ruby on Rails app and converting it into a two-service Docker application that can be deployed to Amazon ECS using Docker Compose.

First things first:

The first thing I did was ssh into my EC2 instance and figure out what dependencies I had. The following approach was taken to figure this out:

  • Use lsb_release -a to see what OS and release we’re using.
  • Look at installed packages via "dpkg" / "apt".
  • Look at installed Gems used in the Ruby on Rails app, take a peek at Gemfile.lock for this.
  • View the history of commands via "history" to spot any voodoo magic I may have done and forgotten about 🙂
  • View running processes via “ps [options]” which helped me remember what all is running for this app. E.g. Apache2, MySQL or Postgres, etc.

This gives us a bare minimum of what we need to think about when breaking our small monolith into separate services, at least from a main "component" viewpoint (e.g. database and Rails app). It is more complicated to figure out how to carve up the actual Rails/Ruby code into smaller services that make up the site, and in some cases this is where we can go wrong: if it works, don't break it. Other times, go ahead, break out smaller services and deploy them, but start small; think of it like an amoeba splitting 🙂

We can now move into thinking about the “design” of the app as it applies to microservices and docker containers. Read the next section for more details.

Playing with Legos:

As it was, we had Apache, Rails, and MySQL all running in the same VM. To move this to an architecture that uses containers, we need to separate some of these services into building blocks, or "legos", that connect together to build a single service or app. SOA terminology calls these "composite apps", which is similar in thinking if not in technology. We'll keep this simple, as stated above, and break our app into two pieces: a MySQL database container and a Ruby on Rails container running our app.

Database Container

First, we'll take a look at how we connect the database to Rails. Typically in Rails, a connection to a database is configured in a database.yml file and looks something like this:

    development:
        adapter: mysql
        database: AppName_development
        username: root
        password:
        host: localhost
    test:
        adapter: mysql
        database: AppName_test
        username: root
        password:
        host: localhost
    production:
        adapter: mysql
        database: AppName_production
        username: root
        password:
        host: localhost

Now, since we can deploy a MySQL container with Docker using something like the following:

docker run -d --name appname_db -e MYSQL_ROOT_PASSWORD=<password> mysql

We need a way to let our application know where this container lives (its IP address) and how to connect to it (TCP port, username/password, etc.). We're in luck: with Docker we can use the --link flag (read here for more information on how --link works) when spinning up our Rails app, and this injects some very useful environment variables into the app that we can reference when the application starts. Assuming we link our database container with the alias "appdb" (more on this later), we can change our database.yml file to look something like the following (test/prod config left out on purpose, see the rest here):

development:
 adapter: mysql2
 encoding: utf8
 reconnect: false
 database: appdb_dev
 pool: 5
 username: root
 password: <%= ENV['APPDB_ENV_MYSQL_ROOT_PASSWORD'] %>
 host: <%= ENV['APPDB_PORT_3306_TCP_ADDR'] %>
 port: <%= ENV['APPDB_PORT_3306_TCP_PORT'] %>
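
To make the variable names above concrete, Docker's legacy --link convention for an alias of "appdb" injects environment variables into the linked container that look roughly like this (addresses and values are hypothetical):

$ env | grep APPDB
APPDB_PORT=tcp://172.17.0.5:3306
APPDB_PORT_3306_TCP=tcp://172.17.0.5:3306
APPDB_PORT_3306_TCP_ADDR=172.17.0.5
APPDB_PORT_3306_TCP_PORT=3306
APPDB_PORT_3306_TCP_PROTO=tcp
APPDB_ENV_MYSQL_ROOT_PASSWORD=<password>
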
Rails Container

For the Ruby on Rails container, we can now deploy it while adhering to that dynamic database information, like this. Notice how we "link" this container to the "appname_db" container we ran above.

docker run -d --link appname_db:appdb -p 80:3000 --name appname wallnerryan/appname

We also map port 80 to 3000 because we're running our Ruby on Rails application with rails server on port 3000.

Wait, let's backtrack and see what we did to "containerize" our Ruby on Rails app. I showed the database and configuration first so that, once you see how it's deployed, it all connects; now let's focus on what's actually running in the Rails container, knowing it will connect to our database.

The first thing we needed to do to containerize the Rails app was to create an image based on the implementation we had developed for Ruby 1.8.7 and Rails 3.2.8. These are fairly old versions of the two, and we had deployed on Ubuntu on EC2, so instead of the official Ruby base image we use the ubuntu:12.04 base image because it is the path of least resistance, even though the former could reduce our total image size. (More about squashing our image size later in the post.)

Doing this, we can create a Dockerfile that looks like the following (to see the code, look here). We probably don't need all of these packages, but I haven't gotten around to reducing the list to the bare minimum by removing them one by one and seeing what breaks. (This is actually an easy and fun way to get your container just right, because with Docker things build and run so quickly.)

[Screenshot: Dockerfile for the Rails app image]
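
The original screenshot isn't reproduced here; based on the description below, a rough sketch of such a Dockerfile (package names and paths are illustrative, not the repo's exact contents) would be:

FROM ubuntu:12.04

# System packages needed to build Ruby 1.8.7 / Rails 3.2.8 and its gems
RUN apt-get -y update && apt-get -y install \
    ruby1.8 rubygems libmysqlclient-dev libxml2-dev libxslt1-dev \
    build-essential nodejs

# Copy the application into the image
COPY . /opt/appname
WORKDIR /opt/appname

# Make sure bundler is installed and install the app's gem dependencies
RUN gem install bundler && bundle install

# Precompile assets for production (db:create happens at runtime, not build time)
ENV RAILS_ENV production
RUN bundle exec rake assets:precompile

# The init script waits for the DB, creates it, and starts the server
ADD init.sh /opt/init.sh
RUN chmod +x /opt/init.sh
CMD ["/opt/init.sh"]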

As you can see, we use FROM ubuntu:12.04 to denote the base image, then install packages, COPY the app in, make sure bundler is installed, and install the dependencies with bundler. After this we set RAILS_ENV to "production" and rake the assets. (We cannot rake db:create because the database does not exist at docker build time; more on this under runtime dependencies.) Then we add our init script to the container, chmod it, and set it as the CMD used when the container is started via docker run. (If some of this didn't make sense, please take the time to run over to the Docker docs and play with Docker a little bit.)

Great, now we have a rails application container, but a little more detail on the init script and runtime dependencies before we run this. See the below section.

Runtime dependencies:

There are a few runtime dependencies we need to be aware of when running the app this way. The first is that we cannot run "rake db:create" until we know the database is actually running and can be connected to, so we do this inside the init script at runtime.

The other part is making sure "rake db:create" does not fire before the database is initialized and ready to use. We will use Docker Compose to deploy this app, and while Compose lets us express dependencies in the form of links, there is no real control over this if A) the containers aren't linked, and B) a time sequence is needed. In this case it takes the MySQL container about 10 seconds to initialize, so we put a "sleep 15" in our init script before firing off "rake db:create" and then running the server.

In the below script you can see how this is implemented.

[Screenshot: init script for the Rails container]
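
The screenshot isn't reproduced here, but the init script does something along these lines (a sketch; the exact commands in the original may differ slightly):

#!/bin/bash
# Give the linked MySQL container time to finish initializing before touching it.
# (A fixed sleep is crude; polling the port would be more robust.)
sleep 15

# Create and migrate the database now that MySQL is reachable
bundle exec rake db:create
bundle exec rake db:migrate

# Start the Rails server in the foreground so the container stays running
bundle exec rails server -p 3000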

Nothing special, but this ensures our app runs smoothly every time.

Running the application

We can run the app a few different ways, below we can see via Docker CLI and via Docker Compose.

Docker CLI
docker run -d --name appname_db -e MYSQL_ROOT_PASSWORD=root mysql
docker run -d --link appname_db:appdb -p 3000:3000 --name appname app_image
Docker Compose

With Compose we can create a docker-compose.yml like the following.

app:
 image: wallnerryan/app
 cpu_shares: 100
 mem_limit: 117964800
 ports:
 - "80:3000"
 links:
 - app_mysql:appdb

app_mysql:
 image: mysql
 cpu_shares: 100
 mem_limit: 117964800
 environment:
  MYSQL_ROOT_PASSWORD: XXXXXX

Then run "docker-compose up".

Running the app in ECS and moving DNS:

We can run this in Amazon ECS (Elastic Container Service) as well, using the same images and docker compose file we just created. If you’re unfamiliar with EC2 or the Container Service, check out the getting started guide.

First you will need the ECS CLI installed; the first command sets up the credentials and the ECS cluster name.

ecs-cli configure --region us-east-1 --access-key <XXXXX> --secret-key <XXXXXXXXX> --cluster ecs-cli-dev

Next, you will want to create the cluster. We only create a cluster of size 1 because we don't need multiple nodes for failover and we aren't running a load balancer for scale in this example. These are all very good ideas to implement for an actual microservices application in production, so that you don't need to update your domain to point to different ECS cluster instances when your application moves around.

ecs-cli up --keypair keypair-name --capability-iam --size 1 --instance-type t2.micro

After this, we can send our docker compose YAML file to ecs-cli to deploy our app.

ecs-cli compose --file docker-compose.yml up

To see the running app, run the following command.

ecs-cli ps

NOTE: When migrating this from EC2, make sure to update the DNS zone file of your domain name to point at the ECS cluster instance.

Finally, now that the application is running

Let's backtrack and squash the image size for the Rails app.

There are a number of ways to shrink the application image, such as using a different base image, removing unneeded libraries, running apt-get remove and autoclean, and several others. Some of these take more effort than others; for example, if we change the base image, we need to make sure our Dockerfile still installs the needed versions of the gems, and while we could use a Ruby base image, the ones I looked at don't go back to 1.8.7.

The method we use as a "quick squash" is to export and re-import the Docker image and re-tag it; this flattens the image's layers into a single layer.

docker export 7c7e6a6fff3b | docker import - wallnerryan/appname:imported

As you can see, this squashed our image down from 256MB to 184MB, not bad for something so simple. We could do more, but this image size is plenty small for my needs. Here is a good post from Brian DeHamer on other things to consider when optimizing image sizes. Below you can see a snapshot of the Docker image (taxmatters is the name of the company; I have been substituting it with "appname" in the examples above).

[Screenshot: docker images output before and after the squash]
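
The screenshot isn't reproduced here, but the docker images output looked roughly like this (image IDs and timestamps are hypothetical; the sizes come from the numbers above):

$ docker images
REPOSITORY            TAG        IMAGE ID       CREATED          VIRTUAL SIZE
wallnerryan/appname   imported   a1b2c3d4e5f6   2 minutes ago    184 MB
wallnerryan/appname   latest     9f8e7d6c5b4a   2 hours ago      256 MB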

Development workflow going forward:

After migrating to a Docker/ECS-based deployment, it is very easy to make changes, test in either an ECS development cluster or with local Docker Machine, and then deploy to the production cluster on ECS when everything checks out. We could also imagine automating code changes in a CI pipeline that kicks off Lambda-triggered deployments to development after initial smoke tests on a git push, but we'll leave that for "next steps" :).

Thanks for reading, cheers!

Ryan

A breakdown of layers and tools within the container and microservices ecosystem

I wrote a post not too long ago about creating a microservices architecture from scratch, as part of a series I am doing on modern microservices. Some colleagues and friends suggested I break a portion of that post out into its own so I can continue to update it as the ecosystem grows; this is an attempt to do so. The portion in question was the breakdown of layers and tools within a microservices architecture in my post here, which laid the initial pass at this. This post will try to fill those layers out and continue to add to them. There is just no way I can touch every single tool or define each one perfectly, so please take this as my opinion based on my experience in this ecosystem, and please comment with additions, corrections, and other feedback.

  • Applications / Frameworks / App Manifests
  • Scheduling / Scaling
  • Management Orchestration
  • Monitoring (including Health) / Logging / Auditing
  • Runtime Build/Creation (think build-packs and runtimes like rkt and docker)
  • Networking / Load Balancing
  • Service Discovery / Registration
  • Cluster Management / Distributed Systems State
  • Container OS’s
  • Data Services, Data Intelligence, and Storage Pools

To give you an idea of the tools and technologies that fall into these categories, here is the list again with some of the tools in the ecosystem added. *Keep in mind this is probably not an exhaustive list; if you see a missing layer or tool, please comment!

*Note: Some of these may seem to overlap. If I put Kubernetes under Orchestration, it could easily fit into Cluster Management or Scheduling because of its underlying technologies; the intent is to label each tool with its overall "feel" for how the ecosystem views it, but some tools may appear in more than one section. I will label these (overlap).

*Note: I will continue to add links as I continue to update the breakdown*

Again, if you see a missing layer or tool (which I’m sure I am) please comment!

Cheers.

Optimizing the Cloud: Nova/KVM

Sources: http://www.slideshare.net/openstackindia/openstack-nova-and-kvm-optimisation, http://www.linux-kvm.org/page/KSM, http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaat%2Fliaattunkickoff.htm

Compute nodes represent a potential bottleneck in an OpenStack cloud environment: because the compute nodes run the VMs and applications, the workloads fall on the I/O within the hypervisor. Everything from local filesystem I/O to RAM and CPU resources can affect the efficiency of your cloud.

One thing to consider when provisioning physical machines is what guest/VM flavors you are going to allow to be deployed on each machine. Flavors that eat up 2 VCPUs and 32G of RAM may not be well suited to a machine with only 64G of RAM and 8 CPU cores.

The notion of tenancy is also important to keep track of; tenant size and activity factor into how your environment's resources are used. Tenants consume images, snapshots, volumes, and disk space, so consider how many tenants will consume your cloud and adjust your resources accordingly. If you plan to take advantage of overprovisioning, think about thin provisioning and the potential performance hits.

Using KVM:

KVM is a well-supported hypervisor for Nova and has its own ways to increase performance. KVM isn't the only hypervisor to choose from (Hyper-V, Xen, and VMware are also supported in OpenStack), but KVM is a powerful competitor. Tuning your hypervisor is just as important as tuning your cloud resources and environment, so here are some things to consider (a few quick checks are sketched after the list):

  • I/O Scheduler: cfq vs deadline
  • Huge Pages
  • KSM (Kernel Same-page Merging)
    • A de-duplication feature, saves on memory usage. Helps scale vms/per hypervisor node (compute node)
  • Hyper-threading
  • Guest FS location (on hypervisor block devices)
  • Disable Zone Reclaim
  • Swappiness
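
A few quick ways to check or adjust some of these on a compute node (a sketch; the device name sda is illustrative):

# Show which I/O scheduler a block device is using (the active one is in brackets)
cat /sys/block/sda/queue/scheduler

# Check whether KSM is enabled (1 = running)
cat /sys/kernel/mm/ksm/run

# Check and lower swappiness so the host prefers dropping cache over swapping guests
sysctl vm.swappiness
sysctl -w vm.swappiness=10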

Personally, running Grizzly, we can see that KSM is working:

cat /sys/kernel/mm/ksm/pages_sharing

1099262

This is running the Ubuntu Linux 3.2.0-23 kernel and qemu-kvm 1.0, which should mean that running multiple VMs will have better performance.

Quality of Service using BigSwitch’s Floodlight Controller

I wanted to tackle something traditional networks can do, but using OpenFlow and SDN. I came to the conclusion that Floodlight, the open-source controller made by Big Switch, fit the bill. Before I dive into some of the progress I've made in this area, I want to make sure the audience is aware of a few outstanding issues regarding OpenFlow and QoS.

QoS References:

  • OpenFlow (1.0) supports setting the network type of service bits and enqueuing packets. This does not however mean that every switch will support these actions.
  • Queuing Methods:
    Some OpenFlow implementations do NOT support queuing structures attached to specific ports; in turn, the "enqueue:port:queue" action in OpenFlow 1.0 is optional and can therefore fail on some switches.

So, now that some of the background is out of the way: my ultimate goal was to be able to change the PHBs of flows within the network. I chose to use an OpenStack-like example, assuming that QoS will be applied to a "fabric" of OVS switches that support queuing. The example below shows how Floodlight can be used to push basic QoS state into the network.

  • OVS 1.4.3, using ovs-vsctl to set up the queues.

Parts of the application:

QoS Module:

  • Allows the QoS service and policies to be managed on the controller and applied to the network

QoSPusher & QoSPath

        

  • Python application used to manage QoS from the command line
  • QoSPath is a Python application that utilizes circuitpusher.py to push QoS state along a specific circuit in the network.

Example

Network

Mininet Topo Used
sudo mn --topo linear,4 --switch ovsk --controller=remote,ip= --ipbase=10.0.0.0/8

Enable QoS on the controller:

Visit the tools section and click on Quality of Service.

Validate that QoS has been enabled.

From the topology above, we want to rate-limit traffic from host 10.0.0.1 to only 2Mbps. The links suggest we need to place two flows, one in switch 00:00:00:00:00:00:00:01 and another in 00:00:00:00:00:00:00:02, that enqueue packets matching host 1 to the rate-limited queue.

./qospusher.py add policy ' {"name": "Enqueue 2:2 s1", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:01","queue":"2","enqueue-port":"2"}' 127.0.0.1
QoSHTTPHelper
Trying to connect to 127.0.0.1...
Trying server...
Connected to: 127.0.0.1:8080
Connection Succesful
Trying to add policy {"name": "Enqueue 2:2 s1", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:01","queue":"2","enqueue-port":"2"}
[CONTROLLER]: {"status" : "Trying to Policy: Enqueue 2:2 s1"}
Writing policy to qos.state.json
{
"services": [],
"policies": [
" {\"name\": \"Enqueue 2:2 s1\", \"protocol\":\"6\",\"eth-type\": \"0x0800\", \"ingress-port\": \"1\",\"ip-src\":\"10.0.0.1\", \"sw\": \"00:00:00:00:00:00:00:01\",\"queue\":\"2\",\"enqueue-port\":\"2\"}"
]
}
Closed connection successfully

./qospusher.py add policy ' {"name": "Enqueue 1:2 s2", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:02","queue":"2","enqueue-port":"1"}' 127.0.0.1
QoSHTTPHelper
Trying to connect to 127.0.0.1...
Trying server...
Connected to: 127.0.0.1:8080
Connection Succesful
Trying to add policy {"name": "Enqueue 1:2 s2", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:02","queue":"2","enqueue-port":"1"}
[CONTROLLER]: {"status" : "Trying to Policy: Enqueue 1:2 s2"}
Writing policy to qos.state.json
{
"services": [],
"policies": [
" {\"name\": \"Enqueue 2:2 s1\", \"protocol\":\"6\",\"eth-type\": \"0x0800\", \"ingress-port\": \"1\",\"ip-src\":\"10.0.0.1\", \"sw\": \"00:00:00:00:00:00:00:01\",\"queue\":\"2\",\"enqueue-port\":\"2\"}",
" {\"name\": \"Enqueue 1:2 s2\", \"protocol\":\"6\",\"eth-type\": \"0x0800\", \"ingress-port\": \"1\",\"ip-src\":\"10.0.0.1\", \"sw\": \"00:00:00:00:00:00:00:02\",\"queue\":\"2\",\"enqueue-port\":\"1\"}"
]
}
Closed connection successfully

Take a look in the browser to make sure the policies were accepted.

Verify the flows work, using iperf, from h1 -> h2.

iperf shows that the bandwidth is limited to ~2Mbps. See below for the counter iperf test to verify h2 -> h1.

Verify the opposite direction is unchanged (getting a ~30Mbps benchmark).

The setup of the queues on OVS was left out of this example, but the basic configuration is as follows (an ovs-vsctl sketch appears after the list):

  • Give 10Gb of bandwidth to the port (that's what it supports)
  • Add a QoS record with 3 queues on it
  • The 1st queue, q0, is the default; give it a max of 10Gb
  • The 2nd queue, q1, is rate-limited to 20Mbps
  • The 3rd queue, q2, is rate-limited to 2Mbps
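
A sketch of that setup with ovs-vsctl (the port name s1-eth1 and the linux-htb QoS type are assumptions; rates are in bits per second):

# Create a QoS record with three queues and attach it to the switch port
ovs-vsctl set port s1-eth1 qos=@newqos -- \
  --id=@newqos create qos type=linux-htb \
      other-config:max-rate=10000000000 \
      queues=0=@q0,1=@q1,2=@q2 -- \
  --id=@q0 create queue other-config:max-rate=10000000000 -- \
  --id=@q1 create queue other-config:max-rate=20000000 -- \
  --id=@q2 create queue other-config:max-rate=2000000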

I will be coming out with a video on this soon, as well as a community version once it is more fully fleshed out. Ultimately, QoS and OpenFlow are still in their infancy; this will mature as later specs are adopted by hardware and virtual switches. The improvement and adoption of OFConfig will also play a major role in this realm. But this serves as a simple implementation of how it may work, and integrating OFConfig would be an exciting feature.

R

The “godfather” SDN controller

 

With all the buzz about software-defined networking and network virtualization, I figured I'd put up a post explaining how the network is actually "virtualized" as a resource and what controls it (the SDN controller).

As you may know by now, Nicira, the network virtualization startup and maintainer of Open vSwitch, has been bought by VMware for $1.26B, which sets a precedent in the field that NV is here to stay. Other companies like Big Switch, NEC, HP, and IBM (and others I did not mention) are all joining the industry with their own SDN controllers. They will all essentially do most of the same core things, following the OpenFlow spec as it keeps evolving over time.

(learn more http://www.openflow.org/wp/learnmore/ , http://www.openflow.org/documents/openflow-spec-v1.0.0.pdf, http://www.openflow.org/documents/openflow-spec-v1.1.0.pdf)

Some of the great things network virtualization, SDN, and applications on top of a logical network infrastructure provide are isolation, innovation, vendor agnosticism, centralization, public/private cloud integration, and much more. I hope to discuss specific NV technologies, theories, and test cases.

 

Check out founder and CTO of Nicira Networks Martin Casado's site http://networkheresy.com/ for a good source on specific technologies surrounding this area.

Also stop by BigSwitch’s Floodlight Controller developer and informational site for more information http://floodlight.openflowhub.org/