
Source Code to Docker Image — Automated Build


What if you could release and deploy fully tested, bullet-proof software on a daily basis? Isn’t this a huge effort for hobby developers like me? Even for hobby developers, Docker and its ecosystem promise an easy way to automate the integration and deployment of software. Continuous Integration (CI) and Continuous Deployment (CD) are made easy.

This blog explores a tiny aspect of this automation framework: the containerization of an application and the automatic creation of a Docker image from source code, with a Rails web application as the target. For that,

  1. we will create and test a so-called Dockerfile that describes all automation steps, e.g. which base image to use, which files to copy, which commands to run and which ports to expose.
  2. in a second step, we will create a link (via a “webhook”) between the GitHub software repository and the Docker Hub image repository. This will allow us to automatically create a new Docker image in the background each time a code change is pushed to the software repository.

Possible next steps towards full CI/CD (continuous integration/continuous deployment) are discussed at the end of the blog.

Note: the procedure requires Internet downloads of >1 GB. If you are interested in a more lightweight “Hello World” example with (probably) <15 MB of downloads during the process, you might be interested in this example: https://github.com/oveits/docker-nginx-busybox.

Why Docker?

Here are the reasons why I have chosen to containerize my web application using Docker:

  • Similar to hypervisor-based virtualization technologies, container technologies help to create portable application images by bundling libraries with the application and by providing an environment that is independent of other applications.
  • Compared to hypervisor-based virtualization technologies, the resulting images are much more lightweight and thus much easier to handle and share. A layered container design helps to further reduce latencies and download volumes, since unchanged layers can be cached.
  • When comparing the performance of Rails on Windows and dockerized Rails on Windows, we experience a 50% performance gain. For details, see here.
  • When comparing Docker with other container technologies, we find that Docker is by far the most popular container technology as of today.

See also this blog for more details.

Manual Build from Dockerfile

Prerequisites

  1. Install Docker, if not already done. Using Vagrant, I have created this blog that shows how this can be done in less than 10 minutes, provided Vagrant and VirtualBox are installed. Here we need only a single host, so you can skip the etcd discovery part and the change of the num_instances variable.
    You may also want to test the new official way of installing Docker using the Docker Toolbox (at the time I started with Docker, the official way of installing Docker on Windows was based on boot2docker and had resulted in a nightmare…).
  2. Install git on the docker host, if it is not installed already (check with “git --version” on the docker host’s command line interface).

Fork and download the Rails App from Github

In the docker host, clone the application’s repository and change the working directory by issuing following commands:

git clone https://github.com/oveits/ProvisioningEngine
cd ProvisioningEngine

Dockerfile

Note that the Dockerfile is already part of the git repository you have cloned above, so this sub-chapter is for your information only.

The Dockerfile describes the automation steps during the creation of a Docker image. Among others, the Dockerfile specifies

  • the base image (i.e. the official “rails” image in this case),
  • the commands that are run within the image container during the build process,
  • the files to be copied from the docker host to the image container,
  • the TCP ports to be exposed to the outside world,
  • the default command that is run within a container (e.g. when the container is started with “docker run”).

In our case, the Dockerfile looks as follows:

FROM rails

# update the operating system:
RUN apt-get update

# if you need "vi" and "less" for easier troubleshooting later on:
RUN apt-get install -y vim; apt-get install -y less

# copy the ProvisioningEngine app to the container:
ADD . /ProvisioningEngine

# Define working directory:
WORKDIR /ProvisioningEngine

# Install the Rails Gems and prepare the database:
RUN bundle install; bundle exec rake db:migrate RAILS_ENV=development

# expose tcp port 80
EXPOSE 80

# default command: run the web server on port 80:
CMD ["rails", "server", "-p", "80"]
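Since the ADD instruction copies the whole working directory into the image, a .dockerignore file placed next to the Dockerfile helps to keep the build context and the resulting image small. A minimal sketch (the entries are assumptions; adapt them to your repository):

```
.git
log/
tmp/
*.sqlite3
```

Files matching these patterns are then excluded from the build context before the ADD step runs.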

Build Docker Image

Now we will build the image. We assume direct Internet access behind a NAT firewall, with no need to pass an HTTP proxy. In case an HTTP proxy is involved, note that both the docker host and the container image need to be prepared for it.

docker build --tag=provisioningengine_manual:latest .

You will see in the log that a new image layer is created at each step. This may take a while.

Test the Docker Image

On the docker host, stop any container that might be running on port 80 (check with “docker ps”, stop the container with the command “docker stop <container-id>”). Then perform:

ID=$(docker run -d -p 80:80 provisioningengine_manual); sleep 5; docker logs $ID
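The fixed “sleep 5” above is a guess; a small polling helper is more robust. The following is a generic sketch (the function name and retry count are my own choice, not part of the original command):

```shell
# wait_for CMD [ARGS...]: retry a command until it succeeds or give up.
# Returns 0 as soon as the command succeeds, 1 after the last attempt fails.
wait_for() {
  attempts=5
  while [ "$attempts" -gt 0 ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    attempts=$((attempts - 1))
    sleep 1
  done
  return 1
}
```

With this helper, `ID=$(docker run -d -p 80:80 provisioningengine_manual); wait_for curl -s localhost; docker logs $ID` avoids the arbitrary sleep.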

Verify that the Web Server is operational

Now test the web server by issuing the command

curl localhost

on the docker host command line. The output should look like

<html><body>You are being <a href="http://localhost/customers">redirected</a>.</body></html>

Troubleshooting Steps

The logs can be retrieved at any time by repeating the “docker logs” command. In case there is no output because the container exits right away (i.e. “docker ps” returns a list without a running container based on the image provisioningengine_manual) and you want to find out why, you can start an interactive session:

docker run -it -p 80:80 provisioningengine_manual

In case you want to get access to the Linux shell in the container (from there, the Rails server can be started manually by issuing “rails server -p 80”), start a container with

docker run -it -p 80:80 provisioningengine_manual /bin/bash
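The troubleshooting commands above can be wrapped into tiny helpers; `docker ps -lq` prints the ID of the most recently created container, so the logs of a container that exited right away are still reachable. A sketch (the function names are my own, not standard docker commands):

```shell
# last_logs: show the logs of the most recently created container,
# whether it is still running or has already exited.
last_logs() {
  docker logs "$(docker ps -lq)"
}

# shell_into IMAGE: start a throw-away container (removed on exit)
# with an interactive shell for debugging.
shell_into() {
  docker run -it --rm -p 80:80 "$1" /bin/bash
}
```

For example, `shell_into provisioningengine_manual` drops you into a bash prompt inside a fresh container.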

Automatically Triggered Build

Prerequisites

  1. Install Docker, if not already done (see the prerequisites of the manual build above).
  2. Sign up for Github and Docker Hub, if not already done.

Steps for linking Github and Docker Hub

  1. Fork the git repository https://github.com/oveits/ProvisioningEngine (button on the upper right). This is needed, since you will trigger a new Docker image build process by committing a software change to your repository.
  2. On your GitHub repository project home (in my case this was https://github.com/oveits/ProvisioningEngine) go to
    -> Settings (on the right) -> Webhooks & Services
    -> Add Service
    -> choose Docker
    -> enter your Docker Hub password and confirm
  3. On the Docker Hub home page, left of your username
    -> Create
    -> choose “Create Automated Build” from the drop-down list
    -> choose your repository
    -> enter a short description (required)
    -> Create

Test the Link between Github and Docker Hub

Test the automatic build trigger by performing a change on the master branch. In my case, I created a new git repository branch, performed a set of changes (added a Dockerfile, among others) and merged the branch back into the master branch. However, for a simple test, you can also perform an online change and commit of the README.rdoc file on the master branch of your forked GitHub repository.

This should trigger a new Docker image build (this may take a while). Check the results on the Docker Hub Build Details tab of the newly created docker repository.

After some time, the Build Status should turn to “done”.


Test the Docker Image

Test the docker image by issuing the following commands (change “oveits” to your Docker Hub username; note that Docker Hub repository names are lowercase):

ID=$(docker run -d -p 80:80 oveits/provisioningengine); echo $ID

Check the container status and the logs with

docker ps
docker logs $ID

And connect to the server by issuing the command

curl localhost

on the docker host. The answer should look like

<html><body>You are being <a href="http://localhost/customers">redirected</a>.</body></html>

Done.

Connecting from an Internet Browser

Docker networking can be somewhat confusing. With the

-p 80:80

option of the docker run command above, we have instructed docker to map the container port 80 to the docker host port 80.

However, in order to reach this port, the IP address of the docker host must obviously be reachable from the Internet browser.

On the docker host, find the IP addresses by issuing an

ifconfig

command. In my case, this results in an output like (abbreviated):

core@core-01 ~/docker-nginx-busybox $ ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.42.1  netmask 255.255.0.0  broadcast 0.0.0.0              <--- not reachable from the outside!!
        ...

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255         <--- not reachable from the outside!!
        ...

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.8.101  netmask 255.255.255.0  broadcast 172.17.8.255    <--- host-only network -> only reachable from the Virtualbox host
        ...

In my case, the docker host is a VirtualBox CoreOS VM running on a Windows host, created using Vagrant. The addresses of docker0 (reachable only from docker containers running on the host) and eth0 (a NATed VirtualBox network) are not reachable from the Windows host. Therefore, within the Vagrantfile, I had made sure that the VM is equipped with an additional Ethernet interface eth1 (a host-only VirtualBox network) that is reachable from the Windows machine. This is the case here, as eth1 in the output above shows.

If the docker host is to be reachable from outside, additional care must be taken with respect to routing. Using a bridged network instead of a host-only network can help here, but you can also create a static NAT/PAT within VirtualBox that maps the Windows host’s address to the docker host port.
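As a sketch of the NAT/PAT variant: VirtualBox’s command line tool can forward a port of the Windows host to the VM. The VM name and the port numbers below are assumptions; replace them with the name that “VBoxManage list runningvms” shows on your machine:

```shell
# Forward port 8080 of the Windows host to port 80 of the running VM,
# assuming the VM is named "core-01" and uses a NAT adapter:
VM="core-01"
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage controlvm "$VM" natpf1 "docker-http,tcp,,8080,,80"
fi
```

After that, the browser on the Windows host can reach the containerized web server via http://localhost:8080.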

Summary

We have shown how to automate the Docker build process by adding an appropriate Dockerfile to the application, saving the application software on GitHub, and linking GitHub with the Docker Hub image repository. Now, any time a code change is pushed to GitHub, a new Docker image is built automatically.

Next Steps towards Continuous Integration

A continuous integration (CI) and continuous deployment (CD) process involves other steps like branching the software, pull requests, merging, automatic testing and so on.

Fig.: Continuous Integration steps found in this post.

As a starting point, you might want to check out part 2 and part 3 of this nice post by Michael Herman, which gives an introduction to CI and CD based on the CircleCI framework.

Other CI/CD frameworks are discussed in the answers of the following posted question:

What is the difference between Bamboo, CircleCI, CIsimple/Ship.io, Codeship, Jenkins/Hudson, Semaphoreapp, Shippable, Solano CI, TravisCI and Wercker?

I guess I will give TravisCI and/or CircleCI a try…


What is Docker? An Introduction into Container Technology.


–Originally posted on this LinkedIn post and now moved to WordPress–

Introduction

In this blog, I will give a short introduction to container technologies, and I will give reasons why I have chosen to start proofs of concept with Docker rather than another container technology.

For more blogs in this series on Docker, see here.

Why Containers?

Containers are the new way to run applications in the cloud. It is high time to gain hands-on experience with container technologies. However, before I plunge into the installation part, I will investigate the benefits of container technologies over other virtualization technologies and compare the different types of containers.

The VMware Hype: a History

I remember being very excited about the new virtualization trend in the early 2000s: with the free VMware Server 2.0, I had the chance for the first time to install and test our cool new IP telephony system for up to 100,000 telephones on my laptop. There was even the prospect of sharing defined test environments by sending around VMs containing all the network elements and VMs you need for testing.

Okay, those dreams of sharing whole test environments never came true, partly because VMs consume a tremendous amount of hardware resources, especially if you have several VMs encapsulated inside a VM. Still, hypervisor virtualization has proven to have tremendous operational advantages over hardware servers: in our lab as well as in our SaaS production environment, we could spin up a new host whenever needed, without waiting for our managers to approve the ordering of new hardware and without waiting for its delivery. Cool stuff.

Containers: are they Game Changers?

So, why do we need containers? Both containers and hypervisors offer a way to provide applications with an encapsulated, defined, and portable environment. However, because of the high resource cost and low performance on my laptop, I (as a hobby developer) do not develop my applications on a virtual machine. Even if I decided to do so, the environment on my laptop and the one in production would differ: I would need to convert the image from VMware Workstation format to ESXi format. Network settings need to be changed. Java code needs to be compiled. Many things can happen before my application reaches its final destination in the production cloud.

Containers promise to improve the situation: in theory, containers are much more lightweight, since you can get rid of the hypervisor layer, the virtual hardware layer and the guest OS layer.

Containers are lightweight and offer better performance.

This does not only promise good performance: I was able to show here that a containerized Rails web application runs 50% faster than the same web application running on Windows Rails.

And this even though the container is located on a VirtualBox Linux VM that in turn runs on Windows. The 50% performance gain holds true even if we work with shared folders, i.e. even if both the Rails code and the database are located outside the container on a mapped folder on the Linux VM. Quite surprising.

Note, however, that you need to be careful with the auto-share between the Linux VM and the Windows C:\Users folder described in the official Docker documentation, which will cause a performance drop by a factor of ~10 (!).

Container technology increasing the Portability?

Docker evangelists claim that Docker images work the same way in every environment, be it on the developer’s laptop or on a production server. Never again say: “It does not work in production? It works fine on my development PC, though”. If you start a container, you can be sure that all libraries are included in compatible versions.

Never again say: “It does not work in production? It works fine on my development PC, though!”

Okay, during the DDDocker series, you will see that this statement remains a vision that must be fulfilled by certain careful measures. One of the topics I stumbled over is network connectivity and HTTP proxies. Docker commands as well as cluster technologies like CoreOS depend by default on Internet connectivity to public services like Docker Hub and cluster discovery services. In the case of the discovery services, this is aggravated by the fact that the discovery service protocol does not yet support HTTP proxies.

Container images still might work in one environment, but not in another. It is still up to the image creator to reduce the dependency on external resources. This can be done by bundling the required resources with the application image.

Even with container technologies, it is still up to the image creator to offer a high degree of portability.

E.g. I have found in this blog post that CoreOS (a clustered Docker container platform) requires a discovery agent that can be reached without passing any HTTP proxy. As a solution to this cluster discovery problem behind an HTTP proxy, a cluster discovery agent could be included in the image. Or the cluster discovery agent can be automatically installed and configured by automation tools like Vagrant, Chef, Puppet or Ansible. In both cases, the docker (CoreOS) host cluster no longer needs to access the public cluster discovery service.

Containers are getting social

With the more lightweight images (compared to VMware/KVM/XEN & Co) and public repositories like Docker Hub, the easy exchange of containerized applications may be a factor that boosts the acceptance of containers in the developer community. Docker Hub still needs to be improved (e.g. the size of the images is not yet visible; the image layers and the status of the images are often intransparent to the user), but Docker Hub images can help developers get started more easily with new, unknown applications, since an image comes along with all the software and libraries needed.

Containers helping with Resiliency and horizontal Scalability

Containers help with resiliency and with horizontal scalability. E.g. CoreOS and Atomic offer options to easily build cluster solutions with automatic restart of containers on another cluster node if the original node fails. Moreover, emerging container orchestration systems like Kubernetes offer possibilities to horizontally scale the containers and their applications by making sure that the right number of containers of the same type is always up and running. Kubernetes also promises to unify the communication between containers of one type and the rest of the world, so that other applications talking to a scaled application do not need to care how many container instances they are talking to.
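As an illustration of the “right number of containers” idea, a Kubernetes replication controller keeps a specified number of pod replicas running at all times. A sketch (the image name is an assumption taken from my other blog posts; adapt it to your own):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3          # Kubernetes restarts pods until 3 replicas are running
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: oveits/provisioningengine
        ports:
        - containerPort: 80
```

If a node dies, the controller notices the missing replicas and schedules replacements on the remaining nodes.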

Why Docker?

Before plunging into container technology, I want to make sure that I spend my free time on the container technology that has the highest potential of becoming the de facto standard. Digging into Google, I found articles about “Docker vs. Rocket vs. Vagrant” and the like. Some examples are here and here.

Vagrant is often mentioned together with Docker, but it is not a container technology; it is a virtual machine configuration/provisioning tool.

I also find “Docker vs. LXC” pages on Google. What is LXC? According to Stack Overflow, LXC refers to capabilities of the Linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another. Flockport points out that LXC offers something more like a lightweight virtual machine, while Docker is designed for single applications. Docker will lose all “uncommitted” data after a reboot of the container, while LXC will keep the data, similar to virtual machines. See here for more details.

The CoreOS project, a lightweight, clustered docker host operating system, has developed a competing container technology called Rocket (pronounced “rock-it”). When reading the articles, I still do not know who has won the race: Docker or Rocket. However, the post here states:

Yet when you see both Amazon and Google announcing Docker container services within weeks of each other, it’s easy to assume that Docker has won.

When looking at Google Trends, the assumption that Docker has won the race seems to be confirmed:

Let us assume that Docker is a good first choice. It is also supposed to be more mature than Rocket, since it has been available longer. Anyway, Rocket does not seem to be available for Windows (see its GitHub page).*

* Okay, later I found out that Docker is supported on Windows only by installing a Linux host VM, which in turn hosts the containers. A docker client talks to a docker agent on the Linux host, so it merely looks as if docker were supported natively on Windows. Still, I think Docker is a good choice, considering its popularity.

Summary

This blog has discussed the advantages of container technologies like Docker and Rocket compared to traditional virtualization techniques like VMware.

  • Docker images are more lightweight and, in a first Ruby on Rails test, have shown a 50% higher performance of a web service compared to running the same service on native Windows.
  • Container images have the potential to improve portability. However, external dependencies limit portability.
  • Containers help with resiliency, scalability and ease of operations, e.g. by making sure that a container image gets restarted on another host if the original host dies. Moreover, container orchestration systems like Kubernetes by Google make sure that the administrator-specified number of containers is up and running at all times, and that this set of containers appears to the outside world as if it were a single application.
  • Google Trends shows that Docker is much more popular than other container technologies like Rocket and LXC.

Those are all reasons why I have started to dig deeper into container technology, and why I have chosen Docker to do so. An overview of those efforts can be found here.


Getting Started with Ansible


This is part 1 of a little “Hello World” example using Ansible, an IT automation tool.

The post has following content:

  • Popularity of Ansible, Salt, Chef and Puppet
  • Installation based on Docker
  • Playbooks (i.e. tasks) and Inventories (i.e. groups of targets)
  • Remote shell script execution

As a convenient and quick way of installing Ansible on Windows (or Mac), we choose a Docker Ansible image on a Vagrant Ubuntu Docker host that has won a Java performance competition against CoreOS and boot2docker (see this blog post).

Posts of this series:

  • Part 1: Ansible Hello World (this post) with a comparison of Ansible vs. Salt vs. Chef vs. Puppet and a hello world example with focus on Playbooks (i.e. tasks), Inventories (i.e. groups of targets) and remote shell script execution.
  • Part 2: Ansible Hello World reloaded with focus on templating: create and upload files based on jinja2 templates.
  • Part 3: Salt Hello World example: same content as part I, but with Salt instead of Ansible
  • Part 4: Ansible Tower Hello World: investigates Ansible Tower, a professional Web Portal for Ansible

 


Versions

2015-11-09: initial release
2016-06-08: I have added command line prompts basesystem#, dockerhost# and container#, so the reader can see more easily, on which layer the command is issued.
2017-01-06: Added linked Table of Contents


Why Ansible?

For the “hello world” tests, I have chosen Ansible. Ansible is a relatively new member of a larger family of IT automation tools. This InfoWorld article from 2013 compares four popular members, namely Puppet, Chef, Ansible and Salt. Here and here you find more recent comparisons, if not as comprehensive as the InfoWorld article.

In order to explore the popularity of the software, let us look at a Google Trends analysis of those four tools (puppet/ansible/chef/salt + “automation”; status November 2015; for a discussion of the somewhat more confusing recent results, please consult Appendix F below):

(Google Trends chart for puppet/ansible/chef/salt + “automation”)

Okay, in the Google Trends analysis we can see that Ansible is relatively new and that it does not seem to replace Puppet, Chef or Salt. However, Ansible offers a fully maintained RESTful API on the Ansible web application called Ansible Tower (which comes at a cost, though). Moreover, I have seen another article stating that Ansible is very popular among docker developers. Since I have learned to love docker (it was not love at first sight), let us dig into Ansible, even though Puppet and Chef seem to be more popular in Google searches.

For a discussion of REST and Web UI capabilities of the four tools, see Appendix D.

Ansible “Hello World” Example – the Docker Way

We plan to install Ansible, prepare Linux and Windows targets and perform simple tests as follows:

  1. Install a Docker Host
  2. Create an Ansible Docker Image
    • Download an Ansible Onbuild Image from Docker Hub
    • Start and configure the Ansible Container
    • Locally test the Installation
  3. Remote Access to a Linux System via SSH
    • Create a key pair, prepare the Linux target, access the Linux target
    • Note: we will use the Docker Host as Linux Target
  4. Working with Playbooks
    • Ansible “ping” to single system specified on command line
    • Run a shell script on single system specified on command line
  5.  Working with Inventory Files
    • Ansible “ping” to inventory items
    • Run a shell script on inventory items

1. Install a Docker Host

Are you new to Docker? Then you might want to read this blog post.

Installing Docker on Windows and Mac can be a real challenge, but no worries: we will show an easy way here that is much quicker than the one described in Docker’s official documentation:

Prerequisites:
  • I recommend having direct access to the Internet: via firewall, but without HTTP proxy. However, if you cannot get rid of your HTTP proxy, read Appendix B.
  • Administration rights on your computer.
Steps to install a Docker Host VirtualBox VM:

1. Download and install VirtualBox (if the installation fails with error message “Setup Wizard ended prematurely”, see Appendix A: Virtualbox Installation Workaround below)

2. Download and install Vagrant (requires a reboot)

3. Download a Vagrant box containing an Ubuntu-based Docker host and create a VirtualBox VM as follows (assuming a Linux-like system or bash on Windows):

(basesystem)$ mkdir ubuntu-trusty64-docker ; cd ubuntu-trusty64-docker
(basesystem)$ vagrant init williamyeh/ubuntu-trusty64-docker
(basesystem)$ vagrant up
(basesystem)$ vagrant ssh
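The later sections of this post reach the docker host under 192.168.33.10. If your box does not already define such a host-only address, it can be added in the Vagrantfile before `vagrant up`. A sketch (the IP address matches the one used in the examples of this post; the box name is the one downloaded above):

```ruby
# -*- mode: ruby -*-
Vagrant.configure("2") do |config|
  config.vm.box = "williamyeh/ubuntu-trusty64-docker"
  # host-only network, reachable from the base system:
  config.vm.network "private_network", ip: "192.168.33.10"
end
```

After changing the Vagrantfile, run `vagrant reload` to apply the new network configuration.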

Now you are logged into the Docker host and we are ready for the next step: creating the Ansible Docker image.

Note: I have experienced problems with the vi editor when running vagrant ssh in a Windows terminal. In case of Windows, consider following Appendix C and using putty instead.

2. Create an Ansible Docker Image

1. Download an Ansible Onbuild Image from Docker Hub

In order to check that the docker host has Internet access, we issue the following command:

dockerhost# docker search ansible

This command will lead to an error if you work behind an HTTP proxy, since we have not (yet) configured the docker host for usage behind an HTTP proxy. I recommend getting direct Internet access without an HTTP proxy for now. However, if you cannot get rid of your HTTP proxy, read Appendix B.

Now we download the ansible image:

dockerhost# docker pull williamyeh/ansible:ubuntu14.04-onbuild

2. Start and configure the Ansible Container

dockerhost# docker run -it williamyeh/ansible:ubuntu14.04-onbuild /bin/bash

The -it (interactive terminal) flag starts an interactive session in the container. Now you are logged into the Docker container and can prepare the Ansible configuration files, namely the inventory file (hosts file) and the playbook.yml:

3. Locally test the Installation

container# ansible all -i 'localhost,' -u vagrant -c local -m ping

Note that -i expects a comma-separated list of target hosts and needs to end with a comma if only a single host is specified. With -u, we define the remote user, and -c local tells Ansible to run against the local machine without SSH. The response should look as follows:

localhost | success >> {
 "changed": false,
 "ping": "pong"
}

3. Remote Access to a Linux System via SSH

Now let us test the remote access to a Linux system.

We could perform our tests with any target system with a running SSH service and Python >2.0 installed on /usr/bin/python (see the FAQs). However, the Ubuntu Docker host is up and running already, so why not use it as the target system? In this case, the tested architecture looks as follows:

(architecture diagram: the Ansible container runs on the Docker host and accesses the Docker host itself via SSH)

Ansible is agentless, but we still need to prepare the target system: Ansible’s default remote access method is SSH with public key authentication. The best way is to create an RSA key pair on the Ansible machine (if not already available) and to add the corresponding public key as an “authorized key” on the target system.

1. Create an RSA key pair on the Ansible container:
container# ssh-keygen -t rsa

and go through the list of questions. For a proof of concept, and if you are not concerned about security, you can just hit <enter> several times. Here is a log from my case:

root@930360e7db68:/etc/ssh# ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
fe:eb:bf:de:30:d6:5a:8c:1b:4a:c0:dd:cb:5f:3b:80 root@930360e7db68
...

On the target machine (i.e. the ubuntu-trusty64-docker system), create a file named /tmp/ansible_id_rsa.pub and copy the content of the Ansible container’s ~/.ssh/id_rsa.pub file into that file. Then:

dockerhost# cat /tmp/ansible_id_rsa.pub >> ~/.ssh/authorized_keys
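The append step can also be scripted. The helper below adds the public key only if an identical line is not already present, so running it twice does not duplicate entries (a sketch; the function name and file paths are my own assumptions):

```shell
# append_key PUBKEY_FILE AUTH_KEYS_FILE:
# append the public key unless an identical line is already there.
append_key() {
  mkdir -p "$(dirname "$2")"
  grep -qxF "$(cat "$1")" "$2" 2>/dev/null || cat "$1" >> "$2"
}
```

E.g. `append_key /tmp/ansible_id_rsa.pub ~/.ssh/authorized_keys` is then safe to repeat.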
2. Test remote access via SSH:

Now, we should be able to access the target system from the ansible container using this key. To test, try:

container# ssh vagrant@192.168.33.10 -C "echo hello world"

This should echo “hello world” to your screen. Here, 192.168.33.10 is a reachable IP address of the docker host (issue ifconfig on the docker host to check the IP address in your case). For troubleshooting, you can call ssh with the -vvv option.

3. Remote access via Ansible:

Check that a Python version > 2.0 (better: > 2.4) is installed on your target machine. In the case of the ubuntu-trusty64-docker image, this is a pre-installed package and we get:

dockerhost# python --version
Python 2.7.6

Now a remote Ansible connection should also be possible from the container:

container# ansible all -i '192.168.33.10,' -u vagrant -m ping

which results in following output:

192.168.33.10 | success >> {
 "changed": false,
 "ping": "pong"
}

This was your first successful Ansible connection via SSH. Now let us also perform a change on the remote target. For that, we run a remote shell command:

container# ansible all -i '192.168.33.10,' -u vagrant -m shell -a "echo hello world\! > hello_world"

This time the module is “shell” and the module’s argument is an echo hello world command. We should get the feedback:

192.168.33.10 | success | rc=0 >>

On the target, we can check the result with:

dockerhost# cat hello_world
hello world!

4. Working with Playbooks

This was your first remote shell action via Ansible. Now we want to have a look at playbooks, which are the Ansible way to document and automate tasks.

1. Create a playbook

On the ansible container terminal, we create a playbook.yml file:

container# vi playbook.yml

and we add and save the following content:

---
# This playbook uses the ping module to test connectivity to Linux hosts
- name: Ping
  hosts: 192.168.33.10
  remote_user: vagrant 

  tasks: 
  - name: ping 
    ping: 
  - name: echo 
    shell: echo Hello World! > hello_world

Note: If you have problems with the formatting of characters in the terminal (I have experienced problems in a Windows terminal), then I recommend using a putty terminal instead of vagrant ssh. For that, see Appendix C.

Note also that the number of white spaces is relevant in a yml file. However, the ‘!’ does not need to be escaped in the playbook (it was necessary on the command line, though). Now, we perform the following command on the Ansible container:
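Playbooks can also be parameterized with variables. As a sketch (the vars section and the greeting variable name are made up for illustration, not part of the original example), the same playbook could read:

```yaml
---
# Variant of the playbook above; the 'greeting' variable is a made-up name.
- name: Ping
  hosts: 192.168.33.10
  remote_user: vagrant
  vars:
    greeting: Hello World!

  tasks:
  - name: ping
    ping:
  - name: echo
    shell: echo {{ greeting }} > hello_world
```

Running ansible-playbook against this file should produce the same result as before, with the echoed text taken from the variable.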

container# ansible-playbook -i '192.168.33.10,' playbook.yml

This time, the -u flag is not needed (it is ignored, if specified), since we have specified the user in the playbook. We get the following feedback:

PLAY [Ping] *******************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [ping] ******************************************************************
ok: [192.168.33.10]

TASK: [echo] ******************************************************************
changed: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=3 changed=1 unreachable=0 failed=0

We check the result on the target machine again:

dockerhost# cat hello_world
Hello World!

We see that the file hello_world was overwritten (“Hello World!” with capital letters instead of “hello world!”).

5. Working with Inventory Files

Instead of specifying individual hosts in the playbook.yml, Ansible offers a more elegant way: working with groups of machines, which are defined in the inventory file.

More information about inventory files can be found in the official documentation. However, note that this page describes new 2.0 features that do not work with the Docker image (currently ansible 1.9.4). See Appendix E for more details on the problem and for information on how to upgrade to the latest development build. The features tested in this blog post work on ansible 1.9.4, though, so you do not need to upgrade now.

1. Now we do the same as before, but we use an inventory file to define the target IP addresses:

container# vi /etc/ansible/hosts

and add the following lines:

[vagranthosts]
192.168.33.10
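For reference, an inventory file may also define several groups and combine them with the [groupname:children] syntax. A sketch (the webservers group and its addresses are made up for illustration):

```ini
[vagranthosts]
192.168.33.10

# a made-up second group
[webservers]
192.168.33.11
192.168.33.12

# a parent group containing both groups above
[lab:children]
vagranthosts
webservers
```

A playbook could then target hosts: lab to address all machines in both groups.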

In the playbook.yml, we replace 192.168.33.10 by a group name, e.g. vagranthosts:

---
# This playbook uses the ping module to test connectivity to Linux hosts
- name: Ping
  hosts: vagranthosts
  remote_user: vagrant

  tasks:
  - name: ping
    ping:
  - name: echo
    shell: echo HELLO WORLD! > hello_world

In order to see the difference, we have also changed the hello world text to all capital letters.

Now we perform:

container# ansible-playbook -i /etc/ansible/hosts playbook.yml

Here, we have replaced the list of hosts by a reference to the inventory file.

The output looks like follows:

PLAY [Ping] *******************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [ping] ******************************************************************
ok: [192.168.33.10]

TASK: [echo] ******************************************************************
changed: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=3 changed=1 unreachable=0 failed=0

We check the result on the target machine again:

dockerhost# cat hello_world
HELLO WORLD!

We see that the file hello_world was overwritten again (all capital letters). Success!

Summary

We have shown how you can download and run Ansible as a docker container on any machine (Windows in my case). We have prepared the Ansible container and the target for SSH connections and have shown how to perform connectivity tests and run shell commands on the remote system. In addition, we have introduced playbooks as a means to document and run several tasks with one command. Moreover, inventory files were introduced in order to manage groups of target machines.

Next steps

The following topics are covered in the next parts of this series:

  • Part II: Ansible Hello World reloaded will show
    • how to upload files with Ansible
    • how to create dynamic file content with Ansible using jinja2 templates
    • bind it all together by showing a common use case with dynamic shell scripts and data files
  • Part III: Salt Hello World example: same content as part I, but with Salt instead of Ansible
  • Part IV: Ansible Tower Hello World: investigates Ansible Tower, a professional Web Portal for Ansible

Open: Windows support of Ansible

Appendix A: Virtualbox Installation (Problems)

  • Download the installer. Easy.
  • When I start the installer, everything seems to be on track until I see “rolling back action” and I finally get this:
    “Oracle VM Virtualbox x.x.x Setup Wizard ended prematurely”

Resolution of the “Setup Wizard ended prematurely” Problem

Let us try to resolve the problem: the installer of Virtualbox downloaded from Oracle shows the exact same error: “…ended prematurely”. This is not a docker bug. Playing with conversion tools from Virtualbox to VMware did not lead to the desired results.

The Solution: Google is your friend; the winner is: https://forums.virtualbox.org/viewtopic.php?f=6&t=61785. After backing up the registry and changing the registry entry

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Network -> MaxFilters from 8 to 20 (decimal)

and a reboot of the Laptop, the installation of Virtualbox is successful.

Note: while this workaround has worked on my Windows 7 notebook, it has not worked on my new Windows 10 machine. However, I have managed to install VirtualBox on Windows 10 by de-selecting the USB support module during the VirtualBox installation process. I remember having seen a forum post pointing to that workaround, with the additional information that the USB drivers were installed automatically at the first time a USB device was added to a host (not yet tested on my side).

Appendix B: HTTP Proxy Configuration

If you need to work behind an HTTP proxy, there are several levels that need to know about it:

  • the physical host, for both your browser and your terminal session (http_proxy and https_proxy variables), so that vagrant init commands and the download of the vagrant boxes succeed.
  • the docker host (if it differs from the physical host), both in the docker configuration files and in the bash session. Note that the configuration files differ between CoreOS, boot2docker and Ubuntu.
  • the docker container for the terminal session; needed for apt-get update+install.

Ubuntu Docker:

sudo vi /etc/default/docker

add proxy, if needed like follows (adapt the names and ports, so it fits to your environment):

export http_proxy='http://proxy.example.com:8080'
export https_proxy='http://proxy.example.com:8080'

then:

sudo restart docker

CoreOS:

sudo mkdir /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf

and add something like (adapt the names and ports, so it fits to your environment):

[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"

Then:

sudo reboot

or try:

sudo systemctl restart docker

Appendix C: How to use putty for accessing Vagrant Boxes

What is to be done:

  1. locate and convert the vagrant private key to ppk format using puTTYgen.
    1. locate the vagrant private key of the box. In my case, this is C:\Users\vo062111\.vagrant.d\boxes\williamyeh-VAGRANTSLASH-ubuntu-trusty64-docker\1.8.1\virtualbox\vagrant_private_key
    2. start puTTYgen -> Conversions -> import -> select the above path.
    3. press “Save private key” and save vagrant_private_key as vagrant_private_key.ppk
  2. In putty,
    1. create a connection to vagrant@127.0.0.1 port 2201 or port 2222 (the port vagrant uses is shown in the terminal during “vagrant up”)
    2. specify the ppk key under Connection->SSH->Auth->(on the right) Private key file for authentication
    3. Click on Session on the left menu and press Save
    4. Press Open and accept the RSA fingerprint -> you should be able to log in without password prompt. If there is still a password prompt, there is something wrong with the private key.

Appendix D: REST APIs of Ansible, Puppet, Chef and Salt

Here, I have done some quick research on the RESTful and Web UI interfaces of Ansible, Puppet, Chef and Salt. This information is not included in the feature comparison table on Wikipedia.

Appendix E: Install the latest Ansible Development Version

The Ansible version in the docker image has the problem that it is version 1.9.4 (currently), while the Ansible documentation describes the latest v2.0 features. E.g. in version 1.9.4, the inventory file variables described in the documentation are ignored (see e.g. the example “jumper ansible_port=5555 ansible_host=192.168.1.50”), which leads to a “Could not resolve hostname” error; see also this stackoverflow post.
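Before upgrading, note that ansible 1.9.4 still understands the older ansible_ssh_* variable names; to my knowledge, the following two inventory lines are equivalent, the first one only working from v2.0 on:

```ini
# Ansible >= 2.0 syntax (as quoted in the documentation example):
jumper ansible_port=5555 ansible_host=192.168.1.50

# Pre-2.0 equivalent that ansible 1.9.4 accepts:
jumper ansible_ssh_port=5555 ansible_ssh_host=192.168.1.50
```

So if you only need host and port variables, rewriting the inventory in the older syntax may spare you the upgrade.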

Here, we will show, how to install the latest Ansible version in the container. For that, run the container:

docker run -it williamyeh/ansible:ubuntu14.04-onbuild /bin/bash

ansible --version

will result in an output similar to:

ansible 1.9.4

If you get a version >= 2.0, you might not need to upgrade at all. In all other cases, perform the following steps:

If you are behind a proxy, perform something like:

export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
apt-get update; apt-get install git
git clone git://github.com/ansible/ansible.git --recursive
cd ./ansible
source ./hacking/env-setup
ansible --version

should give you some output like:

ansible 2.0.0 (devel 9b9fb51d9d) last updated ...

Now also the v2.0 features should be available. If you want to update the version in the future, you will need to perform the git command

git pull

in the ./ansible directory.

(chapter added on 2016-04-11)

In order to explore the popularity of the software, we have looked at a google trend analysis of those four tools (puppet/ansible/chef/salt + “automation”) (status November 2015):

[Figure: google trends comparison of puppet/ansible/chef/salt + “automation”, November 2015]

Note that the google trends result looks quite different 5 months later (2016-04-11). Google seems to have changed their source data and/or algorithm. With the same search terms as we had used in Nov. 2015 (puppet/ansible/chef/salt + “automation”; note that the link only works if you are logged into your google account), in April 2016 we got the following non-convincing graph:

[Figure: google trends comparison with the same search terms, April 2016]

Especially the analysis of Salt’s and Chef’s popularity for 2011 and before does not look very convincing.

If we search for “Software” instead via this google trends link (which only works if you are logged into your google account), we get something like the following:

[Figure: google trends comparison when searching for the software topics, April 2016]

Also this data does not look reliable: according to Wikipedia’s Vagrant page, Vagrant’s initial version was released in March 2010. Why do we see so many search hits before that time? That is not plausible. The same holds for Puppet, which started in 2005 but shows many hits in 2004.

To be honest, google trends analysis used to (at least) look reliable in November 2015, but it does not look reliable anymore. What a pity: I used to work a lot with google trends in the past for finding out which technology is trending, but looking at the more recent results, I have lost the confidence that I can rely on the data. If you know an alternative to google trends, please add a comment to this blog post.

In any case; for the time after 2013, it looks like the popularity of Ansible is rising quickly (if we believe it).

3

Docker Performance Tests for Ruby on Rails


Abstract

In a series of performance tests, we could show that a (Rails) web application has 50% higher performance on docker (installed on Windows/boot2docker) than the same application run natively on Windows, no matter whether or not the database and program code are located on a shared volume. However, if such a shared volume is auto-mounted to one of the Windows C:\Users folders, the performance drops by a factor of almost ten.

Preface

This is the 5th blog of my Dummy’s Diary on Docker (= DDDocker), which originally was hosted on LinkedIn. See the DDDocker table of contents to find more.

Introduction

What is the performance of a dockerized application compared to the same application running natively on Windows or Linux, or on a Windows or Linux VM? This is what I asked myself when starting with Docker. However, my Internet search on that subject did not give me the desired results. I cannot believe that there are no such articles on the Internet, so if you happen to find some, please point me to them. In any case, I have decided to perform some quick&dirty tests myself. I call them “quick&dirty” because I did not care to separate the measurement tool from the server: everything is performed on my Windows laptop. Also, I did not care to document the exact test setup (Rails version, laptop data sheet, etc.). For now, I have deemed it sufficient to get an idea of the trend rather than the absolute numbers.

I have received some unexpected results…

Test Scenarios

In DDDocker (4): Persistence via shared Volumes (sorry, this is a LinkedIn blog, which has a limited lifetime; tell me, if you need access) we have experienced a major decrease of performance of a web server, when the server’s code and data base is located on a shared volume on a Windows host.

In the current blog, we perform performance measurements of a Rails web server for following scenarios:

  1. web server run natively on Windows
  2. web server run within a docker container with all source code and database in the container. The docker container runs on a VirtualBox VM with 2 GB DRAM (boot2docker image).
  3. web server run within a docker container with all source code and database in a shared volume. The docker container runs on a VirtualBox VM with 2 GB DRAM (boot2docker VM). The shared volume is located on the Linux VM (docker host, i.e. the boot2docker VM).
  4. web server run within a docker container with all source code and database in a shared volume. The docker container runs on a VirtualBox VM with 2 GB DRAM (boot2docker image). The shared volume is located on the C: partition on Windows and is mapped to the VM and from there into the docker container.

The quick and dirty tests are performed using Apache Bench tool “ab.exe” on my Windows Laptop, which is also hosting the Web Server and/or the boot2docker host. This is not a scientific setup, but we still will get meaningful results, since we are interested in relative performance numbers only.

1. Windows Scenario

In this scenario the rails application runs natively on Windows. All Ruby on Rails code as well as the local database reside on a local Windows folder.

2. Docker on Linux VM Scenario

In this scenario, Docker is installed on Windows like specified on the Docker documentation. A Linux VM is running on Windows and hosts the container. All Ruby on Rails code as well as the local database reside on a folder within the container.

3. Docker on Linux VM Scenario with shared Volume

The difference between this scenario and scenario 2 is that all Ruby on Rails code as well as the local database reside on a folder in the Linux VM and are mounted as a so-called shared volume into the container.

4. Docker on Linux VM Scenario with shared Volume mounted on Windows

The difference between this scenario and scenario 3 is that all Ruby on Rails code as well as the local database reside on a Windows folder in C:\Users\. The rails executable within the container accesses a local folder which maps to the Linux host’s folder via the shared volume feature, which in turn maps to the auto-mounted Windows folder C:\Users\<username>\<foldername>. Because of the extra hops, the lowest performance is expected here. We will see below that the performance impact for this scenario is dramatic.

Test Environment / Equipment

All tests will be run on a Windows 7 Dell i5 notebook with 8 GB of RAM. During the native Windows test, the docker VM will run in parallel, so the CPU and RAM resources available for the app are similar in all 4 tests.

As the performance test tool, Apache Bench (ab.exe) is used, which ships with XAMPP installer (xampp-win32-5.6.8-0-VC11-installer.exe).

We perform 1000 read requests with a concurrency factor of 100. Each request causes an SQL query and returns ~245 kB of text.

For the tests, I have intentionally chosen an operation that is quite slow: a GET command that searches and reads 240 kB text from an SQLite database on an (already overloaded) laptop.

The total performance numbers are not meaningful for other web applications, different databases etc. However, the relative numbers are meaningful for the purpose to compare the different setups.
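As a quick plausibility check, the totals in the ab logs quoted below follow directly from the test parameters. A small sketch (numbers taken from the first native-Windows run; only simple arithmetic, no ab-specific code):

```python
# Cross-check of ApacheBench figures: 1000 requests, concurrency 100,
# 244884 bytes per response body, 17.01 requests/sec (first Windows run).
requests = 1000
concurrency = 100
doc_length = 244884  # bytes, "Document Length" in the ab output
rps = 17.01          # "Requests per second" in the ab output

# "HTML transferred" is simply requests * document length.
html_transferred = requests * doc_length  # 244884000 bytes, as reported

# ab's mean "Time per request" equals concurrency / requests-per-second.
time_per_request_ms = concurrency / rps * 1000.0  # ~5879 ms vs. 5877 ms reported

print(html_transferred, round(time_per_request_ms))
```

The small deviation in the time per request stems from rounding the requests-per-second figure in the ab output.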

Results

From the overhead point of view, I would have expected a performance relation like follows:

Expectation:

  1. native Windows performance is greater than
  2. Docker performance on Linux VM on Windows is greater than 
  3. Docker performance on Linux VM on Windows with shared volume is greater than
  4. Docker performance on Linux VM on Windows with shared volume auto-mounted on Windows

However, I have found the surprising result that native Windows performance is ~33% lower than Docker performance:

Finding 1: Rails on docker outperforms native Rails on Windows by 50%, even if the Docker Linux host is a VM, which is hosted on Windows.

In addition, we have seen that:

Finding 2: the performance of a dockerized (Rails) web application is not degraded by locating the database and (Rails) program code on a shared volume.

The next finding was already expected from our experience in the last blog:

Finding 3: the performance of a dockerized (Rails) web application drops dramatically (by factor of ~10), if the shared volume is auto-mounted on a C:\Users folder.

The last statement was expected from the DDDocker (4) blog. Here the results in detail:

This graph shows the number of GET requests per second the web portal can handle. Docker outperforms Windows by ~50%, no matter whether docker is run with local folders or with shared volumes pointing to the Linux host. The performance drops by a factor of ~9, if the shared folder is used to access an auto-mounted Windows folder.

The same picture is seen with respect to the throughput, since the transfer rate is proportional to the requests/sec rate:

A corresponding inverse picture is seen with respect to the average time an end user has to wait for each request: because of the overload situation with 100 parallel requests, each request takes 5.88, 3.89 and 34.59 (!) seconds, respectively, for the application running natively on Windows, in a docker container within the VirtualBox VM, and in a docker container with all data located on a shared volume auto-mounted from Windows.

Since the shared volume is located on Windows, is auto-mounted to the VM and is then mounted into the container, it is no surprise that the performance is lower. Still, a response time increase by a factor of 8.9 is an unexpectedly high value you have to take into consideration when working with shared volumes on Windows machines.
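The factors quoted above can be reproduced from the mean response times; a small arithmetic sketch:

```python
# Mean response times quoted above (seconds):
t_windows = 5.88         # native Windows
t_docker = 3.89          # docker, data inside container / Linux VM
t_docker_cusers = 34.59  # docker, shared volume auto-mounted from C:\Users

# Docker vs. native Windows: ~1.51, i.e. the "~50% faster" finding.
speedup = t_windows / t_docker

# Shared volume on C:\Users vs. docker baseline: ~8.9, the "factor ~9".
slowdown = t_docker_cusers / t_docker

print(round(speedup, 2), round(slowdown, 1))
```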

Appendix

Preparation of Scenario 3

The preparation of scenarios 1, 2 and 4 has been described in the last blog already. Here is a description of how I have built scenario 3:

We need to copy the folder from Windows into the Linux VM. This way, the performance test also has relevance for docker users who run docker on Linux. Let us see:

(Linux VM)# sudo cp -R /c/Users/vo062111/dockersharedfolder2 /home/rails/

will copy the auto-mounted volume to /home/rails/dockersharedfolder2 inside the VM. Now let us start a container that maps to this folder instead of /c/Users/vo062111/dockersharedfolder2, which is the volume mounted from Windows.

docker@boot2docker:~$ JOB=$(docker run -d -p 8080:3000 -v /home/rails/dockersharedfolder2:/home/rails/ProvisioningEngine oveits/rails_provisioningengine:latest /bin/bash -c "cd /home/rails/ProvisioningEngine; rails s")

With that, we get rid of the auto-mount hop compared to scenario 4.

Scenario 1: Test Protocol Rails Server on native Windows

bash-3.1$ ./ab -n 1000 -c 100 "http://127.0.0.1:3000/text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config"
 This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
 Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
 Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
 Completed 100 requests
 Completed 200 requests
 Completed 300 requests
 Completed 400 requests
 Completed 500 requests
 Completed 600 requests
 Completed 700 requests
 Completed 800 requests
 Completed 900 requests
 Completed 1000 requests
 Finished 1000 requests
 Server Software: WEBrick/1.3.1
 Server Hostname: 127.0.0.1
 Server Hostname: 127.0.0.1
 Server Port: 3000
Document Path: /text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config
 Document Length: 244884 bytes
Concurrency Level: 100
 Time taken for tests: 58.774 seconds
 Complete requests: 1000
 Failed requests: 0
 Total transferred: 245374000 bytes
 HTML transferred: 244884000 bytes
 Requests per second: 17.01 [#/sec] (mean)
 Time per request: 5877.425 [ms] (mean)
 Time per request: 58.774 [ms] (mean, across all concurrent requests)
 Transfer rate: 4077.01 [Kbytes/sec] received
Connection Times (ms)
 min mean[+/-sd] median max
 Connect: 0 0 0.3 1 4
 Processing: 84 5630 1067.3 5723 12428
 Waiting: 79 5623 1067.7 5718 12424
 Total: 85 5630 1067.3 5723 12428
 ERROR: The median and mean for the initial connection time are more than twice the standard
 deviation apart. These results are NOT reliable.
Percentage of the requests served within a certain time (ms)
50% 5723
66% 6072
75% 6156
80% 6205
90% 6418
95% 6496
98% 6549
99% 6566
100% 12428 (longest request)
 bash-3.1$ ./ab -n 1000 -c 100 "http://127.0.0.1:3000/text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config"
 This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
 Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
 Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
 Completed 100 requests
 Completed 200 requests
 Completed 300 requests
 Completed 400 requests
 Completed 500 requests
 Completed 600 requests
 Completed 700 requests
 Completed 800 requests
 Completed 900 requests
 Completed 1000 requests
 Finished 1000 requests
 Server Software: WEBrick/1.3.1
 Server Hostname: 127.0.0.1
 Server Port: 3000
Document Path: /text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config
 Document Length: 244884 bytes
Concurrency Level: 100
 Time taken for tests: 58.761 seconds
 Complete requests: 1000
 Failed requests: 0
 Total transferred: 245374000 bytes
 HTML transferred: 244884000 bytes
 Requests per second: 17.02 [#/sec] (mean)
 Time per request: 5876.075 [ms] (mean)
 Time per request: 58.761 [ms] (mean, across all concurrent requests)
 Transfer rate: 4077.94 [Kbytes/sec] received
Connection Times (ms)
 min mean[+/-sd] median max
 Connect: 0 0 0.3 1 4
 Processing: 55 5590 1117.4 5779 11605
 Waiting: 51 5584 1117.5 5773 11601
 Total: 56 5590 1117.5 5780 11605
 ERROR: The median and mean for the initial connection time are more than twice the standard
 deviation apart. These results are NOT reliable.
Percentage of the requests served within a certain time (ms)
 50% 5780
 66% 5929
 75% 5999
 80% 6052
 90% 6173
 95% 6276
 98% 6411
 99% 6443
 100% 11605 (longest request)
 bash-3.1$

Server Logs

...
Started GET "/text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config" for 127.0.0.1 at 2015-07-09 08:57:43 +0200
 Processing by TextDocumentsController#index as TEXT
 Parameters: {"filter"=>{"systemType"=>"OSV_V7R1", "action"=>"Add Customer", "templateType"=>"config"}}
 TextDocument Load (0.5ms) SELECT "text_documents".* FROM "text_documents"
 Rendered text_documents/index.text.erb (0.5ms)
 Completed 200 OK in 18ms (Views: 15.0ms | ActiveRecord: 0.5ms)
...

Summary

  • 15.4 requests/sec and 6.5 sec/request at 100 concurrent requests and a total of 1000 requests with a ~245 kB answer per request.

Open Issue:

  • ab.exe tells us that the responses are non-2xx, but the server log shows 200 OK.

Scenario 2: Test Protocol Rails Server on Docker

bash-3.1$ ./ab -n 1000 -c 100 "http://192.168.56.101:8080/text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config"
 This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
 Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
 Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.56.101 (be patient)
 Completed 100 requests
 Completed 200 requests
 Completed 300 requests
 Completed 400 requests
 Completed 500 requests
 Completed 600 requests
 Completed 700 requests
 Completed 800 requests
 Completed 900 requests
 Completed 1000 requests
 Finished 1000 requests
 Server Software: WEBrick/1.3.1
 Server Hostname: 192.168.56.101
 Server Port: 8080
Document Path: /text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config
 Document Length: 244884 bytes
Concurrency Level: 100
 Time taken for tests: 32.550 seconds
 Complete requests: 1000
 Failed requests: 0
 Total transferred: 245374000 bytes
 HTML transferred: 244884000 bytes
 Requests per second: 30.72 [#/sec] (mean)
 Time per request: 3254.951 [ms] (mean)
 Time per request: 32.550 [ms] (mean, across all concurrent requests)
 Transfer rate: 7361.80 [Kbytes/sec] received
Connection Times (ms)
 min mean[+/-sd] median max
 Connect: 1 4 3.1 3 28
 Processing: 66 3103 697.2 3186 6725
 Waiting: 22 3049 694.9 3135 6665
 Total: 73 3107 697.3 3190 6728
Percentage of the requests served within a certain time (ms)
 50% 3190
 66% 3245
 75% 3297
 80% 3349
 90% 3693
 95% 3783
 98% 3888
 99% 6004
 100% 6728 (longest request)
 bash-3.1$

 bash-3.1$ ./ab -n 1000 -c 100 "http://192.168.56.101:8080/text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config"
 This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
 Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
 Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.56.101 (be patient)
 Completed 100 requests
 Completed 200 requests
 Completed 300 requests
 Completed 400 requests
 Completed 500 requests
 Completed 600 requests
 Completed 700 requests
 Completed 800 requests
 Completed 900 requests
 Completed 1000 requests
 Finished 1000 requests
 Server Software: WEBrick/1.3.1
 Server Hostname: 192.168.56.101
 Server Port: 8080
Document Path: /text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config
 Document Length: 244884 bytes
Concurrency Level: 100
 Time taken for tests: 45.330 seconds
 Complete requests: 1000
 Failed requests: 0
 Total transferred: 245374000 bytes
 HTML transferred: 244884000 bytes
 Requests per second: 22.06 [#/sec] (mean)
 Time per request: 4533.006 [ms] (mean)
 Time per request: 45.330 [ms] (mean, across all concurrent requests)
 Transfer rate: 5286.18 [Kbytes/sec] received
Connection Times (ms)
 min mean[+/-sd] median max
 Connect: 1 6 8.0 4 127
 Processing: 76 4290 1588.3 3969 14826
 Waiting: 29 4210 1587.7 3901 14647
 Total: 84 4296 1589.0 3976 14846
Percentage of the requests served within a certain time (ms)
 50% 3976
 66% 4130
 75% 4530
 80% 5083
 90% 5699
 95% 7520
 98% 9635
 99% 10546
 100% 14846 (longest request)
 bash-3.1$

Summary

  • 26 requests/sec and 3.9 sec/request at 100 concurrent requests and a total of 1000 requests with a ~245 kB answer per request.

Open Issue:

  • ab.exe tells us that the responses are non-2xx, but the server log shows 200 OK.

Scenario 3: Test Protocol Rails Server on Docker with Source Code and DB on a shared Volume residing on the Linux VM

Docker command:

docker@boot2docker:~$ JOB=$(docker run -d -p 8080:3000 -v /home/rails/dockersharedfolder2:/home/rails/ProvisioningEngine oveits/rails_provisioningengine:latest /bin/bash -c "cd /home/rails/ProvisioningEngine; rails s")

ab.exe log Test 1

bash-3.1$ ./ab -n 1000 -c 100 "http://192.168.56.101:8080/text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config"
 This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
 Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
 Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.56.101 (be patient)
 Completed 100 requests
 Completed 200 requests
 Completed 300 requests
 Completed 400 requests
 Completed 500 requests
 Completed 600 requests
 Completed 700 requests
 Completed 800 requests
 Completed 900 requests
 Completed 1000 requests
 Finished 1000 requests
 Server Software: WEBrick/1.3.1
 Server Hostname: 192.168.56.101
 Server Port: 8080
Document Path: /text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config
 Document Length: 244884 bytes
Concurrency Level: 100
 Time taken for tests: 34.173 seconds
 Complete requests: 1000
 Failed requests: 0
 Total transferred: 245374000 bytes
 HTML transferred: 244884000 bytes
 Requests per second: 29.26 [#/sec] (mean)
 Time per request: 3417.283 [ms] (mean)
 Time per request: 34.173 [ms] (mean, across all concurrent requests)
 Transfer rate: 7012.09 [Kbytes/sec] received
Connection Times (ms)
 min mean[+/-sd] median max
 Connect: 1 4 4.4 3 65
 Processing: 103 3232 726.1 3303 7013
 Waiting: 58 3164 725.8 3235 6900
 Total: 107 3236 726.1 3308 7015
Percentage of the requests served within a certain time (ms)
 50% 3308
 66% 3448
 75% 3552
 80% 3619
 90% 3768
 95% 3948
 98% 4148
 99% 4346
 100% 7015 (longest request)

ab.exe log Test 2

bash-3.1$ ./ab -n 1000 -c 100 "http://192.168.56.101:8080/text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config"
 This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
 Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
 Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.56.101 (be patient)
 Completed 100 requests
 Completed 200 requests
 Completed 300 requests
 Completed 400 requests
 Completed 500 requests
 Completed 600 requests
 Completed 700 requests
 Completed 800 requests
 Completed 900 requests
 Completed 1000 requests
 Finished 1000 requests
 Server Software: WEBrick/1.3.1
 Server Hostname: 192.168.56.101
 Server Port: 8080
Document Path: /text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config
 Document Length: 244884 bytes
Concurrency Level: 100
 Time taken for tests: 36.615 seconds
 Complete requests: 1000
 Failed requests: 0
 Total transferred: 245374000 bytes
 HTML transferred: 244884000 bytes
 Requests per second: 27.31 [#/sec] (mean)
 Time per request: 3661.482 [ms] (mean)
 Time per request: 36.615 [ms] (mean, across all concurrent requests)
 Transfer rate: 6544.43 [Kbytes/sec] received
Connection Times (ms)
 min mean[+/-sd] median max
 Connect: 1 4 4.1 3 30
 Processing: 83 3458 1077.6 3531 8051
 Waiting: 38 3378 1070.1 3452 7886
 Total: 84 3462 1077.7 3532 8055
Percentage of the requests served within a certain time (ms)
 50% 3532
 66% 3776
 75% 3895
 80% 3985
 90% 4430
 95% 4680
 98% 6623
 99% 7259
 100% 8055 (longest request)
 bash-3.1$

Scenario 4: Test Protocol Rails Server on Docker with Source Code and DB on a shared Volume residing on Windows

Docker command:

docker@boot2docker:~$ JOB=$(docker run -d -p 8080:3000 -v /c/Users/vo062111/dockersharedfolder2:/home/rails/ProvisioningEngine oveits/rails_provisioningengine:latest /bin/bash -c "cd /home/rails/ProvisioningEngine; rails s")

ab.exe log Test 1

bash-3.1$ ./ab -n 1000 -c 100 "http://192.168.56.101:8080/text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config"
 This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
 Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
 Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.56.101 (be patient)
 Completed 100 requests
 Completed 200 requests
 Completed 300 requests
 Completed 400 requests
 Completed 500 requests
 Completed 600 requests
 Completed 700 requests
 Completed 800 requests
 Completed 900 requests
 Completed 1000 requests
 Finished 1000 requests
 Server Software: WEBrick/1.3.1
 Server Hostname: 192.168.56.101
 Server Port: 8080
Document Path: /text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config
 Document Length: 244884 bytes
Concurrency Level: 100
 Time taken for tests: 348.463 seconds
 Complete requests: 1000
 Failed requests: 0
 Total transferred: 245374000 bytes
 HTML transferred: 244884000 bytes
 Requests per second: 2.87 [#/sec] (mean)
 Time per request: 34846.268 [ms] (mean)
 Time per request: 348.463 [ms] (mean, across all concurrent requests)
 Transfer rate: 687.66 [Kbytes/sec] received
Connection Times (ms)
 min mean[+/-sd] median max
 Connect: 1 3 4.8 2 112
 Processing: 598 33224 6430.7 34965 57297
 Waiting: 468 33076 6427.7 34808 57177
 Total: 600 33227 6430.6 34966 57299
Percentage of the requests served within a certain time (ms)
 50% 34966
 66% 35521
 75% 35781
 80% 35961
 90% 37069
 95% 37879
 98% 38203
 99% 38340
 100% 57299 (longest request)

ab.exe log of Test 2

bash-3.1$ ./ab -n 1000 -c 100 "http://192.168.56.101:8080/text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config"
 This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
 Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
 Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.56.101 (be patient)
 Completed 100 requests
 Completed 200 requests
 Completed 300 requests
 Completed 400 requests
 Completed 500 requests
 Completed 600 requests
 Completed 700 requests
 Completed 800 requests
 Completed 900 requests
 Completed 1000 requests
 Finished 1000 requests
 Server Software: WEBrick/1.3.1
 Server Hostname: 192.168.56.101
 Server Port: 8080
Document Path: /text_documents.txt/?filter[systemType]=OSV_V7R1&filter[action]=Add%20Customer&filter[templateType]=config
 Document Length: 244884 bytes
Concurrency Level: 100
 Time taken for tests: 343.399 seconds
 Complete requests: 1000
 Failed requests: 0
 Total transferred: 245374000 bytes
 HTML transferred: 244884000 bytes
 Requests per second: 2.91 [#/sec] (mean)
 Time per request: 34339.862 [ms] (mean)
 Time per request: 343.399 [ms] (mean, across all concurrent requests)
 Transfer rate: 697.80 [Kbytes/sec] received
Connection Times (ms)
 min mean[+/-sd] median max
 Connect: 0 2 2.2 2 23
 Processing: 639 32505 5954.2 33841 37156
 Waiting: 594 32356 5953.0 33699 37114
 Total: 643 32508 5954.2 33848 37157
Percentage of the requests served within a certain time (ms)
 50% 33848
 66% 34555
 75% 34970
 80% 35247
 90% 35872
 95% 36190
 98% 36382
 99% 36774
 100% 37157 (longest request)
 bash-3.1$

Summary

  • ~2.9 requests/sec and ~34.8 sec per request at 100 concurrent requests and a total of 1000 requests, with a ~245 kB response per request (i.e. ~687 kB/sec)
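
The derived metrics in the summary follow directly from the raw counters ab reports. As a sanity check, the sketch below recomputes them from the Test 2 numbers (1000 requests, concurrency 100, 343.399 s total, 245,374,000 bytes transferred); small deviations from ab's printed values are only due to rounding of the input figures:

```python
# Recompute ApacheBench's derived metrics from the raw counters of Test 2.
requests = 1000
concurrency = 100
time_taken_s = 343.399
total_bytes = 245_374_000

# Requests per second (mean): completed requests / total wall-clock time
rps = requests / time_taken_s

# Time per request (mean): how long one batch of `concurrency` requests takes
time_per_request_ms = concurrency * time_taken_s / requests * 1000

# Transfer rate in KB/s received
transfer_rate_kbs = total_bytes / 1024 / time_taken_s

print(round(rps, 2))                  # 2.91 [#/sec]
print(round(time_per_request_ms, 1))  # 34339.9 [ms]
print(round(transfer_rate_kbs, 2))    # 697.8 [Kbytes/sec]
```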

Open Issue:

  • ab.exe reports the responses as non-2xx, while the server log shows “200 OK”.
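
Note that ab counts “Failed requests” (connection-level failures) separately from “Non-2xx responses” (requests that completed but returned a non-2xx status code), so a run can show 0 failed requests and still be flagged as non-2xx. When post-processing ab logs, both counters can be extracted with a small sketch like this (the sample text is illustrative, not taken from the runs above):

```python
import re

def summarize_ab(output: str) -> dict:
    """Pull the failure-related counters out of ApacheBench output.

    'Failed requests' is always printed; the 'Non-2xx responses' line
    only appears when at least one response had a non-2xx status.
    """
    stats = {"failed": 0, "non_2xx": 0}
    m = re.search(r"Failed requests:\s+(\d+)", output)
    if m:
        stats["failed"] = int(m.group(1))
    m = re.search(r"Non-2xx responses:\s+(\d+)", output)
    if m:
        stats["non_2xx"] = int(m.group(1))
    return stats

# Hypothetical excerpt of an ab run where all responses were non-2xx:
sample = """Complete requests:      1000
Failed requests:        0
Non-2xx responses:      1000
"""
print(summarize_ab(sample))  # {'failed': 0, 'non_2xx': 1000}
```

Comparing the non-2xx counter against the status codes in the Rails server log would be the next step in tracking down this discrepancy.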