
IT Automation Part II: Ansible “Hello World” for Templating


This post is a continuation of my previous post, where we went through a little “Hello World” example using Ansible, an IT automation tool. Last time, we performed remote shell commands via SSH. This time, we will go through a little templating use case, where

  1. a shell script and a data file are created from Jinja2 templates,
  2. the files are uploaded,
  3. the shell script is performed on the remote target machine and
  4. the log is retrieved from the target machine.

We will see how surprisingly easily those tasks can be performed using Ansible and how input variables (like host name or order ID) come into play.

Posts of this series:

  • Part I: Ansible Hello World with a comparison of Ansible vs. Salt vs. Chef vs. Puppet and a hello world example with focus on Playbooks (i.e. tasks), Inventories (i.e. groups of targets) and remote shell script execution.
  • Part II: Ansible Hello World reloaded with focus on templating: create and upload files based on jinja2 templates.
  • Part III: Salt Hello World example: same content as part I, but with Salt instead of Ansible
  • Part IV: Ansible Tower Hello World: investigates Ansible Tower, a professional Web Portal for Ansible


Use Case

Our main goal today is to work with jinja2 templates and variables. For that, we look at following use case:

As a SaaS provider, I want to automatically configure an application via the application’s command-line-based import mechanism.

The steps that need to be performed are:

  • create a host- and order-specific import script and an import data file
  • upload the import files
  • remotely run the script, which in turn imports the data file
  • retrieve the result (log)

1. Create import script

Prerequisites

If you have followed the instructions in part 1 of the Ansible Hello World example, then all prerequisites are met and you can skip this section. If not, you need to prepare the system as follows:

  1. Install a Docker host. For that, follow the instructions “1. Install a Docker Host” in my previous blog post.
  2. Follow the instructions “2. Create an Ansible Docker Image” to download the Ansible Docker image.
  3. Start the Ansible container using “docker run -it williamyeh/ansible:ubuntu14.04-onbuild /bin/bash”.
  4. Create an inventory file /etc/ansible/hosts:

vi /etc/ansible/hosts

and add the following lines:

[vagranthosts]
192.168.33.10

Here, 192.168.33.10 is the IP address of the target Ubuntu VM, which was installed by Vagrant and is based on the Vagrant box “ubuntu-trusty64-docker” by William Yeh.

1.1 Create and upload a static import script

1. Create an import script on the Ansible machine as follows:

cd /tmp; vi importscript.sh

Add and save the following content:

#!/bin/sh
echo now simulating the import of the file /tmp/import-data.txt
# here you would perform the actual import...

2. In the same /tmp directory, create a playbook file “copy_file.yml” with following content:

---
# This playbook copies a local file to the remote target hosts
- name: Copy Files
  hosts: vagranthosts
  remote_user: vagrant

  tasks:
  - name: copy file
    copy: src=/tmp/importscript.sh dest=/tmp/importscript.sh

3. Now we can run the playbook:

ansible-playbook -i /etc/ansible/hosts copy_file.yml

And we get:

root@930360e7db68:/tmp# ansible-playbook -i /etc/ansible/hosts copy_file.yml

PLAY [Copy Files] *************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [copy file] *************************************************************
changed: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=2 changed=1 unreachable=0 failed=0

Here, we have replaced the list of hosts by a reference to the inventory file.

4. And we verify that the file has been transferred:

vagrant@localhost ~ $ cat /tmp/importscript.sh
#!/bin/sh
echo now simulating the import of the file /tmp/import-data.txt
# here you would perform the actual import...

All in all, we have successfully uploaded a static file to the target machine.

1.2 Create and upload a templated import script

1. Create an import script template on the Ansible machine as follows:

cd /tmp; vi importscript.sh.jinja2

Add and save the following content:

#!/bin/sh
echo now simulating the import of the file /tmp/import-data-{{orderNumber}}.txt on the host={{ansible_host}}
# here you would perform the actual import...

Note that we have introduced two variables, “orderNumber” and “ansible_host”, which need to be resolved at the time we run the Ansible playbook.

2. In the same /tmp directory, create a playbook file “template_file.yml”

cp copy_file.yml template_file.yml; vi template_file.yml

with following content:

---
# This playbook creates a file from a Jinja2 template and copies it to the remote target hosts
- name: Copy Files
  hosts: vagranthosts
  remote_user: vagrant

  tasks:
  - name: create file from template and copy to remote system
    template: src=/tmp/importscript.sh.jinja2 dest=/tmp/importscript_from_template.sh

We have replaced the copy task by a template task.

3. Now we can run the playbook:

ansible-playbook -i /etc/ansible/hosts template_file.yml

but we get an error:

root@930360e7db68:/tmp# ansible-playbook -i /etc/ansible/hosts template_file.yml

PLAY [Copy Files] *************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [create file from template and copy to remote system] *******************
fatal: [192.168.33.10] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'orderNumber' is undefined", 'failed': True}

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
 to retry, use: --limit @/root/template_file.retry

192.168.33.10 : ok=1 changed=0 unreachable=1 failed=0

It seems like the template module verifies that all variables in a template are resolved. Good: this feature prevents us from uploading half-resolved template files.
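As a side note, if you deliberately want a template to render even when a variable is unset, Jinja2 offers a default filter that provides a fallback value; a minimal sketch of our template with such a fallback (the value “0000” is just an example):

```jinja
#!/bin/sh
echo now simulating the import of the file /tmp/import-data-{{orderNumber | default("0000")}}.txt on the host={{ansible_host}}
# here you would perform the actual import...
```

With such a template, the playbook run above would succeed even without specifying orderNumber.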

4. Now let us try again, but now we specify the orderNumber variable on the command line:

ansible-playbook -i /etc/ansible/hosts --extra-vars="orderNumber=2015-11-20-0815" template_file.yml

This time we get positive feedback:

root@930360e7db68:/tmp# ansible-playbook -i /etc/ansible/hosts --extra-vars="orderNumber=2015-11-20-0815" template_file.yml
PLAY [Copy Files] *************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [create file from template and copy to remote system] *******************
changed: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=2 changed=1 unreachable=0 failed=0

Successful, this time…

But wait a moment: wasn’t there a second variable “ansible_host” in the Jinja2 template file that we forgot to specify on the command line? Let us see what we get on the target system:

vagrant@localhost ~ $ cat /tmp/importscript_from_template.sh
#!/bin/sh
echo now simulating the import of the file /tmp/import-data-2015-11-20-0815.txt on the host=192.168.33.10
# here you would perform the actual import...

We find that ansible_host is a pre-defined Ansible variable that is automatically set to the host as defined in the inventory file.

Are there other ways to define variables? Yes, many of them: on the command line (as just tested), in the playbook, in the host section within the playbook, in the inventory file (for versions >= 2.0 only), in playbook include files and roles, and more; see e.g. the long precedence list in the official Ansible variables documentation.
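As an illustration of how two of these levels interact, here is a hypothetical sketch (the playbook name and values are examples, not from this post): a play-level variable serves as a default, while --extra-vars, having the highest precedence, overrides it:

```yaml
---
# precedence_demo.yml (hypothetical): play-level default vs. command line
- name: Precedence demo
  hosts: vagranthosts
  remote_user: vagrant
  vars:
    orderNumber: "0000"        # play-level default
  tasks:
  - name: show which value wins
    shell: echo orderNumber={{orderNumber}}

# ansible-playbook -i /etc/ansible/hosts precedence_demo.yml
#   -> the play-level value "0000" is used
# ansible-playbook -i /etc/ansible/hosts --extra-vars="orderNumber=4711" precedence_demo.yml
#   -> the command-line value "4711" wins
```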

1.3 Testing variables in the Playbook

Let us test variables in the playbook’s host section:

1. For that we change the content of the file “importscript.sh.jinja2” like follows, introducing a new “who” variable:

#!/bin/sh
echo import performed by {{who}}
echo now simulating the import of the file /tmp/import-data-{{orderNumber}}.txt on the host={{ansible_host}}
# here you would perform the actual import...

And we add the “who” variable to the host in the playbook like follows:

---
# This playbook creates a file from a Jinja2 template and copies it to the remote target hosts
- name: Copy Files created from templates
  hosts: vagranthosts
  vars:
    who: me
  remote_user: vagrant

  tasks:
  - name: create file from template and copy to remote system
    template: src=/tmp/importscript.sh.jinja2 dest=/tmp/importscript_from_template.sh

2. and we re-run the command

ansible-playbook -i /etc/ansible/hosts --extra-vars="orderNumber=2015-11-20-0815" template_file.yml

and we get:

PLAY [Copy Files created from templates] **************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [create file from template and copy to remote system] *******************
changed: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=2 changed=1 unreachable=0 failed=0

3. let us validate the content of the transferred file:

vagrant@localhost ~ $ cat /tmp/importscript_from_template.sh
#!/bin/sh
echo import performed by me
echo now simulating the import of the file /tmp/import-data-2015-11-20-0815.txt on the host=192.168.33.10
# here you would perform the actual import...

Perfect. All in all, we have created a template file with

  • pre-defined variables,
  • variables that are host-specific and
  • variables that are defined at runtime as an argument of the CLI command.

Note that a variable cannot be defined under the task section (you will get the error message “ERROR: vars is not a legal parameter in an Ansible task or handler” if you try to). As a workaround, if you want to use task-specific variables, you can create a playbook per task and define the variable in the host section of the playbook.
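A sketch of that workaround (file and variable names are hypothetical): each task gets its own playbook, and the “task-specific” variable is defined in that playbook’s host section:

```yaml
---
# task_one.yml (hypothetical): "who" effectively applies to this task only
- name: First task with its own variable
  hosts: vagranthosts
  remote_user: vagrant
  vars:
    who: importer
  tasks:
  - name: create file from template and copy to remote system
    template: src=/tmp/importscript.sh.jinja2 dest=/tmp/importscript_from_template.sh
```

A second playbook can then define a different value of “who” for its own task.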

Note also that it is considered Ansible best practice to define host-specific variables in the inventory file instead of the playbook. Check out the documentation to find several ways of defining variables in the inventory. However, be careful: the Docker image is still on Ansible 1.9.4 (the latest stable release at the time of writing), and the specification of variables in the inventory file requires v2.0.

2. Upload an import data file, perform the shell script and download the result log file

1. In order to come closer to our use case, we still need to transfer a data file and execute the import script on the remote target. For that, we create a data file template:

vi /tmp/import-data.txt.jinja2

2. add the content

# this is the import data for order={{orderNumber}} and host={{ansible_host}}
# imported by {{who}}
Some import data

3. create a playbook named import_playbook.yml as follows:

cp template_file.yml import_playbook.yml; vi import_playbook.yml

with the content:

---
# This playbook creates files from Jinja2 templates, copies them to the remote target hosts, runs the import script and fetches the log
- name: Copy Files created from templates
  hosts: vagranthosts
  vars:
    who: me
  remote_user: vagrant

  tasks:
  - name: create import script file from template and copy to remote system
    template: src=/tmp/importscript.sh.jinja2 dest=/tmp/importscript-{{orderNumber}}.sh
  - name: create import data file from template and copy to remote system
    template: src=/tmp/import-data.txt.jinja2 dest=/tmp/import-data-{{orderNumber}}.txt
  - name: perform the import of /tmp/import-data-{{orderNumber}}.txt
    shell: /bin/sh /tmp/importscript-{{orderNumber}}.sh > /tmp/importscript-{{orderNumber}}.log
  - name: fetch the log from the target system
    fetch: src=/tmp/importscript-{{orderNumber}}.log dest=/tmp

In this playbook, we perform all required steps of our use case: upload the script and the data, run the script and retrieve the detailed feedback we would have gotten if we had run the script locally on the target machine.

4. run the playbook

ansible-playbook -i /etc/ansible/hosts --extra-vars="orderNumber=2015-11-20-0816" import_playbook.yml

With that we get the output:

root@930360e7db68:/tmp# ansible-playbook -i /etc/ansible/hosts --extra-vars="orderNumber=2015-11-20-0816" import_playbook.yml

PLAY [Copy Files created from templates] **************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [create import script file from template and copy to remote system] *****
changed: [192.168.33.10]

TASK: [create import data file from template and copy to remote system] *******
changed: [192.168.33.10]

TASK: [perform the import of /tmp/import-data-{{orderNumber}}.txt] ************
changed: [192.168.33.10]

TASK: [fetch the log from the target system] **********************************
ok: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=5 changed=3 unreachable=0 failed=0

5. On the remote target, we check the files created:

vagrant@localhost ~ $ cat /tmp/importscript-2015-11-20-0816.sh
#!/bin/sh
echo import performed by me
echo now simulating the import of the file /tmp/import-data-2015-11-20-0816.txt on the host=192.168.33.10
# here you would perform the actual import...
vagrant@localhost ~ $ cat /tmp/import-data-2015-11-20-0816.txt
# this is the import data for order=2015-11-20-0816 and host=192.168.33.10
# imported by me
Some import data
vagrant@localhost ~ $ cat /tmp/importscript-2015-11-20-0816.log
import performed by me
now simulating the import of the file /tmp/import-data-2015-11-20-0816.txt on the host=192.168.33.10

6. And on the Ansible machine, check the retrieved log file:

root@930360e7db68:/tmp# cat /tmp/192.168.33.10/tmp/importscript-2015-11-20-0816.log
import performed by me
now simulating the import of the file /tmp/import-data-2015-11-20-0816.txt on the host=192.168.33.10

Note that the file is automatically copied into a path that consists of the specified “/tmp” base path, the Ansible host and the source path. This behavior can be suppressed with the parameter flat=yes (see http://docs.ansible.com/ansible/fetch_module.html for details).
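For illustration, a sketch of how the fetch task above could look with flat=yes (the destination path is an example): the log would then be stored directly under the given destination, without the host- and source-path prefix:

```yaml
  - name: fetch the log from the target system (flat destination)
    fetch: src=/tmp/importscript-{{orderNumber}}.log dest=/tmp/logs/ flat=yes
```

With flat=yes, dest is interpreted as a file path, or as a directory if it ends with ‘/’.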

Summary

We have shown how easy it is to implement an IT automation use case in which a script and a data file are created from templates, the files are uploaded to a remote target, the script is run and the command-line log is retrieved.

Further Testing

If you want to go through a more sophisticated Jinja2 example, you might want to check out this blog post by Daniel Schneller, which I found via Google.


IT Automation Part I: Ansible “Hello World” Example using Ansible on Docker


This is part I of a little “Hello World” example using Ansible, an IT automation tool.

The post has following content:

  • Popularity of Ansible, Salt, Chef and Puppet
  • Installation based on Docker
  • Playbooks (i.e. tasks) and Inventories (i.e. groups of targets)
  • Remote shell script execution

As a convenient and quick way of installing Ansible on Windows (or Mac), we choose a Docker Ansible image on a Vagrant Ubuntu Docker host that has won a Java performance comparison against CoreOS and boot2docker (see this blog post).

NEW: Try it out!

Get a feeling for Ansible in a real console without the need to install anything!

  1. Quick sign-in to Katacoda via GitHub, LinkedIn, Twitter, Google or email
  2. click the console below or on https://www.katacoda.com/oliverveits/scenarios/ansible-bootstrap

Posts of this series:

  • Part I: Ansible Hello World (this post) with a comparison of Ansible vs. Salt vs. Chef vs. Puppet and a hello world example with focus on Playbooks (i.e. tasks), Inventories (i.e. groups of targets) and remote shell script execution.
  • Part II: Ansible Hello World reloaded with focus on templating: create and upload files based on jinja2 templates.
  • Part III: Salt Hello World example: same content as part I, but with Salt instead of Ansible
  • Part IV: Ansible Tower Hello World: investigates Ansible Tower, a professional Web Portal for Ansible

 


Versions

2015-11-09: initial release
2016-06-08: I have added command line prompts basesystem#, dockerhost# and container#, so the reader can see more easily, on which layer the command is issued.
2017-01-06: Added linked Table of Contents

Why Ansible?

For the “hello world” tests, I have chosen Ansible. Ansible is a relatively new member of a larger family of IT automation tools. This InfoWorld article from 2013 compares four popular members, namely Puppet, Chef, Ansible and Salt. Here and here you can find more recent comparisons, if not as comprehensive as the InfoWorld article.

In order to explore the popularity of the software, let us look at a Google Trends analysis of those four tools (puppet/ansible/chef/salt + “automation”; status November 2015; for a discussion of the somewhat more confusing recent results, please consult Appendix F below).

Okay, in the Google Trends analysis we can see that Ansible is relatively new and that it does not seem to replace Puppet, Chef or Salt. However, Ansible offers a fully maintained RESTful API on the Ansible web application called Ansible Tower (which comes at a cost, though). Moreover, I have seen another article stating that Ansible is very popular among Docker developers. Since I have learned to love Docker (it was not love at first sight), let us dig into Ansible, even though Puppet and Chef seem to be more popular in Google searches.

For a discussion of REST and Web UI capabilities of the four tools, see Appendix D.

Ansible “Hello World” Example – the Docker Way

We plan to install Ansible, prepare Linux and Windows targets and perform simple tests as follows:

  1. Install a Docker Host
  2. Create an Ansible Docker Image
    • Download an Ansible Onbuild Image from Docker Hub
    • Start and configure the Ansible Container
    • Locally test the Installation
  3. Remote Access to a Linux System via SSH
    • Create a key pair, prepare the Linux target, access the Linux target
    • Note: we will use the Docker Host as Linux Target
  4. Working with Playbooks
    • Ansible “ping” to single system specified on command line
    • Run a shell script on single system specified on command line
  5.  Working with Inventory Files
    • Ansible “ping” to inventory items
    • Run a shell script on inventory items

1. Install a Docker Host

Are you new to Docker? Then you might want to read this blog post.

Installing Docker on Windows and Mac can be a real challenge, but no worries: we will show an easy way here that is much quicker than the one described in Docker’s official documentation:

Prerequisites:
  • I recommend having direct access to the Internet: via firewall, but without an HTTP proxy. However, if you cannot get rid of your HTTP proxy, read Appendix B.
  • Administration rights on your computer.
Steps to install a Docker Host VirtualBox VM:

1. Download and install VirtualBox (if the installation fails with the error message “Setup Wizard ended prematurely”, see Appendix A: VirtualBox Installation Workaround below).

2. Download and install Vagrant (requires a reboot).

3. Download the Vagrant box containing an Ubuntu-based Docker host and create a VirtualBox VM as follows (assuming a Linux-like system or bash on Windows):

(basesystem)$ mkdir ubuntu-trusty64-docker ; cd ubuntu-trusty64-docker
(basesystem)$ vagrant init williamyeh/ubuntu-trusty64-docker
(basesystem)$ vagrant up
(basesystem)$ vagrant ssh

Now you are logged into the Docker host and we are ready for the next step: to create the Ansible Docker image.

Note: I have experienced problems with the vi editor when running vagrant ssh in a Windows terminal. In case of Windows, consider following Appendix C and using putty instead.

2. Create an Ansible Docker Image

1. Download an Ansible Onbuild Image from Docker Hub

In order to check that the Docker host has Internet access, we issue the following command:

dockerhost# docker search ansible

This command will lead to an error if you work behind an HTTP proxy, since we have not (yet) configured the Docker host for usage behind an HTTP proxy. I recommend getting direct Internet access without an HTTP proxy for now. However, if you cannot get rid of your HTTP proxy, read Appendix B.

Now we download the ansible image:

dockerhost# docker pull williamyeh/ansible:ubuntu14.04-onbuild

2. Start and configure the Ansible Container

dockerhost# docker run -it williamyeh/ansible:ubuntu14.04-onbuild /bin/bash

The -it (interactive terminal) flags start an interactive session in the container. Now you are logged into the Docker container and can prepare the Ansible configuration files, namely the inventory file (hosts file) and the playbook.yml:

3. Locally test the Installation

container# ansible all -i 'localhost,' -u vagrant -c local -m ping

Note that the -i argument is a comma-separated list of target hosts and needs to end with a comma. With -u, we define the remote user, and -c local tells Ansible to run against localhost without SSH. The response should look as follows:

localhost | success >> {
 "changed": false,
 "ping": "pong"
}

3. Remote Access to a Linux System via SSH

Now let us test the remote access to a Linux system.

We could perform our tests with any target system that runs an SSH service and has Python >v2.0 installed on /usr/bin/python (see the FAQs). However, the Ubuntu Docker host is up and running already, so why not use it as the target system? In this case, the Ansible container uses its own Docker host as the SSH target.

Ansible is agent-less, but we still need to prepare the target system: Ansible’s default remote access method is to use SSH with public key authentication. The best way is to create an RSA key pair on the Ansible machine (if not already available) and to add the corresponding public key as “authorized key” on the target system.

1. Create an RSA key pair on the Ansible container:
container# ssh-keygen -t rsa

and go through the list of questions. For a proof of concept, and if you are not concerned about security, you can just hit <enter> several times. Here is a log from my case:

root@930360e7db68:/etc/ssh# ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
fe:eb:bf:de:30:d6:5a:8c:1b:4a:c0:dd:cb:5f:3b:80 root@930360e7db68
...

On the target machine (i.e. the ubuntu-trusty64-docker system), create a file named /tmp/ansible_id_rsa.pub and copy the content of the Ansible container’s ~/.ssh/id_rsa.pub file into that file. Then:

dockerhost# cat /tmp/ansible_id_rsa.pub >> ~/.ssh/authorized_keys

2. Test remote access via SSH:

Now, we should be able to access the target system from the ansible container using this key. To test, try:

container# ssh vagrant@192.168.33.10 -C "echo hello world"

This should echo a “hello world” to your screen. Here 192.168.33.10 is a reachable IP address of the docker host (issue ifconfig on the docker host to check the IP address in your case).  For troubleshooting, you can call ssh with the -vvv option.

3. Remote access via Ansible:

Check that Python version > 2.0 (better: > 2.4) is installed on your target machine. In our case of the ubuntu-trusty64-docker image, this is a pre-installed package and we get:

dockerhost# python --version
Python 2.7.6

Now also a remote Ansible connection should be possible from the container:

container# ansible all -i '192.168.33.10,' -u vagrant -m ping

which results in following output:

192.168.33.10 | success >> {
 "changed": false,
 "ping": "pong"
}

This was your first successful Ansible connection via SSH. Now let us also perform a change on the remote target. For that, we perform a remote shell command:

container# ansible all -i '192.168.33.10,' -u vagrant -m shell -a "echo hello world\! > hello_world"

This time, the module is “shell” and the module’s argument is an echo hello world command. We should get the feedback

192.168.33.10 | success | rc=0 >>

On the target, we can check the result with:

dockerhost# cat hello_world
hello world!

4. Working with Playbooks

This was your first remote shell action via Ansible. Now we want to have a look at playbooks, which are the Ansible way to document and automate tasks.

1. Create a playbook

On the ansible container terminal, we create a playbook.yml file:

container# vi playbook.yml

and we add and save the following content:

---
# This playbook uses the ping module to test connectivity to Linux hosts
- name: Ping
  hosts: 192.168.33.10
  remote_user: vagrant 

  tasks: 
  - name: ping 
    ping: 
  - name: echo 
    shell: echo Hello World! > hello_world

Note: If you have problems with the formatting of characters in the terminal (I have experienced problems in a Windows terminal), then I recommend using a putty terminal instead of vagrant ssh. For that, see Appendix C.

Note also that the number of white spaces is relevant in a YAML file. However, note that the ‘!’ does not need to be escaped in the playbook (it was necessary on the command line, though). Now, we perform the following command on the Ansible container:

container# ansible-playbook -i '192.168.33.10,' playbook.yml

This time, the -u flag is not needed (it is ignored if specified), since we have specified the user in the playbook. We get the following feedback:

PLAY [Ping] *******************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [ping] ******************************************************************
ok: [192.168.33.10]

TASK: [echo] ******************************************************************
changed: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=3 changed=1 unreachable=0 failed=0

We check the result on the target machine again:

dockerhost# cat hello_world
Hello World!

We see that the file hello_world was overwritten (“Hello World!” with capital letters instead of “hello world!”).

5. Working with Inventory Files

Instead of specifying individual hosts in the playbook.yml, Ansible offers a more elegant way to work with groups of machines. Those are defined in the inventory file.

More information about inventory files can be found in the official documentation. However, note that this page describes new 2.0 features that do not work on the Docker image (currently Ansible 1.9.4). See Appendix E for more details on the problem and for information on how to upgrade to the latest development build. The features tested in this blog post work on Ansible 1.9.4, though, so you do not need to upgrade now.
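For reference, such a v2.0-style inventory with host variables would look something like the following sketch (the values are examples); on the 1.9.4 image, these variables would simply be ignored:

```ini
[vagranthosts]
192.168.33.10 ansible_user=vagrant ansible_port=22
```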

1. Now we do the same as before, but we use an inventory file to define the target IP addresses:

container# vi /etc/ansible/hosts

and add the following lines:

[vagranthosts]
192.168.33.10

In the playbook.yml we replace 192.168.33.10 by a group name, e.g. vagranthosts

---
# This playbook uses the ping module to test connectivity to Linux hosts
- name: Ping
  hosts: vagranthosts
  remote_user: vagrant

  tasks:
  - name: ping
    ping:
  - name: echo
    shell: echo HELLO WORLD! > hello_world

In order to see the difference, we also have changed the hello world to all capital letters.

Now we perform:

container# ansible-playbook -i /etc/ansible/hosts playbook.yml

Here, we have replaced the list of hosts by a reference to the inventory file.

The output looks like follows:

PLAY [Ping] *******************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [ping] ******************************************************************
ok: [192.168.33.10]

TASK: [echo] ******************************************************************
changed: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=3 changed=1 unreachable=0 failed=0

We check the result on the target machine again:

dockerhost# cat hello_world
HELLO WORLD!

We see that the file hello_world was overwritten again (all capital letters). Success!

Summary

We have shown how you can download and run Ansible as a Docker container on any machine (Windows in my case). We have prepared the Ansible container and the target for SSH connections and have shown how to perform connectivity tests and shell scripts on the remote system. In addition, we have introduced playbooks as a means to document and run several tasks with one command. Moreover, inventory files were introduced in order to manage groups of target machines.

Next steps

Following topics are looked at in the next two parts of this series:

  • Part II: Ansible Hello World reloaded will show
    • how to upload files with Ansible
    • how to create dynamic file content with Ansible using jinja2 templates
    • bind it all together by showing a common use case with dynamic shell scripts and data files
  • Part III: Salt Hello World example: same content as part I, but with Salt instead of Ansible
  • Part IV: Ansible Tower Hello World: investigates Ansible Tower, a professional Web Portal for Ansible

Open: Windows support of Ansible

Appendix A: Virtualbox Installation (Problems)

  • Download the installer. Easy.
  • When I start the installer, everything seems to be on track until I see “rolling back action” and I finally get this:
    “Oracle VM Virtualbox x.x.x Setup Wizard ended prematurely”

Resolution of the “Setup Wizard ended prematurely” Problem

Let us try to resolve the problem: the VirtualBox installer downloaded from Oracle shows the exact same error: “…ended prematurely”. So this is not a Docker bug. Playing with conversion tools from VirtualBox to VMware did not lead to the desired results.

The solution: Google is your friend; the winner is https://forums.virtualbox.org/viewtopic.php?f=6&t=61785. After backing up the registry and changing the registry entry

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Network -> MaxFilters from 8 to 20 (decimal)

and a reboot of the Laptop, the installation of Virtualbox is successful.

Note: while this workaround has worked on my Windows 7 notebook, it has not worked on my new Windows 10 machine. However, I have managed to install VirtualBox on Windows 10 by de-selecting the USB support module during the VirtualBox installation process. I remember having seen a forum post pointing to that workaround, with the additional information that the USB drivers were installed automatically at the first time a USB device was added to a host (not yet tested on my side).

Appendix B: HTTP Proxy Configuration

If you need to work behind a HTTP proxy, you need to consider several levels that need to know of it:

  • the physical host: both your browser and your terminal session (http_proxy and https_proxy variables) need it for successful vagrant init commands and for the download of the Vagrant boxes.
  • the Docker host (if it differs from the physical host): both in the Docker configuration files and in the bash session. Note that the configuration files differ between CoreOS, boot2docker and Ubuntu.
  • the Docker client: in the terminal session; needed for apt-get update and install.

Ubuntu Docker:

sudo vi /etc/default/docker

add the proxy settings, if needed, like follows (adapt host name and port to your environment):

export http_proxy='http://proxy.example.com:8080'
export https_proxy='http://proxy.example.com:8080'

then:

sudo restart docker

CoreOS:

sudo mkdir /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf

and add something like the following (adapt host name and port to your environment):

[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"

Then:

sudo reboot

or, without a reboot:

sudo systemctl daemon-reload
sudo systemctl restart docker
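The drop-in can also be prepared locally and then copied into place on the CoreOS host; a sketch, where the proxy host/port and the NO_PROXY list are placeholders to adapt:

```shell
# Sketch: prepare the systemd drop-in locally, then copy it to the CoreOS host.
# proxy.example.com:8080 and the NO_PROXY list are placeholders -- adapt them.
cat > http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
cat http-proxy.conf
# Then, on the CoreOS host:
#   sudo cp http-proxy.conf /etc/systemd/system/docker.service.d/
#   sudo systemctl daemon-reload && sudo systemctl restart docker
```

Setting NO_PROXY keeps traffic to local addresses off the proxy.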

Appendix C: How to use PuTTY for accessing Vagrant Boxes

What is to be done:

  1. locate and convert the Vagrant private key to ppk format using PuTTYgen:
    1. locate the Vagrant private key of the box. In my case, this is C:\Users\vo062111\.vagrant.d\boxes\williamyeh-VAGRANTSLASH-ubuntu-trusty64-docker\1.8.1\virtualbox\vagrant_private_key
    2. start PuTTYgen -> Conversions -> Import key -> select the above path.
    3. press “Save private key” and save it as vagrant_private_key.ppk
  2. In PuTTY,
    1. create a connection to vagrant@127.0.0.1 port 2201 or port 2222 (the port Vagrant uses is shown in the terminal during “vagrant up”)
    2. specify the ppk key under Connection -> SSH -> Auth -> (on the right) “Private key file for authentication”
    3. click on Session in the left menu and press Save
    4. press Open and accept the RSA fingerprint -> you should be able to log in without a password prompt. If there still is a password prompt, something is wrong with the private key.

Appendix D: REST APIs of Ansible, Puppet, Chef and Salt

Here, I have done some quick research on the RESTful interfaces and Web UI interfaces of Ansible, Puppet, Chef and Salt. I had not found this information in the feature comparison table on Wikipedia:

Appendix E: Install the latest Ansible Development Version

The Ansible version in the Docker image has a problem: it is version 1.9.4 (currently), while the Ansible documentation describes the latest v2.0 features. E.g. in version 1.9.4, variables in the inventory file as described in the documentation are ignored (see e.g. the example “jumper ansible_port=5555 ansible_host=192.168.1.50”), which leads to a “Could not resolve hostname” error (see also this stackoverflow post).

Here, we will show how to install the latest Ansible version inside the container. For that, run the container:

docker run -it williamyeh/ansible:ubuntu14.04-onbuild /bin/bash

ansible --version

will result in an output similar to:

ansible 1.9.4
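The decision whether an upgrade is needed can also be scripted; a small sketch that parses the major version from the version string (hard-coded here to the sample output above instead of calling the tool):

```shell
# Sample value; inside the container you would use:
#   version=$(ansible --version | head -n 1)
version="ansible 1.9.4"
# Extract the major version number (the digits after "ansible "):
major=$(echo "$version" | sed 's/^ansible \([0-9]*\).*/\1/')
if [ "$major" -ge 2 ]; then
  echo "no upgrade needed"
else
  echo "upgrade to the devel branch"
fi
```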

If you get a version >=2.0, you might not need to upgrade at all. In all other cases, perform the following steps:

If you are behind a proxy, perform something like:

export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
apt-get update; apt-get install git
git clone git://github.com/ansible/ansible.git --recursive
cd ./ansible
source ./hacking/env-setup
ansible --version

should give you some output like:

ansible 2.0.0 (devel 9b9fb51d9d) last updated ...

Now the v2.0 features should be available as well. If you want to update the version in the future, run

git pull

in the ./ansible directory.

(chapter added on 2016-04-11)

In order to explore the popularity of the software, we have looked at a Google Trends analysis of the four tools (puppet/ansible/chef/salt + “automation”) (status: November 2015):

2015.11.09-13_52_20-hc_001

Note that the Google Trends result looks quite different 5 months later (2016-04-11). Google seems to have changed its source data and/or algorithm. With the same search terms as we had used in Nov. 2015 (puppet/ansible/chef/salt + “automation”; note that the link works only if you are logged into your Google account), in April 2016 we got the following non-convincing graph:

2016.04.11-17_59_42-hc_001

Especially the analysis of Salt’s and Chef’s popularity for 2011 and before does not look very convincing.

If we search for “Software” instead via this Google Trends link (works only if you are logged into your Google account), we get something like the following:

2016.04.11-18_32_34-hc_001

This data does not look reliable either: according to Wikipedia’s Vagrant page, Vagrant’s initial release was in March 2010. Why do we see so many search hits before that date? That is not plausible. The same holds for Puppet, which started in 2005, yet shows many hits in 2004.

To be honest, Google Trends at least used to look reliable in November 2015, but it does not look reliable anymore. What a pity: I used to work a lot with Google Trends to find out which technologies are trending, but looking at the more recent results, I have lost confidence that I can rely on the data. If you know an alternative to Google Trends, please add a comment to this blog post.

In any case, for the time after 2013 it looks like the popularity of Ansible is rising quickly (if we believe the data).