
Jenkins Part 3.2: Trigger a downstream Job or Workflow with Hand-over of Parameters


This blog post leads you through the steps needed to trigger a downstream Freestyle job or Pipeline workflow from an upstream Freestyle project. We will also show how to pass a parameter from the Freestyle project to the downstream workflow or job.

In the next blog post we will make use of this method and show how it can be used to trigger a Pipeline workflow via a trigger mechanism that is supported for Freestyle projects only.

Tools and Versions used

Tested with Jenkins v2.32.2 and v2.46.2

Step Zero: Access or Install a Docker Host

If you have no Docker host available, you may consider one of the following options:

  1. (easiest procedure) Start this Katacoda scenario. Step 1 of the blog post below is identical to step 4 of the Katacoda scenario. After performing this step, the Katacoda terminal and Jenkins web page can be used to perform all other steps described below.
  2. (more work) Install a Docker host using Vagrant and VirtualBox as described in Step 1 of the blog post “Installation of Jenkins the Docker way“. While this is more work, the advantage of this option is that you can keep the Jenkins home directory persistently on your local PC.

The commands we will use in this blog post are chosen in a way that will work in both environments.

Note: the Katacoda environment does not support the mapping of volumes to the Docker host via the -v option. If you wish to keep the Jenkins home directory for later use, you may consider opting for option 2 and replacing Steps 1 and 2 with the steps found in the blog post Installation of Jenkins the Docker way. However, using Katacoda and the steps below is the quickest way to reach our goal of testing the Jenkins Pipeline plugin.
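For option 2, a minimal sketch of how the Jenkins home directory could be persisted with a named Docker volume. The -v flag is standard Docker; the /var/jenkins_home path is an assumption based on the official Jenkins image that this image is derived from:

```shell
# Sketch only (requires a real Docker host, not Katacoda):
# the named volume "jenkins_home" survives container restarts.
docker run -d --rm --name jenkins \
       -p 8080:8080 -p 50000:50000 \
       -v jenkins_home:/var/jenkins_home \
       oveits/jenkins:2.46.2-alpine-nologin-with-maven-git-pipelines
```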

Step 1: Run a pre-configured Jenkins Image

In order to skip some steps you might have seen already in part 1 and part 7 of this series, we will start a pre-configured Jenkins Docker image as follows:

docker run -d --rm --name jenkins \
       -p 8080:8080 -p 50000:50000 \
       oveits/jenkins:2.46.2-alpine-nologin-with-maven-git-pipelines

You can load the Jenkins’ dashboard by opening a browser and navigating to http://localhost:8080 (or to the link specified within the Katacoda example).

The image is prepared to skip any login credentials, and the Maven, Git and Pipeline plugins are installed and configured.

Note: if you want to start from scratch, consider following part 1 of this series. The installation and configuration of the Maven, Git and Pipeline plugins are described in part 7 of this series.

Step 2: Add Plugin: “Parameterized Trigger plugin”

-> Manage Jenkins

-> Manage Plugins

-> Available

-> search for “Parameterized Trigger plugin”

-> Install without restart

Step 3: Create downstream Pipeline Project

Let us now create our downstream Pipeline workflow job.

-> New Item

-> Enter an item name: Triggered Pipeline

-> Pipeline (Jenkins Add New Item)

-> We scroll down to the Pipeline script section and choose “try sample Pipeline…” -> Hello World

-> Save (Jenkins Configure)

Step 4: Create and Configure an upstream Freestyle Project

Why use a Freestyle project to trigger a Pipeline project? The reason is that some trigger plugins do not support Pipelines yet. For example, when I tried to use the Bitbucket Pullrequest Builder Plugin within a Pipeline project, I got a Java traceback; the same triggering mechanism works fine for Freestyle projects, though. The idea is therefore to use a Freestyle trigger mechanism like the Bitbucket Pullrequest Builder Plugin (or any other trigger mechanism supported by Freestyle projects) and to use the Parameterized Trigger plugin to trigger a Pipeline from the Freestyle project. Any parameter available in the Freestyle project can be transferred and re-used in the Pipeline project, as we will show in our simple example:

-> New Item

-> Freestyle Project

-> OK

-> Build -> Add build step -> Trigger/call builds on other projects -> Projects to build: Triggered Pipeline

Step 5: Add Parameter

For testing how to pass a parameter from the Freestyle project to the triggered Pipeline project, let us define a parameter named myparam as follows:

-> Add parameters -> Predefined parameters

-> now we add “myparam = myvalue” in the Parameters field:

Step 6: Define Parameter on Triggered Pipeline

The parameter we have defined in the Freestyle project needs to be caught by the triggered Pipeline project. For that, the Pipeline project needs to be marked as parameterized in its configuration:

-> Configure

-> This project is parameterized -> Add Parameter -> String Parameter -> Name: myparam
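As an alternative to ticking the parameterization checkbox in the UI, Pipeline also allows declaring the parameter in the script itself via the properties step. This is a hedged sketch, not part of the original post; depending on the installed plugin versions, the older `[$class: 'StringParameterDefinition', ...]` syntax may be required instead:

```groovy
// Hypothetical script-side parameter declaration (sketch, syntax may vary by plugin version):
properties([
    parameters([
        string(name: 'myparam', defaultValue: '', description: 'passed from the upstream Freestyle job')
    ])
])
```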

In the Pipeline script section, we replace the “Hello World” echo with the following code, which demonstrates three ways to use the parameter that has been passed from the Freestyle project to the Pipeline project:

node {
 echo "Hello ${env.myparam}"  // via environment variable
 echo "Hello " + myparam      // via Groovy variable, string concatenation
 echo "Hello ${myparam}"      // via Groovy variable, string interpolation
}
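Depending on the installed Pipeline plugin version, a fourth variant via the params map may also work. This is an untested sketch and not part of the original example:

```groovy
node {
    // params is a read-only map of all build parameters in newer Pipeline versions
    echo "Hello ${params.myparam}"
}
```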

Step 7: Build Freestyle Project

When we click “Build now” on the Freestyle job, this yields the following result on the “Triggered Pipeline” project:

Freestyle Project -> Build Now

Step 8: Review Results on Pipeline Project

-> Pipeline project

As we can see, all three syntax variants work.

Excellent! Thumbs up!

Summary

At this point, we have verified that

  • a Freestyle job can trigger a Pipeline workflow
  • a parameter can be passed from the Freestyle job to the Pipeline workflow

Next Steps

In the next blog post we will show how to use this method to trigger a Pipeline workflow from a pull request on a BitBucket/Stash Git repository, despite the fact that the corresponding plugin currently (v2.46.2) does not support Pipeline workflows.

 


Getting Started with DC/OS on Vagrant


In the course of this Hello World style tutorial, we will explore DC/OS, a Data Center Operating System developed and open sourced by Mesosphere with the goal of hiding the complexity of data centers. We will

  • install DC/OS on your local PC or Notebook using Vagrant and VirtualBox,
  • deploy a “hello world” application with more than one instance,
  • load balance between the application instances
  • and make sure the service is reachable from the outside world.

See also part 2: A Step towards productive Docker: installing and testing DC/OS on AWS (it starts from scratch and does not require you to have read/tested the current post).

DC/OS is a Data Center Operating System built upon Apache Mesos and Mesosphere Marathon, an open source container orchestration platform. Its goal is to hide the complexity of data centers when deploying applications: DC/OS performs the job of deploying your application on your data center hardware and automatically chooses the hardware servers to run it on. It helps you scale your application according to your needs by adding or removing application instances at the push of a button. DC/OS also makes sure that your clients’ requests are load balanced and routed to your application instances: there is no need to manually re-configure the load balancer(s) if you add or destroy an instance of your application; DC/OS takes care of this for you.

Note: If you want to get started with Marathon and Mesos first, you might be interested in this blog post, especially if the resource requirements of the current post exceed what you have at hand: for the DC/OS tutorial you will need 10 GB of RAM, while for the Marathon/Mesos tutorial, 4 GB are sufficient.

Target

What I want to do in this session:

  • Install DC/OS on the local machine using Vagrant+VirtualBox
  • Explore the networking and load balancing capabilities of DC/OS

Tools and Versions used

  • Vagrant 1.8.6
  • Virtualbox 5.0.20 r106931
  • for Windows: GNU bash, version 4.3.42(5)-release (x86_64-pc-msys)
  • DCOS 1.8.8

Prerequisites

  • 10 GB free DRAM
  • tested with 4 virtual CPUs (Quad Core CPU)
  • Git is installed

Step 1: Install Vagrant and VirtualBox

Step 1.1: Install VirtualBox

Download and install VirtualBox. I am running version 5.0.20 r106931.

If the installation fails with the error message “Setup Wizard ended prematurely”, see Appendix A: Virtualbox Installation Workaround below.

Step 1.2: Install Vagrant

Download and install Vagrant (requires a reboot).

Step 2: Download Vagrant Box

We are following the Readme on https://github.com/dcos/dcos-vagrant:

Since this might be a long-running task (especially if you are sitting in a hotel with a low-speed Internet connection, like I am at the moment), we best start by downloading DC/OS first:

(base system)$ vagrant box add https://downloads.dcos.io/dcos-vagrant/metadata.json
==> box: Loading metadata for box 'https://downloads.dcos.io/dcos-vagrant/metadata.json'
==> box: Adding box 'mesosphere/dcos-centos-virtualbox' (v0.8.0) for provider: virtualbox
 box: Downloading: https://downloads.dcos.io/dcos-vagrant/dcos-centos-virtualbox-0.8.0.box
 box: Progress: 100% (Rate: 132k/s, Estimated time remaining: --:--:--)
 box: Calculating and comparing box checksum...
==> box: Successfully added box 'mesosphere/dcos-centos-virtualbox' (v0.8.0) for 'virtualbox'!

Step 3: Clone DCOS-Vagrant Repo

In another window, we clone the dcos-vagrant Git repo:

(base system)$ git clone https://github.com/dcos/dcos-vagrant
Cloning into 'dcos-vagrant'...
remote: Counting objects: 2171, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 2171 (delta 0), reused 0 (delta 0), pack-reused 2167
Receiving objects: 100% (2171/2171), 14.98 MiB | 123.00 KiB/s, done.
Resolving deltas: 100% (1297/1297), done.
Checking connectivity... done.
(base system)$ cd dcos-vagrant

VagrantConfig.yaml shows:

m1:
 ip: 192.168.65.90
 cpus: 2
 memory: 1024
 type: master
a1:
 ip: 192.168.65.111
 cpus: 4
 memory: 6144
 memory-reserved: 512
 type: agent-private
p1:
 ip: 192.168.65.60
 cpus: 2
 memory: 1536
 memory-reserved: 512
 type: agent-public
 aliases:
 - spring.acme.org
 - oinker.acme.org
boot:
 ip: 192.168.65.50
 cpus: 2
 memory: 1024
 type: boot

m1 is the DC/OS master. Private containers will run on a1, while the load balancer containers are public and will run on p1.
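As a sanity check against the Prerequisites above: the per-VM memory values in VagrantConfig.yaml (in MB) add up to just under 10 GB, which is why 10 GB of free RAM is listed. A quick back-of-the-envelope calculation:

```shell
# Sum the memory settings from VagrantConfig.yaml shown above (values in MB)
total=$((1024 + 6144 + 1536 + 1024))   # m1 + a1 + p1 + boot
echo "Total guest RAM: ${total} MB"    # 9728 MB, i.e. ~9.5 GB
```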

Step 4: Install Vagrant Hostmanager Plugin

Installation of the Vagrant Hostmanager Plugin is required. I had tried without it, because I did not think it would work on Windows; however, vagrant up will not succeed if the plugin is not installed: its presence is checked before the Vagrant box is booted.

(base system)$ vagrant plugin install vagrant-hostmanager
Installing the 'vagrant-hostmanager' plugin. This can take a few minutes...
Installed the plugin 'vagrant-hostmanager (1.8.5)'!

Note: Some version updates later (VirtualBox 5.1.28 r117968 (Qt5.6.2)), I found out that the VirtualBox Guest Additions are also needed, in order to avoid the error message sbin/mount.vboxsf: mounting failed with the error: No such device.
For that, I needed to re-apply the command
vagrant plugin install vagrant-vbguest.

However, it still did not work. I could vagrant ssh into the box, and I found in /var/log/vboxadd-install.log that the installer did not find the kernel headers during installation of the VirtualBox Guest Additions. yum install kernel-headers returned that kernel-headers-3.10.0-693.5.2.el7.x86_64 were already installed. However, ls /usr/src/kernels/ showed that there is a directory named 3.10.0-327.36.1.el7.x86_64 instead of the expected 3.10.0-327.el7.x86_64. So I created a symbolic link with sudo ln -s 3.10.0-327.36.1.el7.x86_64 3.10.0-327.el7.x86_64 within the directory /usr/src/kernels/, and I could do a vagrant up with no problems. I guess un-installing and re-installing the headers would work as well.

All this did not work, but I have found that the build link was wrong (hint was found here):

I fixed the link with
cd /lib/modules/3.10.0-327.el7.x86_64; sudo mv build build.broken; sudo ln -s /usr/src/kernels/3.10.0-327.36.1.el7.x86_64 build
and then ran
cd /opt/VBoxGuestAdditions-*/init; sudo ./vboxadd setup

But it still did not work! I gave up and tried installing DC/OS on AWS instead. Stay tuned.

Step 5: Boot DC/OS

Below, I have set DCOS_VERSION in order to get the exact same results the next time I perform the test. If you do not set the environment variable, the latest stable version will be used when you boot up the VirtualBox VM:

(base system)$ export DCOS_VERSION=1.8.8
(base system)$ vagrant up
Vagrant Patch Loaded: GuestLinux network_interfaces (1.8.6)
Validating Plugins...
Validating User Config...
Downloading DC/OS 1.8.8 Installer...
Source: https://downloads.dcos.io/dcos/stable/commit/602edc1b4da9364297d166d4857fc8ed7b0b65ca/dcos_generate_config.sh
Destination: installers/dcos/dcos_generate_config-1.8.8.sh
Progress: 16% (Rate: 1242k/s, Estimated time remaining: 0:09:16)

The speed of the hotel Internet seems to be better now, this late in the night…

(base system)$ vagrant up
Vagrant Patch Loaded: GuestLinux network_interfaces (1.8.6)
Validating Plugins...
Validating User Config...
Downloading DC/OS 1.8.8 Installer...
Source: https://downloads.dcos.io/dcos/stable/commit/602edc1b4da9364297d166d4857fc8ed7b0b65ca/dcos_generate_config.sh
Destination: installers/dcos/dcos_generate_config-1.8.8.sh
Progress: 100% (Rate: 1612k/s, Estimated time remaining: --:--:--)
Validating Installer Checksum...
Using DC/OS Installer: installers/dcos/dcos_generate_config-1.8.8.sh
Using DC/OS Config: etc/config-1.8.yaml
Validating Machine Config...
Configuring VirtualBox Host-Only Network...
Bringing machine 'm1' up with 'virtualbox' provider...
Bringing machine 'a1' up with 'virtualbox' provider...
Bringing machine 'p1' up with 'virtualbox' provider...
Bringing machine 'boot' up with 'virtualbox' provider...
==> m1: Importing base box 'mesosphere/dcos-centos-virtualbox'...
==> m1: Matching MAC address for NAT networking...
==> m1: Checking if box 'mesosphere/dcos-centos-virtualbox' is up to date...
==> m1: Setting the name of the VM: m1.dcos
==> m1: Fixed port collision for 22 => 2222. Now on port 2201.
==> m1: Clearing any previously set network interfaces...
==> m1: Preparing network interfaces based on configuration...
    m1: Adapter 1: nat
    m1: Adapter 2: hostonly
==> m1: Forwarding ports...
    m1: 22 (guest) => 2201 (host) (adapter 1)
==> m1: Running 'pre-boot' VM customizations...
==> m1: Booting VM...
==> m1: Waiting for machine to boot. This may take a few minutes...
    m1: SSH address: 127.0.0.1:2201
    m1: SSH username: vagrant
    m1: SSH auth method: private key
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
==> m1: Machine booted and ready!
==> m1: Checking for guest additions in VM...
==> m1: Setting hostname...
==> m1: Configuring and enabling network interfaces...
==> m1: Mounting shared folders...
    m1: /vagrant => D:/veits/Vagrant/ubuntu-trusty64-docker_2017-02/dcos-vagrant
==> m1: Updating /etc/hosts file on active guest machines...
==> m1: Updating /etc/hosts file on host machine (password may be required)...
==> m1: Running provisioner: shell...
    m1: Running: inline script
==> m1: Running provisioner: dcos_ssh...
    host: Generating new keys...
==> m1: Inserting generated public key within guest...
==> m1: Configuring vagrant to connect using generated private key...
==> m1: Removing insecure key from the guest, if it's present...
==> m1: Running provisioner: shell...
    m1: Running: script: Certificate Authorities
==> m1: >>> Installing Certificate Authorities
==> m1: Running provisioner: shell...
    m1: Running: script: Install Probe
==> m1: Probe already installed: /usr/local/sbin/probe
==> m1: Running provisioner: shell...
    m1: Running: script: Install jq
==> m1: jq already installed: /usr/local/sbin/jq
==> m1: Running provisioner: shell...
    m1: Running: script: Install DC/OS Postflight
==> m1: >>> Installing DC/OS Postflight: /usr/local/sbin/dcos-postflight
==> a1: Importing base box 'mesosphere/dcos-centos-virtualbox'...
==> a1: Matching MAC address for NAT networking...
==> a1: Checking if box 'mesosphere/dcos-centos-virtualbox' is up to date...
==> a1: Setting the name of the VM: a1.dcos
==> a1: Fixed port collision for 22 => 2222. Now on port 2202.
==> a1: Clearing any previously set network interfaces...
==> a1: Preparing network interfaces based on configuration...
    a1: Adapter 1: nat
    a1: Adapter 2: hostonly
==> a1: Forwarding ports...
    a1: 22 (guest) => 2202 (host) (adapter 1)
==> a1: Running 'pre-boot' VM customizations...
==> a1: Booting VM...
==> a1: Waiting for machine to boot. This may take a few minutes...
    a1: SSH address: 127.0.0.1:2202
    a1: SSH username: vagrant
    a1: SSH auth method: private key
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
==> a1: Machine booted and ready!
==> a1: Checking for guest additions in VM...
==> a1: Setting hostname...
==> a1: Configuring and enabling network interfaces...
==> a1: Mounting shared folders...
    a1: /vagrant => D:/veits/Vagrant/ubuntu-trusty64-docker_2017-02/dcos-vagrant
==> a1: Updating /etc/hosts file on active guest machines...
==> a1: Updating /etc/hosts file on host machine (password may be required)...
==> a1: Running provisioner: shell...
    a1: Running: inline script
==> a1: Running provisioner: dcos_ssh...
    host: Found existing keys
==> a1: Inserting generated public key within guest...
==> a1: Configuring vagrant to connect using generated private key...
==> a1: Removing insecure key from the guest, if it's present...
==> a1: Running provisioner: shell...
    a1: Running: script: Certificate Authorities
==> a1: >>> Installing Certificate Authorities
==> a1: Running provisioner: shell...
    a1: Running: script: Install Probe
==> a1: Probe already installed: /usr/local/sbin/probe
==> a1: Running provisioner: shell...
    a1: Running: script: Install jq
==> a1: jq already installed: /usr/local/sbin/jq
==> a1: Running provisioner: shell...
    a1: Running: script: Install DC/OS Postflight
==> a1: >>> Installing DC/OS Postflight: /usr/local/sbin/dcos-postflight
==> a1: Running provisioner: shell...
    a1: Running: script: Install Mesos Memory Modifier
==> a1: >>> Installing Mesos Memory Modifier: /usr/local/sbin/mesos-memory
==> a1: Running provisioner: shell...
    a1: Running: script: DC/OS Agent-private
==> a1: Skipping DC/OS private agent install (boot machine will provision in parallel)
==> p1: Importing base box 'mesosphere/dcos-centos-virtualbox'...
==> p1: Matching MAC address for NAT networking...
==> p1: Checking if box 'mesosphere/dcos-centos-virtualbox' is up to date...
==> p1: Setting the name of the VM: p1.dcos
==> p1: Fixed port collision for 22 => 2222. Now on port 2203.
==> p1: Clearing any previously set network interfaces...
==> p1: Preparing network interfaces based on configuration...
    p1: Adapter 1: nat
    p1: Adapter 2: hostonly
==> p1: Forwarding ports...
    p1: 22 (guest) => 2203 (host) (adapter 1)
==> p1: Running 'pre-boot' VM customizations...
==> p1: Booting VM...
==> p1: Waiting for machine to boot. This may take a few minutes...
    p1: SSH address: 127.0.0.1:2203
    p1: SSH username: vagrant
    p1: SSH auth method: private key
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
==> p1: Machine booted and ready!
==> p1: Checking for guest additions in VM...
==> p1: Setting hostname...
==> p1: Configuring and enabling network interfaces...
==> p1: Mounting shared folders...
    p1: /vagrant => D:/veits/Vagrant/ubuntu-trusty64-docker_2017-02/dcos-vagrant
==> p1: Updating /etc/hosts file on active guest machines...
==> p1: Updating /etc/hosts file on host machine (password may be required)...
==> p1: Running provisioner: shell...
    p1: Running: inline script
==> p1: Running provisioner: dcos_ssh...
    host: Found existing keys
==> p1: Inserting generated public key within guest...
==> p1: Configuring vagrant to connect using generated private key...
==> p1: Removing insecure key from the guest, if it's present...
==> p1: Running provisioner: shell...
    p1: Running: script: Certificate Authorities
==> p1: >>> Installing Certificate Authorities
==> p1: Running provisioner: shell...
    p1: Running: script: Install Probe
==> p1: Probe already installed: /usr/local/sbin/probe
==> p1: Running provisioner: shell...
    p1: Running: script: Install jq
==> p1: jq already installed: /usr/local/sbin/jq
==> p1: Running provisioner: shell...
    p1: Running: script: Install DC/OS Postflight
==> p1: >>> Installing DC/OS Postflight: /usr/local/sbin/dcos-postflight
==> p1: Running provisioner: shell...
    p1: Running: script: Install Mesos Memory Modifier
==> p1: >>> Installing Mesos Memory Modifier: /usr/local/sbin/mesos-memory
==> p1: Running provisioner: shell...
    p1: Running: script: DC/OS Agent-public
==> p1: Skipping DC/OS public agent install (boot machine will provision in parallel)
==> boot: Importing base box 'mesosphere/dcos-centos-virtualbox'...
==> boot: Matching MAC address for NAT networking...
==> boot: Checking if box 'mesosphere/dcos-centos-virtualbox' is up to date...
==> boot: Setting the name of the VM: boot.dcos
==> boot: Fixed port collision for 22 => 2222. Now on port 2204.
==> boot: Clearing any previously set network interfaces...
==> boot: Preparing network interfaces based on configuration...
    boot: Adapter 1: nat
    boot: Adapter 2: hostonly
==> boot: Forwarding ports...
    boot: 22 (guest) => 2204 (host) (adapter 1)
==> boot: Running 'pre-boot' VM customizations...
==> boot: Booting VM...
==> boot: Waiting for machine to boot. This may take a few minutes...
    boot: SSH address: 127.0.0.1:2204
    boot: SSH username: vagrant
    boot: SSH auth method: private key
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
==> boot: Machine booted and ready!
==> boot: Checking for guest additions in VM...
==> boot: Setting hostname...
==> boot: Configuring and enabling network interfaces...
==> boot: Mounting shared folders...
    boot: /vagrant => D:/veits/Vagrant/ubuntu-trusty64-docker_2017-02/dcos-vagrant
==> boot: Updating /etc/hosts file on active guest machines...
==> boot: Updating /etc/hosts file on host machine (password may be required)...
==> boot: Running provisioner: shell...
    boot: Running: inline script
==> boot: Running provisioner: dcos_ssh...
    host: Found existing keys
==> boot: Inserting generated public key within guest...
==> boot: Configuring vagrant to connect using generated private key...
==> boot: Removing insecure key from the guest, if it's present...
==> boot: Running provisioner: shell...
    boot: Running: script: Certificate Authorities
==> boot: >>> Installing Certificate Authorities
==> boot: Running provisioner: shell...
    boot: Running: script: Install Probe
==> boot: Probe already installed: /usr/local/sbin/probe
==> boot: Running provisioner: shell...
    boot: Running: script: Install jq
==> boot: jq already installed: /usr/local/sbin/jq
==> boot: Running provisioner: shell...
    boot: Running: script: Install DC/OS Postflight
==> boot: >>> Installing DC/OS Postflight: /usr/local/sbin/dcos-postflight
==> boot: Running provisioner: shell...
    boot: Running: script: DC/OS Boot
==> boot: Error: No such image or container: zookeeper-boot
==> boot: >>> Starting zookeeper (for exhibitor bootstrap and quorum)
==> boot: a58a678182b4c60df5fd4e1a0b86407456a33c75f4289c7fd7b0ce761afed567
==> boot: Error: No such image or container: nginx-boot
==> boot: >>> Starting nginx (for distributing bootstrap artifacts to cluster)
==> boot: c4bceea034f4d7488ae5ddd6ed708640a56064b191cd3d640a3311a58c5dcb5b
==> boot: >>> Downloading dcos_generate_config.sh (for building bootstrap image for system)
==> boot:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
==> boot:                                  Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 22  723M   22  160M    0     0   171M      0  0:00:04 --:--:--  0:00:04  171M
 41  723M   41  300M    0     0   155M      0  0:00:04  0:00:01  0:00:03  139M
 65  723M   65  471M    0     0   160M      0  0:00:04  0:00:02  0:00:02  155M
 88  723M   88  642M    0     0   163M      0  0:00:04  0:00:03  0:00:01  160M
100  723M  100  723M    0     0   164M      0  0:00:04  0:00:04 --:--:--  163M
==> boot: Running provisioner: dcos_install...
==> boot: Reading etc/config-1.8.yaml
==> boot: Analyzing machines
==> boot: Generating Configuration: ~/dcos/genconf/config.yaml
==> boot: sudo: cat << EOF > ~/dcos/genconf/config.yaml
==> boot:       ---
==> boot:       master_list:
==> boot:       - 192.168.65.90
==> boot:       agent_list:
==> boot:       - 192.168.65.111
==> boot:       - 192.168.65.60
==> boot:       cluster_name: dcos-vagrant
==> boot:       bootstrap_url: http://192.168.65.50
==> boot:       exhibitor_storage_backend: static
==> boot:       master_discovery: static
==> boot:       resolvers:
==> boot:       - 10.0.2.3
==> boot:       superuser_username: admin
==> boot:       superuser_password_hash: "\$6\$rounds=656000\$123o/Qz.InhbkbsO\$kn5IkpWm5CplEorQo7jG/27LkyDgWrml36lLxDtckZkCxu22uihAJ4DOJVVnNbsz/Y5MCK3B1InquE6E7Jmh30"
==> boot:       ssh_port: 22
==> boot:       ssh_user: vagrant
==> boot:       check_time: false
==> boot:       exhibitor_zk_hosts: 192.168.65.50:2181
==> boot:
==> boot:       EOF
==> boot:
==> boot: Generating IP Detection Script: ~/dcos/genconf/ip-detect
==> boot: sudo: cat << 'EOF' > ~/dcos/genconf/ip-detect
==> boot:       #!/usr/bin/env bash
==> boot:       set -o errexit
==> boot:       set -o nounset
==> boot:       set -o pipefail
==> boot:       echo $(/usr/sbin/ip route show to match 192.168.65.90 | grep -Eo '[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}' | tail -1)
==> boot:
==> boot:       EOF
==> boot:
==> boot: Importing Private SSH Key: ~/dcos/genconf/ssh_key
==> boot: sudo: cp /vagrant/.vagrant/dcos/private_key_vagrant ~/dcos/genconf/ssh_key
==> boot:
==> boot: Generating DC/OS Installer Files: ~/dcos/genconf/serve/
==> boot: sudo: cd ~/dcos && bash ~/dcos/dcos_generate_config.sh --genconf && cp -rpv ~/dcos/genconf/serve/* /var/tmp/dcos/ && echo ok > /var/tmp/dcos/ready
==> boot:
==> boot:       Extracting image from this script and loading into docker daemon, this step can take a few minutes
==> boot:       dcos-genconf.602edc1b4da9364297-5df43052907c021eeb.tar
==> boot:       ====> EXECUTING CONFIGURATION GENERATION
==> boot:       Generating configuration files...
==> boot:       Final arguments:{
==> boot:         "adminrouter_auth_enabled":"true",
==> boot:         "bootstrap_id":"5df43052907c021eeb5de145419a3da1898c58a5",
==> boot:         "bootstrap_tmp_dir":"tmp",
==> boot:         "bootstrap_url":"http://192.168.65.50",
==> boot:         "check_time":"false",
==> boot:         "cluster_docker_credentials":"{}",
==> boot:         "cluster_docker_credentials_dcos_owned":"false",
==> boot:         "cluster_docker_credentials_enabled":"false",
==> boot:         "cluster_docker_credentials_write_to_etc":"false",
==> boot:         "cluster_docker_registry_enabled":"false",
==> boot:         "cluster_docker_registry_url":"",
==> boot:         "cluster_name":"dcos-vagrant",
==> boot:         "cluster_packages":"[\"dcos-config--setup_4869fa95533aed5aad36093272289e6bd389b458\", \"dcos-metadata--setup_4869fa95533aed5aad36093272289e6bd389b458\"]",
==> boot:         "config_id":"4869fa95533aed5aad36093272289e6bd389b458",
==> boot:         "config_yaml":"      \"agent_list\": |-\n        [\"192.168.65.111\", \"192.168.65.60\"]\n      \"bootstrap_url\": |-\n        http://192.168.65.50\n      \"check_time\": |-\n        false\n      \"cluster_name\": |-\n        dcos-vagrant\n      \"exhibitor_storage_backend\": |-\n        static\n      \"exhibitor_zk_hosts\": |-\n        192.168.65.50:2181\n      \"master_discovery\": |-\n        static\n      \"master_list\": |-\n        [\"192.168.65.90\"]\n      \"provider\": |-\n        onprem\n      \"resolvers\": |-\n        [\"10.0.2.3\"]\n      \"ssh_port\": |-\n        22\n      \"ssh_user\": |-\n        vagrant\n      \"superuser_password_hash\": |-\n        $6$rounds=656000$123o/Qz.InhbkbsO$kn5IkpWm5CplEorQo7jG/27LkyDgWrml36lLxDtckZkCxu22uihAJ4DOJVVnNbsz/Y5MCK3B1InquE6E7Jmh30\n      \"superuser_username\": |-\n        admin\n",
==> boot:         "curly_pound":"{#",
==> boot:         "custom_auth":"false",
==> boot:         "dcos_gen_resolvconf_search_str":"",
==> boot:         "dcos_image_commit":"602edc1b4da9364297d166d4857fc8ed7b0b65ca",
==> boot:         "dcos_overlay_config_attempts":"4",
==> boot:         "dcos_overlay_enable":"true",
==> boot:         "dcos_overlay_mtu":"1420",
==> boot:         "dcos_overlay_network":"{\"vtep_subnet\": \"44.128.0.0/20\", \"overlays\": [{\"prefix\": 24, \"name\": \"dcos\", \"subnet\": \"9.0.0.0/8\"}], \"vtep_mac_oui\": \"70:B3:D5:00:00:00\"}",
==> boot:         "dcos_remove_dockercfg_enable":"false",
==> boot:         "dcos_version":"1.8.8",
==> boot:         "dns_search":"",
==> boot:         "docker_remove_delay":"1hrs",
==> boot:         "docker_stop_timeout":"20secs",
==> boot:         "exhibitor_static_ensemble":"1:192.168.65.90",
==> boot:         "exhibitor_storage_backend":"static",
==> boot:         "expanded_config":"\"DO NOT USE THIS AS AN ARGUMENT TO OTHER ARGUMENTS. IT IS TEMPORARY\"",
==> boot:         "gc_delay":"2days",
==> boot:         "ip_detect_contents":"'#!/usr/bin/env bash\n\n  set -o errexit\n\n  set -o nounset\n\n  set -o pipefail\n\n  echo $(/usr/sbin/ip route show to match 192.168.65.90 | grep -Eo ''[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}''\n  | tail -1)\n\n\n  '\n",
==> boot:         "ip_detect_filename":"genconf/ip-detect",
==> boot:         "ip_detect_public_contents":"'#!/usr/bin/env bash\n\n  set -o errexit\n\n  set -o nounset\n\n  set -o pipefail\n\n  echo $(/usr/sbin/ip route show to match 192.168.65.90 | grep -Eo ''[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}''\n  | tail -1)\n\n\n  '\n",
==> boot:         "master_discovery":"static",
==> boot:         "master_dns_bindall":"true",
==> boot:         "master_list":"[\"192.168.65.90\"]",
==> boot:         "master_quorum":"1",
==> boot:         "mesos_container_logger":"org_apache_mesos_LogrotateContainerLogger",
==> boot:         "mesos_dns_ip_sources":"[\"host\", \"netinfo\"]",
==> boot:         "mesos_dns_resolvers_str":"\"resolvers\": [\"10.0.2.3\"]",
==> boot:         "mesos_hooks":"",
==> boot:         "mesos_isolation":"cgroups/cpu,cgroups/mem,disk/du,network/cni,filesystem/linux,docker/runtime,docker/volume",
==> boot:         "mesos_log_directory_max_files":"162",
==> boot:         "mesos_log_retention_count":"137",
==> boot:         "mesos_log_retention_mb":"4000",
==> boot:         "minuteman_forward_metrics":"false",
==> boot:         "minuteman_max_named_ip":"11.255.255.255",
==> boot:         "minuteman_max_named_ip_erltuple":"{11,255,255,255}",
==> boot:         "minuteman_min_named_ip":"11.0.0.0",
==> boot:         "minuteman_min_named_ip_erltuple":"{11,0,0,0}",
==> boot:         "num_masters":"1",
==> boot:         "oauth_auth_host":"https://dcos.auth0.com",
==> boot:         "oauth_auth_redirector":"https://auth.dcos.io",
==> boot:         "oauth_available":"true",
==> boot:         "oauth_client_id":"3yF5TOSzdlI45Q1xspxzeoGBe9fNxm9m",
==> boot:         "oauth_enabled":"true",
==> boot:         "oauth_issuer_url":"https://dcos.auth0.com/",
==> boot:         "package_names":"[\n  \"dcos-config\",\n  \"dcos-metadata\"\n]",
==> boot:         "provider":"onprem",
==> boot:         "resolvers":"[\"10.0.2.3\"]",
==> boot:         "resolvers_str":"10.0.2.3",
==> boot:         "rexray_config":"{\"rexray\": {\"modules\": {\"default-docker\": {\"disabled\": true}, \"default-admin\": {\"host\": \"tcp://127.0.0.1:61003\"}}, \"loglevel\": \"info\"}}",
==> boot:         "rexray_config_contents":"\"rexray:\\n  loglevel: info\\n  modules:\\n    default-admin:\\n      host: tcp://127.0.0.1:61003\\n\\\n  \\    default-docker:\\n      disabled: true\\n\"\n",
==> boot:         "rexray_config_preset":"",
==> boot:         "telemetry_enabled":"true",
==> boot:         "template_filenames":"[\n  \"dcos-config.yaml\",\n  \"cloud-config.yaml\",\n  \"dcos-metadata.yaml\",\n  \"dcos-services.yaml\"\n]",
==> boot:         "ui_banner":"false",
==> boot:         "ui_banner_background_color":"#1E232F",
==> boot:         "ui_banner_dismissible":"null",
==> boot:         "ui_banner_footer_content":"null",
==> boot:         "ui_banner_foreground_color":"#FFFFFF",
==> boot:         "ui_banner_header_content":"null",
==> boot:         "ui_banner_header_title":"null",
==> boot:         "ui_banner_image_path":"null",
==> boot:         "ui_branding":"false",
==> boot:         "ui_external_links":"false",
==> boot:         "use_mesos_hooks":"false",
==> boot:         "use_proxy":"false",
==> boot:         "user_arguments":"{\n  \"agent_list\":\"[\\\"192.168.65.111\\\", \\\"192.168.65.60\\\"]\",\n  \"bootstrap_url\":\"http://192.168.65.50\",\n  \"check_time\":\"false\",\n  \"cluster_name\":\"dcos-vagrant\",\n  \"exhibitor_storage_backend\":\"static\",\n  \"exhibitor_zk_hosts\":\"192.168.65.50:2181\",\n  \"master_discovery\":\"static\",\n  \"master_list\":\"[\\\"192.168.65.90\\\"]\",\n  \"provider\":\"onprem\",\n  \"resolvers\":\"[\\\"10.0.2.3\\\"]\",\n  \"ssh_port\":\"22\",\n  \"ssh_user\":\"vagrant\",\n  \"superuser_password_hash\":\"$6$rounds=656000$123o/Qz.InhbkbsO$kn5IkpWm5CplEorQo7jG/27LkyDgWrml36lLxDtckZkCxu22uihAJ4DOJVVnNbsz/Y5MCK3B1InquE6E7Jmh30\",\n  \"superuser_username\":\"admin\"\n}",
==> boot:         "weights":""
==> boot:       }
==> boot:       Generating configuration files...
==> boot:       Final arguments:{
==> boot:         "adminrouter_auth_enabled":"true",
==> boot:         "bootstrap_id":"5df43052907c021eeb5de145419a3da1898c58a5",
==> boot:         "bootstrap_tmp_dir":"tmp",
==> boot:         "bootstrap_url":"http://192.168.65.50",
==> boot:         "check_time":"false",
==> boot:         "cluster_docker_credentials":"{}",
==> boot:         "cluster_docker_credentials_dcos_owned":"false",
==> boot:         "cluster_docker_credentials_enabled":"false",
==> boot:         "cluster_docker_credentials_write_to_etc":"false",
==> boot:         "cluster_docker_registry_enabled":"false",
==> boot:         "cluster_docker_registry_url":"",
==> boot:         "cluster_name":"dcos-vagrant",
==> boot:         "cluster_packages":"[\"dcos-config--setup_4869fa95533aed5aad36093272289e6bd389b458\", \"dcos-metadata--setup_4869fa95533aed5aad36093272289e6bd389b458\"]",
==> boot:         "config_id":"4869fa95533aed5aad36093272289e6bd389b458",
==> boot:         "config_yaml":"      \"agent_list\": |-\n        [\"192.168.65.111\", \"192.168.65.60\"]\n      \"bootstrap_url\": |-\n        http://192.168.65.50\n      \"check_time\": |-\n        false\n      \"cluster_name\": |-\n        dcos-vagrant\n      \"exhibitor_storage_backend\": |-\n        static\n      \"exhibitor_zk_hosts\": |-\n        192.168.65.50:2181\n      \"master_discovery\": |-\n        static\n      \"master_list\": |-\n        [\"192.168.65.90\"]\n      \"provider\": |-\n        onprem\n      \"resolvers\": |-\n        [\"10.0.2.3\"]\n      \"ssh_port\": |-\n        22\n      \"ssh_user\": |-\n        vagrant\n      \"superuser_password_hash\": |-\n        $6$rounds=656000$123o/Qz.InhbkbsO$kn5IkpWm5CplEorQo7jG/27LkyDgWrml36lLxDtckZkCxu22uihAJ4DOJVVnNbsz/Y5MCK3B1InquE6E7Jmh30\n      \"superuser_username\": |-\n        admin\n",
==> boot:         "curly_pound":"{#",
==> boot:         "custom_auth":"false",
==> boot:         "dcos_gen_resolvconf_search_str":"",
==> boot:         "dcos_image_commit":"602edc1b4da9364297d166d4857fc8ed7b0b65ca",
==> boot:         "dcos_overlay_config_attempts":"4",
==> boot:         "dcos_overlay_enable":"true",
==> boot:         "dcos_overlay_mtu":"1420",
==> boot:         "dcos_overlay_network":"{\"vtep_subnet\": \"44.128.0.0/20\", \"overlays\": [{\"prefix\": 24, \"name\": \"dcos\", \"subnet\": \"9.0.0.0/8\"}], \"vtep_mac_oui\": \"70:B3:D5:00:00:00\"}",
==> boot:         "dcos_remove_dockercfg_enable":"false",
==> boot:         "dcos_version":"1.8.8",
==> boot:         "dns_search":"",
==> boot:         "docker_remove_delay":"1hrs",
==> boot:         "docker_stop_timeout":"20secs",
==> boot:         "exhibitor_static_ensemble":"1:192.168.65.90",
==> boot:         "exhibitor_storage_backend":"static",
==> boot:         "expanded_config":"\"DO NOT USE THIS AS AN ARGUMENT TO OTHER ARGUMENTS. IT IS TEMPORARY\"",
==> boot:         "gc_delay":"2days",
==> boot:         "ip_detect_contents":"'#!/usr/bin/env bash\n\n  set -o errexit\n\n  set -o nounset\n\n  set -o pipefail\n\n  echo $(/usr/sbin/ip route show to match 192.168.65.90 | grep -Eo ''[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}''\n  | tail -1)\n\n\n  '\n",
==> boot:         "ip_detect_filename":"genconf/ip-detect",
==> boot:         "ip_detect_public_contents":"'#!/usr/bin/env bash\n\n  set -o errexit\n\n  set -o nounset\n\n  set -o pipefail\n\n  echo $(/usr/sbin/ip route show to match 192.168.65.90 | grep -Eo ''[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}''\n  | tail -1)\n\n\n  '\n",
==> boot:         "master_discovery":"static",
==> boot:         "master_dns_bindall":"true",
==> boot:         "master_list":"[\"192.168.65.90\"]",
==> boot:         "master_quorum":"1",
==> boot:         "mesos_container_logger":"org_apache_mesos_LogrotateContainerLogger",
==> boot:         "mesos_dns_ip_sources":"[\"host\", \"netinfo\"]",
==> boot:         "mesos_dns_resolvers_str":"\"resolvers\": [\"10.0.2.3\"]",
==> boot:         "mesos_hooks":"",
==> boot:         "mesos_isolation":"cgroups/cpu,cgroups/mem,disk/du,network/cni,filesystem/linux,docker/runtime,docker/volume",
==> boot:         "mesos_log_directory_max_files":"162",
==> boot:         "mesos_log_retention_count":"137",
==> boot:         "mesos_log_retention_mb":"4000",
==> boot:         "minuteman_forward_metrics":"false",
==> boot:         "minuteman_max_named_ip":"11.255.255.255",
==> boot:         "minuteman_max_named_ip_erltuple":"{11,255,255,255}",
==> boot:         "minuteman_min_named_ip":"11.0.0.0",
==> boot:         "minuteman_min_named_ip_erltuple":"{11,0,0,0}",
==> boot:         "num_masters":"1",
==> boot:         "oauth_auth_host":"https://dcos.auth0.com",
==> boot:         "oauth_auth_redirector":"https://auth.dcos.io",
==> boot:         "oauth_available":"true",
==> boot:         "oauth_client_id":"3yF5TOSzdlI45Q1xspxzeoGBe9fNxm9m",
==> boot:         "oauth_enabled":"true",
==> boot:         "oauth_issuer_url":"https://dcos.auth0.com/",
==> boot:         "package_names":"[\n  \"dcos-config\",\n  \"dcos-metadata\"\n]",
==> boot:         "provider":"onprem",
==> boot:         "resolvers":"[\"10.0.2.3\"]",
==> boot:         "resolvers_str":"10.0.2.3",
==> boot:         "rexray_config":"{\"rexray\": {\"modules\": {\"default-docker\": {\"disabled\": true}, \"default-admin\": {\"host\": \"tcp://127.0.0.1:61003\"}}, \"loglevel\": \"info\"}}",
==> boot:         "rexray_config_contents":"\"rexray:\\n  loglevel: info\\n  modules:\\n    default-admin:\\n      host: tcp://127.0.0.1:61003\\n\\\n  \\    default-docker:\\n      disabled: true\\n\"\n",
==> boot:         "rexray_config_preset":"",
==> boot:         "telemetry_enabled":"true",
==> boot:         "template_filenames":"[\n  \"dcos-config.yaml\",\n  \"cloud-config.yaml\",\n  \"dcos-metadata.yaml\",\n  \"dcos-services.yaml\"\n]",
==> boot:         "ui_banner":"false",
==> boot:         "ui_banner_background_color":"#1E232F",
==> boot:         "ui_banner_dismissible":"null",
==> boot:         "ui_banner_footer_content":"null",
==> boot:         "ui_banner_foreground_color":"#FFFFFF",
==> boot:         "ui_banner_header_content":"null",
==> boot:         "ui_banner_header_title":"null",
==> boot:         "ui_banner_image_path":"null",
==> boot:         "ui_branding":"false",
==> boot:         "ui_external_links":"false",
==> boot:         "use_mesos_hooks":"false",
==> boot:         "use_proxy":"false",
==> boot:         "user_arguments":"{\n  \"agent_list\":\"[\\\"192.168.65.111\\\", \\\"192.168.65.60\\\"]\",\n  \"bootstrap_url\":\"http://192.168.65.50\",\n  \"check_time\":\"false\",\n  \"cluster_name\":\"dcos-vagrant\",\n  \"exhibitor_storage_backend\":\"static\",\n  \"exhibitor_zk_hosts\":\"192.168.65.50:2181\",\n  \"master_discovery\":\"static\",\n  \"master_list\":\"[\\\"192.168.65.90\\\"]\",\n  \"provider\":\"onprem\",\n  \"resolvers\":\"[\\\"10.0.2.3\\\"]\",\n  \"ssh_port\":\"22\",\n  \"ssh_user\":\"vagrant\",\n  \"superuser_password_hash\":\"$6$rounds=656000$123o/Qz.InhbkbsO$kn5IkpWm5CplEorQo7jG/27LkyDgWrml36lLxDtckZkCxu22uihAJ4DOJVVnNbsz/Y5MCK3B1InquE6E7Jmh30\",\n  \"superuser_username\":\"admin\"\n}",
==> boot:         "weights":""
==> boot:       }
==> boot:       Package filename: packages/dcos-config/dcos-config--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz
==> boot:       Package filename: packages/dcos-metadata/dcos-metadata--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz
==> boot:       Generating Bash configuration files for DC/OS
==> boot:       ‘/root/dcos/genconf/serve/bootstrap’ -> ‘/var/tmp/dcos/bootstrap’
==> boot:       ‘/root/dcos/genconf/serve/bootstrap/5df43052907c021eeb5de145419a3da1898c58a5.bootstrap.tar.xz’ -> ‘/var/tmp/dcos/bootstrap/5df43052907c021eeb5de145419a3da1898c58a5.bootstrap.tar.xz’
==> boot:       ‘/root/dcos/genconf/serve/bootstrap/5df43052907c021eeb5de145419a3da1898c58a5.active.json’ -> ‘/var/tmp/dcos/bootstrap/5df43052907c021eeb5de145419a3da1898c58a5.active.json’
==> boot:       ‘/root/dcos/genconf/serve/bootstrap.latest’ -> ‘/var/tmp/dcos/bootstrap.latest’
==> boot:       ‘/root/dcos/genconf/serve/cluster-package-info.json’ -> ‘/var/tmp/dcos/cluster-package-info.json’
==> boot:       ‘/root/dcos/genconf/serve/dcos_install.sh’ -> ‘/var/tmp/dcos/dcos_install.sh’
==> boot:       ‘/root/dcos/genconf/serve/packages’ -> ‘/var/tmp/dcos/packages’
==> boot:       ‘/root/dcos/genconf/serve/packages/dcos-metadata’ -> ‘/var/tmp/dcos/packages/dcos-metadata’
==> boot:       ‘/root/dcos/genconf/serve/packages/dcos-metadata/dcos-metadata--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz’ -> ‘/var/tmp/dcos/packages/dcos-metadata/dcos-metadata--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz’
==> boot:       ‘/root/dcos/genconf/serve/packages/dcos-config’ -> ‘/var/tmp/dcos/packages/dcos-config’
==> boot:       ‘/root/dcos/genconf/serve/packages/dcos-config/dcos-config--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz’ -> ‘/var/tmp/dcos/packages/dcos-config/dcos-config--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz’
==> m1: Installing DC/OS (master)
==> m1: sudo: bash -ceu "curl --fail --location --silent --show-error --verbose http://boot.dcos/dcos_install.sh | bash -s -- master"
==> m1:
==> m1:       * About to connect() to boot.dcos port 80 (#0)
==> m1:       *   Trying 192.168.65.50...
==> m1:       * Connected to boot.dcos (192.168.65.50) port 80 (#0)
==> m1:       > GET /dcos_install.sh HTTP/1.1
==> m1:       > User-Agent: curl/7.29.0
==> m1:       > Host: boot.dcos
==> m1:       > Accept: */*
==> m1:       >
==> m1:       < HTTP/1.1 200 OK
==> m1:       < Server: nginx/1.11.4
==> m1:       < Date: Tue, 07 Mar 2017 22:46:20 GMT
==> m1:       < Content-Type: application/octet-stream
==> m1:       < Content-Length: 15293
==> m1:       < Last-Modified: Tue, 07 Mar 2017 22:46:11 GMT
==> m1:       < Connection: keep-alive
==> m1:       < ETag: "58bf3833-3bbd"
==> m1:       < Accept-Ranges: bytes
==> m1:       <
==> m1:       { [data not shown]
==> m1:       * Connection #0 to host boot.dcos left intact
==> m1:       Starting DC/OS Install Process
==> m1:       Running preflight checks
==> m1:       Checking if DC/OS is already installed:
==> m1:       PASS (Not installed)
==> m1:       PASS Is SELinux disabled?
==> m1:       Checking if docker is installed and in PATH:
==> m1:       PASS
==> m1:       Checking docker version requirement (>= 1.6):
==> m1:       PASS (1.11.2)
==> m1:       Checking if curl is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if bash is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if ping is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if tar is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if xz is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if unzip is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if ipset is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if systemd-notify is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if systemd is installed and in PATH:
==> m1:       PASS
==> m1:       Checking systemd version requirement (>= 200):
==> m1:       PASS (219)
==> m1:       Checking if group 'nogroup' exists:
==> m1:       PASS
==> m1:       Checking if port 53 (required by spartan) is in use:
==> m1:       PASS
==> m1:       Checking if port 80 (required by adminrouter) is in use:
==> m1:       PASS
==> m1:       Checking if port 443 (required by adminrouter) is in use:
==> m1:       PASS
==> m1:       Checking if port 1050 (required by 3dt) is in use:
==> m1:       PASS
==> m1:       Checking if port 2181 (required by zookeeper) is in use:
==> m1:       PASS
==> m1:       Checking if port 5050 (required by mesos-master) is in use:
==> m1:       PASS
==> m1:       Checking if port 7070 (required by cosmos) is in use:
==> m1:       PASS
==> m1:       Checking if port 8080 (required by marathon) is in use:
==> m1:       PASS
==> m1:       Checking if port 8101 (required by dcos-oauth) is in use:
==> m1:       PASS
==> m1:       Checking if port 8123 (required by mesos-dns) is in use:
==> m1:       PASS
==> m1:       Checking if port 8181 (required by exhibitor) is in use:
==> m1:       PASS
==> m1:       Checking if port 9000 (required by metronome) is in use:
==> m1:       PASS
==> m1:       Checking if port 9942 (required by metronome) is in use:
==> m1:       PASS
==> m1:       Checking if port 9990 (required by cosmos) is in use:
==> m1:       PASS
==> m1:       Checking if port 15055 (required by dcos-history) is in use:
==> m1:       PASS
==> m1:       Checking if port 33107 (required by navstar) is in use:
==> m1:       PASS
==> m1:       Checking if port 36771 (required by marathon) is in use:
==> m1:       PASS
==> m1:       Checking if port 41281 (required by zookeeper) is in use:
==> m1:       PASS
==> m1:       Checking if port 42819 (required by spartan) is in use:
==> m1:       PASS
==> m1:       Checking if port 43911 (required by minuteman) is in use:
==> m1:       PASS
==> m1:       Checking if port 46839 (required by metronome) is in use:
==> m1:       PASS
==> m1:       Checking if port 61053 (required by mesos-dns) is in use:
==> m1:       PASS
==> m1:       Checking if port 61420 (required by epmd) is in use:
==> m1:       PASS
==> m1:       Checking if port 61421 (required by minuteman) is in use:
==> m1:       PASS
==> m1:       Checking if port 62053 (required by spartan) is in use:
==> m1:       PASS
==> m1:       Checking if port 62080 (required by navstar) is in use:
==> m1:       PASS
==> m1:       Checking Docker is configured with a production storage driver:
==> m1:       WARNING: bridge-nf-call-iptables is disabled
==> m1:       WARNING: bridge-nf-call-ip6tables is disabled
==> m1:       PASS (overlay)
==> m1:       Creating directories under /etc/mesosphere
==> m1:       Creating role file for master
==> m1:       Configuring DC/OS
==> m1:       Setting and starting DC/OS
==> m1:       Created symlink from /etc/systemd/system/multi-user.target.wants/dcos-setup.service to /etc/systemd/system/dcos-setup.service.
==> a1: Installing DC/OS (agent)
==> p1: Installing DC/OS (agent-public)
==> a1: sudo: bash -ceu "curl --fail --location --silent --show-error --verbose http://boot.dcos/dcos_install.sh | bash -s -- slave"
==> p1: sudo: bash -ceu "curl --fail --location --silent --show-error --verbose http://boot.dcos/dcos_install.sh | bash -s -- slave_public"
==> a1:
==> p1:
==> a1:       * About to connect() to boot.dcos port 80 (#0)
==> p1:       * About to connect() to boot.dcos port 80 (#0)
==> a1:       *   Trying 192.168.65.50...
==> p1:       *   Trying 192.168.65.50...
==> a1:       * Connected to boot.dcos (192.168.65.50) port 80 (#0)
==> p1:       * Connected to boot.dcos (192.168.65.50) port 80 (#0)
==> p1:       > GET /dcos_install.sh HTTP/1.1
==> p1:       > User-Agent: curl/7.29.0
==> p1:       > Host: boot.dcos
==> p1:       > Accept: */*
==> p1:       >
==> a1:       > GET /dcos_install.sh HTTP/1.1
==> a1:       > User-Agent: curl/7.29.0
==> a1:       > Host: boot.dcos
==> a1:       > Accept: */*
==> a1:       >
==> p1:       < HTTP/1.1 200 OK
==> p1:       < Server: nginx/1.11.4
==> p1:       < Date: Tue, 07 Mar 2017 22:48:31 GMT
==> p1:       < Content-Type: application/octet-stream
==> p1:       < Content-Length: 15293
==> p1:       < Last-Modified: Tue, 07 Mar 2017 22:46:11 GMT
==> p1:       < Connection: keep-alive
==> p1:       < ETag: "58bf3833-3bbd"
==> p1:       < Accept-Ranges: bytes
==> p1:       <
==> p1:       { [data not shown]
==> a1:       < HTTP/1.1 200 OK
==> a1:       < Server: nginx/1.11.4
==> a1:       < Date: Tue, 07 Mar 2017 22:48:31 GMT
==> a1:       < Content-Type: application/octet-stream
==> a1:       < Content-Length: 15293
==> a1:       < Last-Modified: Tue, 07 Mar 2017 22:46:11 GMT
==> a1:       < Connection: keep-alive
==> a1:       < ETag: "58bf3833-3bbd"
==> a1:       < Accept-Ranges: bytes
==> a1:       <
==> a1:       { [data not shown]
==> p1:       * Connection #0 to host boot.dcos left intact
==> a1:       * Connection #0 to host boot.dcos left intact
==> p1:       Starting DC/OS Install Process
==> p1:       Running preflight checks
==> p1:       Checking if DC/OS is already installed: PASS (Not installed)
==> a1:       Starting DC/OS Install Process
==> a1:       Running preflight checks
==> a1:       Checking if DC/OS is already installed: PASS (Not installed)
==> a1:       PASS Is SELinux disabled?
==> p1:       PASS Is SELinux disabled?
==> p1:       Checking if docker is installed and in PATH:
==> p1:       PASS
==> p1:       Checking docker version requirement (>= 1.6):
==> p1:       PASS (1.11.2)
==> p1:       Checking if curl is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if bash is installed and in PATH:
==> a1:       Checking if docker is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if ping is installed and in PATH:
==> a1:       PASS
==> a1:       Checking docker version requirement (>= 1.6):
==> p1:       PASS
==> p1:       Checking if tar is installed and in PATH:
==> a1:       PASS (1.11.2)
==> p1:       PASS
==> a1:       Checking if curl is installed and in PATH:
==> p1:       Checking if xz is installed and in PATH:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking if unzip is installed and in PATH:
==> a1:       Checking if bash is installed and in PATH:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking if ipset is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if systemd-notify is installed and in PATH:
==> a1:       Checking if ping is installed and in PATH:
==> p1:       PASS
==> a1:       PASS
==> a1:       Checking if tar is installed and in PATH:
==> p1:       Checking if systemd is installed and in PATH:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking systemd version requirement (>= 200):
==> a1:       Checking if xz is installed and in PATH:
==> p1:       PASS (219)
==> p1:       Checking if group 'nogroup' exists:
==> p1:       PASS
==> p1:       Checking if port 53 (required by spartan) is in use:
==> a1:       PASS
==> a1:       Checking if unzip is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if port 5051 (required by mesos-agent) is in use:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking if port 34451 (required by navstar) is in use:
==> a1:       Checking if ipset is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if port 39851 (required by spartan) is in use:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking if port 43995 (required by minuteman) is in use:
==> a1:       Checking if systemd-notify is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if port 61001 (required by agent-adminrouter) is in use:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking if port 61420 (required by epmd) is in use:
==> a1:       Checking if systemd is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if port 61421 (required by minuteman) is in use:
==> p1:       PASS
==> p1:       Checking if port 62053 (required by spartan) is in use:
==> a1:       PASS
==> a1:       Checking systemd version requirement (>= 200):
==> a1:       PASS (219)
==> a1:       Checking if group 'nogroup' exists:
==> p1:       PASS
==> p1:       Checking if port 62080 (required by navstar) is in use:
==> a1:       PASS
==> a1:       Checking if port 53 (required by spartan) is in use:
==> p1:       PASS
==> p1:       Checking Docker is configured with a production storage driver:
==> a1:       PASS
==> a1:       Checking if port 5051 (required by mesos-agent) is in use:
==> p1:       WARNING: bridge-nf-call-iptables is disabled
==> p1:       WARNING: bridge-nf-call-ip6tables is disabled
==> a1:       PASS
==> a1:       Checking if port 34451 (required by navstar) is in use:
==> p1:       PASS (overlay)
==> p1:       Creating directories under /etc/mesosphere
==> a1:       PASS
==> a1:       Checking if port 39851 (required by spartan) is in use:
==> p1:       Creating role file for slave_public
==> a1:       PASS
==> a1:       Checking if port 43995 (required by minuteman) is in use:
==> p1:       Configuring DC/OS
==> a1:       PASS
==> a1:       Checking if port 61001 (required by agent-adminrouter) is in use:
==> a1:       PASS
==> a1:       Checking if port 61420 (required by epmd) is in use:
==> a1:       PASS
==> a1:       Checking if port 61421 (required by minuteman) is in use:
==> a1:       PASS
==> a1:       Checking if port 62053 (required by spartan) is in use:
==> a1:       PASS
==> a1:       Checking if port 62080 (required by navstar) is in use:
==> a1:       PASS
==> a1:       Checking Docker is configured with a production storage driver:
==> p1:       Setting and starting DC/OS
==> a1:       WARNING: bridge-nf-call-iptables is disabled
==> a1:       WARNING: bridge-nf-call-ip6tables is disabled
==> a1:       PASS (overlay)
==> a1:       Creating directories under /etc/mesosphere
==> a1:       Creating role file for slave
==> a1:       Configuring DC/OS
==> a1:       Setting and starting DC/OS
==> a1:       Created symlink from /etc/systemd/system/multi-user.target.wants/dcos-setup.service to /etc/systemd/system/dcos-setup.service.
==> p1:       Created symlink from /etc/systemd/system/multi-user.target.wants/dcos-setup.service to /etc/systemd/system/dcos-setup.service.
==> m1: DC/OS Postflight
==> a1: DC/OS Postflight
==> p1: DC/OS Postflight
==> m1: sudo: dcos-postflight
==> a1: sudo: dcos-postflight
==> p1: sudo: dcos-postflight
==> a1:
==> p1:
==> m1:
==> a1: Setting Mesos Memory: 5632 (role=*)
==> a1: sudo: mesos-memory 5632
==> a1:
==> a1:       Updating /var/lib/dcos/mesos-resources
==> a1: Restarting Mesos Agent
==> a1: sudo: bash -ceu "systemctl stop dcos-mesos-slave.service && rm -f /var/lib/mesos/slave/meta/slaves/latest && systemctl start dcos-mesos-slave.service --no-block"
==> a1:
==> p1: Setting Mesos Memory: 1024 (role=slave_public)
==> p1: sudo: mesos-memory 1024 slave_public
==> p1:
==> p1:       Updating /var/lib/dcos/mesos-resources
==> p1: Restarting Mesos Agent
==> p1: sudo: bash -ceu "systemctl stop dcos-mesos-slave-public.service && rm -f /var/lib/mesos/slave/meta/slaves/latest && systemctl start dcos-mesos-slave-public.service --no-block"
==> p1:
==> boot: DC/OS Installation Complete
==> boot: Web Interface: http://m1.dcos/

The VirtualBox GUI shows the four machines we had seen in the VagrantConfig.yaml. They are up and running:

Picture showing 4 VirtualBox machines m1.dcos, a1.dcos, p1.dcos and boot.dcos

Step 6: Log into the DC/OS GUI

Now let us access the Web UI on m1.dcos:

The Vagrant Hostmanager plugin also works on Windows: we can verify this by reading the hosts file at C:\Windows\System32\drivers\etc\hosts. It contains the DNS mappings for the four machines (a1.dcos, boot.dcos, m1.dcos and p1.dcos). The DNS mapping for spring.acme.org with alias oinker.acme.org will still be missing in your case; it will be added in a later step, when we install the Marathon load balancer based on HAProxy.

The host manager has added m1 and some other FQDNs to the hosts file (found on C:\Windows\System32\drivers\etc\hosts):

## vagrant-hostmanager-start id: 9f1502eb-71bf-4e6a-b3bc-44a83db628b7
192.168.65.111 a1.dcos
192.168.65.50 boot.dcos
192.168.65.90 m1.dcos
192.168.65.60 p1.dcos
192.168.65.60 spring.acme.org oinker.acme.org
## vagrant-hostmanager-end
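
For a quick check from a shell, the hostmanager block can be extracted with standard text tools. The sketch below operates on a copy of the block saved to /tmp/hosts.sample (an illustrative path of my choosing; on Windows, simply open the hosts file in an editor instead):

```shell
# Save a copy of the hostmanager block (verbatim from the hosts file above):
cat > /tmp/hosts.sample <<'EOF'
## vagrant-hostmanager-start id: 9f1502eb-71bf-4e6a-b3bc-44a83db628b7
192.168.65.111 a1.dcos
192.168.65.50 boot.dcos
192.168.65.90 m1.dcos
192.168.65.60 p1.dcos
192.168.65.60 spring.acme.org oinker.acme.org
## vagrant-hostmanager-end
EOF

# Print hostname -> IP for each entry between the start/end markers,
# skipping the comment lines (aliases beyond the first name are dropped):
awk '/vagrant-hostmanager-start/,/vagrant-hostmanager-end/ {
  if ($1 !~ /^#/) print $2, "->", $1
}' /tmp/hosts.sample
```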

After logging in via Google,

and pressing the Allow button, we reach the DC/OS Dashboard:

(scrolling down)

Step 7: Install the DCOS CLI

Now we will continue to follow the DC/OS 101 Tutorial and install the DC/OS CLI. This can be done by clicking the profile on the lower left of the Web GUI:

Choose the operating system you are working on. In my case, I am on a Windows system and have performed the following steps:

Step 8: Configure DC/OS Master URL

First, we cd into the folder where dcos.exe is located (D:\veits\downloads\DCOS CLI in my case), before we configure the core DC/OS URL:

Windows> cd /D "D:\veits\downloads\DCOS CLI"
Windows> dcos config set core.dcos_url http://m1.dcos
Windows> dcos
Command line utility for the Mesosphere Datacenter Operating
System (DC/OS). The Mesosphere DC/OS is a distributed operating
system built around Apache Mesos. This utility provides tools
for easy management of a DC/OS installation.

Available DC/OS commands:

        auth            Authenticate to DC/OS cluster
        config          Manage the DC/OS configuration file
        experimental    Experimental commands. These commands are under development and are subject to change
        help            Display help information about DC/OS
        job             Deploy and manage jobs in DC/OS
        marathon        Deploy and manage applications to DC/OS
        node            Administer and manage DC/OS cluster nodes
        package         Install and manage DC/OS software packages
        service         Manage DC/OS services
        task            Manage DC/OS tasks

Get detailed command description with 'dcos <command> --help'.
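
Under the hood, dcos config set persists the setting to the CLI configuration file (~/.dcos/dcos.toml by default). A rough sketch of the relevant entry after the command above (illustrative only; the exact contents vary by CLI version):

```toml
# ~/.dcos/dcos.toml (default location) after:
#   dcos config set core.dcos_url http://m1.dcos
[core]
dcos_url = "http://m1.dcos"
```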

Step 9: Receive Token from the DC/OS Master

Windows> dcos auth login

Please go to the following link in your browser:

    http://m1.dcos/login?redirect_uri=urn:ietf:wg:oauth:2.0:oob
Enter OpenID Connect ID Token:eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIm...-YqOARGFN5Ewcf6YWlw <-------(shortened)
Login successful! 

Here, I have copied the link marked in red and pasted it into the browser URL field:

Then logged in as Google user:

-> signed in with Google

-> clicked Copy to Clipboard

-> pasted the clipboard into the terminal as shown above already (here again) and pressed <enter>:

Enter OpenID Connect ID Token:eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIm...-YqOARGFN5Ewcf6YWlw <-------(shortened)
Login successful!

With that, you make sure only you have access to the (virtual) cluster.

Step 10 (optional): Explore DC/OS and Marathon

With the dcos service command, we can see that Marathon is already running:

Windows> dcos service
NAME           HOST      ACTIVE  TASKS  CPU  MEM  DISK  ID
marathon  192.168.65.90   True     0    0.0  0.0  0.0   1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-0001

With dcos node we see that two (virtual) nodes are connected (as we might have noticed on the dashboard as well):

Windows> dcos node
   HOSTNAME           IP                           ID
192.168.65.111  192.168.65.111  1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2
192.168.65.60   192.168.65.60   1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S3

The first one is a1, the private agent, and the second one is p1, the public agent.

With dcos node log --leader we can check the Mesos master log:

Windows> dcos node log --leader
dcos-log is not supported
Falling back to files API...
I0309 13:11:45.152153  3217 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45654 with User-Agent='python-requests/2.10.0'
I0309 13:11:47.176911  3214 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45660 with User-Agent='python-requests/2.10.0'
I0309 13:11:48.039836  3214 http.cpp:390] HTTP GET for /master/state from 192.168.65.90:41141 with User-Agent='Mesos-State / Host: m1, Pid: 5258'
I0309 13:11:49.195853  3216 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45666 with User-Agent='python-requests/2.10.0'
I0309 13:11:51.216013  3217 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45672 with User-Agent='python-requests/2.10.0'
I0309 13:11:51.376802  3217 master.cpp:5478] Performing explicit task state reconciliation for 1 tasks of framework 1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-0001 (marathon) at scheduler-1a712a58-a49a-4c45-a89a-823b827a49bf@192.168.65.90:15101
I0309 13:11:53.236994  3217 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45678 with User-Agent='python-requests/2.10.0'
I0309 13:11:55.257347  3216 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45684 with User-Agent='python-requests/2.10.0'
I0309 13:11:57.274785  3217 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45690 with User-Agent='python-requests/2.10.0'
I0309 13:11:57.462590  3213 http.cpp:390] HTTP GET for /master/state.json from 192.168.65.90:45704 with User-Agent='Mesos-DNS'

Finally, dcos help shows the following output:

Windows> dcos help
Description:
    The Mesosphere Datacenter Operating System (DC/OS) spans all of the machines in
your datacenter or cloud and treats them as a single, shared set of resources.

Usage:
    dcos [options] [<command>] [<args>...]

Options:
    --debug
        Enable debug mode.
    --help
        Print usage.
    --log-level=<log-level>
        Set the logging level. This setting does not affect the output sent to
        stdout. The severity levels are:
        * debug    Prints all messages.
        * info     Prints informational, warning, error, and critical messages.
        * warning  Prints warning, error, and critical messages.
        * error    Prints error and critical messages.
        * critical Prints only critical messages to stderr.
    --version
        Print version information

Environment Variables:
    DCOS_CONFIG
        Set the path to the DC/OS configuration file. By default, this variable
        is set to ~/.dcos/dcos.toml.
    DCOS_DEBUG
        Indicates whether to print additional debug messages to stdout. By
        default this is set to false.
    DCOS_LOG_LEVEL
        Prints log messages to stderr at or above the level indicated. This is
        equivalent to the --log-level command-line option.

You can also check the CLI documentation.

Step 11: Deploy a Hello World Service per GUI

If you follow steps 11 and 12, you will see in step 13 that the default networking settings are sub-optimal. You may skip steps 11 to 13 and jump directly to step 14, if you wish to create a hello service with improved networking, including load balancing.

Now we will create a Hello World service. For that, log into DC/OS, if not done already, and navigate to Services:

-> Deploy Service (DC/OS)


Here we have chosen only 0.1 CPU, since Mesos is quite strict about resource reservations: the sum of CPUs reserved for all applications cannot exceed the number of CPUs at hand, even if the applications do not really need the resources. We saw this in my previous Mesos blog post, where we deployed hello world applications that only printed a “Hello World” once a second, each with a reservation of one CPU. With two CPUs available, I could not start more than two such hello world applications.
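
The reservation arithmetic above can be sketched as follows. This is only an illustration of the accounting, not an actual Mesos API call; the function name max_instances is ours:

```python
from fractions import Fraction

# Mesos' strict CPU accounting: the sum of reserved CPUs may not exceed the
# CPUs at hand, so the number of startable instances is bounded by the
# per-instance reservation, regardless of actual usage.
def max_instances(total_cpus, cpus_per_instance):
    # Exact arithmetic avoids float surprises with values like 0.1.
    return int(Fraction(str(total_cpus)) / Fraction(str(cpus_per_instance)))

print(max_instances(2, 1.0))  # 1-CPU hello world apps on 2 CPUs -> 2
print(max_instances(2, 0.1))  # 0.1-CPU reservations on 2 CPUs  -> 20
```

With 0.1 CPU per instance, the same two CPUs would admit up to 20 instances instead of two.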

Let us deploy a container from the image nginxdemos/hello:


Now the Service is getting deployed:

Step 12: Connect to the NginX Service

When we click on the nginx-via-gui service name, we will see that the service is running on the private Mesos agent a1 on 192.168.65.111:

We can directly access the service by entering the private agent’s IP address 192.168.65.111 or its name a1.dcos in the browser’s URL field:

Here we can see that we have a quite simple networking model: the Windows host uses IP address 192.168.65.1 to reach the server on 192.168.65.111, which is the private Mesos agent’s IP address. The NginX container is just sharing the private agent’s network interface.

Because of the simple networking model, that was easier than expected:

  1. In other situations, you often need to configure port forwarding on the VirtualBox VM, but not this time: the Mesos agent is configured with a secondary Ethernet interface with host networking, which allows us to connect from the VirtualBox host to any port of the private agent without VirtualBox port forwarding.
  2. In other situations, you often need to configure a port mapping between the Docker container and the Docker host (the Mesos agent in this case). Why not this time? Let us explore this in more detail in the next optional step.

Step 13 (optional): Explore the Default Mesos Networking

While deploying the service, we did not review the network tab. We can do this now by clicking on the service, then “Edit” and then “Network”:

The default network setting is “Host” networking, which means that the container shares the host’s network interface directly. The image we have chosen exposes port 80. This is why we can reach the service by entering the host’s name or IP address with port 80 in the URL field of the browser.

Since the container is re-using the Docker host’s network interface, a port mapping is not needed, as we can confirm with a docker ps command:

(Vagranthost)$ vagrant ssh a1
...
(a1)$ docker ps
CONTAINER ID        IMAGE                         COMMAND             CREATED             STATUS              PORTS               NAMES
cd5a068aaa28        oveits/docker-nginx-busybox   "/usr/sbin/nginx"   39 minutes ago      Up 39 minutes                           mesos-1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2.39067bbf-c4b6-448b-9eb9-975c050bcf57

We cannot see any port mapping here (the PORTS field is empty).

Note that the default network configuration does not allow us to scale the service: port 80 on the agent is already occupied.

 

Let us confirm this assumption by trying to scale the NginX service to two containers:

On Services -> Drop-down list right of name -> Scale

->choose 2 instances:

-> Scale Service

Now the service continually tries to start the second container, and the status toggles between Waiting, Running, and Delayed:

Delayed

Running

Waiting

As expected, the second Docker container cannot start, because port 80 is already occupied on the Docker host. The error log shows:

I0324 11:23:01.820436 7765 exec.cpp:161] Version: 1.0.3
I0324 11:23:01.825763 7769 exec.cpp:236] Executor registered on agent 1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2
I0324 11:23:01.827263 7772 docker.cpp:815] Running docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 33554432 -e MARATHON_APP_VERSION=2017-03-24T18:18:00.202Z -e HOST=192.168.65.111 -e MARATHON_APP_RESOURCE_CPUS=0.1 -e MARATHON_APP_RESOURCE_GPUS=0 -e MARATHON_APP_DOCKER_IMAGE=oveits/docker-nginx-busybox -e PORT_10000=10298 -e MESOS_TASK_ID=nginx.ea26c7af-10be-11e7-9134-70b3d5800001 -e PORT=10298 -e MARATHON_APP_RESOURCE_MEM=32.0 -e PORTS=10298 -e MARATHON_APP_RESOURCE_DISK=2.0 -e MARATHON_APP_LABELS= -e MARATHON_APP_ID=/nginx -e PORT0=10298 -e LIBPROCESS_IP=192.168.65.111 -e MESOS_SANDBOX=/mnt/mesos/sandbox -e MESOS_CONTAINER_NAME=mesos-1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2.f752b208-f7d1-49d6-8cdd-cbb62eaf4768 -v /var/lib/mesos/slave/slaves/1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2/frameworks/1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-0001/executors/nginx.ea26c7af-10be-11e7-9134-70b3d5800001/runs/f752b208-f7d1-49d6-8cdd-cbb62eaf4768:/mnt/mesos/sandbox --net host --name mesos-1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2.f752b208-f7d1-49d6-8cdd-cbb62eaf4768 oveits/docker-nginx-busybox
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (2: No such file or directory)
2017/03/24 18:23:01 [emerg] 1#0: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2017/03/24 18:23:01 [notice] 1#0: try again to bind() after 500ms
2017/03/24 18:23:01 [emerg] 1#0: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2017/03/24 18:23:01 [notice] 1#0: try again to bind() after 500ms
2017/03/24 18:23:01 [emerg] 1#0: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2017/03/24 18:23:01 [notice] 1#0: try again to bind() after 500ms
2017/03/24 18:23:01 [emerg] 1#0: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2017/03/24 18:23:01 [notice] 1#0: try again to bind() after 500ms
2017/03/24 18:23:01 [emerg] 1#0: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2017/03/24 18:23:01 [notice] 1#0: try again to bind() after 500ms
2017/03/24 18:23:01 [emerg] 1#0: still could not bind()
nginx: [emerg] still could not bind()
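
The bind() failure in the log above is ordinary TCP semantics, not a Mesos quirk: two processes sharing the same network namespace cannot listen on the same port. A minimal sketch, using a loopback socket instead of NginX on port 80:

```python
import errno
import socket

# First "container": grab a free port on the host interface.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
first.listen(1)
port = first.getsockname()[1]

# Second "container" with host networking tries the very same port.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # same port, same interface
    conflict = False
except OSError as e:
    conflict = (e.errno == errno.EADDRINUSE)  # "Address in use", as in the log
finally:
    second.close()
    first.close()

print(conflict)  # True: the second bind fails, just like the second NginX
```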

This is not a good configuration. Can we choose a different type of networking at the time we start the service? Let us follow Step 14 to create the same service, but now in a scalable and load-balanced fashion:

Step 14: Deploy a Hello World Service per JSON with improved Networking and Load-Balancing

Step 14.1: Install Marathon Load Balancer

Step 14.1.1: Check, if Marathon LB is already installed

At the moment, the Marathon Load Balancer is not installed. This can be checked with the following DC/OS CLI command:

(DCOS CLI Client)$ dcos package list
There are currently no installed packages. Please use `dcos package install` to install a package.

Step 14.1.2 (optional): Check Options of Marathon Package

Let us install the Marathon Load Balancer by following the version 1.8 documentation. First, we will have a look at the package (optional):

(DCOS CLI Client)$ dcos package describe --config marathon-lb
{
  "$schema": "http://json-schema.org/schema#",
  "properties": {
    "marathon-lb": {
      "properties": {
        "auto-assign-service-ports": {
          "default": false,
          "description": "Auto assign service ports for tasks which use IP-per-task. See https://github.com/mesosphere/marathon-lb#mesos-with-ip-per-task-support for details.",
          "type": "boolean"
        },
        "bind-http-https": {
          "default": true,
          "description": "Reserve ports 80 and 443 for the LB. Use this if you intend to use virtual hosts.",
          "type": "boolean"
        },
        "cpus": {
          "default": 2,
          "description": "CPU shares to allocate to each marathon-lb instance.",
          "minimum": 1,
          "type": "number"
        },
        "haproxy-group": {
          "default": "external",
          "description": "HAProxy group parameter. Matches with HAPROXY_GROUP in the app labels.",
          "type": "string"
        },
        "haproxy-map": {
          "default": true,
          "description": "Enable HAProxy VHost maps for fast VHost routing.",
          "type": "boolean"
        },
        "haproxy_global_default_options": {
          "default": "redispatch,http-server-close,dontlognull",
          "description": "Default global options for HAProxy.",
          "type": "string"
        },
        "instances": {
          "default": 1,
          "description": "Number of instances to run.",
          "minimum": 1,
          "type": "integer"
        },
        "marathon-uri": {
          "default": "http://marathon.mesos:8080",
          "description": "URI of Marathon instance",
          "type": "string"
        },
        "maximumOverCapacity": {
          "default": 0.2,
          "description": "Maximum over capacity.",
          "minimum": 0,
          "type": "number"
        },
        "mem": {
          "default": 1024.0,
          "description": "Memory (MB) to allocate to each marathon-lb task.",
          "minimum": 256.0,
          "type": "number"
        },
        "minimumHealthCapacity": {
          "default": 0.5,
          "description": "Minimum health capacity.",
          "minimum": 0,
          "type": "number"
        },
        "name": {
          "default": "marathon-lb",
          "description": "Name for this LB instance",
          "type": "string"
        },
        "role": {
          "default": "slave_public",
          "description": "Deploy marathon-lb only on nodes with this role.",
          "type": "string"
        },
        "secret_name": {
          "default": "",
          "description": "Name of the Secret Store credentials to use for DC/OS service authentication. This should be left empty unless service authentication is needed.",
          "type": "string"
        },
        "ssl-cert": {
          "description": "TLS Cert and private key for HTTPS.",
          "type": "string"
        },
        "strict-mode": {
          "default": false,
          "description": "Enable strict mode. This requires that you explicitly enable each backend with `HAPROXY_{n}_ENABLED=true`.",
          "type": "boolean"
        },
        "sysctl-params": {
          "default": "net.ipv4.tcp_tw_reuse=1 net.ipv4.tcp_fin_timeout=30 net.ipv4.tcp_max_syn_backlog=10240 net.ipv4.tcp_max_tw_buckets=400000 net.ipv4.tcp_max_orphans=60000 net.core.somaxconn=10000",
          "description": "sysctl params to set at startup for HAProxy.",
          "type": "string"
        },
        "template-url": {
          "default": "",
          "description": "URL to tarball containing a directory templates/ to customize haproxy config.",
          "type": "string"
        }
      },
      "required": [
        "cpus",
        "mem",
        "haproxy-group",
        "instances",
        "name"
      ],
      "type": "object"
    }
  },
  "type": "object"
}

Step 14.1.3: Install and Check Marathon Load Balancer

We install the marathon-lb package now, keeping the default configuration:

$ dcos package install marathon-lb
We recommend at least 2 CPUs and 1GiB of RAM for each Marathon-LB instance.

*NOTE*: ```Enterprise Edition``` DC/OS requires setting up the Service Account in all security modes.
Follow these instructions to setup a Service Account: https://docs.mesosphere.com/administration/id-and-access-mgt/service-auth/mlb-auth/
Continue installing? [yes/no] yes
Installing Marathon app for package [marathon-lb] version [1.5.1]
Marathon-lb DC/OS Service has been successfully installed!
See https://github.com/mesosphere/marathon-lb for documentation.

Now let us check that the package is installed:

$ dcos package list
NAME         VERSION  APP           COMMAND  DESCRIPTION
marathon-lb  1.5.1    /marathon-lb  ---      HAProxy configured using Marathon state

We are able to see the load balancer service on the GUI as well:

After clicking on the marathon-lb service and the container and scrolling down (see note), we see that the load balancer is serving ports 80, 443, 9090, 9091, and 10000 to 10100. We will use one of the high ports soon.

 

Note: scrolling is a little tricky at the moment; you might need to re-size the browser view with Ctrl+Minus or Ctrl+Plus to see the scroll bar on the right. Another possibility is to click into the black part of the browser page and use the arrow keys thereafter.

Port 9090 is used by the load balancer admin interface. We can see the statistics there:
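
As a quick illustration of which ports this marathon-lb instance will expose, the list above (80, 443, 9090, 9091, and the service port range 10000 to 10100) can be written down and checked; this sketch is ours, not a DC/OS API:

```python
# Ports served by the default marathon-lb instance, per the GUI above.
LB_FIXED_PORTS = {80, 443, 9090, 9091}          # HTTP/HTTPS plus admin ports
LB_SERVICE_PORT_RANGE = range(10000, 10101)     # service ports, inclusive

def served_by_lb(port):
    """Will marathon-lb expose a service registered on this servicePort?"""
    return port in LB_FIXED_PORTS or port in LB_SERVICE_PORT_RANGE

print(served_by_lb(10006))  # the servicePort we will use in Step 14.2 -> True
print(served_by_lb(8080))   # not exposed by marathon-lb -> False
```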

Step 14.2: Create an Application using Marathon Load Balancer

Now let us follow these instructions to add a service that makes use of the Marathon Load Balancer:

Step 14.2.1: Define the Application’s Configuration File

Save the following file content as nginx-hostname-app.json:

{
   "id": "nginx-hostname",
   "container": {
     "type": "DOCKER",
     "docker": {
       "image": "nginxdemos/hello",
       "network": "BRIDGE",
       "portMappings": [
         { "hostPort": 0, "containerPort": 80, "servicePort": 10006 }
       ]
     }
   },
   "instances": 3,
   "cpus": 0.25,
   "mem": 100,
   "healthChecks": [{
       "protocol": "HTTP",
       "path": "/",
       "portIndex": 0,
       "timeoutSeconds": 2,
       "gracePeriodSeconds": 15,
       "intervalSeconds": 3,
       "maxConsecutiveFailures": 2
   }],
   "labels":{
     "HAPROXY_DEPLOYMENT_GROUP":"nginx-hostname",
     "HAPROXY_DEPLOYMENT_ALT_PORT":"10007",
     "HAPROXY_GROUP":"external",
     "HAPROXY_0_REDIRECT_TO_HTTPS":"true",
     "HAPROXY_0_VHOST": "192.168.65.111"
   }
}

If you are running in an environment other than the one we have created using Vagrant, you might need to adapt the IP address: replace 192.168.65.111 in HAPROXY_0_VHOST with your public agent’s IP address.
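
Before handing the file to the DC/OS CLI, you can sanity-check it with a few lines of Python. The snippet below embeds a trimmed copy of the app definition above; adapting HAPROXY_0_VHOST programmatically is our illustration, not part of the official workflow:

```python
import json

# Trimmed copy of nginx-hostname-app.json from Step 14.2.1.
app_json = """
{
  "id": "nginx-hostname",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginxdemos/hello",
      "network": "BRIDGE",
      "portMappings": [
        { "hostPort": 0, "containerPort": 80, "servicePort": 10006 }
      ]
    }
  },
  "instances": 3,
  "labels": { "HAPROXY_GROUP": "external", "HAPROXY_0_VHOST": "192.168.65.111" }
}
"""

app = json.loads(app_json)  # fails loudly on a JSON syntax error

# Patch the VHOST for your environment (your public agent's IP address).
app["labels"]["HAPROXY_0_VHOST"] = "192.168.65.111"

service_port = app["container"]["docker"]["portMappings"][0]["servicePort"]
print(service_port)  # 10006 -- the marathon-lb port we will curl below
```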

Step 14.2.2 Create Service using DCOS CLI

Now create the Marathon app using the DC/OS CLI (in my case, I had not adapted the PATH variable yet, so I had to cd to the folder containing dcos.exe, “D:\veits\downloads\DCOS CLI\dcos.exe” in my case):

$ cd <folder_containing_dcos.exe> # needed, if dcos.exe is not in your PATH
$ dcos marathon app add full_path_to_nginx-hostname-app.json
Created deployment 63bac617-792c-488e-8489-80428b1c1e34
$ dcos marathon app list
ID               MEM   CPUS  TASKS  HEALTH  DEPLOYMENT  WAITING  CONTAINER  CMD                                         
/marathon-lb     1024   2     1/1    1/1       ---      False      DOCKER   ['sse', '-m', 'http://marathon.mesos:8080', '--health-check', '--haproxy-map', '--group', 'external']
/nginx-hostname  100   0.25   3/3    3/3       ---      False      DOCKER   None     

On the GUI, under Services, we find:

Marathon: Service: nginx-hostname

After clicking on the service name nginx-hostname, we see more details on the three healthy containers that have been started:

nginx-hostname: three containers running on the public agent 192.168.65.111

Now the service is reachable via curl from within the Mesos network (testing on the private agent a1):

(a1)$ curl http://marathon-lb.marathon.mesos:10006

But can we reach it from outside? Yes: marathon-lb.marathon.mesos is mapped to the public agent’s (p1) address 192.168.65.60 and we can reach http://192.168.65.60:10006 from the inside …

(a1)$ curl http://192.168.65.60:10006

…as well as from the outside:

NginX Hostname - Container 3

The image we have chosen returns the server name (i.e. the container ID), the server address and port as seen by the server (172.17.0.x with port 80), the requested URI (root), the date, and the client IP address and port.

When reloading the page via the browser’s reload button, the answering container will change randomly:

NginX Hostname - Container 2

NginX Hostname - Container 1

This proves that the requests are load-balanced between the three NginX containers and that the service can be reached from the machine hosting the public agent VirtualBox VM. In the next step, we will make sure that the NginX service can be reached from any machine in your local area network.
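
The reload behavior above can be mimicked with a tiny simulation: each request is answered by one of the three backends. Note this random picker is only an illustration; HAProxy's real scheduling depends on its configured balance mode:

```python
import random

backends = ["container-1", "container-2", "container-3"]

# Simulate many page reloads against the load balancer; count which
# backend answered each request.
random.seed(42)  # deterministic for the sake of the example
hits = {b: 0 for b in backends}
for _ in range(300):
    hits[random.choice(backends)] += 1

# Over enough reloads, every container answers at least once.
print(all(count > 0 for count in hits.values()))
```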

Step 15: Reaching the Server from the outside World

In case of a physical machine as public agent, the service would already be reachable from the local area network (LAN). However, in our case, the public agent p1 is a VirtualBox VM using host networks. Since VirtualBox host networks are only reachable from the VirtualBox host, an additional step has to be taken if the service is to be reachable from the outside.

Note that the outside interface of the HAProxy is attached to a VirtualBox host network, 192.168.65.0/24. So, if you want to reach the service from the local area network, an additional mapping from an outside interface of the VirtualBox host to port 10006 of p1 is needed.

For that, choose

-> VirtualBox GUI

-> p1.dcos

-> Edit

-> Network

Then

-> Adapter1

-> Port Forwarding

-> Add (+)

-> choose a name and map a host port to the port 10006 we have used in the JSON file above:

-> OK

 

In this example you will be able to reach the service via any reachable IP address of the VirtualBox host on port 8081:

With that, the service is reachable from any machine in the local area network.

Appendix A: Virtualbox Installation Problem Resolution

  • On Windows 7 or Windows 10, download the installer. Easy.
  • When I start the installer, everything seems to be on track until I see “rolling back action” and I finally get this:
    “Oracle VM Virtualbox x.x.x Setup Wizard ended prematurely”

Resolution of the “Setup Wizard ended prematurely” Problem

Let us try to resolve the problem: the installer of VirtualBox downloaded from Oracle shows the exact same error: “…ended prematurely”. So this is not a Docker-specific bug. Playing with conversion tools from VirtualBox to VMware did not lead to the desired results.

The solution: Google is your friend; the winner is https://forums.virtualbox.org/viewtopic.php?f=6&t=61785. After backing up the registry and changing the registry entry

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Network -> MaxFilters from 8 to 20 (decimal)

and a reboot of the laptop, the installation of VirtualBox is successful.

Note: while this workaround has worked on my Windows 7 notebook, it has not worked on my new Windows 10 machine. However, I have managed to install VirtualBox on Windows 10 by de-selecting the USB support module during the VirtualBox installation process. I remember having seen a forum post pointing to that workaround, with the additional information that the USB drivers were installed automatically at the first time a USB device was added to a host (not yet tested on my side).

Appendix B: dcos node log --leader results in “No files exist. Exiting.” Message

Days later, I have tried again:

dcos node log --leader
dcos-log is not supported
Falling back to files API...
No files exist. Exiting.

The reason is that the Token has expired:

Windows> dcos service
Your core.dcos_acs_token is invalid. Please run: `dcos auth login`

So we need to log in again:

Windows> dcos auth login

Please go to the following link in your browser:

http://m1.dcos/login?redirect_uri=urn:ietf:wg:oauth:2.0:oob

Enter OpenID Connect ID Token:(paste in the key here)
Login successful!

Now we can try again:

Windows> dcos node log --leader
dcos-log is not supported
Falling back to files API...
I0324 09:36:18.030959 4042 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49308 with User-Agent='python-requests/2.10.0'
I0324 09:36:18.285975 4047 master.cpp:5478] Performing explicit task state reconciliation for 1 tasks of framework 1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-0001 (marathon) at scheduler-908fbaff-5dd6-4089-a417-c10c068f5d85@192.168.65.90:15101
I0324 09:36:20.054447 4047 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49314 with User-Agent='python-requests/2.10.0'
I0324 09:36:22.072386 4044 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49320 with User-Agent='python-requests/2.10.0'
I0324 09:36:22.875411 4041 http.cpp:390] HTTP GET for /master/slaves from 192.168.65.90:49324 with User-Agent='Go-http-client/1.1'
I0324 09:36:24.083292 4041 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49336 with User-Agent='python-requests/2.10.0'
I0324 09:36:26.091071 4047 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49346 with User-Agent='python-requests/2.10.0'
I0324 09:36:28.099954 4047 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49352 with User-Agent='python-requests/2.10.0'
I0324 09:36:29.773558 4047 http.cpp:390] HTTP GET for /master/state.json from 192.168.65.90:49354 with User-Agent='Mesos-DNS'
I0324 09:36:30.116576 4046 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49360 with User-Agent='python-requests/2.10.0'

Appendix C: Finding the DC/OS Version

Get DC/OS Version (found via this Mesosphere help desk page):

$ curl http://m1/dcos-metadata/dcos-version.json
{
 "version": "1.8.8",
 "dcos-image-commit": "602edc1b4da9364297d166d4857fc8ed7b0b65ca",
 "bootstrap-id": "5df43052907c021eeb5de145419a3da1898c58a5"
}
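
The response is plain JSON, so it is easy to process programmatically, e.g. to compare versions in a script. The snippet below embeds the captured response; fetching it live would require the curl call above:

```python
import json

# Response of http://m1/dcos-metadata/dcos-version.json, captured above.
response = ('{"version": "1.8.8", '
            '"dcos-image-commit": "602edc1b4da9364297d166d4857fc8ed7b0b65ca", '
            '"bootstrap-id": "5df43052907c021eeb5de145419a3da1898c58a5"}')

meta = json.loads(response)
major, minor, patch = (int(x) for x in meta["version"].split("."))

print((major, minor, patch))       # version as a comparable tuple
print((major, minor) >= (1, 8))    # e.g. check for a minimum DC/OS release
```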

Appendix D: Error Message, when changing Service Name

If you see the following message when editing a service:

requirement failed: IP address (Some(IpAddress(List(),Map(),DiscoveryInfo(List()),Some(dcos)))) and ports (List(PortDefinition(0,tcp,None,Map()))) are not allowed at the same time

 

Workaround: Destroy and Re-Create Service

Destroy the service and create a new service like follows:

Copy original service in json format (service -> edit -> choose JSON Mode on upper right corner -> ctrl-a ctrl-c -> Cancel)

Create new service

-> Services
-> Deploy Service
-> Edit
-> JSON Mode
-> click into text field
-> ctrl-a ctrl-v
-> edit ID and VIP_0 (the names should be the same; here “nginx-dcos-network-load-balanced-wo-marathon-lb”)

-> Deploy

 

Next Steps

  • Explore the multi-tenant capabilities of DC/OS and Mesos/Marathon: can I use the same infrastructure for more than one customer?
    • Separate Logins, customer A should not see resources of customer B
    • Shared resources and separate resource reservations (pool) for the customers
    • Strict resource reservation vs. scheduler based resource reservation
    • Comparison with OpenShift: does OpenShift offer a resource reservation?
  • Running Jenkins on Mesos via Marathon or as a Mesos job
    • Docker socket usage

Testing any Browser on any Hardware using BrowserStack – A Protractor Cross Browser Testing Example


This time we will learn how to test any web site (including your front end software) using many different Internet browsers by integrating a cloud-based cross browser test solution named BrowserStack. We will

  • perform Protractor tests for AngularJS using the Gulp testing toolkit, following an example by Wishtack, on both CentOS 7.3 and Ubuntu 16.04
  • perform Protractor tests for AngularJS without the need to install Gulp, following an example by BrowserStack, on Ubuntu 16.04

We will make use of a BrowserStack trial account: 100 minutes of automated testing are free, and we will need only a few minutes of automated run time for this tutorial.

Tools and Versions used

      • BrowserStack
      • Virtualbox 5.0.20
      • Docker 1.12.1
      • GNU bash on Windows, version 4.3.42(5)-release (x86_64-pc-msys)
      • either (for the Gulp-based example):
        • CentOS 7.3.1611 Docker Container
          • Node.js 6.9.4
          • npm 3.10.10
      • or (for both, the Gulp based example and the example w/o Gulp):
        • Ubuntu 16.04 Docker Container
          • Node.js 4.2.6
          • npm 3.5.2

Getting acquainted with BrowserStack

After signing up for a BrowserStack account, you get a 30-minute free live testing session. Start a local browser and connect to the BrowserStack start URL. You will be asked to install a browser plugin, which will take the role of a BrowserStack client.

You can choose any of the many operating systems and browser types. The ones with the mobile phone icon will run on physical devices; the other ones run on emulators.

2017-03-04-17_14_58-dashboard

Note that you can interrupt the session at any time by clicking Stop on the left (I had overlooked that, so I wasted most of my 30 minutes of free time…).

Now you type in the URL you want to load:

Jenkins running on an iOS Simulator on BrowserStack

As you can see, I have typed in localhost:8080 on the remote iOS simulator running in the BrowserStack cloud. However, the browser is loading the Jenkins server page, which is running on my local notebook. It is not trying to load localhost:8080 on the machine the browser is running on. How can that be?

The HTTP request from the remote browser is relayed via a repeater/proxy located in the BrowserStack network to the BrowserStack plugin running in the local browser you are working on. That plugin resolves the DNS name “localhost” to the IP address of your local PC. This is called local testing, which we will explore in more detail now, before we start our step by step guide.

About BrowserStack Local Testing

Establishing a Tunnel

Local testing means that the local browser’s BrowserStack plugin (or any other BrowserStack client) connects to BrowserStack.com via a tunnel: the browser under test is running in the cloud, but all traffic from that browser to the system under test is relayed by the local BrowserStack client running in your local browser:

Steps to establish a tunnel between BrowserStack client and the repeater/proxy in the BrowserStack Cloud

Local Testing

Step 1: Sign up for BrowserStack

For completing the steps of this tutorial, you need to sign up for a BrowserStack account. Pricing information can be found here. However, for completing the tasks of this tutorial, I did not need to sign up for any of the paid plans.

Step 2: Prepare BrowserStack Testing for Angular

Step 2.1: Run CentOS container

We will run a BrowserStack automated test in a CentOS Docker container. For that, let us start CentOS interactively:

(dockerhost)$ docker run -it centos

If you want to test the exact same version as I did, run docker run -it centos:7.3.1611 instead. However, the latest version should work as well.

Step 2.2: Install Node.js

We follow the instructions from DigitalOcean for installing Node.js and npm on CentOS via the EPEL repository:

(container)$ yum install epel-release
(container)$ yum install nodejs
(container)# node --version
v6.9.4
(container)# yum install npm
...
Nothing to do
(container)# npm --version
3.10.10

Step 2.3: Install Git and Clone the Web Testing Protractor Boilerplate

Now let us install Git and download the Boilerplate:

(container)# yum install -y git
(container)# git clone https://github.com/wishtack/wt-protractor-boilerplate.git
(container)# cd wt-protractor-boilerplate

Step 2.4: Install Gulp

The boilerplate uses Gulp for automating the tests. Let us install the dependencies and Gulp:

(container)# npm install
(container)# npm install -g gulp

Step 2.5: Specify BrowserStack Credentials

Now let us specify the BrowserStack credentials you can find on your BrowserStack Account Settings page:

(container)# export BROWSERSTACK_USER=your_browserstack_user_id
(container)# export BROWSERSTACK_KEY=your_browserstack_key
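
These two environment variables are all the boilerplate needs to reach BrowserStack's Selenium hub (the hub address hub.browserstack.com/wd/hub appears in the test output below). As a sketch, a helper that assembles a credentialed hub URL from them; the function name and the RuntimeError handling are our own, not part of the boilerplate:

```python
import os

def browserstack_hub_url(env=None):
    """Build a Selenium hub URL from the BrowserStack credentials
    exported in Step 2.5 (BROWSERSTACK_USER / BROWSERSTACK_KEY)."""
    env = os.environ if env is None else env
    user = env.get("BROWSERSTACK_USER")
    key = env.get("BROWSERSTACK_KEY")
    if not user or not key:
        raise RuntimeError("export BROWSERSTACK_USER and BROWSERSTACK_KEY first")
    return "http://%s:%s@hub.browserstack.com/wd/hub" % (user, key)

# Demo with placeholder credentials (use your real account values instead):
print(browserstack_hub_url({"BROWSERSTACK_USER": "demo_user",
                            "BROWSERSTACK_KEY": "demo_key"}))
```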

Step 2.6: Run the Test

Now we can run the automated test:

(container)# gulp test-e2e
[22:29:38] Using gulpfile /wt-protractor-boilerplate/gulpfile.js
[22:29:38] Starting 'test-e2e'...
selenium standalone is up to date.
chromedriver is up to date.
[22:29:39] I/hosted - Using the selenium server at http://hub.browserstack.com/wd/hub
[22:29:39] I/launcher - Running 1 instances of WebDriver
Started
.


1 spec, 0 failures
Finished in 3.51 seconds
[22:30:03] I/launcher - 0 instance(s) of WebDriver still running
[22:30:03] I/launcher - chrome #01 passed
[22:30:04] I/hosted - Using the selenium server at http://hub.browserstack.com/wd/hub
[22:30:04] I/launcher - Running 1 instances of WebDriver
Started
.


1 spec, 0 failures
Finished in 3.307 seconds
...

 

Step 2.7: Check the Automation Log

On this BrowserStack link, you can see in detail which steps were taken during the automated test:

2017-03-04-18_29_58-run-selenium-tests-in-1000-desktop-and-mobile-browsers (text form)

2017-03-04-18_28_40-run-selenium-tests-in-1000-desktop-and-mobile-browsers (visual form)

Excellent!

Soon after running the first automated test via BrowserStack, I found an email from BrowserStack in my inbox with the information that they had noticed my first automated test and that I can contact them in case of any questions.

Appendix A: GULP Protractor BrowserStack Tests on Ubuntu 16.04 instead of CentOS

Here, we will do the same as above, but in an Ubuntu 16.04 Docker container instead of a CentOS container.

(dockerhost)$ sudo docker run -it ubuntu:16.04 bash 
(container)# mkdir /app; cd /app

Let us install some software we need:

(container)$ apt-get update && apt-get install -y nodejs npm git
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main Sources [1103 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/restricted Sources [5179 B]
Get:6 http://archive.ubuntu.com/ubuntu xenial/universe Sources [9802 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1558 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial/restricted amd64 Packages [14.1 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [9827 kB]
Get:10 http://archive.ubuntu.com/ubuntu xenial-updates/main Sources [296 kB]
Get:11 http://archive.ubuntu.com/ubuntu xenial-updates/restricted Sources [2815 B]
Get:12 http://archive.ubuntu.com/ubuntu xenial-updates/universe Sources [176 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [623 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/restricted amd64 Packages [12.4 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [546 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial-security/main Sources [75.0 kB]
Get:17 http://archive.ubuntu.com/ubuntu xenial-security/restricted Sources [2392 B]
Get:18 http://archive.ubuntu.com/ubuntu xenial-security/universe Sources [27.0 kB]
Get:19 http://archive.ubuntu.com/ubuntu xenial-security/main amd64 Packages [282 kB]
Get:20 http://archive.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [12.0 kB]
Get:21 http://archive.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [113 kB]
Fetched 24.9 MB in 3min 13s (129 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
 binutils build-essential bzip2 ca-certificates cpp cpp-5 dpkg-dev fakeroot file g++ g++-5 gcc gcc-5 git-man gyp
 ifupdown iproute2 isc-dhcp-client isc-dhcp-common javascript-common krb5-locales less libalgorithm-diff-perl
 libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan2 libasn1-8-heimdal libatm1 libatomic1 libbsd0 libc-dev-bin
 libc6-dev libcc1-0 libcilkrts5 libcurl3-gnutls libdns-export162 libdpkg-perl libedit2 liberror-perl libexpat1
 libfakeroot libffi6 libfile-fcntllock-perl libgcc-5-dev libgdbm3 libgmp10 libgnutls30 libgomp1 libgssapi-krb5-2
 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal libhogweed4 libhx509-5-heimdal
 libicu55 libidn11 libisc-export160 libisl15 libitm1 libjs-inherits libjs-jquery libjs-node-uuid libjs-underscore
 libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0 libldap-2.4-2 liblsan0 libmagic1 libmnl0
 libmpc3 libmpfr4 libmpx0 libnettle6 libp11-kit0 libperl5.22 libpopt0 libpython-stdlib libpython2.7-minimal
 libpython2.7-stdlib libquadmath0 libroken18-heimdal librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db
 libsqlite3-0 libssl-dev libssl-doc libssl1.0.0 libstdc++-5-dev libtasn1-6 libtsan0 libubsan0 libuv1 libuv1-dev
 libwind0-heimdal libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxmuu1 libxtables11 linux-libc-dev make
 manpages manpages-dev mime-support netbase node-abbrev node-ansi node-ansi-color-table node-archy node-async
 node-block-stream node-combined-stream node-cookie-jar node-delayed-stream node-forever-agent node-form-data
 node-fstream node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-inherits node-ini
 node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream
 node-node-uuid node-nopt node-normalize-package-data node-npmlog node-once node-osenv node-qs node-read
 node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-sigmund node-slide node-tar
 node-tunnel-agent node-underscore node-which nodejs-dev openssh-client openssl patch perl perl-modules-5.22 python
 python-minimal python-pkg-resources python2.7 python2.7-minimal rename rsync xauth xz-utils zlib1g-dev
Suggested packages:
 binutils-doc bzip2-doc cpp-doc gcc-5-locales debian-keyring g++-multilib g++-5-multilib gcc-5-doc libstdc++6-5-dbg
 gcc-multilib autoconf automake libtool flex bison gdb gcc-doc gcc-5-multilib libgcc1-dbg libgomp1-dbg libitm1-dbg
 libatomic1-dbg libasan2-dbg liblsan0-dbg libtsan0-dbg libubsan0-dbg libcilkrts5-dbg libmpx0-dbg libquadmath0-dbg
 gettext-base git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-arch git-cvs
 git-mediawiki git-svn ppp rdnssd iproute2-doc resolvconf avahi-autoipd isc-dhcp-client-ddns apparmor apache2
 | lighttpd | httpd glibc-doc gnutls-bin krb5-doc krb5-user libsasl2-modules-otp libsasl2-modules-ldap
 libsasl2-modules-sql libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal libstdc++-5-doc make-doc
 man-browser node-hawk node-aws-sign node-oauth-sign node-http-signature debhelper ssh-askpass libpam-ssh keychain
 monkeysphere ed diffutils-doc perl-doc libterm-readline-gnu-perl | libterm-readline-perl-perl python-doc python-tk
 python-setuptools python2.7-doc binfmt-support openssh-server
The following NEW packages will be installed:
 binutils build-essential bzip2 ca-certificates cpp cpp-5 dpkg-dev fakeroot file g++ g++-5 gcc gcc-5 git git-man gyp
 ifupdown iproute2 isc-dhcp-client isc-dhcp-common javascript-common krb5-locales less libalgorithm-diff-perl
 libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan2 libasn1-8-heimdal libatm1 libatomic1 libbsd0 libc-dev-bin
 libc6-dev libcc1-0 libcilkrts5 libcurl3-gnutls libdns-export162 libdpkg-perl libedit2 liberror-perl libexpat1
 libfakeroot libffi6 libfile-fcntllock-perl libgcc-5-dev libgdbm3 libgmp10 libgnutls30 libgomp1 libgssapi-krb5-2
 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal libhogweed4 libhx509-5-heimdal
 libicu55 libidn11 libisc-export160 libisl15 libitm1 libjs-inherits libjs-jquery libjs-node-uuid libjs-underscore
 libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0 libldap-2.4-2 liblsan0 libmagic1 libmnl0
 libmpc3 libmpfr4 libmpx0 libnettle6 libp11-kit0 libperl5.22 libpopt0 libpython-stdlib libpython2.7-minimal
 libpython2.7-stdlib libquadmath0 libroken18-heimdal librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db
 libsqlite3-0 libssl-dev libssl-doc libssl1.0.0 libstdc++-5-dev libtasn1-6 libtsan0 libubsan0 libuv1 libuv1-dev
 libwind0-heimdal libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxmuu1 libxtables11 linux-libc-dev make
 manpages manpages-dev mime-support netbase node-abbrev node-ansi node-ansi-color-table node-archy node-async
 node-block-stream node-combined-stream node-cookie-jar node-delayed-stream node-forever-agent node-form-data
 node-fstream node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-inherits node-ini
 node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream
 node-node-uuid node-nopt node-normalize-package-data node-npmlog node-once node-osenv node-qs node-read
 node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-sigmund node-slide node-tar
 node-tunnel-agent node-underscore node-which nodejs nodejs-dev npm openssh-client openssl patch perl
 perl-modules-5.22 python python-minimal python-pkg-resources python2.7 python2.7-minimal rename rsync xauth xz-utils
 zlib1g-dev
0 upgraded, 179 newly installed, 0 to remove and 2 not upgraded.
Need to get 79.4 MB of archives.
After this operation, 337 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libatm1 amd64 1:2.5.1-1.5 [24.2 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmnl0 amd64 1.0.3-5 [12.0 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 libpopt0 amd64 1.16-10 [26.0 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgdbm3 amd64 1.8.3-13.1 [16.9 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxau6 amd64 1:1.0.8-1 [8376 B]
Get:6 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxdmcp6 amd64 1:1.1.2-1.1 [11.0 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxcb1 amd64 1.11.1-1ubuntu1 [40.0 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial/main amd64 libx11-data all 2:1.6.3-1ubuntu2 [113 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial/main amd64 libx11-6 amd64 2:1.6.3-1ubuntu2 [571 kB]
Get:10 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxext6 amd64 2:1.3.3-1 [29.4 kB]
Get:11 http://archive.ubuntu.com/ubuntu xenial/main amd64 perl-modules-5.22 all 5.22.1-9 [2641 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial/main amd64 libperl5.22 amd64 5.22.1-9 [3371 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial/main amd64 perl amd64 5.22.1-9 [237 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7-minimal amd64 2.7.12-1ubuntu0~16.04.1 [339 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python2.7-minimal amd64 2.7.12-1ubuntu0~16.04.1 [1295 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-minimal amd64 2.7.11-1 [28.2 kB]
Get:17 http://archive.ubuntu.com/ubuntu xenial/main amd64 mime-support all 3.59ubuntu1 [31.0 kB]
Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libexpat1 amd64 2.1.0-7ubuntu0.16.04.2 [71.3 kB]
Get:19 http://archive.ubuntu.com/ubuntu xenial/main amd64 libffi6 amd64 3.2.1-4 [17.8 kB]
Get:20 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsqlite3-0 amd64 3.11.0-1ubuntu1 [396 kB]
Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl1.0.0 amd64 1.0.2g-1ubuntu4.6 [1082 kB]
Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7-stdlib amd64 2.7.12-1ubuntu0~16.04.1 [1884 kB]
Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python2.7 amd64 2.7.12-1ubuntu0~16.04.1 [224 kB]
Get:24 http://archive.ubuntu.com/ubuntu xenial/main amd64 libpython-stdlib amd64 2.7.11-1 [7656 B]
Get:25 http://archive.ubuntu.com/ubuntu xenial/main amd64 python amd64 2.7.11-1 [137 kB]
Get:26 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgmp10 amd64 2:6.1.0+dfsg-2 [240 kB]
Get:27 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmpfr4 amd64 3.1.4-1 [191 kB]
Get:28 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmpc3 amd64 1.0.3-1 [39.7 kB]
Get:29 http://archive.ubuntu.com/ubuntu xenial/main amd64 bzip2 amd64 1.0.6-8 [32.7 kB]
Get:30 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmagic1 amd64 1:5.25-2ubuntu1 [216 kB]
Get:31 http://archive.ubuntu.com/ubuntu xenial/main amd64 file amd64 1:5.25-2ubuntu1 [21.2 kB]
Get:32 http://archive.ubuntu.com/ubuntu xenial/main amd64 iproute2 amd64 4.3.0-1ubuntu3 [522 kB]
Get:33 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ifupdown amd64 0.8.10ubuntu1.2 [54.9 kB]
Get:34 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libisc-export160 amd64 1:9.10.3.dfsg.P4-8ubuntu1.5 [153 kB]
Get:35 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdns-export162 amd64 1:9.10.3.dfsg.P4-8ubuntu1.5 [665 kB]
Get:36 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 isc-dhcp-client amd64 4.3.3-5ubuntu12.6 [223 kB]
Get:37 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 isc-dhcp-common amd64 4.3.3-5ubuntu12.6 [105 kB]
Get:38 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 less amd64 481-2.1ubuntu0.1 [110 kB]
Get:39 http://archive.ubuntu.com/ubuntu xenial/main amd64 libbsd0 amd64 0.8.2-1 [41.7 kB]
Get:40 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libnettle6 amd64 3.2-1ubuntu0.16.04.1 [93.5 kB]
Get:41 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libhogweed4 amd64 3.2-1ubuntu0.16.04.1 [136 kB]
Get:42 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libidn11 amd64 1.32-3ubuntu1.1 [45.6 kB]
Get:43 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libp11-kit0 amd64 0.23.2-5~ubuntu16.04.1 [105 kB]
Get:44 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libtasn1-6 amd64 4.7-3ubuntu0.16.04.1 [43.2 kB]
Get:45 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgnutls30 amd64 3.4.10-4ubuntu1.2 [547 kB]
Get:46 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxtables11 amd64 1.6.0-2ubuntu3 [27.2 kB]
Get:47 http://archive.ubuntu.com/ubuntu xenial/main amd64 netbase all 5.3 [12.9 kB]
Get:48 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssl amd64 1.0.2g-1ubuntu4.6 [492 kB]
Get:49 http://archive.ubuntu.com/ubuntu xenial/main amd64 ca-certificates all 20160104ubuntu1 [191 kB]
Get:50 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 krb5-locales all 1.13.2+dfsg-5ubuntu2 [13.2 kB]
Get:51 http://archive.ubuntu.com/ubuntu xenial/main amd64 libroken18-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [41.2 kB]
Get:52 http://archive.ubuntu.com/ubuntu xenial/main amd64 libasn1-8-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [174 kB]
Get:53 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkrb5support0 amd64 1.13.2+dfsg-5ubuntu2 [30.8 kB]
Get:54 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libk5crypto3 amd64 1.13.2+dfsg-5ubuntu2 [81.2 kB]
Get:55 http://archive.ubuntu.com/ubuntu xenial/main amd64 libkeyutils1 amd64 1.5.9-8ubuntu1 [9904 B]
Get:56 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkrb5-3 amd64 1.13.2+dfsg-5ubuntu2 [273 kB]
Get:57 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgssapi-krb5-2 amd64 1.13.2+dfsg-5ubuntu2 [120 kB]
Get:58 http://archive.ubuntu.com/ubuntu xenial/main amd64 libhcrypto4-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [84.9 kB]
Get:59 http://archive.ubuntu.com/ubuntu xenial/main amd64 libheimbase1-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [29.2 kB]
Get:60 http://archive.ubuntu.com/ubuntu xenial/main amd64 libwind0-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [48.2 kB]
Get:61 http://archive.ubuntu.com/ubuntu xenial/main amd64 libhx509-5-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [107 kB]
Get:62 http://archive.ubuntu.com/ubuntu xenial/main amd64 libkrb5-26-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [202 kB]
Get:63 http://archive.ubuntu.com/ubuntu xenial/main amd64 libheimntlm0-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [15.1 kB]
Get:64 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgssapi3-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [96.1 kB]
Get:65 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsasl2-modules-db amd64 2.1.26.dfsg1-14build1 [14.5 kB]
Get:66 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsasl2-2 amd64 2.1.26.dfsg1-14build1 [48.7 kB]
Get:67 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libldap-2.4-2 amd64 2.4.42+dfsg-2ubuntu3.1 [161 kB]
Get:68 http://archive.ubuntu.com/ubuntu xenial/main amd64 librtmp1 amd64 2.4+20151223.gitfa8646d-1build1 [53.9 kB]
Get:69 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcurl3-gnutls amd64 7.47.0-1ubuntu2.2 [184 kB]
Get:70 http://archive.ubuntu.com/ubuntu xenial/main amd64 libedit2 amd64 3.1-20150325-1ubuntu2 [76.5 kB]
Get:71 http://archive.ubuntu.com/ubuntu xenial/main amd64 libicu55 amd64 55.1-7 [7643 kB]
Get:72 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsasl2-modules amd64 2.1.26.dfsg1-14build1 [47.5 kB]
Get:73 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxmuu1 amd64 2:1.1.2-2 [9674 B]
Get:74 http://archive.ubuntu.com/ubuntu xenial/main amd64 manpages all 4.04-2 [1087 kB]
Get:75 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssh-client amd64 1:7.2p2-4ubuntu2.1 [587 kB]
Get:76 http://archive.ubuntu.com/ubuntu xenial/main amd64 rsync amd64 3.1.1-3ubuntu1 [325 kB]
Get:77 http://archive.ubuntu.com/ubuntu xenial/main amd64 xauth amd64 1:1.0.9-1ubuntu2 [22.7 kB]
Get:78 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 binutils amd64 2.26.1-1ubuntu1~16.04.3 [2310 kB]
Get:79 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libc-dev-bin amd64 2.23-0ubuntu5 [68.7 kB]
Get:80 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-libc-dev amd64 4.4.0-66.87 [833 kB]
Get:81 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libc6-dev amd64 2.23-0ubuntu5 [2078 kB]
Get:82 http://archive.ubuntu.com/ubuntu xenial/main amd64 libisl15 amd64 0.16.1-1 [524 kB]
Get:83 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cpp-5 amd64 5.4.0-6ubuntu1~16.04.4 [7653 kB]
Get:84 http://archive.ubuntu.com/ubuntu xenial/main amd64 cpp amd64 4:5.3.1-1ubuntu1 [27.7 kB]
Get:85 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcc1-0 amd64 5.4.0-6ubuntu1~16.04.4 [38.8 kB]
Get:86 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgomp1 amd64 5.4.0-6ubuntu1~16.04.4 [55.0 kB]
Get:87 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libitm1 amd64 5.4.0-6ubuntu1~16.04.4 [27.4 kB]
Get:88 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libatomic1 amd64 5.4.0-6ubuntu1~16.04.4 [8912 B]
Get:89 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libasan2 amd64 5.4.0-6ubuntu1~16.04.4 [264 kB]
Get:90 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 liblsan0 amd64 5.4.0-6ubuntu1~16.04.4 [105 kB]
Get:91 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libtsan0 amd64 5.4.0-6ubuntu1~16.04.4 [244 kB]
Get:92 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libubsan0 amd64 5.4.0-6ubuntu1~16.04.4 [95.3 kB]
Get:93 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcilkrts5 amd64 5.4.0-6ubuntu1~16.04.4 [40.1 kB]
Get:94 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libmpx0 amd64 5.4.0-6ubuntu1~16.04.4 [9766 B]
Get:95 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libquadmath0 amd64 5.4.0-6ubuntu1~16.04.4 [131 kB]
Get:96 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgcc-5-dev amd64 5.4.0-6ubuntu1~16.04.4 [2237 kB]
Get:97 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gcc-5 amd64 5.4.0-6ubuntu1~16.04.4 [8577 kB]
Get:98 http://archive.ubuntu.com/ubuntu xenial/main amd64 gcc amd64 4:5.3.1-1ubuntu1 [5244 B]
Get:99 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libstdc++-5-dev amd64 5.4.0-6ubuntu1~16.04.4 [1426 kB]
Get:100 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 g++-5 amd64 5.4.0-6ubuntu1~16.04.4 [8300 kB]
Get:101 http://archive.ubuntu.com/ubuntu xenial/main amd64 g++ amd64 4:5.3.1-1ubuntu1 [1504 B]
Get:102 http://archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB]
Get:103 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdpkg-perl all 1.18.4ubuntu1.1 [195 kB]
Get:104 http://archive.ubuntu.com/ubuntu xenial/main amd64 xz-utils amd64 5.1.1alpha+20120614-2ubuntu2 [78.8 kB]
Get:105 http://archive.ubuntu.com/ubuntu xenial/main amd64 patch amd64 2.7.5-1 [90.4 kB]
Get:106 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dpkg-dev all 1.18.4ubuntu1.1 [584 kB]
Get:107 http://archive.ubuntu.com/ubuntu xenial/main amd64 build-essential amd64 12.1ubuntu2 [4758 B]
Get:108 http://archive.ubuntu.com/ubuntu xenial/main amd64 libfakeroot amd64 1.20.2-1ubuntu1 [25.5 kB]
Get:109 http://archive.ubuntu.com/ubuntu xenial/main amd64 fakeroot amd64 1.20.2-1ubuntu1 [61.8 kB]
Get:110 http://archive.ubuntu.com/ubuntu xenial/main amd64 liberror-perl all 0.17-1.2 [19.6 kB]
Get:111 http://archive.ubuntu.com/ubuntu xenial/main amd64 git-man all 1:2.7.4-0ubuntu1 [735 kB]
Get:112 http://archive.ubuntu.com/ubuntu xenial/main amd64 git amd64 1:2.7.4-0ubuntu1 [3006 kB]
Get:113 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-pkg-resources all 20.7.0-1 [108 kB]
Get:114 http://archive.ubuntu.com/ubuntu xenial/universe amd64 gyp all 0.1+20150913git1f374df9-1ubuntu1 [265 kB]
Get:115 http://archive.ubuntu.com/ubuntu xenial/main amd64 javascript-common all 11 [6066 B]
Get:116 http://archive.ubuntu.com/ubuntu xenial/main amd64 libalgorithm-diff-perl all 1.19.03-1 [47.6 kB]
Get:117 http://archive.ubuntu.com/ubuntu xenial/main amd64 libalgorithm-diff-xs-perl amd64 0.04-4build1 [11.0 kB]
Get:118 http://archive.ubuntu.com/ubuntu xenial/main amd64 libalgorithm-merge-perl all 0.08-3 [12.0 kB]
Get:119 http://archive.ubuntu.com/ubuntu xenial/main amd64 libfile-fcntllock-perl amd64 0.22-3 [32.0 kB]
Get:120 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjs-jquery all 1.11.3+dfsg-4 [161 kB]
Get:121 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libjs-node-uuid all 1.4.0-1 [11.1 kB]
Get:122 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjs-underscore all 1.7.0~dfsg-1ubuntu1 [46.7 kB]
Get:123 http://archive.ubuntu.com/ubuntu xenial/main amd64 zlib1g-dev amd64 1:1.2.8.dfsg-2ubuntu4 [168 kB]
Get:124 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl-dev amd64 1.0.2g-1ubuntu4.6 [1344 kB]
Get:125 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl-doc all 1.0.2g-1ubuntu4.6 [1079 kB]
Get:126 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libuv1 amd64 1.8.0-1 [57.4 kB]
Get:127 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libuv1-dev amd64 1.8.0-1 [74.7 kB]
Get:128 http://archive.ubuntu.com/ubuntu xenial/main amd64 manpages-dev all 4.04-2 [2048 kB]
Get:129 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 nodejs amd64 4.2.6~dfsg-1ubuntu4.1 [3161 kB]
Get:130 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-async all 0.8.0-1 [22.2 kB]
Get:131 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-node-uuid all 1.4.0-1 [2530 B]
Get:132 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-underscore all 1.7.0~dfsg-1ubuntu1 [3780 B]
Get:133 http://archive.ubuntu.com/ubuntu xenial/main amd64 rename all 0.20-4 [12.0 kB]
Get:134 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libjs-inherits all 2.0.1-3 [2794 B]
Get:135 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-abbrev all 1.0.5-2 [3592 B]
Get:136 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-ansi all 0.3.0-2 [8590 B]
Get:137 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-ansi-color-table all 1.0.0-1 [4478 B]
Get:138 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-archy all 0.0.2-1 [3660 B]
Get:139 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-inherits all 2.0.1-3 [3060 B]
Get:140 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-block-stream all 0.0.7-1 [4832 B]
Get:141 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-delayed-stream all 0.0.5-1 [4750 B]
Get:142 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-combined-stream all 0.0.5-1 [4958 B]
Get:143 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-cookie-jar all 0.3.1-1 [3746 B]
Get:144 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-forever-agent all 0.5.1-1 [3194 B]
Get:145 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-mime all 1.3.4-1 [11.9 kB]
Get:146 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-form-data all 0.1.0-1 [6412 B]
Get:147 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-rimraf all 2.2.8-1 [5702 B]
Get:148 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-mkdirp all 0.5.0-1 [4690 B]
Get:149 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-graceful-fs all 3.0.2-1 [7102 B]
Get:150 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-fstream all 0.1.24-1 [19.5 kB]
Get:151 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-lru-cache all 2.3.1-1 [5674 B]
Get:152 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-sigmund all 1.0.0-1 [3818 B]
Get:153 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-minimatch all 1.0.0-1 [14.0 kB]
Get:154 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-fstream-ignore all 0.0.6-2 [5586 B]
Get:155 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-github-url-from-git all 1.1.1-1 [3138 B]
Get:156 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-once all 1.1.1-1 [2608 B]
Get:157 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-glob all 4.0.5-1 [13.2 kB]
Get:158 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 nodejs-dev amd64 4.2.6~dfsg-1ubuntu4.1 [265 kB]
Get:159 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-nopt all 3.0.1-1 [9544 B]
Get:160 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-npmlog all 0.0.4-1 [5844 B]
Get:161 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-osenv all 0.1.0-1 [3772 B]
Get:162 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-tunnel-agent all 0.3.1-1 [4018 B]
Get:163 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-json-stringify-safe all 5.0.0-1 [3544 B]
Get:164 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-qs all 2.2.4-1 [7574 B]
Get:165 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-request all 2.26.1-1 [14.5 kB]
Get:166 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-semver all 2.1.0-2 [16.2 kB]
Get:167 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-tar all 1.0.3-2 [17.5 kB]
Get:168 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-which all 1.0.5-2 [3678 B]
Get:169 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-gyp all 3.0.3-2ubuntu1 [23.2 kB]
Get:170 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-ini all 1.1.0-1 [4770 B]
Get:171 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-lockfile all 0.4.1-1 [5450 B]
Get:172 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-mute-stream all 0.0.4-1 [4096 B]
Get:173 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-normalize-package-data all 0.2.2-1 [9286 B]
Get:174 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-read all 1.0.5-1 [4314 B]
Get:175 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-read-package-json all 1.2.4-1 [7780 B]
Get:176 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-retry all 0.6.0-1 [6172 B]
Get:177 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-sha all 1.2.3-1 [4272 B]
Get:178 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-slide all 1.1.4-1 [6118 B]
Get:179 http://archive.ubuntu.com/ubuntu xenial/universe amd64 npm all 3.5.2-0ubuntu4 [1586 kB]
Fetched 79.4 MB in 40s (1962 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libatm1:amd64.
(Reading database ... 7256 files and directories currently installed.)
Preparing to unpack .../libatm1_1%3a2.5.1-1.5_amd64.deb ...
Unpacking libatm1:amd64 (1:2.5.1-1.5) ...
Selecting previously unselected package libmnl0:amd64.
Preparing to unpack .../libmnl0_1.0.3-5_amd64.deb ...
Unpacking libmnl0:amd64 (1.0.3-5) ...
Selecting previously unselected package libpopt0:amd64.
Preparing to unpack .../libpopt0_1.16-10_amd64.deb ...
Unpacking libpopt0:amd64 (1.16-10) ...
Selecting previously unselected package libgdbm3:amd64.
Preparing to unpack .../libgdbm3_1.8.3-13.1_amd64.deb ...
Unpacking libgdbm3:amd64 (1.8.3-13.1) ...
Selecting previously unselected package libxau6:amd64.
Preparing to unpack .../libxau6_1%3a1.0.8-1_amd64.deb ...
Unpacking libxau6:amd64 (1:1.0.8-1) ...
Selecting previously unselected package libxdmcp6:amd64.
Preparing to unpack .../libxdmcp6_1%3a1.1.2-1.1_amd64.deb ...
Unpacking libxdmcp6:amd64 (1:1.1.2-1.1) ...
Selecting previously unselected package libxcb1:amd64.
Preparing to unpack .../libxcb1_1.11.1-1ubuntu1_amd64.deb ...
Unpacking libxcb1:amd64 (1.11.1-1ubuntu1) ...
Selecting previously unselected package libx11-data.
Preparing to unpack .../libx11-data_2%3a1.6.3-1ubuntu2_all.deb ...
Unpacking libx11-data (2:1.6.3-1ubuntu2) ...
Selecting previously unselected package libx11-6:amd64.
Preparing to unpack .../libx11-6_2%3a1.6.3-1ubuntu2_amd64.deb ...
Unpacking libx11-6:amd64 (2:1.6.3-1ubuntu2) ...
Selecting previously unselected package libxext6:amd64.
Preparing to unpack .../libxext6_2%3a1.3.3-1_amd64.deb ...
Unpacking libxext6:amd64 (2:1.3.3-1) ...
Selecting previously unselected package perl-modules-5.22.
Preparing to unpack .../perl-modules-5.22_5.22.1-9_all.deb ...
Unpacking perl-modules-5.22 (5.22.1-9) ...
Selecting previously unselected package libperl5.22:amd64.
Preparing to unpack .../libperl5.22_5.22.1-9_amd64.deb ...
Unpacking libperl5.22:amd64 (5.22.1-9) ...
Selecting previously unselected package perl.
Preparing to unpack .../perl_5.22.1-9_amd64.deb ...
Unpacking perl (5.22.1-9) ...
Selecting previously unselected package libpython2.7-minimal:amd64.
Preparing to unpack .../libpython2.7-minimal_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking libpython2.7-minimal:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package python2.7-minimal.
Preparing to unpack .../python2.7-minimal_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking python2.7-minimal (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package python-minimal.
Preparing to unpack .../python-minimal_2.7.11-1_amd64.deb ...
Unpacking python-minimal (2.7.11-1) ...
Selecting previously unselected package mime-support.
Preparing to unpack .../mime-support_3.59ubuntu1_all.deb ...
Unpacking mime-support (3.59ubuntu1) ...
Selecting previously unselected package libexpat1:amd64.
Preparing to unpack .../libexpat1_2.1.0-7ubuntu0.16.04.2_amd64.deb ...
Unpacking libexpat1:amd64 (2.1.0-7ubuntu0.16.04.2) ...
Selecting previously unselected package libffi6:amd64.
Preparing to unpack .../libffi6_3.2.1-4_amd64.deb ...
Unpacking libffi6:amd64 (3.2.1-4) ...
Selecting previously unselected package libsqlite3-0:amd64.
Preparing to unpack .../libsqlite3-0_3.11.0-1ubuntu1_amd64.deb ...
Unpacking libsqlite3-0:amd64 (3.11.0-1ubuntu1) ...
Selecting previously unselected package libssl1.0.0:amd64.
Preparing to unpack .../libssl1.0.0_1.0.2g-1ubuntu4.6_amd64.deb ...
Unpacking libssl1.0.0:amd64 (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package libpython2.7-stdlib:amd64.
Preparing to unpack .../libpython2.7-stdlib_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking libpython2.7-stdlib:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package python2.7.
Preparing to unpack .../python2.7_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking python2.7 (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package libpython-stdlib:amd64.
Preparing to unpack .../libpython-stdlib_2.7.11-1_amd64.deb ...
Unpacking libpython-stdlib:amd64 (2.7.11-1) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Setting up libpython2.7-minimal:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Setting up python2.7-minimal (2.7.12-1ubuntu0~16.04.1) ...
Linking and byte-compiling packages for runtime python2.7...
Setting up python-minimal (2.7.11-1) ...
Selecting previously unselected package python.
(Reading database ... 10145 files and directories currently installed.)
Preparing to unpack .../python_2.7.11-1_amd64.deb ...
Unpacking python (2.7.11-1) ...
Selecting previously unselected package libgmp10:amd64.
Preparing to unpack .../libgmp10_2%3a6.1.0+dfsg-2_amd64.deb ...
Unpacking libgmp10:amd64 (2:6.1.0+dfsg-2) ...
Selecting previously unselected package libmpfr4:amd64.
Preparing to unpack .../libmpfr4_3.1.4-1_amd64.deb ...
Unpacking libmpfr4:amd64 (3.1.4-1) ...
Selecting previously unselected package libmpc3:amd64.
Preparing to unpack .../libmpc3_1.0.3-1_amd64.deb ...
Unpacking libmpc3:amd64 (1.0.3-1) ...
Selecting previously unselected package bzip2.
Preparing to unpack .../bzip2_1.0.6-8_amd64.deb ...
Unpacking bzip2 (1.0.6-8) ...
Selecting previously unselected package libmagic1:amd64.
Preparing to unpack .../libmagic1_1%3a5.25-2ubuntu1_amd64.deb ...
Unpacking libmagic1:amd64 (1:5.25-2ubuntu1) ...
Selecting previously unselected package file.
Preparing to unpack .../file_1%3a5.25-2ubuntu1_amd64.deb ...
Unpacking file (1:5.25-2ubuntu1) ...
Selecting previously unselected package iproute2.
Preparing to unpack .../iproute2_4.3.0-1ubuntu3_amd64.deb ...
Unpacking iproute2 (4.3.0-1ubuntu3) ...
Selecting previously unselected package ifupdown.
Preparing to unpack .../ifupdown_0.8.10ubuntu1.2_amd64.deb ...
Unpacking ifupdown (0.8.10ubuntu1.2) ...
Selecting previously unselected package libisc-export160.
Preparing to unpack .../libisc-export160_1%3a9.10.3.dfsg.P4-8ubuntu1.5_amd64.deb ...
Unpacking libisc-export160 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Selecting previously unselected package libdns-export162.
Preparing to unpack .../libdns-export162_1%3a9.10.3.dfsg.P4-8ubuntu1.5_amd64.deb ...
Unpacking libdns-export162 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Selecting previously unselected package isc-dhcp-client.
Preparing to unpack .../isc-dhcp-client_4.3.3-5ubuntu12.6_amd64.deb ...
Unpacking isc-dhcp-client (4.3.3-5ubuntu12.6) ...
Selecting previously unselected package isc-dhcp-common.
Preparing to unpack .../isc-dhcp-common_4.3.3-5ubuntu12.6_amd64.deb ...
Unpacking isc-dhcp-common (4.3.3-5ubuntu12.6) ...
Selecting previously unselected package less.
Preparing to unpack .../less_481-2.1ubuntu0.1_amd64.deb ...
Unpacking less (481-2.1ubuntu0.1) ...
Selecting previously unselected package libbsd0:amd64.
Preparing to unpack .../libbsd0_0.8.2-1_amd64.deb ...
Unpacking libbsd0:amd64 (0.8.2-1) ...
Selecting previously unselected package libnettle6:amd64.
Preparing to unpack .../libnettle6_3.2-1ubuntu0.16.04.1_amd64.deb ...
Unpacking libnettle6:amd64 (3.2-1ubuntu0.16.04.1) ...
Selecting previously unselected package libhogweed4:amd64.
Preparing to unpack .../libhogweed4_3.2-1ubuntu0.16.04.1_amd64.deb ...
Unpacking libhogweed4:amd64 (3.2-1ubuntu0.16.04.1) ...
Selecting previously unselected package libidn11:amd64.
Preparing to unpack .../libidn11_1.32-3ubuntu1.1_amd64.deb ...
Unpacking libidn11:amd64 (1.32-3ubuntu1.1) ...
Selecting previously unselected package libp11-kit0:amd64.
Preparing to unpack .../libp11-kit0_0.23.2-5~ubuntu16.04.1_amd64.deb ...
Unpacking libp11-kit0:amd64 (0.23.2-5~ubuntu16.04.1) ...
Selecting previously unselected package libtasn1-6:amd64.
Preparing to unpack .../libtasn1-6_4.7-3ubuntu0.16.04.1_amd64.deb ...
Unpacking libtasn1-6:amd64 (4.7-3ubuntu0.16.04.1) ...
Selecting previously unselected package libgnutls30:amd64.
Preparing to unpack .../libgnutls30_3.4.10-4ubuntu1.2_amd64.deb ...
Unpacking libgnutls30:amd64 (3.4.10-4ubuntu1.2) ...
Selecting previously unselected package libxtables11:amd64.
Preparing to unpack .../libxtables11_1.6.0-2ubuntu3_amd64.deb ...
Unpacking libxtables11:amd64 (1.6.0-2ubuntu3) ...
Selecting previously unselected package netbase.
Preparing to unpack .../archives/netbase_5.3_all.deb ...
Unpacking netbase (5.3) ...
Selecting previously unselected package openssl.
Preparing to unpack .../openssl_1.0.2g-1ubuntu4.6_amd64.deb ...
Unpacking openssl (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20160104ubuntu1_all.deb ...
Unpacking ca-certificates (20160104ubuntu1) ...
Selecting previously unselected package krb5-locales.
Preparing to unpack .../krb5-locales_1.13.2+dfsg-5ubuntu2_all.deb ...
Unpacking krb5-locales (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libroken18-heimdal:amd64.
Preparing to unpack .../libroken18-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libroken18-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libasn1-8-heimdal:amd64.
Preparing to unpack .../libasn1-8-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libasn1-8-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libkrb5support0:amd64.
Preparing to unpack .../libkrb5support0_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libkrb5support0:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libk5crypto3:amd64.
Preparing to unpack .../libk5crypto3_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libk5crypto3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libkeyutils1:amd64.
Preparing to unpack .../libkeyutils1_1.5.9-8ubuntu1_amd64.deb ...
Unpacking libkeyutils1:amd64 (1.5.9-8ubuntu1) ...
Selecting previously unselected package libkrb5-3:amd64.
Preparing to unpack .../libkrb5-3_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libkrb5-3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libgssapi-krb5-2:amd64.
Preparing to unpack .../libgssapi-krb5-2_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libgssapi-krb5-2:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libhcrypto4-heimdal:amd64.
Preparing to unpack .../libhcrypto4-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libhcrypto4-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libheimbase1-heimdal:amd64.
Preparing to unpack .../libheimbase1-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libheimbase1-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libwind0-heimdal:amd64.
Preparing to unpack .../libwind0-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libwind0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libhx509-5-heimdal:amd64.
Preparing to unpack .../libhx509-5-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libhx509-5-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libkrb5-26-heimdal:amd64.
Preparing to unpack .../libkrb5-26-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libkrb5-26-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libheimntlm0-heimdal:amd64.
Preparing to unpack .../libheimntlm0-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libheimntlm0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libgssapi3-heimdal:amd64.
Preparing to unpack .../libgssapi3-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libgssapi3-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libsasl2-modules-db:amd64.
Preparing to unpack .../libsasl2-modules-db_2.1.26.dfsg1-14build1_amd64.deb ...
Unpacking libsasl2-modules-db:amd64 (2.1.26.dfsg1-14build1) ...
Selecting previously unselected package libsasl2-2:amd64.
Preparing to unpack .../libsasl2-2_2.1.26.dfsg1-14build1_amd64.deb ...
Unpacking libsasl2-2:amd64 (2.1.26.dfsg1-14build1) ...
Selecting previously unselected package libldap-2.4-2:amd64.
Preparing to unpack .../libldap-2.4-2_2.4.42+dfsg-2ubuntu3.1_amd64.deb ...
Unpacking libldap-2.4-2:amd64 (2.4.42+dfsg-2ubuntu3.1) ...
Selecting previously unselected package librtmp1:amd64.
Preparing to unpack .../librtmp1_2.4+20151223.gitfa8646d-1build1_amd64.deb ...
Unpacking librtmp1:amd64 (2.4+20151223.gitfa8646d-1build1) ...
Selecting previously unselected package libcurl3-gnutls:amd64.
Preparing to unpack .../libcurl3-gnutls_7.47.0-1ubuntu2.2_amd64.deb ...
Unpacking libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.2) ...
Selecting previously unselected package libedit2:amd64.
Preparing to unpack .../libedit2_3.1-20150325-1ubuntu2_amd64.deb ...
Unpacking libedit2:amd64 (3.1-20150325-1ubuntu2) ...
Selecting previously unselected package libicu55:amd64.
Preparing to unpack .../libicu55_55.1-7_amd64.deb ...
Unpacking libicu55:amd64 (55.1-7) ...
Selecting previously unselected package libsasl2-modules:amd64.
Preparing to unpack .../libsasl2-modules_2.1.26.dfsg1-14build1_amd64.deb ...
Unpacking libsasl2-modules:amd64 (2.1.26.dfsg1-14build1) ...
Selecting previously unselected package libxmuu1:amd64.
Preparing to unpack .../libxmuu1_2%3a1.1.2-2_amd64.deb ...
Unpacking libxmuu1:amd64 (2:1.1.2-2) ...
Selecting previously unselected package manpages.
Preparing to unpack .../manpages_4.04-2_all.deb ...
Unpacking manpages (4.04-2) ...
Selecting previously unselected package openssh-client.
Preparing to unpack .../openssh-client_1%3a7.2p2-4ubuntu2.1_amd64.deb ...
Unpacking openssh-client (1:7.2p2-4ubuntu2.1) ...
Selecting previously unselected package rsync.
Preparing to unpack .../rsync_3.1.1-3ubuntu1_amd64.deb ...
Unpacking rsync (3.1.1-3ubuntu1) ...
Selecting previously unselected package xauth.
Preparing to unpack .../xauth_1%3a1.0.9-1ubuntu2_amd64.deb ...
Unpacking xauth (1:1.0.9-1ubuntu2) ...
Selecting previously unselected package binutils.
Preparing to unpack .../binutils_2.26.1-1ubuntu1~16.04.3_amd64.deb ...
Unpacking binutils (2.26.1-1ubuntu1~16.04.3) ...
Selecting previously unselected package libc-dev-bin.
Preparing to unpack .../libc-dev-bin_2.23-0ubuntu5_amd64.deb ...
Unpacking libc-dev-bin (2.23-0ubuntu5) ...
Selecting previously unselected package linux-libc-dev:amd64.
Preparing to unpack .../linux-libc-dev_4.4.0-66.87_amd64.deb ...
Unpacking linux-libc-dev:amd64 (4.4.0-66.87) ...
Selecting previously unselected package libc6-dev:amd64.
Preparing to unpack .../libc6-dev_2.23-0ubuntu5_amd64.deb ...
Unpacking libc6-dev:amd64 (2.23-0ubuntu5) ...
Selecting previously unselected package libisl15:amd64.
Preparing to unpack .../libisl15_0.16.1-1_amd64.deb ...
Unpacking libisl15:amd64 (0.16.1-1) ...
Selecting previously unselected package cpp-5.
Preparing to unpack .../cpp-5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking cpp-5 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package cpp.
Preparing to unpack .../cpp_4%3a5.3.1-1ubuntu1_amd64.deb ...
Unpacking cpp (4:5.3.1-1ubuntu1) ...
Selecting previously unselected package libcc1-0:amd64.
Preparing to unpack .../libcc1-0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libgomp1:amd64.
Preparing to unpack .../libgomp1_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libgomp1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libitm1:amd64.
Preparing to unpack .../libitm1_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libitm1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libatomic1:amd64.
Preparing to unpack .../libatomic1_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libatomic1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libasan2:amd64.
Preparing to unpack .../libasan2_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libasan2:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package liblsan0:amd64.
Preparing to unpack .../liblsan0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking liblsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libtsan0:amd64.
Preparing to unpack .../libtsan0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libtsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libubsan0:amd64.
Preparing to unpack .../libubsan0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libubsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libcilkrts5:amd64.
Preparing to unpack .../libcilkrts5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libmpx0:amd64.
Preparing to unpack .../libmpx0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libmpx0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libquadmath0:amd64.
Preparing to unpack .../libquadmath0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libgcc-5-dev:amd64.
Preparing to unpack .../libgcc-5-dev_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package gcc-5.
Preparing to unpack .../gcc-5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking gcc-5 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package gcc.
Preparing to unpack .../gcc_4%3a5.3.1-1ubuntu1_amd64.deb ...
Unpacking gcc (4:5.3.1-1ubuntu1) ...
Selecting previously unselected package libstdc++-5-dev:amd64.
Preparing to unpack .../libstdc++-5-dev_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libstdc++-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package g++-5.
Preparing to unpack .../g++-5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking g++-5 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package g++.
Preparing to unpack .../g++_4%3a5.3.1-1ubuntu1_amd64.deb ...
Unpacking g++ (4:5.3.1-1ubuntu1) ...
Selecting previously unselected package make.
Preparing to unpack .../archives/make_4.1-6_amd64.deb ...
Unpacking make (4.1-6) ...
Selecting previously unselected package libdpkg-perl.
Preparing to unpack .../libdpkg-perl_1.18.4ubuntu1.1_all.deb ...
Unpacking libdpkg-perl (1.18.4ubuntu1.1) ...
Selecting previously unselected package xz-utils.
Preparing to unpack .../xz-utils_5.1.1alpha+20120614-2ubuntu2_amd64.deb ...
Unpacking xz-utils (5.1.1alpha+20120614-2ubuntu2) ...
Selecting previously unselected package patch.
Preparing to unpack .../patch_2.7.5-1_amd64.deb ...
Unpacking patch (2.7.5-1) ...
Selecting previously unselected package dpkg-dev.
Preparing to unpack .../dpkg-dev_1.18.4ubuntu1.1_all.deb ...
Unpacking dpkg-dev (1.18.4ubuntu1.1) ...
Selecting previously unselected package build-essential.
Preparing to unpack .../build-essential_12.1ubuntu2_amd64.deb ...
Unpacking build-essential (12.1ubuntu2) ...
Selecting previously unselected package libfakeroot:amd64.
Preparing to unpack .../libfakeroot_1.20.2-1ubuntu1_amd64.deb ...
Unpacking libfakeroot:amd64 (1.20.2-1ubuntu1) ...
Selecting previously unselected package fakeroot.
Preparing to unpack .../fakeroot_1.20.2-1ubuntu1_amd64.deb ...
Unpacking fakeroot (1.20.2-1ubuntu1) ...
Selecting previously unselected package liberror-perl.
Preparing to unpack .../liberror-perl_0.17-1.2_all.deb ...
Unpacking liberror-perl (0.17-1.2) ...
Selecting previously unselected package git-man.
Preparing to unpack .../git-man_1%3a2.7.4-0ubuntu1_all.deb ...
Unpacking git-man (1:2.7.4-0ubuntu1) ...
Selecting previously unselected package git.
Preparing to unpack .../git_1%3a2.7.4-0ubuntu1_amd64.deb ...
Unpacking git (1:2.7.4-0ubuntu1) ...
Selecting previously unselected package python-pkg-resources.
Preparing to unpack .../python-pkg-resources_20.7.0-1_all.deb ...
Unpacking python-pkg-resources (20.7.0-1) ...
Selecting previously unselected package gyp.
Preparing to unpack .../gyp_0.1+20150913git1f374df9-1ubuntu1_all.deb ...
Unpacking gyp (0.1+20150913git1f374df9-1ubuntu1) ...
Selecting previously unselected package javascript-common.
Preparing to unpack .../javascript-common_11_all.deb ...
Unpacking javascript-common (11) ...
Selecting previously unselected package libalgorithm-diff-perl.
Preparing to unpack .../libalgorithm-diff-perl_1.19.03-1_all.deb ...
Unpacking libalgorithm-diff-perl (1.19.03-1) ...
Selecting previously unselected package libalgorithm-diff-xs-perl.
Preparing to unpack .../libalgorithm-diff-xs-perl_0.04-4build1_amd64.deb ...
Unpacking libalgorithm-diff-xs-perl (0.04-4build1) ...
Selecting previously unselected package libalgorithm-merge-perl.
Preparing to unpack .../libalgorithm-merge-perl_0.08-3_all.deb ...
Unpacking libalgorithm-merge-perl (0.08-3) ...
Selecting previously unselected package libfile-fcntllock-perl.
Preparing to unpack .../libfile-fcntllock-perl_0.22-3_amd64.deb ...
Unpacking libfile-fcntllock-perl (0.22-3) ...
Selecting previously unselected package libjs-jquery.
Preparing to unpack .../libjs-jquery_1.11.3+dfsg-4_all.deb ...
Unpacking libjs-jquery (1.11.3+dfsg-4) ...
Selecting previously unselected package libjs-node-uuid.
Preparing to unpack .../libjs-node-uuid_1.4.0-1_all.deb ...
Unpacking libjs-node-uuid (1.4.0-1) ...
Selecting previously unselected package libjs-underscore.
Preparing to unpack .../libjs-underscore_1.7.0~dfsg-1ubuntu1_all.deb ...
Unpacking libjs-underscore (1.7.0~dfsg-1ubuntu1) ...
Selecting previously unselected package zlib1g-dev:amd64.
Preparing to unpack .../zlib1g-dev_1%3a1.2.8.dfsg-2ubuntu4_amd64.deb ...
Unpacking zlib1g-dev:amd64 (1:1.2.8.dfsg-2ubuntu4) ...
Selecting previously unselected package libssl-dev:amd64.
Preparing to unpack .../libssl-dev_1.0.2g-1ubuntu4.6_amd64.deb ...
Unpacking libssl-dev:amd64 (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package libssl-doc.
Preparing to unpack .../libssl-doc_1.0.2g-1ubuntu4.6_all.deb ...
Unpacking libssl-doc (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package libuv1:amd64.
Preparing to unpack .../libuv1_1.8.0-1_amd64.deb ...
Unpacking libuv1:amd64 (1.8.0-1) ...
Selecting previously unselected package libuv1-dev:amd64.
Preparing to unpack .../libuv1-dev_1.8.0-1_amd64.deb ...
Unpacking libuv1-dev:amd64 (1.8.0-1) ...
Selecting previously unselected package manpages-dev.
Preparing to unpack .../manpages-dev_4.04-2_all.deb ...
Unpacking manpages-dev (4.04-2) ...
Selecting previously unselected package nodejs.
Preparing to unpack .../nodejs_4.2.6~dfsg-1ubuntu4.1_amd64.deb ...
Unpacking nodejs (4.2.6~dfsg-1ubuntu4.1) ...
Selecting previously unselected package node-async.
Preparing to unpack .../node-async_0.8.0-1_all.deb ...
Unpacking node-async (0.8.0-1) ...
Selecting previously unselected package node-node-uuid.
Preparing to unpack .../node-node-uuid_1.4.0-1_all.deb ...
Unpacking node-node-uuid (1.4.0-1) ...
Selecting previously unselected package node-underscore.
Preparing to unpack .../node-underscore_1.7.0~dfsg-1ubuntu1_all.deb ...
Unpacking node-underscore (1.7.0~dfsg-1ubuntu1) ...
Selecting previously unselected package rename.
Preparing to unpack .../archives/rename_0.20-4_all.deb ...
Unpacking rename (0.20-4) ...
Selecting previously unselected package libjs-inherits.
Preparing to unpack .../libjs-inherits_2.0.1-3_all.deb ...
Unpacking libjs-inherits (2.0.1-3) ...
Selecting previously unselected package node-abbrev.
Preparing to unpack .../node-abbrev_1.0.5-2_all.deb ...
Unpacking node-abbrev (1.0.5-2) ...
Selecting previously unselected package node-ansi.
Preparing to unpack .../node-ansi_0.3.0-2_all.deb ...
Unpacking node-ansi (0.3.0-2) ...
Selecting previously unselected package node-ansi-color-table.
Preparing to unpack .../node-ansi-color-table_1.0.0-1_all.deb ...
Unpacking node-ansi-color-table (1.0.0-1) ...
Selecting previously unselected package node-archy.
Preparing to unpack .../node-archy_0.0.2-1_all.deb ...
Unpacking node-archy (0.0.2-1) ...
Selecting previously unselected package node-inherits.
Preparing to unpack .../node-inherits_2.0.1-3_all.deb ...
Unpacking node-inherits (2.0.1-3) ...
Selecting previously unselected package node-block-stream.
Preparing to unpack .../node-block-stream_0.0.7-1_all.deb ...
Unpacking node-block-stream (0.0.7-1) ...
Selecting previously unselected package node-delayed-stream.
Preparing to unpack .../node-delayed-stream_0.0.5-1_all.deb ...
Unpacking node-delayed-stream (0.0.5-1) ...
Selecting previously unselected package node-combined-stream.
Preparing to unpack .../node-combined-stream_0.0.5-1_all.deb ...
Unpacking node-combined-stream (0.0.5-1) ...
Selecting previously unselected package node-cookie-jar.
Preparing to unpack .../node-cookie-jar_0.3.1-1_all.deb ...
Unpacking node-cookie-jar (0.3.1-1) ...
Selecting previously unselected package node-forever-agent.
Preparing to unpack .../node-forever-agent_0.5.1-1_all.deb ...
Unpacking node-forever-agent (0.5.1-1) ...
Selecting previously unselected package node-mime.
Preparing to unpack .../node-mime_1.3.4-1_all.deb ...
Unpacking node-mime (1.3.4-1) ...
Selecting previously unselected package node-form-data.
Preparing to unpack .../node-form-data_0.1.0-1_all.deb ...
Unpacking node-form-data (0.1.0-1) ...
Selecting previously unselected package node-rimraf.
Preparing to unpack .../node-rimraf_2.2.8-1_all.deb ...
Unpacking node-rimraf (2.2.8-1) ...
Selecting previously unselected package node-mkdirp.
Preparing to unpack .../node-mkdirp_0.5.0-1_all.deb ...
Unpacking node-mkdirp (0.5.0-1) ...
Selecting previously unselected package node-graceful-fs.
Preparing to unpack .../node-graceful-fs_3.0.2-1_all.deb ...
Unpacking node-graceful-fs (3.0.2-1) ...
Selecting previously unselected package node-fstream.
Preparing to unpack .../node-fstream_0.1.24-1_all.deb ...
Unpacking node-fstream (0.1.24-1) ...
Selecting previously unselected package node-lru-cache.
Preparing to unpack .../node-lru-cache_2.3.1-1_all.deb ...
Unpacking node-lru-cache (2.3.1-1) ...
Selecting previously unselected package node-sigmund.
Preparing to unpack .../node-sigmund_1.0.0-1_all.deb ...
Unpacking node-sigmund (1.0.0-1) ...
Selecting previously unselected package node-minimatch.
Preparing to unpack .../node-minimatch_1.0.0-1_all.deb ...
Unpacking node-minimatch (1.0.0-1) ...
Selecting previously unselected package node-fstream-ignore.
Preparing to unpack .../node-fstream-ignore_0.0.6-2_all.deb ...
Unpacking node-fstream-ignore (0.0.6-2) ...
Selecting previously unselected package node-github-url-from-git.
Preparing to unpack .../node-github-url-from-git_1.1.1-1_all.deb ...
Unpacking node-github-url-from-git (1.1.1-1) ...
Selecting previously unselected package node-once.
Preparing to unpack .../node-once_1.1.1-1_all.deb ...
Unpacking node-once (1.1.1-1) ...
Selecting previously unselected package node-glob.
Preparing to unpack .../node-glob_4.0.5-1_all.deb ...
Unpacking node-glob (4.0.5-1) ...
Selecting previously unselected package nodejs-dev.
Preparing to unpack .../nodejs-dev_4.2.6~dfsg-1ubuntu4.1_amd64.deb ...
Unpacking nodejs-dev (4.2.6~dfsg-1ubuntu4.1) ...
Selecting previously unselected package node-nopt.
Preparing to unpack .../node-nopt_3.0.1-1_all.deb ...
Unpacking node-nopt (3.0.1-1) ...
Selecting previously unselected package node-npmlog.
Preparing to unpack .../node-npmlog_0.0.4-1_all.deb ...
Unpacking node-npmlog (0.0.4-1) ...
Selecting previously unselected package node-osenv.
Preparing to unpack .../node-osenv_0.1.0-1_all.deb ...
Unpacking node-osenv (0.1.0-1) ...
Selecting previously unselected package node-tunnel-agent.
Preparing to unpack .../node-tunnel-agent_0.3.1-1_all.deb ...
Unpacking node-tunnel-agent (0.3.1-1) ...
Selecting previously unselected package node-json-stringify-safe.
Preparing to unpack .../node-json-stringify-safe_5.0.0-1_all.deb ...
Unpacking node-json-stringify-safe (5.0.0-1) ...
Selecting previously unselected package node-qs.
Preparing to unpack .../node-qs_2.2.4-1_all.deb ...
Unpacking node-qs (2.2.4-1) ...
Selecting previously unselected package node-request.
Preparing to unpack .../node-request_2.26.1-1_all.deb ...
Unpacking node-request (2.26.1-1) ...
Selecting previously unselected package node-semver.
Preparing to unpack .../node-semver_2.1.0-2_all.deb ...
Unpacking node-semver (2.1.0-2) ...
Selecting previously unselected package node-tar.
Preparing to unpack .../node-tar_1.0.3-2_all.deb ...
Unpacking node-tar (1.0.3-2) ...
Selecting previously unselected package node-which.
Preparing to unpack .../node-which_1.0.5-2_all.deb ...
Unpacking node-which (1.0.5-2) ...
Selecting previously unselected package node-gyp.
Preparing to unpack .../node-gyp_3.0.3-2ubuntu1_all.deb ...
Unpacking node-gyp (3.0.3-2ubuntu1) ...
Selecting previously unselected package node-ini.
Preparing to unpack .../node-ini_1.1.0-1_all.deb ...
Unpacking node-ini (1.1.0-1) ...
Selecting previously unselected package node-lockfile.
Preparing to unpack .../node-lockfile_0.4.1-1_all.deb ...
Unpacking node-lockfile (0.4.1-1) ...
Selecting previously unselected package node-mute-stream.
Preparing to unpack .../node-mute-stream_0.0.4-1_all.deb ...
Unpacking node-mute-stream (0.0.4-1) ...
Selecting previously unselected package node-normalize-package-data.
Preparing to unpack .../node-normalize-package-data_0.2.2-1_all.deb ...
Unpacking node-normalize-package-data (0.2.2-1) ...
Selecting previously unselected package node-read.
Preparing to unpack .../node-read_1.0.5-1_all.deb ...
Unpacking node-read (1.0.5-1) ...
Selecting previously unselected package node-read-package-json.
Preparing to unpack .../node-read-package-json_1.2.4-1_all.deb ...
Unpacking node-read-package-json (1.2.4-1) ...
Selecting previously unselected package node-retry.
Preparing to unpack .../node-retry_0.6.0-1_all.deb ...
Unpacking node-retry (0.6.0-1) ...
Selecting previously unselected package node-sha.
Preparing to unpack .../node-sha_1.2.3-1_all.deb ...
Unpacking node-sha (1.2.3-1) ...
Selecting previously unselected package node-slide.
Preparing to unpack .../node-slide_1.1.4-1_all.deb ...
Unpacking node-slide (1.1.4-1) ...
Selecting previously unselected package npm.
Preparing to unpack .../npm_3.5.2-0ubuntu4_all.deb ...
Unpacking npm (3.5.2-0ubuntu4) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Processing triggers for systemd (229-4ubuntu16) ...
Setting up libatm1:amd64 (1:2.5.1-1.5) ...
Setting up libmnl0:amd64 (1.0.3-5) ...
Setting up libpopt0:amd64 (1.16-10) ...
Setting up libgdbm3:amd64 (1.8.3-13.1) ...
Setting up libxau6:amd64 (1:1.0.8-1) ...
Setting up libxdmcp6:amd64 (1:1.1.2-1.1) ...
Setting up libxcb1:amd64 (1.11.1-1ubuntu1) ...
Setting up libx11-data (2:1.6.3-1ubuntu2) ...
Setting up libx11-6:amd64 (2:1.6.3-1ubuntu2) ...
Setting up libxext6:amd64 (2:1.3.3-1) ...
Setting up perl-modules-5.22 (5.22.1-9) ...
Setting up libperl5.22:amd64 (5.22.1-9) ...
Setting up perl (5.22.1-9) ...
update-alternatives: using /usr/bin/prename to provide /usr/bin/rename (rename) in auto mode
Setting up mime-support (3.59ubuntu1) ...
Setting up libexpat1:amd64 (2.1.0-7ubuntu0.16.04.2) ...
Setting up libffi6:amd64 (3.2.1-4) ...
Setting up libsqlite3-0:amd64 (3.11.0-1ubuntu1) ...
Setting up libssl1.0.0:amd64 (1.0.2g-1ubuntu4.6) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Setting up libpython2.7-stdlib:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Setting up python2.7 (2.7.12-1ubuntu0~16.04.1) ...
Setting up libpython-stdlib:amd64 (2.7.11-1) ...
Setting up python (2.7.11-1) ...
Setting up libgmp10:amd64 (2:6.1.0+dfsg-2) ...
Setting up libmpfr4:amd64 (3.1.4-1) ...
Setting up libmpc3:amd64 (1.0.3-1) ...
Setting up bzip2 (1.0.6-8) ...
Setting up libmagic1:amd64 (1:5.25-2ubuntu1) ...
Setting up file (1:5.25-2ubuntu1) ...
Setting up iproute2 (4.3.0-1ubuntu3) ...
Setting up ifupdown (0.8.10ubuntu1.2) ...
Creating /etc/network/interfaces.
Setting up libisc-export160 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Setting up libdns-export162 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Setting up isc-dhcp-client (4.3.3-5ubuntu12.6) ...
Setting up isc-dhcp-common (4.3.3-5ubuntu12.6) ...
Setting up less (481-2.1ubuntu0.1) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Setting up libbsd0:amd64 (0.8.2-1) ...
Setting up libnettle6:amd64 (3.2-1ubuntu0.16.04.1) ...
Setting up libhogweed4:amd64 (3.2-1ubuntu0.16.04.1) ...
Setting up libidn11:amd64 (1.32-3ubuntu1.1) ...
Setting up libp11-kit0:amd64 (0.23.2-5~ubuntu16.04.1) ...
Setting up libtasn1-6:amd64 (4.7-3ubuntu0.16.04.1) ...
Setting up libgnutls30:amd64 (3.4.10-4ubuntu1.2) ...
Setting up libxtables11:amd64 (1.6.0-2ubuntu3) ...
Setting up netbase (5.3) ...
Setting up openssl (1.0.2g-1ubuntu4.6) ...
Setting up ca-certificates (20160104ubuntu1) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Setting up krb5-locales (1.13.2+dfsg-5ubuntu2) ...
Setting up libroken18-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libasn1-8-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libkrb5support0:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libk5crypto3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libkeyutils1:amd64 (1.5.9-8ubuntu1) ...
Setting up libkrb5-3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libgssapi-krb5-2:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libhcrypto4-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libheimbase1-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libwind0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libhx509-5-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libkrb5-26-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libheimntlm0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libgssapi3-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libsasl2-modules-db:amd64 (2.1.26.dfsg1-14build1) ...
Setting up libsasl2-2:amd64 (2.1.26.dfsg1-14build1) ...
Setting up libldap-2.4-2:amd64 (2.4.42+dfsg-2ubuntu3.1) ...
Setting up librtmp1:amd64 (2.4+20151223.gitfa8646d-1build1) ...
Setting up libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.2) ...
Setting up libedit2:amd64 (3.1-20150325-1ubuntu2) ...
Setting up libicu55:amd64 (55.1-7) ...
Setting up libsasl2-modules:amd64 (2.1.26.dfsg1-14build1) ...
Setting up libxmuu1:amd64 (2:1.1.2-2) ...
Setting up manpages (4.04-2) ...
Setting up openssh-client (1:7.2p2-4ubuntu2.1) ...
Setting up rsync (3.1.1-3ubuntu1) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of restart.
Setting up xauth (1:1.0.9-1ubuntu2) ...
Setting up binutils (2.26.1-1ubuntu1~16.04.3) ...
Setting up libc-dev-bin (2.23-0ubuntu5) ...
Setting up linux-libc-dev:amd64 (4.4.0-66.87) ...
Setting up libc6-dev:amd64 (2.23-0ubuntu5) ...
Setting up libisl15:amd64 (0.16.1-1) ...
Setting up cpp-5 (5.4.0-6ubuntu1~16.04.4) ...
Setting up cpp (4:5.3.1-1ubuntu1) ...
Setting up libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libgomp1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libitm1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libatomic1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libasan2:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up liblsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libtsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libubsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libmpx0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up gcc-5 (5.4.0-6ubuntu1~16.04.4) ...
Setting up gcc (4:5.3.1-1ubuntu1) ...
Setting up libstdc++-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up g++-5 (5.4.0-6ubuntu1~16.04.4) ...
Setting up g++ (4:5.3.1-1ubuntu1) ...
update-alternatives: using /usr/bin/g++ to provide /usr/bin/c++ (c++) in auto mode
Setting up make (4.1-6) ...
Setting up libdpkg-perl (1.18.4ubuntu1.1) ...
Setting up xz-utils (5.1.1alpha+20120614-2ubuntu2) ...
update-alternatives: using /usr/bin/xz to provide /usr/bin/lzma (lzma) in auto mode
Setting up patch (2.7.5-1) ...
Setting up dpkg-dev (1.18.4ubuntu1.1) ...
Setting up build-essential (12.1ubuntu2) ...
Setting up libfakeroot:amd64 (1.20.2-1ubuntu1) ...
Setting up fakeroot (1.20.2-1ubuntu1) ...
update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode
Setting up liberror-perl (0.17-1.2) ...
Setting up git-man (1:2.7.4-0ubuntu1) ...
Setting up git (1:2.7.4-0ubuntu1) ...
Setting up python-pkg-resources (20.7.0-1) ...
Setting up gyp (0.1+20150913git1f374df9-1ubuntu1) ...
Setting up javascript-common (11) ...
Setting up libalgorithm-diff-perl (1.19.03-1) ...
Setting up libalgorithm-diff-xs-perl (0.04-4build1) ...
Setting up libalgorithm-merge-perl (0.08-3) ...
Setting up libfile-fcntllock-perl (0.22-3) ...
Setting up libjs-jquery (1.11.3+dfsg-4) ...
Setting up libjs-node-uuid (1.4.0-1) ...
Setting up libjs-underscore (1.7.0~dfsg-1ubuntu1) ...
Setting up zlib1g-dev:amd64 (1:1.2.8.dfsg-2ubuntu4) ...
Setting up libssl-dev:amd64 (1.0.2g-1ubuntu4.6) ...
Setting up libssl-doc (1.0.2g-1ubuntu4.6) ...
Setting up libuv1:amd64 (1.8.0-1) ...
Setting up libuv1-dev:amd64 (1.8.0-1) ...
Setting up manpages-dev (4.04-2) ...
Setting up nodejs (4.2.6~dfsg-1ubuntu4.1) ...
update-alternatives: using /usr/bin/nodejs to provide /usr/bin/js (js) in auto mode
Setting up node-async (0.8.0-1) ...
Setting up node-node-uuid (1.4.0-1) ...
Setting up node-underscore (1.7.0~dfsg-1ubuntu1) ...
Setting up rename (0.20-4) ...
update-alternatives: using /usr/bin/file-rename to provide /usr/bin/rename (rename) in auto mode
Setting up libjs-inherits (2.0.1-3) ...
Setting up node-abbrev (1.0.5-2) ...
Setting up node-ansi (0.3.0-2) ...
Setting up node-ansi-color-table (1.0.0-1) ...
Setting up node-archy (0.0.2-1) ...
Setting up node-inherits (2.0.1-3) ...
Setting up node-block-stream (0.0.7-1) ...
Setting up node-delayed-stream (0.0.5-1) ...
Setting up node-combined-stream (0.0.5-1) ...
Setting up node-cookie-jar (0.3.1-1) ...
Setting up node-forever-agent (0.5.1-1) ...
Setting up node-mime (1.3.4-1) ...
Setting up node-form-data (0.1.0-1) ...
Setting up node-rimraf (2.2.8-1) ...
Setting up node-mkdirp (0.5.0-1) ...
Setting up node-graceful-fs (3.0.2-1) ...
Setting up node-fstream (0.1.24-1) ...
Setting up node-lru-cache (2.3.1-1) ...
Setting up node-sigmund (1.0.0-1) ...
Setting up node-minimatch (1.0.0-1) ...
Setting up node-fstream-ignore (0.0.6-2) ...
Setting up node-github-url-from-git (1.1.1-1) ...
Setting up node-once (1.1.1-1) ...
Setting up node-glob (4.0.5-1) ...
Setting up nodejs-dev (4.2.6~dfsg-1ubuntu4.1) ...
Setting up node-nopt (3.0.1-1) ...
Setting up node-npmlog (0.0.4-1) ...
Setting up node-osenv (0.1.0-1) ...
Setting up node-tunnel-agent (0.3.1-1) ...
Setting up node-json-stringify-safe (5.0.0-1) ...
Setting up node-qs (2.2.4-1) ...
Setting up node-request (2.26.1-1) ...
Setting up node-semver (2.1.0-2) ...
Setting up node-tar (1.0.3-2) ...
Setting up node-which (1.0.5-2) ...
Setting up node-gyp (3.0.3-2ubuntu1) ...
Setting up node-ini (1.1.0-1) ...
Setting up node-lockfile (0.4.1-1) ...
Setting up node-mute-stream (0.0.4-1) ...
Setting up node-normalize-package-data (0.2.2-1) ...
Setting up node-read (1.0.5-1) ...
Setting up node-read-package-json (1.2.4-1) ...
Setting up node-retry (0.6.0-1) ...
Setting up node-sha (1.2.3-1) ...
Setting up node-slide (1.1.4-1) ...
Setting up npm (3.5.2-0ubuntu4) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Processing triggers for systemd (229-4ubuntu16) ...
Processing triggers for ca-certificates (20160104ubuntu1) ...
Updating certificates in /etc/ssl/certs...
173 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.

Now we need to apply a workaround for Ubuntu that I once found on StackOverflow:

(container)# ln -s nodejs /usr/bin/node
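
To make this step idempotent and verifiable, here is a small sketch of the same workaround. It runs in a scratch directory so it does not touch /usr/bin; inside the container, the target directory is /usr/bin:

```shell
# Sketch of the workaround, demonstrated in a scratch directory instead of
# /usr/bin: link `node` to `nodejs` only if `node` does not exist yet.
bindir=$(mktemp -d)
touch "$bindir/nodejs" && chmod +x "$bindir/nodejs"   # stand-in for Ubuntu's nodejs binary
[ -e "$bindir/node" ] || ln -s nodejs "$bindir/node"  # same relative symlink as above
readlink "$bindir/node"                               # -> nodejs
```

The `[ -e ... ] ||` guard means the command can be re-run safely, which the plain `ln -s` above cannot (it fails if the link already exists).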

Let us clone the Wishtack Example:

(container)# git clone https://github.com/wishtack/wt-protractor-boilerplate
Cloning into 'wt-protractor-boilerplate'...
remote: Counting objects: 78, done.
remote: Compressing objects: 100% (40/40), done.
remote: Total 78 (delta 27), reused 78 (delta 27), pack-reused 0
Unpacking objects: 100% (78/78), done.
Checking connectivity... done.

The next command downloads and installs the dependencies:

(container)# cd wt-protractor-boilerplate; npm install


npm WARN deprecated dejavu@0.4.8: No
npm WARN deprecated minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated minimatch@0.2.14: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated graceful-fs@1.2.3: graceful-fs v3.0.0 and before will fail on node releases >= v7.0. Please update to graceful-fs@^4.0.0 as soon as possible. Use 'npm ls graceful-fs' to find it in the tree.
npm WARN deprecated minimatch@0.3.0: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated node-uuid@1.4.7: use uuid module instead
npm WARN deprecated tough-cookie@2.2.2: ReDoS vulnerability parsing Set-Cookie https://nodesecurity.io/advisories/130
npm WARN deprecated minimatch@0.4.0: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN prefer global jasmine-node@2.0.0-beta4 should be installed with -g

> dejavu@0.4.8 postinstall /app/wt-protractor-boilerplate/node_modules/dejavu
> node bin/post_install.js

Saving runtime configuration in /app/wt-protractor-boilerplate/.dejavurc
wt-protractor-boilerplate@0.2.1 /app/wt-protractor-boilerplate
+-- dejavu@0.4.8
| +-- amdefine@0.1.1
| `-- mout@0.9.1
+-- gulp@3.9.1
| +-- archy@1.0.0
| +-- chalk@1.1.3
| | +-- ansi-styles@2.2.1
| | +-- escape-string-regexp@1.0.5
| | +-- has-ansi@2.0.0
| | | `-- ansi-regex@2.1.1
| | +-- strip-ansi@3.0.1
| | `-- supports-color@2.0.0
| +-- deprecated@0.0.1
| +-- gulp-util@3.0.8
| | +-- array-differ@1.0.0
| | +-- array-uniq@1.0.3
| | +-- beeper@1.1.1
| | +-- dateformat@2.0.0
| | +-- fancy-log@1.3.0
| | | `-- time-stamp@1.0.1
| | +-- gulplog@1.0.0
| | | `-- glogg@1.0.0
| | +-- has-gulplog@0.1.0
| | | `-- sparkles@1.0.0
| | +-- lodash._reescape@3.0.0
| | +-- lodash._reevaluate@3.0.0
| | +-- lodash._reinterpolate@3.0.0
| | +-- lodash.template@3.6.2
| | | +-- lodash._basecopy@3.0.1
| | | +-- lodash._basetostring@3.0.1
| | | +-- lodash._basevalues@3.0.0
| | | +-- lodash._isiterateecall@3.0.9
| | | +-- lodash.escape@3.2.0
| | | | `-- lodash._root@3.0.1
| | | +-- lodash.keys@3.1.2
| | | | +-- lodash._getnative@3.9.1
| | | | +-- lodash.isarguments@3.1.0
| | | | `-- lodash.isarray@3.0.4
| | | +-- lodash.restparam@3.6.1
| | | `-- lodash.templatesettings@3.1.1
| | +-- multipipe@0.1.2
| | | `-- duplexer2@0.0.2
| | | `-- readable-stream@1.1.14
| | +-- object-assign@3.0.0
| | +-- replace-ext@0.0.1
| | +-- through2@2.0.3
| | | +-- readable-stream@2.2.3
| | | | +-- buffer-shims@1.0.0
| | | | +-- core-util-is@1.0.2
| | | | +-- isarray@1.0.0
| | | | +-- process-nextick-args@1.0.7
| | | | +-- string_decoder@0.10.31
| | | | `-- util-deprecate@1.0.2
| | | `-- xtend@4.0.1
| | `-- vinyl@0.5.3
| | +-- clone@1.0.2
| | `-- clone-stats@0.0.1
| +-- interpret@1.0.1
| +-- liftoff@2.3.0
| | +-- extend@3.0.0
| | +-- findup-sync@0.4.3
| | | +-- detect-file@0.1.0
| | | | `-- fs-exists-sync@0.1.0
| | | +-- is-glob@2.0.1
| | | | `-- is-extglob@1.0.0
| | | +-- micromatch@2.3.11
| | | | +-- arr-diff@2.0.0
| | | | | `-- arr-flatten@1.0.1
| | | | +-- array-unique@0.2.1
| | | | +-- braces@1.8.5
| | | | | +-- expand-range@1.8.2
| | | | | | `-- fill-range@2.2.3
| | | | | | +-- is-number@2.1.0
| | | | | | +-- isobject@2.1.0
| | | | | | | `-- isarray@1.0.0
| | | | | | +-- randomatic@1.1.6
| | | | | | `-- repeat-string@1.6.1
| | | | | +-- preserve@0.2.0
| | | | | `-- repeat-element@1.1.2
| | | | +-- expand-brackets@0.1.5
| | | | | `-- is-posix-bracket@0.1.1
| | | | +-- extglob@0.3.2
| | | | +-- filename-regex@2.0.0
| | | | +-- kind-of@3.1.0
| | | | | `-- is-buffer@1.1.5
| | | | +-- normalize-path@2.0.1
| | | | +-- object.omit@2.0.1
| | | | | +-- for-own@0.1.5
| | | | | | `-- for-in@1.0.2
| | | | | `-- is-extendable@0.1.1
| | | | +-- parse-glob@3.0.4
| | | | | +-- glob-base@0.3.0
| | | | | | `-- glob-parent@2.0.0
| | | | | `-- is-dotfile@1.0.2
| | | | `-- regex-cache@0.4.3
| | | | +-- is-equal-shallow@0.1.3
| | | | `-- is-primitive@2.0.0
| | | `-- resolve-dir@0.1.1
| | | `-- global-modules@0.2.3
| | | +-- global-prefix@0.1.5
| | | | +-- homedir-polyfill@1.0.1
| | | | | `-- parse-passwd@1.0.0
| | | | +-- ini@1.3.4
| | | | `-- which@1.2.12
| | | | `-- isexe@1.1.2
| | | `-- is-windows@0.2.0
| | +-- fined@1.0.2
| | | +-- expand-tilde@1.2.2
| | | +-- lodash.assignwith@4.2.0
| | | +-- lodash.isempty@4.4.0
| | | +-- lodash.pick@4.4.0
| | | `-- parse-filepath@1.0.1
| | | +-- is-absolute@0.2.6
| | | | `-- is-relative@0.2.1
| | | | `-- is-unc-path@0.1.2
| | | | `-- unc-path-regex@0.1.2
| | | +-- map-cache@0.2.2
| | | `-- path-root@0.1.1
| | | `-- path-root-regex@0.1.2
| | +-- flagged-respawn@0.3.2
| | +-- lodash.isplainobject@4.0.6
| | +-- lodash.isstring@4.0.1
| | +-- lodash.mapvalues@4.6.0
| | +-- rechoir@0.6.2
| | `-- resolve@1.3.2
| | `-- path-parse@1.0.5
| +-- minimist@1.2.0
| +-- orchestrator@0.3.8
| | +-- end-of-stream@0.1.5
| | | `-- once@1.3.3
| | | `-- wrappy@1.0.2
| | +-- sequencify@0.0.7
| | `-- stream-consume@0.1.0
| +-- pretty-hrtime@1.0.3
| +-- semver@4.3.6
| +-- tildify@1.2.0
| | `-- os-homedir@1.0.2
| +-- v8flags@2.0.11
| | `-- user-home@1.1.1
| `-- vinyl-fs@0.3.14
| +-- defaults@1.0.3
| +-- glob-stream@3.1.18
| | +-- glob@4.5.3
| | +-- glob2base@0.0.12
| | | `-- find-index@0.1.1
| | +-- minimatch@2.0.10
| | | `-- brace-expansion@1.1.6
| | | +-- balanced-match@0.4.2
| | | `-- concat-map@0.0.1
| | +-- ordered-read-streams@0.1.0
| | +-- through2@0.6.5
| | | `-- readable-stream@1.0.34
| | `-- unique-stream@1.0.0
| +-- glob-watcher@0.0.6
| +-- graceful-fs@3.0.11
| | `-- natives@1.1.0
| +-- mkdirp@0.5.1
| | `-- minimist@0.0.8
| +-- strip-bom@1.0.0
| | +-- first-chunk-stream@1.0.0
| | `-- is-utf8@0.2.1
| +-- through2@0.6.5
| | `-- readable-stream@1.0.34
| | `-- isarray@0.0.1
| `-- vinyl@0.4.6
| `-- clone@0.2.0
+-- node.extend@1.1.5
| `-- is@3.2.1
+-- require-dir@0.3.0
+-- underscore@1.8.3
+-- wt-protractor-runner@0.4.1
| +-- async@2.0.0-rc.6
| | `-- lodash@4.17.4
| +-- jasmine-node@2.0.0-beta4 (git://github.com/mhevery/jasmine-node.git#80459688678e3f302a6a2af9625034ae03bbc1d7)
| | +-- coffee-script@1.7.1
| | | `-- mkdirp@0.3.5
| | +-- gaze@0.5.2
| | | `-- globule@0.1.0
| | | +-- glob@3.1.21
| | | | +-- graceful-fs@1.2.3
| | | | `-- inherits@1.0.2
| | | +-- lodash@1.0.2
| | | `-- minimatch@0.2.14
| | +-- UNMET PEER DEPENDENCY grunt@>=0.4
| | +-- grunt-exec@0.4.7
| | +-- jasmine-growl-reporter@0.2.1
| | | `-- growl@1.7.0
| | +-- jasmine-reporters@0.4.1 (git://github.com/larrymyers/jasmine-reporters.git#2c7242dc11c15c2f156169bc704798568b8cb50d)
| | | `-- mkdirp@0.3.5
| | +-- minimist@0.0.10
| | +-- mkdirp@0.3.5
| | +-- underscore@1.6.0
| | +-- walkdir@0.0.11
| | `-- xml2js@0.4.17
| | +-- sax@1.2.2
| | `-- xmlbuilder@4.2.1
| | `-- lodash@4.17.4
| +-- protractor@3.3.0
| | +-- adm-zip@0.4.7
| | +-- glob@6.0.4
| | | +-- inflight@1.0.6
| | | +-- inherits@2.0.3
| | | `-- path-is-absolute@1.0.1
| | +-- jasmine@2.4.1
| | | +-- exit@0.1.2
| | | +-- glob@3.2.11
| | | | `-- minimatch@0.3.0
| | | `-- jasmine-core@2.4.1
| | +-- jasminewd2@0.0.9
| | +-- optimist@0.6.1
| | | +-- minimist@0.0.10
| | | `-- wordwrap@0.0.3
| | +-- q@1.4.1
| | +-- request@2.67.0
| | | +-- aws-sign2@0.6.0
| | | +-- bl@1.0.3
| | | | `-- readable-stream@2.0.6
| | | | `-- isarray@1.0.0
| | | +-- caseless@0.11.0
| | | +-- combined-stream@1.0.5
| | | | `-- delayed-stream@1.0.0
| | | +-- forever-agent@0.6.1
| | | +-- form-data@1.0.1
| | | | `-- async@2.1.5
| | | | `-- lodash@4.17.4
| | | +-- har-validator@2.0.6
| | | | +-- commander@2.9.0
| | | | | `-- graceful-readlink@1.0.1
| | | | +-- is-my-json-valid@2.16.0
| | | | | +-- generate-function@2.0.0
| | | | | +-- generate-object-property@1.2.0
| | | | | | `-- is-property@1.0.2
| | | | | `-- jsonpointer@4.0.1
| | | | `-- pinkie-promise@2.0.1
| | | | `-- pinkie@2.0.4
| | | +-- hawk@3.1.3
| | | | +-- boom@2.10.1
| | | | +-- cryptiles@2.0.5
| | | | +-- hoek@2.16.3
| | | | `-- sntp@1.0.9
| | | +-- http-signature@1.1.1
| | | | +-- assert-plus@0.2.0
| | | | +-- jsprim@1.3.1
| | | | | +-- extsprintf@1.0.2
| | | | | +-- json-schema@0.2.3
| | | | | `-- verror@1.3.6
| | | | `-- sshpk@1.11.0
| | | | +-- asn1@0.2.3
| | | | +-- assert-plus@1.0.0
| | | | +-- bcrypt-pbkdf@1.0.1
| | | | +-- dashdash@1.14.1
| | | | | `-- assert-plus@1.0.0
| | | | +-- ecc-jsbn@0.1.1
| | | | +-- getpass@0.1.6
| | | | | `-- assert-plus@1.0.0
| | | | +-- jodid25519@1.0.2
| | | | +-- jsbn@0.1.1
| | | | `-- tweetnacl@0.14.5
| | | +-- is-typedarray@1.0.0
| | | +-- isstream@0.1.2
| | | +-- json-stringify-safe@5.0.1
| | | +-- mime-types@2.1.14
| | | | `-- mime-db@1.26.0
| | | +-- node-uuid@1.4.7
| | | +-- oauth-sign@0.8.2
| | | +-- qs@5.2.1
| | | +-- stringstream@0.0.5
| | | +-- tough-cookie@2.2.2
| | | `-- tunnel-agent@0.4.3
| | +-- saucelabs@1.0.1
| | | `-- https-proxy-agent@1.0.0
| | | +-- agent-base@2.0.1
| | | | `-- semver@5.0.3
| | | `-- debug@2.6.2
| | | `-- ms@0.7.2
| | +-- selenium-webdriver@2.52.0
| | | +-- adm-zip@0.4.4
| | | +-- rimraf@2.6.1
| | | | `-- glob@7.1.1
| | | | +-- fs.realpath@1.0.0
| | | | `-- minimatch@3.0.3
| | | +-- tmp@0.0.24
| | | +-- ws@1.1.4
| | | | +-- options@0.0.6
| | | | `-- ultron@1.0.2
| | | `-- xml2js@0.4.4
| | | `-- sax@0.6.1
| | `-- source-map-support@0.4.11
| | `-- source-map@0.5.6
| +-- string-format@0.5.0
| `-- temp@0.8.3
| +-- os-tmpdir@1.0.2
| `-- rimraf@2.2.8
`-- wt-protractor-utils@1.0.0
 +-- jasmine-node@1.14.5
 | +-- gaze@0.3.4
 | | +-- fileset@0.1.8
 | | | +-- glob@3.2.11
 | | | | `-- minimatch@0.3.0
 | | | `-- minimatch@0.4.0
 | | `-- minimatch@0.2.14
 | | +-- lru-cache@2.7.3
 | | `-- sigmund@1.0.1
 | +-- jasmine-growl-reporter@0.0.3
 | +-- jasmine-reporters@1.0.2
 | +-- mkdirp@0.3.5
 | `-- requirejs@2.3.3
 +-- named-parameters@0.0.3
 `-- node.extend@1.1.3
 `-- is@2.1.0

npm WARN grunt-exec@0.4.7 requires a peer of grunt@>=0.4 but none was installed.

Let us install Gulp globally:

(container)# npm install -g gulp
npm WARN deprecated minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated minimatch@0.2.14: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated graceful-fs@1.2.3: graceful-fs v3.0.0 and before will fail on node releases >= v7.0. Please update to graceful-fs@^4.0.0 as soon as possible. Use 'npm ls graceful-fs' to find it in the tree.
/usr/local/bin/gulp -> /usr/local/lib/node_modules/gulp/bin/gulp.js
/usr/local/lib
`-- gulp@3.9.1
 +-- archy@1.0.0
 +-- chalk@1.1.3
 | +-- ansi-styles@2.2.1
 | +-- escape-string-regexp@1.0.5
 | +-- has-ansi@2.0.0
 | | `-- ansi-regex@2.1.1
 | +-- strip-ansi@3.0.1
 | `-- supports-color@2.0.0
 +-- deprecated@0.0.1
 +-- gulp-util@3.0.8
 | +-- array-differ@1.0.0
 | +-- array-uniq@1.0.3
 | +-- beeper@1.1.1
 | +-- dateformat@2.0.0
 | +-- fancy-log@1.3.0
 | | `-- time-stamp@1.0.1
 | +-- gulplog@1.0.0
 | | `-- glogg@1.0.0
 | +-- has-gulplog@0.1.0
 | | `-- sparkles@1.0.0
 | +-- lodash._reescape@3.0.0
 | +-- lodash._reevaluate@3.0.0
 | +-- lodash._reinterpolate@3.0.0
 | +-- lodash.template@3.6.2
 | | +-- lodash._basecopy@3.0.1
 | | +-- lodash._basetostring@3.0.1
 | | +-- lodash._basevalues@3.0.0
 | | +-- lodash._isiterateecall@3.0.9
 | | +-- lodash.escape@3.2.0
 | | | `-- lodash._root@3.0.1
 | | +-- lodash.keys@3.1.2
 | | | +-- lodash._getnative@3.9.1
 | | | +-- lodash.isarguments@3.1.0
 | | | `-- lodash.isarray@3.0.4
 | | +-- lodash.restparam@3.6.1
 | | `-- lodash.templatesettings@3.1.1
 | +-- multipipe@0.1.2
 | | `-- duplexer2@0.0.2
 | | `-- readable-stream@1.1.14
 | +-- object-assign@3.0.0
 | +-- replace-ext@0.0.1
 | +-- through2@2.0.3
 | | +-- readable-stream@2.2.3
 | | | +-- buffer-shims@1.0.0
 | | | +-- core-util-is@1.0.2
 | | | +-- inherits@2.0.3
 | | | +-- isarray@1.0.0
 | | | +-- process-nextick-args@1.0.7
 | | | +-- string_decoder@0.10.31
 | | | `-- util-deprecate@1.0.2
 | | `-- xtend@4.0.1
 | `-- vinyl@0.5.3
 | +-- clone@1.0.2
 | `-- clone-stats@0.0.1
 +-- interpret@1.0.1
 +-- liftoff@2.3.0
 | +-- extend@3.0.0
 | +-- findup-sync@0.4.3
 | | +-- detect-file@0.1.0
 | | | `-- fs-exists-sync@0.1.0
 | | +-- is-glob@2.0.1
 | | | `-- is-extglob@1.0.0
 | | +-- micromatch@2.3.11
 | | | +-- arr-diff@2.0.0
 | | | | `-- arr-flatten@1.0.1
 | | | +-- array-unique@0.2.1
 | | | +-- braces@1.8.5
 | | | | +-- expand-range@1.8.2
 | | | | | `-- fill-range@2.2.3
 | | | | | +-- is-number@2.1.0
 | | | | | +-- isobject@2.1.0
 | | | | | | `-- isarray@1.0.0
 | | | | | +-- randomatic@1.1.6
 | | | | | `-- repeat-string@1.6.1
 | | | | +-- preserve@0.2.0
 | | | | `-- repeat-element@1.1.2
 | | | +-- expand-brackets@0.1.5
 | | | | `-- is-posix-bracket@0.1.1
 | | | +-- extglob@0.3.2
 | | | +-- filename-regex@2.0.0
 | | | +-- kind-of@3.1.0
 | | | | `-- is-buffer@1.1.5
 | | | +-- normalize-path@2.0.1
 | | | +-- object.omit@2.0.1
 | | | | +-- for-own@0.1.5
 | | | | | `-- for-in@1.0.2
 | | | | `-- is-extendable@0.1.1
 | | | +-- parse-glob@3.0.4
 | | | | +-- glob-base@0.3.0
 | | | | | `-- glob-parent@2.0.0
 | | | | `-- is-dotfile@1.0.2
 | | | `-- regex-cache@0.4.3
 | | | +-- is-equal-shallow@0.1.3
 | | | `-- is-primitive@2.0.0
 | | `-- resolve-dir@0.1.1
 | | `-- global-modules@0.2.3
 | | +-- global-prefix@0.1.5
 | | | +-- homedir-polyfill@1.0.1
 | | | | `-- parse-passwd@1.0.0
 | | | +-- ini@1.3.4
 | | | `-- which@1.2.12
 | | | `-- isexe@1.1.2
 | | `-- is-windows@0.2.0
 | +-- fined@1.0.2
 | | +-- expand-tilde@1.2.2
 | | +-- lodash.assignwith@4.2.0
 | | +-- lodash.isempty@4.4.0
 | | +-- lodash.pick@4.4.0
 | | `-- parse-filepath@1.0.1
 | | +-- is-absolute@0.2.6
 | | | `-- is-relative@0.2.1
 | | | `-- is-unc-path@0.1.2
 | | | `-- unc-path-regex@0.1.2
 | | +-- map-cache@0.2.2
 | | `-- path-root@0.1.1
 | | `-- path-root-regex@0.1.2
 | +-- flagged-respawn@0.3.2
 | +-- lodash.isplainobject@4.0.6
 | +-- lodash.isstring@4.0.1
 | +-- lodash.mapvalues@4.6.0
 | +-- rechoir@0.6.2
 | `-- resolve@1.3.2
 | `-- path-parse@1.0.5
 +-- minimist@1.2.0
 +-- orchestrator@0.3.8
 | +-- end-of-stream@0.1.5
 | | `-- once@1.3.3
 | | `-- wrappy@1.0.2
 | +-- sequencify@0.0.7
 | `-- stream-consume@0.1.0
 +-- pretty-hrtime@1.0.3
 +-- semver@4.3.6
 +-- tildify@1.2.0
 | `-- os-homedir@1.0.2
 +-- v8flags@2.0.11
 | `-- user-home@1.1.1
 `-- vinyl-fs@0.3.14
 +-- defaults@1.0.3
 +-- glob-stream@3.1.18
 | +-- glob@4.5.3
 | | `-- inflight@1.0.6
 | +-- glob2base@0.0.12
 | | `-- find-index@0.1.1
 | +-- minimatch@2.0.10
 | | `-- brace-expansion@1.1.6
 | | +-- balanced-match@0.4.2
 | | `-- concat-map@0.0.1
 | +-- ordered-read-streams@0.1.0
 | +-- through2@0.6.5
 | | `-- readable-stream@1.0.34
 | `-- unique-stream@1.0.0
 +-- glob-watcher@0.0.6
 | `-- gaze@0.5.2
 | `-- globule@0.1.0
 | +-- glob@3.1.21
 | | +-- graceful-fs@1.2.3
 | | `-- inherits@1.0.2
 | +-- lodash@1.0.2
 | `-- minimatch@0.2.14
 | +-- lru-cache@2.7.3
 | `-- sigmund@1.0.1
 +-- graceful-fs@3.0.11
 | `-- natives@1.1.0
 +-- mkdirp@0.5.1
 | `-- minimist@0.0.8
 +-- strip-bom@1.0.0
 | +-- first-chunk-stream@1.0.0
 | `-- is-utf8@0.2.1
 +-- through2@0.6.5
 | `-- readable-stream@1.0.34
 | `-- isarray@0.0.1
 `-- vinyl@0.4.6
 `-- clone@0.2.0

Now let us specify the BrowserStack credentials, which you can find in the “Automate” section of your BrowserStack Account Settings page:

(container)# export BROWSERSTACK_USER=your_browserstack_user_id
(container)# export BROWSERSTACK_KEY=your_browserstack_key
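
A missing or empty credential only shows up later as a connection failure against the BrowserStack hub, so a fail-fast guard before starting the tests can save time. This is a sketch only; `demo_user` and `demo_key` are placeholders, not real credentials:

```shell
# Sketch: abort with a clear message if either credential is unset or empty.
# demo_user / demo_key are placeholder values, not real BrowserStack credentials.
export BROWSERSTACK_USER=demo_user
export BROWSERSTACK_KEY=demo_key
: "${BROWSERSTACK_USER:?export BROWSERSTACK_USER before running the tests}"
: "${BROWSERSTACK_KEY:?export BROWSERSTACK_KEY before running the tests}"
echo "BrowserStack credentials are set for $BROWSERSTACK_USER"
```

The `${VAR:?message}` expansion is standard POSIX shell: it aborts with the given message if the variable is unset or empty, so the guard works in any shell the container provides.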

Finally, we start the test session via Gulp (you can interrupt the test with Ctrl-C at any time):

(container)# gulp test-e2e
[18:33:40] Using gulpfile /app/wt-protractor-boilerplate/gulpfile.js
[18:33:40] Starting 'test-e2e'...
Updating selenium standalone to version 2.52.0
downloading https://selenium-release.storage.googleapis.com/2.52/selenium-server-standalone-2.52.0.jar...
Updating chromedriver to version 2.21
downloading https://chromedriver.storage.googleapis.com/2.21/chromedriver_linux64.zip...
chromedriver_2.21linux64.zip downloaded to /app/wt-protractor-boilerplate/node_modules/protractor/selenium/chromedriver_2.21linux64.zip
selenium-server-standalone-2.52.0.jar downloaded to /app/wt-protractor-boilerplate/node_modules/protractor/selenium/selenium-server-standalone-2.52.0.jar
[18:33:45] I/hosted - Using the selenium server at http://hub.browserstack.com/wd/hub
[18:33:45] I/launcher - Running 1 instances of WebDriver
Started
.


1 spec, 0 failures
Finished in 3.365 seconds
[18:34:12] I/launcher - 0 instance(s) of WebDriver still running
[18:34:12] I/launcher - chrome #01 passed
[18:34:12] I/hosted - Using the selenium server at http://hub.browserstack.com/wd/hub
[18:34:12] I/launcher - Running 1 instances of WebDriver
Started
.


1 spec, 0 failures
Finished in 2.479 seconds
[18:34:39] I/launcher - 0 instance(s) of WebDriver still running
[18:34:39] I/launcher - safari #01 passed
[18:34:39] I/hosted - Using the selenium server at http://hub.browserstack.com/wd/hub
[18:34:39] I/launcher - Running 1 instances of WebDriver
Started
.


1 spec, 0 failures
Finished in 2.76 seconds
^C

With that, we have automatically tested several browsers using BrowserStack.

Excellent!

In the BrowserStack “Automate” dashboard, you can see in detail which steps were taken during the automated test:

[Screenshot: BrowserStack Automate session log in text form]

[Screenshot: BrowserStack Automate session log in visual form]

Appendix B: Protractor BrowserStack Tests on Ubuntu 16.04 without Gulp

I was looking for a simpler Protractor example without Gulp and found BrowserStack’s Protractor example on GitHub. Let us run it in an Ubuntu 16.04 Docker container.

Here, we will do the same as above, but in an Ubuntu 16.04 Docker container instead of a CentOS container.

(dockerhost)$ sudo docker run -it ubuntu:16.04 bash 
(container)# mkdir /app; cd /app

Let us install some software we need:

(container)# apt-get update && apt-get install -y nodejs npm git
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main Sources [1103 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/restricted Sources [5179 B]
Get:6 http://archive.ubuntu.com/ubuntu xenial/universe Sources [9802 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1558 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial/restricted amd64 Packages [14.1 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [9827 kB]
Get:10 http://archive.ubuntu.com/ubuntu xenial-updates/main Sources [296 kB]
Get:11 http://archive.ubuntu.com/ubuntu xenial-updates/restricted Sources [2815 B]
Get:12 http://archive.ubuntu.com/ubuntu xenial-updates/universe Sources [176 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [623 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/restricted amd64 Packages [12.4 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [546 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial-security/main Sources [75.0 kB]
Get:17 http://archive.ubuntu.com/ubuntu xenial-security/restricted Sources [2392 B]
Get:18 http://archive.ubuntu.com/ubuntu xenial-security/universe Sources [27.0 kB]
Get:19 http://archive.ubuntu.com/ubuntu xenial-security/main amd64 Packages [282 kB]
Get:20 http://archive.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [12.0 kB]
Get:21 http://archive.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [113 kB]
Fetched 24.9 MB in 3min 13s (129 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
 binutils build-essential bzip2 ca-certificates cpp cpp-5 dpkg-dev fakeroot file g++ g++-5 gcc gcc-5 git-man gyp
 ifupdown iproute2 isc-dhcp-client isc-dhcp-common javascript-common krb5-locales less libalgorithm-diff-perl
 libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan2 libasn1-8-heimdal libatm1 libatomic1 libbsd0 libc-dev-bin
 libc6-dev libcc1-0 libcilkrts5 libcurl3-gnutls libdns-export162 libdpkg-perl libedit2 liberror-perl libexpat1
 libfakeroot libffi6 libfile-fcntllock-perl libgcc-5-dev libgdbm3 libgmp10 libgnutls30 libgomp1 libgssapi-krb5-2
 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal libhogweed4 libhx509-5-heimdal
 libicu55 libidn11 libisc-export160 libisl15 libitm1 libjs-inherits libjs-jquery libjs-node-uuid libjs-underscore
 libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0 libldap-2.4-2 liblsan0 libmagic1 libmnl0
 libmpc3 libmpfr4 libmpx0 libnettle6 libp11-kit0 libperl5.22 libpopt0 libpython-stdlib libpython2.7-minimal
 libpython2.7-stdlib libquadmath0 libroken18-heimdal librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db
 libsqlite3-0 libssl-dev libssl-doc libssl1.0.0 libstdc++-5-dev libtasn1-6 libtsan0 libubsan0 libuv1 libuv1-dev
 libwind0-heimdal libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxmuu1 libxtables11 linux-libc-dev make
 manpages manpages-dev mime-support netbase node-abbrev node-ansi node-ansi-color-table node-archy node-async
 node-block-stream node-combined-stream node-cookie-jar node-delayed-stream node-forever-agent node-form-data
 node-fstream node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-inherits node-ini
 node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream
 node-node-uuid node-nopt node-normalize-package-data node-npmlog node-once node-osenv node-qs node-read
 node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-sigmund node-slide node-tar
 node-tunnel-agent node-underscore node-which nodejs-dev openssh-client openssl patch perl perl-modules-5.22 python
 python-minimal python-pkg-resources python2.7 python2.7-minimal rename rsync xauth xz-utils zlib1g-dev
Suggested packages:
 binutils-doc bzip2-doc cpp-doc gcc-5-locales debian-keyring g++-multilib g++-5-multilib gcc-5-doc libstdc++6-5-dbg
 gcc-multilib autoconf automake libtool flex bison gdb gcc-doc gcc-5-multilib libgcc1-dbg libgomp1-dbg libitm1-dbg
 libatomic1-dbg libasan2-dbg liblsan0-dbg libtsan0-dbg libubsan0-dbg libcilkrts5-dbg libmpx0-dbg libquadmath0-dbg
 gettext-base git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-arch git-cvs
 git-mediawiki git-svn ppp rdnssd iproute2-doc resolvconf avahi-autoipd isc-dhcp-client-ddns apparmor apache2
 | lighttpd | httpd glibc-doc gnutls-bin krb5-doc krb5-user libsasl2-modules-otp libsasl2-modules-ldap
 libsasl2-modules-sql libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal libstdc++-5-doc make-doc
 man-browser node-hawk node-aws-sign node-oauth-sign node-http-signature debhelper ssh-askpass libpam-ssh keychain
 monkeysphere ed diffutils-doc perl-doc libterm-readline-gnu-perl | libterm-readline-perl-perl python-doc python-tk
 python-setuptools python2.7-doc binfmt-support openssh-server
The following NEW packages will be installed:
 binutils build-essential bzip2 ca-certificates cpp cpp-5 dpkg-dev fakeroot file g++ g++-5 gcc gcc-5 git git-man gyp
 ifupdown iproute2 isc-dhcp-client isc-dhcp-common javascript-common krb5-locales less libalgorithm-diff-perl
 libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan2 libasn1-8-heimdal libatm1 libatomic1 libbsd0 libc-dev-bin
 libc6-dev libcc1-0 libcilkrts5 libcurl3-gnutls libdns-export162 libdpkg-perl libedit2 liberror-perl libexpat1
 libfakeroot libffi6 libfile-fcntllock-perl libgcc-5-dev libgdbm3 libgmp10 libgnutls30 libgomp1 libgssapi-krb5-2
 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal libhogweed4 libhx509-5-heimdal
 libicu55 libidn11 libisc-export160 libisl15 libitm1 libjs-inherits libjs-jquery libjs-node-uuid libjs-underscore
 libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0 libldap-2.4-2 liblsan0 libmagic1 libmnl0
 libmpc3 libmpfr4 libmpx0 libnettle6 libp11-kit0 libperl5.22 libpopt0 libpython-stdlib libpython2.7-minimal
 libpython2.7-stdlib libquadmath0 libroken18-heimdal librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db
 libsqlite3-0 libssl-dev libssl-doc libssl1.0.0 libstdc++-5-dev libtasn1-6 libtsan0 libubsan0 libuv1 libuv1-dev
 libwind0-heimdal libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxmuu1 libxtables11 linux-libc-dev make
 manpages manpages-dev mime-support netbase node-abbrev node-ansi node-ansi-color-table node-archy node-async
 node-block-stream node-combined-stream node-cookie-jar node-delayed-stream node-forever-agent node-form-data
 node-fstream node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-inherits node-ini
 node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream
 node-node-uuid node-nopt node-normalize-package-data node-npmlog node-once node-osenv node-qs node-read
 node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-sigmund node-slide node-tar
 node-tunnel-agent node-underscore node-which nodejs nodejs-dev npm openssh-client openssl patch perl
 perl-modules-5.22 python python-minimal python-pkg-resources python2.7 python2.7-minimal rename rsync xauth xz-utils
 zlib1g-dev
0 upgraded, 179 newly installed, 0 to remove and 2 not upgraded.
Need to get 79.4 MB of archives.
After this operation, 337 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libatm1 amd64 1:2.5.1-1.5 [24.2 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmnl0 amd64 1.0.3-5 [12.0 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 libpopt0 amd64 1.16-10 [26.0 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgdbm3 amd64 1.8.3-13.1 [16.9 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxau6 amd64 1:1.0.8-1 [8376 B]
Get:6 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxdmcp6 amd64 1:1.1.2-1.1 [11.0 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxcb1 amd64 1.11.1-1ubuntu1 [40.0 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial/main amd64 libx11-data all 2:1.6.3-1ubuntu2 [113 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial/main amd64 libx11-6 amd64 2:1.6.3-1ubuntu2 [571 kB]
Get:10 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxext6 amd64 2:1.3.3-1 [29.4 kB]
Get:11 http://archive.ubuntu.com/ubuntu xenial/main amd64 perl-modules-5.22 all 5.22.1-9 [2641 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial/main amd64 libperl5.22 amd64 5.22.1-9 [3371 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial/main amd64 perl amd64 5.22.1-9 [237 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7-minimal amd64 2.7.12-1ubuntu0~16.04.1 [339 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python2.7-minimal amd64 2.7.12-1ubuntu0~16.04.1 [1295 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-minimal amd64 2.7.11-1 [28.2 kB]
Get:17 http://archive.ubuntu.com/ubuntu xenial/main amd64 mime-support all 3.59ubuntu1 [31.0 kB]
Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libexpat1 amd64 2.1.0-7ubuntu0.16.04.2 [71.3 kB]
Get:19 http://archive.ubuntu.com/ubuntu xenial/main amd64 libffi6 amd64 3.2.1-4 [17.8 kB]
Get:20 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsqlite3-0 amd64 3.11.0-1ubuntu1 [396 kB]
Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl1.0.0 amd64 1.0.2g-1ubuntu4.6 [1082 kB]
Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7-stdlib amd64 2.7.12-1ubuntu0~16.04.1 [1884 kB]
Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python2.7 amd64 2.7.12-1ubuntu0~16.04.1 [224 kB]
Get:24 http://archive.ubuntu.com/ubuntu xenial/main amd64 libpython-stdlib amd64 2.7.11-1 [7656 B]
Get:25 http://archive.ubuntu.com/ubuntu xenial/main amd64 python amd64 2.7.11-1 [137 kB]
Get:26 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgmp10 amd64 2:6.1.0+dfsg-2 [240 kB]
Get:27 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmpfr4 amd64 3.1.4-1 [191 kB]
Get:28 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmpc3 amd64 1.0.3-1 [39.7 kB]
Get:29 http://archive.ubuntu.com/ubuntu xenial/main amd64 bzip2 amd64 1.0.6-8 [32.7 kB]
Get:30 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmagic1 amd64 1:5.25-2ubuntu1 [216 kB]
Get:31 http://archive.ubuntu.com/ubuntu xenial/main amd64 file amd64 1:5.25-2ubuntu1 [21.2 kB]
Get:32 http://archive.ubuntu.com/ubuntu xenial/main amd64 iproute2 amd64 4.3.0-1ubuntu3 [522 kB]
Get:33 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ifupdown amd64 0.8.10ubuntu1.2 [54.9 kB]
Get:34 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libisc-export160 amd64 1:9.10.3.dfsg.P4-8ubuntu1.5 [153 kB]
Get:35 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdns-export162 amd64 1:9.10.3.dfsg.P4-8ubuntu1.5 [665 kB]
Get:36 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 isc-dhcp-client amd64 4.3.3-5ubuntu12.6 [223 kB]
Get:37 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 isc-dhcp-common amd64 4.3.3-5ubuntu12.6 [105 kB]
Get:38 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 less amd64 481-2.1ubuntu0.1 [110 kB]
Get:39 http://archive.ubuntu.com/ubuntu xenial/main amd64 libbsd0 amd64 0.8.2-1 [41.7 kB]
Get:40 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libnettle6 amd64 3.2-1ubuntu0.16.04.1 [93.5 kB]
Get:41 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libhogweed4 amd64 3.2-1ubuntu0.16.04.1 [136 kB]
Get:42 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libidn11 amd64 1.32-3ubuntu1.1 [45.6 kB]
Get:43 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libp11-kit0 amd64 0.23.2-5~ubuntu16.04.1 [105 kB]
Get:44 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libtasn1-6 amd64 4.7-3ubuntu0.16.04.1 [43.2 kB]
Get:45 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgnutls30 amd64 3.4.10-4ubuntu1.2 [547 kB]
Get:46 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxtables11 amd64 1.6.0-2ubuntu3 [27.2 kB]
Get:47 http://archive.ubuntu.com/ubuntu xenial/main amd64 netbase all 5.3 [12.9 kB]
Get:48 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssl amd64 1.0.2g-1ubuntu4.6 [492 kB]
Get:49 http://archive.ubuntu.com/ubuntu xenial/main amd64 ca-certificates all 20160104ubuntu1 [191 kB]
Get:50 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 krb5-locales all 1.13.2+dfsg-5ubuntu2 [13.2 kB]
Get:51 http://archive.ubuntu.com/ubuntu xenial/main amd64 libroken18-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [41.2 kB]
Get:52 http://archive.ubuntu.com/ubuntu xenial/main amd64 libasn1-8-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [174 kB]
Get:53 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkrb5support0 amd64 1.13.2+dfsg-5ubuntu2 [30.8 kB]
Get:54 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libk5crypto3 amd64 1.13.2+dfsg-5ubuntu2 [81.2 kB]
Get:55 http://archive.ubuntu.com/ubuntu xenial/main amd64 libkeyutils1 amd64 1.5.9-8ubuntu1 [9904 B]
Get:56 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkrb5-3 amd64 1.13.2+dfsg-5ubuntu2 [273 kB]
Get:57 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgssapi-krb5-2 amd64 1.13.2+dfsg-5ubuntu2 [120 kB]
Get:58 http://archive.ubuntu.com/ubuntu xenial/main amd64 libhcrypto4-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [84.9 kB]
Get:59 http://archive.ubuntu.com/ubuntu xenial/main amd64 libheimbase1-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [29.2 kB]
Get:60 http://archive.ubuntu.com/ubuntu xenial/main amd64 libwind0-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [48.2 kB]
Get:61 http://archive.ubuntu.com/ubuntu xenial/main amd64 libhx509-5-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [107 kB]
Get:62 http://archive.ubuntu.com/ubuntu xenial/main amd64 libkrb5-26-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [202 kB]
Get:63 http://archive.ubuntu.com/ubuntu xenial/main amd64 libheimntlm0-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [15.1 kB]
Get:64 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgssapi3-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [96.1 kB]
Get:65 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsasl2-modules-db amd64 2.1.26.dfsg1-14build1 [14.5 kB]
Get:66 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsasl2-2 amd64 2.1.26.dfsg1-14build1 [48.7 kB]
Get:67 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libldap-2.4-2 amd64 2.4.42+dfsg-2ubuntu3.1 [161 kB]
Get:68 http://archive.ubuntu.com/ubuntu xenial/main amd64 librtmp1 amd64 2.4+20151223.gitfa8646d-1build1 [53.9 kB]
Get:69 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcurl3-gnutls amd64 7.47.0-1ubuntu2.2 [184 kB]
Get:70 http://archive.ubuntu.com/ubuntu xenial/main amd64 libedit2 amd64 3.1-20150325-1ubuntu2 [76.5 kB]
Get:71 http://archive.ubuntu.com/ubuntu xenial/main amd64 libicu55 amd64 55.1-7 [7643 kB]
Get:72 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsasl2-modules amd64 2.1.26.dfsg1-14build1 [47.5 kB]
Get:73 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxmuu1 amd64 2:1.1.2-2 [9674 B]
Get:74 http://archive.ubuntu.com/ubuntu xenial/main amd64 manpages all 4.04-2 [1087 kB]
Get:75 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssh-client amd64 1:7.2p2-4ubuntu2.1 [587 kB]
Get:76 http://archive.ubuntu.com/ubuntu xenial/main amd64 rsync amd64 3.1.1-3ubuntu1 [325 kB]
Get:77 http://archive.ubuntu.com/ubuntu xenial/main amd64 xauth amd64 1:1.0.9-1ubuntu2 [22.7 kB]
Get:78 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 binutils amd64 2.26.1-1ubuntu1~16.04.3 [2310 kB]
Get:79 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libc-dev-bin amd64 2.23-0ubuntu5 [68.7 kB]
Get:80 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-libc-dev amd64 4.4.0-66.87 [833 kB]
Get:81 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libc6-dev amd64 2.23-0ubuntu5 [2078 kB]
Get:82 http://archive.ubuntu.com/ubuntu xenial/main amd64 libisl15 amd64 0.16.1-1 [524 kB]
Get:83 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cpp-5 amd64 5.4.0-6ubuntu1~16.04.4 [7653 kB]
Get:84 http://archive.ubuntu.com/ubuntu xenial/main amd64 cpp amd64 4:5.3.1-1ubuntu1 [27.7 kB]
Get:85 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcc1-0 amd64 5.4.0-6ubuntu1~16.04.4 [38.8 kB]
Get:86 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgomp1 amd64 5.4.0-6ubuntu1~16.04.4 [55.0 kB]
Get:87 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libitm1 amd64 5.4.0-6ubuntu1~16.04.4 [27.4 kB]
Get:88 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libatomic1 amd64 5.4.0-6ubuntu1~16.04.4 [8912 B]
Get:89 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libasan2 amd64 5.4.0-6ubuntu1~16.04.4 [264 kB]
Get:90 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 liblsan0 amd64 5.4.0-6ubuntu1~16.04.4 [105 kB]
Get:91 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libtsan0 amd64 5.4.0-6ubuntu1~16.04.4 [244 kB]
Get:92 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libubsan0 amd64 5.4.0-6ubuntu1~16.04.4 [95.3 kB]
Get:93 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcilkrts5 amd64 5.4.0-6ubuntu1~16.04.4 [40.1 kB]
Get:94 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libmpx0 amd64 5.4.0-6ubuntu1~16.04.4 [9766 B]
Get:95 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libquadmath0 amd64 5.4.0-6ubuntu1~16.04.4 [131 kB]
Get:96 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgcc-5-dev amd64 5.4.0-6ubuntu1~16.04.4 [2237 kB]
Get:97 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gcc-5 amd64 5.4.0-6ubuntu1~16.04.4 [8577 kB]
Get:98 http://archive.ubuntu.com/ubuntu xenial/main amd64 gcc amd64 4:5.3.1-1ubuntu1 [5244 B]
Get:99 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libstdc++-5-dev amd64 5.4.0-6ubuntu1~16.04.4 [1426 kB]
Get:100 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 g++-5 amd64 5.4.0-6ubuntu1~16.04.4 [8300 kB]
Get:101 http://archive.ubuntu.com/ubuntu xenial/main amd64 g++ amd64 4:5.3.1-1ubuntu1 [1504 B]
Get:102 http://archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB]
Get:103 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdpkg-perl all 1.18.4ubuntu1.1 [195 kB]
Get:104 http://archive.ubuntu.com/ubuntu xenial/main amd64 xz-utils amd64 5.1.1alpha+20120614-2ubuntu2 [78.8 kB]
Get:105 http://archive.ubuntu.com/ubuntu xenial/main amd64 patch amd64 2.7.5-1 [90.4 kB]
Get:106 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dpkg-dev all 1.18.4ubuntu1.1 [584 kB]
Get:107 http://archive.ubuntu.com/ubuntu xenial/main amd64 build-essential amd64 12.1ubuntu2 [4758 B]
Get:108 http://archive.ubuntu.com/ubuntu xenial/main amd64 libfakeroot amd64 1.20.2-1ubuntu1 [25.5 kB]
Get:109 http://archive.ubuntu.com/ubuntu xenial/main amd64 fakeroot amd64 1.20.2-1ubuntu1 [61.8 kB]
Get:110 http://archive.ubuntu.com/ubuntu xenial/main amd64 liberror-perl all 0.17-1.2 [19.6 kB]
Get:111 http://archive.ubuntu.com/ubuntu xenial/main amd64 git-man all 1:2.7.4-0ubuntu1 [735 kB]
Get:112 http://archive.ubuntu.com/ubuntu xenial/main amd64 git amd64 1:2.7.4-0ubuntu1 [3006 kB]
Get:113 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-pkg-resources all 20.7.0-1 [108 kB]
Get:114 http://archive.ubuntu.com/ubuntu xenial/universe amd64 gyp all 0.1+20150913git1f374df9-1ubuntu1 [265 kB]
Get:115 http://archive.ubuntu.com/ubuntu xenial/main amd64 javascript-common all 11 [6066 B]
Get:116 http://archive.ubuntu.com/ubuntu xenial/main amd64 libalgorithm-diff-perl all 1.19.03-1 [47.6 kB]
Get:117 http://archive.ubuntu.com/ubuntu xenial/main amd64 libalgorithm-diff-xs-perl amd64 0.04-4build1 [11.0 kB]
Get:118 http://archive.ubuntu.com/ubuntu xenial/main amd64 libalgorithm-merge-perl all 0.08-3 [12.0 kB]
Get:119 http://archive.ubuntu.com/ubuntu xenial/main amd64 libfile-fcntllock-perl amd64 0.22-3 [32.0 kB]
Get:120 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjs-jquery all 1.11.3+dfsg-4 [161 kB]
Get:121 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libjs-node-uuid all 1.4.0-1 [11.1 kB]
Get:122 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjs-underscore all 1.7.0~dfsg-1ubuntu1 [46.7 kB]
Get:123 http://archive.ubuntu.com/ubuntu xenial/main amd64 zlib1g-dev amd64 1:1.2.8.dfsg-2ubuntu4 [168 kB]
Get:124 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl-dev amd64 1.0.2g-1ubuntu4.6 [1344 kB]
Get:125 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl-doc all 1.0.2g-1ubuntu4.6 [1079 kB]
Get:126 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libuv1 amd64 1.8.0-1 [57.4 kB]
Get:127 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libuv1-dev amd64 1.8.0-1 [74.7 kB]
Get:128 http://archive.ubuntu.com/ubuntu xenial/main amd64 manpages-dev all 4.04-2 [2048 kB]
Get:129 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 nodejs amd64 4.2.6~dfsg-1ubuntu4.1 [3161 kB]
Get:130 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-async all 0.8.0-1 [22.2 kB]
Get:131 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-node-uuid all 1.4.0-1 [2530 B]
Get:132 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-underscore all 1.7.0~dfsg-1ubuntu1 [3780 B]
Get:133 http://archive.ubuntu.com/ubuntu xenial/main amd64 rename all 0.20-4 [12.0 kB]
Get:134 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libjs-inherits all 2.0.1-3 [2794 B]
Get:135 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-abbrev all 1.0.5-2 [3592 B]
Get:136 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-ansi all 0.3.0-2 [8590 B]
Get:137 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-ansi-color-table all 1.0.0-1 [4478 B]
Get:138 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-archy all 0.0.2-1 [3660 B]
Get:139 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-inherits all 2.0.1-3 [3060 B]
Get:140 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-block-stream all 0.0.7-1 [4832 B]
Get:141 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-delayed-stream all 0.0.5-1 [4750 B]
Get:142 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-combined-stream all 0.0.5-1 [4958 B]
Get:143 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-cookie-jar all 0.3.1-1 [3746 B]
Get:144 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-forever-agent all 0.5.1-1 [3194 B]
Get:145 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-mime all 1.3.4-1 [11.9 kB]
Get:146 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-form-data all 0.1.0-1 [6412 B]
Get:147 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-rimraf all 2.2.8-1 [5702 B]
Get:148 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-mkdirp all 0.5.0-1 [4690 B]
Get:149 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-graceful-fs all 3.0.2-1 [7102 B]
Get:150 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-fstream all 0.1.24-1 [19.5 kB]
Get:151 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-lru-cache all 2.3.1-1 [5674 B]
Get:152 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-sigmund all 1.0.0-1 [3818 B]
Get:153 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-minimatch all 1.0.0-1 [14.0 kB]
Get:154 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-fstream-ignore all 0.0.6-2 [5586 B]
Get:155 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-github-url-from-git all 1.1.1-1 [3138 B]
Get:156 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-once all 1.1.1-1 [2608 B]
Get:157 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-glob all 4.0.5-1 [13.2 kB]
Get:158 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 nodejs-dev amd64 4.2.6~dfsg-1ubuntu4.1 [265 kB]
Get:159 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-nopt all 3.0.1-1 [9544 B]
Get:160 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-npmlog all 0.0.4-1 [5844 B]
Get:161 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-osenv all 0.1.0-1 [3772 B]
Get:162 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-tunnel-agent all 0.3.1-1 [4018 B]
Get:163 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-json-stringify-safe all 5.0.0-1 [3544 B]
Get:164 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-qs all 2.2.4-1 [7574 B]
Get:165 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-request all 2.26.1-1 [14.5 kB]
Get:166 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-semver all 2.1.0-2 [16.2 kB]
Get:167 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-tar all 1.0.3-2 [17.5 kB]
Get:168 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-which all 1.0.5-2 [3678 B]
Get:169 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-gyp all 3.0.3-2ubuntu1 [23.2 kB]
Get:170 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-ini all 1.1.0-1 [4770 B]
Get:171 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-lockfile all 0.4.1-1 [5450 B]
Get:172 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-mute-stream all 0.0.4-1 [4096 B]
Get:173 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-normalize-package-data all 0.2.2-1 [9286 B]
Get:174 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-read all 1.0.5-1 [4314 B]
Get:175 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-read-package-json all 1.2.4-1 [7780 B]
Get:176 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-retry all 0.6.0-1 [6172 B]
Get:177 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-sha all 1.2.3-1 [4272 B]
Get:178 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-slide all 1.1.4-1 [6118 B]
Get:179 http://archive.ubuntu.com/ubuntu xenial/universe amd64 npm all 3.5.2-0ubuntu4 [1586 kB]
Fetched 79.4 MB in 40s (1962 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libatm1:amd64.
(Reading database ... 7256 files and directories currently installed.)
Preparing to unpack .../libatm1_1%3a2.5.1-1.5_amd64.deb ...
Unpacking libatm1:amd64 (1:2.5.1-1.5) ...
Selecting previously unselected package libmnl0:amd64.
Preparing to unpack .../libmnl0_1.0.3-5_amd64.deb ...
Unpacking libmnl0:amd64 (1.0.3-5) ...
Selecting previously unselected package libpopt0:amd64.
Preparing to unpack .../libpopt0_1.16-10_amd64.deb ...
Unpacking libpopt0:amd64 (1.16-10) ...
Selecting previously unselected package libgdbm3:amd64.
Preparing to unpack .../libgdbm3_1.8.3-13.1_amd64.deb ...
Unpacking libgdbm3:amd64 (1.8.3-13.1) ...
Selecting previously unselected package libxau6:amd64.
Preparing to unpack .../libxau6_1%3a1.0.8-1_amd64.deb ...
Unpacking libxau6:amd64 (1:1.0.8-1) ...
Selecting previously unselected package libxdmcp6:amd64.
Preparing to unpack .../libxdmcp6_1%3a1.1.2-1.1_amd64.deb ...
Unpacking libxdmcp6:amd64 (1:1.1.2-1.1) ...
Selecting previously unselected package libxcb1:amd64.
Preparing to unpack .../libxcb1_1.11.1-1ubuntu1_amd64.deb ...
Unpacking libxcb1:amd64 (1.11.1-1ubuntu1) ...
Selecting previously unselected package libx11-data.
Preparing to unpack .../libx11-data_2%3a1.6.3-1ubuntu2_all.deb ...
Unpacking libx11-data (2:1.6.3-1ubuntu2) ...
Selecting previously unselected package libx11-6:amd64.
Preparing to unpack .../libx11-6_2%3a1.6.3-1ubuntu2_amd64.deb ...
Unpacking libx11-6:amd64 (2:1.6.3-1ubuntu2) ...
Selecting previously unselected package libxext6:amd64.
Preparing to unpack .../libxext6_2%3a1.3.3-1_amd64.deb ...
Unpacking libxext6:amd64 (2:1.3.3-1) ...
Selecting previously unselected package perl-modules-5.22.
Preparing to unpack .../perl-modules-5.22_5.22.1-9_all.deb ...
Unpacking perl-modules-5.22 (5.22.1-9) ...
Selecting previously unselected package libperl5.22:amd64.
Preparing to unpack .../libperl5.22_5.22.1-9_amd64.deb ...
Unpacking libperl5.22:amd64 (5.22.1-9) ...
Selecting previously unselected package perl.
Preparing to unpack .../perl_5.22.1-9_amd64.deb ...
Unpacking perl (5.22.1-9) ...
Selecting previously unselected package libpython2.7-minimal:amd64.
Preparing to unpack .../libpython2.7-minimal_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking libpython2.7-minimal:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package python2.7-minimal.
Preparing to unpack .../python2.7-minimal_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking python2.7-minimal (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package python-minimal.
Preparing to unpack .../python-minimal_2.7.11-1_amd64.deb ...
Unpacking python-minimal (2.7.11-1) ...
Selecting previously unselected package mime-support.
Preparing to unpack .../mime-support_3.59ubuntu1_all.deb ...
Unpacking mime-support (3.59ubuntu1) ...
Selecting previously unselected package libexpat1:amd64.
Preparing to unpack .../libexpat1_2.1.0-7ubuntu0.16.04.2_amd64.deb ...
Unpacking libexpat1:amd64 (2.1.0-7ubuntu0.16.04.2) ...
Selecting previously unselected package libffi6:amd64.
Preparing to unpack .../libffi6_3.2.1-4_amd64.deb ...
Unpacking libffi6:amd64 (3.2.1-4) ...
Selecting previously unselected package libsqlite3-0:amd64.
Preparing to unpack .../libsqlite3-0_3.11.0-1ubuntu1_amd64.deb ...
Unpacking libsqlite3-0:amd64 (3.11.0-1ubuntu1) ...
Selecting previously unselected package libssl1.0.0:amd64.
Preparing to unpack .../libssl1.0.0_1.0.2g-1ubuntu4.6_amd64.deb ...
Unpacking libssl1.0.0:amd64 (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package libpython2.7-stdlib:amd64.
Preparing to unpack .../libpython2.7-stdlib_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking libpython2.7-stdlib:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package python2.7.
Preparing to unpack .../python2.7_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking python2.7 (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package libpython-stdlib:amd64.
Preparing to unpack .../libpython-stdlib_2.7.11-1_amd64.deb ...
Unpacking libpython-stdlib:amd64 (2.7.11-1) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Setting up libpython2.7-minimal:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Setting up python2.7-minimal (2.7.12-1ubuntu0~16.04.1) ...
Linking and byte-compiling packages for runtime python2.7...
Setting up python-minimal (2.7.11-1) ...
Selecting previously unselected package python.
(Reading database ... 10145 files and directories currently installed.)
Preparing to unpack .../python_2.7.11-1_amd64.deb ...
Unpacking python (2.7.11-1) ...
Selecting previously unselected package libgmp10:amd64.
Preparing to unpack .../libgmp10_2%3a6.1.0+dfsg-2_amd64.deb ...
Unpacking libgmp10:amd64 (2:6.1.0+dfsg-2) ...
Selecting previously unselected package libmpfr4:amd64.
Preparing to unpack .../libmpfr4_3.1.4-1_amd64.deb ...
Unpacking libmpfr4:amd64 (3.1.4-1) ...
Selecting previously unselected package libmpc3:amd64.
Preparing to unpack .../libmpc3_1.0.3-1_amd64.deb ...
Unpacking libmpc3:amd64 (1.0.3-1) ...
Selecting previously unselected package bzip2.
Preparing to unpack .../bzip2_1.0.6-8_amd64.deb ...
Unpacking bzip2 (1.0.6-8) ...
Selecting previously unselected package libmagic1:amd64.
Preparing to unpack .../libmagic1_1%3a5.25-2ubuntu1_amd64.deb ...
Unpacking libmagic1:amd64 (1:5.25-2ubuntu1) ...
Selecting previously unselected package file.
Preparing to unpack .../file_1%3a5.25-2ubuntu1_amd64.deb ...
Unpacking file (1:5.25-2ubuntu1) ...
Selecting previously unselected package iproute2.
Preparing to unpack .../iproute2_4.3.0-1ubuntu3_amd64.deb ...
Unpacking iproute2 (4.3.0-1ubuntu3) ...
Selecting previously unselected package ifupdown.
Preparing to unpack .../ifupdown_0.8.10ubuntu1.2_amd64.deb ...
Unpacking ifupdown (0.8.10ubuntu1.2) ...
Selecting previously unselected package libisc-export160.
Preparing to unpack .../libisc-export160_1%3a9.10.3.dfsg.P4-8ubuntu1.5_amd64.deb ...
Unpacking libisc-export160 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Selecting previously unselected package libdns-export162.
Preparing to unpack .../libdns-export162_1%3a9.10.3.dfsg.P4-8ubuntu1.5_amd64.deb ...
Unpacking libdns-export162 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Selecting previously unselected package isc-dhcp-client.
Preparing to unpack .../isc-dhcp-client_4.3.3-5ubuntu12.6_amd64.deb ...
Unpacking isc-dhcp-client (4.3.3-5ubuntu12.6) ...
Selecting previously unselected package isc-dhcp-common.
Preparing to unpack .../isc-dhcp-common_4.3.3-5ubuntu12.6_amd64.deb ...
Unpacking isc-dhcp-common (4.3.3-5ubuntu12.6) ...
Selecting previously unselected package less.
Preparing to unpack .../less_481-2.1ubuntu0.1_amd64.deb ...
Unpacking less (481-2.1ubuntu0.1) ...
Selecting previously unselected package libbsd0:amd64.
Preparing to unpack .../libbsd0_0.8.2-1_amd64.deb ...
Unpacking libbsd0:amd64 (0.8.2-1) ...
Selecting previously unselected package libnettle6:amd64.
Preparing to unpack .../libnettle6_3.2-1ubuntu0.16.04.1_amd64.deb ...
Unpacking libnettle6:amd64 (3.2-1ubuntu0.16.04.1) ...
Selecting previously unselected package libhogweed4:amd64.
Preparing to unpack .../libhogweed4_3.2-1ubuntu0.16.04.1_amd64.deb ...
Unpacking libhogweed4:amd64 (3.2-1ubuntu0.16.04.1) ...
Selecting previously unselected package libidn11:amd64.
Preparing to unpack .../libidn11_1.32-3ubuntu1.1_amd64.deb ...
Unpacking libidn11:amd64 (1.32-3ubuntu1.1) ...
Selecting previously unselected package libp11-kit0:amd64.
Preparing to unpack .../libp11-kit0_0.23.2-5~ubuntu16.04.1_amd64.deb ...
Unpacking libp11-kit0:amd64 (0.23.2-5~ubuntu16.04.1) ...
Selecting previously unselected package libtasn1-6:amd64.
Preparing to unpack .../libtasn1-6_4.7-3ubuntu0.16.04.1_amd64.deb ...
Unpacking libtasn1-6:amd64 (4.7-3ubuntu0.16.04.1) ...
Selecting previously unselected package libgnutls30:amd64.
Preparing to unpack .../libgnutls30_3.4.10-4ubuntu1.2_amd64.deb ...
Unpacking libgnutls30:amd64 (3.4.10-4ubuntu1.2) ...
Selecting previously unselected package libxtables11:amd64.
Preparing to unpack .../libxtables11_1.6.0-2ubuntu3_amd64.deb ...
Unpacking libxtables11:amd64 (1.6.0-2ubuntu3) ...
Selecting previously unselected package netbase.
Preparing to unpack .../archives/netbase_5.3_all.deb ...
Unpacking netbase (5.3) ...
Selecting previously unselected package openssl.
Preparing to unpack .../openssl_1.0.2g-1ubuntu4.6_amd64.deb ...
Unpacking openssl (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20160104ubuntu1_all.deb ...
Unpacking ca-certificates (20160104ubuntu1) ...
Selecting previously unselected package krb5-locales.
Preparing to unpack .../krb5-locales_1.13.2+dfsg-5ubuntu2_all.deb ...
Unpacking krb5-locales (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libroken18-heimdal:amd64.
Preparing to unpack .../libroken18-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libroken18-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libasn1-8-heimdal:amd64.
Preparing to unpack .../libasn1-8-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libasn1-8-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libkrb5support0:amd64.
Preparing to unpack .../libkrb5support0_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libkrb5support0:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libk5crypto3:amd64.
Preparing to unpack .../libk5crypto3_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libk5crypto3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libkeyutils1:amd64.
Preparing to unpack .../libkeyutils1_1.5.9-8ubuntu1_amd64.deb ...
Unpacking libkeyutils1:amd64 (1.5.9-8ubuntu1) ...
Selecting previously unselected package libkrb5-3:amd64.
Preparing to unpack .../libkrb5-3_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libkrb5-3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libgssapi-krb5-2:amd64.
Preparing to unpack .../libgssapi-krb5-2_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libgssapi-krb5-2:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libhcrypto4-heimdal:amd64.
Preparing to unpack .../libhcrypto4-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libhcrypto4-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libheimbase1-heimdal:amd64.
Preparing to unpack .../libheimbase1-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libheimbase1-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libwind0-heimdal:amd64.
Preparing to unpack .../libwind0-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libwind0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libhx509-5-heimdal:amd64.
Preparing to unpack .../libhx509-5-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libhx509-5-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libkrb5-26-heimdal:amd64.
Preparing to unpack .../libkrb5-26-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libkrb5-26-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libheimntlm0-heimdal:amd64.
Preparing to unpack .../libheimntlm0-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libheimntlm0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libgssapi3-heimdal:amd64.
Preparing to unpack .../libgssapi3-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libgssapi3-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libsasl2-modules-db:amd64.
Preparing to unpack .../libsasl2-modules-db_2.1.26.dfsg1-14build1_amd64.deb ...
Unpacking libsasl2-modules-db:amd64 (2.1.26.dfsg1-14build1) ...
Selecting previously unselected package libsasl2-2:amd64.
Preparing to unpack .../libsasl2-2_2.1.26.dfsg1-14build1_amd64.deb ...
Unpacking libsasl2-2:amd64 (2.1.26.dfsg1-14build1) ...
Selecting previously unselected package libldap-2.4-2:amd64.
Preparing to unpack .../libldap-2.4-2_2.4.42+dfsg-2ubuntu3.1_amd64.deb ...
Unpacking libldap-2.4-2:amd64 (2.4.42+dfsg-2ubuntu3.1) ...
Selecting previously unselected package librtmp1:amd64.
Preparing to unpack .../librtmp1_2.4+20151223.gitfa8646d-1build1_amd64.deb ...
Unpacking librtmp1:amd64 (2.4+20151223.gitfa8646d-1build1) ...
Selecting previously unselected package libcurl3-gnutls:amd64.
Preparing to unpack .../libcurl3-gnutls_7.47.0-1ubuntu2.2_amd64.deb ...
Unpacking libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.2) ...
Selecting previously unselected package libedit2:amd64.
Preparing to unpack .../libedit2_3.1-20150325-1ubuntu2_amd64.deb ...
Unpacking libedit2:amd64 (3.1-20150325-1ubuntu2) ...
Selecting previously unselected package libicu55:amd64.
Preparing to unpack .../libicu55_55.1-7_amd64.deb ...
Unpacking libicu55:amd64 (55.1-7) ...
Selecting previously unselected package libsasl2-modules:amd64.
Preparing to unpack .../libsasl2-modules_2.1.26.dfsg1-14build1_amd64.deb ...
Unpacking libsasl2-modules:amd64 (2.1.26.dfsg1-14build1) ...
Selecting previously unselected package libxmuu1:amd64.
Preparing to unpack .../libxmuu1_2%3a1.1.2-2_amd64.deb ...
Unpacking libxmuu1:amd64 (2:1.1.2-2) ...
Selecting previously unselected package manpages.
Preparing to unpack .../manpages_4.04-2_all.deb ...
Unpacking manpages (4.04-2) ...
Selecting previously unselected package openssh-client.
Preparing to unpack .../openssh-client_1%3a7.2p2-4ubuntu2.1_amd64.deb ...
Unpacking openssh-client (1:7.2p2-4ubuntu2.1) ...
Selecting previously unselected package rsync.
Preparing to unpack .../rsync_3.1.1-3ubuntu1_amd64.deb ...
Unpacking rsync (3.1.1-3ubuntu1) ...
Selecting previously unselected package xauth.
Preparing to unpack .../xauth_1%3a1.0.9-1ubuntu2_amd64.deb ...
Unpacking xauth (1:1.0.9-1ubuntu2) ...
Selecting previously unselected package binutils.
Preparing to unpack .../binutils_2.26.1-1ubuntu1~16.04.3_amd64.deb ...
Unpacking binutils (2.26.1-1ubuntu1~16.04.3) ...
Selecting previously unselected package libc-dev-bin.
Preparing to unpack .../libc-dev-bin_2.23-0ubuntu5_amd64.deb ...
Unpacking libc-dev-bin (2.23-0ubuntu5) ...
Selecting previously unselected package linux-libc-dev:amd64.
Preparing to unpack .../linux-libc-dev_4.4.0-66.87_amd64.deb ...
Unpacking linux-libc-dev:amd64 (4.4.0-66.87) ...
Selecting previously unselected package libc6-dev:amd64.
Preparing to unpack .../libc6-dev_2.23-0ubuntu5_amd64.deb ...
Unpacking libc6-dev:amd64 (2.23-0ubuntu5) ...
Selecting previously unselected package libisl15:amd64.
Preparing to unpack .../libisl15_0.16.1-1_amd64.deb ...
Unpacking libisl15:amd64 (0.16.1-1) ...
Selecting previously unselected package cpp-5.
Preparing to unpack .../cpp-5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking cpp-5 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package cpp.
Preparing to unpack .../cpp_4%3a5.3.1-1ubuntu1_amd64.deb ...
Unpacking cpp (4:5.3.1-1ubuntu1) ...
Selecting previously unselected package libcc1-0:amd64.
Preparing to unpack .../libcc1-0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libgomp1:amd64.
Preparing to unpack .../libgomp1_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libgomp1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libitm1:amd64.
Preparing to unpack .../libitm1_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libitm1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libatomic1:amd64.
Preparing to unpack .../libatomic1_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libatomic1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libasan2:amd64.
Preparing to unpack .../libasan2_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libasan2:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package liblsan0:amd64.
Preparing to unpack .../liblsan0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking liblsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libtsan0:amd64.
Preparing to unpack .../libtsan0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libtsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libubsan0:amd64.
Preparing to unpack .../libubsan0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libubsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libcilkrts5:amd64.
Preparing to unpack .../libcilkrts5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libmpx0:amd64.
Preparing to unpack .../libmpx0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libmpx0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libquadmath0:amd64.
Preparing to unpack .../libquadmath0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libgcc-5-dev:amd64.
Preparing to unpack .../libgcc-5-dev_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package gcc-5.
Preparing to unpack .../gcc-5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking gcc-5 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package gcc.
Preparing to unpack .../gcc_4%3a5.3.1-1ubuntu1_amd64.deb ...
Unpacking gcc (4:5.3.1-1ubuntu1) ...
Selecting previously unselected package libstdc++-5-dev:amd64.
Preparing to unpack .../libstdc++-5-dev_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libstdc++-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package g++-5.
Preparing to unpack .../g++-5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking g++-5 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package g++.
Preparing to unpack .../g++_4%3a5.3.1-1ubuntu1_amd64.deb ...
Unpacking g++ (4:5.3.1-1ubuntu1) ...
Selecting previously unselected package make.
Preparing to unpack .../archives/make_4.1-6_amd64.deb ...
Unpacking make (4.1-6) ...
Selecting previously unselected package libdpkg-perl.
Preparing to unpack .../libdpkg-perl_1.18.4ubuntu1.1_all.deb ...
Unpacking libdpkg-perl (1.18.4ubuntu1.1) ...
Selecting previously unselected package xz-utils.
Preparing to unpack .../xz-utils_5.1.1alpha+20120614-2ubuntu2_amd64.deb ...
Unpacking xz-utils (5.1.1alpha+20120614-2ubuntu2) ...
Selecting previously unselected package patch.
Preparing to unpack .../patch_2.7.5-1_amd64.deb ...
Unpacking patch (2.7.5-1) ...
Selecting previously unselected package dpkg-dev.
Preparing to unpack .../dpkg-dev_1.18.4ubuntu1.1_all.deb ...
Unpacking dpkg-dev (1.18.4ubuntu1.1) ...
Selecting previously unselected package build-essential.
Preparing to unpack .../build-essential_12.1ubuntu2_amd64.deb ...
Unpacking build-essential (12.1ubuntu2) ...
Selecting previously unselected package libfakeroot:amd64.
Preparing to unpack .../libfakeroot_1.20.2-1ubuntu1_amd64.deb ...
Unpacking libfakeroot:amd64 (1.20.2-1ubuntu1) ...
Selecting previously unselected package fakeroot.
Preparing to unpack .../fakeroot_1.20.2-1ubuntu1_amd64.deb ...
Unpacking fakeroot (1.20.2-1ubuntu1) ...
Selecting previously unselected package liberror-perl.
Preparing to unpack .../liberror-perl_0.17-1.2_all.deb ...
Unpacking liberror-perl (0.17-1.2) ...
Selecting previously unselected package git-man.
Preparing to unpack .../git-man_1%3a2.7.4-0ubuntu1_all.deb ...
Unpacking git-man (1:2.7.4-0ubuntu1) ...
Selecting previously unselected package git.
Preparing to unpack .../git_1%3a2.7.4-0ubuntu1_amd64.deb ...
Unpacking git (1:2.7.4-0ubuntu1) ...
Selecting previously unselected package python-pkg-resources.
Preparing to unpack .../python-pkg-resources_20.7.0-1_all.deb ...
Unpacking python-pkg-resources (20.7.0-1) ...
Selecting previously unselected package gyp.
Preparing to unpack .../gyp_0.1+20150913git1f374df9-1ubuntu1_all.deb ...
Unpacking gyp (0.1+20150913git1f374df9-1ubuntu1) ...
Selecting previously unselected package javascript-common.
Preparing to unpack .../javascript-common_11_all.deb ...
Unpacking javascript-common (11) ...
Selecting previously unselected package libalgorithm-diff-perl.
Preparing to unpack .../libalgorithm-diff-perl_1.19.03-1_all.deb ...
Unpacking libalgorithm-diff-perl (1.19.03-1) ...
Selecting previously unselected package libalgorithm-diff-xs-perl.
Preparing to unpack .../libalgorithm-diff-xs-perl_0.04-4build1_amd64.deb ...
Unpacking libalgorithm-diff-xs-perl (0.04-4build1) ...
Selecting previously unselected package libalgorithm-merge-perl.
Preparing to unpack .../libalgorithm-merge-perl_0.08-3_all.deb ...
Unpacking libalgorithm-merge-perl (0.08-3) ...
Selecting previously unselected package libfile-fcntllock-perl.
Preparing to unpack .../libfile-fcntllock-perl_0.22-3_amd64.deb ...
Unpacking libfile-fcntllock-perl (0.22-3) ...
Selecting previously unselected package libjs-jquery.
Preparing to unpack .../libjs-jquery_1.11.3+dfsg-4_all.deb ...
Unpacking libjs-jquery (1.11.3+dfsg-4) ...
Selecting previously unselected package libjs-node-uuid.
Preparing to unpack .../libjs-node-uuid_1.4.0-1_all.deb ...
Unpacking libjs-node-uuid (1.4.0-1) ...
Selecting previously unselected package libjs-underscore.
Preparing to unpack .../libjs-underscore_1.7.0~dfsg-1ubuntu1_all.deb ...
Unpacking libjs-underscore (1.7.0~dfsg-1ubuntu1) ...
Selecting previously unselected package zlib1g-dev:amd64.
Preparing to unpack .../zlib1g-dev_1%3a1.2.8.dfsg-2ubuntu4_amd64.deb ...
Unpacking zlib1g-dev:amd64 (1:1.2.8.dfsg-2ubuntu4) ...
Selecting previously unselected package libssl-dev:amd64.
Preparing to unpack .../libssl-dev_1.0.2g-1ubuntu4.6_amd64.deb ...
Unpacking libssl-dev:amd64 (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package libssl-doc.
Preparing to unpack .../libssl-doc_1.0.2g-1ubuntu4.6_all.deb ...
Unpacking libssl-doc (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package libuv1:amd64.
Preparing to unpack .../libuv1_1.8.0-1_amd64.deb ...
Unpacking libuv1:amd64 (1.8.0-1) ...
Selecting previously unselected package libuv1-dev:amd64.
Preparing to unpack .../libuv1-dev_1.8.0-1_amd64.deb ...
Unpacking libuv1-dev:amd64 (1.8.0-1) ...
Selecting previously unselected package manpages-dev.
Preparing to unpack .../manpages-dev_4.04-2_all.deb ...
Unpacking manpages-dev (4.04-2) ...
Selecting previously unselected package nodejs.
Preparing to unpack .../nodejs_4.2.6~dfsg-1ubuntu4.1_amd64.deb ...
Unpacking nodejs (4.2.6~dfsg-1ubuntu4.1) ...
Selecting previously unselected package node-async.
Preparing to unpack .../node-async_0.8.0-1_all.deb ...
Unpacking node-async (0.8.0-1) ...
Selecting previously unselected package node-node-uuid.
Preparing to unpack .../node-node-uuid_1.4.0-1_all.deb ...
Unpacking node-node-uuid (1.4.0-1) ...
Selecting previously unselected package node-underscore.
Preparing to unpack .../node-underscore_1.7.0~dfsg-1ubuntu1_all.deb ...
Unpacking node-underscore (1.7.0~dfsg-1ubuntu1) ...
Selecting previously unselected package rename.
Preparing to unpack .../archives/rename_0.20-4_all.deb ...
Unpacking rename (0.20-4) ...
Selecting previously unselected package libjs-inherits.
Preparing to unpack .../libjs-inherits_2.0.1-3_all.deb ...
Unpacking libjs-inherits (2.0.1-3) ...
Selecting previously unselected package node-abbrev.
Preparing to unpack .../node-abbrev_1.0.5-2_all.deb ...
Unpacking node-abbrev (1.0.5-2) ...
Selecting previously unselected package node-ansi.
Preparing to unpack .../node-ansi_0.3.0-2_all.deb ...
Unpacking node-ansi (0.3.0-2) ...
Selecting previously unselected package node-ansi-color-table.
Preparing to unpack .../node-ansi-color-table_1.0.0-1_all.deb ...
Unpacking node-ansi-color-table (1.0.0-1) ...
Selecting previously unselected package node-archy.
Preparing to unpack .../node-archy_0.0.2-1_all.deb ...
Unpacking node-archy (0.0.2-1) ...
Selecting previously unselected package node-inherits.
Preparing to unpack .../node-inherits_2.0.1-3_all.deb ...
Unpacking node-inherits (2.0.1-3) ...
Selecting previously unselected package node-block-stream.
Preparing to unpack .../node-block-stream_0.0.7-1_all.deb ...
Unpacking node-block-stream (0.0.7-1) ...
Selecting previously unselected package node-delayed-stream.
Preparing to unpack .../node-delayed-stream_0.0.5-1_all.deb ...
Unpacking node-delayed-stream (0.0.5-1) ...
Selecting previously unselected package node-combined-stream.
Preparing to unpack .../node-combined-stream_0.0.5-1_all.deb ...
Unpacking node-combined-stream (0.0.5-1) ...
Selecting previously unselected package node-cookie-jar.
Preparing to unpack .../node-cookie-jar_0.3.1-1_all.deb ...
Unpacking node-cookie-jar (0.3.1-1) ...
Selecting previously unselected package node-forever-agent.
Preparing to unpack .../node-forever-agent_0.5.1-1_all.deb ...
Unpacking node-forever-agent (0.5.1-1) ...
Selecting previously unselected package node-mime.
Preparing to unpack .../node-mime_1.3.4-1_all.deb ...
Unpacking node-mime (1.3.4-1) ...
Selecting previously unselected package node-form-data.
Preparing to unpack .../node-form-data_0.1.0-1_all.deb ...
Unpacking node-form-data (0.1.0-1) ...
Selecting previously unselected package node-rimraf.
Preparing to unpack .../node-rimraf_2.2.8-1_all.deb ...
Unpacking node-rimraf (2.2.8-1) ...
Selecting previously unselected package node-mkdirp.
Preparing to unpack .../node-mkdirp_0.5.0-1_all.deb ...
Unpacking node-mkdirp (0.5.0-1) ...
Selecting previously unselected package node-graceful-fs.
Preparing to unpack .../node-graceful-fs_3.0.2-1_all.deb ...
Unpacking node-graceful-fs (3.0.2-1) ...
Selecting previously unselected package node-fstream.
Preparing to unpack .../node-fstream_0.1.24-1_all.deb ...
Unpacking node-fstream (0.1.24-1) ...
Selecting previously unselected package node-lru-cache.
Preparing to unpack .../node-lru-cache_2.3.1-1_all.deb ...
Unpacking node-lru-cache (2.3.1-1) ...
Selecting previously unselected package node-sigmund.
Preparing to unpack .../node-sigmund_1.0.0-1_all.deb ...
Unpacking node-sigmund (1.0.0-1) ...
Selecting previously unselected package node-minimatch.
Preparing to unpack .../node-minimatch_1.0.0-1_all.deb ...
Unpacking node-minimatch (1.0.0-1) ...
Selecting previously unselected package node-fstream-ignore.
Preparing to unpack .../node-fstream-ignore_0.0.6-2_all.deb ...
Unpacking node-fstream-ignore (0.0.6-2) ...
Selecting previously unselected package node-github-url-from-git.
Preparing to unpack .../node-github-url-from-git_1.1.1-1_all.deb ...
Unpacking node-github-url-from-git (1.1.1-1) ...
Selecting previously unselected package node-once.
Preparing to unpack .../node-once_1.1.1-1_all.deb ...
Unpacking node-once (1.1.1-1) ...
Selecting previously unselected package node-glob.
Preparing to unpack .../node-glob_4.0.5-1_all.deb ...
Unpacking node-glob (4.0.5-1) ...
Selecting previously unselected package nodejs-dev.
Preparing to unpack .../nodejs-dev_4.2.6~dfsg-1ubuntu4.1_amd64.deb ...
Unpacking nodejs-dev (4.2.6~dfsg-1ubuntu4.1) ...
Selecting previously unselected package node-nopt.
Preparing to unpack .../node-nopt_3.0.1-1_all.deb ...
Unpacking node-nopt (3.0.1-1) ...
Selecting previously unselected package node-npmlog.
Preparing to unpack .../node-npmlog_0.0.4-1_all.deb ...
Unpacking node-npmlog (0.0.4-1) ...
Selecting previously unselected package node-osenv.
Preparing to unpack .../node-osenv_0.1.0-1_all.deb ...
Unpacking node-osenv (0.1.0-1) ...
Selecting previously unselected package node-tunnel-agent.
Preparing to unpack .../node-tunnel-agent_0.3.1-1_all.deb ...
Unpacking node-tunnel-agent (0.3.1-1) ...
Selecting previously unselected package node-json-stringify-safe.
Preparing to unpack .../node-json-stringify-safe_5.0.0-1_all.deb ...
Unpacking node-json-stringify-safe (5.0.0-1) ...
Selecting previously unselected package node-qs.
Preparing to unpack .../node-qs_2.2.4-1_all.deb ...
Unpacking node-qs (2.2.4-1) ...
Selecting previously unselected package node-request.
Preparing to unpack .../node-request_2.26.1-1_all.deb ...
Unpacking node-request (2.26.1-1) ...
Selecting previously unselected package node-semver.
Preparing to unpack .../node-semver_2.1.0-2_all.deb ...
Unpacking node-semver (2.1.0-2) ...
Selecting previously unselected package node-tar.
Preparing to unpack .../node-tar_1.0.3-2_all.deb ...
Unpacking node-tar (1.0.3-2) ...
Selecting previously unselected package node-which.
Preparing to unpack .../node-which_1.0.5-2_all.deb ...
Unpacking node-which (1.0.5-2) ...
Selecting previously unselected package node-gyp.
Preparing to unpack .../node-gyp_3.0.3-2ubuntu1_all.deb ...
Unpacking node-gyp (3.0.3-2ubuntu1) ...
Selecting previously unselected package node-ini.
Preparing to unpack .../node-ini_1.1.0-1_all.deb ...
Unpacking node-ini (1.1.0-1) ...
Selecting previously unselected package node-lockfile.
Preparing to unpack .../node-lockfile_0.4.1-1_all.deb ...
Unpacking node-lockfile (0.4.1-1) ...
Selecting previously unselected package node-mute-stream.
Preparing to unpack .../node-mute-stream_0.0.4-1_all.deb ...
Unpacking node-mute-stream (0.0.4-1) ...
Selecting previously unselected package node-normalize-package-data.
Preparing to unpack .../node-normalize-package-data_0.2.2-1_all.deb ...
Unpacking node-normalize-package-data (0.2.2-1) ...
Selecting previously unselected package node-read.
Preparing to unpack .../node-read_1.0.5-1_all.deb ...
Unpacking node-read (1.0.5-1) ...
Selecting previously unselected package node-read-package-json.
Preparing to unpack .../node-read-package-json_1.2.4-1_all.deb ...
Unpacking node-read-package-json (1.2.4-1) ...
Selecting previously unselected package node-retry.
Preparing to unpack .../node-retry_0.6.0-1_all.deb ...
Unpacking node-retry (0.6.0-1) ...
Selecting previously unselected package node-sha.
Preparing to unpack .../node-sha_1.2.3-1_all.deb ...
Unpacking node-sha (1.2.3-1) ...
Selecting previously unselected package node-slide.
Preparing to unpack .../node-slide_1.1.4-1_all.deb ...
Unpacking node-slide (1.1.4-1) ...
Selecting previously unselected package npm.
Preparing to unpack .../npm_3.5.2-0ubuntu4_all.deb ...
Unpacking npm (3.5.2-0ubuntu4) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Processing triggers for systemd (229-4ubuntu16) ...
Setting up libatm1:amd64 (1:2.5.1-1.5) ...
Setting up libmnl0:amd64 (1.0.3-5) ...
Setting up libpopt0:amd64 (1.16-10) ...
Setting up libgdbm3:amd64 (1.8.3-13.1) ...
Setting up libxau6:amd64 (1:1.0.8-1) ...
Setting up libxdmcp6:amd64 (1:1.1.2-1.1) ...
Setting up libxcb1:amd64 (1.11.1-1ubuntu1) ...
Setting up libx11-data (2:1.6.3-1ubuntu2) ...
Setting up libx11-6:amd64 (2:1.6.3-1ubuntu2) ...
Setting up libxext6:amd64 (2:1.3.3-1) ...
Setting up perl-modules-5.22 (5.22.1-9) ...
Setting up libperl5.22:amd64 (5.22.1-9) ...
Setting up perl (5.22.1-9) ...
update-alternatives: using /usr/bin/prename to provide /usr/bin/rename (rename) in auto mode
Setting up mime-support (3.59ubuntu1) ...
Setting up libexpat1:amd64 (2.1.0-7ubuntu0.16.04.2) ...
Setting up libffi6:amd64 (3.2.1-4) ...
Setting up libsqlite3-0:amd64 (3.11.0-1ubuntu1) ...
Setting up libssl1.0.0:amd64 (1.0.2g-1ubuntu4.6) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Setting up libpython2.7-stdlib:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Setting up python2.7 (2.7.12-1ubuntu0~16.04.1) ...
Setting up libpython-stdlib:amd64 (2.7.11-1) ...
Setting up python (2.7.11-1) ...
Setting up libgmp10:amd64 (2:6.1.0+dfsg-2) ...
Setting up libmpfr4:amd64 (3.1.4-1) ...
Setting up libmpc3:amd64 (1.0.3-1) ...
Setting up bzip2 (1.0.6-8) ...
Setting up libmagic1:amd64 (1:5.25-2ubuntu1) ...
Setting up file (1:5.25-2ubuntu1) ...
Setting up iproute2 (4.3.0-1ubuntu3) ...
Setting up ifupdown (0.8.10ubuntu1.2) ...
Creating /etc/network/interfaces.
Setting up libisc-export160 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Setting up libdns-export162 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Setting up isc-dhcp-client (4.3.3-5ubuntu12.6) ...
Setting up isc-dhcp-common (4.3.3-5ubuntu12.6) ...
Setting up less (481-2.1ubuntu0.1) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Setting up libbsd0:amd64 (0.8.2-1) ...
Setting up libnettle6:amd64 (3.2-1ubuntu0.16.04.1) ...
Setting up libhogweed4:amd64 (3.2-1ubuntu0.16.04.1) ...
Setting up libidn11:amd64 (1.32-3ubuntu1.1) ...
Setting up libp11-kit0:amd64 (0.23.2-5~ubuntu16.04.1) ...
Setting up libtasn1-6:amd64 (4.7-3ubuntu0.16.04.1) ...
Setting up libgnutls30:amd64 (3.4.10-4ubuntu1.2) ...
Setting up libxtables11:amd64 (1.6.0-2ubuntu3) ...
Setting up netbase (5.3) ...
Setting up openssl (1.0.2g-1ubuntu4.6) ...
Setting up ca-certificates (20160104ubuntu1) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Setting up krb5-locales (1.13.2+dfsg-5ubuntu2) ...
Setting up libroken18-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libasn1-8-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libkrb5support0:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libk5crypto3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libkeyutils1:amd64 (1.5.9-8ubuntu1) ...
Setting up libkrb5-3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libgssapi-krb5-2:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libhcrypto4-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libheimbase1-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libwind0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libhx509-5-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libkrb5-26-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libheimntlm0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libgssapi3-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libsasl2-modules-db:amd64 (2.1.26.dfsg1-14build1) ...
Setting up libsasl2-2:amd64 (2.1.26.dfsg1-14build1) ...
Setting up libldap-2.4-2:amd64 (2.4.42+dfsg-2ubuntu3.1) ...
Setting up librtmp1:amd64 (2.4+20151223.gitfa8646d-1build1) ...
Setting up libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.2) ...
Setting up libedit2:amd64 (3.1-20150325-1ubuntu2) ...
Setting up libicu55:amd64 (55.1-7) ...
Setting up libsasl2-modules:amd64 (2.1.26.dfsg1-14build1) ...
Setting up libxmuu1:amd64 (2:1.1.2-2) ...
Setting up manpages (4.04-2) ...
Setting up openssh-client (1:7.2p2-4ubuntu2.1) ...
Setting up rsync (3.1.1-3ubuntu1) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of restart.
Setting up xauth (1:1.0.9-1ubuntu2) ...
Setting up binutils (2.26.1-1ubuntu1~16.04.3) ...
Setting up libc-dev-bin (2.23-0ubuntu5) ...
Setting up linux-libc-dev:amd64 (4.4.0-66.87) ...
Setting up libc6-dev:amd64 (2.23-0ubuntu5) ...
Setting up libisl15:amd64 (0.16.1-1) ...
Setting up cpp-5 (5.4.0-6ubuntu1~16.04.4) ...
Setting up cpp (4:5.3.1-1ubuntu1) ...
Setting up libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libgomp1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libitm1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libatomic1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libasan2:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up liblsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libtsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libubsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libmpx0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up gcc-5 (5.4.0-6ubuntu1~16.04.4) ...
Setting up gcc (4:5.3.1-1ubuntu1) ...
Setting up libstdc++-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up g++-5 (5.4.0-6ubuntu1~16.04.4) ...
Setting up g++ (4:5.3.1-1ubuntu1) ...
update-alternatives: using /usr/bin/g++ to provide /usr/bin/c++ (c++) in auto mode
Setting up make (4.1-6) ...
Setting up libdpkg-perl (1.18.4ubuntu1.1) ...
Setting up xz-utils (5.1.1alpha+20120614-2ubuntu2) ...
update-alternatives: using /usr/bin/xz to provide /usr/bin/lzma (lzma) in auto mode
Setting up patch (2.7.5-1) ...
Setting up dpkg-dev (1.18.4ubuntu1.1) ...
Setting up build-essential (12.1ubuntu2) ...
Setting up libfakeroot:amd64 (1.20.2-1ubuntu1) ...
Setting up fakeroot (1.20.2-1ubuntu1) ...
update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode
Setting up liberror-perl (0.17-1.2) ...
Setting up git-man (1:2.7.4-0ubuntu1) ...
Setting up git (1:2.7.4-0ubuntu1) ...
Setting up python-pkg-resources (20.7.0-1) ...
Setting up gyp (0.1+20150913git1f374df9-1ubuntu1) ...
Setting up javascript-common (11) ...
Setting up libalgorithm-diff-perl (1.19.03-1) ...
Setting up libalgorithm-diff-xs-perl (0.04-4build1) ...
Setting up libalgorithm-merge-perl (0.08-3) ...
Setting up libfile-fcntllock-perl (0.22-3) ...
Setting up libjs-jquery (1.11.3+dfsg-4) ...
Setting up libjs-node-uuid (1.4.0-1) ...
Setting up libjs-underscore (1.7.0~dfsg-1ubuntu1) ...
Setting up zlib1g-dev:amd64 (1:1.2.8.dfsg-2ubuntu4) ...
Setting up libssl-dev:amd64 (1.0.2g-1ubuntu4.6) ...
Setting up libssl-doc (1.0.2g-1ubuntu4.6) ...
Setting up libuv1:amd64 (1.8.0-1) ...
Setting up libuv1-dev:amd64 (1.8.0-1) ...
Setting up manpages-dev (4.04-2) ...
Setting up nodejs (4.2.6~dfsg-1ubuntu4.1) ...
update-alternatives: using /usr/bin/nodejs to provide /usr/bin/js (js) in auto mode
Setting up node-async (0.8.0-1) ...
Setting up node-node-uuid (1.4.0-1) ...
Setting up node-underscore (1.7.0~dfsg-1ubuntu1) ...
Setting up rename (0.20-4) ...
update-alternatives: using /usr/bin/file-rename to provide /usr/bin/rename (rename) in auto mode
Setting up libjs-inherits (2.0.1-3) ...
Setting up node-abbrev (1.0.5-2) ...
Setting up node-ansi (0.3.0-2) ...
Setting up node-ansi-color-table (1.0.0-1) ...
Setting up node-archy (0.0.2-1) ...
Setting up node-inherits (2.0.1-3) ...
Setting up node-block-stream (0.0.7-1) ...
Setting up node-delayed-stream (0.0.5-1) ...
Setting up node-combined-stream (0.0.5-1) ...
Setting up node-cookie-jar (0.3.1-1) ...
Setting up node-forever-agent (0.5.1-1) ...
Setting up node-mime (1.3.4-1) ...
Setting up node-form-data (0.1.0-1) ...
Setting up node-rimraf (2.2.8-1) ...
Setting up node-mkdirp (0.5.0-1) ...
Setting up node-graceful-fs (3.0.2-1) ...
Setting up node-fstream (0.1.24-1) ...
Setting up node-lru-cache (2.3.1-1) ...
Setting up node-sigmund (1.0.0-1) ...
Setting up node-minimatch (1.0.0-1) ...
Setting up node-fstream-ignore (0.0.6-2) ...
Setting up node-github-url-from-git (1.1.1-1) ...
Setting up node-once (1.1.1-1) ...
Setting up node-glob (4.0.5-1) ...
Setting up nodejs-dev (4.2.6~dfsg-1ubuntu4.1) ...
Setting up node-nopt (3.0.1-1) ...
Setting up node-npmlog (0.0.4-1) ...
Setting up node-osenv (0.1.0-1) ...
Setting up node-tunnel-agent (0.3.1-1) ...
Setting up node-json-stringify-safe (5.0.0-1) ...
Setting up node-qs (2.2.4-1) ...
Setting up node-request (2.26.1-1) ...
Setting up node-semver (2.1.0-2) ...
Setting up node-tar (1.0.3-2) ...
Setting up node-which (1.0.5-2) ...
Setting up node-gyp (3.0.3-2ubuntu1) ...
Setting up node-ini (1.1.0-1) ...
Setting up node-lockfile (0.4.1-1) ...
Setting up node-mute-stream (0.0.4-1) ...
Setting up node-normalize-package-data (0.2.2-1) ...
Setting up node-read (1.0.5-1) ...
Setting up node-read-package-json (1.2.4-1) ...
Setting up node-retry (0.6.0-1) ...
Setting up node-sha (1.2.3-1) ...
Setting up node-slide (1.1.4-1) ...
Setting up npm (3.5.2-0ubuntu4) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Processing triggers for systemd (229-4ubuntu16) ...
Processing triggers for ca-certificates (20160104ubuntu1) ...
Updating certificates in /etc/ssl/certs...
173 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.

Now we need to apply a workaround for Ubuntu that I once found on StackOverflow, since Ubuntu installs the Node.js binary as nodejs, while the tools we use expect it under the name node:

(container)# ln -s nodejs /usr/bin/node
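This works because a relative symlink is resolved against the directory that contains it, so node -> nodejs inside /usr/bin points at /usr/bin/nodejs. The same effect can be verified in a scratch directory; the stub nodejs script below is purely illustrative:

```shell
# Demonstrate the relative-symlink trick in a scratch directory
# (the stub "nodejs" script stands in for the real Ubuntu binary)
dir=$(mktemp -d)
printf '#!/bin/sh\necho v4.2.6\n' > "$dir/nodejs"
chmod +x "$dir/nodejs"
ln -s nodejs "$dir/node"   # same form as: ln -s nodejs /usr/bin/node
"$dir/node"                # runs the stub via the "node" name
```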

Let us clone the BrowserStack Protractor Example:

(container)# git clone https://github.com/browserstack/protractor-browserstack
Cloning into 'protractor-browserstack'...
remote: Counting objects: 185, done.
remote: Total 185 (delta 0), reused 0 (delta 0), pack-reused 185
Receiving objects: 100% (185/185), 28.39 KiB | 0 bytes/s, done.
Resolving deltas: 100% (72/72), done.
Checking connectivity... done.

The next command downloads and installs the dependencies:

(container)# cd protractor-browserstack; npm install
> bufferutil@1.2.1 install /app/protractor-browserstack/node_modules/bufferutil
> node-gyp rebuild

make: Entering directory '/app/protractor-browserstack/node_modules/bufferutil/build'
 CXX(target) Release/obj.target/bufferutil/src/bufferutil.o
 SOLINK_MODULE(target) Release/obj.target/bufferutil.node
 COPY Release/bufferutil.node
make: Leaving directory '/app/protractor-browserstack/node_modules/bufferutil/build'

> utf-8-validate@1.2.2 install /app/protractor-browserstack/node_modules/utf-8-validate
> node-gyp rebuild

make: Entering directory '/app/protractor-browserstack/node_modules/utf-8-validate/build'
 CXX(target) Release/obj.target/validation/src/validation.o
 SOLINK_MODULE(target) Release/obj.target/validation.node
 COPY Release/validation.node
make: Leaving directory '/app/protractor-browserstack/node_modules/utf-8-validate/build'
protractor-browserstack@0.1.0 /app/protractor-browserstack
+-- browserstack-local@1.3.0
| +-- https-proxy-agent@1.0.0
| | +-- agent-base@2.0.1
| | | `-- semver@5.0.3
| | +-- debug@2.6.2
| | | `-- ms@0.7.2
| | `-- extend@3.0.0
| +-- is-running@2.1.0
| +-- sinon@1.17.7
| | +-- formatio@1.1.1
| | +-- lolex@1.3.2
| | +-- samsam@1.1.2
| | `-- util@0.10.3
| `-- temp-fs@0.9.9
| `-- rimraf@2.5.4
| `-- glob@7.1.1
| +-- fs.realpath@1.0.0
| +-- inflight@1.0.6
| | `-- wrappy@1.0.2
| +-- minimatch@3.0.3
| | `-- brace-expansion@1.1.6
| | +-- balanced-match@0.4.2
| | `-- concat-map@0.0.1
| +-- once@1.4.0
| `-- path-is-absolute@1.0.1
`-- protractor@2.5.1
 +-- accessibility-developer-tools@2.6.0
 +-- adm-zip@0.4.4
 +-- glob@3.2.11
 | +-- inherits@2.0.1
 | `-- minimatch@0.3.0
 | +-- lru-cache@2.7.3
 | `-- sigmund@1.0.1
 +-- html-entities@1.1.3
 +-- jasmine@2.3.2
 | +-- exit@0.1.2
 | +-- glob@3.2.11
 | | `-- minimatch@0.3.0
 | `-- jasmine-core@2.3.4
 +-- jasminewd@1.1.0
 +-- jasminewd2@0.0.6
 +-- lodash@2.4.2
 +-- minijasminenode@1.1.1
 +-- optimist@0.6.1
 | +-- minimist@0.0.10
 | `-- wordwrap@0.0.3
 +-- q@1.0.0
 +-- request@2.57.0
 | +-- aws-sign2@0.5.0
 | +-- bl@0.9.5
 | | `-- readable-stream@1.0.34
 | | +-- core-util-is@1.0.2
 | | +-- isarray@0.0.1
 | | `-- string_decoder@0.10.31
 | +-- caseless@0.10.0
 | +-- combined-stream@1.0.5
 | | `-- delayed-stream@1.0.0
 | +-- forever-agent@0.6.1
 | +-- form-data@0.2.0
 | | +-- async@0.9.2
 | | `-- combined-stream@0.0.7
 | | `-- delayed-stream@0.0.5
 | +-- har-validator@1.8.0
 | | +-- bluebird@2.11.0
 | | +-- chalk@1.1.3
 | | | +-- ansi-styles@2.2.1
 | | | +-- escape-string-regexp@1.0.5
 | | | +-- has-ansi@2.0.0
 | | | | `-- ansi-regex@2.1.1
 | | | +-- strip-ansi@3.0.1
 | | | `-- supports-color@2.0.0
 | | +-- commander@2.9.0
 | | | `-- graceful-readlink@1.0.1
 | | `-- is-my-json-valid@2.16.0
 | | +-- generate-function@2.0.0
 | | +-- generate-object-property@1.2.0
 | | | `-- is-property@1.0.2
 | | +-- jsonpointer@4.0.1
 | | `-- xtend@4.0.1
 | +-- hawk@2.3.1
 | | +-- boom@2.10.1
 | | +-- cryptiles@2.0.5
 | | +-- hoek@2.16.3
 | | `-- sntp@1.0.9
 | +-- http-signature@0.11.0
 | | +-- asn1@0.1.11
 | | +-- assert-plus@0.1.5
 | | `-- ctype@0.5.3
 | +-- isstream@0.1.2
 | +-- json-stringify-safe@5.0.1
 | +-- mime-types@2.0.14
 | | `-- mime-db@1.12.0
 | +-- node-uuid@1.4.7
 | +-- oauth-sign@0.8.2
 | +-- qs@3.1.0
 | +-- stringstream@0.0.5
 | +-- tough-cookie@2.3.2
 | | `-- punycode@1.4.1
 | `-- tunnel-agent@0.4.3
 +-- saucelabs@1.0.1
 +-- selenium-webdriver@2.47.0
 | +-- tmp@0.0.24
 | +-- ws@0.8.1
 | | +-- bufferutil@1.2.1
 | | | +-- bindings@1.2.1
 | | | `-- nan@2.5.1
 | | +-- options@0.0.6
 | | +-- ultron@1.0.2
 | | `-- utf-8-validate@1.2.2
 | | `-- nan@2.4.0
 | `-- xml2js@0.4.4
 | +-- sax@0.6.1
 | `-- xmlbuilder@8.2.2
 `-- source-map-support@0.2.10
 `-- source-map@0.1.32
 `-- amdefine@1.0.1

Now let us specify the BrowserStack credentials you can find in the “Automate” section on your BrowserStack Account Settings page:

(container)# export BROWSERSTACK_USERNAME=your_browserstack_user_id
(container)# export BROWSERSTACK_ACCESS_KEY=your_browserstack_key

Note that the environment variable names differ from the ones we have used in the Gulp examples above: BROWSERSTACK_USERNAME instead of BROWSERSTACK_USER, and BROWSERSTACK_ACCESS_KEY instead of BROWSERSTACK_KEY.
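Since a missing or misspelled credential variable only surfaces later as an authentication error, it can help to fail early. The following check_browserstack_env helper is a hypothetical convenience, not part of the BrowserStack example:

```shell
# Hypothetical helper: warn if a BrowserStack credential variable is unset or empty
check_browserstack_env() {
  rc=0
  for var in BROWSERSTACK_USERNAME BROWSERSTACK_ACCESS_KEY; do
    # indirect expansion via eval keeps this POSIX-sh compatible
    if [ -z "$(eval echo "\$$var")" ]; then
      echo "ERROR: $var is not set" >&2
      rc=1
    fi
  done
  return $rc
}
```

You could then start the tests with `check_browserstack_env && npm run local`, so the test run only starts when both credentials are present.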

Finally, we start the test session via npm run local:

(container)# npm run local

Connecting local
Connected. Now testing...
Using the selenium server at http://hub-cloud.browserstack.com/wd/hub
[launcher] Running 1 instances of WebDriver
.

Finished in 0.763 seconds
1 test, 1 assertion, 0 failures

[launcher] 0 instance(s) of WebDriver still running
[launcher] chrome #1 passed

With that, we have automatically tested a Chrome browser using BrowserStack:

Excellent!

On this BrowserStack link, you can see in detail which steps were taken during the automated test:

And here are the visual logs with screenshots:

Summary

In this short introduction to cross-browser testing via BrowserStack, we have installed the BrowserStack Protractor example in a Docker container, applied the Ubuntu nodejs/node workaround, and run an automated Chrome test against BrowserStack's remote Selenium hub.

Next Steps:

Further Reading


Java Build Automation Part 2: Create executable jar using Gradle


Original title: How to build a lean JAR File with Gradle


In this step-by-step guide, we will show that Gradle is a good alternative to Maven for packaging Java code into executable JAR files. In order to keep the executable JAR files “lean”, we will keep the dependent JAR files outside of the JAR in a separate folder.

Tools Used

  1. Maven 3.3.9
  2. JDK 1.8.0_101
  3. log4j 1.2.17 (downloaded automatically)
  4. Joda-time 2.5 (downloaded automatically)
  5. Git-2.8.4 with GNU bash 4.3.42(5)

Why using Gradle for a Maven Project?

In this blog post, we will show how Gradle can be used to create an executable/runnable JAR. The same task has been accomplished with Maven on this popular Mkyong blog post. Why would we want to do it with Gradle instead?

Working with both Maven and Gradle, I have found that:

  • Gradle allows me to move any resource file outside of the JAR without the need for any additional Linux script or the like;
  • Gradle allows me to easily create an executable/runnable JAR for the JUnit tests, even if they are not split off into a separate project.

Moreover, while Maven is descriptive, Gradle is procedural in nature. With Maven, you describe the goal and rely on Maven and its plugins to perform the steps you have in mind, whereas with Gradle you have explicit control over each step of the build process. Gradle is easy for programmers to understand and gives them fine-grained control over the build process.

The Goal: a lean, executable JAR File

In the following step-by-step guide, we will create a lean executable JAR file, with the dependent libraries and resources kept outside of the JAR.

Step 1: Download Hello World Maven Project of Mkyong

Download the hello world Maven project that you can find on this popular HowTo page from Mkyong:

curl -OJ http://www.mkyong.com/wp-content/uploads/2012/11/maven-create-a-jar.zip
unzip maven-create-a-jar.zip
cd dateUtils

Logs:

$ curl -OJ http://www.mkyong.com/wp-content/uploads/2012/11/maven-create-a-jar.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7439  100  7439    0     0  23722      0 --:--:-- --:--:-- --:--:-- 24963

olive@LAPTOP-P5GHOHB7  /d/veits/eclipseWorkspaceRecent/MkYong/ttt
$ unzip maven-create-a-jar.zip
Archive:  maven-create-a-jar.zip
   creating: dateUtils/
  inflating: dateUtils/.classpath
  inflating: dateUtils/.DS_Store
   creating: __MACOSX/
   creating: __MACOSX/dateUtils/
  inflating: __MACOSX/dateUtils/._.DS_Store
  inflating: dateUtils/.project
   creating: dateUtils/.settings/
  inflating: dateUtils/.settings/org.eclipse.jdt.core.prefs
  inflating: dateUtils/log4j.properties
  inflating: dateUtils/pom.xml
   creating: dateUtils/src/
   creating: dateUtils/src/main/
   creating: dateUtils/src/main/java/
   creating: dateUtils/src/main/java/com/
   creating: dateUtils/src/main/java/com/mkyong/
   creating: dateUtils/src/main/java/com/mkyong/core/
   creating: dateUtils/src/main/java/com/mkyong/core/utils/
  inflating: dateUtils/src/main/java/com/mkyong/core/utils/App.java
   creating: dateUtils/src/main/resources/
  inflating: dateUtils/src/main/resources/log4j.properties
   creating: dateUtils/src/test/
   creating: dateUtils/src/test/java/
   creating: dateUtils/src/test/java/com/
   creating: dateUtils/src/test/java/com/mkyong/
   creating: dateUtils/src/test/java/com/mkyong/core/
   creating: dateUtils/src/test/java/com/mkyong/core/utils/
  inflating: dateUtils/src/test/java/com/mkyong/core/utils/AppTest.java
olive@LAPTOP-P5GHOHB7  /d/veits/eclipseWorkspaceRecent/MkYong/ttt
$ cd dateUtils/

olive@LAPTOP-P5GHOHB7  /d/veits/eclipseWorkspaceRecent/MkYong/ttt/dateUtils
$ 

Step 2 (optional): Create GIT Repository

In order to see which files have been changed by which step, we can create a local GIT repository as follows:

git init
# echo "Converting Maven to Gradle" > Readme.txt
git add .
git commit -m "first commit"

After each step, you can then repeat the last two commands with a different message, so you can always go back to a previous step if you need to do so. If you have made changes in a step that you have not committed yet, you can easily go back to the last clean commit state by issuing the command

# go back to status of last commit:
git stash -u

Warning: this will remove any new files you have created since the last commit from the working directory (they are kept in the stash and can be restored with git stash pop).
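The commit-and-stash workflow above can be sketched end to end in a throwaway repository; the /tmp path and the demo identity below are made-up placeholder values:

```shell
# Demo of the stash round trip in a scratch repository.
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
rm -rf /tmp/git-stash-demo
mkdir -p /tmp/git-stash-demo && cd /tmp/git-stash-demo
git init -q
git commit -q --allow-empty -m "first commit"
touch scratch.file            # a new file, not committed yet
git stash -u                  # removes scratch.file from the working directory ...
test ! -f scratch.file        # ... it is gone here ...
git stash pop                 # ... but it can be restored from the stash
test -f scratch.file
```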

Step 3 (required): Initialize Gradle

gradle init

This will automatically create a build.gradle file from the Maven POM file with the following content:

apply plugin: 'java'
apply plugin: 'maven'

group = 'com.mkyong.core.utils'
version = '1.0-SNAPSHOT'

description = """dateUtils"""

sourceCompatibility = 1.7
targetCompatibility = 1.7

repositories {

     maven { url "http://repo.maven.apache.org/maven2" }
}
dependencies {
    compile group: 'joda-time', name: 'joda-time', version:'2.5'
    compile group: 'log4j', name: 'log4j', version:'1.2.17'
    testCompile group: 'junit', name: 'junit', version:'4.11'
}

Step 4 (required): Gather Data

Since we are starting from a Maven project that is already prepared to create a runnable JAR via Maven, we can extract the needed data from the pom.xml file:

MAINCLASS=`grep '<mainClass' pom.xml | cut -f2 -d">" | cut -f1 -d"<"`

Note: if the pom.xml does not define a mainClass in its maven plugin configuration, you need to set MAINCLASS manually, e.g.

MAINCLASS=com.mkyong.core.utils.App
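The grep/cut pipeline simply slices out the text between the angle brackets. A minimal standalone sketch, using a sample line as it appears in the maven plugin section of the pom.xml:

```shell
# sample input line; in the real command it comes from pom.xml via grep
line='<mainClass>com.mkyong.core.utils.App</mainClass>'
# field 2 after splitting on ">" is "com.mkyong.core.utils.App</mainClass";
# cutting field 1 on "<" then strips the closing tag
MAINCLASS=$(echo "$line" | grep '<mainClass' | cut -f2 -d">" | cut -f1 -d"<")
echo "$MAINCLASS"   # prints: com.mkyong.core.utils.App
```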

We can also define where the dependency jars will be copied to later:

DEPENDENCY_JARS=dependency-jars

Logs:

$ MAINCLASS=`grep '<mainClass' pom.xml | cut -f2 -d">" | cut -f1 -d"<"`
$ echo $MAINCLASS
com.mkyong.core.utils.App
$ DEPENDENCY_JARS=dependency-jars
$ echo $DEPENDENCY_JARS
dependency-jars

Step 5 (required): Prepare to copy dependent Jars

Here, we will add instructions to the build.gradle file specifying which dependency JAR files are to be copied into a directory accessible by the executable jar.

We need to copy the jars we depend on to a folder the runnable jar will access later on. See e.g. this StackOverflow question on this topic.

cat << END >> build.gradle

// copy dependency jars to build/libs/$DEPENDENCY_JARS 
task copyJarsToLib (type: Copy) {
    def toDir = "build/libs/$DEPENDENCY_JARS"

    // create directories, if not already done:
    file(toDir).mkdirs()

    // copy jars to lib folder:
    from configurations.compile
    into toDir
}
END

Step 6 (required): Prepare the Creation of an executable JAR File

In this step, we define in the build.gradle file, how to create an executable jar file.

cat << END >> build.gradle
jar {
    // exclude log properties (recommended)
    exclude ("log4j.properties")

    // make jar executable: see http://stackoverflow.com/questions/21721119/creating-runnable-jar-with-gradle
    manifest {
        attributes (
            'Main-Class': '$MAINCLASS',
            // add classpath to Manifest; see http://stackoverflow.com/questions/30087427/add-classpath-in-manifest-file-of-jar-in-gradle
            "Class-Path": '. dependency-jars/' + configurations.compile.collect { it.getName() }.join(' dependency-jars/')
            )
    }
}
END
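With the two compile dependencies declared in build.gradle (joda-time 2.5 and log4j 1.2.17), the manifest generated inside the jar should then look roughly like this (a sketch for illustration, not verbatim tool output; real manifests additionally wrap long lines at 72 bytes):

```
Manifest-Version: 1.0
Main-Class: com.mkyong.core.utils.App
Class-Path: . dependency-jars/joda-time-2.5.jar dependency-jars/log4j-1.2.17.jar
```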

Step 7 (required): Define build Dependencies

Up to now, we have defined a task copyJarsToLib, but this task will not be executed unless we tell Gradle to do so. In this step, we specify that each time a jar is created, the copyJarsToLib task is to be performed beforehand. This can be done by telling Gradle that the jar goal depends on the copyJarsToLib task, as follows:

cat << END >> build.gradle

// always call copyJarsToLib when building jars:
jar.dependsOn copyJarsToLib
END

Step 8 (required): Build Project

Meanwhile, the build.gradle file should have following content:

apply plugin: 'java'
apply plugin: 'maven'

group = 'com.mkyong.core.utils'
version = '1.0-SNAPSHOT'

description = """dateUtils"""

sourceCompatibility = 1.7
targetCompatibility = 1.7

repositories {

     maven { url "http://repo.maven.apache.org/maven2" }
}
dependencies {
    compile group: 'joda-time', name: 'joda-time', version:'2.5'
    compile group: 'log4j', name: 'log4j', version:'1.2.17'
    testCompile group: 'junit', name: 'junit', version:'4.11'
}

// copy dependency jars to build/libs/dependency-jars
task copyJarsToLib (type: Copy) {
    def toDir = "build/libs/dependency-jars"

    // create directories, if not already done:
    file(toDir).mkdirs()

    // copy jars to lib folder:
    from configurations.compile
    into toDir
}

jar {
    // exclude log properties (recommended)
    exclude ("log4j.properties")

    // make jar executable: see http://stackoverflow.com/questions/21721119/creating-runnable-jar-with-gradle
    manifest {
        attributes (
            'Main-Class': 'com.mkyong.core.utils.App',
            // add classpath to Manifest; see http://stackoverflow.com/questions/30087427/add-classpath-in-manifest-file-of-jar-in-gradle
            "Class-Path": '. dependency-jars/' + configurations.compile.collect { it.getName() }.join(' dependency-jars/')
            )
    }
}

// always call copyJarsToLib when building jars:
jar.dependsOn copyJarsToLib

Now is the time to create the runnable jar file:

gradle build

Note: be patient at this step: on the first run it can appear to hang for several minutes while it is working in the background.

This will create the runnable jar in build/libs/dateUtils-1.0-SNAPSHOT.jar and copy the dependency jars to build/libs/dependency-jars/.

Logs:

$ gradle build
:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:processResources
:classes
:copyJarsToLib
:jar
:assemble
:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:processTestResources UP-TO-DATE
:testClasses
:test
:check
:build

BUILD SUCCESSFUL

Total time: 3.183 secs

$ ls build/libs/
dateUtils-1.0-SNAPSHOT.jar dependency-jars

$ ls build/libs/dependency-jars/
joda-time-2.5.jar log4j-1.2.17.jar

Step 9: Execute the JAR file

It is best practice to exclude the log4j.properties file from the runnable jar file and place it outside of the jar file, since we want to be able to change logging levels at runtime. This is why we excluded the properties file in step 6. In order to avoid the error “No appenders could be found for logger”, we need to specify the location of the log4j.properties file properly on the command-line.

Step 9.1 Execute JAR file on Linux

On a Linux system, we run the command like follows:

java -jar -Dlog4j.configuration=file:full_path_to_log4j.properties build/libs/dateUtils-1.0-SNAPSHOT.jar

Example:

$ java -jar -Dlog4j.configuration=file:/usr/home/me/dateUtils/log4j.properties build/libs/dateUtils-1.0-SNAPSHOT.jar
11:47:33,018 DEBUG App:18 - getLocalCurrentDate() is executed!
2016-11-14

Note: if the log4j.properties file is in the current directory on a Linux machine, we can also create a shell script run.sh with the content

#!/usr/bin/env bash
java -jar -Dlog4j.configuration=file:`pwd`/log4j.properties build/libs/dateUtils-1.0-SNAPSHOT.jar

and run it via bash run.sh

Step 9.2 Execute JAR file on Windows

In case of Windows in a CMD shell all paths need to be in Windows style:

java -jar -Dlog4j.configuration=file:D:\veits\eclipseWorkspaceRecent\MkYong\dateUtils\log4j.properties build\libs\dateUtils-1.0-SNAPSHOT.jar
11:45:30,007 DEBUG App:18 - getLocalCurrentDate() is executed!
2016-11-14

If we run the command on a Windows GNU bash shell, the syntax is kind of mixed: the path to the jar file is in Linux style while the path to the log properties file needs to be in Windows style (this is, how the Windows java.exe file expects the input of this option):

$ java -jar -Dlog4j.configuration=file:'D:\veits\eclipseWorkspaceRecent\MkYong\dateUtils\log4j.properties' build/libs/dateUtils-1.0-SNAPSHOT.jar
11:45:30,007 DEBUG App:18 - getLocalCurrentDate() is executed!
2016-11-14

Single quotes have been used in order to avoid the need for escaped backslashes like D:\\veits\\eclipseWorkspaceRecent\\… that would otherwise be needed on a Windows system.

Note: if the log4j.properties file is in the current directory on a Windows machine, we can also create a batch file run.bat with the content

java -jar -Dlog4j.configuration=file:%cd%\log4j.properties build\libs\dateUtils-1.0-SNAPSHOT.jar

To run the bat file on GNU bash on Windows, just type ./run.bat

Yepp, that is it: the hello world executable file is printing the date to the console, just as it did in Mkyong’s blog post, where the executable file was created using Maven.


Download the source code from GIT.

Note: in the source code, you also will find a file named prepare_build.gradle.sh, which can be run on a bash shell and will replace the manual steps 4 to 7.

References

Next Steps

  • create an even leaner jar with resource files kept outside of the executable jar. This opens the opportunity of changing resource files at runtime.
  • create an executable jar file that will run the JUnit tests.

 


AWS Automation Part 4: Using Terraform for AWS Automation


This is part 4 of a blog post series, in which we explore how to automate Amazon Web Services (AWS) using the Terraform open source software by HashiCorp. Similar to Cloudify, Terraform is a versatile way to codify any type of infrastructure and to spin up a production-like demo or staging environment on any IaaS cloud like AWS, Azure or Google Cloud within minutes.

In this blog post, we will compare Terraform with Vagrant (providing links to other comparisons like Cloudify along the way), before we use Terraform to spin up a single Ubuntu virtual machine instance on AWS. In the Appendix, we will also show how to access the instance using SSH.

The series is divided into four parts:

  • Part 1: AWS EC2 Introduction introduces Amazon Web Services EC2 (AWS EC2) and shows how to sign into a free trial of Amazon and create, start, shut down and terminate a virtual machine on the AWS EC2 console.
  • Part 2: Automate AWS using Vagrant leads you through the process of using Vagrant to perform the same tasks you have performed in part 1, but now using local Vagrantfiles in order to automate the process. Please be sure to check out part 4, which shows a much simpler way to perform the same tasks using Terraform.
  • Part 3: Deploy Docker Host on AWS using Vagrant shows how Vagrant helps you to go beyond simple creation, startup, shutdown and termination of a virtual machine. In less than 10 minutes, you will be able to install a Docker host on AWS. With a few additional clicks on the AWS EC2 console, you are ready to start your first Docker container in the AWS cloud.
  • Part 4: Automate AWS using Terraform (this post) shows that spinning up a virtual machine instance on AWS using Terraform is even simpler than using the Vagrant AWS plugin we have used in parts 2 and 3. Additionally, Terraform opens up the possibility to use the same tool to provision our resources on other clouds like Azure and Google Cloud.

Document Versions

  • 2016-09-22: initial published version

Contents

In this blog post we will explore how to get started with Amazon Web Services (AWS). After signing in to a free trial of Amazon, we will show how to create, spin up and terminate virtual machines in the cloud using Amazon’s web based AWS EC2 console. After that, a step by step guide will lead us through the process of performing the same tasks in an automated way using Terraform.

While the shown tasks could also be performed with AWS CLI commands, Terraform potentially allows for more sophisticated provisioning tasks like software installation and the upload and execution of arbitrary shell scripts.

Why Terraform?

In part 2 of this series, we had chosen the Vagrant AWS provider plugin to automatically spin up a virtual machine instance on Amazon AWS. Developers are used to Vagrant, since Vagrant offers a great way to locally spin up the virtual machines developers need to perform their development tasks. Even though Vagrant is most often used with the local VirtualBox provider, the AWS plugin allows Vagrant to spin up virtual machine instances on AWS EC2, as we have demonstrated in that blog post.

One of the differences between Vagrant and Terraform is the language used: while Vagrant requires Chef-like Ruby programming (developers like it), Terraform uses a language called HashiCorp Configuration Language (HCL). Here is an example:

# A variable definition:
variable "ami" {
  description = "the AMI to use"
}

# A resource definition:
resource "aws_instance" "web" {
  ami               = "${var.ami}"
  count             = 2
  source_dest_check = false

  ...(AWS credential management skipped here)

  connection {
    user = "myuser"
  }
}

The same code would look similar to the following in a Vagrantfile:

AMI = "ami-1234567"
$instance_name_prefix = "myinstance"
Vagrant.configure("2") do |config|
  (1..2).each do |i|
    config.vm.provider :aws do |aws, override|
      config.vm.define vm_name = "%s-%02d" % [$instance_name_prefix, i] do |config|
        aws.ami = "#{AMI}"
        ...(AWS credential management skipped here)
        override.vm.box = "dummy"
        override.ssh.username = "myuser"
      end
    end
  end
end

We can see that a Vagrantfile is a Ruby program, while the Terraform language reads more like a status description file. It is a matter of taste, whether you prefer the one over the other. I assume that Ruby programming gives you more fine-grained possibilities to adapt the environments to your needs, while Terraform potentially offers the possibility to gain a better overview on the desired state.

In my opinion, the biggest difference between Vagrant and Terraform is the scope of those tools: according to HashiCorp, Vagrant is not designed for production-like environments. HashiCorp’s Terraform Intro is pointing out the following:

Modern software is increasingly networked and distributed. Although tools like Vagrant exist to build virtualized environments for demos, it is still very challenging to demo software on real infrastructure which more closely matches production environments.

Software writers can provide a Terraform configuration to create, provision and bootstrap a demo on cloud providers like AWS. This allows end users to easily demo the software on their own infrastructure, and even enables tweaking parameters like cluster size to more rigorously test tools at any scale.

List of supported Terraform Providers

We could argue that all of that can also be done with Vagrant and its AWS plugin. However, the big difference is that Terraform comes with a long, long list of supported providers, as seen on the right-hand side of this page. We find all major IaaS providers like AWS, MS Azure, Google Engine, DigitalOcean and SoftLayer, but also an important PaaS provider like Heroku. Moreover, we find support for local virtual infrastructure providers like OpenStack and some “initial support” for VMware tools like vSphere and vCloud. Unfortunately, VirtualBox is missing in the official list, so developers either keep working with Vagrant locally, or they could try a third party Terraform VirtualBox provider. Docker support is also classified as “initial support”, and Docker Cloud as well as Kubernetes or OpenShift Origin are missing altogether.

Terraform tries to codify any type of resource, so we can even find interfaces to DNS providers, databases, mail providers and many more. With that, it can spin up a whole environment including virtual machine instances, DNS services, networking, content delivery network services and more. HashiCorp’s introductory web page about Terraform use cases tells us that Terraform can spin up a distributed, sophisticated demo or staging environment in less than 30 seconds.
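For illustration, here is a sketch of how several resource types can be combined in one Terraform configuration; all names and values below are made up for this example (Terraform 0.7-era interpolation syntax):

```hcl
# a virtual machine instance ...
resource "aws_instance" "web" {
  ami           = "ami-0d729a60"
  instance_type = "t2.micro"
}

# ... plus a DNS zone and a record pointing at the instance,
# managed by the same tool in the same plan/apply cycle
resource "aws_route53_zone" "main" {
  name = "example.com"
}

resource "aws_route53_record" "www" {
  zone_id = "${aws_route53_zone.main.zone_id}"
  name    = "www.example.com"
  type    = "A"
  ttl     = "300"
  records = ["${aws_instance.web.public_ip}"]
}
```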

Further Reading about Terraform vs XYZ: You may also want to check out Terraform’s comparison page or an informative slideset of Nati Shalom, GigaSpaces or this CloudFormation vs Terraform comparison.

Why offering yet another ‘Hello World’ for Amazon Web Service Automation via Terraform?

The Terraform web portal already provides an AWS hello world example. The reason I am offering yet another ‘Hello World’ example is that the other guides assume you have already created an AWS user with the appropriate rights. In this guide, we will describe how this is done. Moreover, we will show in an Appendix which steps are necessary to access the created virtual machine instance via SSH.

Why Amazon Web Services?

According to Gartner’s 2015 report, Amazon Web Services is the leader in the IaaS space, followed by Microsoft Azure. See Gartner’s Magic Quadrant on IaaS below:

Gartner 2015 MQ

Source: Gartner (May 2015)

There are many articles out there that compare AWS with Microsoft Azure. From reading those articles, the following over-simplified summary has burnt its traces into my brain:

Amazon Web Services vs. Microsoft Azure is like Open Source Linux world vs. the commercial Microsoft Software world. For a long time, we will need both sides of the world.

Now that we have decided to begin with the open source side of the world, let us get started.

Signing into Amazon Web Services

In order to get started, you need to sign into Amazon Web Services, if you have not already done so. For that, visit https://aws.amazon.com/, scroll down and push the Get Started for Free button. This starts a free tier trial account for up to 12 months with up to two times 750 hrs of computing time: Linux and Windows 2012 Server on a small virtual machine.

Note that you will be offered options that are free along with other services that are not for free, so you need to be a little bit careful. Terraform with its easy automation will help us to minimize the resources needed.


I had signed into AWS long ago, but as far as I remember, you need to choose “I am a new User”, add your email address and desired password and a set of personal data (I am not sure whether I had to add my credit card, since I am an Amazon customer anyway).


If you are interested in creating, launching, stopping and terminating virtual machine instances using the Amazon EC2 console (a web portal), you might want to have a look at part 1 of this series.

In this part of the series, we will concentrate on automating these tasks.

AWS Automation using Terraform

Now we will use Terraform in order to automate launching a virtual machine instance on AWS from an existing image (AMI). Let us start:

Step 0: Set HTTP proxy, if needed

The tests within this Hello World blog post have been performed without an HTTP proxy. If you are located behind a proxy, this should be supported as pointed out here. Try the following commands:

On Mac/*nix systems:

export http_proxy='http://myproxy.dns.name:8080'
export https_proxy='http://myproxy.dns.name:8080'

On Windows:

set http_proxy=http://myproxy.dns.name:8080
set https_proxy=http://myproxy.dns.name:8080

Replace myproxy.dns.name and 8080 by the IP address or DNS name and port owned by the HTTP proxy in your environment.

Step 1a Native Installation

It is best if you have direct Internet access (behind a firewall, but without any HTTP proxy). Install Terraform on your local machine; the installation procedure depends on your operating system and is described here. I have taken the Docker alternative in Step 1b instead, though.

Step 1b Docker Alternative

If you have access to a Docker host, you also just can run any terraform command by creating a function like follows:

terraform() {
  docker run -it --rm -v `pwd`:/currentdir --workdir=/currentdir hashicorp/terraform:light "$@";
}

For a permanent definition, write those three lines to the ~/.bashrc file of your Docker host.

After that, terraform commands can be issued on the Docker host as if Terraform was installed locally. The first time a command is performed, a 20 MB terraform light image will be downloaded automatically from Docker Hub:

$ terraform --version
Unable to find image 'hashicorp/terraform:light' locally
light: Pulling from hashicorp/terraform

ece78a7c791f: Downloading [=====> ] 2.162 MB/18.03 MB
...
Terraform v0.7.4

Once the image is downloaded, the next time you issue the command, the output will look the same, as if the software was installed locally:

$ terraform --version
Terraform v0.7.4

Step 2: Create a Terraform Plan

We create a file named aws_example.tf like follows:

provider "aws" {
  access_key = "ACCESS_KEY_HERE"
  secret_key = "SECRET_KEY_HERE"
  region     = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0d729a60"
  instance_type = "t2.micro"
}

You can get the access key from the AWS IAM Users page (click the user), if it exists already. However, the secret key is only displayed at the time the access key is created. If the secret key is unavailable, try creating a new access key on the AWS IAM Users page (click the user).
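Note: instead of hardcoding the credentials in the .tf file, the Terraform AWS provider can also read them from the standard AWS environment variables (the values below are placeholders):

```shell
# placeholder values -- substitute your own IAM user's credentials
export AWS_ACCESS_KEY_ID="ACCESS_KEY_HERE"
export AWS_SECRET_ACCESS_KEY="SECRET_KEY_HERE"
export AWS_DEFAULT_REGION="us-east-1"
# the provider block in aws_example.tf can then be reduced to:
#   provider "aws" { region = "us-east-1" }
```

This keeps the secret key out of files that might end up in version control.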

The ami of the main images can be retrieved from the AWS console after being logged in. Simulate installing an instance by clicking “Launch Instance” and browse through the main images; the image number starting with “ami” is displayed there.

We have copied the ami number of the Ubuntu Server this time. Then you can cancel the instance creation.

Your region is displayed after the question mark as part of the AWS console URL, once you are logged in.

If you are a “free tier” user of AWS, only use “t1.micro” or “t2.micro” as instance_type. None of the other types are free tier eligible, not even the smaller “t2.nano”; see this comment.

Step 3: Simulate the Terraform Plan

To see what will happen if you execute a Terraform template, just issue the following command:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.example
    ami:                      "ami-26c43149"
    availability_zone:        "<computed>"
    ebs_block_device.#:       "<computed>"
    ephemeral_block_device.#: "<computed>"
    instance_state:           "<computed>"
    instance_type:            "t2.micro"
    key_name:                 "<computed>"
    network_interface_id:     "<computed>"
    placement_group:          "<computed>"
    private_dns:              "<computed>"
    private_ip:               "<computed>"
    public_dns:               "<computed>"
    public_ip:                "<computed>"
    root_block_device.#:      "<computed>"
    security_groups.#:        "<computed>"
    source_dest_check:        "true"
    subnet_id:                "<computed>"
    tenancy:                  "<computed>"
    vpc_security_group_ids.#: "<computed>"


Plan: 1 to add, 0 to change, 0 to destroy.

Step 4: Set permissions of the AWS User

This step is not described in the Quick Start guides I have come across. You can try to skip this step and proceed with the next one. However, if the user owning the AWS credentials you have specified above lacks the needed permissions, you may encounter the following error:

Error launching source instance: UnauthorizedOperation: You are not authorized to perform this operation.

In this case, following steps will fix the issue:

Step 4.1: Create a new user on the AWS IAM Users page , if not already done.

Step 4.2: Assign the needed access rights to the user like follows:

Go to the AWS IAM link https://console.aws.amazon.com/iam/home?region=eu-central-1#policies. The link needs to be adapted to your region; e.g. replace eu-central-1 by the right one from the region list that applies to your account.

Click the “Get Started” button, if the list of policies is not visible already. After that, you should see the list of policies and a filter field.

In the Filter field, search for the term AmazonEC2FullAccess.

Click on the AmazonEC2FullAccess Policy Name and then choose the tab Attached Identities.

Click the Attach button and attach your main user (in my case, the main user “oveits” was already attached; in your case, the list will most likely be empty before you click the Attach button).

Step 5: Apply the Terraform Plan

Note: this step will launch AWS EC2 virtual machine instances. Depending on your pay plan, this might cause some cost.

To apply the Terraform plan, issue the following command:

$ terraform apply
aws_instance.example: Creating...
  ami:                      "" => "ami-26c43149"
  availability_zone:        "" => "<computed>"
  ebs_block_device.#:       "" => "<computed>"
  ephemeral_block_device.#: "" => "<computed>"
  instance_state:           "" => "<computed>"
  instance_type:            "" => "t2.micro"
  key_name:                 "" => "<computed>"
  network_interface_id:     "" => "<computed>"
  placement_group:          "" => "<computed>"
  private_dns:              "" => "<computed>"
  private_ip:               "" => "<computed>"
  public_dns:               "" => "<computed>"
  public_ip:                "" => "<computed>"
  root_block_device.#:      "" => "<computed>"
  security_groups.#:        "" => "<computed>"
  source_dest_check:        "" => "true"
  subnet_id:                "" => "<computed>"
  tenancy:                  "" => "<computed>"
  vpc_security_group_ids.#: "" => "<computed>"
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Note also the information on the terraform.tfstate file, which should not get lost. This shows us that Terraform is not stateless: it will not automatically synchronize with the current state of the provider (AWS), which leads to potential problems if the tfstate file and the real world get out of sync.

In the AWS console, we indeed can see that a Ubuntu instance has been launched.

I have not expected it to be that easy, because:

  • Unlike the Vagrant example, I was not forced to specify the SSH key
  • Unlike the Vagrant example, I was not forced to adapt the security rule to allow SSH traffic to the instance.

Unlike Vagrant, Terraform does not need SSH access to the virtual machine instance in order to spin it up.

Step 6: Destroy the Instance

Now let us destroy the instance again:

Step 6.1: Check the Plan for Destruction
$ terraform plan -destroy
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

aws_instance.example: Refreshing state... (ID: i-8e3f1832)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

- aws_instance.example
Step 6.2 Apply the Plan with “Destroy” Option

And now let us apply the destruction:

$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_instance.example: Refreshing state... (ID: i-8e3f1832)
aws_instance.example: Destroying...
aws_instance.example: Still destroying... (10s elapsed)
aws_instance.example: Still destroying... (20s elapsed)
aws_instance.example: Still destroying... (30s elapsed)
aws_instance.example: Still destroying... (40s elapsed)
aws_instance.example: Still destroying... (50s elapsed)
aws_instance.example: Still destroying... (1m0s elapsed)
aws_instance.example: Destruction complete

Checking with AWS console:


And yes, indeed, the instance was terminated. Note that AWS will keep the instance in terminated status for some time before automatically removing it.

Note also that a created instance will be charged as if it was up 15 minutes minimum. Therefore, it is not a good idea to run such examples in a loop, or with a large number of instances.

 

DONE!


Appendix A: Access the virtual machine via SSH

Step A.1: Check whether you already have SSH access

Try to connect to the virtual machine instance via SSH (for information on SSH clients, check out Appendix C). If you are prompted to accept the SSH fingerprint, the security rule does not need to be updated and you can go to the next step. If there is a timeout instead, perform the steps in Appendix B: Adapt Security Rule manually.

Step A.2: Provision the SSH Key

Step A.2.1 Create or find your SSH key pair

You can follow this guide and let AWS create it for you on this page, or you can use a local OpenSSH installation to create the key pair. I have gone the AWS way this time.

Step A.2.2 Retrieve public Key Data

The key pair you have created contains public key data that looks similar to the following:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ

with or without email-address appended. We need the whole string (including the email-address, if present) in the next step:

Step A.2.3 Specify SSH Key as Terraform Resource

The public key data is now written into a .tf file (I have used the name aws_keyfile.tf) as described here.

resource "aws_key_pair" "deployer" {
  key_name = "deployer-key" 
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email@example.com"
}

As you will see below, this resource will be added to the list of available SSH keys in the AWS console -> EC2 Dashboard -> Key Pairs.

Step A.2.4 Assign Key to the AWS Instance

We now use the key named “deployer-key” in the instance definition. For that, we edit aws_example.tf and add the key_name:

provider "aws" {
  access_key = "MYKEY"
  secret_key = "MYSECRETKEY"
  region     = "eu-central-1"
}

resource "aws_instance" "example" {
  ami           = "ami-26c43149"
  instance_type = "t2.micro"
  key_name = "deployer-key"
}

As you will see below, the key_name will be applied to the new instance, allowing us to SSH into the virtual machine instance.
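As a side note: instead of repeating the literal key name in both resources, the instance can reference the key pair resource directly. This is a sketch that keeps the behavior of the template above; it has the advantage that Terraform then also knows that the instance depends on the key pair:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-26c43149"
  instance_type = "t2.micro"
  # the interpolation creates an implicit dependency on aws_key_pair.deployer
  key_name      = "${aws_key_pair.deployer.key_name}"
}
```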

Step A.2.5 Review Terraform Plan

After that, the plan looks as follows (in the output shown, the instance is already running, but this is irrelevant, since a new instance will be created anyway):

vagrant@localhost /mnt/nfs/terraform $ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.example
    ami:                      "ami-26c43149"
    availability_zone:        "<computed>"
    ebs_block_device.#:       "<computed>"
    ephemeral_block_device.#: "<computed>"
    instance_state:           "<computed>"
    instance_type:            "t2.micro"
    key_name:                 "deployer-key"
    network_interface_id:     "<computed>"
    placement_group:          "<computed>"
    private_dns:              "<computed>"
    private_ip:               "<computed>"
    public_dns:               "<computed>"
    public_ip:                "<computed>"
    root_block_device.#:      "<computed>"
    security_groups.#:        "<computed>"
    source_dest_check:        "true"
    subnet_id:                "<computed>"
    tenancy:                  "<computed>"
    vpc_security_group_ids.#: "<computed>"

+ aws_key_pair.deployer
    fingerprint: "<computed>"
    key_name:    "deployer-key"
    public_key:  "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ"


Plan: 2 to add, 0 to change, 0 to destroy.

The public key will be provisioned to AWS and an instance will be created with the appropriate SSH key. Let us try:

Step A.2.6 Apply the Terraform Plan

vagrant@localhost /mnt/nfs/terraform $ terraform apply
aws_key_pair.deployer: Creating...
  fingerprint: "" => "<computed>"
  key_name:    "" => "deployer-key"
  public_key:  "" => "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ"
aws_instance.example: Creating...
  ami:                      "" => "ami-26c43149"
  availability_zone:        "" => "<computed>"
  ebs_block_device.#:       "" => "<computed>"
  ephemeral_block_device.#: "" => "<computed>"
  instance_state:           "" => "<computed>"
  instance_type:            "" => "t2.micro"
  key_name:                 "" => "deployer-key"
  network_interface_id:     "" => "<computed>"
  placement_group:          "" => "<computed>"
  private_dns:              "" => "<computed>"
  private_ip:               "" => "<computed>"
  public_dns:               "" => "<computed>"
  public_ip:                "" => "<computed>"
  root_block_device.#:      "" => "<computed>"
  security_groups.#:        "" => "<computed>"
  source_dest_check:        "" => "true"
  subnet_id:                "" => "<computed>"
  tenancy:                  "" => "<computed>"
  vpc_security_group_ids.#: "" => "<computed>"
aws_key_pair.deployer: Creation complete
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Now the new key is visible in the AWS console -> EC2 (Dashboard) -> click “Key Pairs”:

2016-09-21-21_17_11-ec2-management-console

The second key named “deployer-key” is the one we have just created.

A new instance has been launched and the correct key is assigned (AWS console -> EC2 (Dashboard) -> click “Running Instances”):

2016-09-22-11_21_19-ec2-management-console

Now I should be able to connect to the system using the key. But with which user? I first tried “root”, but during login I was informed that I should use “ubuntu” instead.
It worked with the following data:

2016-09-22-11_27_52-putty-configuration

The public DNS name or public IP address to be used can be retrieved either from the AWS console

2016-09-22-11_24_17-ec2-management-console

or, more conveniently, from the terraform show command:

$ terraform show
aws_instance.example:
  id = i-fc755340
  ami = ami-26c43149
  availability_zone = eu-central-1b
  disable_api_termination = false
  ebs_block_device.# = 0
  ebs_optimized = false
  ephemeral_block_device.# = 0
  iam_instance_profile =
  instance_state = running
  instance_type = t2.micro
  key_name = deployer-key
  monitoring = false
  network_interface_id = eni-b84a0cc4
  private_dns = ip-172-31-17-159.eu-central-1.compute.internal
  private_ip = 172.31.17.159
  public_dns = ec2-52-29-3-233.eu-central-1.compute.amazonaws.com
  public_ip = 52.29.3.233
  root_block_device.# = 1
  root_block_device.0.delete_on_termination = true
  root_block_device.0.iops = 100
  root_block_device.0.volume_size = 8
  root_block_device.0.volume_type = gp2
  security_groups.# = 0
  source_dest_check = true
  subnet_id = subnet-f373b088
  tags.% = 0
  tenancy = default
  vpc_security_group_ids.# = 1
  vpc_security_group_ids.611464124 = sg-0433846d
aws_key_pair.deployer:
  id = deployer-key
  key_name = deployer-key
  public_key = ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ
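By the way, instead of scanning the full terraform show listing each time, Terraform can print just the address via an output variable. This is a sketch (the output name is my choice; interpolation syntax as used by the Terraform version of that time):

```hcl
# Append e.g. to aws_example.tf, then run "terraform apply" once more.
# Afterwards, "terraform output public_ip" prints just the address.
output "public_ip" {
  value = "${aws_instance.example.public_ip}"
}
```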

2016-09-21-21_32_06-putty-configuration

We need to specify the private key, which is named AWS_SSH_Key.ppk in my case (even though we have chosen to call it deployer-key in the resource). In the case of putty, the key needs to be in .ppk format. See Appendix C for how a .pem file (as you get it from AWS) can be converted to .ppk format.
2016-09-22-11_34_09-ubuntuip-172-31-17-159_

With this information, we can log in via SSH (assuming that you have performed step A.1 and Appendix B, if needed; otherwise, you may get a timeout).

Appendix B: Adapt Security Rule manually

Step B.1: Check, whether you have SSH access

Try to connect to the virtual machine instance. If you are prompted to accept the SSH fingerprint, the security rule does not need to be updated and you can stop here. If there is a timeout, go to the next step.

Step B.2: Update the Security Group

In this step, we will adapt the security group manually in order to allow SSH access to the instance. Note that Appendix B of part 2 of this blog series shows how this step can be automated with a shell script. But for now, let us perform the step manually.

2016.04.01-13_00_29-hc_001

In the EC2 console, under Network&Security -> Security Groups (in my case in EU Central 1: https://eu-central-1.console.aws.amazon.com/ec2/v2/home?region=eu-central-1#SecurityGroups:sort=groupId), we can find the default security group. We need to edit the inbound rule to allow the current source IP address. For that, select the security group, click on the “Inbound” tab at the bottom, specify “My IP” as source and save the rule:

2016.04.01-13_05_18-hc_001

Now, if you try to connect to the virtual machine instance, you should be asked by your SSH client whether you want to permanently add the SSH key fingerprint locally.

DONE

Note: if your IP address changes frequently, you might want to automate the update of the security rule. Check out Appendix B of part 2 of this blog series for this.

Appendix C: SSH Connection Client Alternatives

C.1. SSH Connection via a *nix Client (or bash on Windows)

On a *nix machine or on a bash shell on Windows, you can connect via the *nix built-in SSH client. The following command line connection worked for me on a bash shell on my Windows machine. Replace the path to the private PEM file and the public DNS name, so that it works for you as well:

$ ssh ubuntu@ec2-52-29-14-175.eu-central-1.compute.amazonaws.com -i /g/veits/PC/PKI/AWS/AWS_SSH_Key.pem
The authenticity of host 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com (52.29.14.175)' can't be established.
ECDSA key fingerprint is e2:34:6c:92:e6:5d:73:b0:95:cc:1f:b7:43:bb:54:39.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com,52.29.14.175' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Fri Apr  1 20:38:25 UTC 2016

  System load:  0.08              Processes:           98
  Usage of /:   10.0% of 7.74GB   Users logged in:     0
  Memory usage: 6%                IP address for eth0: 172.31.21.237
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.
ubuntu@ip-172-31-21-237:~$
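A common pitfall with the *nix client: OpenSSH refuses to use a private key file that is readable by other users and aborts with an “UNPROTECTED PRIVATE KEY FILE!” warning. Restricting the permissions fixes this; the sketch below demonstrates it on a scratch file, apply the chmod to your real .pem file instead:

```shell
# OpenSSH refuses group/world-readable private keys; restrict to owner read-only.
PEM_FILE=${PEM_FILE:-./AWS_SSH_Key.pem}
touch "$PEM_FILE"      # stands in for the downloaded key in this sketch
chmod 400 "$PEM_FILE"
ls -l "$PEM_FILE"      # permissions should now read -r--------
```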

DONE

C.2 SSH Connection via putty on Windows

Since I am using a Windows machine and the formatting of an SSH session in a CMD console does not work well (especially if you try to use vim), I prefer to use putty on Windows.

In putty, add the host ubuntu@<public DNS>:

2016-04-01_224807_capture_004

and add the path to the private key file on Connection->SSH->Auth->Private key file for authentication:

2016-04-01_131935_capture_003

Note that the .pem file needs to be converted to the .ppk format putty understands. For that, import the .pem file using the Putty Key Generator (puttygen) via Conversions -> Import key -> choose the .pem file -> Save private key with the .ppk extension.

2016.04.01-13_23_46-hc_001

2016.04.01-13_26_46-hc_001

Now add the path to the .ppk file under Connection->SSH->Auth->Private key file for authentication. In the putty security alert, press the “Yes” button, and we are logged in:

2016-04-01_224815_capture_005

DONE

Appendix D: Sharing Files between Windows Host and Docker Host using NFS (temporary Solution)

Locally, I am running Windows 10 and I am using a Docker host created by Vagrant as a virtual machine on VirtualBox. I have not (yet) configured/installed Vagrant synced folders as described on this web page. Instead, I have set up an NFS server on Windows and mount it within the Docker host as follows:

Step D.1: Install winnfsd

Step D.2: On the Windows machine, create shared folder and start NFS daemon

In a DOS window (run “CMD”), run the following commands (adapt the path to a path that is appropriate for you):

mkdir D:\NFS
winnfsd.exe D:\NFS

On the Linux host, mount the folder:

$ sudo mkdir /mnt/nfs; sudo mount -t nfs -o 'vers=3,nolock,udp' LAPTOP-P5GHOHB7:/D/NFS /mnt/nfs

where LAPTOP-P5GHOHB7 is my Windows machine’s name. I found the required options here (without the -o options, I received the message mount.nfs: mount system call failed.).

Note that this solution is not permanent, since

  • winnfsd.exe is running in foreground within the DOS Window
  • after reboot of the Docker host, the mount command needs to be issued again

TODO: describe the permanent solution, which survives the reboot of the Windows host (need to evaluate different NFS servers, or run winnfsd.exe as a daemon) and the Docker host (via fstab file)
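For the Docker-host side, an /etc/fstab entry along these lines should survive reboots (an untested sketch; it assumes the same host name and mount options as the manual mount command above):

```
# /etc/fstab entry on the Docker host (sketch)
LAPTOP-P5GHOHB7:/D/NFS  /mnt/nfs  nfs  vers=3,nolock,udp  0  0
```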

Summary

It is much simpler to spin AWS virtual machine instances up and down using Terraform than using Vagrant’s AWS plugin. The reason is that Terraform works via the AWS API and (unlike Vagrant) does not require SSH access to the instance. Therefore, we were not forced to adapt any SSH-related security settings. It just works with Terraform.

To be fair: if the AWS instances are part of a development or staging environment, you will need SSH access to the virtual machine instances in most cases anyway. In those cases, a few additional steps are necessary, as shown in Appendices A and B. However, in the end, you only need to add a few lines to the Terraform template.

Adapting the default security rule in order to allow SSH access is the same as with Vagrant. Here, we have shown how this is done manually; in part 2, we offer an automated way of performing it based on the AWS CLI.

The actual SSH access is more convenient with Vagrant, thanks to its nice vagrant ssh command.

All in all, automating AWS using Terraform requires equal or less effort than using Vagrant. And we gain a more clearly laid-out description of the infrastructure resources and more flexibility to apply the same set of resources to a mixed hybrid environment of OpenStack and IaaS clouds like AWS, Azure and Google Cloud, among others.

Possible Next Steps

  • Provisioning and assignment of your own security rules based on Terraform described here
  • Test remote file upload and remote command execution (are SSH keys needed for this to be successful?)
  • Upload/Synchronization of local images with AWS images

 

 


<< Part 1 | Part 2 | Part 3 | Part 4 >>


IT Automation Part IV: Ansible Tower “Hello World” Example


This is part IV of a little “Hello World” example for Ansible, an IT automation (DevOps) tool. This time, we will get acquainted with Ansible Tower, a web front end for Ansible. The post has the following content:

  • Quickest way of “installing” an Ansible Tower trial system (via Vagrant this time)
  • Step by step guide for a minimalistic “Hello World!” example, closely following the official documentation, with the following differences:
    • more insights on the dependencies of the steps (discussed with a simple dependency diagram)
    • a shorter, single-page step-by-step tutorial with links to the official documentation at each step
  • Ansible Tower Benefits

Posts of this series:

  • Part I: Ansible Hello World with a comparison of Ansible vs. Salt vs. Chef vs. Puppet and a hello world example with focus on Playbooks (i.e. tasks), Inventories (i.e. groups of targets) and remote shell script execution.
  • Part II: Ansible Hello World reloaded with focus on templating: create and upload files based on jinja2 templates.
  • Part III: Salt Hello World example: same content as part I, but with Salt instead of Ansible
  • Part IV: Ansible Tower Hello World: investigates Ansible Tower, a professional Web Portal for Ansible

Why not “Installing” Ansible Tower via Docker?

In parts I to III of this blog series, I have had good experiences downloading Docker images from Docker Hub instead of installing the software in question. That is how we “installed” Ansible and SaltStack. This time, I tried to do so as well, but the attempts were less successful. I have tested the following images:

  1. leowmjw/ubuntu-ansible-tower is a manual build and was 8 months old at the time of writing. I succeeded in accomplishing almost all steps, but in the end the launched jobs were stuck in the “pending” state and I could not find out what was wrong.
  2. ybalt/ansible-tower is an automated build, but still 6 months old. The services seemed to start, but the first greeting I got was a “Fatal Error” welcome screen (maybe because I did not map the certs private folder, but I did not want to mess around with certificates).

At this point, I gave up following the Docker path for now, even though there are many other public Ansible Tower images on Docker Hub I could test. Instead, I found that installing Ansible Tower on a VirtualBox image using Vagrant is a very convenient way of tackling the installation. It is recommended for proof-of-concept purposes only, but it is an officially offered way of creating an Ansible Tower service; see this official documentation.

Installing Ansible Tower via Vagrant

Prerequisites

  • VirtualBox 5 is installed. You can get the software from here. See Appendix A of part 1 of this series for some possible workarounds, if you have problems installing VirtualBox on a Windows system.
  • Vagrant is installed. Get the software from here.
  • Enough resources on the host system: an additional 2 GB RAM and 2 vCPUs. However, the top command on my notebook shows that the host system is running with less than 20% CPU load while Ansible Tower is running.

Step 1: install Ansible Tower via Vagrant and connect via SSH

Following these instructions, you can issue the following commands to download, start, and connect to the Ansible Tower image:

$ vagrant init ansible/tower # creates the Vagrantfile
$ vagrant box add ansible/tower # optional; downloads the Vagrant box 

As an addition to the official documentation, I have added the second command. The download might take a long time, giving you time for a long coffee break, while the next command requires your immediate attention: I was prompted for the Administrator password twice. If you fail to provide the password in time, you might come back from your coffee break and be surprised by an error message like: VBoxManage.exe: error: Failed to create the host-only adapter.

$ vagrant up --provider virtualbox # the --provider option is optional, since virtualbox is the default provider
$ vagrant ssh # connects to the machine

The following (or similar) content will be shown:

$ vagrant ssh
Last login: Sat Jun 11 00:16:51 2016 from gateway

  Welcome to Ansible Tower!

  Log into the web interface here:

    https://10.42.0.42/

    Username: admin
    Password: <some_random_password>

  The documentation for Ansible Tower is available here:

    http://www.ansible.com/tower/

  For help, email support@ansible.com
[vagrant@ansible-tower ~]$

Step 2: follow the official quick start guide (with minor changes)

We closely follow the official quick start guide with a minor adaptation at step “4. Create the Credential” below. This is because Vagrant does not allow password login for the root user.

Step 2.1: Log into Ansible Tower

We can now connect via browser to https://10.42.0.42/ (the URL from the welcome message above) and log in with the credentials:

2016-06-11 (1)

Step 2.2 Import a License

Since we are logged in for the first time ever, we will be presented with a pop-up like the following:
ansible_tower_no-license
Click on “Get a Free Tower Trial License” and choose the appropriate license version. I have applied for the permanent free license for up to 10 nodes, and I have received an email from support@ansible.com with the license file. You can either drag & drop the license file onto the browser, or you can copy the license file content into the corresponding field, so that the content of the License File field looks similar to:

{
    "company_name": "YourCompanyName", 
    "contact_email": "youremail@company.com", 
    "contact_name": "Your Name", 
    "hostname": "yourhostname", 
    "instance_count": 10, 
    "license_date": 2128169547, 
    "license_key": "your_license_key", 
    "license_type": "basic", 
    "subscription_name": "Basic Tower up to 10 Nodes"
}

After checking the “I agree …” checkbox and submitting the form, you will be rewarded with a License Accepted pop-up window:

ansible_tower_qs-licenseaccepted

After clicking OK, you will reach the Dashboard:

ansible_tower_qs-home-dashboard

Step 2.3: Add all elements needed for a minimum Hello World Example

Now: where to start? For that, let us have a look at the data model and the dependencies. The data model of Ansible Tower, as provided by the Ansible Tower documentation, looks as follows:

ansible_tower_TowerHierarchy

Note that (at least in newer versions of Ansible Tower), users can be attached directly to an organization with no need to create a team. The same holds for inventory groups. Let us get rid of those and perform a “Hello World” in the following simplified model:

2016.06.13-18_07_15-hc_001

Okay, the figure might look slightly more complex than the one in the Ansible Tower documentation. This is because I have added all mandatory parent-to-child dependencies, similar to Unified Modeling Language notation.

Note: In the Ansible Tower model, each child has only a single parent, while each parent can have an arbitrary number of children. The only exception I have found so far are the Projects: each project can be assigned to one or more organizations.

As in the real world, you need to follow the arrows if you want to reach your goal:

  1. Users, inventories and projects depend on organizations. However, an organization named “default” is already pre-installed, so we do not need to do anything here.
  2. we need to create a user before we can create a credential
  3. we need to create an inventory before we can create a host
  4. we need to add a playbook directory before we can define a project
  5. we need to create a playbook file before we can define a job template. In addition, a credential, an inventory and a project need to be available.
  6. a job can be launched from a job template panel only

The official quick start guide goes through this model like follows:

2016.06.13-18_45_45-hc_001

Let us follow the same order. Nothing needs to be done for the organization (“Step 0”): a “default” organization exists already. So, let us start with Step 1:

  1. Create a User
    Click on 2016.06.13-18_52_06-hc_001 and then on Users and then on 2016.06.13-18_53_09-hc_001 and add and save the following data:
    ansible_tower_qs-organizations-create-user-form
  2. Create an Inventory
    Click on Inventories, then on 2016.06.13-18_53_09-hc_001, and add and save the following data:
    ansible_tower_qs-inventories-click-to-save-new-inventory
  3. Create a Host
    Click on Inventories, then on the newly created inventory, and then on 2016.06.13-18_53_09-hc_001 on the far right. Add and save the following data:
    ansible_tower_qs-inventories-host-properties-form
  4. Create a Credential
    Click on 2016.06.13-18_52_06-hc_001 and then on Credentials and then on 2016.06.13-18_53_09-hc_001. Here we deviate a little from the official documentation, which asks us to add and save the following data:
    ansible_tower_qs-credentials-check-ssh-password-ask-at-runtimes-for-new-credential
    However, we have installed Ansible Tower using Vagrant, and by default, the root user cannot log in with a password. Therefore, we will use the vagrant user and log in with the private SSH key as follows:
    2016.06.13-19_31_35-hc_001
    The private SSH key can be found on the host system from which you have started the Ansible Tower. From the directory containing the Vagrantfile, navigate to .vagrant/machines/default/virtualbox and open the file private_key. The content needs to be cut & pasted into the Private Key field above.
  5. Create a Playbook Directory
    Start an SSH session to the Ansible Tower host. In our case, this is done by issuing
    vagrant ssh
    as shown above. Then issue the command
    sudo mkdir /var/lib/awx/projects/helloworld
  6. Create a Playbook
    In the SSH session, start an editor session, e.g.
    vi /var/lib/awx/projects/helloworld/helloworld.yml
    and cut & paste the following content into the file:

    ---
    - name: Hello World!
      hosts: all
    
      tasks:
    
      - name: Hello World!
        shell: echo "Hi! Tower is working!"
  7. Create a Project
    Click on Projects and on 2016.06.13-18_53_09-hc_001 and add and save following data:
    ansible_tower_qs-projects-create-project-form
  8. Create a Job Template
    Click on Job Templates and on 2016.06.13-18_53_09-hc_001 and add and save following data:
    ansible_tower_qs-job-templates-form
    Note that you need to use the credential named “Vagrant User per SSH Key” we created in Step 4 above.
  9. Create (Launch) a Job
    Click on Job Templates and on 2016.06.13-19_22_38-hc_001 in the corresponding line of the job template. In the case of a password credential, enter the password at the prompt (not needed for SSH key authentication):
    ansible_tower_qs-jobs-enter-ssh-password-to-launch-job

This will lead to the following result:

ansible_tower_qs-jobs-home-showing-successful-job

The command line output can be checked by clicking the 2016.06.13-19_28_00-hc_001 button:

ansible_tower_qs-job-results-stdout
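By the way, the playbook from step 6 can also be sanity-checked directly in the SSH session before launching the job from Tower. This is a sketch; the scratch path is my choice, and it assumes the ansible-playbook command is available on the Tower host:

```shell
# Write the hello-world playbook (same content as in step 6) to a scratch location
cat > /tmp/helloworld.yml <<'EOF'
---
- name: Hello World!
  hosts: all

  tasks:

  - name: Hello World!
    shell: echo "Hi! Tower is working!"
EOF

# Run a pure syntax check (no connection to any host), if ansible-playbook is installed
if command -v ansible-playbook >/dev/null 2>&1; then
  ansible-playbook --syntax-check -i localhost, /tmp/helloworld.yml
fi
```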

Why Ansible Tower?

Considering the many steps needed to run a simple “Hello World” example that we could have accomplished on the command line in less than half the time (see e.g. part 1 of this blog series), we might ask ourselves why someone would want to use Ansible Tower at all. The main reasons (as I see them) are as follows:

  • Fine-grained role-based access control: in a real-world example, most steps above would be performed by an Ansible Tower administrator. As an example, you could define a team that is allowed to launch certain jobs only. Moreover, Ansible Tower seems to allow a quite fine-grained access control on what a team or user is allowed to do with Inventories and Job Templates (see 2016.06.13-20_33_32-hc_001 -> Teams -> Permissions):2016.06.13-20_27_22-hc_001
  • Unlike SaltStack, Ansible (without Tower) does not have any notion of immediate or scheduled background jobs. In addition, it does not automatically record the execution time and results of performed jobs, making audit trails a complex task. Ansible Tower fills this gap.
  • Nice and easy graphical handling including statistics etc.
  • Ansible Tower offers a modern RESTful API, which allows it to integrate with existing tools and processes

Summary

In this blog post we have shown how Vagrant can help to quickly set up a local Ansible Tower trial. We have discussed the Ansible Tower object model in its simplest form and have explained why and which minimal steps are needed to perform a simple “Hello World”. Finally, we compared Ansible Tower with plain command-line Ansible (without Tower).


AWS Automation based on Vagrant — Part 1: Getting started with AWS


In this blog post series we will explore how to automate Amazon Web Services (AWS) by using Vagrant. The series is divided into three parts. Readers who are interested in the automation part only can skip part 1 (the AWS EC2 console part) and jump directly to part 2, since both part 1 and part 2 are self-contained.

  • In Part 1, we will introduce Amazon Web Services (AWS) and will show how to sign into a free trial of Amazon, create, start, shut down and terminate a virtual machine on the AWS EC2 console.
  • Part 2 will lead you through the process how to use Vagrant to perform the same tasks you have performed in part 1, but now we will use local Vagrantfiles in order to automate the process.
  • Part 3 is the shortest part and will show how Vagrant helps you to go beyond simple creation, startup, shutdown and termination of a virtual machine. In less than 10 minutes, you will be able to install a Docker host on AWS. With a few additional clicks on the AWS EC2 console, you are ready to start your first Docker container in the AWS cloud.

In the end, you will have a Docker host running in the public cloud, allowing you to load any of the images from Docker Hub instead of installing software.

Document Versions

v1 (2016-04-03): initial release
v2 (2016-04-11): improved the step by step procedure
v3 (2016-04-21): added a chapter Appendix A about AWS cost control

Executive Summary

According to Gartner, Amazon Web Services (AWS) is the No. 1 service provider in the public cloud IaaS space. Amazon is offering a “free tier” test account for up to 12 months and up to 750 hrs of a t2.micro Linux instance as well as 750 hrs of a t2.micro Windows 2012 instance. For more details, check the free tier limits page. For services outside the free tier limits, check the AWS simple monthly (cost) calculator.

By default, AWS assigns a dynamic private and a dynamic public IP address. The public IP address and DNS name will change every time you restart the instance.

Deleting an instance is done by “Terminating” it. For a long time, the terminated instance will still be visible in the instance dashboard as “Terminated”. The sense and non-sense of this is discussed in this forum post.

Contents of Part 1

Why offering yet another ‘Hello World’ for Amazon Web Service Automation using Vagrant?

The reason is that the other guides I have found do not start from scratch, and I have learned the hard way that they assume you have already created an AWS user with the appropriate rights. Since I benefit from all those other evangelists out there helping me with my projects, I feel obliged to pay back my share.

Many thanks to Brian Cantoni, who has shared with us a (much shorter) Quick Start Guide on the same topic. Part 2 of my detailed step by step guide is based on his work.

Why Amazon Web Services?

According to Gartner’s 2015 report, Amazon Web Services is the leader in the IaaS space, followed by Microsoft Azure. See below Gartner’s magic quadrant on IaaS:

Gartner 2015 MQ

Source: Gartner (May 2015)

There are many articles out there that compare AWS with Microsoft Azure. From reading those articles, the following over-simplified summary has burnt its traces into my brain:

Amazon Web Services vs. Microsoft Azure is like Open Source Linux world vs. the commercial Microsoft Software world. For a long time, we will need both sides of the world.

Now that we have decided to begin with the open source side of the world, let us get started.

Getting started with Amazon Web Services

Step 1: sign in to AWS

In order to get started, you need to sign into the Amazon Web Services, if you have not already done so. For that, visit https://aws.amazon.com/, scroll down and push the Get Started for Free button. This starts a free tier trial account for up to 12 months, with up to two times 750 hrs of computing time: Linux and Windows 2012 Server on a small virtual machine.

Note that you will be offered options that are free along with other services that are not for free, so you need to be a little bit careful. Vagrant with its easy automation will help us to minimize the resources needed.

2016-03-27_231950_capture_008

I had signed into AWS long ago, but as far as I remember, you need to choose “I am a new User”, add your email address and desired password and a set of personal data (I am not sure whether I had to add my credit card, since I am an Amazon customer anyway).

2016.03.31-19_50_22-hc_001

Install an Ubuntu machine from the EC2 image repository

Step 2: Enter EC2 Console

Now we want to create our first virtual machine on AWS. After having signed in, you are offered to enter AWS home (the link depends on the region you are in, so I will not confuse you with a link that might not work for you), and you can enter the AWS EC2 console on the upper left:

2016.03.31-19_51_47-hc_001

Step 3: Choose and Launch Instance

On the following page, you are offered to create your first virtual machine instance:

2016.03.27-23_22_49-hc_001

Choose Launch Instance. I am an Ubuntu fan, so I have chosen the HVM version of Ubuntu:

Step 3.1: Choose Image

2016.03.27-23_28_26-hc_001

This image is 'Free tier eligible', so I expect not to be charged for it. Note that there are two image types offered for each operating system: HVM and PV. HVM seems to offer better performance. See here a description of the differences.

2016.04.03-18_14_42-hc_001

Note that only t2.micro is 'Free tier eligible'. Larger instance types do not come for free, as we might have expected. However, note that the smaller t2.nano instance is not 'Free tier eligible' either: if you want to use a t2.nano instance, you will have to pay for it from day one.

If you plan making use of services that are not ‘Free tier eligible’, the AWS simple monthly (cost) calculator helps you to estimate your monthly cost.

Step 3.2: Launch Instance

Now click on Review and Launch.

2016.04.03-18_16_37-hc_001

Step 3.3: Adapt Security Settings

We get a security alert we take seriously: creating an instance that is open to the Internet is not a good idea, so we click “Edit security groups”:

2016.04.03-18_19_34-hc_001

From the drop down list of the Source, we select “My IP”, before we press “Review and Launch”. Then we can review the instance data again and press Launch:

2016.04.03-18_22_51-hc_001

Step 3.4: Create and download SSH Key Pair

In the next pop-up window, you are offered to create a new SSH key pair. Let us do so, call the key "AWS_SSH_key" and download the corresponding PEM file to a place you will find again later, since you will need it to connect to your instance:

2016.04.03-18_25_04-hc_001

Now press “Launch Instances”. You will be redirected to a page that helps you with connection to your Instance:

2016.04.03-18_28_44-hc_001

Step 3.5: Check Instance Status

After clicking on the Instance Link, we will see that the instance is running and the “Status Checks” are being performed:

2016.04.03-18_30_17-hc_001

In the description, we will also find some important information on the instance, like the Public IP and the Public DNS name (FQDN). This information will be needed now, since we want to connect to the instance via SSH.

Note that the IP address and the Public DNS name will change every time the instance is started. For static IP addresses, a so-called Elastic IP needs to be rented from AWS. As long as the Elastic IP is assigned to a running free tier instance, it seems to be free of charge as well.

 

Step 4: Connect via SSH

If you are connecting to your instance from a Linux or Unix operating system, follow step 4 a) and use the built-in SSH client. For Windows systems, we recommend following step 4 b), which is based on putty.

Note: with Cygwin on Windows, you might also try step 4 a). However, other Linux emulations on Windows, like the bash shell that comes with Git, do not play well with editors like vim, so I recommend following step 4 b) in this case.

Step 4 a) Connection from a *nix operating system

On a Unix or Linux machine or on a bash shell on Windows, you can connect via the *nix built-in SSH client. The following command line connection worked for me on a bash shell on my Windows machine. Replace the path to the private PEM file and the public DNS name, so that it works for you as well:

$ ssh ubuntu@ec2-52-29-14-175.eu-central-1.compute.amazonaws.com -i /g/veits/PC/PKI/AWS/AWS_SSH_Key.pem
The authenticity of host 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com (52.29.14.175)' can't be established.
ECDSA key fingerprint is e2:34:6c:92:e6:5d:73:b0:95:cc:1f:b7:43:bb:54:39.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com,52.29.14.175' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Fri Apr  1 20:38:25 UTC 2016

  System load:  0.08              Processes:           98
  Usage of /:   10.0% of 7.74GB   Users logged in:     0
  Memory usage: 6%                IP address for eth0: 172.31.21.237
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.
ubuntu@ip-172-31-21-237:~$
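
A common stumbling block with the downloaded PEM file: OpenSSH refuses private keys that are readable by other users. If you see an "UNPROTECTED PRIVATE KEY FILE!" warning instead of the login banner above, tighten the file permissions first. Here is a minimal sketch; the path and file name are placeholders for wherever you saved your key, and the `touch` only creates a stand-in file so the commands can be tried anywhere:

```shell
# Hypothetical key location; `touch` creates an empty stand-in file for
# this sketch. Use your real downloaded PEM file instead.
KEY=./AWS_SSH_Key.pem
touch "$KEY"

# Restrict the key to owner-read-only, as ssh requires:
chmod 400 "$KEY"

# Verify the permission bits (should print 400):
stat -c %a "$KEY"
```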

Step 4 b) Alternatively, on Windows, use putty to connect via SSH:

Since I am using a Windows machine, and the formatting of an SSH session in a CMD console using command-line ssh in a bash shell does not work well (try using vim in a Windows CMD console), I prefer to use putty on Windows.

In putty, add the host ubuntu@<public DNS>:

2016-04-01_224807_capture_004

 

Convert the pem file to a ppk format putty understands. For that, import the pem file using Putty Key Generator (puttygen) via Conversions->Import Key->choose pem file -> Save private key with ppk extension.

2016.04.01-13_23_46-hc_001

2016.04.01-13_26_46-hc_001

Now you can add the path to the ppk file to Connection->SSH->Auth->Private key file for authentication: in the putty client.

2016-04-01_131935_capture_003

To save the changes, you need to click on Session on the left Category Pane and then press Save:

2016-04-03_184454_capture_007

Now, press the “Open” button, accept the SSH security key:

2016-04-03_184623_capture_008

and you should be logged in:

2016-04-01_224815_capture_005

thumps_up_3

Step 5: Destroy the Instance on AWS

In order to save money (or free trial hours in our case), when you are done playing around with the instance, let us destroy the instance in the AWS EC2 console again:

2016.04.03-18_49_08-hc_001

Select the instance and choose Actions->Instance State->Stop. Note that any changes to the instance will be lost if you stop the system:

2016.04.03-18_57_23-hc_001

Only the private IP addresses and DNS names are kept, while the public IP and DNS are freed up. The next time you start the system, the public IP address and public DNS name will be different, and you will need to update the DNS name in your SSH client for external access.

2016.04.03-19_01_19-hc_001

Alternatively, you can also terminate the instance, which will delete the instance from the AWS database. Note, however, that you will still see the instance in a "Terminated" status for a while. The sense and non-sense of this is discussed in this forum post.

Appendix A: Cost Control with AWS

An estimation of the expected cost can be calculated with the AWS monthly cost calculator tool.

The actual cost can be observed on AWS’ billing page. At the bottom of the page, there is a “Set your first billing alarm” link that allows to define an email alarm as soon as a certain threshold is exceeded.

Note for users that are not in the East of the US: I was a little bit confused that the "Set your first billing alarm" link (https://console.aws.amazon.com/cloudwatch/home?region=us-east-1&#s=Alarms&alarmAction=ListBillingAlarms) contains the variable region=us-east-1, while I am using resources in eu-central-1 only. However, the corresponding link https://eu-central-1.console.aws.amazon.com/cloudwatch/home?region=eu-central-1#alarm:alarmFilter=ANY does not allow setting any billing alarms. I assume that billing for all regions is performed centrally in US East (I hope).


Next: AWS Automation using Vagrant — Part 2: Installation and Usage of the Vagrant AWS Plugin

<< Part 1 | Part 2 | Part 3 >>


AWS Automation based on Vagrant — Part 2: Installation and Usage of the Vagrant AWS Plugin


This is part 2 of a blog post series, in which we will explore how to automate Amazon Web Services (AWS) by using the Vagrant AWS provider plugin.

Note that part 2 is self-contained: it contains all information that is needed to accomplish the tasks at hand without the need to go through part 1 first. Those of you, who have started with part 1 may jump directly to the chapter AWS Automation using Vagrant (step by step guide).

The series is divided into four parts:

  • Part 1: AWS EC2 Introduction introduces Amazon Web Services EC2 (AWS EC2) and shows how to sign in to a free trial of Amazon and how to create, start, shut down and terminate a virtual machine on the AWS EC2 console.
  • Part 2: Automate AWS using Vagrant (this post) leads you through the process of using Vagrant to perform the same tasks you have performed in part 1, but now with local Vagrantfiles in order to automate the process. Please be sure to also check out part 4, which shows a much simpler way to achieve the same using Terraform.
  • Part 3: Deploy Docker Host on AWS using Vagrant shows, how Vagrant helps you to go beyond simple creation, startup, shutdown and termination of a virtual machine. In less than 10 minutes, you will be able to install a Docker host on AWS. With a few additional clicks on the AWS EC2 console, you are ready to start your first Docker container in the AWS cloud.
  • Part 4: Automate AWS using Terraform shows that spinning up a virtual machine instance on AWS using Terraform is even simpler than using the Vagrant AWS plugin we have used in parts 2 and 3. Additionally, Terraform opens up the possibility to use the same tool to provision our resources on other clouds like Azure and Google Cloud.

At the end, you will have a Docker host running in the public cloud, allowing you to load any of the images from Docker Hub instead of installing any software.

Document Versions

  • V2016-04-01: initial published version
  • V2016-04-14 : added Automate the Security Rule Update chapter
  • V2016-04-21: added Next Steps chapter
  • V2016-05-06: added more details and screenshots for Step 7: create user and add access rights; added Document Versions chapter.

Contents of Part 2

In this blog post we will explore how to get started with Amazon Web Services (AWS). After signing in to a free trial of Amazon, we will show how to create, spin up and terminate virtual machines in the cloud using Amazon's AWS EC2 web based console. After that, a step by step guide will lead us through the process of performing the same tasks in an automated way using Vagrant.

While the shown tasks could also be performed with AWS CLI commands, Vagrant potentially allows for more sophisticated provisioning tasks like Software Installation and upload & execution of arbitrary shell scripts.

Why offering yet another ‘Hello World’ for Amazon Web Service Automation using Vagrant?

The reason is that the other guides I have found do not start from scratch: I have learned the hard way that they assume you have already created an AWS user with the appropriate rights. Since I benefit from all those other evangelists out there helping me with my projects, I feel obliged to pay back my share.

Many thanks to Brian Cantoni, who has shared with us a (much shorter) Quick Start Guide on the same topic. Part 2 (this post) of my detailed step by step guide is based on his work.

Why Amazon Web Services?

According to Gartner’s 2015 report, Amazon Web Services is the leader in the IaaS space, followed by Microsoft Azure. See below the Gartner’s magic quadrant on IaaS:

Gartner 2015 MQ

Source: Gartner (May 2015)

There are many articles out there that compare AWS with Microsoft Azure. From reading those articles, the following over-simplified summary has burnt its traces into my brain:

Amazon Web Services vs. Microsoft Azure is like Open Source Linux world vs. the commercial Microsoft Software world. For a long time, we will need both sides of the world.

Now that we have decided to begin with the open source side of the world, let us get started.

Signing into Amazon Web Services

In order to get started, you need to sign in to Amazon Web Services, if you have not already done so. For that, visit https://aws.amazon.com/, scroll down and push the Get Started for Free button. This starts a free tier trial account for up to 12 months, with up to 750 hours per month of computing time each for Linux and Windows Server 2012 on a small virtual machine.

Note that you will be offered options that are free along with other services that are not for free, so you need to be a little bit careful. Vagrant with its easy automation will help us to minimize the resources needed.

2016-03-27_231950_capture_008

I had signed into AWS long ago, but as far as I remember, you need to choose "I am a new User", add your email address, your desired password and a set of personal data (I am not sure whether I had to add my credit card, since I am an Amazon customer anyway).

2016.03.31-19_50_22-hc_001

 

If you are interested in creating, launching, stopping and terminating virtual machine instances using the Amazon EC2 console (a web portal), you might want to have a look at part 1 of this series:

2016.04.03-21_24_41-hc_001

In this part 2 of the series, we will concentrate on automating the tasks.

AWS Automation using Vagrant

Now we will use Vagrant in order to automate the installation of an image. Before trying it myself, I had expected that I could spin up any existing Vagrant box (a "box" is Vagrant's name for a Vagrant image) on AWS. However, I have learned that this is not the case: instead, we need to use a dummy Vagrant box supporting the AWS provider, which in turn is used to spin up an existing AWS image (a so-called AMI) in the cloud. No Vagrant box is uploaded to the cloud during the process.

Let us start:

Step 0: Set HTTP proxy, if needed

Note that the Vagrant setup will not finish successfully in step 10.1 if your local machine does not have SSH access over the Internet to your AWS EC2 instance. If you are located behind an HTTP proxy, you will be able to start and terminate an AWS instance via Vagrant, but Vagrant will hang indefinitely and you will not be able to provision the AWS instance.

If you have no other choice and you are located behind an HTTP proxy, and you only want to test how to start and terminate an AWS instance, you can run the following commands before installing and using Vagrant:

On *nix systems:

export http_proxy='http://myproxy.dns.name:8080'
export https_proxy='http://myproxy.dns.name:8080'

On Windows:

set http_proxy='http://myproxy.dns.name:8080'
set https_proxy='http://myproxy.dns.name:8080'

Replace myproxy.dns.name and 8080 by the DNS name or IP address and the port of the HTTP proxy in your environment.

Step 1: Install Vagrant on your local machine. It is best if you have direct Internet access (behind a firewall, but without any HTTP proxy). The installation procedure depends on your operating system and is described here.

Step 2: Install the Vagrant AWS plugin

vagrant plugin install vagrant-aws

Step 3: download the dummy Vagrant box

Vagrant boxes need to be built for the provider you use, and most Vagrant boxes do not support the AWS provider. The easiest way to work around this issue is to load a dummy box that supports the AWS provider and to override the image that is spun up in the cloud by using an override statement in the Vagrantfile. There, you will point to one of the available Amazon images (called AMIs) on AWS EC2. But for now, let us download the dummy Vagrant box:

vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box

Step 4: Create a directory and within the directory, issue the command

vagrant init

This will create a template Vagrantfile in the directory.

Step 5: Adapt the Vagrantfile

Step 5.1: Add following lines into the Vagrantfile that has just been created:

# Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.provider :aws do |aws, override|
    aws.access_key_id = ENV['AWS_KEY']
    aws.secret_access_key = ENV['AWS_SECRET']
    aws.keypair_name = ENV['AWS_KEYNAME']
    aws.ami = "ami-a7fdfee2"
    aws.region = "us-west-1"
    aws.instance_type = "t2.micro"

    override.vm.box = "dummy"
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = ENV['AWS_KEYPATH']
  end
end

Step 5.2: Note that you need to adapt aws.region to the region you have signed up in. See here a list of regions AWS offers. In my case, this was:

aws.region = "eu-central-1"

Step 5.3: In addition, you will need to update the aws.ami value to the one you have seen in the EC2 console when choosing the image after pressing the Launch Instance button. In my case, this was

aws.ami = "ami-87564feb"

Step 6: Define the AWS credentials

Step 6.1: Create a file called 'aws-credentials' with the following content:

export AWS_KEY='your-key'
export AWS_SECRET='your-secret'
export AWS_KEYNAME='your-keyname'
export AWS_KEYPATH='your-keypath'

Step 6.2: Find the AWS Key ID and Secret Access Key

On the Users tab of the IAM console, click the Create New Users button and create a user of your choice. The 'Access Key ID' and the 'Secret Access Key' will be displayed automatically. In the file above, replace 'your-key' and 'your-secret' by those values.

Step 6.3: Add SSH Key pair name and SSH Key path

On the EC2 console -> Network & Security -> Key Pairs, create and download a new SSH key pair. You will be prompted for an SSH key name and the download path. In the 'aws-credentials' file, replace 'your-keyname' and 'your-keypath' by those values.

Step 7: Add a user and apply the appropriate permissions

This step is not described in the Quick Start guides I have come across, and that has caused some errors I will show in the Appendix as a reference. In order to avoid running into these errors, do the following:

Step 7.1: Create a new user on the AWS IAM Users page, if not already done.

Step 7.2: Assign the needed access rights to the user like follows:

Go to the AWS IAM link https://console.aws.amazon.com/iam/home?region=eu-central-1#policies. The link needs to be adapted to your region; e.g. replace eu-central-1 by the right one from the region list that applies to your account.

Click the “Get Started” button, if the list of policies is not visible already. After that, you should see the list of policies and a filter field:

2016.05.06-10_42_28-hc_001

In the Filter field, search for the term AmazonEC2FullAccess. 

Click on the AmazonEC2FullAccess Policy Name and then choose the tab Attached Identities.

2016.05.06-10_50_14-hc_001

Click the Attach button and attach the main user (in the screenshot above, my main user “oveits” is already attached; in your case, the list will be empty, most likely).

Step 8: Write credentials into Environment variables

In step 6, we have created and edited a file called aws-credentials. Now is the time to write the values into the environment variables by issuing the command

source aws-credentials
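
Since `vagrant up` will fail with hard-to-read authentication errors when one of the variables is empty, it is worth double-checking that all four are really set before continuing. Here is a small sanity check (a sketch; the variable names are the ones defined in step 6.1, and the dummy `export` lines only make the sketch self-contained — omit them after a real `source aws-credentials`):

```shell
# Dummy values so this sketch runs standalone; after sourcing your real
# aws-credentials file, skip these four lines.
export AWS_KEY='your-key'
export AWS_SECRET='your-secret'
export AWS_KEYNAME='your-keyname'
export AWS_KEYPATH='your-keypath'

# Report which of the expected variables are set (values are not shown).
for v in AWS_KEY AWS_SECRET AWS_KEYNAME AWS_KEYPATH; do
  if [ -n "${!v}" ]; then
    echo "$v is set"
  else
    echo "$v is MISSING"
  fi
done
```

The `${!v}` indirect expansion requires bash, which both the Linux shell and the Git bash shell on Windows provide.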

Step 9: Create and spin up the virtual machine on AWS

Note: if you get a nokogiri/nokogiri LoadError at this step, see the Appendix below.

Now we should have prepared everything that we can create and spin up a virtual machine with a single command:

$vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
==> default: Warning! The AWS provider doesn't support any of the Vagrant
==> default: high-level network configurations (`config.vm.network`). They
==> default: will be silently ignored.
==> default: Launching an instance with the following settings...
==> default: -- Type: t2.micro
==> default: -- AMI: ami-87564feb
==> default: -- Region: eu-central-1
==> default: -- Keypair: AWS_SSH_Key
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Source Destination check:
==> default: -- Assigning a public IP address in a VPC: false
==> default: -- VPC tenancy specification: default
==> default: Waiting for instance to become "ready"...
==> default: Waiting for SSH to become available...

Note: the console might hang in this state for a long time (e.g. 20 minutes). Do not send a Ctrl-C to the terminal in this case, since this will terminate the instance again. Note that opening a new terminal does not help either, since Vagrant does not accept a new command as long as the vagrant up has not finished. If it takes longer than 20 minutes, check whether your local machine has SSH access to the Internet.

If you have checked that you have general SSH access to the Internet, but the Vagrant console still hangs in the "Waiting for SSH" state, do not worry yet: we will update the security settings of the AWS instance in step 10.1 below, before Vagrant can detect that the instance is available. For now, we will ignore the hanging Vagrant console. Instead, we will go to the EC2 console. There, you will see that we have already created an instance ('0 Running Instances' is replaced by '1 Running Instances'):

2016.03.31-01_27_55-hc_001

Even though the 'vagrant up' command might still be hanging in the 'Waiting for SSH' status, the instance is up and running. After clicking on the "1 Running Instances" link, we will see something like:

2016.03.31-01_29_55-hc_001

Step 10: Access the virtual machine via SSH:

Step 10.1: Updating the security group

In this step, we will adapt the security group manually in order to allow SSH access to the instance. Note that in Appendix B, we show how this step can be automated with a shell script. But now, let us perform the step manually.

2016.04.01-13_00_29-hc_001

In the EC2 console, under Network & Security -> Security Groups (in my case in EU Central 1: https://eu-central-1.console.aws.amazon.com/ec2/v2/home?region=eu-central-1#SecurityGroups:sort=groupId), we can find the default security group. We need to edit the inbound rule to allow the current source IP address. For that, select the security group, click on the "Inbound" tab at the bottom, specify "My IP" as source and save the policy:

2016.04.01-13_05_18-hc_001

Now check in the console, where you had performed the ‘vagrant up’ command. The command should have finished by now. If the Vagrant console is still hanging, now is the right time to get worried. 😉

In this case, please also check out the note in Step 0.

$vagrant up --provider=aws

Bringing machine 'default' up with 'aws' provider...
...
==> default: Machine is booted and ready for use! 
No host IP was given to the Vagrant core NFS helper. This is an internal error that should be reported as a bug.

For now, we can safely ignore the NFS bug, since we do not need NFS yet.

Step 10.2: Connect via SSH

Now you can SSH into the machine. You need to specify the username ubuntu, the IP address or FQDN of the machine, and the SSH key path we have created in step 6.3. The IP address or FQDN can be read on the EC2 console's Instances Description tab:

2016.04.01-22_31_27-hc_001

Note that the IP address and the Public DNS name will change every time the instance is started.

Step 10.2 a) Connection via Vagrant

This is the simplest way to connect to your image: on the console of your local machine, just type

vagrant ssh

and you will be in, as long as the security policy permits this.

$vagrant ssh
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation: https://help.ubuntu.com/

 System information as of Fri Apr 1 20:47:44 UTC 2016

 System load: 0.0 Processes: 99
 Usage of /: 10.0% of 7.74GB Users logged in: 1
 Memory usage: 6% IP address for eth0: 172.31.21.237
 Swap usage: 0%

 Graph this data and manage this system at:
 https://landscape.canonical.com/

 Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.


Last login: Fri Apr 1 20:47:45 2016 from ppp-93-104-168-193.dynamic.mnet-online.de
ubuntu@ip-172-31-21-237:~$

Note that vagrant ssh does not play well with editors like vim in a Windows command shell. See Step 10.2 c) below for how to use putty instead.

Step 10.2 b) Connection via a *nix SSH client

Alternatively, on a *nix machine or on a bash shell on Windows, you can connect via the *nix built-in SSH client. The following command line connection worked for me on a bash shell on my Windows machine. Replace the path to the private PEM file and the public DNS name, so that it works for you as well:

$ ssh ubuntu@ec2-52-29-14-175.eu-central-1.compute.amazonaws.com -i /g/veits/PC/PKI/AWS/AWS_SSH_Key.pem
The authenticity of host 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com (52.29.14.175)' can't be established.
ECDSA key fingerprint is e2:34:6c:92:e6:5d:73:b0:95:cc:1f:b7:43:bb:54:39.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com,52.29.14.175' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Fri Apr  1 20:38:25 UTC 2016

  System load:  0.08              Processes:           98
  Usage of /:   10.0% of 7.74GB   Users logged in:     0
  Memory usage: 6%                IP address for eth0: 172.31.21.237
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.
ubuntu@ip-172-31-21-237:~$

Step 10.2 c) Alternatively, perform a SSH Connection via putty on Windows:

Since I am using a Windows machine, and the formatting of an SSH session in a CMD console using 'vagrant ssh' does not work well (especially if you try to use vim), I prefer to use putty on Windows.

In putty, add the host ubuntu@<public DNS>:

2016-04-01_224807_capture_004

and add the path to the private key file on Connection->SSH->Auth->Private key file for authentication:

2016-04-01_131935_capture_003

Note that the pem file needs to be converted to a ppk format putty understands. For that, import the pem file using Putty Key Generator (puttygen) via Conversions->Import Key->choose pem file -> Save private key with ppk extension.

2016.04.01-13_23_46-hc_001

2016.04.01-13_26_46-hc_001

Now add the path to the ppk file under Connection->SSH->Auth->Private key file for authentication in the putty client, press the "Open" button, accept the security alert with "Yes", and we are logged in:

2016-04-01_224815_capture_005

thumps_up_3

Step 11: Destroy the Instance on AWS

In order to save money (or free trial hours in our case), let us destroy the instance again by using Vagrant:

$vagrant destroy
 default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Terminating the instance...

That was very quick (less than a second), and we can see that the instance is shutting down:

2016.03.31-01_45_42-hc_001

Within 2-3 minutes, we see that the machine is terminated, and I have learned from googling around that the instance will be deleted within the next 10 to 15 minutes.

2016.03.31-01_46_54-hc_001.png

In the terminated and/or deleted status, the instance does not create any cost and I can go to bed.

Appendix A: Installing the AWS CLI

The local installation of the AWS CLI helped me with troubleshooting the issues I had during the setup. If you need the AWS CLI, you can see here how to install it.

After that, you need to add the credentials like follows:

$ aws configure
AWS Access Key ID [****************FJMQ]:
AWS Secret Access Key [****************DVVn]:
Default region name [eu-central-1a]: eu-central-1
Default output format [None]:

The following command helped me to find out that I had a wrong region in the Vagrantfile:
$ aws ec2 describe-key-pairs --key-name AWS_SSH_Key
 A client error (UnauthorizedOperation) occurred when calling the DescribeKeyPairs operation: You are not authorized to perform this operation.

Appendix B: Automate the Security Rule Update

Above, we have shown that you either

  1. need to allow all SSH traffic from anywhere or
  2. need to update the rules in the AWS EC2 console to allow only your current source IP address, every time your source IP address changes (once a day in most home networks).

Option 1 is insecure and option 2 is cumbersome. In this appendix, we will show how option 2 can be automated. Here is a step by step guide:

Step B.1: Install AWS CLI

For that, follow the instructions in Appendix A: install the AWS CLI and add the keys

Step B.2: Verify the AWS user rights

Make sure the AWS user has the needed rights/permissions. For that, follow the instructions in Step 7 of the main document and add AmazonEC2FullAccess for the main user, if not already done.

Step B.3: Test that you can see the security policies

If you want to be quicker, you can also skip steps B.3 to B.5 and jump directly to B.6 for creating shell scripts that add and remove security rules. However, steps B.3 to B.5 help you understand what we are doing in the scripts and help to verify that each single command is successful.

On the local command line, perform the command

aws ec2 describe-security-groups

You should see a long answer that starts as follows:

{
    "SecurityGroups": [
        {
            "IpPermissionsEgress": [
                    ...(egress rules)...
            ],
            "Description": "default VPC security group",
            "IpPermissions": [
                    ...(ingress rules)...
            ],
            "GroupName": "default",
            "VpcId": "vpc-a6e13ecf",
            "OwnerId": "923026411698",
            "GroupId": "sg-0433846d"
        },
...(other security groups)...
}
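
The GroupId in this answer is the value we will need for the commands in the next steps. Instead of reading it off the screen, it can also be extracted with plain shell tools. Here is a sketch that works on a saved copy of the output (the sample JSON is a trimmed stand-in for the real answer; with the live CLI, the `--query` option of the aws command can achieve the same):

```shell
# Save a trimmed sample of the describe-security-groups answer; in real
# use, redirect the output of `aws ec2 describe-security-groups` to
# sg.json instead of using this heredoc.
cat > sg.json <<'EOF'
{
    "SecurityGroups": [
        {
            "GroupName": "default",
            "GroupId": "sg-0433846d"
        }
    ]
}
EOF

# Pull out the GroupId value (prints sg-0433846d for the sample above):
grep -o '"GroupId": "[^"]*"' sg.json | cut -d'"' -f4
```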

Step B.4: Test how to add and remove a new ingress rule

Now we can add a new ingress rule; see also the AWS documentation on this topic. First we will simulate the add by specifying the --dry-run option:

$aws ec2 authorize-security-group-ingress --dry-run --group-id sg-0433846d --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "11.22.33.44/32"}]}]'

A client error (DryRunOperation) occurred when calling the AuthorizeSecurityGroupEgress operation: Request would have succeeded, but DryRun flag is set.

This was the right answer. Note that you will need to use your own --group-id, as shown in the output of the default security group above. The IP address does not matter at the moment, since we are only testing the API for now.

Now we run the command again without --dry-run:

aws ec2 authorize-security-group-ingress --group-id sg-0433846d --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "11.22.33.44/32"}]}]'

If everything works right, there will be no response and you will reach the prompt again. You can use the aws ec2 describe-security-groups command again in order to check that a new ingress rule has been added.

Now we will test that the rule is removed again by issuing the command

aws ec2 revoke-security-group-ingress --group-id sg-0433846d --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "11.22.33.44/32"}]}]'

I.e., we just need to replace "authorize" by "revoke". You can use the aws ec2 describe-security-groups command again in order to check that the ingress rule has been removed.
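
Since authorize and revoke must be called with exactly the same rule to match, it helps to generate the --ip-permissions JSON in a single place. Here is a small helper function as a sketch (the group id and IP address are the example values used above):

```shell
# Build the --ip-permissions JSON for an SSH (port 22) ingress rule from
# a given source IP, so authorize and revoke always agree on the rule.
ssh_rule() {
  printf '[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "%s/32"}]}]' "$1"
}

# Example: print the JSON used in the commands above.
ssh_rule 11.22.33.44
```

With this helper, the two commands shorten to `aws ec2 authorize-security-group-ingress --group-id sg-0433846d --ip-permissions "$(ssh_rule 11.22.33.44)"` and the corresponding revoke variant.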

Step B.5: Find your public IP address

This step will only work on a Linux shell (a bash shell on Windows will work as well, though).

You could log into your home network's NAT router in order to find your own Internet IP address. However, there is a more clever way to find the public Internet IP address, as shown in this link: just ask one of the Internet services http://ipinfo.io/ip or http://checkip.dyndns.org. Those can also be tested in an Internet browser. The ipinfo.io service has proven to respond much faster than the checkip service, so let us concentrate on the ipinfo service.

Using a bash shell, and assuming that curl or wget is installed, we will write the current public Internet IP address to a variable via one of the following commands:

currentIP=`wget http://ipinfo.io/ip -qO -`
# or equivalent:
currentIP=`curl -s http://ipinfo.io/ip`

In the following steps, we will use the wget version in the shell scripts.

Step B.6: Put it all together

Step B.6.1: Create a shell script that will add the right rule

Now let us create a file named addSecurityRule.sh with following content:

#!/bin/bash
# addSecurityRule.sh
[ -r lastIP ] && [ -r removeSecurityRule.sh ] && ./removeSecurityRule.sh
currentIP=`wget http://ipinfo.io/ip -qO -`
aws ec2 authorize-security-group-ingress --group-id sg-0433846d --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 22, \"IpRanges\": [{\"CidrIp\": \"$currentIP/32\"}]}]" && echo $currentIP > lastIP

The removeSecurityRule.sh line will remove the previous security rule, if any, before creating a new one. The currentIP line will detect the current public IP address as seen from the Internet (courtesy of this link). Finally, the aws ec2 line will add the current public IP address to those that are allowed to access the instances via SSH, and records it in the file lastIP.
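A defensive refinement (my addition, not part of the original script): validate that the detected address really looks like an IPv4 address before handing it to the aws CLI, so a transient wget failure cannot install a bogus rule. The is_ipv4 helper below is a sketch; it is shown with a sample value instead of the live wget lookup:

```shell
#!/bin/sh
# is_ipv4: return 0 if the argument is a dotted-quad IPv4 address.
is_ipv4() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || return 1
  for octet in $(echo "$1" | tr '.' ' '); do
    [ "$octet" -le 255 ] || return 1
  done
}

# In addSecurityRule.sh, one would guard the aws call like this
# (sample value shown; the real script uses the wget lookup):
currentIP="11.22.33.44"
if is_ipv4 "$currentIP"; then
  echo "valid public IP: $currentIP"
  # aws ec2 authorize-security-group-ingress ... (as above)
else
  echo "no valid public IP detected, aborting" >&2
  exit 1
fi
```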

Step B.6.2: Create a shell script that will remove the rule

The following script named “removeSecurityRule.sh” will remove the security rule again. This step is important, since a security group supports only up to 50 rules, and we need to clean up the security group once a rule is no longer needed.

#!/bin/bash
# removeSecurityRule.sh
if [ -r lastIP ]; then
 currentIP=`cat lastIP`
 aws ec2 revoke-security-group-ingress --group-id sg-0433846d --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 22, \"IpRanges\": [{\"CidrIp\": \"$currentIP/32\"}]}]" && rm lastIP
else
 echo "$0: no file named lastIP found!"
 exit 1
fi

Now, with those scripts available, we just need to issue the command

./addSecurityRule.sh

before issuing the other commands

source aws-credentials
vagrant up

Appendix C: Troubleshooting Steps / Errors

Because the other quick guides were missing some steps, I ran into several errors:

C.1 Wrong region leading to: “The key pair ‘AWS_SSH_Key’ does not exist”

$vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
==> default: Warning! The AWS provider doesn't support any of the Vagrant
==> default: high-level network configurations (`config.vm.network`). They
==> default: will be silently ignored.
==> default: Launching an instance with the following settings...
==> default: -- Type: t2.micro
==> default: -- AMI: ami-a7fdfee2
==> default: -- Region: us-west-1
==> default: -- Keypair: AWS_SSH_Key
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Source Destination check:
==> default: -- Assigning a public IP address in a VPC: false
==> default: -- VPC tenancy specification: default
C:/Users/vo062111/.vagrant.d/gems/gems/excon-0.49.0/lib/excon/middlewares/expects.rb:6:in `response_call': The key pair 'AWS_SSH_Key' does not exist (Fog::Compute::AWS::NotFound)
 from C:/Users/vo062111/.vagrant.d/gems/gems/excon-0.49.0/lib/excon/middlewares/response_parser.rb:8:in `response_call'
 from C:/Users/vo062111/.vagrant.d/gems/gems/excon-0.49.0/lib/excon/connection.rb:389:in `response'
 from C:/Users/vo062111/.vagrant.d/gems/gems/excon-0.49.0/lib/excon/connection.rb:253:in `request'
 from C:/Users/vo062111/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml/sax_parser_connection.rb:35:in `request'
 from C:/Users/vo062111/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml/connection.rb:7:in `request'
 from C:/Users/vo062111/.vagrant.d/gems/gems/fog-aws-0.9.2/lib/fog/aws/compute.rb:525:in `_request'
 from C:/Users/vo062111/.vagrant.d/gems/gems/fog-aws-0.9.2/lib/fog/aws/compute.rb:520:in `request'
 ...

This was because I had the wrong region configured in the Vagrantfile. I changed it to

aws.region = "eu-central-1"

In addition, the AMI was wrong. The correct AMI ID for the region can be found in the EC2 console after pressing Launch Instance.

Therefore, I changed the AMI to

aws.ami = "ami-87564feb"

Then again:

$vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
==> default: Warning! The AWS provider doesn't support any of the Vagrant
==> default: high-level network configurations (`config.vm.network`). They
==> default: will be silently ignored.
==> default: Launching an instance with the following settings...
==> default: -- Type: t2.micro
==> default: -- AMI: ami-87564feb
==> default: -- Region: eu-central-1
==> default: -- Keypair: AWS_SSH_Key
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Source Destination check:
==> default: -- Assigning a public IP address in a VPC: false
==> default: -- VPC tenancy specification: default
There was an error talking to AWS. The error message is shown
below:

UnauthorizedOperation => You are not authorized to perform this operation. Encoded authorization failure message: txYvhypUYdsHX-FIv1N2GGtnAMcIKBbjGrG9PHmCIhG33l8IMxmEhc0W4NuS_ST-5U
Wb-ApATEe56XxQWB2xVu289LKRrT08FhXZHziH_QLGgPb-THBTn0lonbRcsLtkGZurjMzflVYbddqiM34XI0x4aR_VqHWAKLIl3p4Kk3A2Oovu_u4tLT-qYBZ0lovD0bvFH8geve4gpvNI63SSyyWbfBvMI5sQ7SOQ_3E_sYMH8lJ2nhpSPI
OKpcC9fGOJ3EQZBJwlg-76UplZZdlJzGtGTl2XL8lc5OtdqeTNuqivMJbz-GxXH5p0XUvpdeNA-utYJPmWWiGubghz44n_NMuXk58W4p7hlrNDDMu3YGGqMBMKWUUUXAA6SM1o-nm2SNq-xqeZWWrvweRwGzEdBKYz-4jwdmUbSyC3F9rmGs
7vQFKe2lcz9yQwmKTlOfOBDxXsHke5wBu-ii1misYh_ljI0uTiuQc0PlR9IS6jy8A6Raavb3XTYwUlSrqbzefmprEiAkLlvKiCsdNQP8VNbCLtxKUhL3g

C.2 Missing Permissions leading to: “UnauthorizedOperation => You are not authorized to perform this operation”

To resolve this, I attached the policy AmazonEC2FullAccess to the user oveits: search for AmazonEC2FullAccess on the IAM policies page https://console.aws.amazon.com/iam/home?region=eu-central-1#policies (you need to adapt the link to your region!).

Select the policy and choose Attach, then select the user you have created above.

Then I tried again, and it worked as described in Step 9.

C.3 Error: “cannot load such file — nokogiri/nokogiri (LoadError)”

When I tried to issue

vagrant up --provision

I ran into the error

C:/HashiCorp/Vagrant/embedded/gems/gems/nokogiri-1.6.3.1-x86-mingw32/lib/nokogiri.rb:29:in `require': cannot load such file -- nokogiri/nokogiri (LoadError)
 from C:/HashiCorp/Vagrant/embedded/gems/gems/nokogiri-1.6.3.1-x86-mingw32/lib/nokogiri.rb:29:in `rescue in <top (required)>'
 from C:/HashiCorp/Vagrant/embedded/gems/gems/nokogiri-1.6.3.1-x86-mingw32/lib/nokogiri.rb:25:in `<top (required)>'
 from D:/veits/Vagrant/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml.rb:2:in `require'
 from D:/veits/Vagrant/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml.rb:2:in `<top (required)>'
 from D:/veits/Vagrant/.vagrant.d/gems/gems/fog-1.38.0/lib/fog.rb:13:in `require'
 from D:/veits/Vagrant/.vagrant.d/gems/gems/fog-1.38.0/lib/fog.rb:13:in `<top (required)>'
...

I had this error after installing Vagrant 1.8.1 on a new Windows 10 machine. The error seems to be related to this Vagrant issue. After upgrading Vagrant to 1.8.6 by downloading the new version and installing it over the old 1.8.1 version, the problem was resolved.

C.4 Error: An access key ID must be specified via “access_key_id”

This and similar error messages will occur if Step 6.1 was not accomplished before issuing ‘vagrant up’. In that case, the environment variables AWS_KEY, AWS_SECRET etc. are not defined.
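As a sketch, an aws-credentials file (the one sourced via “source aws-credentials”) could look like the following. The names AWS_KEY and AWS_SECRET are the ones this guide refers to; AWS_KEYNAME and AWS_KEYPATH are hypothetical additions, so adapt all names to whatever your Vagrantfile actually reads, and replace the placeholder values with your own IAM credentials:

```shell
# aws-credentials -- sourced before 'vagrant up'
export AWS_KEY="REPLACE_WITH_YOUR_ACCESS_KEY_ID"
export AWS_SECRET="REPLACE_WITH_YOUR_SECRET_ACCESS_KEY"
export AWS_KEYNAME="AWS_SSH_Key"                  # key pair name in EC2 (assumed)
export AWS_KEYPATH="$HOME/.ssh/AWS_SSH_Key.pem"   # local private key file (assumed)
```

Keep this file out of version control, since it contains secrets.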

C.5 No host IP was given to the Vagrant core NFS helper. This is an internal error that should be reported as a bug.

Seen after upgrading Vagrant to 1.8.6 and issuing the command ‘vagrant up --provider=aws’. This error is reported here to be a bug of the AWS plugin. As a workaround, adding the following override line to the Vagrantfile is discussed:

config.vm.provider :aws do |aws, override|
   ...
   override.nfs.functional = false
end

However, if you do so, you run into the next problem: “No synced folder implementation is available for your synced folders”.

Therefore, I have chosen to ignore the error message, since it does not prevent the instance from being launched successfully, as you can verify in the AWS console. If the security settings on AWS are correct, you can also connect to the instance successfully via ‘vagrant ssh’.

Summary

According to Gartner, Amazon Web Services (AWS) is the No. 1 provider in the public cloud IaaS space. Amazon offers a “free tier” test account for up to 12 months, with up to 750 hrs of a t2.micro Linux instance as well as 750 hrs of a t2.micro Windows 2012 instance. For more details, check the free tier limits page. For services outside the free tier limits, check the AWS simple monthly (cost) calculator.

By default, AWS assigns a dynamic private and a dynamic public IP address. The public IP address changes every time you restart the instance.

Deleting an instance is done by “Terminating” it. For 10 to 20 minutes, the terminated instance will still be visible in the instance dashboard as “Terminated”. The sense and non-sense of this is discussed in this forum post.

I have shown that Vagrant can be used as a means to automate the management of AWS instances, including creating, starting and terminating them. Each of those tasks takes only a single command on your local machine’s command line, once the first nine steps of this guide are accomplished.

Note that Vagrant does not upload any Vagrant boxes, as those who know Vagrant might expect. Instead, it is only used as a front end to create, spin up and terminate instances of existing AWS images (AMIs).

Next Steps (Brainstorming):

  • Done: see Appendix B: Automate the Security Rule Update
    learn how to automate the update of the security policy to only allow the current local IP.
  • Done: see Part 3: Provisioning (Installation of software) on an AWS Instance via Vagrant
    • install a Docker host using scripts found on https://github.com/William-Yeh/docker-enabled-vagrant, my favorite Docker host in terms of performance (see performance comparison with CoreOS on this blog post).

<< Part 1 | Part 2 | Part 3 >>


IT Automation Part III: Saltstack “Hello World” Example


This blog post will explore basic capabilities of Salt by going through a hands-on “hello world” IT automation example. Moreover, those basic capabilities are compared for Salt vs. Ansible.

This post is (almost) self-contained, and there is no need to read part I and part II of this series first.

Salt is a quite recent IT automation tool (project started in 2011), which was built to scale well beyond tens of thousands of servers. In this Hello World example, we will investigate how to run Salt in a Docker container and how to perform some simple remote shell commands on the target machines.

Posts of this series:

  • Part I: Ansible Hello World with a comparison of Ansible vs. Salt vs. Chef vs. Puppet and a hello world example with focus on Playbooks (i.e. tasks), Inventories (i.e. groups of targets) and remote shell script execution.
  • Part II: Ansible Hello World reloaded with focus on templating: create and upload files based on jinja2 templates.
  • Part III: Salt Hello World example: same content as part I, but with Salt instead of Ansible
  • Part IV: Ansible Tower Hello World: investigates Ansible Tower, a professional Web Portal for Ansible


Why Salt?

Salt is a newcomer to the game of IT automation, competing against Ansible, Chef, Puppet and maybe CFEngine (in reverse order of project start). You will find recent comparisons of Puppet vs. Chef vs. Ansible vs. Salt in the articles here and here. A more comprehensive, but older, comparison can be found in this InfoWorld article from 2013.

Most of the articles mention that Salt’s strength is its scalability. However, most of the articles do not mention the following (in my view) important advantages of Salt:

  • Salt is Apache 2.0 licensed, whereas Ansible is GPL licensed. Therefore, Salt can be used by developers of commercial software without fear that they could be forced to publish the source code of all their work.
  • Salt comes with a REST interface (example) that is independent of the Web portal, whereas Ansible’s REST interface is part of Ansible Tower, which in turn is a commercial product.
  • Salt has the better step-by-step guides. However, for both Ansible and Salt, the documentation sometimes fails to make clear which features are available in the development version only; this is annoying if you are using the latest stable version, as we will do in this article.

You will find some more practical differences we found out during our “hello world” journey in the Summary section.

Goal

Our main goal today is to get familiar with Salt’s way of defining provisioning tasks and target hosts and groups, similar to what we have done in part 1 of this series.

Provisioning tasks are defined as so-called Salt states in state files (*.sls).

Targets and groups are defined within the Salt master configuration file as so-called node groups.

For an in-depth comparison of targets and groups in Ansible vs. Salt, see the Summary section. We will see that, all in all, the target and group definition and search capabilities are much more elaborate in Salt than in Ansible.

Use Case

The example use case we will explore is:

As an IT administrator, I want to manage target machine groups in order to run arbitrary shell commands on a group of target machines with a single CLI command on the management machine.

The steps that need to be performed are:

  • prepare and test that the automation tool can connect to the target machines
  • define target groups and shell commands and centrally save the information for later usage
  • remotely run the shell commands on the specified targets

We will closely follow the Saltstack Fundamentals guide, with the difference that we will use lightweight Docker images as a replacement for the much larger Vagrant VirtualBox machines. For me, this reduces download times and the resources needed on my notebook. Note that I did not choose the official-looking saltstack Docker images, since they lack any documentation. Instead, we will use this Docker image from jacksoncage, which has good documentation (thanks to Jackson!).

Prerequisites

  • Install a Docker host. On Windows and Mac, you will find a convenient way of doing so described in the chapter “Install a Docker host” of part I of this blog post series.

Download and Start Salt

Once you have installed Docker, downloading, installing and running a Salt master is done with a single docker command on the Docker host. However, in order to allow the configuration files to be stored on the Docker host, we create and enter a directory called “master” first:

Step 1: download and install a Salt master with a single Docker command:

mkdir ~/master; cd ~/master; 
docker run -i -t --name=master -h master -p 4505 -p 4506 \
   -p 8080 -p 8081 -e SALT_NAME=master -e SALT_USE=master \
   -v `pwd`/srv/salt:/srv/salt:rw jacksoncage/salt

This is done in an interactive session, so you need to open a second SSH session to the Docker host and issue the command

docker exec -it master bash

in order to connect to the command line of the Salt master.

Step 2: download and install a Salt target (=”minion”) with a single Docker command:

Now, in a third SSH session to the Docker host, we start a target machine, a so-called minion:

mkdir ~/minion1; cd ~/minion1;
docker run -i -t --name=minion1 -h minion1 --link master:master -p 4505 -p 4506 \
 -p 8080 -p 8081 -e SALT_NAME=minion1 -e SALT_USE=minion \
 -v `pwd`/srv/salt:/srv/salt:rw jacksoncage/salt

The resulting log should look something like:

vagrant@localhost ~/minion1 $ docker run -i -t --name=minion1 -h minion1 --link master:master -p 4505 -p 4506 -p 8080 -p 8081 -e SALT_NAME=minion1 -e SALT_USE=minion -v `pwd`/srv/salt:/srv/salt:rw jacksoncage/salt
INFO: Starting salt-minion with log level info with hostname minion1
[INFO ] Setting up the Salt Minion "minion1"
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[INFO ] Generating keys: /etc/salt/pki/minion
[INFO ] Authentication with master at 172.17.0.5 successful!
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
[INFO ] Added mine.update to schedular
[INFO ] Added new job __mine_interval to scheduler
[INFO ] Minion is starting as user 'root'
[INFO ] Starting pub socket on ipc:///var/run/salt/minion/minion_event_c5a7daa544_pub.ipc
[INFO ] Starting pull socket on ipc:///var/run/salt/minion/minion_event_c5a7daa544_pull.ipc
[INFO ] Minion is ready to receive requests!
[INFO ] Running scheduled job: __mine_interval

The log shows that the minion automatically generates keys and authenticates with the master. Nice feature. We note:

For Salt you need to install an agent, but for Ansible you need to manually add public SSH keys. Which is better?

Note that Salt rewards your agent installation effort (see Appendix A) with an automated key creation and distribution system. This is superior to Ansible’s requirement to manually import the public SSH key on all targets.

Step 3 (optional): perform your first remote task issued from command-line of the master

Now we are ready to issue our first remote command. In the docker exec session on the master, just enter:

salt 'minion1' disk.usage

and see what happens:

root@master:/# salt 'minion1' disk.usage
minion1:
    ----------
    /:
        ----------
        1K-blocks:
            41251136
        available:
            12871844
        capacity:
            68%
        filesystem:
            none
        used:
            26643380
    /dev:
        ----------
        1K-blocks:
            766904
        available:
            766904
        capacity:
            0%
        filesystem:
            tmpfs
        used:
            0

Cool! You have just performed your first remote task via Salt.

Step 4 (optional): perform your first remote shell command issued from command-line of the master

While it is recommended to make use of Salt’s large library of commands, we can always fall back to executing shell commands as follows:

salt 'minion1' cmd.run 'df'

In this case, we will get something like:

root@master:/# salt 'minion1' cmd.run 'df'
minion1:
    Filesystem                                             1K-blocks     Used Available Use% Mounted on
    rootfs                                                  41251136 26644516  12870708  68% /
    none                                                    41251136 26644516  12870708  68% /
    tmpfs                                                     766904        0    766904   0% /dev
    shm                                                        65536        0     65536   0% /dev/shm
    tmpfs                                                     766904        0    766904   0% /sys/fs/cgroup
    /dev/disk/by-uuid/3af531bb-7c15-4e60-b23f-4853c47ccc91  41251136 26644516  12870708  68% /srv/salt
    tmpfs                                                     766904        0    766904   0% /proc/kcore
    tmpfs                                                     766904        0    766904   0% /proc/latency_stats
    tmpfs                                                     766904        0    766904   0% /proc/timer_stats

So, we have executed our first arbitrary shell command.

Step 5 (optional): perform our hello world command similarly to what we did in Part I using Ansible:

Let us now perform the same command we had performed with Ansible:

ansible all -i '192.168.33.10,' -u vagrant -m shell -a "echo hello world\! > hello_world"

For Salt, this translates to:

salt 'minion1' cmd.run "echo hello world\! > hello_world"

Since the command does not produce any output on STDOUT, we only get:

minion1:

But where will we find the file?

By default, Salt runs as root in /root. This can also be found out via:

salt 'minion1' cmd.run 'whoami;pwd'

Instead of logging into minion1, we also can check the content of the file remotely:

salt 'minion1' cmd.run 'cat /root/hello_world'

Now we get:

minion1:
    hello world!

Success!

Step 6: perform our hello world command as a non-root user:

In part 1 of this series, we have used Ansible to perform a command as a non-root user like follows:

ansible all -i '192.168.33.10,' -u vagrant -m shell -a "echo hello world\! > hello_world"

Let us try to perform the corresponding command on Salt. For that, we need to create a user on the minion:

Step 6.1: create a user

The Docker images do not come with a salt user; Salt is run as root. We do not want to perform all tests as root. Therefore, let us create a test user:

salt minion1 user.add test

and you can check the list of users with:

salt minion1 user.list_users
Step 6.2: perform a command as a non-root user

Now we try to execute the command as different user:

salt 'minion1' cmd.run runas=test 'whoami'

with this, we get:

minion1:
    test

Side discussion about the documentation issue: now I understand why people tend to complain about Salt’s documentation: I had quite a long odyssey to find the runas variable we have used above. The official documentation of the cmd module does not mention the runas variable. I had only found the user variable, but it did not work the desired way (maybe it is meant as the local user, not the remote user?). This led me to upgrade the system to the latest stable Salt version 2015.5.3 (Lithium), which is a real challenge (see Appendix C below), and it did not help at all. I had considered other workarounds like using sudo -u test <command>, but this workaround has its own challenges, e.g. if you want to redirect the output of a command to a file as a non-root user. Let us forget about upgrades and workarounds for now; we have found the real solution…

Now let us perform the same command as we did in Part 1:

salt 'minion1' cmd.run runas=test "echo hello world\! > hello_world"
salt 'minion1' cmd.run runas=test "cat /home/test/hello_world"

and we get the expected output from the second command:

minion1:
    hello world!

Perfect, that works!

Defining Tasks, so-called States

In Part 1 of the series, we have created an Ansible playbook with following content:

---
# This playbook will write "Hello World!" to the file hello_world
- name: Echo
  hosts: 192.168.33.10
  remote_user: vagrant 

  tasks: 
  - name: echo 
    shell: echo Hello World! > hello_world

How does this translate to Salt? Salt does not define tasks; it defines desired states. However, this is a matter of semantics only: with Ansible, after you have run a playbook with the task “upgrade”, the state of the system is upgraded. In Salt, you define a state “upgraded”, and if you apply this Salt state, you will also end up with a system that is upgraded.

Step 7: create a state file (corresponds to an Ansible playbook file):

Our Ansible example above translates to the following state file, which we save as /srv/salt/hello_world.sls:

# This state file will write "Hello World!" to the file hello_world as user test
run_echo_hello_world:
  cmd.run:
    - name: echo Hello World > hello_world
    - user: test

Note: in the current version of Salt (tested with 2015.5.3 Lithium) there is an inconsistency between the variables runas and user: for cmd.run on the command line you need runas (and user is ignored), whereas in the state file you need to specify user (and runas is ignored). Try it out. If you want to be sure, you can set both user and runas to the same value, on both the command line and in the state file.
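Following the advice in the note above, a defensive variant of the state file sets both keys, so the state works regardless of which key this Salt version honors (a sketch; tested behavior applies to v2015.5.3 only):

```yaml
# /srv/salt/hello_world.sls -- defensive variant: set both 'user' and 'runas'
run_echo_hello_world:
  cmd.run:
    - name: echo Hello World > hello_world
    - user: test
    - runas: test
```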

Step 8: apply the state file and check the result:

Now we apply the state file:

salt 'minion*' state.apply hello_world

and we check the file(s) with:

root@master:/# salt 'minion*' cmd.run runas=test 'ls -l hello_world; cat hello_world'  
minion1:
    -rw-r--r-- 1 test test 13 Nov 27 18:06 hello_world
    Hello World!

Perfect, that works now!

Targets and Groups

Salt offers many possibilities to choose your targets. You can explicitly specify the salt minion name like in the command

salt 'minion1' test.ping

Globbing allows us to perform the same command on different minions. E.g. to choose all minions whose names start with ‘minion’, we issue the command:

salt 'minion*' test.ping

Alternatively, we can use regular expressions:

salt -E 'minion[0-9]' test.ping

list the minion names:

salt -L 'minion1,minion2' test.ping

or even choose per IP address or per subnet:

salt -S '172.17.0.0/24' test.ping

A very flexible way to choose the targets is by using so-called grains. To choose all minions running Ubuntu, we issue the command:

salt -G 'os:Ubuntu' test.ping

Further below, we will define our own grain in order to choose the right targets.
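As a preview of what a custom grain can look like: a static grain may be declared in the minion configuration file (the roles grain name and its value below are illustrative choices of mine, not from the Saltstack guide):

```yaml
# /etc/salt/minion -- declare a custom static grain on the target
grains:
  roles:
    - webserver
```

After restarting the minion, such a target could be selected with salt -G 'roles:webserver' test.ping.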

Last but not least, we can combine all target types using the and and or operators as follows:

salt -C 'G@os:Ubuntu and minion* or S@172.17.0.0/24' test.ping
or
salt -C '( G@os:Ubuntu and minion* ) or S@172.17.0.0/24' test.ping

where the parentheses are not mandatory in this case, since the operator ‘and’ takes precedence over ‘or’.

Note that the whitespace between parentheses and expressions is required: e.g. '(G@os:Ubuntu and minion*) or S@172.17.0.0/24' would not work the way expected.

Note that the Fundamentals getting-started example is wrong here: it specifies the example "S@192.168.50.*", which does not work and should be replaced by S@192.168.50.0/24. I have sent a question to the salt-users Google group to find out how to report this error.

Now that we have understood how to choose targets on the command line, let us define something that comes close to Ansible’s inventory file. In part I of this series, we defined a group called “vagranthosts” in an inventory file /etc/ansible/hosts with the content

[vagranthosts]
192.168.33.10

Step 9: create a node group:

In the case of Salt, we can do something similar to what we did with Ansible by using so-called Node Groups. Node Groups are something like saved node searches and are specified in the /etc/salt/master configuration file:

nodegroups:
  myglobbinggroup: 'minion*'
  myregexgroup:    'E@minion[0-9]'
  mysubnetgroup:   'S@172.17.0.0/24'
  mylistgroup:     'L@minion1,minion2'
  mygrainsgroup:   'G@os:Ubuntu'
  mycompoundgroup1: 'G@os:Ubuntu and ( minion* or S@172.17.0.0/24 )'

Starting with version 2015.8.0, node groups can also be referenced within compound nodegroup expressions, e.g.

  # requires version >=2015.8.0:
  mycompoundgroup2: 'G@os:Ubuntu and ( N@myglobbinggroup or S@172.17.0.0/24 )'

Note also that there seems to be a problem with the example group4 in the nodegroups documentation: if you use it, it leads to the error message “‘list’ object has no attribute ‘split’” (tested with version 2015.5.3). I have not tested it with the most recent development version >=2015.8.0, though:

  # does not work in v2015.5.3 (not tested with other versions yet):
  group4:
    - 'G@foo:bar'
    - 'or'
    - 'G@foo:baz'

Step 10 (optional): test node groups:

Now we can specify the group on the command line. All of the following commands apart from the last one should have the same results:

salt -N 'myglobbinggroup' test.ping
salt -N 'myregexgroup' test.ping
salt -N 'mysubnetgroup' test.ping
salt -N 'mylistgroup' test.ping
salt -N 'mygrainsgroup' test.ping
salt -N 'mycompoundgroup1' test.ping
# the next one may fail, as pointed out above:
salt -N 'mycompoundgroup2' test.ping

Step 11: use node group to finish our use case:

Now let us perform our provisioning task to come closer to our use case, namely to echo “hello world”, redirect it into a file and show that the file content is as expected.

Step 11.1: cleaning:

First let us make sure the hello_world file is removed on the minion:

salt -N 'myregexgroup' cmd.run runas=test 'ls -l hello_world; rm hello_world; ls -l hello_world'

You should get one or two messages like:

   ls: cannot access hello_world: No such file or directory

Step 11.2: creating the hello_world file using the pre-defined state:

We already have defined the state and the group, so we can apply the state like follows:

salt -N 'myregexgroup' state.apply hello_world

Note that the state had defined that the file is created as user “test”. We can verify the content of the file by issuing the following command:

salt -N 'myregexgroup' cmd.run runas=test 'ls -l hello_world; cat hello_world'

We will get:

minion1:
    -rw-r--r-- 1 test test 13 Dec  3 12:28 hello_world
    Hello World!

Bingo!

Now we have used Salt to perform the same provisioning task as we had in part I using Ansible.

Summary: Salt vs. Ansible again

So, what did we learn? We performed the same “hello world” tasks with Salt as we did with Ansible in part I: namely, running a remote shell command that creates a file. For that, we defined a Salt state, which corresponds to a playbook in Ansible. For defining the user and target (group), we configured a node group in the /etc/salt/master configuration file, which corresponds to an Ansible inventory file.

Salt States vs. Ansible Playbooks

Salt states and Ansible playbooks are very much alike: both group a set of smaller tasks into a named task group. The difference is that the task group is associated with a summary task name (e.g. “create user”) in the case of Ansible playbooks, while it is associated with a state to be reached (e.g. “user created”) in the case of Salt state files.

Salt Node Groups vs. Ansible Inventory

Targets and groups are implemented quite differently in Ansible and Salt:

  • In Ansible, target groups within an inventory file list individual targets (even if name ranges like ‘node[01:10]’ are allowed), while
  • Salt Node Groups in the master configuration file work more like an advanced target search like “look for all connected targets with Operating System Ubuntu that have IP addresses in Subnet 172.0.0.0/24”

So, Salt Node Groups offer many more possibilities to define target groups than Ansible Inventories with the drawbacks that

  • there is no error message in Salt if the target you want to update is not reachable at the time of the provisioning attempt; the unreachable target is just ignored, even if we make use of explicit target lists.
  • node groups are defined in the master configuration file, which is less flexible than Ansible Inventory files, whose path can be chosen at runtime using the ansible -i switch.

Salt Documentation vs. Ansible Documentation

Salt has better official step-by-step guides than Ansible, but both Salt and Ansible have their challenges with documentation. E.g., in the case of Salt I have detected errors in the step-by-step guides, and those guides do not seem to be under GitHub-based version control. And for both Ansible and Salt, I have spent a lot of time finding out that a described feature I wanted to test is not available in the latest stable release, but requires an upgrade to a development release.

Ease of getting started with Salt vs. Ansible

I had expected that Salt would be more complex to get started with, since it requires the installation of a Salt agent on the target machines (minions). However, Ansible had its own challenges, since it largely relies on SSH public keys that need to be distributed manually.

A great plus: background operations and RESTful interface of Salt

If I want to get an IT automation tool with

  • background operations (see Appendix B) and a
  • RESTful interface,

without the need to buy commercial software, Salt seems to be a good choice, while Ansible does not offer those freeware possibilities. However, if you are looking for a professional, commercial alternative, you might want to test Ansible Tower, which allows you to create job templates via Web UI, gives RESTful access to many pre-defined objects like inventories, users, teams, projects, etc. See here the data sheet of Ansible Tower.

Note: there is a minor drawback of Ansible Tower: playbooks cannot be manipulated via the Web UI or the REST interface. The recommendation is to manage playbooks via Git.

Coming back to Salt’s background jobs and REST API, it looks like

  • Salt’s background jobs do not seem to have built-in error reporting and retry mechanisms: it seems to be the task of the administrator to poll and evaluate the result of background jobs, or to make sure that a failed job triggers an administrator-defined action.
  • There is no pre-defined REST interface to perform tasks. It is the task of the administrator to create the web hooks. See this example on how you can use Salt’s rest_cherrypy module to create web hooks with the goal of performing pre-defined tasks. And what we get at the end does not look very RESTful: a web hook that you can POST variables to, which will be used to perform a set of tasks. There are no objects, and there are no CRUD methods (create/read/update/delete) to manipulate those objects.

The license Topic…

Last but not least, Salt is Apache 2.0 licensed, so you can potentially integrate Salt tightly into your (commercial) software without being forced to publish the source of your software.

Appendix A: Installing Salt on Target Machines (Minions)

Salt requires a Salt agent to be installed on the target machines (a.k.a. minions). For most types of *nix targets, the installation can be bootstrapped with shell scripts found on GitHub. I have not tested it yet, but in theory, installing the latest salt-master development build on Ubuntu should be as easy as typing the following commands on the Linux shell:

# the next 2 commands are only needed in case you are behind a HTTP proxy:
# (adapt the http proxy name/IP address and port to fit to your environment)
export http_proxy=http://proxy.company.com:8080
export https_proxy=http://proxy.company.com:8080
# install curl, if not already installed (test with "which curl"):
apt-get update
apt-get install curl

# download the salt bootstrap shell script:
curl -o install_salt.sh -L https://bootstrap.saltstack.com
# install salt-master development version:
sudo sh install_salt.sh -M -N git develop
# or install salt-minion development version:
sudo sh install_salt.sh git develop

Appendix B: Running Jobs in the Background

Ansible offers background jobs, but only as part of the commercial web portal product “Ansible Tower”. For Salt, background jobs are an integral part of the core and can be initiated and managed from the command line (and, presumably, via REST):

salt 'minion1' --async cmd.run 'df'

result:

root@master:/# salt 'minion1' --async cmd.run 'df' 
Executed command with job ID: 20151126092909642966

Retrieve the result of the job:

root@master:/# salt 'minion1' --async cmd.run 'df'
Executed command with job ID: 20151126092909642966
root@master:/# salt-run jobs.list_job 20151126092909642966
[WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
Arguments:
    - df
Function:
    cmd.run
Minions:
    - minion1
Result:
    ----------
    minion1:
        ----------
        return:
            Filesystem                                             1K-blocks     Used Available Use% Mounted on
            rootfs                                                  41251136 26576440  12938784  68% /
            none                                                    41251136 26576440  12938784  68% /
            tmpfs                                                     766904        0    766904   0% /dev
            shm                                                        65536        0     65536   0% /dev/shm
            tmpfs                                                     766904        0    766904   0% /sys/fs/cgroup
            /dev/disk/by-uuid/3af531bb-7c15-4e60-b23f-4853c47ccc91  41251136 26576440  12938784  68% /srv/salt
            tmpfs                                                     766904        0    766904   0% /proc/kcore
            tmpfs                                                     766904        0    766904   0% /proc/latency_stats
            tmpfs                                                     766904        0    766904   0% /proc/timer_stats
StartTime:
    2015, Nov 26 09:29:09.642966
Target:
    minion1
Target-type:
    glob
User:
    root
jid:
    20151126092909642966

As shown above, Salt offers the possibility to run jobs asynchronously from the command line.

Appendix C: Upgrade Salt via Salt

Don’t do it!

If you insist to do it, please consider the following topics:

  • Upgrading Salt is not needed for accomplishing the tasks shown in this blog post
  • Upgrading Salt via Salt is a real challenge, because
    • upgrading Salt requires a restart of the salt-minion service, which will cause the remote salt upgrade process to hang
    • upgrading Salt did not work using the pkg.install module in my case, and a remote upgrade using apt-get install cannot work, since the administrator is asked some questions, even if the -y flag is set.
  • If you upgrade the master of the used Docker master image, then the minions are not compatible anymore. Old minions might still work (after a restart of the salt-minion process), but if you spin up a new minion from the Docker image (which has the old version), it fails to connect to the master.

So, for performing this hello world and beyond, upgrade Salt only if absolutely necessary, and better perform the upgrade locally on the system (or use another IT automation tool like Ansible to do so).

For the brave among you, here is a log of all my pitfalls. I have tried to follow the instructions in the article Upgrade salt-master and minions on Ubuntu servers:

Step 1 – Update your apt repositories

First you need to make sure your apt repositories are up to date, so you get the latest stable versions. Easiest way to do this is via salt itself:

sudo salt '*' cmd.run 'apt-get update'

or in case you are behind a proxy http://proxy.company.com:8080:

sudo salt '*' cmd.run 'export http_proxy=http://proxy.company.com:8080; apt-get update'

A problem I see with this command is that it might run for quite a while without giving any feedback. Instead of starting a new, separate SSH session to the master, we can also perform the salt command in the background with

sudo salt '*' cmd.run --async 'export http_proxy=http://proxy.company.com:8080; apt-get update'

In this case, we will get feedback like

root@master:/# sudo salt 'minion1' cmd.run --async 'export http_proxy=http://proxy.company.com:8080; apt-get update'
 Executed command with job ID: 20151127133545387540

Find the status of the job by typing something like:

salt-run jobs.list_job 20151127133545387540

If the command is not yet finished, we will get a Result: “———-“. If it has finished, and was successful, we will get something like:

Arguments:
    - export http_proxy=http://172.28.12.5:8080; apt-get update
Function:
    cmd.run
Minions:
    - minion1
Result:
    ----------
    minion1:
        ----------
        return:
            Ign http://ppa.launchpad.net trusty InRelease
            Get:1 http://ppa.launchpad.net trusty Release.gpg [316 B]
            Get:2 http://ppa.launchpad.net trusty Release [15.1 kB]
            Ign http://archive.ubuntu.com trusty InRelease
            Get:3 http://archive.ubuntu.com trusty-updates InRelease [64.4 kB]
            Get:4 http://ppa.launchpad.net trusty/main amd64 Packages [2138 B]
            Get:5 http://archive.ubuntu.com trusty-security InRelease [64.4 kB]
            Get:6 http://archive.ubuntu.com trusty Release.gpg [933 B]
            Get:7 http://archive.ubuntu.com trusty-updates/main Sources [309 kB]
            Get:8 http://archive.ubuntu.com trusty-updates/restricted Sources [5219 B]
            Get:9 http://archive.ubuntu.com trusty-updates/universe Sources [181 kB]
            Get:10 http://archive.ubuntu.com trusty-updates/main amd64 Packages [824 kB]
            Get:11 http://archive.ubuntu.com trusty-updates/restricted amd64 Packages [23.4 kB]
            Get:12 http://archive.ubuntu.com trusty-updates/universe amd64 Packages [426 kB]
            Get:13 http://archive.ubuntu.com trusty Release [58.5 kB]
            Get:14 http://archive.ubuntu.com trusty-security/main Sources [126 kB]
            Get:15 http://archive.ubuntu.com trusty-security/restricted Sources [3920 B]
            Get:16 http://archive.ubuntu.com trusty-security/universe Sources [36.0 kB]
            Get:17 http://archive.ubuntu.com trusty-security/main amd64 Packages [465 kB]
            Get:18 http://archive.ubuntu.com trusty-security/restricted amd64 Packages [20.2 kB]
            Get:19 http://archive.ubuntu.com trusty-security/universe amd64 Packages [156 kB]
            Get:20 http://archive.ubuntu.com trusty/main Sources [1335 kB]
            Get:21 http://archive.ubuntu.com trusty/restricted Sources [5335 B]
            Get:22 http://archive.ubuntu.com trusty/universe Sources [7926 kB]
            Get:23 http://archive.ubuntu.com trusty/main amd64 Packages [1743 kB]
            Get:24 http://archive.ubuntu.com trusty/restricted amd64 Packages [16.0 kB]
            Get:25 http://archive.ubuntu.com trusty/universe amd64 Packages [7589 kB]
            Fetched 21.4 MB in 32s (657 kB/s)
            Reading package lists...
StartTime:
    2015, Nov 27 13:35:45.387540
Target:
    minion1
Target-type:
    glob
User:
    sudo_root
jid:
    20151127133545387540

Step 2 – Upgrade your master

Upgrading the master first ensures you don’t run into any version compatibility issues between your master and minions. So SSH into your master. Better create a backup of /etc/salt/master and /etc/salt/minion first, since the files may be overwritten during the upgrade, then run:

sudo cp -p /etc/salt/master ~/master.bak
sudo cp -p /etc/salt/minion ~/minion.bak
sudo apt-get -y upgrade salt-master

or in case you are behind a proxy http://proxy.company.com:8080:

sudo http_proxy=http://proxy.company.com:8080 apt-get -y upgrade salt-master

Note that the process will ask you whether /etc/salt/master and /etc/salt/minion should be overwritten. Since I believed that I had not changed those configurations, I answered “Y” for both; we could still have reviewed the changes, since we created backups above. However, I did not know that the Docker start.sh command had performed changes in those files, so I should rather have chosen “N”.

Since I had chosen “Y”, I got:

root@master:/# salt '*' test.version
[CRITICAL] Could not deserialize msgpack message: This often happens when trying to read a file not in binary mode.Please open an issue and include the following error:
...(traceback information)...

Not good. Try again:

root@master:/# sudo http_proxy=http://172.28.12.5:8080 apt-get -y upgrade salt-master
 Reading package lists... Done
 Building dependency tree
 Reading state information... Done
 Calculating upgrade... Done
 salt-master is already the newest version.
 The following packages have been kept back:
 python-pip
 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

Okay, it is already upgraded, even though the upgrade process was stuck. Then I tried to upgrade python-pip:

root@master:/# sudo http_proxy=http://172.28.12.5:8080 apt-get -y upgrade python-pip

But I was still getting the critical error. After stopping and restarting the Docker master container, I got:

root@master:/# salt '*' test.version
minion1:
    Minion did not return. [No response]
master:
    Minion did not return. [No response]

Might this be because the salt-minion version is not compatible with the salt-master version? No. I have restarted the minion service by issuing

service salt-minion restart

on minion1 (this will stop the Docker container, so you need to perform docker start minion1 again). After that, we get:

root@master:/# salt '*' test.version
minion1:
    2014.7.2
master:
    Minion did not return. [No response]

So, at least the old minion is working again. However, doing the same on the master does not help: the local minion does not even try to authenticate with the salt master (on localhost). Maybe because we have changed /etc/salt/minion?

Yes, that was it. The default master name seems to be “salt” instead of “master”, and I had to add the line

master: master

to /etc/salt/minion and perform a ‘service salt-minion restart’; then the local minion finds its master again.

root@master:/etc/salt# salt '*' test.version
master:
    2015.5.3
minion1:
    2014.7.2

Now let us upgrade the minion as well.

Step 3 – Upgrade your minions

Before we attempt to upgrade, let’s take a quick look at the existing versions we have running. This might surprise you: I definitely found a couple of cloud instances that were running older versions of salt-minion that I somehow had not upgraded in the past. To get a list of the salt versions your minions are running, issue this salt command:

sudo salt '*' test.version

And you’ll get a nice display of every version currently in use. Another useful option here is the command ‘manage.versions’, which shows you a list of up-to-date minions vs. those that need updating. Here is how you run it:

salt-run manage.versions

As I see it today, it is not possible to upgrade salt-minion versions using apt-get install via salt. The reasons are:

  1. Upgrading salt-minion requires a restart of the salt-minion service. If this is done via salt, the process is stuck. However, there is a workaround using the “at” module described in this salt FAQ.
  2. Even if the -y switch is given to apt-get install salt-minion, there are two interactive steps during the installation process: you are asked whether you want to overwrite the files /etc/salt/minion and /etc/salt/master. Possibly this is fixed by using the pkg.install module of salt, but pkg.install did not work in my case.

Now I tried to follow the instructions like:

sudo salt '*' pkg.install salt-minion refresh=True

This is the correct way to do it with Salt, but it was stuck in my case. Therefore I have tried:

sudo salt '*' cmd.run 'apt-get -y install salt-minion'

or in case you are behind a proxy http://172.28.12.5:8080:

sudo salt '*' cmd.run 'http_proxy=http://172.28.12.5:8080 apt-get -y install salt-minion'

This is not accepted by cmd.run (although the command works locally on the minion; is this a salt bug?). Instead we get:

master:
    TypeError encountered executing cmd.run: run() takes at least 1 argument (0 given). See debug log for more info.

Therefore, I have replaced this by:

sudo salt '*' cmd.run 'echo http_proxy=http://172.28.12.5:8080 apt-get -y install salt-minion > salt-minion_install.sh; sh salt-minion_install.sh; rm salt-minion_install.sh'

However, this was stuck indefinitely, and I had to kill apt-get locally on the minion.

In the end, I restarted the minion and performed the commands

http_proxy=http://172.28.12.5:8080 apt-get -y install salt-minion
service salt-minion restart

locally on the minion. This has worked.

Step 4 – Verify everything worked

Everything should be upgraded now and running the latest version of salt-minion. You can verify this by running the test.version command again:

sudo salt '*' test.version

If you see that some minions aren’t using the latest version, you may need to intervene manually to see what is stopping apt from upgrading things for you.

Appendix D: Details on Targets, Groups and Variables for Ansible and Salt

In case of Ansible, targets and target groups are defined in so-called inventory files.

  • Inventory files are kept separate from other Ansible configuration files. Inventory-files can be selected on the command-line (default: /etc/ansible/hosts).
  • Groups can be defined as
    • a list of targets (IP addresses or FQDNs) with the possibility to define ranges like www[01:50].example.com or db-[a:f].example.com
  • Host variables and group variables can be defined in the inventory files (required: version > 2.0) or in separate files in host_vars or group_vars folders that have the same name as the host or the group.
  • Other variables can be assigned on the fly on command line using the -e switch.
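The Ansible points above can be illustrated with a small inventory file sketch (host names and the variable are made up; as noted above, check your Ansible version before defining variables inside the inventory):

```ini
[webservers]
www[01:50].example.com

[databases]
db-[a:f].example.com

[databases:vars]
db_port=5432
```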

In case of Salt, targets and target groups are defined as Node Groups in the /etc/salt/master configuration file.

  • Node Group definitions are not kept separate from the Salt configuration file. The configuration file can be selected from the command line (default: /etc/salt/master).
  • Node Groups can be defined as
    • Individual Salt Names with globbing supported (e.g. ‘minion*’)
    • List of targets (IP addresses or Salt Names)
    • IP addresses or Subnets
    • List of other groups (version > 2015.8.0)
    • Regular Expressions
    • Grains (e.g. matching against operating system types, roles, user-defined grains, etc.)
      • user-defined grains can be defined centrally or on the target machine
    • a combination of all of the above as a compound search, allowing “and” and “or” operators.
  • Host variables can be defined as grains within the master and/or minion configuration files. Those can be used in state files using Jinja2 semantics, e.g. {% if grains['os'] == 'RedHat' %}
  • Other variables can be defined using Jinja2 semantics in state files and so-called pillar sls files (best practice). See here for some examples. We will need this in part IV, where we will work with Jinja2 templates, similar to what we have done in part II of the series.
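As a sketch of the grain-based Jinja2 condition mentioned above, a state file could branch on the operating system like this (state and package names are illustrative):

```yaml
# webserver.sls (sketch): pick the package name based on the 'os' grain
webserver:
  pkg.installed:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% else %}
    - name: apache2
    {% endif %}
```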

All in all, the target & group definition and search capabilities are much more elaborate in Salt than they are in Ansible.

Goto summary section


IT Automation Part II: Ansible “Hello World” for Templating


This post is a continuation of my previous post, where we have gone through a little “Hello World” example using Ansible, an IT automation tool. Last time we performed SSH remote shell commands. This time, we will go through a little templating use case, where

  1. a shell script and a data file are created from Jinja2 templates,
  2. the files are uploaded,
  3. the shell script is performed on the remote target machine and
  4. the log is retrieved from the target machine.

We will see how surprisingly easy it is to perform those tasks using Ansible and how input variables (like host name or order ID) come into play.

Posts of this series:

  • Part I: Ansible Hello World with a comparison of Ansible vs. Salt vs. Chef vs. Puppet and a hello world example with focus on Playbooks (i.e. tasks), Inventories (i.e. groups of targets) and remote shell script execution.
  • Part II: Ansible Hello World reloaded with focus on templating: create and upload files based on jinja2 templates.
  • Part III: Salt Hello World example: same content as part I, but with Salt instead of Ansible
  • Part IV: Ansible Tower Hello World: investigates Ansible Tower, a professional Web Portal for Ansible


Use Case

Our main goal today is to work with jinja2 templates and variables. For that, we look at following use case:

As a SaaS provider, I want to automatically configure an application via the application’s command-line based import mechanism.

The steps that need to be performed are:

  • create a host- and order-specific import script and an import data file
  • upload the import files
  • remotely run the script, which in turn imports the data file
  • retrieve the result (log)

1. Create import script

Prerequisites

If you have followed the instructions in part 1 of the Ansible Hello World example, then all prerequisites are met and you can skip this section. If not, you need to prepare the system as follows:

  1. Install a Docker host. For that, follow the instructions “1. Install a Docker Host” on my previous blog.
  2. Download the Ansible Docker image as described in “2. Create an Ansible Docker Image”.
  3. Start the Ansible container using “docker run -it williamyeh/ansible:ubuntu14.04-onbuild /bin/bash”.
  4. Create an inventory file /etc/ansible/hosts:

vi /etc/ansible/hosts

and add the following lines:

[vagranthosts]
192.168.33.10

Here, 192.168.33.10 is the IP address of the Ubuntu VM used, installed by Vagrant and based on the Vagrant box “ubuntu-trusty64-docker” from William Yeh.

1.1 Create and upload a static import script

1. Create an import script on the Ansible machine as follows:

cd /tmp; vi importscript.sh

Add and save the following content:

#!/bin/sh
echo now simulating the import of the file /tmp/import-data.txt
# here you would perform the actual import...

2. In the same /tmp directory, create a playbook file “copy_file.yml” with following content:

---
# This playbook copies a static file to the remote hosts
- name: Copy File
  hosts: vagranthosts
  remote_user: vagrant

  tasks:
  - name: copy file
    copy: src=/tmp/importscript.sh dest=/tmp/importscript.sh

3. Now we can run the playbook:

ansible-playbook -i /etc/ansible/hosts copy_file.yml

And we get:

root@930360e7db68:/tmp# ansible-playbook -i /etc/ansible/hosts copy_file.yml

PLAY [Copy Files] *************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [copy file] *************************************************************
changed: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=2 changed=1 unreachable=0 failed=0

Here, we have replaced the list of hosts by a reference to the inventory file.

4. And we verify that the file has been transferred:

vagrant@localhost ~ $ cat /tmp/importscript.sh
#!/bin/sh
echo now simulating the import of the file /tmp/import-data.txt
# here you would perform the actual import...

All in all, we have successfully uploaded a static file to the target machine.

1.2 Create and upload a templated import script

1. Create an import script template on the Ansible machine as follows:

cd /tmp; vi importscript.sh.jinja2

Add and save the following content:

#!/bin/sh
echo now simulating the import of the file /tmp/import-data-{{orderNumber}}.txt on the host={{ansible_host}}
# here you would perform the actual import...

Note that we have introduced two variables, “orderNumber” and “ansible_host”, which need to be resolved at the time we run the ansible playbook.
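As an aside, the substitution that Ansible’s template module will perform can be sketched without Ansible. The following uses sed as a stand-in for the Jinja2 engine (illustration only; the file name and the replacement values are examples):

```shell
# Illustration only: sed stands in for the Jinja2 engine used by
# Ansible's template module; the replacement values are examples.
cat > /tmp/demo.sh.jinja2 <<'EOF'
#!/bin/sh
echo now simulating the import of the file /tmp/import-data-{{orderNumber}}.txt on the host={{ansible_host}}
EOF
sed -e 's/{{orderNumber}}/2015-11-20-0815/g' \
    -e 's/{{ansible_host}}/192.168.33.10/g' \
    /tmp/demo.sh.jinja2
```

The real template module does much more (conditionals, loops, pre-defined variables), but the basic idea is the same: placeholders in the source file are replaced before the result is uploaded.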

2. In the same /tmp directory, create a playbook file “template_file.yml”

cp copy_file.yml template_file.yml; vi template_file.yml

with following content:

---
# This playbook creates a file from a Jinja2 template and copies it to the remote hosts
- name: Copy File
  hosts: vagranthosts
  remote_user: vagrant

  tasks:
  - name: create file from template and copy to remote system 
    template: src=/tmp/importscript.sh.jinja2 dest=/tmp/importscript_from_template.sh

We have replaced the copy statement by a template statement.

3. Now we can run the playbook:

ansible-playbook -i /etc/ansible/hosts template_file.yml

but we get an error:

root@930360e7db68:/tmp# ansible-playbook -i /etc/ansible/hosts template_file.yml

PLAY [Copy Files] *************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [create file from template and copy to remote system] *******************
fatal: [192.168.33.10] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'orderNumber' is undefined", 'failed': True}
fatal: [192.168.33.10] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'orderNumber' is undefined", 'failed': True}

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
 to retry, use: --limit @/root/template_file.retry

192.168.33.10 : ok=1 changed=0 unreachable=1 failed=0

It seems the template module verifies that all variables in a template are resolved. Good: that feature prevents us from uploading half-resolved template files.

4. Now let us try again, but now we specify the orderNumber variable on the command line:

ansible-playbook -i /etc/ansible/hosts --extra-vars="orderNumber=2015-11-20-0815" template_file.yml

This time we get positive feedback:

root@930360e7db68:/tmp# ansible-playbook -i /etc/ansible/hosts --extra-vars="orderNumber=2015-11-20-0815" template_file.yml
PLAY [Copy Files] *************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [create file from template and copy to remote system] *******************
changed: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=2 changed=1 unreachable=0 failed=0

Successful, this time…

But wait a moment: wasn’t there a second variable, “ansible_host”, in the jinja2 template file that we forgot to specify on the command line? Let us see what we get on the target system:

vagrant@localhost ~ $ cat /tmp/importscript_from_template.sh
#!/bin/sh
echo now simulating the import of the file /tmp/import-data-2015-11-20-0815.txt on the host=192.168.33.10
# here you would perform the actual import...

We find that ansible_host is a pre-defined Ansible variable that is automatically set to the host as defined in the inventory file.

Are there other possibilities to define variables? Yes, many of them: the command line (as just tested), the playbook, the host section within the playbook, the inventory file (for versions > 2.0 only), playbook include files & roles, and many more; see the long precedence list of variable definitions in the official Ansible variables documentation.

1.3 Testing variables in the Playbook

Let us test variables in the playbook’s host section:

1. For that, we change the content of the file “importscript.sh.jinja2” as follows, introducing a new “who” variable:

#!/bin/sh
echo import performed by {{who}}
echo now simulating the import of the file /tmp/import-data-{{orderNumber}}.txt on the host={{ansible_host}}
# here you would perform the actual import...

And we add the “who” variable to the host section of the playbook as follows:

---
# This playbook creates files from templates and copies them to the remote hosts
- name: Copy Files created from templates
  hosts: vagranthosts
  vars:
    who: me
  remote_user: vagrant

  tasks:
  - name: create file from template and copy to remote system
    template: src=/tmp/importscript.sh.jinja2 dest=/tmp/importscript_from_template.sh

2. and we re-run the command

ansible-playbook -i /etc/ansible/hosts --extra-vars="orderNumber=2015-11-20-0815" template_file.yml

and we get:

PLAY [Copy Files created from templates] **************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [create file from template and copy to remote system] *******************
changed: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=2 changed=1 unreachable=0 failed=0

3. let us validate the content of the transferred file:

vagrant@localhost ~ $ cat /tmp/importscript_from_template.sh
#!/bin/sh
echo import performed by me
echo now simulating the import of the file /tmp/import-data-2015-11-20-0815.txt on the host=192.168.33.10
# here you would perform the actual import...

Perfect. All in all, we have created a template file with

  • pre-defined variables,
  • variables that are host-specific and
  • variables that are defined at runtime as an argument of the CLI command.

Note that a variable cannot be defined under the task section (you will get the error message “ERROR: vars is not a legal parameter in an Ansible task or handler” if you try). As a workaround, if you want to use task-specific variables, you can create a playbook per task and define the variable under the host section of the playbook.

Note also that it is considered Ansible best practice to define host-specific variables in the inventory file instead of the playbook. Check out the documentation to find several ways to define variables in the inventory. However, be careful, since the Docker image is still on version 1.9.4 (the latest stable release at the time of writing), and specification of variables in the inventory file requires v2.0.

2. Upload an import data file, perform the shell script and download the result log file

1. In order to come closer to our use case, we still need to transfer a data file and execute the import script on the remote target. For that, we define:

vi /tmp/import-data.txt.jinja2

2. add the content

# this is the import data for order={{orderNumber}} and host={{ansible_host}}
# imported by {{who}}
Some import data

3. create a playbook named import_playbook.yml as follows:

cp template_file.yml import_playbook.yml; vi import_playbook.yml

with the content:

---
# This playbook uploads templated files, runs the import script and fetches the log
- name: Copy Files created from templates
  hosts: vagranthosts
  vars:
    who: me
  remote_user: vagrant

  tasks:
  - name: create import script file from template and copy to remote system
    template: src=/tmp/importscript.sh.jinja2 dest=/tmp/importscript-{{orderNumber}}.sh
  - name: create import data file from template and copy to remote system
    template: src=/tmp/import-data.txt.jinja2 dest=/tmp/import-data-{{orderNumber}}.txt
  - name: perform the import of /tmp/import-data-{{orderNumber}}.txt
    shell: /bin/sh /tmp/importscript-{{orderNumber}}.sh > /tmp/importscript-{{orderNumber}}.log
  - name: fetch the log from the target system
    fetch: src=/tmp/importscript-{{orderNumber}}.log dest=/tmp

In this playbook, we perform all steps required by our use case: upload the script and the data, run the script, and retrieve the same detailed feedback we would have gotten if we had performed the script locally on the target machine.

4. run the playbook

ansible-playbook -i /etc/ansible/hosts --extra-vars="orderNumber=2015-11-20-0816" import_playbook.yml

With that we get the output:

root@930360e7db68:/tmp# ansible-playbook -i /etc/ansible/hosts --extra-vars="orderNumber=2015-11-20-0816" import_playbook.yml

PLAY [Copy Files created from templates] **************************************

GATHERING FACTS ***************************************************************
ok: [192.168.33.10]

TASK: [create import script file from template and copy to remote system] *****
changed: [192.168.33.10]

TASK: [create import data file from template and copy to remote system] *******
changed: [192.168.33.10]

TASK: [perform the import of /tmp/import-data-{{orderNumber}}.txt] ************
changed: [192.168.33.10]

TASK: [fetch the log from the target system] **********************************
ok: [192.168.33.10]

PLAY RECAP ********************************************************************
192.168.33.10 : ok=5 changed=3 unreachable=0 failed=0

5. On the remote target, we check the files created:

vagrant@localhost ~ $ cat /tmp/importscript-2015-11-20-0816.sh
#!/bin/sh
echo import performed by me
echo now simulating the import of the file /tmp/import-data-2015-11-20-0816.txt on the host=192.168.33.10
# here you would perform the actual import...
vagrant@localhost ~ $ cat /tmp/import-data-2015-11-20-0816.txt
# this is the import data for order=2015-11-20-0816 and host=192.168.33.10
# imported by me
Some import data
vagrant@localhost ~ $ cat /tmp/importscript-2015-11-20-0816.log
import performed by me
now simulating the import of the file /tmp/import-data-2015-11-20-0816.txt on the host=192.168.33.10

6. And on the Ansible machine, check the retrieved log file:

root@930360e7db68:/tmp# cat /tmp/192.168.33.10/tmp/importscript-2015-11-20-0816.log
import performed by me
now simulating the import of the file /tmp/import-data-2015-11-20-0816.txt on the host=192.168.33.10

Note that the file is automatically copied into a path that consists of the specified /tmp base path, the ansible host and the source path. This behavior can be suppressed with the parameter flat=yes; see http://docs.ansible.com/ansible/fetch_module.html for details.
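As a sketch, the fetch task of the playbook above could be rewritten like this to place the log directly under /tmp on the Ansible machine:

```yaml
  - name: fetch the log without the host/path prefix (sketch)
    fetch: src=/tmp/importscript-{{orderNumber}}.log dest=/tmp/ flat=yes
```

Keep in mind that with flat=yes, files fetched from multiple hosts would overwrite each other under the same name, so this form is mainly useful for single-host plays or host-specific file names.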

Summary

We have shown how easy it is to implement an IT automation use case where a script and a data file are created from templates, the files are uploaded to a remote target, the script is run and the command-line log is retrieved.

Further Testing

If you want to go through a more sophisticated Jinja2 example, you might want to check out this blog post by Daniel Schneller, which I found via Google.