
Getting Started with DC/OS on AWS


In the step-by-step tutorial Getting Started with DC/OS on Vagrant, we learned how to install the Mesosphere DC/OS data center operating system locally. This time, we will install a DC/OS system on the AWS cloud: existing AWS CloudFormation templates will help us create a fully functional DC/OS data center with a Mesos master and five Mesos slaves in less than two hours. At the end, we will test the environment by starting a Docker-based “Hello World” service from the DC/OS administration panel and accessing the application from the Internet.

Mesosphere DC/OS is a data center operating system built upon Apache Mesos and Mesosphere Marathon, an open source container orchestration platform. Its goal is to hide the complexity of the data center when deploying applications.

AWS (Amazon Web Services) is the leading cloud provider, offering Infrastructure as a Service and more.

 

Beware that running DC/OS on AWS does not come for free. Since I was still in the free tier period, I paid only $0.48 for a test duration of less than 45 minutes (measured from the time I created the stack to the point in time I terminated it). The induced cost might be higher in your case. Later, as my usage time increased and some of the free usage limits were exceeded, I had to pay substantially more.

I recommend checking your current bill before and after the test on the AWS Billing Home for the region US-West-2.

The guide has been tested for the regions us-west-2 and us-east-2. However, it worked only for us-west-2, probably because the correct image IDs are missing for us-east-2.

We are loosely following https://aws.amazon.com/blogs/apn/announcing-mesosphere-dcos-on-aws/, but we had to correct some commands and add some instructions on user permissions.


Prerequisites

Step 1: Configure your Credentials

You need to have entered your AWS Access Key and Secret in the ~/.aws/credentials file:

[default]
aws_access_key_id = XXXXXXX
aws_secret_access_key = KKKKKKKK
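Before launching anything, it can help to verify that the credentials file is actually in place and contains both required keys. The following check is my own addition (not part of the original AWS workflow) and uses a temporary copy of the file so the sketch is self-contained:

```shell
# Sketch: write a sample credentials file and verify that the default
# profile contains both required keys before calling any aws commands.
mkdir -p /tmp/aws-demo
cat > /tmp/aws-demo/credentials <<'EOF'
[default]
aws_access_key_id = XXXXXXX
aws_secret_access_key = KKKKKKKK
EOF

# In real use, this would be "$HOME/.aws/credentials".
CRED_FILE=/tmp/aws-demo/credentials
if grep -q '^aws_access_key_id' "$CRED_FILE" && \
   grep -q '^aws_secret_access_key' "$CRED_FILE"; then
  RESULT="credentials present"
else
  RESULT="credentials missing"
fi
echo "$RESULT"
```

If either key line is missing, every aws command would fail later with an authentication error, so this cheap check can save a round trip.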

Step 2: Create an SSH Key for DC/OS

aws --region us-west-2 ec2 create-key-pair --key-name dcos-demo-key --output text --query KeyMaterial > dcos-demo-key_us-west-2.pem
cp dcos-demo-key_us-west-2.pem dcos-demo-key.pem
chmod 600 dcos-demo-key.pem

This will create an additional key pair in the region us-west-2 (before, I had no key pair in this region; now there is one):
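A quick sanity check on the downloaded key file can save debugging later: a usable .pem file starts with a BEGIN line, and it must be readable only by you, or ssh will refuse to use it. This sketch (my addition) works on a placeholder file rather than a real key:

```shell
# Sketch: create a placeholder key file and check the two properties
# ssh cares about: the PEM header and the 600 permission bits.
printf -- '-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----\n' \
  > /tmp/dcos-demo-key.pem
chmod 600 /tmp/dcos-demo-key.pem

# The first line should contain "BEGIN":
head -1 /tmp/dcos-demo-key.pem | grep -q 'BEGIN' && echo "looks like a private key"

# The permissions should be 600 (GNU stat; the fallback covers BSD/macOS):
PERMS=$(stat -c '%a' /tmp/dcos-demo-key.pem 2>/dev/null || stat -f '%Lp' /tmp/dcos-demo-key.pem)
echo "$PERMS"
```

If the create-key-pair command failed (for example due to missing permissions), the .pem file would contain an error message instead of a key, and the BEGIN check above would catch it.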

Step 3: Find Cloud Formation Template URL

The official DC/OS documentation v1.10 on AWS installation offers two options:

For our tests, we will choose the basic variant with one Mesos master and five Mesos slaves.

The corresponding CloudFormation Templates can be found on this page.

We copy the “Launch Stack” link for us-west-2 with Single Master and paste it here:

https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3-us-west-2.amazonaws.com/downloads.dcos.io/dcos/EarlyAccess/commit/a5ecc9af5d9ca903f53fa16f6f0ebd597095652e/cloudformation/single-master.cloudformation.json

From the link, we can see that the template URL is as follows. On a Linux shell (e.g. Git Bash on Windows), we define:

TEMPLATE_URL=https://s3-us-west-2.amazonaws.com/downloads.dcos.io/dcos/EarlyAccess/commit/a5ecc9af5d9ca903f53fa16f6f0ebd597095652e/cloudformation/single-master.cloudformation.json

Step 4: Launch the CloudFormation Stack from AWS CLI

Step 4.1: First Attempt to launch the Stack

From our main instructions page, we find something like:

aws --region us-west-2 cloudformation create-stack --stack-name dcos-demo \
    --template-url ${TEMPLATE_URL} \
    --parameters ParameterKey=AcceptEULA,ParameterValue="Yes",ParameterKey=KeyName,ParameterValue="dcos-demo-key" \
    --capabilities CAPABILITY_IAM

Note that the instructions page contained some errors: the line feed formatting was wrong and a comma was missing. This has been corrected above.

If your AWS CLI is using a user without CloudFormation permissions, you will receive the following error message:

A client error (AccessDenied) occurred when calling the CreateStack operation: User: arn:aws:iam::924855196031:user/secadmin is not authorized to perform: cloudformation:CreateStack on resource: arn:aws:cloudformation:us-east-2:924855196031:stack/dcos-demo/*

If you have not encountered this error, you can skip the next three substeps.

Step 4.2: Create Policy for CloudFormation Permissions

On the EC2 Dashboard of the AWS Console for us-west-2 (choose right region in the URL), choose

–> Services
–> IAM
–> Policies
–> Create Policy
–> Select Policy Generator
–> Choose Parameters:
Effect: Allow
AWS Service: AWS CloudFormation
Actions: All
Actions ARN: *

–> Add Statement
–> edit Name, e.g. “CloudFormation”

–> Create Policy

Step 4.3: Attach Policy to User

–> Users
–> Choose your user
–> Add Permission
–> Attach existing policies directly
–> check “CloudFormation”

–> Next Review

–> Add permissions

Step 4.4: Try again: Create Policy for CloudFormation Permissions

TEMPLATE_URL=https://s3-us-west-2.amazonaws.com/downloads.dcos.io/dcos/EarlyAccess/commit/a5ecc9af5d9ca903f53fa16f6f0ebd597095652e/cloudformation/single-master.cloudformation.json
aws --region us-west-2 cloudformation create-stack --stack-name dcos-demo \
    --template-url ${TEMPLATE_URL} \
    --parameters ParameterKey=AcceptEULA,ParameterValue="Yes",ParameterKey=KeyName,ParameterValue="dcos-demo-key" \
    --capabilities CAPABILITY_IAM

This time we get the following response:

{
    "StackId": "arn:aws:cloudformation:us-west-2:924855196031:stack/dcos-demo/0c90e5c0-c716-11e7-9e0d-50d5ca2e7cd2"
}

After some minutes, we will see CREATE_COMPLETE in the AWS Console of US West 2:

On the EC2 Dashboard, we see:

After clicking the “8 Running Instances” link, we see:

The DC/OS is up and running!

Excellent! Thumbs up!

If you see other errors like

  • API: s3:CreateBucket Access Denied
  • API: iam:CreateRole User: arn:aws:iam::924855196031:user/secadmin is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::924855196031:role/dcos-demo-SlaveRole-LP582D7P32GZ
  • The following resource(s) failed to create: [Vpc, ExhibitorS3Bucket, SlaveRole, DHCPOptions]. . Rollback requested by user.

then follow the instructions in Appendix A; these are permission issues.

Step 5 (recommended): Restrict Admin Access

By default, the machines are open to the entire Internet. I recommend changing the settings so that only you can access your systems.

On the EC2 Dashboard -> Security Groups, check out the security group with the description “Enable admin access to servers” and edit the source IP addresses:

Replace 0.0.0.0/0 (any) with “My IP” for all sources.

–> Save

Note that this step needs to be repeated any time your source IP address changes. See Step B6 of AWS Automation based on Vagrant — Part 2: Installation and Usage of the Vagrant AWS Plugin if you are interested in an example that shows how to update the security rules to point to “My IP” per shell script based on the AWS CLI.

TODO: find a better way to secure the admin interfaces, e.g. by adapting the CloudFormation templates before starting the stack. This way, the admin interfaces are not open to the world from the beginning.

Step 6: Access the DC/OS Admin Console

Now let us access our DC/OS Admin Console. For that, let us find the public DNS name of the master:

$ aws cloudformation describe-stacks --region us-west-2 | grep dcos-demo-ElasticL | awk -F '"' '{print $4}'
dcos-demo-ElasticL-XRZ8I3ZZ2BB2-549374334.us-west-2.elb.amazonaws.com
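Why does the awk command print field $4? awk -F '"' splits each line on double quotes, so in a JSON output line the value sits in the fourth field. A self-contained sketch with a sample line modeled on the output above:

```shell
# Sketch: a sample line as it appears in the describe-stacks JSON output.
LINE='"OutputValue": "dcos-demo-ElasticL-XRZ8I3ZZ2BB2-549374334.us-west-2.elb.amazonaws.com",'

# Splitting on double quotes yields:
#   $1 = ""            (before the first quote)
#   $2 = OutputValue
#   $3 = ": "
#   $4 = the DNS name we are after
DNS=$(echo "$LINE" | awk -F '"' '{print $4}')
echo "$DNS"
```

This grep/awk pipeline is a bit fragile (it depends on the JSON formatting); a JMESPath --query expression on the stack outputs would be a more robust alternative, but the pipeline above matches the original workflow.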

This is the DNS name we can connect to:

In my case, I have signed in with Google.

We reach a nice dashboard:

DCOS Dashboard on AWS

Step 7: Install DCOS CLI

The easiest way to automate application orchestration is to use the DCOS CLI. For that, click on your name, then on “Install CLI”, and follow the instructions. You will find some dcos command examples in my previous blog post on DC/OS.

I have followed the Windows instructions, i.e.

dcos cluster setup http://dcos-demo-elasticl-pu3fgu8047kg-271238338.us-west-2.elb.amazonaws.com

A browser window was started and I logged into the browser session via Google. Then the token was offered:

I had to copy and paste the token into the command line:

Enter OpenID Connect ID Token: eyJ0eXAiOiJKV1QiLCJ…

After that, you should be able to see the DC/OS services:

dcos service
NAME              HOST     ACTIVE  TASKS  CPU    MEM    DISK  ID
marathon       10.0.5.242   True     4    2.75  1836.0  0.0   d456c8ce-f0e6-4c61-9974-94e3426f5fe8-0001
metronome      10.0.5.242   True     0    0.0    0.0    0.0   d456c8ce-f0e6-4c61-9974-94e3426f5fe8-0000

Marathon and Metronome are already running.

Step 8: Install Marathon LB

(dcos package describe --config marathon-lb)
dcos package install marathon-lb
By Deploying, you agree to the Terms and Conditions https://mesosphere.com/catalog-terms-conditions/#community-services
We recommend at least 2 CPUs and 1GiB of RAM for each Marathon-LB instance.

*NOTE*: For additional ```Enterprise Edition``` DC/OS instructions, see https://docs.mesosphere.com/administration/id-and-access-mgt/service-auth/mlb-auth/
Continue installing? [yes/no] yes
Installing Marathon app for package [marathon-lb] version [1.11.1]
Marathon-lb DC/OS Service has been successfully installed!
See https://github.com/mesosphere/marathon-lb for documentation.

After clicking on marathon-lb, we see the details of the configuration of the marathon load balancer:

 

Step 9: Create a Hello World Application

Similar to the blog post where we installed DC/OS locally via Vagrant, let us create a hello world application. We choose an NGINX application that displays some information on the source and destination IP addresses and ports as seen from within the container. For that, let us click

–> Services

–> RUN A SERVICE

–> JSON Configuration

Cut and paste the following text into the field:

{
   "id": "nginx-hello-world-service",
   "container": {
     "type": "DOCKER",
     "docker": {
       "image": "nginxdemos/hello",
       "network": "BRIDGE",
       "portMappings": [
         { "hostPort": 0, "containerPort": 80, "servicePort": 10007 }
       ]
     }
   },
   "instances": 3,
   "cpus": 0.1,
   "mem": 100,
   "healthChecks": [{
       "protocol": "HTTP",
       "path": "/",
       "portIndex": 0,
       "timeoutSeconds": 2,
       "gracePeriodSeconds": 15,
       "intervalSeconds": 3,
       "maxConsecutiveFailures": 2
   }],
   "labels":{
     "HAPROXY_DEPLOYMENT_GROUP":"nginx-hostname",
     "HAPROXY_DEPLOYMENT_ALT_PORT":"10007",
     "HAPROXY_GROUP":"external",
     "HAPROXY_0_REDIRECT_TO_HTTPS":"true",
     "HAPROXY_0_VHOST": "dcos-demo-PublicSl-1NSRAFIDG6VZS-267420313.us-west-2.elb.amazonaws.com"
   }
}

As HAPROXY_0_VHOST, you need to use the public slave’s load balancer address, which you can retrieve via the AWS CLI:

$ aws cloudformation describe-stacks --region us-west-2 | grep dcos-demo-PublicSl | awk -F '"' '{print $4}' 
dcos-demo-PublicSl-1NSRAFIDG6VZS-267420313.us-west-2.elb.amazonaws.com
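Before pasting a JSON definition into the console, it can be worth validating it locally; a single missing comma is much easier to find this way. This sketch is my own addition and uses an abbreviated copy of the definition together with python3's json.tool:

```shell
# Sketch: save a (shortened) copy of the service definition and check that
# it parses as valid JSON before pasting it into the DC/OS UI.
cat > /tmp/nginx-hello-world.json <<'EOF'
{
  "id": "nginx-hello-world-service",
  "instances": 3,
  "cpus": 0.1,
  "mem": 100
}
EOF

# json.tool exits non-zero on a parse error, so this only prints on success.
if python3 -m json.tool /tmp/nginx-hello-world.json > /dev/null; then
  echo "JSON is valid"
fi
```

The same check works on the full definition; jq would do the job as well if it is installed.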

 

Now:

–> REVIEW & RUN

–> RUN SERVICE

You will see that the nginx-hello-world-service is being deployed:

After some seconds, the 3 containers are up and running:

 

After clicking on the name of the service, you will see the three containers:

Note that the column “UPDATED” will disappear if the browser width is too small. If you have a small screen, you can scale the browser content with CTRL and Minus.

Step 10 (optional): Reach the service from inside

On an internal host, I can reach the NGINX server in two ways:

Step 10.1: Access Application Container on a Private Slave

The following command will return the HTML code of the single container running on a private slave:

curl 10.0.2.9:14679 # SlaveServerGroup

Here, we have chosen the endpoint address that we can retrieve from the service’s details page:

Step 10.2: Access the Load Balancer Address

We can also contact the internal load balancer endpoint for the service. This has the advantage that the access is load balanced among the different containers we have started for the service.

curl 10.0.6.204:10007 # PublicSlaveServerGroup

Here, we have combined the public slave IP address with the HAPROXY port we have configured as a label:

Excellent! Thumbs up!

In the next step, we will access the load balancer endpoint via the Internet.

Step 11: Connect to the Service via Internet

Step 11.1: Direct Connection to the Public Slave

The CloudFormation stack is configured in a way that allows reaching the public slave via the Internet on port 10007. This allows us to access the hello world application directly:

Step 11.2: Connection via AWS Load Balancer

Consider a case where we have more than one public slave. In those situations, it is better to access the service via the AWS load balancer, which will distribute the load among the different public slave marathon load balancers (i.e. HAPROXY load balancers). In our case, we access the service on port 80: http://dcos-demo-PublicSl-1NSRAFIDG6VZS-267420313.us-west-2.elb.amazonaws.com

The load balancer address can be retrieved via

$ aws cloudformation describe-stacks --region us-west-2 | grep dcos-demo-PublicSl | awk -F '"' '{print $4}'
dcos-demo-PublicSl-1NSRAFIDG6VZS-267420313.us-west-2.elb.amazonaws.com

By pasting the return value into the browser, we are redirected to the corresponding https page:

After refreshing the page, we will see that we will get answers from the other two containers as well:

With that, we have learned how to create a service and access it from the Internet.

Excellent! Thumbs up!

 

Step 12: Explore the Marathon Load Balancer

You can access the marathon load balancer by retrieving the public IP address of the public slave from the AWS console (EC2):

We then access the HAProxy statistics and configuration pages by entering the public IP address or DNS name into the URL field and appending one of the following strings:

  • :9090/haproxy?stats
  • :9090/_haproxy_getconfig

Step 13: Delete the Stack

Do not forget to delete the stack, since it will induce quite a bit of cost if you fail to do so. The stack can be deleted via the AWS CLI as follows:

aws --region us-west-2 cloudformation delete-stack --stack-name dcos-demo

It is better to also check in the AWS Console that all resources have been deleted successfully:

Excellent! Thumbs up!

Summary

In this blog post, we have learned how to install a DC/OS cluster on AWS using an existing CloudFormation template. For that, we have used the AWS CLI to spin up a DC/OS environment with a single master, a single public slave, and five private slaves (see Appendix E below for how to tweak the template to run only two private slaves in order to save some money).

Similar to the tests we performed on a local machine using Vagrant, described in the post Getting Started with DC/OS on Vagrant, we have installed a marathon load balancer before deploying a three-container hello-world application. We have shown how to access this application from the public Internet using the AWS elastic load balancer that was installed automatically via the CloudFormation stack. Moreover, we have shown how to access the marathon load balancer’s statistics and configuration pages.

In the course of this step-by-step tutorial, we have mastered

  • user permission challenges (see step 4 and Appendix A)
  • networking challenges

We had to figure out that the services are only reachable via the AWS load balancers.

Appendix A: Add required User Permissions

Appendix A1: Remedy S3 Permission Error

Symptoms


If your user lacks the correct S3 permissions, you will get the following errors in the AWS Console when trying to start the CloudFormation stack:

  • API: s3:CreateBucket Access Denied
  • API: iam:CreateRole User: arn:aws:iam::924855196031:user/secadmin is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::924855196031:role/dcos-demo-SlaveRole-LP582D7P32GZ
  • The following resource(s) failed to create: [Vpc, ExhibitorS3Bucket, SlaveRole, DHCPOptions]. . Rollback requested by user.

Resolution

  1. Add S3 Permissions

  2. Add IAM Policy:

Add Permissions -> Create policy

-> Policy Generator -> Select ->

-> Add Statement -> Next Step -> Edit Name “IAM” -> Create Policy

-> Filter: Policy Type: Custom managed

-> Choose “IAM”

Let us delete it via console and try again:

TEMPLATE_URL=https://s3-us-west-2.amazonaws.com/downloads.dcos.io/dcos/EarlyAccess/commit/14509fe1e7899f439527fb39867194c7a425c771/cloudformation/single-master.cloudformation.json
aws --region us-west-2 cloudformation create-stack --stack-name dcos-demo \
    --template-url ${TEMPLATE_URL} \
    --parameters ParameterKey=AcceptEULA,ParameterValue="Yes",ParameterKey=KeyName,ParameterValue="dcos-demo-key" \
    --capabilities CAPABILITY_IAM

Now we get following success messages on the AWS console:

After some minutes in the EC2 console:

 

 

Appendix B: [AcceptEULA] do not exist in the template

TEMPLATE_URL=https://s3-us-west-2.amazonaws.com/downloads.dcos.io/dcos/EarlyAccess/commit/14509fe1e7899f439527fb39867194c7a425c771/cloudformation/single-master.cloudformation.json
aws --region us-east-2 cloudformation create-stack --stack-name dcos-demo \
    --template-url ${TEMPLATE_URL} \
    --parameters ParameterKey=AcceptEULA,ParameterValue="Yes" ParameterKey=KeyName,ParameterValue="dcos-demo-key" \
    --capabilities CAPABILITY_IAM

This time we get:

A client error (ValidationError) occurred when calling the CreateStack operation: Parameters: [AcceptEULA] do not exist in the template

This StackOverflow Q&A pointed me in the right direction: I tried to wrap all parameters in quotes, but then I got a syntax error saying that a comma was expected. The correct syntax turned out to be:

TEMPLATE_URL=https://s3-us-west-2.amazonaws.com/downloads.dcos.io/dcos/EarlyAccess/commit/14509fe1e7899f439527fb39867194c7a425c771/cloudformation/single-master.cloudformation.json
aws --region us-east-2 cloudformation create-stack --stack-name dcos-demo \
    --template-url ${TEMPLATE_URL} \
    --parameters ParameterKey=AcceptEULA,ParameterValue="Yes",ParameterKey=KeyName,ParameterValue="dcos-demo-key" \
    --capabilities CAPABILITY_IAM

with commas between all parameters.

Appendix C: “Template error: Unable to get mapping for NATAmi::us-east-2::default”

How to Reproduce:

Get the key for region=us-east-2 from here: copy the link address of the corresponding Launch Stack link and paste it somewhere:

https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?templateURL=https://s3-us-west-2.amazonaws.com/downloads.dcos.io/dcos/EarlyAccess/commit/a5ecc9af5d9ca903f53fa16f6f0ebd597095652e/cloudformation/single-master.cloudformation.json

TEMPLATE_URL=https://s3-us-west-2.amazonaws.com/downloads.dcos.io/dcos/EarlyAccess/commit/a5ecc9af5d9ca903f53fa16f6f0ebd597095652e/cloudformation/single-master.cloudformation.json;

Create a key for US East:

aws --region us-east-2 ec2 create-key-pair --key-name dcos-demo-key --output text --query KeyMaterial > dcos-demo-key_us-east-2.pem;
cp -i dcos-demo-key_us-east-2.pem dcos-demo-key.pem;
chmod 600 dcos-demo-key.pem;

Try starting the Stack:

aws --region us-east-2 cloudformation create-stack --stack-name dcos-demo \
    --template-url ${TEMPLATE_URL} \
    --parameters ParameterKey=AcceptEULA,ParameterValue="Yes",ParameterKey=KeyName,ParameterValue="dcos-demo-key" \
    --capabilities CAPABILITY_IAM;

If the user has all needed permissions (see steps 4.x above), then we get the following error:

A client error (ValidationError) occurred when calling the CreateStack operation: Template error: Unable to get mapping for NATAmi::us-east-2::default

Workaround

I have not investigated this issue. However, I guess that the error has to do with missing mappings for the machine images (AMIs). A workaround is to use region=us-west-2 instead of us-east-2.

Appendix D: ERROR: “parameter value decos-demo-key for parameter name KeyName does not exist”

Reproduce

If you closely follow the instructions on https://aws.amazon.com/blogs/apn/announcing-mesosphere-dcos-on-aws/ and correct the syntax errors in the aws commands, but keep the wrong key name “decos-demo-key” instead of “dcos-demo-key”, you will encounter the following problem:

After creating the stack, we ask for the status:

aws --region us-west-2 cloudformation describe-stacks --stack-name dcos-demo --query Stacks[0].StackStatus

You will get the response:

"ROLLBACK_COMPLETE"

In the AWS Console of US West 2, we get:

The following error message is displayed:

Parameter validation failed: parameter value decos-demo-key for parameter name KeyName does not exist. Rollback requested by user.

Solution:

Correct the demo key name: use “dcos-demo-key” instead of “decos-demo-key”.

Appendix E: Adapt the CloudFormation Template to your Needs

The CloudFormation template spins up one master, one public slave, a NAT machine and five (!) private slaves. For the hello world testing we are performing, two private slaves instead of five are plenty. For that, I have adapted the CloudFormation template as follows:

Step E.1: Download CloudFormation Template

curl -O https://s3-us-west-2.amazonaws.com/downloads.dcos.io/dcos/EarlyAccess/commit/a5ecc9af5d9ca903f53fa16f6f0ebd597095652e/cloudformation/single-master.cloudformation.json

Step E.2 Adapt CloudFormation Template

I have added the following parameters to the template (in blue):

        "SlaveInstanceCount": {
            "Description": "Required: Specify the number of private agent nodes or accept the default.",
            "Default": "5",
            "Type": "Number"
        },
        "SlaveInstanceCountDesired": {
            "Description": "Required: Specify the number of private agent nodes or accept the default.",
            "Default": "2",
            "Type": "Number"
        },
        "PublicSlaveInstanceCount": {
            "Description": "Required: Specify the number of public agent nodes or accept the default.",
            "Default": "1",
            "Type": "Number"
        },

The default of the new SlaveInstanceCountDesired parameter is two instead of five.

In the same template, I have changed the following parts (in blue):

        "SlaveServerGroup": {
            "CreationPolicy": {
                "ResourceSignal": {
                    "Timeout": {
                        "Fn::FindInMap": [
                            "Parameters",
                            "StackCreationTimeout",
                            "default"
                        ]
                    },
                    "Count": {
                        "Ref": "SlaveInstanceCountDesired"
                    }
                }
            },
            "Properties": {
                "MaxSize": {
                    "Ref": "SlaveInstanceCount"
                },
                "DesiredCapacity": {
                    "Ref": "SlaveInstanceCountDesired"
                },
                "MinSize": {
                    "Ref": "SlaveInstanceCountDesired"
                },

Note that the stack will be stuck in CREATE_IN_PROGRESS if the first Count is not changed from SlaveInstanceCount to SlaveInstanceCountDesired.
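Instead of editing the template by hand, this kind of change can also be scripted. The following sketch is an assumption on my side (not the method used in this post); the minimal template and the file paths are placeholders that illustrate the idea of patching a Default value programmatically:

```shell
# Sketch: a minimal stand-in for the downloaded template.
cat > /tmp/single-master.cloudformation.json <<'EOF'
{ "Parameters": { "SlaveInstanceCount": { "Default": "5", "Type": "Number" } } }
EOF

# Patch the private-agent default from five to two with python3's json module,
# writing the result to a new file so the original stays untouched.
python3 - <<'EOF'
import json

path = '/tmp/single-master.cloudformation.json'
with open(path) as f:
    template = json.load(f)

template['Parameters']['SlaveInstanceCount']['Default'] = '2'

with open('/tmp/single-master-small.json', 'w') as f:
    json.dump(template, f, indent=4)
print('patched')
EOF
```

A scripted edit like this is repeatable and survives template updates better than manual edits; jq would be an equally valid tool for the same job.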

Step E.3: Create S3 Bucket

The template is too large to use directly as a file: you will get the following error if you try to pass the template as a file with TEMPLATE_FILE=template-file-name:

aws --region us-west-2 cloudformation create-stack --stack-name dcos-demo \
 --template-body ${TEMPLATE_FILE} \
 --parameters ParameterKey=AcceptEULA,ParameterValue="Yes",ParameterKey=KeyName,ParameterValue="AWS_SSH_Key" \
 --capabilities CAPABILITY_IAM
An error occurred (ValidationError) when calling the CreateStack operation: 1 validation error detected: Value '<the json cloudformation template is printed>' at 'templateBody' failed to satisfy constraint: Member must have length less than or equal to 51200

The solution is to move the template to an S3 bucket in the same region. Now let us create the bucket (note that outside of us-east-1, the s3api create-bucket command requires a location constraint matching the region):

aws s3api create-bucket --bucket my-us-west-2-bucket --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2

Step E.4: Copy Template to S3 Bucket

The template file can be copied to the S3 bucket via a command like:

aws s3 cp template_filename s3://my-us-west-2-bucket/

Step E.5: Use Template

Now we are ready to use the S3 bucket URL to create the stack:

TEMPLATE_URL='https://s3.amazonaws.com/my-us-west-2-bucket/template_filename'
SSH_KEY=dcos-demo-key
aws --region us-west-2 cloudformation create-stack --stack-name dcos-demo \
    --template-url ${TEMPLATE_URL} \
    --parameters ParameterKey=AcceptEULA,ParameterValue="Yes",ParameterKey=KeyName,ParameterValue="${SSH_KEY}" \
    --capabilities CAPABILITY_IAM

After 15 minutes or so, you should see that the stack is up and running with two private slave instances:

Excellent! Thumbs up!

Appendix F: Configuration

F.1 Master cloud-config.yml

It is found at /usr/share/oem/cloud-config.yml:

#cloud-config

coreos:
  units:
    - name: etcd.service
      runtime: true
      drop-ins:
        - name: 10-oem.conf
          content: |
            [Service]
            Environment=ETCD_PEER_ELECTION_TIMEOUT=1200

    - name: etcd2.service
      runtime: true
      drop-ins:
        - name: 10-oem.conf
          content: |
            [Service]
            Environment=ETCD_ELECTION_TIMEOUT=1200

    - name: user-configdrive.service
      mask: yes

    - name: user-configvirtfs.service
      mask: yes

    - name: oem-cloudinit.service
      command: restart
      runtime: yes
      content: |
        [Unit]
        Description=Cloudinit from EC2-style metadata

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/coreos-cloudinit --oem=ec2-compat

  oem:
    id: ami
    name: Amazon EC2
    version-id: 0.0.7
    home-url: http://aws.amazon.com/ec2/
    bug-report-url: https://github.com/coreos/bugs/issues

F.2 Public Slave cloud-config.yml

#cloud-config

coreos:
  units:
    - name: etcd.service
      runtime: true
      drop-ins:
        - name: 10-oem.conf
          content: |
            [Service]
            Environment=ETCD_PEER_ELECTION_TIMEOUT=1200

    - name: etcd2.service
      runtime: true
      drop-ins:
        - name: 10-oem.conf
          content: |
            [Service]
            Environment=ETCD_ELECTION_TIMEOUT=1200

    - name: user-configdrive.service
      mask: yes

    - name: user-configvirtfs.service
      mask: yes

    - name: oem-cloudinit.service
      command: restart
      runtime: yes
      content: |
        [Unit]
        Description=Cloudinit from EC2-style metadata

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/coreos-cloudinit --oem=ec2-compat

  oem:
    id: ami
    name: Amazon EC2
    version-id: 0.0.7
    home-url: http://aws.amazon.com/ec2/
    bug-report-url: https://github.com/coreos/bugs/issues

References

 


Behavior-Driven Angular – Part 2: Inserting REST Data as “innerHTML” into a Web Application


Today, we will extend the behavior-driven development example of the previous blog post and add the blog content to the document. Like last time, we will retrieve the HTML content from the WordPress API. Sounds easy, right? We will see that the challenge is to display the HTML content correctly, so that we do not see escaped HTML like “&lt;p&gt;…” on the page.

As in part 1, we will follow a “test first” strategy: we will create the e2e test specification before we implement the actual code.

Within the Protractor/Jasmine framework, we will learn how to match the text and the inner HTML of browser DOM elements with functions like expect(...).toEqual("..."), .toContain("...") and .toMatch(/regex/). The latter gives us the full flexibility of regular expressions.

Check out this book on Amazon: Angular Test-Driven Development

Plan for Today

Today, we plan to complement the blog title we showed last time with the blog content, similar to the blog post Angular 4 Hello World Quickstart, which we will use as our data source. We will only show the title and the content as follows:

Before we start coding, we will add an e2e test that defines our expectation.

Step 0: Clone the GIT Repository and install the Application

This step can be skipped if you have followed part 1 of this series.

I am assuming that you have a Docker host available with 1.5 GB or more RAM and that Git is installed on that host.

alias cli='docker run -it --rm -w /app -v $(pwd):/app -p 4200:4200 oveits/angular-cli:1.4.3 $@'
alias protractor='docker run -it --privileged --rm --net=host -v /dev/shm:/dev/shm -v $(pwd):/protractor webnicer/protractor-headless $@'
git clone https://github.com/oveits/consuming-a-restful-web-service-with-angular.git
cd consuming-a-restful-web-service-with-angular
git checkout -b 320ae88
cli npm i
chown -R $(whoami) .
cli ng serve --host 0.0.0.0

Phase 1: Create an e2e Test

Step 1.1: Create a GIT Feature Branch

As always with a new feature, let us create a feature branch (on a second terminal):

$ cd /vagrant/consuming-a-restful-web-service-with-angular/

$ protractor
[20:24:22] I/direct - Using ChromeDriver directly...
[20:24:22] I/launcher - Running 1 instances of WebDriver
Jasmine started

  consuming-a-restful-web-service-with-angular App
    ✓ should display the title

Executed 1 of 1 spec SUCCESS in 0.756 sec.
[20:24:27] I/launcher - 0 instance(s) of WebDriver still running
[20:24:27] I/launcher - chrome #01 passed

$ git checkout -b feature/0004-add-blog-content

You might need to adapt the path to your project. The protractor command is optional, but it will ensure that the e2e tests worked on your machine before you start changing the code. I have run into some permission issues, described in the Appendices, which have made me cautious.

Step 1.2 (optional): Apply new Test Functions to the Blog Title

We would like to add a test that checks whether the blog content is showing on the page. There are many Jasmine specification examples out there. Somehow, I stumbled over this example. In order to verify that the functions I found there work fine, I thought it would be a good idea to write a new test similar to the ones in the example, but apply it to the blog title before writing a new test for the blog content. This way, we can verify that we are applying the correct syntax.

I have kept the original specification code, but I have added the following code to the spec:

// e2e/app.e2e-spec.ts
import { browser, by, element } from 'protractor';
import { AppPage } from './app.po';

describe('consuming-a-restful-web-service-with-angular App', () => {
  let page: AppPage;

  beforeEach(() => {
    page = new AppPage();
  });

  it('should display the title', () => {
    page.navigateTo();
    expect(page.getParagraphText()).toContain('Angular 4 Hello World Quickstart');
  });
});

describe('Blog', () => {

  beforeEach(() => {
    browser.get('/');
  });

  it('should display the blog title as header 1 and id="blog_title"', () => {
    expect(element(by.css('h1')).getText()).toEqual('Angular 4 Hello World Quickstart');
  });
});

Both protractor e2e tests are successful without changing the code:

$ protractor
[20:59:51] I/direct - Using ChromeDriver directly...
[20:59:51] I/launcher - Running 1 instances of WebDriver
Jasmine started

  consuming-a-restful-web-service-with-angular App
    ✓ should display the title

  Blog
    ✓ should display the blog title as header 1 and id="blog_title"

Executed 2 of 2 specs SUCCESS in 2 secs.
[20:59:57] I/launcher - 0 instance(s) of WebDriver still running
[20:59:57] I/launcher - chrome #01 passed

Save it. Note: to push the changes to GitHub, you will need to fork my project and work with your fork. Otherwise, you can keep the git backups locally only.

git commit -am'1.2 added an addional test for the title looking for the first H1 header (successful test)'
git push

Step 1.3 (optional): Refine the Test

Step 1.3.1 Create a Test looking for a specific Element per ID

Since the blog content will not be a header, we will need to look for something that is unique on the page. We use an ID for fetching the correct element from the page:

import { browser, by, element } from 'protractor';

...

describe('Blog', () => {

  beforeEach(() => {
    browser.get('/');
  });

  const blog_title = element(by.id('blog_title'));

  it('should display the blog title as header 1 and id="blog_title"', () => {
    expect(element(by.css('h1')).getText()).toEqual('Angular 4 Hello World Quickstart');
    expect(blog_title.getText()).toEqual('Angular 4 Hello World Quickstart');
  });
});

Now the protractor test will fail. This is because we have not set the ID in the HTML template yet:

$ protractor
[21:07:51] I/direct - Using ChromeDriver directly...
[21:07:51] I/launcher - Running 1 instances of WebDriver
Jasmine started

  consuming-a-restful-web-service-with-angular App
    ✓ should display the title

  Blog
    ✗ should display the blog title as header 1 and id="blog_title"
      - Failed: No element found using locator: By(css selector, *[id="blog_title"])
          at WebDriverError (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/error.js:27:5)
          at NoSuchElementError (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/error.js:242:5)
          at /usr/local/lib/node_modules/protractor/built/element.js:808:27
          at ManagedPromise.invokeCallback_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1379:14)
          at TaskQueue.execute_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2913:14)
          at TaskQueue.executeNext_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2896:21)
          at asyncRun (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2775:27)
          at /usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:639:7
          at process._tickCallback (internal/process/next_tick.js:103:7)Error
          at ElementArrayFinder.applyAction_ (/usr/local/lib/node_modules/protractor/built/element.js:461:27)
          at ElementArrayFinder._this.(anonymous function) [as getText] (/usr/local/lib/node_modules/protractor/built/element.js:103:30)
          at ElementFinder.(anonymous function) [as getText] (/usr/local/lib/node_modules/protractor/built/element.js:829:22)
          at Object. (/protractor/e2e/app.e2e-spec.ts:28:23)
          at /usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:94:23
          at new ManagedPromise (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1082:7)
          at controlFlowExecute (/usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:80:18)
          at TaskQueue.execute_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2913:14)
          at TaskQueue.executeNext_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2896:21)
          at asyncRun (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2820:25)
      From: Task: Run it("should display the blog title as header 1 and id="blog_title"") in control flow
          at Object. (/usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:79:14)
          at /usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:16:5
          at ManagedPromise.invokeCallback_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1379:14)
          at TaskQueue.execute_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2913:14)
          at TaskQueue.executeNext_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2896:21)
          at asyncRun (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2775:27)
      From asynchronous test:
      Error
          at Suite. (/protractor/e2e/app.e2e-spec.ts:26:3)
          at Object. (/protractor/e2e/app.e2e-spec.ts:18:1)
          at Module._compile (module.js:570:32)
          at Module.m._compile (/protractor/node_modules/ts-node/src/index.ts:392:23)
          at Module._extensions..js (module.js:579:10)
          at Object.require.extensions.(anonymous function) [as .ts] (/protractor/node_modules/ts-node/src/index.ts:395:12)

**************************************************
*                    Failures                    *
**************************************************

1) Blog should display the blog title as header 1 and id="blog_title"
  - Failed: No element found using locator: By(css selector, *[id="blog_title"])

Executed 2 of 2 specs (1 FAILED) in 2 secs.
[21:07:58] I/launcher - 0 instance(s) of WebDriver still running
[21:07:58] I/launcher - chrome #01 failed 1 test(s)
[21:07:58] I/launcher - overall: 1 failed spec(s)
[21:07:58] E/launcher - Process exited with error code 1

To save the change:

git commit -am'1.3.1 search title by element id (failed e2e test)'

Step 1.3.2 Fix the Test

Let us fix the failed test as follows: in the HTML template src/app/app.component.html, we specify the element ID:

<h1 id="blog_title">{{title}}</h1>

Now the protractor test is successful again:

$ protractor
[21:14:27] I/direct - Using ChromeDriver directly...
[21:14:27] I/launcher - Running 1 instances of WebDriver
Jasmine started

  consuming-a-restful-web-service-with-angular App
    ✓ should display the title

  Blog
    ✓ should display the blog title as header 1 and id="blog_title"

Executed 2 of 2 specs SUCCESS in 2 secs.
[21:14:34] I/launcher - 0 instance(s) of WebDriver still running
[21:14:34] I/launcher - chrome #01 passed

That was simple. Now let us apply what we have learned to the blog content.

To save the change:

git commit -am'1.3.2 add ID to HTML template (success)'; git push

Phase 2: Create the Test for the Blog Content

The content of the blog can be seen on WordPress:

The content starts with: In this hello world style tutorial, we will follow a step by step guide to a working Angular 4 application.

Let us search for that on our application.

Step 2.1 Add Blog Content e2e Tests

Similar to what we have done for the Blog Title, let us create an e2e test for the blog content. We add the new parts to e2e/app.e2e-spec.ts:

// e2e/app.e2e-spec.ts
...
describe('Blog', () => {

  beforeEach(() => {
    browser.get('/');
  });

  const blog_title = element(by.id('blog_title'));
  const blog_content = element(by.id('blog_content'));

  it('should display the blog title as header 1 and id="blog_title"', () => {
    expect(element(by.css('h1')).getText()).toEqual('Angular 4 Hello World Quickstart');
    expect(blog_title.getText()).toEqual('Angular 4 Hello World Quickstart');
  });

  it('should display the blog content', () => {
    expect(blog_content.getText()).toContain('In this hello world style tutorial, we will follow a step by step guide to a working Angular 4 application.');
  });
});

Since the content is quite large, we do not compare it with the equality matcher, but use the 'toContain' matcher instead.

The new protractor test fails as expected:

$ protractor
[21:23:04] I/direct - Using ChromeDriver directly...
[21:23:04] I/launcher - Running 1 instances of WebDriver
Jasmine started

  consuming-a-restful-web-service-with-angular App
    ✓ should display the title

  Blog
    ✓ should display the blog title as header 1 and id="blog_title"
    ✗ should display the blog content
      - Failed: No element found using locator: By(css selector, *[id="blog_content"])
          at WebDriverError (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/error.js:27:5)
          at NoSuchElementError (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/error.js:242:5)
          at /usr/local/lib/node_modules/protractor/built/element.js:808:27
          at ManagedPromise.invokeCallback_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1379:14)
          at TaskQueue.execute_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2913:14)
          at TaskQueue.executeNext_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2896:21)
          at asyncRun (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2775:27)
          at /usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:639:7
          at process._tickCallback (internal/process/next_tick.js:103:7)Error
          at ElementArrayFinder.applyAction_ (/usr/local/lib/node_modules/protractor/built/element.js:461:27)
          at ElementArrayFinder._this.(anonymous function) [as getText] (/usr/local/lib/node_modules/protractor/built/element.js:103:30)
          at ElementFinder.(anonymous function) [as getText] (/usr/local/lib/node_modules/protractor/built/element.js:829:22)
          at Object. (/protractor/e2e/app.e2e-spec.ts:33:25)
          at /usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:94:23
          at new ManagedPromise (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1082:7)
          at controlFlowExecute (/usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:80:18)
          at TaskQueue.execute_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2913:14)
          at TaskQueue.executeNext_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2896:21)
          at asyncRun (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2820:25)
      From: Task: Run it("should display the blog content") in control flow
          at Object. (/usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:79:14)
          at /usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:16:5
          at ManagedPromise.invokeCallback_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1379:14)
          at TaskQueue.execute_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2913:14)
          at TaskQueue.executeNext_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2896:21)
          at asyncRun (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2775:27)
      From asynchronous test:
      Error
          at Suite. (/protractor/e2e/app.e2e-spec.ts:32:3)
          at Object. (/protractor/e2e/app.e2e-spec.ts:18:1)
          at Module._compile (module.js:570:32)
          at Module.m._compile (/protractor/node_modules/ts-node/src/index.ts:392:23)
          at Module._extensions..js (module.js:579:10)
          at Object.require.extensions.(anonymous function) [as .ts] (/protractor/node_modules/ts-node/src/index.ts:395:12)

**************************************************
*                    Failures                    *
**************************************************

1) Blog should display the blog content
  - Failed: No element found using locator: By(css selector, *[id="blog_content"])

Executed 3 of 3 specs (1 FAILED) in 3 secs.
[21:23:12] I/launcher - 0 instance(s) of WebDriver still running
[21:23:12] I/launcher - chrome #01 failed 1 test(s)
[21:23:12] I/launcher - overall: 1 failed spec(s)
[21:23:12] E/launcher - Process exited with error code 1

To save the change:

git commit -am'2.1 add test for blog content (failed)'; git push

Step 2.2 Fix the Blog Content Test

Let us fix the test now.

Step 2.2.1 Add the Blog Content to the HTML Template

In order to display the blog content, we add a div with the ID 'blog_content' to the HTML template src/app/app.component.html:

<div id="blog_content">{{content}}</div>

Step 2.2.2 Define the Variable ‘content’ in the Component

However, as long as the variable 'content' is not defined, we will have added an empty div. To define the variable, we must change the component src/app/app.component.ts:

import { Component, OnInit } from '@angular/core';
import { Http } from '@angular/http';
import { Response } from '@angular/http';
import 'rxjs/add/operator/map'

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {

  title : any = null
  content : any = null

  constructor(private _http: Http) {}

  ngOnInit() {
     this._http.get('https://public-api.wordpress.com/rest/v1.1/sites/oliverveits.wordpress.com/posts/3078')
                .map((res: Response) => res.json())
                 .subscribe(data => {
                        this.title = data.title;
                        this.content = data.content;
                        console.log(data);
                });
  }
}
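The asynchronous part of ngOnInit above can be pictured in isolation: the HTTP response body is parsed as JSON, and the title and content fields are assigned once the data arrives. A minimal stand-in (the parsePost helper is hypothetical; the real code uses the Http observable chain shown above):

```typescript
// Hypothetical stand-in for .map((res: Response) => res.json()) followed
// by the subscribe callback that copies title and content.
interface BlogPost { title: string; content: string; }

function parsePost(body: string): BlogPost {
  const data = JSON.parse(body);
  return { title: data.title, content: data.content };
}

// Shape of the WordPress REST API response (fields reduced for the sketch)
const body = '{"title":"Angular 4 Hello World Quickstart","content":"<p>Hello</p>"}';
const post = parsePost(body);
console.log(post.title);   // Angular 4 Hello World Quickstart
console.log(post.content); // <p>Hello</p>
```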

That’s it: the e2e tests are successful:

$ protractor
[21:30:12] I/direct - Using ChromeDriver directly...
[21:30:12] I/launcher - Running 1 instances of WebDriver
Jasmine started

  consuming-a-restful-web-service-with-angular App
    ✓ should display the title

  Blog
    ✓ should display the blog title as header 1 and id="blog_title"
    ✓ should display the blog content

Executed 3 of 3 specs SUCCESS in 3 secs.
[21:30:19] I/launcher - 0 instance(s) of WebDriver still running
[21:30:19] I/launcher - chrome #01 passed

To save the change:

git commit -am'2.2.2 Added content to HTML template and component (success)'; git push

Step 2.3 Explore the Result

Now let us have a look at what we have accomplished and let us open the browser on http://localhost:4200:

The good news is: the content is there.

😉

The bad news is: it is not readable, because the HTML code in the blog content variable has been HTML-escaped.

😦

This is the standard behavior in Angular. So what can we do now? The solution to the problem can be found in Step 2.3 of my original post: we need to set the innerHTML of the div instead of adding the content as text. But since we are following a "behavior-driven" approach, let us write the tests first.
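To see why the page shows escaped markup, here is a minimal sketch of what Angular's {{ }} interpolation effectively does to the string (the escapeHtml helper is a simplified stand-in, not Angular's actual implementation):

```typescript
// Simplified stand-in for the HTML escaping applied by {{content}} interpolation.
function escapeHtml(s: string): string {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

const content = '<p>In this hello world style tutorial ...</p>';
console.log(escapeHtml(content));
// &lt;p&gt;In this hello world style tutorial ...&lt;/p&gt;
// -> the browser renders the tags as literal text instead of HTML
```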

Step 2.4 Improve the e2e Test Spec

Let us add an additional line to the test specification to make sure we will see the HTML in the correct format:

import { browser, by, element } from 'protractor';

describe('Blog', () => {

  beforeEach(() => {
    browser.get('/');
  });

  const blog_title = element(by.id('blog_title'));
  const blog_content = element(by.id('blog_content'));

  it('should display the blog title as header 1 and id="blog_title"', () => {
    expect(element(by.css('h1')).getText()).toEqual('Angular 4 Hello World Quickstart');
    expect(blog_title.getText()).toEqual('Angular 4 Hello World Quickstart');
  });

  it('should display the blog content', () => {
    expect(blog_content.getText()).toContain('In this hello world style tutorial, we will follow a step by step guide to a working Angular 4 application.');
    expect(blog_content.getInnerHtml()).toMatch(/^<p>In this hello world style tutorial/);
  });
});

With that, we test whether the innerHTML of the div element starts with the correct HTML code. For that, we have made use of two functionalities:

  1. reading the innerHTML of an element with Protractor's getInnerHtml() function
  2. matching against a regular expression with Jasmine's toMatch(/regexp/) matcher

As expected, the protractor test now fails.

To save the change:

git commit -am'2.4 added innerHTML test for content with regular expression (fail)'; git push

Step 2.5 Fulfill the improved e2e Test

We can see that the content is escaped (e.g. &lt;p&gt; is rendered instead of <p>). Let us fix that by binding the innerHTML of the div in src/app/app.component.html as follows:

<div id="blog_content" [innerHTML]="content">Loading...</div>

As soon as the content is loaded, the innerHTML ‘Loading…’ will be replaced by the content retrieved from WordPress.

Let us run the test:

$ protractor
[20:55:50] I/direct - Using ChromeDriver directly...
[20:55:50] I/launcher - Running 1 instances of WebDriver
Jasmine started
[20:55:56] W/element - more than one element found for locator By(css selector, app-root h1) - the first result will be used

  consuming-a-restful-web-service-with-angular App
    ✓ should display blog title

[20:55:57] W/element - more than one element found for locator By(css selector, h1) - the first result will be used
  Blog
    ✓ should display the blog title as header 1 and id="blog_title"
    ✓ should display the blog content

Executed 3 of 3 specs SUCCESS in 3 secs.
[20:55:57] I/launcher - 0 instance(s) of WebDriver still running
[20:55:57] I/launcher - chrome #01 passed

That was easy, again.

To save the change:

git commit -am'2.5 Fix the content innerHTML test (success)'; git push

Step 3: Explore the Final Result

Now let us head over to the browser on URL http://localhost:4200 again:

Even though there is no styling implemented yet, that looks much better now. This is what we had in mind to implement today.

Excellent! Thumbs up!

 

As a wrap-up, the changes can be merged into the develop branch: the tests are successful, and the exploratory "tests" have shown a correct result as well.


git checkout develop
git merge feature/0004-add-blog-content
git push

Summary

In this blog post, we have shown how to retrieve HTML-formatted data from the WordPress API and display it in a correct format. In a "test-driven" approach, we have created Protractor e2e test specifications before implementing the functionality.

Appendix: Error message: failed loading configuration file ./protractor.conf.js

After successfully cloning and installing the repo, I saw the following error message when trying to perform the e2e tests:

$ protractor
[19:23:16] E/configParser - Error code: 105
[19:23:16] E/configParser - Error message: failed loading configuration file ./protractor.conf.js
[19:23:16] E/configParser - Error: Cannot find module 'jasmine-spec-reporter'
    at Function.Module._resolveFilename (module.js:469:15)
    at Function.Module._load (module.js:417:25)
    at Module.require (module.js:497:17)
    at require (internal/module.js:20:19)
    at Object. (/protractor/protractor.conf.js:4:26)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)

Resolution:

I have seen that the cli command was creating all files as user root. This was because I had defined

alias cli='docker run -it --rm -w /app -v $(pwd):/app oveits/angular-cli:1.4.3 $@'

After changing this to

alias cli='docker run -it --rm -w /app -v $(pwd):/app -p 4200:4200 -u $(id -u $(whoami)) oveits/angular-cli:1.4.3 $@'

and re-running cli npm i after the clone, the problem was resolved. However, this caused the next 'npm i' issue described below, so it is better to apply the following workaround:

Better:

  1. Keep the first version of the alias
  2. After running 'cli npm i', execute the command sudo chown -R $(whoami) . in the project root directory.

Appendix npm i: Error: EACCES: permission denied, mkdir ‘/.npm’

npm ERR! Linux 4.2.0-42-generic
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "i"
npm ERR! node v6.11.2
npm ERR! npm  v3.10.10
npm ERR! path /.npm
npm ERR! code EACCES
npm ERR! errno -13
npm ERR! syscall mkdir

npm ERR! Error: EACCES: permission denied, mkdir '/.npm'
npm ERR!     at Error (native)
npm ERR!  { Error: EACCES: permission denied, mkdir '/.npm'
npm ERR!     at Error (native)
npm ERR!   errno: -13,
npm ERR!   code: 'EACCES',
npm ERR!   syscall: 'mkdir',
npm ERR!   path: '/.npm',
npm ERR!   parent: 'consuming-a-restful-web-service-with-angular' }
npm ERR!
npm ERR! Please try running this command again as root/Administrator.

npm ERR! Please include the following file with any support request:
npm ERR!     /app/npm-debug.log

The reason is that I had defined

alias cli='docker run -it --rm -w /app -v $(pwd):/app -p 4200:4200 -u $(id -u $(whoami)) oveits/angular-cli:1.4.3 $@'

With that, npm i runs as the vagrant user with ID 900. However, inside the container, neither the user "vagrant" nor the user ID 900 is defined. This causes the problem that the cli npm i command wants to create the directory $HOME/.npm, but since $HOME is not set, the user with ID 900 tries to create the directory /.npm, which only root is allowed to do.
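The path npm complains about can be reproduced with a small sketch: the cache directory is derived from $HOME, and with no passwd entry for user ID 900, $HOME effectively resolves to '/' (the npmCacheDir helper is hypothetical, for illustration only):

```typescript
// Hypothetical sketch of how an unset $HOME leads npm to attempt mkdir '/.npm'.
import * as path from 'path';

function npmCacheDir(home: string | undefined): string {
  // fall back to '/' when HOME is missing, as happens for an unknown user ID
  return path.posix.join(home && home.length > 0 ? home : '/', '.npm');
}

console.log(npmCacheDir('/home/vagrant')); // /home/vagrant/.npm
console.log(npmCacheDir(undefined));       // /.npm  (only root may create this)
```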

The better workaround is to define

alias cli='docker run -it --rm -w /app -v $(pwd):/app -p 4200:4200 oveits/angular-cli:1.4.3 $@'

without the -u option and perform a command

chown -R $(whoami) .

where needed (e.g. after each npm i command).


Behavior-Driven Angular – part 1: Consuming a RESTful Web Service with Angular 4


In this step-by-step tutorial, we will follow a behavior-driven development approach to create an Angular 4 application from Angular CLI. The hello-world-like application will consume the WordPress REST API and it will display a blog post title. We will create and run end-to-end test scripts that simulate the customer behavior on a Chrome browser within a Protractor headless Docker container.

As a side feature of this tutorial, we will demonstrate basic Git handling: we will learn how to create a GIT Repository, create a feature branch, commit code changes, and merge the tested and fully functional feature branch into the main development branch.

Check out this book on Amazon: Angular Test-Driven Development

Introduction

My post Consuming a RESTful Web Service with Angular 4 has grown much more popular than expected. Thanks a lot for your views! September is not over yet, and the article has reached more than 3,000 views in its fourth month. I hope the trend keeps up:

😉

So, why would I want to rework a blog post that seemingly strikes a chord in the developer community? The reasons I started to rework the example are:

  • I have come to a point where I need to refactor the code. However, I do not like refactoring before I have good coverage of end-to-end tests. This was fixed easily in my previous blog post Angular end-to-end Testing.
  • The next topic was not so easy to resolve: I had created a working example, but when I created a GIT repository from it, Angular CLI had a problem with new clones of that code. It was an Angular problem I could not resolve easily, and it looked like I had to start from scratch. This is what I am doing now, committing many snapshots to GIT. And if so, why not explain to my audience what I am doing and why? This way, the current post has become an example that demonstrates basic GIT handling.

This blog post will fix those two issues.

Even if I am tempted to automate many of the development process steps, we will keep it simple, without the DevOps tools like Jenkins with BitBucket, Sonar, BrowserStack, JMeter integration and Docker data center integration that you would find in real-world agile projects. Some of such topics can be explored in more detail in my other blog posts on Jenkins (explore the "Jenkins Tutorial" drop-down menu of my blog).

Why behavior driven development?

I have had very good experiences with behavior-driven development (BDD), or "test first" development. Some years ago, I applied this principle to a ProvisioningEngine I developed based on Ruby on Rails and Java (Apache Camel). The advantages of BDD, as I see them, are:

  • better customer view: if you follow the behavior-driven principle, your first thought is how the web pages look and how they behave in response to customer actions, in detail. This helps me to always start with the customer view in mind.
  • higher motivation: as a developer, I find it rewarding to start test development with "red" test cases that turn green over time
  • higher quality: I often challenge myself to optimize my code (e.g. make it DRYer or more versatile). In this process, I do not want to sacrifice any previous achievements. A large set of unit tests and e2e tests helps me to keep the set of features intact in phases of code restructuring

Okay, as an Angular beginner, I am admittedly far from being an ideal behavior-driven Angular developer. However, at some point in the future, I believe I can increase my hobby development productivity by applying principles like BDD, together with build & deployment automation based on TravisCI, CircleCI or a local Jenkins system, to my development approach.

Overview

Along the way, we will get acquainted with a set of typical error messages and we will learn how to cope with them.

So, if you are ready for a quick ride into a simple “test first” strategy example with GIT repo handling, buckle up and start coding with me in four phases:

😉

  • Phase 1: Create a Hello World App based on Angular CLI
  • Phase 2: Adapt the end-to-end Tests
  • Phase 3: Adapt the Code
  • Phase 4: Verify the successful e2e Tests

If you do not care about BDD and GIT, then you might want to head over to the post Consuming a RESTful Web Service with Angular 4. Or better, follow the instructions you find here, but omit the steps related to e2e testing (protractor) and/or GIT.

Phase 1: Create a Hello World App based on Angular CLI

In this phase, we will

  • use an Angular CLI Docker image to create a new application
  • fix some problems with the end to end testing inherent in the standard hello world app
  • save and upload the changes to GIT

Step 1.0: Get access to a Docker Host with enough Resources

If you do not have access to a Docker host yet, I recommend following the step 0 instructions in my JHipster post. I recommend using a Docker host with at least 1.5 GB RAM. To be honest, this is a guess: I always test on a 4 GB Docker host VirtualBox VM, but I know that 750 MB RAM is not sufficient.

Step 1.1: Prepare an alias for later use

Let us first define an alias that helps us to shorten the commands thereafter.

(dockerhost)$ alias cli='docker run -it --rm -w /app -v $(pwd):/app -p 4200:4200 -u $(id -u $(whoami)) oveits/angular-cli:1.4.3 $@'

Why this complicated user option -u $(id -u $(whoami))? The reason is that

  • if we omit it, then all new files will be created as root, so we will get permissions problems later on
  • If we use ‘centos’, then the container will complain that it cannot find the user ‘centos’ in its passwd file
  • If we use the ID of the centos user, then it works. However, it might not work in all cases. This time, the ID of the centos user is 1000, and by chance, a user (named ‘node’) exists in the container as well. But let us live with this uncertainty for now.

With each 'cli <command>' invocation, we start the corresponding command in an Angular CLI @ Alpine container, originally created by Alex Such and enriched with git and bash by me.

Consider appending the alias command to your Docker host’s ~/.bashrc file, so the alias is persistent.

Step 1.2: Create a Project and install required Modules

Now let us create a new project and install the node modules via npm:

(dockerhost)$ cli ng new consuming-a-restful-web-service-with-angular
(dockerhost)$ cd consuming-a-restful-web-service-with-angular
(dockerhost)$ cli npm install
npm info it worked if it ends with ok
npm info using npm@3.10.10
npm info using node@v6.11.2
npm info attempt registry request try #1 at 7:54:24 PM
npm http request GET https://registry.npmjs.org/fsevents
npm http 200 https://registry.npmjs.org/fsevents
npm info lifecycle consuming-a-restful-web-service-with-angular@0.0.0~preinstall: consuming-a-restful-web-service-with-angular@0.0.0
npm info linkStuff consuming-a-restful-web-service-with-angular@0.0.0
npm info lifecycle consuming-a-restful-web-service-with-angular@0.0.0~install: consuming-a-restful-web-service-with-angular@0.0.0
npm info lifecycle consuming-a-restful-web-service-with-angular@0.0.0~postinstall: consuming-a-restful-web-service-with-angular@0.0.0
npm info lifecycle consuming-a-restful-web-service-with-angular@0.0.0~prepublish: consuming-a-restful-web-service-with-angular@0.0.0
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.0 (node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.2: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm info ok

Step 1.3 (optional): Create a local GIT Repository

Now is a good time to create a git repository and to commit the initial code.

If you have not installed GIT on your Docker host, you might need to install it first, depending on the operating system of your Docker host (e.g. apt-get update; apt-get install -y git in case of Ubuntu, or yum install -y git in case of CentOS). Alternatively, you may want to use the git I have installed in the container. In that case, prepend 'cli' before the git command, e.g. try cli git --version. However, a git diff does not look nice in a container, so I recommend installing GIT on your Docker host instead.

Now let us initialize the git repo, add all files and commit the changes:

(dockerhost)$ git init
(dockerhost)$ git add .
(dockerhost)$ git commit -m'initial commit'

Now let us start the service in a container:

(dockerhost)$ cli ng serve --host 0.0.0.0
** NG Live Development Server is listening on 0.0.0.0:4200, open your browser on http://localhost:4200/ **
Date: 2017-09-26T20:04:45.036Z
Hash: 24fe32460222f3b3faf2
Time: 15376ms
chunk {inline} inline.bundle.js, inline.bundle.js.map (inline) 5.83 kB [entry] [rendered]
chunk {main} main.bundle.js, main.bundle.js.map (main) 8.88 kB {vendor} [initial] [rendered]
chunk {polyfills} polyfills.bundle.js, polyfills.bundle.js.map (polyfills) 209 kB {inline} [initial] [rendered]
chunk {styles} styles.bundle.js, styles.bundle.js.map (styles) 11.3 kB {inline} [initial] [rendered]
chunk {vendor} vendor.bundle.js, vendor.bundle.js.map (vendor) 2.29 MB [initial] [rendered]

webpack: Compiled successfully.

Step 1.4: Perform end-to-end Tests

Step 1.4.1: Use a Protractor Docker Image to perform the Tests

In the spirit of “test first” strategies of “behavior-driven development”, let us check the end-to-end tests that come with Angular CLI 1.4.3. We will see that they are broken and need to be adapted.

Like above, we will use a Docker container for the task. This time we will use the Docker image protractor-headless from webnicer. In a second terminal, we first define an alias, enter the project root folder and run protractor.

(dockerhost)$ alias protractor='docker run -it --privileged --rm --net=host -v /dev/shm:/dev/shm -v $(pwd):/protractor webnicer/protractor-headless $@'
(dockerhost)$ cd consuming-a-restful-web-service-with-angular
(dockerhost)$ protractor

[20:20:34] I/direct - Using ChromeDriver directly...
[20:20:34] I/launcher - Running 1 instances of WebDriver
Jasmine started

  consuming-a-restful-web-service-with-angular App
    ✗ should display welcome message
      - Failed: Error while waiting for Protractor to sync with the page: "Could not find testability for element."
          at /usr/local/lib/node_modules/protractor/built/browser.js:272:23
          at ManagedPromise.invokeCallback_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1379:14)
          at TaskQueue.execute_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2913:14)
          at TaskQueue.executeNext_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2896:21)
          at asyncRun (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2775:27)
          at /usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:639:7
          at process._tickCallback (internal/process/next_tick.js:103:7)Error
          at ElementArrayFinder.applyAction_ (/usr/local/lib/node_modules/protractor/built/element.js:461:27)
          at ElementArrayFinder._this.(anonymous function) [as getText] (/usr/local/lib/node_modules/protractor/built/element.js:103:30)
          at ElementFinder.(anonymous function) [as getText] (/usr/local/lib/node_modules/protractor/built/element.js:829:22)
          at AppPage.getParagraphText (/protractor/e2e/app.po.ts:9:43)
          at Object. (/protractor/e2e/app.e2e-spec.ts:12:17)
          at /usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:94:23
          at new ManagedPromise (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1082:7)
          at controlFlowExecute (/usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:80:18)
          at TaskQueue.execute_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2913:14)
          at TaskQueue.executeNext_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2896:21)
      From: Task: Run it("should display welcome message") in control flow
          at Object. (/usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:79:14)
          at /usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:16:5
          at ManagedPromise.invokeCallback_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1379:14)
          at TaskQueue.execute_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2913:14)
          at TaskQueue.executeNext_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2896:21)
          at asyncRun (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2775:27)
      From asynchronous test:
      Error
          at Suite. (/protractor/e2e/app.e2e-spec.ts:10:3)
          at Object. (/protractor/e2e/app.e2e-spec.ts:3:1)
          at Module._compile (module.js:570:32)
          at Module.m._compile (/protractor/node_modules/ts-node/src/index.ts:392:23)
          at Module._extensions..js (module.js:579:10)
          at Object.require.extensions.(anonymous function) [as .ts] (/protractor/node_modules/ts-node/src/index.ts:395:12)

**************************************************
*                    Failures                    *
**************************************************

1) consuming-a-restful-web-service-with-angular App should display welcome message
  - Failed: Error while waiting for Protractor to sync with the page: "Could not find testability for element."

Executed 1 of 1 spec (1 FAILED) in 0.878 sec.
[20:20:41] I/launcher - 0 instance(s) of WebDriver still running
[20:20:41] I/launcher - chrome #01 failed 1 test(s)
[20:20:41] I/launcher - overall: 1 failed spec(s)
[20:20:41] E/launcher - Process exited with error code 1

Even though the application is listening on port 4200, we can see that the e2e tests have a problem.

Step 1.4.2: Correct the Protractor sync Issue

As already pointed out in this blog post, we need to add the option

useAllAngular2AppRoots: true

to our protractor.conf.js file. In the end, the file has the following content (the added line is the useAllAngular2AppRoots option):

// protractor.conf.js
// Protractor configuration file, see link for more information
// https://github.com/angular/protractor/blob/master/lib/config.ts

const { SpecReporter } = require('jasmine-spec-reporter');

exports.config = {
  allScriptsTimeout: 11000,
  specs: [
    './e2e/**/*.e2e-spec.ts'
  ],
  capabilities: {
    'browserName': 'chrome'
  },
  directConnect: true,
  baseUrl: 'http://localhost:4200/',
  useAllAngular2AppRoots: true,
  framework: 'jasmine',
  jasmineNodeOpts: {
    showColors: true,
    defaultTimeoutInterval: 30000,
    print: function() {}
  },
  onPrepare() {
    require('ts-node').register({
      project: 'e2e/tsconfig.e2e.json'
    });
    jasmine.getEnv().addReporter(new SpecReporter({ spec: { displayStacktrace: true } }));
  }
};

After that, the e2e test is still not successful:

$ protractor
[20:30:32] I/direct - Using ChromeDriver directly...
[20:30:32] I/launcher - Running 1 instances of WebDriver
Jasmine started

  consuming-a-restful-web-service-with-angular App
    ✗ should display welcome message
      - Expected 'Welcome to !' to equal 'Welcome to app!'.
          at Object. (/protractor/e2e/app.e2e-spec.ts:12:37)
          at /usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:94:23
          at new ManagedPromise (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1082:7)
          at controlFlowExecute (/usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:80:18)
          at TaskQueue.execute_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2913:14)
          at TaskQueue.executeNext_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2896:21)
          at asyncRun (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2820:25)
          at /usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:639:7
          at process._tickCallback (internal/process/next_tick.js:103:7)

**************************************************
*                    Failures                    *
**************************************************

1) consuming-a-restful-web-service-with-angular App should display welcome message
  - Expected 'Welcome to !' to equal 'Welcome to app!'.

Executed 1 of 1 spec (1 FAILED) in 0.848 sec.
[20:30:40] I/launcher - 0 instance(s) of WebDriver still running
[20:30:40] I/launcher - chrome #01 failed 1 test(s)
[20:30:40] I/launcher - overall: 1 failed spec(s)
[20:30:40] E/launcher - Process exited with error code 1

Step 1.4.3: Correct the e2e Test Script

The reason is that the app.component.ts file is incomplete. In the HTML template, we find the line

Welcome to {{title}}!

but in the component file, the title is missing:

import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {

  constructor() { }

  ngOnInit() {
  }

}

This leads to the following broken web page:
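Why does the page show “Welcome to !”? Because an undefined binding renders as an empty string. A naive sketch of template interpolation (not Angular’s real implementation) illustrates the effect:

```javascript
// A naive sketch of template interpolation (not Angular's real implementation):
// an undefined binding renders as an empty string, which is why the page
// shows "Welcome to !" when title is missing.
function interpolate(template, context) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, name) {
    return context[name] == null ? '' : context[name];
  });
}

console.log(interpolate('Welcome to {{title}}!', {}));               // Welcome to !
console.log(interpolate('Welcome to {{title}}!', { title: 'app' })); // Welcome to app!
```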

Let us correct this now by defining the missing title:

import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {

  title : any = null

  constructor() { }

  ngOnInit() {
     this.title = "app";
  }

}

Now the Web page looks better:

Now the e2e tests are successful:

$ protractor
[20:53:42] I/direct - Using ChromeDriver directly...
[20:53:42] I/launcher - Running 1 instances of WebDriver
Jasmine started

  consuming-a-restful-web-service-with-angular App
    ✓ should display welcome message

Executed 1 of 1 spec SUCCESS in 0.956 sec.
[20:53:50] I/launcher - 0 instance(s) of WebDriver still running
[20:53:50] I/launcher - chrome #01 passed

The Angular CLI installation works as expected now.

Excellent! Thumbs up!

Let us save the changes:

(dockerhost)$ git add protractor.conf.js
(dockerhost)$ git commit -m'protractor.conf.js: added useAllAngular2AppRoots: true for avoiding sync problems'
(dockerhost)$ git add src/app/app.component.ts
(dockerhost)$ git commit -m'app component: defined missing title'

Now is the time to sign up with GitHub and save the project. In my case, I have created the following project:

https://github.com/oveits/consuming-a-restful-web-service-with-angular

Once this is done, we can upload the changes as follows:

(dockerhost)$ git remote add origin https://github.com/oveits/consuming-a-restful-web-service-with-angular.git
(dockerhost)$ git push -u origin master

Phase 2: Adapt the end-to-end Tests

In this phase, we will

  • plan, based on the input from the WordPress API, how the web page should look from a customer’s point of view,
  • adapt the e2e tests so that they reflect the (assumed) customer’s expectations,
  • and save the changed code to the remote Git repository.

Step 2.1: Planning

In an attempt to follow a behavior-driven development process, we will write/adapt the end-to-end tests first, before we perform the changes. For this, let us outline our plan:

  • We would like to create a web page that displays the title and content of a WordPress article.
  • The WordPress article of our choice is the first Angular blog post I have written: the Angular 4 Hello World Quickstart blog post.
  • The article will be retrieved dynamically from the WordPress API, a REST API.

Step 2.2: Explore the WordPress REST API

Let us have a look at the WordPress API. The WordPress API can be explored via the WordPress.com REST API console. We can display a list of blog posts like so:

We can see that the blog post we would like to display has the ID 3078, and its title and content start as follows:

  • title: “Angular 4 Hello World Quickstart”
  • content: “<p>In this hello world style tutorial, we will follow a step by step guide to a working Angular 4 application. We will also …

The single blog post can be retrieved with the URL

https://public-api.wordpress.com/rest/v1.1/sites/oliverveits.wordpress.com/posts/3078

We can verify this by copying the URL into a Browser:
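The relevant fields of the response can be sketched with dummy data (the real response contains many more fields; only ID, title, and content are used in this tutorial):

```javascript
// The fields of the WordPress API response that we will use, sketched with
// dummy data (the real response contains many more fields):
const response = JSON.stringify({
  ID: 3078,
  title: 'Angular 4 Hello World Quickstart',
  content: '<p>In this hello world style tutorial, we will follow a step by step guide...</p>'
});

const data = JSON.parse(response);
console.log(data.ID + ': ' + data.title);
```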

Step 2.3: Adapt the end-to-end Tests

With the knowledge about the title and content of the blog post, we can re-write the end-to-end (e2e) test. The e2e test is found in the e2e folder:

$ ls e2e/
app.e2e-spec.ts app.po.ts tsconfig.e2e.json

$ cat e2e/app.e2e-spec.ts
import { AppPage } from './app.po';

describe('consuming-a-restful-web-service-with-angular App', () => {
  let page: AppPage;

  beforeEach(() => {
    page = new AppPage();
  });

  it('should display welcome message', () => {
    page.navigateTo();
    expect(page.getParagraphText()).toEqual('Welcome to app!');
  });
});

Instead of searching for the text ‘Welcome to app’, let us search for the title “Angular 4 Hello World Quickstart”:

$ cat e2e/app.e2e-spec.ts
import { AppPage } from './app.po';

describe('consuming-a-restful-web-service-with-angular App', () => {
  let page: AppPage;

  beforeEach(() => {
    page = new AppPage();
  });

  it('should display the title', () => {
    page.navigateTo();
    expect(page.getParagraphText()).toContain('Angular 4 Hello World Quickstart');
  });
});
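Note that we have also swapped the matcher: toEqual demands an exact match, while toContain is satisfied by a substring. Sketched with plain string operations instead of Jasmine matchers:

```javascript
// toEqual demands an exact match, while toContain is satisfied by a substring;
// sketched here with plain string operations instead of Jasmine matchers:
const actual = 'Welcome to app!';
const exactMatch = (actual === 'Welcome to app!');                          // like toEqual
const containsTitle = actual.includes('Angular 4 Hello World Quickstart');  // like toContain
console.log(exactMatch, containsTitle); // true false
```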

The e2e test should fail now with the message Expected 'Welcome to app!' to contain 'Angular 4 Hello World Quickstart'

$ protractor
[20:46:02] I/direct - Using ChromeDriver directly...
[20:46:02] I/launcher - Running 1 instances of WebDriver
Jasmine started

  consuming-a-restful-web-service-with-angular App
    ✗ should display welcome message
      - Expected 'Welcome to app!' to contain 'Angular 4 Hello World Quickstart'.
          at Object. (/protractor/e2e/app.e2e-spec.ts:12:37)
          at /usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:94:23
          at new ManagedPromise (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1082:7)
          at controlFlowExecute (/usr/local/lib/node_modules/protractor/node_modules/jasminewd2/index.js:80:18)
          at TaskQueue.execute_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2913:14)
          at TaskQueue.executeNext_ (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2896:21)
          at asyncRun (/usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2820:25)
          at /usr/local/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:639:7
          at process._tickCallback (internal/process/next_tick.js:103:7)

**************************************************
*                    Failures                    *
**************************************************

1) consuming-a-restful-web-service-with-angular App should display welcome message
  - Expected 'Welcome to app!' to contain 'Angular 4 Hello World Quickstart'.

Executed 1 of 1 spec (1 FAILED) in 0.907 sec.
[20:46:21] I/launcher - 0 instance(s) of WebDriver still running
[20:46:21] I/launcher - chrome #01 failed 1 test(s)
[20:46:21] I/launcher - overall: 1 failed spec(s)
[20:46:21] E/launcher - Process exited with error code 1

Step 2.4: Save the Changes on a separate GIT Branch

We believe that the e2e tests are correct now, so it is a good time to create a new git feature branch and commit the code:

git checkout -b feature/0001-retrieve-and-display-WordPress-title-from-API
git add .
git commit -m'adapted e2e tests to display WordPress blog title'
git push

Phase 3: Adapt the Code

Now, after having written the e2e tests, let us change the code, so our app fulfills the expectations.

Step 3.1: Define the HTML View

In the spirit of a behavior driven approach, let us define the view first. For that we replace the content of the app’s template file:

$ cat src/app/app.component.html
<h1>{{title}}</h1>

The output of the application now is:

This is because, in the Hello World app, we have set the title to the static value ‘app’. The e2e tests are not successful and the error ‘Expected ‘app’ to contain ‘Angular 4 Hello World Quickstart’.’ is thrown when we run protractor.

Step 3.2: Subscribe an Observable

As can be seen in many tutorials, we now subscribe to an observable as follows:

$ cat src/app/app.component.ts
import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {

  title : any = null

  constructor() { }

  ngOnInit() {
     //this.title = "app";
     this._http.get('https://public-api.wordpress.com/rest/v1.1/sites/oliverveits.wordpress.com/posts/3078')
                .map((res: Response) => res.json())
                 .subscribe(data => {
                        this.title = data.title;
                        console.log(data);
                });
  }

}

We perform an HTTP GET on the WordPress API’s URL, map the response to a JSON object and subscribe to the retrieved data. The data should contain a title, which we assign to the local title variable.
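The get().map().subscribe() chain can be illustrated with a minimal hand-rolled Observable (a sketch, not RxJS; the get() function fakes the HTTP request and responds with dummy WordPress-like data):

```javascript
// A minimal hand-rolled Observable (a sketch, not RxJS) to illustrate the
// get().map().subscribe() chain used above:
function Observable(producer) {
  this.subscribe = producer;
}
Observable.prototype.map = function (fn) {
  const source = this;
  return new Observable(function (observer) {
    source.subscribe(function (value) { observer(fn(value)); });
  });
};

// a fake HTTP GET that "responds" with a Response-like object whose json()
// method returns dummy data:
function get(url) {
  return new Observable(function (observer) {
    observer({ json: function () { return { title: 'Angular 4 Hello World Quickstart' }; } });
  });
}

let title = null;
get('https://example.invalid/posts/3078')
  .map(function (res) { return res.json(); })
  .subscribe(function (data) { title = data.title; });

console.log(title); // Angular 4 Hello World Quickstart
```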

However, we will see in the log:

ERROR in /app/src/app/app.component.ts (16,11): Property '_http' does not exist on type 'AppComponent'.

And in the browser, we see:

Let us fix that now.

Step 3.3: Define private local _http Variable

In Angular, we can define the private local _http variable in the constructor:

constructor(private _http: Http) {}

Once this is done, the error message changes to:

ERROR in /app/src/app/app.component.ts (12,30): Cannot find name 'Http'.

Step 3.4: Import Http Components

The used Http module is not known to our app component. Let us change this now. We add the following line

import { Http } from '@angular/http';

to the file src/app/app.component.ts. The error message changes to:

ERROR in /app/src/app/app.component.ts (18,18): Property 'map' does not exist on type 'Observable<Response>'.

Step 3.5: Import map

The map function needs to be imported as well:

import 'rxjs/add/operator/map'

Now we get a rather cryptic error like the following:

ERROR in /app/src/app/app.component.ts (18,6): The type argument for type parameter 'T' cannot be inferred from the usage. Consider specifying the type arguments explicitly.
  Type argument candidate 'Response' is not a valid type argument because it is not a supertype of candidate 'Response'.
    Types of property 'type' are incompatible.
      Type 'ResponseType' is not assignable to type 'ResponseType'. Two different types with this name exist, but they are unrelated.
        Type '"basic"' is not assignable to type 'ResponseType'.

Step 3.6: Import Response Type

We can finally get rid of this rather cryptic error message by adding another import:

import { Response } from '@angular/http';

However, this still does not lead to the desired result. In the browser we see an empty page:

and the e2e tests fail with the following message:

$ protractor
...
Failed: Angular could not be found on the page http://localhost:4200/. If this is not an Angular application, you may need to turn off waiting for Angular. Please see https://github.com/angular/protractor/blob/master/docs/timeouts.md#waiting-for-angular-on-page-load

Step 3.7: Add HttpModule in the app Module

The solution to the above error lies in src/app/app.module.ts: we need to add the HttpModule to the module’s imports list:

$ cat src/app/app.module.ts
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpModule }    from '@angular/http';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    HttpModule,
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

This seems to have been the last hurdle on the way to success:

Phase 4: Verify the successful e2e Tests

Now the e2e tests are successful:

$ protractor
[22:16:06] I/direct - Using ChromeDriver directly...
[22:16:06] I/launcher - Running 1 instances of WebDriver
Jasmine started

  consuming-a-restful-web-service-with-angular App
    ✓ should display welcome message

Executed 1 of 1 spec SUCCESS in 1 sec.
[22:16:14] I/launcher - 0 instance(s) of WebDriver still running
[22:16:14] I/launcher - chrome #01 passed

That is how the e2e tests should look. Success!

Excellent! Thumbs up!

Step 4.2: Save the changes to the develop branch on GIT

Since our new feature “retrieve and display a blog title from WordPress API” has been verified to work fine, it is time to commit the change and push it to the remote repository:

git add .
git commit -m'added all code needed for successful e2e tests'
git push

In addition to that, we can create a new “develop” branch, if it does not exist yet:

git checkout -b develop
git push -u origin develop

In case the develop branch exists already, you need to merge the code into it instead of creating the develop branch:

git checkout develop
git merge feature/0001-retrieve-and-display-WordPress-title-from-API
git push

It makes sense to allow a merge to the develop branch only if the code is fully tested. This way, we will never break the code in the develop branch.

For large teams, several measures can be taken to make sure that only high quality code enters the develop branch: e.g. on BitBucket GIT, you can allow a merge only, if code has been reviewed and acknowledged by a certain number of team members. Moreover, you can integrate the repository with a Jenkins system: with the correct plugins, you can make sure that a merge is allowed only in case all quality gates (e2e test, unit tests, style, performance, …) in the Jenkins pipeline are met.

However, if you are a hobby developer working on a personal project, it is probably sufficient if you run the tests manually before you merge the changed code into the develop or master branch.

Summary

In this hello world style step-by-step guide, we have learned

  • How to create a new Hello World project using Angular CLI, repair the e2e tests and save the changes on GIT.
  • How to create/adapt the e2e tests in advance, in a “test first” manner.
  • How to consume a REST service using Angular 4 and verify the result using the e2e test scripts we have created before.

Next Steps

In part 2 of this series, we will learn how to add and display HTML content to the body of our application. We will see that we cannot just use the method we have used for the title. If we do so, we will see escaped HTML code like follows:

<p>In this hello world style tutorial,…

We will show how to make Angular accept the HTML code and display it correctly.
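The escaped output can be sketched as follows: when a string is bound via interpolation, its HTML special characters are escaped, so the markup is displayed literally instead of being rendered (a simplified sketch of the escaping step):

```javascript
// Sketch of why the content shows up as escaped HTML: interpolating a string
// HTML-escapes it, so the markup is displayed literally instead of rendered:
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

const content = '<p>In this hello world style tutorial,</p>';
console.log(escapeHtml(content)); // &lt;p&gt;In this hello world style tutorial,&lt;/p&gt;
```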

References:

Appendix A: Adding Docker Support

This is how I have added Docker support for the application, following my tl;dr of the blog post Angular 4 Docker Example.

A.1 Add Dockerfile and NginX config

git clone https://github.com/oveits/consuming-a-restful-web-service-with-angular
cd consuming-a-restful-web-service-with-angular
git checkout -b feature/0002-docker-support
curl -O https://raw.githubusercontent.com/avatsaev/angular4-docker-example/master/Dockerfile
curl -O https://raw.githubusercontent.com/avatsaev/angular4-docker-example/master/nginx/default.conf
mkdir nginx
mv default.conf nginx/

Then remove ‘package-lock.json’ from the Dockerfile.

git add .
git commit -m 'added Dockerfile and nginx config file'
git push

A.2 Build the Docker Image

On a docker host, I have issued following commands:

docker build . --tag oveits/consuming-a-restful-web-service-with-angular:v0.2
docker tag oveits/consuming-a-restful-web-service-with-angular:v0.2 oveits/consuming-a-restful-web-service-with-angular:latest
docker push oveits/consuming-a-restful-web-service-with-angular:v0.2
docker push oveits/consuming-a-restful-web-service-with-angular:latest

A.3 Run the Service

$ alias consuming='docker run --rm --name consuming-a-restful-web-service-with-angular -d -p 80:80 oveits/consuming-a-restful-web-service-with-angular $@'
$ consuming

A.4 Access the Service

In a browser, head to the public DNS name of the Docker host:

Works!

Excellent! Thumbs up!



Angular 4: Automatic Table of Contents


In this step-by-step tutorial, we will go through the process of creating a two-level automatic table of contents by adding Angular TypeScript/JavaScript code.

We will perform the following steps:

  • We will discuss alternative solutions.
  • We will start an Angular Docker Container.
  • We will download a demo application with server-side rendering and WordPress REST API integration.
  • Finally, we will enrich the Web page with a two-level table of contents using javascript methods.

In this blog post, we will concentrate on HTML and JavaScript, i.e. we will not care about the styling of the table of contents. Moreover, the links to the headlines will be added later, in part 2 of this little series.

Goal

The target of this exercise is to scan the content of a page for occurrences of level 1 and 2 headlines (i.e. h1 and h2 elements)…

<h1>What is Angular?</h1>
<h1>Angular Hello World via Quickstart</h1>
<h2>Step 1: Start CentOS Container</h2>

… and to generate an unordered list of the headlines, e.g.

  • What is Angular?
  • Angular Hello World via Quickstart
    • Step 1: Start CentOS Container

In HTML, this is a nested unordered list like the following:

<ul> 
    <li>
        What is Angular?
    </li>  
    <li>
        Angular Hello World via Quickstart
        <ul>  
            <li>
                Step 1: Start CentOS Container
            </li>
            <li>
                ...
            </li>
        </ul>
    </li>
</ul>

We will scan the document using querySelectorAll("h1, h2"), and we will create the unordered list with JavaScript functions like appendChild(document.createElement("ul")) and appendChild(document.createElement("li")).

Step 0: Make a Decision: Integrating an existing Solution or Starting from Scratch?

I have been looking for a table of contents generator for angular. I have found following material:

I guess any of those possibilities would do what we need, but since I am new to Angular, I have decided to create a TOC from scratch. Firstly, I will get more familiar with the Angular code, and secondly, I will have full control over the result. If you are familiar with how to integrate existing modules or directives, the links above might be a good alternative for you.

Step 1: Start the Base Project in a Docker Container

You are still reading? That means you have decided to create the table of contents from scratch without the help of any of the offered table of contents modules. Okay, let us start.

I have chosen to apply the needed changes to the Universal base project I created in my previous blog post Angular 4: Boosting Performance through Server Side Rendering. For that, we start an Angular Docker image and download the Universal code from Git:

(dockerhost)$ mkdir toc; cd toc
(dockerhost)$ docker run -it -p 8001:8000 -v $(pwd):/localdir oveits/angular_hello_world:centos bash
(container)# git clone https://github.com/oveits/ng-universal-demo
(container)# cd ng-universal-demo

My version of the ng-universal-demo has added a blog page that is automatically created from the contents of a WordPress blog post by downloading the information from WordPress’ REST API.

Let us install the dependencies and start the server. The npm run watch command will make sure that the transpilation from typescript to javascript is automatically re-done, as soon as a file change is detected:

(container)# npm i
(container)# npm run watch &
(container)# npm run server

Note: if you need help with setting up a Docker host on a Windows system, you may want to check out Step 0 of this blog post (search for the term “Install a Docker Host”). There, we describe how to run an Ubuntu based Docker host inside a Virtualbox VM.

Step 2: Generate a private getToc Function

In this step, we create a private function that will return a table of contents from the string content in its argument. For that, we create a private function getToc within the BlogView class we had created in my previous blog post:

src/app/+blog/blog.module.ts

export class BlogView implements OnInit {
  ...
  private getToc(content: any) {
     // add code here ...
  }
  ...
}

Step 2.1: Generate a DIV element

I have learned that Angular is using TypeScript, and TypeScript is a super-set of JavaScript. So, why not start with normal JavaScript code? I have tried the following (I hope!) valid JavaScript code:

var div = document.createElement("div");

Even though it seemed to work, I have seen following error messages in my Universal project:

ERROR TypeError: this.html.charCodeAt is not a function
...
ERROR ReferenceError: document is not defined

Even though the TOC I had created with this command was visible in the browser, it did not show up in the HTML source. I guess the code is valid in the browser, but not on the server, so server-side rendering does not take place. Therefore, I have replaced the line by an angular.element based variant (shown as a screenshot in the original post, because WordPress gets confused by the embedded HTML content).

Note that with angular.element, we see more serious errors than with document.createElement, as you will see below. Before long, we will revert to the original code with document.createElement, in order to avoid major problems with server-side rendering.

Step 2.2: Read in the Content to the DIV

In my case, the content has been read from a REST API as a string with HTML code inside:

content = "In this hello world style tutorial, we will follow a step by step guide..."

It is easy to read in the HTML code into the div:

div.innerHTML = content;

Step 2.3: Read the Headlines from the Content

Now, we would like to read all headlines level 1 and 2 from it. This can be done with the querySelectorAll function:

var myArrayOfNodes = [].slice.call(div.querySelectorAll("h1, h2"));

I have cast the result into an array of DOM nodes for easier manipulation.
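The [].slice.call trick converts any array-like object into a real array. Since there is no DOM in this snippet, a plain array-like object stands in for the NodeList (a sketch):

```javascript
// [].slice.call converts an array-like object (such as the NodeList returned
// by querySelectorAll) into a real array; sketched here with a plain
// array-like object, since there is no DOM in this snippet:
const arrayLike = { 0: 'h1', 1: 'h2', length: 2 };
const arr = [].slice.call(arrayLike);
console.log(Array.isArray(arrayLike), Array.isArray(arr)); // false true
console.log(arr); // [ 'h1', 'h2' ]
```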

Step 2.4: Create the Table of Contents

Now we create the table of contents:

var toc = document.createElement("ul");

Step 2.5: For each Headline, create a List Item in the correct Level

There might be more elegant solutions, but we can make sure that we are at the top level of list items by defining a pointer that is reset to the top level whenever a level 1 headline is detected. If a level 2 headline is detected while we are still at the top level, we create a nested unordered list (UL) within the current list item (LI). Here, we repeat the commands of Steps 2.1 to 2.4 in order to get the full picture:

private getToc(content: any) {
  // create a div element (see the note below about the angular.element alternative):
  var div = document.createElement("div");

  // read the content into the div:
  div.innerHTML = content;

  // select all level 1 and 2 headlines, reading them into an array of DOM nodes:
  var myArrayOfNodes = [].slice.call(div.querySelectorAll("h1, h2"));

  // initialize the table of contents (toc) and a pointer to the list we are
  // currently filling:
  var toc = document.createElement("ul");
  var pointer = toc;

  // loop through the array of headlines:
  myArrayOfNodes.forEach(
    function(value, key, listObj) {
      console.log(value.tagName + ": " + value.innerHTML);

      // if we have detected a top level headline:
      if ( "H1" == value.tagName ) {
        // reset the pointer to the top level:
        pointer = toc;
      }

      // if we are at the top level and have detected a level 2 headline:
      if ( "H2" == value.tagName && pointer == toc ) {
        // create a nested unordered list:
        pointer = pointer.appendChild(document.createElement("ul"));
      }

      // for each headline, create a list item with the corresponding HTML content:
      var li = pointer.appendChild(document.createElement("li"));
      li.innerHTML = value.innerHTML;
    }
  );

  // for debugging:
  console.log(toc.innerHTML);

  return(
    toc.innerHTML
  );
}

Note: an angular.element based variant was tried in place of the var div = document.createElement("div"); line, but document.createElement behaves better with server-side rendering, so we stick with it. See below.

Finally, we return the unordered nested list as a string by using the innerHTML property. We could also return the toc as a DOM object, but I have decided to return the innerHTML, since this is the same format in which we receive the title and the content from WordPress’ REST API.
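The pointer logic above can be simulated with plain arrays instead of DOM nodes (a sketch; the heading objects are hypothetical stand-ins for the h1/h2 elements):

```javascript
// The pointer logic of getToc, simulated with plain arrays instead of DOM
// nodes (a sketch; the heading objects are hypothetical stand-ins for the
// h1/h2 elements):
function buildToc(headings) {
  const toc = [];
  let pointer = toc;
  headings.forEach(function (h) {
    if (h.level === 1) {
      // reset the pointer to the top level:
      pointer = toc;
    }
    if (h.level === 2 && pointer === toc) {
      // open a nested list:
      const nested = [];
      pointer.push(nested);
      pointer = nested;
    }
    pointer.push(h.text);
  });
  return toc;
}

console.log(JSON.stringify(buildToc([
  { level: 1, text: 'What is Angular?' },
  { level: 1, text: 'Angular Hello World via Quickstart' },
  { level: 2, text: 'Step 1: Start CentOS Container' }
])));
// ["What is Angular?","Angular Hello World via Quickstart",["Step 1: Start CentOS Container"]]
```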

Step 3: Assign the Table of Contents to a Class Variable

Now, since we have defined a function that can create a table of contents from any HTML content, we need to make use of it. Remember from the last blog post that we had read the title and content of a blog post into public class variables. We now add a variable named “toc” and assign the result of getToc(content) to it:

export class BlogView implements OnInit {
 title: any = null;
 content: any = null;
 toc: any = null;

 constructor(private http: Http) {
 }

 ngOnInit(){
    this.getMyBlog();
 }

 private getMyBlog() {
     return this.http.get('https://public-api.wordpress.com/rest/v1.1/sites/oliverveits.wordpress.com/posts/3078')
         .map((res: Response) => res.json())
         .subscribe(data => {
             this.title = data.title;
             this.content = data.content;
             this.toc = this.getToc(data.content);
         console.log(data);
     });
 }
}

The only new line is the one where we write the table of contents into the public variable named toc.

Step 4: Place the Table of Contents in the HTML Template

Last but not least, we want to make the table of contents visible by adding it to the HTML template.

src/app/+blog/blog.module.html

Here, we have added the second line. We can now see why we have returned the table of contents as a string: this way, we are able to handle the toc variable as if it were just another element returned from the WordPress REST API, all of which are HTML content in string format.

Step 5: Check the Results

Finally, it is time to open a Browser, point it to localhost:8001 (since we have chosen port 8001 in step 1 above) and check the results:

We can see that the unordered list shows up between title and content, as expected.

Excellent! Thumbs up!

But does it play well with server-side rendering? Let us check:

No, it does not. The whole HTML content is missing.

😦

I could partially remedy the problem by changing the angular.element line, which causes the error “window is not defined”, back to

var div = document.createElement("div");

which causes the error “ERROR TypeError: this.html.charCodeAt is not a function”.

However, the latter error is better in the sense that title and content are shown as HTML source code again:

The table of contents still does not show up as HTML source code, but title and content are back. However, the table of contents is visible in the browser, which is more important than showing it in the source code:

And this is as good as we can get for today. We will accept the …
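Another way to cope with server-side rendering would be to skip the DOM-based TOC generation entirely when no DOM is available. A minimal sketch, assuming a hypothetical wrapper safeGetToc around the existing getToc:

```typescript
// Hypothetical helper: when rendered on the server under Node.js there is
// no DOM, so we return an empty TOC instead of crashing with
// "window is not defined"; in the browser we delegate to the real getToc.
function safeGetToc(html: string, getToc: (h: string) => string): string {
  const doc = (globalThis as any).document;
  if (typeof doc === 'undefined') {
    return ''; // no DOM on the server: render the page without a TOC
  }
  return getToc(html);
}
```

The trade-off is that the TOC is then missing from the server-rendered HTML as well, but the page no longer throws during server-side rendering.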

Caveat

Table of contents is not shown in the HTML source code.

Workaround: in getToc, analyze the input HTML content string without converting it to a DOM object and create the output table of contents using string functions only. However, this approach is error-prone and tedious, so I have decided to live with the error messages and the fact that the table of contents does not show up as source HTML code.
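The string-only workaround could look roughly like this. This is a sketch, not the post's actual code: the function getTocFromString is hypothetical, it extracts h2/h3 headings with a regular expression and concatenates the nested list as a plain string, so it also runs under Node.js during server-side rendering:

```typescript
// Sketch: build a two-level table of contents from an HTML string
// using string functions only (no DOM), so it works under Node.js.
function getTocFromString(html: string): string {
  // Match <h2>...</h2> and <h3>...</h3>; the backreference \1 ensures
  // the closing tag level matches the opening tag level.
  const re = /<h([23])[^>]*>([\s\S]*?)<\/h\1>/gi;
  let toc = '<ul>';
  let liOpen = false;  // an <li> for an h2 heading is open
  let subOpen = false; // a nested <ul> for h3 headings is open
  let m: RegExpExecArray | null;
  while ((m = re.exec(html)) !== null) {
    const [, level, text] = m;
    if (level === '2') {
      if (subOpen) { toc += '</ul>'; subOpen = false; }
      if (liOpen) { toc += '</li>'; }
      toc += `<li>${text}`;
      liOpen = true;
    } else {
      if (!subOpen) { toc += '<ul>'; subOpen = true; }
      toc += `<li>${text}</li>`;
    }
  }
  if (subOpen) { toc += '</ul>'; }
  if (liOpen) { toc += '</li>'; }
  return toc + '</ul>';
}
```

This kind of regex parsing is exactly the “error-prone and tedious” part mentioned above (it ignores attributes, CDATA, headings inside comments, and so on), but it avoids document and window entirely.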

Summary

Based on an example with server-side rendering and content retrieved via the WordPress REST API, we have performed the following steps:

  • We have shown how to create a private getToc function that will create a table of contents from the web page.
  • We have shown how to analyze the document.
  • We have created a nested two-level table of contents from the list of headlines of the document.

The generic JavaScript DOM functions we have used do not play well with Node.js, which is used in the case of server-side rendering. However, the table of contents shows up in the browser, so the solution is fit for pure client-side rendering. Moreover, we have suggested a workaround that should work even with server-side rendering: create the table of contents as an explicit string containing HTML code.

Note: The resulting code can be cloned via

git clone https://github.com/oveits/ng-universal-demo; cd ng-universal-demo
git checkout 8b3948a8 # to be sure to be at the same status of the repo as described in the blog

Next

See Part 2: Adding Links to the Table of Contents items

2

Getting Started with DC/OS on Vagrant


In the course of this Hello World style tutorial, we will explore DC/OS, a Data Center Operating System developed and open sourced by Mesosphere with the goal of hiding the complexity of data centers. We will

  • install DC/OS on your local PC or Notebook using Vagrant and VirtualBox,
  • deploy a “hello world” application with more than one instance,
  • load balance between the application instances
  • and make sure the service is reachable from the outside world.

See also part 2: A Step towards productive Docker: installing and testing DC/OS on AWS (it starts from scratch and does not require you to have read or tested the current post).

DC/OS is a Data Center Operating System built upon Apache Mesos and Mesosphere Marathon, an open source container orchestration platform. Its goal is to hide the complexity of data centers when deploying applications: DC/OS deploys your application on your data center hardware and automatically chooses the servers to run it on. It helps you scale your application according to your needs by adding or removing application instances at the push of a button. DC/OS also makes sure that your clients’ requests are load balanced and routed to your application instances: there is no need to manually re-configure the load balancer(s) when you add or destroy an instance of your application; DC/OS takes care of this for you.

Note: If you want to get started with Marathon and Mesos first, you might be interested in this blog post, especially if the resource requirements of this blog post exceed what you have at hand: for the DC/OS tutorial you will need 10 GB of RAM, while for the Marathon/Mesos tutorial, 4 GB are sufficient.

Table of Contents

Target

What I want to do in this session:

  • Install DC/OS on the local machine using Vagrant+VirtualBox
  • Explore the networking and load balancing capabilities of DC/OS

Tools and Versions used

  • Vagrant 1.8.6
  • Virtualbox 5.0.20 r106931
  • for Windows: GNU bash, version 4.3.42(5)-release (x86_64-pc-msys)
  • DCOS 1.8.8

Prerequisites

  • 10 GB of free RAM
  • tested with 4 virtual CPUs (quad-core CPU)
  • Git is installed

Step 1: Install Vagrant and VirtualBox

Step 1.1: Install VirtualBox

Download and install VirtualBox. I am running version 5.0.20 r106931.

If the installation fails with the error message “Setup Wizard ended prematurely”, see Appendix A: VirtualBox Installation Workaround below.

Step 1.2: Install Vagrant

Download and install Vagrant (requires a reboot).

Step 2: Download Vagrant Box

We are following the Readme on https://github.com/dcos/dcos-vagrant:

Since this might be a long-running task (especially if you are sitting in a hotel with a low-speed Internet connection, as I am at the moment), we had best start by downloading the DC/OS Vagrant box first:

(base system)$ vagrant box add https://downloads.dcos.io/dcos-vagrant/metadata.json
==> box: Loading metadata for box 'https://downloads.dcos.io/dcos-vagrant/metadata.json'
==> box: Adding box 'mesosphere/dcos-centos-virtualbox' (v0.8.0) for provider: virtualbox
 box: Downloading: https://downloads.dcos.io/dcos-vagrant/dcos-centos-virtualbox-0.8.0.box
 box: Progress: 100% (Rate: 132k/s, Estimated time remaining: --:--:--)
 box: Calculating and comparing box checksum...
==> box: Successfully added box 'mesosphere/dcos-centos-virtualbox' (v0.8.0) for 'virtualbox'!

Step 3: Clone DCOS-Vagrant Repo

In another terminal window, we clone the dcos-vagrant Git repo:

(base system)$ git clone https://github.com/dcos/dcos-vagrant
Cloning into 'dcos-vagrant'...
remote: Counting objects: 2171, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 2171 (delta 0), reused 0 (delta 0), pack-reused 2167
Receiving objects: 100% (2171/2171), 14.98 MiB | 123.00 KiB/s, done.
Resolving deltas: 100% (1297/1297), done.
Checking connectivity... done.
(base system)$ cd dcos-vagrant

VagrantConfig.yaml shows:

m1:
 ip: 192.168.65.90
 cpus: 2
 memory: 1024
 type: master
a1:
 ip: 192.168.65.111
 cpus: 4
 memory: 6144
 memory-reserved: 512
 type: agent-private
p1:
 ip: 192.168.65.60
 cpus: 2
 memory: 1536
 memory-reserved: 512
 type: agent-public
 aliases:
 - spring.acme.org
 - oinker.acme.org
boot:
 ip: 192.168.65.50
 cpus: 2
 memory: 1024
 type: boot

m1 is the DC/OS master. Private containers will run on a1, while the load balancer containers are public and will run on p1.

Step 4: Install Vagrant Hostmanager Plugin

Installation of the Vagrant Hostmanager plugin is required. I had tried without it, because I did not expect it to work on Windows. However, vagrant up will not succeed if the plugin is not installed; its presence is checked before the Vagrant box boots.

(base system)$ vagrant plugin install vagrant-hostmanager
Installing the 'vagrant-hostmanager' plugin. This can take a few minutes...
Installed the plugin 'vagrant-hostmanager (1.8.5)'!

Note: Some version updates later (VirtualBox 5.1.28 r117968 (Qt5.6.2)), I found out that the VirtualBox Guest Additions are also needed in order to avoid the error message sbin/mount.vboxsf: mounting failed with the error: No such device.
For that, I needed to re-apply the command
vagrant plugin install vagrant-vbguest.

However, it still did not work. I could vagrant ssh to the box, and I found in /var/log/vboxadd-install.log that the kernel headers were not found during installation of the VirtualBox Guest Additions. yum install kernel-headers returned that kernel-headers-3.10.0-693.5.2.el7.x86_64 was already installed. However, ls /usr/src/kernels/ showed a directory named 3.10.0-327.36.1.el7.x86_64 instead of the expected 3.10.0-327.el7.x86_64. So I ran sudo ln -s 3.10.0-327.36.1.el7.x86_64 3.10.0-327.el7.x86_64 within the directory /usr/src/kernels/, and I could do a vagrant up with no problems. I guess un-installing and re-installing the headers would work as well.

All this did not work, but I found that the build link was wrong (a hint was found here):

I fixed the link with cd /lib/modules/3.10.0-327.el7.x86_64; sudo mv build build.broken; sudo ln -s /usr/src/kernels/3.10.0-327.36.1.el7.x86_64 build
then cd /opt/VBoxGuestAdditions-*/init; sudo ./vboxadd setup

But it still did not work! I give up and will try installing DC/OS on AWS instead. Stay tuned.

Step 5: Boot DC/OS

Below, I have set DCOS_VERSION in order to get the exact same results the next time I perform the test. If you omit the environment variable, the latest stable version will be used when you boot up the VirtualBox VMs:

(base system)$ export DCOS_VERSION=1.8.8
(base system)$ vagrant up
Vagrant Patch Loaded: GuestLinux network_interfaces (1.8.6)
Validating Plugins...
Validating User Config...
Downloading DC/OS 1.8.8 Installer...
Source: https://downloads.dcos.io/dcos/stable/commit/602edc1b4da9364297d166d4857fc8ed7b0b65ca/dcos_generate_config.sh
Destination: installers/dcos/dcos_generate_config-1.8.8.sh
Progress: 16% (Rate: 1242k/s, Estimated time remaining: 0:09:16)

The speed of the hotel Internet seems to be better now, this late in the night…

(base system)$ vagrant up
Vagrant Patch Loaded: GuestLinux network_interfaces (1.8.6)
Validating Plugins...
Validating User Config...
Downloading DC/OS 1.8.8 Installer...
Source: https://downloads.dcos.io/dcos/stable/commit/602edc1b4da9364297d166d4857fc8ed7b0b65ca/dcos_generate_config.sh
Destination: installers/dcos/dcos_generate_config-1.8.8.sh
Progress: 100% (Rate: 1612k/s, Estimated time remaining: --:--:--)
Validating Installer Checksum...
Using DC/OS Installer: installers/dcos/dcos_generate_config-1.8.8.sh
Using DC/OS Config: etc/config-1.8.yaml
Validating Machine Config...
Configuring VirtualBox Host-Only Network...
Bringing machine 'm1' up with 'virtualbox' provider...
Bringing machine 'a1' up with 'virtualbox' provider...
Bringing machine 'p1' up with 'virtualbox' provider...
Bringing machine 'boot' up with 'virtualbox' provider...
==> m1: Importing base box 'mesosphere/dcos-centos-virtualbox'...
==> m1: Matching MAC address for NAT networking...
==> m1: Checking if box 'mesosphere/dcos-centos-virtualbox' is up to date...
==> m1: Setting the name of the VM: m1.dcos
==> m1: Fixed port collision for 22 => 2222. Now on port 2201.
==> m1: Clearing any previously set network interfaces...
==> m1: Preparing network interfaces based on configuration...
    m1: Adapter 1: nat
    m1: Adapter 2: hostonly
==> m1: Forwarding ports...
    m1: 22 (guest) => 2201 (host) (adapter 1)
==> m1: Running 'pre-boot' VM customizations...
==> m1: Booting VM...
==> m1: Waiting for machine to boot. This may take a few minutes...
    m1: SSH address: 127.0.0.1:2201
    m1: SSH username: vagrant
    m1: SSH auth method: private key
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
==> m1: Machine booted and ready!
==> m1: Checking for guest additions in VM...
==> m1: Setting hostname...
==> m1: Configuring and enabling network interfaces...
==> m1: Mounting shared folders...
    m1: /vagrant => D:/veits/Vagrant/ubuntu-trusty64-docker_2017-02/dcos-vagrant
==> m1: Updating /etc/hosts file on active guest machines...
==> m1: Updating /etc/hosts file on host machine (password may be required)...
==> m1: Running provisioner: shell...
    m1: Running: inline script
==> m1: Running provisioner: dcos_ssh...
    host: Generating new keys...
==> m1: Inserting generated public key within guest...
==> m1: Configuring vagrant to connect using generated private key...
==> m1: Removing insecure key from the guest, if it's present...
==> m1: Running provisioner: shell...
    m1: Running: script: Certificate Authorities
==> m1: >>> Installing Certificate Authorities
==> m1: Running provisioner: shell...
    m1: Running: script: Install Probe
==> m1: Probe already installed: /usr/local/sbin/probe
==> m1: Running provisioner: shell...
    m1: Running: script: Install jq
==> m1: jq already installed: /usr/local/sbin/jq
==> m1: Running provisioner: shell...
    m1: Running: script: Install DC/OS Postflight
==> m1: >>> Installing DC/OS Postflight: /usr/local/sbin/dcos-postflight
==> a1: Importing base box 'mesosphere/dcos-centos-virtualbox'...
==> a1: Matching MAC address for NAT networking...
==> a1: Checking if box 'mesosphere/dcos-centos-virtualbox' is up to date...
==> a1: Setting the name of the VM: a1.dcos
==> a1: Fixed port collision for 22 => 2222. Now on port 2202.
==> a1: Clearing any previously set network interfaces...
==> a1: Preparing network interfaces based on configuration...
    a1: Adapter 1: nat
    a1: Adapter 2: hostonly
==> a1: Forwarding ports...
    a1: 22 (guest) => 2202 (host) (adapter 1)
==> a1: Running 'pre-boot' VM customizations...
==> a1: Booting VM...
==> a1: Waiting for machine to boot. This may take a few minutes...
    a1: SSH address: 127.0.0.1:2202
    a1: SSH username: vagrant
    a1: SSH auth method: private key
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
    a1: Warning: Remote connection disconnect. Retrying...
==> a1: Machine booted and ready!
==> a1: Checking for guest additions in VM...
==> a1: Setting hostname...
==> a1: Configuring and enabling network interfaces...
==> a1: Mounting shared folders...
    a1: /vagrant => D:/veits/Vagrant/ubuntu-trusty64-docker_2017-02/dcos-vagrant
==> a1: Updating /etc/hosts file on active guest machines...
==> a1: Updating /etc/hosts file on host machine (password may be required)...
==> a1: Running provisioner: shell...
    a1: Running: inline script
==> a1: Running provisioner: dcos_ssh...
    host: Found existing keys
==> a1: Inserting generated public key within guest...
==> a1: Configuring vagrant to connect using generated private key...
==> a1: Removing insecure key from the guest, if it's present...
==> a1: Running provisioner: shell...
    a1: Running: script: Certificate Authorities
==> a1: >>> Installing Certificate Authorities
==> a1: Running provisioner: shell...
    a1: Running: script: Install Probe
==> a1: Probe already installed: /usr/local/sbin/probe
==> a1: Running provisioner: shell...
    a1: Running: script: Install jq
==> a1: jq already installed: /usr/local/sbin/jq
==> a1: Running provisioner: shell...
    a1: Running: script: Install DC/OS Postflight
==> a1: >>> Installing DC/OS Postflight: /usr/local/sbin/dcos-postflight
==> a1: Running provisioner: shell...
    a1: Running: script: Install Mesos Memory Modifier
==> a1: >>> Installing Mesos Memory Modifier: /usr/local/sbin/mesos-memory
==> a1: Running provisioner: shell...
    a1: Running: script: DC/OS Agent-private
==> a1: Skipping DC/OS private agent install (boot machine will provision in parallel)
==> p1: Importing base box 'mesosphere/dcos-centos-virtualbox'...
==> p1: Matching MAC address for NAT networking...
==> p1: Checking if box 'mesosphere/dcos-centos-virtualbox' is up to date...
==> p1: Setting the name of the VM: p1.dcos
==> p1: Fixed port collision for 22 => 2222. Now on port 2203.
==> p1: Clearing any previously set network interfaces...
==> p1: Preparing network interfaces based on configuration...
    p1: Adapter 1: nat
    p1: Adapter 2: hostonly
==> p1: Forwarding ports...
    p1: 22 (guest) => 2203 (host) (adapter 1)
==> p1: Running 'pre-boot' VM customizations...
==> p1: Booting VM...
==> p1: Waiting for machine to boot. This may take a few minutes...
    p1: SSH address: 127.0.0.1:2203
    p1: SSH username: vagrant
    p1: SSH auth method: private key
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
    p1: Warning: Remote connection disconnect. Retrying...
==> p1: Machine booted and ready!
==> p1: Checking for guest additions in VM...
==> p1: Setting hostname...
==> p1: Configuring and enabling network interfaces...
==> p1: Mounting shared folders...
    p1: /vagrant => D:/veits/Vagrant/ubuntu-trusty64-docker_2017-02/dcos-vagrant
==> p1: Updating /etc/hosts file on active guest machines...
==> p1: Updating /etc/hosts file on host machine (password may be required)...
==> p1: Running provisioner: shell...
    p1: Running: inline script
==> p1: Running provisioner: dcos_ssh...
    host: Found existing keys
==> p1: Inserting generated public key within guest...
==> p1: Configuring vagrant to connect using generated private key...
==> p1: Removing insecure key from the guest, if it's present...
==> p1: Running provisioner: shell...
    p1: Running: script: Certificate Authorities
==> p1: >>> Installing Certificate Authorities
==> p1: Running provisioner: shell...
    p1: Running: script: Install Probe
==> p1: Probe already installed: /usr/local/sbin/probe
==> p1: Running provisioner: shell...
    p1: Running: script: Install jq
==> p1: jq already installed: /usr/local/sbin/jq
==> p1: Running provisioner: shell...
    p1: Running: script: Install DC/OS Postflight
==> p1: >>> Installing DC/OS Postflight: /usr/local/sbin/dcos-postflight
==> p1: Running provisioner: shell...
    p1: Running: script: Install Mesos Memory Modifier
==> p1: >>> Installing Mesos Memory Modifier: /usr/local/sbin/mesos-memory
==> p1: Running provisioner: shell...
    p1: Running: script: DC/OS Agent-public
==> p1: Skipping DC/OS public agent install (boot machine will provision in parallel)
==> boot: Importing base box 'mesosphere/dcos-centos-virtualbox'...
==> boot: Matching MAC address for NAT networking...
==> boot: Checking if box 'mesosphere/dcos-centos-virtualbox' is up to date...
==> boot: Setting the name of the VM: boot.dcos
==> boot: Fixed port collision for 22 => 2222. Now on port 2204.
==> boot: Clearing any previously set network interfaces...
==> boot: Preparing network interfaces based on configuration...
    boot: Adapter 1: nat
    boot: Adapter 2: hostonly
==> boot: Forwarding ports...
    boot: 22 (guest) => 2204 (host) (adapter 1)
==> boot: Running 'pre-boot' VM customizations...
==> boot: Booting VM...
==> boot: Waiting for machine to boot. This may take a few minutes...
    boot: SSH address: 127.0.0.1:2204
    boot: SSH username: vagrant
    boot: SSH auth method: private key
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
    boot: Warning: Remote connection disconnect. Retrying...
==> boot: Machine booted and ready!
==> boot: Checking for guest additions in VM...
==> boot: Setting hostname...
==> boot: Configuring and enabling network interfaces...
==> boot: Mounting shared folders...
    boot: /vagrant => D:/veits/Vagrant/ubuntu-trusty64-docker_2017-02/dcos-vagrant
==> boot: Updating /etc/hosts file on active guest machines...
==> boot: Updating /etc/hosts file on host machine (password may be required)...
==> boot: Running provisioner: shell...
    boot: Running: inline script
==> boot: Running provisioner: dcos_ssh...
    host: Found existing keys
==> boot: Inserting generated public key within guest...
==> boot: Configuring vagrant to connect using generated private key...
==> boot: Removing insecure key from the guest, if it's present...
==> boot: Running provisioner: shell...
    boot: Running: script: Certificate Authorities
==> boot: >>> Installing Certificate Authorities
==> boot: Running provisioner: shell...
    boot: Running: script: Install Probe
==> boot: Probe already installed: /usr/local/sbin/probe
==> boot: Running provisioner: shell...
    boot: Running: script: Install jq
==> boot: jq already installed: /usr/local/sbin/jq
==> boot: Running provisioner: shell...
    boot: Running: script: Install DC/OS Postflight
==> boot: >>> Installing DC/OS Postflight: /usr/local/sbin/dcos-postflight
==> boot: Running provisioner: shell...
    boot: Running: script: DC/OS Boot
==> boot: Error: No such image or container: zookeeper-boot
==> boot: >>> Starting zookeeper (for exhibitor bootstrap and quorum)
==> boot: a58a678182b4c60df5fd4e1a0b86407456a33c75f4289c7fd7b0ce761afed567
==> boot: Error: No such image or container: nginx-boot
==> boot: >>> Starting nginx (for distributing bootstrap artifacts to cluster)
==> boot: c4bceea034f4d7488ae5ddd6ed708640a56064b191cd3d640a3311a58c5dcb5b
==> boot: >>> Downloading dcos_generate_config.sh (for building bootstrap image for system)
==> boot:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
==> boot:                                  Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 22  723M   22  160M    0     0   171M      0  0:00:04 --:--:--  0:00:04  171M
 41  723M   41  300M    0     0   155M      0  0:00:04  0:00:01  0:00:03  139M
 65  723M   65  471M    0     0   160M      0  0:00:04  0:00:02  0:00:02  155M
 88  723M   88  642M    0     0   163M      0  0:00:04  0:00:03  0:00:01  160M
100  723M  100  723M    0     0   164M      0  0:00:04  0:00:04 --:--:--  163M
==> boot: Running provisioner: dcos_install...
==> boot: Reading etc/config-1.8.yaml
==> boot: Analyzing machines
==> boot: Generating Configuration: ~/dcos/genconf/config.yaml
==> boot: sudo: cat << EOF > ~/dcos/genconf/config.yaml
==> boot:       ---
==> boot:       master_list:
==> boot:       - 192.168.65.90
==> boot:       agent_list:
==> boot:       - 192.168.65.111
==> boot:       - 192.168.65.60
==> boot:       cluster_name: dcos-vagrant
==> boot:       bootstrap_url: http://192.168.65.50
==> boot:       exhibitor_storage_backend: static
==> boot:       master_discovery: static
==> boot:       resolvers:
==> boot:       - 10.0.2.3
==> boot:       superuser_username: admin
==> boot:       superuser_password_hash: "\$6\$rounds=656000\$123o/Qz.InhbkbsO\$kn5IkpWm5CplEorQo7jG/27LkyDgWrml36lLxDtckZkCxu22uihAJ4DOJVVnNbsz/Y5MCK3B1InquE6E7Jmh30"
==> boot:       ssh_port: 22
==> boot:       ssh_user: vagrant
==> boot:       check_time: false
==> boot:       exhibitor_zk_hosts: 192.168.65.50:2181
==> boot:
==> boot:       EOF
==> boot:
==> boot: Generating IP Detection Script: ~/dcos/genconf/ip-detect
==> boot: sudo: cat << 'EOF' > ~/dcos/genconf/ip-detect
==> boot:       #!/usr/bin/env bash
==> boot:       set -o errexit
==> boot:       set -o nounset
==> boot:       set -o pipefail
==> boot:       echo $(/usr/sbin/ip route show to match 192.168.65.90 | grep -Eo '[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}' | tail -1)
==> boot:
==> boot:       EOF
==> boot:
==> boot: Importing Private SSH Key: ~/dcos/genconf/ssh_key
==> boot: sudo: cp /vagrant/.vagrant/dcos/private_key_vagrant ~/dcos/genconf/ssh_key
==> boot:
==> boot: Generating DC/OS Installer Files: ~/dcos/genconf/serve/
==> boot: sudo: cd ~/dcos && bash ~/dcos/dcos_generate_config.sh --genconf && cp -rpv ~/dcos/genconf/serve/* /var/tmp/dcos/ && echo ok > /var/tmp/dcos/ready
==> boot:
==> boot:       Extracting image from this script and loading into docker daemon, this step can take a few minutes
==> boot:       dcos-genconf.602edc1b4da9364297-5df43052907c021eeb.tar
==> boot:       ====> EXECUTING CONFIGURATION GENERATION
==> boot:       Generating configuration files...
==> boot:       Final arguments:{
==> boot:         "adminrouter_auth_enabled":"true",
==> boot:         "bootstrap_id":"5df43052907c021eeb5de145419a3da1898c58a5",
==> boot:         "bootstrap_tmp_dir":"tmp",
==> boot:         "bootstrap_url":"http://192.168.65.50",
==> boot:         "check_time":"false",
==> boot:         "cluster_docker_credentials":"{}",
==> boot:         "cluster_docker_credentials_dcos_owned":"false",
==> boot:         "cluster_docker_credentials_enabled":"false",
==> boot:         "cluster_docker_credentials_write_to_etc":"false",
==> boot:         "cluster_docker_registry_enabled":"false",
==> boot:         "cluster_docker_registry_url":"",
==> boot:         "cluster_name":"dcos-vagrant",
==> boot:         "cluster_packages":"[\"dcos-config--setup_4869fa95533aed5aad36093272289e6bd389b458\", \"dcos-metadata--setup_4869fa95533aed5aad36093272289e6bd389b458\"]",
==> boot:         "config_id":"4869fa95533aed5aad36093272289e6bd389b458",
==> boot:         "config_yaml":"      \"agent_list\": |-\n        [\"192.168.65.111\", \"192.168.65.60\"]\n      \"bootstrap_url\": |-\n        http://192.168.65.50\n      \"check_time\": |-\n        false\n      \"cluster_name\": |-\n        dcos-vagrant\n      \"exhibitor_storage_backend\": |-\n        static\n      \"exhibitor_zk_hosts\": |-\n        192.168.65.50:2181\n      \"master_discovery\": |-\n        static\n      \"master_list\": |-\n        [\"192.168.65.90\"]\n      \"provider\": |-\n        onprem\n      \"resolvers\": |-\n        [\"10.0.2.3\"]\n      \"ssh_port\": |-\n        22\n      \"ssh_user\": |-\n        vagrant\n      \"superuser_password_hash\": |-\n        $6$rounds=656000$123o/Qz.InhbkbsO$kn5IkpWm5CplEorQo7jG/27LkyDgWrml36lLxDtckZkCxu22uihAJ4DOJVVnNbsz/Y5MCK3B1InquE6E7Jmh30\n      \"superuser_username\": |-\n        admin\n",
==> boot:         "curly_pound":"{#",
==> boot:         "custom_auth":"false",
==> boot:         "dcos_gen_resolvconf_search_str":"",
==> boot:         "dcos_image_commit":"602edc1b4da9364297d166d4857fc8ed7b0b65ca",
==> boot:         "dcos_overlay_config_attempts":"4",
==> boot:         "dcos_overlay_enable":"true",
==> boot:         "dcos_overlay_mtu":"1420",
==> boot:         "dcos_overlay_network":"{\"vtep_subnet\": \"44.128.0.0/20\", \"overlays\": [{\"prefix\": 24, \"name\": \"dcos\", \"subnet\": \"9.0.0.0/8\"}], \"vtep_mac_oui\": \"70:B3:D5:00:00:00\"}",
==> boot:         "dcos_remove_dockercfg_enable":"false",
==> boot:         "dcos_version":"1.8.8",
==> boot:         "dns_search":"",
==> boot:         "docker_remove_delay":"1hrs",
==> boot:         "docker_stop_timeout":"20secs",
==> boot:         "exhibitor_static_ensemble":"1:192.168.65.90",
==> boot:         "exhibitor_storage_backend":"static",
==> boot:         "expanded_config":"\"DO NOT USE THIS AS AN ARGUMENT TO OTHER ARGUMENTS. IT IS TEMPORARY\"",
==> boot:         "gc_delay":"2days",
==> boot:         "ip_detect_contents":"'#!/usr/bin/env bash\n\n  set -o errexit\n\n  set -o nounset\n\n  set -o pipefail\n\n  echo $(/usr/sbin/ip route show to match 192.168.65.90 | grep -Eo ''[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}''\n  | tail -1)\n\n\n  '\n",
==> boot:         "ip_detect_filename":"genconf/ip-detect",
==> boot:         "ip_detect_public_contents":"'#!/usr/bin/env bash\n\n  set -o errexit\n\n  set -o nounset\n\n  set -o pipefail\n\n  echo $(/usr/sbin/ip route show to match 192.168.65.90 | grep -Eo ''[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}''\n  | tail -1)\n\n\n  '\n",
==> boot:         "master_discovery":"static",
==> boot:         "master_dns_bindall":"true",
==> boot:         "master_list":"[\"192.168.65.90\"]",
==> boot:         "master_quorum":"1",
==> boot:         "mesos_container_logger":"org_apache_mesos_LogrotateContainerLogger",
==> boot:         "mesos_dns_ip_sources":"[\"host\", \"netinfo\"]",
==> boot:         "mesos_dns_resolvers_str":"\"resolvers\": [\"10.0.2.3\"]",
==> boot:         "mesos_hooks":"",
==> boot:         "mesos_isolation":"cgroups/cpu,cgroups/mem,disk/du,network/cni,filesystem/linux,docker/runtime,docker/volume",
==> boot:         "mesos_log_directory_max_files":"162",
==> boot:         "mesos_log_retention_count":"137",
==> boot:         "mesos_log_retention_mb":"4000",
==> boot:         "minuteman_forward_metrics":"false",
==> boot:         "minuteman_max_named_ip":"11.255.255.255",
==> boot:         "minuteman_max_named_ip_erltuple":"{11,255,255,255}",
==> boot:         "minuteman_min_named_ip":"11.0.0.0",
==> boot:         "minuteman_min_named_ip_erltuple":"{11,0,0,0}",
==> boot:         "num_masters":"1",
==> boot:         "oauth_auth_host":"https://dcos.auth0.com",
==> boot:         "oauth_auth_redirector":"https://auth.dcos.io",
==> boot:         "oauth_available":"true",
==> boot:         "oauth_client_id":"3yF5TOSzdlI45Q1xspxzeoGBe9fNxm9m",
==> boot:         "oauth_enabled":"true",
==> boot:         "oauth_issuer_url":"https://dcos.auth0.com/",
==> boot:         "package_names":"[\n  \"dcos-config\",\n  \"dcos-metadata\"\n]",
==> boot:         "provider":"onprem",
==> boot:         "resolvers":"[\"10.0.2.3\"]",
==> boot:         "resolvers_str":"10.0.2.3",
==> boot:         "rexray_config":"{\"rexray\": {\"modules\": {\"default-docker\": {\"disabled\": true}, \"default-admin\": {\"host\": \"tcp://127.0.0.1:61003\"}}, \"loglevel\": \"info\"}}",
==> boot:         "rexray_config_contents":"\"rexray:\\n  loglevel: info\\n  modules:\\n    default-admin:\\n      host: tcp://127.0.0.1:61003\\n\\\n  \\    default-docker:\\n      disabled: true\\n\"\n",
==> boot:         "rexray_config_preset":"",
==> boot:         "telemetry_enabled":"true",
==> boot:         "template_filenames":"[\n  \"dcos-config.yaml\",\n  \"cloud-config.yaml\",\n  \"dcos-metadata.yaml\",\n  \"dcos-services.yaml\"\n]",
==> boot:         "ui_banner":"false",
==> boot:         "ui_banner_background_color":"#1E232F",
==> boot:         "ui_banner_dismissible":"null",
==> boot:         "ui_banner_footer_content":"null",
==> boot:         "ui_banner_foreground_color":"#FFFFFF",
==> boot:         "ui_banner_header_content":"null",
==> boot:         "ui_banner_header_title":"null",
==> boot:         "ui_banner_image_path":"null",
==> boot:         "ui_branding":"false",
==> boot:         "ui_external_links":"false",
==> boot:         "use_mesos_hooks":"false",
==> boot:         "use_proxy":"false",
==> boot:         "user_arguments":"{\n  \"agent_list\":\"[\\\"192.168.65.111\\\", \\\"192.168.65.60\\\"]\",\n  \"bootstrap_url\":\"http://192.168.65.50\",\n  \"check_time\":\"false\",\n  \"cluster_name\":\"dcos-vagrant\",\n  \"exhibitor_storage_backend\":\"static\",\n  \"exhibitor_zk_hosts\":\"192.168.65.50:2181\",\n  \"master_discovery\":\"static\",\n  \"master_list\":\"[\\\"192.168.65.90\\\"]\",\n  \"provider\":\"onprem\",\n  \"resolvers\":\"[\\\"10.0.2.3\\\"]\",\n  \"ssh_port\":\"22\",\n  \"ssh_user\":\"vagrant\",\n  \"superuser_password_hash\":\"$6$rounds=656000$123o/Qz.InhbkbsO$kn5IkpWm5CplEorQo7jG/27LkyDgWrml36lLxDtckZkCxu22uihAJ4DOJVVnNbsz/Y5MCK3B1InquE6E7Jmh30\",\n  \"superuser_username\":\"admin\"\n}",
==> boot:         "weights":""
==> boot:       }
==> boot:       Generating configuration files...
==> boot:       Final arguments:{
==> boot:         "adminrouter_auth_enabled":"true",
==> boot:         "bootstrap_id":"5df43052907c021eeb5de145419a3da1898c58a5",
==> boot:         "bootstrap_tmp_dir":"tmp",
==> boot:         "bootstrap_url":"http://192.168.65.50",
==> boot:         "check_time":"false",
==> boot:         "cluster_docker_credentials":"{}",
==> boot:         "cluster_docker_credentials_dcos_owned":"false",
==> boot:         "cluster_docker_credentials_enabled":"false",
==> boot:         "cluster_docker_credentials_write_to_etc":"false",
==> boot:         "cluster_docker_registry_enabled":"false",
==> boot:         "cluster_docker_registry_url":"",
==> boot:         "cluster_name":"dcos-vagrant",
==> boot:         "cluster_packages":"[\"dcos-config--setup_4869fa95533aed5aad36093272289e6bd389b458\", \"dcos-metadata--setup_4869fa95533aed5aad36093272289e6bd389b458\"]",
==> boot:         "config_id":"4869fa95533aed5aad36093272289e6bd389b458",
==> boot:         "config_yaml":"      \"agent_list\": |-\n        [\"192.168.65.111\", \"192.168.65.60\"]\n      \"bootstrap_url\": |-\n        http://192.168.65.50\n      \"check_time\": |-\n        false\n      \"cluster_name\": |-\n        dcos-vagrant\n      \"exhibitor_storage_backend\": |-\n        static\n      \"exhibitor_zk_hosts\": |-\n        192.168.65.50:2181\n      \"master_discovery\": |-\n        static\n      \"master_list\": |-\n        [\"192.168.65.90\"]\n      \"provider\": |-\n        onprem\n      \"resolvers\": |-\n        [\"10.0.2.3\"]\n      \"ssh_port\": |-\n        22\n      \"ssh_user\": |-\n        vagrant\n      \"superuser_password_hash\": |-\n        $6$rounds=656000$123o/Qz.InhbkbsO$kn5IkpWm5CplEorQo7jG/27LkyDgWrml36lLxDtckZkCxu22uihAJ4DOJVVnNbsz/Y5MCK3B1InquE6E7Jmh30\n      \"superuser_username\": |-\n        admin\n",
==> boot:         "curly_pound":"{#",
==> boot:         "custom_auth":"false",
==> boot:         "dcos_gen_resolvconf_search_str":"",
==> boot:         "dcos_image_commit":"602edc1b4da9364297d166d4857fc8ed7b0b65ca",
==> boot:         "dcos_overlay_config_attempts":"4",
==> boot:         "dcos_overlay_enable":"true",
==> boot:         "dcos_overlay_mtu":"1420",
==> boot:         "dcos_overlay_network":"{\"vtep_subnet\": \"44.128.0.0/20\", \"overlays\": [{\"prefix\": 24, \"name\": \"dcos\", \"subnet\": \"9.0.0.0/8\"}], \"vtep_mac_oui\": \"70:B3:D5:00:00:00\"}",
==> boot:         "dcos_remove_dockercfg_enable":"false",
==> boot:         "dcos_version":"1.8.8",
==> boot:         "dns_search":"",
==> boot:         "docker_remove_delay":"1hrs",
==> boot:         "docker_stop_timeout":"20secs",
==> boot:         "exhibitor_static_ensemble":"1:192.168.65.90",
==> boot:         "exhibitor_storage_backend":"static",
==> boot:         "expanded_config":"\"DO NOT USE THIS AS AN ARGUMENT TO OTHER ARGUMENTS. IT IS TEMPORARY\"",
==> boot:         "gc_delay":"2days",
==> boot:         "ip_detect_contents":"'#!/usr/bin/env bash\n\n  set -o errexit\n\n  set -o nounset\n\n  set -o pipefail\n\n  echo $(/usr/sbin/ip route show to match 192.168.65.90 | grep -Eo ''[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}''\n  | tail -1)\n\n\n  '\n",
==> boot:         "ip_detect_filename":"genconf/ip-detect",
==> boot:         "ip_detect_public_contents":"'#!/usr/bin/env bash\n\n  set -o errexit\n\n  set -o nounset\n\n  set -o pipefail\n\n  echo $(/usr/sbin/ip route show to match 192.168.65.90 | grep -Eo ''[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}''\n  | tail -1)\n\n\n  '\n",
==> boot:         "master_discovery":"static",
==> boot:         "master_dns_bindall":"true",
==> boot:         "master_list":"[\"192.168.65.90\"]",
==> boot:         "master_quorum":"1",
==> boot:         "mesos_container_logger":"org_apache_mesos_LogrotateContainerLogger",
==> boot:         "mesos_dns_ip_sources":"[\"host\", \"netinfo\"]",
==> boot:         "mesos_dns_resolvers_str":"\"resolvers\": [\"10.0.2.3\"]",
==> boot:         "mesos_hooks":"",
==> boot:         "mesos_isolation":"cgroups/cpu,cgroups/mem,disk/du,network/cni,filesystem/linux,docker/runtime,docker/volume",
==> boot:         "mesos_log_directory_max_files":"162",
==> boot:         "mesos_log_retention_count":"137",
==> boot:         "mesos_log_retention_mb":"4000",
==> boot:         "minuteman_forward_metrics":"false",
==> boot:         "minuteman_max_named_ip":"11.255.255.255",
==> boot:         "minuteman_max_named_ip_erltuple":"{11,255,255,255}",
==> boot:         "minuteman_min_named_ip":"11.0.0.0",
==> boot:         "minuteman_min_named_ip_erltuple":"{11,0,0,0}",
==> boot:         "num_masters":"1",
==> boot:         "oauth_auth_host":"https://dcos.auth0.com",
==> boot:         "oauth_auth_redirector":"https://auth.dcos.io",
==> boot:         "oauth_available":"true",
==> boot:         "oauth_client_id":"3yF5TOSzdlI45Q1xspxzeoGBe9fNxm9m",
==> boot:         "oauth_enabled":"true",
==> boot:         "oauth_issuer_url":"https://dcos.auth0.com/",
==> boot:         "package_names":"[\n  \"dcos-config\",\n  \"dcos-metadata\"\n]",
==> boot:         "provider":"onprem",
==> boot:         "resolvers":"[\"10.0.2.3\"]",
==> boot:         "resolvers_str":"10.0.2.3",
==> boot:         "rexray_config":"{\"rexray\": {\"modules\": {\"default-docker\": {\"disabled\": true}, \"default-admin\": {\"host\": \"tcp://127.0.0.1:61003\"}}, \"loglevel\": \"info\"}}",
==> boot:         "rexray_config_contents":"\"rexray:\\n  loglevel: info\\n  modules:\\n    default-admin:\\n      host: tcp://127.0.0.1:61003\\n\\\n  \\    default-docker:\\n      disabled: true\\n\"\n",
==> boot:         "rexray_config_preset":"",
==> boot:         "telemetry_enabled":"true",
==> boot:         "template_filenames":"[\n  \"dcos-config.yaml\",\n  \"cloud-config.yaml\",\n  \"dcos-metadata.yaml\",\n  \"dcos-services.yaml\"\n]",
==> boot:         "ui_banner":"false",
==> boot:         "ui_banner_background_color":"#1E232F",
==> boot:         "ui_banner_dismissible":"null",
==> boot:         "ui_banner_footer_content":"null",
==> boot:         "ui_banner_foreground_color":"#FFFFFF",
==> boot:         "ui_banner_header_content":"null",
==> boot:         "ui_banner_header_title":"null",
==> boot:         "ui_banner_image_path":"null",
==> boot:         "ui_branding":"false",
==> boot:         "ui_external_links":"false",
==> boot:         "use_mesos_hooks":"false",
==> boot:         "use_proxy":"false",
==> boot:         "user_arguments":"{\n  \"agent_list\":\"[\\\"192.168.65.111\\\", \\\"192.168.65.60\\\"]\",\n  \"bootstrap_url\":\"http://192.168.65.50\",\n  \"check_time\":\"false\",\n  \"cluster_name\":\"dcos-vagrant\",\n  \"exhibitor_storage_backend\":\"static\",\n  \"exhibitor_zk_hosts\":\"192.168.65.50:2181\",\n  \"master_discovery\":\"static\",\n  \"master_list\":\"[\\\"192.168.65.90\\\"]\",\n  \"provider\":\"onprem\",\n  \"resolvers\":\"[\\\"10.0.2.3\\\"]\",\n  \"ssh_port\":\"22\",\n  \"ssh_user\":\"vagrant\",\n  \"superuser_password_hash\":\"$6$rounds=656000$123o/Qz.InhbkbsO$kn5IkpWm5CplEorQo7jG/27LkyDgWrml36lLxDtckZkCxu22uihAJ4DOJVVnNbsz/Y5MCK3B1InquE6E7Jmh30\",\n  \"superuser_username\":\"admin\"\n}",
==> boot:         "weights":""
==> boot:       }
==> boot:       Package filename: packages/dcos-config/dcos-config--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz
==> boot:       Package filename: packages/dcos-metadata/dcos-metadata--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz
==> boot:       Generating Bash configuration files for DC/OS
==> boot:       ‘/root/dcos/genconf/serve/bootstrap’ -> ‘/var/tmp/dcos/bootstrap’
==> boot:       ‘/root/dcos/genconf/serve/bootstrap/5df43052907c021eeb5de145419a3da1898c58a5.bootstrap.tar.xz’ -> ‘/var/tmp/dcos/bootstrap/5df43052907c021eeb5de145419a3da1898c58a5.bootstrap.tar.xz’
==> boot:       ‘/root/dcos/genconf/serve/bootstrap/5df43052907c021eeb5de145419a3da1898c58a5.active.json’ -> ‘/var/tmp/dcos/bootstrap/5df43052907c021eeb5de145419a3da1898c58a5.active.json’
==> boot:       ‘/root/dcos/genconf/serve/bootstrap.latest’ -> ‘/var/tmp/dcos/bootstrap.latest’
==> boot:       ‘/root/dcos/genconf/serve/cluster-package-info.json’ -> ‘/var/tmp/dcos/cluster-package-info.json’
==> boot:       ‘/root/dcos/genconf/serve/dcos_install.sh’ -> ‘/var/tmp/dcos/dcos_install.sh’
==> boot:       ‘/root/dcos/genconf/serve/packages’ -> ‘/var/tmp/dcos/packages’
==> boot:       ‘/root/dcos/genconf/serve/packages/dcos-metadata’ -> ‘/var/tmp/dcos/packages/dcos-metadata’
==> boot:       ‘/root/dcos/genconf/serve/packages/dcos-metadata/dcos-metadata--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz’ -> ‘/var/tmp/dcos/packages/dcos-metadata/dcos-metadata--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz’
==> boot:       ‘/root/dcos/genconf/serve/packages/dcos-config’ -> ‘/var/tmp/dcos/packages/dcos-config’
==> boot:       ‘/root/dcos/genconf/serve/packages/dcos-config/dcos-config--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz’ -> ‘/var/tmp/dcos/packages/dcos-config/dcos-config--setup_4869fa95533aed5aad36093272289e6bd389b458.tar.xz’
==> m1: Installing DC/OS (master)
==> m1: sudo: bash -ceu "curl --fail --location --silent --show-error --verbose http://boot.dcos/dcos_install.sh | bash -s -- master"
==> m1:
==> m1:       * About to connect() to boot.dcos port 80 (#0)
==> m1:       *   Trying 192.168.65.50...
==> m1:       * Connected to boot.dcos (192.168.65.50) port 80 (#0)
==> m1:       > GET /dcos_install.sh HTTP/1.1
==> m1:       > User-Agent: curl/7.29.0
==> m1:       > Host: boot.dcos
==> m1:       > Accept: */*
==> m1:       >
==> m1:       < HTTP/1.1 200 OK
==> m1:       < Server: nginx/1.11.4
==> m1:       < Date: Tue, 07 Mar 2017 22:46:20 GMT
==> m1:       < Content-Type: application/octet-stream
==> m1:       < Content-Length: 15293
==> m1:       < Last-Modified: Tue, 07 Mar 2017 22:46:11 GMT
==> m1:       < Connection: keep-alive
==> m1:       < ETag: "58bf3833-3bbd"
==> m1:       < Accept-Ranges: bytes
==> m1:       <
==> m1:       { [data not shown]
==> m1:       * Connection #0 to host boot.dcos left intact
==> m1:       Starting DC/OS Install Process
==> m1:       Running preflight checks
==> m1:       Checking if DC/OS is already installed:
==> m1:       PASS (Not installed)
==> m1:       PASS Is SELinux disabled?
==> m1:       Checking if docker is installed and in PATH:
==> m1:       PASS
==> m1:       Checking docker version requirement (>= 1.6):
==> m1:       PASS (1.11.2)
==> m1:       Checking if curl is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if bash is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if ping is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if tar is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if xz is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if unzip is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if ipset is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if systemd-notify is installed and in PATH:
==> m1:       PASS
==> m1:       Checking if systemd is installed and in PATH:
==> m1:       PASS
==> m1:       Checking systemd version requirement (>= 200):
==> m1:       PASS (219)
==> m1:       Checking if group 'nogroup' exists:
==> m1:       PASS
==> m1:       Checking if port 53 (required by spartan) is in use:
==> m1:       PASS
==> m1:       Checking if port 80 (required by adminrouter) is in use:
==> m1:       PASS
==> m1:       Checking if port 443 (required by adminrouter) is in use:
==> m1:       PASS
==> m1:       Checking if port 1050 (required by 3dt) is in use:
==> m1:       PASS
==> m1:       Checking if port 2181 (required by zookeeper) is in use:
==> m1:       PASS
==> m1:       Checking if port 5050 (required by mesos-master) is in use:
==> m1:       PASS
==> m1:       Checking if port 7070 (required by cosmos) is in use:
==> m1:       PASS
==> m1:       Checking if port 8080 (required by marathon) is in use:
==> m1:       PASS
==> m1:       Checking if port 8101 (required by dcos-oauth) is in use:
==> m1:       PASS
==> m1:       Checking if port 8123 (required by mesos-dns) is in use:
==> m1:       PASS
==> m1:       Checking if port 8181 (required by exhibitor) is in use:
==> m1:       PASS
==> m1:       Checking if port 9000 (required by metronome) is in use:
==> m1:       PASS
==> m1:       Checking if port 9942 (required by metronome) is in use:
==> m1:       PASS
==> m1:       Checking if port 9990 (required by cosmos) is in use:
==> m1:       PASS
==> m1:       Checking if port 15055 (required by dcos-history) is in use:
==> m1:       PASS
==> m1:       Checking if port 33107 (required by navstar) is in use:
==> m1:       PASS
==> m1:       Checking if port 36771 (required by marathon) is in use:
==> m1:       PASS
==> m1:       Checking if port 41281 (required by zookeeper) is in use:
==> m1:       PASS
==> m1:       Checking if port 42819 (required by spartan) is in use:
==> m1:       PASS
==> m1:       Checking if port 43911 (required by minuteman) is in use:
==> m1:       PASS
==> m1:       Checking if port 46839 (required by metronome) is in use:
==> m1:       PASS
==> m1:       Checking if port 61053 (required by mesos-dns) is in use:
==> m1:       PASS
==> m1:       Checking if port 61420 (required by epmd) is in use:
==> m1:       PASS
==> m1:       Checking if port 61421 (required by minuteman) is in use:
==> m1:       PASS
==> m1:       Checking if port 62053 (required by spartan) is in use:
==> m1:       PASS
==> m1:       Checking if port 62080 (required by navstar) is in use:
==> m1:       PASS
==> m1:       Checking Docker is configured with a production storage driver:
==> m1:       WARNING: bridge-nf-call-iptables is disabled
==> m1:       WARNING: bridge-nf-call-ip6tables is disabled
==> m1:       PASS (overlay)
==> m1:       Creating directories under /etc/mesosphere
==> m1:       Creating role file for master
==> m1:       Configuring DC/OS
==> m1:       Setting and starting DC/OS
==> m1:       Created symlink from /etc/systemd/system/multi-user.target.wants/dcos-setup.service to /etc/systemd/system/dcos-setup.service.
==> a1: Installing DC/OS (agent)
==> p1: Installing DC/OS (agent-public)
==> a1: sudo: bash -ceu "curl --fail --location --silent --show-error --verbose http://boot.dcos/dcos_install.sh | bash -s -- slave"
==> p1: sudo: bash -ceu "curl --fail --location --silent --show-error --verbose http://boot.dcos/dcos_install.sh | bash -s -- slave_public"
==> a1:
==> p1:
==> a1:       * About to connect() to boot.dcos port 80 (#0)
==> p1:       * About to connect() to boot.dcos port 80 (#0)
==> a1:       *   Trying 192.168.65.50...
==> p1:       *   Trying 192.168.65.50...
==> a1:       * Connected to boot.dcos (192.168.65.50) port 80 (#0)
==> p1:       * Connected to boot.dcos (192.168.65.50) port 80 (#0)
==> p1:       > GET /dcos_install.sh HTTP/1.1
==> p1:       > User-Agent: curl/7.29.0
==> p1:       > Host: boot.dcos
==> p1:       > Accept: */*
==> p1:       >
==> a1:       > GET /dcos_install.sh HTTP/1.1
==> a1:       > User-Agent: curl/7.29.0
==> a1:       > Host: boot.dcos
==> a1:       > Accept: */*
==> a1:       >
==> p1:       < HTTP/1.1 200 OK
==> p1:       < Server: nginx/1.11.4
==> p1:       < Date: Tue, 07 Mar 2017 22:48:31 GMT
==> p1:       < Content-Type: application/octet-stream
==> p1:       < Content-Length: 15293
==> p1:       < Last-Modified: Tue, 07 Mar 2017 22:46:11 GMT
==> p1:       < Connection: keep-alive
==> p1:       < ETag: "58bf3833-3bbd"
==> p1:       < Accept-Ranges: bytes
==> p1:       <
==> p1:       { [data not shown]
==> a1:       < HTTP/1.1 200 OK
==> a1:       < Server: nginx/1.11.4
==> a1:       < Date: Tue, 07 Mar 2017 22:48:31 GMT
==> a1:       < Content-Type: application/octet-stream
==> a1:       < Content-Length: 15293
==> a1:       < Last-Modified: Tue, 07 Mar 2017 22:46:11 GMT
==> a1:       < Connection: keep-alive
==> a1:       < ETag: "58bf3833-3bbd"
==> a1:       < Accept-Ranges: bytes
==> a1:       <
==> a1:       { [data not shown]
==> p1:       * Connection #0 to host boot.dcos left intact
==> a1:       * Connection #0 to host boot.dcos left intact
==> p1:       Starting DC/OS Install Process
==> p1:       Running preflight checks
==> p1:       Checking if DC/OS is already installed: PASS (Not installed)
==> a1:       Starting DC/OS Install Process
==> a1:       Running preflight checks
==> a1:       Checking if DC/OS is already installed: PASS (Not installed)
==> a1:       PASS Is SELinux disabled?
==> p1:       PASS Is SELinux disabled?
==> p1:       Checking if docker is installed and in PATH:
==> p1:       PASS
==> p1:       Checking docker version requirement (>= 1.6):
==> p1:       PASS (1.11.2)
==> p1:       Checking if curl is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if bash is installed and in PATH:
==> a1:       Checking if docker is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if ping is installed and in PATH:
==> a1:       PASS
==> a1:       Checking docker version requirement (>= 1.6):
==> p1:       PASS
==> p1:       Checking if tar is installed and in PATH:
==> a1:       PASS (1.11.2)
==> p1:       PASS
==> a1:       Checking if curl is installed and in PATH:
==> p1:       Checking if xz is installed and in PATH:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking if unzip is installed and in PATH:
==> a1:       Checking if bash is installed and in PATH:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking if ipset is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if systemd-notify is installed and in PATH:
==> a1:       Checking if ping is installed and in PATH:
==> p1:       PASS
==> a1:       PASS
==> a1:       Checking if tar is installed and in PATH:
==> p1:       Checking if systemd is installed and in PATH:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking systemd version requirement (>= 200):
==> a1:       Checking if xz is installed and in PATH:
==> p1:       PASS (219)
==> p1:       Checking if group 'nogroup' exists:
==> p1:       PASS
==> p1:       Checking if port 53 (required by spartan) is in use:
==> a1:       PASS
==> a1:       Checking if unzip is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if port 5051 (required by mesos-agent) is in use:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking if port 34451 (required by navstar) is in use:
==> a1:       Checking if ipset is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if port 39851 (required by spartan) is in use:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking if port 43995 (required by minuteman) is in use:
==> a1:       Checking if systemd-notify is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if port 61001 (required by agent-adminrouter) is in use:
==> a1:       PASS
==> p1:       PASS
==> p1:       Checking if port 61420 (required by epmd) is in use:
==> a1:       Checking if systemd is installed and in PATH:
==> p1:       PASS
==> p1:       Checking if port 61421 (required by minuteman) is in use:
==> p1:       PASS
==> p1:       Checking if port 62053 (required by spartan) is in use:
==> a1:       PASS
==> a1:       Checking systemd version requirement (>= 200):
==> a1:       PASS (219)
==> a1:       Checking if group 'nogroup' exists:
==> p1:       PASS
==> p1:       Checking if port 62080 (required by navstar) is in use:
==> a1:       PASS
==> a1:       Checking if port 53 (required by spartan) is in use:
==> p1:       PASS
==> p1:       Checking Docker is configured with a production storage driver:
==> a1:       PASS
==> a1:       Checking if port 5051 (required by mesos-agent) is in use:
==> p1:       WARNING: bridge-nf-call-iptables is disabled
==> p1:       WARNING: bridge-nf-call-ip6tables is disabled
==> a1:       PASS
==> a1:       Checking if port 34451 (required by navstar) is in use:
==> p1:       PASS (overlay)
==> p1:       Creating directories under /etc/mesosphere
==> a1:       PASS
==> a1:       Checking if port 39851 (required by spartan) is in use:
==> p1:       Creating role file for slave_public
==> a1:       PASS
==> a1:       Checking if port 43995 (required by minuteman) is in use:
==> p1:       Configuring DC/OS
==> a1:       PASS
==> a1:       Checking if port 61001 (required by agent-adminrouter) is in use:
==> a1:       PASS
==> a1:       Checking if port 61420 (required by epmd) is in use:
==> a1:       PASS
==> a1:       Checking if port 61421 (required by minuteman) is in use:
==> a1:       PASS
==> a1:       Checking if port 62053 (required by spartan) is in use:
==> a1:       PASS
==> a1:       Checking if port 62080 (required by navstar) is in use:
==> a1:       PASS
==> a1:       Checking Docker is configured with a production storage driver:
==> p1:       Setting and starting DC/OS
==> a1:       WARNING: bridge-nf-call-iptables is disabled
==> a1:       WARNING: bridge-nf-call-ip6tables is disabled
==> a1:       PASS (overlay)
==> a1:       Creating directories under /etc/mesosphere
==> a1:       Creating role file for slave
==> a1:       Configuring DC/OS
==> a1:       Setting and starting DC/OS
==> a1:       Created symlink from /etc/systemd/system/multi-user.target.wants/dcos-setup.service to /etc/systemd/system/dcos-setup.service.
==> p1:       Created symlink from /etc/systemd/system/multi-user.target.wants/dcos-setup.service to /etc/systemd/system/dcos-setup.service.
==> m1: DC/OS Postflight
==> a1: DC/OS Postflight
==> p1: DC/OS Postflight
==> m1: sudo: dcos-postflight
==> a1: sudo: dcos-postflight
==> p1: sudo: dcos-postflight
==> a1:
==> p1:
==> m1:
==> a1: Setting Mesos Memory: 5632 (role=*)
==> a1: sudo: mesos-memory 5632
==> a1:
==> a1:       Updating /var/lib/dcos/mesos-resources
==> a1: Restarting Mesos Agent
==> a1: sudo: bash -ceu "systemctl stop dcos-mesos-slave.service && rm -f /var/lib/mesos/slave/meta/slaves/latest && systemctl start dcos-mesos-slave.service --no-block"
==> a1:
==> p1: Setting Mesos Memory: 1024 (role=slave_public)
==> p1: sudo: mesos-memory 1024 slave_public
==> p1:
==> p1:       Updating /var/lib/dcos/mesos-resources
==> p1: Restarting Mesos Agent
==> p1: sudo: bash -ceu "systemctl stop dcos-mesos-slave-public.service && rm -f /var/lib/mesos/slave/meta/slaves/latest && systemctl start dcos-mesos-slave-public.service --no-block"
==> p1:
==> boot: DC/OS Installation Complete
==> boot: Web Interface: http://m1.dcos/

The VirtualBox GUI shows that the four machines we have seen in the VagrantConfig.yaml are up and running:

Picture showing 4 VirtualBox machines m1.dcos, a1.dcos, p1.dcos and boot.dcos

Step 6: Log into the DC/OS GUI

Now let us access the Web UI on m1.dcos:

The Vagrant Hostmanager Plugin also works on Windows: we can check this by reading the hosts file at C:\Windows\System32\drivers\etc\hosts. It contains the DNS mappings for the four machines (a1.dcos, boot.dcos, m1.dcos and p1.dcos). The DNS mapping for spring.acme.org with alias oinker.acme.org is still missing in your case; it will be added in a later step, when we install the Marathon load balancer based on HAProxy.

The host manager has added m1 and some other FQDNs to the hosts file (found at C:\Windows\System32\drivers\etc\hosts):

## vagrant-hostmanager-start id: 9f1502eb-71bf-4e6a-b3bc-44a83db628b7
192.168.65.111 a1.dcos
192.168.65.50 boot.dcos
192.168.65.90 m1.dcos
192.168.65.60 p1.dcos
192.168.65.60 spring.acme.org oinker.acme.org
## vagrant-hostmanager-end
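The start and end markers are what the Hostmanager plugin uses to find and rewrite its own entries. As a small illustration (using a local sample file rather than the real hosts file), the managed block can be extracted with sed:

```shell
# Sketch: extract the vagrant-hostmanager block from a hosts file.
# We write a local sample file instead of touching the real
# C:\Windows\System32\drivers\etc\hosts (or /etc/hosts on Linux).
cat > hosts.sample <<'EOF'
127.0.0.1 localhost
## vagrant-hostmanager-start id: 9f1502eb-71bf-4e6a-b3bc-44a83db628b7
192.168.65.111 a1.dcos
192.168.65.50 boot.dcos
192.168.65.90 m1.dcos
192.168.65.60 p1.dcos
## vagrant-hostmanager-end
EOF

# Print only the lines between the start and end markers:
sed -n '/vagrant-hostmanager-start/,/vagrant-hostmanager-end/p' hosts.sample
```

On your machine you would point the sed command at the real hosts file instead of the sample.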

After logging in via Google,

and pressing the Allow button, we reach the DC/OS Dashboard:

(scrolling down)

Step 7: Install the DCOS CLI

Now we will continue to follow the DC/OS 101 Tutorial and install the DC/OS CLI. This can be done by clicking the profile on the lower left of the Web GUI:

-> 

-> 

-> 

Choose the operating system you are working on. In my case, this is Windows, and I have performed the following steps:

Step 8: Configure DC/OS Master URL

First, we cd into the folder where dcos.exe is located (D:\veits\downloads\DCOS CLI in my case), before we configure the core DC/OS URL:

Windows> cd /D "D:\veits\downloads\DCOS CLI"
Windows> dcos config set core.dcos_url http://m1.dcos
Windows> dcos
Command line utility for the Mesosphere Datacenter Operating
System (DC/OS). The Mesosphere DC/OS is a distributed operating
system built around Apache Mesos. This utility provides tools
for easy management of a DC/OS installation.

Available DC/OS commands:

        auth            Authenticate to DC/OS cluster
        config          Manage the DC/OS configuration file
        experimental    Experimental commands. These commands are under development and are subject to change
        help            Display help information about DC/OS
        job             Deploy and manage jobs in DC/OS
        marathon        Deploy and manage applications to DC/OS
        node            Administer and manage DC/OS cluster nodes
        package         Install and manage DC/OS software packages
        service         Manage DC/OS services
        task            Manage DC/OS tasks

Get detailed command description with 'dcos <command> --help'.
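For reference: dcos config set stores the value in the CLI configuration file, which is ~/.dcos/dcos.toml by default (see the environment variables printed by dcos help). The following is a minimal sketch of what the relevant section looks like, written to a local sample file; the exact layout may differ between CLI versions:

```shell
# Sketch: the [core] section after `dcos config set core.dcos_url http://m1.dcos`.
# We write a local sample instead of touching the real ~/.dcos/dcos.toml.
cat > dcos.toml.sample <<'EOF'
[core]
dcos_url = "http://m1.dcos"
EOF

# Inspect the value; roughly what `dcos config show core.dcos_url` reports:
grep dcos_url dcos.toml.sample
```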

Step 9: Receive Token from the DC/OS Master

Windows> dcos auth login

Please go to the following link in your browser:

    http://m1.dcos/login?redirect_uri=urn:ietf:wg:oauth:2.0:oob
Enter OpenID Connect ID Token:eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIm...-YqOARGFN5Ewcf6YWlw <-------(shortened)
Login successful! 

Here, I have copied the link marked in red and pasted it into the browser's URL field:

Then logged in as Google user:

-> 

-> I have signed in with Google

-> 

-> clicked Copy to Clipboard

-> pasted the clipboard content into the terminal (as already shown above, repeated here) and pressed <enter>:

Enter OpenID Connect ID Token:eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIm...-YqOARGFN5Ewcf6YWlw <-------(shortened)
Login successful!

With that, you make sure only you have access to the (virtual) cluster.

Step 10 (optional): Explore DC/OS and Marathon

With the dcos service command, we can see that Marathon is already running:

Windows> dcos service
NAME           HOST      ACTIVE  TASKS  CPU  MEM  DISK  ID
marathon  192.168.65.90   True     0    0.0  0.0  0.0   1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-0001

With dcos node we see that two (virtual) nodes are connected (as we might have noticed on the dashboard as well):

Windows> dcos node
   HOSTNAME           IP                           ID
192.168.65.111  192.168.65.111  1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2
192.168.65.60   192.168.65.60   1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S3

The first one is a1, the private agent, and the second one is p1, the public agent.

With dcos node log --leader we can check the Mesos master log:

Windows> dcos node log --leader
dcos-log is not supported
Falling back to files API...
I0309 13:11:45.152153  3217 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45654 with User-Agent='python-requests/2.10.0'
I0309 13:11:47.176911  3214 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45660 with User-Agent='python-requests/2.10.0'
I0309 13:11:48.039836  3214 http.cpp:390] HTTP GET for /master/state from 192.168.65.90:41141 with User-Agent='Mesos-State / Host: m1, Pid: 5258'
I0309 13:11:49.195853  3216 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45666 with User-Agent='python-requests/2.10.0'
I0309 13:11:51.216013  3217 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45672 with User-Agent='python-requests/2.10.0'
I0309 13:11:51.376802  3217 master.cpp:5478] Performing explicit task state reconciliation for 1 tasks of framework 1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-0001 (marathon) at scheduler-1a712a58-a49a-4c45-a89a-823b827a49bf@192.168.65.90:15101
I0309 13:11:53.236994  3217 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45678 with User-Agent='python-requests/2.10.0'
I0309 13:11:55.257347  3216 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45684 with User-Agent='python-requests/2.10.0'
I0309 13:11:57.274785  3217 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:45690 with User-Agent='python-requests/2.10.0'
I0309 13:11:57.462590  3213 http.cpp:390] HTTP GET for /master/state.json from 192.168.65.90:45704 with User-Agent='Mesos-DNS'

Finally, dcos help shows the following output:

Windows> dcos help
Description:
    The Mesosphere Datacenter Operating System (DC/OS) spans all of the machines in
your datacenter or cloud and treats them as a single, shared set of resources.

Usage:
    dcos [options] [<command>] [<args>...]

Options:
    --debug
        Enable debug mode.
    --help
        Print usage.
    --log-level=<log-level>
        Set the logging level. This setting does not affect the output sent to
        stdout. The severity levels are:
        * debug    Prints all messages.
        * info     Prints informational, warning, error, and critical messages.
        * warning  Prints warning, error, and critical messages.
        * error    Prints error and critical messages.
        * critical Prints only critical messages to stderr.
    --version
        Print version information

Environment Variables:
    DCOS_CONFIG
        Set the path to the DC/OS configuration file. By default, this variable
        is set to ~/.dcos/dcos.toml.
    DCOS_DEBUG
        Indicates whether to print additional debug messages to stdout. By
        default this is set to false.
    DCOS_LOG_LEVEL
        Prints log messages to stderr at or above the level indicated. This is
        equivalent to the --log-level command-line option.

You can also check the CLI documentation.

Step 11: Deploy a Hello World Service per GUI

If you follow steps 11 and 12, you will see in step 13 that the default networking settings are sub-optimal. You can skip steps 11 to 14 if you wish to create a hello service right away with improved networking, including load balancing.

Now we will create a Hello World Service. For that, log into the DC/OS, if not done already and navigate to Services:

-> Deploy Service (DC/OS)

-> 

Here we have chosen only 0.1 CPU, since Mesos is quite strict about resource reservations: the sum of CPUs reserved for the applications cannot exceed the number you have at hand, even if an application does not really need the resources. This is what we saw in my previous Mesos blog post, where we deployed hello world applications that only printed a “Hello World” once a second, yet each reserved one full CPU. With two CPUs available, I could not start more than two such hello world applications.

Let us deploy a container from the image nginxdemos/hello:

-> 

-> 

Now the Service is getting deployed:
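For reference, the same deployment can also be done from the CLI with a Marathon app definition. The following is a sketch under assumptions (the app id nginx-via-cli, the file name, and the exact field set are mine, not taken from the GUI wizard); it mirrors the 0.1 CPU reservation and the nginxdemos/hello image chosen above:

```shell
# Hypothetical Marathon app definition mirroring the GUI settings
# (the id and file name are illustrative assumptions).
cat > nginx.json <<'EOF'
{
  "id": "/nginx-via-cli",
  "cpus": 0.1,
  "mem": 32,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginxdemos/hello",
      "network": "HOST"
    }
  }
}
EOF

# Deploy it with the CLI we configured in step 8 (commented out here):
# dcos marathon app add nginx.json
```

Afterwards, dcos marathon app list should show the app alongside the one created via the GUI.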

Step 12: Connect to the NginX Service

When we click on the nginx-via-gui service name, we will see that the service is running on the private Mesos agent a1 on 192.168.65.111:

We can directly access the service by entering the private agent’s IP address 192.168.65.111 or its name a1.dcos in the browser’s URL field:

Here we can see that we have a quite simple networking model: the Windows host uses IP address 192.168.65.1 to reach the server on 192.168.65.111, which is the private Mesos agent’s IP address. The NginX container is just sharing the private agent’s network interface.

Because of the simple networking model, that was easier than expected:

  1. in other situations, you often need to configure port forwarding on the VirtualBox VM, but not this time: the Mesos agent is configured with a secondary Ethernet interface with host networking, which allows us to connect from the VirtualBox host to any port of the private agent without VirtualBox port forwarding.
  2. in other situations, you often need to configure a port mapping between the Docker container and the Docker host (the Mesos agent in this case). Why not this time? Let us explore this in more detail in the next optional step.

Step 13 (optional): Explore the Default Mesos Networking

While deploying the service, we have not reviewed the network tab yet. However, we can do this now by clicking on the service, then “Edit” and then “Network”:

The default network setting is “Host” networking, which means that the container shares the host’s network interface directly. The image we have chosen exposes port 80. This is why we can reach the service by entering the host’s name or IP address (with port 80) in the URL field of the browser.

Since the container re-uses the Docker host’s network interface, no port mapping is needed, as we can confirm with a docker ps command:

(Vagranthost)$ vagrant ssh a1
...
(a1)$ docker ps
CONTAINER ID        IMAGE                         COMMAND             CREATED             STATUS              PORTS               NAMES
cd5a068aaa28        oveits/docker-nginx-busybox   "/usr/sbin/nginx"   39 minutes ago      Up 39 minutes                           mesos-1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2.39067bbf-c4b6-448b-9eb9-975c050bcf57

We cannot see any port mapping here (the PORTS field is empty).

Note that the default network configuration does not allow us to scale the service: port 80 is already occupied.

 

Let us confirm this assumption by trying to scale the NginX service to two containers:

On Services -> Drop-down list right of name -> Scale

->choose 2 instances:

-> Scale Service

Now the service continually tries to start the second container, and its status toggles between Waiting, Running and Delayed:

Delayed

Running

Waiting

As expected, the second Docker container cannot start, because port 80 is already occupied on the Docker host. The error log shows:

I0324 11:23:01.820436 7765 exec.cpp:161] Version: 1.0.3
I0324 11:23:01.825763 7769 exec.cpp:236] Executor registered on agent 1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2
I0324 11:23:01.827263 7772 docker.cpp:815] Running docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 33554432 -e MARATHON_APP_VERSION=2017-03-24T18:18:00.202Z -e HOST=192.168.65.111 -e MARATHON_APP_RESOURCE_CPUS=0.1 -e MARATHON_APP_RESOURCE_GPUS=0 -e MARATHON_APP_DOCKER_IMAGE=oveits/docker-nginx-busybox -e PORT_10000=10298 -e MESOS_TASK_ID=nginx.ea26c7af-10be-11e7-9134-70b3d5800001 -e PORT=10298 -e MARATHON_APP_RESOURCE_MEM=32.0 -e PORTS=10298 -e MARATHON_APP_RESOURCE_DISK=2.0 -e MARATHON_APP_LABELS= -e MARATHON_APP_ID=/nginx -e PORT0=10298 -e LIBPROCESS_IP=192.168.65.111 -e MESOS_SANDBOX=/mnt/mesos/sandbox -e MESOS_CONTAINER_NAME=mesos-1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2.f752b208-f7d1-49d6-8cdd-cbb62eaf4768 -v /var/lib/mesos/slave/slaves/1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2/frameworks/1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-0001/executors/nginx.ea26c7af-10be-11e7-9134-70b3d5800001/runs/f752b208-f7d1-49d6-8cdd-cbb62eaf4768:/mnt/mesos/sandbox --net host --name mesos-1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-S2.f752b208-f7d1-49d6-8cdd-cbb62eaf4768 oveits/docker-nginx-busybox
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (2: No such file or directory)
2017/03/24 18:23:01 [emerg] 1#0: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2017/03/24 18:23:01 [notice] 1#0: try again to bind() after 500ms
2017/03/24 18:23:01 [emerg] 1#0: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2017/03/24 18:23:01 [notice] 1#0: try again to bind() after 500ms
2017/03/24 18:23:01 [emerg] 1#0: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2017/03/24 18:23:01 [notice] 1#0: try again to bind() after 500ms
2017/03/24 18:23:01 [emerg] 1#0: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2017/03/24 18:23:01 [notice] 1#0: try again to bind() after 500ms
2017/03/24 18:23:01 [emerg] 1#0: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2017/03/24 18:23:01 [notice] 1#0: try again to bind() after 500ms
2017/03/24 18:23:01 [emerg] 1#0: still could not bind()
nginx: [emerg] still could not bind()

This is not a good configuration. Can we choose a different type of networking when we start the service? Let us follow Step 14 to create the same service, this time in a scalable and load-balanced fashion:

Step 14: Deploy a Hello World Service per JSON with improved Networking and Load-Balancing

Step 14.1: Install Marathon Load Balancer

Step 14.1.1: Check, if Marathon LB is already installed

At the moment, the Marathon Load Balancer is not installed. This can be checked with the following DCOS CLI command:

(DCOS CLI Client)$ dcos package list
There are currently no installed packages. Please use `dcos package install` to install a package.

Step 14.1.2 (optional): Check Options of Marathon Package

Let us install the Marathon Load Balancer by following the version 1.8 documentation. First, we will have a look at the package (optional):

(DCOS CLI Client)$ dcos package describe --config marathon-lb
{
  "$schema": "http://json-schema.org/schema#",
  "properties": {
    "marathon-lb": {
      "properties": {
        "auto-assign-service-ports": {
          "default": false,
          "description": "Auto assign service ports for tasks which use IP-per-task. See https://github.com/mesosphere/marathon-lb#mesos-with-ip-per-task-support for details.",
          "type": "boolean"
        },
        "bind-http-https": {
          "default": true,
          "description": "Reserve ports 80 and 443 for the LB. Use this if you intend to use virtual hosts.",
          "type": "boolean"
        },
        "cpus": {
          "default": 2,
          "description": "CPU shares to allocate to each marathon-lb instance.",
          "minimum": 1,
          "type": "number"
        },
        "haproxy-group": {
          "default": "external",
          "description": "HAProxy group parameter. Matches with HAPROXY_GROUP in the app labels.",
          "type": "string"
        },
        "haproxy-map": {
          "default": true,
          "description": "Enable HAProxy VHost maps for fast VHost routing.",
          "type": "boolean"
        },
        "haproxy_global_default_options": {
          "default": "redispatch,http-server-close,dontlognull",
          "description": "Default global options for HAProxy.",
          "type": "string"
        },
        "instances": {
          "default": 1,
          "description": "Number of instances to run.",
          "minimum": 1,
          "type": "integer"
        },
        "marathon-uri": {
          "default": "http://marathon.mesos:8080",
          "description": "URI of Marathon instance",
          "type": "string"
        },
        "maximumOverCapacity": {
          "default": 0.2,
          "description": "Maximum over capacity.",
          "minimum": 0,
          "type": "number"
        },
        "mem": {
          "default": 1024.0,
          "description": "Memory (MB) to allocate to each marathon-lb task.",
          "minimum": 256.0,
          "type": "number"
        },
        "minimumHealthCapacity": {
          "default": 0.5,
          "description": "Minimum health capacity.",
          "minimum": 0,
          "type": "number"
        },
        "name": {
          "default": "marathon-lb",
          "description": "Name for this LB instance",
          "type": "string"
        },
        "role": {
          "default": "slave_public",
          "description": "Deploy marathon-lb only on nodes with this role.",
          "type": "string"
        },
        "secret_name": {
          "default": "",
          "description": "Name of the Secret Store credentials to use for DC/OS service authentication. This should be left empty unless service authentication is needed.",
          "type": "string"
        },
        "ssl-cert": {
          "description": "TLS Cert and private key for HTTPS.",
          "type": "string"
        },
        "strict-mode": {
          "default": false,
          "description": "Enable strict mode. This requires that you explicitly enable each backend with `HAPROXY_{n}_ENABLED=true`.",
          "type": "boolean"
        },
        "sysctl-params": {
          "default": "net.ipv4.tcp_tw_reuse=1 net.ipv4.tcp_fin_timeout=30 net.ipv4.tcp_max_syn_backlog=10240 net.ipv4.tcp_max_tw_buckets=400000 net.ipv4.tcp_max_orphans=60000 net.core.somaxconn=10000",
          "description": "sysctl params to set at startup for HAProxy.",
          "type": "string"
        },
        "template-url": {
          "default": "",
          "description": "URL to tarball containing a directory templates/ to customize haproxy config.",
          "type": "string"
        }
      },
      "required": [
        "cpus",
        "mem",
        "haproxy-group",
        "instances",
        "name"
      ],
      "type": "object"
    }
  },
  "type": "object"
}
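We will keep the defaults in the next step, but if you wanted to override any of the options above (for example to reduce the CPU and memory reservation on a small lab system), you could save a subset of them in a file; a sketch, assuming the hypothetical file name marathon-lb-options.json (note the schema minimums of 1 CPU and 256 MB):

```json
{
  "marathon-lb": {
    "cpus": 1,
    "mem": 512
  }
}
```

Such a file can then be passed at install time via `dcos package install marathon-lb --options=marathon-lb-options.json`.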

Step 14.1.3: Install and Check Marathon Load Balancer

We install the Marathon Package now. We will keep the default configuration:

$ dcos package install marathon-lb
We recommend at least 2 CPUs and 1GiB of RAM for each Marathon-LB instance.

*NOTE*: ```Enterprise Edition``` DC/OS requires setting up the Service Account in all security modes.
Follow these instructions to setup a Service Account: https://docs.mesosphere.com/administration/id-and-access-mgt/service-auth/mlb-auth/
Continue installing? [yes/no] yes
Installing Marathon app for package [marathon-lb] version [1.5.1]
Marathon-lb DC/OS Service has been successfully installed!
See https://github.com/mesosphere/marathon-lb for documentation.

Now let us check that the package is installed:

$ dcos package list
NAME VERSION APP COMMAND DESCRIPTION
marathon-lb 1.5.1 /marathon-lb --- HAProxy configured using Marathon state

We are able to see the load balancer service on the GUI as well:

After clicking on the marathon-lb service, then on the container, and scrolling down (see note), we see that the load balancer serves the ports 80, 443, 9090, 9091, and 10000 to 10100. We will use one of the high ports soon.

 

Note: scrolling is a little tricky at the moment; you might need to re-size the browser view with Ctrl+Minus or Ctrl+Plus to see the scroll bar on the right. Another possibility is to click into the black part of the browser page and use the arrow keys thereafter.

Port 9090 is used by the load balancer admin interface. We can see the statistics there:

Step 14.2: Create an Application using Marathon Load Balancer

Now let us follow these instructions to add a service that makes use of the Marathon Load Balancer:

Step 14.2.1: Define the Application’s Configuration File

Save the following file content as nginx-hostname-app.json:

{
   "id": "nginx-hostname",
   "container": {
     "type": "DOCKER",
     "docker": {
       "image": "nginxdemos/hello",
       "network": "BRIDGE",
       "portMappings": [
         { "hostPort": 0, "containerPort": 80, "servicePort": 10006 }
       ]
     }
   },
   "instances": 3,
   "cpus": 0.25,
   "mem": 100,
   "healthChecks": [{
       "protocol": "HTTP",
       "path": "/",
       "portIndex": 0,
       "timeoutSeconds": 2,
       "gracePeriodSeconds": 15,
       "intervalSeconds": 3,
       "maxConsecutiveFailures": 2
   }],
   "labels":{
     "HAPROXY_DEPLOYMENT_GROUP":"nginx-hostname",
     "HAPROXY_DEPLOYMENT_ALT_PORT":"10007",
     "HAPROXY_GROUP":"external",
     "HAPROXY_0_REDIRECT_TO_HTTPS":"true",
     "HAPROXY_0_VHOST": "192.168.65.111"
   }
}

If you are running in an environment other than the one we have created using Vagrant, you might need to adapt the IP address: replace 192.168.65.111 in HAPROXY_0_VHOST with your public agent’s IP address.

Step 14.2.2 Create Service using DCOS CLI

Now create the Marathon app using the DCOS CLI (in my case, I had not adapted the PATH variable yet, so I had to cd to the folder containing dcos.exe, “D:\veits\downloads\DCOS CLI\dcos.exe” in my case):

$ cd <folder_containing_dcos.exe> # needed, if dcos.exe is not in your PATH
$ dcos marathon app add full_path_to_nginx-hostname-app.json
Created deployment 63bac617-792c-488e-8489-80428b1c1e34
$ dcos marathon app list
ID               MEM   CPUS  TASKS  HEALTH  DEPLOYMENT  WAITING  CONTAINER  CMD                                         
/marathon-lb     1024   2     1/1    1/1       ---      False      DOCKER   ['sse', '-m', 'http://marathon.mesos:8080', '--health-check', '--haproxy-map', '--group', 'external']
/nginx-hostname  100   0.25   3/3    3/3       ---      False      DOCKER   None     
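Alternatively, instead of changing into the download folder each time, you can append that folder to your PATH; a minimal sketch for a bash shell, using a hypothetical install location:

```shell
# Hypothetical install location -- adjust to wherever dcos (or dcos.exe) lives
export PATH="$PATH:$HOME/downloads/dcos-cli"

# verify the folder is now part of the PATH
echo "$PATH" | tr ':' '\n' | grep dcos-cli
```

On Windows with Git Bash, a forward-slash path like /d/veits/downloads works the same way.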

On the GUI, under Services we find:

Marathon: Service: nginx-hostname

After clicking on the service name nginx-hostname, we see more details on the three healthy containers that have been started:

nginx-hostname: three containers running on the public agent 192.168.65.111

Now the service is reachable via curl from within the Mesos network (testing on the private agent a1):

(a1)$ curl http://marathon-lb.marathon.mesos:10006

But can we reach it from outside? Yes: marathon-lb.marathon.mesos is mapped to the public agent’s (p1) address 192.168.65.60 and we can reach http://192.168.65.60:10006 from the inside …

(a1)$ curl http://192.168.65.60:10006

…as well as from the outside:

NginX Hostname - Container 3

The image we have chosen returns the server name (i.e. the container ID), the server address and port as seen by the server (172.17.0.x with port 80), the requested URI (root), the date, and the client IP address and port.

When reloading the page via the browser’s reload button, the answering container will change randomly:

NginX Hostname - Container 2

NginX Hostname - Container 1

This proves that the requests are load-balanced between the three NginX containers and that the service can be reached from the machine hosting the public agent VirtualBox VM. In the next step, we will make sure that the NginX service can be reached from any machine in your local area network.

Step 15: Reaching the Server from the outside World

In the case of a physical machine as public agent, the service would already be reachable from the local area network (LAN). However, in our case, the public agent p1 is a VirtualBox VM using host networks. Since VirtualBox host networks are only reachable from the VirtualBox host, an additional step has to be taken if the service is to be reachable from the outside.

Note that the outside interface of the HAProxy on the public agent is attached to a VirtualBox host network 192.168.65.0/24. So, if you want to reach the service from the local area network, an additional mapping from an outside interface of the VirtualBox host to port 10006 of p1 is needed.

For that, choose

-> VirtualBox GUI

-> p1.dcos

-> Edit

-> Network

Then

-> Adapter1

-> Port Forwarding

-> Add (+)

-> choose a name and map a host port to the port 10006 we have used in the JSON file above:

-> OK

 

In this example, you will be able to reach the service on port 8081 via any reachable IP address of the VirtualBox host:

With that, the service is reachable from any machine in the local area network.

Appendix A: Virtualbox Installation Problem Resolution

  • On Windows 7 or Windows 10, download the installer. Easy.
  • When I start the installer, everything seems to be on track until I see “rolling back action” and I finally get this:
    “Oracle VM Virtualbox x.x.x Setup Wizard ended prematurely”

Resolution of the “Setup Wizard ended prematurely” Problem

Let us try to resolve the problem: the Virtualbox installer downloaded directly from Oracle shows the exact same error (“…ended prematurely”), so this is not a Docker bug. Playing with conversion tools from Virtualbox to VMware did not lead to the desired results.

The solution: Google is your friend, and the winner is https://forums.virtualbox.org/viewtopic.php?f=6&t=61785. After backing up the registry and changing the registry entry

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Network -> MaxFilters from 8 to 20 (decimal)

and rebooting the laptop, the installation of Virtualbox succeeds.

Note: while this workaround has worked on my Windows 7 notebook, it has not worked on my new Windows 10 machine. However, I have managed to install VirtualBox on Windows 10 by de-selecting the USB support module during the VirtualBox installation process. I remember having seen a forum post pointing to that workaround, with the additional information that the USB drivers were installed automatically at the first time a USB device was added to a host (not yet tested on my side).

Appendix B: dcos node log --leader results in “No files exist. Exiting.” Message

Days later, I have tried again:

dcos node log --leader
dcos-log is not supported
Falling back to files API...
No files exist. Exiting.

The reason is that the token has expired, as a dcos service command reveals:

Windows> dcos service
Your core.dcos_acs_token is invalid. Please run: `dcos auth login`

So let us log in again:

Windows> dcos auth login

Please go to the following link in your browser:

http://m1.dcos/login?redirect_uri=urn:ietf:wg:oauth:2.0:oob

Enter OpenID Connect ID Token:(paste in the key here)
Login successful!

Now we can try again:

Windows> dcos node log --leader
dcos-log is not supported
Falling back to files API...
I0324 09:36:18.030959 4042 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49308 with User-Agent='python-requests/2.10.0'
I0324 09:36:18.285975 4047 master.cpp:5478] Performing explicit task state reconciliation for 1 tasks of framework 1d3a11d0-1c3e-4ec2-8485-d1a3aa43c465-0001 (marathon) at scheduler-908fbaff-5dd6-4089-a417-c10c068f5d85@192.168.65.90:15101
I0324 09:36:20.054447 4047 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49314 with User-Agent='python-requests/2.10.0'
I0324 09:36:22.072386 4044 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49320 with User-Agent='python-requests/2.10.0'
I0324 09:36:22.875411 4041 http.cpp:390] HTTP GET for /master/slaves from 192.168.65.90:49324 with User-Agent='Go-http-client/1.1'
I0324 09:36:24.083292 4041 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49336 with User-Agent='python-requests/2.10.0'
I0324 09:36:26.091071 4047 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49346 with User-Agent='python-requests/2.10.0'
I0324 09:36:28.099954 4047 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49352 with User-Agent='python-requests/2.10.0'
I0324 09:36:29.773558 4047 http.cpp:390] HTTP GET for /master/state.json from 192.168.65.90:49354 with User-Agent='Mesos-DNS'
I0324 09:36:30.116576 4046 http.cpp:390] HTTP GET for /master/state-summary from 192.168.65.90:49360 with User-Agent='python-requests/2.10.0'

Appendix C: Finding the DC/OS Version

Get DC/OS Version (found via this Mesosphere help desk page):

$ curl http://m1/dcos-metadata/dcos-version.json
{
 "version": "1.8.8",
 "dcos-image-commit": "602edc1b4da9364297d166d4857fc8ed7b0b65ca",
 "bootstrap-id": "5df43052907c021eeb5de145419a3da1898c58a5"
}
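If you want to process the version programmatically (e.g. in a script), you can extract the version field from the JSON response; a sketch that recreates the response above in a temporary file (a live system would instead pipe the curl output from above):

```shell
# recreate the JSON response shown above
cat > /tmp/dcos-version.json <<'EOF'
{
 "version": "1.8.8",
 "dcos-image-commit": "602edc1b4da9364297d166d4857fc8ed7b0b65ca",
 "bootstrap-id": "5df43052907c021eeb5de145419a3da1898c58a5"
}
EOF

# extract the value of the "version" field
grep -o '"version": *"[^"]*"' /tmp/dcos-version.json | cut -d'"' -f4
```

This prints 1.8.8; on a system with jq installed, `jq -r .version` would be the more robust alternative.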

Appendix D: Error Message, when changing Service Name

If you see the following message when editing a service:

requirement failed: IP address (Some(IpAddress(List(),Map(),DiscoveryInfo(List()),Some(dcos)))) and ports (List(PortDefinition(0,tcp,None,Map()))) are not allowed at the same time
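Judging from the message, this happens when the service definition ends up containing both an ipAddress section and port definitions at the same time; roughly a shape like the following (hypothetical id; field names inferred from the error message):

```json
{
  "id": "/my-service",
  "ipAddress": { "networkName": "dcos" },
  "portDefinitions": [
    { "port": 0, "protocol": "tcp" }
  ]
}
```

Marathon rejects such a definition, which is why re-creating the service cleanly (below) works around the problem.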

 

Workaround: Destroy and Re-Create Service

Destroy the service and create a new service like follows:

Copy the original service in JSON format (service -> edit -> choose JSON Mode in the upper right corner -> ctrl-a ctrl-c -> Cancel)

Create new service

-> Services
-> Deploy Service
-> Edit
-> JSON Mode
-> click into text field
-> ctrl-a ctrl-v
-> edit ID and VIP_0 (the names should be the same: here “nginx-dcos-network-load-balanced-wo-marathon-lb”)

-> Deploy

 

Next Steps

  • Explore the multi-tenant capabilities of DC/OS and Mesos/Marathon: can I use the same infrastructure for more than one customer?
    • Separate Logins, customer A should not see resources of customer B
    • Shared resources and separate resource reservations (pool) for the customers
    • Strict resource reservation vs. scheduler based resource reservation
    • Comparison with OpenShift: does OpenShift offer a resource reservation?
  • Running Jenkins on Mesos/Marathon as a Mesos job
    • Docker socket usage

Jenkins Part 6: Automated Cross Browser Testing via BrowserStack


With the BrowserStack cloud-based solution, there is no need to buy many different hardware types for testing your web site on many different mobile devices and operating systems. In this blog post, we will learn how to integrate BrowserStack-based automated cross browser tests into a continuous integration workflow controlled by the popular Jenkins tool.

Jenkins + BrowserStack

First we will demonstrate how to use BrowserStack manually, before we automate the browser tests with the help of a Protractor Github example from BrowserStack. You will need to sign into a BrowserStack trial account with 30 minutes free manual testing and 100 minutes free automated testing. For this tutorial, we will need less than 6 minutes automated testing time.

Moreover, we will integrate the BrowserStack based tests into a Jenkins build job. At the end we will generate individual and trend Jenkins test reports with the help of the Jasmine reporting tool.

Note: The difference to my previous blog post is that this time I have concentrated on Protractor without Gulp on Ubuntu. More importantly, I have added the Jenkins integration. In addition, you will find descriptions of many possible errors and their resolution in the appendix.


Tools and Versions used

  • Vagrant 1.8.6
  • Virtualbox 5.0.20
  • Docker 1.12.1
  • Jenkins 2.32.2
  • Job DSL Plugin 1.58
  • for Windows: GNU bash, version 4.3.42(5)-release (x86_64-pc-msys)

Prerequisites (for the Jenkins part):

  • Free DRAM for the Docker Host VM: ~4 GB or more
  • Docker Host is available. If not, follow the “Prerequisite Step” below.
  • Tested with 2 vCPU (1 vCPU might work as well).

Getting Acquainted with BrowserStack

After signing up for a BrowserStack account, you get a 30 minute free live testing session. Start a local browser, and connect to the BrowserStack start URL. You will be asked to install a Browser plugin, which will take the role of a BrowserStack client.

You can choose any of the many operating systems and browser types

2017-03-04-17_14_58-dashboard

Note that you can interrupt the session at any time by clicking Stop on the left (I had overlooked that, so I wasted most of my 30 free minutes…)

Now you type in the URL you want to load:

Jenkins running on an iOS Simulator on BrowserStack

As you can see, I have typed in localhost:8080 on the remote iOS simulator running in the BrowserStack cloud. However, the browser is loading the Jenkins server page, which is running on my local notebook. The browser does not actually try to load localhost (i.e. the iOS device the browser is running on). Instead, the HTTP request is directed to the locally running Chrome plugin, which then resolves the DNS name “localhost” locally. This is called local testing, which we will explore in more detail now, before we start our step-by-step guide.

About BrowserStack Local Testing

Establishing a Tunnel

Local testing means that the Jenkins server connects to BrowserStack.com via a tunnel: the browser is running in the cloud, but all traffic from the browser to the system under test is relayed by the local BrowserStack client running on the Jenkins server:

2017-02-28-18_41_25
Steps to establish a tunnel between BrowserStack client and the repeater/proxy in the BrowserStack Cloud

Local Testing

2017-02-28-18_46_10
Steps to run a browser in the BrowserStack Cloud. All requests from the browser are proxied by the repeater and the BrowserStack client before being sent to the local system under test.
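In an automated test, the pieces sketched above map to a few Selenium capabilities; a sketch of the relevant fragment (placeholder credentials; key names as used by BrowserStack’s Selenium integration):

```json
{
  "browserName": "chrome",
  "browserstack.user": "YOUR_USERNAME",
  "browserstack.key": "YOUR_ACCESS_KEY",
  "browserstack.local": "true"
}
```

Setting browserstack.local to true is what tells the cloud browser to route its requests through the tunnel to the local BrowserStack client, so that names like localhost resolve on the Jenkins side.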

Prerequisite Step: Create a Docker Host via Vagrant

In case you do not have a Docker host at hand, you may want to follow Steps 1 and 2 of part 1 of my Jenkins tutorial series. After having tried out many options to install Docker, the Vagrant way of installing a Docker host is my favorite…

Part 1: Automatic Testing via BrowserStack

As an introduction, this part will show how to perform automated BrowserStack testing from command line without the need to use Jenkins.

Part 1 of this blog post is not a prerequisite to run part 2, which performs similar steps (and more) in the Jenkins way.

Step 1.1: Sign up for BrowserStack

To complete the steps of this tutorial, you need to sign up for a BrowserStack account. Pricing information can be found here; however, I did not need to sign up for any of the paid plans.

Step 1.2: Run Ubuntu Docker Container

I have looked for a simple Protractor example and have found BrowserStack’s Protractor example on GitHub. Let us run it in an Ubuntu 16.04 Docker container, since the official Jenkins Dockerhub image seems to be based on a system that understands apt-get (see the Docker image layer visualizer).

Let us start a recent Ubuntu 16.04 container:

(dockerhost)$ sudo docker run -it ubuntu:16.04 bash 
(container)# mkdir /app; cd /app

Step 1.3: Install Node.js, NPM and GIT

We will need to install Node.js, NPM and GIT:

(container)$ apt-get update && apt-get install -y nodejs npm git
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main Sources [1103 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/restricted Sources [5179 B]
Get:6 http://archive.ubuntu.com/ubuntu xenial/universe Sources [9802 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1558 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial/restricted amd64 Packages [14.1 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [9827 kB]
Get:10 http://archive.ubuntu.com/ubuntu xenial-updates/main Sources [296 kB]
Get:11 http://archive.ubuntu.com/ubuntu xenial-updates/restricted Sources [2815 B]
Get:12 http://archive.ubuntu.com/ubuntu xenial-updates/universe Sources [176 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [623 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/restricted amd64 Packages [12.4 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [546 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial-security/main Sources [75.0 kB]
Get:17 http://archive.ubuntu.com/ubuntu xenial-security/restricted Sources [2392 B]
Get:18 http://archive.ubuntu.com/ubuntu xenial-security/universe Sources [27.0 kB]
Get:19 http://archive.ubuntu.com/ubuntu xenial-security/main amd64 Packages [282 kB]
Get:20 http://archive.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [12.0 kB]
Get:21 http://archive.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [113 kB]
Fetched 24.9 MB in 3min 13s (129 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
 binutils build-essential bzip2 ca-certificates cpp cpp-5 dpkg-dev fakeroot file g++ g++-5 gcc gcc-5 git-man gyp
 ifupdown iproute2 isc-dhcp-client isc-dhcp-common javascript-common krb5-locales less libalgorithm-diff-perl
 libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan2 libasn1-8-heimdal libatm1 libatomic1 libbsd0 libc-dev-bin
 libc6-dev libcc1-0 libcilkrts5 libcurl3-gnutls libdns-export162 libdpkg-perl libedit2 liberror-perl libexpat1
 libfakeroot libffi6 libfile-fcntllock-perl libgcc-5-dev libgdbm3 libgmp10 libgnutls30 libgomp1 libgssapi-krb5-2
 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal libhogweed4 libhx509-5-heimdal
 libicu55 libidn11 libisc-export160 libisl15 libitm1 libjs-inherits libjs-jquery libjs-node-uuid libjs-underscore
 libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0 libldap-2.4-2 liblsan0 libmagic1 libmnl0
 libmpc3 libmpfr4 libmpx0 libnettle6 libp11-kit0 libperl5.22 libpopt0 libpython-stdlib libpython2.7-minimal
 libpython2.7-stdlib libquadmath0 libroken18-heimdal librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db
 libsqlite3-0 libssl-dev libssl-doc libssl1.0.0 libstdc++-5-dev libtasn1-6 libtsan0 libubsan0 libuv1 libuv1-dev
 libwind0-heimdal libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxmuu1 libxtables11 linux-libc-dev make
 manpages manpages-dev mime-support netbase node-abbrev node-ansi node-ansi-color-table node-archy node-async
 node-block-stream node-combined-stream node-cookie-jar node-delayed-stream node-forever-agent node-form-data
 node-fstream node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-inherits node-ini
 node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream
 node-node-uuid node-nopt node-normalize-package-data node-npmlog node-once node-osenv node-qs node-read
 node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-sigmund node-slide node-tar
 node-tunnel-agent node-underscore node-which nodejs-dev openssh-client openssl patch perl perl-modules-5.22 python
 python-minimal python-pkg-resources python2.7 python2.7-minimal rename rsync xauth xz-utils zlib1g-dev
Suggested packages:
 binutils-doc bzip2-doc cpp-doc gcc-5-locales debian-keyring g++-multilib g++-5-multilib gcc-5-doc libstdc++6-5-dbg
 gcc-multilib autoconf automake libtool flex bison gdb gcc-doc gcc-5-multilib libgcc1-dbg libgomp1-dbg libitm1-dbg
 libatomic1-dbg libasan2-dbg liblsan0-dbg libtsan0-dbg libubsan0-dbg libcilkrts5-dbg libmpx0-dbg libquadmath0-dbg
 gettext-base git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-arch git-cvs
 git-mediawiki git-svn ppp rdnssd iproute2-doc resolvconf avahi-autoipd isc-dhcp-client-ddns apparmor apache2
 | lighttpd | httpd glibc-doc gnutls-bin krb5-doc krb5-user libsasl2-modules-otp libsasl2-modules-ldap
 libsasl2-modules-sql libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal libstdc++-5-doc make-doc
 man-browser node-hawk node-aws-sign node-oauth-sign node-http-signature debhelper ssh-askpass libpam-ssh keychain
 monkeysphere ed diffutils-doc perl-doc libterm-readline-gnu-perl | libterm-readline-perl-perl python-doc python-tk
 python-setuptools python2.7-doc binfmt-support openssh-server
The following NEW packages will be installed:
 binutils build-essential bzip2 ca-certificates cpp cpp-5 dpkg-dev fakeroot file g++ g++-5 gcc gcc-5 git git-man gyp
 ifupdown iproute2 isc-dhcp-client isc-dhcp-common javascript-common krb5-locales less libalgorithm-diff-perl
 libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan2 libasn1-8-heimdal libatm1 libatomic1 libbsd0 libc-dev-bin
 libc6-dev libcc1-0 libcilkrts5 libcurl3-gnutls libdns-export162 libdpkg-perl libedit2 liberror-perl libexpat1
 libfakeroot libffi6 libfile-fcntllock-perl libgcc-5-dev libgdbm3 libgmp10 libgnutls30 libgomp1 libgssapi-krb5-2
 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal libhogweed4 libhx509-5-heimdal
 libicu55 libidn11 libisc-export160 libisl15 libitm1 libjs-inherits libjs-jquery libjs-node-uuid libjs-underscore
 libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0 libldap-2.4-2 liblsan0 libmagic1 libmnl0
 libmpc3 libmpfr4 libmpx0 libnettle6 libp11-kit0 libperl5.22 libpopt0 libpython-stdlib libpython2.7-minimal
 libpython2.7-stdlib libquadmath0 libroken18-heimdal librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db
 libsqlite3-0 libssl-dev libssl-doc libssl1.0.0 libstdc++-5-dev libtasn1-6 libtsan0 libubsan0 libuv1 libuv1-dev
 libwind0-heimdal libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxmuu1 libxtables11 linux-libc-dev make
 manpages manpages-dev mime-support netbase node-abbrev node-ansi node-ansi-color-table node-archy node-async
 node-block-stream node-combined-stream node-cookie-jar node-delayed-stream node-forever-agent node-form-data
 node-fstream node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-inherits node-ini
 node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream
 node-node-uuid node-nopt node-normalize-package-data node-npmlog node-once node-osenv node-qs node-read
 node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-sigmund node-slide node-tar
 node-tunnel-agent node-underscore node-which nodejs nodejs-dev npm openssh-client openssl patch perl
 perl-modules-5.22 python python-minimal python-pkg-resources python2.7 python2.7-minimal rename rsync xauth xz-utils
 zlib1g-dev
0 upgraded, 179 newly installed, 0 to remove and 2 not upgraded.
Need to get 79.4 MB of archives.
After this operation, 337 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libatm1 amd64 1:2.5.1-1.5 [24.2 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmnl0 amd64 1.0.3-5 [12.0 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 libpopt0 amd64 1.16-10 [26.0 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgdbm3 amd64 1.8.3-13.1 [16.9 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxau6 amd64 1:1.0.8-1 [8376 B]
Get:6 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxdmcp6 amd64 1:1.1.2-1.1 [11.0 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxcb1 amd64 1.11.1-1ubuntu1 [40.0 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial/main amd64 libx11-data all 2:1.6.3-1ubuntu2 [113 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial/main amd64 libx11-6 amd64 2:1.6.3-1ubuntu2 [571 kB]
Get:10 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxext6 amd64 2:1.3.3-1 [29.4 kB]
Get:11 http://archive.ubuntu.com/ubuntu xenial/main amd64 perl-modules-5.22 all 5.22.1-9 [2641 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial/main amd64 libperl5.22 amd64 5.22.1-9 [3371 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial/main amd64 perl amd64 5.22.1-9 [237 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7-minimal amd64 2.7.12-1ubuntu0~16.04.1 [339 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python2.7-minimal amd64 2.7.12-1ubuntu0~16.04.1 [1295 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-minimal amd64 2.7.11-1 [28.2 kB]
Get:17 http://archive.ubuntu.com/ubuntu xenial/main amd64 mime-support all 3.59ubuntu1 [31.0 kB]
Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libexpat1 amd64 2.1.0-7ubuntu0.16.04.2 [71.3 kB]
Get:19 http://archive.ubuntu.com/ubuntu xenial/main amd64 libffi6 amd64 3.2.1-4 [17.8 kB]
Get:20 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsqlite3-0 amd64 3.11.0-1ubuntu1 [396 kB]
Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl1.0.0 amd64 1.0.2g-1ubuntu4.6 [1082 kB]
Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7-stdlib amd64 2.7.12-1ubuntu0~16.04.1 [1884 kB]
Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python2.7 amd64 2.7.12-1ubuntu0~16.04.1 [224 kB]
Get:24 http://archive.ubuntu.com/ubuntu xenial/main amd64 libpython-stdlib amd64 2.7.11-1 [7656 B]
Get:25 http://archive.ubuntu.com/ubuntu xenial/main amd64 python amd64 2.7.11-1 [137 kB]
Get:26 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgmp10 amd64 2:6.1.0+dfsg-2 [240 kB]
Get:27 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmpfr4 amd64 3.1.4-1 [191 kB]
Get:28 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmpc3 amd64 1.0.3-1 [39.7 kB]
Get:29 http://archive.ubuntu.com/ubuntu xenial/main amd64 bzip2 amd64 1.0.6-8 [32.7 kB]
Get:30 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmagic1 amd64 1:5.25-2ubuntu1 [216 kB]
Get:31 http://archive.ubuntu.com/ubuntu xenial/main amd64 file amd64 1:5.25-2ubuntu1 [21.2 kB]
Get:32 http://archive.ubuntu.com/ubuntu xenial/main amd64 iproute2 amd64 4.3.0-1ubuntu3 [522 kB]
Get:33 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ifupdown amd64 0.8.10ubuntu1.2 [54.9 kB]
Get:34 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libisc-export160 amd64 1:9.10.3.dfsg.P4-8ubuntu1.5 [153 kB]
Get:35 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdns-export162 amd64 1:9.10.3.dfsg.P4-8ubuntu1.5 [665 kB]
Get:36 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 isc-dhcp-client amd64 4.3.3-5ubuntu12.6 [223 kB]
Get:37 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 isc-dhcp-common amd64 4.3.3-5ubuntu12.6 [105 kB]
Get:38 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 less amd64 481-2.1ubuntu0.1 [110 kB]
Get:39 http://archive.ubuntu.com/ubuntu xenial/main amd64 libbsd0 amd64 0.8.2-1 [41.7 kB]
Get:40 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libnettle6 amd64 3.2-1ubuntu0.16.04.1 [93.5 kB]
Get:41 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libhogweed4 amd64 3.2-1ubuntu0.16.04.1 [136 kB]
Get:42 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libidn11 amd64 1.32-3ubuntu1.1 [45.6 kB]
Get:43 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libp11-kit0 amd64 0.23.2-5~ubuntu16.04.1 [105 kB]
Get:44 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libtasn1-6 amd64 4.7-3ubuntu0.16.04.1 [43.2 kB]
Get:45 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgnutls30 amd64 3.4.10-4ubuntu1.2 [547 kB]
Get:46 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxtables11 amd64 1.6.0-2ubuntu3 [27.2 kB]
Get:47 http://archive.ubuntu.com/ubuntu xenial/main amd64 netbase all 5.3 [12.9 kB]
Get:48 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssl amd64 1.0.2g-1ubuntu4.6 [492 kB]
Get:49 http://archive.ubuntu.com/ubuntu xenial/main amd64 ca-certificates all 20160104ubuntu1 [191 kB]
Get:50 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 krb5-locales all 1.13.2+dfsg-5ubuntu2 [13.2 kB]
Get:51 http://archive.ubuntu.com/ubuntu xenial/main amd64 libroken18-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [41.2 kB]
Get:52 http://archive.ubuntu.com/ubuntu xenial/main amd64 libasn1-8-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [174 kB]
Get:53 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkrb5support0 amd64 1.13.2+dfsg-5ubuntu2 [30.8 kB]
Get:54 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libk5crypto3 amd64 1.13.2+dfsg-5ubuntu2 [81.2 kB]
Get:55 http://archive.ubuntu.com/ubuntu xenial/main amd64 libkeyutils1 amd64 1.5.9-8ubuntu1 [9904 B]
Get:56 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkrb5-3 amd64 1.13.2+dfsg-5ubuntu2 [273 kB]
Get:57 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgssapi-krb5-2 amd64 1.13.2+dfsg-5ubuntu2 [120 kB]
Get:58 http://archive.ubuntu.com/ubuntu xenial/main amd64 libhcrypto4-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [84.9 kB]
Get:59 http://archive.ubuntu.com/ubuntu xenial/main amd64 libheimbase1-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [29.2 kB]
Get:60 http://archive.ubuntu.com/ubuntu xenial/main amd64 libwind0-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [48.2 kB]
Get:61 http://archive.ubuntu.com/ubuntu xenial/main amd64 libhx509-5-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [107 kB]
Get:62 http://archive.ubuntu.com/ubuntu xenial/main amd64 libkrb5-26-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [202 kB]
Get:63 http://archive.ubuntu.com/ubuntu xenial/main amd64 libheimntlm0-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [15.1 kB]
Get:64 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgssapi3-heimdal amd64 1.7~git20150920+dfsg-4ubuntu1 [96.1 kB]
Get:65 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsasl2-modules-db amd64 2.1.26.dfsg1-14build1 [14.5 kB]
Get:66 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsasl2-2 amd64 2.1.26.dfsg1-14build1 [48.7 kB]
Get:67 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libldap-2.4-2 amd64 2.4.42+dfsg-2ubuntu3.1 [161 kB]
Get:68 http://archive.ubuntu.com/ubuntu xenial/main amd64 librtmp1 amd64 2.4+20151223.gitfa8646d-1build1 [53.9 kB]
Get:69 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcurl3-gnutls amd64 7.47.0-1ubuntu2.2 [184 kB]
Get:70 http://archive.ubuntu.com/ubuntu xenial/main amd64 libedit2 amd64 3.1-20150325-1ubuntu2 [76.5 kB]
Get:71 http://archive.ubuntu.com/ubuntu xenial/main amd64 libicu55 amd64 55.1-7 [7643 kB]
Get:72 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsasl2-modules amd64 2.1.26.dfsg1-14build1 [47.5 kB]
Get:73 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxmuu1 amd64 2:1.1.2-2 [9674 B]
Get:74 http://archive.ubuntu.com/ubuntu xenial/main amd64 manpages all 4.04-2 [1087 kB]
Get:75 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssh-client amd64 1:7.2p2-4ubuntu2.1 [587 kB]
Get:76 http://archive.ubuntu.com/ubuntu xenial/main amd64 rsync amd64 3.1.1-3ubuntu1 [325 kB]
Get:77 http://archive.ubuntu.com/ubuntu xenial/main amd64 xauth amd64 1:1.0.9-1ubuntu2 [22.7 kB]
Get:78 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 binutils amd64 2.26.1-1ubuntu1~16.04.3 [2310 kB]
Get:79 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libc-dev-bin amd64 2.23-0ubuntu5 [68.7 kB]
Get:80 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-libc-dev amd64 4.4.0-66.87 [833 kB]
Get:81 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libc6-dev amd64 2.23-0ubuntu5 [2078 kB]
Get:82 http://archive.ubuntu.com/ubuntu xenial/main amd64 libisl15 amd64 0.16.1-1 [524 kB]
Get:83 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cpp-5 amd64 5.4.0-6ubuntu1~16.04.4 [7653 kB]
Get:84 http://archive.ubuntu.com/ubuntu xenial/main amd64 cpp amd64 4:5.3.1-1ubuntu1 [27.7 kB]
Get:85 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcc1-0 amd64 5.4.0-6ubuntu1~16.04.4 [38.8 kB]
Get:86 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgomp1 amd64 5.4.0-6ubuntu1~16.04.4 [55.0 kB]
Get:87 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libitm1 amd64 5.4.0-6ubuntu1~16.04.4 [27.4 kB]
Get:88 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libatomic1 amd64 5.4.0-6ubuntu1~16.04.4 [8912 B]
Get:89 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libasan2 amd64 5.4.0-6ubuntu1~16.04.4 [264 kB]
Get:90 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 liblsan0 amd64 5.4.0-6ubuntu1~16.04.4 [105 kB]
Get:91 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libtsan0 amd64 5.4.0-6ubuntu1~16.04.4 [244 kB]
Get:92 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libubsan0 amd64 5.4.0-6ubuntu1~16.04.4 [95.3 kB]
Get:93 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcilkrts5 amd64 5.4.0-6ubuntu1~16.04.4 [40.1 kB]
Get:94 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libmpx0 amd64 5.4.0-6ubuntu1~16.04.4 [9766 B]
Get:95 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libquadmath0 amd64 5.4.0-6ubuntu1~16.04.4 [131 kB]
Get:96 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgcc-5-dev amd64 5.4.0-6ubuntu1~16.04.4 [2237 kB]
Get:97 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gcc-5 amd64 5.4.0-6ubuntu1~16.04.4 [8577 kB]
Get:98 http://archive.ubuntu.com/ubuntu xenial/main amd64 gcc amd64 4:5.3.1-1ubuntu1 [5244 B]
Get:99 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libstdc++-5-dev amd64 5.4.0-6ubuntu1~16.04.4 [1426 kB]
Get:100 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 g++-5 amd64 5.4.0-6ubuntu1~16.04.4 [8300 kB]
Get:101 http://archive.ubuntu.com/ubuntu xenial/main amd64 g++ amd64 4:5.3.1-1ubuntu1 [1504 B]
Get:102 http://archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB]
Get:103 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdpkg-perl all 1.18.4ubuntu1.1 [195 kB]
Get:104 http://archive.ubuntu.com/ubuntu xenial/main amd64 xz-utils amd64 5.1.1alpha+20120614-2ubuntu2 [78.8 kB]
Get:105 http://archive.ubuntu.com/ubuntu xenial/main amd64 patch amd64 2.7.5-1 [90.4 kB]
Get:106 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dpkg-dev all 1.18.4ubuntu1.1 [584 kB]
Get:107 http://archive.ubuntu.com/ubuntu xenial/main amd64 build-essential amd64 12.1ubuntu2 [4758 B]
Get:108 http://archive.ubuntu.com/ubuntu xenial/main amd64 libfakeroot amd64 1.20.2-1ubuntu1 [25.5 kB]
Get:109 http://archive.ubuntu.com/ubuntu xenial/main amd64 fakeroot amd64 1.20.2-1ubuntu1 [61.8 kB]
Get:110 http://archive.ubuntu.com/ubuntu xenial/main amd64 liberror-perl all 0.17-1.2 [19.6 kB]
Get:111 http://archive.ubuntu.com/ubuntu xenial/main amd64 git-man all 1:2.7.4-0ubuntu1 [735 kB]
Get:112 http://archive.ubuntu.com/ubuntu xenial/main amd64 git amd64 1:2.7.4-0ubuntu1 [3006 kB]
Get:113 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-pkg-resources all 20.7.0-1 [108 kB]
Get:114 http://archive.ubuntu.com/ubuntu xenial/universe amd64 gyp all 0.1+20150913git1f374df9-1ubuntu1 [265 kB]
Get:115 http://archive.ubuntu.com/ubuntu xenial/main amd64 javascript-common all 11 [6066 B]
Get:116 http://archive.ubuntu.com/ubuntu xenial/main amd64 libalgorithm-diff-perl all 1.19.03-1 [47.6 kB]
Get:117 http://archive.ubuntu.com/ubuntu xenial/main amd64 libalgorithm-diff-xs-perl amd64 0.04-4build1 [11.0 kB]
Get:118 http://archive.ubuntu.com/ubuntu xenial/main amd64 libalgorithm-merge-perl all 0.08-3 [12.0 kB]
Get:119 http://archive.ubuntu.com/ubuntu xenial/main amd64 libfile-fcntllock-perl amd64 0.22-3 [32.0 kB]
Get:120 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjs-jquery all 1.11.3+dfsg-4 [161 kB]
Get:121 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libjs-node-uuid all 1.4.0-1 [11.1 kB]
Get:122 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjs-underscore all 1.7.0~dfsg-1ubuntu1 [46.7 kB]
Get:123 http://archive.ubuntu.com/ubuntu xenial/main amd64 zlib1g-dev amd64 1:1.2.8.dfsg-2ubuntu4 [168 kB]
Get:124 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl-dev amd64 1.0.2g-1ubuntu4.6 [1344 kB]
Get:125 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl-doc all 1.0.2g-1ubuntu4.6 [1079 kB]
Get:126 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libuv1 amd64 1.8.0-1 [57.4 kB]
Get:127 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libuv1-dev amd64 1.8.0-1 [74.7 kB]
Get:128 http://archive.ubuntu.com/ubuntu xenial/main amd64 manpages-dev all 4.04-2 [2048 kB]
Get:129 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 nodejs amd64 4.2.6~dfsg-1ubuntu4.1 [3161 kB]
Get:130 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-async all 0.8.0-1 [22.2 kB]
Get:131 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-node-uuid all 1.4.0-1 [2530 B]
Get:132 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-underscore all 1.7.0~dfsg-1ubuntu1 [3780 B]
Get:133 http://archive.ubuntu.com/ubuntu xenial/main amd64 rename all 0.20-4 [12.0 kB]
Get:134 http://archive.ubuntu.com/ubuntu xenial/universe amd64 libjs-inherits all 2.0.1-3 [2794 B]
Get:135 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-abbrev all 1.0.5-2 [3592 B]
Get:136 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-ansi all 0.3.0-2 [8590 B]
Get:137 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-ansi-color-table all 1.0.0-1 [4478 B]
Get:138 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-archy all 0.0.2-1 [3660 B]
Get:139 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-inherits all 2.0.1-3 [3060 B]
Get:140 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-block-stream all 0.0.7-1 [4832 B]
Get:141 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-delayed-stream all 0.0.5-1 [4750 B]
Get:142 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-combined-stream all 0.0.5-1 [4958 B]
Get:143 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-cookie-jar all 0.3.1-1 [3746 B]
Get:144 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-forever-agent all 0.5.1-1 [3194 B]
Get:145 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-mime all 1.3.4-1 [11.9 kB]
Get:146 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-form-data all 0.1.0-1 [6412 B]
Get:147 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-rimraf all 2.2.8-1 [5702 B]
Get:148 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-mkdirp all 0.5.0-1 [4690 B]
Get:149 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-graceful-fs all 3.0.2-1 [7102 B]
Get:150 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-fstream all 0.1.24-1 [19.5 kB]
Get:151 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-lru-cache all 2.3.1-1 [5674 B]
Get:152 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-sigmund all 1.0.0-1 [3818 B]
Get:153 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-minimatch all 1.0.0-1 [14.0 kB]
Get:154 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-fstream-ignore all 0.0.6-2 [5586 B]
Get:155 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-github-url-from-git all 1.1.1-1 [3138 B]
Get:156 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-once all 1.1.1-1 [2608 B]
Get:157 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-glob all 4.0.5-1 [13.2 kB]
Get:158 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 nodejs-dev amd64 4.2.6~dfsg-1ubuntu4.1 [265 kB]
Get:159 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-nopt all 3.0.1-1 [9544 B]
Get:160 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-npmlog all 0.0.4-1 [5844 B]
Get:161 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-osenv all 0.1.0-1 [3772 B]
Get:162 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-tunnel-agent all 0.3.1-1 [4018 B]
Get:163 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-json-stringify-safe all 5.0.0-1 [3544 B]
Get:164 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-qs all 2.2.4-1 [7574 B]
Get:165 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-request all 2.26.1-1 [14.5 kB]
Get:166 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-semver all 2.1.0-2 [16.2 kB]
Get:167 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-tar all 1.0.3-2 [17.5 kB]
Get:168 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-which all 1.0.5-2 [3678 B]
Get:169 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-gyp all 3.0.3-2ubuntu1 [23.2 kB]
Get:170 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-ini all 1.1.0-1 [4770 B]
Get:171 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-lockfile all 0.4.1-1 [5450 B]
Get:172 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-mute-stream all 0.0.4-1 [4096 B]
Get:173 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-normalize-package-data all 0.2.2-1 [9286 B]
Get:174 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-read all 1.0.5-1 [4314 B]
Get:175 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-read-package-json all 1.2.4-1 [7780 B]
Get:176 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-retry all 0.6.0-1 [6172 B]
Get:177 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-sha all 1.2.3-1 [4272 B]
Get:178 http://archive.ubuntu.com/ubuntu xenial/universe amd64 node-slide all 1.1.4-1 [6118 B]
Get:179 http://archive.ubuntu.com/ubuntu xenial/universe amd64 npm all 3.5.2-0ubuntu4 [1586 kB]
Fetched 79.4 MB in 40s (1962 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libatm1:amd64.
(Reading database ... 7256 files and directories currently installed.)
Preparing to unpack .../libatm1_1%3a2.5.1-1.5_amd64.deb ...
Unpacking libatm1:amd64 (1:2.5.1-1.5) ...
Selecting previously unselected package libmnl0:amd64.
Preparing to unpack .../libmnl0_1.0.3-5_amd64.deb ...
Unpacking libmnl0:amd64 (1.0.3-5) ...
Selecting previously unselected package libpopt0:amd64.
Preparing to unpack .../libpopt0_1.16-10_amd64.deb ...
Unpacking libpopt0:amd64 (1.16-10) ...
Selecting previously unselected package libgdbm3:amd64.
Preparing to unpack .../libgdbm3_1.8.3-13.1_amd64.deb ...
Unpacking libgdbm3:amd64 (1.8.3-13.1) ...
Selecting previously unselected package libxau6:amd64.
Preparing to unpack .../libxau6_1%3a1.0.8-1_amd64.deb ...
Unpacking libxau6:amd64 (1:1.0.8-1) ...
Selecting previously unselected package libxdmcp6:amd64.
Preparing to unpack .../libxdmcp6_1%3a1.1.2-1.1_amd64.deb ...
Unpacking libxdmcp6:amd64 (1:1.1.2-1.1) ...
Selecting previously unselected package libxcb1:amd64.
Preparing to unpack .../libxcb1_1.11.1-1ubuntu1_amd64.deb ...
Unpacking libxcb1:amd64 (1.11.1-1ubuntu1) ...
Selecting previously unselected package libx11-data.
Preparing to unpack .../libx11-data_2%3a1.6.3-1ubuntu2_all.deb ...
Unpacking libx11-data (2:1.6.3-1ubuntu2) ...
Selecting previously unselected package libx11-6:amd64.
Preparing to unpack .../libx11-6_2%3a1.6.3-1ubuntu2_amd64.deb ...
Unpacking libx11-6:amd64 (2:1.6.3-1ubuntu2) ...
Selecting previously unselected package libxext6:amd64.
Preparing to unpack .../libxext6_2%3a1.3.3-1_amd64.deb ...
Unpacking libxext6:amd64 (2:1.3.3-1) ...
Selecting previously unselected package perl-modules-5.22.
Preparing to unpack .../perl-modules-5.22_5.22.1-9_all.deb ...
Unpacking perl-modules-5.22 (5.22.1-9) ...
Selecting previously unselected package libperl5.22:amd64.
Preparing to unpack .../libperl5.22_5.22.1-9_amd64.deb ...
Unpacking libperl5.22:amd64 (5.22.1-9) ...
Selecting previously unselected package perl.
Preparing to unpack .../perl_5.22.1-9_amd64.deb ...
Unpacking perl (5.22.1-9) ...
Selecting previously unselected package libpython2.7-minimal:amd64.
Preparing to unpack .../libpython2.7-minimal_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking libpython2.7-minimal:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package python2.7-minimal.
Preparing to unpack .../python2.7-minimal_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking python2.7-minimal (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package python-minimal.
Preparing to unpack .../python-minimal_2.7.11-1_amd64.deb ...
Unpacking python-minimal (2.7.11-1) ...
Selecting previously unselected package mime-support.
Preparing to unpack .../mime-support_3.59ubuntu1_all.deb ...
Unpacking mime-support (3.59ubuntu1) ...
Selecting previously unselected package libexpat1:amd64.
Preparing to unpack .../libexpat1_2.1.0-7ubuntu0.16.04.2_amd64.deb ...
Unpacking libexpat1:amd64 (2.1.0-7ubuntu0.16.04.2) ...
Selecting previously unselected package libffi6:amd64.
Preparing to unpack .../libffi6_3.2.1-4_amd64.deb ...
Unpacking libffi6:amd64 (3.2.1-4) ...
Selecting previously unselected package libsqlite3-0:amd64.
Preparing to unpack .../libsqlite3-0_3.11.0-1ubuntu1_amd64.deb ...
Unpacking libsqlite3-0:amd64 (3.11.0-1ubuntu1) ...
Selecting previously unselected package libssl1.0.0:amd64.
Preparing to unpack .../libssl1.0.0_1.0.2g-1ubuntu4.6_amd64.deb ...
Unpacking libssl1.0.0:amd64 (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package libpython2.7-stdlib:amd64.
Preparing to unpack .../libpython2.7-stdlib_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking libpython2.7-stdlib:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package python2.7.
Preparing to unpack .../python2.7_2.7.12-1ubuntu0~16.04.1_amd64.deb ...
Unpacking python2.7 (2.7.12-1ubuntu0~16.04.1) ...
Selecting previously unselected package libpython-stdlib:amd64.
Preparing to unpack .../libpython-stdlib_2.7.11-1_amd64.deb ...
Unpacking libpython-stdlib:amd64 (2.7.11-1) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Setting up libpython2.7-minimal:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Setting up python2.7-minimal (2.7.12-1ubuntu0~16.04.1) ...
Linking and byte-compiling packages for runtime python2.7...
Setting up python-minimal (2.7.11-1) ...
Selecting previously unselected package python.
(Reading database ... 10145 files and directories currently installed.)
Preparing to unpack .../python_2.7.11-1_amd64.deb ...
Unpacking python (2.7.11-1) ...
Selecting previously unselected package libgmp10:amd64.
Preparing to unpack .../libgmp10_2%3a6.1.0+dfsg-2_amd64.deb ...
Unpacking libgmp10:amd64 (2:6.1.0+dfsg-2) ...
Selecting previously unselected package libmpfr4:amd64.
Preparing to unpack .../libmpfr4_3.1.4-1_amd64.deb ...
Unpacking libmpfr4:amd64 (3.1.4-1) ...
Selecting previously unselected package libmpc3:amd64.
Preparing to unpack .../libmpc3_1.0.3-1_amd64.deb ...
Unpacking libmpc3:amd64 (1.0.3-1) ...
Selecting previously unselected package bzip2.
Preparing to unpack .../bzip2_1.0.6-8_amd64.deb ...
Unpacking bzip2 (1.0.6-8) ...
Selecting previously unselected package libmagic1:amd64.
Preparing to unpack .../libmagic1_1%3a5.25-2ubuntu1_amd64.deb ...
Unpacking libmagic1:amd64 (1:5.25-2ubuntu1) ...
Selecting previously unselected package file.
Preparing to unpack .../file_1%3a5.25-2ubuntu1_amd64.deb ...
Unpacking file (1:5.25-2ubuntu1) ...
Selecting previously unselected package iproute2.
Preparing to unpack .../iproute2_4.3.0-1ubuntu3_amd64.deb ...
Unpacking iproute2 (4.3.0-1ubuntu3) ...
Selecting previously unselected package ifupdown.
Preparing to unpack .../ifupdown_0.8.10ubuntu1.2_amd64.deb ...
Unpacking ifupdown (0.8.10ubuntu1.2) ...
Selecting previously unselected package libisc-export160.
Preparing to unpack .../libisc-export160_1%3a9.10.3.dfsg.P4-8ubuntu1.5_amd64.deb ...
Unpacking libisc-export160 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Selecting previously unselected package libdns-export162.
Preparing to unpack .../libdns-export162_1%3a9.10.3.dfsg.P4-8ubuntu1.5_amd64.deb ...
Unpacking libdns-export162 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Selecting previously unselected package isc-dhcp-client.
Preparing to unpack .../isc-dhcp-client_4.3.3-5ubuntu12.6_amd64.deb ...
Unpacking isc-dhcp-client (4.3.3-5ubuntu12.6) ...
Selecting previously unselected package isc-dhcp-common.
Preparing to unpack .../isc-dhcp-common_4.3.3-5ubuntu12.6_amd64.deb ...
Unpacking isc-dhcp-common (4.3.3-5ubuntu12.6) ...
Selecting previously unselected package less.
Preparing to unpack .../less_481-2.1ubuntu0.1_amd64.deb ...
Unpacking less (481-2.1ubuntu0.1) ...
Selecting previously unselected package libbsd0:amd64.
Preparing to unpack .../libbsd0_0.8.2-1_amd64.deb ...
Unpacking libbsd0:amd64 (0.8.2-1) ...
Selecting previously unselected package libnettle6:amd64.
Preparing to unpack .../libnettle6_3.2-1ubuntu0.16.04.1_amd64.deb ...
Unpacking libnettle6:amd64 (3.2-1ubuntu0.16.04.1) ...
Selecting previously unselected package libhogweed4:amd64.
Preparing to unpack .../libhogweed4_3.2-1ubuntu0.16.04.1_amd64.deb ...
Unpacking libhogweed4:amd64 (3.2-1ubuntu0.16.04.1) ...
Selecting previously unselected package libidn11:amd64.
Preparing to unpack .../libidn11_1.32-3ubuntu1.1_amd64.deb ...
Unpacking libidn11:amd64 (1.32-3ubuntu1.1) ...
Selecting previously unselected package libp11-kit0:amd64.
Preparing to unpack .../libp11-kit0_0.23.2-5~ubuntu16.04.1_amd64.deb ...
Unpacking libp11-kit0:amd64 (0.23.2-5~ubuntu16.04.1) ...
Selecting previously unselected package libtasn1-6:amd64.
Preparing to unpack .../libtasn1-6_4.7-3ubuntu0.16.04.1_amd64.deb ...
Unpacking libtasn1-6:amd64 (4.7-3ubuntu0.16.04.1) ...
Selecting previously unselected package libgnutls30:amd64.
Preparing to unpack .../libgnutls30_3.4.10-4ubuntu1.2_amd64.deb ...
Unpacking libgnutls30:amd64 (3.4.10-4ubuntu1.2) ...
Selecting previously unselected package libxtables11:amd64.
Preparing to unpack .../libxtables11_1.6.0-2ubuntu3_amd64.deb ...
Unpacking libxtables11:amd64 (1.6.0-2ubuntu3) ...
Selecting previously unselected package netbase.
Preparing to unpack .../archives/netbase_5.3_all.deb ...
Unpacking netbase (5.3) ...
Selecting previously unselected package openssl.
Preparing to unpack .../openssl_1.0.2g-1ubuntu4.6_amd64.deb ...
Unpacking openssl (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20160104ubuntu1_all.deb ...
Unpacking ca-certificates (20160104ubuntu1) ...
Selecting previously unselected package krb5-locales.
Preparing to unpack .../krb5-locales_1.13.2+dfsg-5ubuntu2_all.deb ...
Unpacking krb5-locales (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libroken18-heimdal:amd64.
Preparing to unpack .../libroken18-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libroken18-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libasn1-8-heimdal:amd64.
Preparing to unpack .../libasn1-8-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libasn1-8-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libkrb5support0:amd64.
Preparing to unpack .../libkrb5support0_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libkrb5support0:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libk5crypto3:amd64.
Preparing to unpack .../libk5crypto3_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libk5crypto3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libkeyutils1:amd64.
Preparing to unpack .../libkeyutils1_1.5.9-8ubuntu1_amd64.deb ...
Unpacking libkeyutils1:amd64 (1.5.9-8ubuntu1) ...
Selecting previously unselected package libkrb5-3:amd64.
Preparing to unpack .../libkrb5-3_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libkrb5-3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libgssapi-krb5-2:amd64.
Preparing to unpack .../libgssapi-krb5-2_1.13.2+dfsg-5ubuntu2_amd64.deb ...
Unpacking libgssapi-krb5-2:amd64 (1.13.2+dfsg-5ubuntu2) ...
Selecting previously unselected package libhcrypto4-heimdal:amd64.
Preparing to unpack .../libhcrypto4-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libhcrypto4-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libheimbase1-heimdal:amd64.
Preparing to unpack .../libheimbase1-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libheimbase1-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libwind0-heimdal:amd64.
Preparing to unpack .../libwind0-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libwind0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libhx509-5-heimdal:amd64.
Preparing to unpack .../libhx509-5-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libhx509-5-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libkrb5-26-heimdal:amd64.
Preparing to unpack .../libkrb5-26-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libkrb5-26-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libheimntlm0-heimdal:amd64.
Preparing to unpack .../libheimntlm0-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libheimntlm0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libgssapi3-heimdal:amd64.
Preparing to unpack .../libgssapi3-heimdal_1.7~git20150920+dfsg-4ubuntu1_amd64.deb ...
Unpacking libgssapi3-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Selecting previously unselected package libsasl2-modules-db:amd64.
Preparing to unpack .../libsasl2-modules-db_2.1.26.dfsg1-14build1_amd64.deb ...
Unpacking libsasl2-modules-db:amd64 (2.1.26.dfsg1-14build1) ...
Selecting previously unselected package libsasl2-2:amd64.
Preparing to unpack .../libsasl2-2_2.1.26.dfsg1-14build1_amd64.deb ...
Unpacking libsasl2-2:amd64 (2.1.26.dfsg1-14build1) ...
Selecting previously unselected package libldap-2.4-2:amd64.
Preparing to unpack .../libldap-2.4-2_2.4.42+dfsg-2ubuntu3.1_amd64.deb ...
Unpacking libldap-2.4-2:amd64 (2.4.42+dfsg-2ubuntu3.1) ...
Selecting previously unselected package librtmp1:amd64.
Preparing to unpack .../librtmp1_2.4+20151223.gitfa8646d-1build1_amd64.deb ...
Unpacking librtmp1:amd64 (2.4+20151223.gitfa8646d-1build1) ...
Selecting previously unselected package libcurl3-gnutls:amd64.
Preparing to unpack .../libcurl3-gnutls_7.47.0-1ubuntu2.2_amd64.deb ...
Unpacking libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.2) ...
Selecting previously unselected package libedit2:amd64.
Preparing to unpack .../libedit2_3.1-20150325-1ubuntu2_amd64.deb ...
Unpacking libedit2:amd64 (3.1-20150325-1ubuntu2) ...
Selecting previously unselected package libicu55:amd64.
Preparing to unpack .../libicu55_55.1-7_amd64.deb ...
Unpacking libicu55:amd64 (55.1-7) ...
Selecting previously unselected package libsasl2-modules:amd64.
Preparing to unpack .../libsasl2-modules_2.1.26.dfsg1-14build1_amd64.deb ...
Unpacking libsasl2-modules:amd64 (2.1.26.dfsg1-14build1) ...
Selecting previously unselected package libxmuu1:amd64.
Preparing to unpack .../libxmuu1_2%3a1.1.2-2_amd64.deb ...
Unpacking libxmuu1:amd64 (2:1.1.2-2) ...
Selecting previously unselected package manpages.
Preparing to unpack .../manpages_4.04-2_all.deb ...
Unpacking manpages (4.04-2) ...
Selecting previously unselected package openssh-client.
Preparing to unpack .../openssh-client_1%3a7.2p2-4ubuntu2.1_amd64.deb ...
Unpacking openssh-client (1:7.2p2-4ubuntu2.1) ...
Selecting previously unselected package rsync.
Preparing to unpack .../rsync_3.1.1-3ubuntu1_amd64.deb ...
Unpacking rsync (3.1.1-3ubuntu1) ...
Selecting previously unselected package xauth.
Preparing to unpack .../xauth_1%3a1.0.9-1ubuntu2_amd64.deb ...
Unpacking xauth (1:1.0.9-1ubuntu2) ...
Selecting previously unselected package binutils.
Preparing to unpack .../binutils_2.26.1-1ubuntu1~16.04.3_amd64.deb ...
Unpacking binutils (2.26.1-1ubuntu1~16.04.3) ...
Selecting previously unselected package libc-dev-bin.
Preparing to unpack .../libc-dev-bin_2.23-0ubuntu5_amd64.deb ...
Unpacking libc-dev-bin (2.23-0ubuntu5) ...
Selecting previously unselected package linux-libc-dev:amd64.
Preparing to unpack .../linux-libc-dev_4.4.0-66.87_amd64.deb ...
Unpacking linux-libc-dev:amd64 (4.4.0-66.87) ...
Selecting previously unselected package libc6-dev:amd64.
Preparing to unpack .../libc6-dev_2.23-0ubuntu5_amd64.deb ...
Unpacking libc6-dev:amd64 (2.23-0ubuntu5) ...
Selecting previously unselected package libisl15:amd64.
Preparing to unpack .../libisl15_0.16.1-1_amd64.deb ...
Unpacking libisl15:amd64 (0.16.1-1) ...
Selecting previously unselected package cpp-5.
Preparing to unpack .../cpp-5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking cpp-5 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package cpp.
Preparing to unpack .../cpp_4%3a5.3.1-1ubuntu1_amd64.deb ...
Unpacking cpp (4:5.3.1-1ubuntu1) ...
Selecting previously unselected package libcc1-0:amd64.
Preparing to unpack .../libcc1-0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libgomp1:amd64.
Preparing to unpack .../libgomp1_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libgomp1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libitm1:amd64.
Preparing to unpack .../libitm1_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libitm1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libatomic1:amd64.
Preparing to unpack .../libatomic1_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libatomic1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libasan2:amd64.
Preparing to unpack .../libasan2_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libasan2:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package liblsan0:amd64.
Preparing to unpack .../liblsan0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking liblsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libtsan0:amd64.
Preparing to unpack .../libtsan0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libtsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libubsan0:amd64.
Preparing to unpack .../libubsan0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libubsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libcilkrts5:amd64.
Preparing to unpack .../libcilkrts5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libmpx0:amd64.
Preparing to unpack .../libmpx0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libmpx0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libquadmath0:amd64.
Preparing to unpack .../libquadmath0_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package libgcc-5-dev:amd64.
Preparing to unpack .../libgcc-5-dev_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package gcc-5.
Preparing to unpack .../gcc-5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking gcc-5 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package gcc.
Preparing to unpack .../gcc_4%3a5.3.1-1ubuntu1_amd64.deb ...
Unpacking gcc (4:5.3.1-1ubuntu1) ...
Selecting previously unselected package libstdc++-5-dev:amd64.
Preparing to unpack .../libstdc++-5-dev_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking libstdc++-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package g++-5.
Preparing to unpack .../g++-5_5.4.0-6ubuntu1~16.04.4_amd64.deb ...
Unpacking g++-5 (5.4.0-6ubuntu1~16.04.4) ...
Selecting previously unselected package g++.
Preparing to unpack .../g++_4%3a5.3.1-1ubuntu1_amd64.deb ...
Unpacking g++ (4:5.3.1-1ubuntu1) ...
Selecting previously unselected package make.
Preparing to unpack .../archives/make_4.1-6_amd64.deb ...
Unpacking make (4.1-6) ...
Selecting previously unselected package libdpkg-perl.
Preparing to unpack .../libdpkg-perl_1.18.4ubuntu1.1_all.deb ...
Unpacking libdpkg-perl (1.18.4ubuntu1.1) ...
Selecting previously unselected package xz-utils.
Preparing to unpack .../xz-utils_5.1.1alpha+20120614-2ubuntu2_amd64.deb ...
Unpacking xz-utils (5.1.1alpha+20120614-2ubuntu2) ...
Selecting previously unselected package patch.
Preparing to unpack .../patch_2.7.5-1_amd64.deb ...
Unpacking patch (2.7.5-1) ...
Selecting previously unselected package dpkg-dev.
Preparing to unpack .../dpkg-dev_1.18.4ubuntu1.1_all.deb ...
Unpacking dpkg-dev (1.18.4ubuntu1.1) ...
Selecting previously unselected package build-essential.
Preparing to unpack .../build-essential_12.1ubuntu2_amd64.deb ...
Unpacking build-essential (12.1ubuntu2) ...
Selecting previously unselected package libfakeroot:amd64.
Preparing to unpack .../libfakeroot_1.20.2-1ubuntu1_amd64.deb ...
Unpacking libfakeroot:amd64 (1.20.2-1ubuntu1) ...
Selecting previously unselected package fakeroot.
Preparing to unpack .../fakeroot_1.20.2-1ubuntu1_amd64.deb ...
Unpacking fakeroot (1.20.2-1ubuntu1) ...
Selecting previously unselected package liberror-perl.
Preparing to unpack .../liberror-perl_0.17-1.2_all.deb ...
Unpacking liberror-perl (0.17-1.2) ...
Selecting previously unselected package git-man.
Preparing to unpack .../git-man_1%3a2.7.4-0ubuntu1_all.deb ...
Unpacking git-man (1:2.7.4-0ubuntu1) ...
Selecting previously unselected package git.
Preparing to unpack .../git_1%3a2.7.4-0ubuntu1_amd64.deb ...
Unpacking git (1:2.7.4-0ubuntu1) ...
Selecting previously unselected package python-pkg-resources.
Preparing to unpack .../python-pkg-resources_20.7.0-1_all.deb ...
Unpacking python-pkg-resources (20.7.0-1) ...
Selecting previously unselected package gyp.
Preparing to unpack .../gyp_0.1+20150913git1f374df9-1ubuntu1_all.deb ...
Unpacking gyp (0.1+20150913git1f374df9-1ubuntu1) ...
Selecting previously unselected package javascript-common.
Preparing to unpack .../javascript-common_11_all.deb ...
Unpacking javascript-common (11) ...
Selecting previously unselected package libalgorithm-diff-perl.
Preparing to unpack .../libalgorithm-diff-perl_1.19.03-1_all.deb ...
Unpacking libalgorithm-diff-perl (1.19.03-1) ...
Selecting previously unselected package libalgorithm-diff-xs-perl.
Preparing to unpack .../libalgorithm-diff-xs-perl_0.04-4build1_amd64.deb ...
Unpacking libalgorithm-diff-xs-perl (0.04-4build1) ...
Selecting previously unselected package libalgorithm-merge-perl.
Preparing to unpack .../libalgorithm-merge-perl_0.08-3_all.deb ...
Unpacking libalgorithm-merge-perl (0.08-3) ...
Selecting previously unselected package libfile-fcntllock-perl.
Preparing to unpack .../libfile-fcntllock-perl_0.22-3_amd64.deb ...
Unpacking libfile-fcntllock-perl (0.22-3) ...
Selecting previously unselected package libjs-jquery.
Preparing to unpack .../libjs-jquery_1.11.3+dfsg-4_all.deb ...
Unpacking libjs-jquery (1.11.3+dfsg-4) ...
Selecting previously unselected package libjs-node-uuid.
Preparing to unpack .../libjs-node-uuid_1.4.0-1_all.deb ...
Unpacking libjs-node-uuid (1.4.0-1) ...
Selecting previously unselected package libjs-underscore.
Preparing to unpack .../libjs-underscore_1.7.0~dfsg-1ubuntu1_all.deb ...
Unpacking libjs-underscore (1.7.0~dfsg-1ubuntu1) ...
Selecting previously unselected package zlib1g-dev:amd64.
Preparing to unpack .../zlib1g-dev_1%3a1.2.8.dfsg-2ubuntu4_amd64.deb ...
Unpacking zlib1g-dev:amd64 (1:1.2.8.dfsg-2ubuntu4) ...
Selecting previously unselected package libssl-dev:amd64.
Preparing to unpack .../libssl-dev_1.0.2g-1ubuntu4.6_amd64.deb ...
Unpacking libssl-dev:amd64 (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package libssl-doc.
Preparing to unpack .../libssl-doc_1.0.2g-1ubuntu4.6_all.deb ...
Unpacking libssl-doc (1.0.2g-1ubuntu4.6) ...
Selecting previously unselected package libuv1:amd64.
Preparing to unpack .../libuv1_1.8.0-1_amd64.deb ...
Unpacking libuv1:amd64 (1.8.0-1) ...
Selecting previously unselected package libuv1-dev:amd64.
Preparing to unpack .../libuv1-dev_1.8.0-1_amd64.deb ...
Unpacking libuv1-dev:amd64 (1.8.0-1) ...
Selecting previously unselected package manpages-dev.
Preparing to unpack .../manpages-dev_4.04-2_all.deb ...
Unpacking manpages-dev (4.04-2) ...
Selecting previously unselected package nodejs.
Preparing to unpack .../nodejs_4.2.6~dfsg-1ubuntu4.1_amd64.deb ...
Unpacking nodejs (4.2.6~dfsg-1ubuntu4.1) ...
Selecting previously unselected package node-async.
Preparing to unpack .../node-async_0.8.0-1_all.deb ...
Unpacking node-async (0.8.0-1) ...
Selecting previously unselected package node-node-uuid.
Preparing to unpack .../node-node-uuid_1.4.0-1_all.deb ...
Unpacking node-node-uuid (1.4.0-1) ...
Selecting previously unselected package node-underscore.
Preparing to unpack .../node-underscore_1.7.0~dfsg-1ubuntu1_all.deb ...
Unpacking node-underscore (1.7.0~dfsg-1ubuntu1) ...
Selecting previously unselected package rename.
Preparing to unpack .../archives/rename_0.20-4_all.deb ...
Unpacking rename (0.20-4) ...
Selecting previously unselected package libjs-inherits.
Preparing to unpack .../libjs-inherits_2.0.1-3_all.deb ...
Unpacking libjs-inherits (2.0.1-3) ...
Selecting previously unselected package node-abbrev.
Preparing to unpack .../node-abbrev_1.0.5-2_all.deb ...
Unpacking node-abbrev (1.0.5-2) ...
Selecting previously unselected package node-ansi.
Preparing to unpack .../node-ansi_0.3.0-2_all.deb ...
Unpacking node-ansi (0.3.0-2) ...
Selecting previously unselected package node-ansi-color-table.
Preparing to unpack .../node-ansi-color-table_1.0.0-1_all.deb ...
Unpacking node-ansi-color-table (1.0.0-1) ...
Selecting previously unselected package node-archy.
Preparing to unpack .../node-archy_0.0.2-1_all.deb ...
Unpacking node-archy (0.0.2-1) ...
Selecting previously unselected package node-inherits.
Preparing to unpack .../node-inherits_2.0.1-3_all.deb ...
Unpacking node-inherits (2.0.1-3) ...
Selecting previously unselected package node-block-stream.
Preparing to unpack .../node-block-stream_0.0.7-1_all.deb ...
Unpacking node-block-stream (0.0.7-1) ...
Selecting previously unselected package node-delayed-stream.
Preparing to unpack .../node-delayed-stream_0.0.5-1_all.deb ...
Unpacking node-delayed-stream (0.0.5-1) ...
Selecting previously unselected package node-combined-stream.
Preparing to unpack .../node-combined-stream_0.0.5-1_all.deb ...
Unpacking node-combined-stream (0.0.5-1) ...
Selecting previously unselected package node-cookie-jar.
Preparing to unpack .../node-cookie-jar_0.3.1-1_all.deb ...
Unpacking node-cookie-jar (0.3.1-1) ...
Selecting previously unselected package node-forever-agent.
Preparing to unpack .../node-forever-agent_0.5.1-1_all.deb ...
Unpacking node-forever-agent (0.5.1-1) ...
Selecting previously unselected package node-mime.
Preparing to unpack .../node-mime_1.3.4-1_all.deb ...
Unpacking node-mime (1.3.4-1) ...
Selecting previously unselected package node-form-data.
Preparing to unpack .../node-form-data_0.1.0-1_all.deb ...
Unpacking node-form-data (0.1.0-1) ...
Selecting previously unselected package node-rimraf.
Preparing to unpack .../node-rimraf_2.2.8-1_all.deb ...
Unpacking node-rimraf (2.2.8-1) ...
Selecting previously unselected package node-mkdirp.
Preparing to unpack .../node-mkdirp_0.5.0-1_all.deb ...
Unpacking node-mkdirp (0.5.0-1) ...
Selecting previously unselected package node-graceful-fs.
Preparing to unpack .../node-graceful-fs_3.0.2-1_all.deb ...
Unpacking node-graceful-fs (3.0.2-1) ...
Selecting previously unselected package node-fstream.
Preparing to unpack .../node-fstream_0.1.24-1_all.deb ...
Unpacking node-fstream (0.1.24-1) ...
Selecting previously unselected package node-lru-cache.
Preparing to unpack .../node-lru-cache_2.3.1-1_all.deb ...
Unpacking node-lru-cache (2.3.1-1) ...
Selecting previously unselected package node-sigmund.
Preparing to unpack .../node-sigmund_1.0.0-1_all.deb ...
Unpacking node-sigmund (1.0.0-1) ...
Selecting previously unselected package node-minimatch.
Preparing to unpack .../node-minimatch_1.0.0-1_all.deb ...
Unpacking node-minimatch (1.0.0-1) ...
Selecting previously unselected package node-fstream-ignore.
Preparing to unpack .../node-fstream-ignore_0.0.6-2_all.deb ...
Unpacking node-fstream-ignore (0.0.6-2) ...
Selecting previously unselected package node-github-url-from-git.
Preparing to unpack .../node-github-url-from-git_1.1.1-1_all.deb ...
Unpacking node-github-url-from-git (1.1.1-1) ...
Selecting previously unselected package node-once.
Preparing to unpack .../node-once_1.1.1-1_all.deb ...
Unpacking node-once (1.1.1-1) ...
Selecting previously unselected package node-glob.
Preparing to unpack .../node-glob_4.0.5-1_all.deb ...
Unpacking node-glob (4.0.5-1) ...
Selecting previously unselected package nodejs-dev.
Preparing to unpack .../nodejs-dev_4.2.6~dfsg-1ubuntu4.1_amd64.deb ...
Unpacking nodejs-dev (4.2.6~dfsg-1ubuntu4.1) ...
Selecting previously unselected package node-nopt.
Preparing to unpack .../node-nopt_3.0.1-1_all.deb ...
Unpacking node-nopt (3.0.1-1) ...
Selecting previously unselected package node-npmlog.
Preparing to unpack .../node-npmlog_0.0.4-1_all.deb ...
Unpacking node-npmlog (0.0.4-1) ...
Selecting previously unselected package node-osenv.
Preparing to unpack .../node-osenv_0.1.0-1_all.deb ...
Unpacking node-osenv (0.1.0-1) ...
Selecting previously unselected package node-tunnel-agent.
Preparing to unpack .../node-tunnel-agent_0.3.1-1_all.deb ...
Unpacking node-tunnel-agent (0.3.1-1) ...
Selecting previously unselected package node-json-stringify-safe.
Preparing to unpack .../node-json-stringify-safe_5.0.0-1_all.deb ...
Unpacking node-json-stringify-safe (5.0.0-1) ...
Selecting previously unselected package node-qs.
Preparing to unpack .../node-qs_2.2.4-1_all.deb ...
Unpacking node-qs (2.2.4-1) ...
Selecting previously unselected package node-request.
Preparing to unpack .../node-request_2.26.1-1_all.deb ...
Unpacking node-request (2.26.1-1) ...
Selecting previously unselected package node-semver.
Preparing to unpack .../node-semver_2.1.0-2_all.deb ...
Unpacking node-semver (2.1.0-2) ...
Selecting previously unselected package node-tar.
Preparing to unpack .../node-tar_1.0.3-2_all.deb ...
Unpacking node-tar (1.0.3-2) ...
Selecting previously unselected package node-which.
Preparing to unpack .../node-which_1.0.5-2_all.deb ...
Unpacking node-which (1.0.5-2) ...
Selecting previously unselected package node-gyp.
Preparing to unpack .../node-gyp_3.0.3-2ubuntu1_all.deb ...
Unpacking node-gyp (3.0.3-2ubuntu1) ...
Selecting previously unselected package node-ini.
Preparing to unpack .../node-ini_1.1.0-1_all.deb ...
Unpacking node-ini (1.1.0-1) ...
Selecting previously unselected package node-lockfile.
Preparing to unpack .../node-lockfile_0.4.1-1_all.deb ...
Unpacking node-lockfile (0.4.1-1) ...
Selecting previously unselected package node-mute-stream.
Preparing to unpack .../node-mute-stream_0.0.4-1_all.deb ...
Unpacking node-mute-stream (0.0.4-1) ...
Selecting previously unselected package node-normalize-package-data.
Preparing to unpack .../node-normalize-package-data_0.2.2-1_all.deb ...
Unpacking node-normalize-package-data (0.2.2-1) ...
Selecting previously unselected package node-read.
Preparing to unpack .../node-read_1.0.5-1_all.deb ...
Unpacking node-read (1.0.5-1) ...
Selecting previously unselected package node-read-package-json.
Preparing to unpack .../node-read-package-json_1.2.4-1_all.deb ...
Unpacking node-read-package-json (1.2.4-1) ...
Selecting previously unselected package node-retry.
Preparing to unpack .../node-retry_0.6.0-1_all.deb ...
Unpacking node-retry (0.6.0-1) ...
Selecting previously unselected package node-sha.
Preparing to unpack .../node-sha_1.2.3-1_all.deb ...
Unpacking node-sha (1.2.3-1) ...
Selecting previously unselected package node-slide.
Preparing to unpack .../node-slide_1.1.4-1_all.deb ...
Unpacking node-slide (1.1.4-1) ...
Selecting previously unselected package npm.
Preparing to unpack .../npm_3.5.2-0ubuntu4_all.deb ...
Unpacking npm (3.5.2-0ubuntu4) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Processing triggers for systemd (229-4ubuntu16) ...
Setting up libatm1:amd64 (1:2.5.1-1.5) ...
Setting up libmnl0:amd64 (1.0.3-5) ...
Setting up libpopt0:amd64 (1.16-10) ...
Setting up libgdbm3:amd64 (1.8.3-13.1) ...
Setting up libxau6:amd64 (1:1.0.8-1) ...
Setting up libxdmcp6:amd64 (1:1.1.2-1.1) ...
Setting up libxcb1:amd64 (1.11.1-1ubuntu1) ...
Setting up libx11-data (2:1.6.3-1ubuntu2) ...
Setting up libx11-6:amd64 (2:1.6.3-1ubuntu2) ...
Setting up libxext6:amd64 (2:1.3.3-1) ...
Setting up perl-modules-5.22 (5.22.1-9) ...
Setting up libperl5.22:amd64 (5.22.1-9) ...
Setting up perl (5.22.1-9) ...
update-alternatives: using /usr/bin/prename to provide /usr/bin/rename (rename) in auto mode
Setting up mime-support (3.59ubuntu1) ...
Setting up libexpat1:amd64 (2.1.0-7ubuntu0.16.04.2) ...
Setting up libffi6:amd64 (3.2.1-4) ...
Setting up libsqlite3-0:amd64 (3.11.0-1ubuntu1) ...
Setting up libssl1.0.0:amd64 (1.0.2g-1ubuntu4.6) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Setting up libpython2.7-stdlib:amd64 (2.7.12-1ubuntu0~16.04.1) ...
Setting up python2.7 (2.7.12-1ubuntu0~16.04.1) ...
Setting up libpython-stdlib:amd64 (2.7.11-1) ...
Setting up python (2.7.11-1) ...
Setting up libgmp10:amd64 (2:6.1.0+dfsg-2) ...
Setting up libmpfr4:amd64 (3.1.4-1) ...
Setting up libmpc3:amd64 (1.0.3-1) ...
Setting up bzip2 (1.0.6-8) ...
Setting up libmagic1:amd64 (1:5.25-2ubuntu1) ...
Setting up file (1:5.25-2ubuntu1) ...
Setting up iproute2 (4.3.0-1ubuntu3) ...
Setting up ifupdown (0.8.10ubuntu1.2) ...
Creating /etc/network/interfaces.
Setting up libisc-export160 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Setting up libdns-export162 (1:9.10.3.dfsg.P4-8ubuntu1.5) ...
Setting up isc-dhcp-client (4.3.3-5ubuntu12.6) ...
Setting up isc-dhcp-common (4.3.3-5ubuntu12.6) ...
Setting up less (481-2.1ubuntu0.1) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Setting up libbsd0:amd64 (0.8.2-1) ...
Setting up libnettle6:amd64 (3.2-1ubuntu0.16.04.1) ...
Setting up libhogweed4:amd64 (3.2-1ubuntu0.16.04.1) ...
Setting up libidn11:amd64 (1.32-3ubuntu1.1) ...
Setting up libp11-kit0:amd64 (0.23.2-5~ubuntu16.04.1) ...
Setting up libtasn1-6:amd64 (4.7-3ubuntu0.16.04.1) ...
Setting up libgnutls30:amd64 (3.4.10-4ubuntu1.2) ...
Setting up libxtables11:amd64 (1.6.0-2ubuntu3) ...
Setting up netbase (5.3) ...
Setting up openssl (1.0.2g-1ubuntu4.6) ...
Setting up ca-certificates (20160104ubuntu1) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Setting up krb5-locales (1.13.2+dfsg-5ubuntu2) ...
Setting up libroken18-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libasn1-8-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libkrb5support0:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libk5crypto3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libkeyutils1:amd64 (1.5.9-8ubuntu1) ...
Setting up libkrb5-3:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libgssapi-krb5-2:amd64 (1.13.2+dfsg-5ubuntu2) ...
Setting up libhcrypto4-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libheimbase1-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libwind0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libhx509-5-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libkrb5-26-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libheimntlm0-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libgssapi3-heimdal:amd64 (1.7~git20150920+dfsg-4ubuntu1) ...
Setting up libsasl2-modules-db:amd64 (2.1.26.dfsg1-14build1) ...
Setting up libsasl2-2:amd64 (2.1.26.dfsg1-14build1) ...
Setting up libldap-2.4-2:amd64 (2.4.42+dfsg-2ubuntu3.1) ...
Setting up librtmp1:amd64 (2.4+20151223.gitfa8646d-1build1) ...
Setting up libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.2) ...
Setting up libedit2:amd64 (3.1-20150325-1ubuntu2) ...
Setting up libicu55:amd64 (55.1-7) ...
Setting up libsasl2-modules:amd64 (2.1.26.dfsg1-14build1) ...
Setting up libxmuu1:amd64 (2:1.1.2-2) ...
Setting up manpages (4.04-2) ...
Setting up openssh-client (1:7.2p2-4ubuntu2.1) ...
Setting up rsync (3.1.1-3ubuntu1) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of restart.
Setting up xauth (1:1.0.9-1ubuntu2) ...
Setting up binutils (2.26.1-1ubuntu1~16.04.3) ...
Setting up libc-dev-bin (2.23-0ubuntu5) ...
Setting up linux-libc-dev:amd64 (4.4.0-66.87) ...
Setting up libc6-dev:amd64 (2.23-0ubuntu5) ...
Setting up libisl15:amd64 (0.16.1-1) ...
Setting up cpp-5 (5.4.0-6ubuntu1~16.04.4) ...
Setting up cpp (4:5.3.1-1ubuntu1) ...
Setting up libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libgomp1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libitm1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libatomic1:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libasan2:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up liblsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libtsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libubsan0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libmpx0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up gcc-5 (5.4.0-6ubuntu1~16.04.4) ...
Setting up gcc (4:5.3.1-1ubuntu1) ...
Setting up libstdc++-5-dev:amd64 (5.4.0-6ubuntu1~16.04.4) ...
Setting up g++-5 (5.4.0-6ubuntu1~16.04.4) ...
Setting up g++ (4:5.3.1-1ubuntu1) ...
update-alternatives: using /usr/bin/g++ to provide /usr/bin/c++ (c++) in auto mode
Setting up make (4.1-6) ...
Setting up libdpkg-perl (1.18.4ubuntu1.1) ...
Setting up xz-utils (5.1.1alpha+20120614-2ubuntu2) ...
update-alternatives: using /usr/bin/xz to provide /usr/bin/lzma (lzma) in auto mode
Setting up patch (2.7.5-1) ...
Setting up dpkg-dev (1.18.4ubuntu1.1) ...
Setting up build-essential (12.1ubuntu2) ...
Setting up libfakeroot:amd64 (1.20.2-1ubuntu1) ...
Setting up fakeroot (1.20.2-1ubuntu1) ...
update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode
Setting up liberror-perl (0.17-1.2) ...
Setting up git-man (1:2.7.4-0ubuntu1) ...
Setting up git (1:2.7.4-0ubuntu1) ...
Setting up python-pkg-resources (20.7.0-1) ...
Setting up gyp (0.1+20150913git1f374df9-1ubuntu1) ...
Setting up javascript-common (11) ...
Setting up libalgorithm-diff-perl (1.19.03-1) ...
Setting up libalgorithm-diff-xs-perl (0.04-4build1) ...
Setting up libalgorithm-merge-perl (0.08-3) ...
Setting up libfile-fcntllock-perl (0.22-3) ...
Setting up libjs-jquery (1.11.3+dfsg-4) ...
Setting up libjs-node-uuid (1.4.0-1) ...
Setting up libjs-underscore (1.7.0~dfsg-1ubuntu1) ...
Setting up zlib1g-dev:amd64 (1:1.2.8.dfsg-2ubuntu4) ...
Setting up libssl-dev:amd64 (1.0.2g-1ubuntu4.6) ...
Setting up libssl-doc (1.0.2g-1ubuntu4.6) ...
Setting up libuv1:amd64 (1.8.0-1) ...
Setting up libuv1-dev:amd64 (1.8.0-1) ...
Setting up manpages-dev (4.04-2) ...
Setting up nodejs (4.2.6~dfsg-1ubuntu4.1) ...
update-alternatives: using /usr/bin/nodejs to provide /usr/bin/js (js) in auto mode
Setting up node-async (0.8.0-1) ...
Setting up node-node-uuid (1.4.0-1) ...
Setting up node-underscore (1.7.0~dfsg-1ubuntu1) ...
Setting up rename (0.20-4) ...
update-alternatives: using /usr/bin/file-rename to provide /usr/bin/rename (rename) in auto mode
Setting up libjs-inherits (2.0.1-3) ...
Setting up node-abbrev (1.0.5-2) ...
Setting up node-ansi (0.3.0-2) ...
Setting up node-ansi-color-table (1.0.0-1) ...
Setting up node-archy (0.0.2-1) ...
Setting up node-inherits (2.0.1-3) ...
Setting up node-block-stream (0.0.7-1) ...
Setting up node-delayed-stream (0.0.5-1) ...
Setting up node-combined-stream (0.0.5-1) ...
Setting up node-cookie-jar (0.3.1-1) ...
Setting up node-forever-agent (0.5.1-1) ...
Setting up node-mime (1.3.4-1) ...
Setting up node-form-data (0.1.0-1) ...
Setting up node-rimraf (2.2.8-1) ...
Setting up node-mkdirp (0.5.0-1) ...
Setting up node-graceful-fs (3.0.2-1) ...
Setting up node-fstream (0.1.24-1) ...
Setting up node-lru-cache (2.3.1-1) ...
Setting up node-sigmund (1.0.0-1) ...
Setting up node-minimatch (1.0.0-1) ...
Setting up node-fstream-ignore (0.0.6-2) ...
Setting up node-github-url-from-git (1.1.1-1) ...
Setting up node-once (1.1.1-1) ...
Setting up node-glob (4.0.5-1) ...
Setting up nodejs-dev (4.2.6~dfsg-1ubuntu4.1) ...
Setting up node-nopt (3.0.1-1) ...
Setting up node-npmlog (0.0.4-1) ...
Setting up node-osenv (0.1.0-1) ...
Setting up node-tunnel-agent (0.3.1-1) ...
Setting up node-json-stringify-safe (5.0.0-1) ...
Setting up node-qs (2.2.4-1) ...
Setting up node-request (2.26.1-1) ...
Setting up node-semver (2.1.0-2) ...
Setting up node-tar (1.0.3-2) ...
Setting up node-which (1.0.5-2) ...
Setting up node-gyp (3.0.3-2ubuntu1) ...
Setting up node-ini (1.1.0-1) ...
Setting up node-lockfile (0.4.1-1) ...
Setting up node-mute-stream (0.0.4-1) ...
Setting up node-normalize-package-data (0.2.2-1) ...
Setting up node-read (1.0.5-1) ...
Setting up node-read-package-json (1.2.4-1) ...
Setting up node-retry (0.6.0-1) ...
Setting up node-sha (1.2.3-1) ...
Setting up node-slide (1.1.4-1) ...
Setting up npm (3.5.2-0ubuntu4) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Processing triggers for systemd (229-4ubuntu16) ...
Processing triggers for ca-certificates (20160104ubuntu1) ...
Updating certificates in /etc/ssl/certs...
173 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.

Step 1.4: Make sure that ‘node’ is found in the PATH

Many NPM packages run ‘node’ commands. For those to work, the system needs to find a ‘node’ executable. On Ubuntu, there is a node executable at /usr/sbin/node, but it has nothing to do with Node.js: Ubuntu installs Node.js as /usr/bin/nodejs. For the system to find the correct node command, it is sufficient to create a symbolic link as follows (see also this StackOverflow Q&A):

(container)# ln -s nodejs /usr/bin/node

If this step is not performed, you will hit the following error message

gyp: Call to 'node -e "require('nan')"' returned exit status 127 while in binding.gyp. while trying to load binding.gyp

in the npm install step below; see Appendix A for details.
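The ln command above uses a relative target, which can look surprising: a relative symlink target is resolved next to the link itself, so /usr/bin/node ends up pointing at /usr/bin/nodejs. The following illustrative sketch demonstrates this in a temporary directory with a fake nodejs stub (no root rights needed), so you can convince yourself before touching /usr/bin:

```shell
# Demonstrate relative symlink resolution with a throw-away stub;
# 'nodejs' here is a fake script standing in for /usr/bin/nodejs.
dir=$(mktemp -d)
printf '#!/bin/sh\necho v4.2.6\n' > "$dir/nodejs"
chmod +x "$dir/nodejs"
ln -s nodejs "$dir/node"        # same form as the command above
target=$(readlink "$dir/node")  # the link stores the relative name "nodejs"
version=$("$dir/node")          # executing the link runs the real script
rm -rf "$dir"
echo "$target $version"
```

Running it prints `nodejs v4.2.6`, confirming that the link both stores the relative name and executes the target.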

Step 1.5: Clone BrowserStack Protractor Example from Git

The BrowserStack Protractor Example is cloned as follows:

(container)# git clone https://github.com/browserstack/protractor-browserstack
Cloning into 'protractor-browserstack'...
remote: Counting objects: 185, done.
remote: Total 185 (delta 0), reused 0 (delta 0), pack-reused 185
Receiving objects: 100% (185/185), 28.39 KiB | 0 bytes/s, done.
Resolving deltas: 100% (72/72), done.
Checking connectivity... done.

Step 1.6: Install Dependencies

The next command downloads and installs the dependencies:

(container)# cd protractor-browserstack; npm install
> bufferutil@1.2.1 install /app/protractor-browserstack/node_modules/bufferutil
> node-gyp rebuild

make: Entering directory '/app/protractor-browserstack/node_modules/bufferutil/build'
 CXX(target) Release/obj.target/bufferutil/src/bufferutil.o
 SOLINK_MODULE(target) Release/obj.target/bufferutil.node
 COPY Release/bufferutil.node
make: Leaving directory '/app/protractor-browserstack/node_modules/bufferutil/build'

> utf-8-validate@1.2.2 install /app/protractor-browserstack/node_modules/utf-8-validate
> node-gyp rebuild

make: Entering directory '/app/protractor-browserstack/node_modules/utf-8-validate/build'
 CXX(target) Release/obj.target/validation/src/validation.o
 SOLINK_MODULE(target) Release/obj.target/validation.node
 COPY Release/validation.node
make: Leaving directory '/app/protractor-browserstack/node_modules/utf-8-validate/build'
protractor-browserstack@0.1.0 /app/protractor-browserstack
+-- browserstack-local@1.3.0
| +-- https-proxy-agent@1.0.0
| | +-- agent-base@2.0.1
| | | `-- semver@5.0.3
| | +-- debug@2.6.2
| | | `-- ms@0.7.2
| | `-- extend@3.0.0
| +-- is-running@2.1.0
| +-- sinon@1.17.7
| | +-- formatio@1.1.1
| | +-- lolex@1.3.2
| | +-- samsam@1.1.2
| | `-- util@0.10.3
| `-- temp-fs@0.9.9
| `-- rimraf@2.5.4
| `-- glob@7.1.1
| +-- fs.realpath@1.0.0
| +-- inflight@1.0.6
| | `-- wrappy@1.0.2
| +-- minimatch@3.0.3
| | `-- brace-expansion@1.1.6
| | +-- balanced-match@0.4.2
| | `-- concat-map@0.0.1
| +-- once@1.4.0
| `-- path-is-absolute@1.0.1
`-- protractor@2.5.1
 +-- accessibility-developer-tools@2.6.0
 +-- adm-zip@0.4.4
 +-- glob@3.2.11
 | +-- inherits@2.0.1
 | `-- minimatch@0.3.0
 | +-- lru-cache@2.7.3
 | `-- sigmund@1.0.1
 +-- html-entities@1.1.3
 +-- jasmine@2.3.2
 | +-- exit@0.1.2
 | +-- glob@3.2.11
 | | `-- minimatch@0.3.0
 | `-- jasmine-core@2.3.4
 +-- jasminewd@1.1.0
 +-- jasminewd2@0.0.6
 +-- lodash@2.4.2
 +-- minijasminenode@1.1.1
 +-- optimist@0.6.1
 | +-- minimist@0.0.10
 | `-- wordwrap@0.0.3
 +-- q@1.0.0
 +-- request@2.57.0
 | +-- aws-sign2@0.5.0
 | +-- bl@0.9.5
 | | `-- readable-stream@1.0.34
 | | +-- core-util-is@1.0.2
 | | +-- isarray@0.0.1
 | | `-- string_decoder@0.10.31
 | +-- caseless@0.10.0
 | +-- combined-stream@1.0.5
 | | `-- delayed-stream@1.0.0
 | +-- forever-agent@0.6.1
 | +-- form-data@0.2.0
 | | +-- async@0.9.2
 | | `-- combined-stream@0.0.7
 | | `-- delayed-stream@0.0.5
 | +-- har-validator@1.8.0
 | | +-- bluebird@2.11.0
 | | +-- chalk@1.1.3
 | | | +-- ansi-styles@2.2.1
 | | | +-- escape-string-regexp@1.0.5
 | | | +-- has-ansi@2.0.0
 | | | | `-- ansi-regex@2.1.1
 | | | +-- strip-ansi@3.0.1
 | | | `-- supports-color@2.0.0
 | | +-- commander@2.9.0
 | | | `-- graceful-readlink@1.0.1
 | | `-- is-my-json-valid@2.16.0
 | | +-- generate-function@2.0.0
 | | +-- generate-object-property@1.2.0
 | | | `-- is-property@1.0.2
 | | +-- jsonpointer@4.0.1
 | | `-- xtend@4.0.1
 | +-- hawk@2.3.1
 | | +-- boom@2.10.1
 | | +-- cryptiles@2.0.5
 | | +-- hoek@2.16.3
 | | `-- sntp@1.0.9
 | +-- http-signature@0.11.0
 | | +-- asn1@0.1.11
 | | +-- assert-plus@0.1.5
 | | `-- ctype@0.5.3
 | +-- isstream@0.1.2
 | +-- json-stringify-safe@5.0.1
 | +-- mime-types@2.0.14
 | | `-- mime-db@1.12.0
 | +-- node-uuid@1.4.7
 | +-- oauth-sign@0.8.2
 | +-- qs@3.1.0
 | +-- stringstream@0.0.5
 | +-- tough-cookie@2.3.2
 | | `-- punycode@1.4.1
 | `-- tunnel-agent@0.4.3
 +-- saucelabs@1.0.1
 +-- selenium-webdriver@2.47.0
 | +-- tmp@0.0.24
 | +-- ws@0.8.1
 | | +-- bufferutil@1.2.1
 | | | +-- bindings@1.2.1
 | | | `-- nan@2.5.1
 | | +-- options@0.0.6
 | | +-- ultron@1.0.2
 | | `-- utf-8-validate@1.2.2
 | | `-- nan@2.4.0
 | `-- xml2js@0.4.4
 | +-- sax@0.6.1
 | `-- xmlbuilder@8.2.2
 `-- source-map-support@0.2.10
 `-- source-map@0.1.32
 `-- amdefine@1.0.1

Note: this command requires ‘make’ to be installed. The official Ubuntu 16.04 Docker image includes it; if it is missing in your case, install it with the command apt-get install build-essential as root or with sudo.

Step 1.7: Add BrowserStack Credentials

Now let us specify the BrowserStack credentials you can find in the “Automate” section on your BrowserStack Account Settings page:

(container)# export BROWSERSTACK_USERNAME=your_browserstack_user_id
(container)# export BROWSERSTACK_ACCESS_KEY=your_browserstack_key

Note that the environment variable names differ from the ones we had used in the Gulp examples in my previous blog post: BROWSERSTACK_USERNAME instead of BROWSERSTACK_USER, and BROWSERSTACK_ACCESS_KEY instead of BROWSERSTACK_KEY.
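As a quick illustration (a sketch of mine, not code from the example project), this is the fallback pattern the Protractor configuration file conf/local.conf.js uses to read these variables:

```javascript
// Sketch: read the BrowserStack credentials the way conf/local.conf.js does -
// use the environment variable if it is set, else fall back to a placeholder.
function browserstackCredentials(env) {
  return {
    user: env.BROWSERSTACK_USERNAME || 'BROWSERSTACK_USERNAME',
    key: env.BROWSERSTACK_ACCESS_KEY || 'BROWSERSTACK_ACCESS_KEY'
  };
}

// With the variables exported as shown above:
const creds = browserstackCredentials({
  BROWSERSTACK_USERNAME: 'your_browserstack_user_id',
  BROWSERSTACK_ACCESS_KEY: 'your_browserstack_key'
});
console.log(creds.user); // -> your_browserstack_user_id
```

If a variable is unset, the placeholder string reaches BrowserStack and the session is rejected, which is why setting both variables is essential.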

Step 1.8: Run the BrowserStack automated Test

Finally, we start the test session via npm run local:

(container)# npm run local

Connecting local
Connected. Now testing...
Using the selenium server at http://hub-cloud.browserstack.com/wd/hub
[launcher] Running 1 instances of WebDriver
.

Finished in 0.763 seconds
1 test, 1 assertion, 0 failures

[launcher] 0 instance(s) of WebDriver still running
[launcher] chrome #1 passed

With that, we have automatically tested a Chrome browser using BrowserStack:

Excellent!

Step 1.9: Review the Results on BrowserStack Automate Logs Page

On this BrowserStack link, you can see in detail which steps were taken during the automated test:

And here are the visual logs with screenshots:

Soon after running the first automated test via BrowserStack, I found an email from BrowserStack in my inbox, informing me that they had noticed my first automated test and that I can contact them in case of any questions.

Part 2: Integration of BrowserStack into Jenkins

Step 2.1: Start and Connect to Jenkins

If you have a Jenkins Server that is up and running, you can skip this step, including the sub-steps.

Step 2.1.1: Start Jenkins in interactive Terminal Mode

Make sure that port 8080 is unused on the Docker host. If you were following all steps in part 1 of the Jenkins blog series, you might need to stop cadvisor:

(dockerhost)$ sudo docker stop cadvisor

I assume that jenkins_home is already created, all popular plugins are installed and an Admin user has been created as shown in part 1 of the blog series. We start the Jenkins container with the jenkins_home Docker host volume mapped to /var/jenkins_home:

(dockerhost)$ cd <path_to_jenkins_home> # in my case: cd /vagrant/jenkins_home/
(dockerhost:jenkins_home)$ sudo docker run -it --rm --name jenkins -p8080:8080 -p50000:50000 -v`pwd`:/var/jenkins_home jenkins
Running from: /usr/share/jenkins/jenkins.war
...
Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:

50c150e35a774cexxxxxxxxxxxxxxx

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword
...
--> setting agent port for jnlp
--> setting agent port for jnlp... done

Step 2.1.2: Open Jenkins in a Browser

Now we want to connect to the Jenkins portal. For that, open a browser and open the URL

<your_jenkins_host>:8080

In our case, Jenkins is running in a container and we have mapped the container port 8080 to the local port 8080 of the Docker host. On the Docker host, we can therefore open the URL:

localhost:8080

Note: In case of Vagrant with VirtualBox, per default there is only a NAT-based interface, and you need to create a port-forwarding rule for any port you want to reach from outside (the local machine you are working on also counts as outside). In this case, we need to add an entry in the port forwarding list of VirtualBox:

Note that this configuration is not permanent, unless you define the port mappings in the Vagrantfile as follows (see official Vagrant documentation):

config.vm.network "forwarded_port", guest: 8080, host: 8080

Step 2.1.3: Initialize Jenkins: Unlock Jenkins

Unlock Jenkins

Insert the one-time password found in the log during startup or in /var/jenkins_home/secrets/initialAdminPassword

-> Continue

Step 2.1.4: Initialize Jenkins: Install Plugins

-> Install suggested plugins

Installing suggested plugins

Wait for the installation process to complete.

Step 2.1.5: Initialize Jenkins: Create Admin User

-> Create First Admin User

-> Save and Finish or “Continue as admin”

-> Start using Jenkins

Note: I recommend logging out and back in again in order to test the login.

Step 2.2: Prepare Git Usage

As described in this StackOverflow Q&A, we need to add the Git username and email address, since Jenkins tries to tag and commit on the Git repo, which requires those configuration items to be set. For that, we perform:


-> Manage Jenkins

-> Configure System

-> scroll down to “Git plugin”

-> Git plugin; user.name = jenkins and user.email = admin@jenkins.org

Step 2.3: Install the Jenkins Node.js Plugin

-> Jenkins Home leading to the Dashboard (left upper corner)

-> Manage Jenkins

-> Manage Plugins

-> Plugins - Available


-> Choose: NodeJS

-> Install without restart

-> observe:

NodeJS Plugin Installation Success

Step 2.4: Prepare Node.js Usage

-> Manage Jenkins

-> Global Tool Configuration

-> NodeJS: specify the name “NodeJS 7.7.1”, check “Install automatically” and specify Global npm packages to install: protractor:

Choose NodeJS, check "Install automatically" and specify Global npm packages to install: protractor

 

-> Save Button (in Global Tool Installer)

Step 2.5: Install the BrowserStack Plugin

The BrowserStack Plugin can be installed like any other Jenkins plugin:

-> Jenkins Home leading to the Dashboard (left upper corner)

-> Manage Jenkins
-> Manage Plugins
-> Plugins - Available

-> Filter: BrowserStack

-> Choose: BrowserStack

-> Install without restart

-> observe:

BrowserStack Installation Success

Step 2.6: Prepare BrowserStack Usage

I am closely following the official documentation:

-> Manage Jenkins

-> Configure System

-> BrowserStack > BrowserStack Credentials > Add > Jenkins

-> Add Username and Access Key as found on your BrowserStack account page:

Insert Username and Key as shown on https://www.browserstack.com/accounts/settings > Automate

-> Add

 

BrowserStack Global Settings shows used credentials after having configured them

-> Save

Step 2.7: Create BrowserStack Jenkins Job

-> New Item

-> Enter Item Name: BrowserStackJob

-> Freestyle Project

-> OK

Step 2.8: Configure BrowserStack Jenkins Job

Step 2.8.1: Configure Git Download

In my case, I have forked the following project from Github: https://github.com/browserstack/protractor-browserstack


Step 2.8.2: Configure BrowserStack Build Environment

-> Build Environment: check "BrowserStack" and "BrowserStack Local"; keep defaults

Step 2.8.3: Configure Build Step: Set Path

-> In addition, check the checkbox “Provide Node & npm bin/ folder to PATH”

If you click on the (?), you will be informed that the BrowserStack user and password are available as environment variables.

Select the BrowserStack credentials (username, access key) to use for this project.
These values will be available as BROWSERSTACK_USER and BROWSERSTACK_ACCESSKEY environment variables.

Step 2.8.4: Configure Build Step: Execute shell: npm run local

-> we add the following shell script:

export BROWSERSTACK_USERNAME=$BROWSERSTACK_USER
export BROWSERSTACK_ACCESS_KEY=$BROWSERSTACK_ACCESSKEY
npm install
npm run local


This is because the BrowserStack example expects the username and access key to be provided in the variables BROWSERSTACK_USERNAME and BROWSERSTACK_ACCESS_KEY, which differ from the variable names under which the Jenkins BrowserStack plugin provides the credentials (BROWSERSTACK_USER and BROWSERSTACK_ACCESSKEY).

We needed to add npm install to download all dependencies before we run the test with npm run local. The local script is defined in the package.json file of the project.

Now we need to save the settings:

-> Save

Step 2.9: Add ‘make’ to Jenkins Container

If this step is omitted, we will receive an error message like the following in the next step:

> node-gyp rebuild

gyp ERR! build error 
gyp ERR! stack Error: not found: make

With the following commands, we log in to the container as root (with user ID 0) and install the build-essential package:

(dockerhost)$ docker exec -it -u 0 jenkins bash
(container)# apt-get update
(container)# apt-get install build-essential

Step 2.10: Build Project

-> Build Now

-> Click on #1

-> Console Output

The last few lines of the output show that the test was successful:

Excellent!

Step 2.11: Generate a Jenkins Test Report

Step 2.11.1: Add Jasmine to the Project

For adding a test report, we add the package jasmine-reporters to the repository. For that, I have forked the original BrowserStack Protractor Git repository to /protractor-browserstack. I have made the following changes:

package.json

{
  "name": "protractor-browserstack",
  "version": "0.1.0",
  "readme": "Protractor Integration with [BrowserStack](https://www.browserstack.com)",
  "description": "Selenium examples for Protractor and BrowserStack Automate",
  "scripts": {
    "test": "npm run single && npm run local && npm run parallel",
    "single": "./node_modules/.bin/protractor conf/single.conf.js",
    "local": "./node_modules/.bin/protractor conf/local.conf.js",
    "parallel": "./node_modules/.bin/protractor conf/parallel.conf.js",
    "parallel_local": "./node_modules/.bin/protractor conf/parallel_local.conf.js"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/browserstack/protractor-browserstack"
  },
  "dependencies": {
    "browserstack-local": "^1.0.0",
    "protractor": "^2.5.1",
    "jasmine-reporters": "^1.0.0"
  },
  "license": "MIT"
}

I have added a comma at the end of the protractor dependency line and I have added jasmine-reporters 1.0.0 to the dependencies. Note that the current jasmine-reporters version 2.2.1 did not generate any reports in my case.

conf/local.conf.js

var browserstack = require('browserstack-local');

exports.config = {
  'specs': [ '../specs/local.js' ],
  'seleniumAddress': 'http://hub-cloud.browserstack.com/wd/hub',

  'capabilities': {
    'browserstack.user': process.env.BROWSERSTACK_USERNAME || 'BROWSERSTACK_USERNAME',
    'browserstack.key': process.env.BROWSERSTACK_ACCESS_KEY || 'BROWSERSTACK_ACCESS_KEY',
    'build': 'protractor-browserstack',
    'name': 'local_test',
    'browserName': 'chrome',
    'browserstack.local': true,
    'browserstack.debug': 'true'
  },

  // Add Jasmine JUnit reporter
  onPrepare: function() {
    require('jasmine-reporters');
    jasmine.getEnv().addReporter(
        new jasmine.JUnitXmlReporter('xmloutput', true, true)
    );
  },

  // Code to start browserstack local before start of test
  beforeLaunch: function(){
    console.log("Connecting local");
    return new Promise(function(resolve, reject){
      exports.bs_local = new browserstack.Local();
      exports.bs_local.start({'key': exports.config.capabilities['browserstack.key'] }, function(error) {
        if (error) return reject(error);
        console.log('Connected. Now testing...');

        resolve();
      });
    });
  },

  // Code to stop browserstack local after end of test
  afterLaunch: function(){
    return new Promise(function(resolve, reject){
      exports.bs_local.stop(resolve);
    });
  }
};

Note that the syntax for jasmine-reporters 2.x looks different:

var jasmineReporters = require('jasmine-reporters');
    jasmine.getEnv().addReporter(
        new jasmineReporters.JUnitXmlReporter('xmloutput', true, true)
    );

The old syntax throws the error “JUnitXmlReporter is not a constructor” if you use it with jasmine-reporters 2.x. With the new syntax, version 2.x did not throw any errors, but it also did not create any XML reports in my case.
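If you want a configuration that tolerates either major version, a small helper can pick whichever constructor is available. This is a hypothetical sketch of mine (resolveJUnitXmlReporter is not part of the BrowserStack example project):

```javascript
// Hypothetical helper: jasmine-reporters 1.x attaches JUnitXmlReporter to the
// global jasmine object, while 2.x exports it on the module itself. Return
// whichever constructor exists, or null if neither is present.
function resolveJUnitXmlReporter(jasmineGlobal, reportersModule) {
  return (jasmineGlobal && jasmineGlobal.JUnitXmlReporter) ||
         (reportersModule && reportersModule.JUnitXmlReporter) ||
         null;
}

// In onPrepare() you would then call something like:
//   var Reporter = resolveJUnitXmlReporter(jasmine, require('jasmine-reporters'));
//   jasmine.getEnv().addReporter(new Reporter('xmloutput', true, true));
```

Keep in mind that even when the 2.x constructor resolves, the reports may still not be written in my Protractor setup, so pinning version 1.0.0 remains the safer route here.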

Step 2.11.2 (optional): Run Jenkins Job and Check XML Files

-> Build Now

-> Click on the latest build (the one with the highest number)

-> Console Output

Now the XML reports should have been created. To check this, let us connect to the Jenkins Docker container:

(dockerhost)$ docker exec -it jenkins bash
(container)$ cd workspace/BrowserStackJob
(container)$ ls -l xmloutput
total 1
-rwxrwxrwx 1 900 900 311 Mar 18 13:18 TEST-BrowserStackLocalTesting.xml

Step 2.12: Generate individual Jenkins Test report

Step 2.12.1: Specify the Jenkins Report Path

Jenkins dashboard

-> BrowserStackJob

-> Configure

-> Add post-build action: Publish JUnit test result report

-> in the “Test report XMLs” field, specify the path to the XML reports as defined in conf/local.conf.js (in our case: xmloutput/*.xml):


-> Save

Step 2.12.2: Generate the individual Jenkins Test Report

-> Build Now

Now the build report is showing a link to the Test Result:

-> clicking on Test Result reveals a table showing how many tests have failed (0), passed (1), the total (1), and the diff (+1)

Individual Test Result

For troubleshooting: the console output should show the term “Recording test results”:

Step 2.13: Generate a Test Result Trend Graph

Just rerun “Build Now”, so that you have at least two, or better three, builds with test results. Then the Test Result Trend graph will show up.

Jenkins Job showing Test Results Trend

Note: do not use the aggregated downstream results:

This had caused a lot of confusion, because the number of tests is shown as zero, as can be seen in the graph for builds #19 to #23. Then I found Phil’s answer to this Stackoverflow question, which helped me resolve the issue by removing the aggregation. And voilà, the number of tests is correct again (build #24).

 

Excellent!

Appendix A: Error : “/bin/sh: 1: node: not found”

Symptoms: Error “node not found”

Full log:

root@6dbba34bf92c:/vagrant/protractor-browserstack# npm install
> bufferutil@1.2.1 install /vagrant/protractor-browserstack/node_modules/bufferutil
> node-gyp rebuild

/bin/sh: 1: node: not found
gyp: Call to 'node -e "require('nan')"' returned exit status 127 while in binding.gyp. while trying to load binding.gyp
gyp ERR! configure error
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (/usr/share/node-gyp/lib/configure.js:354:16)
gyp ERR! stack at emitTwo (events.js:87:13)
gyp ERR! stack at ChildProcess.emit (events.js:172:7)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12)
gyp ERR! System Linux 4.2.0-42-generic
gyp ERR! command "/usr/bin/nodejs" "/usr/bin/node-gyp" "rebuild"
gyp ERR! cwd /vagrant/protractor-browserstack/node_modules/bufferutil
gyp ERR! node -v v4.2.6
gyp ERR! node-gyp -v v3.0.3
gyp ERR! not ok
npm WARN install:bufferutil@1.2.1 bufferutil@1.2.1 install: `node-gyp rebuild`
npm WARN install:bufferutil@1.2.1 Exit status 1

> utf-8-validate@1.2.2 install /vagrant/protractor-browserstack/node_modules/utf-8-validate
> node-gyp rebuild

/bin/sh: 1: node: not found
gyp: Call to 'node -e "require('nan')"' returned exit status 127 while in binding.gyp. while trying to load binding.gyp
gyp ERR! configure error
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (/usr/share/node-gyp/lib/configure.js:354:16)
gyp ERR! stack at emitTwo (events.js:87:13)
gyp ERR! stack at ChildProcess.emit (events.js:172:7)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12)
gyp ERR! System Linux 4.2.0-42-generic
gyp ERR! command "/usr/bin/nodejs" "/usr/bin/node-gyp" "rebuild"
gyp ERR! cwd /vagrant/protractor-browserstack/node_modules/utf-8-validate
gyp ERR! node -v v4.2.6
gyp ERR! node-gyp -v v3.0.3
gyp ERR! not ok
npm WARN install:utf-8-validate@1.2.2 utf-8-validate@1.2.2 install: `node-gyp rebuild`
npm WARN install:utf-8-validate@1.2.2 Exit status 1

Resolution: make sure the Node.js “node” is found via the PATH

In case of Ubuntu, there is an executable node in /usr/sbin/node that has nothing to do with Node.js. The problem is that Node.js is installed as nodejs instead of node, while many npm commands try to execute node. We need to make sure that the npm installation commands find the correct Node.js node. One way of doing so is to link /usr/bin/node to nodejs in the same folder. This works, since per default /usr/sbin comes after /usr/bin in the PATH.

$ sudo ln -s nodejs /usr/bin/node

or

# ln -s nodejs /usr/bin/node

if you are root (we are root in the container above).

Then make sure that the correct node is found:

$ which node
/usr/bin/node

If the other /usr/sbin/node executable is hiding /usr/bin/node, then you might need to adapt the PATH with export PATH=/usr/bin:$PATH.
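Besides which, you can also ask Node.js itself which binary is executing; this is a quick sanity check of my own, not part of the original instructions:

```javascript
// Prints the absolute path of the Node.js binary that is executing this
// script; after the symlink fix, running "node check.js" should report a
// path under /usr/bin (or the nodejs binary the symlink points to).
console.log(process.execPath);
```

If the printed path is not the binary you expect, the PATH adaptation described above is still needed.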

After adding the symbolic link, the full log looks as follows:

root@6dbba34bf92c:/vagrant/protractor-browserstack# npm install
> bufferutil@1.2.1 install /vagrant/protractor-browserstack/node_modules/bufferutil
> node-gyp rebuild

make: Entering directory '/vagrant/protractor-browserstack/node_modules/bufferutil/build'
 CXX(target) Release/obj.target/bufferutil/src/bufferutil.o
 SOLINK_MODULE(target) Release/obj.target/bufferutil.node
 COPY Release/bufferutil.node
make: Leaving directory '/vagrant/protractor-browserstack/node_modules/bufferutil/build'

> utf-8-validate@1.2.2 install /vagrant/protractor-browserstack/node_modules/utf-8-validate
> node-gyp rebuild

make: Entering directory '/vagrant/protractor-browserstack/node_modules/utf-8-validate/build'
 CXX(target) Release/obj.target/validation/src/validation.o
 SOLINK_MODULE(target) Release/obj.target/validation.node
 COPY Release/validation.node
make: Leaving directory '/vagrant/protractor-browserstack/node_modules/utf-8-validate/build'
protractor-browserstack@0.1.0 /vagrant/protractor-browserstack
`-- protractor@2.5.1
  `-- selenium-webdriver@2.47.0
    `-- ws@0.8.1
      +-- bufferutil@1.2.1
      | +-- bindings@1.2.1
      | `-- nan@2.5.1
      `-- utf-8-validate@1.2.2
        `-- nan@2.4.0

Appendix B: Solve Git Problem: “tell me who you are”

Symptoms: Git Error: status code 128

In a new installation of Jenkins, Git does not seem to work out of the box. You can see this by choosing the Jenkins project Job-DSL-Hello-World-Job on the dashboard and clicking “Build Now”, if the build was not already triggered automatically. Then:

-> Build History

-> Last Build (the link works only if Jenkins is running on localhost:8080 and you have chosen the same job name)

-> Console Output

There, we will see:

Caused by: hudson.plugins.git.GitException: Command "git tag -a -f -m Jenkins Build #1 jenkins-Job-DSL-Hello-World-Job-1" returned status code 128:
stdout: 
stderr: 
*** Please tell me who you are.

Run

  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal: empty ident name (for <jenkins@61915398735e.(none)>) not allowed

Resolution:

Step 1: Enter Git Username and Email

As described in this StackOverflow Q&A, we can resolve this issue either by suppressing the Git tagging, or (I think this is better) by adding your username and email address to the Git configuration:


-> Manage Jenkins

-> Configure System

-> scroll down to “Git plugin”

-> Git plugin; user.name = jenkins and user.email = admin@jenkins.org

Step 2: Re-run “Build Now” on the Project

To test the new configuration, we go to

-> the Job-DSL-Hello-World-Job and press

-> Build Now

Now, we should see a BUILD SUCCESS as follows:

-> Build History

-> #nnn

-> Console Output

If everything went fine, we will see a “BUILD SUCCESS”:

Job DSL Hello World Job: BUILD SUCCESS

Appendix C: NPM Error: “Auth Token must be alphanumeric characters only”

Symptoms

After cloning the protractor-browserstack repository and successfully installing the dependencies, the following command fails with a cryptic message:

/vagrant/protractor-browserstack/node_modules/q/q.js:126
 throw e;

But reading further, there is a meaningful error message like follows:

LocalError: Auth Token must be alphanumeric characters only. Please fetch it from Local Testing section of settings page: https://www.browserstack.com/accounts/settings

The full log looks as follows:

# npm run local
> protractor-browserstack@0.1.0 local /vagrant/protractor-browserstack
> protractor conf/local.conf.js

Connecting local

/vagrant/protractor-browserstack/node_modules/q/q.js:126
 throw e;
 ^
LocalError: Auth Token must be alphanumeric characters only. Please fetch it from Local Testing section of settings page: https://www.browserstack.com/accounts/settings
 at /vagrant/protractor-browserstack/node_modules/browserstack-local/lib/Local.js:57:20
 at ChildProcess.exithandler (child_process.js:204:7)
 at emitTwo (events.js:87:13)
 at ChildProcess.emit (events.js:172:7)
 at maybeClose (internal/child_process.js:821:16)
 at Process.ChildProcess._handle.onexit (internal/child_process.js:211:5)

npm ERR! Linux 4.2.0-42-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "local"
npm ERR! node v4.2.6
npm ERR! npm v3.5.2
npm ERR! code ELIFECYCLE
npm ERR! protractor-browserstack@0.1.0 local: `protractor conf/local.conf.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the protractor-browserstack@0.1.0 local script 'protractor conf/local.conf.js'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the protractor-browserstack package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! protractor conf/local.conf.js
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs protractor-browserstack
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls protractor-browserstack
npm ERR! There is likely additional logging output above.
npm ERR! Linux 4.2.0-42-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "local"
npm ERR! node v4.2.6
npm ERR! npm v3.5.2
npm ERR! path npm-debug.log.705992348
npm ERR! code ETXTBSY
npm ERR! errno -26
npm ERR! syscall rename

npm ERR! ETXTBSY: text file is busy, rename 'npm-debug.log.705992348' -> 'npm-debug.log'
npm ERR!
npm ERR! If you need help, you may report this error at:
npm ERR! <https://github.com/npm/npm/issues>

npm ERR! Please include the following file with any support request:
npm ERR! /vagrant/protractor-browserstack/npm-debug.log

Resolution: Specify BrowserStack Username and Password in the correct environment variables

The error indicates that the BrowserStack User and/or Password is not set correctly. I had hit that problem, since I was testing another example with Gulp on the same system and the BrowserStack User and Password variables looked similar, but were not exactly the same.

To resolve the issue, let us specify the BrowserStack credentials you can find in the “Automate” section on your BrowserStack Account Settings page:

(container)# export BROWSERSTACK_USERNAME=your_browserstack_user_id
(container)# export BROWSERSTACK_ACCESS_KEY=your_browserstack_key

After that, the full log looks as follows:

# npm run local
> protractor-browserstack@0.1.0 local /vagrant/protractor-browserstack
> protractor conf/local.conf.js

Connecting local
Connected. Now testing...
Using the selenium server at http://hub-cloud.browserstack.com/wd/hub
[launcher] Running 1 instances of WebDriver
.

Finished in 0.925 seconds
1 test, 1 assertion, 0 failures

[launcher] 0 instance(s) of WebDriver still running
[launcher] chrome #1 passed

Appendix D: NPM Error: TypeError: jasmine.JUnitXmlReporter is not a constructor

Symptoms

This error occurred with the Protractor GitHub example from BrowserStack when jasmine-reporters 2.2.1 is used together with the 1.x syntax (see below). The result is “Error: TypeError: jasmine.JUnitXmlReporter is not a constructor”.

In package.json, jasmine-reporters 2.2.1 is defined as a dependency:

package.json

{
  "name": "protractor-browserstack",
  "version": "0.1.0",
  "readme": "Protractor Integration with [BrowserStack](https://www.browserstack.com)",
  "description": "Selenium examples for Protractor and BrowserStack Automate",
  "scripts": {
    "test": "npm run single && npm run local && npm run parallel",
    "single": "./node_modules/.bin/protractor conf/single.conf.js",
    "local": "./node_modules/.bin/protractor conf/local.conf.js",
    "parallel": "./node_modules/.bin/protractor conf/parallel.conf.js",
    "parallel_local": "./node_modules/.bin/protractor conf/parallel_local.conf.js"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/browserstack/protractor-browserstack"
  },
  "dependencies": {
    "browserstack-local": "^1.0.0",
    "protractor": "^2.5.1",
    "jasmine-reporters": "^2.2.1"
  },
  "license": "MIT"
}

In conf/local.conf.js, I erroneously had used the jasmine-reporters 1.x style syntax:

conf/local.conf.js

var browserstack = require('browserstack-local');

exports.config = {
  'specs': [ '../specs/local.js' ],
  'seleniumAddress': 'http://hub-cloud.browserstack.com/wd/hub',

  'capabilities': {
    'browserstack.user': process.env.BROWSERSTACK_USERNAME || 'BROWSERSTACK_USERNAME',
    'browserstack.key': process.env.BROWSERSTACK_ACCESS_KEY || 'BROWSERSTACK_ACCESS_KEY',
    'build': 'protractor-browserstack',
    'name': 'local_test',
    'browserName': 'chrome',
    'browserstack.local': true,
    'browserstack.debug': 'true'
  },

  // Add Jasmine JUnit reporter
  onPrepare: function() {
    require('jasmine-reporters');
    jasmine.getEnv().addReporter(
        new jasmine.JUnitXmlReporter('xmloutput', true, true)
    );
  },

  // Code to start browserstack local before start of test
  beforeLaunch: function(){
    console.log("Connecting local");
    return new Promise(function(resolve, reject){
      exports.bs_local = new browserstack.Local();
      exports.bs_local.start({'key': exports.config.capabilities['browserstack.key'] }, function(error) {
        if (error) return reject(error);
        console.log('Connected. Now testing...');

        resolve();
      });
    });
  },

  // Code to stop browserstack local after end of test
  afterLaunch: function(){
    return new Promise(function(resolve, reject){
      exports.bs_local.stop(resolve);
    });
  }
};

Note that the syntax for jasmine-reporters 2.x looks different:

var jasmineReporters = require('jasmine-reporters');
    jasmine.getEnv().addReporter(
        new jasmineReporters.JUnitXmlReporter('xmloutput', true, true)
    );

The old syntax throws the error “JUnitXmlReporter is not a constructor” if you use it with jasmine-reporters 2.x. With the new syntax, version 2.x did not throw any errors, but it also did not create any XML reports in my case.

 

This creates the above error. See the full Jenkins log:

Started by user Jenkins Admin
Building in workspace /var/jenkins_home/workspace/BrowserStackJob
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/oveits/protractor-browserstack # timeout=10
Fetching upstream changes from https://github.com/oveits/protractor-browserstack
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/oveits/protractor-browserstack +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision aad0d2c559ba55885c28316776f2053e590b1393 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f aad0d2c559ba55885c28316776f2053e590b1393
 > git rev-list 41afcec2b4b430129f4d83a03837c215c942912e # timeout=10
[BrowserStack] Local: Starting BrowserStack Local...
[BrowserStack] Local: Started
[BrowserStack] BROWSERSTACK_USER=oliverveits1
[BrowserStack] BROWSERSTACK_ACCESSKEY=********************
[BrowserStack] BROWSERSTACK_LOCAL=true
[BrowserStack] BROWSERSTACK_LOCAL_IDENTIFIER=bc7f6106f7f4421e843cd7dca7c039c1
[BrowserStack] BROWSERSTACK_BUILD=jenkins-BrowserStackJob-14
[BrowserStack] BROWSERSTACK_USER=oliverveits1
[BrowserStack] BROWSERSTACK_ACCESSKEY=********************
[BrowserStack] BROWSERSTACK_LOCAL=true
[BrowserStack] BROWSERSTACK_LOCAL_IDENTIFIER=bc7f6106f7f4421e843cd7dca7c039c1
[BrowserStack] BROWSERSTACK_BUILD=jenkins-BrowserStackJob-14
[BrowserStackJob] $ /bin/sh -xe /tmp/hudson3133472361627303547.sh
+ export BROWSERSTACK_USERNAME=<removed manually>
+ export BROWSERSTACK_ACCESS_KEY=<removed manually>
+ npm install
protractor-browserstack@0.1.0 /var/jenkins_home/workspace/BrowserStackJob
└─┬ jasmine-reporters@2.2.1 
  └── xmldom@0.1.27 

+ npm run local

> protractor-browserstack@0.1.0 local /var/jenkins_home/workspace/BrowserStackJob
> protractor conf/local.conf.js

Connecting local
Connected. Now testing...
Using the selenium server at http://hub-cloud.browserstack.com/wd/hub
[launcher] Running 1 instances of WebDriver
[launcher] Error: TypeError: jasmine.JUnitXmlReporter is not a constructor
    at onPrepare (/var/jenkins_home/workspace/BrowserStackJob/conf/local.conf.js:25:7)
    at /var/jenkins_home/workspace/BrowserStackJob/node_modules/protractor/lib/util.js:56:41
    at Function.promise (/var/jenkins_home/workspace/BrowserStackJob/node_modules/q/q.js:650:9)
    at Object.exports.runFilenameOrFn_ (/var/jenkins_home/workspace/BrowserStackJob/node_modules/protractor/lib/util.js:46:12)
    at Runner.runTestPreparer (/var/jenkins_home/workspace/BrowserStackJob/node_modules/protractor/lib/runner.js:76:17)
    at Object.exports.run (/var/jenkins_home/workspace/BrowserStackJob/node_modules/protractor/lib/frameworks/jasmine.js:68:17)
    at /var/jenkins_home/workspace/BrowserStackJob/node_modules/protractor/lib/runner.js:333:35
    at _fulfilled (/var/jenkins_home/workspace/BrowserStackJob/node_modules/q/q.js:797:54)
    at self.promiseDispatch.done (/var/jenkins_home/workspace/BrowserStackJob/node_modules/q/q.js:826:30)
    at Promise.promise.promiseDispatch (/var/jenkins_home/workspace/BrowserStackJob/node_modules/q/q.js:759:13)
[launcher] Process exited with error code 100

npm ERR! Linux 4.2.0-42-generic
npm ERR! argv "/var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_7.7.1/bin/node" "/var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/NodeJS_7.7.1/bin/npm" "run" "local"
npm ERR! node v7.7.1
npm ERR! npm  v4.1.2
npm ERR! code ELIFECYCLE
npm ERR! protractor-browserstack@0.1.0 local: `protractor conf/local.conf.js`
npm ERR! Exit status 100
npm ERR! 
npm ERR! Failed at the protractor-browserstack@0.1.0 local script 'protractor conf/local.conf.js'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the protractor-browserstack package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     protractor conf/local.conf.js
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs protractor-browserstack
npm ERR! Or if that isn't available, you can get their info via:
npm ERR!     npm owner ls protractor-browserstack
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /var/jenkins_home/workspace/BrowserStackJob/npm-debug.log
Build step 'Execute shell' marked build as failure
[BrowserStack] Local: Stopping BrowserStack Local...
[BrowserStack] Local: Stopped
Finished: FAILURE

Resolution

Either change the jasmine-reporters version in package.json to 1.0.0, or change the syntax in conf/local.conf.js to something similar to:

var jasmineReporters = require('jasmine-reporters');
jasmine.getEnv().addReporter(
    new jasmineReporters.JUnitXmlReporter('xmloutput', true, true)
);

See e.g. the accepted answer of this StackOverflow question.

However, version 2.x.x did not create any XML reports in my case. This seems to be a known incompatibility with Protractor. Therefore, if you are using Protractor, I recommend changing the jasmine-reporters version to 1.0.0 and keeping conf/local.conf.js in the version 1 syntax.
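For reference, the version 1 syntax inside onPrepare of conf/local.conf.js looks roughly like this (a sketch based on the jasmine-reporters 1.x API, where the reporter constructors are attached to the global jasmine object; 'xmloutput' is the example output directory):

```javascript
// jasmine-reporters 1.x: reporters hang off the global jasmine object
require('jasmine-reporters');
jasmine.getEnv().addReporter(
    new jasmine.JUnitXmlReporter('xmloutput', true, true)
);
```

This is exactly the syntax that produces the "jasmine.JUnitXmlReporter is not a constructor" error shown above when combined with jasmine-reporters 2.x.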

Appendix E: Jenkins Test Trend showing no Tests

Symptoms

The Jenkins Test Trend shows zero tests, even though the XML reports are present in the Workspace. Here in builds #19 to #23:

Jenkins Job showing Test Results Trend

Resolution

Remove the aggregated downstream results under your project > configure:

See also Phil’s answer to this StackOverflow question, which helped me to resolve the issue by removing the aggregation. And voilà, the number of tests is correct again (build #24).

Appendix F: Updating Jenkins

Updating Jenkins (in my case: from 2.32.1 to 2.32.2) was as simple as following the steps below

Note: you might want to make a backup of your jenkins_home though. Just in case…
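A quick backup can be a simple tarball of the directory. The sketch below creates a stand-in jenkins_home so the commands can be tried anywhere; on the real host, run the tar command from the directory containing your actual jenkins_home (e.g. /vagrant):

```shell
# Stand-in directory for illustration; replace with your real jenkins_home
mkdir -p jenkins_home/jobs

# Create a dated tarball of the Jenkins home directory
BACKUP_FILE="jenkins_home-backup-$(date +%F).tar.gz"
tar czf "$BACKUP_FILE" jenkins_home/

# Verify that the archive contains the expected paths
tar tzf "$BACKUP_FILE"
```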

(dockerhost)$ cd <path_to_jenkins_home> # in my case: cd /vagrant/jenkins_home/
(dockerhost)$ docker pull jenkins # to update the jenkins image
(dockerhost)$ docker rm jenkins # to make sure the container named jenkins is removed
(dockerhost:jenkins_home)$ sudo docker run -d --name jenkins -p8080:8080 -p50000:50000 -v`pwd`:/var/jenkins_home jenkins # note: --rm dropped, since it conflicts with -d on Docker 1.12

However, after that, some data was unreadable:

2017-02-24-20_04_36-manage-old-data-jenkins

I have clicked

-> Manage Jenkins
-> Manage
-> Discard Unreadable Data

to resolve the issue (hopefully…). At least, after that, the warning was gone.

Summary

In this blog post, we

  • got acquainted with BrowserStack
    • performed manual tests
  • learned about BrowserStack local testing, which allows us to run remote browsers that show the content of websites available locally only
  • ran BrowserStack tests from command line
    • installed Node.js and NPM and Git
    • cloned a Protractor example with BrowserStack
    • performed automated tests from command line
  • integrated BrowserStack into Jenkins
    • installed a Docker host and a Jenkins Docker Container
    • installed the BrowserStack Plugin, Node.js, NPM and Protractor on Jenkins
    • performed the NPM installation in a shell script
    • performed the automated tests as part of the Jenkins pipeline
    • installed Jasmine to make sure that individual and trend test reports are shown
  • finally, recorded several error situations and their resolutions as appendices

We have seen that BrowserStack can help to perform tests with many different browsers on many different operating systems and hardware without the need to buy and install any mobile equipment.

Further Reading

1. Jenkins Part 4.2: Code Quality Tests via Checkstyle


2017-01-19-23_30_11

Today, we will show how to use Checkstyle for improving the style of Java code. First, we will add Checkstyle to Gradle in order to create XML reports for a single build. Jenkins allows us to visualize the results of more than one test/build run in historic reports. After that, we will show how a developer can use the Eclipse Checkstyle plugin in order to create better code:

2017-02-01-04_43_16-github-triggered-build-jenkins

This blog post series is divided into the following parts:

    • Part 1: Installation and Configuration of Jenkins, loading Plugins
    • Part 2: Creating our first Jenkins job: GitHub download and Software build
    • Part 3: Periodic and automatically triggered Builds
    • Part 4.1: running automated tests: Functional Tests via Java JUnit
    • Part 4.2: running automated tests: Code Quality Test via Checkstyle (this post)
    • Part 4.3: running automated tests: Performance Tests with JMeter (work in progress)

What is Jenkins?

Jenkins is the leading open source automation server mostly used in continuous integration and continuous deployment pipelines. Jenkins provides hundreds of plugins to support building, deploying and automating any project.

 

Jenkins build, test and deployment pipeline
Jenkins build, test and deployment pipeline

A typical workflow is visualized above: a developer checks in the code changes into the repository. Jenkins will detect the change, build (compile) the software, test it and prepare to deploy it on a system. Depending on the configuration, the deployment is triggered by a human, or performed automatically by Jenkins.

For more information, see the introduction found in part 1 of this blog series.

Checking Code with Checkstyle

In this post, we will show how to configure Jenkins for automated code checking as part of the Post-Build Tests:

2017-01-19-04_15_46

After following this tutorial, we will have learned how to apply standard or custom code quality checks using Checkstyle in Eclipse and Jenkins.

Tools & Versions used

      • Vagrant 1.8.6
      • Virtualbox 5.0.20
      • Docker 1.12.1
      • Jenkins 2.32.1
        • Checkstyle Plug-in 3.47
      • Eclipse Kepler Service Release 2 (Build id: 20140224-0627)
        • Checkstyle Plug-in 7.2.0.201611082205

Prerequisites:

      • Free DRAM for the Docker Host VM: ~4 GB or more
      • Docker Host is available, Jenkins is installed and a build process is configured. For that, perform all steps in part 1 to part 3 of this blog series (new: you can now skip part 1, if you wish)
      • Tested with 2 vCPU (1 vCPU might work as well)

Step 1: Start Jenkins in interactive Terminal Mode

Make sure that port 8080 is unused on the Docker host. If you were following all steps in part 1 of the series, you might need to stop cadvisor:

(dockerhost)$ sudo docker stop cadvisor

I assume that jenkins_home is already created, all popular plugins are installed and an Admin user has been created as shown in part 1 of the blog series. We start the Jenkins container with the jenkins_home Docker host volume mapped to /var/jenkins_home:

(dockerhost)$ cd <path_to_jenkins_home> # in my case: cd /vagrant/jenkins_home/
(dockerhost:jenkins_home)$ sudo docker run -it --rm --name jenkins -p8080:8080 -p50000:50000 -v`pwd`:/var/jenkins_home jenkins
Running from: /usr/share/jenkins/jenkins.war
...
--> setting agent port for jnlp
--> setting agent port for jnlp... done

Step 2: Open Jenkins in a Browser

Now we want to connect to the Jenkins portal. For that, open a browser and open the URL

<your_jenkins_host>:8080

In our case, Jenkins is running in a container and we have mapped the container-port 8080 to the local port 8080 of the Docker host. On the Docker host, we can open the URL.

localhost:8080

Note: In case of Vagrant with VirtualBox, per default, there is only a NAT-based interface and you need to create port-forwarding for any port you want to reach from outside (also the local machine you are working on is to be considered as outside). In this case, we need to add an entry in the port forwarding list of VirtualBox:
2016-11-30-19_22_22-regel-fur-port-weiterleitung

We have created this entry in part 1 already, but I have seen that the entries were gone again, which seems to be a VirtualBox bug. I have added it again now.
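As an alternative to re-adding the rule in the VirtualBox GUI after every loss, the forwarding can be made persistent in the Vagrantfile (a sketch; 8080 is the Jenkins port used in this post):

```ruby
# Vagrantfile fragment: forward host port 8080 to guest port 8080
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 8080, host: 8080
end
```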

Log in with the admin account we have created in the last session:

2016-12-09-10_24_00-jenkins

Step 3: Code Analysis: Checkstyle

With Gradle, we can invoke the Checkstyle plugin as follows:

Step 3.1: Prepare Gradle for performing Checkstyle

Add to build.gradle:

apply plugin: 'checkstyle'

tasks.withType(Checkstyle) {
    ignoreFailures = true
    reports {
        html.enabled = true
    }
}

We have set ignoreFailures to true, since we do not want the Gradle build to fail; for now, we are just interested in the Checkstyle reports.

We can download an example Checkstyle configuration file from the Apache Camel repository, for example:

git clone <yourprojectURL>
mkdir -p <yourprojectDir>/config/checkstyle/
curl https://raw.githubusercontent.com/apache/camel/master/buildingtools/src/main/resources/camel-checkstyle.xml > <yourprojectDir>/config/checkstyle/checkstyle.xml
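Gradle's Checkstyle plugin picks up config/checkstyle/checkstyle.xml by default, so the download location above needs no further configuration. If you prefer to keep the file elsewhere, the path can be set explicitly in build.gradle (a sketch; the file name below is an example):

```groovy
// Optional: only needed if the configuration is not at the default location
checkstyle {
    configFile = file('config/checkstyle/camel-checkstyle.xml')
}
```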

Step 3.2 (optional): Test Checkstyle locally

If you do not have Git and/or Gradle installed, you may want to skip this step and directly proceed to the next step, so that Jenkins performs this task for you.

We can locally invoke CheckStyle as follows:

gradle check

Step 3.3: Configure Jenkins to invoke Checkstyle

Adding Gradle Checkstyle tests to be performed before each build is as simple as performing Step 3.1 and then adding “check” as a task to the list of Jenkins Build Gradle Tasks:

On Dashboard -> Click on Project name -> Configure -> Build, add “check” before the jar task:

2016-12-28-15_33_58-github-triggered-build-config-jenkins

Click Save.
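The Tasks field of the Gradle build step would then read, for example (note that Gradle's check task depends on test, so the unit tests from Part 4.1 keep running as well):

```
check jar
```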

Now we verify the setting by either checking changed code into the SW repository (now is a good time to commit and push the changes performed in Step 3.1) or by clicking “Build now” -> Click on Build Link in Build History -> Console Output in the Project home:

2016-12-28-15_39_37-github-triggered-build-725-console-jenkins2016-12-28-15_40_08-github-triggered-build-725-console-jenkins

We have received a very long list of Checkstyle errors, but, as configured, the build does not fail.

At the same time, CheckStyle Reports should be available on Jenkins now:

The links specified in the output are only available on Jenkins, but since Jenkins is running as a Docker container on a Vagrant VM whose jenkins_home resides in

D:\veits\Vagrant\ubuntu-trusty64-docker_openshift-installer\jenkins_home

I need to access the files on

file:///D:/veits/Vagrant/ubuntu-trusty64-docker_openshift-installer/jenkins_home/workspace/GitHub%20Triggered%20Build/build/reports/checkstyle/

2016-12-28-15_48_11-index-von-d__veits_vagrant_ubuntu-trusty64-docker_openshift-installer_jenkins_ho

And on main.html we find:

2016-12-28-15_49_04-main-html

Wow, it seems like I really need to clean the code…

Step 4: Visualize the CheckStyle Warnings and Errors to the Developer

Usually, Jenkins is not running as a Docker container on the developer’s PC or notebook, so the developer has no access to the above report files. We need to publish the statistics via the Jenkins portal. For that, we need to install the Checkstyle Jenkins plugin:

Step 4.1 (optional): Install the “Static Analysis Utilities”

Note: I have not tried it out, but I believe that this step is not necessary, since the next step will automatically install all plugins the Checkstyle plug-in depends on.

On Jenkins -> Manage Jenkins -> Manage Plugins -> Available

In the Filter field, type “Static Analysis U”

2016-12-28-22_44_53-update-center-jenkins

Check the checkbox of “Static Analysis Utilities” and Install without restart.

2016-12-28-22_47_06-update-center-jenkins

Step 4.2: Install Checkstyle Plugin

On Jenkins -> Manage Jenkins -> Manage Plugins -> Available

In the Filter field, type “Checkstyle ” (with white space at the end; this will limit the number of hits):

2016-12-28-22_56_26-update-center-jenkins

Check the checkbox of “Checkstyle Plug-in” and Install without restart.

2016-12-28-22_58_22-update-center-jenkins

Step 4.3: Configure Checkstyle Reporting in Jenkins

On Dashboard -> <your Project> -> Configure -> Post-build Actions -> Add post-build action, choose

Publish Checkstyle analysis results

Now add the path, where Gradle is placing its result xml files:

**/build/reports/checkstyle/*.xml

2016-12-28-23_10_57-github-triggered-build-config-jenkins

And click Save.

Step 4.4: Manually trigger a new Build

On the Project page, click “Build now”, then click on the build and then “Console output”:

2016-12-28-23_17_16-github-triggered-build-726-console-jenkins

We now can see [CHECKSTYLE] messages after the build, telling us that the reports were collected. But where can we see them?

Step 4.5: Review Checkstyle Statistics

On the Project page, choose Status:

2016-12-28-23_20_31-github-triggered-build-726-jenkins-v2

and click on Checkstyle Warnings on the left, or the warnings link in the center of the page, and we get a graphical representation of the Checkstyle statistics:

2016-12-29-12_27_34-jenkins

When clicking on one of the File Links (MyRouteBuilder.java in this case), we can get an overview of the Warning types for this file:

2016-12-29-12_28_37-jenkins

We choose the category Indentation and get details on the warnings:

2016-12-29-09_03_58-jenkins

and after clicking on one of the links in the Warnings field, we see the java code causing the warning:

2016-12-29-09_05_56-jenkins

Okay, Camel’s Checkstyle configuration does not like my style of grouping each route’s first line with a smaller indent than the rest of the route:

2016-12-29-09_10_26-jenkins

And it does not seem to accept my style of putting the ; on a new line at the end of a route, as seen by choosing the Whitespace category and then choosing an occurrence:

2016-12-29-12_34_10-jenkins

I either need to change this style, or I need to adapt the checkstyle.xml configuration file to ignore those warnings.
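For the latter option, one way is to lower the severity of the offending checks in the copied checkstyle.xml. The sketch below shows the idea (the severity property is supported by all Checkstyle checks; in the Camel file, the property would be added to the already existing Indentation module rather than a new one):

```xml
<module name="Checker">
  <module name="TreeWalker">
    <!-- silence the Indentation check without deleting it -->
    <module name="Indentation">
      <property name="severity" value="ignore"/>
    </module>
  </module>
</module>
```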

Step 5: Improve Code Style

For the developer, it is very inconvenient to use the Jenkins Checkstyle messages from the console and match them with the code. We need something better than that: the Eclipse Checkstyle plugin.

Step 5.1: Install Eclipse Checkstyle Plugin via local Installation

Since the recommended installation via Marketplace did not work in my case (see Appendix A), I have followed some hints about a local installation found on StackOverflow:

Download Checkstyle from Sourceforge.

2016-12-30-13_54_36-add-repository

2016-12-30-13_55_14-install

In the next window, you are asked to specify some credentials we do not have. However, you can just ignore the window and click Cancel:

2016-12-30-14_01_54-login-required

->Cancel

Then the installation proceeds:

2016-12-30-14_04_17-install

2016-12-30-14_04_26-install

2016-12-30-14_04_33-installing-software

Now I had to click OK on security warnings twice:

2016-12-29-19_55_50-security-warning

At the end, I had to restart Eclipse:

2016-12-30-19_09_15-software-updates

Now, the Checkstyle plugin is installed on Eclipse.

Step 5.2: Configure Project for Checkstyle Usage

The project in question must be enabled for Checkstyle usage by editing the Project Properties:

2017-01-07-23_14_44

Choose the Checkstyle style: for now, let us select Google Checks in the drop-down list:

2017-01-07-23_18_41-properties-for-simple-restful-file-storage

Then confirm that the project is being re-built:

2017-01-07-23_18_50-rebuild-suggested

Now the code is more yellow than white, with many hints how to improve the code:

2017-01-07-23_28_00-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles

However, the hints do not go away if you correct the code. Do we need to rebuild again? Let us test:

2017-01-07-23_30_36-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles

Google style does not like that there is no empty line before the package line (sorry, the screenshot is in German):

2017-01-07-23_29_57-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles

So, let us add an empty line and save the file. However, the style warning does not change:

2017-01-07-23_31_53-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles

Let us rebuild the project:

2017-01-07-23_33_05

Yes, after the re-build: the warning has disappeared:

2017-01-07-23_43_01-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles

Step 5.3: Download and Create Custom Checkstyle Profile in Eclipse

In the Jenkins Checkstyle tests above, we have used following custom Checkstyle configuration file:

$ curl https://raw.githubusercontent.com/apache/camel/master/buildingtools/src/main/resources/camel-checkstyle.xml > <yourprojectDir>/config/checkstyle/checkstyle.xml

That is, the Checkstyle file is found at <yourprojectDir>/config/checkstyle/checkstyle.xml

Correct:

2017-01-07-23_49_13-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles

2017-01-07-23_52_04-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles

2017-01-07-23_52_57-preferences

2017-01-07-23_55_39-check-configuration-properties

Step 5.4: Assign Custom Checkstyle Profile to the Project

To assign the new Checkstyle profile to the project, we change the project’s Checkstyle properties by

Project->Properties -> Checkstyle

2017-01-07-23_14_44

-> Choose new Checkstyle profile -> OK

2017-01-08-00_01_13-properties-for-simple-restful-file-storage

On the Rebuild suggested window -> Yes

2017-01-08-00_01_18-rebuild-suggested

This works fine:

2017-01-18-02_29_51-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles-v2

In the code, we can see the Checkstyle warnings. To get more information on the specific Checkstyle warning, the warning text can be retrieved via the mouse over function on the left of the code line, or on the markers tab on the lower pane of Eclipse.

Step 5.5: Improve Code Style

Step 5.5.1: Change Code

In order to test, how the developer can improve the code style, let us replace some of the tabs by four spaces here:

2017-01-18-02_48_39-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles

Save the file now.

Step 5.5.2: Update Maven

Unfortunately, the Checkstyle warnings update process is a little cumbersome for custom Checkstyle profiles, it seems: we need to

  1. save the changed file,
  2. update Maven and
  3. rebuild the project.

Let us update Maven first:

right-click the project folder in the left pane -> Maven -> Update Project -> OK

2017-01-18-02_54_032017-01-18-02_58_21-update-maven-project

Then all Checkstyle markers are gone (although I have not changed all occurrences of a tab):

2017-01-18-02_59_32-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles

Step 5.5.3 Rebuild the Project

To get the Checkstyle warnings back, we need to rebuild the project:

Project -> Build Project

2017-01-18-03_02_56

Now we can see that some of the Checkstyle warnings are gone:

2017-01-18-03_04_05-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles

Next time you check the code into the Git repository, you will see that the number of Checkstyle warnings we get from Jenkins via Gradle will decrease…

Step 6: Verify Jenkins Results

Since we have improved the source code, we expect the Jenkins Checkstyle warnings to decrease. We can verify this by doing the following:

-> save, commit and push the improved code -> log into Jenkins -> check out the build process that is triggered by the code push (or we can manually trigger the build process by clicking project -> Build now)

On the dashboard, we will see that the Checkstyle statistics have (very) slightly improved:

2017-01-18-04_37_06-github-triggered-build-jenkins-v2

On the upper right edge of the figure, the number of warnings is slightly lower. The code quality is far from being perfect, but we now have all tools and plugins needed to improve the situation.

After replacing all tabs with four spaces each, the number of Checkstyle violations goes down by ~50%. That is a good start.

2017-01-19-22_51_58-github-triggered-build-jenkins-v2

Perfect, we have learned how to use the Checkstyle plugin for Eclipse in order to produce better code. And the Jenkins Checkstyle plugin allows us to admire the progress we make.

😉


Appendix A: Problems with installing Checkstyle Eclipse Plugin via Marketplace

Note: this way of installation is recommended officially, but has failed in my case. If you hit the same problem, try the local installation as described in step 5.1 above.

To improve the style, it would be much too cumbersome to click through all 360 style warnings, edit the Java code, build the code and check again. It is much better to give the programmer immediate feedback on the warnings within the IDE. I am using Eclipse, so I need to install the Checkstyle Eclipse plugin as follows:

Choose Eclipse -> Help -> Eclipse Marketplace

2016-12-29-09_16_30-java-ee-simple-restful-file-storage_src_main_java_de_oveits_simplerestfulfiles

Search for “Checkstyle” and click install:

2016-12-29-09_19_00-eclipse-marketplace

And then “confirm”:

2016-12-29-09_20_01-eclipse-marketplace

What is that?

2016-12-29-09_21_18-proceed-with-installation_

I install it anyway. At this point, it hangs for quite a while:

2016-12-29-09_24_24-eclipse-marketplace

so, let me get a morning coffee…

After approximately two minutes, I can see it proceed to 4/15. A good sign.

After the coffee, I still see 4 / 15. Not a good sign:

2016-12-29-09_44_41-eclipse-marketplace

Meanwhile I am researching the steps needed for performance testing…

After 2 hours or so: 6/15

This will take all day!

2016-12-29-11_31_35-eclipse-marketplace

Some hours later, I checked again and saw the following:

2016-12-29-19_49_10-eclipse-marketplace

I have confirmed and accepted the license:

2016-12-29-19_50_51-eclipse-marketplace

And have pressed Finish.

Then software gets installed:

2016-12-29-19_52_04-installing-software

I hope I will not break my good old Eclipse installation (it is installed locally, not in a virtual machine or container, and it has always worked better than any new version I have tested…).

After two or three minutes:

2016-12-29-19_55_57-security-warning

I have confirmed with “OK”…

Then I was asked to restart Eclipse, and I confirmed.

Problem: however, Checkstyle is still not installed:

Help -> Eclipse Marketplace

2016-12-30-13_20_09-eclipse-marketplace

Let us try again by clicking “Install”:

2016-12-30-13_24_45-eclipse-marketplace

2016-12-30-13_24_59-proceed-with-installation_

This does not work.

Workaround

Instead of installing Checkstyle via the Eclipse marketplace, better install the Eclipse Checkstyle Plugin via download (see Step 5.1)

Summary

In this blog post we have performed following tasks:

  1. Started Jenkins in a Docker container
  2. Installed the Checkstyle Gradle Plugin to create Checkstyle reports as XML files
  3. Installed the Checkstyle Jenkins Plugin to summarize the XML files into graphical historic reports
  4. Installed the Checkstyle Eclipse Plugin to be able to improve the code
  5. Installed custom Checkstyle policies
  6. Visualized the Code Improvement
  7. were happy

All steps apart from the installation of the Eclipse Checkstyle plugin were quite straightforward. For the Eclipse Checkstyle installation, we had to revert to the local download and installation method described in step 5.1, since the installation via Eclipse Marketplace had failed. In the end, we could reduce the number of Checkstyle warnings by 50% without much effort.

Further Reading

2. Jenkins Part 4.1: Functional Java Tests via JUnit


2016-11-30-18_19_38

You also think that functional tests are one of the most important ingredients for delivering high quality software? You share my opinion that we should help developers automate this task in order to get comparable results and meaningful trend reports?

I will cover functional tests here. Instructions on how to perform code quality tests and performance tests are in draft status and will be covered in the next two blog posts.

Any questions and/or comments are highly welcome.

Introduction

As a developer you try hard to deliver high quality software.

You hate searching for this nasty bug that had been introduced unnoticed days ago. Or was it weeks ago? By whom? In which code?

Manual functional and performance testing after each committed code change quickly becomes a no-go as the number of features rises constantly. In this blog post, we will show how Jenkins can help you with both: delivering high quality software and minimizing the time needed to find the cause of a bug.

How about …

  1. creating automated tests for each functionality and performance at different levels (end to end, and unit tests)
  2. running the automated tests after each code change
  3. keeping track of the test results

… in order to avoid any bad surprises late in the game?

Okay, for 1., the developer needs to create automated functional and performance tests; I guess there is no way around this. Better to do this even before writing the actual code. For 2. and 3., however, automation tools like Jenkins step in and can be of great help. The developer checks in the code, and Jenkins can do the rest.

In the current blog post, we will show how to integrate automated JUnit functional tests into a Jenkins build pipeline. We will see that JUnit tests can be invoked easily via Gradle (okay, Maven is more popular than Gradle, I guess, but I like Gradle because of some advantages I have discussed here; however, just give me a hint in a comment to this blog and I will prioritize the creation of a Maven version of this blog post). The Jenkins JUnit plug-in will be used to

  1. display reports on single build runs as well as
  2. display trend analysis graphs like the following one I have borrowed from here:
2016-12-30-18_41_45-jenkins-junit-project-home-jpg-826x707
Source: http://nelsonwells.net/2012/09/how-jenkins-ci-parses-and-displays-junit-output/

In this and the next two blog posts, we plan to cover following quality gate measures:

  • Part 4.1: Functional Tests (this blog post): we will use Java JUnit tests performed before building the executable JAR. Jenkins will report the test trend
  • Part 4.2: Code Quality Tests (coming soon): we will use the Checkstyle Gradle plugin for reporting to which degree the code adheres to the Apache Foundations formal rules
  • Part 4.3: Performance Tests (planned): we will use external performance testers like JMeter, run after the Java build, for testing and reporting the performance trend

Older blogs of this series:

This blog post series about Jenkins build pipelines is divided into the following parts:

    • Part 1: Installation and Configuration of Jenkins, loading Plugins
    • Part 2: Creating our first Jenkins job: GitHub download and Software build
    • Part 3: Periodic and automatically triggered Builds

What is Jenkins?

Jenkins is the leading open source automation server mostly used in continuous integration and continuous deployment pipelines. Jenkins provides hundreds of plugins to support building, deploying and automating any project.

2016-12-30-21_04_46

A typical workflow is visualized above: a developer checks in the code changes into the repository. Jenkins will detect the change, build (compile) the software, test it and prepare to deploy it on a system. Depending on the configuration, the deployment is triggered by a human, or performed automatically by Jenkins. After each step, the developer is informed, depending on the priorities defined.

For more information, see the introduction found in part 1 of this blog series.

Automated Functional Testing based on JUnit

In this blog post, we will show how we need to configure Gradle and Jenkins for automated JUnit testing and reporting. In order to build a quality gate, we will reverse the original order and perform the JUnit tests before we build the executable JAR file (we do not want to create JAR files that are not functional):

2016-12-28-12_50_23

Tools used

      • Vagrant 1.8.6
      • Virtualbox 5.0.20
      • Docker 1.12.1
      • Jenkins 2.19.3
        • JUnit Plug-in 1.19

Prerequisites:

      • Free DRAM for the Docker Host VM: ~4 GB or more
      • Docker Host is available, Jenkins is installed and a build process is configured. For that, perform all steps in part 1 to part 3 of this blog series
      • Tested with 2 vCPU (1 vCPU might work as well)

Step 1: Start Jenkins in interactive Terminal Mode

Make sure that port 8080 is unused on the Docker host. If you were following all steps in part 1 of the series, you might need to stop cadvisor:

(dockerhost)$ sudo docker stop cadvisor

I assume that jenkins_home is already created, all popular plugins are installed and an Admin user has been created as shown in part 1 of the blog series. We start the Jenkins container with the jenkins_home Docker host volume mapped to /var/jenkins_home:

(dockerhost)$ cd <path_to_jenkins_home> # in my case: cd /vagrant/jenkins_home/
(dockerhost:jenkins_home)$ sudo docker run -it --rm --name jenkins -p8080:8080 -p50000:50000 -v`pwd`:/var/jenkins_home jenkins
Running from: /usr/share/jenkins/jenkins.war
...
--> setting agent port for jnlp
--> setting agent port for jnlp... done

Step 2: Open Jenkins in a Browser

Now we want to connect to the Jenkins portal. For that, open a browser and open the URL

<your_jenkins_host>:8080

In our case, Jenkins is running in a container and we have mapped the container-port 8080 to the local port 8080 of the Docker host. On the Docker host, we can open the URL.

localhost:8080

Note: In case of Vagrant with VirtualBox, per default, there is only a NAT-based interface and you need to create port-forwarding for any port you want to reach from outside (also the local machine you are working on is to be considered as outside). In this case, we need to add an entry in the port forwarding list of VirtualBox:
2016-11-30-19_22_22-regel-fur-port-weiterleitung

We have created this entry in part 1 already, but I have seen that the entries were gone again, which seems to be a VirtualBox bug. I have added it again now.

Log in with the admin account we have created in the last session:

2016-12-09-10_24_00-jenkins

Step 3: Pre-Build JUnit Tests invoked by Gradle

In this step, we will invoke Gradle Tests before building the JAR. For that, we should verify locally that the Gradle tests are successful and then define a test Gradle task in the build process.

Step 3.1 (optional): Verify that Gradle Tests are successful

You can skip this test and directly let Jenkins do it for you. This may come in handy if you have not installed Git and/or Gradle locally.

Prerequisites

  • Your Java project has successful JUnit tests defined
  • Git is installed
  • The Project is cloned to a local directory
  • Gradle is installed

In order to test whether the JUnit tests are successful, we can run them on a system with the project cloned (git, java and gradle must be installed):

(basesystem)$ gradle test
Starting a Gradle Daemon (subsequent builds will be faster)
Parallel execution is an incubating feature.
:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.6
1 warning
:processTestResources
:testClasses
:test

BUILD SUCCESSFUL

Total time: 29.9 secs

With that, we have verified that the command “gradle test” succeeds.

Note that the JUnit tests must be designed so that they are independent of whether or not the JAR file is run in parallel. There seems to be no simple way of running the executable JAR file in parallel with the execution of the JUnit tests. In my case, I had to alter the JUnit tests to fulfill this prerequisite.

Step 3.2: Add Gradle test Task to Jenkins

As long as the JUnit tests are defined in src/test of the project, adding them to the Jenkins build is as simple as adding “test” to the list of Gradle tasks in the Jenkins build configuration, as follows:

On the Dashboard -> click on the project name -> Configure -> Build, add “test ” before the jar task:

2016-12-28-08_50_11-github-triggered-build-config-jenkins

Click Save.

If you have made local code changes to the project, now is the best time to commit and push them to the Git repository. If you have followed the steps in part 3, this will automatically trigger a build process, so you do not need to click “Build now” in that case. Otherwise, click “Build now” on the Jenkins project page (e.g. Dashboard -> click on the project name -> “Build now”).

Now we observe the result by clicking on the build process, then -> “Console Output”:

2016-12-28-09_45_34-github-triggered-build-724-console-jenkins

Don’t be confused by the blinking red ball on the upper left of the Console Output page: we see a BUILD SUCCESSFUL message, and if we re-enter the same page, the ball turns to a static blue, indicating a successful build.

Step 4: Add JUnit Test Result Reporting to Jenkins

Now we will show how to add the JUnit test reports to the Jenkins build process.

Step 4.1: Install Jenkins JUnit Plugin

For Jenkins JUnit reporting, we need to install the JUnit plug-in. For that, go to Jenkins Dashboard -> Manage Jenkins -> Manage Plugins -> Available -> enter “JUnit Plugin” into the Find field -> Install.

Note: If you do not find the plugin on the Available tab, search for it in the “Installed” tab.

You can install the plugin without reloading Jenkins.

Step 4.2: Configure Jenkins to collect and display the JUnit Test Results

In this step, we will configure Jenkins so that it displays the test results for individual builds as well as a trend report. For that, navigate to:

Jenkins -> (choose Project) -> Configure -> Post-build Actions -> Publish JUnit test results report

2016-12-30-14_15_25-github-triggered-build-config-jenkins

Add

**/build/test-results/test/TEST-*.xml

to the “Test report XMLs” field, since this is the path where Gradle places its JUnit test result reports (I have found the info here).

2016-12-30-14_18_51-github-triggered-build-config-jenkins

Now click Save.
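To sanity-check that this ant-style pattern matches where Gradle actually writes its reports, you can reproduce the layout locally (a sketch; “demo” and the test class name are made-up placeholders, the directory structure is Gradle’s default):

```shell
# Simulate Gradle's default report location (assumption: standard Gradle layout)
mkdir -p demo/build/test-results/test
touch demo/build/test-results/test/TEST-com.example.MyTests.xml

# Jenkins evaluates **/build/test-results/test/TEST-*.xml relative to the
# workspace root; "find" with a matching path pattern shows the same match:
find demo -path '*/build/test-results/test/TEST-*.xml'
```

If the `find` command prints the XML file, Jenkins’ glob will pick it up from the same relative location inside the workspace.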

Step 4.3: Verify JUnit individual Test Reporting

To test the Jenkins JUnit reporting feature, we trigger a clean build by adding “clean” to the Gradle tasks on Project -> Configure -> Build:

2016-12-30-17_43_59-github-triggered-build-config-jenkins

and clicking Save.

Then trigger a new build by clicking on Project -> Build now.

Then click on the Build Process, and then on Console output:

2016-12-30-17_48_38-github-triggered-build-731-console-jenkins

…scrolling down…

2016-12-30-17_50_01-github-triggered-build-731-console-jenkins

Do not be confused that the build process never seems to finish. Just click the Back to Project link:

Back to Project

On the Status page, we see that there were no failed tests:

2016-12-30-17_55_34-github-triggered-build-731-jenkins-v2

When we click on the Test Result link on the left (or in the lower middle part of the Status page), we see more details:

2016-12-30-17_58_25-github-triggered-build-731-test-results-jenkins-v2

We can see that we have had four tests (Create/Read/Update/Delete a file) and 100% of them were successful.

Step 4.4: Verify JUnit Test Trend Reporting

On the project’s Status page, a Test Result Trend graph is automatically added as soon as two or more builds with test results are available. For that, click “Build Now” on the left a second time and click ENABLE AUTO REFRESH on the upper right. After the second build is complete, the (hopefully) blue Test Result Trend graph shows up on the project status page:

2016-12-30-18_12_21-github-triggered-build-jenkins

The blue graph shows that we had 4 successful tests in each of the last two builds.

Note: disregard the red Checkstyle Trend graph for now. This is something we will cover in the next blog post.

Step 5: Verify failed Test Reporting

By default, a Gradle build fails if one of the JUnit tests fails, so it acts as a strict quality gate. Will the test results be collected and reported nevertheless?
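This quality gate works purely via exit codes: a failing “gradle test” returns a non-zero exit code, which is all Jenkins needs to mark the build as failed. A minimal simulation of the mechanism (run_tests is a hypothetical stand-in for a failing “gradle test”, not part of the project):

```shell
# Stand-in for "gradle test" with a red test: it exits with a non-zero code
run_tests() { return 1; }

# Jenkins (and any shell build step) only looks at the exit code:
if run_tests; then
  echo "BUILD SUCCESSFUL"
else
  echo "BUILD FAILED"
fi
```

Because the gate is just an exit code, any test runner that reports failures via its exit status plugs into Jenkins the same way.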

Let us test this now by breaking one of the JUnit tests on purpose. We add an expectation that is bound to fail in one of the tests:

2016-12-30-19_27_46-java-ee-simple-restful-file-storage_src_test_java_de_oveits_simplerestfulfiles-v2

Now we commit and push the change to the SW repository:

$ git clone <Repository-URL>
$ cd <Repository Dir>
<perform the code changes here...>
$ git diff src/test/java/de/oveits/simplerestfulfilestorage/SimpleRestfulFileStorageTests.java
diff --git a/src/test/java/de/oveits/simplerestfulfilestorage/SimpleRestfulFileStorageTests.java b/src/test/java/de/oveits/simplerestfulfilestorage/SimpleRestfulFileStorageTests.java
index 684d30f..10200d5 100644
--- a/src/test/java/de/oveits/simplerestfulfilestorage/SimpleRestfulFileStorageTests.java
+++ b/src/test/java/de/oveits/simplerestfulfilestorage/SimpleRestfulFileStorageTests.java
@@ -115,6 +115,9 @@ public class SimpleRestfulFileStorageTests extends CamelSpringTestSupport {
 // mock expectations need to be specified before sending the message:
 mock.expectedBodiesReceived("File ttt created: href=http://localhost:2005/files/ttt");
 mock.expectedMessageCount(1);
+ ^M
+ // In order to break this test for Jenkins test reporting, we temporarily add a requirement that will fail:^M
+ mock.expectedMessageCount(2);^M

 template.sendBodyAndHeaders("direct:recipientList", body, headers);

$ git add src/test/java/de/oveits/simplerestfulfilestorage/SimpleRestfulFileStorageTests.java
$ git commit -m "Breaking a JUnit test by purpose for Jenkins reporting tests"
[jenkinstest 33655b9] Breaking a JUnit test by purpose for Jenkins reporting tests
 1 file changed, 4 insertions(+), 1 deletion(-)

olive@LAPTOP-P5GHOHB7 /d/veits/eclipseWorkspaceRecent/simple-restful-file-storage (jenkinstest)
$ git push
Counting objects: 9, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (9/9), 744 bytes | 0 bytes/s, done.
Total 9 (delta 4), reused 0 (delta 0)
remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
To https://github.com/oveits/simple-restful-file-storage.git
 edb49f7..33655b9 jenkinstest -> jenkinstest

This will automatically trigger a new build (if you have followed part 3 of this series; otherwise just press “Build Now” on the Jenkins project page).

We can see on the dashboard that the build has failed:

2016-12-30-19_36_12-dashboard-jenkins

This was expected. Now let us click on the project name and see what happened:

2016-12-30-19_37_40-github-triggered-build-jenkins

Perfect, that is exactly what I wanted to achieve: in the Test Result Trend, we can see that we have performed 4 tests, one of which has failed.

Let us fix the failed test by commenting out (or removing) the wrong code again:

2016-12-30-19_40_01-java-ee-simple-restful-file-storage_src_test_java_de_oveits_simplerestfulfiles-v2

After

$ git add <file>
$ git commit -m "Fixed JUnit test again to test Jenkins JUnit trend report"
$ git push

the next build should be successful again, and we can see in the trend graph that the failed test has been fixed:

2016-12-30-19_47_48-github-triggered-build-jenkins

thumps_up_3

Summary

In this blog post, we have shown

  1. How to add Java functional tests to the Jenkins build pipeline based on Gradle JUnit Plugin
  2. How to install the JUnit plug-in to Jenkins for report collection
  3. How to display JUnit test results for individual builds on the Jenkins portal
  4. How to display JUnit trend analysis on the Jenkins portal

The only challenge I encountered is that I had to re-write my JUnit tests so that they succeed when run stand-alone. Before, they succeeded only if the executable JAR file was started manually before running the JUnit tests. This was resolved in a way specific to the framework used (Apache Camel in this case).

Coming Soon: Code Analysis Trend Analysis via Jenkins Checkstyle plugin

Further Reading


Jenkins Part 3.1: periodic vs triggered Builds


2016-11-30-18_19_38

Today, we will make sure that Jenkins will detect a code change in the software repository without manual intervention. We will show two methods to do so:

  1. Periodic Builds via Schedulers: Jenkins periodically asks the software repository for any code changes
  2. Triggered Builds via Webhooks: Jenkins is triggered by the software repository to perform the build task

We will see that triggered builds are more challenging to set up, but have quite some advantages in terms of economics and handling once set up properly. See also the Summary at the end of this post.

This blog post series is divided into following parts:

    • Part 1: Installation and Configuration of Jenkins, loading Plugins
    • Part 2: Creating our first Jenkins job: GitHub download and Software build
    • Part 3 (this blog): Periodic and automatically triggered Builds
    • Part 4 (planned): running automated tests

What is Jenkins?

Jenkins is the leading open source automation server mostly used in continuous integration and continuous deployment pipelines. Jenkins provides hundreds of plugins to support building, deploying and automating any project.

 

Jenkins build, test and deployment pipeline
Jenkins build, test and deployment pipeline

A typical workflow is visualized above: a developer checks in the code changes into the repository. Jenkins will detect the change, build (compile) the software, test it and prepare to deploy it on a system. Depending on the configuration, the deployment is triggered by a human person, or automatically performed by Jenkins.

For more information, see the introduction found in part 1 of this blog series.

Automatic Jenkins Workflow: Periodic Polling

In this chapter, we will show how to configure Jenkins to periodically poll the software repository and start the build process if code changes are detected.

2016-12-09-10_12_31

Tools used

      • Vagrant 1.8.6
      • Virtualbox 5.0.20
      • Docker 1.12.1
      • Jenkins 2.19.3

Prerequisites:

      • Free DRAM for the Docker Host VM: ~4 GB or more
      • Docker Host is available, Jenkins is installed and a build process is configured. For that, perform all steps in part 1 and part 2 of this blog series
      • Tested with 2 vCPU (1 vCPU might work as well)

Step 1: Start Jenkins in interactive Terminal Mode

Make sure that port 8080 is unused on the Docker host. If you were following all steps in part 1 of the series, you might need to stop cadvisor:

(dockerhost)$ sudo docker stop cadvisor

I assume that jenkins_home is already created, all popular plugins are installed and an Admin user has been created as shown in part 1 of the blog series. We start the Jenkins container with the jenkins_home Docker host volume mapped to /var/jenkins_home:

(dockerhost)$ cd <path_to_jenkins_home> # in my case: cd /vagrant/jenkins_home/
(dockerhost:jenkins_home)$ sudo docker run -it --rm --name jenkins -p8080:8080 -p50000:50000 -v`pwd`:/var/jenkins_home jenkins
Running from: /usr/share/jenkins/jenkins.war
...
--> setting agent port for jnlp
--> setting agent port for jnlp... done

Step 2: Open Jenkins in a Browser

Now we want to connect to the Jenkins portal. For that, open a browser and open the URL

<your_jenkins_host>:8080

In our case, Jenkins is running in a container, and we have mapped container port 8080 to local port 8080 of the Docker host. On the Docker host, we can open the URL

localhost:8080

Note: In case of Vagrant with VirtualBox, there is only a NAT-based interface by default, and you need to configure port forwarding for any port you want to reach from the outside (the local machine you are working on also counts as outside). In this case, we need to add an entry to the port forwarding list of VirtualBox:
2016-11-30-19_22_22-regel-fur-port-weiterleitung

We had already created this entry in part 1, but I have seen the entries disappear again, which seems to be a VirtualBox bug. I have now added it again.

Log in with the admin account we have created in the last session:

2016-12-09-10_24_00-jenkins

Step 3: Configure Project for periodic Polling of SW Repository

Step 3.1: Goto Build Trigger Configuration

On the Jenkins Dashboard, find the hidden triangle right of the project name,

2016-12-09-18_17_35-dashboard-jenkins

In the drop-down list, choose “Configure”

2016-12-09-18_18_06-dashboard-jenkins

(also possible: on the Dashboard, click on the project name and then “Configure”).

Step 3.2: Configure a Schedule

We scroll down to “Build Triggers”, check “Build periodically” and specify that a build is performed every 10 minutes (H/10 * * * *). I do not recommend using lower values than that, since I have seen that even my monster notebook with an i7-6700HQ and 64 GB RAM is stressed quite a bit by those many build processes.

2016-12-22-23_54_06-github-triggered-build-config-jenkins

Note that this is a very short polling period for our test purposes only; we do not want to wait long until a code change is detected.

Note also: you can click the blue question mark right of the Schedule text box to get help with the scheduler syntax.
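The Schedule text box accepts a cron-like syntax. A few common examples (a sketch, assuming standard Jenkins cron notation; “H” is a per-job hash that spreads the start times of different jobs):

```shell
# Jenkins "Build periodically" schedule examples (MINUTE HOUR DOM MONTH DOW):
# H/10 * * * *     every 10 minutes (the "H" hash offsets jobs so they do not all start at once)
# H * * * *        once per hour
# H H(0-7) * * *   once per day, some time between 00:00 and 07:59
```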

Step 3.3: Save

Click Save

Step 4: Change the content of the Software Repository

Now we expect that a change of the SW repository is detected within the configured polling interval at the latest. Let us check this now: in this case, I have changed the content of README.md and committed the change:

(local repository)$ git add README.md
(local repository)$ git commit -m "changed README"
(local repository)$ git push

Within 2 minutes, I see a new job #24 running on the lower left:

2016-12-09-18_35_13-dashboard-jenkins

It seems that the page needs a manual browser refresh before the dashboard displays the #24 build process as “Last Success”:

The build process was very quick, since we have not changed any relevant source code. The console log can be reached via the Jenkins -> Project Link -> Build History -> click on build number -> Console:

2016-12-11-21_55_22-github-triggered-build-687-console-jenkins

As you can see, even after some hours, the Git repository is downloaded again and again, even if there was no code change at all. However, Gradle detects that the JAR file is up-to-date and does not re-build it unless there is a code change.

The disadvantage of a scheduled build process with high frequency is that the number of builds in the build history is increasing quickly:

2016-12-11-22_02_24-github-triggered-build-jenkins

Note: The build history is spammed with many successful builds without any code change, and it is not easy to find the interesting builds among all the unnecessary ones. Let us improve the situation by replacing periodic, scheduled builds with triggered builds:

Step 5: Triggered Builds

In Step 4, we have seen that periodic builds should not be performed in a very short timeframe, because:

  1. the Jenkins server is stressed quite a bit if the build frequency is configured too high
  2. the build history is polluted with information on many irrelevant build processes with no code changes.

Therefore, it is much better to create a triggered build. The target is to trigger a build process every time a developer checks new code into the software repository:

2016-12-21-15_12_25

In this way, a periodic build is not necessary, or can be done much less frequently.

What do we need to do?

  1. Make sure that the Jenkins server is reachable from the SW repository
  2. Configure the SW repository with a web hook for informing Jenkins upon each code change
  3. Configure Jenkins for triggered build

Let us start:

Step 5.1 Configure Jenkins for triggered Build

On the Jenkins Dashboard, click on the project:

2016-12-22-18_56_44-dashboard-jenkins

and then “Configure” on the left pane:

2016-12-22-18_58_28-github-triggered-build-jenkins

Scroll down to Build Triggers, check the “Trigger builds remotely (e.g., from scripts)” checkbox and choose an individual secret token (do not use the one you see here):

2016-12-22-19_03_16-github-triggered-build-config-jenkins

You will be provided with the build trigger URL, which is in my case:

JENKINS_URL/job/GitHub%20Triggered%20Build/build?token=TOKEN_NAME

Here, JENKINS_URL is the URL under which Jenkins can be reached by the Git repository. Save the full trigger URL for later use.

Now click Save.
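The trigger URL is just the Jenkins base URL, the URL-encoded job name, and the token glued together. A small sketch (host, job name, and token are placeholders; replace them with your own values):

```shell
# Assemble the remote build trigger URL from its parts (all values are placeholders)
JENKINS_URL="http://localhost:8080"
JOB_NAME="GitHub%20Triggered%20Build"   # URL-encoded Jenkins project name
TOKEN="TOKEN_NAME"                      # the secret token configured above

TRIGGER_URL="${JENKINS_URL}/job/${JOB_NAME}/build?token=${TOKEN}"
echo "$TRIGGER_URL"
```

Note that spaces in the job name must be URL-encoded as %20, as shown above.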

Step 5.2 Test Trigger URL locally

Now we can test the trigger URL locally on the Docker host as follows (as found in this StackOverflow Q&A):

We need to retrieve a so-called Jenkins-Crumb:

(dockerhost)$ CRUMB=$(curl -s 'http://admin:your_admin_password@localhost:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')
(dockerhost)$ echo $CRUMB
Jenkins-Crumb:CCCCCCCCCCCCCCCCCCCCCCCCCC

Please make a note of the returned Jenkins-Crumb, since we will need this value in the next step.

Then we can use the Jenkins-Crumb as header in the build trigger request:

(dockerhost)$ curl -H "$CRUMB" 'http://admin:your_admin_password@localhost:8080/job/GitHub%20Triggered%20Build/build?token=hdsghewriohwziowrhwsn'

This should trigger a new build on Jenkins:
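For convenience, the crumb retrieval and the trigger request can be combined into one small script (a sketch; credentials, host, and token are placeholders, not values from a real installation; DRY_RUN=1 only prints the curl commands instead of executing them):

```shell
#!/bin/sh
# Sketch: fetch a Jenkins crumb and trigger the job in one go.
# DRY_RUN=1 prints the commands; set DRY_RUN=0 against a real Jenkins.
DRY_RUN=1
JENKINS="http://admin:your_admin_password@localhost:8080"   # placeholder credentials/host
TRIGGER="job/GitHub%20Triggered%20Build/build?token=TOKEN_NAME"
CRUMB_URL="$JENKINS/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)"

if [ "$DRY_RUN" -eq 1 ]; then
  echo "curl -s '$CRUMB_URL'"
  echo "curl -H \"\$CRUMB\" '$JENKINS/$TRIGGER'"
else
  CRUMB=$(curl -s "$CRUMB_URL")   # returns "Jenkins-Crumb:<value>"
  curl -H "$CRUMB" "$JENKINS/$TRIGGER"
fi
```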