
AWS Automation Part 4: Using Terraform for AWS Automation


This is part 4 of a blog post series, in which we explore how to automate Amazon Web Services (AWS) using the Terraform open source software by HashiCorp. Similar to Cloudify, Terraform is a versatile way to codify any type of infrastructure and to spin up a production-like demo or staging environment on any IaaS cloud like AWS, Azure or Google Cloud within minutes.

In this blog post, we will compare Terraform with Vagrant (providing links to other comparisons like Cloudify along the way), before using Terraform to spin up a single Ubuntu virtual machine instance on AWS. In the Appendix, we will also show how to access the instance using SSH.

The series is divided into four parts:

  • Part 1: AWS EC2 Introduction introduces Amazon Web Services EC2 (AWS EC2) and shows how to sign up for a free trial of Amazon and how to create, start, shut down and terminate a virtual machine on the AWS EC2 console.
  • Part 2: Automate AWS using Vagrant leads you through the process of using Vagrant to perform the same tasks you have performed in part 1, but now with local Vagrantfiles in order to automate the process. Please be sure to check out part 4, which shows a much simpler way to perform the same tasks using Terraform.
  • Part 3: Deploy Docker Host on AWS using Vagrant shows how Vagrant helps you go beyond simple creation, startup, shutdown and termination of a virtual machine. In less than 10 minutes, you will be able to install a Docker host on AWS. With a few additional clicks on the AWS EC2 console, you are ready to start your first Docker container in the AWS cloud.
  • Part 4: Automate AWS using Terraform (this post) shows that spinning up a virtual machine instance on AWS using Terraform is even simpler than using the Vagrant AWS plugin we have used in parts 2 and 3. Additionally, Terraform opens up the possibility to use the same tool to provision our resources on other clouds like Azure and Google Cloud.

Document Versions

  • 2016-09-22: initial published version

Contents

In this blog post we will explore how to get started with Amazon Web Services (AWS). After signing in to a free trial of Amazon, we will show how to create, spin up and terminate virtual machines in the cloud using Amazon’s web-based AWS EC2 console. After that, a step-by-step guide will lead us through the process of performing the same tasks in an automated way using Terraform.

While the shown tasks could also be performed with AWS CLI commands, Terraform potentially allows for more sophisticated provisioning tasks like software installation and the upload and execution of arbitrary shell scripts.
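
To give a flavor of what such a provisioning step could look like, here is a minimal sketch (the script name, user and key path are illustrative assumptions and are not used elsewhere in this guide):

resource "aws_instance" "example" {
  ami           = "ami-0d729a60"
  instance_type = "t2.micro"

  # How Terraform reaches the instance for provisioning:
  connection {
    user        = "ubuntu"
    private_key = "${file("~/.ssh/mykey.pem")}"
  }

  # Upload a local shell script ...
  provisioner "file" {
    source      = "install.sh"
    destination = "/tmp/install.sh"
  }

  # ... and execute it on the new instance:
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/install.sh",
      "sudo /tmp/install.sh",
    ]
  }
}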

Why Terraform?

In part 2 of this series, we had chosen the Vagrant AWS provider plugin to automatically spin up a virtual machine instance on Amazon AWS. Developers are used to Vagrant, since Vagrant offers a great way to locally spin up the virtual machines they need for their development tasks. Even though Vagrant is most often used with the local VirtualBox provider, the AWS plugin allows Vagrant to spin up virtual machine instances on AWS EC2, as we have demonstrated in that blog post.

One of the differences between Vagrant and Terraform is the language used: while Vagrant requires Chef-like Ruby programming (developers like it), Terraform uses a language called HashiCorp Configuration Language (HCL). Here is an example:

# A variable definition:
variable "ami" {
  description = "the AMI to use"
}

# A resource definition:
resource "aws_instance" "web" {
  ami               = "${var.ami}"
  count             = 2
  source_dest_check = false

  ...(AWS credential management skipped here)

  connection {
    user = "myuser"
  }
}

The same code would look similar to the following in a Vagrantfile:

AMI = "ami-1234567"
$instance_name_prefix = "myinstance"
Vagrant.configure("2") do |config|
  (1..2).each do |i|
    config.vm.provider :aws do |aws, override|
      config.vm.define vm_name = "%s-%02d" % [$instance_name_prefix, i] do |config|
      aws.ami = "#{AMI}"
      ...(AWS credential management skipped here)
      override.vm.box = "dummy"
      override.ssh.username = "myuser"

    end
  end
end

We can see that a Vagrantfile is a Ruby program, while the Terraform language reads more like a status description file. It is a matter of taste whether you prefer one over the other. I assume that Ruby programming gives you more fine-grained possibilities to adapt the environments to your needs, while Terraform potentially offers a better overview of the desired state.

In my opinion, the biggest difference between Vagrant and Terraform is the scope of those tools: according to HashiCorp, Vagrant is not designed for production-like environments. HashiCorp’s Terraform Intro points out the following:

Modern software is increasingly networked and distributed. Although tools like Vagrant exist to build virtualized environments for demos, it is still very challenging to demo software on real infrastructure which more closely matches production environments.

Software writers can provide a Terraform configuration to create, provision and bootstrap a demo on cloud providers like AWS. This allows end users to easily demo the software on their own infrastructure, and even enables tweaking parameters like cluster size to more rigorously test tools at any scale.

List of supported Terraform Providers

We could argue that all of that can also be done with Vagrant and its AWS plugin. However, the big difference is that Terraform comes with a long, long list of supported providers, as seen on the right-hand side of this page. We find all major IaaS providers like AWS, MS Azure, Google Compute Engine, DigitalOcean and SoftLayer, but also an important PaaS provider like Heroku. Moreover, we find support for local virtual infrastructure providers like OpenStack and some “initial support” for VMware tools like vSphere and vCloud. Unfortunately, VirtualBox is missing from the official list, so developers either keep working with Vagrant locally, or they could try a third-party Terraform VirtualBox provider. Docker support is also classified as “initial support”, and Docker Cloud as well as Kubernetes or OpenShift Origin are missing altogether.

Terraform tries to codify any type of resource, so we can even find interfaces to DNS providers, databases, mail providers and many more. With that, it can spin up a whole environment including virtual machine instances, DNS services, networking, content delivery network services and more. HashiCorp’s Terraform introductory web page about use cases tells us that Terraform can spin up a sophisticated distributed demo or staging environment in less than 30 seconds.
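
As an illustration of mixing resource types in one template, the following sketch pairs a virtual machine with a DNS record pointing to it (the hosted zone ID and host name are made-up placeholders):

resource "aws_instance" "web" {
  ami           = "ami-0d729a60"
  instance_type = "t2.micro"
}

resource "aws_route53_record" "www" {
  zone_id = "Z1EXAMPLE"                        # placeholder hosted zone ID
  name    = "www.example.com"
  type    = "A"
  ttl     = "300"
  records = ["${aws_instance.web.public_ip}"]  # filled in once the instance is up
}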

Further reading about Terraform vs. XYZ: you may also want to check out Terraform’s comparison page, an informative slide set by Nati Shalom of GigaSpaces, or this CloudFormation vs. Terraform comparison.

Why offering yet another ‘Hello World’ for Amazon Web Service Automation via Terraform?

The Terraform web portal already provides an AWS hello world example. The reason I am offering yet another ‘Hello World’ example is that the other guides assume that you have already created an AWS user with the appropriate rights. In this guide, we will describe how this is done. Moreover, we will show in an Appendix which steps are necessary to access the created virtual machine instance via SSH.

Why Amazon Web Services?

According to Gartner’s 2015 report, Amazon Web Services is the leader in the IaaS space, followed by Microsoft Azure. See below Gartner’s magic quadrant on IaaS:

Gartner 2015 MQ

Source: Gartner (May 2015)

There are many articles out there that compare AWS with Microsoft Azure. From reading those articles, the following over-simplified summary has burnt its traces into my brain:

Amazon Web Services vs. Microsoft Azure is like Open Source Linux world vs. the commercial Microsoft Software world. For a long time, we will need both sides of the world.

Now that we have decided to begin with the open source side of the world, let us get started.

Signing into Amazon Web Services

In order to get started, you need to sign into Amazon Web Services, if you have not already done so. For that, visit https://aws.amazon.com/, scroll down and push the Get Started for Free button. This starts a free tier trial account for up to 12 months with up to two times 750 hrs of computing time: Linux and Windows 2012 Server on a small virtual machine.

Note that you will be offered options that are free along with other services that are not free, so you need to be a little bit careful. Terraform with its easy automation will help us to minimize the resources needed.

2016-03-27_231950_capture_008

I had signed into AWS long ago, but as far as I remember, you need to choose “I am a new User”, add your email address and desired password and a set of personal data (I am not sure whether I had to add my credit card, since I am an Amazon customer anyway).

2016.03.31-19_50_22-hc_001

If you are interested in creating, launching, stopping and terminating virtual machine instances using the Amazon EC2 console (a web portal), you might want to have a look at part 1 of this series:

2016.04.03-21_24_41-hc_001

In this part of the series, we will concentrate on automating the tasks.

AWS Automation using Terraform

Now we will use Terraform in order to automate launching a virtual machine instance on AWS from an existing image (AMI). Let us start:

Step 0: Set HTTP proxy, if needed

The tests within this Hello World blog post have been performed without an HTTP proxy. If you are located behind a proxy, this should be supported, as pointed out here. Try the following commands:

On Mac/*nix systems:

export http_proxy='http://myproxy.dns.name:8080'
export https_proxy='http://myproxy.dns.name:8080'

On Windows:

set http_proxy=http://myproxy.dns.name:8080
set https_proxy=http://myproxy.dns.name:8080

Replace myproxy.dns.name and 8080 with the DNS name or IP address and port of the HTTP proxy in your environment.

Step 1a: Native Installation

It is best if you have direct Internet access (behind a firewall, but without any HTTP proxy). Install Terraform on your local machine; the installation procedure depends on your operating system and is described here. I have taken the Docker alternative in Step 1b instead, though.

Step 1b: Docker Alternative

If you have access to a Docker host, you can also run any terraform command by creating a shell function like the following:

terraform() {
  docker run -it --rm -v "$(pwd)":/currentdir --workdir=/currentdir hashicorp/terraform:light "$@";
}

For a permanent definition, write those three lines into the ~/.bashrc file of your Docker host.

After that, terraform commands can be issued on the Docker host as if Terraform were installed there. The first time a command is performed, the circa 20 MB terraform:light image will be downloaded automatically from Docker Hub:

$ terraform --version
Unable to find image 'hashicorp/terraform:light' locally
light: Pulling from hashicorp/terraform

ece78a7c791f: Downloading [=====> ] 2.162 MB/18.03 MB
...
Terraform v0.7.4

Once the image is downloaded, the next time you issue the command, the output will look the same as if the software was installed locally:

$ terraform --version
Terraform v0.7.4

Step 2: Create a Terraform Plan

We create a file named aws_example.tf as follows:

provider "aws" {
  access_key = "ACCESS_KEY_HERE"
  secret_key = "SECRET_KEY_HERE"
  region     = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0d729a60"
  instance_type = "t2.micro"
}

You can get the access key ID from the AWS IAM Users page (click the user), if the key exists already. However, the secret key is secret and was only displayed at the time the access key was created. If the secret key is unavailable, try creating a new access key on the AWS IAM Users page (click the user).
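
If you do not want to hardcode the credentials in a file that might end up in version control, you can declare them as variables instead (a common pattern; the variable names are my choice) and supply the values via -var options or a terraform.tfvars file:

variable "aws_access_key" {}
variable "aws_secret_key" {}

provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region     = "us-east-1"
}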

The AMI ID of the main images can be retrieved from the AWS console after logging in. Simulate installing an instance by clicking “Launch Instance” and browse through the main images. The image ID starting with “ami” is displayed there:

2016-09-20-20_57_46-ec2-management-console

We have copied the AMI ID of the Ubuntu Server this time. After that, you can cancel the instance creation.

Your region is displayed after the question mark as part of the AWS console URL, once you are logged in:

2016-09-20-21_00_43-ec2-management-console

If you are a “free tier” user of AWS, only use “t1.micro” or “t2.micro” as instance_type. None of the other types are free tier eligible, not even the smaller “t2.nano”; see this comment.

Step 3: Simulate the Terraform Plan

To see what will happen if you execute a Terraform template, just issue the following command:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.example
    ami:                      "ami-26c43149"
    availability_zone:        "<computed>"
    ebs_block_device.#:       "<computed>"
    ephemeral_block_device.#: "<computed>"
    instance_state:           "<computed>"
    instance_type:            "t2.micro"
    key_name:                 "<computed>"
    network_interface_id:     "<computed>"
    placement_group:          "<computed>"
    private_dns:              "<computed>"
    private_ip:               "<computed>"
    public_dns:               "<computed>"
    public_ip:                "<computed>"
    root_block_device.#:      "<computed>"
    security_groups.#:        "<computed>"
    source_dest_check:        "true"
    subnet_id:                "<computed>"
    tenancy:                  "<computed>"
    vpc_security_group_ids.#: "<computed>"


Plan: 1 to add, 0 to change, 0 to destroy.
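
As the note in the output points out, the plan is not saved by default. If you want apply to execute exactly the plan you have reviewed, you can save the plan to a file and pass that file to apply (the file name is my choice):

$ terraform plan -out=example.tfplan
$ terraform apply example.tfplan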

Step 4: Set permissions of the AWS User

This step is not described in the Quick Start guides I have come across. You can try to skip this step and proceed with the next step. However, if the user owning the AWS credentials you have specified above lacks the needed permissions, you may encounter the following error:

Error launching source instance: UnauthorizedOperation: You are not authorized to perform this operation.

In this case, following steps will fix the issue:

Step 4.1: Create a new user on the AWS IAM Users page, if not already done.

Step 4.2: Assign the needed access rights to the user as follows:

Adapt and go to the AWS IAM link https://console.aws.amazon.com/iam/home?region=eu-central-1#policies. The link needs to be adapted to your region; e.g. replace eu-central-1 with the right entry from the region list that applies to your account.

Click the “Get Started” button, if the list of policies is not visible already. After that, you should see the list of policies and a filter field:

2016.05.06-10_42_28-hc_001

In the Filter field, search for the term AmazonEC2FullAccess. 

Click on the AmazonEC2FullAccess Policy Name and then choose the tab Attached Identities.

2016.05.06-10_50_14-hc_001

Click the Attach button and attach the main user (in the screenshot above, my main user “oveits” is already attached; in your case, the list will be empty before you click the Attach button, most likely).
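
If you prefer the command line and have the AWS CLI installed and configured with an administrative user, attaching the policy should also work like this (the user name is an example):

$ aws iam attach-user-policy --user-name myuser \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess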

Step 5: Apply the Terraform Plan

Note: this step will launch AWS EC2 virtual machine instances. Depending on your pay plan, this might incur costs.

To apply the Terraform plan, issue the following command:

$ terraform apply
aws_instance.example: Creating...
  ami:                      "" => "ami-26c43149"
  availability_zone:        "" => "<computed>"
  ebs_block_device.#:       "" => "<computed>"
  ephemeral_block_device.#: "" => "<computed>"
  instance_state:           "" => "<computed>"
  instance_type:            "" => "t2.micro"
  key_name:                 "" => "<computed>"
  network_interface_id:     "" => "<computed>"
  placement_group:          "" => "<computed>"
  private_dns:              "" => "<computed>"
  private_ip:               "" => "<computed>"
  public_dns:               "" => "<computed>"
  public_ip:                "" => "<computed>"
  root_block_device.#:      "" => "<computed>"
  security_groups.#:        "" => "<computed>"
  source_dest_check:        "" => "true"
  subnet_id:                "" => "<computed>"
  tenancy:                  "" => "<computed>"
  vpc_security_group_ids.#: "" => "<computed>"
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Note also the information about the terraform.tfstate file, which should not get lost. This shows us that Terraform is not stateless: it will not automatically synchronize with the current state of the provider (AWS), leading to potential problems if the tfstate file and the real world get out of sync.
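
One way to reduce this risk is to store the state remotely instead of in a local file, e.g. in an S3 bucket. With the Terraform version used here, this is configured via terraform remote config (a sketch; the bucket name is an assumption and the bucket must exist beforehand):

$ terraform remote config -backend=s3 \
    -backend-config="bucket=my-tfstate-bucket" \
    -backend-config="key=aws_example/terraform.tfstate" \
    -backend-config="region=us-east-1"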

In the AWS console, we can indeed see that an Ubuntu instance has been launched:

2016-09-20-16_05_10-ec2-management-console

2016-09-20-16_05_55-ec2-management-console

I had not expected it to be that easy, because:

  • Unlike the Vagrant example, I was not forced to specify the SSH key
  • Unlike the Vagrant example, I was not forced to adapt the security rule to allow SSH traffic to the instance.

Unlike Vagrant, Terraform does not need SSH access to the virtual machine instance in order to spin it up.

Step 6: Destroy the Instance

Now let us destroy the instance again:

Step 6.1: Check the Plan for Destruction
$ terraform plan -destroy
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

aws_instance.example: Refreshing state... (ID: i-8e3f1832)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

- aws_instance.example
Step 6.2: Apply the Plan with the “Destroy” Option

And now let us apply the destruction:

$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_instance.example: Refreshing state... (ID: i-8e3f1832)
aws_instance.example: Destroying...
aws_instance.example: Still destroying... (10s elapsed)
aws_instance.example: Still destroying... (20s elapsed)
aws_instance.example: Still destroying... (30s elapsed)
aws_instance.example: Still destroying... (40s elapsed)
aws_instance.example: Still destroying... (50s elapsed)
aws_instance.example: Still destroying... (1m0s elapsed)
aws_instance.example: Destruction complete

Checking with AWS console:

2016-09-20-16_21_36-ec2-management-console

And yes, indeed, the instance was terminated. Note that AWS will keep the instance in terminated status for some time before automatically removing it.

Note also that a created instance will be charged as if it was up for at least 15 minutes. Therefore, it is not a good idea to run such examples in a loop or with a large number of instances.

 

DONE!


Appendix A: Access the virtual machine via SSH

Step A.1: Check, whether you already have SSH access

Try to connect to the virtual machine instance via SSH (for information on SSH clients, check out Appendix C). If you are prompted to accept the SSH fingerprint, the security rule does not need to be updated and you can go to the next step. If there is a timeout instead, perform the steps in Appendix B: Adapt Security Rule manually.

Step A.2: Provision the SSH Key

Step A.2.1 Create or find your SSH key pair

You can follow this guide and let AWS create it for you on this page, or you can use a local OpenSSH installation to create the key pair. I have gone the AWS way this time.

Step A.2.2 Retrieve public Key Data

The key pair you have created contains public key data that looks similar to the following:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ

with or without an email address appended. We need the whole string (including the email address, if present) in the next step.
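
In case you only have the private .pem file from AWS, the public key data can be derived from it with OpenSSH (the file name is mine; adapt it to yours):

$ ssh-keygen -y -f AWS_SSH_Key.pem
ssh-rsa AAAAB3NzaC1yc2E...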

Step A.2.3 Specify SSH Key as Terraform Resource

The public key data is now written into a .tf file (I have used the name aws_keyfile.tf) as described here.

resource "aws_key_pair" "deployer" {
  key_name = "deployer-key" 
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email@example.com"
}

As you will see below, this resource will be added to the list of available SSH keys in the AWS console -> EC2 Dashboard -> Key Pairs.

Step A.2.4 Assign Key to the AWS Instance

We now use the key named “deployer-key” in the instance definition. For that, we edit aws_example.tf and add the key_name:

provider "aws" {
  access_key = "MYKEY"
  secret_key = "MYSECRETKEY"
  region     = "eu-central-1"
}

resource "aws_instance" "example" {
  ami           = "ami-26c43149"
  instance_type = "t2.micro"
  key_name      = "deployer-key"
}

As you will see below, the key_name will be applied to the new instance, allowing us to SSH into the virtual machine instance.
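
As a small variation (not required for this guide), you could reference the key pair resource instead of repeating the literal name; this also tells Terraform that the instance depends on the key pair:

resource "aws_instance" "example" {
  ami           = "ami-26c43149"
  instance_type = "t2.micro"
  key_name      = "${aws_key_pair.deployer.key_name}"
}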

Step A.2.5 Review Terraform Plan

After that, the plan looks as follows (in the shown output, the instance is running already, but this is irrelevant, since a new instance will be created anyway):

vagrant@localhost /mnt/nfs/terraform $ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.example
    ami:                      "ami-26c43149"
    availability_zone:        "<computed>"
    ebs_block_device.#:       "<computed>"
    ephemeral_block_device.#: "<computed>"
    instance_state:           "<computed>"
    instance_type:            "t2.micro"
    key_name:                 "deployer-key"
    network_interface_id:     "<computed>"
    placement_group:          "<computed>"
    private_dns:              "<computed>"
    private_ip:               "<computed>"
    public_dns:               "<computed>"
    public_ip:                "<computed>"
    root_block_device.#:      "<computed>"
    security_groups.#:        "<computed>"
    source_dest_check:        "true"
    subnet_id:                "<computed>"
    tenancy:                  "<computed>"
    vpc_security_group_ids.#: "<computed>"

+ aws_key_pair.deployer
    fingerprint: "<computed>"
    key_name:    "deployer-key"
    public_key:  "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ"


Plan: 2 to add, 0 to change, 0 to destroy.

The public key will be provisioned to AWS and an instance will be created with the appropriate SSH key. Let us try:

Step A.2.6 Apply the Terraform Plan

vagrant@localhost /mnt/nfs/terraform $ terraform apply
aws_key_pair.deployer: Creating...
  fingerprint: "" => "<computed>"
  key_name:    "" => "deployer-key"
  public_key:  "" => "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ"
aws_instance.example: Creating...
  ami:                      "" => "ami-26c43149"
  availability_zone:        "" => "<computed>"
  ebs_block_device.#:       "" => "<computed>"
  ephemeral_block_device.#: "" => "<computed>"
  instance_state:           "" => "<computed>"
  instance_type:            "" => "t2.micro"
  key_name:                 "" => "deployer-key"
  network_interface_id:     "" => "<computed>"
  placement_group:          "" => "<computed>"
  private_dns:              "" => "<computed>"
  private_ip:               "" => "<computed>"
  public_dns:               "" => "<computed>"
  public_ip:                "" => "<computed>"
  root_block_device.#:      "" => "<computed>"
  security_groups.#:        "" => "<computed>"
  source_dest_check:        "" => "true"
  subnet_id:                "" => "<computed>"
  tenancy:                  "" => "<computed>"
  vpc_security_group_ids.#: "" => "<computed>"
aws_key_pair.deployer: Creation complete
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Now the new key is visible in the AWS console -> EC2 (Dashboard) -> click “Key Pairs”:

2016-09-21-21_17_11-ec2-management-console

The second key named “deployer-key” is the one we have just created.

A new instance has been launched and the correct key is assigned (AWS console -> EC2 (Dashboard) -> click “Running Instances”):

2016-09-22-11_21_19-ec2-management-console

Now I should be able to connect to the system using the key. But with which user? I first tried “root”, but during login, I was informed that I should use “ubuntu” instead of “root”.
It worked with the following data:

2016-09-22-11_27_52-putty-configuration

The public DNS name or public IP address to be used can be retrieved either from the AWS console

2016-09-22-11_24_17-ec2-management-console

or, better, from the terraform show command:

$ terraform show
aws_instance.example:
  id = i-fc755340
  ami = ami-26c43149
  availability_zone = eu-central-1b
  disable_api_termination = false
  ebs_block_device.# = 0
  ebs_optimized = false
  ephemeral_block_device.# = 0
  iam_instance_profile =
  instance_state = running
  instance_type = t2.micro
  key_name = deployer-key
  monitoring = false
  network_interface_id = eni-b84a0cc4
  private_dns = ip-172-31-17-159.eu-central-1.compute.internal
  private_ip = 172.31.17.159
  public_dns = ec2-52-29-3-233.eu-central-1.compute.amazonaws.com
  public_ip = 52.29.3.233
  root_block_device.# = 1
  root_block_device.0.delete_on_termination = true
  root_block_device.0.iops = 100
  root_block_device.0.volume_size = 8
  root_block_device.0.volume_type = gp2
  security_groups.# = 0
  source_dest_check = true
  subnet_id = subnet-f373b088
  tags.% = 0
  tenancy = default
  vpc_security_group_ids.# = 1
  vpc_security_group_ids.611464124 = sg-0433846d
aws_key_pair.deployer:
  id = deployer-key
  key_name = deployer-key
  public_key = ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC22rLY2IreKhTqhdiYMIo+8p+LkuRroHZdm4OPagVuxmT1iz9o4NsPP3MIgwXjGFNtC0Sb6puuFilat+sswXEFLca2G2dWVtITpGfb4EJt72i2AmSi8TL/0UudhO9bkfZUPtlxJNrMKsxLQ62ukIC3b927CMgBMBFrLcAIh/WWQsB/KInOAID8GN+MssR7RwpAxEDXb1ZFtaaAzR2p3B3QdTzazUCZgzEMY6c3K4I4eaIzzONRV7rUUH3UC61GwXAORQLXsOBzHW0uOgIhlOTIMG0zkQtwJfLBoQKz/zQFFYX9gEoA/ElVNTrwWwX9gsJzpz6hdL/koD3tionbE6vJ

2016-09-21-21_32_06-putty-configuration

We need to specify the private key, which is named AWS_SSH_Key.ppk in my case (even though we have chosen to call the key pair deployer-key in the resource). In case of putty, the key needs to be in .ppk format. See Appendix C for how a .pem file (as you get it from AWS) can be converted to the .ppk format.
2016-09-22-11_34_09-ubuntuip-172-31-17-159_

With this information, we can log in via SSH (assuming that you have performed step A.1 and Appendix B, if needed; otherwise, you may get a timeout).

Appendix B: Adapt Security Rule manually

Step B.1: Check, whether you have SSH access

Try to connect to the virtual machine instance. If you are prompted to accept the SSH fingerprint, the security rule does not need to be updated and you can stop here. If there is a timeout, go to the next step.

Step B.2: Updating the security group

In this step, we will adapt the security group manually in order to allow SSH access to the instance. Note that in Appendix B of part 2 of this series, we show how this step can be automated with a shell script. But for now, let us perform the step manually.

2016.04.01-13_00_29-hc_001

In the EC2 console, under Network & Security -> Security Groups (in my case in EU Central 1: https://eu-central-1.console.aws.amazon.com/ec2/v2/home?region=eu-central-1#SecurityGroups:sort=groupId), we can find the default security group. We need to edit the inbound rule to allow the current source IP address. For that, select the security group, click on the “Inbound” tab on the bottom, specify “My IP” as source and save the rule:

2016.04.01-13_05_18-hc_001

Now, if you try to connect to the virtual machine instance, you should be asked by your SSH client whether or not to permanently add the SSH key fingerprint locally.

DONE

Note: if your IP address changes frequently, you might want to automate the update of the security rule. Check out Appendix B of part 2 of this blog series for this.

Appendix C: SSH Connection Client Alternatives

C.1. SSH Connection via a *nix Client (or bash on Windows)

On a *nix machine or in a bash shell on Windows, you can connect via the built-in SSH client. The following command line worked for me in a bash shell on my Windows machine. Replace the path to the private PEM file and the public DNS name, so that it works for you as well:

$ ssh ubuntu@ec2-52-29-14-175.eu-central-1.compute.amazonaws.com -i /g/veits/PC/PKI/AWS/AWS_SSH_Key.pem
The authenticity of host 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com (52.29.14.175)' can't be established.
ECDSA key fingerprint is e2:34:6c:92:e6:5d:73:b0:95:cc:1f:b7:43:bb:54:39.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com,52.29.14.175' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Fri Apr  1 20:38:25 UTC 2016

  System load:  0.08              Processes:           98
  Usage of /:   10.0% of 7.74GB   Users logged in:     0
  Memory usage: 6%                IP address for eth0: 172.31.21.237
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.
ubuntu@ip-172-31-21-237:~$

DONE

C.2 SSH Connection via putty on Windows

Since I am using a Windows machine and the formatting of an SSH session in a CMD console does not work well (especially if you try to use vim), I prefer to use putty on Windows.

In putty, add the host ubuntu@<public DNS>:

2016-04-01_224807_capture_004

and add the path to the private key file on Connection->SSH->Auth->Private key file for authentication:

2016-04-01_131935_capture_003

Note that the pem file needs to be converted to the ppk format putty understands. For that, import the pem file using the PuTTY Key Generator (puttygen) via Conversions->Import key->choose the pem file->Save private key with ppk extension.

2016.04.01-13_23_46-hc_001

2016.04.01-13_26_46-hc_001

Now add the path to the ppk file under Connection->SSH->Auth->Private key file for authentication in the putty client, press the “Yes” button when asked about the host key, and we are logged in:

2016-04-01_224815_capture_005

DONE

Appendix D: Sharing Files between Windows Host and Docker Host using NFS (temporary Solution)

Locally, I am running Windows 10 and I am using a Docker host created by Vagrant as a virtual machine on VirtualBox. I have not (yet) configured/installed Vagrant synced folders as described on this web page. Instead, I have set up an NFS server on Windows and mount it within the Docker host as follows:

Step D.1: Install winnfsd

Step D.2: On the Windows machine, create shared folder and start NFS daemon

In a DOS window (run “CMD”), run the following commands (adapt the path to one that is appropriate for you):

mkdir D:\NFS
winnfsd.exe D:\NFS

On the Linux host, mount the folder:

$ sudo mkdir /mnt/nfs; sudo mount -t nfs -o 'vers=3,nolock,udp' LAPTOP-P5GHOHB7:/D/NFS /mnt/nfs

where LAPTOP-P5GHOHB7 is my Windows machine’s name; I have found the required options here (without the -o options, I had received the message mount.nfs: mount system call failed.).

Note that this solution is not permanent, since

  • winnfsd.exe is running in foreground within the DOS Window
  • after reboot of the Docker host, the mount command needs to be issued again

TODO: describe the permanent solution, which survives the reboot of the Windows host (need to evaluate different NFS servers, or run winnfsd.exe as a daemon) and the Docker host (via the fstab file).
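
For the Docker host part of that permanent solution, an /etc/fstab entry along the following lines should work (a sketch using my Windows host name and the mount options from above; I have not verified it yet):

# /etc/fstab on the Docker host
LAPTOP-P5GHOHB7:/D/NFS  /mnt/nfs  nfs  vers=3,nolock,udp  0  0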

Summary

It is much simpler to spin up and shut down AWS virtual machine instances using Terraform than using Vagrant’s AWS plugin. The reason is that Terraform’s AWS provider (unlike Vagrant) does not require SSH access to the instance. Therefore, we were not forced to adapt any SSH-related security settings. It just works with Terraform.

To be fair: if the AWS instances are part of a development or staging environment, you will need SSH access to the virtual machine instances in most cases anyway. In those cases, a few additional steps are necessary, as shown in Appendices A and B. In the end, however, you only need to add a few lines to the Terraform template.

Adapting the default security rule in order to allow SSH access is the same as with Vagrant. Here, we have shown how this is done manually, and in part 2 we offer an automated way of performing this based on the AWS CLI.

The actual SSH access is more convenient with Vagrant and its nice vagrant ssh command.

All in all, automating AWS using Terraform requires equal or less effort than Vagrant. And we gain a more clearly laid out description of the infrastructure resources and more flexibility to apply the same set of resources to a mixed hybrid environment of OpenStack and IaaS clouds like AWS, Azure and Google Cloud, among others.

Possible Next Steps

  • Provisioning and assignment of your own security rules based on Terraform described here
  • Test remote file upload and remote command execution (are SSH keys needed for this to be successful?)
  • Upload/Synchronization of local images with AWS images


<< Part 1 | Part 2 | Part 3 | Part 4 >>


Getting started with Docker Cloud


In this blog post, we will explore how to get started with Docker Cloud, a Docker orchestration cloud service that helps deploy and manage Docker host clusters and the Docker containers running on those host clusters.

Background

Since Docker acquired Tutum in October 2015, Docker has been able to offer a service called Docker Cloud. Contrary to what you might expect from its name, Docker Cloud is not a Docker-host-as-a-Service offering: you cannot rent Docker hosts from it, and you cannot push your Docker containers, if you have not signed up with one of the supported IaaS providers.

Instead, Docker Cloud offers Docker orchestration as a Service for Docker hosts and Docker containers, and as such it

  1. allows the automatic deployment of Docker host clusters in one of the data centers of the said IaaS providers. Once a Docker host (cluster) is deployed, the
  2. Docker Cloud service allows you to deploy, manage and terminate Docker containers (i.e. dockerized services) on those Docker hosts.

Overview

This is what we want to achieve:

  • Deploy Docker host and Docker container via Docker Cloud:

2016.05.06-15_15_33-hc_001

  • Deploy SSH keys for admin SSH access to the Docker hosts:

2016.05.06-15_19_10-hc_001

Let us start:

Step 1: Sign up for Docker Cloud

Click Try Docker Cloud on the Docker Cloud home page and follow the instructions, or go to the next step.

Step 2: Create Docker Cloud User on IaaS Portal

You could use your main user’s access credentials. However, from a security perspective, it is better to create a new user for Docker Cloud access with only the permissions Docker Cloud needs.

On the Users view of the AWS IAM console, we push the Create New Users button and enter a user name “dockercloud”. Leave “Generate an access key for each user” checked.

2016.05.06-10_18_59-hc_001

Click the Create and then the Download Credentials button and save the CSV file to a safe place.

Step 3: Assign the needed access rights to the dockercloud user

If this step is skipped, you will get the following error messages or warnings in the next step:

  • Invalid AWS credentials
    -> this one will prevent you from adding the AWS credentials to Docker Cloud and is critical. We will get rid of this error in Step 3.1.
  • Couldn’t load instance profiles
    -> this is a warning that did not prevent me from successfully deploying Docker hosts and clusters. However, I have found out how to get rid of the warning in Step 3.2.

Step 3.1: Get rid of the “Invalid AWS credentials” error

Invalid AWS credentials

Adapt and go to the Policies tab of the AWS IAM console.

Click the “Get Started” button, if the list of policies is not visible already. After that, you should see the list of policies and a filter field:

2016.05.06-10_42_28-hc_001

In the Filter field, search for the term AmazonEC2FullAccess. 

Click on the AmazonEC2FullAccess Policy Name and then choose the tab Attached Identities.

2016.05.06-10_50_14-hc_001

Click the Attach button and attach the dockercloud user:

2016.05.06-10_58_21-hc_001

Now we have taken care of the first (critical) error.

Step 3.2: Get rid of the “Couldn’t load instance profiles” warning

Couldn't load instance profiles

To get rid of this warning, rerun Step 3.1, replacing AmazonEC2FullAccess by IAMReadOnlyAccess.

Step 4: Add your IaaS login credentials on the Docker Cloud account page.

On the Docker Cloud account page, click on AWS and add the Access Key ID and the Secret Access Key from the CSV file you have downloaded in the previous step.

If you are interested, AWS’ official documentation shows how to retrieve the access credentials of your own account, if needed.

Step 5: Launch a Docker host cluster

Once you have added the login credentials on the account page, you can either

  • bring your own node
    or
  • launch a new node cluster

on the Nodes Docker Cloud page without logging into your IaaS provider’s web page.

Be sure to select the right data center, so you are not confused when you connect to the IaaS provider’s portal (in my case: the AWS console) and do not find your instance.
Also, if you are using the AWS free tier, you need to choose the t2.micro deployment size instead of the offered t2.nano deployment: although t2.nano is smaller than t2.micro, it is not “free tier eligible”, but t2.micro is. See AWS’ comment on this at the bottom of this page.

2016.05.06-11_47_50-hc_001

Note: since disk space comes at a cost, for our purposes it is sufficient to deploy e.g. only 6 GB instead of the default 60 GB disk.

Click the Launch node cluster button. We get following view:

2016.05.06-11_50_57-hc_001

Step 6 (optional): View the Docker hosts instance(s) in your IaaS provider’s web portal

After clicking the Launch node cluster button, the Docker host(s) are automatically created in your IaaS data center. You can optionally review the virtual instance on the IaaS provider’s web portal (the AWS console in my case). Choose the first icon, Compute->EC2.

2016.05.06-11_55_43-hc_001

We will see the running instance by clicking on the Running instances link or on the Instances->Instances menu item of the left sidebar:

2016.05.06-11_56_40-hc_001

Step 7: Create a Service on Docker Cloud

On the Docker Cloud Services page, we click on the Create service button. If you have no better alternative, choose the public docker-nginx-busybox image, which has a tiny footprint (~8 MB image size).

2016.05.06-12_06_38-hc_001

Click to override the ports defined in the image, check the “Published” checkbox and add the ports you want to use to access the image from the Internet:

2016.05.06-12_08_38-hc_001

Note: I have just found out that the https port 443 of the chosen Docker image is not functional currently. Better use port 80 only.

Click the create & deploy button and within seconds, the image will be downloaded and a container will be started on the Docker host. The service should be shown as “started”:

2016.05.06-12_12_27-hc_001

Step 8: Access the Service from the Internet

There are two ways to access the service. The inconvenient one is to find the DNS name of the Docker host in the AWS console and access the URL http://<Docker host name or IP address> in a browser.

The more convenient option is to click on the service name on Docker Cloud

2016.05.06-12_23_08-hc_001

and click on the port 80 link. We will reach the service:

2016-05-06_122516_capture_124

Now we have deployed our first simple Internet service using Docker Cloud.


Step 9: Terminate the Docker Host

In case the Docker host is not needed anymore and you need to save computing time / IaaS cost, do not forget to terminate the Docker host. This is best done on the Docker Cloud Nodes page.

Note: Docker Cloud offers to terminate (i.e. destroy) the node only; stopping and re-starting the node is possible only via the IaaS portal (AWS console). Although the public IP address of the instance may change, the Docker Cloud service link becomes functional again as soon as the service is started.

2016.05.06-17_57_55-hc_001

Here, we have pressed the Terminate button to terminate the node. The deployed service will become dysfunctional. However, we can keep the defined service for later deployment on a different Docker host (node). On the AWS EC2 Console, we can verify that the instance is terminated:

2016.05.06-18_01_52-hc_001

After some time (less than an hour?), the instance (and its SSD volume) will disappear from the console view.

Note: Stopping the instance instead of terminating it comes at a certain cost for the image’s disk space: e.g. for central Europe, the AWS monthly cost calculator currently shows ~7.14 US$/month for a 60 GB general purpose SSD, as deployed by default by Docker Cloud. I am not sure whether you have to pay for the whole disk (60 GB in this case) or only for the disk space used (< 4 GB in this case).

For checking on the current AWS cost, consult the AWS billing page.

Appendix A: Maintenance Access to the docker host via SSH

For troubleshooting purposes, it makes sense to get SSH access to the automatically deployed Docker host.

Note that SSH key pairs created by AWS do not work by default for Docker hosts deployed from Docker Cloud, since Docker Cloud deploys its own SSH keys. We need to upload our own SSH key to Docker Cloud in order to access the deployed hosts via SSH.

Step A.1: Upload public SSH key to all Docker hosts

The recommended way to deploy an SSH public key to all Docker hosts deployed by Docker Cloud is to deploy a special image provided by Docker Cloud for this purpose. For that, click on Deploy to Cloud on this Docker documentation page. The following page will be displayed:

2016.04.27-23_00_33-hc_001

Step A.2: Edit the AUTHORIZED_KEYS.

  • For this, find the public key you want to use (in my case: G:\veits\PC\PKI\AWS\AWS_SSH_Key_pub.pem). The content should start with “ssh-rsa”. If you do not have an SSH key yet, you can create and download a private key on AWS following the AWS instructions, if you do not want to use the traditional way of creating the key pair using OpenSSH (e.g. see the “Generating RSA Keys” chapter of these Ubuntu instructions). Note that apart from downloading your private key, you will also need to create a public key from the private key, which is described on the same AWS documentation page in the chapters starting with “Retrieving the Public Key …” in the second half of the page.
  • Click on Create and Deploy

Step A.3: Use the private key in the SSH client

Now your instances will have the public key added and you can SSH to the instance with the corresponding private key. Note that in case of Windows putty, the private PEM key file needs to be converted into the PPK file format putty understands. For this, the tool puttygen is used. Use Conversions->Import to import the PEM file and click on “Save private key” to save the PPK file. This is the one you need to add to the right pane in Connection->SSH->Auth.

2016-04-27_231004_capture_059

Do not forget to click on Session and Save to make the change permanent.

2016-04-27_231011_capture_060

Then we click on “Open” in order to connect to the system:

2016-04-27_231019_capture_061

From here, docker commands can be performed. Try it with docker help.

Summary

In this blog post, we have shown how to create a Docker host cluster in an AWS data center through the Docker Cloud web portal. After this, a containerized service can be launched and accessed from the Internet after a few clicks. In order to get maintenance access to the Docker hosts via SSH, a public SSH key must be deployed. For that, a special Docker image is deployed with the Stacks feature of Docker Cloud.

From my point of view, the whole process is quite straightforward, with the only tricky part being the assignment of the correct access rights to the dockercloud user.


AWS Automation based on Vagrant — Part 3: Creating a Docker Host on AWS in 10 Minutes using Vagrant


Okay, I am cheating a little bit with respect to the 10 minutes mentioned in the title: I assume that this step by step guide has been accomplished already. That might take you an hour or so.

After that, you are ready to run a Docker host on AWS within 10 minutes with only 2 lines of additional code. With a few more clicks in the Amazon web portal (the AWS EC2 console), you are ready to access the newly created Docker host. After downloading a Docker Python image, you will print a Python-created “Hello World!” to the console of the Docker host.

The series is divided into three parts:

  • In Part 1, we will introduce Amazon Web Services (AWS) and will show how to sign into a free trial of Amazon, create, start, shut down and terminate a virtual machine on the AWS EC2 console.
  • Part 2 will lead you through the process of using Vagrant to perform the same tasks you have performed in part 1, but now with local Vagrantfiles in order to automate the process.
  • Part 3 (this blog post) is the shortest part and will show how Vagrant helps you go beyond simple creation, startup, shutdown and termination of a virtual machine. In less than 10 minutes, you will be able to install a Docker host on AWS. With a few additional clicks on the AWS EC2 console, you are ready to start your first Docker container in the AWS cloud.

Document versions

v1 (2016-04-06): initial release of this document
v2 (2016-04-12): documented a provisioning error I have hit in the Caveats section at the end.

Prerequisites

  • Your firewall allows you to access systems on the Internet via SSH with no proxy in between. This is usually possible from a home network or a hot spot, but usually not permitted from within a corporate network using HTTP proxies.
  • You have followed this step by step guide in order to set up Vagrant as a AWS provider. After this, you will have…
    • … signed into AWS
    • … created an AWS user with the appropriate privileges
    • … installed Vagrant and the Vagrant AWS Provider
    • … created a Vagrantfile with the appropriate information to connect to AWS
    • … tested the creation and termination of an Ubuntu image on AWS by using the local Vagrant command line interface

Step by Step Guide

Step 1: Adapt the Vagrant File

Add the two config.vm.provision lines to the existing Vagrantfile created in the other step by step guide:

# Vagrantfile
...
Vagrant.configure(2) do |config|
  ...
  config.vm.provision :shell, :inline => "sudo wget https://raw.githubusercontent.com/oveits/docker-enabled-vagrant/master/ubuntu-trusty/vagrant-provision.sh -O /tmp/vagrant-provision.sh", :privileged => true
  config.vm.provision :shell, :inline => "sudo bash /tmp/vagrant-provision.sh", :privileged => true
end

Step 2: Launch and Provision Instance

Back on the local command line, issue the command:

vagrant up --provision

to create and launch the new instance on AWS and install Docker together with many useful Docker tools.
Or, if the image is already up and running and we do not want to create the instance, but only install Docker on the existing image, we issue the command:

vagrant provision

If you happen to hit a curl error here, please see the Caveats section at the end.

After that, you will be able to observe in the local console that lots of software is downloaded (this is quite quick when run in the cloud, since AWS has a good Internet connection). The log will end with some error messages that can be safely ignored:

==> default: e67def44f1a2: Download complete
==> default: e67def44f1a2: Pull complete
==> default: e67def44f1a2: Pull complete
==> default: a3ed95caeb02: Pull complete
==> default: a3ed95caeb02: Pull complete
==> default: Digest: sha256:c46c830e33c04cadebcd09d4c89faf5a0f1ccb46b4d8cfc4d72900e401869c7a
==> default: Status: Downloaded newer image for weaveworks/plugin:1.4.6
==> default: docker: "rm" requires a minimum of 1 argument.
==> default: See 'docker rm --help'.
==> default:
==> default: Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
==> default:
==> default:
==> default: Remove one or more containers
==> default: Failed to remove image (busybox): Error response from daemon: No such image: busybox:latest
[/f/veits/Vagrant/ubuntu-trusty64-docker-aws-test]

Step 3: Update the Security Policy

In the EC2 console, under Network & Security -> Security Groups (in my case in EU Central 1: https://eu-central-1.console.aws.amazon.com/ec2/v2/home?region=eu-central-1#SecurityGroups:sort=groupId), we can find the default security group. We need to edit the inbound rule to allow the current source IP address. For that, select the security group, click on the “Inbound” tab on the bottom, specify “My IP” as source and save the rule:

2016.04.01-13_05_18-hc_001

Now we should be able to access the system.

Step 4: Access the System

Note: This step and the following steps will work only, if your firewall allows you to access systems in the Internet using SSH.

When you log in, you can issue your first docker commands. Note that you might need to update your security settings in order to allow access from your IP address, as described in Step 3 above or in the other step by step guide.

$ vagrant ssh
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Sat Apr  2 20:24:18 UTC 2016

  System load:  0.01              Processes:              111
  Usage of /:   18.9% of 7.74GB   Users logged in:        1
  Memory usage: 14%               IP address for eth0:    172.31.30.67
  Swap usage:   0%                IP address for docker0: 172.17.0.1

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud


*** System restart required ***
ubuntu@ip-172-31-30-67:~$ sudo docker search python
NAME                     DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
python                   Python is an interpreted, interactive, obj...   738       [OK]

Step 5: Test a docker image with a Python hello world

Now let us perform a Python hello world, using the corresponding python docker image:

$ echo 'print("hello world!")' > helloworld.py
$ docker run -it --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp python python helloworld.py

Or, we can set a new alias, which allows for a simpler syntax in future (note that the alias will not survive a reboot if it is not written to .bashrc. Moreover, it will not survive a termination/creation cycle if the alias is not provisioned via the Vagrantfile, as sketched below):

ubuntu@localhost:~$ alias python='docker run -it --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp python python'
ubuntu@localhost:~$ python helloworld.py
hello world!
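
If you want the alias to survive a termination/creation cycle, a provisioning line in the Vagrantfile along the following lines should do the trick (a sketch, not part of the original guide; note the escaping):

# Vagrantfile (sketch): append the python alias to the SSH user's ~/.bashrc
config.vm.provision :shell, :privileged => false, :inline =>
  "echo \"alias python='docker run -it --rm -v \\\"\\$PWD\\\":/usr/src/myapp -w /usr/src/myapp python python'\" >> ~/.bashrc"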

Caveats

After trying again to perform vagrant provision in order to verify the 10 minutes installation time, I hit the following problem on line 125 of /tmp/vagrant-provision.sh (a file that is uploaded automatically as specified by the Vagrantfile):

default: curl: (56) SSL read: error:00000000:lib(0):func(0):reason(0), errno 104

The problem seems to be caused by this line:

curl -o docker-machine -L https://github.com/docker/machine/releases/download/$MACHINE_VERSION/docker-machine-`uname -s`-`uname -m`

I have not found a reason for the error yet. My workaround was to issue vagrant provision a second time. Docker seems to work thereafter.

Summary

In this blog post, we have shown how Vagrant can be used to perform more sophisticated provisioning tasks than creation and termination of virtual machines. From our local Vagrant console, we have installed lots of useful Docker software in less than 10 minutes, and we have verified the result by downloading and testing the Python Docker image.


<< Part 1 | Part 2 | Part 3 >>

 


AWS Automation based on Vagrant — Part 1: Getting started with AWS


In this blog post series we will explore how to automate Amazon Web Services (AWS) by using Vagrant. The series is divided into three parts. Readers who are interested in the automation part only can skip part 1 (the AWS EC2 console part) and jump directly to part 2, since both part 1 and part 2 are self-contained.

  • In Part 1, we will introduce Amazon Web Services (AWS) and will show how to sign into a free trial of Amazon, create, start, shut down and terminate a virtual machine on the AWS EC2 console.
  • Part 2 will lead you through the process of using Vagrant to perform the same tasks you have performed in part 1, but now with local Vagrantfiles in order to automate the process.
  • Part 3 is the shortest part and will show how Vagrant helps you go beyond simple creation, startup, shutdown and termination of a virtual machine. In less than 10 minutes, you will be able to install a Docker host on AWS. With a few additional clicks on the AWS EC2 console, you are ready to start your first Docker container in the AWS cloud.

At the end, you will have a Docker host running in the public cloud, allowing you to load any of the images from Docker Hub instead of installing any software.

Document Versions

v1 (2016-04-03): initial release
v2 (2016-04-11): improved the step by step procedure
v3 (2016-04-21): added a chapter Appendix A about AWS cost control

Executive Summary

According to Gartner, Amazon Web Services (AWS) is the number one service provider in the public cloud IaaS space. Amazon is offering a “free tier” test account for up to 12 months with up to 750 hrs of a t2.micro Linux instance as well as 750 hrs of a t2.micro Windows 2012 instance. For more details, check the free tier limits page. For services outside the free tier limits, check the AWS simple monthly (cost) calculator.

Per default, AWS is assigning a dynamic private and a dynamic public IP address. The public IP address and DNS name will change every time you restart the instance.

Deleting an instance is done by “Terminating” it. For a long time, the terminated instance will still be visible in the instance dashboard as “Terminated”. The sense and non-sense of this is discussed in this forum post.


Why offer yet another ‘Hello World’ for Amazon Web Service Automation using Vagrant?

The reason is that the other guides I have found do not start from scratch, and I have learned the hard way that they assume that you have already created an AWS user with the appropriate rights. Since I benefit from all those other evangelists out there helping me with my projects, I feel obliged to pay back my share.

Many thanks to Brian Cantoni, who has shared with us a (much shorter) Quick Start Guide on the same topic. Part 2 of my detailed step by step guide is based on his work.

Why Amazon Web Services?

According to Gartner’s 2015 report, Amazon Web Services is the leader in the IaaS space, followed by Microsoft Azure. See below Gartner’s magic quadrant on IaaS:

Gartner 2015 MQ

Source: Gartner (May 2015)

There are many articles out there that compare AWS with Microsoft Azure. From reading those articles, the following over-simplified summary has burnt itself into my brain:

Amazon Web Services vs. Microsoft Azure is like Open Source Linux world vs. the commercial Microsoft Software world. For a long time, we will need both sides of the world.

Now that we have decided to begin with the open source side of the world, let us get started.

Getting started with Amazon Web Services

Step 1: sign in to AWS

In order to get started, you need to sign into Amazon Web Services, if you have not already done so. For that, visit https://aws.amazon.com/, scroll down and push the Get Started for Free button. This starts a free tier trial account for up to 12 months and up to two times 750 hrs of computing time: Linux and Windows 2012 Server on a small virtual machine.

Note that you will be offered options that are free along with other services that are not for free, so you need to be a little bit careful. Vagrant with its easy automation will help us to minimize the resources needed.

2016-03-27_231950_capture_008

I had signed into AWS long ago, but as far as I remember, you need to choose “I am a new User”, add your email address and desired password and a set of personal data (I am not sure whether I had to add my credit card, since I am an Amazon customer anyway).

2016.03.31-19_50_22-hc_001

Install an Ubuntu machine from the EC2 image repository

Step 2: Enter EC2 Console

Now we want to create our first virtual machine on AWS. After having signed in, you are offered to enter AWS home (the link depends on the region you are in, so I do not confuse you with a link that might not work for you) and you can enter the AWS EC2 console on the upper left:

2016.03.31-19_51_47-hc_001

Step 3: Choose and Launch Instance

On the following page, you are offered to create your first virtual machine instance:

2016.03.27-23_22_49-hc_001

Choose Launch Instance. I am an Ubuntu fan, so I have chosen the HVM version of Ubuntu:

Step 3.1: Choose Image

2016.03.27-23_28_26-hc_001

This image is ‘Free tier eligible’, so I expect not to be charged for it. Note that there are two image types offered for each operating system: HVM and PV. HVM seems to have better performance. See here for a description of the differences.

2016.04.03-18_14_42-hc_001

Note that only the t2.micro instance type is ‘Free tier eligible’. Larger instance types will not come for free, as we might have expected. However, note that even the smaller t2.nano instance is not ‘Free tier eligible’. If you want to use a t2.nano instance, you will have to pay for it from day one.

If you plan to make use of services that are not ‘Free tier eligible’, the AWS simple monthly (cost) calculator helps you estimate your monthly cost.

Step 3.2: Launch Instance

Now click on Review and Launch.

2016.04.03-18_16_37-hc_001

Step 3.3: Adapt Security Settings

We get a security alert that we take seriously: creating an instance that is open to the whole Internet is not a good idea, so we click “Edit security groups”:

2016.04.03-18_19_34-hc_001

From the drop down list of the Source, we select “My IP”, before we press “Review and Launch”. Then we can review the instance data again and press Launch:

2016.04.03-18_22_51-hc_001

Step 3.4: Create and download SSH Key Pair

In the next pop-up window, you are offered to create a new SSH key pair. Let us do so, call the key “AWS_SSH_key” and download the corresponding PEM file to a place you will remember: you will need it later on to connect to your instance:

2016.04.03-18_25_04-hc_001

Now press “Launch Instances”. You will be redirected to a page that helps you connect to your instance:

2016.04.03-18_28_44-hc_001

Step 3.5: Check Instance Status

After clicking on the Instance Link, we will see that the instance is running and the “Status Checks” are being performed:

2016.04.03-18_30_17-hc_001

In the description, we will also find some important information about the instance, like the Public IP and the Public DNS name (FQDN). This information is needed now, since we want to connect to the instance via SSH.

Note that the IP address and the Public DNS name will change every time the instance is started. For static IP addresses, a so-called Elastic IP needs to be rented from AWS. If this IP is assigned to a free tier instance, the rented Elastic IP also seems to be free of charge.
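For illustration only, here is a minimal sketch of how an Elastic IP could be allocated and attached using the AWS CLI (the CLI installation is described in Appendix A of part 2; both IDs below are placeholders):

# allocate a new Elastic IP in the VPC (the output contains an AllocationId)
aws ec2 allocate-address --domain vpc
# associate the Elastic IP with the instance (both IDs are placeholders)
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0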

 

Step 4: Connect via SSH

If you are connecting to your instance from a Linux or Unix operating system, follow step 4 a) and use the built-in SSH client. For Windows systems, we recommend following step 4 b), which is based on putty.

Note: With Cygwin on Windows, you might also try using step 4 a). However, other Linux emulations on Windows like the bash shell that comes with Git do not play well with editors like vim, so I recommend following 4 b) in this case.

Step 4 a) Connection from a *nix operating system

On a Unix or Linux machine or on a bash shell on Windows, you can connect via the *nix built-in SSH client. The following command line connection worked for me on a bash shell on my Windows machine. Replace the path to the private PEM file and the public DNS name, so that it works for you as well:

$ssh ubuntu@ec2-52-29-14-175.eu-central-1.compute.amazonaws.com -i /g/veits/PC/PKI/AWS/AWS_SSH_Key.pem
The authenticity of host 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com (52.29.14.175)' can't be established.
ECDSA key fingerprint is e2:34:6c:92:e6:5d:73:b0:95:cc:1f:b7:43:bb:54:39.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com,52.29.14.175' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Fri Apr  1 20:38:25 UTC 2016

  System load:  0.08              Processes:           98
  Usage of /:   10.0% of 7.74GB   Users logged in:     0
  Memory usage: 6%                IP address for eth0: 172.31.21.237
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.
ubuntu@ip-172-31-21-237:~$
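Note: if the SSH client refuses to use the key with an “UNPROTECTED PRIVATE KEY FILE” warning, restrict the permissions of the PEM file first (path as in the example above):

chmod 400 /g/veits/PC/PKI/AWS/AWS_SSH_Key.pem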

Step 4 b) Alternatively, on Windows, use putty to connect via SSH:

Since I am using a Windows machine, and the formatting of an SSH session in a CMD console using command-line ssh in a bash shell does not work well (try using vim in a Windows CMD console), I prefer to use putty on Windows.

In putty, add the host ubuntu@<public DNS>:

2016-04-01_224807_capture_004

 

Convert the pem file to a ppk format putty understands. For that, import the pem file using Putty Key Generator (puttygen) via Conversions->Import Key->choose pem file -> Save private key with ppk extension.

2016.04.01-13_23_46-hc_001

2016.04.01-13_26_46-hc_001

Now you can add the path to the ppk file to Connection->SSH->Auth->Private key file for authentication: in the putty client.

2016-04-01_131935_capture_003

To save the changes, you need to click on Session on the left Category Pane and then press Save:

2016-04-03_184454_capture_007

Now, press the “Open” button, accept the SSH security key:

2016-04-03_184623_capture_008

and you should be logged in:

2016-04-01_224815_capture_005

thumps_up_3

Step 5: Destroy the Instance on AWS

In order to save money (or trial work hours in our case), when you are done playing around with the instance, let us destroy it in the AWS EC2 console again:

2016.04.03-18_49_08-hc_001

Select the instance, choose Actions->Instance State->Stop. Note that any changes to the instance will be lost, if you stop the system:

2016.04.03-18_57_23-hc_001

Only the private IP addresses and DNS names are kept, while the public IP and DNS are freed up. Next time you start the system, the public IP address and public DNS name will be different and you will need to update the DNS in your SSH client for external access.

2016.04.03-19_01_19-hc_001

Alternatively, you can also terminate the instance, which will delete the instance from the AWS database. Note, however, that you will still see the instance in a “Terminated” status for a while. The sense and non-sense of this is discussed in this forum post.
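For reference, stopping and terminating can also be done from the AWS CLI (its installation is described in Appendix A of part 2); the instance ID below is a placeholder:

# stop the instance (the state is kept; the public IP is released)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
# or terminate the instance (it will be deleted)
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0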

Appendix A: Cost Control with AWS

An estimation of the expected cost can be calculated with the AWS monthly cost calculator tool.

The actual cost can be observed on AWS’ billing page. At the bottom of the page, there is a “Set your first billing alarm” link that allows to define an email alarm as soon as a certain threshold is exceeded.

Note for users that are not in the East of the US: I was a little bit confused that the “Set your first billing alarm” link (https://console.aws.amazon.com/cloudwatch/home?region=us-east-1&#s=Alarms&alarmAction=ListBillingAlarms) contains a variable region=us-east-1, while I am using resources in eu-central-1 only. However, the corresponding link https://eu-central-1.console.aws.amazon.com/cloudwatch/home?region=eu-central-1#alarm:alarmFilter=ANY does not allow setting any billing alarms. I assume that the billing for all regions is performed centrally in US East (I hope).
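For illustration, here is a hedged sketch of how such a billing alarm might be created via the AWS CLI instead of the web console; the alarm name, the 10 USD threshold and the SNS topic ARN are placeholders, and the region must be us-east-1, matching the observation above:

# sketch: alarm once the estimated charges exceed 10 USD
# (the SNS topic must exist already; its ARN below is a placeholder)
aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name my-billing-alarm \
    --namespace AWS/Billing \
    --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum \
    --period 21600 \
    --evaluation-periods 1 \
    --threshold 10 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-billing-topic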


Next: AWS Automation using Vagrant — Part 2: Installation and Usage of the Vagrant AWS Plugin

<< Part 1 | Part 2 | Part 3 >>


AWS Automation based on Vagrant — Part 2: Installation and Usage of the Vagrant AWS Plugin


This is part 2 of a blog post series, in which we will explore, how to automate Amazon Web Services (AWS) by using the Vagrant AWS provider plugin.

Note that part 2 is self-contained: it contains all the information needed to accomplish the tasks at hand without the need to go through part 1 first. Those of you who have started with part 1 may jump directly to the chapter AWS Automation using Vagrant (step by step guide).

The series is divided into four parts:

  • In Part 1: AWS EC2 Introduction, we introduce Amazon Web Services EC2 (AWS EC2) and show how to sign into a free trial of Amazon, create, start, shut down and terminate a virtual machine on the AWS EC2 console.
  • Part 2: Automate AWS using Vagrant (this post) will lead you through the process of using Vagrant to perform the same tasks you have performed in part 1, but now we will use local Vagrantfiles in order to automate the process. Please be sure to check out part 4, which shows a much simpler way to perform the same tasks using Terraform.
  • Part 3: Deploy Docker Host on AWS using Vagrant shows, how Vagrant helps you to go beyond simple creation, startup, shutdown and termination of a virtual machine. In less than 10 minutes, you will be able to install a Docker host on AWS. With a few additional clicks on the AWS EC2 console, you are ready to start your first Docker container in the AWS cloud.
  • Part 4: Automate AWS using Terraform is showing that spinning up a virtual machine instance on AWS using Terraform is even simpler than using the Vagrant AWS plugin we have used in parts 2 and 3. Additionally, Terraform opens up the possibility to use the same tool to provision our resources on other clouds like Azure and Google Cloud.

At the end, you will have a Docker host running in the public cloud, allowing you to load any of the images from Docker Hub instead of installing any software.

Document Versions

  • V2016-04-01: initial published version
  • V2016-04-14 : added Automate the Security Rule Update chapter
  • V2016-04-21: added Next Steps chapter
  • V2016-05-06: added more details and screenshots for Step 7: create user and add access rights; added Document Versions chapter.

Contents of Part 2

In this blog post we will explore how to get started with Amazon Web Services (AWS). After signing in to a free trial of Amazon, we will show how to create, spin up and terminate virtual machines in the cloud using Amazon’s AWS EC2 web based console. After that, a step by step guide will lead us through the process of performing the same tasks in an automated way using Vagrant.

While the shown tasks could also be performed with AWS CLI commands, Vagrant potentially allows for more sophisticated provisioning tasks like Software Installation and upload & execution of arbitrary shell scripts.

Why offer yet another ‘Hello World’ for Amazon Web Service Automation using Vagrant?

The reason is that the other guides I have found do not start from scratch, and I have learned the hard way that they assume that you have already created an AWS user with the appropriate rights. Since I benefit from all those other evangelists out there helping me with my projects, I feel obliged to pay back my share.

Many thanks to Brian Cantoni, who has shared with us a (much shorter) Quick Start Guide on the same topic. Part 2 (this post) of my detailed step by step guide is based on his work.

Why Amazon Web Services?

According to Gartner’s 2015 report, Amazon Web Services is the leader in the IaaS space, followed by Microsoft Azure. See below Gartner’s magic quadrant on IaaS:

Gartner 2015 MQ

Source: Gartner (May 2015)

There are many articles out there that compare AWS with Microsoft Azure. From reading those articles, the following over-simplified summary has burnt itself into my brain:

Amazon Web Services vs. Microsoft Azure is like Open Source Linux world vs. the commercial Microsoft Software world. For a long time, we will need both sides of the world.

Now that we have decided to begin with the open source side of the world, let us get started.

Signing into Amazon Web Services

In order to get started, you need to sign into Amazon Web Services, if you have not already done so. For that, visit https://aws.amazon.com/, scroll down and push the Get Started for Free button. This starts a free tier trial account for up to 12 months and up to two times 750 hrs of computing time: Linux and Windows 2012 Server on a small virtual machine.

Note that you will be offered options that are free along with other services that are not for free, so you need to be a little bit careful. Vagrant with its easy automation will help us to minimize the resources needed.

2016-03-27_231950_capture_008

I had signed into AWS long ago, but as far as I remember, you need to choose “I am a new User”, add your email address and desired password and a set of personal data (I am not sure whether I had to add my credit card, since I am an Amazon customer anyway).

2016.03.31-19_50_22-hc_001

 

If you are interested in creating, launching, stopping and terminating virtual machine instances using the Amazon EC2 console (a web portal), you might want to have a look at part 1 of this series:

2016.04.03-21_24_41-hc_001

In this part 2 of the series, we will concentrate on automating the tasks.

AWS Automation using Vagrant

Now we will use Vagrant in order to automate the installation of an image. Before trying it myself, I had expected that I could spin up any existing Vagrant box (that is Vagrant’s name for a Vagrant image) on AWS. However, I have learned that this is not the case: instead, we will need to use a dummy Vagrant box supporting the AWS provider, which in turn will be used to spin up an existing AWS image (a so-called AMI) in the cloud. No Vagrant box is uploaded to the cloud during the process.

Let us start:

Step 0: Set HTTP proxy, if needed

Note that the Vagrant setup will not finish successfully in step 10.1, if your local machine does not have SSH access over the Internet to your AWS EC2 instance. If you are located behind an HTTP proxy, you will be able to start and terminate an AWS instance via Vagrant, but Vagrant will hang indefinitely and you will not be able to provision the AWS instance.

If you have no other choice and you are located behind an HTTP proxy, and you only want to test how to start and terminate an AWS instance, you can run the following commands before trying to install and use Vagrant:

On *nix systems:

export http_proxy='http://myproxy.dns.name:8080'
export https_proxy='http://myproxy.dns.name:8080'

On Windows:

set http_proxy=http://myproxy.dns.name:8080
set https_proxy=http://myproxy.dns.name:8080

Replace myproxy.dns.name and 8080 by the IP address or DNS name and the port used by the HTTP proxy in your environment.

Step 1: Install Vagrant on your local machine; it is best, if you have direct Internet access (behind a firewall, but without any HTTP proxy). The installation procedure depends on your operating system and is described here.

Step 2: Install the Vagrant AWS plugin

vagrant plugin install vagrant-aws
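You can verify that the plugin has been installed successfully with:

vagrant plugin list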

Step 3: download the dummy Vagrant box

Vagrant boxes need to be built for the provider you use. Most Vagrant boxes do not support the AWS provider. The easiest way to work around this issue is to load a dummy box that supports the AWS provider and to override the image that is spun up in the cloud by using an override statement in the Vagrantfile. There, you will point to one of the available Amazon images (called AMIs) on AWS EC2. But for now, let us download the dummy Vagrant box:

vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box

Step 4: Create a directory and within the directory, issue the command

vagrant init

This will create a template Vagrantfile in the directory.

Step 5: Adapt the Vagrantfile

Step 5.1: Add the following lines into the Vagrantfile that has just been created:

# Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.provider :aws do |aws, override|
    aws.access_key_id = ENV['AWS_KEY']
    aws.secret_access_key = ENV['AWS_SECRET']
    aws.keypair_name = ENV['AWS_KEYNAME']
    aws.ami = "ami-a7fdfee2"
    aws.region = "us-west-1"
    aws.instance_type = "t2.micro"

    override.vm.box = "dummy"
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = ENV['AWS_KEYPATH']
  end
end

Step 5.2: Note that you need to adapt the aws.region to the region you have signed up in. See here for a list of the regions AWS offers. In my case, this was:

aws.region = "eu-central-1"

Step 5.3: In addition, you will need to update the aws.ami value to the one you have seen in the EC2 console when choosing the image after pressing the Launch Instance button. In my case, this was

aws.ami = "ami-87564feb"

Step 6: Define the AWS credentials

Step 6.1: Create a file called ‘aws-credentials’ with the following content:

export AWS_KEY='your-key'
export AWS_SECRET='your-secret'
export AWS_KEYNAME='your-keyname'
export AWS_KEYPATH='your-keypath'
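Since this file contains secrets, it is a good idea to restrict its permissions on *nix systems:

chmod 600 aws-credentials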

Step 6.2: Find the AWS Key ID and Secret Access Key

On the Users tab of the IAM console, click the Create New Users button and create a user of your choice. The ‘Access Key ID’ and the ‘Secret Access Key’ will be displayed automatically. In the file above, replace 'your-key' and 'your-secret' by those values.

Step 6.3: Add SSH Key pair name and SSH Key path

On the EC2 console -> Network & Security -> Key Pairs, create and download a new SSH key. You will be prompted for an SSH key name and the download path. In the ‘aws-credentials’ file, replace 'your-keyname' and 'your-keypath' by those values.

Step 7: Add a user and apply the appropriate permissions

This step is not described in the Quick Start guides I have come across and this has caused some errors I will show in the Appendix as a reference. In order to avoid running into the errors, do the following:

Step 7.1: Create a new user on the AWS IAM Users page, if not already done.

Step 7.2: Assign the needed access rights to the user like follows:

Adapt and go to the AWS IAM link https://console.aws.amazon.com/iam/home?region=eu-central-1#policies. The link needs to be adapted to your region; e.g. replace eu-central-1 with the right one from the region list that applies to your account.

Click the “Get Started” button, if the list of policies is not visible already. After that, you should see the list of policies and a filter field:

2016.05.06-10_42_28-hc_001

In the Filter field, search for the term AmazonEC2FullAccess. 

Click on the AmazonEC2FullAccess Policy Name and then choose the tab Attached Identities.

2016.05.06-10_50_14-hc_001

Click the Attach button and attach the main user (in the screenshot above, my main user “oveits” is already attached; in your case, the list will be empty, most likely).

Step 8: Write credentials into Environment variables

In step 6, we have created and edited a file called aws-credentials. Now is the time to write the values into the environment variables by issuing the command

source aws-credentials
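As a quick sanity check that the variables really are set (this helps to avoid error C.4 in Appendix C):

env | grep '^AWS_'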

Step 9: Create and spin up the virtual machine on AWS

Note: if you get a nokogiri/nokogiri LoadError at this step, see the Appendix below.

Now we should have prepared everything, so that we can create and spin up a virtual machine with a single command:

$vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
==> default: Warning! The AWS provider doesn't support any of the Vagrant
==> default: high-level network configurations (`config.vm.network`). They
==> default: will be silently ignored.
==> default: Launching an instance with the following settings...
==> default: -- Type: t2.micro
==> default: -- AMI: ami-87564feb
==> default: -- Region: eu-central-1
==> default: -- Keypair: AWS_SSH_Key
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Source Destination check:
==> default: -- Assigning a public IP address in a VPC: false
==> default: -- VPC tenancy specification: default
==> default: Waiting for instance to become "ready"...
==> default: Waiting for SSH to become available...

Note: the console might hang at this state for a long time (e.g. 20 minutes). Do not send a Ctrl-C to the terminal in this case, since this will terminate the instance again. Note that opening a new terminal does not help, since Vagrant does not allow sending a new command as long as the vagrant up has not finished. If it takes longer than 20 minutes, check whether your local machine has SSH access to the Internet.

If you have checked that you have general SSH access to the Internet, but the Vagrant console hangs at the “Waiting for SSH” state, do not worry yet: we will update the security settings of the AWS instance in step 10.1 below, before Vagrant can detect that the instance is available. For now, we will ignore the hanging Vagrant console. Instead, we will go to the EC2 console. There, you will see that we have already created an instance (‘0 Running Instances’ is replaced by ‘1 Running Instances’):

2016.03.31-01_27_55-hc_001

Even though the ‘vagrant up’ command might be still hanging in the ‘Waiting for SSH’ status, the instance is up and running. After clicking on the “1 Running Instances” link we will see something like:

2016.03.31-01_29_55-hc_001

Step 10: Access the virtual machine via SSH:

Step 10.1: Updating the security group

In this step, we will adapt the security group manually in order to allow SSH access to the instance. Note that in Appendix B, we show how this step can be automated with a shell script. But now, let us perform the step manually.

2016.04.01-13_00_29-hc_001

In the EC2 console, under Network & Security -> Security Groups (in my case in EU Central 1: https://eu-central-1.console.aws.amazon.com/ec2/v2/home?region=eu-central-1#SecurityGroups:sort=groupId), we can find the default security group. We need to edit the inbound rule to allow the current source IP address. For that, select the security group, click on the “Inbound” tab at the bottom, specify “My IP” as the source and save the rule:

2016.04.01-13_05_18-hc_001

Now check in the console, where you had performed the ‘vagrant up’ command. The command should have finished by now. If the Vagrant console is still hanging, now is the right time to get worried. 😉

In this case, please also check out the note in Step 0.

$vagrant up --provider=aws

Bringing machine 'default' up with 'aws' provider...
...
==> default: Machine is booted and ready for use! 
No host IP was given to the Vagrant core NFS helper. This is an internal error that should be reported as a bug.

For now, we can safely ignore the NFS bug, since we do not need NFS yet…

Step 10.2: Connect via SSH

Now you can SSH into the machine. You need to specify the username=ubuntu, the IP address or FQDN of the machine and the SSH key path we have created in step 6.3. The IP address or FQDN can be read on the EC2 console Instances Description tab:

2016.04.01-22_31_27-hc_001

Note that the IP address and the Public DNS name will change every time the instance is started.

Step 10.2 a) Connection via Vagrant

This is the simplest way to connect to your image: on the console of your local machine, just type

vagrant ssh

and you will be in, as long as the security policy permits this.

$vagrant ssh
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation: https://help.ubuntu.com/

 System information as of Fri Apr 1 20:47:44 UTC 2016

 System load: 0.0 Processes: 99
 Usage of /: 10.0% of 7.74GB Users logged in: 1
 Memory usage: 6% IP address for eth0: 172.31.21.237
 Swap usage: 0%

 Graph this data and manage this system at:
 https://landscape.canonical.com/

 Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.


Last login: Fri Apr 1 20:47:45 2016 from ppp-93-104-168-193.dynamic.mnet-online.de
ubuntu@ip-172-31-21-237:~$

Note that vagrant ssh does not play well with editors like vim on a Windows command shell. See step 10.2 c) below for how to use putty instead.

Step 10.2 b) Connection via a *nix SSH client

Alternatively, on a *nix machine or on a bash shell on Windows, you can connect via the *nix built-in SSH client. The following command line connection worked for me on a bash shell on my Windows machine. Replace the path to the private PEM file and the public DNS name, so that it works for you as well:

$ssh ubuntu@ec2-52-29-14-175.eu-central-1.compute.amazonaws.com -i /g/veits/PC/PKI/AWS/AWS_SSH_Key.pem
The authenticity of host 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com (52.29.14.175)' can't be established.
ECDSA key fingerprint is e2:34:6c:92:e6:5d:73:b0:95:cc:1f:b7:43:bb:54:39.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-52-29-14-175.eu-central-1.compute.amazonaws.com,52.29.14.175' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Fri Apr  1 20:38:25 UTC 2016

  System load:  0.08              Processes:           98
  Usage of /:   10.0% of 7.74GB   Users logged in:     0
  Memory usage: 6%                IP address for eth0: 172.31.21.237
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.
ubuntu@ip-172-31-21-237:~$

Step 10.2 c) Alternatively, perform a SSH Connection via putty on Windows:

Since I am using a Windows machine, and the formatting of an SSH session in a CMD console using ‘vagrant ssh’ does not work well (especially if you try to use vim), I prefer to use putty on Windows.

In putty, add the host ubuntu@<public DNS>:

2016-04-01_224807_capture_004

and add the path to the private key file on Connection->SSH->Auth->Private key file for authentication:

2016-04-01_131935_capture_003

Note that the pem file needs to be converted to a ppk format putty understands. For that, import the pem file using Putty Key Generator (puttygen) via Conversions->Import Key->choose pem file -> Save private key with ppk extension.

2016.04.01-13_23_46-hc_001

2016.04.01-13_26_46-hc_001

Now add the path to the ppk file to Connection->SSH->Auth->Private key file for authentication: in the putty client, press the “yes” button, and we are logged in:

2016-04-01_224815_capture_005

thumps_up_3

Step 11: Destroy the Instance on AWS

In order to save money (or trial work hours in our case), let us destroy the instance again by using Vagrant:

$vagrant destroy
 default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Terminating the instance...

That was very quick (less than a second) and we see that the image is shutting down:

2016.03.31-01_45_42-hc_001

Within 2-3 minutes, we see that the machine is terminated, and I have learned from googling around that the instance will be deleted within the next 10 to 15 minutes.

2016.03.31-01_46_54-hc_001.png

In the terminated and/or deleted status, the instance does not create any cost and I can go to bed.

Appendix A: Installing the AWS CLI

The AWS CLI local installation has helped me with troubleshooting the issues I had during the setup. If you need the AWS CLI, you can see here how to install it.

After that, you need to add the credentials as follows:

$ aws configure
AWS Access Key ID [****************FJMQ]:
AWS Secret Access Key [****************DVVn]:
Default region name [eu-central-1a]: eu-central-1
Default output format [None]:

The following command helped me find out that I had a wrong region in the Vagrantfile:

$ aws ec2 describe-key-pairs --key-name AWS_SSH_Key
A client error (UnauthorizedOperation) occurred when calling the DescribeKeyPairs operation: You are not authorized to perform this operation.

Appendix B: Automate the Security Rule Update

Above, we have shown that you either

  1. need to allow all SSH traffic from anywhere or
  2. to update the rules to only allow your current source IP address in the AWS EC2 console every time your source IP address changes (once a day in most home networks).

Option 1 is insecure and option 2 is cumbersome. In this appendix, we will show how option 2 can be automated. Here is a step by step guide:

Step B.1: Install AWS CLI

For that, follow the instructions in Appendix A: install the AWS CLI and add the keys.

Step B.2: Verify the AWS user rights

Make sure the AWS user has the needed rights/permissions. For that, follow the instructions in Step 7 of the main document and add AmazonEC2FullAccess for the main user, if not already done.

Step B.3: Test that you can see the security policies

In order to be quicker, you can also skip steps B.3 to B.5 and jump directly to B.6 for creating the shell scripts that add and remove security rules. However, steps B.3 to B.5 help you understand what we are doing in the scripts and help verify that each single command is successful.

On the local command line, perform the command

aws ec2 describe-security-groups

A long answer that starts as follows should be seen:

{
    "SecurityGroups": [
        {
            "IpPermissionsEgress": [
                    ...(egress rules)...
            ],
            "Description": "default VPC security group",
            "IpPermissions": [
                    ...(ingress rules)...
            ],
            "GroupName": "default",
            "VpcId": "vpc-a6e13ecf",
            "OwnerId": "923026411698",
            "GroupId": "sg-0433846d"
        },
...(other security groups)...
}

Step B.4: Test how to add and remove a new ingress rule

Now we can add a new ingress rule, see also the AWS doc on this topic. First we will simulate the add by specifying the --dry-run option:

$aws ec2 authorize-security-group-ingress --dry-run --group-id sg-0433846d --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "11.22.33.44/32"}]}]'

A client error (DryRunOperation) occurred when calling the AuthorizeSecurityGroupEgress operation: Request would have succeeded, but DryRun flag is set.

This was the right answer. Note that you will need to use your own --group-id as shown in the output of the default security group above. The IP address does not matter at the moment, since we are testing the API only for now.

Now we run the command again without --dry-run:

aws ec2 authorize-security-group-ingress --group-id sg-0433846d --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "11.22.33.44/32"}]}]'

If everything works right, there will be no response and you will reach the prompt again. You can use the aws ec2 describe-security-groups command again in order to check that a new ingress rule has been added.
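To keep the output short, the check can also be narrowed down to the ingress rules of our security group (the --query expression is a JMESPath filter; the group ID is the one from above):

aws ec2 describe-security-groups --group-ids sg-0433846d --query 'SecurityGroups[0].IpPermissions'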

Now we will test that the rule is removed again by issuing the command

aws ec2 revoke-security-group-ingress --group-id sg-0433846d --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "11.22.33.44/32"}]}]'

I.e., we just need to replace “authorize” by “revoke”. You can use the aws ec2 describe-security-groups command again in order to check that the ingress rule has been removed.

Step B.5: Find your public IP address

This step will only work in a Linux shell (a bash shell on Windows will work, too).

You could log into your home network’s NAT router in order to find your own Internet IP address. However, there is a more clever way to find the public Internet IP address, as shown in this link: just ask one of the Internet services http://ipinfo.io/ip or http://checkip.dyndns.org. Those can also be tested in an Internet browser. The ipinfo.io service has proven to respond much faster than the checkip service. Let us concentrate on the ipinfo service.

Using a bash shell, and assuming that curl or wget is installed, we will write the current public Internet IP address to a variable via one of the following commands:

currentIP=`wget http://ipinfo.io/ip -qO -`
# or equivalent:
currentIP=`curl -s http://ipinfo.io/ip`

In the following step, we will use the wget version in the shell scripts.

Step B.6: Put it all together

Step B.6.1: Create a shell script that will add the right rule

Now let us create a file named addSecurityRule.sh with the following content:

#!/bin/bash
# addSecurityRule.sh
[ -r lastIP ] && [ -r removeSecurityRule.sh ] && ./removeSecurityRule.sh
currentIP=`wget http://ipinfo.io/ip -qO -`
aws ec2 authorize-security-group-ingress --group-id sg-0433846d --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 22, \"IpRanges\": [{\"CidrIp\": \"$currentIP/32\"}]}]" && echo $currentIP > lastIP

The line with removeSecurityRule.sh will remove a previously added security rule, if applicable, before creating a new one. The currentIP line will detect the current public IP address as seen from the Internet (courtesy of this link). Finally, the aws ec2 line will add the current public IP address to the ones that are allowed to access the instances via SSH.

Step B.6.2: Create a shell script that will remove the rule

The following script named “removeSecurityRule.sh” will remove the security rule again. Note that this step is important, since a security group supports only up to 50 rules, and we need to clean up the security group again once a rule is not needed anymore.

#!/bin/bash
# removeSecurityRule.sh
if [ -r lastIP ]; then
 currentIP=`cat lastIP`
 # revoke the rule for the last known IP address; on success, delete the marker file
 aws ec2 revoke-security-group-ingress --group-id sg-0433846d --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 22, \"IpRanges\": [{\"CidrIp\": \"$currentIP/32\"}]}]" && rm lastIP
else
 echo "$0: no file named lastIP found!"
 exit 1
fi

Now, with those scripts available, we just need to issue a command

./addSecurityRule.sh

before issuing the other commands

source aws-credentials
vagrant up

Appendix C: Troubleshooting Steps / Errors

Because the other quick guides were missing some steps, I was running into two errors:

C.1 Wrong region leading to: “The key pair ‘AWS_SSH_Key’ does not exist”

$vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
==> default: Warning! The AWS provider doesn't support any of the Vagrant
==> default: high-level network configurations (`config.vm.network`). They
==> default: will be silently ignored.
==> default: Launching an instance with the following settings...
==> default: -- Type: t2.micro
==> default: -- AMI: ami-a7fdfee2
==> default: -- Region: us-west-1
==> default: -- Keypair: AWS_SSH_Key
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Source Destination check:
==> default: -- Assigning a public IP address in a VPC: false
==> default: -- VPC tenancy specification: default
C:/Users/vo062111/.vagrant.d/gems/gems/excon-0.49.0/lib/excon/middlewares/expects.rb:6:in `response_call': The key pair 'AWS_SSH_Key' does not exist (Fog::Compute::AWS::NotFound)
 from C:/Users/vo062111/.vagrant.d/gems/gems/excon-0.49.0/lib/excon/middlewares/response_parser.rb:8:in `response_call'
 from C:/Users/vo062111/.vagrant.d/gems/gems/excon-0.49.0/lib/excon/connection.rb:389:in `response'
 from C:/Users/vo062111/.vagrant.d/gems/gems/excon-0.49.0/lib/excon/connection.rb:253:in `request'
 from C:/Users/vo062111/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml/sax_parser_connection.rb:35:in `request'
 from C:/Users/vo062111/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml/connection.rb:7:in `request'
 from C:/Users/vo062111/.vagrant.d/gems/gems/fog-aws-0.9.2/lib/fog/aws/compute.rb:525:in `_request'
 from C:/Users/vo062111/.vagrant.d/gems/gems/fog-aws-0.9.2/lib/fog/aws/compute.rb:520:in `request'
 ...

This was because I had the wrong region configured in the Vagrantfile. I have changed this to

aws.region = "eu-central-1"

In addition, the AMI was wrong. In the EC2 console, I find after pressing Launch Instance:

2016.03.31-01_04_13-hc_001

Therefore I have changed the AMI to

aws.ami = "ami-87564feb"

Then again:

$vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
==> default: Warning! The AWS provider doesn't support any of the Vagrant
==> default: high-level network configurations (`config.vm.network`). They
==> default: will be silently ignored.
==> default: Launching an instance with the following settings...
==> default: -- Type: t2.micro
==> default: -- AMI: ami-87564feb
==> default: -- Region: eu-central-1
==> default: -- Keypair: AWS_SSH_Key
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Source Destination check:
==> default: -- Assigning a public IP address in a VPC: false
==> default: -- VPC tenancy specification: default
There was an error talking to AWS. The error message is shown
below:

UnauthorizedOperation => You are not authorized to perform this operation. Encoded authorization failure message: txYvhypUYdsHX-FIv1N2GGtnAMcIKBbjGrG9PHmCIhG33l8IMxmEhc0W4NuS_ST-5U
Wb-ApATEe56XxQWB2xVu289LKRrT08FhXZHziH_QLGgPb-THBTn0lonbRcsLtkGZurjMzflVYbddqiM34XI0x4aR_VqHWAKLIl3p4Kk3A2Oovu_u4tLT-qYBZ0lovD0bvFH8geve4gpvNI63SSyyWbfBvMI5sQ7SOQ_3E_sYMH8lJ2nhpSPI
OKpcC9fGOJ3EQZBJwlg-76UplZZdlJzGtGTl2XL8lc5OtdqeTNuqivMJbz-GxXH5p0XUvpdeNA-utYJPmWWiGubghz44n_NMuXk58W4p7hlrNDDMu3YGGqMBMKWUUUXAA6SM1o-nm2SNq-xqeZWWrvweRwGzEdBKYz-4jwdmUbSyC3F9rmGs
7vQFKe2lcz9yQwmKTlOfOBDxXsHke5wBu-ii1misYh_ljI0uTiuQc0PlR9IS6jy8A6Raavb3XTYwUlSrqbzefmprEiAkLlvKiCsdNQP8VNbCLtxKUhL3g

C.2 Missing Permissions leading to: “UnauthorizedOperation => You are not authorized to perform this operation”

Then I tried to attach the policy AmazonEC2FullAccess to the user oveits on https://console.aws.amazon.com/iam/home?region=eu-central-1#policies

Search for AmazonEC2FullAccess on the IAM policies link https://console.aws.amazon.com/iam/home?region=eu-central-1#policies (you need to adapt the link to your region!)

Select and choose attach, then select the user you have created above.

Then I was trying again, and it worked as described in Step 9.

C.3 Error: “cannot load such file — nokogiri/nokogiri (LoadError)”

When I have tried to issue

vagrant up --provision

I have run into the error

C:/HashiCorp/Vagrant/embedded/gems/gems/nokogiri-1.6.3.1-x86-mingw32/lib/nokogiri.rb:29:in `require': cannot load such file -- nokogiri/nokogiri (LoadError)
 from C:/HashiCorp/Vagrant/embedded/gems/gems/nokogiri-1.6.3.1-x86-mingw32/lib/nokogiri.rb:29:in `rescue in <top (required)>'
 from C:/HashiCorp/Vagrant/embedded/gems/gems/nokogiri-1.6.3.1-x86-mingw32/lib/nokogiri.rb:25:in `<top (required)>'
 from D:/veits/Vagrant/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml.rb:2:in `require'
 from D:/veits/Vagrant/.vagrant.d/gems/gems/fog-xml-0.1.2/lib/fog/xml.rb:2:in `<top (required)>'
 from D:/veits/Vagrant/.vagrant.d/gems/gems/fog-1.38.0/lib/fog.rb:13:in `require'
 from D:/veits/Vagrant/.vagrant.d/gems/gems/fog-1.38.0/lib/fog.rb:13:in `<top (required)>'
...

I had this error after installing Vagrant 1.8.1 on a new Windows 10 machine. This error seems to be related to this Vagrant issue. After upgrading Vagrant to 1.8.6 by downloading the new version and installing it over the old 1.8.1 version, the problem was resolved.

C.4 Error: An access key ID must be specified via “access_key_id”

This and similar error messages will occur, if Step 6.1 was not accomplished before issuing ‘vagrant up’. This causes the environment variables AWS_KEY, AWS_SECRET etc. to be undefined.

C.5 No host IP was given to the Vagrant core NFS helper. This is an internal error that should be reported as a bug.

Seen after upgrading Vagrant to 1.8.6 and issuing the command ‘vagrant up --provider=aws’. This error is reported here to be a bug of the aws plugin. As a workaround, they discuss adding the following override line to the Vagrantfile:

config.vm.provider :aws do |aws, override|
   ...
   override.nfs.functional = false
end

However, if you do so, you run into the next problem: “No synced folder implementation is available for your synced folders”.

Therefore, I have chosen to ignore the error message, since it does not prevent the instance from being launched successfully, as you can verify in the AWS console. If the security settings on AWS are correct, you can also connect to the instance successfully via ‘vagrant ssh’.

Summary

According to Gartner, Amazon Web Services (AWS) is the number one service provider in the public cloud IaaS space. Amazon is offering a “free tier” test account for up to 12 months with up to 750 hrs of a t2.micro Linux instance as well as 750 hrs of a t2.micro Windows 2012 instance. For more details, check the free tier limits page. For services outside the free tier limits, check the AWS simple monthly (cost) calculator.

Per default, AWS is assigning a dynamic private and a dynamic public IP address. The public IP address will change every time you restart the instance.

Deleting an instance is done by “Terminating” it. For 10 to 20 minutes, the terminated instance will still be visible in the instance dashboard as “Terminated”. The sense and non-sense of this is discussed in this forum post.

I have shown that Vagrant can be used as a means to automate the management of AWS instances, including creating, starting and terminating them. Each of those tasks will take only a single command on your local machine’s command line, once the first nine steps of this guide are accomplished.

Note that Vagrant does not upload any Vagrant boxes, as those who know Vagrant might expect. Instead, it is only used as a user front end to create, spin up and terminate existing AWS images (AMIs).

Next Steps (Brainstorming):

  • Done: see Appendix B: Automate the Security Rule Update
    learn how to automate the update of the security policy to only allow the current local IP.
  • Done: see Part 3: Provisioning (Installation of software) on an AWS Instance via Vagrant
    • install a Docker host using scripts found on https://github.com/William-Yeh/docker-enabled-vagrant, my favorite Docker host in terms of performance (see performance comparison with CoreOS on this blog post).

<< Part 1 | Part 2 | Part 3 >>


Choosing the right IaaS Provider for a custom Appliance or: how hard is it to install from ISO in the cloud?


Which cloud infrastructure provider allows you to install custom appliances via ISO? The short answer is: neither of the IaaS market leaders, Amazon Web Services (AWS) and Microsoft Azure, offers the requested functionality, but both offer the workaround of installing the virtual machine (VM) locally and uploading it to the cloud. The cheaper alternative DigitalOcean does not offer any of those possibilities.

In the end, I thought I had found the perfect solution to my problem: Ravello Systems, a.k.a. Oracle Ravello, is a meta cloud infrastructure provider (they call it a “nested virtualization provider”), which re-sells the infrastructure of other IaaS providers like Amazon AWS and Google Compute Engine. They offer a portal that supports the installation of a VM from an ISO in the cloud. For details, see below. They write:

2016.03.30-13_57_04-hc_001

However, Ravello was ignoring my request for a trial for more than two months.

Ravello’s trial seems to be open for companies only. I even told them that I am about to found my own company, but this did not help.

2016.03.30-13_08_51-hc_001

If you are representing a large company and if you are offering them a prospect to earn a lot of money, they might be reacting differently in your case, though. Good luck.

I am back at installing the image locally and uploading it to Amazon AWS. Maybe this is the cheaper alternative, anyway. I am back at the bright side of life…

2016.03.30-13_06_53-hc_001

In the end, after more than 2 months, I got the activation link. The ISO upload tool has some challenges with HTTP proxies, but it seems to work now.

Document Versions

v1.0 (2016-03-14): initially published version
v1.1 (2016-03-21): added a note on ravello’s nested virtualization solution, which makes the solution suitable for VMware testing on public non-VMware clouds
v1.2 (2016-03-23): added a note of my problems of getting a trial account; I have created a service ticket.
v1.3 (2016-03-30): Ravello has chosen to close my ticket without helping me. I am looking for alternatives.
v1.4 (2016-04-09): After I have complained about the closed ticket, they wrote a note that they are sorry on 2016-03-30. However, I have still not got an account. I have sent a new email asking for the status today.
v1.5 (2016-05-25): I have received an activation link on May, 11th. It has taken more than 2 months to get it. I am not sure, if I am still interested…

The Use Case

Integration of high tech systems with legacy systems is fun. At least, it is fun if you have easy access from your development machine to the legacy systems. In my case, I was lucky enough: the legacy systems I am dealing with are modern communication systems that can be run on VMware. Using a two year old software version of the system, I have run the legacy system on my development machine. With that, I could run my integration software against the real target system.

But why have I used a two year old software version of the legacy system? Here is why: the most recent versions of that system have such a high demand on virtual resources (vCPU, DRAM) that the system has outgrown my development machine: it was quite a bit…

2016.03.14-19_48_01-hc_001

…overloaded.

Possible Solutions

How to deal with this? Some thoughts of mine are:

  • I could buy a new notebook with, say, 64 GB RAM.
    2016.03.14-20_25_24-hc_001
    • this is an expensive option. Moreover, I am a road warrior type of developer and do a lot of coding in the train. Most notebooks with 64GB RAM are bulky and heavy and you need to take a power plant with you if you do not want to run out of energy during your trip.
  • I could develop a lightweight simulator that is mocking the behavior of the legacy system.
    • In the long run, I need to do something along those lines anyway: I want to come closer to Continuous Integration+Deployment process and for the automated tests in the CI/CD system, it is much simpler to run a simulator as part of the software than to run the tests against bulky legacy systems.
  • I could develop and test (including integration tests) in the IaaS cloud.

2016.03.14-18_57_14-hc_001

The Cloud Way of Providing a Test Environment

Yes, the IaaS cloud option is a particularly interesting one, especially if development is done as a side job, because:

  • I need to pay only for resources I use.
  • For most functional tests, I do not need full performance. I can go with cheaper, shared resources.
  • I can pimp up the legacy system and reserve resources for performance tests, while freeing up the resources again after the test has finished.
  • Last but not least, I am a cloud evangelist and therefore I should eat my own dog food (or drink my own champagne, I hope).

However: which are the potential challenges?

2016.03.14-19_01_14-hc_001

  1. Installation challenges of the legacy system in the cloud.
  2. How much do you pay for the VM, if it is shut down? Open topic, but it will not (yet) be investigated in this blog post.
  3. How long does it take from opening the lid of the development notebook until I can access the legacy system? Open topic, but it will not (yet) be investigated in this blog post.

First things first: in this post, I will concentrate on challenge 1.

The Cloud Way of installing (a custom appliance from ISO)

2016.03.14-19_04_03-hc_001

In my case, the legacy system must be installed from ISO. From my first investigation, it seems that this is a challenge with many IaaS providers. Let us have a closer look:

Comparison of IaaS Providers

2016.03.14-19_05_07-hc_001

  • DigitalOcean: they do not support the installation from ISO. See this forum post.
    • there is no workaround like local installation and upload of the image, see here. Shoot. 😦

2016.03.14-19_06_58-hc_001

  • AWS: same thing: no ISO installation support.
    1. For AWS, the workaround is to install the system locally and to upload and convert the VM. See this stackoverflow post.
      One moment: didn’t I say the legacy system is too large for my notebook? Not a good option. 😦
    2. Another workaround for AWS is to use a nested virtualization provider like ravello systems: they claim here that the installation of an AWS image from ISO is no problem.
      Note: ravello’s nested virtualization solution places an additional hypervisor on top of AWS’ XEN hypervisor, in order to run VMware VMs on public clouds that do not support VMware VMs natively. This will not increase the performance, though, and is intended for test environments only. However, this is exactly what I am aiming at (for now).

Ravello claims: “With Ravello, uploading an ISO file is as simple as uploading your files to dropbox. Once the file is in your library in Ravello simply add the CD-ROM device to your VM and select your customer ISO file from your library.”

2016.03.14-19_38_13-hc_001

2016.03.14-19_08_58-hc_001

  • Microsoft Azure: not fully clear…
    • I have found here the information that an ISO can be attached to an existing VM. I do not know, though, whether or not the VM can be installed from the ISO by booting from ISO.
    • you can create a local image in VHD format and upload it to the cloud. However, the only (convenient) way to create this image is to install the VM on Hyper-V. I do not have access to Hyper-V and I do not want to spend any time on this for now. 😦

Among those options, it seems like only AWS and ravello are feasible for me.

Even so, I need to take the risk caused by the fact that my legacy systems are supported on VMware only. However, this is a risk I need to accept, if I want to go with a low cost mainstream IaaS provider. A private cloud on dedicated VMware infrastructure is prohibitive with respect to effort and price.

Decision:

I have a more powerful notebook at home and I could install the image locally. However, I will give the meta IaaS provider Ravello Systems a try and I will install the legacy system via their overlay cloud. Within Ravello systems, I will choose AWS as the backend IaaS provider, because AWS is the number one IaaS provider (see this article pointing to the Gartner report) and therefore I want to gain some experience with AWS.

Note about the pricing comparison between AWS and ravello: I believe that ravello comes at higher rates (estimated 30-50%). But please do not take this for granted and calculate yourself, using the AWS monthly calculator and the ravello pricing page.

HowTo

More than 2 months after my application, I finally got an activation link. Looking for how to import the ISO, I have found this ravello link. However, the documentation is not good. They write:

To download the tool, click Download VM Import Tool on the Library page.

However, there is no Download VM Import Tool on the Library page. Instead, you can choose Library -> Disk Images -> Import Disk Image in order to reach the import tool download page (or click this direct link).

After installing the GUI tool on Windows using the exe file, I am redirected to the browser login page of the tool:

2016.05.24-14_53_30-hc_001

If you are behind a proxy, you will receive the following connectivity problem error:

2016.05.24-14_59_19-hc_001

The link will lead here. The process is to create a config.properties file in a folder named .ravello in the user’s home directory (%HOME%\.ravello).

Note: be sure to use %HOME%\.ravello and not %USERPROFILE%\.ravello, if those two paths differ in your case (in my case they do: %HOME% is my local Git directory on F:\veits\git).

The file config.properties needs to have the following content:

[upload]
proxy_address = <ip address of proxy server>
proxy_port = <port on which the proxy server accepts connections>
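For illustration, a filled-in example might look like this (the address and port are hypothetical and must match your own proxy):

[upload]
proxy_address = 192.0.2.10
proxy_port = 8080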

The nasty thing is that you need to kill the RavelloImageImportServer.exe task in case of Windows, or the ravello-vm-upload process in case of Linux, so that the tool picks up the new configuration.

The problem is that

  1. they do not tell you how to restart the process. In my case, I have found RavelloImageImportServer.exe in C:\Program Files (x86)\Ravello Systems\Ravello import utility. I have restarted it.
  2. even though I have created the properties file, the import tool does not find the proxy configuration in %USERPROFILE%\.ravello. Crap! I have found out that the import tool is looking for %HOME%\.ravello instead, which has been set by my local Git installation to be on F:\veits\git. I was close to giving up…

Finally, I have managed to upload the ISO:

2016.05.24-16_01_49-hc_001

From there, it should be possible to create an empty VM, attach the ISO to it and boot the VM from the ISO…

No luck: after some time, the upload is stopped for no apparent reason:

2016.05.24-20_48_42-hc_001

The pause button as well as the resume button are greyed out. There is no way to resume the upload. Well thought out, but not so well implemented. Okay, the service is quite new. Let us see how ravello works, if we give them a few additional months…

After connecting to the Internet without an HTTP proxy (my notebook had been in standby for a while), I noticed that I could not log into the local GUI upload tool anymore; the process was consuming a constant 25% of my dual-core CPU. Workaround: renaming the config.properties file (or perhaps removing/commenting out its content), then killing and restarting the process, brought the GUI upload tool back to normal.

Summary

I have briefly investigated which options I have for running a legacy system on an IaaS provider’s cloud.

Before I found out that Ravello’s service response times are sub-optimal, I initially thought that the meta IaaS provider Ravello Systems was the winner of this investigation:

[Screenshot: Ravello Systems as the initial winner of the comparison]

However, I see the following problems:

  • It has taken Ravello more than two (!) months to provide me with an activation link.
  • An ISO or VM upload requires the installation of a local tool.
  • The GUI tool has problems handling HTTP proxies. I followed their instructions, but initially I could not get it to work. In the end, I found out that the tool does not look in %USERPROFILE%\.ravello, but in %HOME%\.ravello, which in my case is a Git home directory and does not match C:\Users\myusername.
  • Another potential problem is that Ravello runs the VMware VMs on top of a nested hypervisor layer, which in turn translates the VM’s operations to the underlying infrastructure. There is a high risk that this will only perform well for test labs with low CPU consumption. This remains to be tested.

In the short time I have invested into this investigation, I have found that

  1. Ravello had seemed to be the best alternative, since the system can be installed in the cloud directly from an uploaded ISO.
  2. a reader of my blog suggested checking out Vultr. Since Ravello has its own drawbacks (service: long time to respond, even longer time to help; the GUI import tool seems to have weaknesses: I could not get it to work from behind an HTTP proxy, even though I followed the instructions), Vultr might be a really good alternative with low pricing.
  3. Amazon AWS is an alternative if it is OK for you not to install from an ISO, but to install locally and upload the resulting custom VM.

The following alternatives have major drawbacks:

  • Microsoft Azure requires the VM to be installed locally using Hyper-V, and I do not have such a system. I have not found a statement on whether it is possible to boot a Microsoft Azure VM from an ISO (do you know?).
  • DigitalOcean supports neither installation from an ISO nor the upload of custom VMs.

Next steps:

  • Once the ISO is uploaded, create a VM and try to boot it from the ISO.
  • Try out Vultr.

Update 2016-03-21: I applied for a trial with Ravello on March 17th, but there has been no reaction so far, apart from the automatic email reply. I opened a ticket yesterday and received an email saying that they will come back to me…


Update 2016-03-23: still waiting…

Update 2016-03-30: instead of helping me, Ravello’s support sent an email saying that they had not received any response from me (a response to what?) and closed the ticket, along with a link offering the possibility to give feedback. My feedback was “not satisfied”. Let us see how they react.

Update 2016-05-11: I have received the activation link, more than two months after my application. I have signed in, although I do not know if I am still interested. I have added the HowTo chapter above, but I failed to upload the ISO via an HTTP proxy, even though I followed the instructions closely.

Meanwhile, I have signed up for a native AWS account. The intent of this blog post was to find a provider that makes it easier to install an image from an ISO: I did not want to install locally and then upload and convert the image, because my SSD disk is notoriously full. Ravello was the only alternative I had found in a quick Internet search. However, Ravello failed to provide me with a working registration within two months.